WorldWideScience

Sample records for ground segment cost

  1. Design of ground segments for small satellites

    Science.gov (United States)

    Mace, Guy

    1994-01-01

    New concepts must be implemented when designing a Ground Segment (GS) for small satellites to conform to their specific mission characteristics: low cost, one main instrument, spacecraft autonomy, optimized mission return, etc. This paper presents the key cost drivers of such ground segments, the main design features, and the comparison of various design options that can meet the user requirements.

  2. The LOFT Ground Segment

    DEFF Research Database (Denmark)

    Bozzo, E.; Antonelli, A.; Argan, A.;

    2014-01-01

The LAD performs pointed observations of several targets per orbit (~90 minutes), providing roughly 80 GB of proprietary data per day (the proprietary period will be 12 months). The WFM continuously monitors about 1/3 of the sky at a time and provides data for about 100 sources a day, resulting in a total of ~20 GB of additional telemetry. ... In this contribution we summarize the planned organization of the LOFT ground segment (GS), as established in the mission Yellow Book [1]. We describe the expected GS contributions from ESA and the LOFT consortium. A review is provided of the planned LOFT data products and the details of the data flow, archiving ...

  3. The LOFT Ground Segment

    CERN Document Server

    Bozzo, E; Argan, A; Barret, D; Binko, P; Brandt, S; Cavazzuti, E; Courvoisier, T; Herder, J W den; Feroci, M; Ferrigno, C; Giommi, P; Götz, D; Guy, L; Hernanz, M; Zand, J J M in't; Klochkov, D; Kuulkers, E; Motch, C; Lumb, D; Papitto, A; Pittori, C; Rohlfs, R; Santangelo, A; Schmid, C; Schwope, A D; Smith, P J; Webb, N A; Wilms, J; Zane, S

    2014-01-01

LOFT, the Large Observatory For X-ray Timing, was one of the ESA M3 mission candidates that completed their assessment phase at the end of 2013. LOFT is equipped with two instruments, the Large Area Detector (LAD) and the Wide Field Monitor (WFM). The LAD performs pointed observations of several targets per orbit (~90 minutes), providing roughly 80 GB of proprietary data per day (the proprietary period will be 12 months). The WFM continuously monitors about 1/3 of the sky at a time and provides data for about 100 sources a day, resulting in a total of ~20 GB of additional telemetry. The LOFT Burst Alert System additionally identifies bright impulsive events on board (e.g., Gamma-ray Bursts, GRBs) and broadcasts the corresponding position and trigger time to the ground using a dedicated system of ~15 VHF receivers. All WFM data are planned to be made public immediately. In this contribution we summarize the planned organization of the LOFT ground segment (GS), as established in the mission Yellow Book [1]. We...

  4. The Envisat-1 ground segment

    Science.gov (United States)

    Harris, Ray; Ashton, Martin

    1995-03-01

The European Space Agency (ESA) Earth Remote Sensing Satellite (ERS-1 and ERS-2) missions will be followed by the Polar Orbit Earth Mission (POEM) program. The first of the POEM missions will be Envisat-1. ESA has completed the design phase of the ground segment. This paper presents the main elements of that design. The main part of this paper is an overview of the Payload Data Segment (PDS), which is the core of the Envisat-1 ground segment, followed by two further sections which describe in more detail the facilities to be offered by the PDS for archiving and for user services. A further section describes some future issues for ground segment development. Logica was the prime contractor of a team of 18 companies which undertook the ESA-financed architectural design study of the Envisat-1 ground segment. The outputs of the study included detailed specifications of the components that will acquire, process, archive and disseminate the payload data, together with the functional designs of the flight operations and user data segments.

  5. Figure-Ground Segmentation Using Factor Graphs.

    Science.gov (United States)

    Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr

    2009-06-04

Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions of higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach.

  6. Superpixel Cut for Figure-Ground Image Segmentation

    Science.gov (United States)

    Yang, Michael Ying; Rosenhahn, Bodo

    2016-06-01

    Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as Min-Cut. Therefore, we propose a novel cost function that simultaneously minimizes the inter-class similarity while maximizing the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, which requires no high-level knowledge such as shape priors and scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.

  7. Multivariable parametric cost model for space and ground telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd

    2016-09-01

Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ∝ X · D^(1.75 ± 0.05) · λ^(−0.5 ± 0.25) · T^(−0.25) · e^(−0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advances and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e. multiple aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost saving does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
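
    The following sketch (Python) shows how such a cost estimating relationship is typically used for relative design trades. The nominal exponents are taken from the model above; the scaling constant X and the reference year used for Y are illustrative assumptions, so only cost ratios between configurations are meaningful.

```python
import math

def ota_cost(D_m, wavelength_um, T_K, year, X=1.0):
    """Relative OTA cost from the hypothesized CER
    cost ~ X * D^1.75 * lambda^-0.5 * T^-0.25 * exp(-0.04 * Y).
    X and the reference epoch for Y are illustrative assumptions, so the
    result is only meaningful as a ratio between two configurations."""
    Y = year - 2000  # assumed reference year; not fixed by the abstract
    return X * D_m**1.75 * wavelength_um**-0.5 * T_K**-0.25 * math.exp(-0.04 * Y)

# Doubling the aperture, all else equal, raises cost by ~2^1.75 ~= 3.4x
print(ota_cost(8.0, 0.5, 280.0, 2020) / ota_cost(4.0, 0.5, 280.0, 2020))
```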

  8. WSO-UV ground segment for observation optimisation

    Science.gov (United States)

    Basargina, O.; Sachkov, M.; Kazakevich, Y.; Kanev, E.; Sichevskij, S.

    2016-07-01

The World Space Observatory-Ultraviolet (WSO-UV) is a Russian-Spanish space mission born as a response to the growing demand for UV facilities from the astronomical community. The main components of the WSO-UV Ground Segment, the Mission Control Centre and the Science Operation Centre, are being developed through international cooperation. In this paper the fundamental components of the WSO-UV ground segment are described. Approaches to optimizing the observatory scheduling problem are also discussed.

  9. Foveated Figure-Ground Segmentation and Its Role in Recognition

    OpenAIRE

    Björkman, Mårten; Eklundh, Jan-Olof

    2005-01-01

Figure-ground segmentation and recognition are two interrelated processes. In this paper we present a method for foveated segmentation and evaluate it in the context of a binocular real-time recognition system. Segmentation is solved as a binary labeling problem using priors derived from the results of a simplistic disparity method. Doing so, we are able to cope with situations in which the disparity range is very wide, situations that have rarely been considered but appear frequently for narrow-fi...

  10. Comparison of algorithms for ultrasound image segmentation without ground truth

    Science.gov (United States)

    Sikka, Karan; Deserno, Thomas M.

    2010-02-01

Image segmentation is a prerequisite to medical image analysis. A variety of segmentation algorithms have been proposed, and most are evaluated on a small dataset or based on classification of a single feature. The lack of a gold standard (ground truth) further adds to the discrepancy in these comparisons. This work proposes a new methodology for comparing image segmentation algorithms without ground truth by building a matrix called the region-correlation matrix. Subsequently, suitable distance measures are proposed for quantitative assessment of similarity. The first measure takes into account the degree of region overlap or identical match. The second considers the degree of splitting or misclassification by using an appropriate penalty term. These measures are shown to satisfy the axioms of a quasi-metric. They are applied for a comparative analysis of synthetic segmentation maps to show their direct correlation with human intuition of similar segmentation. Since ultrasound images are difficult to segment and usually lack a ground truth, the measures are further used to compare the recently proposed spectral clustering algorithm (encoding spatial and edge information) with standard k-means over abdominal ultrasound images. Improving the parameterization and enlarging the feature space for k-means steadily increased segmentation quality to that of spectral clustering.
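
    The paper's region-correlation matrix and distance measures are not reproduced here, but as a rough sketch of the underlying idea, the following Python snippet builds an overlap-count matrix between the regions of two label maps, from which such similarity measures could be derived.

```python
import numpy as np

def region_correlation_matrix(seg_a, seg_b):
    """Hypothetical sketch: count pixel overlap between every region of
    segmentation A and every region of segmentation B (integer label maps)."""
    labels_a = np.unique(seg_a)
    labels_b = np.unique(seg_b)
    M = np.zeros((labels_a.size, labels_b.size), dtype=int)
    for i, la in enumerate(labels_a):
        for j, lb in enumerate(labels_b):
            M[i, j] = np.sum((seg_a == la) & (seg_b == lb))
    return M

# Two toy 3x3 segmentations; rows/columns of M index the regions of each map
a = np.array([[0, 0, 1], [0, 1, 1], [2, 2, 2]])
b = np.array([[0, 0, 1], [0, 1, 1], [1, 2, 2]])
print(region_correlation_matrix(a, b))
```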

  11. The IXV Ground Segment design, implementation and operations

    Science.gov (United States)

    Martucci di Scarfizzi, Giovanni; Bellomo, Alessandro; Musso, Ivano; Bussi, Diego; Rabaioli, Massimo; Santoro, Gianfranco; Billig, Gerhard; Gallego Sanz, José María

    2016-07-01

The Intermediate eXperimental Vehicle (IXV) is an ESA re-entry demonstrator that performed, on 11 February 2015, a successful re-entry demonstration mission. The project objectives were the design, development, manufacturing and on-ground and in-flight verification of an autonomous European lifting and aerodynamically controlled re-entry system. For the IXV mission a dedicated Ground Segment was provided. The main subsystems of the IXV Ground Segment were: the IXV Mission Control Center (MCC), from where monitoring of the vehicle was performed, as well as support during pre-launch and recovery phases; the IXV Ground Stations, used to cover the IXV mission by receiving spacecraft telemetry and forwarding it toward the MCC; and the IXV Communication Network, deployed to support the operations of the IXV mission by interconnecting all remote sites with the MCC, supporting data, voice and video exchange. This paper describes the concept, architecture, development, implementation and operations of the ESA Intermediate eXperimental Vehicle (IXV) Ground Segment and outlines the main operations and lessons learned during the preparation and successful execution of the IXV Mission.

  12. NASA's mobile satellite communications program; ground and space segment technologies

    Science.gov (United States)

    Naderi, F.; Weber, W. J.; Knouse, G. H.

    1984-10-01

    This paper describes the Mobile Satellite Communications Program of the United States National Aeronautics and Space Administration (NASA). The program's objectives are to facilitate the deployment of the first generation commercial mobile satellite by the private sector, and to technologically enable future generations by developing advanced and high risk ground and space segment technologies. These technologies are aimed at mitigating severe shortages of spectrum, orbital slot, and spacecraft EIRP which are expected to plague the high capacity mobile satellite systems of the future. After a brief introduction of the concept of mobile satellite systems and their expected evolution, this paper outlines the critical ground and space segment technologies. Next, the Mobile Satellite Experiment (MSAT-X) is described. MSAT-X is the framework through which NASA will develop advanced ground segment technologies. An approach is outlined for the development of conformal vehicle antennas, spectrum and power-efficient speech codecs, and modulation techniques for use in the non-linear faded channels and efficient multiple access schemes. Finally, the paper concludes with a description of the current and planned NASA activities aimed at developing complex large multibeam spacecraft antennas needed for future generation mobile satellite systems.

  13. NASA's mobile satellite communications program; ground and space segment technologies

    Science.gov (United States)

    Naderi, F.; Weber, W. J.; Knouse, G. H.

    1984-01-01

    This paper describes the Mobile Satellite Communications Program of the United States National Aeronautics and Space Administration (NASA). The program's objectives are to facilitate the deployment of the first generation commercial mobile satellite by the private sector, and to technologically enable future generations by developing advanced and high risk ground and space segment technologies. These technologies are aimed at mitigating severe shortages of spectrum, orbital slot, and spacecraft EIRP which are expected to plague the high capacity mobile satellite systems of the future. After a brief introduction of the concept of mobile satellite systems and their expected evolution, this paper outlines the critical ground and space segment technologies. Next, the Mobile Satellite Experiment (MSAT-X) is described. MSAT-X is the framework through which NASA will develop advanced ground segment technologies. An approach is outlined for the development of conformal vehicle antennas, spectrum and power-efficient speech codecs, and modulation techniques for use in the non-linear faded channels and efficient multiple access schemes. Finally, the paper concludes with a description of the current and planned NASA activities aimed at developing complex large multibeam spacecraft antennas needed for future generation mobile satellite systems.

  14. Management of the science ground segment for the Euclid mission

    Science.gov (United States)

    Zacchei, Andrea; Hoar, John; Pasian, Fabio; Buenadicha, Guillermo; Dabin, Christophe; Gregorio, Anna; Mansutti, Oriana; Sauvage, Marc; Vuerli, Claudio

    2016-07-01

Euclid is an ESA mission aimed at understanding the nature of dark energy and dark matter by using two probes simultaneously (weak lensing and baryon acoustic oscillations). The mission will observe galaxies and clusters of galaxies out to z ~ 2, in a wide extra-galactic survey covering 15,000 deg², plus a deep survey covering an area of 40 deg². The payload is composed of two instruments, an imager in the visible domain (VIS) and an imager-spectrometer (NISP) covering the near-infrared. The launch is planned in Q4 of 2020. The elements of the Euclid Science Ground Segment (SGS) are the Science Operations Centre (SOC) operated by ESA and nine Science Data Centres (SDCs) in charge of data processing, provided by the Euclid Consortium (EC), which is formed by over 110 institutes spread over 15 countries. The SOC and the EC started a tight collaboration several years ago in order to design and develop a single, cost-efficient and truly integrated SGS. The distributed nature, the size of the data set, and the needed accuracy of the results are the main challenges expected in the design and implementation of the SGS. In particular, the huge volume of data (not only Euclid data but also ground-based data) to be processed in the SDCs will require distributed storage to avoid data migration across SDCs. This paper describes the management challenges that the Euclid SGS is facing while dealing with such complexity. The main aspect is related to the organisation of a geographically distributed software development team. In principle, algorithms and code are developed in a large number of institutes, while data is actually processed at fewer centres (the national SDCs) where the operational computational infrastructures are maintained. The software produced for data handling, processing and analysis is built within a common development environment defined by the SGS System Team, common to the SOC and the EC SGS, which has already been active for several years. The code is built incrementally through...

  15. GOES-R Ground Segment Technical Reference Model

    Science.gov (United States)

    Krause, R. G.; Burnett, M.; Khanna, R.

    2012-12-01

The NOAA Geostationary Operational Environmental Satellite-R Series (GOES-R) Ground Segment Project (GSP) has developed a Technical Reference Model (TRM) to support the documentation of technologies that could form the basis for a set of requirements supporting the evolution towards a NESDIS enterprise ground system. The architecture and technologies in this TRM can be applied or extended to other ground systems for planning and development. The TRM maps GOES-R technologies to the Office of Management and Budget's (OMB) Federal Enterprise Architecture (FEA) Consolidated Reference Model (CRM) V 2.3 Technical Services Standard (TSS). The FEA TRM categories are the framework for the GOES-R TRM. This poster will present the GOES-R TRM.

  16. COST STRUCTURE IN PUBLICLY TRADED COMPANIES IN THE FOOTWEAR SEGMENT

    Directory of Open Access Journals (Sweden)

    Itzhak David Simão Kavesk

    2014-12-01

Full Text Available The administration of an organization's costs forms part of its strategic policy and contributes to the identification of operational risks, which is why strategic cost management and knowledge of fixed and variable costs are critical. The objective of this work is therefore to identify the cost structure (fixed costs and variable costs) of publicly traded companies in the footwear segment. The research is descriptive, conducted through document analysis and a quantitative approach. The sample consists of the publicly traded companies in the footwear segment of the BM&FBovespa, recorded in the period 2009-2011. To perform the study we use the quarterly information disclosed by the companies. The results show that the cost structures of companies in the footwear segment are similar: the contribution margin varies from 22% to 30%, and the costs and expenses are mostly variable. According to the evidence, we conclude that the publicly traded companies in the footwear segment have considerable flexibility in their strategies, considering that reductions in demand are accompanied by a reduction of their costs and expenses, favoring positive results even in adverse scenarios.
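
    As a minimal illustration of the contribution-margin figure quoted above, the snippet below computes the ratio from hypothetical, invented quarterly numbers; it is not based on the sampled companies' data.

```python
def contribution_margin_ratio(revenue, variable_costs_and_expenses):
    """Contribution margin as a share of revenue: (revenue - variable costs) / revenue."""
    return (revenue - variable_costs_and_expenses) / revenue

# Hypothetical quarter: 100.0 revenue, 74.0 variable costs and expenses -> 0.26,
# i.e. within the 22%-30% band reported above
print(contribution_margin_ratio(100.0, 74.0))
```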

  17. SILEX ground segment control facilities and flight operations

    Science.gov (United States)

    Demelenne, Benoit; Tolker-Nielsen, Toni; Guillen, Jean-Claude

    1999-04-01

The European Space Agency is going to conduct an inter-orbit link experiment which will connect a low Earth orbiting satellite and a geostationary satellite via optical terminals. This experiment has been called SILEX (Semiconductor Inter-satellite Link EXperiment). Two payloads have been built. One, called PASTEL (PASsager de TELecommunication), was embarked on the French Earth observation satellite SPOT4, which was launched successfully in March 1998. The future European experimental data relay satellite ARTEMIS (Advanced Relay and TEchnology MISsion), which will route the data to ground, will carry the OPALE terminal (Optical Payload Experiment). The European Space Agency is responsible for the operation of both terminals. Due to the complexity and experimental character of this new optical technology, the development, preparation and validation of the ground segment control facilities required a long series of technical and operational qualification tests. This paper presents the operations concept and the early results of the PASTEL in-orbit operations.

  18. Microstrip Resonator for High Field MRI with Capacitor-Segmented Strip and Ground Plane

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy; Boer, Vincent; Petersen, Esben Thade

    2017-01-01

... segmenting the strip and ground plane of the resonator with series capacitors. The design equations for capacitors providing a symmetric current distribution are derived. The performance of two types of segmented resonators is investigated experimentally. To the authors' knowledge, a microstrip resonator where both strip and ground plane are capacitor-segmented is shown here for the first time.

  19. The Galileo Ground Segment Integrity Algorithms: Design and Performance

    Directory of Open Access Journals (Sweden)

    Carlos Hernández Medel

    2008-01-01

    Full Text Available Galileo, the European Global Navigation Satellite System, will provide to its users highly accurate global positioning services and their associated integrity information. The element in charge of the computation of integrity messages within the Galileo Ground Mission Segment is the integrity processing facility (IPF, which is developed by GMV Aerospace and Defence. The main objective of this paper is twofold: to present the integrity algorithms implemented in the IPF and to show the achieved performance with the IPF software prototype, including aspects such as: implementation of the Galileo overbounding concept, impact of safety requirements on the algorithm design including the threat models for the so-called feared events, and finally the achieved performance with real GPS and simulated Galileo scenarios.

  20. Towards a Multi-Variable Parametric Cost Model for Ground and Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd

    2016-01-01

Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ∝ X · D^(1.75 ± 0.05) · λ^(−0.5 ± 0.25) · T^(−0.25) · e^(−0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advances and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e. multiple aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost saving does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).

  1. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Directory of Open Access Journals (Sweden)

    Hans Supèr

Full Text Available Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections that connect surrounding neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. In contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple three-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.

  2. Stereo visualization in the ground segment tasks of the science space missions

    Science.gov (United States)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

The ground segment is one of the key components of any science space mission. Its functionality substantially defines the scientific effectiveness of the experiment as a whole. Its outstanding feature (in contrast to the other information systems of scientific space projects) is the interaction between the researcher and the project information system in order to interpret the data obtained during experiments. The ability to visualize the data being processed is therefore an essential prerequisite for ground segment software, and the use of modern technological solutions and approaches in this area will increase the science return in general and provide a framework for the creation of new experiments. Mostly, 2D and 3D graphics are used for the visualization of the data being processed, which reflects the capabilities of traditional visualization tools. Stereo data visualization methods are also actively used in solving some tasks; however, their usage is usually limited to tasks such as visualization of virtual and augmented reality, remote sensing data processing, and the like. The low prevalence of stereo visualization methods in solving science ground segment tasks is primarily explained by the extremely high cost of the necessary hardware. Recently, however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have appeared. It therefore seems promising to use stereo visualization as an instrument for investigation of a wide range of problems, mainly for stereo visualization of complex physical processes as well as mathematical abstractions and models. This article is concerned with an attempt to use this approach. It describes the details and problems of using stereo visualization (page-flip method based on the NVIDIA 3D Vision Kit and a GeForce graphics processor) for the display of datasets from magnetospheric satellite onboard measurements and also in the development of software for manual stereo matching.

  3. Data processing and visualisation in the Rosetta Science Ground Segment

    Science.gov (United States)

    Geiger, Bernhard

    2016-09-01

    Rosetta is the first space mission to rendezvous with a comet. The spacecraft encountered its target 67P/Churyumov-Gerasimenko in 2014 and currently escorts the comet through a complete activity cycle during perihelion passage. The Rosetta Science Ground Segment (RSGS) is in charge of planning and coordinating the observations carried out by the scientific instruments on board the Rosetta spacecraft. We describe the data processing system implemented at the RSGS in order to support data analysis and science operations planning. The system automatically retrieves and processes telemetry data in near real-time. The generated products include spacecraft and instrument housekeeping parameters, scientific data for some instruments, and derived quantities. Based on spacecraft and comet trajectory information a series of geometric variables are calculated in order to assess the conditions for scheduling the observations of the scientific instruments and analyse the respective measurements obtained. Images acquired by the Rosetta Navigation Camera are processed and distributed in near real-time to the instrument team community. A quicklook web-page displaying the images allows the RSGS team to monitor the state of the comet and the correct acquisition and downlink of the images. Consolidated datasets are later delivered to the long-term archive.

  4. Retina Lesion and Microaneurysm Segmentation using Morphological Reconstruction Methods with Ground-Truth Data

    Energy Technology Data Exchange (ETDEWEB)

    Karnowski, Thomas Paul [ORNL; Govindaswamy, Priya [Oak Ridge National Laboratory (ORNL); Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK); Abramoff, M.D. [University of Iowa

    2008-01-01

In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data was used to create post-processing filters to separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions was achieved on two data sets of 86 and 1296 images.

  5. Retina Lesion and Microaneurysm Segmentation using Morphological Reconstruction Methods with Ground-Truth Data

    Energy Technology Data Exchange (ETDEWEB)

    Karnowski, Thomas Paul [ORNL; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [ORNL; Muthusamy Govindasamy, Vijaya Priya [ORNL

    2009-09-01

In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data was used to create post-processing filters to separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions was achieved on two data sets of 86 and 1296 images.

  6. Development of access-based metrics for site location of ground segment in LEO missions

    Directory of Open Access Journals (Sweden)

    Hossein Bonyan Khamseh

    2010-09-01

Full Text Available The classical metrics of ground segment site location do not take account of the pattern of ground segment access to the satellite. In this paper, based on the pattern of access between the ground segment and the satellite, two metrics for site location of ground segments in Low Earth Orbit (LEO) missions were developed. The two developed access-based metrics are total accessibility duration and longest accessibility gap in a given period of time. It is shown that the repeatability cycle is the minimum necessary time interval to study the steady behavior of the two proposed metrics. System and subsystem characteristics of the satellite represented by each of the metrics are discussed. Incorporation of the two proposed metrics, along with the classical ones, in the ground segment site location process results in financial savings in the satellite development phase and reduces the minimum required level of in-orbit autonomy of the satellite. To show the effectiveness of the proposed metrics, simulation results are included for illustration.
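
    A minimal sketch of how the two proposed metrics could be computed from a list of ground station access windows within one repeatability cycle; the window times below are invented for illustration and do not correspond to any real mission.

```python
def access_metrics(windows, period):
    """Total accessibility duration and longest accessibility gap for a list
    of (start, end) access windows (seconds) within one cycle of length `period`."""
    windows = sorted(windows)
    total = sum(end - start for start, end in windows)
    # Gaps: before the first window, between windows, and after the last one
    gaps = [windows[0][0]] if windows else [period]
    gaps += [windows[i + 1][0] - windows[i][1] for i in range(len(windows) - 1)]
    if windows:
        gaps.append(period - windows[-1][1])
    return total, max(gaps)

# Example: three passes within a 24 h cycle (illustrative numbers only)
total_s, longest_gap_s = access_metrics([(0, 600), (5400, 6000), (43200, 43800)], 86400)
print(total_s / 60.0, longest_gap_s / 3600.0)  # minutes of access, hours of longest gap
```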

  7. Robust entropy-guided image segmentation for ground detection in GPR

    Science.gov (United States)

    Roberts, J.; Shkolnikov, Y.; Varsanik, J.; Chevalier, T.

    2013-06-01

Identifying the ground within a ground penetrating radar (GPR) image is a critical component of automatic and assisted target detection systems. As these systems are deployed to more challenging environments they encounter rougher terrain and less-ideal data, both of which can cause standard ground detection methods to fail. This paper presents a means of improving the robustness of ground detection by adapting a technique from image processing in which images are segmented by local entropy. This segmentation provides the rough location of the air-ground interface, which can then act as a "guide" for more precise but fragile techniques. The effectiveness of this two-step "coarse/fine" entropy-guided detection strategy is demonstrated on GPR data from very rough terrain, and its application beyond the realm of GPR data processing is discussed.
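
    The exact entropy formulation used by the authors is not given in the abstract; the following Python sketch computes a simple local-entropy map of the kind that could drive such a coarse segmentation of the air-ground interface (window size and bin count are arbitrary choices).

```python
import numpy as np

def local_entropy(img, win=7, bins=32):
    """Per-pixel Shannon entropy of the grey-level histogram in a win x win
    neighbourhood -- a simple stand-in for the entropy map used to coarsely
    locate the air-ground interface."""
    img = np.asarray(img, dtype=float)
    edges = np.linspace(img.min(), img.max() + 1e-9, bins + 1)  # shared bin edges
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            patch = padded[r:r + win, c:c + win]
            hist, _ = np.histogram(patch, bins=edges)
            p = hist[hist > 0] / hist.sum()
            out[r, c] = -np.sum(p * np.log2(p))
    return out

# Demo on random data; in a B-scan, high-entropy rows tend to mark the cluttered interface
demo = np.random.default_rng(0).normal(size=(16, 16))
print(local_entropy(demo).shape)
```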

  8. Reverse Classification Accuracy: Predicting Segmentation Performance in the Absence of Ground Truth.

    Science.gov (United States)

    Valindria, Vanya V; Lavdas, Ioannis; Bai, Wenjia; Kamnitsas, Konstantinos; Aboagye, Eric O; Rockall, Andrea G; Rueckert, Daniel; Glocker, Ben

    2017-08-01

    When integrating computational tools, such as automatic segmentation, into clinical practice, it is of utmost importance to be able to assess the level of accuracy on new data and, in particular, to detect when an automatic method fails. However, this is difficult to achieve due to the absence of ground truth. Segmentation accuracy on clinical data might be different from what is found through cross validation, because validation data are often used during incremental method development, which can lead to overfitting and unrealistic performance expectations. Before deployment, performance is quantified using different metrics, for which the predicted segmentation is compared with a reference segmentation, often obtained manually by an expert. But little is known about the real performance after deployment when a reference is unavailable. In this paper, we introduce the concept of reverse classification accuracy (RCA) as a framework for predicting the performance of a segmentation method on new data. In RCA, we take the predicted segmentation from a new image to train a reverse classifier, which is evaluated on a set of reference images with available ground truth. The hypothesis is that if the predicted segmentation is of good quality, then the reverse classifier will perform well on at least some of the reference images. We validate our approach on multi-organ segmentation with different classifiers and segmentation methods. Our results indicate that it is indeed possible to predict the quality of individual segmentations, in the absence of ground truth. Thus, RCA is ideal for integration into automatic processing pipelines in clinical routine and as a part of large-scale image analysis studies.
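
    A minimal sketch of the RCA idea under simplifying assumptions (raw intensity as the only feature, a generic random forest as the reverse classifier, and any non-zero label treated as foreground); the paper itself evaluates several classifiers and multi-organ segmentations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dice(a, b):
    """Dice overlap of two binary masks."""
    return 2.0 * np.sum(a & b) / max(np.sum(a) + np.sum(b), 1)

def reverse_classification_accuracy(new_img, new_pred, references):
    """Train a 'reverse' classifier on the new image and its *predicted*
    segmentation, score it on reference images that do have ground truth,
    and report the best score as a proxy for the quality of new_pred."""
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(new_img.reshape(-1, 1), new_pred.ravel())
    scores = []
    for ref_img, ref_gt in references:  # iterable of (image, ground-truth mask) pairs
        ref_pred = clf.predict(ref_img.reshape(-1, 1)).reshape(ref_gt.shape)
        scores.append(dice(ref_pred.astype(bool), ref_gt.astype(bool)))
    return max(scores)
```

    In practice, the reference set would be a small database of images with expert segmentations, and a low per-image estimate would flag cases for manual review.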

  9. Balancing the fit and logistics costs of market segmentations

    NARCIS (Netherlands)

    Turkensteen, M.; Sierksma, G.; Wieringa, J.E.

    2011-01-01

    Segments are typically formed to serve distinct groups of consumers with differentiated marketing mixes, that better fit their specific needs and wants. However, buyers in a segment are not necessarily geographically closely located. Serving a geographically dispersed segment with one marketing mix

  10. Balancing the fit and logistics costs of market segmentations

    NARCIS (Netherlands)

    Turkensteen, M.; Sierksma, G.; Wieringa, J.E.

    2011-01-01

    Segments are typically formed to serve distinct groups of consumers with differentiated marketing mixes, that better fit their specific needs and wants. However, buyers in a segment are not necessarily geographically closely located. Serving a geographically dispersed segment with one marketing mix

  11. Objective Performance Evaluation of Video Segmentation Algorithms with Ground-Truth

    Institute of Scientific and Technical Information of China (English)

    杨高波; 张兆扬

    2004-01-01

While the development of particular video segmentation algorithms has attracted considerable research interest, relatively little effort has been devoted to providing a methodology for evaluating their performance. In this paper, we propose a methodology to objectively evaluate video segmentation algorithms with ground truth, based on computing the deviation of the segmentation results from the reference segmentation. Four different metrics, based on pixel classification, edges, relative foreground area and relative position respectively, are combined to address spatial accuracy. Temporal coherency is evaluated by utilizing the difference in spatial accuracy between successive frames. The experimental results show the feasibility of our approach. Moreover, it is computationally more efficient than previous methods. It can be applied to provide an offline ranking among different segmentation algorithms and to optimally set the parameters for a given algorithm.
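
    The four spatial metrics are not fully specified in the abstract; as a rough sketch of the evaluation scheme, the snippet below uses a single pixel-wise F-measure as the spatial-accuracy component and derives temporal coherency from its frame-to-frame change.

```python
import numpy as np

def spatial_accuracy(pred, gt):
    """One simple spatial-accuracy component (pixel-wise F-measure of the
    foreground, given boolean masks); the paper combines four metrics."""
    tp = np.sum(pred & gt)
    prec = tp / max(np.sum(pred), 1)
    rec = tp / max(np.sum(gt), 1)
    return 2 * prec * rec / max(prec + rec, 1e-9)

def temporal_coherency(preds, gts):
    """Frame-to-frame change in spatial accuracy across a sequence."""
    acc = [spatial_accuracy(p, g) for p, g in zip(preds, gts)]
    return [abs(acc[t] - acc[t - 1]) for t in range(1, len(acc))]

# Tiny demo on two 4x4 masks
p = np.zeros((4, 4), bool); p[1:3, 1:3] = True
g = np.zeros((4, 4), bool); g[1:3, 1:4] = True
print(spatial_accuracy(p, g))
```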

  12. Behavior of full-scale concrete segmented pipelines under permanent ground displacements

    Science.gov (United States)

    Kim, Junhee; O'Connor, Sean; Nadukuru, Srinivasa; Lynch, Jerome P.; Michalowski, Radoslaw; Green, Russell A.; Pour-Ghaz, Mohammed; Weiss, W. Jason; Bradshaw, Aaron

    2010-03-01

Concrete pipelines are one of the most popular underground lifelines used for the transportation of water resources. Unfortunately, this critical infrastructure system remains vulnerable to ground displacements during seismic and landslide events. Ground displacements may induce significant bending, shear, and axial forces in concrete pipelines and eventually lead to joint failures. In order to understand and model the typical failure mechanisms of concrete segmented pipelines, large-scale experimentation is necessary to explore structural and soil-structure behavior during ground faulting. This paper reports on the experimentation of a reinforced concrete segmented pipeline using the unique capabilities of the NEES Lifeline Experimental and Testing Facilities at Cornell University. Five segments of a full-scale commercial concrete pressure pipe (244 cm long and 37.5 cm in diameter) are constructed as a segmented pipeline under a compacted granular soil in the facility test basin (13.4 m long and 3.6 m wide). Ground displacements are simulated through translation of half of the test basin. A dense array of sensors including LVDTs, strain gages, and load cells is installed along the length of the pipeline to measure the pipeline response while the ground is incrementally displaced. Accurate measures of pipeline displacements and strains are captured up to the compressive and flexural failure of the pipeline joints.

  13. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    Directory of Open Access Journals (Sweden)

    Sungdae Sim

    2012-12-01

    Full Text Available Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.

  14. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    Science.gov (United States)

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
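
    A minimal sketch of the height-histogram step only (the Gibbs-Markov random field refinement and the 3D boundary estimation are omitted); the synthetic height data and the bin/spread parameters are illustrative assumptions.

```python
import numpy as np

def estimate_ground_height_range(heights, bin_size=0.1, spread=0.3):
    """Take the dominant height bin as the ground level and keep points
    within +/- spread metres of it; a simplified stand-in for the
    height-histogram ground estimation described above."""
    bins = np.arange(heights.min(), heights.max() + bin_size, bin_size)
    hist, edges = np.histogram(heights, bins=bins)
    peak = edges[np.argmax(hist)] + bin_size / 2.0
    return peak - spread, peak + spread

# Synthetic scene: mostly flat ground near 0 m plus some taller objects
ground = np.random.default_rng(0).normal(0.0, 0.05, 5000)
objects = np.random.default_rng(1).uniform(0.5, 3.0, 500)
print(estimate_ground_height_range(np.concatenate([ground, objects])))
```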

  15. Software performance in segmenting ground-glass and solid components of subsolid nodules in pulmonary adenocarcinomas.

    Science.gov (United States)

    Cohen, Julien G; Goo, Jin Mo; Yoo, Roh-Eul; Park, Chang Min; Lee, Chang Hyun; van Ginneken, Bram; Chung, Doo Hyun; Kim, Young Tae

    2016-12-01

To evaluate the performance of software in segmenting ground-glass and solid components of subsolid nodules in pulmonary adenocarcinomas. Seventy-three pulmonary adenocarcinomas manifesting as subsolid nodules were included. Two radiologists measured the maximal axial diameter of the ground-glass components on lung windows and that of the solid components on lung and mediastinal windows. Nodules were segmented using software by applying five (-850 HU to -650 HU) and nine (-130 HU to -500 HU) attenuation thresholds. We compared the manual and software measurements of ground-glass and solid components with pathology measurements of tumour and invasive components. Segmentation of ground-glass components at a threshold of -750 HU yielded mean differences of +0.06 mm (p = 0.83; 95% limits of agreement, 4.51 to 4.67) and -2.32 mm (p ...). The differences between the software (at -350 HU) and pathology measurements and between the manual (lung and mediastinal windows) and pathology measurements were -0.12 mm (p = 0.74; -5.73 to 5.55), 0.15 mm (p = 0.73; -6.92 to 7.22), and -1.14 mm (p ...). Software segmentation of ground-glass and solid components in subsolid nodules showed no significant difference with pathology. • Software can effectively segment ground-glass and solid components in subsolid nodules. • Software measurements show no significant difference with pathology measurements. • Manual measurements are more accurate on lung windows than on mediastinal windows.
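
    The reported mean differences and 95% limits of agreement are consistent with a Bland-Altman style analysis; a minimal sketch of that computation, assuming paired software and pathology measurements in millimetres, is given below.

```python
import numpy as np

def limits_of_agreement(software_mm, pathology_mm):
    """Mean difference and 95% limits of agreement (Bland-Altman style),
    assumed to underlie the figures reported above."""
    d = np.asarray(software_mm) - np.asarray(pathology_mm)
    mean_diff = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return mean_diff, (mean_diff - half_width, mean_diff + half_width)

# Invented paired measurements, for illustration only
print(limits_of_agreement([10.2, 8.5, 12.0, 9.8], [10.0, 9.1, 11.5, 10.2]))
```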

  16. Ground truth delineation for medical image segmentation based on Local Consistency and Distribution Map analysis.

    Science.gov (United States)

    Cheng, Irene; Sun, Xinyao; Alsufyani, Noura; Xiong, Zhihui; Major, Paul; Basu, Anup

    2015-01-01

Computer-aided detection (CAD) systems are being increasingly deployed for medical applications in recent years with the goal of speeding up tedious tasks and improving precision. Among others, segmentation is an important component in CAD systems as a preprocessing step to help recognize patterns in medical images. In order to assess the accuracy of a CAD segmentation algorithm, comparison with ground truth data is necessary. To date, ground truth delineation relies mainly on contours that are either manually defined by clinical experts or automatically generated by software. In this paper, we propose a systematic ground truth delineation method based on a Local Consistency Set Analysis approach, which can be used to establish an accurate ground truth representation or, if ground truth is available, to assess the accuracy of a CAD-generated segmentation. We validate our computational model using medical data. Experimental results demonstrate the robustness of our approach. In contrast to current methods, our model also provides consistency information at the distributed boundary pixel level, and is thus invariant to global compensation error.

  17. Seismic fragility formulations for segmented buried pipeline systems including the impact of differential ground subsidence

    Energy Technology Data Exchange (ETDEWEB)

    Pineda Porras, Omar Andrey [Los Alamos National Laboratory; Ordaz, Mario [UNAM, MEXICO CITY

    2009-01-01

Though Differential Ground Subsidence (DGS) impacts the seismic response of segmented buried pipelines, augmenting their vulnerability, fragility formulations to estimate repair rates under such conditions are not available in the literature. Physical models to estimate pipeline seismic damage considering other cases of permanent ground subsidence (e.g. faulting, tectonic uplift, liquefaction, and landslides) have been extensively reported, which is not the case for DGS. The refinement of the study of two important phenomena in Mexico City - the 1985 Michoacan earthquake scenario and the sinking of the city due to ground subsidence - has contributed to the analysis of the interrelation of pipeline damage, ground motion intensity, and DGS; from the analysis of the 48-inch pipeline network of Mexico City's Water System, fragility formulations for segmented buried pipeline systems for two DGS levels are proposed. The novel parameter PGV²/PGA, where PGV is peak ground velocity and PGA is peak ground acceleration, has been used as the seismic parameter in these formulations, since it has shown better correlation with pipeline damage than PGV alone according to previous studies. By comparing the proposed fragilities, it is concluded that a change in the DGS level (from Low-Medium to High) could increase the pipeline repair rates (number of repairs per kilometer) by factors ranging from 1.3 to 2.0; the higher the seismic intensity, the lower the factor.
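
    A minimal sketch of the compound intensity measure and of a generic power-law fragility form; the coefficients are placeholders, not the fitted values reported in the paper for the two DGS levels.

```python
def seismic_parameter(pgv_cm_s, pga_cm_s2):
    """Compound intensity measure PGV^2/PGA used in the fragility formulations."""
    return pgv_cm_s**2 / pga_cm_s2

def repair_rate(pgv_cm_s, pga_cm_s2, a, b):
    """Generic power-law fragility form RR = a * (PGV^2/PGA)^b (repairs per km).
    The coefficients a and b are placeholders for illustration only."""
    return a * seismic_parameter(pgv_cm_s, pga_cm_s2)**b

# Example with invented ground-motion values and placeholder coefficients
print(seismic_parameter(30.0, 400.0))        # 2.25 cm
print(repair_rate(30.0, 400.0, a=0.05, b=1.0))
```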

  18. Phantom-based ground-truth generation for cerebral vessel segmentation and pulsatile deformation analysis

    Science.gov (United States)

    Schetelig, Daniel; Säring, Dennis; Illies, Till; Sedlacik, Jan; Kording, Fabian; Werner, René

    2016-03-01

Hemodynamic and mechanical factors of the vascular system are assumed to play a major role in understanding, e.g., initiation, growth and rupture of cerebral aneurysms. Among those factors, cardiac cycle-related pulsatile motion and deformation of cerebral vessels currently attract much interest. However, imaging of those effects requires high spatial and temporal resolution and remains challenging - and so does the analysis of the acquired images: flow velocity changes and contrast media inflow cause vessel intensity variations in related temporally resolved computed tomography and magnetic resonance angiography data over the cardiac cycle and impede application of intensity threshold-based segmentation and subsequent motion analysis. In this work, a flow phantom for the generation of ground-truth images for evaluation of appropriate segmentation and motion analysis algorithms is developed. The acquired ground-truth data are used to illustrate the interplay between intensity fluctuations and (erroneous) motion quantification by standard threshold-based segmentation, and an adaptive threshold-based segmentation approach is proposed that alleviates the respective issues. The results of the phantom study are further demonstrated to be transferable to patient data.

  19. Local Histogram of Figure/Ground Segmentations for Dynamic Background Subtraction

    Directory of Open Access Journals (Sweden)

    Bineng Zhong

    2010-01-01

Full Text Available We propose a novel feature, the local histogram of figure/ground segmentations, for robust and efficient background subtraction (BGS) in dynamic scenes (e.g., waving trees, ripples in water, illumination changes, camera jitter, etc.). We represent each pixel as a local histogram of figure/ground segmentations, which aims at combining several candidate solutions that are produced by simple BGS algorithms to get a more reliable and robust feature for BGS. The background model of each pixel is constructed as a group of weighted adaptive local histograms of figure/ground segmentations, which describe the structural properties of the surrounding region. This is a natural fusion because multiple complementary BGS algorithms can be used to build background models for scenes. Moreover, the correlation of image variations at neighboring pixels is explicitly utilized to achieve robust detection performance, since neighboring pixels tend to be similarly affected by environmental effects (e.g., dynamic scenes). Experimental results demonstrate the robustness and effectiveness of the proposed method by comparing it with four representatives of the state of the art in BGS.
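
    The paper's exact feature construction (binning and weighting of the local histograms) is not reproduced here; the snippet below is a simplified sketch that, for one pixel, stacks the foreground counts of several candidate BGS outputs over a local window into a single feature vector.

```python
import numpy as np

def local_fg_histogram(segmentations, r, c, win=5):
    """Simplified local figure/ground feature for pixel (r, c): for each
    candidate BGS output (binary H x W array), count the foreground pixels
    in the surrounding win x win patch and stack the counts."""
    half = win // 2
    feats = []
    for seg in segmentations:
        patch = seg[max(r - half, 0):r + half + 1, max(c - half, 0):c + half + 1]
        feats.append(int(patch.sum()))
    return np.array(feats)

# Demo with three random "candidate" segmentations
rng = np.random.default_rng(0)
segs = [rng.integers(0, 2, size=(20, 20)) for _ in range(3)]
print(local_fg_histogram(segs, 10, 10))
```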

  20. Local Histogram of Figure/Ground Segmentations for Dynamic Background Subtraction

    Directory of Open Access Journals (Sweden)

    Yuan Xiaotong

    2010-01-01

Full Text Available We propose a novel feature, the local histogram of figure/ground segmentations, for robust and efficient background subtraction (BGS) in dynamic scenes (e.g., waving trees, ripples in water, illumination changes, camera jitter, etc.). We represent each pixel as a local histogram of figure/ground segmentations, which aims at combining several candidate solutions that are produced by simple BGS algorithms to get a more reliable and robust feature for BGS. The background model of each pixel is constructed as a group of weighted adaptive local histograms of figure/ground segmentations, which describe the structural properties of the surrounding region. This is a natural fusion because multiple complementary BGS algorithms can be used to build background models for scenes. Moreover, the correlation of image variations at neighboring pixels is explicitly utilized to achieve robust detection performance, since neighboring pixels tend to be similarly affected by environmental effects (e.g., dynamic scenes). Experimental results demonstrate the robustness and effectiveness of the proposed method by comparing it with four representatives of the state of the art in BGS.

  1. The Cost of Supplying Segmented Consumers From a Central Facility

    DEFF Research Database (Denmark)

    Turkensteen, Marcel; Klose, Andreas

Organizations regularly face the strategic marketing decision of which groups of consumers they should target. A potential problem, highlighted in Steenkamp et al. (2002), is that the target consumers may be so widely dispersed that an organization cannot serve its customers cost-effectively. We consider three measures of dispersion of demand points: the average distance between demand points, the maximum distance, and the surface size. In our distribution model, all demand points are restocked from a central facility. The observed logistics costs are determined using tour length estimations ... with our ... measure if there are many stops on a route and with our average distance measure if there are relatively few.
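
    A minimal sketch of the three dispersion measures named above for a set of 2-D demand points; the coordinates are illustrative, and the convex hull area is used as the surface size.

```python
import numpy as np
from scipy.spatial import ConvexHull, distance_matrix

def dispersion_measures(points):
    """Average pairwise distance, maximum pairwise distance, and surface size
    (convex hull area) of a set of 2-D demand points."""
    pts = np.asarray(points, dtype=float)
    d = distance_matrix(pts, pts)
    iu = np.triu_indices(len(pts), k=1)          # each unordered pair once
    avg_dist = d[iu].mean()
    max_dist = d[iu].max()
    surface = ConvexHull(pts).volume             # in 2-D, .volume is the enclosed area
    return avg_dist, max_dist, surface

# Five illustrative demand points (a square plus its centre)
print(dispersion_measures([(0, 0), (10, 0), (10, 10), (0, 10), (5, 5)]))
```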

  2. Image segmentation techniques for improved processing of landmine responses in ground-penetrating radar data

    Science.gov (United States)

    Torrione, Peter A.; Collins, Leslie

    2007-04-01

    As ground penetrating radar sensor phenomenology improves, more advanced statistical processing approaches become applicable to the problem of landmine detection in GPR data. Most previous studies on landmine detection in GPR data have focused on the application of statistics and physics based prescreening algorithms, new feature extraction approaches, and improved feature classification techniques. In the typical framework, prescreening algorithms provide spatial location information of anomalous responses in down-track / cross-track coordinates, and feature extraction algorithms are then tasked with generating low-dimensional information-bearing feature sets from these spatial locations. However in time-domain GPR, a significant portion of the data collected at prescreener flagged locations may be unrelated to the true anomaly responses - e.g. ground bounce response, responses either temporally "before" or "after" the anomalous response, etc. The ability to segment the information-bearing region of the GPR image from the background of the image may thus provide improved performance for feature-based processing of anomaly responses. In this work we will explore the application of Markov random fields (MRFs) to the problem of anomaly/background segmentation in GPR data. Preliminary results suggest the potential for improved feature extraction and overall performance gains via application of image segmentation approaches prior to feature extraction.

  3. The Cost of Supplying Segmented Consumers From a Central Facility

    DEFF Research Database (Denmark)

    Turkensteen, Marcel; Klose, Andreas

    Organizations regularly face the strategic marketing decision which groups of consumers they should target. A potential problem, highlighted in Steenkamp et al. (2002), is that the target consumers may be so widely dispersed that an organization cannot serve its customers cost-effectively. We...

  4. Technology Infusion of CodeSonar into the Space Network Ground Segment (RII07)

    Science.gov (United States)

    Benson, Markland

    2008-01-01

    The NASA Software Assurance Research Program (in part) performs studies as to the feasibility of technologies for improving the safety, quality, reliability, cost, and performance of NASA software. This study considers the application of commercial automated source code analysis tools to mission critical ground software that is in the operations and sustainment portion of the product lifecycle.

  5. Segmentation of low‐cost high efficiency oxide‐based thermoelectric materials

    DEFF Research Database (Denmark)

    Le, Thanh Hung; Van Nong, Ngo; Linderoth, Søren;

    2015-01-01

... efficiency of TE oxides has been a major drawback limiting the broader application of these materials. In this work, theoretical calculations are used to predict how segmentation of oxide and semimetal materials, utilizing the benefits of both types of materials, can provide high efficiency, high temperature ... segmented legs based on p-type Ca3Co4O9 and n-type ZnO oxides, excluding electrical and thermal losses. It is found that the maximum efficiency of the segmented unicouple decreases linearly with increasing interfacial contact resistance. The obtained results provide a useful tool for designing low ... oxide-based segmented legs. The materials for segmentation are selected by their compatibility factors and their conversion efficiency versus material cost, i.e., the "efficiency ratio". Numerical modelling results showed that the conversion efficiency could reach values of more than 10% for unicouples using ...

  6. Ground Water Atlas of the United States: Segment 8, Montana, North Dakota, South Dakota, Wyoming

    Science.gov (United States)

    Whitehead, R.L.

    1996-01-01

    The States of Montana, North Dakota, South Dakota, and Wyoming compose the 392,764-square-mile area of Segment 8, which is in the north-central part of the continental United States. The area varies topographically from the high rugged mountain ranges of the Rocky Mountains in western Montana and Wyoming to the gently undulating surface of the Central Lowland in eastern North Dakota and South Dakota (fig. 1). The Black Hills in southwestern South Dakota and northeastern Wyoming interrupt the uniformity of the intervening Great Plains. Segment 8 spans the Continental Divide, which is the drainage divide that separates streams that generally flow westward from those that generally flow eastward. The area of Segment 8 is drained by the following major rivers or river systems: the Green River drains southward to join the Colorado River, which ultimately discharges to the Gulf of California; the Clark Fork and the Kootenai Rivers drain generally westward by way of the Columbia River to discharge to the Pacific Ocean; the Missouri River system and the North Platte River drain eastward and southeastward to the Mississippi River, which discharges to the Gulf of Mexico; and the Red River of the North and the Souris River drain northward through Lake Winnipeg to ultimately discharge to Hudson Bay in Canada. These rivers and their tributaries are an important source of water for public-supply, domestic and commercial, agricultural, and industrial uses. Much of the surface water has long been appropriated for agricultural use, primarily irrigation, and for compliance with downstream water pacts. Reservoirs store some of the surface water for flood control, irrigation, power generation, and recreational purposes. Surface water is not always available when and where it is needed, and ground water is the only other source of supply. Ground water is obtained primarily from wells completed in unconsolidated-deposit aquifers that consist mostly of sand and gravel, and from wells

  7. Ground Water Atlas of the United States: Segment 1, California, Nevada

    Science.gov (United States)

    Planert, Michael; Williams, John S.

    1995-01-01

    California and Nevada compose Segment 1 of the Ground Water Atlas of the United States. Segment 1 is a region of pronounced physiographic and climatic contrasts. From the Cascade Mountains and the Sierra Nevada of northern California, where precipitation is abundant, to the Great Basin in Nevada and the deserts of southern California, which have the most arid environments in the United States, few regions exhibit such a diversity of topography or environment. Since the discovery of gold in the mid-1800's, California has experienced a population, industrial, and agricultural boom unrivaled by that of any other State. Water needs in California are very large, and the State leads the United States in agricultural and municipal water use. The demand for water exceeds the natural water supply in many agricultural and nearly all urban areas. As a result, water is impounded by reservoirs in areas of surplus and transported to areas of scarcity by an extensive network of aqueducts. Unlike California, which has a relative abundance of water, development in Nevada has been limited by a scarcity of recoverable freshwater. The Truckee, the Carson, the Walker, the Humboldt, and the Colorado Rivers are the only perennial streams of significance in the State. The individual basin-fill aquifers, which together compose the largest known ground-water reserves, receive little annual recharge and are easily depleted. Nevada is sparsely populated, except for the Las Vegas, the Reno-Sparks, and the Carson City areas, which rely heavily on imported water for public supplies. Although important to the economy of Nevada, agriculture has not been developed to the same degree as in California due, in large part, to a scarcity of water. Some additional ground-water development might be possible in Nevada through prudent management of the basin-fill aquifers and increased utilization of ground water in the little-developed carbonate-rock aquifers that underlie the eastern one-half of the State

  8. Consolidated Ground Segment Requirements for a UHF Radar for the ESSAS

    Science.gov (United States)

    Muller, Florent; Vera, Juan

    2009-03-01

    ESA has launched a nine-month study to define the requirements associated with the ground segment of a UHF (300-3000 MHz) radar system. The study has been awarded in open competition to a consortium led by Onera, in association with the Spanish companies Indra and Deimos, the latter acting as Indra's sub-contractor. After a phase of consolidation of the requirements, different monostatic and bistatic radar concepts will be proposed and evaluated. Two concepts will be selected for further design studies. ESA will then select the best one for detailed design as well as cost and performance evaluation. The aim of this paper is to present the results of the first phase of the study, concerning the consolidation of the radar system requirements. The main mission of the system is to build and maintain, in an autonomous way, a catalogue of the objects in low Earth orbit (apogee lower than 2000 km) for different object sizes, depending on the successive development phases of the project. The final step must provide the capability of detecting and tracking 10 cm objects, with a possible upgrade to 5 cm objects. A demonstration phase must be defined for 1 m objects. These different steps will be considered during all phases of the study. Taking this mission and these steps as a starting point, the first phase defines a set of requirements for the radar system; it was finished at the end of January 2009. The first part describes the constraints derived from the targets and their environment. Orbiting objects have a given distribution in space, and their observability and detectability are based on it; they also depend on the location of the radar system, on natural propagation phenomena (especially ionospheric effects), and on the characteristics of the objects. The second part focuses on the mission itself. To carry out the mission, objects must be detected and tracked regularly to refresh the associated orbital parameters

  9. Ground Water Atlas of the United States: Segment 3, Kansas, Missouri, Nebraska

    Science.gov (United States)

    Miller, James A.; Appel, Cynthia L.

    1997-01-01

    The three States-Kansas, Missouri, and Nebraska-that comprise Segment 3 of this Atlas are in the central part of the United States. The major rivers that drain these States are the Niobrara, the Platte, the Kansas, the Arkansas, and the Missouri; the Mississippi River is the eastern boundary of the area. These rivers supply water for many uses but ground water is the source of slightly more than one-half of the total water withdrawn for all uses within the three-State area. The aquifers that contain the water consist of consolidated sedimentary rocks and unconsolidated deposits that range in age from Cambrian through Quaternary. This chapter describes the geology and hydrology of each of the principal aquifers throughout the three-State area. Some water enters Segment 3 as inflow from rivers and aquifers that cross the segment boundaries, but precipitation, as rain and snow, is the primary source of water within the area. Average annual precipitation (1951-80) increases from west to east and ranges from about 16 to 48 inches (fig. 1). The climate of the western one-third of Kansas and Nebraska, where the average annual precipitation generally is less than 20 inches per year, is considered to be semiarid. This area receives little precipitation chiefly because it is distant from the Gulf of Mexico, which is the principal source of moisture-laden air for the entire segment, but partly because it is located in the rain shadow of the Rocky Mountains. Average annual precipitation is greatest in southeastern Missouri. Much of the precipitation is returned to the atmosphere by evapotranspiration, which is the combination of evaporation from the land surface and surface-water bodies, and transpiration from plants. Some of the precipitation either flows directly into streams as overland runoff or percolates into the soil and then moves downward into aquifers where it is stored for a time and subsequently released as base flow to streams. Average annual runoff, which is the

  10. General introduction on payloads, ground segment and data application of Fengyun 3A

    Institute of Scientific and Technical Information of China (English)

    Peng ZHANG; Jun YANG; Chaohua DONG; Naimeng LU; Zhongdong YANG; Jinmin SHI

    2009-01-01

    Fengyun 3 series are the second-generation polar-orbiting meteorological satellites of China. The first satellite of the Fengyun 3 series, FY-3A, is a research and development satellite with 11 payloads onboard. FY-3A was launched successfully at 11 a.m. on May 27, 2008. Since the launch, FY-3A data have been applied to services during the flood season and the Beijing 2008 Olympic Games. In this paper, the platform, payloads, and ground segment designs are introduced. Some typical images acquired during the on-orbit commissioning test are presented. Improvements of FY-3A in Earth observation are summarized at the end by comparing it with FY-1D, the last satellite of the Fengyun 1 series.

  11. Update on Multi-Variable Parametric Cost Models for Ground and Space Telescopes

    Science.gov (United States)

    Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda

    2012-01-01

    Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper reports on recent revisions and improvements to our ground telescope cost model and refinements of our understanding of space telescope cost models. One interesting observation is that while space telescopes are 50X to 100X more expensive than ground telescopes, their respective scaling relationships are similar. Another interesting speculation is that the role of technology development may be different between ground and space telescopes. For ground telescopes, the data indicates that technology development tends to reduce cost by approximately 50% every 20 years. But for space telescopes, there appears to be no such cost reduction because we do not tend to re-fly similar systems. Thus, instead of reducing cost, 20 years of technology development may be required to enable a doubling of space telescope capability. Other findings include: mass should not be used to estimate cost; spacecraft and science instrument costs account for approximately 50% of total mission cost; and, integration and testing accounts for only about 10% of total mission cost.
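
    To make the notion of a parametric scaling relationship concrete, the short sketch below fits a single-variable power law, cost proportional to D**a, to hypothetical aperture/cost pairs and applies an illustrative 50% cost reduction per 20 years of technology development (the ground-telescope trend quoted above). The data points and the resulting exponent are assumptions for illustration only, not coefficients from the study.

    import numpy as np

    # Hypothetical (aperture diameter [m], cost [arbitrary units]) pairs.
    diameters = np.array([2.4, 3.5, 6.5, 8.0, 10.0])
    costs = np.array([40.0, 110.0, 520.0, 900.0, 1600.0])

    # Fit log(cost) = a*log(D) + b, i.e. a power law cost = exp(b) * D**a.
    a, b = np.polyfit(np.log(diameters), np.log(costs), 1)

    def estimated_cost(diameter_m, years_of_technology=0.0):
        # Power-law estimate, discounted by an assumed halving of cost
        # for every 20 years of technology development.
        return np.exp(b) * diameter_m ** a * 0.5 ** (years_of_technology / 20.0)

    print(f"fitted diameter exponent: {a:.2f}")
    print(f"30 m aperture, 20 years of development: {estimated_cost(30.0, 20.0):.0f}")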

  12. Cost analysis of ground-water supplies in the North Atlantic region, 1970

    Science.gov (United States)

    Cederstrom, Dagfin John

    1973-01-01

    The cost of municipal and industrial ground water (or, more specifically, large supplies of ground water) at the wellhead in the North Atlantic Region in 1970 generally ranged from 1.5 to 5 cents per thousand gallons. Water from crystalline rocks and shale is relatively expensive. Water from sandstone is less so. Costs of water from sands and gravels in glaciated areas and from Coastal Plain sediments range from moderate to very low. In carbonate rocks costs range from low to fairly high. The cost of ground water at the wellhead is low in areas of productive aquifers, but owing to the cost of connecting pipe, costs increase significantly in multiple-well fields. In the North Atlantic Region, development of small to moderate supplies of ground water may offer favorable cost alternatives to planners, but large supplies of ground water for delivery to one point cannot generally be developed inexpensively. Well fields in the less productive aquifers may be limited by costs to 1 or 2 million gallons a day, but in the more favorable aquifers development of several tens of millions of gallons a day may be practicable and inexpensive. Cost evaluations presented cannot be applied to any one specific well or specific site because yields of wells in any one place will depend on the local geologic and hydrologic conditions; however, with such cost adjustments as may be necessary, the methodology presented should have wide applicability. Data given show the cost of water at the wellhead based on the average yield of several wells. The cost of water delivered by a well field includes costs of connecting pipe and of wells that have the yields and spacings specified. Cost of transport of water from the well field to point of consumption and possible cost of treatment are not evaluated. In the methodology employed, costs of drilling and testing, pumping equipment, engineering for the well field, amortization at 5½ percent interest, maintenance, and cost of power are considered. The
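
    The cost elements listed above lend themselves to a simple annualized calculation: amortize the capital, add yearly maintenance and power, and divide by the volume pumped. The sketch below does exactly that; the interest rate, lifetime and all inputs in the example call are hypothetical 1970-level figures chosen only to land in the cited cents-per-thousand-gallons range, not values from the report.

    def wellhead_cost_cents_per_kgal(capital, interest, years,
                                     annual_maintenance, annual_power,
                                     yield_gpm, utilisation):
        # Capital recovery factor: equivalent uniform annual cost of the
        # up-front investment (drilling, testing, pumps, engineering).
        crf = interest * (1 + interest) ** years / ((1 + interest) ** years - 1)
        annual_cost = capital * crf + annual_maintenance + annual_power
        # Thousands of gallons delivered per year at the given utilisation.
        kgal_per_year = yield_gpm * 60 * 24 * 365 * utilisation / 1000.0
        return 100.0 * annual_cost / kgal_per_year   # cents per 1000 gallons

    # Hypothetical well field: $60,000 capital amortized over 25 years at
    # 5.5% interest, $500/yr maintenance, $1,500/yr power, 700 gal/min
    # pumped half of the time -> roughly 3.5 cents per thousand gallons.
    print(wellhead_cost_cents_per_kgal(60_000, 0.055, 25, 500, 1_500, 700, 0.5))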

  13. The CryoSat-2 Payload Data Ground Segment and Data Processing

    Science.gov (United States)

    Frommknecht, Bjoern; Parrinello, Tommaso; Badessi, Stefano; Mizzi, Loretta; Torroni, Vittorio

    2016-04-01

    Selected as the first Earth Explorer Opportunity mission and following the launch failure of CryoSat-1 in 2005, the CryoSat-2 mission was launched on 8 April 2010. It is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice over a baseline three-year period. The main CryoSat-2 mission objectives can be summarised as the determination of regional and basin-scale trends in perennial Arctic sea-ice thickness and mass, and of the regional and total contributions of the Antarctic and Greenland ice sheets to global sea level. The observations made over the lifetime of the mission will therefore provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. The scope of this paper is to describe the present configuration of the CryoSat-2 ground segment and its main functions in satisfying the CryoSat-2 mission requirements. In particular, the paper highlights the current status of the processing of the SIRAL instrument L1b and L2 products, both for ocean and ice products, in terms of completeness and availability. Additional information is also given on the current status and planned evolutions of the PDGS, including product and processor updates and the associated reprocessing campaigns.

  14. Measured and estimated ground reaction forces for multi-segment foot models.

    Science.gov (United States)

    Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L; Richards, James G

    2010-12-01

    Accurate measurement of ground reaction forces under discrete areas of the foot is important in the development of more advanced foot models, which can improve our understanding of foot and ankle function. To overcome current equipment limitations, a few investigators have proposed combining a pressure mat with a single force platform and using a proportionality assumption to estimate subarea shear forces and free moments. In this study, two adjacent force platforms were used to evaluate the accuracy of the proportionality assumption on a three segment foot model during normal gait. Seventeen right feet were tested using a targeted walking approach, isolating two separate joints: transverse tarsal and metatarsophalangeal. Root mean square (RMS) errors in shear forces up to 6% body weight (BW) were found using the proportionality assumption, with the highest errors (peak absolute errors up to 12% BW) occurring between the forefoot and toes in terminal stance. The hallux exerted a small braking force in opposition to the propulsive force of the forefoot, which was unaccounted for by the proportionality assumption. While the assumption may be suitable for specific applications (e.g. gait analysis models), it is important to understand that some information on foot function can be lost. The results help highlight possible limitations of the assumption. Measured ensemble average subarea shear forces during normal gait are also presented for the first time.
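
    The proportionality assumption evaluated above can be written in a few lines: each subarea receives the same fraction of the platform shear as the fraction of the vertical load it carries on the pressure mat. The sketch below is a generic illustration with made-up numbers; as the study notes, it cannot reproduce a hallux shear that opposes the forefoot shear, because every estimated share keeps the sign of the total.

    import numpy as np

    def proportional_shear(total_shear, subarea_vertical):
        # Split a force-platform shear component among foot subareas in
        # proportion to the vertical force each subarea carries.
        subarea_vertical = np.asarray(subarea_vertical, dtype=float)
        weights = subarea_vertical / subarea_vertical.sum()
        return total_shear * weights

    # Hypothetical instant in terminal stance: hindfoot / forefoot / hallux
    # vertical loads of 20, 550 and 90 N sharing a 120 N propulsive shear.
    print(proportional_shear(120.0, [20.0, 550.0, 90.0]))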

  15. An individual and dynamic Body Segment Inertial Parameter validation method using ground reaction forces.

    Science.gov (United States)

    Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice

    2014-05-01

    Over the last decades a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimates, but to our knowledge a true validation has never been completely successful, because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) and to confirm the results by comparing contact forces recalculated using inverse dynamics with those obtained from a force plate. Furthermore, the results are compared with the estimation method recently proposed by Dumas et al. (2007). Additionally, the results are cross-validated with a high-velocity overarm throwing movement. Across all conditions, higher correlations, smaller error metrics and smaller RMSE values are found for the proposed BSIP estimation (IM), which shows its advantage over recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that this method can be of significant advantage compared to conventional methods.

  16. Automatic segmentation of ground-glass opacities in lung CT images by using Markov random field-based algorithms.

    Science.gov (United States)

    Zhu, Yanjie; Tan, Yongqing; Hua, Yanqing; Zhang, Guozhen; Zhang, Jianguo

    2012-06-01

    Chest radiologists rely on the segmentation and quantitative analysis of ground-glass opacities (GGO) to perform imaging diagnoses that evaluate the disease severity or recovery stage of diffuse parenchymal lung diseases. However, it is computationally difficult to segment and analyze patterns of GGO compared with other lung diseases, since GGO usually do not have clear boundaries. In this paper, we present a new approach which automatically segments GGO in lung computed tomography (CT) images using algorithms derived from Markov random field theory. Further, we systematically evaluate the performance of the algorithms in segmenting GGO in lung CT images under different situations. CT image studies from 41 patients with diffuse lung diseases were enrolled in this research. The local distributions were modeled with both simple and adaptive (AMAP) models of maximum a posteriori (MAP). For best segmentation, we used the simulated annealing algorithm with a Gibbs sampler to solve the combinatorial optimization problem of the MAP estimators, and we applied a knowledge-guided strategy to reduce false-positive regions. We achieved AMAP-based GGO segmentation results of 86.94%, 94.33%, and 94.06% in average sensitivity, specificity, and accuracy, respectively, and we evaluated the performance using radiologists' subjective evaluation and quantitative analysis and diagnosis. We also compared the results of AMAP-based GGO segmentation with those of support vector machine-based methods, and we discuss the reliability and other issues of AMAP-based GGO segmentation. Our research results demonstrate the acceptability and usefulness of AMAP-based GGO segmentation for assisting radiologists in detecting GGO in high-resolution CT diagnostic procedures.
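
    For readers unfamiliar with MAP estimation by simulated annealing, the toy sketch below Gibbs-samples binary pixel labels under fixed Gaussian class models and an Ising prior while lowering a temperature parameter. It is only a schematic stand-in for the adaptive MAP (AMAP) formulation and knowledge-guided post-processing of the paper; the class parameters and cooling schedule are assumptions.

    import numpy as np

    def annealed_map_segment(img, mu=(0.0, 1.0), sigma=(0.3, 0.3), beta=1.0,
                             t0=4.0, cooling=0.9, sweeps=25, seed=0):
        # Two-class MAP segmentation by simulated annealing with a Gibbs
        # sampler (pure-Python loops: intended only for small images).
        rng = np.random.default_rng(seed)
        labels = (img > np.mean(mu)).astype(int)
        rows, cols = img.shape
        temp = t0
        for _ in range(sweeps):
            for r in range(rows):
                for c in range(cols):
                    energy = []
                    for k in (0, 1):
                        data = 0.5 * ((img[r, c] - mu[k]) / sigma[k]) ** 2
                        disagree = sum(labels[rr, cc] != k
                                       for rr, cc in ((r - 1, c), (r + 1, c),
                                                      (r, c - 1), (r, c + 1))
                                       if 0 <= rr < rows and 0 <= cc < cols)
                        energy.append(data + beta * disagree)
                    # Gibbs step: draw label 1 with its Boltzmann probability.
                    p1 = 1.0 / (1.0 + np.exp((energy[1] - energy[0]) / temp))
                    labels[r, c] = int(rng.random() < p1)
            temp *= cooling   # cool so the sampler settles near the MAP labelling
        return labels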

  17. Improvement of band segmentation in Epo images via column shift transformation with cost functions.

    Science.gov (United States)

    Stolc, S; Bajla, I

    2006-04-01

    In recent years, the development of methodology and laboratory techniques for doping control (DC) of recombinant erythropoietin (rEpo) has become one of the most important topics pursued by doping control laboratories accredited by the World Anti-Doping Agency (WADA). The software system GASepo has been developed within the international WADA project as a support for Epo doping control. Although a great number of functions for automatic image processing are included in this software, for Epo images with considerably distorted bands additional effort is required from the user to interactively correct the results of improper band segmentation. In this paper the problem of geometrically distorted bands is addressed from the viewpoint of how to transform the lanes in distorted Epo images in order to achieve better band segmentation. A method of band straightening via column shift transformation is proposed, formulated as an optimization procedure with cost functions. The method involves several novel elements: a two-stage optimization procedure, four cost functions and the selection of relevant columns. The developed band straightening algorithm (BSA) has been tested on real Epo images with distorted bands. Based on an evaluation scheme involving the GASepo software itself, a recommendation is made to implement the method with the cost function based on the correlation matrix. Estimates of the computational complexity of the individual steps of the BSA are also given.
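
    A minimal version of the column-shift idea is sketched below: every column of a lane image is shifted vertically by the integer offset that maximizes its correlation with the lane's mean column profile, a crude stand-in for the correlation-matrix cost function recommended above. The two-stage optimization and the selection of relevant columns used by the actual BSA are omitted, and the function name and parameters are hypothetical.

    import numpy as np

    def straighten_lane(lane, max_shift=10):
        # Shift each column by the offset (within +/- max_shift rows) that
        # maximises its correlation with the lane's mean column profile.
        reference = lane.mean(axis=1)
        out = np.empty_like(lane, dtype=float)
        for j in range(lane.shape[1]):
            col = lane[:, j].astype(float)
            best_shift, best_corr = 0, -np.inf
            for s in range(-max_shift, max_shift + 1):
                corr = np.corrcoef(np.roll(col, s), reference)[0, 1]
                if corr > best_corr:
                    best_corr, best_shift = corr, s
            out[:, j] = np.roll(col, best_shift)
        return out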

  18. Ground Water Atlas of the United States: Segment 4, Oklahoma, Texas

    Science.gov (United States)

    Ryder, Paul D.

    1996-01-01

    The two States, Oklahoma and Texas, that compose Segment 4 of this Atlas are located in the south-central part of the Nation. These States are drained by numerous rivers and streams, the largest being the Arkansas, the Canadian, the Red, the Sabine, the Trinity, the Brazos, the Colorado, and the Pecos Rivers and the Rio Grande. Many of these rivers and their tributaries supply large amounts of water for human use, mostly in the eastern parts of the two States. The large perennial streams in the east with their many associated impoundments coincide with areas that have dense populations. Large metropolitan areas such as Oklahoma City and Tulsa, Okla., and Dallas, Fort Worth, Houston, and Austin, Tex., are supplied largely or entirely by surface water. However, in 1985 more than 7.5 million people, or about 42 percent of the population of the two States, depended on ground water as a source of water supply. The metropolitan areas of San Antonio and El Paso, Tex., and numerous smaller communities depend largely or entirely on ground water for their source of supply. The ground water is contained in aquifers that consist of unconsolidated deposits and consolidated sedimentary rocks. This chapter describes the geology and hydrology of each of the principal aquifers throughout the two-State area. Precipitation is the source of all the water in Oklahoma and Texas. Average annual precipitation ranges from about 8 inches per year in southwestern Texas to about 56 inches per year in southeastern Texas (fig. 1). In general, precipitation increases rather uniformly from west to east in the two States. Much of the precipitation either flows directly into rivers and streams as overland runoff or indirectly as base flow that discharges from aquifers where the water has been stored for some time. Accordingly, the areal distribution of average annual runoff from 1951 to 1980 (fig. 2) reflects that of average annual precipitation. Average annual runoff in the two-State area ranges

  19. Probabilistic prediction of expected ground condition and construction time and costs in road tunnels

    Directory of Open Access Journals (Sweden)

    A. Mahmoodzadeh

    2016-10-01

    Ground condition and construction (excavation and support) time and costs are the key factors in decision-making during planning and design phases of a tunnel project. An innovative methodology for probabilistic estimation of ground condition and construction time and costs is proposed, which is an integration of the ground prediction approach based on Markov process, and the time and cost variance analysis based on Monte-Carlo (MC) simulation. The former provides the probabilistic description of ground classification along tunnel alignment according to the geological information revealed from geological profile and boreholes. The latter provides the probabilistic description of the expected construction time and costs for each operation according to the survey feedbacks from experts. Then an engineering application to Hamro tunnel is presented to demonstrate how the ground condition and the construction time and costs are estimated in a probabilistic way. In most items, in order to estimate the data needed for this methodology, a number of questionnaires are distributed among the tunneling experts and finally the mean values of the respondents are applied. These facilitate both the owners and the contractors to be aware of the risk that they should carry before construction, and are useful for both tendering and bidding.

  20. Probabilistic prediction of expected ground condition and construction time and costs in road tunnels

    Institute of Scientific and Technical Information of China (English)

    A. Mahmoodzadeh; S. Zare

    2016-01-01

    Ground condition and construction (excavation and support) time and costs are the key factors in decision-making during planning and design phases of a tunnel project. An innovative methodology for probabilistic estimation of ground condition and construction time and costs is proposed, which is an integration of the ground prediction approach based on Markov process, and the time and cost variance analysis based on Monte-Carlo (MC) simulation. The former provides the probabilistic description of ground classification along tunnel alignment according to the geological information revealed from geological profile and boreholes. The latter provides the probabilistic description of the expected construction time and costs for each operation according to the survey feedbacks from experts. Then an engineering application to Hamro tunnel is presented to demonstrate how the ground condition and the construction time and costs are estimated in a probabilistic way. In most items, in order to estimate the data needed for this methodology, a number of questionnaires are distributed among the tunneling experts and finally the mean values of the respondents are applied. These facilitate both the owners and the contractors to be aware of the risk that they should carry before construction, and are useful for both tendering and bidding.
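
    The coupling of a Markov ground-class model with Monte-Carlo sampling of per-class time and cost can be sketched in a few lines. The transition matrix and the triangular time/cost distributions below are invented placeholders standing in for the expert-questionnaire inputs described in the two records above.

    import numpy as np

    def simulate_tunnel(n_runs=10000, n_sections=50, seed=1):
        # Markov chain over three ground classes along the tunnel, with
        # per-section advance time [days] and cost [k$] drawn from
        # triangular distributions; returns P10/P50/P90 of the totals.
        rng = np.random.default_rng(seed)
        P = np.array([[0.70, 0.25, 0.05],     # assumed transition probabilities
                      [0.20, 0.60, 0.20],
                      [0.10, 0.30, 0.60]])
        time_par = [(1, 2, 4), (2, 4, 7), (4, 7, 12)]           # (min, mode, max)
        cost_par = [(20, 30, 45), (35, 50, 80), (60, 90, 150)]  # k$ per section
        totals = np.zeros((n_runs, 2))
        for i in range(n_runs):
            state, t, c = 0, 0.0, 0.0
            for _ in range(n_sections):
                state = rng.choice(3, p=P[state])
                t += rng.triangular(*time_par[state])
                c += rng.triangular(*cost_par[state])
            totals[i] = t, c
        return np.percentile(totals, [10, 50, 90], axis=0)

    print(simulate_tunnel())   # rows: P10 / P50 / P90; columns: time, cost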

  1. How human resource organization can enhance space information acquisition and processing: the experience of the VENESAT-1 ground segment

    Science.gov (United States)

    Acevedo, Romina; Orihuela, Nuris; Blanco, Rafael; Varela, Francisco; Camacho, Enrique; Urbina, Marianela; Aponte, Luis Gabriel; Vallenilla, Leopoldo; Acuña, Liana; Becerra, Roberto; Tabare, Terepaima; Recaredo, Erica

    2009-12-01

    Built in cooperation with the P.R. of China, on October 29, 2008 the Bolivarian Republic of Venezuela launched its first telecommunication satellite, the so-called VENESAT-1 (Simón Bolívar Satellite), which operates in the C band (covering Central America, the Caribbean region and most of South America), the Ku band (Bolivia, Cuba, Dominican Republic, Haiti, Paraguay, Uruguay, Venezuela) and the Ka band (Venezuela). The launch of VENESAT-1 represents the starting point for Venezuela as an active player in the field of space science and technology. In order to fulfill mission requirements and to guarantee the satellite's health, local professionals must provide continuous monitoring, orbit calculation, maneuver preparation and execution, data preparation and processing, as well as database management at the VENESAT-1 ground segment, which includes both a primary and a backup site. In summary, data processing and real-time data management are part of the daily activities performed by the personnel of the ground segment. Using published and unpublished information, this paper presents how human resource organization can enhance space information acquisition and processing, by analyzing the proposed organizational structure for the VENESAT-1 ground segment. We have found that the proposed units within the organizational structure reflect three key issues for mission management: satellite operations, ground operations, and site maintenance. The proposed organization is simple (3 hierarchical levels and 7 units), and communication channels seem efficient in terms of facilitating information acquisition, processing, storage, flow and exchange. Furthermore, the proposal includes a manual containing the full description of personnel responsibilities and profiles, which efficiently allocates the management and operation of key software for satellite operation such as the Real-time Data Transaction Software (RDTS), Data Management Software (DMS), and Carrier Spectrum Monitoring Software (CSM

  2. Technology, Safety and Costs of Decommissioning a Reference Low-Level Waste Burial Ground. Main Report

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, E. S.; Holter, G. M.

    1980-06-01

    Safety and cost information are developed for the conceptual decommissioning of commercial low-level waste (LLW) burial grounds. Two generic burial grounds, one located on an arid western site and the other located on a humid eastern site, are used as reference facilities for the study. The two burial grounds are assumed to have the same site capacity for waste, the same radioactive waste inventory, and similar trench characteristics and operating procedures. The climate, geology, and hydrology of the two sites are chosen to be typical of real western and eastern sites. Volume 1 (Main Report) contains background information and study results in summary form.

  3. Technology, Safety and Costs of Decommissioning a Reference Low-Level Waste Burial Ground. Appendices

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-06-01

    Safety and cost information are developed for the conceptual decommissioning of commercial low-level waste (LLW) burial grounds. Two generic burial grounds, one located on an arid western site and the other located on a humid eastern site, are used as reference facilities for the study. The two burial grounds are assumed to have the same site capacity for waste, the same radioactive waste inventory, and similar trench characteristics and operating procedures. The climate, geology, and hydrology of the two sites are chosen to be typical of real western and eastern sites. Volume 2 (Appendices) contains the detailed analyses and data needed to support the results given in Volume 1.

  4. Senior year high school pupils’ segmentation based on the benefits and costs considered in the decision-making process of educational choices

    OpenAIRE

    Mihai-Florin BĂCILĂ

    2012-01-01

    Attracting financial resources is an important issue for universities. Because it is impossible for universities to cover the whole cost of education from the state budget, the importance of attracting students has increased. As pupils' needs and wishes are heterogeneous, universities must divide the educational market into smaller homogeneous segments in order to better understand the decision-making process. Although the segmentation concept and the decision making process of educational choic...

  5. Portable end-to-end ground system for low-cost mission support

    Science.gov (United States)

    Lam, Barbara

    1996-11-01

    This paper presents a revolutionary architecture of the end-to-end ground system to reduce overall mission support costs. The present ground system of the Jet Propulsion Laboratory (JPL) is costly to operate, maintain, deploy, reproduce, and document. In the present climate of shrinking NASA budgets, this proposed architecture takes on added importance as it should dramatically reduce all of the above costs. Currently, the ground support functions (i.e., receiver, tracking, ranging, telemetry, command, monitor and control) are distributed among several subsystems that are housed in individual rack-mounted chassis. These subsystems can be integrated into one portable laptop system using established Multi Chip Module (MCM) packaging technology and object-based software libraries. The large scale integration of subsystems into a small portable system connected to the World Wide Web (WWW) will greatly reduce operations, maintenance and reproduction costs. Several of the subsystems can be implemented using Commercial Off-The-Shelf (COTS) products further decreasing non-recurring engineering costs. The inherent portability of the system will open up new ways for using the ground system at the "point-of-use" site as opposed to maintaining several large centralized stations. This eliminates the propagation delay of the data to the Principal Investigator (PI), enabling the capture of data in real-time and performing multiple tasks concurrently from any location in the world. Sample applications are to use the portable ground system in remote areas or mobile vessels for real-time correlation of satellite data with earth-bound instruments; thus, allowing near real-time feedback and control of scientific instruments. This end-to-end portable ground system will undoubtedly create opportunities for better scientific observation and data acquisition.

  6. Cost-Effective Control of Ground-Level Ozone Pollution in and around Beijing

    Institute of Scientific and Technical Information of China (English)

    Xie Xuxuan; Zhang Shiqiu; Xu Jianhua; Wu Dan; Zhu Tong

    2012-01-01

    Ground level ozone pollution has become a significant air pollution problem in Beijing. Because of the complex way in which ozone is formed, it is difficult for policy makers to identify optimal control options on a cost-effective basis. This paper identifies and assesses a range of options for addressing this problem. We apply the Ambient Least Cost Model and compare the economic costs of control options, then recommend the most effective sequence to realize pollution control at the lowest cost. The study finds that installing Stage II gasoline vapor recovery systems at Beijing's 1446 gasoline stations would be the most cost-effective option. Overall, options to reduce ozone pollution by cutting vehicular emissions are much more cost-effective than options to "clean up" coal-fired power plants.
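
    The least-cost logic behind such a ranking can be illustrated with a simple greedy ordering of control options by cost per ton of precursor emissions reduced. This is only a schematic of the ranking idea, not the Ambient Least Cost Model itself, and the option list in the example is entirely fictitious.

    def least_cost_plan(options, target_reduction):
        # options: (name, reduction in tons/yr, annual cost) tuples.
        # Apply options in order of increasing cost per ton reduced until
        # the target reduction is reached.
        plan, achieved, total_cost = [], 0.0, 0.0
        for name, reduction, cost in sorted(options, key=lambda o: o[2] / o[1]):
            if achieved >= target_reduction:
                break
            plan.append(name)
            achieved += reduction
            total_cost += cost
        return plan, achieved, total_cost

    # Fictitious options: (measure, tons of VOC reduced per year, cost in $/yr)
    options = [("vapor recovery", 8000, 4.0e6),
               ("fleet retrofits", 5000, 9.0e6),
               ("power plant controls", 12000, 6.0e7)]
    print(least_cost_plan(options, target_reduction=10000))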

  7. Feedback enhances feedforward figure-ground segmentation by changing firing mode

    OpenAIRE

    Hans Supèr; August Romeo

    2011-01-01

    In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed two-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-gro...

  8. Automated choroid segmentation in three-dimensional 1-μm wide-view OCT images with gradient and regional costs

    Science.gov (United States)

    Shi, Fei; Tian, Bei; Zhu, Weifang; Xiang, Dehui; Zhou, Lei; Xu, Haobo; Chen, Xinjian

    2016-12-01

    Choroid thickness and volume estimated from optical coherence tomography (OCT) images have emerged as important metrics in disease management. This paper presents an automated three-dimensional (3-D) method for segmenting the choroid from 1-μm wide-view swept-source OCT image volumes, including segmentation of Bruch's membrane (BM) and the choroidal-scleral interface (CSI). Two auxiliary boundaries are first detected by modified Canny operators and then the optic nerve head is detected and removed. The BM and the initial CSI segmentation are achieved by 3-D multiresolution graph search with a gradient-based cost. The CSI is further refined by adding a regional cost, calculated from the wavelet-based gradual intensity distance. The segmentation accuracy is quantitatively evaluated on 32 normal eyes by comparison with manual segmentation and by a reproducibility test. The mean choroid thickness difference from the manual segmentation is 19.16±4.32 μm, the mean Dice similarity coefficient is 93.17±1.30%, and the correlation coefficients between fovea-centered volumes obtained on repeated scans are larger than 0.97.
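
    As a much-simplified, two-dimensional stand-in for the gradient-cost graph search described above, the sketch below traces a single minimum-cost boundary across the columns of a B-scan by dynamic programming, with a smoothness limit on the row jump between neighbouring columns. The regional (wavelet-based) cost term and the full 3-D multiresolution search of the paper are omitted, and all parameters are assumptions.

    import numpy as np

    def trace_boundary(bscan, max_jump=2):
        # Cost of placing the boundary at row r, column c is the negative
        # vertical gradient (strong dark-to-bright transitions are cheap);
        # adjacent columns may differ by at most max_jump rows.
        cost = -np.gradient(bscan.astype(float), axis=0)
        rows, cols = cost.shape
        acc = cost.copy()                    # accumulated cost
        back = np.zeros((rows, cols), dtype=int)
        for c in range(1, cols):
            for r in range(rows):
                lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
                prev = acc[lo:hi, c - 1]
                k = int(np.argmin(prev))
                acc[r, c] = cost[r, c] + prev[k]
                back[r, c] = lo + k
        # Backtrack from the cheapest end point in the last column.
        path = np.empty(cols, dtype=int)
        path[-1] = int(np.argmin(acc[:, -1]))
        for c in range(cols - 1, 0, -1):
            path[c - 1] = back[path[c], c]
        return path   # row index of the detected boundary in each column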

  9. Molecular species identification of Central European ground beetles (Coleoptera: Carabidae) using nuclear rDNA expansion segments and DNA barcodes

    Directory of Open Access Journals (Sweden)

    Raupach Michael J

    2010-09-01

    Background: The identification of vast numbers of unknown organisms using DNA sequences becomes more and more important in ecological and biodiversity studies. In this context, a fragment of the mitochondrial cytochrome c oxidase I (COI) gene has been proposed as a standard DNA barcoding marker for the identification of organisms. Limitations of the COI barcoding approach can arise from its single-locus identification system, the effect of introgression events, incomplete lineage sorting, numts, heteroplasmy and maternal inheritance of intracellular endosymbionts. Consequently, the analysis of a supplementary nuclear marker system could be advantageous. Results: We tested the effectiveness of the COI barcoding region and of three nuclear ribosomal expansion segments in discriminating ground beetles of Central Europe, a diverse and well-studied invertebrate taxon. As nuclear markers we determined the 18S rDNA: V4, 18S rDNA: V7 and 28S rDNA: D3 expansion segments for 344 specimens of 75 species. Seventy-three species (97% of the analysed species) could be accurately identified using COI, while the combined approach of all three nuclear markers provided resolution among 71 (95%) of the studied Carabidae. Conclusion: Our results confirm that the analysed nuclear ribosomal expansion segments in combination constitute a valuable and efficient supplement to classical DNA barcoding to avoid potential pitfalls when only mitochondrial data are being used. We also demonstrate the high potential of COI barcodes for the identification of even closely related carabid species.

  10. MONTE CARLO SIMULATION FOR MODELING THE EFFECT OF GROUND SEGMENT LOCATION ON IN-ORBIT RESPONSIVENESS OF LEO SUNSYNCHRONOUS SATELLITES

    Institute of Scientific and Technical Information of China (English)

    M. Navabi; Hossein Bonyan Khamseh

    2011-01-01

    Responsiveness is a challenge for space systems to sustain competitive advantage over alternate non-spaceborne technologies. For a satellite in its operational orbit, in-orbit responsiveness is defined as the capability of the satellite to respond to a given demand in a timely manner. In this paper, it is shown that the Average Wait Time (AWT) to pick up user demand from the ground segment is the appropriate metric to evaluate the effect of ground segment location on in-orbit responsiveness of Low Earth Orbit (LEO) sunsynchronous satellites. This metric depends on the pattern of ground segment access to the satellite and the distribution of user demands in the time domain. A mathematical model is presented to determine the pattern of ground segment access to the satellite, and the concept of the cumulative distribution function is used to simulate the distribution of user demands for markets with different total demand scenarios. Monte Carlo simulations are employed to take account of uncertainty in the distribution and total volume of user demands. Sampling error and standard deviation are used to ensure validity of the AWT metric obtained from the Monte Carlo simulations. Incorporation of the proposed metric in the ground segment site location process results in more responsive satellite systems which, in turn, lead to greater customer satisfaction levels and attractiveness of spaceborne systems for different applications. Finally, simulation results for a case study are presented.
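
    A bare-bones Monte-Carlo estimate of the AWT metric is sketched below: demands are drawn over a day and each one waits until the next ground-station contact at which it can be picked up. The contact times and the uniform demand distribution in the example are hypothetical; the paper additionally derives the access pattern from orbital geometry and samples total demand volumes.

    import numpy as np

    def average_wait_time(pass_times_min, n_demands=200, n_runs=5000, seed=2):
        # pass_times_min: minutes within a 1440-min day at which the ground
        # segment can hand demands over to the satellite.
        rng = np.random.default_rng(seed)
        passes = np.sort(np.asarray(pass_times_min, dtype=float))
        awt = np.empty(n_runs)
        for i in range(n_runs):
            demands = rng.uniform(0.0, 1440.0, n_demands)
            idx = np.searchsorted(passes, demands)
            # Wait until the next pass; demands arriving after the last pass
            # wrap around to the first pass of the next day.
            wait = np.where(idx < passes.size,
                            passes[np.minimum(idx, passes.size - 1)] - demands,
                            passes[0] + 1440.0 - demands)
            awt[i] = wait.mean()
        return awt.mean(), awt.std()

    # e.g. four contacts per day for a hypothetical station:
    print(average_wait_time([120, 220, 840, 940]))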

  11. Steerable Space Fed Lens Array for Low-Cost Adaptive Ground Station Applications

    Science.gov (United States)

    Lee, Richard Q.; Popovic, Zoya; Rondineau, Sebastien; Miranda, Felix A.

    2007-01-01

    The Space Fed Lens Array (SFLA) is an alternative to a phased array antenna that replaces large numbers of expensive solid-state phase shifters with a single spatial feed network. The SFLA can be used for multi-beam applications, where multiple independent beams are generated simultaneously with a single antenna aperture. Unlike phased array antennas, where feed loss increases with array size, feed loss in a lens array with more than 50 elements is nearly independent of the number of elements, a desirable feature for large apertures. In addition, the SFLA has lower cost than a phased array at the expense of total volume and complete beam continuity. For ground station applications, neither of these tradeoff parameters is important, and both can thus be exploited in order to lower the cost of the ground station. In this paper, we report the development and demonstration of a 952-element beam-steerable SFLA intended for use as a low-cost ground station for communicating with and tracking a low Earth orbiting satellite. The dynamic beam steering is achieved by switching to different feed positions of the SFLA via a beam controller.

  12. Space and ground segment performance of the FORMOSAT-3/COSMIC mission: four years in orbit

    Directory of Open Access Journals (Sweden)

    C.-J. Fong

    2011-01-01

    The FORMOSAT-3/COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) mission, consisting of six Low-Earth-Orbit (LEO) satellites, is the world's first demonstration constellation using radio occultation signals from Global Positioning System (GPS) satellites. The radio occultation signals are retrieved in near real time for global weather/climate monitoring, numerical weather prediction, and space weather research. The mission has processed on average 1400 to 1800 high-quality atmospheric sounding profiles per day. The atmospheric radio occultation soundings are assimilated into operational numerical weather prediction models for global weather prediction, including typhoon/hurricane/cyclone forecasts. The radio occultation data have shown a positive impact on weather predictions at many national weather forecast centers. A proposed follow-on mission transitions the program from the current experimental research system to a significantly improved real-time operational system, which will reliably provide 8000 radio occultation soundings per day. The follow-on mission as planned will consist of 12 satellites with a data latency of 45 min, which will provide greatly enhanced opportunities for operational forecasts and scientific research. This paper addresses the FORMOSAT-3/COSMIC system and mission overview, the spacecraft and ground system performance after four years in orbit, the lessons learned from the technical challenges encountered, and the expected design improvements for the new spacecraft and ground system.

  13. An Efficient Optical Observation Ground Network is the Fundamental basis for any Space Based Debris Observation Segment

    Science.gov (United States)

    Cibin, L.; Chiarini, M.; Annoni, G.; Milani, A.; Bernardi, F.; Dimare, L.; Valsecchi, G.; Rossi, A.; Ragazzoni, R.; Salinari, P.

    2013-08-01

    A strongly debated matter in the SSA community concerns the observation of space debris from space [1]. This topic has been studied on a preliminary basis by our team for the LEO, MEO and GEO orbital belts, leading to a fundamental conclusion: to provide, in a cost-to-performance perspective, a functionality unavailable from the ground, any space-based system must operate in tight collaboration with an efficient optical ground observation network. In this work, the different functionalities that can be implemented with this approach are analysed for each orbital belt, highlighting the achievable targets in terms of population size as a function of the observed orbits. Further, a preliminary definition of the most interesting mission scenarios is presented, together with considerations and assessments of the observation strategy and payload (P/L) characteristics.

  14. A comparison of two commercial volumetry software programs in the analysis of pulmonary ground-glass nodules: Segmentation capability and measurement accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyung Jin; Park, Chang Min; Lee, Sang Min; Lee, Hyun Joo; Goo, Jin Mo [Dept. of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul (Korea, Republic of)

    2013-08-15

    To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs.

  15. A 20 GHz low noise, low cost receiver for digital satellite communication system, ground terminal applications

    Science.gov (United States)

    Allen, Glen

    1988-01-01

    A 45-month effort for the development of a 20 GHz, low-noise, low-cost receiver for digital satellite communication system ground terminal applications is discussed. Six proof-of-concept receivers were built in two lots of three each. Performance was generally consistent between the two lots. Except for the overall noise figure, parameters were within or very close to specification. While the noise figure was specified as 3.5 dB, typical performance was measured at 3.0 to 5.5 dB over the full temperature range of minus 30 C to plus 75 C.

  16. 76 FR 53377 - Cost Accounting Standards; Allocation of Home Office Expenses to Segments

    Science.gov (United States)

    2011-08-26

    ... BUDGET Office of Federal Procurement Policy 48 CFR Part 9904 Cost Accounting Standards; Allocation of... Procurement Policy (OFPP), Cost Accounting Standards Board (Board). ACTION: Notice of Discontinuation of Rulemaking. SUMMARY: The Office of Federal Procurement Policy (OFPP), Cost Accounting Standards (CAS)...

  17. Concept of ground facilities and the analyses of the factors for cost estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J. Y.; Choi, H. J.; Choi, J. W.; Kim, S. K.; Cho, D. K

    2007-09-15

    The geologic disposal of spent fuel generated by nuclear power plants is the only way to protect human beings and the surrounding environment, now and in the future. Direct disposal of the spent fuel from nuclear power plants is considered, and a Korean Reference HLW disposal System (KRS) suitable for representative Korean geological conditions has been developed. In this study, the concept of the spent fuel encapsulation process, as a key part of the above-ground facilities for deep geological disposal, was established. To do this, the design requirements, such as the functions and the spent fuel accumulations, were reviewed, and the design principles and bases were established. Based on these requirements and bases, the encapsulation process, from receiving spent fuel from nuclear power plants to transferring canisters into the underground repository, was established. A graphical simulation of the above-ground facility, based on the KRS design concept and spent nuclear fuel disposal scenarios, showed that an appropriate process had been defined for the facility design concept, while further improvement of the facility through actual demonstration tests is still required. Finally, based on the concept of the above-ground facilities for the Korean Reference HLW disposal System, the factors for cost estimation were analysed.

  18. How Perception Guides Action: Figure-Ground Segmentation Modulates Integration of Context Features into S-R Episodes.

    Science.gov (United States)

    Frings, Christian; Rothermund, Klaus

    2017-03-23

    Perception and action are closely related. Responses are assumed to be represented in terms of their perceptual effects, allowing direct links between action and perception. In this regard, the integration of features of stimuli (S) and responses (R) into S-R bindings is a key mechanism for action control. Previous research focused on the integration of object features with response features while neglecting the context in which an object is perceived. In 3 experiments, we analyzed whether contextual features can also become integrated into S-R episodes. The data showed that a fundamental principle of visual perception, figure-ground segmentation, modulates the binding of contextual features. Only features belonging to the figure region of a context but not features forming the background were integrated with responses into S-R episodes, retrieval of which later on had an impact upon behavior. Our findings suggest that perception guides the selection of context features for integration with responses into S-R episodes. Results of our study have wide-ranging implications for an understanding of context effects in learning and behavior.

  19. Gebiss: an ImageJ plugin for the specification of ground truth and the performance evaluation of 3d segmentation algorithms

    Directory of Open Access Journals (Sweden)

    Yee Kwo

    2011-06-01

    Background: Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissues, cells or subcellular compartments. Depending on the degree of user interaction, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches. Scenarios include the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods. Results: We have developed Gebiss, an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation. Conclusions: We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss/.

  20. Comprehensive Cost Minimization in Distribution Networks Using Segmented-time Feeder Reconfiguration and Reactive Power Control of Distributed Generators

    DEFF Research Database (Denmark)

    Chen, Shuheng; Hu, Weihao; Chen, Zhe

    2016-01-01

    In this paper, an efficient methodology is proposed to deal with the segmented-time reconfiguration problem of distribution networks coupled with segmented-time reactive power control of distributed generators. The target is to find the optimal dispatching schedule of all controllable switches...... and distributed generators’ reactive powers in order to minimize comprehensive cost. Corresponding constraints, including voltage profile, maximum allowable daily switching operation numbers (MADSON), reactive power limits, and so on, are considered. The strategy of grouping branches is used to simplify...... (FAHPSO) is implemented in VC++ 6.0 programming language. A modified version of the typical 70-node distribution network and several real distribution networks are used to test the performance of the proposed method. Numerical results show that the proposed methodology is an efficient method for comprehensive...

  1. Market segmentation and industry overcapacity considering input resources and environmental costs through the lens of governmental intervention.

    Science.gov (United States)

    Jiang, Zhou; Jin, Peizhen; Mishra, Nishikant; Song, Malin

    2017-07-25

    The problems with China's regional industrial overcapacity are often influenced by local governments. This study constructs a framework that includes resource and environmental costs to analyze overcapacity, using the non-radial directional distance function and the price method to measure industrial capacity utilization and market segmentation in 29 provinces of China from 2002 to 2014. The empirical analysis with the spatial panel econometric model shows that (1) industrial capacity utilization in China's provinces has a ladder-type distribution that gradually decreases from east to west, and there is severe overcapacity in the traditional heavy industry areas; (2) local government intervention has serious negative effects on regional industrial capacity utilization, and factor market segmentation inhibits the regional industrial utilization rate more significantly than commodity market segmentation does; (3) economic openness improves the utilization rate of industrial capacity, while the internet penetration rate and regional environmental management investment have no significant impact; and (4) a higher degree of openness and active private economic development have a positive spatial spillover effect, while there is a significant negative spatial spillover effect from local government intervention and industrial structure sophistication. This paper includes the impact of resources and the environment in overcapacity evaluations, which should guide sustainable development in emerging economies.

  2. Constrained low-cost GPS/INS filter with encoder bias estimation for ground vehicles' applications

    Science.gov (United States)

    Abdel-Hafez, Mamoun F.; Saadeddin, Kamal; Amin Jarrah, Mohammad

    2015-06-01

    In this paper, a constrained, fault-tolerant, low-cost navigation system is proposed for ground vehicle applications. The system is designed to provide a vehicle navigation solution at 50 Hz by fusing the measurements of an inertial measurement unit (IMU), a global positioning system (GPS) receiver, and the velocity measurement from wheel encoders. A high-integrity estimation filter is proposed to obtain a high-accuracy state estimate. The filter utilizes vehicle velocity constraint measurements to enhance the estimation accuracy. However, if the velocity measurement of the encoder is biased, the accuracy of the estimate is degraded. Therefore, a noise estimation algorithm is proposed to estimate a possible bias in the velocity measurement of the encoder. Experimental tests, with simulated biases on the encoder's readings, are conducted and the obtained results are presented. The experimental results show the enhancement in estimation accuracy when the simulated bias is estimated using the proposed method.
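
    A drastically reduced, one-dimensional illustration of the bias-estimation idea is given below: a linear Kalman filter whose state is augmented with the encoder bias, so that fusing GPS position with the biased encoder speed makes the bias observable. This is a generic sketch with made-up noise parameters and a simplified process model, not the constrained GPS/INS filter of the paper.

    import numpy as np

    def run_filter(gps_pos, enc_vel, dt=0.02,
                   q_acc=0.5, q_bias=1e-4, r_gps=4.0, r_enc=0.04):
        # State x = [position, velocity, encoder bias]; the encoder is
        # modelled as measuring velocity plus a slowly varying bias.
        F = np.array([[1.0, dt, 0.0],
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
        Q = np.diag([0.25 * dt**4 * q_acc, dt**2 * q_acc, q_bias])  # simplified
        H_gps = np.array([[1.0, 0.0, 0.0]])
        H_enc = np.array([[0.0, 1.0, 1.0]])
        x, P = np.zeros(3), np.eye(3) * 10.0
        history = []
        for z_gps, z_enc in zip(gps_pos, enc_vel):
            x = F @ x                                   # predict
            P = F @ P @ F.T + Q
            for H, z, r in ((H_gps, z_gps, r_gps), (H_enc, z_enc, r_enc)):
                if z is None:                           # measurement unavailable
                    continue
                S = H @ P @ H.T + r                     # innovation covariance
                K = P @ H.T / S                         # Kalman gain
                x = x + (K * (z - H @ x)).ravel()
                P = (np.eye(3) - K @ H) @ P
            history.append(x.copy())
        return np.array(history)   # columns: position, velocity, estimated bias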

  3. A simple, low-cost method to monitor duration of ground water pumping.

    Science.gov (United States)

    Massuel, S; Perrin, J; Wajid, M; Mascre, C; Dewandel, B

    2009-01-01

    Monitoring ground water withdrawals for agriculture is a difficult task, yet agricultural development frequently leads to overexploitation of aquifers. To address this problem, sustainable management based on knowledge of actual water use is required. This paper introduces a simple and inexpensive direct method to determine the duration of pumping of a well by measuring the temperature of its water outlet pipe. A pumping phase is characterized by a steady temperature value close to the ground water temperature. The method involves recording the temperature of the outlet pipe and identifying the different stages of pumping. It is based on the use of the low-cost and small-size Thermochron iButton temperature logger and can be applied to any well, provided that a water outlet pipe is accessible. The temperature time series are analyzed to determine the duration of pumping through manual and automatic posttreatments. The method was tested and applied in South India for irrigation wells using electricity-powered pumps. The duration of pumping obtained by the iButton method is fully consistent with the duration of power supply (1.5% difference).
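
    The post-processing step lends itself to a very small script. The sketch below illustrates the basic idea on a synthetic temperature log: samples whose outlet-pipe temperature stays within a tolerance of the (stable) ground water temperature are counted as pumping time. The tolerance, sampling interval and synthetic data are assumptions for illustration, not the values used in the paper.

```python
# Toy post-processing of an outlet-pipe temperature log: samples close to the
# (stable) ground water temperature are counted as pumping time. Tolerance,
# sampling interval and the synthetic data are illustrative assumptions.
import numpy as np

def pumping_hours(temps_c, groundwater_temp_c, tol_c=1.0, sample_minutes=10):
    """Estimated pumping duration (hours) from a temperature time series."""
    temps = np.asarray(temps_c, dtype=float)
    pumping = np.abs(temps - groundwater_temp_c) <= tol_c
    return pumping.sum() * sample_minutes / 60.0

# Synthetic day of 10-minute samples: pump running from 06:00 to 10:00.
t = np.arange(0, 24 * 60, 10)                                        # minutes
temps = 18.0 + 5.0 * np.sin(2 * np.pi * (t - 9 * 60) / (24 * 60))    # ambient pipe
temps[(t >= 6 * 60) & (t < 10 * 60)] = 27.0                          # ground water temp
print(pumping_hours(temps, groundwater_temp_c=27.0))                 # -> 4.0
```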

  4. Electromagnetic simulators for Ground Penetrating Radar applications developed in COST Action TU1208

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonios; Warren, Craig; Antonijevic, Sinisa; Doric, Vicko; Poljak, Dragan

    2017-04-01

    Founded in 1971, COST (European COoperation in Science and Technology) is the first and widest European framework for the transnational coordination of research activities. It operates through Actions, science and technology networks with a duration of four years. The main objective of the COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (4 April 2013 - 3 October 2017) is to exchange and increase knowledge and experience on Ground-Penetrating Radar (GPR) techniques in civil engineering, whilst promoting in Europe a wider use of this technique. Research activities carried out in TU1208 include all aspects of the GPR technology and methodology: design, realization and testing of radar systems and antennas; development and testing of surveying procedures for the monitoring and inspection of structures; integration of GPR with other non-destructive testing approaches; advancement of electromagnetic-modelling, inversion and data-processing techniques for radargram analysis and interpretation. GPR radargrams often have no resemblance to the subsurface or structures over which the profiles were recorded. Various factors, including the innate design of the survey equipment and the complexity of electromagnetic propagation in composite scenarios, can disguise complex structures recorded on reflection profiles. Electromagnetic simulators can help to understand how target structures get translated into radargrams. They can show the limitations of GPR technique, highlight its capabilities, and support the user in understanding where and in what environment GPR can be effectively used. Furthermore, electromagnetic modelling can aid the choice of the most proper GPR equipment for a survey, facilitate the interpretation of complex datasets and be used for the design of new antennas. Electromagnetic simulators can be employed to produce synthetic radargrams with the purposes of testing new data-processing, imaging and inversion algorithms, or assess

  5. A compact and low cost TT&C S-Band Ground Station for low orbit satellites

    Science.gov (United States)

    Pacola, Luiz C.; Ferrari, Carlos A.

    The Instituto Nacional de Pesquisas Espaciais (INPE) S-Band Ground Station for satellite control and monitoring is revised considering current software and hardware technology. A Ground Station concept for low orbit satellites is presented. The front-end uses a small antenna and low-cost associated equipment without loss of performance. The baseband equipment is highly standardized and developed on an IBM-compatible personal computer, making extensive use of Digital Signal Processing (DSP). A link budget for ranging, telecommand and telemetry is also presented.

  6. A Cost-Effectiveness Analysis of Clopidogrel for Patients with Non-ST-Segment Elevation Acute Coronary Syndrome in China.

    Science.gov (United States)

    Cui, Ming; Tu, Chen Chen; Chen, Er Zhen; Wang, Xiao Li; Tan, Seng Chuen; Chen, Can

    2016-09-01

    There are a number of economic evaluation studies of clopidogrel for patients with non-ST-segment elevation acute coronary syndrome (NSTEACS) published from the perspective of multiple countries in recent years. However, relevant research is quite limited in China. We aimed to estimate the long-term cost effectiveness for up to 1-year treatment with clopidogrel plus acetylsalicylic acid (ASA) versus ASA alone for NSTEACS from the public payer perspective in China. This analysis used a Markov model to simulate a cohort of patients for quality-adjusted life years (QALYs) gained and incremental cost over a lifetime horizon. Based on the primary event rates, adherence rate, and mortality derived from the CURE trial, hazard functions obtained from published literature were used to extrapolate the overall survival to the lifetime horizon. Resource utilization, hospitalization, medication costs, and utility values were estimated from official reports, published literature, and analysis of patient-level insurance data in China. To assess the impact of parameter uncertainty on the cost-effectiveness results, one-way sensitivity analyses were undertaken for key parameters, and probabilistic sensitivity analysis (PSA) was conducted using Monte Carlo simulation. The therapy of clopidogrel plus ASA is a cost-effective option in comparison with ASA alone for the treatment of NSTEACS in China, leading to 0.0548 life years (LYs) and 0.0518 QALYs gained per patient. From the public payer perspective in China, clopidogrel plus ASA is associated with an incremental cost of 43,340 China Yuan (CNY) per QALY gained and 41,030 CNY per LY gained (discounting at 3.5% per year). PSA results demonstrated that 88% of simulations were lower than the cost-effectiveness threshold of 150,721 CNY per QALY gained. Based on the one-way sensitivity analysis, results are most sensitive to the price of clopidogrel, but remain well below this threshold. This analysis suggests that treatment with

  7. Energetic costs of parasitism in the Cape ground squirrel Xerus inauris

    Science.gov (United States)

    Scantlebury, M; Waterman, J.M; Hillegass, M; Speakman, J.R; Bennett, N.C

    2007-01-01

    Parasites have been suggested to influence many aspects of host behaviour. Some of these effects may be mediated via their impact on host energy budgets. This impact may include effects on both energy intake and absorption as well as components of expenditure, including resting metabolic rate (RMR) and activity (e.g. grooming). Despite their potential importance, the energy costs of parasitism have seldom been directly quantified in a field setting. Here we pharmacologically treated female Cape ground squirrels (Xerus inauris) with anti-parasite drugs and measured the change in body composition, the daily energy expenditure (DEE) using doubly labelled water, the RMR by respirometry and the proportions of time spent looking for food, feeding, moving and grooming. Post-treatment animals gained an average 19 g of fat or approximately 25 kJ d−1. DEE averaged 382 kJ d−1 prior to and 375 kJ d−1 post treatment (p>0.05). RMR averaged 174 kJ d−1 prior to and 217 kJ d−1 post treatment (p<0.009). Post-treatment animals spent less time looking for food and grooming, but more time on feeding. A primary impact of infection by parasites could be suppression of feeding behaviour and, hence, total available energy resources. The significant elevation of RMR after treatment was unexpected. One explanation might be that parasites produce metabolic by-products that suppress RMR. Overall, these findings suggest that impacts of parasites on host energy budgets are complex and are not easily explained by simple effects such as stimulation of a costly immune response. There is currently no broadly generalizable framework available for predicting the energetic consequences of parasitic infection. PMID:17613450

  8. Decreasing costs of ground data processing system development using a software product line

    Science.gov (United States)

    Chaffin, Brian

    2005-01-01

    In this paper, I describe software product lines and why a Ground Data Processing System should use one. I also describe how to develop a software product line, using examples from an imaginary Ground Data Processing System.

  9. Civil Engineering Applications of Ground Penetrating Radar: Research Perspectives in COST Action TU1208

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Slob, Evert; Tosti, Fabio

    2013-04-01

    can be used by GPR operators to identify the signatures generated by uncommon targets or by composite structures. Repeated evaluations of the electromagnetic field scattered by known targets can be performed by a forward solver, in order to estimate - through comparison with measured data - the physics and geometry of the region investigated by the GPR. It is possible to identify three main areas, in the GPR field, that have to be addressed in order to promote the use of this technology in the civil engineering. These are: a) increase of the system sensitivity to enable the usability in a wider range of conditions; b) research novel data processing algorithms/analysis tools for the interpretation of GPR results; c) contribute to the development of new standards and guidelines and to training of end users, that will also help to increase the awareness of operators. In this framework, the COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar", proposed by Lara Pajewski, "Roma Tre" University, Rome, Italy, has been approved in November 2012 and is going to start in April 2013. It is a 4-years ambitious project already involving 17 European Countries (AT, BE, CH, CZ, DE, EL, ES, FI, FR, HR, IT, NL, NO, PL, PT, TR, UK), as well as Australia and U.S.A. The project will be developed within the frame of a unique approach based on the integrated contribution of University researchers, software developers, geophysics experts, Non-Destructive Testing equipment designers and producers, end users from private companies and public agencies. The main objective of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst promoting the effective use of this safe and non-destructive technique in the monitoring of systems. In this interdisciplinary Action, advantages and limitations of GPR will be highlighted, leading to the identification of gaps in knowledge and technology

  10. Fast Method for Segmenting Indoor Obstacle with Ground

    Institute of Scientific and Technical Information of China (English)

    卜燕; 王姮; 张华; 刘桂华; 李志雄

    2016-01-01

    The ground is usually used to provide environmental information for map creation and navigation for indoor mobile robots because it contains rich information. Light reflection causes strong interference, and the ground is difficult to distinguish in environments with similar colors, so high-intensity light reflection areas are defined as “defects” to be detected. Filling each defect with information from its periphery effectively enhances the color uniformity of the ground. Color segmentation is then performed using the joint HSV density and, by exploiting the regional characteristics of the ground position, an accurate segmentation of obstacles from the ground is obtained. Experiments show that the proposed approach is computationally simple, widely applicable and highly accurate, and makes it easy for a robot to implement obstacle avoidance in real time.
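
    The abstract describes the pipeline only qualitatively. The OpenCV sketch below is one plausible reading of it: very bright, low-saturation pixels are treated as reflection “defects” and filled from their surroundings by inpainting, the filled image is thresholded in HSV around an assumed ground colour, and a bottom-of-frame position prior separates ground from obstacles. All thresholds, the colour range and the file name are assumptions, not the authors' parameters.

```python
# One plausible reading of the pipeline (thresholds and colour range are
# assumptions): bright, low-saturation pixels are treated as reflection
# "defects" and inpainted from their periphery, then the ground is segmented
# by an HSV threshold and a bottom-of-frame position prior.
import cv2
import numpy as np

def segment_ground(bgr, ground_hsv_lo=(0, 0, 60), ground_hsv_hi=(60, 80, 200)):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    _, s, v = cv2.split(hsv)

    # "Defects": high brightness with low saturation (strong light reflection).
    defect = ((v > 230) & (s < 40)).astype(np.uint8) * 255

    # Fill the defects from their surroundings so the ground colour is uniform.
    filled = cv2.inpaint(bgr, defect, 5, cv2.INPAINT_TELEA)

    # Colour segmentation in HSV around the assumed ground colour range.
    hsv_filled = cv2.cvtColor(filled, cv2.COLOR_BGR2HSV)
    ground = cv2.inRange(hsv_filled, np.array(ground_hsv_lo), np.array(ground_hsv_hi))

    # Position prior: keep only components that touch the bottom image border,
    # since the floor normally enters the frame from below.
    _, labels = cv2.connectedComponents(ground)
    bottom = set(labels[-1, :]) - {0}
    mask = np.isin(labels, list(bottom)).astype(np.uint8) * 255
    return mask, cv2.bitwise_not(mask)        # ground mask, obstacle mask

# Usage (hypothetical image file):
# ground_mask, obstacle_mask = segment_ground(cv2.imread("corridor.png"))
```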

  11. GPS and InSAR observations of ground deformation in the northern Malawi (Nyasa) rift from the SEGMeNT project

    Science.gov (United States)

    Durkin, W. J., IV; Pritchard, M. E.; Elliott, J.; Zheng, W.; Saria, E.; Ntambila, D.; Chindandali, P. R. N.; Nooner, S. L.; Henderson, S. T.

    2016-12-01

    We describe new ground deformation observations from the SEGMeNT (Study of Extension and maGmatism in Malawi aNd Tanzania) project spanning the northern sector of the Malawi (Nyasa) rift, which is one of the few places in the world suitable for a comprehensive study of early rifting processes. We installed 12 continuous GPS sensors spanning 700 km across the rift, including Tanzania, Malawi, and Zambia, to measure the width and gradient within the actively deforming zone. Most of these stations have 3 or more years of data now, although a few have shorter time series because of station vandalism. Spanning a smaller area, but with higher spatial resolution, we have created a time series of ground deformation using 150 interferograms from the Japanese ALOS-1 satellite spanning June 2007 to December 2010. We also present interferograms from other satellites including ERS, Envisat, and Sentinel spanning shorter time intervals. The observations include the 2009-2010 Karonga earthquake sequence and associated postseismic deformation as seen by multiple independent satellite lines-of-sight, which we model using a fault geometry determined from relocated aftershocks recorded by a local seismic array. We have not found any ground deformation at the Rungwe volcanic province from InSAR within our detection threshold (~2 cm/yr), but we have observed localized seasonal ground movements exceeding 8 cm that are associated with subsidence in the dry season and uplift at the beginning of the wet season.

  12. Cost-effective monitoring of ground motion by joint use of a single-frequency GPS and a MEMS accelerometer

    Science.gov (United States)

    Tu, Rui; Wang, Rongjiang; Ge, Maorong; Walter, Thomas R.; Ramatschi, Markus; Milkereit, Claus; Bindi, Dino; Dahm, Torsten

    2014-05-01

    Real-time detection and precise estimation of strong ground motion are crucial for rapid assessment and early warning of geohazards such as earthquakes, landslides, and volcanic activity. This challenging task can be accomplished by combining GPS and accelerometer measurements because of their complementary capabilities to resolve broadband ground motion signals. However, for implementing an operational monitoring network of such joint measurement systems, cost-effective techniques need to be developed and rigorously tested. We propose a new approach for joint processing of single-frequency GPS and MEMS (micro-electro-mechanical systems) accelerometer data in real time. To demonstrate the performance of our method, we describe results from outdoor experiments under controlled conditions. For validation, we analysed dual-frequency GPS data and images recorded by a video camera. The results of the different sensors agree very well, suggesting that real-time broadband information of ground motion can be provided by using single-frequency GPS and MEMS accelerometers. Reference: Tu, R., R. Wang, M. Ge, T. R. Walter, M. Ramatschi, C. Milkereit, D. Bindi, and T. Dahm (2013), Cost-effective monitoring of ground motion related to earthquakes, landslides, or volcanic activity by joint use of a single-frequency GPS and a MEMS accelerometer, Geophysical Research Letters, 40, 3825-3829, doi:10.1002/grl.50653.
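
    As a simplified illustration of why the two sensors are complementary (and not a reproduction of the authors' real-time processing), the sketch below blends a noisy displacement record, standing in for single-frequency GPS, with a double-integrated acceleration record using a first-order complementary filter: the GPS supplies the low-frequency motion and the accelerometer the high-frequency part. Sampling rate, crossover time constant and the synthetic signals are assumptions.

```python
# Simplified complementary-filter illustration (not the authors' algorithm):
# GPS supplies the low-frequency displacement, the accelerometer the
# high-frequency part. Rates, crossover constant and signals are assumptions.
import numpy as np

def complementary_displacement(gps_disp, accel, dt, tau=1.0):
    """Blend GPS displacement (m) and acceleration (m/s^2) sampled at dt."""
    alpha = tau / (tau + dt)                 # crossover near 1/(2*pi*tau) Hz
    vel, disp, fused = 0.0, gps_disp[0], []
    for d_gps, a in zip(gps_disp, accel):
        vel += a * dt                        # integrate acceleration once
        disp = alpha * (disp + vel * dt) + (1 - alpha) * d_gps
        fused.append(disp)
    return np.array(fused)

# Synthetic ground motion: a 2 Hz oscillation plus a slow drift.
dt = 0.01
t = np.arange(0, 20, dt)
true = 0.02 * (1 - np.cos(2 * np.pi * 2 * t)) + 0.001 * t
accel = 0.02 * (2 * np.pi * 2) ** 2 * np.cos(2 * np.pi * 2 * t)   # 2nd derivative of the oscillation
gps = true + np.random.default_rng(0).normal(0, 0.01, t.size)     # noisy displacement record
est = complementary_displacement(gps, accel, dt)
print("rms error (m):", round(float(np.sqrt(np.mean((est - true) ** 2))), 4))
```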

  13. Remediation of uranium-contaminated soil using the Segmented Gate System and containerized vat leaching techniques: a cost effectiveness study

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, M.; Booth, S.R.

    1996-09-01

    Because it is difficult to characterize heterogeneously contaminated soils in detail and to excavate such soils precisely using heavy equipment, it is common for large quantities of uncontaminated soil to be removed during excavation of contaminated sites. Until now, volume reduction of radioactively contaminated soil depended upon manual screening and analysis of samples, a costly and impractical approach, particularly with large volumes of heterogeneously contaminated soil. The baseline approach for the remediation of soils containing radioactive waste is excavation, pretreatment, containerization, and disposal at a federally permitted landfill. However, disposal of low-level radioactive waste is expensive and storage capacity is limited. ThermoNuclean's Segmented Gate System (SGS) removes only the radioactively contaminated soil, in turn greatly reducing the volume of soils that requires disposal. After processing using the SGS, the fraction of contaminated soil is processed using the containerized vat leaching (CVL) system developed at LANL. Uranium is leached out of the soil in solution. The uranium is recovered with an ion exchange resin, leaving only a small volume of liquid low-level waste requiring disposal. The reclaimed soil can be returned to its original location after treatment with CVL.

  14. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

    Herd, Dean Wyatte, Kenneth Latimer, and Randy O'Reilly (Computational Cognitive Neuroscience Lab, Department of Psychology, University of Colorado at...)

    ... figure and ground, the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis

  15. Denoising and Back Ground Clutter of Video Sequence using Adaptive Gaussian Mixture Model Based Segmentation for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    Shanmugapriya. K

    2014-01-01

    Full Text Available The human action recognition system first gathers images by simply querying the name of the action on a web image search engine such as Google or Yahoo. Based on the assumption that the set of retrieved images contains relevant images of the queried action, a dataset of action images is constructed in an incremental manner. This yields a large image set, which includes images of actions taken from multiple viewpoints in a range of environments, performed by people who have varying body proportions and different clothing. The images mostly present the “key poses”, since these images try to convey the action with a single pose. Existing systems support this by first using an incremental image retrieval procedure to collect and clean up the training set needed to build the human pose classifiers. There are challenges that come at the expense of this broad and representative data. First, the retrieved images are very noisy, since the Web is very diverse. Second, detecting and estimating the pose of humans in still images is more difficult than in videos, partly due to background clutter and the lack of a foreground mask. In videos, foreground segmentation can exploit motion cues to great benefit; in still images, the only cue at hand is appearance information, so the model must address the various challenges associated with different forms of appearance. Therefore, for robust separation, this work proposes a segmentation algorithm based on Gaussian mixture models that is adaptive to light illumination, shadow and white balance. The algorithm processes the video with or without noise and sets up adaptive background models based on its characteristics; it is a very effective technique for background modeling, classifying each pixel of a video frame as either background or foreground based on a probability distribution.
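
    As a concrete stand-in for the kind of adaptive per-pixel Gaussian mixture background model described above, the sketch below uses OpenCV's MOG2 background subtractor, which maintains such a mixture for every pixel and labels shadows separately; it is not the authors' implementation, and the parameters and file name are assumptions.

```python
# OpenCV's MOG2 adaptive Gaussian-mixture background subtractor as a
# stand-in for the per-pixel GMM model described above. Parameters and the
# video file name are assumptions.
import cv2

def foreground_masks(video_path, history=300, var_threshold=16):
    cap = cv2.VideoCapture(video_path)
    # detectShadows=True makes the model label shadow pixels separately (127).
    mog2 = cv2.createBackgroundSubtractorMOG2(history=history,
                                              varThreshold=var_threshold,
                                              detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = mog2.apply(frame)                                    # 0 / 127 / 255
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadows
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove speckle
        yield frame, mask
    cap.release()

# Usage (hypothetical clip):
# for frame, fg in foreground_masks("action_clip.avi"):
#     cv2.imshow("foreground", fg); cv2.waitKey(1)
```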

  16. Survivability enhancement study for C/sup 3/I/BM (communications, command, control and intelligence/battle management) ground segments: Final report

    Energy Technology Data Exchange (ETDEWEB)

    1986-10-30

    This study involves a concept developed by the Fairchild Space Company which is directly applicable to the Strategic Defense Initiative (SDI) Program as well as other national security programs requiring reliable, secure and survivable telecommunications systems. The overall objective of this study program was to determine the feasibility of combining and integrating long-lived, compact, autonomous isotope power sources with fiber optic and other types of ground segments of the SDI communications, command, control and intelligence/battle management (C/sup 3/I/BM) system in order to significantly enhance the survivability of those critical systems, especially against the potential threats of electromagnetic pulse(s) (EMP) resulting from high altitude nuclear weapon explosion(s). 28 figs., 2 tabs.

  17. Adaptation of Dubins Paths for UAV Ground Obstacle Avoidance When Using a Low Cost On-Board GNSS Sensor

    Directory of Open Access Journals (Sweden)

    Ramūnas Kikutis

    2017-09-01

    Full Text Available Current research on Unmanned Aerial Vehicles (UAVs) shows a lot of interest in autonomous UAV navigation. This interest is mainly driven by the necessity to meet the rules and restrictions for small UAV flights that are issued by various international and national legal organizations. In order to lower these restrictions, new levels of automation and flight safety must be reached. In this paper, a new method for ground obstacle avoidance derived by using UAV navigation based on the Dubins paths algorithm is presented. The accuracy of the proposed method has been tested, and research results have been obtained by using Software-in-the-Loop (SITL) simulation and real UAV flights, with the measurements done with a low-cost Global Navigation Satellite System (GNSS) sensor. All tests were carried out in a three-dimensional space, but the height accuracy was not assessed. The GNSS navigation data for the ground obstacle avoidance algorithm is evaluated statistically.
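
    The abstract builds on Dubins paths without giving their geometry. As a minimal building block (not the paper's obstacle-avoidance adaptation), the sketch below computes the length of one Dubins word, LSL (left turn, straight, left turn), between two poses for a given minimum turning radius; a complete planner evaluates all six Dubins words and keeps the shortest collision-free one. The example poses and radius are made up.

```python
# Geometric building block only: length of the LSL (left-straight-left)
# Dubins word between two poses for a minimum turning radius rho. A full
# planner evaluates all six Dubins words and picks the shortest collision-free
# one; this sketch shows just one word and is not the authors' algorithm.
import math

def dubins_lsl_length(start, goal, rho):
    """start/goal = (x, y, heading_rad); returns path length or None."""
    x0, y0, th0 = start
    x1, y1, th1 = goal
    dx, dy = x1 - x0, y1 - y0
    d = math.hypot(dx, dy) / rho                 # normalised distance
    theta = math.atan2(dy, dx)
    alpha = (th0 - theta) % (2 * math.pi)
    beta = (th1 - theta) % (2 * math.pi)

    p_sq = 2 + d * d - 2 * math.cos(alpha - beta) \
           + 2 * d * (math.sin(alpha) - math.sin(beta))
    if p_sq < 0:
        return None                              # LSL word does not exist here
    tmp = math.atan2(math.cos(beta) - math.cos(alpha),
                     d + math.sin(alpha) - math.sin(beta))
    t = (-alpha + tmp) % (2 * math.pi)           # first left arc (radians)
    p = math.sqrt(p_sq)                          # straight segment (normalised)
    q = (beta - tmp) % (2 * math.pi)             # second left arc (radians)
    return (t + p + q) * rho

# Example: fly 100 m east and arrive heading north, with a 30 m turn radius.
print(dubins_lsl_length((0, 0, 0), (100, 0, math.pi / 2), rho=30))
```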

  18. Low-Cost Tracking Ground Terminal Designed to Use Cryogenically Cooled Electronics

    Science.gov (United States)

    Wald, Lawrence W.; Romanofsky, Robert R.; Warner, Joseph D.

    2000-01-01

    A computer-controlled, tracking ground terminal will be assembled at the NASA Glenn Research Center at Lewis Field to receive signals transmitted by Glenn's Direct Data Distribution (D3) payload planned for a shuttle flight in low Earth orbit. The terminal will enable direct data reception of up to two 622-megabits-per-second (Mbps) beams from the space-based, K-band (19.05-GHz) transmitting array at an end-user bit error rate of up to 10^-12. The ground terminal will include a 0.9-m-diameter receive-only Cassegrain reflector antenna with a corrugated feed horn incorporating a dual circularly polarized, K-band feed assembly mounted on a multiaxis, gimbaled tracking pedestal, as well as electronics to receive the downlink signals. The tracking system will acquire and automatically track the shuttle through the sky for all elevations greater than 20° above the horizon. The receiving electronics for the ground terminal consist of a six-pole microstrip bandpass filter, a three-stage monolithic microwave integrated circuit (MMIC) amplifier, and a Stirling cycle cryocooler (1 W at 80 K). The Stirling cycle cryocooler cools the front end of the receiver, also known as the low-noise amplifier (LNA), to about 77 K. Cryocooling the LNA significantly increases receiver performance, which is necessary so that it can use the antenna, which has an aperture of only 0.9 m. The following drawing illustrates the cryoterminal.

  19. Space and ground segment performance and lessons learned of the FORMOSAT-3/COSMIC mission: four years in orbit

    Directory of Open Access Journals (Sweden)

    C.-J. Fong

    2011-06-01

    Full Text Available The FORMOSAT-3/COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) Mission, consisting of six Low-Earth-Orbit (LEO) satellites, is the world's first demonstration constellation using radio occultation signals from Global Positioning System (GPS) satellites. The atmospheric profiles derived by processing radio occultation signals are retrieved in near real-time for global weather/climate monitoring, numerical weather prediction, and space weather research. The mission has processed, on average, 1400 to 1800 high-quality atmospheric sounding profiles per day. The atmospheric radio occultation data are assimilated into operational numerical weather prediction models for global weather prediction, including typhoon/hurricane/cyclone forecasts. The radio occultation data has shown a positive impact on weather predictions at many national weather forecast centers. A follow-on mission was proposed that transitions the current experimental research mission into a significantly improved real-time operational mission, which will reliably provide 8000 radio occultation soundings per day. The follow-on mission, as planned, will consist of 12 LEO satellites (compared to 6 satellites for the current mission) with a data latency requirement of 45 min (compared to 3 h for the current mission), which will provide greatly enhanced opportunities for operational forecasts and scientific research. This paper will address the FORMOSAT-3/COSMIC system and mission overview, the spacecraft and ground system performance after four years in orbit, the lessons learned from the encountered technical challenges and observations, and the expected design improvements for the spacecraft and ground system for FORMOSAT-7/COSMIC-2.

  20. COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar:" ongoing research activities and mid-term results

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Slob, Evert; Tosti, Fabio

    2015-04-01

    This work aims at presenting the ongoing activities and mid-term results of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar.' Almost three hundred experts are participating in the Action, from 28 COST Countries (Austria, Belgium, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Malta, Macedonia, The Netherlands, Norway, Poland, Portugal, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom), and from Albania, Armenia, Australia, Egypt, Hong Kong, Jordan, Israel, Philippines, Russia, Rwanda, Ukraine, and United States of America. In September 2014, TU1208 was praised among the running Actions as a 'COST Success Story' ('The Cities of Tomorrow: The Challenges of Horizon 2020,' September 17-19, 2014, Torino, IT - a COST strategic workshop on the development and needs of the European cities). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Moreover, the Action is oriented to the following specific objectives and expected deliverables: (i) coordinating European scientists to highlight problems, merits and limits of current GPR systems; (ii) developing innovative protocols and guidelines, which will be published in a handbook and constitute a basis for European standards, for an effective GPR application in civil-engineering tasks; safety, economic and financial criteria will be integrated within the protocols; (iii) integrating competences for the improvement and merging of electromagnetic scattering techniques and of data-processing techniques; this will lead to a novel freeware tool for the localization of buried objects

  1. Objective Performance Evaluation of Video Segmentation Algorithms with Ground-Truth

    Institute of Scientific and Technical Information of China (English)

    杨高波; 张兆扬

    2004-01-01

    While the development of particular video segmentation algorithms has attracted considerable research interest, relatively little effort has been devoted to providing a methodology for evaluating their performance. In this paper, we propose a methodology to objectively evaluate video segmentation algorithms with ground truth, based on computing the deviation of the segmentation results from the reference segmentation. Four different metrics, based respectively on classified pixels, edges, relative foreground area and relative position, are combined to address spatial accuracy. Temporal coherency is evaluated by utilizing the difference in spatial accuracy between successive frames. The experimental results show the feasibility of our approach. Moreover, it is computationally more efficient than previous methods. It can be applied to provide an offline ranking among different segmentation algorithms and to optimally set the parameters for a given algorithm.
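
    The metrics themselves are not given in the record, so the sketch below only illustrates two of the ingredients in a simplified form: a per-frame spatial accuracy based on misclassified pixels against a ground-truth mask, and a temporal-coherency score taken as the frame-to-frame change of that accuracy. The paper combines four spatial metrics (pixels, edges, relative foreground area, relative position); the normalisation and the toy masks below are assumptions.

```python
# Simplified ground-truth-based evaluation: per-frame spatial accuracy from
# misclassified pixels, and temporal (in)coherency as the mean frame-to-frame
# change of that accuracy. Only one of the paper's four spatial metrics is
# shown; the normalisation is an assumption.
import numpy as np

def spatial_accuracy(pred_mask, gt_mask):
    """1.0 = perfect; 0.0 = every pixel misclassified."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    return 1.0 - float(np.mean(pred ^ gt))

def temporal_incoherency(pred_masks, gt_masks):
    """Mean absolute change of spatial accuracy between successive frames."""
    acc = [spatial_accuracy(p, g) for p, g in zip(pred_masks, gt_masks)]
    return float(np.mean(np.abs(np.diff(acc)))) if len(acc) > 1 else 0.0

# Tiny example with two 4x4 frames; the second prediction misses the object.
gt = [np.zeros((4, 4), int), np.zeros((4, 4), int)]
gt[0][1:3, 1:3] = 1
gt[1][1:3, 2:4] = 1
pred = [gt[0].copy(), np.zeros((4, 4), int)]
print(spatial_accuracy(pred[0], gt[0]))     # 1.0 (perfect first frame)
print(temporal_incoherency(pred, gt))       # jump in accuracy -> poor coherency
```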

  2. COST Action TU1208 - Working Group 3 - Electromagnetic modelling, inversion, imaging and data-processing techniques for Ground Penetrating Radar

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonios; Sesnic, Silvestar; Randazzo, Andrea; Lambot, Sébastien; Benedetto, Francesco; Economou, Nikos

    2017-04-01

    This work aims at presenting the main results achieved by Working Group (WG) 3 "Electromagnetic methods for near-field scattering problems by buried structures; data processing techniques" of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.GPRadar.eu, www.cost.eu). The main objective of the Action, started in April 2013 and ending in October 2017, is to exchange and increase scientific-technical knowledge and experience of Ground Penetrating Radar (GPR) techniques in civil engineering, whilst promoting in Europe the effective use of this safe non-destructive technique. The Action involves more than 150 Institutions from 28 COST Countries, a Cooperating State, 6 Near Neighbour Countries and 6 International Partner Countries. Among the most interesting achievements of WG3, we wish to mention the following ones: (i) A new open-source version of the finite-difference time-domain simulator gprMax was developed and released. The new gprMax is written in Python and includes many advanced features such as anisotropic and dispersive-material modelling, building of realistic heterogeneous objects with rough surfaces, built-in libraries of antenna models, optimisation of parameters based on Taguchi's method - and more. (ii) A new freeware CAD was developed and released, for the construction of two-dimensional gprMax models. This tool also includes scripts easing the execution of gprMax on multi-core machines or networks of computers, and scripts for basic plotting of gprMax results. (iii) A series of interesting freeware codes was developed and will be released by the end of the Action, implementing differential and integral forward-scattering methods for the solution of simple electromagnetic scattering problems involving buried objects. (iv) An open database of synthetic and experimental GPR radargrams was created, in cooperation with WG2. The idea behind this initiative is to give researchers the

  3. Validation of the Carotid Intima-Media Thickness Variability: Can Manual Segmentations Be Trusted as Ground Truth?

    Science.gov (United States)

    Meiburger, Kristen M; Molinari, Filippo; Wong, Justin; Aguilar, Luis; Gallo, Diego; Steinman, David A; Morbiducci, Umberto

    2016-07-01

    The common carotid artery intima-media thickness (IMT) is widely accepted and used as an indicator of atherosclerosis. Recent studies, however, have found that the irregularity of the IMT along the carotid artery wall has a stronger correlation with atherosclerosis than the IMT itself. We set out to validate IMT variability (IMTV), a parameter defined to assess IMT irregularities along the wall. In particular, we analyzed whether or not manual segmentations of the lumen-intima and media-adventitia can be considered reliable in calculation of the IMTV parameter. To do this, we used a total of 60 simulated ultrasound images with a priori IMT and IMTV values. The images, simulated using the Fast And Mechanistic Ultrasound Simulation software, presented five different morphologies, four nominal IMT values and three different levels of variability along the carotid artery wall (no variability, small variability and large variability). Three experts traced the lumen-intima (LI) and media-adventitia (MA) profiles, and two automated algorithms were employed to obtain the LI and MA profiles. One expert also re-traced the LI and MA profiles to test intra-reader variability. The average IMTV measurements of the profiles used to simulate the longitudinal B-mode images were 0.002 ± 0.002, 0.149 ± 0.035 and 0.286 ± 0.068 mm for the cases of no variability, small variability and large variability, respectively. The IMTV measurements of one of the automated algorithms were statistically similar (p > 0.05, Wilcoxon signed rank) when considering small and large variability, but non-significant when considering no variability (p truth. On the other hand, our automated algorithm was found to be more reliable, indicating how automated techniques could therefore foster analysis of the carotid artery intima-media thickness irregularity.
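
    As a minimal illustration of the quantities discussed above (not the paper's exact definitions), the sketch below computes an IMT profile as the point-wise distance between lumen-intima and media-adventitia profiles sampled at common longitudinal positions, and summarises IMTV as the standard deviation of that profile; the synthetic wall geometry is made up.

```python
# Minimal sketch: IMT(x) as the distance between LI and MA profiles sampled
# at the same longitudinal positions, and IMTV taken here as the standard
# deviation of IMT along the wall (an assumed definition; the paper's exact
# formulation may differ). The synthetic geometry is illustrative only.
import numpy as np

def imt_and_imtv(li_y_mm, ma_y_mm):
    li = np.asarray(li_y_mm, dtype=float)
    ma = np.asarray(ma_y_mm, dtype=float)
    imt = np.abs(ma - li)                    # thickness profile along the wall
    return imt.mean(), imt.std(ddof=1)       # (mean IMT, IMTV)

# Synthetic 20 mm wall sampled every 0.1 mm, with a localized thickening.
x = np.arange(0, 20, 0.1)
li = np.zeros_like(x)
ma = 0.7 + 0.15 * np.exp(-((x - 10) ** 2) / 4)    # IMT ~0.7 mm plus a bump
mean_imt, imtv = imt_and_imtv(li, ma)
print(round(mean_imt, 3), "mm mean IMT,", round(imtv, 3), "mm IMTV")
```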

  4. Renaissance: A revolutionary approach for providing low-cost ground data systems

    Science.gov (United States)

    Butler, Madeline J.; Perkins, Dorothy C.; Zeigenfuss, Lawrence B.

    1996-01-01

    The NASA is changing its attention from large missions to a greater number of smaller missions with reduced development schedules and budgets. In relation to this, the Renaissance Mission Operations and Data Systems Directorate systems engineering process is presented. The aim of the Renaissance approach is to improve system performance, reduce cost and schedules and meet specific customer needs. The approach includes: the early involvement of the users to define the mission requirements and system architectures; the streamlining of management processes; the development of a flexible cost estimation capability, and the ability to insert technology. Renaissance-based systems demonstrate significant reuse of commercial off-the-shelf building blocks in an integrated system architecture.

  6. COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar": ongoing research activities and third-year results

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Tosti, Fabio

    2016-04-01

    This work aims at disseminating the ongoing research activities and third-year results of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar." About 350 experts are participating in the Action, from 28 COST Countries (Austria, Belgium, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Latvia, Malta, Macedonia, The Netherlands, Norway, Poland, Portugal, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, United Kingdom), and from Albania, Armenia, Australia, Colombia, Egypt, Hong Kong, Jordan, Israel, Philippines, Russia, Rwanda, Ukraine, and United States of America. In September 2014, TU1208 was recognised among the running Actions as a "COST Success Story" ("The Cities of Tomorrow: The Challenges of Horizon 2020," September 17-19, 2014, Torino, IT - a COST strategic workshop on the development and needs of the European cities). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Moreover, the Action is oriented to the following specific objectives and expected deliverables: (i) coordinating European scientists to highlight problems, merits and limits of current GPR systems; (ii) developing innovative protocols and guidelines, which will be published in a handbook and constitute a basis for European standards, for an effective GPR application in civil-engineering tasks; safety, economic and financial criteria will be integrated within the protocols; (iii) integrating competences for the improvement and merging of electromagnetic scattering techniques and of data-processing techniques; this will lead to a novel freeware tool for the localization of

  7. The Holy Grail of Resource Assessment: Low Cost Ground-Based Measurements with Good Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Marion, Bill; Smith, Benjamin

    2017-06-22

    Using performance data from some of the millions of installed photovoltaic (PV) modules with micro-inverters may afford the opportunity to provide ground-based solar resource data critical for developing PV projects. The method back-solves for the direct normal irradiance (DNI) and the diffuse horizontal irradiance (DHI) from the micro-inverter AC production data. When the derived values of DNI and DHI were then used to model the performance of other PV systems, the annual mean bias deviations were within +/- 4%, and only 1% greater than when the PV performance was modeled using high-quality irradiance measurements. An uncertainty analysis shows the method to be better suited for modeling PV performance than using satellite-based global horizontal irradiance.
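
    The report does not detail the back-solving procedure, so the following is only a toy version of the idea: if two co-located arrays with different orientations are available, an assumed linear PV model converts each array's AC power into plane-of-array irradiance, and an isotropic-sky transposition then yields two linear equations in the two unknowns DNI and DHI. The power model, efficiency, angles and numbers are all illustrative assumptions, considerably cruder than the actual method.

```python
# Toy back-solve: two co-located PV arrays with different orientations give
# two linear equations in the two unknowns DNI and DHI. The linear power
# model and isotropic-sky transposition are deliberate simplifications.
import numpy as np

def poa_from_ac(p_ac_w, p_dc0_w, system_eff=0.92):
    """Crude inversion of a linear PV model: plane-of-array irradiance, W/m^2."""
    return 1000.0 * p_ac_w / (p_dc0_w * system_eff)

def solve_dni_dhi(arrays, aoi_deg, tilt_deg):
    """arrays: list of (p_ac_w, p_dc0_w) for two differently oriented arrays."""
    poa = np.array([poa_from_ac(p, p0) for p, p0 in arrays])
    cos_aoi = np.cos(np.radians(aoi_deg))                   # beam projection
    sky_view = (1 + np.cos(np.radians(tilt_deg))) / 2       # isotropic diffuse
    A = np.column_stack([cos_aoi, sky_view])                # 2x2 system
    dni, dhi = np.linalg.solve(A, poa)
    return dni, dhi

# Hypothetical snapshot: a south-facing and a west-facing array around noon.
arrays = [(265.0, 300.0), (180.0, 300.0)]     # (AC power, DC rating) in W
aoi = [20.0, 60.0]                            # angle of incidence per array, deg
tilt = [25.0, 25.0]                           # tilt per array, deg
print(solve_dni_dhi(arrays, aoi, tilt))       # (DNI, DHI) estimate in W/m^2
```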

  8. Cost-Effective Telemetry and Command Ground Systems Automation Strategy for the Soil Moisture Active Passive (SMAP) Mission

    Science.gov (United States)

    Choi, Joshua S.; Sanders, Antonio L.

    2012-01-01

    Soil Moisture Active Passive (SMAP) is an Earth-orbiting, remote-sensing NASA mission slated for launch in 2014. The ground data system (GDS) being developed for SMAP is composed of many heterogeneous subsystems, ranging from those that support planning and sequencing to those used for real-time operations, and even further to those that enable science data exchange. A full end-to-end automation of the GDS may result in cost savings during mission operations, but it would require a significant upfront investment to develop such comprehensive automation. As demonstrated by the Jason-1 and Wide-field Infrared Survey Explorer (WISE) missions, a measure of "lights-out" automation for routine, orbital pass ground operations can still reduce mission cost through smaller staffing of operators and limited work hours. The challenge, then, for the SMAP GDS engineering team is to formulate an automated operations strategy--and corresponding system architecture--to minimize operator intervention during operations, while balancing the development cost associated with the scope and complexity of automation. This paper discusses the automated operations approach being developed for the SMAP GDS. The focus is on automating the activities involved in routine passes, which limits the scope to real-time operations. A key subsystem of the SMAP GDS--NASA's AMMOS Mission Data Processing and Control System (AMPCS)--provides a set of capabilities that enable such automation. Also discussed are the lights-out pass automations of the Jason-1 and WISE missions and how they informed the automation strategy for SMAP. The paper aims to provide insights into what is necessary in automating the GDS operations for Earth satellite missions.

  9. Ground Water Atlas of the United States: Segment 13, Alaska, Hawaii, Puerto Rico, and the U.S. Virgin Islands

    Science.gov (United States)

    Miller, James A.; Whitehead, R.L.; Oki, Delwyn S.; Gingerich, Stephen B.; Olcott, Perry G.

    1997-01-01

    Alaska is the largest State in the Nation and has an area of about 586,400 square miles, or about one-fifth the area of the conterminous United States. The State is geologically and topographically diverse and is characterized by wild, scenic beauty. Alaska contains abundant natural resources, including ground water and surface water of chemical quality that is generally suitable for most uses. The central part of Alaska is drained by the Yukon River and its tributaries, the largest of which are the Porcupine, the Tanana, and the Koyukuk Rivers. The Yukon River originates in northwestern Canada and, like the Kuskokwim River, which drains a large part of southwestern Alaska, discharges into the Bering Sea. The Noatak River in northwestern Alaska discharges into the Chukchi Sea. Major rivers in southern Alaska include the Susitna and the Matanuska Rivers, which discharge into Cook Inlet, and the Copper River, which discharges into the Gulf of Alaska. North of the Brooks Range, the Colville and the Sagavanirktok Rivers and numerous smaller streams discharge into the Arctic Ocean. In 1990, Alaska had a population of about 552,000 and, thus, is one of the least populated States in the Nation. Most of the population is concentrated in the cities of Anchorage, Fairbanks, and Juneau, all of which are located in lowland areas. The mountains, the frozen Arctic desert, the interior plateaus, and the areas covered with glaciers lack major population centers. Large parts of Alaska are uninhabited and much of the State is public land. Ground-water development has not occurred over most of these remote areas. The Hawaiian islands are the exposed parts of the Hawaiian Ridge, which is a large volcanic mountain range on the sea floor. Most of the Hawaiian Ridge is below sea level (fig. 31). The State of Hawaii consists of a group of 132 islands, reefs, and shoals that extend for more than 1,500 miles from southeast to northwest across the central Pacific Ocean between about 155

  10. Characterization of Personal Privacy Devices (PPD) radiation pattern impact on the ground and airborne segments of the local area augmentation system (LAAS) at GPS L1 frequency

    Science.gov (United States)

    Alkhateeb, Abualkair M. Khair

    Personal Privacy Devices (PPDs) are radio-frequency transmitters that intentionally transmit in a frequency band used by other devices with the intent of denying service to those devices. These devices have shown the potential to interfere with the ground and air sub-systems of the Local Area Augmentation System (LAAS), a GPS-based navigation aid at commercial airports. The Federal Aviation Administration (FAA) is concerned by the potential impact of these devices on GPS navigation aids at airports and has commenced an activity to determine the severity of this threat. In support of this effort, the research in this dissertation was conducted under FAA Cooperative Agreement 2011-G-012 to investigate the impact of these devices on the LAAS. In order to investigate the impact of PPD Radio Frequency Interference (RFI) on the ground and air sub-systems of the LAAS, the work presented in phase one of this research characterizes the vehicle's impact on the PPD's Effective Isotropic Radiated Power (EIRP). A study was conceived to characterize PPD performance by examining the on-vehicle radiation patterns as a function of vehicle type, jammer type, jammer location inside a vehicle and jammer orientation at each location. Phase two characterizes the GPS radiation pattern of the Multipath Limiting Antenna (MLA). An MLA has to meet stringent requirements for acceptable signal detection and multipath rejection. The ARL-2100 is the most recent MLA proposed for use in the LAAS ground segment. The ground-based antenna's radiation pattern was modeled via HFSS, a commercial off-the-shelf CAD-based modeling code with a full-wave electromagnetic software simulation package that uses finite element analysis. Phase three of this work studies the characteristics of the GPS radiation pattern on commercial aircraft. The airborne GPS antenna was modeled and the resulting radiation pattern on

  11. COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar": first-year activities and results

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; Loizos, Andreas; Slob, Evert; Tosti, Fabio

    2014-05-01

    This work aims at presenting the first-year activities and results of COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". This Action was launched in April 2013 and will last four years. The principal aim of COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, whilst simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Moreover, the Action is oriented to the following specific objectives and expected deliverables: (i) coordinating European scientists to highlight problems, merits and limits of current GPR systems; (ii) developing innovative protocols and guidelines, which will be published in a handbook and constitute a basis for European standards, for an effective GPR application in civil- engineering tasks; safety, economic and financial criteria will be integrated within the protocols; (iii) integrating competences for the improvement and merging of electromagnetic scattering techniques and of data- processing techniques; this will lead to a novel freeware tool for the localization of buried objects, shape-reconstruction and estimation of geophysical parameters useful for civil engineering needs; (iv) networking for the design, realization and optimization of innovative GPR equipment; (v) comparing GPR with different NDT techniques, such as ultrasonic, radiographic, liquid-penetrant, magnetic-particle, acoustic-emission and eddy-current testing; (vi) comparing GPR technology and methodology used in civil engineering with those used in other fields; (vii) promotion of a more widespread, advanced and efficient use of GPR in civil engineering; and (viii) organization of a high-level modular training program for GPR European users. Four Working Groups (WGs) carry out the research activities. The first WG

  12. Exchanging knowledge and working together in COST Action TU1208: Short-Term Scientific Missions on Ground Penetrating Radar

    Science.gov (United States)

    Santos Assuncao, Sonia; De Smedt, Philippe; Giannakis, Iraklis; Matera, Loredana; Pinel, Nicolas; Dimitriadis, Klisthenis; Giannopoulos, Antonios; Sala, Jacopo; Lambot, Sébastien; Trinks, Immo; Marciniak, Marian; Pajewski, Lara

    2015-04-01

    This work aims at presenting the scientific results stemming from six Short-Term Scientific Missions (STSMs) funded by the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (Action Chair: Lara Pajewski, STSM Manager: Marian Marciniak). STSMs are an important means of developing linkages and scientific collaborations between participating institutions involved in a COST Action. Scientists have the possibility to go to an institution abroad, in order to undertake joint research and share techniques/equipment/infrastructures that may not be available in their own institution. STSMs are particularly intended for Early Stage Researchers (ESRs), i.e., young scientists who obtained their PhD no more than 8 years before becoming involved in the Action. The duration of a standard STSM can be from 5 to 90 days, and the research activities carried out during this short stay shall specifically contribute to the achievement of the scientific objectives of the supporting COST Action. The first STSM was carried out by Lara Pajewski, visiting Antonis Giannopoulos at The University of Edinburgh (United Kingdom). The research activities focused on the electromagnetic modelling of Ground Penetrating Radar (GPR) responses to complex targets. A set of test scenarios was defined, to be used by research groups participating in Working Group 3 of COST Action TU1208 to test and compare different electromagnetic forward- and inverse-scattering methods; these scenarios were modelled by using the well-known finite-difference time-domain simulator gprMax. New Matlab procedures for the processing and visualization of gprMax output data were developed. During the second STSM, Iraklis Giannakis visited Lara Pajewski at Roma Tre University (Italy). The study was concerned with the numerical modelling of horn antennas for GPR. An air-coupled horn antenna was implemented in gprMax and tested in a realistically

  13. The 30 GHz solid state amplifier for low cost low data rate ground terminals

    Science.gov (United States)

    Ngan, Y. C.; Quijije, M. A.

    1984-01-01

    This report details the development of a 20-W solid state amplifier operating near 30 GHz. The IMPATT amplifier not only met or exceeded all the program objectives, but also possesses the ability to operate in the pulse mode, which was not called for in the original contract requirements. The ability to operate in the pulse mode is essential for TDMA (Time Domain Multiple Access) operation. An output power of 20 W was achieved with a 1-dB instantaneous bandwidth of 260 MHz. The amplifier has also been tested in pulse mode with 50% duty for pulse lengths ranging from 200 ns to 2 µs with 10 ns rise and fall times and no degradation in output power. This pulse mode operation was made possible by the development of a stable 12-diode power combiner/amplifier and a single-diode pulsed driver whose RF output power was switched on and off by having its bias current modulated via a fast-switching current pulse modulator. Essential to the overall amplifier development was the successful development of state-of-the-art silicon double-drift IMPATT diodes capable of reproducible 2.5 W CW output power with 12% dc-to-RF conversion efficiency. Output powers as high as 2.75 W have been observed. Both the device and circuit design are amenable to low-cost production.

  14. A low-cost drone based application for identifying and mapping of coastal fish nursery grounds

    Science.gov (United States)

    Ventura, Daniele; Bruno, Michele; Jona Lasinio, Giovanna; Belluscio, Andrea; Ardizzone, Giandomenico

    2016-03-01

    Acquiring seabed, landform or other topographic data plays a pivotal role in defining and mapping key marine habitats in the field of marine ecology. However, collecting this kind of data at a high level of detail for very shallow and inaccessible marine habitats has often been challenging and time consuming, and spatial and temporal coverage often has to be compromised to make the monitoring routine more cost effective. Nowadays, emerging technologies can overcome many of these constraints. Here we describe a recent development in remote sensing based on a small unmanned aerial vehicle (UAV, or drone) that produces very fine scale maps of fish nursery areas. This technology is simple to use, inexpensive, and timely in producing aerial photographs of marine areas. Technical details regarding aerial photo acquisition (drone and camera settings) and the post-processing workflow (3D model generation with a Structure from Motion algorithm and photo-stitching) are given. Finally, by applying modern algorithms of semi-automatic image analysis and classification (Maximum Likelihood, ECHO and Object-Based Image Analysis), we compared the results of three thematic maps of a nursery area for juvenile sparid fishes, highlighting the potential of this method for mapping and monitoring coastal marine habitats.

  15. A Decolorization Technique with Spent “Greek Coffee” Grounds as Zero-Cost Adsorbents for Industrial Textile Wastewaters

    Science.gov (United States)

    Kyzas, George Z.

    2012-01-01

    In this study, the decolorization of industrial textile wastewaters was studied in batch mode using spent “Greek coffee” grounds (COF) as low-cost adsorbents. In this attempt, there is a cost-saving potential given that there was no further modification of COF (just washed with distilled water to remove dirt and color, then dried in an oven). Furthermore, tests were realized both in synthetic and real textile wastewaters for comparative reasons. The optimum pH of adsorption was acidic (pH = 2) for synthetic effluents, while experiments at free (non-adjusted) pH were carried out for real effluents. Equilibrium data were fitted to the Langmuir, Freundlich and Langmuir-Freundlich (L-F) models. The calculated maximum adsorption capacities (Qmax) for total dye (reactive) removal at 25 °C were 241 mg/g (pH = 2) and 179 mg/g (pH = 10). Thermodynamic parameters were also calculated (ΔH0, ΔG0, ΔS0). Kinetic data were fitted to the pseudo-first, -second and -third order models. The optimum pH for desorption was determined, in line with desorption and reuse analysis. Experiments dealing with increasing the mass of adsorbent showed a strong increase in total dye removal.

  16. A Decolorization Technique with Spent “Greek Coffee” Grounds as Zero-Cost Adsorbents for Industrial Textile Wastewaters

    Directory of Open Access Journals (Sweden)

    George Z. Kyzas

    2012-10-01

    Full Text Available In this study, the decolorization of industrial textile wastewaters was studied in batch mode using spent “Greek coffee” grounds (COF) as low-cost adsorbents. In this attempt, there is a cost-saving potential given that there was no further modification of COF (just washed with distilled water to remove dirt and color, then dried in an oven). Furthermore, tests were realized both in synthetic and real textile wastewaters for comparative reasons. The optimum pH of adsorption was acidic (pH = 2) for synthetic effluents, while experiments at free (non-adjusted) pH were carried out for real effluents. Equilibrium data were fitted to the Langmuir, Freundlich and Langmuir-Freundlich (L-F) models. The calculated maximum adsorption capacities (Qmax) for total dye (reactive) removal at 25 °C were 241 mg/g (pH = 2) and 179 mg/g (pH = 10). Thermodynamic parameters were also calculated (ΔH0, ΔG0, ΔS0). Kinetic data were fitted to the pseudo-first, -second and -third order models. The optimum pH for desorption was determined, in line with desorption and reuse analysis. Experiments dealing with increasing the mass of adsorbent showed a strong increase in total dye removal.
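
    Both records above fit Langmuir, Freundlich and Langmuir-Freundlich isotherms to the equilibrium data. As an illustration of that step only, the sketch below fits the Langmuir model q = Qmax·KL·Ce/(1 + KL·Ce) to synthetic equilibrium points generated around a Qmax of about 240 mg/g; the data, initial guesses and noise level are made up, not the paper's measurements.

```python
# Fitting the Langmuir isotherm q = Qmax*KL*Ce/(1 + KL*Ce) to equilibrium
# data, as done (together with Freundlich and L-F models) in the study.
# The data below are synthetic, generated around Qmax ~ 240 mg/g; they are
# not the paper's measurements.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    return qmax * kl * ce / (1.0 + kl * ce)

rng = np.random.default_rng(3)
ce = np.array([5, 10, 25, 50, 100, 200, 400], dtype=float)     # mg/L at equilibrium
qe = langmuir(ce, 241.0, 0.04) + rng.normal(0, 5, ce.size)     # mg/g, with noise

popt, pcov = curve_fit(langmuir, ce, qe, p0=[200.0, 0.01])
qmax_fit, kl_fit = popt
print(f"Qmax = {qmax_fit:.1f} mg/g, KL = {kl_fit:.3f} L/mg")
```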

  17. COST Action TU1208 - Working Group 1 - Design and realisation of Ground Penetrating Radar equipment for civil engineering applications

    Science.gov (United States)

    Pajewski, Lara; Benedetto, Andrea; D'Amico, Sebastiano; Ferrara, Vincenzo; Frezza, Fabrizio; Persico, Raffaele; Tosti, Fabio

    2017-04-01

    This work aims at presenting the main results achieved by Working Group (WG) 1 "Novel Ground Penetrating Radar instrumentation" of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.cost.eu, www.GPRadar.eu). The principal goal of the Action, which started in April 2013 and is ending in October 2017, is to exchange and increase scientific-technical knowledge and experience of Ground Penetrating Radar techniques in civil engineering, whilst promoting throughout Europe the effective use of this safe non-destructive technique. The Action involves more than 300 Members from 28 COST Countries, a Cooperating State, 6 Near Neighbour Countries and 6 International Partner Countries. The most interesting achievements of WG1 include: 1. The state of the art on GPR systems and antennas was compiled; merits and limits of current GPR systems in civil engineering applications were highlighted and open issues were identified. 2. The Action investigated the new challenge of inferring mechanical (strength and deformation) properties of flexible pavement from electromagnetic data. A semi-empirical method was developed by an Italian research team and tested over an Italian test site: a good agreement was found between the values measured by using a light falling weight deflectometer (LFWD) and the values estimated by using the proposed semi-empirical method, thereby showing great promise for large-scale mechanical inspections of pavements using GPR. Subsequently, the method was tested on a real scale, on an Italian road in the countryside: again, a good agreement between LFWD and GPR data was achieved. As a third step, the method was tested at larger scale, over three different road sections within the districts of Madrid and Guadalajara, in Spain: GPR surveys were carried out at the speed of traffic for a total of approximately 39 kilometers; results were collected by using different GPR antennas

  18. Cost-effectiveness of bivalirudin versus heparin plus glycoprotein IIb/IIIa inhibitor in the treatment of non-ST-segment elevation acute coronary syndromes.

    Science.gov (United States)

    Schwenkglenks, Matthias; Brazier, John E; Szucs, Thomas D; Fox, Keith A A

    2011-01-01

    This study sought to assess the cost-effectiveness of bivalirudin versus heparin plus glycoprotein IIb/IIIa inhibitor (GPI) in thienopyridine-treated non-ST-segment elevation acute coronary syndrome (NSTE-ACS) patients undergoing early or urgent invasive management, from a United Kingdom National Health Service perspective. A decision-analytic model with lifelong time horizon was populated with event risks and resource use parameters derived from the Acute Catheterization and Urgent Intervention Triage Strategy (ACUITY) trial raw data. In a parallel analysis, key comparator strategy inputs came from Global Registry of Acute Coronary Events (GRACE) patients enrolled in the United Kingdom. Upstream and catheter laboratory-initiated GPI were assumed to be tirofiban and abciximab, respectively. Life expectancy of first-year survivors, unit costs, and health-state utilities came from United Kingdom sources. Costs and effects were discounted at 3.5%. Incremental cost-effectiveness ratios (ICERs) were expressed as cost per quality-adjusted life year (QALY) gained. Higher acquisition costs for bivalirudin were partially offset by lower hospitalization and bleeding costs. In the ACUITY-based analysis, per-patient lifetime costs in the bivalirudin and heparin plus GPI strategies were £10,903 and £10,653, respectively. Patients survived 10.87 and 10.82 years on average, corresponding to 5.96 and 5.93 QALYs and resulting in an ICER of £9,906 per QALY gained. The GRACE-based ICER was £12,276 per QALY gained. In probabilistic sensitivity analysis, 72.1% and 67.0% of simulation results were more cost-effective than £20,000 per QALY gained, in the ACUITY-based and GRACE-based analyses, respectively. Additional scenario analyses implied that greater cost-effectiveness may be achieved in actual clinical practice. Treating NSTE-ACS patients undergoing invasive management with bivalirudin is likely to represent a cost-effective option for the United Kingdom, when compared with
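
    As a reading aid, the incremental cost-effectiveness ratio (ICER) quoted above follows the standard definition below; plugging in the rounded per-patient figures from the abstract gives only an approximate value (about £8,300 per QALY), whereas the reported £9,906 per QALY comes from the unrounded model outputs.

```latex
\mathrm{ICER} \;=\; \frac{C_{\mathrm{bivalirudin}} - C_{\mathrm{heparin+GPI}}}{E_{\mathrm{bivalirudin}} - E_{\mathrm{heparin+GPI}}}
\;\approx\; \frac{\pounds\,10{,}903 - \pounds\,10{,}653}{5.96\ \mathrm{QALY} - 5.93\ \mathrm{QALY}}
```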

  19. Exergy cost analysis of ground source heat pump system

    Institute of Scientific and Technical Information of China (English)

    龚光彩; 冯启辉; 苏欢; 徐春雯

    2011-01-01

    Based on exergy analysis and economic theory, an exergy cost analysis model of a ground source heat pump system was established using the exergy cost analysis method. Taking an actual building air-conditioning system as the background, a ground source heat pump system and an air source heat pump system were compared using the exergy cost analysis model and an energy cost analysis method, and a sensitivity analysis of the exergy cost of the ground source heat pump was carried out with the air source heat pump as the reference. The analysis method provides guidance for the practical application of ground source heat pumps.

  20. Impact of a bronchiolitis guideline on ED resource use and cost: a segmented time-series analysis.

    Science.gov (United States)

    Akenroye, Ayobami T; Baskin, Marc N; Samnaliev, Mihail; Stack, Anne M

    2014-01-01

    Bronchiolitis is a major cause of infant morbidity and contributes to millions of dollars in health care costs. Care guidelines may cut costs by reducing unnecessary resource utilization. Through the implementation of a guideline, we sought to reduce unnecessary resource utilization and improve the value of care provided to infants with bronchiolitis in a pediatric emergency department (ED). We conducted an interrupted time series that examined ED visits of 2929 patients with bronchiolitis, aged 1 to 12 months, seen between November 2007 and April 2013. Outcomes were the proportions of patients having a chest radiograph (CXR), respiratory syncytial virus (RSV) testing, or albuterol or antibiotic administration, and the total cost of care. Balancing measures included admission rate, returns to the ED resulting in admission within 72 hours of discharge, and ED length of stay (LOS). There were no significant preexisting trends in the outcomes. After guideline implementation, there was an absolute reduction of 23% in CXR (95% confidence interval [CI]: 11% to 34%), 11% in RSV testing (95% CI: 6% to 17%), 7% in albuterol use (95% CI: 0.2% to 13%), and 41 minutes in ED LOS (95% CI: 16 to 65 minutes). Mean cost per patient was reduced by $197 (95% CI: $136 to $259). Total cost savings was $196,409 (95% CI: $135,592 to $258,223) over the 2 bronchiolitis seasons after guideline implementation. There were no significant differences in antibiotic use, admission rates, or returns resulting in admission within 72 hours of discharge. A bronchiolitis guideline was associated with reductions in CXR, RSV testing, albuterol use, ED LOS, and total costs in a pediatric ED.
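
    A segmented (interrupted) time-series analysis of this kind is commonly fitted as a regression with a level-change and slope-change term at the intervention date. A minimal sketch with statsmodels follows, using synthetic monthly aggregates rather than the study's patient-level data.

```python
# Minimal sketch of a segmented regression for an interrupted time series.
# The monthly mean-cost data and the month-24 intervention date are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 48                                               # 48 months of synthetic data
df = pd.DataFrame({"time": np.arange(n)})
df["post"] = (df["time"] >= 24).astype(int)          # indicator: after the guideline
df["time_after"] = np.maximum(0, df["time"] - 24)    # slope-change term
df["mean_cost"] = (900 - 2 * df["time"] - 150 * df["post"]
                   - 1 * df["time_after"] + rng.normal(0, 20, n))

model = smf.ols("mean_cost ~ time + post + time_after", data=df).fit()
print(model.params)   # 'post' estimates the immediate level change at implementation
```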

  1. Segmentation of Color Images Based on Different Segmentation Techniques

    Directory of Open Access Journals (Sweden)

    Purnashti Bhosale

    2013-03-01

    In this paper, we propose a color image segmentation algorithm based on different segmentation techniques. We recognize background objects such as the sky, ground, and trees based on color and texture information, using various segmentation methods. Threshold-based segmentation is studied using both global and local techniques, which are compared with one another so as to choose the best technique for threshold segmentation. Further segmentation is then performed using a clustering method and a graph-cut method to improve the segmentation results.
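
    As an illustration of the clustering step mentioned above, a minimal k-means colour segmentation can be written in a few lines; the synthetic two-region image and the cluster count below are placeholders, not the paper's data or settings.

```python
# Minimal sketch: k-means clustering of pixel colours as a rough segmentation.
# A synthetic "sky over ground" image stands in for a real photograph.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
sky = rng.normal([0.5, 0.7, 0.9], 0.05, (60, 120, 3))     # bluish upper half
ground = rng.normal([0.3, 0.5, 0.2], 0.05, (60, 120, 3))  # greenish lower half
img = np.clip(np.vstack([sky, ground]), 0, 1)

h, w, _ = img.shape
pixels = img.reshape(-1, 3)                               # one RGB sample per pixel
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
print("label image shape:", labels.reshape(h, w).shape,
      "segments found:", np.unique(labels))
```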

  2. Numerical modelling of GPR ground-matching enhancement by a chirped multilayer structure - output of cooperation within COST Action TU1208

    Science.gov (United States)

    Baghdasaryan, Hovik V.; Knyazyan, Tamara M.; Hovhannisyan, Tamara. T.; Marciniak, Marian; Pajewski, Lara

    2016-04-01

    As is well known, Ground Penetrating Radar (GPR) is an electromagnetic technique for the detection and imaging of buried objects, with resolution ranging from centimeters to a few meters [1, 2]. Though this technique is mature enough and different types of GPR devices are already in use, some problems are still waiting for a solution [3]. One of them is to achieve a better matching of the transmitting GPR antenna to the ground, which will increase the signal penetration depth and the signal/noise ratio at the receiving end. In the current work, a full-wave electromagnetic modelling of the interaction of a plane wave with a chirped multilayered structure on the ground is performed, via numerical simulation. The method of single expression is used, which is a suitable technique for the solution of multi-boundary problems [4, 5]. The considered multilayer consists of two different dielectric slabs of low and high permittivity, where the highest value of permittivity does not exceed the permittivity of the ground. The losses in the ground are suitably taken into account. Two types of multilayers are analysed. Numerical results are obtained for the reflectance from the structure, as well as for the distributions of electric field components and power flow density in both the considered structures and the ground. The obtained results indicate that, for a better matching with the ground, the layer closer to the ground should be the high-permittivity one. Acknowledgement This work benefited from networking activities carried out within the EU funded COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.GPRadar.eu, www.cost.eu). Part of this work was developed during the Short-Term Scientific Mission COST-STSM-TU1208-25016, carried out by Prof. Baghdasaryan in the National Institute of Telecommunications in Warsaw, Poland. References [1] H. M. Jol. Ground Penetrating Radar: Theory and Applications. Elsevier, 2009. 509 pp. [2] R. Persico. Introduction to
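
    The reflectance of a layered matching structure of this kind can be checked with a standard transfer-matrix calculation at normal incidence. The sketch below is a generic transfer-matrix routine, not the method of single expression used in the paper, and the permittivities, thicknesses and ground loss are illustrative values only.

```python
# Minimal transfer-matrix sketch: normal-incidence reflectance of a lossless
# two-layer stack on a lossy half-space ("ground"). All numbers are illustrative.
import numpy as np

def reflectance(freq_hz, eps_layers, d_layers, eps_ground):
    c = 299792458.0
    k0 = 2 * np.pi * freq_hz / c
    n_layers = np.sqrt(np.asarray(eps_layers, dtype=complex))
    n_ground = np.sqrt(complex(eps_ground))
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = k0 * n * d                      # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    # Fresnel-type reflection coefficient for air / stack / ground
    num = (M[0, 0] + M[0, 1] * n_ground) - (M[1, 0] + M[1, 1] * n_ground)
    den = (M[0, 0] + M[0, 1] * n_ground) + (M[1, 0] + M[1, 1] * n_ground)
    return abs(num / den) ** 2

# low/high-permittivity pair in front of a ground with eps = 9 - 1j (illustrative)
print(reflectance(1e9, eps_layers=[2.0, 6.0], d_layers=[0.03, 0.02], eps_ground=9 - 1j))
```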

  3. Ground-penetrating radar investigation of St. Leonard's Crypt under the Wawel Cathedral (Cracow, Poland) - COST Action TU1208

    Science.gov (United States)

    Benedetto, Andrea; Pajewski, Lara; Dimitriadis, Klisthenis; Avlonitou, Pepi; Konstantakis, Yannis; Musiela, Małgorzata; Mitka, Bartosz; Lambot, Sébastien; Żakowska, Lidia

    2016-04-01

    The Wawel ensemble, including the Royal Castle, the Wawel Cathedral and other monuments, is perched on top of the Wawel hill immediately south of the Cracow Old Town, and is by far the most important collection of buildings in Poland. St. Leonard's Crypt is located under the Wawel Cathedral of St Stanislaus BM and St Wenceslaus M. It was built in the years 1090-1117 and was the western crypt of the pre-existing Romanesque Wawel Cathedral, so-called Hermanowska. Pope John Paul II said his first Mass on the altar of St. Leonard's Crypt on November 2, 1946, one day after his priestly ordination. The interior of the crypt is divided by eight columns into three naves with vaulted ceiling and ended with one apse. The tomb of Bishop Maurus, who died in 1118, is in the middle of the crypt under the floor; an inscription "+ MAVRVS EPC MCXVIII +" indicates the burial place and was made in 1938 after the completion of archaeological works which resulted in the discovery of this tomb. Moreover, the crypt hosts the tombs of six Polish kings and heroes: Michał Korybut Wiśniowiecki (King of the Polish-Lithuanian Commonwealth), Jan III Sobieski (King of the Polish-Lithuanian Commonwealth and Commander at the Battle of Vienna), Maria Kazimiera (Queen of the Polish-Lithuanian Commonwealth and consort to Jan III Sobieski), Józef Poniatowski (Prince of Poland and Marshal of France), Tadeusz Kościuszko (Polish general, revolutionary and a Brigadier General in the American Revolutionary War) and Władysław Sikorski (Prime Minister of the Polish Government in Exile and Commander-in-Chief of the Polish Armed Forces). The adjacent six crypts and corridors host the tombs of the other Polish kings, from Sigismund the Old to Augustus II the Strong, their families and several Polish heroes. In May 2015, the COST (European COoperation in Science and Technology) Action TU1208 "Civil engineering applications of Ground Penetrating Radar" organised and offered a Training School (TS) on the

  4. Surface Figure Metrology for CELT Primary Mirror Segments

    Energy Technology Data Exchange (ETDEWEB)

    Sommargren, G; Phillion, D; Seppala, L; Lerner, S

    2001-02-27

    The University of California and California Institute of Technology are currently studying the feasibility of building a 30-m segmented ground-based optical telescope called the California Extremely Large Telescope (CELT). The early ideas for this telescope were first described by Nelson and Mast and more recently refined by Nelson. In parallel, concepts for the fabrication of the primary segments were proposed by Mast, Nelson and Sommargren, where high-risk technologies were identified. One of these was the surface figure metrology needed for fabricating the aspheric mirror segments. This report addresses the advanced interferometry that will be needed to achieve 15 nm rms accuracy for mirror segments with aspheric departures as large as 35 mm peak-to-valley. For reasons of cost, size, measurement consistency and ease of operation, we believe it is desirable to have a single interferometer that can be universally applied to each and every mirror segment. Such an instrument is described in this report.

  5. Taking the Evolutionary Road to Developing an In-House Cost Estimate

    Science.gov (United States)

    Jacintho, David; Esker, Lind; Herman, Frank; Lavaque, Rodolfo; Regardie, Myma

    2011-01-01

    This slide presentation reviews the process and some of the problems and challenges of developing an In-House Cost Estimate (IHCE). Using as an example the Space Network Ground Segment Sustainment (SGSS) project, the presentation reviews the phases for developing a cost estimate within the project to estimate government and contractor project costs to support a budget request.

  6. Design of a white-light interferometric measuring system for co-phasing the primary mirror segments of the next generation of ground-based telescope

    Science.gov (United States)

    Song, Helun; Xian, Hao; Jiang, Wenhan; Rao, Changhui; Wang, Shengqian

    2007-12-01

    With the increase of telescope size, the manufacture of monolithic primaries becomes increasingly difficult. Instead, the use of segmented mirrors, where many individual mirrors (the segments) work together to provide good image quality and an aperture equivalent to that of a large monolithic mirror, is considered a more appropriate strategy. However, with the introduction of a large telescope mirror composed of many individual segments, the problem of ensuring a smooth, continuous mirror surface (co-phased mirrors) becomes critical. One of the main problems arising in the co-phasing of a segmented-mirror telescope is the measurement of the vertical displacements between the individual segments (piston errors). For such mirrors to exhibit diffraction-limited performance, a phasing process is required to guarantee that the segments are positioned with an accuracy of a fraction of a wavelength of the incoming light. The measurements become especially complicated when the piston error is on the order of a fraction of a wavelength. To meet these performance requirements, a novel method for phasing the segmented-mirror optical system is described. The phasing method is based on a high-aperture Michelson interferometer. The use of an interferometric technique allows the measurement of segment misalignment during daytime with high accuracy, which is a major design guideline. The innovation introduced in the optical design of the interferometer is the simultaneous use of both monochromatic and white-light sources, which allows the system to measure the piston error with an uncertainty of 6 nm over a 50 µm range. The expected monochromatic and white-light interferograms and the feasibility of the phasing method are presented here.

  7. COST Action TU1206 "SUB-URBAN - A European network to improve understanding and use of the ground beneath our cities"

    Science.gov (United States)

    Campbell, Diarmad; de Beer, Johannes; Lawrence, David; van der Meulen, Michiel; Mielby, Susie; Hay, David; Scanlon, Ray; Campenhout, Ignace; Taugs, Renate; Eriksson, Ingelov

    2014-05-01

    Sustainable urbanisation is the focus of SUB-URBAN, a European Cooperation in Science and Technology (COST) Action TU1206 - A European network to improve understanding and use of the ground beneath our cities. This aims to transform relationships between experts who develop urban subsurface geoscience knowledge - principally national Geological Survey Organisations (GSOs), and those who can most benefit from it - urban decision makers, planners, practitioners and the wider research community. Under COST's Transport and Urban Development Domain, SUB-URBAN has established a network of GSOs and other researchers in over 20 countries, to draw together and evaluate collective urban geoscience research in 3D/4D characterisation, prediction and visualisation. Knowledge exchange between researchers and City-partners within 'SUB-URBAN' is already facilitating new city-scale subsurface projects, and is developing a tool-box of good-practice guidance, decision-support tools, and cost-effective methodologies that are appropriate to local needs and circumstances. These are intended to act as catalysts in the transformation of relationships between geoscientists and urban decision-makers more generally. As a result, the importance of the urban sub-surface in the sustainable development of our cities will be better appreciated, and the conflicting demands currently placed on it will be acknowledged, and resolved appropriately. Existing city-scale 3D/4D model exemplars are being developed by partners in the UK (Glasgow, London), Germany (Hamburg) and France (Paris). These draw on extensive ground investigation (10s-100s of thousands of boreholes) and other data. Model linkage enables prediction of groundwater, heat, SuDS, and engineering properties. Combined subsurface and above-ground (CityGML, BIMs) models are in preparation. These models will provide valuable tools for more holistic urban planning; identifying subsurface opportunities and saving costs by reducing uncertainty in

  8. Cost-effective monitoring of ground motion related to earthquakes, landslides, or volcanic activity by joint use of a single-frequency GPS and a MEMS accelerometer

    Science.gov (United States)

    Tu, R.; Wang, R.; Ge, M.; Walter, T. R.; Ramatschi, M.; Milkereit, C.; Bindi, D.; Dahm, T.

    2013-08-01

    Detection and precise estimation of strong ground motion are crucial for rapid assessment and early warning of geohazards such as earthquakes, landslides, and volcanic activity. This challenging task can be accomplished by combining GPS and accelerometer measurements because of their complementary capabilities to resolve broadband ground motion signals. However, for implementing an operational monitoring network of such joint measurement systems, cost-effective techniques need to be developed and rigorously tested. We propose a new approach for joint processing of single-frequency GPS and MEMS (microelectromechanical systems) accelerometer data in real time. To demonstrate the performance of our method, we describe results from outdoor experiments under controlled conditions. For validation, we analyzed dual-frequency GPS data and images recorded by a video camera. The results of the different sensors agree very well, suggesting that real-time broadband information of ground motion can be provided by using single-frequency GPS and MEMS accelerometers.
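
    The joint use of low-rate GPS displacements and high-rate accelerometer data is often illustrated with a simple Kalman filter in which acceleration drives the state prediction and GPS provides displacement updates. The one-dimensional sketch below uses synthetic data and is not the authors' processing chain.

```python
# Minimal 1-D sketch: fuse high-rate acceleration (as control input) with
# low-rate GPS displacement updates in a Kalman filter. Synthetic data only.
import numpy as np

dt = 0.01                                   # 100 Hz accelerometer
t = np.arange(0, 10, dt)
true_disp = 0.05 * np.sin(2 * np.pi * 0.5 * t)                 # 5 cm, 0.5 Hz motion
true_acc = -0.05 * (2 * np.pi * 0.5) ** 2 * np.sin(2 * np.pi * 0.5 * t)

rng = np.random.default_rng(1)
acc_meas = true_acc + rng.normal(0, 0.02, t.size)              # noisy accelerometer
gps_noise = 0.01                                               # 1 cm GPS noise

F = np.array([[1, dt], [0, 1]])            # state: [displacement, velocity]
B = np.array([0.5 * dt**2, dt])            # acceleration enters as control input
H = np.array([[1.0, 0.0]])                 # GPS observes displacement only
Q = np.diag([1e-8, 1e-6])
R = np.array([[gps_noise**2]])

x = np.zeros(2)
P = np.eye(2)
est = np.empty(t.size)
for k in range(t.size):
    x = F @ x + B * acc_meas[k]            # predict with the accelerometer
    P = F @ P @ F.T + Q
    if k % 100 == 0:                       # 1 Hz GPS displacement update
        z = true_disp[k] + rng.normal(0, gps_noise)
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y)
        P = (np.eye(2) - K @ H) @ P
    est[k] = x[0]
print("RMS displacement error [m]:", np.sqrt(np.mean((est - true_disp) ** 2)))
```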

  9. Universal Numeric Segmented Display

    CERN Document Server

    Azad, Md Abul kalam; Kamruzzaman, S M

    2010-01-01

    Segment displays play a vital role in displaying numerals. Matrix displays are also used for this purpose, because numerals have many curved edges that are better supported by a matrix display. However, as matrix displays are costly and complex to implement and also need more memory, segment displays are generally used to display numerals. Since no compact display architecture has yet been proposed that can display the numerals of multiple languages at a time, this paper proposes a uniform display architecture that can display the digits of multiple languages and general mathematical expressions with higher accuracy and simplicity by using an 18-segment display, which is an improvement over the 16-segment display.
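
    To make the segment-encoding idea concrete, a digit can be stored as a bitmask over named segments. The sketch below uses the familiar 7-segment layout purely for illustration; it is not the 18-segment, multi-language encoding proposed in the paper.

```python
# Illustrative only: digits as bitmasks over the classic 7 segments (a..g).
# An 18-segment multi-language encoding would use the same idea with a longer mask.
SEGMENTS = "abcdefg"
DIGIT_PATTERNS = {
    0: "abcdef", 1: "bc",     2: "abdeg",   3: "abcdg", 4: "bcfg",
    5: "acdfg",  6: "acdefg", 7: "abc",     8: "abcdefg", 9: "abcdfg",
}

def to_mask(pattern):
    # bit i is set exactly when segment SEGMENTS[i] is lit
    return sum(1 << SEGMENTS.index(s) for s in pattern)

for digit, pattern in DIGIT_PATTERNS.items():
    print(digit, format(to_mask(pattern), "07b"))
```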

  10. Fast Approximate Broadband Phase Retrieval for Segmented Systems

    Science.gov (United States)

    Jurling, Alden S.; Fienup, James R.

    2011-01-01

    Broadband phase retrieval is needed when: a) narrow spectral filters are unavailable; b) sources are dim; c) throughput is low due to misalignment; d) exposure times are short, e.g., because of pointing instability (space) or atmospheric instability (ground-based AO). The traditional approach is computationally burdensome for extreme bandwidths. Approximate approach: a) substitute a monochromatic model; b) blur the model and the data. Test case performance: a) approximately 270x reduction in computational cost for an FGS-like test case; b) good accuracy for a monolithic system; c) acceptable accuracy for segmented systems (reduced by diffraction and by the higher-order segment model).
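
    For contrast with the approximation above, the exact polychromatic PSF is an irradiance-weighted sum of monochromatic PSFs over the band. A minimal sketch follows, with an illustrative circular pupil, a toy tilt aberration and flat spectral weights (none of which come from the presentation).

```python
# Minimal sketch: exact broadband PSF as a weighted sum of monochromatic PSFs
# computed from one pupil. Circular pupil, 50 nm tilt and flat spectrum are toys.
# (The wavelength-dependent image scale is neglected in this simple example.)
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)        # unobstructed circular aperture
opd = 50e-9 * X                                   # 50 nm of tilt as a toy aberration

wavelengths = np.linspace(450e-9, 650e-9, 11)     # broad band, flat weights
weights = np.ones_like(wavelengths) / wavelengths.size

psf = np.zeros((N, N))
for wl, w in zip(wavelengths, weights):
    field = pupil * np.exp(2j * np.pi * opd / wl) # phase scales with 1/lambda
    mono = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    psf += w * mono / mono.sum()                  # normalise each monochromatic PSF

print("broadband PSF peak (normalised):", psf.max())
```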

  11. Design and Development of an Equipotential Voltage Reference (Grounding) System for a Low-Cost Rapid-Development Modular Spacecraft Architecture

    Science.gov (United States)

    Lukash, James A.; Daley, Earl

    2011-01-01

    This work describes the design and development effort to adapt rapid-development space hardware by creating a ground system using solutions of low complexity, mass, & cost. The Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft is based on the modular common spacecraft bus architecture developed at NASA Ames Research Center. The challenge was building upon the existing modular common bus design and development work and improving the LADEE spacecraft design by adding an Equipotential Voltage Reference (EVeR) system, commonly referred to as a ground system. This would aid LADEE in meeting Electromagnetic Environmental Effects (E3) requirements, thereby making the spacecraft more compatible with itself and its space environment. The methods used to adapt existing hardware are presented, including provisions which may be used on future spacecraft.

  12. Segment Level Based Heuristic for Workflow Cost-time Optimization in Grids

    Institute of Scientific and Technical Information of China (English)

    龙浩; 邸瑞华; 梁毅

    2011-01-01

    Workflow scheduling with the objective of cost-time optimization under a deadline constraint is a fundamental problem in grids, and in general the problem is NP-hard. For workflows represented by a directed acyclic graph (DAG), a novel heuristic called SL (Segment Level) is proposed. Considering the parallel and synchronization properties of the activities in the DAG, the workflow is divided into segments, the workflow deadline is transformed into time intervals assigned to the different segments, and the floating time is prorated to each segment to enlarge its cost-time optimization interval; a dynamic programming strategy is then used to optimize the cost within each segment. Extensive simulation experiments comparing SL with the MCP (Minimum Critical Path), DTL (Deadline Top Level) and DBL (Deadline Bottom Level) algorithms verify the effectiveness of the SL heuristic.
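
    The per-segment cost optimization described above can be illustrated with a small dynamic program that picks one service option per segment so that the total time stays within the deadline and the total cost is minimised. The segment options and deadline below are hypothetical; this is the general idea, not the SL algorithm itself.

```python
# Minimal sketch (hypothetical data): choose one (time, cost) option per workflow
# segment so that total time <= deadline and total cost is minimised -- the kind
# of dynamic programme used in deadline-constrained grid workflow cost optimisation.
import math

segments = [                        # each segment: list of (time, cost) service options
    [(2, 10), (4, 6), (6, 3)],
    [(1, 8), (3, 4)],
    [(2, 9), (5, 2)],
]
deadline = 10

best = {0: 0.0}                     # elapsed time -> minimal cost so far
for options in segments:
    nxt = {}
    for elapsed, cost in best.items():
        for t, c in options:
            if elapsed + t <= deadline:
                key = elapsed + t
                nxt[key] = min(nxt.get(key, math.inf), cost + c)
    best = nxt

print("minimal total cost within deadline:", min(best.values()) if best else None)
```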

  13. Cost Benefit Analysis Modeling Tool for Electric vs. ICE Airport Ground Support Equipment – Development and Results

    Energy Technology Data Exchange (ETDEWEB)

    James Francfort; Kevin Morrow; Dimitri Hochard

    2007-02-01

    This report documents efforts to develop a computer tool for modeling the economic payback for comparative airport ground support equipment (GSE) that are propelled by either electric motors or gasoline and diesel engines. The types of GSE modeled are pushback tractors, baggage tractors, and belt loaders. The GSE modeling tool includes an emissions module that estimates the amount of tailpipe emissions saved by replacing internal combustion engine GSE with electric GSE. This report contains modeling assumptions, methodology, a user’s manual, and modeling results. The model was developed based on the operations of two airlines at four United States airports.
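
    The core of such a payback model is a simple comparison of the capital premium against annual operating savings. A minimal sketch follows, with entirely hypothetical cost figures rather than the tool's validated inputs.

```python
# Minimal sketch: simple-payback comparison of electric vs. internal-combustion
# ground support equipment. All cost figures are hypothetical placeholders.
def simple_payback(extra_capital, annual_savings):
    """Years needed for annual operating savings to repay the capital premium."""
    return extra_capital / annual_savings if annual_savings > 0 else float("inf")

electric_price, ice_price = 60000.0, 45000.0          # purchase prices (USD)
electric_energy, ice_fuel = 2500.0, 9000.0            # annual energy/fuel cost (USD)
electric_maint, ice_maint = 1500.0, 4000.0            # annual maintenance cost (USD)

extra_capital = electric_price - ice_price
annual_savings = (ice_fuel + ice_maint) - (electric_energy + electric_maint)
print(f"payback: {simple_payback(extra_capital, annual_savings):.1f} years")
```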

  14. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Also traditional marketing theory has taken in consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its origin in other sciences as for example biology, anthropology etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers to different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain ... and analysed possible segments in the market. Results show that the statistical model used identified two segments - a segment of so-called "fish lovers" and another segment called "traditionalists". The "fish lovers" are very fond of eating fish and they actually prefer fish to other dishes...

  15. Impact of Screening on Behavior During Storage and Cost of Ground Small-Diameter Pine Trees: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Erin Searcy; Brad D Blackwelder; Mark E Delwiche; Allison E Ray; Kevin L Kenney

    2011-10-01

    Whole comminuted trees are known to self-heat and undergo quality changes during storage. Trommel screening after grinding is a process that removes fines from the screened material and removes a large proportion of high-ash, high-nutrient material. In this study, the trade-off between an increase in preprocessing cost from trommel screening and an increase in quality of the screened material was examined. Fresh lodgepole pine (Pinus contorta) was comminuted using a drum grinder with a 10-cm screen, and the resulting material was distributed into separate fines and overs piles. A third pile of unscreened material, the unsorted pile, was also examined. The three piles exhibited different characteristics during a 6-week storage period. The overs pile was much slower to heat. The overs pile reached a maximum temperature of 56.88 degrees C, which was lower than the maximum reached by the other two piles (65.98 degrees C and 63.48 degrees C for the unsorted and fines, respectively). The overs also cooled faster and dried to a more uniform moisture content and had a lower ash content than the other two piles. Both piles of sorted material exhibited improved airflow and more drying than the unsorted material. Looking at supply system costs from preprocessing through in-feed into thermochemical conversion, this study found that trommel screening reduced system costs by over $3.50 per dry matter ton and stabilized material during storage.

  16. Diffractive imaging analysis of large-aperture segmented telescope based on partial Fourier transform

    Science.gov (United States)

    Dong, Bing; Qin, Shun; Hu, Xinqi

    2013-09-01

    Large-aperture segmented primary mirrors will be widely used in next-generation space-based and ground-based telescopes. The effects of intersegment gaps, obstructions, and position and figure errors of segments, which are all involved in the pupil plane, on the image quality metric should be analyzed using diffractive imaging theory. The traditional Fast Fourier Transform (FFT) method is very time-consuming and consumes a large amount of memory, especially when dealing with large pupil-sampling matrices. A Partial Fourier Transform (PFT) method is first proposed to substantially speed up the computation and reduce memory usage for diffractive imaging analysis. Diffraction effects of a 6-meter segmented mirror comprising 18 hexagonal segments are simulated and analyzed using the PFT method. The influence of intersegment gaps and position errors of segments on the Strehl ratio is quantitatively analyzed by computing the Point Spread Function (PSF). By comparing simulation results with theoretical results, the correctness and feasibility of the PFT method are confirmed.
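
    The Strehl-ratio computation sketched above amounts to forming a pupil with phase errors, Fourier transforming it, and comparing the PSF peak with that of the unaberrated pupil. The minimal FFT-based sketch below illustrates that step only (it is not the PFT method), and its circular pupil with a blocky random piston map is an illustrative stand-in for the hexagonal segment geometry.

```python
# Minimal sketch: Strehl ratio of a pupil with a random piston phase map,
# computed with a plain FFT. Pupil shape and piston amplitude are illustrative.
import numpy as np

N = 512
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)

rng = np.random.default_rng(0)
# toy "segment" piston map: 8x8 coarse blocks with random piston (~lambda/20 rms)
blocks = rng.normal(0, 2 * np.pi / 20, (8, 8))
piston = np.kron(blocks, np.ones((N // 8, N // 8)))

def psf_peak(phase):
    field = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fft2(field)).max() ** 2   # peak intensity of the PSF

strehl = psf_peak(piston) / psf_peak(np.zeros_like(pupil))
print(f"Strehl ratio ~ {strehl:.3f}")
```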

  17. A cost effective and operational methodology for wall to wall Above Ground Biomass (AGB) and carbon stocks estimation and mapping: Nepal REDD+

    Science.gov (United States)

    Gilani, H., Sr.; Ganguly, S.; Zhang, G.; Koju, U. A.; Murthy, M. S. R.; Nemani, R. R.; Manandhar, U.; Thapa, G. J.

    2015-12-01

    Nepal is a landlocked country with 39% forest cover of its total land area (147,181 km2). Under the Forest Carbon Partnership Facility (FCPF), implemented by the World Bank (WB), Nepal was chosen as one of four countries best suited for a results-based payment system under the Reducing Emissions from Deforestation and Forest Degradation (REDD and REDD+) scheme. At the national level, Landsat-based analysis shows that from 1990 to 2000 the forest area declined by 2% (1,467 km2), whereas from 2000 to 2010 it declined by only 0.12% (176 km2). A cost-effective monitoring and evaluation system for REDD+ requires a balanced approach of remote sensing and ground measurements. This paper provides, for Nepal, a cost-effective and operational 30 m Above Ground Biomass (AGB) estimation and mapping methodology using freely available satellite data integrated with a field inventory. Leaf Area Index (LAI) was generated following the methodology proposed by Ganguly et al. (2012) using cloud-free Landsat-8 OLI images. To generate a tree canopy height map, a density scatter graph between the maximum height estimated by the Geoscience Laser Altimeter System (GLAS) on the Ice, Cloud, and Land Elevation Satellite (ICESat) and the Landsat LAI nearest to the centre coordinates of the GLAS shots shows a moderate but significant exponential correlation (31.211*LAI^0.4593, R2 = 0.33, RMSE = 13.25 m). In the field, 1,124 well-distributed circular plots (750 m2 and 500 m2; about 0.001% of the forest cover) were measured and used to estimate AGB (ton/ha) with the equations proposed by Sharma et al. (1990) for all tree species of Nepal. A satisfactory linear relationship (AGB = 8.7018*Hmax - 101.24, R2 = 0.67, RMSE = 7.2 ton/ha) was achieved between maximum canopy height (Hmax) and AGB (ton/ha). This cost-effective and operational methodology is replicable over 5-10 years with minimal ground sampling through the integration of satellite images. The developed AGB map was used to produce optimum fuel wood scenarios using population and road
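
    The two regressions quoted in the abstract chain together into a simple per-pixel AGB estimate. The sketch below applies the reported coefficients to an arbitrary LAI value; the input value is illustrative, not a measurement from the study.

```python
# Apply the relations quoted in the abstract: canopy height from Landsat LAI
# (Hmax = 31.211 * LAI**0.4593) and AGB from canopy height
# (AGB = 8.7018 * Hmax - 101.24). The LAI value below is illustrative.
def hmax_from_lai(lai):
    return 31.211 * lai ** 0.4593          # metres

def agb_from_hmax(hmax):
    return 8.7018 * hmax - 101.24          # ton/ha

lai = 3.0                                  # illustrative pixel value
hmax = hmax_from_lai(lai)
print(f"Hmax = {hmax:.1f} m, AGB = {agb_from_hmax(hmax):.1f} ton/ha")
```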

  18. Low-cost approach for a software-defined radio based ground station receiver for CCSDS standard compliant S-band satellite communications

    Science.gov (United States)

    Boettcher, M. A.; Butt, B. M.; Klinkner, S.

    2016-10-01

    A major concern of a university satellite mission is to download the payload and the telemetry data from a satellite. While ground station antennas are in general easy to procure with limited effort, the receiving unit most certainly is not. The flexible and low-cost software-defined radio (SDR) transceiver "BladeRF" is used to receive the QPSK-modulated and CCSDS-compliant coded data of a satellite in the HAM radio S-band. The control software is based on the Open Source program GNU Radio, which is also used to perform CCSDS post-processing of the binary bit stream. The test results show good performance of the receiving system.
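
    Downstream of the SDR front end, QPSK demodulation ultimately reduces to mapping complex baseband symbols to bit pairs. The plain-numpy sketch below shows hard-decision demapping of Gray-coded QPSK on synthetic symbols; the actual receiver in the paper is built from GNU Radio blocks, which this sketch does not reproduce.

```python
# Minimal sketch: hard-decision demapping of Gray-coded QPSK symbols.
# Synthetic noisy symbols stand in for the SDR's synchronised baseband output.
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=200)
pairs = bits.reshape(-1, 2)

# Gray mapping: (b0, b1) -> ((1 - 2*b0) + 1j*(1 - 2*b1)) / sqrt(2)
symbols = ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)
noisy = symbols + 0.1 * (rng.normal(size=symbols.size)
                         + 1j * rng.normal(size=symbols.size))

demapped = np.empty_like(pairs)
demapped[:, 0] = (noisy.real < 0).astype(int)   # sign of I decides the first bit
demapped[:, 1] = (noisy.imag < 0).astype(int)   # sign of Q decides the second bit

print("bit errors:", int(np.sum(demapped != pairs)))
```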

  19. An economic analysis of space solar power and its cost competitiveness as a supplemental source of energy for space and ground markets

    Science.gov (United States)

    Marzwell, N. I.

    2002-01-01

    Economic growth has historically been associated with the nations that first made use of each new energy source. There is no doubt that solar power satellites rank high as a potential energy system for the future. A conceptual cost model of the economic value of space solar power (SSP) as a source of complementary power for in-space and ground applications is discussed. Several financial analyses are offered, based on present and new technological innovations that may compete with or be complementary to present energy market suppliers, depending on various institutional arrangements for government and the private sector in a global economy. All systems based on fossil fuels such as coal, oil, natural gas, and synthetic fuels share the problem of being finite resources and are subject to ever-increasing cost as they grow ever more scarce with the drastic increase in world population. Increasing world population and the requirements of emerging underdeveloped countries will also increase overall demand. This paper compares the future value of SSP with that of other terrestrial renewable energy sources in distinct geographic markets within the US, in developing countries, Europe, Asia, and Eastern Europe.

  20. Fingerprint Segmentation

    OpenAIRE

    Jomaa, Diala

    2009-01-01

    In this thesis, a new algorithm is proposed to segment the foreground of a fingerprint from the image under consideration. The algorithm uses three features: mean, variance and coherence. Based on these features, a rule system is built to help the algorithm segment the image efficiently. In addition, the proposed algorithm combines split-and-merge with a modified Otsu method. Enhancement techniques such as Gaussian filtering and histogram equalization are applied to enhance and improve th...

  1. Design and testing of Ground Penetrating Radar equipment dedicated for civil engineering applications: ongoing activities in Working Group 1 of COST Action TU1208

    Science.gov (United States)

    Pajewski, Lara; Manacorda, Guido; Persico, Raffaele

    2015-04-01

    This work aims at presenting the ongoing research activities carried out in Working Group 1 'Novel GPR instrumentation' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. Working Group 1 (WG1) of the Action focuses on the development of innovative GPR equipment dedicated to civil engineering applications. It includes three Projects. Project 1.1 is focused on the 'Design, realisation and optimisation of innovative GPR equipment for the monitoring of critical transport infrastructures and buildings, and for the sensing of underground utilities and voids.' Project 1.2 is concerned with the 'Development and definition of advanced testing, calibration and stability procedures and protocols, for GPR equipment.' Project 1.3 deals with the 'Design, modelling and optimisation of GPR antennas.' During the first year of the Action, WG1 Members coordinated among themselves to address the state of the art and open problems in the scientific fields identified by the above-mentioned Projects [1, 2]. In carrying out this work, the WG1 strongly benefited from the participation of IDS Ingegneria dei Sistemi, one of the biggest GPR manufacturers, as well as from the contribution of external experts such as David J. Daniels and Erica Utsi, sharing with the Action Members their wide experience on GPR technology and methodology (First General Meeting, July 2013). The synergy with WG2 and WG4 of the Action was useful for a deep understanding of the problems, merits and limits of available GPR equipment, as well as to discuss how to quantify the reliability of GPR results. An

  2. Weighing up the costs of seeking health care for dengue symptoms: a grounded theory study of backpackers' decision-making processes.

    Science.gov (United States)

    Vajta, Bálint; Holberg, Mette; Mills, Jane; McBride, William J H

    2015-01-01

    Dengue fever, a mosquito-borne virus, is an ongoing public health issue in North Queensland. Importation of dengue fever by travellers visiting or returning to Australia can lead to epidemics. The mosquito can acquire the virus in the symptomatic viraemic phase, so timely recognition of cases is important to prevent epidemics. There is a gap in the literature about backpackers' knowledge of dengue fever and the decision-making process they use when considering utilising the Australian health-care system. This study uses grounded theory methods to construct a theory that explains the process backpackers use when seeking health care. Fifty semi-structured interviews with backpackers, hostel receptionists, travel agents and pharmacists were analysed, resulting in identification of a core category: 'weighing up the costs of seeking health care'. This core category has three subcategories: 'self-assessment of health status', 'wait-and-see' and 'seek direction'. Findings from this study identified key areas where health promotion material and increased access to health-care professionals could reduce the risk of backpackers spreading dengue fever.

  3. Comparison of In-Hospital Mortality, Length of Stay, Postprocedural Complications, and Cost of Single-Vessel Versus Multivessel Percutaneous Coronary Intervention in Hemodynamically Stable Patients With ST-Segment Elevation Myocardial Infarction (from Nationwide Inpatient Sample [2006 to 2012]).

    Science.gov (United States)

    Panaich, Sidakpal S; Arora, Shilpkumar; Patel, Nilay; Schreiber, Theodore; Patel, Nileshkumar J; Pandya, Bhavi; Gupta, Vishal; Grines, Cindy L; Deshmukh, Abhishek; Badheka, Apurva O

    2016-10-01

    The primary objective of our study was to evaluate the in-hospital outcomes in terms of mortality, procedural complications, hospitalization costs, and length of stay (LOS) after multivessel percutaneous coronary intervention (MVPCI) in hemodynamically stable patients with ST-segment elevation myocardial infarction (STEMI). The study cohort was derived from the Healthcare Cost and Utilization Project Nationwide Inpatient Sample database, years 2006 to 2012. Percutaneous coronary interventions (PCI) performed during STEMI were identified using appropriate International Classification of Diseases, Ninth Revision, diagnostic and procedural codes. Patients in cardiogenic shock were excluded. Hierarchical mixed-effects logistic regression models were used for categorical dependent variables such as in-hospital mortality and composite of in-hospital mortality and complications, and hierarchical mixed-effects linear regression models were used for continuous dependent variables such as cost of hospitalization and LOS. We identified 106,317 (weighted n = 525,161) single-vessel PCI and 15,282 (weighted n = 74,543) MVPCIs. MVPCI (odds ratio, 95% confidence interval [CI], p value) was not associated with significant increase in in-hospital mortality (0.99, 0.85 to 1.15, 0.863) but predicted a higher composite end point of in-hospital mortality and postprocedural complications (1.09, 1.02 to 1.17, 0.013) compared to single-vessel PCI. MVPCI was also predictive of longer LOS (LOS +0.19 days, 95% CI +0.14 to +0.23 days, p <0.001) and higher hospitalization costs (cost +$4,445, 95% CI +$4,128 to +$4,762, p <0.001). MVPCI performed during STEMI in hemodynamically stable patients is associated with no increase in in-hospital mortality but a higher rate of postprocedural complications and longer LOS and greater hospitalization costs compared to single-vessel PCI.

  4. A Flexible Semi-Automatic Approach for Glioblastoma multiforme Segmentation

    CERN Document Server

    Egger, Jan; Kuhnt, Daniela; Kappus, Christoph; Carl, Barbara; Freisleben, Bernd; Nimsky, Christopher

    2011-01-01

    Gliomas are the most common primary brain tumors, evolving from the cerebral supportive cells. For clinical follow-up, the evaluation of the preoperative tumor volume is essential. Volumetric assessment of tumor volume with manual segmentation of its outlines is a time-consuming process that can be overcome with the help of segmentation methods. In this paper, a flexible semi-automatic approach for grade IV glioma segmentation is presented. The approach uses a novel segmentation scheme for spherical objects that creates a directed 3D graph. Thereafter, the minimal cost closed set on the graph is computed via a polynomial time s-t cut, creating an optimal segmentation of the tumor. The user can improve the results by specifying an arbitrary number of additional seed points to support the algorithm with grey value information and geometrical constraints. The presented method is tested on 12 magnetic resonance imaging datasets. The ground truth of the tumor boundaries is manually extracted by neurosurgeons. The...
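
    The polynomial-time s-t cut at the heart of the approach can be illustrated on a tiny capacity graph with networkx. The toy graph below, with made-up grey-value affinities and smoothness edges, is a generic stand-in and not the spherical template graph the paper builds from the MRI data.

```python
# Toy illustration of the s-t min-cut step: nodes are linked to a source ("object")
# and sink ("background") with capacities derived from grey values, plus smoothness
# edges between neighbours. All capacities are made up for the example.
import networkx as nx

G = nx.DiGraph()
# terminal links: (node, object-affinity, background-affinity)
for node, obj, bkg in [("a", 9, 1), ("b", 7, 3), ("c", 2, 8), ("d", 1, 9)]:
    G.add_edge("s", node, capacity=obj)
    G.add_edge(node, "t", capacity=bkg)
# neighbour links encouraging a smooth boundary
for u, v in [("a", "b"), ("b", "c"), ("c", "d")]:
    G.add_edge(u, v, capacity=2)
    G.add_edge(v, u, capacity=2)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
print("cut cost:", cut_value)
print("object nodes:", sorted(source_side - {"s"}))
```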

  5. On the cost of approximating and recognizing a noise perturbed straight line or a quadratic curve segment in the plane

    Science.gov (United States)

    Cooper, D. B.; Yalabik, N.

    1975-01-01

    Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas are parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as does straight line fitting, and curves relating CPU time and fitting error were determined for straight line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.
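
    The relative cost of line versus quadratic fitting can be mirrored in a few lines of linear least squares. Note that the paper parameterizes ellipses and hyperbolas as implicit quadratics in x and y; the sketch below uses the simpler explicit polynomial y = f(x) on synthetic data purely to illustrate the residual and timing comparison.

```python
# Minimal sketch: least-squares fit of a straight line y = a*x + b versus a
# quadratic y = a*x**2 + b*x + c to the same noisy synthetic data, comparing
# residuals and timing (a rough analogue of the CPU-time comparison).
import time
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 500)
y = 0.8 * x + 0.1 + rng.normal(0, 0.05, x.size)   # underlying curve is a line

def fit(design):
    t0 = time.perf_counter()
    coef, res, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef, res, time.perf_counter() - t0

line_design = np.column_stack([x, np.ones_like(x)])
quad_design = np.column_stack([x**2, x, np.ones_like(x)])

for name, design in [("line", line_design), ("quadratic", quad_design)]:
    coef, res, dt = fit(design)
    print(f"{name}: residual SS = {res[0]:.4f}, time = {dt*1e6:.0f} us")
```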

  6. The science and technology case for a global network of compact, low cost ground-based laser heterodyne radiometers for column measurements of CO2 and CH4

    Science.gov (United States)

    Mao, J.; Clarke, G.; Wilson, E. L.; Palmer, P. I.; Feng, L.; Ramanathan, A. K.; Ott, L. E.; Duncan, B. N.; Melroy, H.; McLinden, M.; DiGregorio, A.

    2015-12-01

    The importance of atmospheric carbon dioxide (CO2) and methane (CH4) in determining Earth's climate is well established. Recent technological developments in space-borne instrumentation have enabled us to observe changes in these gases to a precision necessary to infer the responsible geographical fluxes. The Total Carbon Column Observing Network (TCCON), comprising a network of upward-looking Fourier transform spectrometers, was established to provide an accurate ground truth and minimize regional systematic bias. NASA Goddard Space Flight Center (GSFC) has developed a compact, low-cost laser heterodyne radiometer (LHR) for global column measurements of CO2 and CH4. This Mini-LHR is a passive instrument that uses sunlight as the primary light source to measure absorption of CO2 and CH4 in the shortwave infrared near 1.6 microns. It uses compact telecommunications lasers to offer a low-cost instrument designed to operate in tandem with the AErosol RObotic NETwork (AERONET), which has more than 500 sites worldwide. In addition, the NASA Micro-Pulse Lidar Network (MPLNET) provides both column and vertically resolved aerosol and cloud data in active remote sensing at nearly 50 sites worldwide. Tandem operation with AERONET/MPLNET provides a clear pathway for the Mini-LHR to be expanded into a global monitoring network for carbon cycle science and satellite data validation, offering coverage in cloudy regions (e.g., Amazon basin) and key regions such as the Arctic where accelerated warming due to the release of CO2 and CH4 from thawing tundra and permafrost is a concern. These vulnerable geographic regions are not well covered by current space-based CO2 and CH4 measurements. We will present an overview of our instrument development and the implementation of a network based on current and future resources. We will also present preliminary Observing System Simulation Experiments to demonstrate the effectiveness of a network of Mini-LHR instruments in quantifying regional CO2 fluxes, including an analysis of measurement sensitivity

  7. [Segmental neurofibromatosis].

    Science.gov (United States)

    Zulaica, A; Peteiro, C; Pereiro, M; Pereiro Ferreiros, M; Quintas, C; Toribio, J

    1989-01-01

    Four cases of segmental neurofibromatosis (SNF) are reported. It is a rare entity considered to be a localized variant of neurofibromatosis (NF), Riccardi's type V. Two cases are male and two female. The lesions are located on the head in one patient and on the trunk in the other three cases. Neither a family history nor transmission to progeny was observed. The remaining organs are unaffected.

  8. The cost of living in the membrane: a case study of hydrophobic mismatch for the multi-segment protein LeuT.

    Science.gov (United States)

    Mondal, Sayan; Khelashvili, George; Shi, Lei; Weinstein, Harel

    2013-04-01

    Many observations of the role of the membrane in the function and organization of transmembrane (TM) proteins have been explained in terms of hydrophobic mismatch between the membrane and the inserted protein. For a quantitative investigation of this mechanism in the lipid-protein interactions of functionally relevant conformations adopted by a multi-TM segment protein, the bacterial leucine transporter (LeuT), we employed a novel method, Continuum-Molecular Dynamics (CTMD), that quantifies the energetics of hydrophobic mismatch by combining the elastic continuum theory of membrane deformations with an atomistic level description of the radially asymmetric membrane-protein interface from MD simulations. LeuT has been serving as a model for structure-function studies of the mammalian neurotransmitter:sodium symporters (NSSs), such as the dopamine and serotonin transporters, which are the subject of intense research in the field of neurotransmission. The membrane models in which LeuT was embedded for these studies were composed of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) lipid, or 3:1 mixture of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine (POPE) and 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoglycerol (POPG) lipids. The results show that deformation of the host membrane alone is not sufficient to alleviate the hydrophobic mismatch at specific residues of LeuT. The calculations reveal significant membrane thinning and water penetration due to the specific local polar environment produced by the charged K288 of TM7 in LeuT, that is membrane-facing deep inside the hydrophobic milieu of the membrane. This significant perturbation is shown to result in unfavorable polar-hydrophobic interactions at neighboring hydrophobic residues in TM1a and TM7. We show that all the effects attributed to the K288 residue (membrane thinning, water penetration, and the unfavorable polar-hydrophobic interactions at TM1a and TM7), are abolished in calculations with the

  9. Mixed segmentation

    DEFF Research Database (Denmark)

    Bonde, Anders; Aagaard, Morten; Hansen, Allan Grutt

    This book is about using recent developments in the fields of data analytics and data visualization to frame new ways of identifying target groups in media communication. Based on a mixed-methods approach, the authors combine psychophysiological monitoring (galvanic skin response) with textual content analysis and audience segmentation in a single-source perspective. The aim is to explain and understand target groups in relation to, on the one hand, emotional response to commercials or other forms of audio-visual communication and, on the other hand, living preferences and personality traits...

  10. Automatic segmentation of kidneys from non-contrast CT images using efficient belief propagation

    Science.gov (United States)

    Liu, Jianfei; Linguraru, Marius George; Wang, Shijun; Summers, Ronald M.

    2013-03-01

    CT colonography (CTC) can increase the chance of detecting high-risk lesions not only within the colon but anywhere in the abdomen with a low cost. Extracolonic findings such as calculi and masses are frequently found in the kidneys on CTC. Accurate kidney segmentation is an important step to detect extracolonic findings in the kidneys. However, noncontrast CTC images make the task of kidney segmentation substantially challenging because the intensity values of kidney parenchyma are similar to those of adjacent structures. In this paper, we present a fully automatic kidney segmentation algorithm to support extracolonic diagnosis from CTC data. It is built upon three major contributions: 1) localize kidney search regions by exploiting the segmented liver and spleen as well as body symmetry; 2) construct a probabilistic shape prior handling the issue of kidney touching other organs; 3) employ efficient belief propagation on the shape prior to extract the kidneys. We evaluated the accuracy of our algorithm on five non-contrast CTC datasets with manual kidney segmentation as the ground-truth. The Dice volume overlaps were 88%/89%, the root-mean-squared errors were 3.4 mm/2.8 mm, and the average surface distances were 2.1 mm/1.9 mm for the left/right kidney respectively. We also validated the robustness on 27 additional CTC cases, and 23 datasets were successfully segmented. In four problematic cases, the segmentation of the left kidney failed due to problems with the spleen segmentation. The results demonstrated that the proposed algorithm could automatically and accurately segment kidneys from CTC images, given the prior correct segmentation of the liver and spleen.

  11. Segmented blockcopolymers with uniform amide segments

    NARCIS (Netherlands)

    Husken, D.; Krijgsman, J.; Gaymans, R.J.

    2004-01-01

    Segmented blockcopolymers based on poly(tetramethylene oxide) (PTMO) soft segments and uniform crystallisable tetra-amide segments (TxTxT) are made via polycondensation. The PTMO soft segments, with a molecular weight of 1000 g/mol, are extended with terephthalic groups to a molecular weight of 6000

  12. Cost-effectiveness of fondaparinux in patients with acute coronary syndrome without ST-segment elevation

    Directory of Open Access Journals (Sweden)

    Camila Pepe

    2012-07-01

    acute coronary syndrome without ST-segment elevation (ACSWSTE) reduces cardiovascular events. Fondaparinux has demonstrated equivalence to enoxaparin in reducing cardiovascular events, but with a lower rate of bleeding in patients using fondaparinux. OBJECTIVE: To evaluate the cost-effectiveness of fondaparinux versus enoxaparin in patients with ACSWSTE in Brazil from the economic perspective of the Brazilian Unified Health System (SUS). METHODS: A decision-analytic model was constructed to calculate the costs and consequences of the compared treatments. The model parameters were obtained from the OASIS-5 study (N = 20,078 patients with ACSWSTE randomized to fondaparinux or enoxaparin). The target outcome consisted of cardiovascular events (i.e., death, myocardial infarction, refractory ischemia and major bleeding) on days 9, 30 and 180 after ACSWSTE. We evaluated all direct costs of treatment and ACSWSTE-related events. The year of the analysis was 2010 and the costs were described in reais (R$). RESULTS: On day 9, the cost of treatment per patient was R$ 2,768 for fondaparinux and R$ 2,852 for enoxaparin. Approximately 80% of total costs were associated with invasive treatments. The drug costs accounted for 10% of the total cost. The combined rates of cardiovascular events and major bleeding were 7.3% and 9.0% for fondaparinux and enoxaparin, respectively. Sensitivity analyses confirmed the initial results of the model. CONCLUSION: The use of fondaparinux for the treatment of patients with ACSWSTE is superior to that of enoxaparin in terms of prevention of further cardiovascular events at a lower cost.

  13. Multiatlas segmentation as nonparametric regression.

    Science.gov (United States)

    Awate, Suyash P; Whitaker, Ross T

    2014-09-01

    This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems.
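
    The simplest member of the label-fusion family analysed here is per-voxel majority voting over already-registered atlas label maps. A minimal sketch on synthetic toy data follows; the paper's analysis covers more general fusion schemes than this.

```python
# Minimal sketch: majority-vote label fusion over a stack of (already registered)
# atlas label maps. The tiny 4x4 binary label maps are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_atlases, shape = 7, (4, 4)
# synthetic maps that mostly agree on the left half being labelled "1"
truth = np.zeros(shape, dtype=int)
truth[:, :2] = 1
atlases = np.stack([np.where(rng.random(shape) < 0.85, truth, 1 - truth)
                    for _ in range(n_atlases)])

votes = atlases.sum(axis=0)                      # number of atlases voting "1"
fused = (votes > n_atlases / 2).astype(int)      # majority vote per voxel
print("fused label map:\n", fused)
print("voxels differing from truth:", int(np.sum(fused != truth)))
```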

  14. A comparison study of atlas-based 3D cardiac MRI segmentation: global versus global and local transformations

    Science.gov (United States)

    Daryanani, Aditya; Dangi, Shusil; Ben-Zikri, Yehuda Kfir; Linte, Cristian A.

    2016-03-01

    Magnetic Resonance Imaging (MRI) is a standard-of-care imaging modality for cardiac function assessment and guidance of cardiac interventions thanks to its high image quality and lack of exposure to ionizing radiation. Cardiac health parameters such as left ventricular volume, ejection fraction, myocardial mass, thickness, and strain can be assessed by segmenting the heart from cardiac MRI images. Furthermore, the segmented pre-operative anatomical heart models can be used to precisely identify regions of interest to be treated during minimally invasive therapy. Hence, the use of accurate and computationally efficient segmentation techniques is critical, especially for intra-procedural guidance applications that rely on the peri-operative segmentation of subject-specific datasets without delaying the procedure workflow. Atlas-based segmentation incorporates prior knowledge of the anatomy of interest from expertly annotated image datasets. Typically, the ground truth atlas label is propagated to a test image using a combination of global and local registration. The high computational cost of non-rigid registration motivated us to obtain an initial segmentation using global transformations based on an atlas of the left ventricle from a population of patient MRI images and refine it using a well-developed technique based on graph cuts. Here we quantitatively compare the segmentations obtained from the global and global plus local atlases and refined using graph cut-based techniques with the expert segmentations according to several similarity metrics, including Dice correlation coefficient, Jaccard coefficient, Hausdorff distance, and Mean absolute distance error.
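
    The similarity metrics listed above are straightforward to compute from binary masks. A minimal sketch of Dice, Jaccard and a symmetric Hausdorff distance on toy masks follows (the masks are synthetic placeholders, not the study's segmentations).

```python
# Minimal sketch: Dice, Jaccard and symmetric Hausdorff distance between two
# binary segmentation masks. The toy masks below are synthetic placeholders.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

a = np.zeros((64, 64), dtype=bool)
b = np.zeros((64, 64), dtype=bool)
a[16:48, 16:48] = True            # "expert" square
b[18:50, 14:46] = True            # slightly shifted "automatic" segmentation

inter = np.logical_and(a, b).sum()
dice = 2 * inter / (a.sum() + b.sum())
jaccard = inter / np.logical_or(a, b).sum()

pts_a = np.argwhere(a)            # pixel coordinates of each mask
pts_b = np.argwhere(b)
hausdorff = max(directed_hausdorff(pts_a, pts_b)[0],
                directed_hausdorff(pts_b, pts_a)[0])

print(f"Dice = {dice:.3f}, Jaccard = {jaccard:.3f}, Hausdorff = {hausdorff:.1f} px")
```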

  15. Efficient segmentation by sparse pixel classification

    DEFF Research Database (Denmark)

    Dam, Erik B; Loog, Marco

    2008-01-01

    Segmentation methods based on pixel classification are powerful but often slow. We introduce two general algorithms, based on sparse classification, for optimizing the computation while still obtaining accurate segmentations. The computational costs of the algorithms are derived, and they are demonstrated on real 3-D magnetic resonance imaging and 2-D radiograph data. We show that each algorithm is optimal for specific tasks, and that both algorithms allow a speedup of one or more orders of magnitude on typical segmentation tasks.

  16. The Effects of Management Initiatives on the Costs and Schedules of Defense Acquisition Programs. Volume 2. Analyses of Ground Combat and Ship Programs

    Science.gov (United States)

    1992-11-01


  17. Rhythm-based segmentation of Popular Chinese Music

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2005-01-01

    We present a new method to segment popular music based on rhythm. By computing a shortest path based on the self-similarity matrix calculated from a model of rhythm, segmenting boundaries are found along the diagonal of the matrix. The cost of a new segment is optimized by matching manual and automatic segment boundaries. We compile a small song database of 21 randomly selected popular Chinese songs which come from Mainland China, Taiwan and Hong Kong. The segmentation results on this small corpus show that 78% of manual segmentation points are detected and 74% of automatic segmentation points
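
    The self-similarity matrix at the core of the method is simply a pairwise distance matrix over per-frame features. The sketch below builds one from random feature vectors standing in for the rhythm-model output, and uses a crude novelty measure in place of the paper's shortest-path boundary search.

```python
# Minimal sketch: build a self-similarity (distance) matrix from per-frame feature
# vectors; random vectors stand in for the rhythm features used in the paper.
# The paper finds boundaries via a shortest path; here a crude novelty cue is used.
import numpy as np

rng = np.random.default_rng(0)
frames = np.vstack([rng.normal(0.0, 1.0, (40, 12)),     # "verse"-like frames
                    rng.normal(3.0, 1.0, (40, 12))])    # "chorus"-like frames

diff = frames[:, None, :] - frames[None, :, :]
ssm = np.linalg.norm(diff, axis=-1)                     # Euclidean distance matrix

# boundary cue: cross-block dissimilarity high, within-block dissimilarity low
novelty = [ssm[:k, k:].mean() - 0.5 * (ssm[:k, :k].mean() + ssm[k:, k:].mean())
           for k in range(5, frames.shape[0] - 5)]
print("estimated boundary frame:", 5 + int(np.argmax(novelty)))
```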

  18. Anatomy-aware measurement of segmentation accuracy

    Science.gov (United States)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both Dice coefficient and Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only can the measurement of individual users change, but the ranking of users' segmentation skills may also require reordering.
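
    To make the idea concrete, a zone-weighted variant of the Dice coefficient along these lines might be sketched as follows (a simplified illustration under assumed inputs, not the authors' exact formulation; the zone labels and weights are hypothetical):

    import numpy as np

    def anatomy_aware_dice(seg, gold, zone_map, zone_weights):
        """Weighted Dice: pixels in anatomically important zones count more.

        seg, gold   : binary masks (automatic segmentation and consensus 'master gold')
        zone_map    : integer label image assigning each pixel to a zone (0 = none)
        zone_weights: dict mapping zone label -> relative importance weight
        """
        w = np.ones_like(zone_map, dtype=float)
        for zone, weight in zone_weights.items():
            w[zone_map == zone] = weight
        seg, gold = seg.astype(bool), gold.astype(bool)
        inter = (w * (seg & gold)).sum()
        return 2.0 * inter / ((w * seg).sum() + (w * gold).sum())

    # Example: a sphincter zone (hypothetical label 2) weighted 3x relative to the rest
    # dsc = anatomy_aware_dice(auto_mask, master_gold, zones, {2: 3.0})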

  19. RSRM Segment Train Derailment and Recovery

    Science.gov (United States)

    Taylor Jr., Robert H.; McConnaugghey, Paul K.; Beaman, David E.; Moore, Dennis R.; Reed, Harry

    2008-01-01

    On May 2, 2007, a freight train carrying segments of the space shuttle's solid rocket boosters derailed in Myrtlewood, Alabama, after a rail trestle collapsed. The train was carrying Reusable Solid Rocket Motor (RSRM) 98 center and forward segments (STS-120) and RSRM 99 aft segments (STS-122). Initially, it was not known if the segments had been seriously damaged. Four segments dropped approximately 10 feet when the trestle collapsed, and one of those four rolled off the track onto its side. The exit cones and the other four segments, not yet on the trestle, remained on solid ground. ATK and NASA immediately dispatched an investigation and recovery team to determine the safety of the situation and eventually the usability of the segments and exit cones for flight. Instrumentation on each segment provided invaluable data to determine the acceleration loads imparted to each loaded segment and exit cone. This paper details the incident, the recovery plan and the teamwork that created a success story, ending with the safe launch of STS-120 using the four center segments and the launch of STS-122 using the aft exit cone assemblies.

  20. Strategic market segmentation

    National Research Council Canada - National Science Library

    Maričić Branko R; Đorđević Aleksandar

    2015-01-01

    ..., requires a segmented approach to the market that appreciates differences in expectations and preferences of customers. One of the significant activities in strategic planning of marketing activities is market segmentation...

  1. Ground State Spin Logic

    CERN Document Server

    Whitfield, J D; Biamonte, J D

    2012-01-01

    Designing and optimizing cost functions and energy landscapes is a problem encountered in many fields of science and engineering. These landscapes and cost functions can be embedded and annealed in experimentally controllable spin Hamiltonians. Using an approach based on group theory and symmetries, we examine the embedding of Boolean logic gates into the ground state subspace of such spin systems. We describe parameterized families of diagonal Hamiltonians and symmetry operations which preserve the ground state subspace encoding the truth tables of Boolean formulas. The ground state embeddings of adder circuits are used to illustrate how gates are combined and simplified using symmetry. Our work is relevant for experimental demonstrations of ground state embeddings found in both classical optimization as well as adiabatic quantum optimization.
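
    As a concrete toy example of a diagonal Hamiltonian whose ground-state subspace encodes a Boolean truth table, the sketch below enumerates the energies of a standard quadratic penalty for an AND gate (a well-known construction shown purely for illustration; it is not taken from the paper):

    from itertools import product

    def and_gate_penalty(x1, x2, y):
        """Quadratic pseudo-Boolean penalty whose minima (value 0) occur
        exactly on assignments satisfying y = x1 AND x2."""
        return x1 * x2 - 2 * (x1 + x2) * y + 3 * y

    # Enumerate the diagonal 'energy landscape' over all 2^3 assignments
    for x1, x2, y in product((0, 1), repeat=3):
        e = and_gate_penalty(x1, x2, y)
        tag = "ground state" if e == 0 else ""
        print(f"x1={x1} x2={x2} y={y}  E={e}  {tag}")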

  2. Ground Vehicle Robotics Presentation

    Science.gov (United States)

    2012-08-14

    Mr. Jim Parker, Associate Director, Ground Vehicle Robotics. Distribution Statement A: approved for public release. Briefing covering 01-07-2012 to 01-08-2012. Abstract: Provide Transition-Ready, Cost-Effective, and Innovative Robotics and Control System Solutions for Manned, Optionally-Manned, and Unmanned

  3. Segment handling system prototype progress for Thirty Meter Telescope

    Science.gov (United States)

    Sofuku, Satoru; Ezaki, Yutaka; Kawaguchi, Noboru; Nakaoji, Toshitaka; Takaki, Junji; Horiuchi, Yasushi; Saruta, Yusuke; Haruna, Masaki; Kim, Ieyoung; Fukushima, Kazuhiko; Domae, Yukiyasu; Hatta, Toshiyuki; Yoshitake, Shinya; Hoshino, Hayato

    2016-07-01

    The Segment Handling System (SHS) is the subsystem that is planned to be permanently implemented on the Thirty Meter Telescope (TMT) structure to enable fast, efficient, semi-automatic exchange of M1 segments. TMT plans a challenging segment exchange rate (10 segments per 10-hour day). To achieve this, MELCO is developing an innovative SHS by incorporating Factory Automation (FA) technologies such as force control and machine vision into the system. The force control system, used for the installation operation, achieves soft handling by detecting the force exerted on the mirror segment and automatically compensating the position error between the handled segment and the primary mirror. The machine vision system, used for the removal operation, achieves semi-automatic positioning between the SHS and the mirror segments to be handled. Prototype experience proves soft (extraneous force 300 N) and fast (3 minutes) segment handling. The SHS will provide upcoming large segmented telescopes with cost-efficient, effortless, and safe segment exchange operation.

  4. Metabolic cost of level-ground walking with a robotic transtibial prosthesis combining push-off power and nonlinear damping behaviors: preliminary results.

    Science.gov (United States)

    Yanggang Feng; Jinying Zhu; Qining Wang

    2016-08-01

    Recent advances in robotic technology are facilitating the development of robotic prostheses. Our previous studies proposed a lightweight robotic transtibial prosthesis with a damping control strategy. To improve the performance of power assistance, in this paper we redesign the prosthesis and improve the control strategy by supplying extra push-off power. A male transtibial amputee volunteered to participate in the study. Preliminary experimental results show that the proposed prosthesis with push-off control reduces energy expenditure by 9.72% to 14.99% for level-ground walking compared with non-push-off control.

  5. Review of segmentation process in consumer markets

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2013-01-01

    Although there has been considerable debate on market segmentation over five decades, attention was devoted mainly to single stages of the segmentation process. Stages such as segmentation base selection or segment profiling have been heavily covered in the extant literature, whereas stages such as implementation of the marketing strategy or market definition attracted comparably less interest. Capitalizing on this shortcoming, this paper strives to close the gap and give each step of the segmentation process equal treatment. Hence, the objective of this paper is two-fold. First, a snapshot of the segmentation process is provided in a step-by-step fashion. Second, each step (where possible) is evaluated on chosen criteria by means of description, comparison, analysis and synthesis of 32 academic papers and 13 commercial typology systems. Ultimately, the segmentation stages are discussed against empirical findings prevalent in segmentation studies, and suggestions calling for further investigation are presented. This seven-step framework may assist when segmenting in practice, allowing for more confident targeting which in turn might prepare the ground for creating a differential advantage.

  6. Image segmentation based on competitive learning

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jing; LIU Qun; Baikunth Nath

    2004-01-01

    Image segmentation is a primary step in image analysis for unexploded ordnance (UXO) detection by ground penetrating radar (GPR) sensors, which is accompanied by considerable noise and other elements that affect recognition of the real target size. In this paper we put forward a new approach: we treat the weight sets as target vector sets, which serve as new cues in semi-automatic segmentation to form the final image segmentation. The experimental results show that the target size measured with our method is much smaller than that obtained with other methods and close to the real size of the target.

  7. Segmentation Similarity and Agreement

    CERN Document Server

    Fournier, Chris

    2012-01-01

    We propose a new segmentation evaluation metric, called segmentation similarity (S), that quantifies the similarity between two segmentations as the proportion of boundaries that are not transformed when comparing them using edit distance, essentially using edit distance as a penalty function and scaling penalties by segmentation size. We propose several adapted inter-annotator agreement coefficients which use S that are suitable for segmentation. We show that S is configurable enough to suit a wide variety of segmentation evaluations, and is an improvement upon the state of the art. We also propose using inter-annotator agreement coefficients to evaluate automatic segmenters in terms of human performance.
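
    A much-simplified, boundary-based similarity in the spirit of S can be sketched as follows (an illustrative reduction, not the metric defined in the paper: every disagreeing boundary position counts as one full edit, and the near-miss transpositions that S scales down are ignored):

    def boundary_positions(segmentation):
        """Segment sizes -> set of boundary positions (cumulative offsets)."""
        pos, out = 0, set()
        for size in segmentation[:-1]:
            pos += size
            out.add(pos)
        return out

    def simple_boundary_similarity(seg_a, seg_b):
        """1 - (boundary edits needed) / (potential boundary positions)."""
        assert sum(seg_a) == sum(seg_b), "segmentations must cover the same item count"
        a, b = boundary_positions(seg_a), boundary_positions(seg_b)
        potential = sum(seg_a) - 1          # positions where a boundary could sit
        edits = len(a ^ b)                  # insertions/deletions only, no transpositions
        return 1.0 - edits / potential

    # Two segmentations of a 10-unit item: segment sizes (3, 4, 3) vs (3, 5, 2)
    print(simple_boundary_similarity([3, 4, 3], [3, 5, 2]))   # 1 - 2/9, about 0.78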

  8. Segmentation in local hospital markets.

    Science.gov (United States)

    Dranove, D; White, W D; Wu, L

    1993-01-01

    This study examines evidence of market segmentation on the basis of patients' insurance status, demographic characteristics, and medical condition in selected local markets in California in the years 1983 and 1989. Substantial differences exist in the probability that patients are admitted to particular hospitals based on insurance coverage, particularly Medicaid, and race. Segmentation based on insurance and race is related to hospital characteristics, but not the characteristics of the hospital's community. Medicaid patients are more likely to go to hospitals with lower costs and fewer service offerings. Privately insured patients go to hospitals offering more services, although cost concerns are increasing. Hispanic patients also go to low-cost hospitals, ceteris paribus. Results indicate little evidence of segmentation based on medical condition in either 1983 or 1989, suggesting that "centers of excellence" have yet to play an important role in patient choice of hospital. The authors found that distance matters, and that patients prefer nearby hospitals, more so for some medical conditions than others, in ways consistent with economic theories of consumer choice.

  9. Grounded theory.

    Science.gov (United States)

    Harris, Tina

    2015-04-29

    Grounded theory is a popular research approach in health care and the social sciences. This article provides a description of grounded theory methodology and its key components, using examples from published studies to demonstrate practical application. It aims to demystify grounded theory for novice nurse researchers, by explaining what it is, when to use it, why they would want to use it and how to use it. It should enable nurse researchers to decide if grounded theory is an appropriate approach for their research, and to determine the quality of any grounded theory research they read.

  10. Anatomy packing with hierarchical segments: an algorithm for segmentation of pulmonary nodules in CT images.

    Science.gov (United States)

    Tsou, Chi-Hsuan; Lor, Kuo-Lung; Chang, Yeun-Chung; Chen, Chung-Ming

    2015-05-14

    This paper proposes a semantic segmentation algorithm that provides the spatial distribution patterns of pulmonary ground-glass nodules with solid portions in computed tomography (CT) images. The proposed segmentation algorithm, anatomy packing with hierarchical segments (APHS), performs pulmonary nodule segmentation and quantification in CT images. In particular, the APHS algorithm consists of two essential processes: hierarchical segmentation tree construction and anatomy packing. It constructs the hierarchical segmentation tree based on region attributes and local contour cues along the region boundaries. Each node of the tree corresponds to the soft boundary associated with a family of nested segmentations through different scales applied by a hierarchical segmentation operator that is used to decompose the image in a structurally coherent manner. The anatomy packing process detects and localizes individual object instances by optimizing a hierarchical conditional random field model. Ninety-two histopathologically confirmed pulmonary nodules were used to evaluate the performance of the proposed APHS algorithm. Further, a comparative study was conducted with two conventional multi-label image segmentation algorithms based on four assessment metrics: the modified Williams index, percentage statistic, overlapping ratio, and difference ratio. Under the same framework, the proposed APHS algorithm was applied to two clinical applications: multi-label segmentation of nodules with a solid portion and surrounding tissues and pulmonary nodule segmentation. The results obtained indicate that the APHS-generated boundaries are comparable to manual delineations with a modified Williams index of 1.013. Further, the resulting segmentation of the APHS algorithm is also better than that achieved by two conventional multi-label image segmentation algorithms. The proposed two-level hierarchical segmentation algorithm effectively labelled the pulmonary nodule and its surrounding

  11. Pituitary Adenoma Segmentation

    CERN Document Server

    Egger, Jan; Kuhnt, Daniela; Freisleben, Bernd; Nimsky, Christopher

    2011-01-01

    Sellar tumors account for approximately 10-15% of all intracranial neoplasms. The most common sellar lesion is the pituitary adenoma. Manual segmentation is a time-consuming process that can be shortened by using adequate algorithms. In this contribution, we present a segmentation method for pituitary adenoma. The method is based on an algorithm we developed recently in previous work, where the novel segmentation scheme was successfully used for segmentation of glioblastoma multiforme and provided an average Dice Similarity Coefficient (DSC) of 77%. This scheme is used for automatic adenoma segmentation. In our experimental evaluation, neurosurgeons with strong experience in the treatment of pituitary adenoma performed manual slice-by-slice segmentation of 10 magnetic resonance imaging (MRI) cases. Afterwards, the segmentations were compared with the segmentation results of the proposed method via the DSC. The average DSC for all data sets was 77.49% +/- 4.52%. Compared with a manual segmentation that took, on the...

  12. Analysis of Energy, Environmental and Life Cycle Cost Reduction Potential of Ground Source Heat Pump (GSHP) in Hot and Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Yong X. Tao; Yimin Zhu

    2012-04-26

    It has been widely recognized that the energy saving benefits of GSHP systems are best realized in the northern and central regions where heating needs are dominant or both heating and cooling loads are comparable. For hot and humid climates such as in the states of FL, LA, TX, southern AL, MS, GA, NC and SC, buildings have much larger cooling needs than heating needs. Hybrid GSHP (HGSHP) systems have therefore been developed and installed in some locations of those states; they use additional heat sinks (such as cooling towers or domestic water heating systems) to reject excess heat. Despite the development of HGSHP, comprehensive analysis of their benefits and barriers to wide application has been limited and often yields non-conclusive results. In general, GSHP/HGSHP systems often have higher initial costs than conventional systems, making short-term economics unattractive. Addressing these technical and financial barriers calls for additional evaluation of innovative utility programs, incentives and delivery approaches. From a scientific and technical point of view, the potential for wide application of GSHP, and especially HGSHP, in hot and humid climates is significant, especially towards building zero energy homes, where the combination of energy efficient GSHP and abundant solar energy production in a hot climate can be an optimal solution. To address these challenges, this report presents the gathering and analysis of data on the costs and benefits of GSHP/HGSHP systems utilized in southern states, using a representative sample of building applications. The report outlines the detailed analysis and concludes that the application of GSHP in Florida (and hot and humid climates in general) shows good potential.

  13. PCG-cut: graph driven segmentation of the prostate central gland.

    Directory of Open Access Journals (Sweden)

    Jan Egger

    Prostate cancer is the most common cancer in men, with over 200,000 expected new cases and around 28,000 deaths in 2012 in the US alone. In this study, the segmentation results for the prostate central gland (PCG) in MR scans are presented. The aim of this research study is to apply a graph-based algorithm to automated segmentation (i.e. delineation of organ limits) of the prostate central gland. The ultimate goal is to apply the automated segmentation approach to facilitate efficient MR-guided biopsy and radiation treatment planning. The automated segmentation algorithm used is graph-driven and based on a spherical template. Therefore, rays are sent through the surface points of a polyhedron to sample the graph's nodes. After graph construction--which only requires the center of the polyhedron, defined by the user and located inside the prostate central gland--the minimal cost closed set on the graph is computed via a polynomial time s-t cut, which results in the segmentation of the prostate central gland's boundaries and volume. The algorithm has been realized as a C++ module within the medical research platform MeVisLab, and the ground truth of the central gland boundaries was manually extracted by clinical experts (interventional radiologists with several years of experience in prostate treatment). For evaluation, the automated segmentations of the proposed scheme have been compared with the manual segmentations, yielding an average Dice Similarity Coefficient (DSC) of 78.94 ± 10.85%.
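
    To illustrate the minimum-cost s-t cut step in isolation, the toy sketch below separates 'inside' from 'outside' nodes of a tiny hand-built graph with NetworkX (the node names, capacities, and graph layout are invented for illustration; the actual PCG-cut graph is built from spherical-template rays and image-derived costs):

    import networkx as nx

    # Toy graph: 4 ray-sample nodes, terminal s (inside) and t (outside).
    # Capacities stand in for intensity-derived boundary costs.
    G = nx.DiGraph()
    samples = {"r1": (8, 2), "r2": (7, 3), "r3": (2, 9), "r4": (1, 8)}
    for node, (to_s, to_t) in samples.items():
        G.add_edge("s", node, capacity=to_s)   # affinity to the gland interior
        G.add_edge(node, "t", capacity=to_t)   # affinity to the exterior
    # Smoothness edges between neighbouring samples on adjacent rays
    for u, v in [("r1", "r2"), ("r2", "r3"), ("r3", "r4")]:
        G.add_edge(u, v, capacity=4)
        G.add_edge(v, u, capacity=4)

    cut_value, (inside, outside) = nx.minimum_cut(G, "s", "t")
    print("cut cost:", cut_value)
    print("interior nodes:", sorted(inside - {"s"}))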

  14. Grounded cognition.

    Science.gov (United States)

    Barsalou, Lawrence W

    2008-01-01

    Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.

  15. GPS Control Segment

    Science.gov (United States)

    2015-04-29

    Luke J. Schaub, Chief, GPS Control Segment Division, Los Angeles AFB, El Segundo, CA 90245. 29 Apr 15.

  16. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    Comparative molecular, developmental and morphogenetic analyses show that the three major segmented animal groups- Lophotrochozoa, Ecdysozoa and Vertebrata-use a wide range of ontogenetic pathways to establish metameric body organization. Even in the life history of a single specimen, different...... plasticity and potential evolutionary lability of segmentation nourishes the controversy of a segmented bilaterian ancestor versus multiple independent evolution of segmentation in respective metazoan lineages....

  17. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation analy

  18. Impact of seasonal forecast use on agricultural income in a system with varying crop costs and returns: an empirically-grounded simulation

    Science.gov (United States)

    Gunda, T.; Bazuin, J. T.; Nay, J.; Yeung, K. L.

    2017-03-01

    Access to seasonal climate forecasts can benefit farmers by allowing them to make more informed decisions about their farming practices. However, it is unclear whether farmers realize these benefits when crop choices available to farmers have different and variable costs and returns; multiple countries have programs that incentivize production of certain crops while other crops are subject to market fluctuations. We hypothesize that the benefits of forecasts on farmer livelihoods will be moderated by the combined impact of differing crop economics and changing climate. Drawing upon methods and insights from both physical and social sciences, we develop a model of farmer decision-making to evaluate this hypothesis. The model dynamics are explored using empirical data from Sri Lanka; primary sources include survey and interview information as well as game-based experiments conducted with farmers in the field. Our simulations show that a farmer using seasonal forecasts has more diversified crop selections, which drive increases in average agricultural income. Increases in income are particularly notable under a drier climate scenario, when a farmer using seasonal forecasts is more likely to plant onions, a crop with higher possible returns. Our results indicate that, when water resources are scarce (i.e. drier climate scenario), farmer incomes could become stratified, potentially compounding existing disparities in farmers’ financial and technical abilities to use forecasts to inform their crop selections. This analysis highlights that while programs that promote production of certain crops may ensure food security in the short-term, the long-term implications of these dynamics need careful evaluation.

  19. Haustral fold segmentation with curvature-guided level set evolution.

    Science.gov (United States)

    Zhu, Hongbin; Barish, Matthew; Pickhardt, Perry; Liang, Zhengrong

    2013-02-01

    The human colon has complex structures, mostly because of the haustral folds. The folds are thin flat protrusions on the colon wall, which complicate shape analysis for computer-aided detection (CAD) of colonic polyps. Fold segmentation may help reduce the structural complexity, and the folds can serve as an anatomic reference for computed tomographic colonography (CTC). Therefore, in this study, based on a model of the haustral fold boundaries, we developed a level-set approach to automatically segment the fold surfaces. To evaluate the developed fold segmentation algorithm, we first established the ground truth of haustral fold boundaries by experts' drawing on 15 patient CTC datasets without severe under/over colon distention from two medical centers. The segmentation algorithm successfully detected 92.7% of the folds in the ground truth. In addition to the sensitivity measure, we further developed a figure of merit, the segmented-area ratio (SAR), i.e., the ratio between the area of the intersection and union of the expert-drawn folds and the area of the automatically segmented folds, to measure the segmentation accuracy. The segmentation algorithm reached an average value of SAR = 86.2%, showing a good match with the ground truth on the fold surfaces. We believe the automatically segmented fold surfaces have the potential to benefit many postprocedures in CTC, such as CAD, taenia coli extraction, supine-prone registration, etc.

  20. What is a segment?

    Science.gov (United States)

    Hannibal, Roberta L; Patel, Nipam H

    2013-12-17

    Animals have been described as segmented for more than 2,000 years, yet a precise definition of segmentation remains elusive. Here we give the history of the definition of segmentation, followed by a discussion on current controversies in defining a segment. While there is a general consensus that segmentation involves the repetition of units along the anterior-posterior (a-p) axis, long-running debates exist over whether a segment can be composed of only one tissue layer, whether the most anterior region of the arthropod head is considered segmented, and whether and how the vertebrate head is segmented. Additionally, we discuss whether a segment can be composed of a single cell in a column of cells, or a single row of cells within a grid of cells. We suggest that 'segmentation' be used in its more general sense, the repetition of units with a-p polarity along the a-p axis, to prevent artificial classification of animals. We further suggest that this general definition be combined with an exact description of what is being studied, as well as a clearly stated hypothesis concerning the specific nature of the potential homology of structures. These suggestions should facilitate dialogue among scientists who study vastly differing segmental structures.

  1. Path Research on Low-Cost Innovation and Implementation under Grounded Theory Analysis of Longitudinal Cases

    Institute of Scientific and Technical Information of China (English)

    蔡瑞林; 陈圻; 陈万明

    2014-01-01

    The practice of enterprise innovation has generated a vision of low-cost innovation, but low-cost innovation is not yet a mature construct. The article therefore carries out a longitudinal study of Double Star Group and IKEA as typical cases using the Strauss procedural grounded theory method. Through the three analysis steps of open coding, axial coding and selective coding, it analyzes the case companies' specific low-cost innovation initiatives and attempts to build a model of the low-cost innovation path. The study defines the connotation of low-cost innovation and finds that low-cost innovation is the change and integration of traditional management elements in technology, design, marketing, organization, etc., and that the relative importance of these four paths differs across the stages of an enterprise's life cycle. The concept and model of low-cost innovation enrich innovation theory, lay a foundation for further empirical research, and provide a reference for China's industrial upgrading and the continued development of its low-cost manufacturing advantages.

  2. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

    This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm. It is followed by thresholding and level set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering and from the accuracy of Fuzzy C-means. The performance of the proposed image segmentation approach was evaluated by comparing it with some state-of-the-art segmentation algorithms in terms of accuracy, processing time, and performance. The accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results demonstrate the effectiveness of the proposed approach in dealing with a larger number of segmentation problems by improving the segmentation quality and accuracy in minimal execution time.
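
    A minimal sketch of the K-means-then-Fuzzy-C-means idea on pixel intensities might look like the following (illustrative only; the thresholding and level set refinement stages of the paper are omitted, and the input image and parameter values are assumptions):

    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_then_fcm(image, n_clusters=4, m=2.0, n_iter=20):
        """Cluster pixel intensities with K-means, then refine the centres
        with a few Fuzzy C-means updates and return fuzzy memberships."""
        x = image.reshape(-1, 1).astype(float)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(x)
        centres = km.cluster_centers_.ravel()
        for _ in range(n_iter):                           # FCM updates
            d = np.abs(x - centres[None, :]) + 1e-9       # pixel-to-centre distances
            u = 1.0 / (d ** (2.0 / (m - 1.0)))
            u /= u.sum(axis=1, keepdims=True)             # fuzzy memberships
            centres = (u ** m * x).sum(axis=0) / (u ** m).sum(axis=0)
        labels = u.argmax(axis=1).reshape(image.shape)
        return labels, u

    # labels, memberships = kmeans_then_fcm(mri_slice)   # mri_slice: 2-D NumPy array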

  3. Voxel Based Segmentation of Large Airborne Topobathymetric LIDAR Data

    Science.gov (United States)

    Boerner, R.; Hoegner, L.; Stilla, U.

    2017-05-01

    Point cloud segmentation and classification is currently a research highlight. Methods in this field create labelled data, where each point has additional class information. Current approaches generate a graph on the basis of all points in the point cloud, calculate or learn descriptors and train a matcher from the descriptors to the corresponding classes. Since these approaches need to look at each point in the point cloud iteratively, they result in long calculation times for large point clouds. Therefore, large point clouds need a generalization to save computation time. One kind of generalization is to cluster the raw points into a 3D grid structure represented by small volume units (i.e. voxels) used for further processing. This paper introduces a method that uses such a voxel structure to cluster a large point cloud into ground and non-ground points. The proposed method for ground detection first marks ground voxels with a region growing approach. In a second step, non-ground voxels are searched for and filtered in the ground segment to reduce the effects of over-segmentation. This filter uses the probability that a voxel mostly consists of last pulses and a discrete gradient in a local neighbourhood. The result is the ground label as a first classification result and connected segments of non-ground points. The test area of the river Mangfall in Bavaria, Germany, is used for the first processing.
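
    The voxelisation and ground region-growing idea could be sketched roughly as below (heavily simplified relative to the paper: the last-pulse probability and gradient filter are omitted, the seed is simply the lowest occupied voxel, and the function and parameter names are invented for illustration):

    import numpy as np
    from collections import deque

    def voxel_ground_seg(points, voxel=1.0, dz_max=1):
        """points: (N, 3) array of x, y, z. Returns a set of ground voxel indices."""
        idx = np.floor((points - points.min(axis=0)) / voxel).astype(int)
        occupied = set(map(tuple, idx))
        # Lowest occupied voxel in each (ix, iy) column
        lowest = {}
        for ix, iy, iz in occupied:
            lowest[(ix, iy)] = min(iz, lowest.get((ix, iy), iz))
        seed = min(occupied, key=lambda v: v[2])
        ground, queue = {seed}, deque([seed])
        while queue:                                  # region growing over 8-neighbour columns
            ix, iy, iz = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    col = (ix + dx, iy + dy)
                    if col in lowest and abs(lowest[col] - iz) <= dz_max:
                        cand = (col[0], col[1], lowest[col])
                        if cand not in ground:
                            ground.add(cand)
                            queue.append(cand)
        return ground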

  4. Ground Wars

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Kleis

    Political campaigns today are won or lost in the so-called ground war--the strategic deployment of teams of staffers, volunteers, and paid part-timers who work the phones and canvass block by block, house by house, voter by voter. Ground Wars provides an in-depth ethnographic portrait of two...... infrastructures that utilize large databases with detailed individual-level information for targeting voters, and armies of dedicated volunteers and paid part-timers. Nielsen challenges the notion that political communication in America must be tightly scripted, controlled, and conducted by a select coterie...... of professionals. Yet he also quashes the romantic idea that canvassing is a purer form of grassroots politics. In today's political ground wars, Nielsen demonstrates, even the most ordinary-seeming volunteer knocking at your door is backed up by high-tech targeting technologies and party expertise. Ground Wars...

  5. Spherical primary optical telescope (SPOT) segments

    Science.gov (United States)

    Hall, Christopher; Hagopian, John; DeMarco, Michael

    2012-09-01

    The spherical primary optical telescope (SPOT) project is an internal research and development program at NASA Goddard Space Flight Center. The goals of the program are to develop a robust and cost effective way to manufacture spherical mirror segments and demonstrate a new wavefront sensing approach for continuous phasing across the segmented primary. This paper focuses on the fabrication of the mirror segments. Significant cost savings were achieved through the design, since it allowed the mirror segments to be cast rather than machined from a glass blank. Casting was followed by conventional figuring at Goddard Space Flight Center. After polishing, the mirror segments were mounted to their composite assemblies. QED Technologies used magnetorheological finishing (MRF®) for the final figuring. The MRF process polished the mirrors while they were mounted to their composite assemblies. Each assembly included several magnetic invar plugs that extended to within an inch of the face of the mirror. As part of this project, the interaction between the MRF magnetic field and invar plugs was evaluated. By properly selecting the polishing conditions, MRF was able to significantly improve the figure of the mounted segments. The final MRF figuring demonstrates that mirrors, in the mounted configuration, can be polished and tested to specification. There are significant process capability advantages due to polishing and testing the optics in their final, end-use assembled state.

  6. A proposed ground-water quality monitoring network for Idaho

    Science.gov (United States)

    Whitehead, R.L.; Parliman, D.J.

    1979-01-01

    A ground water quality monitoring network is proposed for Idaho. The network comprises 565 sites, 8 of which will require construction of new wells. Frequencies of sampling at the different sites are assigned at quarterly, semiannual, annual, and 5 years. Selected characteristics of the water will be monitored by both laboratory- and field-analysis methods. The network is designed to: (1) Enable water managers to keep abreast of the general quality of the State 's ground water, and (2) serve as a warning system for undesirable changes in ground-water quality. Data were compiled for hydrogeologic conditions, ground-water quality, cultural elements, and pollution sources. A ' hydrologic unit priority index ' is used to rank 84 hydrologic units (river basins or segments of river basins) of the State for monitoring according to pollution potential. Emphasis for selection of monitoring sites is placed on the 15 highest ranked units. The potential for pollution is greatest in areas of privately owned agricultural land. Other areas of pollution potential are residential development, mining and related processes, and hazardous waste disposal. Data are given for laboratory and field analyses, number of site visits, manpower, subsistence, and mileage, from which costs for implementing the network can be estimated. Suggestions are made for data storage and retrieval and for reporting changes in water quality. (Kosco-USGS)

  7. Keypoint Transfer Segmentation

    OpenAIRE

    Wachinger, C.; Toews, M.; Langs, G.; Wells, W.; Golland, P.

    2015-01-01

    We present an image segmentation method that transfers label maps of entire organs from the training images to the novel image to be segmented. The transfer is based on sparse correspondences between keypoints that represent automatically identified distinctive image locations. Our segmentation algorithm consists of three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ label maps. We introduce generative models for th...

  8. A model to identify high crash road segments with the dynamic segmentation method.

    Science.gov (United States)

    Boroujerdian, Amin Mirza; Saffarzadeh, Mahmoud; Yousefi, Hassan; Ghassemian, Hassan

    2014-12-01

    Currently, high social and economic costs, in addition to physical and mental consequences, put road safety among the most important issues. This paper aims at presenting a novel approach capable of identifying the location as well as the length of high crash road segments. It focuses on the locations of accidents occurring along the road and the regions they affect. In other words, due to applicability and budget limitations in improving the safety of road segments, it is not possible to treat all high crash road segments. Therefore, it is of utmost importance to identify high crash road segments and their real length to be able to prioritize safety improvements on roads. In this paper, after evaluating deficiencies of the current road segmentation models, different kinds of errors caused by these methods are addressed. One of the main deficiencies of these models is that they cannot identify the length of high crash road segments. In this paper, identifying the length of high crash road segments (corresponding to the arrangement of accidents along the road) is achieved by converting accident data to the road response signal of through traffic with a dynamic model based on wavelet theory. The significant advantage of the presented method is multi-scale segmentation. In other words, this model identifies high crash road segments of different lengths and can also recognize small segments within long segments. Applying the presented model to a real case for identifying 10-20 percent of high crash road segments showed an improvement of 25-38 percent relative to the existing methods.
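
    The multi-scale flavour of such an approach can be illustrated by treating crash counts per road interval as a 1-D signal and inspecting its wavelet detail coefficients at several scales (an illustrative sketch using PyWavelets with made-up data; it does not reproduce the paper's dynamic model or thresholds):

    import numpy as np
    import pywt

    # Hypothetical crash counts per 100 m interval along a road
    counts = np.array([0, 1, 0, 0, 5, 7, 6, 1, 0, 0, 2, 9, 8, 7, 1, 0], float)

    # Multi-level discrete wavelet decomposition of the crash 'signal'
    coeffs = pywt.wavedec(counts, "haar", level=3)
    for scale, detail in enumerate(coeffs[1:], start=1):   # coarse to fine detail bands
        # Large detail coefficients mark boundaries where crash density changes
        strong = np.where(np.abs(detail) > np.abs(detail).mean() + np.abs(detail).std())[0]
        print(f"detail band {scale}: boundary candidates at indices {strong.tolist()}")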

  9. Estimation of regeneration coverage in a temperate forest by 3D segmentation using airborne laser scanning data

    Science.gov (United States)

    Amiri, Nina; Yao, Wei; Heurich, Marco; Krzystek, Peter; Skidmore, Andrew K.

    2016-10-01

    Forest understory and regeneration are important factors in sustainable forest management. However, understanding their spatial distribution in multilayered forests requires accurate and continuously updated field data, which are difficult and time-consuming to obtain. Therefore, cost-efficient inventory methods are required, and airborne laser scanning (ALS) is a promising tool for obtaining such information. In this study, we examine a clustering-based 3D segmentation in combination with ALS data for regeneration coverage estimation in a multilayered temperate forest. The core of our method is a two-tiered segmentation of the 3D point clouds into segments associated with regeneration trees. First, small parts of trees (super-voxels) are constructed through mean shift clustering, a nonparametric procedure for finding the local maxima of a density function. In the second step, we form a graph based on the mean shift clusters and merge them into larger segments using the normalized cut algorithm. These segments are used to obtain regeneration coverage of the target plot. Results show that, based on validation data from field inventory and terrestrial laser scanning (TLS), our approach correctly estimates up to 70% of regeneration coverage across the plots with different properties, such as tree height and tree species. The proposed method is negatively impacted by the density of the overstory because of decreasing ground point density. In addition, the estimated coverage has a strong relationship with the overstory tree species composition.
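
    The first tier of the segmentation, mean shift clustering of ALS points into small tree parts (super-voxels), could be sketched as follows (the normalized cut merging step and the paper's feature design are omitted; the synthetic point cloud is purely illustrative):

    import numpy as np
    from sklearn.cluster import MeanShift

    def supervoxels(points, bandwidth=1.5):
        """Cluster 3-D ALS points (x, y, z in metres) into super-voxels
        with mean shift; returns one integer label per point."""
        ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
        return ms.fit_predict(points)

    # Example with a synthetic two-cluster point cloud
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal([0, 0, 1], 0.3, (100, 3)),
                     rng.normal([5, 5, 2], 0.3, (100, 3))])
    labels = supervoxels(pts)
    print("super-voxels found:", len(np.unique(labels)))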

  10. Examples of cost reduction and energy saving by thermal storage heat pump system. Part 5. Control of the flowering season of alstroemeria by using 'ice storage ground cooler'. Chikunetsushiki heat pump system katsuyo ni yoru costdown sho energy jirei no shokai. 5. 'Kori chikunetsushiki chichu reikyaku sochi' ni yori arusutoromeria no kaika jiki wo chosetsu

    Energy Technology Data Exchange (ETDEWEB)

    1999-07-01

    Alstroemeria has a habit of flowering in response to temperature sensed through an organ in its rhizome. Since its market price is higher in late fall and early winter, a culture method that cools the ground in summer is in wide use. Although the ground is cooled with equipment composed of a chiller, ground piping for heat exchange and a cold water pump running throughout the day, cost reduction is a major problem. To study a thermal storage ground cooler, a culture test was made using a prototype ice storage ground cooler. The test result showed that the ground temperature of both the test zone and the reference zone was constantly 18-20 degrees C during the test period, and that both the total yield and the yield of every class were nearly equivalent between the test and reference zones. A profitability estimate for a full-scale ice storage ground cooler based on these results showed that this ground cooler can probably reduce the annual electricity charge by nearly 200,000 yen compared with a cooler without thermal storage. (NEDO)

  12. Identifying spatial segments in international markets

    NARCIS (Netherlands)

    Ter Hofstede, F; Wedel, M; Steenkamp, JBEM

    2002-01-01

    The identification of geographic target markets is critical to the success of companies that are expanding internationally. Country borders have traditionally been used to delineate such target markets, resulting in accessible segments and cost efficient entry strategies. However, at present such "c

  13. Histological image segmentation using fast mean shift clustering method

    OpenAIRE

    Wu, Geming; Zhao, Xinyan; Luo, Shuqian; Shi, Hongli

    2015-01-01

    Background: Colour image segmentation is fundamental and critical for quantitative histological image analysis. The complexity of the microstructure and the way histological images are produced result in variable staining and illumination variations. Moreover, the ultra-high resolution of histological images makes it hard for image segmentation methods to achieve high-quality segmentation results and low computation cost at the same time. Methods: A Mean Shift clustering approach is employed for histol...

  14. Segmenting fluid effect on PCR reactions in microfluidic platforms.

    Science.gov (United States)

    Walsh, E J; King, C; Grimes, R; Gonzalez, A

    2005-12-01

    This paper evaluates the compatibility of segmenting fluids for two phase flow applications in biomedical microdevices. The evaluated fluids are chosen due to the variations in fluid properties and cost, while also reflecting their use in the recent literature. These segmenting fluids are examined to determine their compatibility with the Polymerase Chain Reaction (PCR), through controlled experiments. The results are the first to provide a quantitative measure of segmenting fluid compatibility with PCR.

  15. From interpretation to segmentation

    NARCIS (Netherlands)

    Koning, A.R.; Lier, R.J. van

    2005-01-01

    In visual perception, part segmentation of an object is considered to be guided by image-based properties, such as occurrences of deep concavities in the outer contour. However, object-based properties can also provide information regarding segmentation. In this study, outer contours and interpretat

  16. Segmentation, advertising and prices

    NARCIS (Netherlands)

    Galeotti, Andrea; Moraga González, José

    This paper explores the implications of market segmentation on firm competitiveness. In contrast to earlier work, here market segmentation is minimal in the sense that it is based on consumer attributes that are completely unrelated to tastes. We show that when the market is comprised by two

  17. Segmentation, advertising and prices

    NARCIS (Netherlands)

    Galeotti, Andrea; Moraga González, José

    2008-01-01

    This paper explores the implications of market segmentation on firm competitiveness. In contrast to earlier work, here market segmentation is minimal in the sense that it is based on consumer attributes that are completely unrelated to tastes. We show that when the market is comprised by two consume

  18. Benign segmental bronchial obstruction

    Energy Technology Data Exchange (ETDEWEB)

    Loercher, U.

    1988-09-01

    Benign segmental bronchial obstruction - mostly discovered on routine chest films - can be well diagnosed by CT. The specific CT findings are the site of the bronchial obstruction, the mucocele and the localized emphysema of the involved segment. Furthermore, CT allows a better approach to the underlying process.

  19. Hospital benefit segmentation.

    Science.gov (United States)

    Finn, D W; Lamb, C W

    1986-12-01

    Market segmentation is an important topic to both health care practitioners and researchers. The authors explore the relative importance that health care consumers attach to various benefits available in a major metropolitan area hospital. The purposes of the study are to test, and provide data to illustrate, the efficacy of one approach to hospital benefit segmentation analysis.

  20. Knowledge-based segmentation for automatic Map interpretation

    NARCIS (Netherlands)

    Hartog, J. den; Kate, T. ten; Gerbrands, J.

    1996-01-01

    In this paper, a knowledge-based framework for the top-down interpretation and segmentation of maps is presented. The interpretation is based on a priori knowledge about map objects, their mutual spatial relationships and potential segmentation problems. To reduce computational costs, a global segme

  1. Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections

    DEFF Research Database (Denmark)

    Bertholet, Jenny; Wan, Hanlin; Toftegaard, Jakob;

    2017-01-01

    algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated....... The mean 2D segmentation error of DP was reduced from 4.1 pixels to 3.0 pixels by DPTB, while the fraction of wrong segmentations was reduced from 17.4% to 6.8%. DPTB allowed rejection of uncertain segmentations as deemed by a low normalized cross-correlation coefficient and contrast-to-noise ratio....... For a rejection rate of 9.97%, the sensitivity in detecting wrong segmentations was 67% and the specificity was 94%. The accepted segmentations had a mean segmentation error of 1.8 pixels and 2.5% wrong segmentations....

  2. IMPROVED HYBRID SEGMENTATION OF BRAIN MRI TISSUE AND TUMOR USING STATISTICAL FEATURES

    Directory of Open Access Journals (Sweden)

    S. Allin Christe

    2010-08-01

    Medical image segmentation is the most essential and crucial process for facilitating the characterization and visualization of the structure of interest in medical images. A relevant application in neuroradiology is the segmentation of MRI data sets of the human brain into the structure classes gray matter, white matter, cerebrospinal fluid (CSF) and tumor. In this paper, brain image segmentation algorithms such as Fuzzy C-means (FCM) segmentation and Kohonen means (K-means) segmentation were implemented. In addition, a new hybrid segmentation technique, namely Fuzzy Kohonen means image segmentation based on statistical feature clustering, is proposed and implemented along with the standard pixel value clustering method. The clustered segmented tissue images are compared with the ground truth and their performance metrics are also computed. It is found that the feature-based hybrid segmentation gives improved performance metrics and improved classification accuracy compared with pixel-based segmentation.

  3. OPTIMIZING HIGHWAY PROFILES FOR INDIVIDUAL COST ITEMS

    Directory of Open Access Journals (Sweden)

    Essam Dabbour

    2013-12-01

    According to current practice, the vertical alignment of a highway segment is usually selected by creating a profile showing the actual ground surface and selecting initial and final grades to minimize the overall cut and fill quantities. Those grades are connected together with a parabolic curve. However, in many highway construction or rehabilitation projects, the cost of cut may be substantially different from that of fill (e.g. in extremely hard soils where blasting is needed to cut the soil). In that case, an optimization process will be needed to minimize the overall cost of cut and fill rather than to minimize their quantities. This paper proposes a nonlinear optimization model to select optimum vertical curve parameters based on the individual cost items of cut and fill. The parameters selected by the optimization model include the initial grade, the final grade, the station and elevation of the point of vertical curvature (PVC), and the station and elevation of the point of vertical tangency (PVT). The model is flexible enough to include any design constraints for particular design problems. Different application examples are provided using the Evolutionary Algorithm in Microsoft Excel's Solver add-in. The application examples validated the model and demonstrated its advantage of minimizing the overall cost rather than minimizing the overall volume of cut and fill quantities.
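
    The kind of optimization described, choosing vertical curve parameters to minimize the combined cost of cut and fill rather than their volumes, can be sketched with SciPy as below (a deliberately simplified single-curve model; the ground profile, unit costs, grade limits and starting point are hypothetical, and Excel's Evolutionary Solver is replaced by a gradient-based optimizer):

    import numpy as np
    from scipy.optimize import minimize

    stations = np.linspace(0, 1000, 101)                 # m
    ground = 100 + 5 * np.sin(stations / 150.0)          # existing ground elevation (m), assumed
    COST_CUT, COST_FILL = 12.0, 4.0                      # assumed unit costs per m^3 (per metre of width)

    def profile(p):
        """Parabolic vertical curve: entry grade g1, exit grade g2,
        PVC station/elevation, curve length L."""
        g1, g2, pvc_sta, pvc_elev, L = p
        x = stations - pvc_sta
        before = pvc_elev + g1 * x
        on_curve = pvc_elev + g1 * x + (g2 - g1) * x**2 / (2 * L)
        after = pvc_elev + g1 * L + (g2 - g1) * L / 2 + g2 * (x - L)
        return np.where(x < 0, before, np.where(x <= L, on_curve, after))

    def earthwork_cost(p):
        diff = profile(p) - ground                        # positive = fill needed, negative = cut needed
        dx = stations[1] - stations[0]
        return (COST_FILL * diff.clip(min=0).sum() + COST_CUT * (-diff).clip(min=0).sum()) * dx

    x0 = np.array([0.01, -0.01, 300.0, ground[30], 400.0])
    bounds = [(-0.06, 0.06), (-0.06, 0.06), (0, 600), (90, 115), (100, 800)]
    res = minimize(earthwork_cost, x0, bounds=bounds, method="L-BFGS-B")
    print("optimised cost:", round(res.fun, 1), "parameters:", np.round(res.x, 3))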

  4. An objective evaluation framework for segmentation techniques of functional positron emission tomography studies

    CERN Document Server

    Kim, J; Eberl, S; Feng, D

    2004-01-01

    Segmentation of multi-dimensional functional positron emission tomography (PET) studies into regions of interest (ROI) exhibiting similar temporal behavior is useful in diagnosis and evaluation of neurological images. Quantitative evaluation plays a crucial role in measuring the segmentation algorithm's performance. Due to the lack of "ground truth" available for evaluating segmentation of clinical images, automated segmentation results are usually compared with manual delineation of structures which is, however, subjective, and is difficult to perform. Alternatively, segmentation of co-registered anatomical images such as magnetic resonance imaging (MRI) can be used as the ground truth to the PET segmentation. However, this is limited to PET studies which have corresponding MRI. In this study, we introduce a framework for the objective and quantitative evaluation of functional PET study segmentation without the need for manual delineation or registration to anatomical images of the patient. The segmentation ...

  5. Features of the Deployed NPOESS Ground System

    Science.gov (United States)

    Smith, D.; Grant, K. D.; Route, G.; Heckmann, G.

    2009-12-01

    NOAA, DoD, and NASA are jointly acquiring the National Polar-orbiting Operational Environmental Satellite System (NPOESS) replacing the current NOAA Polar-orbiting Operational Environmental Satellites (POES) and the DoD's Defense Meteorological Satellite Program (DMSP). The NPOESS satellites will carry a suite of sensors to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere and space. The ground data processing segment is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence & Information Systems (IIS). The IDPS processes NPOESS satellite data to provide environmental data products (aka, Environmental Data Records or EDRs) to US NOAA and DoD processing centers. The IDPS will process EDRs beginning with the NPOESS Preparatory Project (NPP) and through the lifetime of the NPOESS system. The command and telemetry segment is the Command, Control and Communications Segment (C3S), also developed by Raytheon IIS. C3S is responsible for managing the overall NPOESS mission from control and status of the space and ground assets to ensuring delivery of timely, high quality data from the Space Segment (SS) to IDPS for processing. In addition, the C3S provides the globally distributed ground assets necessary to collect and transport mission, telemetry, and command data between the satellites and the processing locations. The C3S provides all functions required for day-to-day commanding and state-of-health monitoring of the NPP and NPOESS satellites, and delivery of SMD to each Central IDP for data products development and transfer to System subscribers. The C3S also monitors and reports system-wide health, status and data communications with external systems and between the NPOESS segments. The NPOESS C3S and IDPS ground segments have been delivered and transitioned to operations for NPP. C3S was transitioned to operations at the NOAA Satellite Operations Facility in Suitland MD in August

  6. HyMaP: A hybrid magnitude-phase approach to unsupervised segmentation of tumor areas in breast cancer histology images

    Directory of Open Access Journals (Sweden)

    Adnan M Khan

    2013-01-01

    Background: Segmentation of areas containing tumor cells in standard H&E histopathology images of breast (and several other tissues) is a key task for computer-assisted assessment and grading of histopathology slides. Good segmentation of tumor regions is also vital for automated scoring of immunohistochemically stained slides to restrict the scoring or analysis to areas containing tumor cells only and to avoid potentially misleading results from analysis of stromal regions. Furthermore, detection of mitotic cells is critical for calculating key measures such as the mitotic index, a key criterion for grading several types of cancers including breast cancer. We show that tumor segmentation can allow detection and quantification of mitotic cells from standard H&E slides with a high degree of accuracy without the need for special stains, in turn making the whole process more cost-effective. Method: Based on the tissue morphology, breast histology image contents can be divided into four regions: Tumor, Hypocellular Stroma (HypoCS), Hypercellular Stroma (HyperCS), and tissue fat (Background). Background is removed during the preprocessing stage on the basis of color thresholding, while HypoCS and HyperCS regions are segmented by calculating features using the magnitude and phase spectra in the frequency domain, respectively, and performing unsupervised segmentation on these features. Results: All images in the database were hand segmented by two expert pathologists. The algorithms considered here are evaluated on three pixel-wise accuracy measures: precision, recall, and F1-score. The segmentation results obtained by combining HypoCS and HyperCS yield high F1-scores of 0.86 and 0.89 with respect to the ground truth. Conclusions: In this paper, we show that segmentation of breast histopathology images into hypocellular stroma and hypercellular stroma can be achieved using magnitude and phase spectra in the frequency domain. The segmentation leads to demarcation of tumor
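
    A toy version of the magnitude/phase feature idea is sketched below: local Fourier magnitude and phase statistics are computed per image patch and clustered without supervision (this only illustrates the general idea and is not the HyMaP feature set; the greyscale image is an assumed NumPy array):

    import numpy as np
    from sklearn.cluster import KMeans

    def patch_spectrum_features(gray, patch=32):
        """Mean log-FFT magnitude and phase variance per non-overlapping patch."""
        h, w = (np.array(gray.shape) // patch) * patch
        feats, coords = [], []
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                tile = gray[i:i + patch, j:j + patch]
                spec = np.fft.fft2(tile)
                feats.append([np.log1p(np.abs(spec)).mean(), np.angle(spec).var()])
                coords.append((i, j))
        return np.array(feats), coords

    def cluster_patches(gray, n_regions=2, patch=32):
        feats, coords = patch_spectrum_features(gray, patch)
        labels = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(feats)
        return dict(zip(coords, labels))     # patch origin -> region label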

  7. Block-o-Matic: a Web Page Segmentation Tool and its Evaluation

    OpenAIRE

    Sanoja, Andrés; Gançarski, Stéphane

    2013-01-01

    In this paper we present our prototype for web page segmentation, called Block-o-Matic, and its counterpart Block-o-manual for manual segmentation. The main idea is to evaluate the correctness of the segmentation algorithm. Building a ground truth database for evaluation can take days or months depending on the collection size; we therefore address this with our manual segmentation tool, intended to minimize the time needed to annotate blocks in web pages. Both tools imp...

  8. COST MEASUREMENT AND COST MANAGEMENT IN TARGET COSTING

    Directory of Open Access Journals (Sweden)

    Moisello Anna Maria

    2012-07-01

    Full Text Available Firms are coping with a competitive scenario characterized by quick changes produced by internationalization, concentration, restructuring, technological innovation processes and financial market crises. On the one hand, market enlargement has increased the number and the segmentation of customers and has raised the number of competitors; on the other hand, technological innovation has reduced product life cycles. So firms have to adjust their management models to this scenario, pursuing customer satisfaction while respecting cost constraints. In a context where price is a variable fixed by the market, firms have to switch from a cost measurement logic to a cost management one, adopting the target costing methodology. The target costing process is a price-driven, customer-oriented profit planning and cost management system. It works, in a cross-functional way, from the design stage throughout the product life cycle, and it involves the entire value chain. The process implementation needs a costing methodology consistent with the cost management logic. The aim of the paper is to focus on the application of Activity Based Costing (ABC) to the target costing process. So: it analyzes the target costing logic and phases, based on a literature review, in order to highlight the costing needs related to this process; it shows, through a numerical example, how to structure a flexible ABC model (characterized by the separation between variable costs, costs fixed in the short term, and fixed costs) that effectively supports the target costing process in the cost measurement phase (drifting cost determination) and in target cost alignment; and it points out the effectiveness of Activity Based Costing as a model of cost measurement applicable to supplier choice and as a support for supply cost management, which has an important role in the target costing process. The activity-based information allows a firm to optimize the supplier choice by following the method of minimizing the

  9. Keypoint Transfer Segmentation.

    Science.gov (United States)

    Wachinger, C; Toews, M; Langs, G; Wells, W; Golland, P

    2015-01-01

    We present an image segmentation method that transfers label maps of entire organs from the training images to the novel image to be segmented. The transfer is based on sparse correspondences between keypoints that represent automatically identified distinctive image locations. Our segmentation algorithm consists of three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ label maps. We introduce generative models for the inference of keypoint labels and for image segmentation, where keypoint matches are treated as a latent random variable and are marginalized out as part of the algorithm. We report segmentation results for abdominal organs in whole-body CT and in contrast-enhanced CT images. The accuracy of our method compares favorably to common multi-atlas segmentation while offering a speed-up of about three orders of magnitude. Furthermore, keypoint transfer requires no training phase or registration to an atlas. The algorithm's robustness enables the segmentation of scans with highly variable field-of-view.

  10. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.

  11. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor's thesis was to explain a possible segmentation of consumer markets for a chosen company and to present a goods offer suited to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes the evaluation of a questionnaire survey, the discovery of market segment...

  12. Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.

    Directory of Open Access Journals (Sweden)

    Stephen M Plaza

    Full Text Available The ability to automatically segment an image into distinct regions is a critical aspect of many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, as required in the reconstruction of neuronal processes from microscopic images. The goal of an automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a given level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of the segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.
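
    As a rough illustration of steering proofreading toward uncertain regions, the sketch below ranks image tiles by the entropy of per-pixel class probabilities produced by any automated segmenter. The entropy score, the tile size and the array layout are assumptions made for illustration; the paper's probabilistic measure is its own contribution and is not reproduced here.

```python
# Sketch of the general idea (not the paper's measure): rank tiles of a
# probabilistic segmentation by entropy so that manual proofreading visits the
# most uncertain areas first.  `probs` is an (H, W, K) array of per-pixel
# class probabilities from any automated segmenter.
import numpy as np

def pixel_entropy(probs, eps=1e-12):
    return -(probs * np.log(probs + eps)).sum(axis=-1)

def rank_tiles_by_uncertainty(probs, tile=64):
    """Return (row, col) tile origins sorted from most to least uncertain."""
    ent = pixel_entropy(probs)
    h, w = ent.shape
    scores = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            scores.append((ent[y:y + tile, x:x + tile].mean(), (y, x)))
    scores.sort(key=lambda s: s[0], reverse=True)
    return [coord for _, coord in scores]
```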

  13. Dermoscopic Image Segmentation using Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    L. P. Suresh

    2011-01-01

    Full Text Available Problem statement: Malignant melanoma is the most frequent type of skin cancer. Its incidence has been increasing rapidly over the last few decades. Medical image segmentation is the most essential and crucial process for facilitating the characterization and visualization of the structure of interest in medical images. Approach: This study explains the task of segmenting skin lesions in dermoscopy images using intelligent systems such as fuzzy and neural network clustering techniques for the early diagnosis of malignant melanoma. The intelligent-system-based clustering techniques used are the Fuzzy C Means Algorithm (FCM), Possibilistic C Means Algorithm (PCM), Hierarchical C Means Algorithm (HCM), C-means based Fuzzy Hopfield Neural Network, Adaline Neural Network, and Regression Neural Network. Results: The segmented images are compared with the ground truth image using parameters such as False Positive Error (FPE), False Negative Error (FNE), coefficient of similarity, and spatial overlap, and their performance is evaluated. Conclusion: The experimental results show that the Hierarchical C Means (fuzzy) algorithm provides better segmentation than the other clustering algorithms (Fuzzy C Means, Possibilistic C Means, Adaline Neural Network, FHNN and GRNN). Thus the Hierarchical C Means approach can efficiently handle the uncertainties that exist in the data and is useful for lesion segmentation in a computer-aided diagnosis system to assist the clinical diagnosis of dermatologists.
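
    For reference, fuzzy c-means, one of the clustering techniques compared above, can be written compactly for 1-D pixel intensities. This is the generic textbook formulation, not the study's code; the fuzzifier m = 2, the random initialization and clustering on raw intensities are assumptions.

```python
# Minimal fuzzy c-means on pixel intensities (a sketch, not the study's code).
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=100, seed=0):
    """x: 1-D array of intensities. Returns (cluster centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                              # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)           # weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        p = 2.0 / (m - 1.0)
        u = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=0))
    return centers, u
```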

  14. An attribute-based image segmentation method

    Directory of Open Access Journals (Sweden)

    M.C. de Andrade

    1999-07-01

    Full Text Available This work addresses a new image segmentation method founded on Digital Topology and Mathematical Morphology grounds. The ABA (attribute-based absorptions) transform can be viewed as a region-growing method by flooding simulation, working at the scale of the main structures of the image. In this method, the gray-level image is treated as a relief flooded from all its local minima, which are progressively detected and merged as the flooding takes place. Each local minimum is exclusively associated with one catchment basin (CB). The CB merging process is guided by geometric parameters such as depth, area and/or volume. This solution enables the direct segmentation of the original image without the need for a preprocessing step or the explicit marker extraction step often required by other flooding simulation methods. Some examples of image segmentation employing the ABA transform are illustrated for uranium oxide samples. It is shown that the ABA transform presents very good segmentation results even in the presence of noisy images. Moreover, its use is often easier and faster compared to similar image segmentation methods.

  15. Segmentation of antiperspirants and deodorants

    OpenAIRE

    KRÁL, Tomáš

    2009-01-01

    The goal of this Master's thesis on the segmentation of antiperspirants and deodorants is to discover differences in consumer behaviour, to determine and describe segments of consumers based on these differences, and to propose a marketing strategy for the most attractive segments. The theoretical part describes market segmentation in general, the process of segmentation and segmentation criteria. The analytic part characterizes the Czech market for antiperspirants and deodorants, analyzes ACNielsen market data and d...

  16. a segmentation approach

    African Journals Online (AJOL)

    kirstam

    A visitor survey was conducted at the Cape Town International Jazz ... Key words: dining motives, tipping, black diners, market segmentation, South ... and tipping behaviour as well as the findings from cross-cultural tipping and market.

  17. Segmental tuberculosis verrucosa cutis

    Directory of Open Access Journals (Sweden)

    Hanumanthappa H

    1994-01-01

    Full Text Available A case of segmental tuberculosis verrucosa cutis is reported in a 10-year-old boy. The condition resembled the ascending lymphangitic type of sporotrichosis. The lesions cleared on treatment with INH 150 mg daily for 6 months.

  18. Gaussian multiscale aggregation applied to segmentation in hand biometrics.

    Science.gov (United States)

    de Santos Sierra, Alberto; Avila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

    This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  19. Image Segmentation by Discounted Cumulative Ranking on Maximal Cliques

    CERN Document Server

    Carreira, Joao; Sminchisescu, Cristian

    2010-01-01

    We propose a mid-level image segmentation framework that combines multiple figure-ground (FG) hypotheses, constrained at different locations and scales, into interpretations that tile the entire image. The problem is cast as optimization over sets of maximal cliques sampled from the graph connecting non-overlapping, putative figure-ground segment hypotheses. Potential functions over cliques combine unary Gestalt-based figure quality scores and pairwise compatibilities among spatially neighboring segments, constrained by T-junctions and the boundary interface statistics resulting from projections of real 3D scenes. Learning the model parameters is formulated as rank optimization, alternating between sampling image tilings and optimizing their potential function parameters. State-of-the-art results are reported on both the Berkeley and the VOC2009 segmentation datasets, where a 28% improvement was achieved.

  20. Market segmentation: venezuelan adrs

    OpenAIRE

    Urbi Garay; Maximiliano González

    2012-01-01

    The foreign exchange controls imposed by Venezuela in 2003, constitute a natural experiment that allows researchers to observe the effects of exchange controls on stock market segmentation. This paper provides empirical evidence that, although the Venezuelan capital market as a whole was highly segmented before the controls were imposed, shares in the firm CANTV were, through its American Depositary Receipts (ADRs), partially integrated with the global market. Following the imposition of the ...

  1. Adjacent segment disease.

    Science.gov (United States)

    Virk, Sohrab S; Niedermeier, Steven; Yu, Elizabeth; Khan, Safdar N

    2014-08-01

    EDUCATIONAL OBJECTIVES As a result of reading this article, physicians should be able to: 1. Understand the forces that predispose adjacent cervical segments to degeneration. 2. Understand the challenges of radiographic evaluation in the diagnosis of cervical and lumbar adjacent segment disease. 3. Describe the changes in biomechanical forces applied to adjacent segments of lumbar vertebrae with fusion. 4. Know the risk factors for adjacent segment disease in spinal fusion. Adjacent segment disease (ASD) is a broad term encompassing many complications of spinal fusion, including listhesis, instability, herniated nucleus pulposus, stenosis, hypertrophic facet arthritis, scoliosis, and vertebral compression fracture. The area of the cervical spine where most fusions occur (C3-C7) is adjacent to a highly mobile upper cervical region, and this contributes to the biomechanical stress put on the adjacent cervical segments postfusion. Studies have shown that after fusion surgery, there is increased load on adjacent segments. Definitive treatment of ASD is a topic of continuing research, but in general, treatment choices are dictated by patient age and degree of debilitation. Investigators have also studied the risk factors associated with spinal fusion that may predispose certain patients to ASD postfusion, and these data are invaluable for properly counseling patients considering spinal fusion surgery. Biomechanical studies have confirmed the added stress on adjacent segments in the cervical and lumbar spine. The diagnosis of cervical ASD is complicated given the imprecise correlation of radiographic and clinical findings. Although radiological and clinical diagnoses do not always correlate, radiographs and clinical examination dictate how a patient with prolonged pain is treated. Options for both cervical and lumbar spine ASD include fusion and/or decompression. Current studies are encouraging regarding the adoption of arthroplasty in spinal surgery, but more long

  2. An adipose segmentation and quantification scheme for the intra abdominal region on minipigs

    Science.gov (United States)

    Engholm, Rasmus; Dubinskiy, Aleksandr; Larsen, Rasmus; Hanson, Lars G.; Christoffersen, Berit Østergaard

    2006-03-01

    This article describes a method for automatic segmentation of the abdomen into three anatomical regions: subcutaneous, retroperitoneal and visceral. For the last two regions the amount of adipose tissue (fat) is quantified. According to recent medical research, the distinction between retroperitoneal and visceral fat is important for studying metabolic syndrome, which is closely related to diabetes. However previous work has neglected to address this point, treating the two types of fat together. We use T1-weighted three-dimensional magnetic resonance data of the abdomen of obese minipigs. The pigs were manually dissected right after the scan, to produce the "ground truth" segmentation. We perform automatic segmentation on a representative slice, which on humans has been shown to correlate with the amount of adipose tissue in the abdomen. The process of automatic fat estimation consists of three steps. First, the subcutaneous fat is removed with a modified active contour approach. The energy formulation of the active contour exploits the homogeneous nature of the subcutaneous fat and the smoothness of the boundary. Subsequently the retroperitoneal fat located around the abdominal cavity is separated from the visceral fat. For this, we formulate a cost function on a contour, based on intensities, edges, distance to center and smoothness, so as to exploit the properties of the retroperitoneal fat. We then globally optimize this function using dynamic programming. Finally, the fat content of the retroperitoneal and visceral regions is quantified based on a fuzzy c-means clustering of the intensities within the segmented regions. The segmentation proved satisfactory by visual inspection, and closely correlated with the manual dissection data. The correlation was 0.89 for the retroperitoneal fat, and 0.74 for the visceral fat.

  3. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Full Text Available Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on the realization of the company's strategic objectives, requires a segmented approach to the market that appreciates differences in the expectations and preferences of customers. One of the significant activities in the strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis. At the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria on which market segmentation can be based. The paper considers the effectiveness and efficiency of different market segmentation criteria based on empirical research into customer expectations and preferences. The analysis includes traditional criteria and criteria based on a behavioral model. The research implications are analyzed from the perspective of selecting the most adequate market segmentation criteria in the strategic planning of marketing activities.

  4. Skin Images Segmentation

    Directory of Open Access Journals (Sweden)

    Ali E. Zaart

    2010-01-01

    Full Text Available Problem statement: Image segmentation is a fundamental step in many applications of image processing. Skin cancer has been the most common of all new cancers detected each year. When skin cancer is detected at an early stage, simple and economical treatment can usually cure it. An accurate segmentation of skin images can help the diagnosis by defining the region of the cancer well. The principal approach to segmentation is based on thresholding (classification), which is tied to the problem of threshold estimation. Approach: The objective of this study is to develop a method to segment skin images based on a mixture of Beta distributions. We assume that the data in skin images can be modeled by a mixture of Beta distributions. We used an unsupervised learning technique with the Beta distribution to estimate the statistical parameters of the data in the skin image and then estimate the thresholds for segmentation. Results: The proposed method of skin image segmentation was implemented and tested on different skin images. We obtained very good results compared with the same technique using a Gamma distribution. Conclusion: The experiments showed that the proposed method obtains very good results but requires more testing on different types of skin images.
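
    One compact way to realize the approach described above is to fit a two-component Beta mixture with EM and place the threshold where the weighted component densities cross, as sketched below. The moment-matched M-step and the use of exactly two components are assumptions for illustration, not the paper's estimator.

```python
# Sketch: threshold estimation from a two-component Beta mixture fitted by EM.
# Intensities must be scaled to the open interval (0, 1).
import numpy as np
from scipy.stats import beta as beta_dist

def fit_beta_mixture(x, n_iter=50):
    w = np.array([0.5, 0.5])
    params = [(2.0, 5.0), (5.0, 2.0)]            # initial (alpha, beta) guesses
    for _ in range(n_iter):
        # E-step: component responsibilities for every pixel
        pdf = np.vstack([w[k] * beta_dist.pdf(x, *params[k]) for k in range(2)])
        r = pdf / (pdf.sum(axis=0) + 1e-12)
        # M-step: mixture weights and moment-matched Beta parameters
        w = r.mean(axis=1)
        params = []
        for k in range(2):
            mu = np.average(x, weights=r[k])
            var = np.average((x - mu) ** 2, weights=r[k]) + 1e-9
            common = mu * (1 - mu) / var - 1
            params.append((max(mu * common, 1e-3), max((1 - mu) * common, 1e-3)))
    return w, params

def mixture_threshold(x):
    """Intensity where the two weighted component densities cross."""
    w, params = fit_beta_mixture(x)
    means = [a / (a + b) for a, b in params]
    lo, hi = sorted(means)
    grid = np.linspace(lo + 1e-6, hi - 1e-6, 500)
    d0 = w[0] * beta_dist.pdf(grid, *params[0])
    d1 = w[1] * beta_dist.pdf(grid, *params[1])
    return grid[np.argmin(np.abs(d0 - d1))]
```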

  5. 'Grounded' Politics

    DEFF Research Database (Denmark)

    Schmidt, Garbi

    2012-01-01

    ...play within one particular neighbourhood: Nørrebro in the Danish capital, Copenhagen. The article introduces the concept of grounded politics to analyse how groups of Muslim immigrants in Nørrebro use the space, relationships and history of the neighbourhood for identity-political statements. ... The article further describes how national political debates over the Muslim presence in Denmark affect identity-political manifestations within Nørrebro. By using Duncan Bell's concept of mythscape (Bell, 2003), the article shows how some political actors idealize Nørrebro's past to contest the present...

  6. Rediscovering market segmentation.

    Science.gov (United States)

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  7. Segmented conjugated polymers

    Indian Academy of Sciences (India)

    G Padmanaban; S Ramakrishnan

    2003-08-01

    Segmented conjugated polymers, wherein the conjugation is randomly truncated by varying lengths of non-conjugated segments, form an interesting class of polymers as they not only represent systems of varying stiffness, but also ones where the backbone can be construed as being made up of chromophores of varying excitation energies. The latter feature, especially when the chromophores are fluorescent, as in MEHPPV, makes these systems particularly interesting from the photophysics point of view. Segmented MEHPPV-x samples, where x represents the mole fraction of conjugated segments, were prepared by a novel approach that utilizes a suitable precursor wherein selective elimination of one of the two eliminatable groups is effected; the uneliminated units serve as conjugation truncations. Control of the composition x of the precursor therefore permits one to prepare segmented MEHPPV-x samples with varying levels of conjugation (elimination). Using fluorescence spectroscopy, we have seen that even in single isolated polymer chains, energy migration from the shorter (higher energy) chromophores to longer (lower energy) ones occurs, the extent of which depends on the level of conjugation. Further, by varying the solvent composition, it is seen that the extent of energy transfer and the formation of poorly emissive inter-chromophore excitons are greatly enhanced with increasing amounts of non-solvent. A typical S-shaped curve represents the variation of emission yields as a function of composition, suggestive of a cooperative collapse of the polymer coil, reminiscent of conformational transitions seen in biological macromolecules.

  8. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion stings has been a major public health problem in developing countries. Despite the high rate of death resulting from scorpion stings, little is reported in the literature on intelligent devices and systems for automatic detection of scorpions. This paper proposes a digital image processing approach, based on the fluorescence characteristics of scorpions under ultraviolet (UV) light, for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination, followed by colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from the other background components in the acquired image. Two approaches to image segmentation are also proposed in this work, namely a simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results obtained show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
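
    The green-channel observation above translates almost directly into code. The sketch below is an illustration rather than the authors' system; Otsu's method is assumed as the threshold selector, whereas the paper evaluates a simple-average technique and K-means.

```python
# Sketch of green-channel thresholding for UV scorpion images
# (threshold selection by Otsu's method is an assumption).
import numpy as np
from skimage.filters import threshold_otsu

def segment_scorpion(rgb_uv_image):
    """rgb_uv_image: (H, W, 3) uint8 array captured under UV light."""
    green = rgb_uv_image[:, :, 1].astype(float)
    t = threshold_otsu(green)       # scorpions fluoresce brightly in green
    return green > t                # boolean foreground mask
```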

  9. Segmented heterochromia in scalp hair.

    Science.gov (United States)

    Yoon, Kyeong Han; Kim, Daehwan; Sohn, Seonghyang; Lee, Won Soo

    2003-12-01

    Segmented heterochromia of scalp hair is characterized by the irregularly alternating segmentation of hair into dark and light bands and is known to be associated with iron deficiency anemia. The authors report the case of an 11-year-old boy with segmented heterochromia associated with iron deficiency anemia. After 11 months of iron replacement, the boy's segmented heterochromic hair recovered completely.

  10. Optimally segmented magnetic structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bahl, Christian; Bjørk, Rasmus;

    We present a semi-analytical algorithm for magnet design problems, which calculates the optimal way to subdivide a given design region into uniformly magnetized segments. The availability of powerful rare-earth magnetic materials such as Nd-Fe-B has broadened the range of applications of permanent magnets ... or are applicable only to analytically solvable geometries [4]. In addition, some questions remained fundamentally unanswered, such as how to segment a given design into N uniformly magnetized pieces. Our method calculates the globally optimal shape and magnetization direction of each segment inside a certain design area, with an optional constraint on the total amount of magnetic material. The method can be applied to any objective functional which is linear with respect to the field, and with any combination of linear materials. Being based on an analytical-optimization approach, the algorithm is not computationally...

  11. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for segmentation of document images with complex structure. The technique, based on the GLCM (Grey Level Co-occurrence Matrix), is used to segment this type of document into three regions, namely 'graphics', 'background' and 'text'. Very briefly, the method divides the document image into blocks of a size chosen after a series of tests, and then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the last step of segmentation is obtained by grouping connected pixels. Two performance measurements are carried out for both the graphics and text zones; we obtained a classification rate of 98.3% and a misclassification rate of 1.79%.
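
    A block-wise GLCM pipeline of this kind can be sketched with scikit-image and scikit-learn as follows. This is not the paper's implementation: the block size, the GLCM offsets, and the use of energy, contrast and standard deviation (graycoprops does not expose the entropy-based features listed above) are illustrative substitutions, and the function names assume scikit-image 0.19 or later.

```python
# Sketch: block-wise GLCM texture features clustered with k-means.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def block_glcm_features(gray, block=32, levels=64):
    """gray: 2-D array; returns per-block texture features and block origins."""
    img = (gray / gray.max() * (levels - 1)).astype(np.uint8)
    feats, coords = [], []
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block]
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            feats.append([graycoprops(glcm, 'energy').mean(),
                          graycoprops(glcm, 'contrast').mean(),
                          patch.std()])
            coords.append((y, x))
    return np.array(feats), coords

def classify_blocks(gray, n_classes=3):
    """Assign each block to one of n_classes (e.g. graphics/background/text)."""
    feats, coords = block_glcm_features(gray)
    return KMeans(n_clusters=n_classes, n_init=10).fit_predict(feats), coords
```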

  12. Microscopic Halftone Image Segmentation

    Institute of Scientific and Technical Information of China (English)

    WANG Yong-gang; YANG Jie; DING Yong-sheng

    2004-01-01

    Microscopic halftone image recognition and analysis can provide quantitative evidence for printing quality control and fault diagnosis of printing devices, and halftone image segmentation is one of the significant steps in the procedure. Automatic segmentation of microscopic dots is realized with the aid of the Fuzzy C-Means (FCM) method, which takes account of the fuzziness of the halftone image and adequately utilizes its color information. Examples show that the technique is effective and simple, with better noise immunity than some common methods. In addition, the segmentation results obtained by the FCM in different color spaces are compared, which indicates that the method using the FCM in the f1f2f3 color space is superior to the rest.

  13. GPS Control Segment Improvements

    Science.gov (United States)

    2015-04-29

    GPS Control Segment Improvements. Mr. Tim McIntyre, GPS Product Support Manager, GPS Ops Support and Sustainment Division, Air Force Space Command, Space and Missile Systems Center, Peterson AFB, CO 80916. Report date: 29 April 2015.

  14. Statistical Images Segmentation

    Directory of Open Access Journals (Sweden)

    Corina Curilă

    2008-05-01

    Full Text Available This paper deals with fuzzy statistical image segmentation. We introduce a new hierarchical Markovian fuzzy hidden field model, which extends the classical Pérez and Heitz hard model to the fuzzy case. Two fuzzy statistical segmentation methods related to the proposed model are defined in this paper, and we show via simulations that they are competitive with, and in some cases better than, the classical Maximum Posterior Mode (MPM) based methods. Furthermore, they are faster, which should facilitate extensions to more than two hard classes in future work. In addition, the proposed model is applicable to multiscale segmentation and multiresolution image fusion problems.

  15. Open System of Agile Ground Stations Project

    Data.gov (United States)

    National Aeronautics and Space Administration — There is an opportunity to build the HETE-2/TESS network of ground stations into an innovative and powerful Open System of Agile Stations, by developing a low-cost...

  16. Benchmark for license plate character segmentation

    Science.gov (United States)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of much research in recent years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of ALPR is the license plate character segmentation (LPCS) step, because its effectiveness is required to be (near) optimal to achieve a high recognition rate in the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of ALPR, together with an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2,000 Brazilian license plates consisting of 14,000 alphanumeric symbols and their corresponding bounding-box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation of the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving accurate OCR.
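
    For context, the plain bounding-box Jaccard coefficient is shown below together with a centroid-penalised variant that only gestures at the idea behind the proposed Jaccard-centroid coefficient; the penalty term is an assumption and the published definition is not reproduced here.

```python
# Bounding-box Jaccard (IoU) plus an illustrative, centroid-penalised variant.
def jaccard(box_a, box_b):
    """Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area(box_a) + area(box_b) - inter)

def jaccard_centroid_like(pred, gt):
    """IoU scaled down by the centroid offset (a guess, not the paper's formula)."""
    cx = lambda b: ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)
    (px, py), (gx, gy) = cx(pred), cx(gt)
    diag = ((gt[2] - gt[0]) ** 2 + (gt[3] - gt[1]) ** 2) ** 0.5
    offset = ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5 / diag
    return jaccard(pred, gt) * max(0.0, 1.0 - offset)
```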

  17. A Study of the Method for Software-Development Management of the Herschel Science Ground Segment

    Institute of Scientific and Technical Information of China (English)

    张洁; 黄茂海

    2015-01-01

    Currently, software management for space-observatory projects in China adopts methods based on the waterfall model and on requirement management for long-term fixed requirements. These methods cannot meet the demands of developing increasingly complex ground-based application systems for space-based observation. In this paper we present a study of the software-development management method of the Herschel Science Ground Segment (HSGS), a first-class, successful model of software-development management worldwide. The HSGS uses a branched-development method which, based on an iterative model, adopts a Software Project Management Plan (SPMP) that is practically reasonable and applicable. The implementation of the method in the HSGS comprehensively meets the requirements of the HSGS and of the payloads of the entire project. The method is an open management approach capable of incorporating application requirements from use cases as they emerge in practice. With this method the HSGS departs from the conventional situation in which the ground-based application system of a space observatory is developed only at the final stage of the project. Instead, the HSGS works right from the payload-development stage and is frequently adjusted to meet changing requirements, so that it can always support data-analysis systems with high efficiency. Instrument engineers and scientists can be trained in the operation of the scientific instruments from the start of the project, reducing the chance of operational mistakes, while the HSGS software can be improved in the course of operations to ensure mission success. These merits of the HSGS are absent from the management of Chinese space projects. Our study of the HSGS therefore suggests a new method and a new line of thought for the software-engineering management of space-observatory projects in China.

  18. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    Comparative molecular, developmental and morphogenetic analyses show that the three major segmented animal groups- Lophotrochozoa, Ecdysozoa and Vertebrata-use a wide range of ontogenetic pathways to establish metameric body organization. Even in the life history of a single specimen, different m...

  19. [Segmental testicular infarction].

    Science.gov (United States)

    Ripa Saldías, L; Guarch Troyas, R; Hualde Alfaro, A; de Pablo Cárdenas, A; Ruiz Ramo, M; Pinós Paul, M

    2006-02-01

    We report the case of a 47-year-old man previously diagnosed with a left hydrocele. After recent mild left testicular pain, an ultrasonographic study revealed a solid hypoechoic testicular lesion surrounded by a large hydrocele, suggesting a testicular neoplasm. Radical inguinal orchiectomy was performed and the pathologic study showed segmental testicular infarction. No malignancy was found. We review the literature on the topic.

  20. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured or non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...

  1. Simple system for locating ground loops.

    Science.gov (United States)

    Bellan, P M

    2007-06-01

    A simple low-cost system for rapid identification of the cables causing ground loops in complex instrumentation configurations is described. The system consists of an exciter module that generates a 100 kHz ground loop current and a detector module that determines which cable conducts this test current. Both the exciter and detector are magnetically coupled to the ground circuit so there is no physical contact to the instrumentation system under test.

  2. Construction of five-piece segment using high-fluidity concrete; Koryudo togo bunkatsu segment no seko jisseki

    Energy Technology Data Exchange (ETDEWEB)

    Suda, Y.; Fukuzawa, I.; Matsunaga, H. [Tokyo Electric Power Co. Inc., Tokyo (Japan)

    1998-09-05

    After a review of RC segment materials and the number of joints required, a segment of five equal pieces using high-fluidity concrete was employed in conduit construction work (for 500 kV transmission lines) near Hommoku Wharf, Kanagawa Prefecture, in order to reduce the shield tunnel construction cost (the segment cost rate is approximately 1/3). The use of high-fluidity concrete raises the materials cost slightly, but factory overhead and expenditures for fabrication and formwork construction are lowered, because some working processes, such as compaction by vibration, surface finishing, and formwork movement, may be dispensed with, and because the formwork main body may be simplified in structure. Although the standard tunnel specifications mention a segment consisting of 6 pieces, the 5-piece segment adopted in this construction work lowers the cost by approximately 20% thanks to the reduction in the number of joints between segment pieces and between rings. In the shield tunnelling process, assembly is easier, and the construction work was executed without any hitches deserving special mention. The new technique is comparable to conventional ones in terms of quality and process management. Since the number of joints per piece between rings is reduced from three in the conventional method to two in this new method, two core-sensing pins are provided per piece between rings to assure accuracy and shorten assembly time. 2 refs., 8 figs., 11 tabs.

  3. Design and Optimization of the SPOT Primary Mirror Segment

    Science.gov (United States)

    Budinoff, Jason G.; Michaels, Gregory J.

    2005-01-01

    The 3m Spherical Primary Optical Telescope (SPOT) will utilize a single ring of 0.86111 point-to-point hexagonal mirror segments. The f/2.85 spherical mirror blanks will be fabricated by the same replication process used for mass-produced commercial telescope mirrors. Diffraction-limited phasing will require a segment-to-segment radius of curvature (ROC) variation of approximately 1 micron. ROC variations of low-cost, replicated segments are estimated to be almost 1 mm, necessitating a method for segment ROC adjustment and matching. A mechanical architecture has been designed that allows segment ROC to be adjusted by up to 400 microns while introducing minimal figure error, allowing segment-to-segment ROC matching. A key feature of the architecture is the unique back profile of the mirror segments. The back profile of the mirror was developed with shape optimization in MSC.Nastran(TradeMark) using optical performance response equations written with SigFit. A candidate back profile was generated which minimized ROC-adjustment-induced surface error while meeting the constraints imposed by the fabrication method. Keywords: optimization, radius of curvature, Pyrex spherical mirror, SigFit

  4. Segmentation in Tardigrada and diversification of segmental patterns in Panarthropoda.

    Science.gov (United States)

    Smith, Frank W; Goldstein, Bob

    2016-10-31

    The origin and diversification of segmented metazoan body plans has fascinated biologists for over a century. The superphylum Panarthropoda includes three phyla of segmented animals-Euarthropoda, Onychophora, and Tardigrada. This superphylum includes representatives with relatively simple and representatives with relatively complex segmented body plans. At one extreme of this continuum, euarthropods exhibit an incredible diversity of serially homologous segments. Furthermore, distinct tagmosis patterns are exhibited by different classes of euarthropods. At the other extreme, all tardigrades share a simple segmented body plan that consists of a head and four leg-bearing segments. The modular body plans of panarthropods make them a tractable model for understanding diversification of animal body plans more generally. Here we review results of recent morphological and developmental studies of tardigrade segmentation. These results complement investigations of segmentation processes in other panarthropods and paleontological studies to illuminate the earliest steps in the evolution of panarthropod body plans.

  5. SPEED: the Segmented Pupil Experiment for Exoplanet Detection

    CERN Document Server

    Patrice, Martinez; Carole, Gouvret; Julien, Dejongue; Jean-Baptiste, Daban; Alain, Spang; Frantz, Martinache; Mathilde, Beaulieu; Pierre, Janin-Potiron; Lyu, Abe; Yan, Fantei-Caujolle; Damien, Mattei; Sebastien, Ottogali

    2014-01-01

    Searching for nearby exoplanets with direct imaging is one of the major scientific drivers for both space and ground-based programs. While the second generation of dedicated high-contrast instruments on 8-m class telescopes is about to greatly expand the sample of directly imaged planets, exploring the planetary parameter space to hitherto-unseen regions ideally down to Terrestrial planets is a major technological challenge for the forthcoming decades. This requires increasing spatial resolution and significantly improving high contrast imaging capabilities at close angular separations. Segmented telescopes offer a practical path toward dramatically enlarging telescope diameter from the ground (ELTs), or achieving optimal diameter in space. However, translating current technological advances in the domain of high-contrast imaging for monolithic apertures to the case of segmented apertures is far from trivial. SPEED (the segmented pupil experiment for exoplanet detection) is a new instrumental facility in deve...

  6. Parallel fuzzy connected image segmentation on GPU

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge for these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. Methods: In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Results: Our experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on the CPU, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. Conclusions: The authors developed a parallel algorithm for the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set. PMID:21859037
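
    To make the two computational tasks concrete, the CPU sketch below computes a simple intensity-based affinity between 4-neighbours and propagates fuzzy connectedness from a seed with a Dijkstra-like best-first search. The Gaussian affinity, the single seed and the 2-D setting are assumptions for illustration; the paper's CUDA kernels and exact affinity definition are not reproduced.

```python
# CPU sketch of (i) fuzzy affinity and (ii) fuzzy connectedness for a 2-D image.
# Connectedness to the seed is the best "weakest link" path strength.
import heapq
import numpy as np

def affinity(a, b, sigma=10.0):
    """Affinity between two neighbouring pixels, decaying with intensity gap."""
    return np.exp(-((float(a) - float(b)) ** 2) / (2 * sigma ** 2))

def fuzzy_connectedness(img, seed, sigma=10.0):
    """img: 2-D array; seed: (row, col). Returns a connectedness map in [0, 1]."""
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        strength, (y, x) = heapq.heappop(heap)
        strength = -strength
        if strength < conn[y, x]:
            continue                      # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                cand = min(strength, affinity(img[y, x], img[ny, nx], sigma))
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn
```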

  7. Joint Rendering and Segmentation of Free-Viewpoint Video

    Directory of Open Access Journals (Sweden)

    Ishii Masato

    2010-01-01

    Full Text Available This paper presents a method that jointly performs synthesis and object segmentation of free-viewpoint video using multiview video as the input. This method is designed to achieve robust segmentation from online video input without per-frame user interaction and precomputations. This method shares a calculation process between the synthesis and segmentation steps; the matching costs calculated through the synthesis step are adaptively fused with other cues depending on the reliability in the segmentation step. Since the segmentation is performed for arbitrary viewpoints directly, the extracted object can be superimposed onto another 3D scene with geometric consistency. We can observe that the object and new background move naturally along with the viewpoint change as if they existed together in the same space. In the experiments, our method can process online video input captured by a 25-camera array and show the result image at 4.55 fps.

  8. Market segmentation: Venezuelan ADRs

    Directory of Open Access Journals (Sweden)

    Urbi Garay

    2012-12-01

    Full Text Available The controls on foreign exchange imposed by Venezuela in 2003 constitute a natural experiment that allows researchers to observe the effects of exchange controls on stock market segmentation. This paper provides empirical evidence that although the Venezuelan capital market as a whole was highly segmented before the controls were imposed, the shares of the firm CANTV were, through their American Depositary Receipts (ADRs), partially integrated with the global market. Following the imposition of the exchange controls this integration was lost. The research also documents the spectacular and apparently contradictory rise experienced by the Caracas Stock Exchange during the serious economic crisis of 2003. It is argued that, as happened in Argentina in 2002, the rise in share prices occurred because the depreciation of the Bolívar in the parallel currency market increased the local price of the stocks that had associated ADRs, which were negotiated in dollars.

  9. Segmentation Using Symmetry Deviation

    DEFF Research Database (Denmark)

    Hollensen, Christian; Højgaard, L.; Specht, L.;

    2011-01-01

    ...segmentations on manual contours were evaluated using the concordance index and sensitivity for the hypopharyngeal patients. The resulting concordance index and sensitivity were compared with the result of using a threshold of 3 SUV, using a paired t-test. Results: The anatomical and symmetrical atlas was constructed ... and a concordance index and sensitivity of 0.43±0.15 and 0.56±0.18, respectively, were obtained. This was compared with the concordance index of segmentation using an absolute threshold of 3 SUV, which gave 0.41±0.16 and 0.51±0.19 for concordance index and sensitivity respectively, yielding p-values of 0.33 and 0.01 for a paired t-test.

  10. Quantitative evaluation of six graph based semi-automatic liver tumor segmentation techniques using multiple sets of reference segmentation

    Science.gov (United States)

    Su, Zihua; Deng, Xiang; Chefd'hotel, Christophe; Grady, Leo; Fei, Jun; Zheng, Dong; Chen, Ning; Xu, Xiaodong

    2011-03-01

    Graph based semi-automatic tumor segmentation techniques have demonstrated great potential in efficiently measuring tumor size from CT images. Comprehensive and quantitative validation is essential to ensure the efficacy of graph based tumor segmentation techniques in clinical applications. In this paper, we present a quantitative validation study of six graph based 3D semi-automatic tumor segmentation techniques using multiple sets of expert segmentation. The six segmentation techniques are Random Walk (RW), Watershed based Random Walk (WRW), LazySnapping (LS), GraphCut (GHC), GrabCut (GBC), and GrowCut (GWC) algorithms. The validation was conducted using clinical CT data of 29 liver tumors and four sets of expert segmentation. The performance of the six algorithms was evaluated using accuracy and reproducibility. The accuracy was quantified using Normalized Probabilistic Rand Index (NPRI), which takes into account of the variation of multiple expert segmentations. The reproducibility was evaluated by the change of the NPRI from 10 different sets of user initializations. Our results from the accuracy test demonstrated that RW (0.63) showed the highest NPRI value, compared to WRW (0.61), GWC (0.60), GHC (0.58), LS (0.57), GBC (0.27). The results from the reproducibility test indicated that GBC is more sensitive to user initialization than the other five algorithms. Compared to previous tumor segmentation validation studies using one set of reference segmentation, our evaluation methods use multiple sets of expert segmentation to address the inter or intra rater variability issue in ground truth annotation, and provide quantitative assessment for comparing different segmentation algorithms.
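
    As a usage illustration of one of the six techniques compared above, the random walker, the sketch below relies on scikit-image; the study's own implementations, seed placement and CT preprocessing are not reproduced, and the beta value is the library default.

```python
# Semi-automatic tumor segmentation via the random walker in scikit-image.
# Seeds: label 1 = tumor, label 2 = background, 0 = unlabeled.
import numpy as np
from skimage.segmentation import random_walker

def segment_tumor(image, tumor_seeds, background_seeds, beta=130):
    """image: 2-D/3-D array; seeds: index arrays or boolean masks."""
    markers = np.zeros(image.shape, dtype=np.int32)
    markers[tumor_seeds] = 1
    markers[background_seeds] = 2
    labels = random_walker(image, markers, beta=beta, mode='bf')
    return labels == 1      # boolean tumor mask
```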

  11. Laser ranging ground station development

    Science.gov (United States)

    Faller, J. E.

    1973-01-01

    The employment of ground stations to conduct laser range measurements of the lunar distance is discussed. The advantages of additional ground stations for this purpose are analyzed. The goals that are desirable for any new type of ranging station are: (1) full-time availability of the station for laser ranging, (2) optimization for signal strength, (3) automation to the greatest extent possible, (4) the capability for blind pointing, (5) reasonable initial and modest operational costs, and (6) transportability to enhance the value of the station for geophysical purposes.

  12. Automated segmentation of atherosclerotic histology based on pattern classification

    Directory of Open Access Journals (Sweden)

    Arna van Engelen

    2013-01-01

    Full Text Available Background: Histology sections provide accurate information on atherosclerotic plaque composition, and are used in various applications. To our knowledge, no automated systems for plaque component segmentation in histology sections currently exist. Materials and Methods: We perform pixel-wise classification of fibrous, lipid, and necrotic tissue in Elastica Von Gieson-stained histology sections, using features based on color channel intensity and local image texture and structure. We compare an approach where we train on independent data to an approach where we train on one or two sections per specimen in order to segment the remaining sections. We evaluate the results on segmentation accuracy in histology, and we use the obtained histology segmentations to train plaque component classification methods in ex vivo magnetic resonance imaging (MRI) and in vivo MRI and computed tomography (CT). Results: In leave-one-specimen-out experiments on 176 histology slices of 13 plaques, a pixel-wise accuracy of 75.7 ± 6.8% was obtained. This increased to 77.6 ± 6.5% when two manually annotated slices of the specimen to be segmented were used for training. Rank correlations of relative component volumes with manually annotated volumes were high in this situation (P = 0.82-0.98). Using the obtained histology segmentations to train plaque component classification methods in ex vivo MRI and in vivo MRI and CT resulted in image segmentations similar to those obtained by training on a fully manual ground truth. The size of the lipid-rich necrotic core was significantly smaller when training on fully automated histology segmentations than when manually annotated histology sections were used. This difference was reduced and no longer statistically significant when one or two slices per specimen were manually annotated for histology segmentation. Conclusions: Good histology segmentations can be obtained by automated segmentation
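
    Pixel-wise supervised classification of the kind described can be sketched as follows. The random forest classifier and the particular colour/texture features (Gaussian-smoothed intensity and gradient magnitude) are assumptions for illustration; the paper's feature set and classifier setup are richer than this.

```python
# Sketch: pixel-wise tissue classification from colour and local-structure cues.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def pixel_features(rgb):
    """Stack colour channels with smoothed intensity and gradient magnitude."""
    gray = rgb.mean(axis=2)
    smooth = ndimage.gaussian_filter(gray, sigma=2)
    grad = ndimage.gaussian_gradient_magnitude(gray, sigma=2)
    return np.dstack([rgb.astype(float), smooth, grad]).reshape(-1, 5)

def train_and_segment(train_rgb, train_labels, test_rgb):
    """train_labels: per-pixel class map aligned with train_rgb."""
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(pixel_features(train_rgb), train_labels.ravel())
    pred = clf.predict(pixel_features(test_rgb))
    return pred.reshape(test_rgb.shape[:2])
```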

  13. Connecting textual segments

    DEFF Research Database (Denmark)

    Brügger, Niels

    2017-01-01

    In “Connecting textual segments: A brief history of the web hyperlink”, Niels Brügger investigates the history of one of the most fundamental features of the web: the hyperlink. Based on the argument that the web hyperlink is best understood if it is seen as another step in a much longer and broader ... in stand-alone computers and in local and global digital networks.

  14. Market segmentation in behavioral perspective.

    OpenAIRE

    Wells, V.K.; Chang, S. W.; Oliveira-Castro, J.M.; Pallister, J.

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847 consumers and from a total of 76,682 individual purchases, brand choice and price and reinforcement responsiveness were assessed for each segment a...

  15. Segmental Refinement: A Multigrid Technique for Data Locality

    KAUST Repository

    Adams, Mark F.

    2016-08-04

    We investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. We present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.

  16. Tunnel Cost-Estimating Methods.

    Science.gov (United States)

    1981-10-01

    LINING calculates the lining costs and the formwork cost for a tunnel or shaft segment. Tunnel Cost-Estimating Methods, R. D. Bennett, Army Engineer Waterways Experiment Station, Vicksburg; Technical Report GL-81-10, October 1981 (unclassified).

  17. Market Segmentation for Information Services.

    Science.gov (United States)

    Halperin, Michael

    1981-01-01

    Discusses the advantages and limitations of market segmentation as strategy for the marketing of information services made available by nonprofit organizations, particularly libraries. Market segmentation is defined, a market grid for libraries is described, and the segmentation of information services is outlined. A 16-item reference list is…

  18. Segmenting the Adult Education Market.

    Science.gov (United States)

    Aurand, Tim

    1994-01-01

    Describes market segmentation and how the principles of segmentation can be applied to the adult education market. Indicates that applying segmentation techniques to adult education programs results in programs that are educationally and financially satisfying and serve an appropriate population. (JOW)

  1. Optimization of a ground coupled heat pump

    Energy Technology Data Exchange (ETDEWEB)

    Baxter, V.D.; Catan, M.A.

    1984-01-01

    A cooperative analytical project has optimized a horizontal ground coil heat pump system for the Pittsburgh climate. This is the first step in the exploration of several advanced designs including various ground coil devices and advanced heat pump components. The project made use of new and existing design tools to simulate system performance and determine first cost. The system life cycle cost was minimized while constraining the system to meet the design day cooling load using a function minimizing program. Among the system parameters considered were: air-to-refrigerant frontal area, air-to-refrigerant fin pitch, air-to-refrigerant air flowrate, compressor displacement, liquid-to-refrigerant coil length, liquid-to-refrigerant coil diameter, ground coil fluid flowrate, ground coil length, and ground coil depth.

  2. Optimized ground coupled heat pump mechanical package

    Energy Technology Data Exchange (ETDEWEB)

    Catan, M.A.

    1987-01-01

    This project addresses the question of how well a ground coupled heat pump system could perform with a heat pump which was designed specifically for such systems operating in a northern climate. Conventionally, systems are designed around water source heat pumps which are not designed for ground coupled heat pump application. The objective of the project is to minimize the life cycle cost for a ground coupled system given the freedom to design the heat pump and the ground coil in concert. In order to achieve this objective a number of modeling tools were developed which will likely be of interest in their own right.

  3. Signs of segmentation?

    DEFF Research Database (Denmark)

    Ilsøe, Anna

    2012-01-01

    This article addresses the contribution of decentralized collective bargaining to the development of different forms of flexicurity for different groups of employees on the Danish labour market. Based on five case studies of company-level bargaining on flexible working hours in Danish industry...... the text of the agreements. On the other hand, less flexible employees often face difficulties in meeting the demands of the agreements and may ultimately be forced to leave the company and rely on unemployment benefits and active labour market policies. In a flexicurity perspective, this development seems...... to imply a segmentation of the Danish workforce regarding hard and soft versions of flexicurity....

  4. Noncooperative Iris Segmentation

    Directory of Open Access Journals (Sweden)

    Elsayed Mostafa

    2012-01-01

    Full Text Available In noncooperative iris recognition one should deal with uncontrolled behavior of the subject as well as uncontrolled lighting conditions. That means eyelid and eyelash occlusion, non-uniform intensities, reflections, imperfect focus, and orientation, among others, have to be considered. To cope with this situation, a noncooperative iris segmentation algorithm is proposed, based on numerically stable direct least-squares fitting of ellipses and a modified Chan-Vese model (local binary fitting energy with variational level set formulation). The proposed algorithm is tested using CASIA-IrisV3.

  5. Accurate Segmentation for Infrared Flying Bird Tracking

    Institute of Scientific and Technical Information of China (English)

    ZHENG Hong; HUANG Ying; LING Haibin; ZOU Qi; YANG Hao

    2016-01-01

    Bird strikes present a huge risk for air vehicles, especially since traditional airport bird surveillance is mainly dependent on inefficient human observation. To improve the effectiveness and efficiency of bird monitoring, computer vision techniques have been proposed to detect birds, determine bird flying trajectories, and predict aircraft takeoff delays. A flying bird with large deformation poses a great challenge to current tracking algorithms. We propose a segmentation-based approach that enables tracking to adapt to the varying shape of the bird. The approach works by segmenting the object within a region of interest, which is determined by the object localization method and heuristic edge information. The segmentation is performed by a Markov random field, which is trained with foreground and background Gaussian mixture models. Experiments demonstrate that the proposed approach can handle large deformations and outperforms the most state-of-the-art tracker in the infrared flying bird tracking problem.
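
    The data term of the segmentation step above (foreground and background Gaussian mixture models scoring pixels inside a region of interest) can be sketched as follows; the sketch deliberately omits the Markov random field smoothness term that the paper adds on top, and the number of mixture components is an assumption.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def gmm_foreground_mask(roi, fg_samples, bg_samples, n_components=3):
            """roi: HxW intensity patch; fg_samples/bg_samples: 1-D arrays of training intensities."""
            fg = GaussianMixture(n_components=n_components).fit(fg_samples.reshape(-1, 1))
            bg = GaussianMixture(n_components=n_components).fit(bg_samples.reshape(-1, 1))
            x = roi.reshape(-1, 1).astype(np.float64)
            # a pixel is labelled foreground when its log-likelihood under the
            # foreground mixture exceeds that under the background mixture
            return (fg.score_samples(x) > bg.score_samples(x)).reshape(roi.shape)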

  6. Ground based materials science experiments

    Science.gov (United States)

    Meyer, M. B.; Johnston, J. C.; Glasgow, T. K.

    1988-01-01

    The facilities at the Microgravity Materials Science Laboratory (MMSL) at the Lewis Research Center, created to offer immediate and low-cost access to ground-based testing facilities for industrial, academic, and government researchers, are described. The equipment in the MMSL falls into three categories: (1) devices which emulate some aspect of low gravitational forces, (2) specialized capabilities for 1-g development and refinement of microgravity experiments, and (3) functional duplicates of flight hardware. Equipment diagrams are included.

  8. Space/ground systems as cooperating agents

    Science.gov (United States)

    Grant, T. J.

    1994-01-01

    Within NASA and the European Space Agency (ESA) it is agreed that autonomy is an important goal for the design of future spacecraft and that this requires on-board artificial intelligence. NASA emphasizes deep space and planetary rover missions, while ESA considers on-board autonomy as an enabling technology for missions that must cope with imperfect communications. ESA's attention is on the space/ground system. A major issue is the optimal distribution of intelligent functions within the space/ground system. This paper describes the multi-agent architecture for space/ground systems (MAASGS) which would enable this issue to be investigated. A MAASGS agent may model a complete spacecraft, a spacecraft subsystem or payload, a ground segment, a spacecraft control system, a human operator, or an environment. The MAASGS architecture has evolved through a series of prototypes. The paper recommends that the MAASGS architecture should be implemented in the operational Dutch Utilization Center.

  9. Segmenting the prostate and rectum in CT imagery using anatomical constraints.

    Science.gov (United States)

    Chen, Siqi; Lovelock, D Michael; Radke, Richard J

    2011-02-01

    The automatic segmentation of the prostate and rectum from 3D computed tomography (CT) images is still a challenging problem, and is critical for image-guided therapy applications. We present a new, automatic segmentation algorithm based on deformable organ models built from previously segmented training data. The major contributions of this work are a new segmentation cost function based on a Bayesian framework that incorporates anatomical constraints from surrounding bones and a new appearance model that learns a nonparametric distribution of the intensity histograms inside and outside organ contours. We report segmentation results on 185 datasets of the prostate site, demonstrating improved performance over previous models.

  10. Segmented Target Design

    Science.gov (United States)

    Merhi, Abdul Rahman; Frank, Nathan; Gueye, Paul; Thoennessen, Michael; MoNA Collaboration

    2013-10-01

    A proposed segmented target would improve decay energy measurements of neutron-unbound nuclei. Experiments like this have been performed at the National Superconducting Cyclotron Laboratory (NSCL) located at Michigan State University. Many different nuclei are produced in such experiments, some of which immediately decay into a charged particle and neutron. The charged particles are bent by a large magnet and measured by a suite of charged particle detectors. The neutrons are measured by the Modular Neutron Array (MoNA) and Large Multi-Institutional Scintillation Array (LISA). With the current target setup, a nucleus in a neutron-unbound state is produced with a radioactive beam impinged upon a beryllium target. The resolution of these measurements is very dependent on the target thickness since the nuclear interaction point is unknown. In a segmented target using alternating layers of silicon detectors and Be-targets, the Be-target in which the nuclear reaction takes place would be determined. Thus the experimental resolution would improve. This poster will describe the improvement over the current target along with the status of the design. Work supported by Augustana College and the National Science Foundation grant #0969173.

  11. Cost Behavior

    DEFF Research Database (Denmark)

    Hoffmann, Kira

    The objective of this dissertation is to investigate determinants and consequences of asymmetric cost behavior. Asymmetric cost behavior arises if the change in costs is different for increases in activity compared to equivalent decreases in activity. In this case, costs are termed “sticky......” if the change is less when activity falls than when activity rises, whereas costs are termed “anti-sticky” if the change is more when activity falls than when activity rises. Understanding such cost behavior is especially relevant for decision-makers and financial analysts that rely on accurate cost information...

  13. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  15. Studies on the key parameters in segmental lining design

    Institute of Scientific and Technical Information of China (English)

    Zhenchang Guan; Tao Deng; Gang Wang; Yujing Jiang

    2015-01-01

    The uniform ring model and the shell-spring model for segmental lining design are reviewed in this article. The former is the most promising means of reflecting the real behavior of segmental lining, while the latter is the most popular means in practice due to its simplicity. To understand the relationship and the difference between these two models, both of them are applied to the engineering practice of Fuzhou Metro Line I, where the key parameters used in both models are described and compared. The effective ratio of bending rigidity η, reflecting the relative stiffness between segmental lining and surrounding ground, and the transfer ratio of bending moment ξ, reflecting the relative stiffness between segment and joint, which are the two key parameters used in the uniform ring model, are especially emphasized. Reasonable values for these two key parameters are calibrated by comparing the bending moments calculated from the two models. Through case studies, it is concluded that the effective ratio of bending rigidity η increases significantly with good soil properties, increases slightly with increasing overburden, and decreases slightly with increasing water head. Meanwhile, the transfer ratio of bending moment ξ seems to relate only to the properties of the segmental lining itself and has little relation to the ground conditions. These results could facilitate the design practice for Fuzhou Metro Line I, and could also provide a reference for other projects in similar scenarios.

  16. Lung tumor segmentation in PET images using graph cuts.

    Science.gov (United States)

    Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan

    2013-03-01

    The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures, than is possible with visual assessment alone and hence improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy-c-means, region-based active contour and tumor customized downhill.
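
    The 'monotonic downhill' constraint can be illustrated without the full graph-cut energy: grow a region outward from the SUV maximum and accept a neighbor only if its uptake does not exceed that of the voxel it was reached from, which is what prevents leakage into adjacent structures with similar or higher uptake. The sketch below is a simplified stand-in for the authors' actual cost function.

        import numpy as np
        from collections import deque

        def downhill_region_grow(suv, threshold):
            """Grow from the SUV-max voxel; accept 6-connected neighbors whose SUV is above
            `threshold` but does not exceed the SUV of the voxel they were reached from."""
            seed = np.unravel_index(np.argmax(suv), suv.shape)
            mask = np.zeros(suv.shape, dtype=bool)
            mask[seed] = True
            queue = deque([seed])
            offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            while queue:
                z, y, x = queue.popleft()
                for dz, dy, dx in offsets:
                    n = (z + dz, y + dy, x + dx)
                    if all(0 <= n[i] < suv.shape[i] for i in range(3)) and not mask[n]:
                        if threshold <= suv[n] <= suv[z, y, x]:
                            mask[n] = True
                            queue.append(n)
            return mask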

  17. Target segmentation in IR imagery using a wavelet-based technique

    Science.gov (United States)

    Sadjadi, Firooz A.

    1995-10-01

    Segmentation of ground based targets embedded in clutter obtained by airborne Infrared (IR) imaging sensors is one of the challenging problems in automatic target recognition. In this paper a new texture based segmentation technique is presented that uses the statistics of 2D wavelet decomposition components of the local sections of the image. A measure of statistical similarity is then used to segment the image and separate the target from the background. This technique is applied on a set of real sequential IR imagery and has shown to produce a high degree of segmentation accuracy across varying ranges.

  18. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Seoungjae Cho

    2014-01-01

    Full Text Available A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
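
    The voxel/heightmap reduction described above can be sketched compactly: quantize points into vertical columns, keep the lowest height per column (the lowermost heightmap), and flag points lying near that lowest height as ground. The cell size and height tolerance below are illustrative assumptions, and the simple height test stands in for the paper's voxel-count criterion.

        import numpy as np

        def ground_mask(points, cell=0.5, height_tol=0.3):
            """points: (N, 3) array of x, y, z. Returns a boolean ground flag per point."""
            ij = np.floor(points[:, :2] / cell).astype(np.int64)      # 2-D column index per point
            cols, inverse = np.unique(ij, axis=0, return_inverse=True)
            inverse = inverse.ravel()
            lowest = np.full(len(cols), np.inf)
            np.minimum.at(lowest, inverse, points[:, 2])              # lowermost height map per column
            # a point counts as ground when it lies close to its column's lowest height
            return points[:, 2] - lowest[inverse] < height_tol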

  19. Segmented heat exchanger

    Science.gov (United States)

    Baldwin, Darryl Dean; Willi, Martin Leo; Fiveland, Scott Byron; Timmons, Kristine Ann

    2010-12-14

    A segmented heat exchanger system for transferring heat energy from an exhaust fluid to a working fluid. The heat exchanger system may include a first heat exchanger for receiving incoming working fluid and the exhaust fluid. The working fluid and exhaust fluid may travel through at least a portion of the first heat exchanger in a parallel flow configuration. In addition, the heat exchanger system may include a second heat exchanger for receiving working fluid from the first heat exchanger and exhaust fluid from a third heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the second heat exchanger in a counter flow configuration. Furthermore, the heat exchanger system may include a third heat exchanger for receiving working fluid from the second heat exchanger and exhaust fluid from the first heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the third heat exchanger in a parallel flow configuration.

  20. Schizophrenia as segmental progeria

    Science.gov (United States)

    Papanastasiou, Evangelos; Gaughran, Fiona; Smith, Shubulade

    2011-01-01

    Schizophrenia is associated with a variety of physical manifestations (i.e. metabolic, neurological) and despite psychotropic medication being blamed for some of these (in particular obesity and diabetes), there is evidence that schizophrenia itself confers an increased risk of physical disease and early death. The observation that schizophrenia and progeroid syndromes share common clinical features and molecular profiles gives rise to the hypothesis that schizophrenia could be conceptualized as a whole body disorder, namely a segmental progeria. Mammalian cells employ the mechanisms of cellular senescence and apoptosis (programmed cell death) as a means to control inevitable DNA damage and cancer. Exacerbation of those processes is associated with accelerated ageing and schizophrenia and this warrants further investigation into possible underlying biological mechanisms, such as epigenetic control of the genome. PMID:22048679

  1. Probabilistic retinal vessel segmentation

    Science.gov (United States)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  2. Automated medical image segmentation techniques

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2010-01-01

    Full Text Available Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits and limitations of methods currently available for segmentation of medical images.

  3. Selection of plain or segmented finned tubes for heat recovery

    Energy Technology Data Exchange (ETDEWEB)

    Reid, D.R.; Taborek, Jerry (Fintube Corp. (United States))

    1994-01-01

    Heat recovery heat exchangers with gas as one of the streams depend on the use of finned tubes to compensate for the inherently low gas heat transfer coefficient. Standard-frequency welded 'plain' fins were generally used in the past, until high-frequency resistance welding technology permitted the cost-effective manufacture of segmented fins. The main advantage of the segmented fin design is that it permits higher heat flux and hence smaller, lighter units for most operating conditions. While the criteria that dictate optimum design, such as compactness, weight and cost per unit area, favour the segmented fin design, a few other considerations, such as fouling, ease of cleaning and availability of dependable design methods, have to be considered. This article analyses the performance parameters that affect the selection of either fin type. (4 figures, 1 table, 10 references) (Author)

  4. Gaussian Multiscale Aggregation Applied to Segmentation in Hand Biometrics

    Directory of Open Access Journals (Sweden)

    Gonzalo Bailador del Pozo

    2011-11-01

    Full Text Available This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC and Normalized Cuts (NCuts. The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  5. Segment clustering methodology for unsupervised Holter recordings analysis

    Science.gov (United States)

    Rodríguez-Sotelo, Jose Luis; Peluffo-Ordoñez, Diego; Castellanos Dominguez, German

    2015-01-01

    Cardiac arrhythmia analysis on Holter recordings is an important issue in clinical settings; however, it implicitly involves other problems related to the large amount of unlabelled data, which implies a high computational cost. In this work an unsupervised methodology based on a segment framework is presented, which consists of dividing the raw data into a balanced number of segments in order to identify fiducial points, and to characterize and cluster the heartbeats in each segment separately. The resulting clusters are merged or split according to an assumed criterion of homogeneity. This framework compensates for the high computational cost involved in Holter analysis, making its implementation possible for further real-time applications. The performance of the method is measured over the records from the MIT/BIH arrhythmia database and achieves high values of sensitivity and specificity, taking advantage of database labels, for a broad range of heartbeat types recommended by the AAMI.
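
    A rough sketch of the per-segment stage described above, assuming a simple R-peak detector (scipy's find_peaks), fixed-length beat windows as features, and k-means as the clustering step; these are illustrative stand-ins for the fiducial-point detection and heartbeat characterization actually used in the methodology.

        import numpy as np
        from scipy.signal import find_peaks
        from sklearn.cluster import KMeans

        def cluster_segment(ecg, fs, n_clusters=3, half_window=0.25):
            """Cluster the heartbeats of one ECG segment by raw waveform shape."""
            peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=np.std(ecg))
            half = int(half_window * fs)
            kept = [p for p in peaks if half <= p < len(ecg) - half]
            beats = [ecg[p - half:p + half] for p in kept]
            if len(beats) < n_clusters:                     # too few beats to form the requested clusters
                return kept, [0] * len(beats)
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.array(beats))
            return kept, labels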

  6. Ground water and energy

    Energy Technology Data Exchange (ETDEWEB)

    1980-11-01

    This national workshop on ground water and energy was conceived by the US Department of Energy's Office of Environmental Assessments. Generally, OEA needed to know what data are available on ground water, what information is still needed, and how DOE can best utilize what has already been learned. The workshop focussed on three areas: (1) ground water supply; (2) conflicts and barriers to ground water use; and (3) alternatives or solutions to the various issues relating to ground water. (ACR)

  7. Automatic segmentation of pulmonary segments from volumetric chest CT scans.

    NARCIS (Netherlands)

    Rikxoort, E.M. van; Hoop, B. de; Vorst, S. van de; Prokop, M.; Ginneken, B. van

    2009-01-01

    Automated extraction of pulmonary anatomy provides a foundation for computerized analysis of computed tomography (CT) scans of the chest. A completely automatic method is presented to segment the lungs, lobes and pulmonary segments from volumetric CT chest scans. The method starts with lung segmenta

  8. 48 CFR 9904.403 - Allocation of home office expenses to segments.

    Science.gov (United States)

    2010-10-01

    48 Federal Acquisition Regulations System 7 (2010-10-01), Cost Accounting Standards, Section 9904.403: Allocation of home office expenses to segments.

  9. Television broadcast from space systems: Technology, costs

    Science.gov (United States)

    Cuccia, C. L.

    1981-01-01

    Broadcast satellite systems are described. The technologies which are unique to both high power broadcast satellites and small TV receive-only earth terminals are also described. A cost assessment of both space and earth segments is included and appendices present both a computer model for satellite cost and the pertinent reported experience with the Japanese BSE.

  10. Optimal retinal cyst segmentation from OCT images

    Science.gov (United States)

    Oguz, Ipek; Zhang, Li; Abramoff, Michael D.; Sonka, Milan

    2016-03-01

    Accurate and reproducible segmentation of cysts and fluid-filled regions from retinal OCT images is an important step allowing quantification of the disease status, longitudinal disease progression, and response to therapy in wet-pathology retinal diseases. However, segmentation of fluid-filled regions from OCT images is a challenging task due to their inhomogeneous appearance, the unpredictability of their number, size and location, as well as the intensity profile similarity between such regions and certain healthy tissue types. While machine learning techniques can be beneficial for this task, they require large training datasets and are often over-fitted to the appearance models of specific scanner vendors. We propose a knowledge-based approach that leverages a carefully designed cost function and graph-based segmentation techniques to provide a vendor-independent solution to this problem. We illustrate the results of this approach on two publicly available datasets with a variety of scanner vendors and retinal disease status. Compared to a previous machine-learning based approach, the volume similarity error was dramatically reduced from 81.3 ± 56.4% to 22.2 ± 21.3% (paired t-test, p << 0.001).

  11. A* Path Planning for Line Segmentation of Handwritten Documents

    NARCIS (Netherlands)

    Surinta, Olarik; Karaaba, Mahir; van Oosten, Jean-Paul; Schomaker, Lambertus; Wiering, Marco

    2014-01-01

    This paper describes the use of a novel A∗ path-planning algorithm for performing line segmentation of handwritten documents. The novelty of the proposed approach lies in the use of a smart combination of simple soft cost functions that allows an artificial agent to compute paths separating the uppe

  12. Cost Behavior

    DEFF Research Database (Denmark)

    Hoffmann, Kira

    The objective of this dissertation is to investigate determinants and consequences of asymmetric cost behavior. Asymmetric cost behavior arises if the change in costs is different for increases in activity compared to equivalent decreases in activity. In this case, costs are termed “sticky......” if the change is less when activity falls than when activity rises, whereas costs are termed “anti-sticky” if the change is more when activity falls than when activity rises. Understanding such cost behavior is especially relevant for decision-makers and financial analysts that rely on accurate cost information...... to facilitate resource planning and earnings forecasting. As such, this dissertation relates to the topic of firm profitability and the interpretation of cost variability. The dissertation consists of three parts that are written in the form of separate academic papers. The following section briefly summarizes...
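
    As a hedged illustration (this is the standard log-change specification used in the sticky-cost literature, not necessarily the exact model estimated in the dissertation), asymmetric cost behavior is commonly tested with a regression in which an indicator D_t equals 1 when activity falls:

        \ln\!\left(\frac{\mathrm{Cost}_t}{\mathrm{Cost}_{t-1}}\right)
            = \beta_0
            + \beta_1 \ln\!\left(\frac{\mathrm{Activity}_t}{\mathrm{Activity}_{t-1}}\right)
            + \beta_2\, D_t \ln\!\left(\frac{\mathrm{Activity}_t}{\mathrm{Activity}_{t-1}}\right)
            + \varepsilon_t

    Costs are "sticky" when \beta_2 < 0 (the response to a decrease, \beta_1 + \beta_2, is weaker than the response \beta_1 to an increase) and "anti-sticky" when \beta_2 > 0.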

  13. GRACAT, Software for grounding and collision analysis

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Simonsen, Bo Cerup

    2002-01-01

    From 1998 to 2001 an integrated software package for grounding and collision analysis was developed at the Technical University of Denmark within the ISESO project at the cost of six man years (0.75M US$). The software provides a toolbox for a multitude of analyses related to collision and ground...... route where the result is the probability density functions for the cost of oil outflow in a given area per year for the two vessels. In this paper we describe the basic modelling principles and the capabilities of the software package. The software package can be downloaded for research purposes from...

  14. Segmented AC-coupled readout from continuous collection electrodes in semiconductor sensors

    Energy Technology Data Exchange (ETDEWEB)

    Sadrozinski, Hartmut F. W.; Seiden, Abraham; Cartiglia, Nicolo

    2017-04-04

    Position sensitive radiation detection is provided using a continuous electrode in a semiconductor radiation detector, as opposed to the conventional use of a segmented electrode. Time constants relating to AC coupling between the continuous electrode and segmented contacts to the electrode are selected to provide position resolution from the resulting configurations. The resulting detectors advantageously have a more uniform electric field than conventional detectors having segmented electrodes, and are expected to have much lower cost of production and of integration with readout electronics.

  15. Shape-Memory Properties of Segmented Polymers Containing Aramid Hard Segments and Polycaprolactone Soft Segments

    Directory of Open Access Journals (Sweden)

    Arno Kraft

    2010-06-01

    Full Text Available A series of segmented multiblock copolymers containing aramid hard segments and extended polycaprolactone soft segments (with an Mn of 4,200 or 8,200 g mol–1) was prepared and tested for their shape-memory properties. Chain extenders were essential to raise the hard segment concentration so that an extended rubbery plateau could be observed. Dynamic mechanical thermal analysis provided a useful guide in identifying (i) the presence of a rubbery plateau, (ii) the flow temperature, and (iii) the temperature when samples started to deform irreversibly.

  16. Tracking Costs

    Science.gov (United States)

    Erickson, Paul W.

    2010-01-01

    Even though there's been a slight reprieve in energy costs, the reality is that the cost of non-renewable energy is increasing, and state education budgets are shrinking. One way to keep energy and operations costs from overshadowing education budgets is to develop a 10-year energy audit plan to eliminate waste. First, facility managers should…

  17. 25 CFR Appendix D to Subpart C - Cost To Construct

    Science.gov (United States)

    2010-04-01

    ... would meet the Adequate Standard Characteristics (see Table 1). For roadways, the recommended design of... Costs, Pavement Costs, and Incidental Costs. For bridges, costs are derived from costs in the National...) with inadequate drainage and alignment that generally follows existing ground 100 4 A designed...

  18. Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections

    Science.gov (United States)

    Bertholet, J.; Wan, H.; Toftegaard, J.; Schmidt, M. L.; Chotard, F.; Parikh, P. J.; Poulsen, P. R.

    2017-02-01

    Radio-opaque fiducial markers of different shapes are often implanted in or near abdominal or thoracic tumors to act as surrogates for the tumor position during radiotherapy. They can be used for real-time treatment adaptation, but this requires a robust, automatic segmentation method able to handle arbitrarily shaped markers in a rotational imaging geometry such as cone-beam computed tomography (CBCT) projection images and intra-treatment images. In this study, we propose a fully automatic dynamic programming (DP) assisted template-based (TB) segmentation method. Based on an initial DP segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated. The mean 2D segmentation error of DP was reduced from 4.1 pixels to 3.0 pixels by DPTB, while the fraction of wrong segmentations was reduced from 17.4% to 6.8%. DPTB allowed rejection of uncertain segmentations as deemed by a low normalized cross-correlation coefficient and contrast-to-noise ratio. For a rejection rate of 9.97%, the sensitivity in detecting wrong segmentations was 67% and the specificity was 94%. The accepted segmentations had a mean segmentation error of 1.8 pixels and 2.5% wrong segmentations.
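
    The template-matching refinement above scores candidate positions by normalized cross-correlation inside a search window centered on the dynamic-programming estimate. A minimal sketch of that matching step follows; the template-generation stage from the 3D marker model is not reproduced, and the search window size is an assumption.

        import numpy as np

        def ncc(a, b):
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0

        def refine_by_template(image, template, center, search=10):
            """Return the top-left template position around `center` with the highest NCC score."""
            th, tw = template.shape
            cy, cx = center
            best_score, best_pos = -np.inf, tuple(center)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = cy + dy, cx + dx
                    if y < 0 or x < 0:
                        continue
                    patch = image[y:y + th, x:x + tw]
                    if patch.shape != template.shape:       # candidate falls outside the image
                        continue
                    score = ncc(patch.astype(np.float64), template.astype(np.float64))
                    if score > best_score:
                        best_score, best_pos = score, (y, x)
            return best_pos, best_score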

  19. Optimally segmented permanent magnet structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    We present an optimization approach which can be employed to calculate the globally optimal segmentation of a two-dimensional magnetic system into uniformly magnetized pieces. For each segment the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector...

  20. Upper medium segment cooling down

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The sluggish growth of the passenger car market in top provinces was also reflected in a depression of the upper medium segment. In Jan-Apr 2008, the top 3 upper medium models, accounting for nearly 40% of this segment, performed poorly, with the Passat-Lingyu and the Accord decreasing. The Camry also saw a decrease in three top provinces: Guangdong,

  1. Essays in International Market Segmentation

    NARCIS (Netherlands)

    Hofstede, ter F.

    1999-01-01

    The primary objective of this thesis is to develop and validate new methodologies to improve the effectiveness of international segmentation strategies. The current status of international market segmentation research is reviewed in an introductory chapter, which provided a number of methodological

  2. Adaptive Segmentation for Scientific Databases

    NARCIS (Netherlands)

    Ivanova, M.G.; Kersten, M.L.; Nes, N.J.

    2008-01-01

    In this paper we explore database segmentation in the context of a column-store DBMS targeted at a scientific database. We present a novel hardware- and scheme-oblivious segmentation algorithm, which learns and adapts to the workload immediately. The approach taken is to capitalize on (intermediate)

  3. Dermatology case: segmental lichen aureus

    OpenAIRE

    Fernandes, I.; S. Carvalho; Machado, S.; Alves,R.; Selores, M.

    2012-01-01

    The authors describe a clinical case of a six-year-old boy with a four-month history of a segmental brownish maculopapular skin eruption on his left thoracic and lumbar wall. Based on clinical and histological findings he was diagnosed with segmental lichen aureus.

  4. Essays in international market segmentation

    NARCIS (Netherlands)

    Hofstede, ter F.

    1999-01-01

    The primary objective of this thesis is to develop and validate new methodologies to improve the effectiveness of international segmentation strategies. The current status of international market segmentation research is reviewed in an introductory chapter, which provided a number of

  5. Adaptive segmentation for scientific databases

    NARCIS (Netherlands)

    Ivanova, M.; Kersten, M.L.; Nes, N.

    2008-01-01

    In this paper we explore database segmentation in the context of a column-store DBMS targeted at a scientific database. We present a novel hardware- and scheme-oblivious segmentation algorithm, which learns and adapts to the workload immediately. The approach taken is to capitalize on (intermediate)

  6. Market segmentation using perceived constraints

    Science.gov (United States)

    Jinhee Jun; Gerard Kyle; Andrew Mowen

    2008-01-01

    We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...

  7. The Importance of Marketing Segmentation

    Science.gov (United States)

    Martin, Gillian

    2011-01-01

    The rationale behind marketing segmentation is to allow businesses to focus on their consumers' behaviors and purchasing patterns. If done effectively, marketing segmentation allows an organization to achieve its highest return on investment (ROI) in turn for its marketing and sales expenses. If an organization markets its products or services to…

  8. Market Segmentation: An Instructional Module.

    Science.gov (United States)

    Wright, Peter H.

    A concept-based introduction to market segmentation is provided in this instructional module for undergraduate and graduate transportation-related courses. The material can be used in many disciplines including engineering, business, marketing, and technology. The concept of market segmentation is primarily a transportation planning technique by…

  9. Segmentation: Slicing the Urban Pie.

    Science.gov (United States)

    Keim, William A.

    1981-01-01

    Explains market segmentation and defines undifferentiated, concentrated, and differentiated marketing strategies. Describes in detail the marketing planning process at the Metropolitan Community Colleges. Focuses on the development and implementation of an ongoing recruitment program designed for the market segment composed of business employees.…

  11. New color segmentation method and its applications

    Science.gov (United States)

    Wang, Jian

    1999-01-01

    Segmentation is an important step in the early stage of image analysis. Color or multi-spectral image segmentation usually involves search and clustering techniques in a three- or higher-dimensional spectral space, an exercise which is considered computationally expensive. This paper presents a new color segmentation method for color image analysis with its application to plant leaf area measurement. A 3D histogram for an RGB color image is established based on an octree data structure. The histogram represents the color distribution of the image in the RGB color space, on which a 3D Gaussian filter is applied to smooth out small maxima of this distribution. The color space is then searched to find all the major maxima. Around each maximum, a covering cube with a controlled side width is established. These maxima and covering cubes are considered to be potential color classes. Each cube may expand according to the values of surrounding neighbors. Once enough modes and their covering cubes have been found, a k-means clustering algorithm is used to classify these maxima into a predetermined number of classes. Then, the classified modes and the colors covered by the cubes are used as training samples for a Bayes classifier, which can be used to classify all the pixels in the image. A statistical relaxation method is then used as a final segmentation step. This method can be either supervised or unsupervised, depending on the requirements of specific applications. The octree data structure significantly reduces the color space to be searched and consequently reduces computational cost. An extension of this method can also be applied to multi-spectral image analysis.
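
    A compact sketch of the histogram-based pipeline is given below, with two simplifications relative to the method described above: the octree is replaced by a fixed 32-bins-per-channel quantization, and the Bayes classifier plus relaxation stages are replaced by nearest-center labeling.

        import numpy as np
        from scipy.ndimage import gaussian_filter, maximum_filter
        from sklearn.cluster import KMeans

        def color_segment(rgb, n_classes=4, bins=32, sigma=1.0):
            """rgb: HxWx3 uint8 image; returns an HxW label map."""
            q = (rgb.astype(np.int64) * bins) // 256                    # quantize each channel
            hist, _ = np.histogramdd(q.reshape(-1, 3), bins=(bins,) * 3,
                                     range=((0, bins),) * 3)
            hist = gaussian_filter(hist, sigma)                         # smooth the 3-D histogram
            peaks = (hist == maximum_filter(hist, size=3)) & (hist > hist.mean())
            modes = np.argwhere(peaks) * (256 // bins)                  # mode colors back in RGB units
            n_classes = min(n_classes, len(modes))                      # cannot have more classes than modes
            centers = KMeans(n_clusters=n_classes, n_init=10).fit(modes).cluster_centers_
            dist = np.linalg.norm(rgb.reshape(-1, 1, 3) - centers, axis=2)
            return dist.argmin(axis=1).reshape(rgb.shape[:2])           # nearest-center labeling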

  12. Embedded Implementation of VHR Satellite Image Segmentation.

    Science.gov (United States)

    Li, Chao; Balla-Arabé, Souleymane; Ginhac, Dominique; Yang, Fan

    2016-05-27

    Processing and analysis of Very High Resolution (VHR) satellite images provide a mass of crucial information, which can be used for urban planning, security issues or environmental monitoring. However, they are computationally expensive and, thus, time consuming, while some applications, such as natural disaster monitoring and prevention, require high-efficiency performance. Fortunately, parallel computing techniques and embedded systems have made great progress in recent years, and a series of massively parallel image processing devices, such as digital signal processors or Field Programmable Gate Arrays (FPGAs), have become available to engineers at a very convenient price and demonstrate significant advantages in terms of running cost, embeddability, power consumption, flexibility, etc. In this work, we designed a texture region segmentation method for very high resolution satellite images by using the level set algorithm and multi-kernel theory in a high-abstraction C environment and realized its register-transfer level implementation with the help of a newly proposed high-level synthesis-based design flow. The evaluation experiments demonstrate that the proposed design can produce high-quality image segmentation with a significant running-cost advantage.

  13. JPSS Common Ground System Multimission Support

    Science.gov (United States)

    Jamilkowski, M. L.; Miller, S. W.; Grant, K. D.

    2013-12-01

    NOAA & NASA jointly acquire the next-generation civilian operational weather satellite: Joint Polar Satellite System (JPSS). JPSS contributes the afternoon orbit & restructured NPOESS ground system (GS) to replace the current Polar-orbiting Operational Environmental Satellite (POES) system run by NOAA. JPSS sensors will collect meteorological, oceanographic, climatological & solar-geophysical observations of the earth, atmosphere & space. The JPSS GS is the Common Ground System (CGS), consisting of Command, Control, & Communications (C3S) and Interface Data Processing (IDPS) segments, both developed by Raytheon Intelligence, Information & Services (IIS). CGS now flies the Suomi National Polar-orbiting Partnership (S-NPP) satellite, transfers its mission data between ground facilities and processes its data into Environmental Data Records for NOAA & Defense (DoD) weather centers. CGS will expand to support JPSS-1 in 2017. The JPSS CGS currently does data processing (DP) for S-NPP, creating multiple TBs/day across over two dozen environmental data products (EDPs). The workload doubles after JPSS-1 launch. But CGS goes well beyond S-NPP & JPSS mission management & DP by providing data routing support to operational centers & missions worldwide. The CGS supports several other missions: It also provides raw data acquisition, routing & some DP for GCOM-W1. The CGS does data routing for numerous other missions & systems, including USN's Coriolis/Windsat, NASA's SCaN network (including EOS), NSF's McMurdo Station communications, Defense Meteorological Satellite Program (DMSP), and NOAA's POES & EUMETSAT's MetOp satellites. Each of these satellite systems orbits the Earth 14 times/day, downlinking data once or twice/orbit at up to 100s of MBs/second, to support the creation of 10s of TBs of data/day across 100s of EDPs. Raytheon and the US government invested much in Raytheon's mission-management, command & control and data-processing products & capabilities. CGS's flexible

  14. Evolution of segmented strings

    CERN Document Server

    Gubser, Steven S

    2016-01-01

    I explain how to evolve segmented strings in de Sitter and anti-de Sitter spaces of any dimension in terms of forward-directed null displacements. The evolution is described entirely in terms of discrete hops which do not require a continuum spacetime. Moreover, the evolution rule is purely algebraic, so it can be defined not only on ordinary real de Sitter and anti-de Sitter, but also on the rational points of the quadratic equations that define these spaces. For three-dimensional anti-de Sitter space, a simpler evolution rule is possible that descends from the Wess-Zumino-Witten equations of motion. In this case, one may replace three-dimensional anti-de Sitter space by a non-compact discrete subgroup of SL(2,R) whose structure is related to the Pell equation. A discrete version of the BTZ black hole can be constructed as a quotient of this subgroup. This discrete black hole avoids the firewall paradox by a curious mechanism: even for large black holes, there are no points inside the horizon until one reach...

  15. Segment Based Camera Calibration

    Institute of Scientific and Technical Information of China (English)

    马颂德; 魏国庆; et al.

    1993-01-01

    The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints is derived on camera parameters in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of camera location. Since line segments in an image can be located easily and more accurately than points, the use of lines as calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.

  16. OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI.

    Science.gov (United States)

    Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2013-01-01

    Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve for clinically relevant false positive rates of 1% and below of 0.59% (95% CI; [0.50%, 0.67%]) at the voxel level. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS out-performed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist 66% (95% CI: [52%, 78
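
    OASIS models voxel-level lesion probability with logistic regression over multiple intensity-normalized modalities. A bare-bones sketch of that modeling idea follows; it leaves out the intensity normalization, smoothing, and candidate-voxel selection that the full method also uses.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def fit_lesion_model(t1, t2, flair, pd, lesion_mask):
            """All inputs are co-registered 3-D volumes; lesion_mask is a boolean volume."""
            X = np.stack([v.ravel() for v in (t1, t2, flair, pd)], axis=1)
            y = lesion_mask.ravel().astype(int)
            return LogisticRegression(max_iter=1000).fit(X, y)

        def lesion_probability(model, t1, t2, flair, pd):
            X = np.stack([v.ravel() for v in (t1, t2, flair, pd)], axis=1)
            return model.predict_proba(X)[:, 1].reshape(t1.shape)   # per-voxel probability map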

  17. Unsupervised Tattoo Segmentation Combining Bottom-Up and Top-Down Cues

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Josef D [ORNL

    2011-01-01

    Tattoo segmentation is challenging due to the complexity and large variance in tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the other skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thereby transferred to a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  18. Low-complexity topological derivative-based segmentation.

    Science.gov (United States)

    Cho, Choong Sang; Lee, Sangkeun

    2015-02-01

    Topological derivative has been employed for image segmentation and restoration. Topological derivative-based segmentation uses two sparse matrices, and the computational complexity of the segmentation grows dramatically as the image size increases due to the size of these sparse matrices. Therefore, to provide fast and accurate segmentation with low complexity, an effective scheme is proposed that keeps the same segmentation performance. To further reduce the computational complexity, a parallel processing structure for the proposed scheme is designed and implemented on a graphics processing unit (GPU). In particular, to reduce the computational cost of generating and multiplying the sparse matrices, which are square and symmetric, 2D filters consisting of the coefficients at non-border regions of the sparse matrices are defined, and the multiplication is converted into convolution filtering. In addition, to design parallel processing for the segmentation with the proposed scheme on a GPU, an image is divided into several blocks that are processed in parallel. Experimental results show that the proposed scheme for topological derivative-based segmentation reduces the computational complexity by a factor of ~908, and the parallel structure reduces it by a further factor of ~17. In particular, higher efficiency can be obtained for large images because the complexity of the proposed scheme does not depend on the image size. Moreover, the proposed scheme provides almost identical segmentation results to the original sparse matrix-based approach. Therefore, we believe that the proposed scheme can be a useful tool for efficient topological derivative-based segmentation.
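
    The key cost-saving observation above, namely that multiplying by the large sparse matrix is, away from the image border, equivalent to filtering with a small 2D kernel, can be checked directly. The demo below assumes a 5-point Laplacian stencil; the actual topological-derivative matrices differ, but the equivalence argument is the same.

        import numpy as np
        from scipy.ndimage import convolve
        from scipy.sparse import diags

        n = 32
        N = n * n
        img = np.random.rand(n, n)

        # Sparse matrix encoding a 5-point Laplacian stencil on the flattened image
        lap = diags(
            [np.full(N, 4.0), np.full(N - 1, -1.0), np.full(N - 1, -1.0),
             np.full(N - n, -1.0), np.full(N - n, -1.0)],
            [0, -1, 1, -n, n])
        by_matrix = (lap @ img.ravel()).reshape(n, n)

        # The same stencil applied as a small 2-D convolution filter
        kernel = np.array([[0, -1, 0],
                           [-1, 4, -1],
                           [0, -1, 0]], dtype=float)
        by_filter = convolve(img, kernel, mode='constant', cval=0.0)

        # Away from the border the two computations agree exactly
        assert np.allclose(by_matrix[1:-1, 1:-1], by_filter[1:-1, 1:-1])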

  19. Robust Object Segmentation Using a Multi-Layer Laser Scanner

    Directory of Open Access Journals (Sweden)

    Beomseong Kim

    2014-10-01

    Full Text Available The major problem in an advanced driver assistance system (ADAS is the proper use of sensor measurements and recognition of the surrounding environment. To this end, there are several types of sensors to consider, one of which is the laser scanner. In this paper, we propose a method to segment the measurement of the surrounding environment as obtained by a multi-layer laser scanner. In the segmentation, a full set of measurements is decomposed into several segments, each representing a single object. Sometimes a ghost is detected due to the ground or fog, and the ghost has to be eliminated to ensure the stability of the system. The proposed method is implemented on a real vehicle, and its performance is tested in a real-world environment. The experiments show that the proposed method demonstrates good performance in many real-life situations.

  20. Robust object segmentation using a multi-layer laser scanner.

    Science.gov (United States)

    Kim, Beomseong; Choi, Baehoon; Yoo, Minkyun; Kim, Hyunju; Kim, Euntai

    2014-10-29

    The major problem in an advanced driver assistance system (ADAS) is the proper use of sensor measurements and recognition of the surrounding environment. To this end, there are several types of sensors to consider, one of which is the laser scanner. In this paper, we propose a method to segment the measurement of the surrounding environment as obtained by a multi-layer laser scanner. In the segmentation, a full set of measurements is decomposed into several segments, each representing a single object. Sometimes a ghost is detected due to the ground or fog, and the ghost has to be eliminated to ensure the stability of the system. The proposed method is implemented on a real vehicle, and its performance is tested in a real-world environment. The experiments show that the proposed method demonstrates good performance in many real-life situations.

  1. Training time and quality of smartphone-based anterior segment screening in rural India

    National Research Council Canada - National Science Library

    Ludwig CA; Newsom MR; Jais A; Myung DJ; Murthy SI; Chang RT

    2017-01-01

    ...: We aimed at evaluating the ability of individuals without ophthalmologic training to quickly capture high-quality images of the cornea by using a smartphone and low-cost anterior segment imaging adapter...

  2. Comparison of thyroid segmentation techniques for 3D ultrasound

    Science.gov (United States)

    Wunderling, T.; Golla, B.; Poudel, P.; Arens, C.; Friebe, M.; Hansen, C.

    2017-02-01

    The segmentation of the thyroid in ultrasound images is a field of active research. The thyroid is a gland of the endocrine system and regulates several body functions. Measuring the volume of the thyroid is regular practice of diagnosing pathological changes. In this work, we compare three approaches for semi-automatic thyroid segmentation in freehand-tracked three-dimensional ultrasound images. The approaches are based on level set, graph cut and feature classification. For validation, sixteen 3D ultrasound records were created with ground truth segmentations, which we make publicly available. The properties analyzed are the Dice coefficient when compared against the ground truth reference and the effort of required interaction. Our results show that in terms of Dice coefficient, all algorithms perform similarly. For interaction, however, each algorithm has advantages over the other. The graph cut-based approach gives the practitioner direct influence on the final segmentation. Level set and feature classifier require less interaction, but offer less control over the result. All three compared methods show promising results for future work and provide several possible extensions.
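    For reference, the Dice coefficient used above as the agreement measure against the ground truth can be computed as in this minimal sketch (illustrative only; binary NumPy masks are assumed):

      import numpy as np

      def dice_coefficient(seg, gt):
          # Dice = 2*|A ∩ B| / (|A| + |B|) for a binary segmentation A and ground truth B.
          seg, gt = seg.astype(bool), gt.astype(bool)
          denom = seg.sum() + gt.sum()
          return 2.0 * np.logical_and(seg, gt).sum() / denom if denom > 0 else 1.0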

  3. Sometimes spelling is easier than phonemic segmentation

    NARCIS (Netherlands)

    Bon, W.H.J. van; Duighuisen, H.C.M.

    1995-01-01

    Poor spellers from the Netherlands segmented and spelled the same words on different occasions. If they base their spellings on the segmentations that they produce in the segmentation task, the correlation between segmentation and spelling scores should be high, and segmentation should not be more d

  4. A Segmental Framework for Representing Signs Phonetically

    Science.gov (United States)

    Johnson, Robert E.; Liddell, Scott K.

    2011-01-01

    The arguments for dividing the signing stream in signed languages into sequences of phonetic segments are compelling. The visual records of instances of actually occurring signs provide evidence of two basic types of segments: postural segments and transforming segments. Postural segments specify an alignment of articulatory features, both manual…

  5. The Concept of Segmented Wind Turbine Blades: A Review

    Directory of Open Access Journals (Sweden)

    Mathijs Peeters

    2017-07-01

    Full Text Available There is a trend to increase the length of wind turbine blades in an effort to reduce the cost of energy (COE. This causes manufacturing and transportation issues, which have given rise to the concept of segmented wind turbine blades. In this concept, multiple segments can be transported separately. While this idea is not new, it has recently gained renewed interest. In this review paper, the concept of wind turbine blade segmentation and related literature is discussed. The motivation for dividing blades into segments is explained, and the cost of energy is considered to obtain requirements for such blades. An overview of possible implementations is provided, considering the split location and orientation, as well as the type of joint to be used. Many implementations draw from experience with similar joints such as the joint at the blade root, hub and root extenders and joints used in rotor tips and glider wings. Adhesive bonds are expected to provide structural and economic efficiency, but in-field assembly poses a big issue. Prototype segmented blades using T-bolt joints, studs and spar bridge concepts have proven successful, as well as aerodynamically-shaped root and hub extenders.

  6. Pixel Intensity Clustering Algorithm for Multilevel Image Segmentation

    Directory of Open Access Journals (Sweden)

    Oludayo O. Olugbara

    2015-01-01

    Full Text Available Image segmentation is an important problem that has received significant attention in the literature. Over the last few decades, many algorithms have been developed to solve the image segmentation problem; prominent among these are the thresholding algorithms. However, the computational time complexity of thresholding increases exponentially with the number of desired thresholds. A wealth of alternative algorithms, notably those based on particle swarm optimization and evolutionary metaheuristics, have been proposed to tackle the intrinsic challenges of thresholding. In addition, clustering-based algorithms have been developed as multidimensional extensions of thresholding. While these algorithms have demonstrated successful results for fewer thresholds, their computational costs for a large number of thresholds are still a limiting factor. We propose a new clustering algorithm based on linear partitioning of the pixel intensity set and a between-cluster variance criterion function for multilevel image segmentation. The results of testing the proposed algorithm on real images from the Berkeley Segmentation Dataset and Benchmark show that the algorithm is comparable with state-of-the-art multilevel segmentation algorithms and consistently produces high-quality results. The attractive properties of the algorithm are its simplicity, generalization to a large number of clusters, and computational cost effectiveness.
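    To make the between-cluster variance criterion mentioned above concrete (this is not the authors' linear-partitioning algorithm), a brute-force search for two thresholds on a 256-bin gray-level histogram could be sketched as follows, assuming an 8-bit grayscale image:

      import numpy as np

      def two_level_thresholds(image):
          # Exhaustive search for two thresholds maximizing between-class variance
          # (Otsu's criterion extended to three classes); fine for a 256-bin histogram.
          hist, _ = np.histogram(image, bins=256, range=(0, 256))
          p = hist / hist.sum()
          levels = np.arange(256)
          total_mean = (p * levels).sum()
          best, best_var = (0, 0), -1.0
          for t1 in range(1, 255):
              for t2 in range(t1 + 1, 256):
                  var = 0.0
                  for lo, hi in ((0, t1), (t1, t2), (t2, 256)):
                      w = p[lo:hi].sum()
                      if w == 0:
                          continue
                      mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                      var += w * (mu - total_mean) ** 2
                  if var > best_var:
                      best_var, best = var, (t1, t2)
          return best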

  7. How Many Templates Does It Take for a Good Segmentation?: Error Analysis in Multiatlas Segmentation as a Function of Database Size.

    Science.gov (United States)

    Awate, Suyash P; Zhu, Peihong; Whitaker, Ross T

    2012-01-01

    This paper proposes a novel formulation to model and analyze the statistical characteristics of some types of segmentation problems that are based on combining label maps / templates / atlases. Such segmentation-by-example approaches are quite powerful on their own for several clinical applications and they provide prior information, through spatial context, when combined with intensity-based segmentation methods. The proposed formulation models a class of multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of images. The paper presents a systematic analysis of the nonparametric estimation's convergence behavior (i.e. characterizing segmentation error as a function of the size of the multiatlas database) and shows that it has a specific analytic form involving several parameters that are fundamental to the specific segmentation problem (i.e. chosen anatomical structure, imaging modality, registration method, label-fusion algorithm, etc.). We describe how to estimate these parameters and show that several brain anatomical structures exhibit the trends determined analytically. The proposed framework also provides per-voxel confidence measures for the segmentation. We show that the segmentation error for large database sizes can be predicted using small-sized databases. Thus, small databases can be exploited to predict the database sizes required ("how many templates") to achieve "good" segmentations having errors lower than a specified tolerance. Such cost-benefit analysis is crucial for designing and deploying multiatlas segmentation systems.
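    The paper derives its own analytic form for error versus database size; purely as an illustration of the "predict from small databases" idea, one could fit an assumed power-law decay to errors measured on small databases and extrapolate. The functional form, database sizes and error values below are hypothetical, not taken from the paper:

      import numpy as np
      from scipy.optimize import curve_fit

      # Assumed illustrative form: error(N) = a * N**(-b) + c, a power-law decay
      # toward an asymptote (the paper's own analytic form is not reproduced here).
      def error_model(n, a, b, c):
          return a * n ** (-b) + c

      sizes = np.array([5, 10, 20, 40, 80])            # hypothetical small databases
      errors = np.array([0.42, 0.31, 0.24, 0.20, 0.18])  # hypothetical mean errors

      params, _ = curve_fit(error_model, sizes, errors, p0=(1.0, 0.5, 0.1))
      predicted = error_model(500, *params)            # extrapolate to a large database
      print(params, predicted)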

  8. Iris Pattern Segmentation using Automatic Segmentation and Window Technique

    OpenAIRE

    Swati Pandey; Prof. Rajeev Gupta

    2013-01-01

    A biometric system automatically identifies an individual based on a unique feature or characteristic. Iris recognition has great advantages such as variability, stability and security. In this paper, two methods are used for iris segmentation: an automatic segmentation method and a window method. The window method is a novel approach comprising two steps: the first finds the pupil's center and then two radial coefficients, because sometimes the pupil is not a perfect circle. The second step extracts the i...

  9. Principles of Video Segmentation Scenarios

    Directory of Open Access Journals (Sweden)

    M. R. KHAMMAR

    2013-05-01

    Full Text Available Video segmentation is the first step toward automatic video processing such as browsing, retrieval, and indexing. Many algorithms and techniques have been proposed over the past few years. They cover the topic of video segmentation from different angles, and it is beneficial to review their most important properties in brief in order to clarify the subject and identify the latest challenges and drawbacks. In this paper, the important parameters involved in video segmentation are discussed and video shot detection systems are compared.

  10. Multiple Segmentation of Image Stacks

    DEFF Research Database (Denmark)

    Smets, Jonathan; Jaeger, Manfred

    2014-01-01

    We propose a method for the simultaneous construction of multiple image segmentations by combining a recently proposed “convolution of mixtures of Gaussians” model with a multi-layer hidden Markov random field structure. The resulting method constructs for a single image several alternative segmentations that capture different structural elements of the image. We also apply the method to collections of images with identical pixel dimensions, which we call image stacks. Here it turns out that the method is able to both identify groups of similar images in the stack, and to provide segmentations...

  11. Brain tumor classification and segmentation using sparse coding and dictionary learning.

    Science.gov (United States)

    Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo

    2016-08-01

    This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and their associated ground truth segmentation: feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray scale voxel values of the training image data and their associated label voxel values of the ground truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results of the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed using the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves an accurate brain tumor classification and segmentation and outperforms the state-of-the-art methods.
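    As a rough illustration of the dictionary-learning step, scikit-learn's mini-batch dictionary learning is used below as a stand-in for the K-SVD training described above; the image, patch size and parameters are hypothetical:

      import numpy as np
      from sklearn.feature_extraction.image import extract_patches_2d
      from sklearn.decomposition import MiniBatchDictionaryLearning

      image = np.random.rand(128, 128)                       # hypothetical training image
      patches = extract_patches_2d(image, (8, 8), max_patches=2000)
      X = patches.reshape(len(patches), -1)

      # Learn a 64-atom dictionary and sparse-code the patches against it.
      dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, batch_size=200)
      dico.fit(X)
      codes = dico.transform(X)
      print(dico.components_.shape, codes.shape)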

  12. A method for the evaluation of thousands of automated 3D stem cell segmentations.

    Science.gov (United States)

    Bajcsy, P; Simon, M; Florczyk, S J; Simon, C G; Juba, D; Brady, M C

    2015-12-01

    There is no segmentation method that performs perfectly with any dataset in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of three-dimensional (3D) image volumes because of the amount of computation and manual inputs needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate 'ground truth' of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations and (3) minimizing human labour needed to create surrogate 'truth' by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. The cells reside on 10 different types of biomaterial

  13. Circular economy in drinking water treatment: reuse of ground pellets as seeding material in the pellet softening process.

    Science.gov (United States)

    Schetters, M J A; van der Hoek, J P; Kramer, O J I; Kors, L J; Palmen, L J; Hofs, B; Koppers, H

    2015-01-01

    Calcium carbonate pellets are produced as a by-product in the pellet softening process. In the Netherlands, these pellets are applied as a raw material in several industrial and agricultural processes. The sand grain inside the pellet hinders the application in some high-potential market segments such as paper and glass. Substitution of the sand grain with a calcite grain (100% calcium carbonate) is in principle possible, and could significantly improve the pellet quality. In this study, the grinding and sieving of pellets, and the subsequent reuse as seeding material in pellet softening were tested with two pilot reactors in parallel. In one reactor, garnet sand was used as seeding material, in the other ground calcite. Garnet sand and ground calcite performed equally well. An economic comparison and a life-cycle assessment were made as well. The results show that the reuse of ground calcite as seeding material in pellet softening is technologically possible, reduces the operational costs by €38,000 (1%) and reduces the environmental impact by 5%. Therefore, at the drinking water facility, Weesperkarspel of Waternet, the transition from garnet sand to ground calcite will be made at full scale, based on this pilot plant research.

  14. An Active Contour for Range Image Segmentation

    OpenAIRE

    Khaldi Amine; Merouani Hayet Farida

    2012-01-01

    In this paper a new classification of range image segmentation methods is proposed according to the homogeneity criterion that the segmentation obeys; then a deformable model-type active contour ("snake") is applied to segment range images.

  15. Ensemble segmentation using efficient integer linear programming.

    Science.gov (United States)

    Alush, Amir; Goldberger, Jacob

    2012-10-01

    We present a method for combining several segmentations of an image into a single one that in some sense is the average segmentation in order to achieve a more reliable and accurate segmentation result. The goal is to find a point in the "space of segmentations" which is close to all the individual segmentations. We present an algorithm for segmentation averaging. The image is first oversegmented into superpixels. Next, each segmentation is projected onto the superpixel map. An instance of the EM algorithm combined with integer linear programming is applied on the set of binary merging decisions of neighboring superpixels to obtain the average segmentation. Apart from segmentation averaging, the algorithm also reports the reliability of each segmentation. The performance of the proposed algorithm is demonstrated on manually annotated images from the Berkeley segmentation data set and on the results of automatic segmentation algorithms.
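    A much-simplified stand-in for the averaging step (a majority vote on superpixel merging decisions instead of the EM plus integer linear programming of the paper) could look like this sketch; it assumes superpixel ids 0..K-1 all occur and that the label maps contain non-negative integers:

      import numpy as np

      def majority_merge(superpixels, segmentations):
          # superpixels: HxW int array of superpixel ids; segmentations: list of HxW label maps.
          k = superpixels.max() + 1
          # Project each segmentation onto the superpixels by majority label.
          proj = []
          for seg in segmentations:
              labels = np.zeros(k, dtype=int)
              for s in range(k):
                  labels[s] = np.bincount(seg[superpixels == s]).argmax()
              proj.append(labels)

          # Union-find over superpixels.
          parent = list(range(k))
          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]
                  x = parent[x]
              return x

          # Collect horizontally and vertically adjacent superpixel pairs.
          pairs = set()
          pairs.update(zip(superpixels[:, :-1].ravel(), superpixels[:, 1:].ravel()))
          pairs.update(zip(superpixels[:-1, :].ravel(), superpixels[1:, :].ravel()))

          for a, b in pairs:
              if a == b:
                  continue
              votes = sum(p[a] == p[b] for p in proj)
              if votes * 2 > len(proj):        # majority says "same segment"
                  parent[find(a)] = find(b)

          merged = np.array([find(s) for s in range(k)])
          return merged[superpixels]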

  16. SPEED: the segmented pupil experiment for exoplanet detection

    Science.gov (United States)

    Martinez, P.; Preis, Olivier; Gouvret, C.; Dejonghe, J.; Daban, J.-B.; Spang, A.; Martinache, F.; Beaulieu, M.; Janin-Potiron, P.; Abe, L.; Fantei-Caujolle, Y.; Mattei, D.; Ottogalli, S.

    2014-07-01

    Searching for nearby exoplanets with direct imaging is one of the major scientific drivers for both space and ground-based programs. While the second generation of dedicated high-contrast instruments on 8-m class telescopes is about to greatly expand the sample of directly imaged planets, exploring the planetary parameter space to hitherto-unseen regions, ideally down to terrestrial planets, is a major technological challenge for the forthcoming decades. This requires increasing spatial resolution and significantly improving high-contrast imaging capabilities at close angular separations. Segmented telescopes offer a practical path toward dramatically enlarging telescope diameter from the ground (ELTs), or achieving optimal diameter in space. However, translating current technological advances in the domain of high-contrast imaging for monolithic apertures to the case of segmented apertures is far from trivial. SPEED, the segmented pupil experiment for exoplanet detection, is a new instrumental facility in development at the Lagrange laboratory for enabling strategies and technologies for high-contrast instrumentation with segmented telescopes. SPEED combines wavefront control including precision segment phasing architectures, wavefront shaping using two sequential high-order deformable mirrors for both phase and amplitude control, and advanced coronagraphy optimized for very close angular separations (PIAACMC). SPEED represents significant investments and technology developments towards the ELT era and future space missions, and will offer an ideal cocoon to pave the road of technological progress in both the phasing and high-contrast domains with complex/irregular apertures. In this paper, we describe the overall design and philosophy of the SPEED bench.

  17. Polarization image segmentation of radiofrequency ablated porcine myocardial tissue

    Science.gov (United States)

    Ahmad, Iftikhar; Gribble, Adam; Murtza, Iqbal; Ikram, Masroor; Pop, Mihaela; Vitkin, Alex

    2017-01-01

    Optical polarimetry has previously imaged the spatial extent of a typical radiofrequency ablated (RFA) lesion in myocardial tissue, exhibiting significantly lower total depolarization at the necrotic core compared to healthy tissue, and intermediate values at the RFA rim region. Here, total depolarization in ablated myocardium was used to segment the total depolarization image into three (core, rim and healthy) zones. A local fuzzy thresholding algorithm was used for this multi-region segmentation, and then compared with a ground truth segmentation obtained from manual demarcation of RFA core and rim regions on the histopathology image. Quantitative comparison of the algorithm segmentation results was performed with evaluation metrics such as dice similarity coefficient (DSC = 0.78 ± 0.02 and 0.80 ± 0.02), sensitivity (Sn = 0.83 ± 0.10 and 0.91 ± 0.08), specificity (Sp = 0.76 ± 0.17 and 0.72 ± 0.17) and accuracy (Acc = 0.81 ± 0.09 and 0.71 ± 0.10) for RFA core and rim regions, respectively. This automatic segmentation of parametric depolarization images suggests a novel application of optical polarimetry, namely its use in objective RFA image quantification. PMID:28380013

  18. Filter Design and Performance Evaluation for Fingerprint Image Segmentation.

    Directory of Open Access Journals (Sweden)

    Duy Hoang Thai

    Full Text Available Fingerprint recognition plays an important role in many commercial applications and is used by millions of people every day, e.g. for unlocking mobile phones. Fingerprint image segmentation is typically the first processing step of most fingerprint algorithms and it divides an image into foreground, the region of interest, and background. Two types of error can occur during this step which both have a negative impact on the recognition performance: 'true' foreground can be labeled as background and features like minutiae can be lost, or conversely 'true' background can be misclassified as foreground and spurious features can be introduced. The contribution of this paper is threefold: firstly, we propose a novel factorized directional bandpass (FDB segmentation method for texture extraction based on the directional Hilbert transform of a Butterworth bandpass (DHBB filter interwoven with soft-thresholding. Secondly, we provide a manually marked ground truth segmentation for 10560 images as an evaluation benchmark. Thirdly, we conduct a systematic performance comparison between the FDB method and four of the most often cited fingerprint segmentation algorithms showing that the FDB segmentation method clearly outperforms these four widely used methods. The benchmark and the implementation of the FDB method are made publicly available.
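    As a simplified, non-directional illustration of two of the ingredients named above (a Butterworth bandpass filter followed by soft-thresholding; the full FDB method with its directional Hilbert transform is not reproduced here), one could write:

      import numpy as np

      def butterworth_bandpass_soft(image, low=0.05, high=0.25, order=2, thresh=0.1):
          # Frequency-domain Butterworth bandpass followed by soft-thresholding;
          # cutoffs are in cycles per pixel and are illustrative assumptions.
          rows, cols = image.shape
          fy = np.fft.fftfreq(rows)[:, None]
          fx = np.fft.fftfreq(cols)[None, :]
          r = np.sqrt(fx ** 2 + fy ** 2)
          lowpass_hi = 1.0 / (1.0 + (r / high) ** (2 * order))
          lowpass_lo = 1.0 / (1.0 + (r / low) ** (2 * order))
          bandpass = lowpass_hi - lowpass_lo            # pass band between low and high
          filtered = np.real(np.fft.ifft2(np.fft.fft2(image) * bandpass))
          # Soft-thresholding: shrink small responses toward zero.
          return np.sign(filtered) * np.maximum(np.abs(filtered) - thresh, 0.0)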

  19. Image Segmentation Based on Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    XU Hai-xiang; ZHU Guang-xi; TIAN Jin-wen; ZHANG Xiang; PENG Fu-yuan

    2005-01-01

    Image segmentation is a necessary step in image analysis. A support vector machine (SVM) approach is proposed to segment images and its segmentation performance is evaluated. Experimental results show that the effects of the kernel function and model parameters on the segmentation performance are significant; the SVM approach is less sensitive to noise in image segmentation; and the segmentation performance of the SVM approach is better than that of the back-propagation multi-layer perceptron (BP-MLP) and fuzzy c-means (FCM) approaches.
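    A minimal sketch of SVM-based pixel classification in the spirit of this abstract (the features, training labels and parameters below are illustrative assumptions, not the authors' setup):

      import numpy as np
      from scipy.ndimage import uniform_filter
      from sklearn.svm import SVC

      def pixel_features(image):
          # Simple per-pixel features: intensity and a 5x5 local mean.
          return np.stack([image.ravel(), uniform_filter(image, size=5).ravel()], axis=1)

      image = np.random.rand(64, 64)                 # hypothetical image
      labels = (image > 0.5).astype(int)             # hypothetical training labels

      X, y = pixel_features(image), labels.ravel()
      idx = np.random.choice(len(X), 500, replace=False)   # small training subset

      clf = SVC(kernel="rbf", C=1.0, gamma="scale")
      clf.fit(X[idx], y[idx])
      segmentation = clf.predict(X).reshape(image.shape)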

  20. Accurate and Fast Iris Segmentation

    Directory of Open Access Journals (Sweden)

    G. AnnaPoorani,

    2010-06-01

    Full Text Available A novel iris segmentation approach for noisy iris images is proposed in this paper. The proposed approach comprises specular reflection removal, pupil localization, iris localization and eyelid localization. Reflection map computation is devised to get the reflection ROI of the eye image using an adaptive threshold technique. Bilinear interpolation is used to fill these reflection points in the eye image. A variant of an edge-based segmentation technique is adopted to detect the pupil boundary from the eye image. A gradient-based heuristic approach is devised to detect the iris boundary from the eye image. Eyelid localization is designed to detect the eyelids using edge detection and curve fitting. The feature sequence combined into the spatial domain segments the iris texture patterns properly. Empirical results show that the proposed approach is effective and suitable for dealing with noisy eye images for iris segmentation.

  1. Metrology of IXO Mirror Segments

    Science.gov (United States)

    Chan, Kai-Wing

    2011-01-01

    For future x-ray astrophysics missions that demand optics with large throughput and excellent angular resolution, many telescope concepts are built around assembling thin mirror segments in a Wolter I geometry, such as that originally proposed for the International X-ray Observatory. The arc-second resolution requirement poses unique challenges not just for fabrication and mounting but also for the metrology of these mirror segments. In this paper, we shall discuss the metrology of these segments using a normal-incidence metrological method with interferometers and null lenses. We present results of the calibration of the metrology systems we are currently using, discuss their accuracy, and address the precision in measuring near-cylindrical mirror segments and the stability of the measurements.

  2. When to "Fire" Customers: Customer Cost-Based Pricing

    OpenAIRE

    Jiwoong Shin; Sudhir, K.; Dae-Hee Yoon

    2012-01-01

    The widespread adoption of activity-based costing enables firms to allocate common service costs to each customer, allowing for precise measurement of both the cost to serve a particular customer and the customer's profitability. In this paper, we investigate how pricing strategies based on customer cost information affects a firm's customer acquisition and retention dynamics, and ultimately its profit, using a two-period monopoly model with high- and low-cost customer segments. Although past...

  3. Neural network for image segmentation

    Science.gov (United States)

    Skourikhine, Alexei N.; Prasad, Lakshman; Schlei, Bernd R.

    2000-10-01

    Image analysis is an important requirement of many artificial intelligence systems. Though great effort has been devoted to inventing efficient algorithms for image analysis, there is still much work to be done. It is natural to turn to mammalian vision systems for guidance because they are the best known performers of visual tasks. The pulse-coupled neural network (PCNN) model of the cat visual cortex has proven to have interesting properties for image processing. This article describes the application of PCNNs to the processing of images of heterogeneous materials; specifically, the PCNN is applied to image denoising and image segmentation. Our results show that PCNNs do well at segmentation if we perform image smoothing prior to segmentation. We use the PCNN for both smoothing and segmentation. Combining smoothing and segmentation enables us to eliminate the PCNN's sensitivity to the setting of the various PCNN parameters, whose optimal selection can be difficult and can vary even for the same problem. This approach makes image processing based on PCNN more automatic in our application and also results in better segmentation.

  4. Cost comparisons

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    How much does the LHC cost? And how much does this represent in other currencies? Below we present a table showing some comparisons with the cost of other projects. Looking at the figures, you will see that the cost of the LHC can be likened to that of three skyscrapers, or two seasons of Formula 1 racing! One year's budget of a single large F1 team is comparable to the entire materials cost of the ATLAS or CMS experiments. Please note that all the figures are rounded for ease of reading.

                                            CHF          €            $
      LHC                                   4.6 billion  3 billion    4 billion
      Space Shuttle Endeavour (NASA)        1.9 billion  1.3 billion  1.7 billion
      Hubble Space Telescope (cost at launch – NASA/...

  5. B-Spline Active Contour with Handling of Topology Changes for Fast Video Segmentation

    Directory of Open Access Journals (Sweden)

    Frederic Precioso

    2002-06-01

    Full Text Available This paper deals with video segmentation for MPEG-4 and MPEG-7 applications. Region-based active contour is a powerful technique for segmentation. However, most of these methods are implemented using level sets. Although level-set methods provide accurate segmentation, they suffer from a large computational cost. We propose to use a regular B-spline parametric method to provide fast and accurate segmentation. Our B-spline interpolation is based on a fixed number of points 2^j, depending on the desired level of detail. Through this spatial multiresolution approach, the computational cost of the segmentation is reduced. We introduce a length penalty. This results in improving both smoothness and accuracy. Then we show some experiments on real video sequences.
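    A minimal sketch of the underlying idea, a closed contour represented by 2^j control points and interpolated with a periodic cubic B-spline via SciPy (this is not the paper's implementation and handles no topology changes; 2^j must not exceed the number of input contour points):

      import numpy as np
      from scipy.interpolate import splprep, splev

      def bspline_contour(points, j=5, n_samples=400):
          # Keep 2**j roughly equally spaced points as the control polygon, then
          # fit a periodic cubic B-spline through them and resample it densely.
          idx = np.linspace(0, len(points), 2 ** j, endpoint=False).astype(int)
          ctrl = points[idx]
          tck, _ = splprep([ctrl[:, 0], ctrl[:, 1]], per=True, s=0)
          u = np.linspace(0, 1, n_samples, endpoint=False)
          x, y = splev(u, tck)
          return np.stack([x, y], axis=1)

      theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
      circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
      smooth = bspline_contour(circle)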

  6. Life Cycle Assessment of Residential Heating and Cooling Systems in Minnesota: A comprehensive analysis on life cycle greenhouse gas (GHG) emissions and cost-effectiveness of ground source heat pump (GSHP) systems compared to the conventional gas furnace and air conditioner system

    Science.gov (United States)

    Li, Mo

    Ground Source Heat Pump (GSHP) technologies for residential heating and cooling are often suggested as an effective means to curb energy consumption, reduce greenhouse gas (GHG) emissions and lower homeowners' heating and cooling costs. As such, numerous federal, state and utility-based incentives, most often in the forms of financial incentives, installation rebates, and loan programs, have been made available for these technologies. While GSHP technology for space heating and cooling is well understood, with widespread implementation across the U.S., research specific to the environmental and economic performance of these systems in cold climates, such as Minnesota, is limited. In this study, a comparative environmental life cycle assessment (LCA) of typical residential HVAC (Heating, Ventilation, and Air Conditioning) systems in Minnesota is conducted to investigate greenhouse gas (GHG) emissions for delivering 20 years of residential heating and cooling, maintaining indoor temperatures of 68°F (20°C) and 75°F (24°C) in Minnesota-specific heating and cooling seasons, respectively. Eight residential GSHP design scenarios (i.e. horizontal loop field, vertical loop field, high coefficient of performance, low coefficient of performance, hybrid natural gas heat back-up) and one conventional natural gas furnace and air conditioner system are assessed for GHG emissions and life cycle economic costs. Life cycle GHG emissions were found to range between 1.09 × 10^5 kg CO2 eq. and 1.86 × 10^5 kg CO2 eq. Six of the eight GSHP technology scenarios had lower carbon impacts than the conventional system. Only in the cases of the horizontal low-efficiency GSHP and the hybrid system do results suggest increased GHGs. Life cycle costs and present value analyses suggest GSHP technologies can be cost competitive over their 20-year life, but that policy incentives may be required to reduce the high up-front capital costs of GSHPs and relatively long payback periods of more than 20 years. In addition

  7. Validation of model-based pelvis bone segmentation from MR images for PET/MR attenuation correction

    Science.gov (United States)

    Renisch, S.; Blaffert, T.; Tang, J.; Hu, Z.

    2012-02-01

    With the recent introduction of combined Magnetic Resonance Imaging (MRI) / Positron Emission Tomography (PET) systems, the generation of attenuation maps for PET based on MR images has gained substantial attention. One approach to this problem is the segmentation of structures on the MR images with subsequent filling of the segments with the respective attenuation values. Structures of particular interest for the segmentation are the pelvis bones, since they are among the most heavily absorbing structures for many applications, and they can at the same time serve as valuable landmarks for further structure identification. In this work the model-based segmentation of the pelvis bones on gradient-echo MR images is investigated. A processing chain for the detection and segmentation of the pelvic bones is introduced, and the results are evaluated using CT-generated "ground truth" data. The results indicate that a model-based segmentation of the pelvis bones is feasible with moderate requirements on the pre- and postprocessing steps of the segmentation.

  8. Medical image segmentation by MDP model

    Science.gov (United States)

    Lu, Yisu; Chen, Wufan

    2011-11-01

    The MDP (Dirichlet Process Mixtures) model is applied to segment medical images in this paper. Segmentation can be done automatically without initializing the number of segmentation classes. The MDP model segmentation algorithm is used to segment natural images and MR (Magnetic Resonance) images in the paper. To demonstrate the accuracy of the MDP model segmentation algorithm, many comparison experiments, with the EM (Expectation Maximization), K-means and MRF (Markov Random Field) image segmentation algorithms, have been carried out on medical MR images. All the methods are also analyzed quantitatively using the DSC (Dice Similarity Coefficient). The experimental results show that the DSC of the MDP model segmentation algorithm exceeds 90% for all slices, which shows that the proposed method is robust and accurate.
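    A rough, illustrative analogue of letting the data determine the number of classes is scikit-learn's Bayesian Gaussian mixture with a Dirichlet-process prior (not the authors' MDP implementation; the image and truncation level below are assumptions):

      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      # Cluster pixel intensities so the effective number of classes is inferred
      # from the data rather than fixed in advance.
      image = np.random.rand(64, 64)                    # hypothetical image
      X = image.reshape(-1, 1)

      dpgmm = BayesianGaussianMixture(
          n_components=10,                              # truncation level
          weight_concentration_prior_type="dirichlet_process",
          max_iter=200,
      )
      labels = dpgmm.fit_predict(X).reshape(image.shape)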

  9. Communicative functions integrate segments in prosodies and prosodies in segments.

    Science.gov (United States)

    Kohler, Klaus J

    2011-01-01

    This paper takes a new look at the traditionally established divide between sounds and prosodies, viewing it as a useful heuristic in language descriptions that focus on the segmental make-up of words. It pleads for a new approach that bridges this reified compartmentalization of speech in a more global communicative perspective. Data are presented from a German perception experiment in the framework of the Semantic Differential that shows interdependence of f0 contours and the spectral characteristics of a following fricative segment, for the expression of semantic functions along the scales questioning - asserting, excited - calm, forceful - not forceful, contrary - agreeable. The results lead to the conclusion that segments shape prosodies and are shaped by them in varying ways in the coding of semantic functions. This implies that the analysis of sentence prosodies needs to integrate the manifestation of segments, just as the analysis of segments needs to consider their prosodic embedding. In communicative interaction, speakers set broad prosodic time windows of varying sizes, and listeners respond to them. So, future phonetic research needs to concentrate on speech analysis in such windows.

  10. Minimizing the cost of locomotion with inclined trunk predicts crouched leg kinematics of small birds at realistic levels of elastic recoil.

    Science.gov (United States)

    Rode, Christian; Sutedja, Yefta; Kilbourne, Brandon M; Blickhan, Reinhard; Andrada, Emanuel

    2016-02-01

    Small birds move with pronograde trunk orientation and crouched legs. Although the pronograde trunk has been suggested to be beneficial for grounded running, the cause(s) of the specific leg kinematics are unknown. Here we show that three charadriiform bird species (northern lapwing, oystercatcher, and avocet; great examples of closely related species that differ remarkably in their hind limb design) move their leg segments during stance in a way that minimizes the cost of locomotion. We imposed measured trunk motions and ground reaction forces on a kinematic model of the birds. The model was used to search for leg configurations that minimize leg work that accounts for two factors: elastic recoil in the intertarsal joint, and cheaper negative muscle work relative to positive muscle work. A physiological level of elasticity (∼ 0.6) yielded segment motions that match the experimental data best, with a root mean square of angular deviations of ∼ 2.1 deg. This finding suggests that the exploitation of elastic recoil shapes the crouched leg kinematics of small birds under the constraint of pronograde trunk motion. Considering that an upright trunk and more extended legs likely decrease the cost of locomotion, our results imply that the cost of locomotion is a secondary movement criterion for small birds. Scaling arguments suggest that our approach may be utilized to provide new insights into the motion of extinct species such as dinosaurs.

  11. Optimal production policy for multi-product with inventory-level-dependent demand in segmented market

    Directory of Open Access Journals (Sweden)

    Singh Yogender

    2013-01-01

    Full Text Available Market segmentation has emerged as a primary means by which firms achieve an optimal production policy. In this paper, we use a market segmentation approach in a multi-product inventory system with inventory-level-dependent demand. The objective is to make use of optimal control theory to solve the inventory-production problem and develop an optimal production policy that minimizes the total cost associated with inventory and production rate in a segmented market. First, we consider a single production and inventory problem with multi-destination demand that varies from segment to segment. Further, we describe a single-source production and multi-destination inventory and demand problem under the assumption that the firm may independently choose the inventory directed to each segment. Optimal control is applied to study and solve the proposed problem.

  12. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    Science.gov (United States)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm can estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% of overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
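    For intuition only, a heavily simplified binary STAPLE-style EM (voxel-wise independence, fixed scalar prior; not the Computational Radiology Laboratory implementation used in the paper) can be sketched as:

      import numpy as np

      def staple_binary(segmentations, n_iter=50):
          # segmentations: list of binary arrays (one per observer), same shape.
          # Returns the probability map of the hidden true segmentation and the
          # per-observer (sensitivity, specificity) estimates.
          D = np.stack([s.astype(float).ravel() for s in segmentations])   # R x N
          R, N = D.shape
          w = D.mean(axis=0)              # initial posterior of foreground per voxel
          p = np.full(R, 0.9)             # sensitivities
          q = np.full(R, 0.9)             # specificities
          prior = w.mean()                # fixed scalar foreground prior
          for _ in range(n_iter):
              # E-step: posterior probability that each voxel is foreground.
              a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
              b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
              w = a / np.maximum(a + b, 1e-12)
              # M-step: update each observer's sensitivity and specificity.
              p = (D * w).sum(axis=1) / np.maximum(w.sum(), 1e-12)
              q = ((1 - D) * (1 - w)).sum(axis=1) / np.maximum((1 - w).sum(), 1e-12)
          return w.reshape(segmentations[0].shape), p, q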

  13. Chinese license plate character segmentation using multiscale template matching

    Science.gov (United States)

    Tian, Jiangmin; Wang, Guoyou; Liu, Jianguo; Xia, Yuanchun

    2016-09-01

    Character segmentation (CS) plays an important role in automatic license plate recognition and has been studied for decades. A method using multiscale template matching is proposed to settle the problem of CS for Chinese license plates. It is carried out on a binary image integrated from maximally stable extremal region detection and Otsu thresholding. Afterward, a uniform harrow-shaped template with variable length is designed, by virtue of which a three-dimensional matching space is constructed for searching of candidate segmentations. These segmentations are detected at matches with local minimum responses. Finally, the vertical boundaries of each single character are located for subsequent recognition. Experiments on a data set including 2349 license plate images of different quality levels show that the proposed method can achieve a higher accuracy at comparable time cost and is robust to images in poor conditions.
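    As a generic illustration of multiscale template matching with OpenCV (a plain rectangular template is used here instead of the paper's harrow-shaped template; 8-bit single-channel images are assumed):

      import cv2
      import numpy as np

      def multiscale_match(binary_image, template, scales=(0.8, 1.0, 1.2)):
          # Slide the template over the image at several scales and keep, per scale,
          # the location with the strongest normalized correlation response.
          best = []
          for s in scales:
              t = cv2.resize(template, None, fx=s, fy=s, interpolation=cv2.INTER_NEAREST)
              if t.shape[0] > binary_image.shape[0] or t.shape[1] > binary_image.shape[1]:
                  continue
              response = cv2.matchTemplate(binary_image, t, cv2.TM_CCOEFF_NORMED)
              _, max_val, _, max_loc = cv2.minMaxLoc(response)
              best.append((max_val, s, max_loc))
          return max(best) if best else None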

  14. Segment-based traffic smoothing algorithm for VBR video stream

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Transmission of variable bit rate (VBR) video, because of the burstiness of VBR video traffic, shows high fluctuations in bandwidth requirements. Traffic smoothing algorithms are very efficient in reducing the burstiness of a VBR video stream by transmitting data at a series of fixed rates. We propose in this paper a novel segment-based bandwidth allocation algorithm which dynamically adjusts the segmentation boundary and changes the transmission rate at the latest possible point, so that each video segment is extended as long as possible and the number of rate changes is kept as small as possible while keeping the peak rate low. Simulation results showed that our approach has a small bandwidth requirement, high bandwidth utilization and low computation cost.

  15. Midbrain segmentation in transcranial 3D ultrasound for Parkinson diagnosis.

    Science.gov (United States)

    Ahmadi, Seyed-Ahmad; Baust, Maximilian; Karamalis, Athanasios; Plate, Annika; Boetzel, Kai; Klein, Tassilo; Navab, Nassir

    2011-01-01

    Ultrasound examination of the human brain through the temporal bone window, also called transcranial ultrasound (TC-US), is a completely non-invasive and cost-efficient technique, which has established itself for differential diagnosis of Parkinson's Disease (PD) in the past decade. The method requires spatial analysis of ultrasound hyperechogenicities produced by pathological changes within the Substantia Nigra (SN), which belongs to the basal ganglia within the midbrain. Related work on computer aided PD diagnosis shows the urgent need for an accurate and robust segmentation of the midbrain from 3D TC-US, which is an extremely difficult task due to poor image quality of TC-US. In contrast to 2D segmentations within earlier approaches, we develop the first method for semi-automatic midbrain segmentation from 3D TC-US and demonstrate its potential benefit on a database of 11 diagnosed Parkinson patients and 11 healthy controls.

  16. Medical image segmentation using object atlas versus object cloud models

    Science.gov (United States)

    Phellan, Renzo; Falcão, Alexandre X.; Udupa, Jayaram K.

    2015-03-01

    Medical image segmentation is crucial for quantitative organ analysis and surgical planning. Since interactive segmentation is not practical in a production-mode clinical setting, automatic methods based on 3D object appearance models have been proposed. Among them, approaches based on object atlas are the most actively investigated. A key drawback of these approaches is that they require a time-costly image registration process to build and deploy the atlas. Object cloud models (OCM) have been introduced to avoid registration, considerably speeding up the whole process, but they have not been compared to object atlas models (OAM). The present paper fills this gap by presenting a comparative analysis of the two approaches in the task of individually segmenting nine anatomical structures of the human body. Our results indicate that OCM achieve a statistically significant better accuracy for seven anatomical structures, in terms of Dice Similarity Coefficient and Average Symmetric Surface Distance.

  17. VENTILATION SYSTEM WITH GROUND HEAT EXCHANGER

    Directory of Open Access Journals (Sweden)

    Vyacheslav Pisarev

    2016-11-01

    Full Text Available Ventilation systems consume more and more energy because of the increasingly complex treatment of the air supplied to closed spaces. The search for energy sources that allow significant cost savings often leads to renewable energy sources. One of the more popular solutions is to use energy from the ground by various methods. Known and relatively common solutions are based on a ground heat exchanger or a ground collector cooperating with a heat pump. The paper presents the possibility of a ventilation system cooperating with a ground air heat exchanger and a heat pump in both the summer and winter periods. A number of solutions for this type of system, supported by calculation examples and moist air transformations in the Mollier chart, are presented. Supporting a ventilation system with renewable energy sources allows significant operating savings, as shown in the article.

  18. Methods of evaluating segmentation characteristics and segmentation of major faults

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok [Seoul National Univ., Seoul (Korea, Republic of)] (and others)

    2000-03-15

    Seismological, geological, and geophysical studies were made for a reasonable segmentation of the Ulsan fault, and the results are as follows. One- and two-dimensional electrical surveys revealed clearly that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of the preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred from 14000 to 25000 yrs. BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsangnamdo may be a segment boundary.

  19. Airport Ground Staff Scheduling

    DEFF Research Database (Denmark)

    Clausen, Tommy

    travels safely and efficiently through the airport. When an aircraft lands, a significant number of tasks must be performed by different groups of ground crew, such as fueling, baggage handling and cleaning. These tasks must be completed before the aircraft is able to depart, as well as check-in and security services. These tasks are collectively known as ground handling, and are the major source of activity within airports. The business environments of modern airports are becoming increasingly competitive, as both airports themselves and their ground handling operations are changing to private ownership. As airports are in competition to attract airline routes, efficient and reliable ground handling operations are imperative for the viability and continued growth of both airports and airlines. The increasing liberalization of the ground handling market prompts ground handling operators...

  20. [Introduction to grounded theory].

    Science.gov (United States)

    Wang, Shou-Yu; Windsor, Carol; Yates, Patsy

    2012-02-01

    Grounded theory, first developed by Glaser and Strauss in the 1960s, was introduced into nursing education as a distinct research methodology in the 1970s. The theory is grounded in a critique of the dominant contemporary approach to social inquiry, which imposed "enduring" theoretical propositions onto study data. Rather than starting from a set theoretical framework, grounded theory relies on researchers distinguishing meaningful constructs from generated data and then identifying an appropriate theory. Grounded theory is thus particularly useful in investigating complex issues and behaviours not previously addressed and concepts and relationships in particular populations or places that are still undeveloped or weakly connected. Grounded theory data analysis processes include open, axial and selective coding levels. The purpose of this article was to explore the grounded theory research process and provide an initial understanding of this methodology.

  1. Generalized framework for a user-aware interactive texture segmentation system

    Science.gov (United States)

    Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric

    2012-07-01

    We present a new framework for an interactive image delineation technique, which we term the interactive texture-snapping system (IT-SNAPS). One of the unique features of IT-SNAPS stems from the fact that it can effectively aid the user in accurately segmenting images with complex texture, without placing undue burden on the user. This is made possible through the formulation of IT-SNAPS, which enables it to be user-aware, i.e., it unobtrusively elicits information from the user during the segmentation process, and hence, adapts itself on-the-fly to the boundary being segmented. In addition to generating an accurate segmentation, it is shown that the framework of IT-SNAPS allows for extraction of useful information post-segmentation, which can potentially assist in the development of customized automatic segmentation algorithms. The aforementioned features of IT-SNAPS are demonstrated on a set of texture images, as well as on a real-world biomedical application. Using appropriate segmentation protocols in conjunction with expert-provided ground truth, experiments are designed to quantitatively evaluate and compare the segmentation accuracy and user-friendliness of IT-SNAPS with another popular interactive segmentation technique. Promising results indicate the efficacy of IT-SNAPS and its potential to positively impact a broad spectrum of computer vision applications.

  2. Ground Vehicle Robotics

    Science.gov (United States)

    2013-08-20

    Briefing charts on Ground Vehicle Robotics by Jim Parker, Associate Director, Ground Vehicle Robotics. UNCLASSIFIED: Distribution Statement A, approved for public release. Report date: 20 AUG 2013; report type: briefing charts; dates covered: 09-05-2013 to 15-08-2013; title and subtitle: Ground Vehicle Robotics. Topics noted include willingness to take risk on technology, user-evaluated systems, contested environments, operational data, and applied robotics for installation and base operations (low risk).

  3. Plugin procedure in segmentation and application to hyperspectral image segmentation

    CERN Document Server

    Girard, R

    2010-01-01

    In this article we give our contribution to the problem of segmentation with plug-in procedures. We give general sufficient conditions under which plug-in procedures are efficient. We also give an algorithm that satisfies these conditions. We give an application of the algorithm to hyperspectral image segmentation. Hyperspectral images are images that have both spatial and spectral coherence, with thousands of spectral bands on each pixel. In the proposed procedure we combine a dimension reduction technique and a spatial regularisation technique. This regularisation is based on the mixlet modelisation of Kolaczyk et al.

  4. Segmentation of Breast Regions in Mammogram Based on Density: A Review

    Directory of Open Access Journals (Sweden)

    Nafiza Saidin

    2012-07-01

    Full Text Available The focus of this paper is to review approaches for the segmentation of breast regions in mammograms according to breast density. Studies based on density have been undertaken because of the relationship between breast cancer and density. Breast cancer usually occurs in the fibroglandular area of breast tissue, which appears bright on mammograms and is described as breast density. Most of the studies focused on classification methods for glandular tissue detection. Others highlighted segmentation methods for fibroglandular tissue, while few researchers performed segmentation of the anatomical regions based on density. There has also been work on the segmentation of other specific parts of breast regions, such as the detection of the nipple position, the skin-air interface or the pectoral muscle. The problem of evaluating segmentation performance in relation to ground truth is also discussed in this paper.

  5. Automatic Cell Segmentation Using a Shape-Classification Model in Immunohistochemically Stained Cytological Images

    Science.gov (United States)

    Shah, Shishir

    This paper presents a segmentation method for detecting cells in immunohistochemically stained cytological images. A two-phase approach to segmentation is used where an unsupervised clustering approach coupled with cluster merging based on a fitness function is used as the first phase to obtain a first approximation of the cell locations. A joint segmentation-classification approach incorporating ellipse as a shape model is used as the second phase to detect the final cell contour. The segmentation model estimates a multivariate density function of low-level image features from training samples and uses it as a measure of how likely each image pixel is to be a cell. This estimate is constrained by the zero level set, which is obtained as a solution to an implicit representation of an ellipse. Results of segmentation are presented and compared to ground truth measurements.

  6. Automatic segmentation of psoriasis lesions

    Science.gov (United States)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI for the estimation of lesions. Current algorithms can only handle single erythema or only deal with scaling segmentation. In practice, scaling and erythema are often mixed together. In order to segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied in the imaging, based on the skin's Tyndall effect, to eliminate reflections, and the Lab color space is used to fit human perception. In the second step, a sliding window and its sub-windows are used to obtain textural and color features. In this step, an image roughness feature has been defined so that scaling can be easily separated from normal skin. In the end, random forests are used to ensure the generalization ability of the algorithm. This algorithm can give reliable segmentation results even when images have different lighting conditions and skin types. In the data set offered by Union Hospital, more than 90% of images can be segmented accurately.
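    A minimal sketch of the classification step, a random forest on crude per-pixel color and roughness features (the features, window size, labels and data below are illustrative assumptions, not the paper's exact pipeline):

      import numpy as np
      from scipy.ndimage import uniform_filter
      from sklearn.ensemble import RandomForestClassifier

      def window_features(rgb):
          # Per-pixel local color means and a crude "roughness" (local intensity
          # variance) as stand-ins for the color and texture features above.
          gray = rgb.mean(axis=2)
          mean = uniform_filter(gray, size=9)
          sq_mean = uniform_filter(gray ** 2, size=9)
          roughness = np.maximum(sq_mean - mean ** 2, 0.0)
          feats = [uniform_filter(rgb[..., c], size=9) for c in range(3)]
          feats.append(roughness)
          return np.stack(feats, axis=-1).reshape(-1, 4)

      rgb = np.random.rand(64, 64, 3)                  # hypothetical skin image
      labels = np.random.randint(0, 3, (64, 64))       # 0 normal, 1 erythema, 2 scaling

      X, y = window_features(rgb), labels.ravel()
      clf = RandomForestClassifier(n_estimators=100)
      clf.fit(X, y)
      segmentation = clf.predict(X).reshape(labels.shape)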

  7. The Grounded Theory Bookshelf

    Directory of Open Access Journals (Sweden)

    Vivian B. Martin, Ph.D.

    2005-03-01

    Full Text Available Bookshelf will provide critical reviews and perspectives on books on theory and methodology of interest to grounded theory. This issue includes a review of Heaton's Reworking Qualitative Data, of special interest for some of its references to grounded theory as a secondary analysis tool; and Goulding's Grounded Theory: A practical guide for management, business, and market researchers, a book that attempts to explicate the method and presents a grounded theory study that falls a little short of the mark of a fully elaborated theory. Reworking Qualitative Data, Janet Heaton (Sage, 2004). Paperback, 176 pages, $29.95. Hardcover also available.

  8. Channel Modeling for Air-to-Ground Wireless Communication

    Institute of Scientific and Technical Information of China (English)

    Yingcheng Shi; Di He; Bin Li; Jianwu Dou

    2015-01-01

    In this paper, we discuss several large-scale fading models for different environments. The COST231-Hata model is adapted for air-to-ground modeling. We propose two criteria for air-to-ground channel modelling based on test data derived from field testing in Beijing. We develop a new propagation model that is more suitable for air-to-ground communication than previous models. We focus on improving this propagation model using the field test data.

  9. Optimized ground-coupled heat pump system design for northern climate applications. [Including ground coil

    Energy Technology Data Exchange (ETDEWEB)

    Catan, M.A.; Baxter, V.D.

    1985-01-01

    This paper addresses the question of the performance of a ground-coupled heat pump (GCHP) system with a water-source heat pump package designed expressly for such systems operating in a northern climate. The project objective was to minimize the life-cycle cost of a GCHP system by optimizing the design of both the heat pump package and the ground coil in concert. In order to achieve this objective, a number of modelling tools were developed or modified to analyze the heat pump's performance and cost and the ground coil's performance. The life-cycle cost of a GCHP system (water-source heat pump with a horizontal ground coil) for an 1800 ft² (167 m²) house in Pittsburgh, PA, was minimized over a 7 year economic life. Simple payback for the optimized GCHP system, relative to conventional air-source heat pumps, was under 3 years. The water-source heat pump package resulting from this optimization is calculated to cost 21% more than its conventional counterpart, with a heating coefficient of performance (COP) about 20% higher and a cooling COP about 23% higher. In the GCHP system modeled, its annual energy savings are predicted to be about 11% compared to a system designed around the conventional heat pump, while having about the same installation cost. The major conclusion of this study is that GCHP system performance improvement can be attained by improving the water-source heat pump package at less cost than by buying more ground coil. The following conclusions were drawn from the steady-state performance optimization results: (1) By adding about $100.00 to the manufacturer's cost of construction, both the heating and cooling COPs can be improved by 20% or more. (2) Cooling COP need not be sacrificed for the sake of heating performance and vice versa. 13 refs., 11 figs., 12 tabs.

  10. Identifying Benefit Segments among College Students.

    Science.gov (United States)

    Brown, Joseph D.

    1991-01-01

    Using concept of market segmentation (dividing market into distinct groups requiring different product benefits), surveyed 398 college students to determine benefit segments among students selecting a college to attend and factors describing each benefit segment. Identified one major segment of students (classroomers) plus three minor segments…

  11. U.S. Army Custom Segmentation System

    Science.gov (United States)

    2007-06-01

    The basis for segmentation is individual or intergroup differences in response to marketing-mix variables. Presumptions about segments: • different demands in a … product or service category; • respond differently to changes in the marketing mix. Criteria for segments: • the segments must exist in the environment

  12. Discourse segmentation and ambiguity in discourse structure

    NARCIS (Netherlands)

    Hoek, J.; Evers-Vermeul, J.; Sanders, T.J.M.

    2016-01-01

    Discourse relations hold between two or more text segments. The process of discourse annotation not only involves determining what type of relation holds between segments, but also indicating the segments themselves. Often, segmentation and annotation are treated as individual steps, and separate gu

  13. An interactive segmentation method based on superpixel

    DEFF Research Database (Denmark)

    Yang, Shu; Zhu, Yaping; Wu, Xiaoyu

    2015-01-01

    This paper proposes an interactive image-segmentation method based on superpixels. To achieve fast segmentation, a Graphcut model is established using superpixels as nodes, and a new energy function is proposed. Experimental results demonstrate that the authors' method has excellent performance in terms of segmentation accuracy and computation efficiency compared with other segmentation algorithms based on pixels.

  14. Study on effect of segments erection tolerance and wedge-shaped segment on segment ring in shield tunnel

    Institute of Scientific and Technical Information of China (English)

    CHEN Jun-sheng; MO Hai-hong

    2006-01-01

    Deformation and dislocation of shield tunnel segments during the construction stage have an apparent effect on the stress of the tunnel structure and may even cause local cracks and breakage in the tunnel. The 3D finite element method was used to analyze two segment-ring models under uniform injected pressure: (1) a segment ring without a wedge-shaped segment, with 16 types of pre-installed erection tolerance; and (2) a segment ring with a wedge-shaped segment and no pre-installed erection tolerance. The analysis results indicate that different erection tolerances can cause irregular deformation of the segment ring under uniform injected pressure, and that the tolerance values are further enlarged. The wedge-shaped segment apparently affects the overall deformation of a segment ring without erection tolerances: uniform injected pressure can make the deformation of a ring with a wedge-shaped segment irregular, and dislocations also appear in this situation. The stress of segments with erection tolerances is much larger than that of segments without erection tolerances. Enlarging the central angle of the wedge-shaped segment can reduce the irregular deformation and dislocation of the segments. The analysis results also provide a basis for erection tolerance control and improvement of segment constitution.

  15. Unsupervised segmentation with dynamical units.

    Science.gov (United States)

    Rao, A Ravishankar; Cecchi, Guillermo A; Peck, Charles C; Kozloski, James R

    2008-01-01

    In this paper, we present a novel network to separate mixtures of inputs that have been previously learned. A significant capability of the network is that it segments the components of each input object that most contribute to its classification. The network consists of amplitude-phase units that can synchronize their dynamics, so that separation is determined by the amplitude of units in an output layer, and segmentation by phase similarity between input and output layer units. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. Moreover, efficient segmentation can be achieved even when there is considerable superposition of the inputs. The network dynamics are derived from an objective function that rewards sparse coding in the generalized amplitude-phase variables. We argue that this objective function can provide a possible formal interpretation of the binding problem and that the implementation of the network architecture and dynamics is biologically plausible.

  16. Compliance with Segment Disclosure Initiatives

    DEFF Research Database (Denmark)

    Arya, Anil; Frimor, Hans; Mittendorf, Brian

    2013-01-01

    Regulatory oversight of capital markets has intensified in recent years, with a particular emphasis on expanding financial transparency. A notable instance is efforts by the Financial Accounting Standards Board that push firms to identify and report performance of individual business units (segments). This paper seeks to address short-run and long-run consequences of stringent enforcement of and uniform compliance with these segment disclosure standards. To do so, we develop a parsimonious model wherein a regulatory agency promulgates disclosure standards and either permits voluntary compliance or mandates strict compliance from firms. Under voluntary compliance, a firm is able to credibly withhold individual segment information from its competitors by disclosing data only at the aggregate firm level. Consistent with regulatory hopes, we show that mandatory compliance enhances welfare...

  17. Vehicle License Plate Character Segmentation

    Institute of Scientific and Technical Information of China (English)

    Mei-Sen Pan; Jun-Biao Yan; Zheng-Hong Xiao

    2008-01-01

    Vehicle license plate (VLP) character segmentation is an important part of the vehicle license plate recognition system (VLPRS). This paper proposes a least square method (LSM) to treat horizontal tilt and vertical tilt in VLP images. Auxiliary lines are added into the image (or the tilt-corrected image) so that the separated parts of each Chinese character form an interconnected region. The noise regions are eliminated after two fused images are merged according to the minimum principle of gray values. Then, the characters are segmented by the projection method (PM) and the final character images are obtained. The experimental results show that this method features fast processing and good segmentation performance.
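
    A minimal sketch of the projection method (PM) used in the final step: characters are cut at columns where the vertical projection of a binarized plate image falls to zero. This is a generic illustration, not the authors' code; the binarization and the plate image are assumed.

        import numpy as np

        def segment_by_projection(binary_plate):
            """Split a binarized plate image (characters = 1, background = 0)
            into character images using the vertical projection profile."""
            projection = binary_plate.sum(axis=0)          # ink count per column
            in_char, start, chars = False, 0, []
            for col, count in enumerate(projection):
                if count > 0 and not in_char:              # character starts
                    in_char, start = True, col
                elif count == 0 and in_char:               # character ends
                    in_char = False
                    chars.append(binary_plate[:, start:col])
            if in_char:                                    # character touches the right edge
                chars.append(binary_plate[:, start:])
            return chars

        # Toy example: two 1-pixel-wide "characters" separated by a blank column.
        plate = np.array([[1, 0, 1],
                          [1, 0, 1]])
        print(len(segment_by_projection(plate)))  # -> 2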

  18. Segmental Rescoring in Text Recognition

    Science.gov (United States)

    2014-02-04

    … applying a Hidden Markov Model (HMM) recognition approach. Generating the plurality of text hypotheses for the image includes generating a first … image. Applying segmental analysis to a segmentation determined by a first OCR engine, such as a segmentation determined by a Hidden Markov Model (HMM

  19. A Novel Iris Segmentation Scheme

    Directory of Open Access Journals (Sweden)

    Chen-Chung Liu

    2014-01-01

    Full Text Available One of the key steps in an iris recognition system is the accurate segmentation of the iris from surrounding noise, including the pupil, sclera, eyelashes and eyebrows, in a captured eye image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate outlier points and extract a more precise iris area from the eye image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid and eyelashes to detect and delete these noises. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides more effective and efficient iris segmentation than other conventional methods.

  20. Jet transport noise - A comparison of predicted and measured noise for ILS and two-segment approaches

    Science.gov (United States)

    White, K. C.; Bourquin, K. R.

    1974-01-01

    Centerline noise levels measured during standard ILS and two-segment approaches in a DC-8-61 aircraft were compared with the noise predicted for these procedures using an existing noise prediction technique. The measured data are considered to be in good agreement with the predicted data. The 90-EPNdB sideline locations were calculated from flight data obtained during two-segment approaches and were compared with predicted 90-EPNdB contours computed using three different models for excess ground attenuation, as well as with a contour with no correction for ground attenuation. The contour not corrected for ground attenuation was in better agreement with the measured data.

  1. Image Segmentation for Food Quality Evaluation Using Computer Vision System

    Directory of Open Access Journals (Sweden)

    Nandhini. P

    2014-02-01

    Full Text Available Quality evaluation is an important factor in food processing industries using computer vision systems, where human inspection introduces high variability. In many countries, food processing industries aim at delivering defect-free food materials to consumers. Human evaluation techniques suffer from high labour costs, inconsistency and variability. This paper therefore describes the steps for identifying defects in food materials using a computer vision system: image acquisition, preprocessing, image segmentation, feature identification and classification. The proposed framework compares various filters; the hybrid median filter was selected as the filter with the highest PSNR value and is used in preprocessing. Image segmentation techniques such as colour-based binary image segmentation and particle swarm optimization are compared, segmentation parameters such as accuracy, sensitivity and specificity are calculated, and colour-based binary image segmentation is found to be well suited for food quality evaluation. Finally, the paper provides an efficient method for identifying the defective parts in food materials.

  2. Cost estimate guidelines for advanced nuclear power technologies

    Energy Technology Data Exchange (ETDEWEB)

    Hudson, C.R. II

    1987-07-01

    To make comparative assessments of competing technologies, consistent ground rules must be applied when developing cost estimates. This document provides a uniform set of assumptions, ground rules, and requirements that can be used in developing cost estimates for advanced nuclear power technologies.

  3. Cost estimate guidelines for advanced nuclear power technologies

    Energy Technology Data Exchange (ETDEWEB)

    Delene, J.G.; Hudson, C.R. II.

    1990-03-01

    To make comparative assessments of competing technologies, consistent ground rules must be applied when developing cost estimates. This document provides a uniform set of assumptions, ground rules, and requirements that can be used in developing cost estimates for advanced nuclear power technologies. 10 refs., 8 figs., 32 tabs.

  4. Cost estimate guidelines for advanced nuclear power technologies

    Energy Technology Data Exchange (ETDEWEB)

    Hudson, C.R. II

    1986-07-01

    To make comparative assessments of competing technologies, consistent ground rules must be applied when developing cost estimates. This document provides a uniform set of assumptions, ground rules, and requirements that can be used in developing cost estimates for advanced nuclear power technologies.

  5. Possibilities of implementing modern philosophy of cost accounting: The case study of costs in tourism

    Directory of Open Access Journals (Sweden)

    Milenković Zoran

    2015-01-01

    Full Text Available Efficient cost management is one of the key tasks of modern management, primarily because of the causal connection between costs, profitability and competitive advantage on the market. As a market-based concept, Target Costing represents a modern accounting philosophy of cost accounting and profit planning. In modern business, efficient cost planning and management is supported by the accounting information system, an integrated accounting and information solution which supplies enterprises with the accounting data processing necessary for making business decisions and for efficient management in accordance with the declared mission and goals of the enterprise. Basic information support in the process of planning and cost management is provided by the cost accounting module, an important segment of the integrated accounting information system of every enterprise.

  6. Segmentation in dermatological hyperspectral images: dedicated methods

    OpenAIRE

    Koprowski, Robert; Olczyk, Paweł

    2016-01-01

    Background Segmentation of hyperspectral medical images is one of many image segmentation methods which require profiling. This profiling involves either the adjustment of existing, known image segmentation methods or a proposal of new dedicated methods of hyperspectral image segmentation. Taking into consideration the size of analysed data, the time of analysis is of major importance. Therefore, the authors proposed three new dedicated methods of hyperspectral image segmentation with special...

  7. Minimizing Costs Can Be Costly

    Directory of Open Access Journals (Sweden)

    Rasmus Rasmussen

    2010-01-01

    Full Text Available A quite common practice, even in the academic literature, is to simplify a decision problem and model it as a cost-minimization problem. In fact, some types of models have been standardized as minimization problems, like Quadratic Assignment Problems (QAPs), where a maximization formulation would be treated as a “generalized” QAP and could not be solved by much of the software specially designed for QAPs. Ignoring revenues when modeling a decision problem works only if costs can be separated from the decisions influencing revenues. More often than we think this is not the case, and minimizing costs will not lead to maximized profit. This is demonstrated by using spreadsheets to solve a small example. The example is also used to demonstrate other pitfalls in network models: the inability to generally balance the problem or allocate costs in advance, and the tendency to anticipate a specific type of solution and thereby make constraints too limiting when formulating the problem.
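
    A toy numerical illustration of the paper's point, with made-up numbers: when the decision also affects revenue, the cost-minimizing choice is not the profit-maximizing one.

        # Two ways to serve a market (hypothetical): a cheap plan with low revenue
        # and a more expensive plan that unlocks higher revenue.
        plans = {
            "cheap":   {"cost": 100.0, "revenue": 150.0},
            "premium": {"cost": 140.0, "revenue": 230.0},
        }

        min_cost = min(plans, key=lambda p: plans[p]["cost"])
        max_profit = max(plans, key=lambda p: plans[p]["revenue"] - plans[p]["cost"])

        print("cost-minimizing plan:  ", min_cost)     # cheap   (profit 50)
        print("profit-maximizing plan:", max_profit)   # premium (profit 90)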

  8. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated...

  9. SExSeg: SExtractor segmentation

    Science.gov (United States)

    Coe, Dan

    2015-08-01

    SExSeg forces SExtractor (ascl:1010.064) to run using a pre-defined segmentation map (the definition of objects and their borders). The defined segments double as isophotal apertures. SExSeg alters the detection image based on the pre-defined segmentation map while preparing your "analysis image" by subtracting the background in a separate SExtractor run (using parameters you specify). SExtractor is then run in "double-image" mode with the altered detection image and the background-subtracted analysis image.

  10. Automated carotid artery intima layer regional segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Meiburger, Kristen M; Molinari, Filippo [Biolab, Department of Electronics, Politecnico di Torino, Torino (Italy); Acharya, U Rajendra [Department of ECE, Ngee Ann Polytechnic (Singapore); Saba, Luca [Department of Radiology, A.O.U. di Cagliari, Cagliari (Italy); Rodrigues, Paulo [Department of Computer Science, Centro Universitario da FEI, Sao Paulo (Brazil); Liboni, William [Neurology Division, Gradenigo Hospital, Torino (Italy); Nicolaides, Andrew [Vascular Screening and Diagnostic Centre, London (United Kingdom); Suri, Jasjit S, E-mail: filippo.molinari@polito.it [Fellow AIMBE, CTO, Global Biomedical Technologies Inc., CA (United States)

    2011-07-07

    Evaluation of the carotid artery wall is essential for the assessment of a patient's cardiovascular risk or for the diagnosis of cardiovascular pathologies. This paper presents a new, completely user-independent algorithm called carotid artery intima layer regional segmentation (CAILRS, a class of AtheroEdge™ systems), which automatically segments the intima layer of the far wall of the carotid ultrasound artery based on mean shift classification applied to the far wall. Further, the system extracts the lumen-intima and media-adventitia borders in the far wall of the carotid artery. Our new system is characterized and validated by comparing CAILRS borders with the manual tracings carried out by experts. The new technique is also benchmarked with a semi-automatic technique based on a first-order absolute moment edge operator (FOAM) and compared to our previous edge-based automated methods such as CALEX (Molinari et al 2010 J. Ultrasound Med. 29 399-418, 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CULEX (Delsanto et al 2007 IEEE Trans. Instrum. Meas. 56 1265-74, Molinari et al 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CALSFOAM (Molinari et al Int. Angiol. (at press)), and CAUDLES-EF (Molinari et al J. Digit. Imaging (at press)). Our multi-institutional database consisted of 300 longitudinal B-mode carotid images. In comparison to semi-automated FOAM, CAILRS showed an IMT bias of -0.035 ± 0.186 mm while FOAM showed -0.016 ± 0.258 mm. Our IMT was slightly underestimated with respect to the ground truth IMT, but showed uniform behavior over the entire database. CAILRS outperformed all the four previous automated methods. The system's figure of merit was 95.6%, which was lower than that of the semi-automated method (98%), but higher than that of the other automated techniques.

  11. Pesticides in Ground Water

    DEFF Research Database (Denmark)

    Bjerg, Poul Løgstrup

    1996-01-01

    Review of: Jack E. Barbash & Elizabeth A. Resek (1996). Pesticides in Ground Water: Distribution trends and governing factors. Ann Arbor Press, Inc., Chelsea, Michigan. 588 pp.

  13. Communication, concepts and grounding.

    Science.gov (United States)

    van der Velde, Frank

    2015-02-01

    This article discusses the relation between communication and conceptual grounding. In the brain, neurons, circuits and brain areas are involved in the representation of a concept, grounding it in perception and action. In terms of grounding we can distinguish between communication within the brain and communication between humans or between humans and machines. In the first form of communication, a concept is activated by sensory input. Due to grounding, the information provided by this communication is not just determined by the sensory input but also by the outgoing connection structure of the conceptual representation, which is based on previous experiences and actions. The second form of communication, that between humans or between humans and machines, is influenced by the first form. In particular, a more successful interpersonal communication might require forms of situated cognition and interaction in which the entire representations of grounded concepts are involved.

  14. Stochastic ground motion simulation

    Science.gov (United States)

    Rezaeian, Sanaz; Xiaodan, Sun; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan

    2014-01-01

    Strong earthquake ground motion records are fundamental in engineering applications. Ground motion time series are used in response-history dynamic analysis of structural or geotechnical systems. In such analysis, the validity of predicted responses depends on the validity of the input excitations. Ground motion records are also used to develop ground motion prediction equations (GMPEs) for intensity measures such as spectral accelerations that are used in response-spectrum dynamic analysis. Despite the thousands of available strong ground motion records, there remains a shortage of records for large-magnitude earthquakes at short distances or in specific regions, as well as records that sample specific combinations of source, path, and site characteristics.

  15. Ground energy coupling

    Science.gov (United States)

    Metz, P. D.

    The feasibility of ground coupling for various heat pump systems was investigated. Analytical heat flow models were developed to approximate design ground coupling devices for use in solar heat pump space conditioning systems. A digital computer program called GROCS (GRound Coupled Systems) was written to model 3-dimensional underground heat flow in order to simulate the behavior of ground coupling experiments and to provide performance predictions which have been compared to experimental results. GROCS also has been integrated with TRNSYS. Soil thermal property and ground coupling device experiments are described. Buried tanks, serpentine earth coils in various configurations, lengths and depths, and sealed vertical wells are being investigated. An earth coil used to heat a house without use of resistance heating is described.

  16. Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Martin Längkvist

    2016-04-01

    Full Text Available The availability of high-resolution remote sensing (HRRS) data has opened up the possibility of new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition tasks for remote sensing data.
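
    A minimal sketch of a per-pixel CNN classifier of the kind described (patch in, class of the centre pixel out). The architecture, patch size and five-class labelling are illustrative assumptions, and the multispectral-plus-DSM input is mimicked here by a 5-channel tensor of random data.

        import torch
        import torch.nn as nn

        NUM_CHANNELS, NUM_CLASSES, PATCH = 5, 5, 16   # e.g. 4 spectral bands + DSM

        # Small CNN mapping an image patch to the class of its centre pixel:
        # vegetation, ground, road, building or water.
        model = nn.Sequential(
            nn.Conv2d(NUM_CHANNELS, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * (PATCH // 4) ** 2, NUM_CLASSES),
        )

        patches = torch.randn(8, NUM_CHANNELS, PATCH, PATCH)   # a mini-batch of patches
        logits = model(patches)                                # shape (8, NUM_CLASSES)
        print(logits.argmax(dim=1))                            # predicted class per patch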

  17. Random geometric prior forest for multiclass object segmentation.

    Science.gov (United States)

    Liu, Xiao; Song, Mingli; Tao, Dacheng; Bu, Jiajun; Chen, Chun

    2015-10-01

    Recent advances in object detection have led to the development of segmentation by detection approaches that integrate top-down geometric priors for multiclass object segmentation. A key yet under-addressed issue in utilizing top-down cues for the problem of multiclass object segmentation by detection is efficiently generating robust and accurate geometric priors. In this paper, we propose a random geometric prior forest scheme to obtain object-adaptive geometric priors efficiently and robustly. In the scheme, a testing object first searches for training neighbors with similar geometries using the random geometric prior forest, and then the geometry of the testing object is reconstructed by linearly combining the geometries of its neighbors. Our scheme enjoys several favorable properties when compared with conventional methods. First, it is robust and very fast because its inference does not suffer from bad initializations, poor local minima or complex optimization. Second, the figure/ground geometries of training samples are utilized in a multitask manner. Third, our scheme is object-adaptive but does not require the labeling of parts or poselets, and thus, it is quite easy to implement. To demonstrate the effectiveness of the proposed scheme, we integrate the obtained top-down geometric priors with conventional bottom-up color cues in the frame of graph cut. The proposed random geometric prior forest achieves the best segmentation results of all of the methods tested on VOC2010/2012 and is 90 times faster than the current state-of-the-art method.

  18. Research Progress on Image Segmentation

    Science.gov (United States)

    1981-06-01

    Author: Hanson, A. R., and Riseman, E. Title: Segmentation of Natural Scenes. Publication date: 1976. Keywords: image segmentation. Author: Kruger, R. P., Thompson, W. B., and Turner, A. F. Title: Computer Diagnosis of Pneumoconiosis.

  19. Increasing Enrollment through Benefit Segmentation.

    Science.gov (United States)

    Goodnow, Betty

    1982-01-01

    The applicability of benefit segmentation, a market research technique which groups people according to benefits expected from a program offering, was tested at the College of DuPage. Preferences and demographic characteristics were analyzed and program improvements adopted, increasing enrollment by 20 percent. (Author/SK)

  20. Leaf segmentation in plant phenotyping

    NARCIS (Netherlands)

    Scharr, Hanno; Minervini, Massimo; French, Andrew P.; Klukas, Christian; Kramer, David M.; Liu, Xiaoming; Luengo, Imanol; Pape, Jean Michel; Polder, Gerrit; Vukadinovic, Danijela; Yin, Xi; Tsaftaris, Sotirios A.

    2016-01-01

    Image-based plant phenotyping is a growing application area of computer vision in agriculture. A key task is the segmentation of all individual leaves in images. Here we focus on the most common rosette model plants, Arabidopsis and young tobacco. Although leaves do share appearance and shape cha

  1. Age Differences in Language Segmentation.

    Science.gov (United States)

    Stine-Morrow, Elizabeth A L; Payne, Brennan R

    2016-01-01

    Reading bears the evolutionary footprint of spoken communication. Prosodic contour in speech helps listeners parse sentences and establish semantic focus. Readers' regulation of input mirrors the segmentation patterns of prosody, such that reading times are longer for words at the ends of syntactic constituents. As reflected in these "micropauses," older readers are often found to segment text into smaller chunks. The mechanisms underlying these micropauses are unclear, with some arguing that they derive from the mental simulation of prosodic contour and others arguing they reflect higher-level language comprehension mechanisms (e.g., conceptual integration, consolidation with existing knowledge, ambiguity resolution) that are common across modality and support the consolidation of the memory representation. The authors review evidence based on reading time and comprehension performance to suggest that (a) age differences in segmentation derive both from age-related declines in working memory, as well as from crystallized ability and knowledge, which have the potential to grow in adulthood, and that (b) shifts in segmentation patterns may be a pathway through which language comprehension is preserved in late life.

  2. XRA image segmentation using regression

    Science.gov (United States)

    Jin, Jesse S.

    1996-04-01

    Segmentation is an important step in image analysis, and thresholding is one of the most important approaches. There are several difficulties in segmentation, such as automatic threshold selection, dealing with intensity distortion, and noise removal. We have developed an adaptive segmentation scheme by applying the Central Limit Theorem in regression. A Gaussian regression is used to separate the distribution of background from foreground in a single-peak histogram, and this separation helps to automatically determine the threshold. A small 3 by 3 window is applied and the mode of the local histogram is used to overcome noise. Thresholding is based on local weighting, where regression is used again for parameter estimation. A connectivity test is applied to the final results to remove impulse noise. We have applied the algorithm to x-ray angiogram images to extract brain arteries. The algorithm works well for single-peak distributions where there is no valley in the histogram. The regression provides a method to apply knowledge in clustering. Extending regression to multiple-level segmentation needs further investigation.
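
    A rough sketch of the underlying idea: characterize the background of a single-peak histogram with a Gaussian and threshold a few standard deviations above its mean. The fitting shortcut and the cutoff factor are assumptions for illustration, not the authors' exact regression scheme.

        import numpy as np

        def gaussian_background_threshold(image, k=3.0):
            """Estimate the background as the dominant histogram peak, fit a Gaussian
            to the samples around it, and threshold at mean + k * sigma."""
            hist, edges = np.histogram(image, bins=256)
            peak = edges[np.argmax(hist)]                       # mode of the histogram
            spread = image.std()
            background = image[np.abs(image - peak) < spread]   # samples near the peak
            mu, sigma = background.mean(), background.std()
            return image > (mu + k * sigma)

        # Toy example: dark background with a small bright "vessel" region.
        img = np.random.normal(50, 5, (128, 128))
        img[60:68, 60:68] += 100
        mask = gaussian_background_threshold(img)
        print(mask.sum())   # roughly the 64 bright pixels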

  3. Multiple Segment Factorial Vignette Designs

    Science.gov (United States)

    Ganong, Lawrence H.; Coleman, Marilyn

    2006-01-01

    The multiple segment factorial vignette design (MSFV) combines elements of experimental designs and probability sampling with the inductive, exploratory approach of qualitative research. MSFVs allow researchers to investigate topics that may be hard to study because of ethical or logistical concerns. Participants are presented with short stories…

  4. Segmental Colitis Complicating Diverticular Disease

    Directory of Open Access Journals (Sweden)

    Guido Ma Van Rosendaal

    1996-01-01

    Full Text Available Two cases of idiopathic colitis affecting the sigmoid colon in elderly patients with underlying diverticulosis are presented. Segmental resection has permitted close review of the histopathology in this syndrome which demonstrates considerable similarity to changes seen in idiopathic ulcerative colitis. The reported experience with this syndrome and its clinical features are reviewed.

  5. Body segments and growth hormone.

    OpenAIRE

    Bundak, R; Hindmarsh, P C; Brook, C. G.

    1988-01-01

    The effects of human growth hormone treatment for five years on sitting height and subischial leg length of 35 prepubertal children with isolated growth hormone deficiency were investigated. Body segments reacted equally to treatment with human growth hormone; this is important when comparing the effect of growth hormone on the growth of children with skeletal dysplasias or after spinal irradiation.

  6. Increasing Enrollment through Benefit Segmentation.

    Science.gov (United States)

    Goodnow, Betty

    1982-01-01

    The applicability of benefit segmentation, a market research technique which groups people according to benefits expected from a program offering, was tested at the College of DuPage. Preferences and demographic characteristics were analyzed and program improvements adopted, increasing enrollment by 20 percent. (Author/SK)

  7. Segmenting Trajectories by Movement States

    NARCIS (Netherlands)

    Buchin, M.; Kruckenberg, H.; Kölzsch, A.; Timpf, S.; Laube, P.

    2013-01-01

    Dividing movement trajectories according to different movement states of animals has become a challenge in movement ecology, as well as in algorithm development. In this study, we revisit and extend a framework for trajectory segmentation based on spatio-temporal criteria for this purpose. We adapt

  8. Multiple Segment Factorial Vignette Designs

    Science.gov (United States)

    Ganong, Lawrence H.; Coleman, Marilyn

    2006-01-01

    The multiple segment factorial vignette design (MSFV) combines elements of experimental designs and probability sampling with the inductive, exploratory approach of qualitative research. MSFVs allow researchers to investigate topics that may be hard to study because of ethical or logistical concerns. Participants are presented with short stories…

  9. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    A method for supervised segmentation of volumetric data is presented. The method is trained from manual annotations, and these annotations make the method very flexible, which we demonstrate in our experiments. Our method infers label information locally by matching the pattern in a neighborhood around a voxel to a dictionary, and hereby accounts for the volume texture.
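
    A minimal sketch of the general idea (not the authors' method): label a voxel by finding the nearest patch in a dictionary of labelled training patches. The patch size, distance measure and random data below are assumptions.

        import numpy as np

        def label_voxel(volume, z, y, x, dict_patches, dict_labels, r=1):
            """Assign the label of the nearest dictionary patch to the voxel (z, y, x).
            dict_patches: (N, (2r+1)**3) array of flattened training patches.
            dict_labels:  (N,) array with one label per patch."""
            patch = volume[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1].ravel()
            dists = np.linalg.norm(dict_patches - patch, axis=1)
            return dict_labels[np.argmin(dists)]

        # Hypothetical dictionary: 100 random 3x3x3 patches with binary labels.
        rng = np.random.default_rng(0)
        dictionary = rng.random((100, 27))
        labels = rng.integers(0, 2, 100)

        vol = rng.random((10, 10, 10))
        print(label_voxel(vol, 5, 5, 5, dictionary, labels))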

  10. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  11. Crowdsourcing the creation of image segmentation algorithms for connectomics

    Directory of Open Access Journals (Sweden)

    Ignacio eArganda-Carreras

    2015-11-01

    Full Text Available To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with ground truth from human experts. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.

  12. Key Features of the Deployed NPP/NPOESS Ground System

    Science.gov (United States)

    Heckmann, G.; Grant, K. D.; Mulligan, J. E.

    2010-12-01

    The National Oceanic & Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics & Space Administration (NASA) are jointly acquiring the next-generation weather/environmental satellite system; the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current NOAA Polar-orbiting Operational Environmental Satellites (POES) and DoD Defense Meteorological Satellite Program (DMSP). NPOESS satellites carry sensors to collect meteorological, oceanographic, climatological, and solar-geophysical data of the earth, atmosphere, and space. The ground data processing segment is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence & Information Systems (IIS). The IDPS processes NPOESS Preparatory Project (NPP)/NPOESS satellite data to provide environmental data products/records (EDRs) to NOAA and DoD processing centers operated by the US government. The IDPS will process EDRs beginning with NPP and continuing through the lifetime of the NPOESS system. The command & telemetry segment is the Command, Control & Communications Segment (C3S), also developed by Raytheon IIS. C3S is responsible for managing the overall NPP/NPOESS missions from control & status of the space and ground assets to ensuring delivery of timely, high quality data from the Space Segment to IDPS for processing. In addition, the C3S provides the globally-distributed ground assets needed to collect and transport mission, telemetry, and command data between the satellites and processing locations. The C3S provides all functions required for day-to-day satellite commanding & state-of-health monitoring, and delivery of Stored Mission Data to each Central IDP for data products development and transfer to system subscribers. The C3S also monitors and reports system-wide health & status and data communications with external systems and between the segments. The C3S & IDPS segments were delivered & transitioned to

  13. Competition between influenza A virus genome segments.

    Directory of Open Access Journals (Sweden)

    Ivy Widjaja

    Full Text Available Influenza A virus (IAV contains a segmented negative-strand RNA genome. How IAV balances the replication and transcription of its multiple genome segments is not understood. We developed a dual competition assay based on the co-transfection of firefly or Gaussia luciferase-encoding genome segments together with plasmids encoding IAV polymerase subunits and nucleoprotein. At limiting amounts of polymerase subunits, expression of the firefly luciferase segment was negatively affected by the presence of its Gaussia luciferase counterpart, indicative of competition between reporter genome segments. This competition could be relieved by increasing or decreasing the relative amounts of firefly or Gaussia reporter segment, respectively. The balance between the luciferase expression levels was also affected by the identity of the untranslated regions (UTRs as well as segment length. In general it appeared that genome segments displaying inherent higher expression levels were more efficient competitors of another segment. When natural genome segments were tested for their ability to suppress reporter gene expression, shorter genome segments generally reduced firefly luciferase expression to a larger extent, with the M and NS segments having the largest effect. The balance between different reporter segments was most dramatically affected by the introduction of UTR panhandle-stabilizing mutations. Furthermore, only reporter genome segments carrying these mutations were able to efficiently compete with the natural genome segments in infected cells. Our data indicate that IAV genome segments compete for available polymerases. Competition is affected by segment length, coding region, and UTRs. This competition is probably most apparent early during infection, when limiting amounts of polymerases are present, and may contribute to the regulation of segment-specific replication and transcription.

  14. Coronary Arteries Segmentation Based on the 3D Discrete Wavelet Transform and 3D Neutrosophic Transform

    Directory of Open Access Journals (Sweden)

    Shuo-Tsung Chen

    2015-01-01

    Full Text Available Purpose. Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries that allows the automatic and accurate detection of coronary pathologies. Methods. The proposed segmentation method included two parts. First, 3D region growing was applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Results. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground truth values obtained from the commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. The results indicate that the proposed method is better in terms of the efficiency analyzed. Conclusion. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
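
    A minimal sketch of the 3D region-growing step used for the initial segmentation (generic intensity-based growing from a seed, not the authors' specific criterion); the tolerance and the synthetic volume are assumptions.

        import numpy as np
        from collections import deque

        def region_grow_3d(volume, seed, tol=30.0):
            """Grow a region from `seed` (z, y, x), adding 6-connected neighbours whose
            intensity is within `tol` of the seed intensity."""
            grown = np.zeros(volume.shape, dtype=bool)
            seed_val = volume[seed]
            queue = deque([seed])
            grown[seed] = True
            offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            while queue:
                z, y, x = queue.popleft()
                for dz, dy, dx in offsets:
                    nz, ny, nx = z + dz, y + dy, x + dx
                    if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                            and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]
                            and abs(volume[nz, ny, nx] - seed_val) <= tol):
                        grown[nz, ny, nx] = True
                        queue.append((nz, ny, nx))
            return grown

        # Synthetic volume: a bright "vessel" line inside a dark background.
        vol = np.zeros((20, 20, 20)) + 10.0
        vol[10, 10, 2:18] = 200.0
        mask = region_grow_3d(vol, seed=(10, 10, 5), tol=30.0)
        print(mask.sum())   # 16 voxels of the synthetic vessel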

  15. a Comparison of Tree Segmentation Methods Using Very High Density Airborne Laser Scanner Data

    Science.gov (United States)

    Pirotti, F.; Kobal, M.; Roussel, J. R.

    2017-09-01

    Developments in LiDAR technology are decreasing the unit cost per single point (e.g. single-photon counting). This opens the possibility of future LiDAR datasets having very dense point clouds. In this work, we process a very dense point cloud (about 200 points per square meter) using three different methods for segmenting single trees and extracting tree positions and other metrics of interest in forestry, such as the tree height distribution and canopy area distribution. The three algorithms are tested at decreasing densities, down to a lowest density of 5 points per square meter. Accuracy assessment is done using Kappa, recall, precision and F-score metrics, comparing results with tree positions from ground-truth measurements in six ground plots where tree positions and heights were surveyed manually. Results show that one method provides better Kappa and recall accuracy for all cases, and that different point densities, in the range used in this study, do not affect accuracy significantly. Processing time is also considered; the method with better accuracy is several times slower than the other two methods, and its run time increases exponentially with point density. The best performer gave Kappa = 0.7. The implications of the metrics used for assessing the accuracy of point-position detection are reported. Reasons for the different performance of the three methods are discussed and further research directions are proposed.
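
    For reference, a small sketch of the detection metrics mentioned (precision, recall, F-score) computed from matched and unmatched tree detections; the counts in the example are made up.

        def detection_scores(true_positives, false_positives, false_negatives):
            """Precision, recall and F-score for detected vs. surveyed tree positions."""
            precision = true_positives / (true_positives + false_positives)
            recall = true_positives / (true_positives + false_negatives)
            f_score = 2 * precision * recall / (precision + recall)
            return precision, recall, f_score

        # Hypothetical plot: 80 detected trees matched to surveyed trees,
        # 15 spurious detections, 10 surveyed trees missed.
        print(detection_scores(80, 15, 10))   # (0.842..., 0.888..., 0.865...)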

  16. Impact of consensus contours from multiple PET segmentation methods on the accuracy of functional volume delineation

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)

    2016-05-15

    The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview®). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology. It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine and
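
    A minimal sketch of the majority-vote consensus step (not the Artiview implementation, and STAPLE is not shown): a voxel belongs to the consensus contour if more than half of the input segmentations include it.

        import numpy as np

        def majority_vote(masks):
            """Consensus of binary segmentation masks: keep voxels selected by
            more than half of the input methods."""
            stack = np.stack(masks).astype(int)
            return stack.sum(axis=0) > (len(masks) / 2)

        # Three hypothetical binary masks from different PET segmentation methods.
        a = np.array([[1, 1, 0, 0]])
        b = np.array([[1, 0, 1, 0]])
        c = np.array([[1, 1, 0, 0]])
        print(majority_vote([a, b, c]).astype(int))   # [[1 1 0 0]]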

  17. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.

    Science.gov (United States)

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-10-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our

  18. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 (United States); Chen, Ken-Chung; Tang, Zhen [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Xia, James J., E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery, Shanghai Jiao Tong University School of Medicine, Shanghai Ninth People’s Hospital, Shanghai 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 and Department of Brain and Cognitive Engineering, Korea University, Seoul 02841 (Korea, Republic of)

    2016-01-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment CBCT images. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both appearance features from the CBCTs and context features from the initial probability maps to train the first layer of the random forest classifier, which can select discriminative features for segmentation. Based on the first layer of the trained classifier, the probability maps are updated and then employed to train the next layer of the random forest classifier. By iteratively training subsequent random forest classifiers using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla obtained by the authors' method were 0.94 and 0.91, respectively, which are significantly better than those of the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method

  19. Uni-modal versus joint segmentation for region-based image fusion

    NARCIS (Netherlands)

    Lewis, J.J.; Nikolov, S.G.; Canagarajah, C.N.; Bull, D.R.; Toet, A.

    2006-01-01

    A number of segmentation techniques are compared with regard to their usefulness for region-based image and video fusion. In order to achieve this, a new multi-sensor data set is introduced containing a variety of infra-red, visible and pixel-fused images together with manually produced 'ground truth'…

  1. Segmental and Kinetic Contributions in Vertical Jumps Performed with and without an Arm Swing

    Science.gov (United States)

    Feltner, Michael E.; Bishop, Elijah J.; Perez, Cassandra M.

    2004-01-01

    To determine the contributions of the motions of the body segments to the vertical ground reaction force ([F.sub.z]), the joint torques produced by the leg muscles, and the time course of vertical velocity generation during a vertical jump, 15 men were videotaped performing countermovement vertical jumps from a force plate with and without an arm…

  2. Performance evaluation of automated segmentation software on optical coherence tomography volume data.

    Science.gov (United States)

    Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E; Debuc, Delia Cabrera

    2016-05-01

    Over the past two decades a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of widely available retinal features observed in clinical settings. In addition, there is no appropriate OCT dataset with ground truth that reflects the realities of everyday retinal features observed in clinical settings. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, validation has usually been performed by comparison with manual labelings from each individual study, and a common ground truth has been lacking. Therefore, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of the retinal tissue in OCT images. It also evaluates and compares the performance of these software tools against a common ground truth.
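
    For reference, a sketch of the Dice similarity coefficient, a common way to score an automated segmentation against a manual ground truth; this is a generic illustration, not tied to any specific tool evaluated in the paper.

        import numpy as np

        def dice_coefficient(segmentation, ground_truth):
            """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
            seg = segmentation.astype(bool)
            gt = ground_truth.astype(bool)
            intersection = np.logical_and(seg, gt).sum()
            return 2.0 * intersection / (seg.sum() + gt.sum())

        # Toy example: two slightly different retinal-layer masks.
        auto = np.array([[0, 1, 1, 1, 0]])
        manual = np.array([[0, 0, 1, 1, 1]])
        print(dice_coefficient(auto, manual))   # 2*2 / (3+3) = 0.666...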

  3. Image Segmentation for Connectomics Using Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    Tasdizen, Tolga; Seyedhosseini, Mojtaba; Liu, TIng; Jones, Cory; Jurrus, Elizabeth R.

    2014-12-01

    Reconstruction of neural circuits at the microscopic scale of individual neurons and synapses, also known as connectomics, is an important challenge for neuroscience. While an important motivation of connectomics is providing anatomical ground truth for neural circuit models, the ability to decipher neural wiring maps at the individual cell level is also important in studies of many neurodegenerative diseases. Reconstruction of a neural circuit at the individual neuron level requires the use of electron microscopy images due to their extremely high resolution. Computational challenges include pixel-by-pixel annotation of these images into classes such as cell membrane, mitochondria and synaptic vesicles and the segmentation of individual neurons. State-of-the-art image analysis solutions are still far from the accuracy and robustness of human vision and biologists are still limited to studying small neural circuits using mostly manual analysis. In this chapter, we describe our image analysis pipeline that makes use of novel supervised machine learning techniques to tackle this problem.

  4. Fitness cost

    DEFF Research Database (Denmark)

    Nielsen, Karen L.; Pedersen, Thomas M.; Udekwu, Klas I.

    2012-01-01

    Denmark and several other countries experienced the first epidemic of methicillin-resistant Staphylococcus aureus (MRSA) during the period 1965–75, which was caused by multiresistant isolates of phage complex 83A. In Denmark these MRSA isolates disappeared almost completely, being replaced by other … of each isolate was determined in a growth competition assay with a reference isolate. Significant fitness costs of 2–15% were determined for the MRSA isolates studied. There was a significant negative correlation between the number of antibiotic resistances and relative fitness. Multiple regression analysis … to that seen in Denmark. We propose a significant fitness cost of resistance as the main bacteriological explanation for the disappearance of the multiresistant complex 83A MRSA in Denmark following a reduction in antibiotic usage.

  5. Segmentation Scheme for Safety Enhancement of Engineered Safety Features Component Control System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sangseok; Sohn, Kwangyoung [Korea Reliability Technology and System, Daejeon (Korea, Republic of); Lee, Junku; Park, Geunok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-05-15

    Common Cause Failures (CCFs) or undetectable failures would adversely impact the safety functions of the ESF-CCS in existing nuclear power plants. We propose a segmentation scheme to solve these problems. In the proposed scheme, main functions are assigned to segments based on functional dependency and on the critical-function success path, using a dependency depth matrix. Each segment has functional independence and physical isolation, so the segmented structure prevents undetectable failures from propagating to other segments and is therefore robust to them. The segmented structure also provides functional diversity: if a specific function in one segment is defeated by a CCF, that function can be maintained by a diverse control function assigned to another segment. Device-level and system-level control signals are separated, as are control and status signals, because signal transmission paths are allocated independently according to signal type. With this design, a single device failure or a failure on a signal path within a channel cannot cause the simultaneous loss of all segmented functions. Thus the proposed segmentation is a design scheme that improves the availability of safety functions. In a conventional ESF-CCS, a single controller generates the signals controlling multiple safety functions, and reliability is achieved by redundancy within the channel; this design has the drawback that a CCF or single failure can cause the loss of multiple functions. Heterogeneous controllers guarantee the diversity needed to ensure execution of safety functions against CCFs and single failures, but they require substantial resources such as manpower and cost. The segmentation technology based on compartmentalization and functional diversification reduces the impact of CCFs and single failures even though the identical types of...

  6. An Interactive Method Based on the Live Wire for Segmentation of the Breast in Mammography Images

    Directory of Open Access Journals (Sweden)

    Zhang Zewei

    2014-01-01

    To improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. Gabor filters and the FCM clustering algorithm are incorporated into the Live Wire cost function definition. Using FCM analysis for image edge enhancement, the method suppresses interference from weak edges and obtains clear segmentation of breast lumps, as demonstrated by applying the improved Live Wire to two cases of breast segmentation data. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more accurate objective basis for quantitative and qualitative analysis of breast lumps.
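    Live Wire itself amounts to a minimal-cost path search between user seed points over a per-pixel cost map; the sketch below shows only that core step with a plain gradient-based cost (the paper's Gabor and FCM terms are omitted, and the image and seeds are illustrative):

```python
import heapq
import numpy as np

def live_wire_path(cost, start, end):
    """Dijkstra shortest path on a per-pixel cost map (4-connectivity)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Backtrack from the end seed to the start seed.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical image with one strong horizontal edge; low cost near strong edges
# attracts the path, which is the essence of Live Wire boundary tracking.
img = np.zeros((50, 50)); img[25, :] = 1.0
gy, gx = np.gradient(img)
cost = 1.0 / (1.0 + np.hypot(gx, gy))
path = live_wire_path(cost, (25, 0), (25, 49))
```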

  7. An interactive method based on the live wire for segmentation of the breast in mammography images.

    Science.gov (United States)

    Zewei, Zhang; Tianyue, Wang; Li, Guo; Tingting, Wang; Lu, Xu

    2014-01-01

    To improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. Gabor filters and the FCM clustering algorithm are incorporated into the Live Wire cost function definition. Using FCM analysis for image edge enhancement, the method suppresses interference from weak edges and obtains clear segmentation of breast lumps, as demonstrated by applying the improved Live Wire to two cases of breast segmentation data. Compared with traditional image segmentation methods, experimental results show that the method achieves more accurate segmentation of breast lumps and provides a more accurate objective basis for quantitative and qualitative analysis of breast lumps.

  8. Fast plane segmentation with line primitives for RGB-D sensor

    Directory of Open Access Journals (Sweden)

    Lizhi Zhang

    2016-12-01

    This article presents a fast and robust plane segmentation approach for RGB-D sensors, which detects plane candidates from line segments extracted from 2-D scanlines projected from rows or columns of points. It neither requires the heavy computation of local normals for the entire point cloud, as most approaches do, nor chooses plane candidates randomly, as RANSAC-like approaches do. First, a line extraction algorithm is used to extract line segments. Second, plane candidates are detected by estimating the local normals of points lying on the line segments. Finally, the plane with the most inliers is recursively extracted from the plane candidates as the resulting plane. Experiments were conducted with different data sets and the segmentation performance was evaluated quantitatively and qualitatively. We demonstrate the efficiency and robustness of the proposed approach; especially in non-planar scenes, the approach requires little computational cost.
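    The plane-candidate step reduces to fitting a plane to the 3-D points of extracted line segments and counting inliers; a compact sketch of that fit, with invented points standing in for real RGB-D line segments, is:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points; returns unit normal and centroid."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    return normal, centroid

def plane_inliers(points, normal, centroid, tol=0.01):
    """Boolean mask of points within `tol` of the plane."""
    dist = np.abs((points - centroid) @ normal)
    return dist < tol

# Hypothetical candidate: points sampled from two coplanar line segments plus noise.
rng = np.random.default_rng(1)
line_a = np.stack([np.linspace(0, 1, 50), np.zeros(50), np.zeros(50)], axis=1)
line_b = np.stack([np.zeros(50), np.linspace(0, 1, 50), np.zeros(50)], axis=1)
cloud = np.vstack([line_a, line_b]) + rng.normal(scale=1e-3, size=(100, 3))

normal, centroid = fit_plane(cloud)
mask = plane_inliers(cloud, normal, centroid)
print("inlier ratio:", mask.mean())
```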

  9. Statistical evaluation of manual segmentation of a diffuse low-grade glioma MRI dataset.

    Science.gov (United States)

    Ben Abdallah, Meriem; Blonski, Marie; Wantz-Mezieres, Sophie; Gaudeau, Yann; Taillandier, Luc; Moureaux, Jean-Marie

    2016-08-01

    Software-based manual segmentation is critical to the supervision of diffuse low-grade glioma patients and to the choice of the optimal treatment. However, since manual segmentation is time-consuming, it is difficult to include it in the clinical routine. An alternative to circumvent the time cost of manual segmentation could be to share the task among different practitioners, provided it can be reproduced. The goal of our work is to assess the reproducibility of manual segmentation of diffuse low-grade gliomas on MRI scans, with regard to practitioners, their experience and field of expertise. A panel of 13 experts manually segmented 12 diffuse low-grade glioma clinical MRI datasets using the OSIRIX software. A statistical analysis gave promising results, as the practitioner factor, the medical specialty and the years of experience seem to have no significant impact on the average values of the tumor volume variable.

  10. Automatic Spatially-Adaptive Balancing of Energy Terms for Image Segmentation

    CERN Document Server

    Rao, Josna; Abugharbieh, Rafeef

    2009-01-01

    Image segmentation techniques are predominantly based on parameter-laden optimization. The objective function typically involves weights for balancing competing image fidelity and segmentation regularization cost terms. Setting these weights suitably has been a painstaking, empirical process. Even if such ideal weights are found for a novel image, most current approaches fix the weight across the whole image domain, ignoring the spatially-varying properties of object shape and image appearance. We propose a novel technique that autonomously balances these terms in a spatially-adaptive manner through the incorporation of image reliability in a graph-based segmentation framework. We validate on synthetic data, achieving a reduction in mean error of 47% (p-value << 0.05) when compared to the best fixed-parameter segmentation. We also present results on medical images (including segmentations of the corpus callosum and brain tissue in MRI data) and on natural images.

  11. A Time-Consistent Video Segmentation Algorithm Designed for Real-Time Implementation

    Directory of Open Access Journals (Sweden)

    M. El Hassani

    2008-01-01

    Temporal consistency of the segmentation is ensured by incorporating motion information through the use of an improved change-detection mask. This mask is designed using both illumination differences between frames and region segmentation of the previous frame. By considering both pixel and region levels, we obtain a particularly efficient algorithm at a low computational cost, allowing its implementation in real-time on the TriMedia processor for CIF image sequences.

  12. Segmentation and Shape Classification of Nuclei in DAPI Images

    OpenAIRE

    Snell, V; Kittler, J.; Christmas, W

    2011-01-01

    This paper addresses issues in the analysis of DAPI-stained microscopy images of cell samples, particularly the classification of objects as single nuclei, nuclei clusters or non-nuclear material. First, segmentation is significantly improved compared to Otsu's method [5] by choosing a more appropriate threshold, using a cost function that explicitly relates to the quality of the resulting boundary rather than to the image histogram. This method applies ideas from active contour models to threshold-based segment...
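    The threshold-selection idea can be illustrated by scoring each candidate threshold on the boundary it produces; in the sketch below the quality term is simply the mean gradient magnitude along the boundary, which is an assumption rather than the paper's exact cost function:

```python
import numpy as np
from scipy.ndimage import binary_erosion, sobel

def boundary_score(image, threshold):
    """Mean gradient magnitude along the boundary of the thresholded mask."""
    mask = image > threshold
    boundary = mask & ~binary_erosion(mask)
    if boundary.sum() == 0:
        return -np.inf
    grad = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
    return grad[boundary].mean()

def best_threshold(image, candidates):
    """Pick the threshold whose resulting boundary lies on the strongest edges."""
    return max(candidates, key=lambda t: boundary_score(image, t))

# Hypothetical DAPI-like image: a bright blob on a dim background.
yy, xx = np.mgrid[:100, :100]
img = np.exp(-((yy - 50) ** 2 + (xx - 50) ** 2) / (2 * 15 ** 2))
t = best_threshold(img, np.linspace(0.05, 0.95, 19))
```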

  13. An Interactive Method Based on the Live Wire for Segmentation of the Breast in Mammography Images

    OpenAIRE

    Zhang Zewei; Wang Tianyue; Guo Li; Wang Tingting; Xu Lu

    2014-01-01

    To improve the accuracy of computer-aided diagnosis of breast lumps, the authors introduce an improved interactive segmentation method based on Live Wire. Gabor filters and the FCM clustering algorithm are incorporated into the Live Wire cost function definition. Using FCM analysis for image edge enhancement, the method suppresses interference from weak edges and obtains clear segmentation of breast lumps, as demonstrated by applying the improved Live Wire to two...

  14. Innovative commercial "ground source" heat pump system sources and sinks: Engineering and economics

    Energy Technology Data Exchange (ETDEWEB)

    Sachs, H.M.; Lowenstein, A.I.; Henderson, H.I. Jr.; Carlson, S.W.; Risser, J.E.

    1998-07-01

    Geothermal heat pumps, which will be called GX systems in this paper, have been employed in specialty applications in both residential and commercial buildings for several decades. GX systems generally have very competitive life-cycle costs, but somewhat higher initial costs. The incremental cost of the ground heat exchanger is close to the average cost per ton, so GX systems work best with very efficient building shells. Innovative methods can reduce the ground heat exchanger cost. These include better coupling of the heat-exchange boreholes to the ground, hybrid systems that use low-cost closed fluid coolers to supplement the ground heat exchanger where cooling loads dominate, open-loop systems, and opportunistic systems that use sewage effluent or other non-standard sources for heat exchange. These approaches and their benefits are illustrated through five case studies.

  15. Polyp Segmentation in NBI Colonoscopy

    Science.gov (United States)

    Gross, Sebastian; Kennel, Manuel; Stehle, Thomas; Wulff, Jonas; Tischendorf, Jens; Trautwein, Christian; Aach, Til

    Endoscopic screening of the colon (colonoscopy) is performed to prevent cancer and to support therapy. During intervention colon polyps are located, inspected and, if need be, removed by the investigator. We propose a segmentation algorithm as a part of an automatic polyp classification system for colonoscopic Narrow-Band images. Our approach includes multi-scale filtering for noise reduction, suppression of small blood vessels, and enhancement of major edges. Results of the subsequent edge detection are compared to a set of elliptic templates and evaluated. We validated our algorithm on our polyp database with images acquired during routine colonoscopic examinations. The presented results show the reliable segmentation performance of our method and its robustness to image variations.
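    The comparison of detected edges to elliptic templates can be sketched as sampling an edge map along candidate ellipses and keeping the best-scoring template; the edge map, template bank and scoring rule below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ellipse_points(cx, cy, a, b, angle, n=200):
    """Sample n points on an ellipse with centre (cx, cy) and semi-axes a, b."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x = a * np.cos(t); y = b * np.sin(t)
    xr = cx + x * np.cos(angle) - y * np.sin(angle)
    yr = cy + x * np.sin(angle) + y * np.cos(angle)
    return xr, yr

def template_score(edge_map, params):
    """Mean edge strength sampled along the elliptic template."""
    xr, yr = ellipse_points(*params)
    cols = np.clip(np.round(xr).astype(int), 0, edge_map.shape[1] - 1)
    rows = np.clip(np.round(yr).astype(int), 0, edge_map.shape[0] - 1)
    return edge_map[rows, cols].mean()

# Hypothetical edge map and a small bank of templates; keep the best-scoring one.
edge_map = np.random.default_rng(2).random((200, 200))
templates = [(100, 100, a, b, 0.0) for a in (20, 30, 40) for b in (15, 25, 35)]
best = max(templates, key=lambda p: template_score(edge_map, p))
```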

  16. Spatial localization of speech segments

    DEFF Research Database (Denmark)

    Karlsen, Brian Lykkegaard

    1999-01-01

    The psychoacoustical experiment used naturally-spoken Danish consonant-vowel combinations as targets presented in diffuse speech-shaped noise at a peak SNR of -10 dB. The subjects were normal hearing persons. The experiment took place in an anechoic chamber where eight loudspeakers were suspended so that they surrounded the subjects in the horizontal plane. The subjects were required to push a button on a pad indicating where they had localized the target to in the horizontal plane. The response pad had twelve buttons arranged uniformly in a circle and two further buttons... the task of the experiment... the angle the target is likely to have originated from. The model is trained on the experimental data. On the basis of the experimental results, it is concluded that the human ability to localize speech segments in adverse noise depends on the speech segment as well as its point of origin in space...

  17. Aorta Segmentation for Stent Simulation

    CERN Document Server

    Egger, Jan; Setser, Randolph; Renapuraar, Rahul; Biermann, Christina; O'Donnell, Thomas

    2011-01-01

    Simulation of arterial stenting procedures prior to intervention allows for appropriate device selection as well as highlighting potential complications. To this end, we present a framework for facilitating virtual aortic stenting from a contrast computed tomography (CT) scan. More specifically, we present a method for both lumen and outer wall segmentation that may be employed in determining the appropriateness of intervention as well as the selection and localization of the device. The more challenging recovery of the outer wall is based on a novel minimal closure tracking algorithm. Our aortic segmentation method has been validated on over 3000 multiplanar reformatting (MPR) planes from 50 CT angiography data sets, yielding a Dice Similarity Coefficient (DSC) of 90.67%.

  18. Text Segmentation Using Exponential Models

    CERN Document Server

    Beeferman, Doug; Berger, Adam; Lafferty, John

    1997-01-01

    This paper introduces a new statistical approach to partitioning text automatically into coherent segments. Our approach enlists both short-range and long-range language models to help it sniff out likely sites of topic changes in text. To aid its search, the system consults a set of simple lexical hints it has learned to associate with the presence of boundaries through inspection of a large corpus of annotated data. We also propose a new probabilistically motivated error metric for use by the natural language processing and information retrieval communities, intended to supersede precision and recall for appraising segmentation algorithms. Qualitative assessment of our algorithm as well as evaluation using this new metric demonstrate the effectiveness of our approach in two very different domains, Wall Street Journal articles and the TDT Corpus, a collection of newswire articles and broadcast news transcripts.

  19. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    2015-01-01

    We present a method for supervised volumetric segmentation based on a dictionary of small cubes composed of pairs of intensity and label cubes. Intensity cubes are small image volumes where each voxel contains an image intensity. Label cubes are volumes with voxelwise probabilities for a given label. The segmentation process is done by matching a cube from the volume, of the same size as the dictionary intensity cubes, to the most similar intensity dictionary cube, and from the associated label cube we get voxel-wise label probabilities. Probabilities from overlapping cubes are averaged, and hereby we obtain a robust label probability encoding. The dictionary is computed from labeled volumetric image data based on weighted clustering. We experimentally demonstrate our method using two data sets from material science – a phantom data set of a solid oxide fuel cell simulation for detecting...
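    A minimal sketch of the matching step (nearest intensity cube by sum of squared differences, then reading off the paired label cube) is shown below; the dictionary construction by weighted clustering is omitted and all arrays are synthetic:

```python
import numpy as np

def match_label_cube(patch, intensity_cubes, label_cubes):
    """Return the label-probability cube paired with the closest intensity cube."""
    diffs = intensity_cubes - patch[None, ...]
    ssd = (diffs ** 2).reshape(len(intensity_cubes), -1).sum(axis=1)
    return label_cubes[np.argmin(ssd)]

# Hypothetical dictionary: 100 pairs of 5x5x5 intensity and label-probability cubes.
rng = np.random.default_rng(3)
intensity_cubes = rng.random((100, 5, 5, 5))
label_cubes = rng.random((100, 5, 5, 5))

# Segment one cube-sized region of a volume; in the full method, probabilities from
# overlapping cubes would be averaged into a voxel-wise label probability volume.
volume = rng.random((64, 64, 64))
patch = volume[10:15, 10:15, 10:15]
probs = match_label_cube(patch, intensity_cubes, label_cubes)
```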

  20. Older People's Mobility: Segments, Factors, Trends

    DEFF Research Database (Denmark)

    Haustein, Sonja; Siren, Anu

    2015-01-01

    The expanding older population is increasingly diverse with regard to, for example, age, income, location, and health. Within transport research, this diversity has recently been addressed in studies that segment the older population into homogeneous groups based on combinations of various demographic, health-related, or transport-related factors. This paper reviews these studies and compares the segments of older people that different studies have identified. First, as a result of a systematic comparison, we identified four generic segments: (1) an active car-oriented segment; (2) a car-dependent segment, restricted in mobility; (3) a mobile multimodal segment; and (4) a segment depending on public transport and other services. Second, we examined the single factors used in the reviewed segmentation studies, with focus on whether there is evidence in the literature for the factors' effect on older...

  1. Vibration damping for the Segmented Mirror Telescope

    Science.gov (United States)

    Maly, Joseph R.; Yingling, Adam J.; Griffin, Steven F.; Agrawal, Brij N.; Cobb, Richard G.; Chambers, Trevor S.

    2012-09-01

    The Segmented Mirror Telescope (SMT) at the Naval Postgraduate School (NPS) in Monterey is a next-generation deployable telescope, featuring a 3-meter 6-segment primary mirror and advanced wavefront sensing and correction capabilities. In its stowed configuration, the SMT primary mirror segments collapse into a small volume; once on location, these segments open to the full 3-meter diameter. The segments must be very accurately aligned after deployment, and the segment surfaces are actively controlled using numerous small, embedded actuators. The SMT employs a passive damping system of tuned mass dampers (TMDs) to complement the actuators and mitigate the effects of low-frequency disturbances... operating deflection shapes of the mirror were measured and segment edge displacements quantified; relative alignment of λ/4 or better was desired. The TMDs attenuated the vibration amplitudes by 80% and reduced adjacent segment phase mismatches to acceptable levels.

  2. HF Transverse Segmentation and Tagging Jet Capability

    CERN Document Server

    Doroshkevich, E A; Kuleshov, Sergey

    1998-01-01

    So-called tagging jets and pile-up were simulated for the optimisation of the HF segmentation. The energy resolution, angular resolution and efficiency of jet reconstruction are evaluated for different calorimeter segmentations.

  3. Human Segmentation Using Haar-Classifier

    Directory of Open Access Journals (Sweden)

    Dharani S

    2014-07-01

    Segmentation is an important process in many multimedia applications. Fast and accurate segmentation of moving objects in video sequences is a basic task in many computer vision and video analysis applications, and human detection in particular is an active research area. Segmentation is very useful for tracking and recognizing objects in a moving clip. We study the motion segmentation problem and review the most important techniques, illustrating common methods for segmenting moving objects, including background subtraction, temporal segmentation and edge detection. Contour- and threshold-based methods are also commonly used to segment objects in a moving clip. These methods are widely exploited for moving-object segmentation in many video surveillance applications, such as traffic monitoring and human motion capture. In this paper, a Haar classifier is used to detect humans in a moving video clip, with features such as face detection, eye detection, and full-body, upper-body and lower-body detection.
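    OpenCV ships pre-trained Haar cascades, including a full-body detector, so a detection loop in the spirit of the paper can be sketched as follows; the cascade file, video path and detection parameters are assumptions rather than the authors' setup:

```python
import cv2

# Load OpenCV's bundled full-body Haar cascade (an assumption; any trained
# cascade XML file could be substituted here).
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")

cap = cv2.VideoCapture("surveillance_clip.mp4")   # hypothetical video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect humans; returns bounding boxes (x, y, w, h).
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:          # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```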

  4. Ultrasound Common Carotid Artery Segmentation Based on Active Shape Model

    Directory of Open Access Journals (Sweden)

    Xin Yang

    2013-01-01

    Carotid atherosclerosis is a major cause of stroke, a leading cause of death and disability. In this paper, a segmentation method based on the Active Shape Model (ASM) is developed and evaluated to outline the common carotid artery (CCA) for computer-aided evaluation and diagnosis of carotid atherosclerosis. The proposed method is used to segment both the media-adventitia boundary (MAB) and the lumen-intima boundary (LIB) on transverse-view slices from three-dimensional ultrasound (3D US) images. The data set consists of sixty-eight (17 × 2 × 2) 3D US volumes acquired from the left and right carotid arteries of seventeen patients (eight treated with 80 mg atorvastatin and nine with placebo), who had carotid stenosis of 60% or more, at baseline and after three months of treatment. Boundaries manually outlined by an expert are adopted as the ground truth for evaluation. For the MAB and LIB segmentations, respectively, the algorithm yielded Dice Similarity Coefficients (DSC) of 94.4% ± 3.2% and 92.8% ± 3.3%, mean absolute distances (MAD) of 0.26 ± 0.18 mm and 0.33 ± 0.21 mm, and maximum absolute distances (MAXD) of 0.75 ± 0.46 mm and 0.84 ± 0.39 mm. It took 4.3 ± 0.5 min to segment a single 3D US image, compared with 11.7 ± 1.2 min for manual segmentation. The method would promote the translation of carotid 3D US to clinical care for monitoring the progression and regression of atherosclerotic disease.
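    The reported MAD and MAXD values are boundary-distance metrics; a small sketch of how such distances can be computed between an automatic and a manual contour (both represented here as synthetic point lists) is:

```python
import numpy as np
from scipy.spatial.distance import cdist

def contour_distances(contour_a, contour_b):
    """Mean and maximum absolute distance from contour_a to contour_b."""
    d = cdist(contour_a, contour_b).min(axis=1)   # nearest-point distances
    return d.mean(), d.max()

# Hypothetical contours (N x 2 arrays of points in mm): a circle and a shifted copy.
t = np.linspace(0, 2 * np.pi, 360)
manual = np.stack([10 * np.cos(t), 10 * np.sin(t)], axis=1)
auto = manual + np.array([0.3, 0.0])
mad, maxd = contour_distances(auto, manual)
```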

  5. Segmentation-based video coding

    Energy Technology Data Exchange (ETDEWEB)

    Lades, M. [Lawrence Livermore National Lab., CA (United States); Wong, Yiu-fai; Li, Qi [Texas Univ., San Antonio, TX (United States). Div. of Engineering

    1995-10-01

    Low bit rate video coding is gaining attention through a current wave of consumer-oriented multimedia applications which aim, e.g., at video conferencing over telephone lines or at wireless communication. In this work we describe a new segmentation-based approach to video coding which belongs to a class of paradigms that appears very promising among the various proposed methods. Our method uses a nonlinear measure of local variance to identify the smooth areas in an image in a more indicative and robust fashion: first, the local minima in the variance image are identified. These minima then serve as seeds for the segmentation of the image with a watershed algorithm. Regions and their contours are extracted. Motion compensation is used to predict the change of regions between previous frames and the current frame. The error signal is then quantized. To reduce the number of regions and contours, we use the motion information to assist the segmentation process and to merge regions, resulting in a further reduction in bit rate. Our scheme has been tested and good results have been obtained.
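    The segmentation step described (seeding a watershed from the minima of a variance image) can be sketched with standard tools; note that the plain local variance used below is only an approximation of the paper's nonlinear measure, and the frame is synthetic:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import local_minima
from skimage.segmentation import watershed

def variance_watershed(frame, size=5):
    """Segment a frame by flooding a local-variance image from its minima."""
    f = frame.astype(float)
    mean = ndi.uniform_filter(f, size)
    mean_sq = ndi.uniform_filter(f * f, size)
    local_var = mean_sq - mean ** 2          # plain local variance (approximation)
    markers, _ = ndi.label(local_minima(local_var))   # smooth areas become seeds
    return watershed(local_var, markers)

# Hypothetical frame: two flat regions separated by a step edge.
frame = np.zeros((80, 80)); frame[:, 40:] = 1.0
labels = variance_watershed(frame)
```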

  6. Primary mirror segment fabrication for CELT

    Science.gov (United States)

    Mast, Terry S.; Nelson, Jerry E.; Sommargren, Gary E.

    2000-07-01

    The primary mirror of the proposed California Extremely Large Telescope is a 30-meter diameter mosaic of hexagonal segments. An initial design calls for about a thousand segments with a hexagon side length of 0.5 meters, a primary-mirror focal ratio of 1.5, and a segment surface quality of about 20 nanometers rms. We describe concepts for fabricating these segments.

  7. Perfect and Dynamic Segmentation via the Internet

    OpenAIRE

    Matthias Huehn

    2007-01-01

    The paper starts from the hypothesis that traditional approaches to segmentation are seriously flawed because the object of segmentation, the consumer, has dramatically changed over the past 30 years. The New Consumer actively defies segmentation attempts by marketing professionals and thus makes a new approach to marketing strategy necessary. The paper suggests letting consumers segment themselves instead of doing market research. Thereby the filter between consumer and company is dropped...

  8. Nitrate Removal from Ground Water: A Review

    OpenAIRE

    Archna *; Surinder K. Sharma; Ranbir Chander Sobti

    2012-01-01

    Nitrate contamination of ground water resources has increased in Asia, Europe, United States, and various other parts of the world. This trend has raised concern as nitrates cause methemoglobinemia and cancer. Several treatment processes can remove nitrates from water with varying degrees of efficiency, cost, and ease of operation. Available technical data, experience, and economics indicate that biological denitrification is more acceptable for nitrate removal than reverse osmosis and ion ex...

  9. End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    Science.gov (United States)

    Feinberg, Lee; Bolcar, Matt; Liu, Alice; Guyon, Olivier; Stark,Chris; Arenberg, Jon

    2016-01-01

    Key challenges of a future large-aperture, segmented Ultraviolet Optical Infrared (UVOIR) telescope capable of performing a spectroscopic survey of hundreds of exoplanets will be sufficient stability to achieve 10⁻¹⁰ contrast measurements and sufficient throughput and sensitivity for high-yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture, including a high-throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance.

  10. Quantifying the total cost of infrastructure to enable environmentally preferable decisions: the case of urban roadway design

    Science.gov (United States)

    Gosse, Conrad A.; Clarens, Andres F.

    2013-03-01

    Efforts to reduce the environmental impacts of transportation infrastructure have generally overlooked many of the efficiencies that can be obtained by considering the relevant engineering and economic aspects as a system. Here, we present a framework for quantifying the burdens of ground transportation in urban settings that incorporates travel time, vehicle fuel and pavement maintenance costs. A Pareto set of bi-directional lane configurations for two-lane roadways yields non-dominated combinations of lane width, bicycle lanes and curb parking. Probabilistic analysis and microsimulation both show dramatic mobility reductions on road segments of insufficient width for heavy vehicles to pass bicycles without encroaching on oncoming traffic. This delay is positively correlated with uphill grades and increasing traffic volumes and inversely proportional to total pavement width. The response is nonlinear with grade and yields mixed uphill/downhill optimal lane configurations. Increasing bicycle mode share is negatively correlated with total costs and emissions for lane configurations allowing motor vehicles to safely pass bicycles, while the opposite is true for configurations that fail to facilitate passing. Spatial impacts on mobility also dictate that curb parking exhibits significant spatial opportunity costs related to the total cost Pareto curve. The proposed framework provides a means to evaluate relatively inexpensive lane reconfiguration options in response to changing modal share and priorities. These results provide quantitative evidence that efforts to reallocate limited pavement space to bicycles, like those being adopted in several US cities, could appreciably reduce costs for all users.
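    The Pareto set referred to above can be obtained with a simple non-dominated filter over candidate lane configurations; in the sketch below the two objectives and the candidate scores are invented for illustration:

```python
import numpy as np

def pareto_front(costs):
    """Return indices of non-dominated rows for a minimisation problem."""
    keep = []
    for i, c in enumerate(costs):
        # A row is dominated if some other row is no worse in every objective
        # and strictly better in at least one.
        dominated = np.any(np.all(costs <= c, axis=1) & np.any(costs < c, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical lane configurations scored on (travel-time cost, maintenance cost).
costs = np.array([[4.0, 2.0], [3.0, 3.0], [5.0, 1.0], [4.5, 2.5], [3.5, 2.8]])
front = pareto_front(costs)          # indices of Pareto-optimal configurations
```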

  11. Multi-strategy Segmentation of Melodies

    NARCIS (Netherlands)

    Rodríguez López, M.E.; Volk, Anja; Bountouridis, D.

    2014-01-01

    Melodic segmentation is a fundamental yet unsolved problem in automatic music processing. At present most melody segmentation models rely on a ‘single strategy’ (i.e. they model a single perceptual segmentation cue). However, cognitive studies suggest that multiple cues need to be considered. In thi

  12. Market Segmentation from a Behavioral Perspective

    Science.gov (United States)

    Wells, Victoria K.; Chang, Shing Wan; Oliveira-Castro, Jorge; Pallister, John

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847…

  13. The Process of Marketing Segmentation Strategy Selection

    OpenAIRE

    Ionel Dumitru

    2007-01-01

    The process of marketing segmentation strategy selection represents the essence of strategic marketing. We present hereinafter the main forms of the marketing segmentation strategy: undifferentiated marketing, differentiated marketing, concentrated marketing and personalized marketing. In practice, companies use a mix of these marketing segmentation methods in order to maximize profit and to satisfy consumers' needs.

  14. Segmenting the mental health care market.

    Science.gov (United States)

    Stone, T R; Warren, W E; Stevens, R E

    1990-03-01

    The authors report the results of a segmentation study of the mental health care market. A random sample of 387 residents of a western city were interviewed by telephone. Cluster analysis of the data identified six market segments. Each is described according to the mental health care services to which it is most sensitive. Implications for targeting the segments are discussed.

  15. An Active Contour for Range Image Segmentation

    Directory of Open Access Journals (Sweden)

    Khaldi Amine

    2012-06-01

    In this paper a new classification of range image segmentation methods is proposed, according to the homogeneity criterion that the segmentation obeys; then a deformable model-type active contour ("Snake") is applied to segment range images.

  16. Market Segmentation from a Behavioral Perspective

    Science.gov (United States)

    Wells, Victoria K.; Chang, Shing Wan; Oliveira-Castro, Jorge; Pallister, John

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847…

  17. LIFE-STYLE SEGMENTATION WITH TAILORED INTERVIEWING

    NARCIS (Netherlands)

    KAMAKURA, WA; WEDEL, M

    1995-01-01

    The authors present a tailored interviewing procedure for life-style segmentation. The procedure assumes that a life-style measurement instrument has been designed. A classification of a sample of consumers into life-style segments is obtained using a latent-class model. With these segments, the tai

  18. Quick Dissection of the Segmental Bronchi

    Science.gov (United States)

    Nakajima, Yuji

    2010-01-01

    Knowledge of the three-dimensional anatomy of the bronchopulmonary segments is essential for respiratory medicine. This report describes a quick guide for dissecting the segmental bronchi in formaldehyde-fixed human material. All segmental bronchi are easy to dissect, and thus, this exercise will help medical students to better understand the…

  19. The Process of Marketing Segmentation Strategy Selection

    OpenAIRE

    Ionel Dumitru

    2007-01-01

    The process of marketing segmentation strategy selection represents the essence of strategic marketing. We present hereinafter the main forms of the marketing segmentation strategy: undifferentiated marketing, differentiated marketing, concentrated marketing and personalized marketing. In practice, companies use a mix of these marketing segmentation methods in order to maximize profit and to satisfy consumers' needs.

  20. Mora or syllable? Speech segmentation in Japanese

    NARCIS (Netherlands)

    Otake, T.; Hatano, G.; Cutler, A.; Mehler, J.

    1993-01-01

    Four experiments examined segmentation of spoken Japanese words by native and non-native listeners. Previous studies suggested that language rhythm determines the segmentation unit most natural to native listeners: French has syllabic rhythm, and French listeners use the syllable in segmentation, wh

  1. Ground Enterprise Management System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Emergent Space Technologies Inc. proposes to develop the Ground Enterprise Management System (GEMS) for spacecraft ground systems. GEMS will provide situational...

  2. Pose Estimation and Segmentation of Multiple People in Stereoscopic Movies.

    Science.gov (United States)

    Seguin, Guillaume; Alahari, Karteek; Sivic, Josef; Laptev, Ivan

    2015-08-01

    We describe a method to obtain a pixel-wise segmentation and pose estimation of multiple people in stereoscopic videos. This task involves challenges such as dealing with unconstrained stereoscopic video, non-stationary cameras, and complex indoor and outdoor dynamic scenes with multiple people. We cast the problem as a discrete labelling task involving multiple person labels, devise a suitable cost function, and optimize it efficiently. The contributions of our work are two-fold: First, we develop a segmentation model incorporating person detections and learnt articulated pose segmentation masks, as well as colour, motion, and stereo disparity cues. The model also explicitly represents depth ordering and occlusion. Second, we introduce a stereoscopic dataset with frames extracted from feature-length movies "StreetDance 3D" and "Pina". The dataset contains 587 annotated human poses, 1,158 bounding box annotations and 686 pixel-wise segmentations of people. The dataset is composed of indoor and outdoor scenes depicting multiple people with frequent occlusions. We demonstrate results on our new challenging dataset, as well as on the H2view dataset from (Sheasby et al. ACCV 2012).

  3. Sparse Multi-View Consistency for Object Segmentation.

    Science.gov (United States)

    Djelouah, Abdelaziz; Franco, Jean-Sébastien; Boyer, Edmond; Le Clerc, François; Pérez, Patrick

    2015-09-01

    Multiple view segmentation consists in segmenting objects simultaneously in several views. A key issue in that respect, compared to monocular settings, is to ensure propagation of segmentation information between views while minimizing complexity and computational cost. In this work, we first investigate the idea that examining measurements at the projections of a sparse set of 3D points is sufficient to achieve this goal. The proposed algorithm softly assigns each of these 3D samples to the scene background if it projects on the background region in at least one view, or to the foreground if it projects on the foreground region in all views. Second, we show how other modalities such as depth may be seamlessly integrated in the model and benefit the segmentation. The paper presents a detailed set of experiments used to validate the algorithm, showing results comparable with the state of the art at reduced computational complexity. We also discuss the use of different modalities for specific situations, such as dealing with a low number of viewpoints or a scene with color ambiguities between foreground and background.
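    The consistency rule stated in the abstract can be written down almost verbatim; the sketch below shows a hard (rather than soft) toy version of the assignment, with made-up projection results:

```python
import numpy as np

def assign_samples(fg_projections):
    """
    fg_projections: boolean array (n_samples, n_views); True means the sample's
    projection falls in the current foreground region of that view.
    A sample is foreground only if it is foreground in all views, and background
    if it is background in at least one view.
    """
    all_fg = fg_projections.all(axis=1)
    return np.where(all_fg, "foreground", "background")

# Hypothetical: 4 sparse 3-D samples observed in 3 views.
fg = np.array([[True, True, True],
               [True, False, True],
               [False, False, False],
               [True, True, False]])
print(assign_samples(fg))   # ['foreground' 'background' 'background' 'background']
```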

  4. Information tracking approach to segmentation of ultrasound imagery of prostate

    CERN Document Server

    Xu, Robert Sheng; Salama, Magdy

    2009-01-01

    The size and geometry of the prostate are known to be pivotal quantities used by clinicians to assess the condition of the gland during prostate cancer screening. As an alternative to palpation, an increasing number of methods for estimation of the above-mentioned quantities are based on using imagery data of prostate. The necessity to process large volumes of such data creates a need for automatic segmentation tools which would allow the estimation to be carried out with maximum accuracy and efficiency. In particular, the use of transrectal ultrasound (TRUS) imaging in prostate cancer screening seems to be becoming a standard clinical practice due to the high benefit-to-cost ratio of this imaging modality. Unfortunately, the segmentation of TRUS images is still hampered by relatively low contrast and reduced SNR of the images, thereby requiring the segmentation algorithms to incorporate prior knowledge about the geometry of the gland. In this paper, a novel approach to the problem of segmenting the TRUS imag...

  5. The Feat of Packaging Eight Unique Genome Segments

    Directory of Open Access Journals (Sweden)

    Sebastian Giese

    2016-06-01

    Influenza A viruses (IAVs) harbor a segmented RNA genome that is organized into eight distinct viral ribonucleoprotein (vRNP) complexes. Although a segmented genome may be a major advantage to adapt to new host environments, it comes at the cost of a highly sophisticated genome packaging mechanism. Newly synthesized vRNPs conquer the cellular endosomal recycling machinery to access the viral budding site at the plasma membrane. Genome packaging sequences unique to each RNA genome segment are thought to be key determinants ensuring the assembly and incorporation of eight distinct vRNPs into progeny viral particles. Recent studies using advanced fluorescence microscopy techniques suggest the formation of vRNP sub-bundles (comprising less than eight vRNPs during their transport on recycling endosomes. The formation of such sub-bundles might be required for efficient packaging of a bundle of eight different genomes segments at the budding site, further highlighting the complexity of IAV genome packaging.

  6. Accounting costs of transactions in real estate

    DEFF Research Database (Denmark)

    Stubkjær, Erik

    2005-01-01

    The costs of transactions in real estate are of importance for households, for investors, for statistical services, for governmental and international bodies concerned with the efficient delivery of basic state functions, as well as for research. The paper takes a multi-disciplinary approach in relating theoretical conceptualizations of transaction costs to national accounting and further to the identification and quantification of actions on units of real estate. The notion of satellite accounting from the System of National Accounts is applied to the segment of society concerned with changes in real estate. The paper ends with an estimate of the cost of a major real property transaction in Denmark.

  7. Automated Segmentation of Hyperintense Regions in FLAIR MRI Using Deep Learning.

    Science.gov (United States)

    Korfiatis, Panagiotis; Kline, Timothy L; Erickson, Bradley J

    2016-12-01

    We present a deep convolutional neural network application based on autoencoders aimed at segmentation of increased-signal regions in fluid-attenuated inversion recovery (FLAIR) magnetic resonance images. The convolutional autoencoders were trained on the publicly available Brain Tumor Image Segmentation Benchmark (BRATS) data set, and the accuracy was evaluated on a data set where 3 expert segmentations were available. The simultaneous truth and performance level estimation (STAPLE) algorithm was used to provide the ground truth for comparison, and the Dice coefficient, Jaccard coefficient, true positive fraction, and false negative fraction were calculated. The proposed technique was within the interobserver variability with respect to Dice, Jaccard, and true positive fraction. The developed method can be used to produce automatic segmentations of tumor regions corresponding to signal-increased fluid-attenuated inversion recovery regions.

  8. On the Automated Segmentation of Epicardial and Mediastinal Cardiac Adipose Tissues Using Classification Algorithms.

    Science.gov (United States)

    Rodrigues, Érick Oliveira; Cordeiro de Morais, Felipe Fernandes; Conci, Aura

    2015-01-01

    The quantification of fat depots in the surroundings of the heart is an accurate procedure for evaluating health risk factors correlated with several diseases. However, this type of evaluation is not widely employed in clinical practice due to the required human workload. This work proposes a novel technique for the automatic segmentation of cardiac fat pads. The technique is based on applying classification algorithms to the segmentation of cardiac CT images. Furthermore, we extensively evaluate the performance of several algorithms on this task and discuss which provided better predictive models. Experimental results have shown that the mean accuracy for the classification of epicardial and mediastinal fats was 98.4%, with a mean true positive rate of 96.2%. On average, the Dice similarity index, comparing the segmented patients with the ground truth, was equal to 96.8%. Therefore, our technique has achieved the most accurate results for the automatic segmentation of cardiac fats to date.

  9. A Comparison of Two Human Brain Tumor Segmentation Methods for MRI Data

    CERN Document Server

    Egger, Jan; Bauer, Miriam H A; Kuhnt, Daniela; Carl, Barbara; Freisleben, Bernd; Kolb, Andreas; Nimsky, Christopher

    2011-01-01

    The most common primary brain tumors are gliomas, evolving from the cerebral supportive cells. For clinical follow-up, the evaluation of the preoperative tumor volume is essential. Volumetric assessment of tumor volume with manual segmentation of its outlines is a time-consuming process that can be overcome with the help of computerized segmentation methods. In this contribution, two methods for World Health Organization (WHO) grade IV glioma segmentation in the human brain are compared using magnetic resonance imaging (MRI) patient data from the clinical routine. One method uses balloon inflation forces, and relies on detection of high intensity tumor boundaries that are coupled with the use of contrast agent gadolinium. The other method sets up a directed and weighted graph and performs a min-cut for optimal segmentation results. The ground truth of the tumor boundaries - for evaluating the methods on 27 cases - is manually extracted by neurosurgeons with several years of experience in the resection of glio...

  10. A new iterative method for liver segmentation from perfusion CT scans

    Science.gov (United States)

    Draoua, Ahmed; Albouy-Kissi, Adélaïde; Vacavant, Antoine; Sauvage, Vincent

    2014-03-01

    Liver cancer is the third most common cancer in the world, and the majority of patients with liver cancer will die within one year as a result of the cancer. Liver segmentation in the abdominal area is critical for tumor diagnosis and for surgical procedures. Moreover, it is a challenging task, as liver tissue has to be separated from adjacent organs, particularly the heart. In this paper we present a novel iterative liver segmentation method based on Fuzzy C-means (FCM) clustering coupled with fast marching segmentation and mutual information. A prerequisite for this method is the determination of slice correspondences between the ground truth (that is, a few images segmented by an expert) and images that contain the liver and heart at the same time.
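    Slice correspondence by mutual information can be sketched from a joint histogram; the bin count, the synthetic volume and the reference slice below are illustrative:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

def best_matching_slice(reference_slice, volume):
    """Index of the CT slice most similar to an expert-segmented reference slice."""
    scores = [mutual_information(reference_slice, volume[k])
              for k in range(volume.shape[0])]
    return int(np.argmax(scores))

# Hypothetical perfusion CT volume (slices x rows x cols) and one reference slice.
rng = np.random.default_rng(4)
volume = rng.random((30, 128, 128))
ref = volume[12] + rng.normal(scale=0.05, size=(128, 128))
k = best_matching_slice(ref, volume)
```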

  11. Surface properties of poly(ethylene oxide)-based segmented block copolymers with monodisperse hard segments

    NARCIS (Netherlands)

    Husken, D.; Feijen, Jan; Gaymans, R.J.

    2009-01-01

    The surface properties of segmented block copolymers based on poly(ethylene oxide) (PEO) segments and monodisperse crystallizable tetra-amide segments were studied. The monodisperse crystallizable segments (T6T6T) were based on terephthalate (T) and hexamethylenediamine (6). Due to the crystallinity

  12. Reinventing Grounded Theory: Some Questions about Theory, Ground and Discovery

    Science.gov (United States)

    Thomas, Gary; James, David

    2006-01-01

    Grounded theory's popularity persists after three decades of broad-ranging critique. In this article three problematic notions are discussed--"theory," "ground" and "discovery"--which linger in the continuing use and development of grounded theory procedures. It is argued that far from providing the epistemic security promised by grounded theory,…

  13. Segmentation of radiographic images under topological constraints: application to the femur

    Energy Technology Data Exchange (ETDEWEB)

    Gamage, Pavan; Xie, Sheng Quan [University of Auckland, Department of Mechanical Engineering (Mechatronics), Auckland (New Zealand); Delmas, Patrice [University of Auckland, Department of Computer Science, Auckland (New Zealand); Xu, Wei Liang [Massey University, School of Engineering and Advanced Technology, Auckland (New Zealand)

    2010-09-15

    A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions. (orig.)

  14. Automatic and hierarchical segmentation of the human skeleton in CT images

    Science.gov (United States)

    Fu, Yabo; Liu, Shi; Li, H. Harold; Yang, Deshan

    2017-04-01

    Accurate segmentation of each bone of the human skeleton is useful in many medical disciplines. The results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulty due to the high image contrast between the bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to the many limitations of the CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all the major individual bones of the human skeleton above the upper legs in CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. The degrees of freedom of femora and humeri are modeled to support patients in different body and limb postures. The segmentation results are evaluated using the Dice coefficient and point-to-surface error (PSE) against manual segmentation results as the ground-truth. The results suggest that the reported method can automatically segment and label the human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for the mandible, 0.62 mm for cervical vertebrae, 0.92 mm for thoracic

  15. Automatic and hierarchical segmentation of the human skeleton in CT images.

    Science.gov (United States)

    Fu, Yabo; Liu, Shi; Li, Hui Harold; Yang, Deshan

    2017-02-14

    Accurate segmentation of each bone in human skeleton is useful in many medical disciplines. Results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulties due to high image contrast between bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to many limitations in the CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all major individual bones of human skeleton above the upper legs in the CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. Degrees of freedom of femora and humeri are modeled to support patients in different body and limb postures. Segmentation results are evaluated using Dice coefficient and point-to-surface error (PSE) against manual segmentation results as ground truth. The results suggest that the reported method can automatically segment and label human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for mandible, 0.62 mm for cervical vertebrae, 0.92 mm for thoracic vertebrae, and 1.45 mm for pelvis bones.

  16. Unfolding Implementation in Industrial Market Segmentation

    DEFF Research Database (Denmark)

    Bøjgaard, John; Ellegaard, Chris

    2011-01-01

    Market segmentation is an important method of strategic marketing and constitutes a cornerstone of the marketing literature. It has undergone extensive scientific inquiry during the past 50 years. Reporting on an extensive review of the market segmentation literature, the challenging task of implementing industrial market segmentation is discussed and unfolded in this article. Extant literature has identified segmentation implementation as a core challenge for marketers, but also one which has received limited empirical attention. Future research opportunities are formulated in this article for marketing management. Three key elements and challenges connected to the execution of market segmentation are identified: organization, motivation, and adaptation.

  17. Segmental patterning of the vertebrate embryonic axis.

    Science.gov (United States)

    Dequéant, Mary-Lee; Pourquié, Olivier

    2008-05-01

    The body axis of vertebrates is composed of a serial repetition of similar anatomical modules that are called segments or metameres. This particular mode of organization is especially conspicuous at the level of the periodic arrangement of vertebrae in the spine. The segmental pattern is established during embryogenesis when the somites--the embryonic segments of vertebrates--are rhythmically produced from the paraxial mesoderm. This process involves the segmentation clock, which is a travelling oscillator that interacts with a maturation wave called the wavefront to produce the periodic series of somites. Here, we review our current understanding of the segmentation process in vertebrates.

  18. Advanced Testing Method for Ground Thermal Conductivity

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiaobing [ORNL; Clemenzi, Rick [Geothermal Design Center Inc.; Liu, Su [University of Tennessee (UT)

    2017-04-01

    A new method is developed that can quickly and more accurately determine the effective ground thermal conductivity (GTC) based on thermal response test (TRT) results. Ground thermal conductivity is an important parameter for sizing ground heat exchangers (GHEXs) used by geothermal heat pump systems. The conventional GTC test method usually requires a TRT for 48 hours with a very stable electric power supply throughout the entire test. In contrast, the new method reduces the required test time by 40%–60% or more, and it can determine GTC even with an unstable or intermittent power supply. Consequently, it can significantly reduce the cost of GTC testing and increase its use, which will enable optimal design of geothermal heat pump systems. Further, this new method provides more information about the thermal properties of the GHEX and the ground than previous techniques. It can verify the installation quality of GHEXs and has the potential, if developed, to characterize the heterogeneous thermal properties of the ground formation surrounding the GHEXs.
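    For comparison, the conventional way to extract GTC from TRT data is the infinite line-source approximation, in which the mean fluid temperature grows linearly with ln(t) and k = Q / (4 pi L b), where b is the fitted slope. The sketch below implements only this conventional baseline with synthetic data, not the new method described in the abstract:

```python
import numpy as np

def line_source_conductivity(time_s, mean_fluid_temp, heat_rate_w, borehole_len_m,
                             skip_hours=10.0):
    """
    Effective ground thermal conductivity from a thermal response test using the
    infinite line-source approximation: T(t) ~ (Q / (4*pi*k*L)) * ln(t) + const,
    so k = Q / (4*pi*L*slope). Early-time data are skipped, as is conventional.
    """
    keep = time_s > skip_hours * 3600.0
    slope, _ = np.polyfit(np.log(time_s[keep]), mean_fluid_temp[keep], 1)
    return heat_rate_w / (4.0 * np.pi * borehole_len_m * slope)

# Hypothetical 48-hour TRT: 5 kW injected into a 150 m borehole, true k = 2.5 W/(m K).
t = np.linspace(600, 48 * 3600, 500)
k_true, Q, L = 2.5, 5000.0, 150.0
temps = 15.0 + Q / (4 * np.pi * k_true * L) * np.log(t)
print(line_source_conductivity(t, temps, Q, L))   # recovers ~2.5
```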

  19. Ground water in Oklahoma

    Science.gov (United States)

    Leonard, A.R.

    1960-01-01

    One of the first requisites for the intelligent planning of utilization and control of water and for the administration of laws relating to its use is data on the quantity, quality, and mode of occurrence of the available supplies. The collection, evaluation and interpretation, and publication of such data are among the primary functions of the U.S. Geological Survey. Since 1895 the Congress has made appropriations to the Survey for investigation of the water resources of the Nation. In 1929 the Congress adopted the policy of dollar-for-dollar cooperation with the States and local governmental agencies in water-resources investigations of the U.S. Geological Survey. In 1937 a program of ground-water investigations was started in cooperation with the Oklahoma Geological Survey, and in 1949 this program was expanded to include cooperation with the Oklahoma Planning and Resources Board. In 1957 the State Legislature created the Oklahoma Water Resources Board as the principal State water agency and it became the principal local cooperator. The Ground Water Branch of the U.S. Geological Survey collects, analyzes, and evaluates basic information on ground-water resources and prepares interpretive reports based on those data. Cooperative ground-water work was first concentrated in the Panhandle counties. During World War II most work was related to problems of water supply for defense requirements. Since 1945 detailed investigations of ground-water availability have been made in 11 areas, chiefly in the western and central parts of the State. In addition, water levels in more than 300 wells are measured periodically, principally in the western half of the State. In Oklahoma current studies are directed toward determining the source, occurrence, and availability of ground water and toward estimating the quantity of water and rate of replenishment to specific areas and water-bearing formations. Ground water plays an important role in the economy of the State. It is

  20. Evaluating data worth for ground-water management under uncertainty

    Science.gov (United States)

    Wagner, B.J.

    1999-01-01

    A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
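    Chance-constrained management models of this kind are commonly solved through a deterministic equivalent in which each uncertain drawdown constraint is tightened by a reliability margin; the toy sketch below illustrates that idea only, with a response matrix, uncertainties and limits invented for the example rather than taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

# Toy problem: choose pumping rates x (m^3/d) at 2 wells to maximise total pumping
# while keeping drawdown at 2 control points below d_max with 95% reliability.
A_mean = np.array([[0.004, 0.001],     # mean drawdown response matrix (m per m^3/d)
                   [0.001, 0.005]])
A_std = 0.2 * A_mean                   # assumed uncertainty in the responses
d_max = np.array([2.0, 2.0])           # drawdown limits (m)
z = norm.ppf(0.95)                     # reliability factor for 95% constraints

# Simplified deterministic equivalent: mean drawdown plus a z-scaled (approximate)
# uncertainty margin must stay below d_max.
A_ub = A_mean + z * A_std
c = np.array([-1.0, -1.0])             # maximise x1 + x2 -> minimise -(x1 + x2)
res = linprog(c, A_ub=A_ub, b_ub=d_max, bounds=[(0, None), (0, None)])
print(res.x)                           # chance-constrained optimal pumping rates
```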

  1. Interactive and scale invariant segmentation of the rectum/sigmoid via user-defined templates

    Science.gov (United States)

    Lüddemann, Tobias; Egger, Jan

    2016-03-01

    Among all types of cancer, gynecological malignancies are the fourth most frequent type of cancer among women. Besides chemotherapy and external beam radiation, brachytherapy is the standard procedure for the treatment of these malignancies. In the process of treatment planning, localization of the tumor as the target volume and of adjacent organs at risk by segmentation is crucial to accomplish an optimal radiation distribution to the tumor while simultaneously preserving healthy tissue. Segmentation is performed manually and represents a time-consuming task in daily clinical routine. This study focuses on the segmentation of the rectum/sigmoid colon as an Organ-At-Risk in gynecological brachytherapy. The proposed segmentation method uses an interactive, graph-based segmentation scheme with a user-defined template. The scheme creates a directed two-dimensional graph, followed by computation of the minimal-cost closed set on the graph, resulting in an outline of the rectum. The graph's outline is dynamically adapted to the last calculated cut. Evaluation was performed by comparing manual segmentations of the rectum/sigmoid colon to results achieved with the proposed method. The comparison of the algorithmic to the manual results yielded a Dice Similarity Coefficient of 83.85+/-4.08%, compared to 83.97+/-8.08% for the comparison of two manual segmentations by the same physician. Utilizing the proposed methodology resulted in a median time of 128 seconds per dataset, compared to 300 seconds needed for pure manual segmentation.

  2. Top Level Space Cost Methodology (TLSCM)

    Science.gov (United States)

    2007-11-02

    (Only table-of-contents and text fragments are available for this record.) The listed topics include ground rules and assumptions, a typical life cycle cost distribution, and estimating methodologies such as cost/budget thresholds and analogy, the latter based on real-time Air Force and space programs (Ref. 25:2-8, 2-9). ACEIT (Automated Cost Estimating Integrated Tools), from Tecolote Research, Inc., is cited as a cost program that can be used to print an expanded WBS, with the advice to find someone who has ACEIT experience.

  3. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization

    Directory of Open Access Journals (Sweden)

    Philipp Kainz

    2017-10-01

    Full Text Available Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNNs) for pixel-wise classification of Hematoxylin-Eosin-stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the other approaches developed simultaneously for the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.
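
    The final regularization step can be illustrated with off-the-shelf tools. The sketch below is a rough stand-in only: it smooths a fake CNN foreground probability map with scikit-image's unweighted total-variation denoiser and thresholds it, whereas the paper solves a weighted total-variation figure-ground model; the function name, weights, and threshold are illustrative assumptions.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle  # assumed available (scikit-image)

def tv_regularized_segmentation(prob_map, tv_weight=0.2, threshold=0.5):
    """Smooth a pixel-wise foreground probability map with total variation
    and threshold it into a binary gland mask.

    prob_map : 2-D array of CNN foreground probabilities in [0, 1].
    """
    # TV denoising suppresses isolated, noisy CNN responses while keeping edges.
    # (The paper uses a *weighted* TV figure-ground model; plain TV is used
    # here only as an illustrative stand-in.)
    smoothed = denoise_tv_chambolle(prob_map, weight=tv_weight)
    return smoothed > threshold

# Toy usage with a random "probability map" standing in for CNN output.
rng = np.random.default_rng(0)
fake_probs = np.clip(rng.normal(0.3, 0.2, size=(128, 128)), 0, 1)
fake_probs[32:96, 32:96] += 0.5              # a bright square as a fake gland
mask = tv_regularized_segmentation(np.clip(fake_probs, 0, 1))
print(mask.sum(), "foreground pixels")
```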

  4. Three-dimensional brain magnetic resonance imaging segmentation via knowledge-driven decision theory.

    Science.gov (United States)

    Verma, Nishant; Muralidhar, Gautam S; Bovik, Alan C; Cowperthwaite, Matthew C; Burnett, Mark G; Markey, Mia K

    2014-10-01

    Brain tissue segmentation on magnetic resonance (MR) imaging is a difficult task because of significant intensity overlap between the tissue classes. We present a new knowledge-driven decision theory (KDT) approach that incorporates prior information on the relative extents of intensity overlap between tissue class pairs for volumetric MR tissue segmentation. The proposed approach better handles intensity overlap between tissues without explicitly employing methods for removal of MR image corruptions (such as bias field). Adaptive tissue class priors are employed that combine probabilistic atlas maps with spatial contextual information obtained from Markov random fields to guide tissue segmentation. The energy function is minimized using a variational level-set-based framework, which has shown great promise for MR image analysis. We evaluate the proposed method on two well-established real MR datasets with expert ground-truth segmentations and compare our approach against existing segmentation methods. KDT has low computational complexity and shows better segmentation performance than the other segmentation methods evaluated on these MR datasets.

  5. Detection of ground ice using ground penetrating radar method

    Institute of Scientific and Technical Information of China (English)

    Gennady M. Stoyanovich; Viktor V. Pupatenko; Yury A. Sukhobok

    2015-01-01

    The paper presents the results of applying ground penetrating radar (GPR) to the detection of ground ice. We combined an analysis of reflection traveltime curves with a frequency spectrogram analysis. We found characteristic anomalies at specific traces in the traveltime curves and in the analysis of ground boundaries, and obtained a model of the subsurface structure that allows the ground ice layer to be identified and delineated.

  6. Automatic Speech Segmentation Based on HMM

    Directory of Open Access Journals (Sweden)

    M. Kroul

    2007-06-01

    Full Text Available This contribution deals with the problem of automatic phoneme segmentation using HMMs. Automation of the speech segmentation task is important for applications where large amounts of data must be processed, making manual segmentation out of the question. In this paper we focus on automatic segmentation of recordings that will be used to create a database of triphone synthesis units. For speech synthesis, the quality of the speech units is crucial, so maximal segmentation accuracy is needed. In this work, different kinds of HMMs with various parameters have been trained, and their usefulness for automatic segmentation is discussed. At the end of the work, segmentation accuracy tests of all models are presented.
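
    A minimal sketch of HMM-based boundary placement is shown below, using the hmmlearn library (assumed available) to fit a Gaussian HMM to a feature sequence and cutting segments wherever the Viterbi state sequence changes. This is only an unsupervised toy: the system described above aligns trained phoneme HMMs against known transcriptions (forced alignment), which the sketch does not attempt.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed available (pip install hmmlearn)

def hmm_segment(features, n_states=5):
    """Segment a feature sequence at the state changes of a Gaussian HMM.

    features : array of shape (n_frames, n_dims), e.g. MFCC frames.
    Returns a list of (start_frame, end_frame, state) segments.
    """
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(features)
    states = model.predict(features)          # Viterbi state sequence
    segments, start = [], 0
    for t in range(1, len(states)):
        if states[t] != states[t - 1]:
            segments.append((start, t, int(states[t - 1])))
            start = t
    segments.append((start, len(states), int(states[-1])))
    return segments

# Toy feature sequence: three blocks with different means stand in for phones.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(m, 0.3, size=(40, 4)) for m in (0.0, 2.0, -1.5)])
print(hmm_segment(feats, n_states=3))
```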

  7. Unsupervised Performance Evaluation of Image Segmentation

    Directory of Open Access Journals (Sweden)

    Chabrier Sebastien

    2006-01-01

    Full Text Available We present in this paper a study of unsupervised evaluation criteria that enable the quantification of the quality of an image segmentation result. These evaluation criteria compute some statistics for each region or class in a segmentation result. Such an evaluation criterion can be useful for different applications: the comparison of segmentation results, the automatic choice of the best-fitted parameters of a segmentation method for a given image, or the definition of new segmentation methods by optimization. We first present the state of the art of unsupervised evaluation, and then we compare six unsupervised evaluation criteria. For this comparative study, we use a database composed of 8400 synthetic gray-level images segmented in four different ways. Vinet's measure (correct classification rate) is used as an objective criterion to compare the behavior of the different criteria. Finally, we present the experimental results on the segmentation evaluation of a few gray-level natural images.
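
    As a flavor of what such criteria compute, the sketch below implements a simple intra-region uniformity score (size-weighted average of within-region intensity variance, lower is better). It is written in the spirit of the surveyed criteria, not as a reimplementation of any specific criterion or of Vinet's measure.

```python
import numpy as np

def intra_region_uniformity(image, labels):
    """Unsupervised segmentation quality score: the average intensity variance
    inside each region, weighted by region size (lower is better)."""
    image = np.asarray(image, dtype=float)
    total = 0.0
    for region in np.unique(labels):
        values = image[labels == region]
        total += values.size * values.var()
    return total / image.size

# Toy example: a two-region image scored against a matching and a random labeling.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
good = (np.arange(64)[None, :] >= 32).astype(int) * np.ones((64, 1), dtype=int)
rng = np.random.default_rng(2)
bad = rng.integers(0, 2, size=img.shape)
print(intra_region_uniformity(img, good), intra_region_uniformity(img, bad))
```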

  8. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially-continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
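
    A minimal sketch of the idea is shown below, using scikit-learn's LinearDiscriminantAnalysis on hypothetical labeled spectra: the learned linear transform defines a distance (Euclidean distance in the transformed space) that a graph-based segmenter could use for edge weights. The data and class structure are made up for illustration and do not correspond to CRISM.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical labeled training spectra (n_samples x n_bands) and class labels.
rng = np.random.default_rng(3)
X_train = np.vstack([rng.normal(m, 1.0, size=(50, 20)) for m in (0.0, 1.0, 2.0)])
y_train = np.repeat([0, 1, 2], 50)

# Multiclass LDA learns a linear transform that separates the training classes.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_train, y_train)

def learned_distance(a, b):
    """Distance between two spectra under the learned metric:
    Euclidean distance in the LDA-transformed space."""
    ta, tb = lda.transform(np.vstack([a, b]))
    return float(np.linalg.norm(ta - tb))

# Edge weights for a graph-based segmenter could use learned_distance between
# neighboring pixels' spectra instead of a task-agnostic metric.
print(learned_distance(X_train[0], X_train[1]), learned_distance(X_train[0], X_train[60]))
```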

  9. Image Segmentation Using Hierarchical Merge Tree

    Science.gov (United States)

    Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2016-10-01

    This paper investigates one of the most fundamental computer vision problems: image segmentation. We propose a supervised hierarchical approach to object-independent image segmentation. Starting from an over-segmentation into superpixels, we use a tree structure to represent the hierarchy of region merging, by which we reduce the problem of segmenting image regions to finding a set of label assignments to tree nodes. We formulate the tree structure as a constrained conditional model to associate region merging with likelihoods predicted using an ensemble boundary classifier. Final segmentations can then be inferred by efficiently finding globally optimal solutions to the model. We also present an iterative training and testing algorithm that generates various tree structures and combines them to emphasize accurate boundaries by segmentation accumulation. Experimental results and comparisons with other very recent methods on six public data sets demonstrate that our approach achieves state-of-the-art region accuracy and is very competitive in image segmentation without semantic priors.
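
    The merge-tree data structure can be sketched with standard tools (scikit-image and SciPy, assumed available): over-segment an image into superpixels, build a merge hierarchy over superpixel mean colors with agglomerative clustering, and cut it at a chosen number of regions. Note that this substitutes plain Ward clustering for the paper's learned boundary classifier and constrained conditional model, so it only illustrates the hierarchy, not the method's accuracy.

```python
import numpy as np
from skimage import data, segmentation
from scipy.cluster.hierarchy import linkage, fcluster

# Over-segment into superpixels, then build a merge hierarchy over their mean
# colors. (The paper scores merges with a learned boundary classifier; Ward
# clustering is used here only to illustrate the merge-tree idea.)
image = data.astronaut()
superpixels = segmentation.slic(image, n_segments=400, compactness=10, start_label=0)

n_sp = superpixels.max() + 1
mean_colors = np.array([image[superpixels == i].mean(axis=0) for i in range(n_sp)])

# 'linkage' encodes the full merge tree; cutting it at different levels yields
# segmentations of different granularity.
tree = linkage(mean_colors, method="ward")
labels_per_superpixel = fcluster(tree, t=12, criterion="maxclust")  # 12 regions
final_segmentation = labels_per_superpixel[superpixels]
print(final_segmentation.shape, np.unique(final_segmentation).size)
```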

  10. Collision and Grounding

    DEFF Research Database (Denmark)

    Wang, G.; Ji, C.; Kuhala, P.;

    2006-01-01

    COMMITTEE MANDATE: Concern for structural arrangements on ships and floating structures with regard to their integrity and adequacy in the events of collision and grounding, with the view towards risk assessment and management. Consideration shall be given to the frequency of occurrence...

  11. Protein-segment universe exhibiting transitions at intermediate segment length in conformational subspaces

    OpenAIRE

    Hirokawa Takatsugu; Ikeda Kazuyoshi; Higo Junichi; Tomii Kentaro

    2008-01-01

    Abstract Background Many studies have examined rules governing two aspects of protein structures: short segments and proteins' structural domains. Nevertheless, the organization and nature of the conformational space of segments with intermediate length between short segments and domains remain unclear. Conformational spaces of intermediate length segments probably differ from those of short segments. We investigated the identification and characterization of the boundary(s) between peptide-l...

  12. Automated ventricular systems segmentation in brain CT images by combining low-level segmentation and high-level template matching

    Directory of Open Access Journals (Sweden)

    Ward Kevin R

    2009-11-01

    Full Text Available Abstract Background: Accurate analysis of CT brain scans is vital for diagnosis and treatment of Traumatic Brain Injuries (TBI). Automatic processing of these CT brain scans could speed up the decision-making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of ventricles provides quantitative measures of the changes of ventricles in the brain that form vital diagnostic information. Methods: First, all CT slices are aligned by detecting the ideal midlines in all images. The initial estimate of the ideal midline of the brain is found based on skull symmetry, and then the initial estimate is further refined using detected anatomical features. A two-step method is then used for ventricle segmentation. First, a low-level, per-pixel segmentation is applied to the CT images. For this step, both Iterated Conditional Mode (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments for ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases. Results: Experiments show that the acceptance rate of the ideal midline detection is over 95%. Two measurements are defined to evaluate ventricle recognition results. The first is a sensitivity-like measure and the second is a false-positive-like measure. For the first measurement, the rate is 100%, indicating that all ventricles are identified in all slices. The false-positive-like measurement is 8.59%. We also point out the similarities and differences between the ICM and MASP algorithms through both mathematical relationships and segmentation results on CT images. Conclusion: The experiments show the reliability of the proposed algorithms.
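
    The two-step ventricle identification can be caricatured as follows. The sketch below substitutes a crude intensity threshold for the ICM/MASP low-level segmentation and matches connected components against a template described only by an expected area and centroid; the template values, thresholds, and feature choices are made-up assumptions, not those of the paper.

```python
import numpy as np
from scipy import ndimage

def match_ventricles(ct_slice, template_features, low, high):
    """Two-step sketch: (1) a crude low-level segmentation by intensity
    thresholding (standing in for ICM/MASP), then (2) keep the connected
    component whose simple shape features best match a ventricle template.

    template_features : dict with expected 'area' and 'centroid' (row, col),
    assumed to come from an aligned anatomical template.
    """
    # Step 1: low-level segmentation of CSF-like (dark) voxels.
    candidates = (ct_slice >= low) & (ct_slice <= high)
    labeled, n = ndimage.label(candidates)

    scores = []
    for region in range(1, n + 1):
        mask = labeled == region
        area = mask.sum()
        centroid = np.array(ndimage.center_of_mass(mask))
        # Step 2: distance to the template in a crude (area, position) feature space.
        score = (abs(area - template_features["area"]) / template_features["area"]
                 + np.linalg.norm(centroid - template_features["centroid"]) / ct_slice.shape[0])
        scores.append((score, region))
    best = min(scores)[1] if scores else 0
    return labeled == best

# Toy usage on a synthetic slice with one dark blob standing in for a ventricle.
slice_ = np.full((128, 128), 40.0); slice_[50:70, 55:75] = 5.0
tmpl = {"area": 400, "centroid": np.array([60.0, 65.0])}
print(match_ventricles(slice_, tmpl, low=0, high=10).sum())
```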

  13. An efficient iris segmentation approach

    Science.gov (United States)

    Gomai, Abdu; El-Zaart, A.; Mathkour, H.

    2011-10-01

    The iris recognition system has become a reliable system for authentication and verification tasks. It consists of five stages: image acquisition, iris segmentation, iris normalization, feature encoding, and feature matching. The iris segmentation stage is one of the most important, as it plays an essential role in locating the iris efficiently and accurately. In this paper, we present a new approach for iris segmentation using image processing techniques. This approach is composed of four main parts. (1) Eliminating reflections of light on the eye image by inverting the color of the grayscale image, filling holes in the intensity image, and inverting the color of the intensity image to get the original grayscale image without any reflections. (2) Pupil boundary detection by dividing the eye image into nine sub-images and finding the minimum of the mean intensities of the sub-images to obtain a suitable pupil threshold. (3) Enhancing the contrast of the outer iris boundary using an exponential operator to obtain sharp variation. (4) Outer iris boundary localization by applying a gray threshold and morphological operations to the rectangular part of the eye image that includes the pupil and the outer iris boundaries, to find the small radius of the outer iris boundary from the center of the pupil. The proposed approach has been tested on the CASIA v1.0 iris image database and another collected iris image database. The experimental results show that the approach is able to detect the pupil and outer iris boundary with an accuracy of approximately 100% while reducing time consumption.
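
    Part (2) of the approach is easy to sketch: split the eye image into nine sub-images, take the minimum of their mean intensities as the pupil intensity estimate, and threshold. The margin parameter below is an assumption added for the toy example, not a value from the paper.

```python
import numpy as np

def pupil_threshold(eye_image, margin=10):
    """Estimate a pupil threshold as in part (2): split the image into nine
    sub-images, take the minimum of their mean intensities (the pupil is the
    darkest structure), and add a small margin."""
    h, w = eye_image.shape
    means = []
    for i in range(3):
        for j in range(3):
            block = eye_image[i * h // 3:(i + 1) * h // 3,
                              j * w // 3:(j + 1) * w // 3]
            means.append(block.mean())
    return min(means) + margin

# Toy usage: a bright eye image with a dark pupil region.
rng = np.random.default_rng(4)
eye = rng.normal(180, 10, size=(120, 160))
eye[40:80, 60:100] = rng.normal(30, 5, size=(40, 40))
thr = pupil_threshold(eye)
pupil_mask = eye < thr
print(round(thr, 1), pupil_mask.sum())
```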

  14. Figure-ground interaction in the human visual cortex

    OpenAIRE

    Appelbaum, Lawrence G.; Wade, Alex R.; Pettet, Mark W.; Vildavski, Vladimir Y.; Anthony M Norcia

    2008-01-01

    Discontinuities in feature maps serve as important cues for the location of object boundaries. Here we used multi-input nonlinear analysis methods and EEG source imaging to assess the role of several different boundary cues in visual scene segmentation. Synthetic figure/ground displays portraying a circular figure region were defined solely by differences in the temporal frequency of the figure and background regions in the limiting case and by the addition of orientation or relative alignmen...

  15. Heterologous Packaging Signals on Segment 4, but Not Segment 6 or Segment 8, Limit Influenza A Virus Reassortment.

    Science.gov (United States)

    White, Maria C; Steel, John; Lowen, Anice C

    2017-06-01

    Influenza A virus (IAV) RNA packaging signals serve to direct the incorporation of IAV gene segments into virus particles, and this process is thought to be mediated by segment-segment interactions. These packaging signals are segment and strain specific, and as such, they have the potential to impact reassortment outcomes between different IAV strains. Our study aimed to quantify the impact of packaging signal mismatch on IAV reassortment using the human seasonal influenza A/Panama/2007/99 (H3N2) and pandemic influenza A/Netherlands/602/2009 (H1N1) viruses. Focusing on the three most divergent segments, we constructed pairs of viruses that encoded identical proteins but differed in the packaging signal regions on a single segment. We then evaluated the frequency with which segments carrying homologous versus heterologous packaging signals were incorporated into reassortant progeny viruses. We found that, when segment 4 (HA) of coinfecting parental viruses was modified, there was a significant preference for the segment containing matched packaging signals relative to the background of the virus. This preference was apparent even when the homologous HA constituted a minority of the HA segment population available in the cell for packaging. Conversely, when segment 6 (NA) or segment 8 (NS) carried modified packaging signals, there was no significant preference for homologous packaging signals. These data suggest that movement of NA and NS segments between the human H3N2 and H1N1 lineages is unlikely to be restricted by packaging signal mismatch, while movement of the HA segment would be more constrained. Our results indicate that the importance of packaging signals in IAV reassortment is segment dependent. IMPORTANCE: Influenza A viruses (IAVs) can exchange genes through reassortment. This process contributes to both the highly diverse population of IAVs found in nature and the formation of novel epidemic and pandemic IAV strains.

  16. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation

    Science.gov (United States)

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-01-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs. no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence-level ground truth. These segments are generated via multiple clusterings of a sequence or by running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through ‘concept frames’ to ‘concept segments’ and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such a representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on the UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches important for pain detection.
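
    The segment generation and MIL-style inference can be sketched in a few lines, as below: multi-scale temporal scanning windows provide the instances, the bag (sequence) score is the maximum over instance scores, and the arg-max window gives the weakly supervised localization. The per-frame scores stand in for the learned BoW classifier, which is not reproduced here; window scales and strides are illustrative assumptions.

```python
import numpy as np

def temporal_windows(n_frames, scales=(8, 16, 32), stride_ratio=0.5):
    """Multi-scale temporal scanning windows as (start, end) index pairs."""
    windows = []
    for scale in scales:
        stride = max(1, int(scale * stride_ratio))
        for start in range(0, max(1, n_frames - scale + 1), stride):
            windows.append((start, min(start + scale, n_frames)))
    return windows

def detect_and_localize(frame_scores):
    """MIL-style inference: the sequence (bag) score is the max over segment
    (instance) scores; the arg-max window localizes the painful event."""
    windows = temporal_windows(len(frame_scores))
    seg_scores = [np.mean(frame_scores[a:b]) for a, b in windows]
    best = int(np.argmax(seg_scores))
    return max(seg_scores), windows[best]

# Toy usage: a 100-frame video with a simulated pain burst at frames 40-60.
scores = np.zeros(100); scores[40:60] = 1.0
print(detect_and_localize(scores))
```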

  17. Novel Facial Features Segmentation Algorithm

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An efficient algorithm for facial feature extraction is proposed. The facial features we segment are the two eyes, the nose, and the mouth. The algorithm is based on an improved Gabor-wavelet edge detector, a morphological approach to detect the face region and facial feature regions, and an improved T-shape face mask to locate the exact location of the facial features. The experimental results show that the proposed method is robust to facial expression and illumination and remains effective when the person is wearing glasses.

  18. Operational Gesture Segmentation and Recognition

    Institute of Scientific and Technical Information of China (English)

    马赓宇; 林学訚

    2003-01-01

    Gesture analysis by computer is an important part of the human-computer interface (HCI). A gesture analysis method was developed using a skin-color-based method to extract the area representing the hand in a single image, with a distribution feature measurement designed to describe the hand shape in the images. A hidden Markov model (HMM) based method was used to analyze the temporal variation and segmentation of continuous operational gestures. Furthermore, a transition HMM was used to represent the period between gestures, so the method could segment continuous gestures and eliminate non-standard gestures. The system can analyze 2 frames per second, which is sufficient for real-time analysis.

  19. Interferon Induced Focal Segmental Glomerulosclerosis

    Science.gov (United States)

    Bayram Kayar, Nuket; Alpay, Nadir; Hamdard, Jamshid; Emegil, Sebnem; Bag Soydas, Rabia; Baysal, Birol

    2016-01-01

    Behçet's disease is an inflammatory disease of unknown etiology that involves recurring oral and genital aphthous ulcers and ocular lesions, as well as articular, vascular, and nervous system involvement. Focal segmental glomerulosclerosis (FSGS) is usually seen in viral infections, immune deficiency syndrome, sickle cell anemia, and hyperfiltration, and may also occur secondary to interferon therapy. Here, we present a case of interferon-associated FSGS, identified by kidney biopsy, in a patient who had been diagnosed with Behçet's disease, had received interferon-alpha treatment for uveitis, and presented with acute renal failure and nephrotic syndrome. PMID:27847659

  20. A contrario line segment detection

    CERN Document Server

    von Gioi, Rafael Grompone

    2014-01-01

    The reliable detection of low-level image structures is an old and still challenging problem in computer vision. This book leads a detailed tour through the LSD algorithm, a line segment detector designed to be fully automatic. Based on the a contrario framework, the algorithm works efficiently without the need for any parameter tuning. The design criteria are thoroughly explained, and the algorithm's good and bad results are illustrated on real and synthetic images. The issues involved, as well as the strategies used, are common to many geometrical structure detection problems.