WorldWideScience

Sample records for ground validation segment

  1. The LOFT Ground Segment

    DEFF Research Database (Denmark)

    Bozzo, E.; Antonelli, A.; Argan, A.

    2014-01-01

targets per orbit (~90 minutes), providing roughly 80 GB of proprietary data per day (the proprietary period will be 12 months). The WFM continuously monitors about 1/3 of the sky at a time and provides data for about 100 sources a day, resulting in a total of ~20 GB of additional telemetry. The LOFT Burst Alert System additionally identifies bright impulsive events on board (e.g., Gamma-ray Bursts, GRBs) and broadcasts the corresponding position and trigger time to the ground using a dedicated system of ~15 VHF receivers. All WFM data are planned to be made public immediately. In this contribution we summarize the planned organization of the LOFT ground segment (GS), as established in the mission Yellow Book. We describe the expected GS contributions from ESA and the LOFT consortium. A review is provided of the planned LOFT data products and the details of the data flow, archiving…

  2. Noise destroys feedback enhanced figure-ground segmentation but not feedforward figure-ground segmentation

    Science.gov (United States)

    Romeo, August; Arall, Marina; Supèr, Hans

    2012-01-01

Figure-ground (FG) segmentation is the separation of visual information into background and foreground objects. In the visual cortex, FG responses are observed in the late stimulus response period, when neurons fire in tonic mode, and are accompanied by a switch in cortical state. When such a switch does not occur, FG segmentation fails. Currently, it is not known what happens in the brain on such occasions. A biologically plausible feedforward spiking neuron model was previously devised that performed FG segmentation successfully. After incorporating feedback, the FG signal was enhanced, which was accompanied by a change in spiking regime: in the feedforward model neurons respond in a bursting mode, whereas in the feedback model neurons fire in tonic mode. It is known that bursts can overcome noise, while tonic firing appears to be much more sensitive to noise. In the present study, we try to elucidate how the presence of noise can impair FG segmentation, and to what extent the feedforward and feedback pathways can overcome noise. We show that noise specifically destroys the feedback enhanced FG segmentation and leaves the feedforward FG segmentation largely intact. Our results predict that noise produces failure in FG perception. PMID:22934028

  3. Deficit in figure-ground segmentation following closed head injury.

    Science.gov (United States)

    Baylis, G C; Baylis, L L

    1997-08-01

Patient CB showed a severe impairment in figure-ground segmentation following a closed head injury. Unlike normal subjects, CB was unable to parse smaller and brighter parts of stimuli as figure. Moreover, she did not show the normal effect that symmetrical regions are seen as figure, although she was able to make overt judgments of symmetry. Since she was able to attend normally to isolated objects, CB demonstrates a dissociation between figure-ground segmentation and subsequent processes of attention. Despite her severe impairment in figure-ground segmentation, CB showed normal 'parallel' single-feature visual search. This suggests that figure-ground segmentation is dissociable from 'preattentive' processes such as visual search.

  4. ESA Earth Observation Ground Segment Evolution Strategy

    Science.gov (United States)

    Benveniste, J.; Albani, M.; Laur, H.

    2016-12-01

One of the key elements driving the evolution of EO ground segments, particularly in Europe, has been to enable the creation of added value from EO data and products. This requires the ability to constantly adapt and improve the service to a user base expanding far beyond the `traditional' EO user community of remote sensing specialists. Citizen scientists, the general public, media and educational actors form another user group that is expected to grow. Technological advances, Open Data policies (including those implemented by ESA and the EU), and an increasing number of satellites in operation (e.g. the Copernicus Sentinels) have led to an enormous increase in available data volumes. At the same time, even with modern network and data handling services, fewer users can afford to bulk-download and consider all potentially relevant data and associated knowledge. The "EO Innovation Europe" concept is being implemented in Europe in coordination between the European Commission, ESA and other European space agencies, and industry. The concept is encapsulated in the two main ideas of "Bringing the User to the Data" and "Connecting the Users", which complement the traditional one-to-one "data delivery" approach of the past. Both aim to better "empower the users" and to create a "sustainable system of interconnected EO Exploitation Platforms", with the objective of enabling large-scale exploitation of European EO data assets, stimulating innovation and maximizing their impact. These interoperable, interconnected platforms are virtual environments in which users, individually or collaboratively, have access to the required data sources and processing tools, as opposed to downloading and handling the data `at home'. EO Innovation Europe has been structured around three elements: an enabling element (acting as a back office), a stimulating element and an outreach element (acting as a front office). Within the enabling element, a "mutualisation" of efforts

  5. Eliciting Perceptual Ground Truth for Image Segmentation

    OpenAIRE

    Hodge, Victoria Jane; Eakins, John; Austin, Jim

    2006-01-01

    In this paper, we investigate human visual perception and establish a body of ground truth data elicited from human visual studies. We aim to build on the formative work of Ren, Eakins and Briggs who produced an initial ground truth database. Human subjects were asked to draw and rank their perceptions of the parts of a series of figurative images. These rankings were then used to score the perceptions, identify the preferred human breakdowns and thus allow us to induce perceptual rules for h...

  6. Figure-ground segmentation can occur without attention.

    Science.gov (United States)

    Kimchi, Ruth; Peterson, Mary A

    2008-07-01

    The question of whether or not figure-ground segmentation can occur without attention is unresolved. Early theorists assumed it can, but the evidence is scant and open to alternative interpretations. Recent research indicating that attention can influence figure-ground segmentation raises the question anew. We examined this issue by asking participants to perform a demanding change-detection task on a small matrix presented on a task-irrelevant scene of alternating regions organized into figures and grounds by convexity. Independently of any change in the matrix, the figure-ground organization of the scene changed or remained the same. Changes in scene organization produced congruency effects on target-change judgments, even though, when probed with surprise questions, participants could report neither the figure-ground status of the region on which the matrix appeared nor any change in that status. When attending to the scene, participants reported figure-ground status and changes to it highly accurately. These results clearly demonstrate that figure-ground segmentation can occur without focal attention.

  7. LANDSAT-D ground segment operations plan, revision A

    Science.gov (United States)

    Evans, B.

    1982-01-01

    The basic concept for the utilization of LANDSAT ground processing resources is described. Only the steady state activities that support normal ground processing are addressed. This ground segment operations plan covers all processing of the multispectral scanner and the processing of thematic mapper through data acquisition and payload correction data generation for the LANDSAT 4 mission. The capabilities embedded in the hardware and software elements are presented from an operations viewpoint. The personnel assignments associated with each functional process and the mechanisms available for controlling the overall data flow are identified.

  8. Figure-ground segmentation based on class-independent shape priors

    Science.gov (United States)

    Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu

    2018-01-01

    We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.
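As a rough illustration of the kind of energy the graph-cuts step minimises, the sketch below evaluates a binary figure-ground labeling under a data term, a Potts smoothness term, and a shape-prior penalty. The costs, weights, and the chamfer-style shape term are all illustrative, not the paper's actual formulation, and only energy evaluation (not min-cut optimisation) is shown:

```python
import numpy as np

def segmentation_energy(labels, unary_fg, unary_bg, shape_prior, lam=1.0, mu=1.0):
    """Evaluate a graph-cuts style energy for a binary labeling.

    labels: HxW array of {0, 1}; 1 = figure, 0 = ground.
    unary_fg / unary_bg: per-pixel costs of the figure / ground labels.
    shape_prior: per-pixel penalty (e.g. a chamfer distance to a matched
    shape template) paid by pixels labeled figure.
    """
    # Data term: cost of each pixel under its assigned label.
    data = np.where(labels == 1, unary_fg, unary_bg).sum()
    # Smoothness term: Potts penalty between 4-connected neighbours.
    pairwise = (np.abs(np.diff(labels, axis=0)).sum()
                + np.abs(np.diff(labels, axis=1)).sum())
    # Shape term: figure pixels far from the matched template are penalised.
    shape = (shape_prior * (labels == 1)).sum()
    return data + lam * pairwise + mu * shape

# Toy 4x4 image whose left half is clearly figure.
unary_fg = np.array([[0.0] * 2 + [1.0] * 2] * 4)
unary_bg = 1.0 - unary_fg
shape_prior = np.zeros((4, 4))        # template matches everywhere
good = np.array([[1] * 2 + [0] * 2] * 4)   # left half labeled figure
bad = 1 - good                             # the reverse labeling
print(segmentation_energy(good, unary_fg, unary_bg, shape_prior))  # 4.0
print(segmentation_energy(bad, unary_fg, unary_bg, shape_prior))   # 20.0
```

A real graph-cuts solver would search for the labeling that minimises this energy exactly via min-cut; here the correct labeling simply scores lower than the inverted one.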

  9. The IXV Ground Segment design, implementation and operations

    Science.gov (United States)

    Martucci di Scarfizzi, Giovanni; Bellomo, Alessandro; Musso, Ivano; Bussi, Diego; Rabaioli, Massimo; Santoro, Gianfranco; Billig, Gerhard; Gallego Sanz, José María

    2016-07-01

The Intermediate eXperimental Vehicle (IXV) is an ESA re-entry demonstrator that performed a successful re-entry demonstration mission on 11 February 2015. The project objectives were the design, development, manufacturing, and on-ground and in-flight verification of an autonomous European lifting and aerodynamically controlled re-entry system. For the IXV mission a dedicated Ground Segment was provided. The main subsystems of the IXV Ground Segment were: the IXV Mission Control Center (MCC), from where monitoring of the vehicle was performed, as well as support during pre-launch and recovery phases; the IXV Ground Stations, used to cover the IXV mission by receiving spacecraft telemetry and forwarding it to the MCC; and the IXV Communication Network, deployed to support the operations of the IXV mission by interconnecting all remote sites with the MCC, supporting data, voice and video exchange. This paper describes the concept, architecture, development, implementation and operations of the ESA Intermediate eXperimental Vehicle (IXV) Ground Segment and outlines the main operations and lessons learned during the preparation and successful execution of the IXV Mission.

  10. Running the figure to the ground: figure-ground segmentation during visual search.

    Science.gov (United States)

    Ralph, Brandon C W; Seli, Paul; Cheng, Vivian O Y; Solman, Grayden J F; Smilek, Daniel

    2014-04-01

    We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science Karl Popper argued that scientific theory cannot be validated, only invalidated. Popper's view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification is misleading, at best. These terms should be abandoned by the ground-water community.

  12. Microstrip Resonator for High Field MRI with Capacitor-Segmented Strip and Ground Plane

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy; Boer, Vincent; Petersen, Esben Thade

    2017-01-01

… segmenting the strip and ground plane of the resonator with series capacitors. The design equations for capacitors providing a symmetric current distribution are derived. The performance of two types of segmented resonators is investigated experimentally. To the authors' knowledge, a microstrip resonator where both strip and ground plane are capacitor-segmented is shown here for the first time.

  13. 3D segmentation of scintigraphic images with validation on realistic GATE simulations

    International Nuclear Information System (INIS)

    Burg, Samuel

    2011-01-01

The objective of this thesis was to propose a new 3D segmentation method for scintigraphic imaging. The first part of the work was to simulate 3D volumes with known ground truth in order to validate one segmentation method over others. Monte Carlo simulations were performed using the GATE software (Geant4 Application for Emission Tomography). For this, we characterized and modeled the gamma camera 'γ Imager' (Biospace™) by comparing each measurement from a simulated acquisition to its real equivalent. The 'low level' segmentation tool that we have developed is based on modeling the levels of the image by probabilistic mixtures. Parameter estimation is done by an SEM algorithm (Stochastic Expectation Maximization). The 3D volume segmentation is achieved by an ICM algorithm (Iterated Conditional Modes). We compared segmentation based on Gaussian and Poisson mixtures to segmentation by thresholding on the simulated volumes. This showed the relevance of the segmentations obtained using probabilistic mixtures, especially those obtained with Poisson mixtures. The latter were used to segment real 18F-FDG PET images of the brain and to compute descriptive statistics of the different tissues. In order to obtain a 'high level' segmentation method and find anatomical structures (the necrotic or active part of a tumor, for example), we proposed a process based on the point-process formalism. A feasibility study yielded very encouraging results. (author)
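The mixture-based labeling idea can be caricatured in a few lines: once Poisson components have been fitted, each voxel count is assigned to the component with the highest posterior probability. The SEM parameter estimation and ICM spatial regularisation described above are omitted, and the means and weights below are invented for illustration:

```python
import math

def poisson_logpmf(k, lam):
    # log P(K = k) for a Poisson(lam) count
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def classify_voxel(k, lams, weights):
    """Assign a count k to the mixture component with the highest
    posterior probability (MAP rule for a Poisson mixture)."""
    posts = [math.log(w) + poisson_logpmf(k, lam)
             for w, lam in zip(weights, lams)]
    return max(range(len(posts)), key=posts.__getitem__)

# Two-component mixture: background (mean 3) vs. hot region (mean 20).
lams, weights = [3.0, 20.0], [0.7, 0.3]
labels = [classify_voxel(k, lams, weights) for k in [1, 4, 9, 15, 30]]
print(labels)  # low counts map to component 0, high counts to component 1
```

In the thesis this per-voxel rule would be interleaved with an ICM pass that also penalises label disagreement with neighbouring voxels.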

  14. The Cryosat Payload Data Ground Segment and Data Processing

    Science.gov (United States)

    Frommknecht, B.; Mizzi, L.; Parrinello, T.; Badessi, S.

    2014-12-01

The main CryoSat-2 mission objectives can be summarised as the determination of regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and the determination of the regional and total contributions to global sea level of the Antarctic and Greenland ice sheets. The observations made over the lifetime of the mission will therefore provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. The scope of this paper is to describe the CryoSat ground segment and its main functions in satisfying the CryoSat mission requirements. In particular, the paper discusses the current status of the L1b and L2 processing in terms of completeness and availability. An outlook is given on planned product and processor updates, and the associated reprocessing campaigns are discussed as well.

  15. GPM GROUND VALIDATION CAMPAIGN REPORTS IFLOODS V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Campaign Reports IFloodS dataset consists of various reports filed by the scientists during the GPM Ground Validation Iowa Flood Studies...

  16. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Directory of Open Access Journals (Sweden)

    Hans Supèr

Figure-ground segmentation is the separation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between neighbouring neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. By contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.

  17. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Science.gov (United States)

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-05-19

Figure-ground segmentation is the separation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between neighbouring neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. By contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.
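The surround-inhibition mechanism at the heart of this model can be caricatured in a toy 1-D sketch: each unit is suppressed in proportion to the summed activity of its neighbours, so units in a uniform background lose more of their drive than units inside, or at the border of, a figure region. This is a rate-based stand-in, not the paper's spiking model, and the radius and weight are invented:

```python
import numpy as np

def surround_inhibition(activity, radius=2, w=0.15):
    """Each unit's output is its feedforward drive minus inhibition
    from units in a local surround (a crude stand-in for the model's
    surround-inhibition stage; parameters are illustrative)."""
    n = len(activity)
    out = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        surround = activity[lo:hi].sum() - activity[i]   # exclude the unit itself
        out[i] = max(0.0, activity[i] - w * surround)    # rectified response
    return out

# A "figure" of high activity (1.0) on a background of low activity (0.2).
scene = np.array([0.2] * 4 + [1.0] * 4 + [0.2] * 4)
resp = surround_inhibition(scene)
print(resp.round(3))
```

After inhibition the figure region retains far more activity than the background, and the figure's edge units respond more strongly than its interior, echoing the one-sided border-ownership coding described above.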

  18. Management of the science ground segment for the Euclid mission

    Science.gov (United States)

    Zacchei, Andrea; Hoar, John; Pasian, Fabio; Buenadicha, Guillermo; Dabin, Christophe; Gregorio, Anna; Mansutti, Oriana; Sauvage, Marc; Vuerli, Claudio

    2016-07-01

Euclid is an ESA mission aimed at understanding the nature of dark energy and dark matter by using two probes simultaneously (weak lensing and baryon acoustic oscillations). The mission will observe galaxies and clusters of galaxies out to z ≈ 2, in a wide extra-galactic survey covering 15,000 deg², plus a deep survey covering an area of 40 deg². The payload is composed of two instruments, an imager in the visible domain (VIS) and an imager-spectrometer (NISP) covering the near-infrared. The launch is planned in Q4 of 2020. The elements of the Euclid Science Ground Segment (SGS) are the Science Operations Centre (SOC), operated by ESA, and nine Science Data Centres (SDCs) in charge of data processing, provided by the Euclid Consortium (EC), formed by over 110 institutes spread across 15 countries. SOC and the EC started a tight collaboration several years ago in order to design and develop a single, cost-efficient and truly integrated SGS. The distributed nature, the size of the data set, and the needed accuracy of the results are the main challenges expected in the design and implementation of the SGS. In particular, the huge volume of data (not only Euclid data but also ground-based data) to be processed in the SDCs will require distributed storage to avoid data migration across SDCs. This paper describes the management challenges that the Euclid SGS is facing while dealing with such complexity. The main aspect is the organisation of a geographically distributed software development team: algorithms and code are developed in a large number of institutes, while data is actually processed at fewer centres (the national SDCs) where the operational computational infrastructures are maintained. The software produced for data handling, processing and analysis is built within a common development environment defined by the SGS System Team, common to SOC and the EC SGS, which has already been active for several years. The code is built incrementally through

  19. GPM GROUND VALIDATION CITATION VIDEOS IPHEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Citation Videos IPHEx data were collected during the Integrated Precipitation and Hydrology Experiment (IPHEx) in the Southern...

  20. GPM GROUND VALIDATION METEOROLOGICAL TOWER ENVIRONMENT CANADA GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Meteorological Tower Environment Canada GCPEx dataset provides temperature, relative humidity, 10 m winds, pressure and solar radiation...

  1. GPM GROUND VALIDATION PRECIPITATION VIDEO IMAGER (PVI) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Precipitation Video Imager (PVI) GCPEx dataset collected precipitation particle images and drop size distribution data from November 2011...

  2. GPM GROUND VALIDATION ENVIRONMENT CANADA (EC) RADIOSONDE GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Environment Canada (EC) Radiosonde GCPEx dataset provides measurements of pressure, temperature, humidity, and winds collected by Vaisala...

  3. GPM Ground Validation Southern Appalachian Rain Gauge IPHEx V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Southern Appalachian Rain Gauge IPHEx dataset was collected during the Integrated Precipitation and Hydrology Experiment (IPHEx) field...

  4. GPM Ground Validation Autonomous Parsivel Unit (APU) OLYMPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Autonomous Parsivel Unit (APU) OLYMPEX dataset was collected during the OLYMPEX field campaign held at Washington's Olympic Peninsula...

  5. GPM GROUND VALIDATION DUAL POLARIZATION RADIOMETER GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Dual Polarization Radiometer GCPEx dataset provides brightness temperature measurements at frequencies 90 GHz (not polarized) and 150 GHz...

  6. Consistent interactive segmentation of pulmonary ground glass nodules identified in CT studies

    Science.gov (United States)

    Zhang, Li; Fang, Ming; Naidich, David P.; Novak, Carol L.

    2004-05-01

    Ground glass nodules (GGNs) have proved especially problematic in lung cancer diagnosis, as despite frequently being malignant they characteristically have extremely slow rates of growth. This problem is further magnified by the small size of many of these lesions now being routinely detected following the introduction of multislice CT scanners capable of acquiring contiguous high resolution 1 to 1.25 mm sections throughout the thorax in a single breathhold period. Although segmentation of solid nodules can be used clinically to determine volume doubling times quantitatively, reliable methods for segmentation of pure ground glass nodules have yet to be introduced. Our purpose is to evaluate a newly developed computer-based segmentation method for rapid and reproducible measurements of pure ground glass nodules. 23 pure or mixed ground glass nodules were identified in a total of 8 patients by a radiologist and subsequently segmented by our computer-based method using Markov random field and shape analysis. The computer-based segmentation was initialized by a click point. Methodological consistency was assessed using the overlap ratio between 3 segmentations initialized by 3 different click points for each nodule. The 95% confidence interval on the mean of the overlap ratios proved to be [0.984, 0.998]. The computer-based method failed on two nodules that were difficult to segment even manually either due to especially low contrast or markedly irregular margins. While achieving consistent manual segmentation of ground glass nodules has proven problematic most often due to indistinct boundaries and interobserver variability, our proposed method introduces a powerful new tool for obtaining reproducible quantitative measurements of these lesions. It is our intention to further document the value of this approach with a still larger set of ground glass nodules.
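The consistency measure reported above can be illustrated with a Jaccard-style overlap ratio between two binary masks; whether the paper uses exactly this intersection-over-union definition is an assumption:

```python
import numpy as np

def overlap_ratio(a, b):
    """Jaccard overlap |A ∩ B| / |A ∪ B| between two binary masks.
    (The exact definition used in the paper is assumed here.)"""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Two segmentations of the same nodule from different click points.
seg1 = np.zeros((8, 8), dtype=int); seg1[2:6, 2:6] = 1   # 16 pixels
seg2 = np.zeros((8, 8), dtype=int); seg2[2:6, 2:7] = 1   # 20 pixels
print(overlap_ratio(seg1, seg2))  # 16/20 = 0.8
```

For each nodule, the study computes such ratios between the three segmentations started from different click points; values near 1 indicate the click-point initialization barely affects the result.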

  7. Fast and Accurate Ground Truth Generation for Skew-Tolerance Evaluation of Page Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Okun Oleg

    2006-01-01

Many image segmentation algorithms are known, but often there is an inherent obstacle in the unbiased evaluation of segmentation quality: the absence or lack of a common objective representation for segmentation results. Such a representation, known as the ground truth, is a description of what one should obtain as the result of ideal segmentation, independently of the segmentation algorithm used. The creation of ground truth is a laborious process and therefore any degree of automation is always welcome. Document image analysis is one of the areas where ground truths are employed. In this paper, we describe an automated tool called GROTTO intended to generate ground truths for skewed document images, which can be used for the performance evaluation of page segmentation algorithms. Some of these algorithms are claimed to be insensitive to skew (tilt of text lines). However, this fact is usually supported only by a visual comparison of what one obtains and what one should obtain, since ground truths are mostly available for upright images, that is, those without skew. As a result, the evaluation is both subjective (that is, prone to errors) and tedious. Our tool allows users to quickly and easily produce many sufficiently accurate ground truths that can be employed in practice, and therefore it facilitates automatic performance evaluation. The main idea is to utilize the ground truths available for upright images and the concept of the representative square [9] in order to produce the ground truths for skewed images. The usefulness of our tool is demonstrated through a number of experiments with real document images of complex layout.
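The core geometric step in reusing an upright ground truth for a skewed image amounts to rotating the zone coordinates by the skew angle. A simplified sketch of that step (the representative-square machinery of the paper is omitted, and the zone format is invented):

```python
import math

def skew_ground_truth(zones, angle_deg, cx, cy):
    """Rotate the corner points of upright ground-truth zones by the
    document's skew angle about the pivot (cx, cy)."""
    t = math.radians(angle_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    rotated = []
    for corners in zones:
        rotated.append([
            # Standard 2-D rotation about (cx, cy).
            (cx + (x - cx) * cos_t - (y - cy) * sin_t,
             cy + (x - cx) * sin_t + (y - cy) * cos_t)
            for x, y in corners
        ])
    return rotated

# One upright text-zone rectangle, skewed by 90 degrees about the origin.
zone = [[(0, 0), (100, 0), (100, 20), (0, 20)]]
print(skew_ground_truth(zone, 90.0, 0, 0))
```

A skew-tolerant page segmentation algorithm run on the rotated image can then be scored against these rotated zones instead of against a hand-drawn ground truth.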

  8. GPM GROUND VALIDATION AUTONOMOUS PARSIVEL UNIT (APU) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Autonomous Parsivel Unit (APU) GCPEx dataset was collected by the Autonomous Parsivel Unit (APU), which is an optical disdrometer that...

  9. GPM GROUND VALIDATION AUTONOMOUS PARSIVEL UNIT (APU) IFLOODS V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Autonomous Parsivel Unit (APU) IFLOODS dataset collected data from several sites in eastern Iowa during the spring of 2013. The APU dataset...

  10. GPM GROUND VALIDATION KCBW NEXRAD GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation KCBW NEXRAD GCPEx dataset was collected during January 9, 2012 to March 12, 2012 for the GPM Cold-season Precipitation Experiment (GCPEx)....

  11. GPM GROUND VALIDATION SATELLITE SIMULATED ORBITS LPVEX V1

    Data.gov (United States)

National Aeronautics and Space Administration — The GPM Ground Validation Satellite Simulated Orbits LPVEx dataset is available in the Orbital database, which takes into account the atmospheric profiles, the...

  12. GPM GROUND VALIDATION AUTONOMOUS PARSIVEL UNIT (APU) NSSTC V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Autonomous Parsivel Unit (APU) NSSTC dataset was collected by the Autonomous Parsivel Unit (APU), which is an optical disdrometer based on...

  13. GPM Ground Validation Navigation Data ER-2 OLYMPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation NASA ER-2 Navigation Data OLYMPEX dataset supplies navigation data collected by the NASA ER-2 aircraft for flights that occurred during...

  14. GPM GROUND VALIDATION GCPEX SNOW MICROPHYSICS CASE STUDY V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation GCPEX Snow Microphysics Case Study characterizes the 3-D microphysical evolution and distribution of snow in context of the thermodynamic...

  15. Dynamic segmentation to estimate vine vigor from ground images

    OpenAIRE

    Sáiz Rubio, Verónica; Rovira Más, Francisco

    2012-01-01

    [EN] The geographic information required to implement precision viticulture applications in real fields has led to the extensive use of remote sensing and airborne imagery. While advantageous because they cover large areas and provide diverse radiometric data, they are unreachable to most of medium-size Spanish growers who cannot afford such image sourcing. This research develops a new methodology to generate globally-referenced vigor maps in vineyards from ground images taken wit...

  16. Dynamic segmentation to estimate vine vigor from ground images

    OpenAIRE

    Sáiz-Rubio, V.; Rovira-Más, F.

    2012-01-01

    The geographic information required to implement precision viticulture applications in real fields has led to the extensive use of remote sensing and airborne imagery. While advantageous because they cover large areas and provide diverse radiometric data, they are unreachable to most of medium-size Spanish growers who cannot afford such image sourcing. This research develops a new methodology to generate globally-referenced vigor maps in vineyards from ground images taken with a camera mounte...

  17. Design, development, and validation of a segment support actuator for the prototype segmented mirror telescope

    Science.gov (United States)

    Deshmukh, Prasanna Gajanan; Mandal, Amaresh; Parihar, Padmakar S.; Nayak, Dayananda; Mishra, Deepta Sundar

    2018-01-01

Segmented mirror telescopes (SMTs) are built from several small hexagonal mirrors, positioned and aligned by three actuators and six edge sensors per segment to maintain the shape of the primary mirror. The actuators are responsible for maintaining and tracking the mirror segments at the desired position in the presence of external disturbances introduced by wind, vibration, gravity, and temperature. The present paper describes our effort to develop a soft actuator and its controller for the prototype SMT at the Indian Institute of Astrophysics, Bangalore. The actuator designed, developed, and validated is a soft actuator based on a voice coil motor and flexural elements. It is designed for a travel range of ±1.5 mm and a force range of 25 N, along with an offloading mechanism to reduce power consumption. A precision controller using a programmable system on chip (PSoC 5LP) and a customized drive board has also been developed for this actuator. The closed-loop proportional-integral-derivative (PID) controller implemented in the PSoC gets position feedback from a high-resolution linear optical encoder. The optimum PID gains are derived using the relay tuning method. In the laboratory, we have conducted several experiments to test the performance of the prototype soft actuator as well as the controller. We achieved RMS position errors of 5.73 nm in the steady state and 10.15 nm while tracking at a constant speed of 350 nm/s. We also present the outcome of various performance tests carried out when the off-loader is in action and when the actuator is subjected to dynamic wind loading.
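As a toy illustration of the closed-loop PID position control described above, the sketch below drives a crude integrator model of the actuator toward a step setpoint. The gains, time step, and plant model are all invented for illustration, not the relay-tuned values from the paper:

```python
class PID:
    """Discrete PID controller of the form used for closed-loop
    position control (gains are illustrative, not the tuned values)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt                  # accumulate I term
        deriv = (err - self.prev_err) / self.dt         # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy actuator (modeled as a pure integrator) toward a 1000 nm step.
pid = PID(kp=5.0, ki=6.0, kd=0.002, dt=0.001)
pos = 0.0
for _ in range(5000):                 # 5 s of simulated 1 kHz control
    u = pid.update(1000.0, pos)       # encoder feedback -> control effort
    pos += u * pid.dt                 # integrator plant: velocity = u
print(abs(1000.0 - pos) < 5.0)        # position has settled near the setpoint
```

In the real system the loop runs on the PSoC against encoder counts, and relay tuning replaces the hand-picked gains used here.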

  18. Prognostic validation of a 17-segment score derived from a 20-segment score for myocardial perfusion SPECT interpretation.

    Science.gov (United States)

    Berman, Daniel S; Abidov, Aiden; Kang, Xingping; Hayes, Sean W; Friedman, John D; Sciammarella, Maria G; Cohen, Ishac; Gerlach, James; Waechter, Parker B; Germano, Guido; Hachamovitch, Rory

    2004-01-01

    Recently, a 17-segment model of the left ventricle has been recommended as an optimally weighted approach for interpreting myocardial perfusion single photon emission computed tomography (SPECT). Methods to convert databases from previous 20- to new 17-segment data and criteria for abnormality for the 17-segment scores are needed. Initially, for derivation of the conversion algorithm, 65 patients were studied (algorithm population) (pilot group, n = 28; validation group, n = 37). Three conversion algorithms were derived: algorithm 1, which used mid, distal, and apical scores; algorithm 2, which used distal and apical scores alone; and algorithm 3, which used maximal scores of the distal septal, lateral, and apical segments in the 20-segment model for 3 corresponding segments of the 17-segment model. The prognosis population comprised 16,020 consecutive patients (mean age, 65 +/- 12 years; 41% women) who had exercise or vasodilator stress technetium 99m sestamibi myocardial perfusion SPECT and were followed up for 2.1 +/- 0.8 years. In this population, 17-segment scores were derived from 20-segment scores by use of algorithm 2, which demonstrated the best agreement with expert 17-segment reading in the algorithm population. The prognostic value of the 20- and 17-segment scores was compared by converting the respective summed scores into percent myocardium abnormal. Conversion algorithm 2 was found to be highly concordant with expert visual analysis by the 17-segment model (r = 0.982; kappa = 0.866) in the algorithm population. In the prognosis population, 456 cardiac deaths occurred during follow-up. When the conversion algorithm was applied, extent and severity of perfusion defects were nearly identical by 20- and derived 17-segment scores. The receiver operating characteristic curve areas by 20- and 17-segment perfusion scores were identical for predicting cardiac death (both 0.77 +/- 0.02, P = not significant). The optimal prognostic cutoff value for either 20
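
    The "percent myocardium abnormal" comparison above can be illustrated with the standard conversion of a summed score to a percentage of the maximum possible score. This is a hedged sketch: the 0-4 per-segment scoring is conventional, but the paper's algorithm-2 segment mapping is not reproduced here.

```python
def percent_myocardium_abnormal(segment_scores):
    """Express a summed perfusion score (each segment scored 0-4) as a
    percentage of the maximum possible score for that segment model."""
    max_score = 4 * len(segment_scores)
    return 100.0 * sum(segment_scores) / max_score

# A single segment with a severe defect (score 4), all others normal:
seventeen_seg = [4] + [0] * 16   # 17-segment model
twenty_seg = [4] + [0] * 19      # 20-segment model
```

    Normalizing by the model's maximum score is what makes 17- and 20-segment summed scores directly comparable.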

  19. Gaia Launch Imminent: A Review of Practices (Good and Bad) in Building the Gaia Ground Segment

    Science.gov (United States)

    O'Mullane, W.

    2014-05-01

    As we approach launch, the Gaia ground segment is ready to process a steady stream of complex data coming from Gaia at L2. This talk will focus on the software engineering aspects of the ground segment. In a short paper it is difficult to cover everything, but an attempt will be made to highlight some good things, like the Dictionary Tool, and some things to be careful with, like computer-aided software engineering tools. The usefulness of some standards, like ECSS, will be touched upon. Testing is certainly part of this story, as are Challenges and Rehearsals, so they will not go without mention.

  20. Seismic fragility formulations for segmented buried pipeline systems including the impact of differential ground subsidence

    Energy Technology Data Exchange (ETDEWEB)

    Pineda Porras, Omar Andrey [Los Alamos National Laboratory]; Ordaz, Mario [UNAM, Mexico City]

    2009-01-01

    Though Differential Ground Subsidence (DGS) impacts the seismic response of segmented buried pipelines and increases their vulnerability, fragility formulations to estimate repair rates under such conditions are not available in the literature. Physical models to estimate pipeline seismic damage considering other cases of permanent ground subsidence (e.g. faulting, tectonic uplift, liquefaction, and landslides) have been extensively reported; this is not the case for DGS. The refined study of two important phenomena in Mexico City - the 1985 Michoacan earthquake scenario and the sinking of the city due to ground subsidence - has contributed to the analysis of the interrelation of pipeline damage, ground motion intensity, and DGS. From the analysis of the 48-inch pipeline network of Mexico City's Water System, fragility formulations for segmented buried pipeline systems are proposed for two DGS levels. The novel parameter PGV²/PGA, where PGV is peak ground velocity and PGA is peak ground acceleration, is used as the seismic parameter in these formulations, since it has shown better correlation with pipeline damage than PGV alone according to previous studies. Comparison of the proposed fragilities shows that a change in the DGS level (from Low-Medium to High) could increase pipeline repair rates (number of repairs per kilometer) by factors ranging from 1.3 to 2.0, with higher seismic intensities yielding lower factors.
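
    The role of the PGV²/PGA parameter can be sketched as follows. The power-law form and the coefficients a and b are hypothetical placeholders, not the paper's fitted fragility; only the 1.3-2.0 DGS factor range comes from the abstract.

```python
def intensity_parameter(pgv_cm_s, pga_cm_s2):
    """PGV^2/PGA; with PGV in cm/s and PGA in cm/s^2 the result is in cm."""
    return pgv_cm_s ** 2 / pga_cm_s2

def repair_rate(pgv_cm_s, pga_cm_s2, a=0.01, b=1.0, dgs_factor=1.0):
    """Hypothetical fragility: repairs per km as a power law of PGV^2/PGA,
    scaled by a DGS-level factor (the abstract reports factors of 1.3-2.0)."""
    return dgs_factor * a * intensity_parameter(pgv_cm_s, pga_cm_s2) ** b
```

    Because PGV²/PGA has velocity²/acceleration units, it behaves like a displacement-related intensity measure, which is one motivation for its better correlation with pipeline damage than PGV alone.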

  1. Local figure-ground cues are valid for natural images.

    Science.gov (United States)

    Fowlkes, Charless C; Martin, David R; Malik, Jitendra

    2007-06-08

    Figure-ground organization refers to the visual perception that a contour separating two regions belongs to one of the regions. Recent studies have found neural correlates of figure-ground assignment in V2 as early as 10-25 ms after response onset, providing strong support for the role of local bottom-up processing. How much information about figure-ground assignment is available from locally computed cues? Using a large collection of natural images, in which neighboring regions were assigned a figure-ground relation by human observers, we quantified the extent to which figural regions locally tend to be smaller, more convex, and to lie below ground regions. Our results suggest that these Gestalt cues are ecologically valid, and we quantify their relative power. We have also developed a simple bottom-up computational model of figure-ground assignment that takes image contours as input. Using parameters fit to natural image statistics, the model is capable of matching human-level performance when scene context is limited.

  2. New approach for validating the segmentation of 3D data applied to individual fibre extraction

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2017-01-01

    We present two approaches for validating the segmentation of 3D data. The first approach consists of comparing the amount of estimated material to a value provided by the manufacturer. The second approach consists of comparing the segmented results to those obtained from imaging modalities...

  3. GPM Ground Validation: Pre to Post-Launch Era

    Science.gov (United States)

    Petersen, Walt; Skofronick-Jackson, Gail; Huffman, George

    2015-04-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on the use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, accumulation, types, and data quality are being routinely generated to facilitate statistical GV of instantaneous (e.g., Level II orbit) and merged (e.g., IMERG) GPM products. Toward assessing precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of both ground- and satellite-based estimation uncertainties. To support physical validation efforts, eight (one) field campaigns have been conducted in the pre (post) launch era. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, process, and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation

  4. The GPM Ground Validation Program: Pre to Post-Launch

    Science.gov (United States)

    Petersen, W. A.

    2014-12-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on the use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, types, and data quality are being routinely generated to facilitate statistical GV of instantaneous and merged GPM products. To assess precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of ground-satellite estimation uncertainties. To support physical validation efforts, eight (one) field campaigns have been conducted in the pre (post) launch era. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, process, and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi

  5. Validation and Comparison of One-Dimensional Ground Motion Methodologies

    International Nuclear Information System (INIS)

    B. Darragh; W. Silva; N. Gregor

    2006-01-01

    Both point- and finite-source stochastic one-dimensional ground motion models, coupled to vertically propagating equivalent-linear shear-wave site response models, are validated using an extensive set of strong motion data as part of the Yucca Mountain Project. The validation and comparison exercises are presented entirely in terms of 5% damped pseudo-absolute response spectra. The study consists of quantitative analyses involving the modeling of nineteen well-recorded earthquakes, M 5.6 to 7.4, at over 600 sites. The sites range in distance from about 1 to about 200 km in the western US (460 km for the central-eastern US). In general, this validation demonstrates that the stochastic point- and finite-source models produce accurate predictions of strong ground motions over the range of 0 to 100 km and for magnitudes M 5.0 to 7.4. The stochastic finite-source model appears to be broadband, producing near-zero bias from about 0.3 Hz (the low-frequency limit of the analyses) to the high-frequency limit of the data (100 and 25 Hz for response and Fourier amplitude spectra, respectively)

  6. GPM GROUND VALIDATION TWO-DIMENSIONAL VIDEO DISDROMETER (2DVD) IPHEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Two-Dimensional Video Disdrometer (2DVD) IPHEx dataset was collected during the GPM Ground Validation Integrated Precipitation and...

  7. GPM GROUND VALIDATION TWO-DIMENSIONAL VIDEO DISDROMETER (2DVD) IFLOODS V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Two-Dimensional Video Disdrometer (2DVD) IFloodS dataset was collected during the GPM Ground Validation Iowa Flood Studies (IFloodS) field...

  8. Stereo visualization in the ground segment tasks of the science space missions

    Science.gov (United States)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

    The ground segment is one of the key components of any science space mission; its functionality largely defines the scientific effectiveness of the experiment as a whole. Its distinguishing feature, in contrast to other information systems of scientific space projects, is the interaction between the researcher and the project information system in order to interpret data obtained during experiments. The ability to visualize the data being processed is therefore an essential prerequisite for ground segment software, and the use of modern technological solutions and approaches in this area will increase the science return in general and provide a framework for the creation of new experiments. Visualization of processed data mostly relies on 2D and 3D graphics, a consequence of the capabilities of traditional visualization tools. Stereo visualization methods are also actively used for some tasks, but their usage is typically limited to areas such as virtual and augmented reality, remote sensing data processing, and the like. The low prevalence of stereo visualization methods in science ground segment tasks is primarily explained by the extremely high cost of the necessary hardware. Recently, however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have appeared. It therefore seems promising to use stereo visualization as an instrument for investigating a wide range of problems, mainly stereo visualization of complex physical processes as well as mathematical abstractions and models. This article describes an attempt to use this approach. It covers the details and problems of using stereo visualization (the page-flip method based on the NVIDIA 3D Vision Kit and a GeForce graphics processor) to display datasets of magnetospheric satellite onboard measurements, and its use in developing software for manual stereo matching.

  9. Lumbar segmental instability: a criterion-related validity study of manual therapy assessment

    Directory of Open Access Journals (Sweden)

    Chapple Cathy

    2005-11-01

    Background: Musculoskeletal physiotherapists routinely assess lumbar segmental motion during the clinical examination of a patient with low back pain. The validity of manual assessment of segmental motion has not, however, been adequately investigated. Methods: In this prospective, multi-centre, pragmatic, diagnostic validity study, 138 consecutive patients with recurrent or chronic low back pain (R/CLBP) were recruited. Physiotherapists with post-graduate training in manual therapy performed passive accessory intervertebral motion tests (PAIVMs) and passive physiological intervertebral motion tests (PPIVMs). Consenting patients were referred for flexion-extension radiographs. Sagittal angular rotation and sagittal translation of each lumbar spinal motion segment were measured from these radiographs and compared to a reference range derived from a study of 30 asymptomatic volunteers. Motion beyond two standard deviations from the reference mean was considered diagnostic of rotational lumbar segmental instability (LSI) and translational LSI. Accuracy and validity of the clinical assessments were expressed using sensitivity, specificity, and likelihood ratio statistics with 95% confidence intervals (CI). Results: Only translational LSI was found to be significantly associated with R/CLBP. Conclusion: This study provides the first evidence reporting the concurrent validity of manual tests for the detection of abnormal sagittal planar motion. PAIVMs and PPIVMs are highly specific, but not sensitive, for the detection of translational LSI. Likelihood ratios resulting from positive test results were only moderate. This research indicates that manual clinical examination procedures have moderate validity for detecting segmental motion abnormality.
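
    The statistics reported above can be computed from a 2x2 table of manual test results against the radiographic reference standard; the counts below are hypothetical, not the study's data.

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive/negative likelihood ratios
    from a 2x2 table of test results vs. a reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1.0 - spec)   # LR+: how much a positive test raises the odds
    lr_neg = (1.0 - sens) / spec   # LR-: how much a negative test lowers the odds
    return sens, spec, lr_pos, lr_neg

# Hypothetical counts for illustration only:
sens, spec, lr_pos, lr_neg = diagnostic_stats(tp=8, fp=10, fn=2, tn=90)
```

    A "highly specific but not sensitive" test, as described above, yields a high LR+ (a positive result is informative) while missing many true positives.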

  10. SP-100 from ground demonstration to flight validation

    International Nuclear Information System (INIS)

    Buden, D.

    1989-01-01

    The SP-100 program is in the midst of developing and demonstrating the technology of a liquid-metal-cooled fast reactor using thermoelectric thermal-to-electric conversion devices for space power applications in the range of tens to hundreds of kilowatts. The current ground engineering system (GES) design and development phase will demonstrate the readiness of the technology building blocks and the system to proceed to flight system validation. This phase includes the demonstration of a 2.4-MW(thermal) reactor in the nuclear assembly test (NAT) and of the aerospace subsystem in the integrated assembly test (IAT). The next phase in SP-100 development, now being planned, is to be a flight demonstration of the readiness of the technology to be incorporated into future military and civilian missions. This planning will answer questions concerning the logical progression from the GES to the flight validation experiment. Important issues in planning the orderly transition include the need to plan for a second reactor ground test, the method to be used to test the SP-100 for flight acceptance, the need for the IAT prior to the flight-test configuration design, the efficient use of facilities for the GES and the flight experiment, and whether the NAT should be modified based on flight experiment planning.

  11. Multi-segment foot kinematics and ground reaction forces during gait of individuals with plantar fasciitis.

    Science.gov (United States)

    Chang, Ryan; Rodrigues, Pedro A; Van Emmerik, Richard E A; Hamill, Joseph

    2014-08-22

    Clinically, plantar fasciitis (PF) is believed to result from and/or be prolonged by overpronation and excessive loading, but there is little biomechanical data to support this assertion. The purpose of this study was to determine the differences between healthy individuals and those with PF in (1) rearfoot motion, (2) medial forefoot motion, (3) first metatarsal phalangeal joint (FMPJ) motion, and (4) ground reaction forces (GRF). We recruited healthy (n=22) and chronic PF individuals (n=22, symptomatic over three months) of similar age, height, weight, and foot shape (p>0.05). Retro-reflective skin markers were fixed according to a multi-segment foot and shank model. Ground reaction forces and three-dimensional kinematics of the shank, rearfoot, medial forefoot, and hallux segment were captured as individuals walked at 1.35 m/s. Despite similarities in foot anthropometrics, when compared to healthy individuals, individuals with PF exhibited significantly different foot kinematics and kinetics. Consistent with the theoretical injury mechanisms of PF, we found these individuals to have greater total rearfoot eversion and peak FMPJ dorsiflexion, which may put undue loads on the plantar fascia. Meanwhile, increased medial forefoot plantar flexion at initial contact and decreased propulsive GRF are suggestive of compensatory responses, perhaps to manage pain.

  12. Design and validation of Segment - freely available software for cardiovascular image analysis

    International Nuclear Information System (INIS)

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-01

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page (http://segment.heiberg.se). 
Segment
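
    The idea described above of a test script that exercises the software and validates its output can be sketched as follows; the analysis functions, cases, and tolerance are illustrative placeholders, not Segment's actual test suite.

```python
def run_validation(cases, tol=1e-6):
    """cases maps a name to (function, input, expected scalar output).
    Returns the names of cases whose output deviates from the expected
    value by more than tol, i.e. regressions in the software's results."""
    failures = []
    for name, (func, arg, expected) in cases.items():
        if abs(func(arg) - expected) > tol:
            failures.append(name)
    return failures

# Hypothetical reference cases (volumes in cc): end-diastolic and
# end-systolic left-ventricular volumes with known derived quantities.
cases = {
    "ejection_fraction": (lambda v: 100.0 * (v[0] - v[1]) / v[0], (120.0, 48.0), 60.0),
    "stroke_volume": (lambda v: v[0] - v[1], (120.0, 48.0), 72.0),
}
```

    Running such a script after every change gives the "continued accuracy and validity" guarantee the abstract mentions: any code change that shifts a computed output beyond tolerance is flagged by name.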

  13. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

    figure and ground, the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis... Subject terms: figure-ground, neural network, object

  14. Edge-assignment and figure-ground segmentation in short-term visual matching.

    Science.gov (United States)

    Driver, J; Baylis, G C

    1996-12-01

    Eight experiments examined the role of edge-assignment in a contour matching task. Subjects judged whether the jagged vertical edge of a probe shape matched the jagged edge that divided two adjoining shapes in an immediately preceding figure-ground display. Segmentation factors biased assignment of this dividing edge toward a figural shape on just one of its sides. Subjects were faster and more accurate at matching when the probe edge had a corresponding assignment. The rapid emergence of this effect provides an on-line analog of the long-term memory advantage for figures over grounds which Rubin (1915/1958) reported. The present on-line advantage was found when figures were defined by relative contrast and size, or by symmetry, and could not be explained solely by the automatic drawing of attention toward the location of the figural region. However, deliberate attention to one region of an otherwise ambiguous figure-ground display did produce the advantage. We propose that one-sided assignment of dividing edges may be obligatory in vision.

  15. Validation of a model of left ventricular segmentation for interpretation of SPET myocardial perfusion images

    International Nuclear Information System (INIS)

    Aepfelbacher, F.C.; Johnson, R.B.; Schwartz, J.G.; Danias, P.G.; Chen, L.; Parker, R.A.; Parker, A.J.

    2001-01-01

    Several models of left ventricular segmentation have been developed that assume a standard coronary artery distribution, and are currently used for interpretation of single-photon emission tomography (SPET) myocardial perfusion imaging. This approach has the potential for incorrect assignment of myocardial segments to vascular territories, possibly over- or underestimating the number of vessels with significant coronary artery disease (CAD). We therefore sought to validate a 17-segment model of myocardial perfusion by comparing the predefined coronary territory assignment with the actual angiographically derived coronary distribution. We examined 135 patients who underwent both coronary angiography and stress SPET imaging within 30 days. Individualized coronary distribution was determined by review of the coronary angiograms and used to identify the coronary artery supplying each of the 17 myocardial segments of the model. The actual coronary distribution was used to assess the accuracy of the assumed coronary distribution of the model. The sensitivities and specificities of stress SPET for detection of CAD in individual coronary arteries and the classification regarding perceived number of diseased coronary arteries were also compared between the two coronary distributions (actual and assumed). The assumed coronary distribution corresponded to the actual coronary anatomy in all but one segment (3). The majority of patients (80%) had 14 or more concordant segments. Sensitivities and specificities of stress SPET for detection of CAD in the coronary territories were similar, with the exception of the RCA territory, for which specificity for detection of CAD was better for the angiographically derived coronary artery distribution than for the model. There was 95% agreement between assumed and angiographically derived coronary distributions in classification to single- versus multi-vessel CAD. Reassignment of a single segment (segment 3) from the LCX to the LAD

  16. Validation of a model of left ventricular segmentation for interpretation of SPET myocardial perfusion images

    Energy Technology Data Exchange (ETDEWEB)

    Aepfelbacher, F.C.; Johnson, R.B.; Schwartz, J.G.; Danias, P.G. [Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA (United States); Chen, L.; Parker, R.A. [Biometrics Center, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA (United States); Parker, A.J. [Nuclear Medicine Division, Department of Radiology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA (United States)

    2001-11-01

    Several models of left ventricular segmentation have been developed that assume a standard coronary artery distribution, and are currently used for interpretation of single-photon emission tomography (SPET) myocardial perfusion imaging. This approach has the potential for incorrect assignment of myocardial segments to vascular territories, possibly over- or underestimating the number of vessels with significant coronary artery disease (CAD). We therefore sought to validate a 17-segment model of myocardial perfusion by comparing the predefined coronary territory assignment with the actual angiographically derived coronary distribution. We examined 135 patients who underwent both coronary angiography and stress SPET imaging within 30 days. Individualized coronary distribution was determined by review of the coronary angiograms and used to identify the coronary artery supplying each of the 17 myocardial segments of the model. The actual coronary distribution was used to assess the accuracy of the assumed coronary distribution of the model. The sensitivities and specificities of stress SPET for detection of CAD in individual coronary arteries and the classification regarding perceived number of diseased coronary arteries were also compared between the two coronary distributions (actual and assumed). The assumed coronary distribution corresponded to the actual coronary anatomy in all but one segment (3). The majority of patients (80%) had 14 or more concordant segments. Sensitivities and specificities of stress SPET for detection of CAD in the coronary territories were similar, with the exception of the RCA territory, for which specificity for detection of CAD was better for the angiographically derived coronary artery distribution than for the model. There was 95% agreement between assumed and angiographically derived coronary distributions in classification to single- versus multi-vessel CAD. Reassignment of a single segment (segment 3) from the LCX to the LAD

  17. Fast CSF MRI for brain segmentation; Cross-validation by comparison with 3D T1-based brain segmentation methods.

    Science.gov (United States)

    van der Kleij, Lisa A; de Bresser, Jeroen; Hendrikse, Jeroen; Siero, Jeroen C W; Petersen, Esben T; De Vis, Jill B

    2018-01-01

    In previous work we developed a fast sequence that focusses on cerebrospinal fluid (CSF) based on the long T2 of CSF. By processing the data obtained with this CSF MRI sequence, brain parenchymal volume (BPV) and intracranial volume (ICV) can be obtained automatically. The aim of this study was to assess the precision of the BPV and ICV measurements of the CSF MRI sequence and to validate the CSF MRI sequence by comparison with 3D T1-based brain segmentation methods. Ten healthy volunteers (2 females; median age 28 years) were scanned (3T MRI) twice with repositioning in between. The scan protocol consisted of a low-resolution (LR) CSF sequence (0:57 min), a high-resolution (HR) CSF sequence (3:21 min) and a 3D T1-weighted sequence (6:47 min). Data of the HR 3D T1-weighted images were downsampled to obtain LR T1-weighted images (reconstructed imaging time: 1:59 min). Data of the CSF MRI sequences were automatically segmented using in-house software. The 3D T1-weighted images were segmented using FSL (5.0), SPM12 and FreeSurfer (5.3.0). The mean absolute differences for BPV and ICV between the first and second scan for CSF LR (BPV/ICV: 12±9/7±4 cc) and CSF HR (5±5/4±2 cc) were comparable to FSL HR (9±11/19±23 cc), FSL LR (7±4/6±5 cc), FreeSurfer HR (5±3/14±8 cc), FreeSurfer LR (9±8/12±10 cc), SPM HR (5±3/4±7 cc), and SPM LR (5±4/5±3 cc). The correlation between the volumes measured by the CSF sequences and those measured by FSL, FreeSurfer and SPM at HR and LR was very good (all Pearson's correlation coefficients >0.83, R² = 0.67-0.97). The results from the downsampled data and the high-resolution data were similar. Both CSF MRI sequences have a precision comparable to, and a very good correlation with, established 3D T1-based automated segmentation methods for the segmentation of BPV and ICV. However, the short imaging time of the fast CSF MRI sequence is superior to the 3D T1 sequence on which segmentation with established methods is performed.
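
    The precision (scan-rescan mean absolute difference) and agreement (Pearson correlation) measures used above can be sketched as follows; the volumes are made-up values in cc, not study data.

```python
def mean_abs_diff(xs, ys):
    """Mean absolute difference between paired measurements (e.g. scan 1
    vs. scan 2 of the same subjects) -- the precision metric above."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two measurement methods."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

scan1 = [1250.0, 1300.0, 1180.0, 1420.0]   # e.g. BPV from the first scan, cc
scan2 = [1245.0, 1310.0, 1175.0, 1430.0]   # same subjects after repositioning
```

    Note the two metrics answer different questions: mean absolute difference quantifies repeatability of one method, while Pearson's r quantifies how well two methods co-vary (it is blind to a constant offset between them).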

  18. The potential of ground gravity measurements to validate GRACE data

    Directory of Open Access Journals (Sweden)

    D. Crossley

    2003-01-01

    New satellite missions are returning high-precision, time-varying satellite measurements of the Earth's gravity field. The GRACE mission is now in its calibration/validation phase and the first results of the gravity field solutions are imminent. We consider here the possibility of external validation using data from the superconducting gravimeters in the European sub-array of the Global Geodynamics Project (GGP) as 'ground truth' for comparison with GRACE. This is a pilot study in which we use 14 months of 1-hour data from the beginning of GGP (1 July 1997 to 30 August 1998), when the Potsdam instrument was relocated to South Africa. There are 7 stations clustered in west central Europe, and one station, Metsahovi, in Finland. We remove local tides, polar motion, local and global air pressure, and instrument drift, and then decimate to 6-hour samples. We see large variations in the time series of 5-10 µgal between even some neighboring stations, but there are also common features that correlate well over the 427-day period. The 8 stations are used to interpolate a minimum-curvature (gridded) surface that extends over the geographical region. This surface shows time and spatial coherency at the level of 2-4 µgal over the first half of the data and 1-2 µgal over the latter half. The mean value of the surface clearly shows a rise in European gravity of about 3 µgal over the first 150 days and a fairly constant value for the rest of the data. The accuracy of this mean is estimated at 1 µgal, which compares favorably with GRACE predictions for wavelengths of 500 km or less. Preliminary studies of hydrology loading over Western Europe show the difficulty of correlating the local hydrology, which can be highly variable, with large-scale gravity variations. Key words: GRACE, satellite gravity, superconducting gravimeter, GGP, ground truth

  19. GPM ground validation via commercial cellular networks: an exploratory approach

    Science.gov (United States)

    Rios Gaona, Manuel Felipe; Overeem, Aart; Leijnse, Hidde; Brasjen, Noud; Uijlenhoet, Remko

    2016-04-01

    The suitability of commercial microwave link networks for ground validation of GPM (Global Precipitation Measurement) data is evaluated here. Two state-of-the-art rainfall products are compared over the land surface of the Netherlands for a period of 7 months: rainfall maps from commercial cellular communication networks and Integrated Multi-satellite Retrievals for GPM (IMERG). Commercial microwave link networks are nowadays the core component in telecommunications worldwide. Rainfall rates can be retrieved from measurements of attenuation between transmitting and receiving antennas. If adequately set up, these networks enable rainfall monitoring tens of meters above the ground at high spatiotemporal resolutions (temporal sampling of seconds to tens of minutes, and spatial sampling of hundreds of meters to tens of kilometers). The GPM mission is the successor of TRMM (Tropical Rainfall Measurement Mission). For two years now, IMERG has offered rainfall estimates across the globe (180°W - 180°E and 60°N - 60°S) at spatiotemporal resolutions of 0.1° x 0.1° every 30 min. These two data sets are compared against a Dutch gauge-adjusted radar data set, considered to be the ground truth given its accuracy, spatiotemporal resolution and availability. The suitability of microwave link networks in satellite rainfall evaluation is of special interest, given the independent character of this technique, its high spatiotemporal resolutions and availability. These are valuable assets for water management and modeling of floods, landslides, and weather extremes; especially in places where rain gauge networks are scarce or poorly maintained, or where weather radar networks are too expensive to acquire and/or maintain.
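
    The attenuation-to-rain-rate retrieval the abstract refers to is, at its core, the inversion of a power law between specific attenuation and rain rate. A minimal sketch with illustrative round-number coefficients (operational values depend on link frequency and polarization, and are not given here):

```python
def rain_rate_from_attenuation(attenuation_db, length_km, a=0.33, b=1.1):
    """Path-averaged rain rate (mm/h) from rain-induced attenuation on a
    microwave link, inverting the power law k = a * R**b, where k is the
    specific attenuation in dB/km. Coefficients a, b are illustrative."""
    k = attenuation_db / length_km   # specific attenuation along the path
    return (k / a) ** (1.0 / b)      # invert the k-R power law
```

    In practice the measured attenuation must first be corrected for baseline (dry-weather) signal level and wet-antenna effects before this inversion is applied.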

  20. Numerical simulation and experimental validation of aircraft ground deicing model

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2016-05-01

    Full Text Available Aircraft ground deicing plays an important role of guaranteeing the aircraft safety. In practice, most airports generally use as many deicing fluids as possible to remove the ice, which causes the waste of the deicing fluids and the pollution of the environment. Therefore, the model of aircraft ground deicing should be built to establish the foundation for the subsequent research, such as the optimization of the deicing fluid consumption. In this article, the heat balance of the deicing process is depicted, and the dynamic model of the deicing process is provided based on the analysis of the deicing mechanism. In the dynamic model, the surface temperature of the deicing fluids and the ice thickness are regarded as the state parameters, while the fluid flow rate, the initial temperature, and the injection time of the deicing fluids are treated as control parameters. Ignoring the heat exchange between the deicing fluids and the environment, the simplified model is obtained. The rationality of the simplified model is verified by the numerical simulation and the impacts of the flow rate, the initial temperature and the injection time on the deicing process are investigated. To verify the model, the semi-physical experiment system is established, consisting of the low-constant temperature test chamber, the ice simulation system, the deicing fluid heating and spraying system, the simulated wing, the test sensors, and the computer measure and control system. The actual test data verify the validity of the dynamic model and the accuracy of the simulation analysis.

  1. Rainfall Product Evaluation for the TRMM Ground Validation Program

    Science.gov (United States)

    Amitai, E.; Wolff, D. B.; Robinson, M.; Silberstein, D. S.; Marks, D. A.; Kulie, M. S.; Fisher, B.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Evaluation of the Tropical Rainfall Measuring Mission (TRMM) satellite observations is conducted through a comprehensive Ground Validation (GV) Program. Standardized instantaneous and monthly rainfall products are routinely generated using quality-controlled ground based radar data from four primary GV sites. As part of the TRMM GV program, effort is being made to evaluate these GV products and to determine the uncertainties of the rainfall estimates. The evaluation effort is based on comparison to rain gauge data. The variance between the gauge measurement and the true averaged rain amount within the radar pixel is a limiting factor in the evaluation process. While monthly estimates are relatively simple to evaluate, the evaluation of the instantaneous products is much more of a challenge. Scattergrams of point comparisons between radar and rain gauges are extremely noisy for several reasons (e.g. sample volume discrepancies, timing and navigation mismatches, variability of Z(sub e)-R relationships), and therefore useless for evaluating the estimates. Several alternative methods, such as the analysis of the distribution of rain volume by rain rate as derived from gauge intensities and from reflectivities above the gauge network, will be presented. Alternative procedures to increase the accuracy of the estimates and to reduce their uncertainties also will be discussed.
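
    The proposed comparison of rain-volume distributions, derived on one side from gauge intensities and on the other from reflectivities, can be sketched as follows. The Z-R coefficients below are illustrative stand-ins, not the site-tuned Ze-R relations of the GV program:

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=300.0, b=1.4):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h) via the
    power law Z = a * R**b; a and b are illustrative values."""
    z = 10.0 ** (np.asarray(dbz) / 10.0)   # dBZ -> linear reflectivity
    return (z / a) ** (1.0 / b)

def volume_fraction_by_rate(rates, bins):
    """Fraction of total rain volume contributed by each rain-rate bin:
    the distribution compared between gauges and radar in the abstract."""
    vol, _ = np.histogram(rates, bins=bins, weights=rates)
    return vol / vol.sum()
```

    Comparing these normalized distributions sidesteps the point-matching noise of scattergrams, since only the bulk statistics of the two sensors are contrasted.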

  2. Evolution of the JPSS Ground Project Calibration and Validation System

    Science.gov (United States)

    Purcell, Patrick; Chander, Gyanesh; Jain, Peyush

    2016-01-01

    The Joint Polar Satellite System (JPSS) is the National Oceanic and Atmospheric Administration's (NOAA) next-generation operational Earth observation Program that acquires and distributes global environmental data from multiple polar-orbiting satellites. The JPSS Program plays a critical role in NOAA's mission to understand and predict changes in weather, climate, oceans, coasts, and space environments, which supports the Nation's economy and protection of lives and property. The National Aeronautics and Space Administration (NASA) is acquiring and implementing the JPSS, comprised of flight and ground systems, on behalf of NOAA. The JPSS satellites are planned to fly in the afternoon orbit and will provide operational continuity of satellite-based observations and products for NOAA Polar-orbiting Operational Environmental Satellites (POES) and the Suomi National Polar-orbiting Partnership (SNPP) satellite. To support the JPSS Calibration and Validation (CalVal) node, the Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) services facilitate: Algorithm Integration and Checkout, Algorithm and Product Operational Tuning, Instrument Calibration, Product Validation, Algorithm Investigation, and Data Quality Support and Monitoring. GRAVITE is a mature, deployed system that currently supports the SNPP Mission and has been in operations since SNPP launch. This paper discusses the major re-architecture for Block 2.0 that incorporates SNPP lessons learned, describes the architecture of the system, and demonstrates how GRAVITE has evolved as a system with increased performance. It is now a robust, stable, reliable, maintainable, scalable, and secure system that supports development, test, and production strings, replaces proprietary and custom software, uses open source software, and is compliant with NASA and NOAA standards.

  3. GPM GROUND VALIDATION COMPOSITE SATELLITE OVERPASSES MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Composite Satellite Overpasses MC3E dataset provides satellite overpasses from the AQUA satellite during the Midlatitude Continental...

  4. GPM GROUND VALIDATION ENVIRONMENT CANADA (EC) MANUAL PRECIPITATION MEASUREMENTS GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Environment Canada (EC) Manual Precipitation Measurements GCPEx dataset was collected during the GPM Cold-season Precipitation Experiment...

  5. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    Science.gov (United States)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We follow the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. Fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the
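
    The scaling relations mentioned at the end rest on the standard seismic-moment definition, M0 = mu * A * D, combined with the Hanks-Kanamori magnitude relation. A minimal sketch (the rigidity is an assumed typical crustal value, not a number from the abstract):

```python
import math

def moment_magnitude(area_km2, mean_slip_m, rigidity_pa=3.0e10):
    """Moment magnitude from rupture area and average slip:
    M0 = mu * A * D in N*m, then Mw = (2/3) * (log10(M0) - 9.1)
    (Hanks-Kanamori). Rigidity defaults to a typical crustal 30 GPa."""
    m0 = rigidity_pa * (area_km2 * 1.0e6) * mean_slip_m  # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)
```

    A Landers-like rupture of roughly 1000 km² with ~3 m average slip lands near Mw 7.2, consistent with the target event.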

  6. Figure/Ground Segmentation via a Haptic Glance: Attributing Initial Finger Contacts to Objects or Their Supporting Surfaces.

    Science.gov (United States)

    Pawluk, D; Kitada, R; Abramowicz, A; Hamilton, C; Lederman, S J

    2011-01-01

    The current study addresses the well-known "figure/ground" problem in human perception, a fundamental topic that has received surprisingly little attention from touch scientists to date. Our approach is grounded in, and directly guided by, current knowledge concerning the nature of haptic processing. Given inherent figure/ground ambiguity in natural scenes and limited sensory inputs from first contact (a "haptic glance"), we consider first whether people are even capable of differentiating figure from ground (Experiments 1 and 2). Participants were required to estimate the strength of their subjective impression that they were feeling an object (i.e., figure) as opposed to just the supporting structure (i.e., ground). Second, we propose a tripartite factor classification scheme to further assess the influence of kinetic, geometric (Experiments 1 and 2), and material (Experiment 2) factors on haptic figure/ground segmentation, complemented by more open-ended subjective responses obtained at the end of the experiment. Collectively, the results indicate that under certain conditions it is possible to segment figure from ground via a single haptic glance with a reasonable degree of certainty, and that all three factor classes influence the estimated likelihood that brief, spatially distributed fingertip contacts represent contact with an object and/or its background supporting structure.

  7. Validated automatic segmentation of AMD pathology including drusen and geographic atrophy in SD-OCT images.

    Science.gov (United States)

    Chiu, Stephanie J; Izatt, Joseph A; O'Connell, Rachelle V; Winter, Katrina P; Toth, Cynthia A; Farsiu, Sina

    2012-01-05

    To automatically segment retinal spectral domain optical coherence tomography (SD-OCT) images of eyes with age-related macular degeneration (AMD) and various levels of image quality to advance the study of retinal pigment epithelium (RPE)+drusen complex (RPEDC) volume changes indicative of AMD progression. A general segmentation framework based on graph theory and dynamic programming was used to segment three retinal boundaries in SD-OCT images of eyes with drusen and geographic atrophy (GA). A validation study for eyes with nonneovascular AMD was conducted, forming subgroups based on scan quality and presence of GA. To test for accuracy, the layer thickness results from two certified graders were compared against automatic segmentation results for 220 B-scans across 20 patients. For reproducibility, automatic layer volumes were compared that were generated from 0° versus 90° scans in five volumes with drusen. The mean differences in the measured thicknesses of the total retina and RPEDC layers were 4.2 ± 2.8 and 3.2 ± 2.6 μm for automatic versus manual segmentation. When the 0° and 90° datasets were compared, the mean differences in the calculated total retina and RPEDC volumes were 0.28% ± 0.28% and 1.60% ± 1.57%, respectively. The average segmentation time per image was 1.7 seconds automatically versus 3.5 minutes manually. The automatic algorithm accurately and reproducibly segmented three retinal boundaries in images containing drusen and GA. This automatic approach can reduce time and labor costs and yield objective measurements that potentially reveal quantitative RPE changes in longitudinal clinical AMD studies. (ClinicalTrials.gov number, NCT00734487.).
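
    The graph-theory and dynamic-programming framework can be illustrated with a toy one-row-per-column shortest-path trace over a cost image (low cost where the intensity gradient is strong). This is a drastic simplification of the actual algorithm, for intuition only:

```python
import numpy as np

def trace_boundary(cost):
    """Toy dynamic-programming layer trace: find, for each column of a
    cost image, the row of a minimum-cost left-to-right path that moves
    at most one row per column. A simplified sketch of graph-based
    retinal layer segmentation, not the authors' implementation."""
    rows, cols = cost.shape
    acc = cost.astype(float)                     # accumulated path cost
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(r - 1, 0), min(r + 2, rows)
            acc[r, c] += acc[lo:hi, c - 1].min()
    path = [int(np.argmin(acc[:, -1]))]          # cheapest endpoint
    for c in range(cols - 1, 0, -1):             # backtrack to the left edge
        r = path[-1]
        lo, hi = max(r - 1, 0), min(r + 2, rows)
        path.append(lo + int(np.argmin(acc[lo:hi, c - 1])))
    return path[::-1]
```

    With a cost image whose minimum runs along one row (a clean layer boundary), the recovered path follows that row across every column.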

  8. The EADC-ADNI Harmonized Protocol for manual hippocampal segmentation on magnetic resonance: Evidence of validity

    Science.gov (United States)

    Frisoni, Giovanni B.; Jack, Clifford R.; Bocchetta, Martina; Bauer, Corinna; Frederiksen, Kristian S.; Liu, Yawu; Preboske, Gregory; Swihart, Tim; Blair, Melanie; Cavedo, Enrica; Grothe, Michel J.; Lanfredi, Mariangela; Martinez, Oliver; Nishikawa, Masami; Portegies, Marileen; Stoub, Travis; Ward, Chadwich; Apostolova, Liana G.; Ganzola, Rossana; Wolf, Dominik; Barkhof, Frederik; Bartzokis, George; DeCarli, Charles; Csernansky, John G.; deToledo-Morrell, Leyla; Geerlings, Mirjam I.; Kaye, Jeffrey; Killiany, Ronald J.; Lehéricy, Stephane; Matsuda, Hiroshi; O'Brien, John; Silbert, Lisa C.; Scheltens, Philip; Soininen, Hilkka; Teipel, Stefan; Waldemar, Gunhild; Fellgiebel, Andreas; Barnes, Josephine; Firbank, Michael; Gerritsen, Lotte; Henneman, Wouter; Malykhin, Nikolai; Pruessner, Jens C.; Wang, Lei; Watson, Craig; Wolf, Henrike; deLeon, Mony; Pantel, Johannes; Ferrari, Clarissa; Bosco, Paolo; Pasqualetti, Patrizio; Duchesne, Simon; Duvernoy, Henri; Boccardi, Marina

    2015-01-01

    Background: An international Delphi panel has defined a harmonized protocol (HarP) for the manual segmentation of the hippocampus on MR. The aim of this study is to assess the concurrent validity of the HarP against local protocols, and its major sources of variance. Methods: Fourteen tracers segmented 10 Alzheimer's Disease Neuroimaging Initiative (ADNI) cases scanned at 1.5 T and 3 T following local protocols, qualified for segmentation based on the HarP through a standard web platform, and resegmented following the HarP. The five most accurate tracers followed the HarP to segment 15 ADNI cases acquired at three time points on both 1.5 T and 3 T. Results: The agreement among tracers was relatively low with the local protocols (absolute left/right ICC 0.44/0.43) and much higher with the HarP (absolute left/right ICC 0.88/0.89). On the larger set of 15 cases, the HarP agreement within tracers (left/right ICC range: 0.94/0.95 to 0.99/0.99) and among tracers (left/right ICC 0.89/0.90) was very high. The volume variance due to different tracers was 0.9% of the total, comparing favorably to the variance due to scanner manufacturer (1.2%), atrophy rates (3.5%), hemispheric asymmetry (3.7%), and field strength (4.4%), and significantly smaller than the variance due to atrophy (33.5%, P < .001) and physiological variability (49.2%, P < .001). Conclusions: The HarP has high measurement stability compared with local segmentation protocols, and good reproducibility within and among human tracers. Hippocampi segmented with the HarP can be used as a reference for the qualification of human tracers and automated segmentation algorithms. PMID:25267715
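
    The absolute-agreement ICC quoted throughout is, in the usual Shrout-Fleiss taxonomy, the two-way random, single-measure form ICC(2,1). A compact sketch of the computation (not the authors' code; rows are cases, columns are tracers):

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random, single-measure, absolute-agreement
    intraclass correlation. Rows = cases, columns = raters/tracers."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-case
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-rater
    sse = ((x - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))                             # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

    Because the absolute-agreement form penalizes systematic rater offsets (through the MSC term), it is a stricter standard than consistency ICC for comparing tracers.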

  9. CryoSat-2 Payload Data Ground Segment and Data Processing Status

    Science.gov (United States)

    Badessi, S.; Frommknecht, B.; Parrinello, T.; Mizzi, L.

    2012-04-01

    Selected as the first Earth Explorer Opportunity mission and following the launch failure of Cryosat-1 in 2005, the Cryosat-2 mission was launched on the 8th April 2010 and it is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice over a baseline 3-year period. The main CryoSat-2 mission objectives can be summarised in the determination of the regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and in the determination of regional and total contributions to global sea level of the Antarctic and Greenland Ice. Therefore, the observations made over the life time of the mission will provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. Scope of this paper is to describe the Cryosat-2 Ground Segment present configuration and its main function to satisfy the Cryosat-2 mission requirements. In particular, the paper will highlight the current status of the processing of the SIRAL instrument L1b and L2 products in terms of completeness and availability. Additional information will be also given on the PDGS current status and planned evolution, the latest product and processor updates and the status of the associated reprocessing campaign.

  10. The CryoSat-2 Payload Data Ground Segment and Data Processing

    Science.gov (United States)

    Frommknecht, Bjoern; Parrinello, Tommaso; Badessi, Stefano; Mizzi, Loretta; Torroni, Vittorio

    2017-04-01

    Selected as the first Earth Explorer Opportunity mission and following the launch failure of Cryosat-1 in 2005, the Cryosat-2 mission was launched on the 8th April 2010 and it is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice over a baseline 3-year period. The main CryoSat-2 mission objectives can be summarised in the determination of the regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and in the determination of regional and total contributions to global sea level of the Antarctic and Greenland Ice. Therefore, the observations made over the life time of the mission will provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. Scope of this paper is to describe the Cryosat-2 Ground Segment present configuration and its main function to satisfy the Cryosat-2 mission requirements. In particular, the paper will highlight the current status of the processing of the SIRAL instrument L1b and L2 products, both for ocean and ice products, in terms of completeness and availability. Additional information will be also given on the PDGS current status and planned evolutions, including product and processor updates and associated reprocessing campaigns.

  11. Identifying food-related life style segments by a cross-culturally valid scaling device

    DEFF Research Database (Denmark)

    Brunsø, Karen; Grunert, Klaus G.

    1994-01-01

    -related life style in a cross-culturally valid way. To this end, we have collected a pool of 202 items, collected data in three countries, and have constructed scales based on cross-culturally stable patterns. These scales have then been subjected to a number of tests of reliability and validity. We have...... then applied the set of scales to a fourth country, Germany, based on a representative sample of 1000 respondents. The scales had, with a few exceptions, moderately good reliabilities. A cluster analysis led to the identification of 5 segments, which differed on all 23 scales....

  12. A sensitivity analysis method for the body segment inertial parameters based on ground reaction and joint moment regressor matrices.

    Science.gov (United States)

    Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane

    2017-11-07

    This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamics parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing us to use simple sensitivity analysis methods. The sensitivity analysis method was applied to gait dynamics and kinematics data of nine subjects with a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 segment inertial parameters out of the 150 of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
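
    The property exploited above is that the equations of motion are linear in the inertial parameters, tau = Y(q, qd, qdd) * phi, so each parameter's influence on the output can be read off its regressor column. A hypothetical sketch of such a per-parameter index (the regressor is assumed stacked over all gait samples, one column per inertial parameter):

```python
import numpy as np

def sensitivity_indices(regressor, params):
    """Relative influence of each inertial parameter on the model output
    (e.g. joint moments). Since tau = Y @ phi, the contribution of
    parameter j over all samples is ||Y[:, j] * phi[j]||; indices are
    normalized to sum to 1. A sketch, not the paper's exact index."""
    contrib = np.linalg.norm(regressor * params, axis=0)
    return contrib / contrib.sum()
```

    Parameters whose normalized index is negligible (76 of 150 in the study) can be dropped from the identification without materially changing the estimated ground reaction forces or joint moments.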

  13. Fast CSF MRI for brain segmentation; Cross-validation by comparison with 3D T1-based brain segmentation methods

    DEFF Research Database (Denmark)

    van der Kleij, Lisa A.; de Bresser, Jeroen; Hendrikse, Jeroen

    2018-01-01

    Objective: In previous work we have developed a fast sequence that focusses on cerebrospinal fluid (CSF) based on the long T2 of CSF. By processing the data obtained with this CSF MRI sequence, brain parenchymal volume (BPV) and intracranial volume (ICV) can be automatically obtained. The aim...... of this study was to assess the precision of the BPV and ICV measurements of the CSF MRI sequence and to validate the CSF MRI sequence by comparison with 3D T1-based brain segmentation methods. Materials and methods: Ten healthy volunteers (2 females; median age 28 years) were scanned (3T MRI) twice......cc) and CSF HR (5 +/- 5/4 +/- 2cc) were comparable to FSL HR (9 +/- 11/19 +/- 23cc), FSL LR (7 +/- 4,6 +/- 5cc), FreeSurfer HR (5 +/- 3/14 +/- 8cc), FreeSurfer LR (9 +/- 8,12 +/- 10cc), SPM HR (5 +/- 3/4 +/- 7cc), and SPM LR (5 +/- 4,5 +/- 3cc). The correlation between the measured volumes...

  14. GPM GROUND VALIDATION KICT NEXRAD MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation KICT NEXRAD MC3E dataset was collected from April 22, 2011 to June 6, 2011 for the Midlatitude Continental Convective Clouds Experiment...

  15. The CRYOSAT-2 Payload Ground Segment: Data Processing Status and Data Access

    Science.gov (United States)

    Parrinello, T.; Frommknecht, B.; Gilles, P.

    2010-12-01

    Selected as the first Earth Explorer Opportunity mission and following the launch failure of Cryosat-1 in 2005, the Cryosat-2 mission was launched on the 8th April 2010 and it is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice over a 3-year period. The main CryoSat-2 mission objectives can be summarised in the determination of the regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and in the determination of regional and total contributions to global sea level of the Antarctic and Greenland Ice. Therefore, the observations made over the life time of the mission will provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. Cryosat-2 carries an innovative radar altimeter called the Synthetic Aperture Interferometric Altimeter (SIRAL) with two antennas and with extended capabilities to meet the measurement requirements for ice-sheets elevation and sea-ice freeboard. Scope of this paper is to describe the Cryosat Ground Segment and its main function to satisfy the Cryosat mission requirements. In particular, the paper will discuss the processing steps necessary to produce SIRAL L1b waveform power data and the SIRAL L2 geophysical elevation data from the raw data acquired by the satellite. The paper will also present the current status of the data processing in terms of completeness, availability and data access to the scientific community.

  16. Validation of phalanx bone three-dimensional surface segmentation from computed tomography images using laser scanning

    Energy Technology Data Exchange (ETDEWEB)

    DeVries, Nicole A.; Gassman, Esther E.; Kallemeyn, Nicole A. [The University of Iowa, Department of Biomedical Engineering, Center for Computer Aided Design, Iowa City, IA (United States); Shivanna, Kiran H. [The University of Iowa, Center for Computer Aided Design, Iowa City, IA (United States); Magnotta, Vincent A. [The University of Iowa, Department of Biomedical Engineering, Department of Radiology, Center for Computer Aided Design, Iowa City, IA (United States); Grosland, Nicole M. [The University of Iowa, Department of Biomedical Engineering, Department of Orthopaedics and Rehabilitation, Center for Computer Aided Design, Iowa City, IA (United States)

    2008-01-15

    To examine the validity of manually defined bony regions of interest from computed tomography (CT) scans. Segmentation measurements were performed on the coronal reformatted CT images of the three phalanx bones of the index finger from five cadaveric specimens. Two smoothing algorithms (image-based and Laplacian surface-based) were evaluated to determine their ability to represent accurately the anatomic surface. The resulting surfaces were compared with laser surface scans of the corresponding cadaveric specimen. The average relative overlap between two tracers was 0.91 for all bones. The overall mean difference between the manual unsmoothed surface and the laser surface scan was 0.20 mm. Both image-based and Laplacian surface-based smoothing were compared; the overall mean difference for image-based smoothing was 0.21 mm and 0.20 mm for Laplacian smoothing. This study showed that manual segmentation of high-contrast, coronal, reformatted, CT datasets can accurately represent the true surface geometry of bones. Additionally, smoothing techniques did not significantly alter the surface representations. This validation technique should be extended to other bones, image segmentation and spatial filtering techniques. (orig.)
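
    The "average relative overlap" reported above is a volume-overlap statistic between two tracers' binary segmentations. Assuming the common intersection-over-union (Jaccard) definition, it can be computed as:

```python
import numpy as np

def relative_overlap(mask_a, mask_b):
    """Relative overlap between two binary segmentations, taken here as
    the Jaccard index |A & B| / |A | B| (a common definition; the paper
    may use a variant). Returns 1.0 for two empty masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

    An overlap of 0.91, as reported between tracers, means the voxels labeled by only one of the two tracers amount to under a tenth of the combined labeled volume.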

  17. Validation of phalanx bone three-dimensional surface segmentation from computed tomography images using laser scanning

    International Nuclear Information System (INIS)

    DeVries, Nicole A.; Gassman, Esther E.; Kallemeyn, Nicole A.; Shivanna, Kiran H.; Magnotta, Vincent A.; Grosland, Nicole M.

    2008-01-01

    To examine the validity of manually defined bony regions of interest from computed tomography (CT) scans. Segmentation measurements were performed on the coronal reformatted CT images of the three phalanx bones of the index finger from five cadaveric specimens. Two smoothing algorithms (image-based and Laplacian surface-based) were evaluated to determine their ability to represent accurately the anatomic surface. The resulting surfaces were compared with laser surface scans of the corresponding cadaveric specimen. The average relative overlap between two tracers was 0.91 for all bones. The overall mean difference between the manual unsmoothed surface and the laser surface scan was 0.20 mm. Both image-based and Laplacian surface-based smoothing were compared; the overall mean difference for image-based smoothing was 0.21 mm and 0.20 mm for Laplacian smoothing. This study showed that manual segmentation of high-contrast, coronal, reformatted, CT datasets can accurately represent the true surface geometry of bones. Additionally, smoothing techniques did not significantly alter the surface representations. This validation technique should be extended to other bones, image segmentation and spatial filtering techniques. (orig.)

  18. A gradient-based method for segmenting FDG-PET images: methodology and validation

    International Nuclear Information System (INIS)

    Geets, Xavier; Lee, John A.; Gregoire, Vincent; Bol, Anne; Lonneux, Max

    2007-01-01

    A new gradient-based method for segmenting FDG-PET images is described and validated. The proposed method relies on the watershed transform and hierarchical cluster analysis. To allow a better estimation of the gradient intensity, iteratively reconstructed images were first denoised and deblurred with an edge-preserving filter and a constrained iterative deconvolution algorithm. Validation was first performed on computer-generated 3D phantoms containing spheres, then on a real cylindrical Lucite phantom containing spheres of different volumes ranging from 2.1 to 92.9 ml. Moreover, laryngeal tumours from seven patients were segmented on PET images acquired before laryngectomy by the gradient-based method and the thresholding method based on the source-to-background ratio developed by Daisne (Radiother Oncol 2003;69:247-50). For the spheres, the calculated volumes and radii were compared with the known values; for laryngeal tumours, the volumes were compared with the macroscopic specimens. Volume mismatches were also analysed. On computer-generated phantoms, the deconvolution algorithm decreased the mis-estimate of volumes and radii. For the Lucite phantom, the gradient-based method led to a slight underestimation of sphere volumes (by 10-20%), corresponding to negligible radius differences (0.5-1.1 mm); for laryngeal tumours, the segmented volumes by the gradient-based method agreed with those delineated on the macroscopic specimens, whereas the threshold-based method overestimated the true volume by 68% (p = 0.014). Lastly, macroscopic laryngeal specimens were totally encompassed by neither the threshold-based nor the gradient-based volumes. The gradient-based segmentation method applied on denoised and deblurred images proved to be more accurate than the source-to-background ratio method. (orig.)
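
    The contrast between the two delineation strategies can be shown on a 1-D uptake profile: the gradient-based approach places the edge at the steepest intensity change, while a source-to-background-ratio (SBR) threshold places it where uptake crosses a fixed level. Both functions below are toy illustrations, and the 0.41 factor is an illustrative constant, not Daisne's calibrated value:

```python
import numpy as np

def gradient_edge(profile):
    """Edge location on a 1-D uptake profile: index of the maximum
    absolute gradient, the core idea behind gradient/watershed methods."""
    return int(np.argmax(np.abs(np.gradient(profile))))

def sbr_threshold_edge(profile, source, background):
    """Edge location from a source-to-background-ratio threshold:
    first index where uptake falls below the threshold level."""
    thr = background + 0.41 * (source - background)  # illustrative factor
    below = np.nonzero(profile < thr)[0]
    return int(below[0]) if below.size else len(profile)
```

    On blurred PET data the threshold crossing systematically shifts outward as the SBR changes, which is one reason the abstract reports a 68% volume overestimate for the threshold-based method.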

  19. Ground Validation Assessments of GPM Core Observatory Science Requirements

    Science.gov (United States)

    Petersen, Walt; Huffman, George; Kidd, Chris; Skofronick-Jackson, Gail

    2017-04-01

    NASA Global Precipitation Measurement (GPM) Mission science requirements define specific measurement error standards for retrieved precipitation parameters such as rain rate, raindrop size distribution, and falling snow detection on instantaneous temporal scales and spatial resolutions ranging from effective instrument fields of view (FOV) to grid scales of 50 km x 50 km. Quantitative evaluation of these requirements intrinsically relies on GPM precipitation retrieval algorithm performance in myriad precipitation regimes (and hence on assumptions related to physics) and on the quality of the ground-validation (GV) data used to assess the satellite products. We will review GPM GV products, their quality, and their application to assessing GPM science requirements, interleaving measurement and precipitation-physics considerations applicable to the approaches used. Core GV data products used to assess GPM satellite products include 1) two-minute and 30-minute rain-gauge bias-adjusted radar rain-rate products and precipitation types (rain/snow) adapted/modified from the NOAA/OU Multi-Radar Multi-Sensor (MRMS) product over the continental U.S.; 2) polarimetric radar estimates of rain rate over the ocean collected using the K-Pol radar at Kwajalein Atoll in the Marshall Islands and the Middleton Island WSR-88D radar located in the Gulf of Alaska; and 3) multi-regime, field-campaign and site-specific disdrometer-measured rain/snow size distribution (DSD), phase, and fall-speed information used to derive polarimetric radar-based DSD retrievals and snow water equivalent rates (SWER) for comparison to coincident GPM-estimated DSDs and precipitation rates/types, respectively. Within the limits of GV-product uncertainty, we demonstrate that the GPM Core satellite meets its basic mission science requirements for a variety of precipitation regimes. For the liquid phase, we find that GPM radar-based products are particularly successful in meeting bias and random-error requirements.
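
    The bias and random-error assessment described above can be sketched, in simplified form, as a comparison of instantaneous satellite rain-rate estimates against matched GV reference values. The function name and the error definitions below are illustrative assumptions, not the official GPM formulation.

```python
import numpy as np

def bias_and_random_error(sat_rr, gv_rr):
    """Bias and random error between satellite and ground-validation (GV)
    rain-rate estimates matched at the same fields of view (mm/h).
    Definitions here are illustrative, not GPM's official metrics."""
    sat_rr = np.asarray(sat_rr, float)
    gv_rr = np.asarray(gv_rr, float)
    diff = sat_rr - gv_rr
    bias = diff.mean()                           # mean error (mm/h)
    rel_bias = 100.0 * diff.sum() / gv_rr.sum()  # relative bias (%)
    random_err = diff.std(ddof=1)                # sample std of error (mm/h)
    return bias, rel_bias, random_err

# toy matched FOV samples (mm/h)
b, rb, re = bias_and_random_error([1.2, 3.4, 0.8, 5.1], [1.0, 3.0, 1.0, 5.0])
```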

  20. Improved vegetation segmentation with ground shadow removal using an HDR camera

    NARCIS (Netherlands)

    Suh, Hyun K.; Hofstee, Jan W.; Henten, van Eldert J.

    2018-01-01

    A vision-based weed control robot for agricultural field application requires robust vegetation segmentation. The output of vegetation segmentation is the fundamental element in the subsequent process of weed and crop discrimination as well as weed control. There are two challenging issues for

  1. A new validation technique for estimations of body segment inertia tensors: Principal axes of inertia do matter.

    Science.gov (United States)

    Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J

    2016-12-08

    The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne object as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensors estimated with a geometric formula or with the compound pendulum technique. The deviation angles between experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of the simulations to changes in the magnitude of the principal moments of inertia within ±10% and to changes in the orientation of the principal axes of inertia within ±10° (of the geometric-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the inertia tensor estimated geometrically, and between 11.7° and 15.2° for the compound pendulum values. Errors of up to 10% in the magnitude of the principal moments of inertia yielded root mean squared deviation angles ranging between 3.2° and 6.6°, and between 5.5° and 7.9° when lumped with errors of 10° in the orientation of the principal axes of inertia. The proposed technique can effectively validate inertia tensors from novel methods for estimating body segment inertial parameters. The orientation of the principal axes of inertia should not be neglected when modelling human/animal mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
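
    The root mean squared deviation angle used as the validation metric above can be sketched as follows; the function name and the array layout (N x 3 angular velocity samples) are illustrative assumptions.

```python
import numpy as np

def rms_deviation_angle(omega_exp, omega_sim):
    """RMS of the angle (degrees) between experimental and simulated
    angular velocity vectors at each time sample (N x 3 arrays).
    Illustrative reconstruction of the metric described in the abstract."""
    a = np.asarray(omega_exp, float)
    b = np.asarray(omega_sim, float)
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    ang = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return float(np.sqrt(np.mean(ang ** 2)))
```

A perfect simulation gives 0°, while a simulation whose angular velocity vector is everywhere orthogonal to the measured one gives 90°.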

  2. GPM GROUND VALIDATION AIRBORNE SECOND GENERATION PRECIPITATION RADAR (APR-2) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Airborne Second Generation Precipitation Radar (APR-2) GCPEx dataset was collected during the GPM Cold-season Precipitation Experiment...

  3. GPM GROUND VALIDATION MCGILL W-BAND RADAR GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation McGill W-Band Radar GCPEx dataset was collected from February 1, 2012 to February 29, 2012 at the CARE site in Ontario, Canada as a part of...

  4. GPM GROUND VALIDATION ENVIRONMENT CANADA (EC) SNOW SURVEYS GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Environment Canada Snow Surveys GCPEx dataset was manually collected during the GPM Cold-season Precipitation Experiment (GCPEx), which...

  5. GPM GROUND VALIDATION DUAL POLARIZED C-BAND DOPPLER RADAR KING CITY GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Dual Polarized C-Band Doppler Radar King City GCPEx dataset has special Range Height Indicator (RHI) and sector scans of several dual...

  6. GPM GROUND VALIDATION JOSS-WALDVOGEL DISDROMETER (JW) NSSTC V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Joss-Waldvogel Disdrometer (JW) NSSTC dataset was collected by the Joss-Waldvogel (JW) disdrometer, which is an impact-type...

  7. GPM GROUND VALIDATION ADVANCED MICROWAVE RADIOMETER RAIN IDENTIFICATION (ADMIRARI) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Advanced Microwave Radiometer Rain Identification (ADMIRARI) GCPEx dataset measures brightness temperature at three frequencies (10.7, 21.0...

  8. GPM GROUND VALIDATION ENVIRONMENT CANADA (EC) VAISALA CEILOMETER GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Environment Canada (EC) VAISALA Ceilometer GCPEx dataset was collected during the GPM Cold-season Precipitation Experiment (GCPEx) in...

  9. GPM GROUND VALIDATION NCAR CLOUD MICROPHYSICS PARTICLE PROBES MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation NCAR Cloud Microphysics Particle Probes MC3E dataset was collected during the Midlatitude Continental Convective Clouds Experiment (MC3E),...

  10. GPM GROUND VALIDATION CONICAL SCANNING MILLIMETER-WAVE IMAGING RADIOMETER (COSMIR) MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Conical Scanning Millimeter-wave Imaging Radiometer (COSMIR) MC3E dataset used the Conical Scanning Millimeter-wave Imaging Radiometer...

  11. GPM GROUND VALIDATION NASA MICRO RAIN RADAR (MRR) MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation NASA Micro Rain Radar (MRR) MC3E dataset was collected by a Micro Rain Radar (MRR), which is a vertically pointing Doppler radar which...

  12. GPM GROUND VALIDATION CONICAL SCANNING MILLIMETER-WAVE IMAGING RADIOMETER (COSMIR) GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Conical Scanning Millimeter-wave Imaging Radiometer (COSMIR) GCPEx dataset used the Conical Scanning Millimeter-wave Imaging Radiometer...

  13. GPM GROUND VALIDATION NASA ER-2 NAVIGATION DATA MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation NASA ER-2 Navigation Data MC3E dataset contains information recorded by an on board navigation recorder (NavRec). In addition to typical...

  14. GPM GROUND VALIDATION OKLAHOMA CLIMATOLOGICAL SURVEY MESONET MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Oklahoma Climatological Survey Mesonet MC3E data were collected during the Midlatitude Continental Convective Clouds Experiment (MC3E) in...

  15. GPM GROUND VALIDATION NOAA UHF 449 PROFILER RAW DATA SPC FORMAT MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation NOAA UHF 449 Profiler Raw Data SPC Format MC3E dataset was collected during the NASA supported Midlatitude Continental Convective Clouds...

  16. GPM GROUND VALIDATION NASA S-BAND DUAL POLARIMETRIC (NPOL) DOPPLER RADAR IFLOODS V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation NASA S-Band Dual Polarimetric (NPOL) Doppler Radar IFloodS data set was collected from April 30, 2013 to June 16, 2013 near Traer, Iowa as...

  17. GPM GROUND VALIDATION SATELLITE SIMULATED ORBITS C3VP V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Satellite Simulated Orbits C3VP dataset is available in the Orbital database, which takes account for the atmospheric profiles, the...

  18. GPM GROUND VALIDATION SATELLITE SIMULATED ORBITS MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Satellite Simulated Orbits MC3E dataset is available in the Orbital database , which takes account for the atmospheric profiles, the...

  19. GPM GROUND VALIDATION SATELLITE SIMULATED ORBITS TWP-ICE V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Satellite Simulated Orbits TWP-ICE dataset is available in the Orbital database, which takes account for the atmospheric profiles, the...

  20. GPM GROUND VALIDATION NOAA S-BAND PROFILER MINUTE DATA MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation NOAA S-Band Profiler Minute Data MC3E dataset was gathered during the Midlatitude Continental Convective Clouds Experiment (MC3E) in...

  1. Feedback enhances feedforward figure-ground segmentation by changing firing mode.

    Science.gov (United States)

    Supèr, Hans; Romeo, August

    2011-01-01

    In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogeneous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons.

  2. Feedback enhances feedforward figure-ground segmentation by changing firing mode.

    Directory of Open Access Journals (Sweden)

    Hans Supèr

    Full Text Available In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogeneous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons.

  3. Feedback Enhances Feedforward Figure-Ground Segmentation by Changing Firing Mode

    Science.gov (United States)

    Supèr, Hans; Romeo, August

    2011-01-01

    In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogeneous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons. PMID:21738747

  4. Consolidated Ground Segment Requirements for a UHF Radar for the ESSAS

    Science.gov (United States)

    Muller, Florent; Vera, Juan

    2009-03-01

    ESA has launched a nine-month study to define the requirements associated with the ground segment of a UHF (300-3000 MHz) radar system. The study was awarded in open competition to a consortium led by Onera, together with the Spanish company Indra and its sub-contractor Deimos. After a phase of consolidation of the requirements, different monostatic and bistatic radar concepts will be proposed and evaluated. Two concepts will be selected for further design studies. ESA will then select the best one for detailed design as well as cost and performance evaluation. The aim of this paper is to present the results of the first phase of the study, concerning the consolidation of the radar system requirements. The main mission of the system is to build and maintain a catalogue of the objects in low Earth orbit (apogee lower than 2000 km) in an autonomous way, for different sizes of objects, depending on the successive future development phases of the project. The final step must give the capability of detecting and tracking 10 cm objects, with a possible upgrade to 5 cm objects. A demonstration phase must be defined for 1 m objects. These different steps will be considered during all phases of the study. Taking this mission and the different steps of the study as a starting point, the first phase, finished at the end of January 2009, defined a set of requirements for the radar system. The first part describes the constraints derived from the targets and their environment. Orbiting objects have a given distribution in space, and their observability and detectability are based on it as well as on the location of the radar system; they also depend on natural propagation phenomena, especially ionospheric issues, and on the characteristics of the objects. The second part focuses on the mission itself. To carry out the mission, objects must be detected and tracked regularly to refresh the associated orbital parameters

  5. Simple Methods for Scanner Drift Normalization Validated for Automatic Segmentation of Knee Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Dam, Erik Bjørnager

    2018-01-01

    Scanner drift is a well-known magnetic resonance imaging (MRI) artifact characterized by gradual signal degradation and scan intensity changes over time. In addition, hardware and software updates may imply abrupt changes in signal. The combined effects are particularly challenging for automatic...... image analysis methods used in longitudinal studies. The implication is increased measurement variation and a risk of bias in the estimations (e.g. in the volume change for a structure). We proposed two quite different approaches for scanner drift normalization and demonstrated the performance...... for segmentation of knee MRI using the fully automatic KneeIQ framework. The validation included a total of 1975 scans from both high-field and low-field MRI. The results demonstrated that the pre-processing method denoted Atlas Affine Normalization significantly removed scanner drift effects and ensured...
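
    The paper's Atlas Affine Normalization method is not specified in this record, but the general idea of correcting drift with a global affine intensity transform can be sketched as below; the function name and the mean/std matching criterion are illustrative assumptions only.

```python
import numpy as np

def affine_intensity_normalize(scan, reference):
    """Map scan intensities with a global affine transform a*x + b so the
    scan's mean and standard deviation match a reference (e.g. an atlas).
    A minimal sketch of drift normalization; the actual Atlas Affine
    Normalization in the paper may differ."""
    scan = np.asarray(scan, float)
    ref = np.asarray(reference, float)
    a = ref.std() / scan.std()          # gain correcting contrast drift
    b = ref.mean() - a * scan.mean()    # offset correcting brightness drift
    return a * scan + b
```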

  6. The validation index: a new metric for validation of segmentation algorithms using two or more expert outlines with application to radiotherapy planning.

    Science.gov (United States)

    Juneja, Prabhjot; Evans, Philp M; Harris, Emma J

    2013-08-01

    Validation is required to ensure that automated segmentation algorithms are suitable for radiotherapy target definition. In the absence of true segmentation, algorithmic segmentation is validated against expert outlining of the region of interest. Multiple experts are used to overcome inter-expert variability. Several approaches have been studied in the literature, but the most appropriate approach to combining the information from multiple expert outlines into a single metric for validation is unclear. None considers a metric that can be tailored to case-specific requirements in radiotherapy planning. The validation index (VI), a new validation metric which uses the experts' level of agreement, was developed. A control parameter was introduced for the validation of segmentations required for different radiotherapy scenarios: for targets close to organs-at-risk and for difficult-to-discern targets, where large variation between experts is expected. VI was evaluated using two simulated idealized cases and data from two clinical studies. VI was compared with the commonly used Dice similarity coefficient (DSCpair-wise) and found to be more sensitive than DSCpair-wise to changes in agreement between experts. VI was shown to be adaptable to specific radiotherapy planning scenarios.
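
    The DSCpair-wise baseline against which VI is compared can be sketched as the mean Dice overlap of the algorithmic mask with each expert outline. The VI metric itself is paper-specific and not reproduced here; the function names below are illustrative.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def dsc_pairwise(algo_mask, expert_masks):
    """Mean Dice of the algorithmic segmentation against each expert
    outline -- the pair-wise DSC baseline mentioned in the abstract."""
    return float(np.mean([dice(algo_mask, e) for e in expert_masks]))
```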

  7. aMAP is a validated pipeline for registration and segmentation of high-resolution mouse brain data

    Science.gov (United States)

    Niedworok, Christian J.; Brown, Alexander P. Y.; Jorge Cardoso, M.; Osten, Pavel; Ourselin, Sebastien; Modat, Marc; Margrie, Troy W.

    2016-01-01

    The validation of automated image registration and segmentation is crucial for accurate and reliable mapping of brain connectivity and function in three-dimensional (3D) data sets. While validation standards are necessarily high and routinely met in the clinical arena, they have to date been lacking for high-resolution microscopy data sets obtained from the rodent brain. Here we present a tool for optimized automated mouse atlas propagation (aMAP) based on clinical registration software (NiftyReg) for anatomical segmentation of high-resolution 3D fluorescence images of the adult mouse brain. We empirically evaluate aMAP as a method for registration and subsequent segmentation by validating it against the performance of expert human raters. This study therefore establishes a benchmark standard for mapping the molecular function and cellular connectivity of the rodent brain. PMID:27384127

  8. Molecular species identification of Central European ground beetles (Coleoptera: Carabidae) using nuclear rDNA expansion segments and DNA barcodes

    Directory of Open Access Journals (Sweden)

    Raupach Michael J

    2010-09-01

    Full Text Available Abstract Background The identification of vast numbers of unknown organisms using DNA sequences becomes more and more important in ecological and biodiversity studies. In this context, a fragment of the mitochondrial cytochrome c oxidase I (COI) gene has been proposed as a standard DNA barcoding marker for the identification of organisms. Limitations of the COI barcoding approach can arise from its single-locus identification system, the effect of introgression events, incomplete lineage sorting, numts, heteroplasmy and maternal inheritance of intracellular endosymbionts. Consequently, the analysis of a supplementary nuclear marker system could be advantageous. Results We tested the effectiveness of the COI barcoding region and of three nuclear ribosomal expansion segments in discriminating ground beetles of Central Europe, a diverse and well-studied invertebrate taxon. As nuclear markers we determined the 18S rDNA: V4, 18S rDNA: V7 and 28S rDNA: D3 expansion segments for 344 specimens of 75 species. Seventy-three (97%) of the analysed species could be accurately identified using COI, while the combined approach of all three nuclear markers provided resolution among 71 (95%) of the studied Carabidae. Conclusion Our results confirm that the analysed nuclear ribosomal expansion segments in combination constitute a valuable and efficient supplement for classical DNA barcoding to avoid potential pitfalls when only mitochondrial data are being used. We also demonstrate the high potential of COI barcodes for the identification of even closely related carabid species.

  9. Molecular species identification of Central European ground beetles (Coleoptera: Carabidae) using nuclear rDNA expansion segments and DNA barcodes.

    Science.gov (United States)

    Raupach, Michael J; Astrin, Jonas J; Hannig, Karsten; Peters, Marcell K; Stoeckle, Mark Y; Wägele, Johann-Wolfgang

    2010-09-13

    The identification of vast numbers of unknown organisms using DNA sequences becomes more and more important in ecological and biodiversity studies. In this context, a fragment of the mitochondrial cytochrome c oxidase I (COI) gene has been proposed as standard DNA barcoding marker for the identification of organisms. Limitations of the COI barcoding approach can arise from its single-locus identification system, the effect of introgression events, incomplete lineage sorting, numts, heteroplasmy and maternal inheritance of intracellular endosymbionts. Consequently, the analysis of a supplementary nuclear marker system could be advantageous. We tested the effectiveness of the COI barcoding region and of three nuclear ribosomal expansion segments in discriminating ground beetles of Central Europe, a diverse and well-studied invertebrate taxon. As nuclear markers we determined the 18S rDNA: V4, 18S rDNA: V7 and 28S rDNA: D3 expansion segments for 344 specimens of 75 species. Seventy-three species (97%) of the analysed species could be accurately identified using COI, while the combined approach of all three nuclear markers provided resolution among 71 (95%) of the studied Carabidae. Our results confirm that the analysed nuclear ribosomal expansion segments in combination constitute a valuable and efficient supplement for classical DNA barcoding to avoid potential pitfalls when only mitochondrial data are being used. We also demonstrate the high potential of COI barcodes for the identification of even closely related carabid species.

  10. Computer-Aided Segmentation and Volumetry of Artificial Ground-Glass Nodules at Chest CT

    NARCIS (Netherlands)

    Scholten, Ernst Th.; Jacobs, Colin; van Ginneken, Bram; Willemink, Martin J.; Kuhnigk, Jan-Martin; van Ooijen, Peter M. A.; Oudkerk, Matthijs; Mali, Willem P. Th. M.; de Jong, Pim A.

    OBJECTIVE. The purpose of this study was to investigate a new software program for semiautomatic measurement of the volume and mass of ground-glass nodules (GGNs) in a chest phantom and to investigate the influence of CT scanner, reconstruction filter, tube voltage, and tube current. MATERIALS AND

  11. Automatic segmentation of myocardium at risk from contrast enhanced SSFP CMR: validation against expert readers and SPECT

    International Nuclear Information System (INIS)

    Tufvesson, Jane; Carlsson, Marcus; Aletras, Anthony H.; Engblom, Henrik; Deux, Jean-François; Koul, Sasha; Sörensson, Peder; Pernow, John; Atar, Dan; Erlinge, David; Arheden, Håkan; Heiberg, Einar

    2016-01-01

    Efficacy of reperfusion therapy can be assessed as myocardial salvage index (MSI) by determining the size of myocardium at risk (MaR) and myocardial infarction (MI), (MSI = 1-MI/MaR). Cardiovascular magnetic resonance (CMR) can be used to assess MI by late gadolinium enhancement (LGE) and MaR by either T2-weighted imaging or contrast enhanced SSFP (CE-SSFP). Automatic segmentation algorithms have been developed and validated for MI by LGE as well as for MaR by T2-weighted imaging. There are, however, no algorithms available for CE-SSFP. Therefore, the aim of this study was to develop and validate automatic segmentation of MaR in CE-SSFP. The automatic algorithm applies surface coil intensity correction and classifies myocardial intensities by Expectation Maximization to define a MaR region based on a priori regional criteria, and infarct region from LGE. Automatic segmentation was validated against manual delineation by expert readers in 183 patients with reperfused acute MI from two multi-center randomized clinical trials (RCT) (CHILL-MI and MITOCARE) and against myocardial perfusion SPECT in an additional set (n = 16). Endocardial and epicardial borders were manually delineated at end-diastole and end-systole. Manual delineation of MaR was used as reference and inter-observer variability was assessed for both manual delineation and automatic segmentation of MaR in a subset of patients (n = 15). MaR was expressed as percent of left ventricular mass (%LVM) and analyzed by bias (mean ± standard deviation). Regional agreement was analyzed by Dice Similarity Coefficient (DSC) (mean ± standard deviation). MaR assessed by manual and automatic segmentation were 36 ± 10 % and 37 ± 11 %LVM respectively with bias 1 ± 6 %LVM and regional agreement DSC 0.85 ± 0.08 (n = 183). MaR assessed by SPECT and CE-SSFP automatic segmentation were 27 ± 10 %LVM and 29 ± 7 %LVM respectively with bias 2 ± 7 %LVM. Inter-observer variability was 0 ± 3 %LVM for manual delineation and
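
    The myocardial salvage index defined at the start of this abstract (MSI = 1 - MI/MaR) can be computed directly from binary voxel masks of the infarct (MI, by LGE) and the myocardium at risk (MaR, by CE-SSFP); the function name and mask representation below are illustrative assumptions.

```python
import numpy as np

def myocardial_salvage_index(mi_mask, mar_mask):
    """MSI = 1 - MI/MaR from binary voxel masks of myocardial infarction
    (MI) and myocardium at risk (MaR), per the definition in the abstract.
    Volumes are taken as voxel counts; masks are illustrative."""
    mi = np.count_nonzero(mi_mask)
    mar = np.count_nonzero(mar_mask)
    if mar == 0:
        raise ValueError("MaR mask is empty")
    return 1.0 - mi / mar
```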

  12. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    Science.gov (United States)

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (the BI index) for the whole body and each of the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, without systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and of body segments in children, including overweight individuals, although its application for estimating arm FFM in overweight individuals requires a certain modification.
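
    The simple-regression step described above, fitting FFM against the BI index (length²/Z), can be sketched as an ordinary least-squares fit; the function name and toy units are illustrative assumptions.

```python
import numpy as np

def fit_bi_index_model(length_cm, impedance_ohm, ffm_kg):
    """Fit FFM = a * (length^2 / Z) + b by least squares, mirroring the
    simple regression on the BI index described in the abstract.
    Returns the slope a and intercept b."""
    bi_index = np.asarray(length_cm, float) ** 2 / np.asarray(impedance_ohm, float)
    a, b = np.polyfit(bi_index, np.asarray(ffm_kg, float), 1)
    return a, b

# toy data lying exactly on FFM = 0.5 * BI index + 2
a, b = fit_bi_index_model([100, 120, 140], [500, 600, 700], [12, 14, 16])
```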

  13. The role of the background: texture segregation and figure-ground segmentation.

    Science.gov (United States)

    Caputo, G

    1996-09-01

    The effects of a texture surround composed of line elements on a stimulus within which a target line element segregates were studied. Detection and discrimination of the target when it had the same orientation as the surround were impaired at short presentation times; on the other hand, no effect was present when they were reciprocally orthogonal. These results are interpreted as background completion in texture segregation: a texture made up of similar elements is represented as a continuous surface, with the contour and contrast of an embedded element inhibited. This interpretation is further confirmed with a simple line protruding from an annulus. Generally, the results are taken as evidence that local features are prevented from segmenting when they are parts of a global entity.

  14. Validation of the CrIS fast physical NH3 retrieval with ground-based FTIR

    NARCIS (Netherlands)

    Dammers, E.; Shephard, M.W.; Palm, M.; Cady-Pereira, K.; Capps, S.; Lutsch, E.; Strong, K.; Hannigan, J.W.; Ortega, I.; Toon, G.C.; Stremme, W.; Grutter, M.; Jones, N.; Smale, D.; Siemons, J.; Hrpcek, K.; Tremblay, D.; Schaap, M.; Notholt, J.; Willem Erisman, J.

    2017-01-01

    Presented here is the validation of the CrIS (Cross-track Infrared Sounder) fast physical NH3 retrieval (CFPR) column and profile measurements using ground-based Fourier transform infrared (FTIR) observations. We use the total columns and profiles from seven FTIR sites in the Network for the

  15. Semiautomatic regional segmentation to measure orbital fat volumes in thyroid-associated ophthalmopathy. A validation study.

    Science.gov (United States)

    Comerci, M; Elefante, A; Strianese, D; Senese, R; Bonavolontà, P; Alfano, B; Bonavolontà, B; Brunetti, A

    2013-08-01

    This study was designed to validate a novel semi-automated segmentation method to measure regional intra-orbital fat tissue volume in Graves' ophthalmopathy. Twenty-four orbits from 12 patients with Graves' ophthalmopathy, 24 orbits from 12 controls, ten orbits from five MRI study simulations and two orbits from a digital model were used. Following manual region of interest definition of the orbital volumes performed by two operators with different levels of expertise, an automated procedure calculated intra-orbital fat tissue volumes (global and regional, with automated definition of four quadrants). In patients with Graves' disease, clinical activity score and degree of exophthalmos were measured and correlated with intra-orbital fat volumes. Operator performance was evaluated and statistical analysis of the measurements was performed. Accurate intra-orbital fat volume measurements were obtained with coefficients of variation below 5%. The mean operator difference in total fat volume measurements was 0.56%. Patients had significantly higher intra-orbital fat volumes than controls (p<0.001 using Student's t test). Fat volumes and clinical score were significantly correlated (p<0.001). The semi-automated method described here can provide accurate, reproducible intra-orbital fat measurements with low inter-operator variation and good correlation with clinical data.

  16. Validating PET segmentation of thoracic lesions-is 4D PET necessary?

    DEFF Research Database (Denmark)

    Nielsen, M. S.; Carl, J.

    2017-01-01

    Respiratory-induced motions are prone to degrade the positron emission tomography (PET) signal with the consequent loss of image information and unreliable segmentations. This phantom study aims to assess the discrepancies relative to stationary PET segmentations, of widely used semiautomatic PET...... segmentation methods on heterogeneous target lesions influenced by motion during image acquisition. Three target lesions included dual F-18 Fluoro-deoxy-glucose (FDG) tracer concentrations as high-and low tracer activities relative to the background. Four different tracer concentration arrangements were...... segmented using three SUV threshold methods (Max40%, SUV40% and 2.5SUV) and a gradient based method (GradientSeg). Segmentations in static 3D-PET scans (PETsta) specified the reference conditions for the individual segmentation methods, target lesions and tracer concentrations. The motion included PET...

  17. Myocardial segmentation based on coronary anatomy using coronary computed tomography angiography: Development and validation in a pig model

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Mi Sun [Chung-Ang University College of Medicine, Department of Radiology, Chung-Ang University Hospital, Seoul (Korea, Republic of); Yang, Dong Hyun; Seo, Joon Beom; Kang, Joon-Won; Lim, Tae-Hwan [Asan Medical Center, University of Ulsan College of Medicine, Department of Radiology and Research Institute of Radiology, Seoul (Korea, Republic of); Kim, Young-Hak; Kang, Soo-Jin; Jung, Joonho [Asan Medical Center, University of Ulsan College of Medicine, Heart Institute, Seoul (Korea, Republic of); Kim, Namkug [Asan Medical Center, University of Ulsan College of Medicine, Department of Convergence Medicine, Seoul (Korea, Republic of); Heo, Seung-Ho [Asan Medical Center, University of Ulsan College of Medicine, Asan institute for Life Science, Seoul (Korea, Republic of); Baek, Seunghee [Asan Medical Center, University of Ulsan College of Medicine, Department of Clinical Epidemiology and Biostatistics, Seoul (Korea, Republic of); Choi, Byoung Wook [Yonsei University, Department of Diagnostic Radiology, College of Medicine, Seoul (Korea, Republic of)

    2017-10-15

    To validate a method for performing myocardial segmentation based on coronary anatomy using coronary CT angiography (CCTA). Coronary artery-based myocardial segmentation (CAMS) was developed for use with CCTA. To validate and compare this method with the conventional American Heart Association (AHA) classification, a single coronary occlusion model was prepared and validated using six pigs. The unstained occluded coronary territories of the specimens and the corresponding arterial territories from the CAMS and AHA segmentations were compared using slice-by-slice matching and 100 virtual myocardial columns. CAMS predicted the ischaemic area more precisely than the AHA method, with a percentage of matched columns of 95% versus 76% (p < 0.001), defined as the number of matched columns from the segmentation method divided by the number of unstained columns in the specimen. According to the subgroup analyses, CAMS demonstrated a higher percentage of matched columns than the AHA method in the left anterior descending artery territory (100% vs. 77%; p < 0.001) and in the mid- (99% vs. 83%; p = 0.046) and apical-level territories of the left ventricle (90% vs. 52%; p = 0.011). CAMS is a feasible method for identifying the corresponding myocardial territories of the coronary arteries using CCTA. (orig.)
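The percentage of matched columns reduces to a simple set comparison between the columns a method assigns to the occluded territory and the unstained columns in the specimen; a minimal sketch (function name and column numbering are illustrative, not taken from the paper) is:

```python
def pct_matched_columns(method_cols, unstained_cols):
    """Percentage of matched columns: the fraction of unstained (ischaemic)
    myocardial columns in the specimen that the segmentation method also
    assigns to the occluded artery's territory."""
    unstained = set(unstained_cols)
    matched = set(method_cols) & unstained
    return 100.0 * len(matched) / len(unstained)

# Toy example with virtual columns numbered 0..99: the specimen shows
# columns 10..29 unstained; the method flags columns 12..31.
pct = pct_matched_columns(range(12, 32), range(10, 30))  # matched 12..29 -> 90.0
```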

  18. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation

    Science.gov (United States)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-01

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction; and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME_exp = 0 ± 3 mm; ΔME_clin = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume
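The overlap measures reported above (DSC, PPV and Sensitivity) can all be derived from the voxel-wise confusion counts of a binary segmentation against a reference mask; a minimal NumPy sketch (array shapes and names are illustrative) is:

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Dice Similarity Coefficient, Positive Predictive Value and Sensitivity
    between a binary segmentation and a binary reference mask."""
    seg = np.asarray(seg).astype(bool)
    ref = np.asarray(ref).astype(bool)
    tp = np.logical_and(seg, ref).sum()    # true positives
    fp = np.logical_and(seg, ~ref).sum()   # false positives
    fn = np.logical_and(~seg, ref).sum()   # false negatives
    dsc = 2.0 * tp / (seg.sum() + ref.sum())
    ppv = tp / (tp + fp)
    sen = tp / (tp + fn)
    return dsc, ppv, sen

# Toy 1D example: a 6-voxel segmentation overlapping a 5-voxel reference in 4 voxels.
seg = [1, 1, 1, 1, 1, 1, 0, 0]
ref = [0, 1, 1, 1, 1, 0, 1, 0]
dsc, ppv, sen = overlap_metrics(seg, ref)  # DSC = 8/11, PPV = 4/6, Sen = 4/5
```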

  19. Intracranial aneurysm segmentation in 3D CT angiography: Method and quantitative validation with and without prior noise filtering

    International Nuclear Information System (INIS)

    Firouzian, Azadeh; Manniesing, Rashindra; Flach, Zwenneke H.; Risselada, Roelof; Kooten, Fop van; Sturkenboom, Miriam C.J.M.; Lugt, Aad van der; Niessen, Wiro J.

    2011-01-01

    Intracranial aneurysm volume and shape are important factors for predicting rupture risk, for pre-surgical planning and for follow-up studies. To obtain these parameters, manual segmentation can be employed; however, this is a tedious procedure, which is prone to inter- and intra-observer variability. Therefore, there is a need for an automated method, which is accurate, reproducible and reliable. This study aims to develop and validate an automated method for segmenting intracranial aneurysms in Computed Tomography Angiography (CTA) data. It is also investigated whether prior smoothing improves segmentation robustness and accuracy. The proposed segmentation method is implemented in the level set framework, more specifically Geodesic Active Surfaces, in which a surface is evolved to capture the aneurysmal wall via an energy minimization approach. The energy term is composed of three different image features, namely intensity, gradient magnitude and intensity variance. The method requires minimal user interaction, i.e. a single seed point inside the aneurysm needs to be placed, based on which image intensity statistics of the aneurysm are derived and used in defining the energy term. The method has been evaluated on 15 aneurysms in 11 CTA data sets by comparing the results to manual segmentations performed by two expert radiologists. Evaluation measures were Similarity Index, Average Surface Distance and Volume Difference. The results show that the automated aneurysm segmentation method is reproducible, and performs in the range of inter-observer variability in terms of accuracy. Smoothing by nonlinear diffusion with appropriate parameter settings prior to segmentation slightly improves segmentation accuracy.
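The prior smoothing investigated here is nonlinear diffusion; a minimal Perona-Malik sketch (the study's exact diffusion variant, parameter values and boundary handling may differ) is:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Perona-Malik nonlinear diffusion on a 2D image: smooths homogeneous
    regions while the edge-stopping function preserves strong gradients."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (periodic borders).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping function
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noise on a flat region is reduced while the overall intensity level is preserved.
rng = np.random.default_rng(0)
noisy = 1.0 + 0.05 * rng.standard_normal((32, 32))
smoothed = perona_malik(noisy)
```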

  20. The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation

    Science.gov (United States)

    Silva, F.; Goulet, C. A.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.

    2016-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, several ground motion intensity measure calculations, and various ground motion goodness-of-fit tools. These modules are integrated into a software system that provides user-defined, repeatable calculation of ground-motion seismograms using multiple alternative ground-motion simulation methods, along with software utilities to generate tables, plots, and maps. The BBP has been developed over the last five years in a collaborative project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The SCEC BBP software released in 2016 can be compiled and run on recent Linux and Mac OS X systems with GNU compilers. It includes five simulation methods, seven simulation regions covering California, Japan, and Eastern North America, and the ability to compare simulation results against empirical ground motion models (also known as GMPEs). The latest version includes updated ground motion simulation methods, a suite of new validation metrics and a simplified command line user interface.
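One common goodness-of-fit quantity of the kind such validation tools compute is the mean natural-log residual between observed and simulated intensity measures across stations; a hedged sketch (not the BBP's actual implementation, names are illustrative) is:

```python
import numpy as np

def gof_bias(observed, simulated):
    """Mean natural-log residual ln(obs/sim) of a ground-motion intensity
    measure (e.g. spectral acceleration) across stations; 0 means the
    simulation is unbiased for that measure."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return float(np.mean(np.log(obs / sim)))

# Identical observed and simulated values give zero bias.
bias = gof_bias([0.21, 0.35, 0.18], [0.21, 0.35, 0.18])  # -> 0.0
```

A positive bias means the simulation under-predicts the intensity measure on average; a negative bias means it over-predicts.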

  1. A Comparison of Two Commercial Volumetry Software Programs in the Analysis of Pulmonary Ground-Glass Nodules: Segmentation Capability and Measurement Accuracy

    Science.gov (United States)

    Kim, Hyungjin; Lee, Sang Min; Lee, Hyun-Ju; Goo, Jin Mo

    2013-01-01

    Objective To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. Materials and Methods In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. Results The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. Conclusion LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs. PMID:23901328
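The relative volume and attenuation measurement errors reported for the phantom nodules follow directly from the known reference values; a sketch (the specific numbers below are illustrative, not from the study) is:

```python
def relative_error_pct(measured, reference):
    """Relative measurement error in percent against the known (phantom)
    reference value; applies equally to volume or attenuation."""
    return 100.0 * (measured - reference) / reference

# A simulated 500 mm^3 nodule measured as 574.45 mm^3 is over-estimated by ~14.9%.
err = relative_error_pct(574.45, 500.0)
```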

  2. A comparison of two commercial volumetry software programs in the analysis of pulmonary ground-glass nodules: Segmentation capability and measurement accuracy

    International Nuclear Information System (INIS)

    Kim, Hyung Jin; Park, Chang Min; Lee, Sang Min; Lee, Hyun Joo; Goo, Jin Mo

    2013-01-01

    To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs.

  3. A comparison of two commercial volumetry software programs in the analysis of pulmonary ground-glass nodules: Segmentation capability and measurement accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyung Jin; Park, Chang Min; Lee, Sang Min; Lee, Hyun Joo; Goo, Jin Mo [Dept. of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul (Korea, Republic of)

    2013-08-15

    To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs.

  4. Segmentation of corpus callosum using diffusion tensor imaging: validation in patients with glioblastoma

    International Nuclear Information System (INIS)

    Nazem-Zadeh, Mohammad-Reza; Saksena, Sona; Babajani-Fermi, Abbas; Jiang, Quan; Soltanian-Zadeh, Hamid; Rosenblum, Mark; Mikkelsen, Tom; Jain, Rajan

    2012-01-01

    This paper presents a three-dimensional (3D) method for segmenting the corpus callosum in normal subjects and brain cancer patients with glioblastoma. Nineteen patients with histologically confirmed treatment-naïve glioblastoma and eleven normal control subjects underwent DTI on a 3T scanner. Based on the information inherent in diffusion tensors, a similarity measure was defined and used in the proposed algorithm. In this algorithm, the diffusion pattern of the corpus callosum was used as prior information. Subsequently, the corpus callosum was automatically divided into Witelson subdivisions. We simulated the potential rotation of the corpus callosum under tumor pressure and studied the reproducibility of the proposed segmentation method in such cases. Dice coefficients, estimated to compare automatic and manual segmentation results for Witelson subdivisions, ranged from 94% to 98% for control subjects and from 81% to 95% for tumor patients, illustrating the closeness of automatic and manual segmentations. Studying the effect of corpus callosum rotation by different Euler angles showed that although segmentation results were more sensitive to azimuth and elevation than skew, rotations caused by brain tumors do not have major effects on the segmentation results. The proposed method and similarity measure segment the corpus callosum by propagating a hyper-surface inside the structure (resulting in high sensitivity), without penetrating into neighboring fiber bundles (resulting in high specificity).

  5. Functional segmentation of dynamic PET studies: Open source implementation and validation of a leader-follower-based algorithm.

    Science.gov (United States)

    Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José

    2016-02-01

    We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different (68)Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.
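The Logan linear method estimates the volume of distribution as the slope of a transformed time-activity plot; a minimal sketch (assuming trapezoidal integration and an available plasma/input time-activity curve; details of the study's implementation may differ) is:

```python
import numpy as np

def logan_vt(t, ct, cp, t_star=0.0):
    """Logan graphical analysis: for a reversible tracer, plotting
    (integral of Ct)/Ct against (integral of Cp)/Ct becomes linear after
    time t*, with slope equal to the total volume of distribution V_T."""
    def cum_trapz(y):
        # running trapezoidal integral of y over t
        return np.concatenate(([0.0], np.cumsum(np.diff(t) * (y[1:] + y[:-1]) / 2.0)))
    int_ct, int_cp = cum_trapz(ct), cum_trapz(cp)
    m = t >= t_star                       # keep only the linear portion
    slope, _ = np.polyfit(int_cp[m] / ct[m], int_ct[m] / ct[m], 1)
    return slope

# Synthetic check: if Ct(t) = 3 * Cp(t) exactly, the Logan slope is V_T = 3.
t = np.linspace(0.1, 60.0, 100)
cp = np.exp(-0.05 * t)
ct = 3.0 * cp
vt = logan_vt(t, ct, cp, t_star=10.0)
```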

  6. Comparison of vertical ground reaction forces during overground and treadmill running. A validation study

    Directory of Open Access Journals (Sweden)

    Kluitenberg Bas

    2012-11-01

    Full Text Available Abstract Background One major drawback in measuring ground-reaction forces during running is that it is time consuming to get representative ground-reaction force (GRF) values with a traditional force platform. An instrumented force measuring treadmill can overcome the shortcomings inherent to overground testing. The purpose of the current study was to determine the validity of an instrumented force measuring treadmill for measuring vertical ground-reaction force parameters during running. Methods Vertical ground-reaction forces of experienced runners (12 male, 12 female) were obtained during overground and treadmill running at slow, preferred and fast self-selected running speeds. For each runner, 7 mean vertical ground-reaction force parameters of the right leg were calculated based on five successful overground steps and 30 seconds of treadmill running data. Intraclass correlations (ICC(3,1)) and ratio limits of agreement (RLOA) were used for further analysis. Results Qualitatively, the overground and treadmill ground-reaction force curves for heelstrike runners and non-heelstrike runners were very similar. Quantitatively, the time-related parameters and active peak showed excellent agreement (ICCs between 0.76 and 0.95, RLOA between 5.7% and 15.5%). Impact peak showed modest agreement (ICCs between 0.71 and 0.76, RLOA between 19.9% and 28.8%). The maximal and average loading-rate showed modest to excellent ICCs (between 0.70 and 0.89), but RLOA were higher (between 34.3% and 45.4%). Conclusions The results of this study demonstrated that the treadmill is a moderate to highly valid tool for the assessment of vertical ground-reaction forces during running for runners who showed a consistent landing strategy during overground and treadmill running. The high stride-to-stride variance during both overground and treadmill running demonstrates the importance of measuring sufficient steps for representative ground-reaction force values. Therefore, an
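The ICC(3,1) used here is the two-way mixed, single-measures intraclass correlation; it can be sketched from its ANOVA mean squares as below (the layout of the data array, subjects by conditions, is an assumption for illustration):

```python
import numpy as np

def icc_3_1(x):
    """ICC(3,1) -- two-way mixed model, single measures, consistency --
    for an (n_subjects, k_conditions) array, e.g. one GRF parameter per
    runner measured overground vs. on the treadmill."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-condition means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)          # between-subjects MS
    mse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum() \
          / ((n - 1) * (k - 1))                                   # residual MS
    return (msr - mse) / (msr + (k - 1) * mse)

# Two perfectly consistent conditions (constant additive offset) give ICC = 1.
x = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
icc = icc_3_1(x)
```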

  7. Analysis of a kinetic multi-segment foot model. Part I: Model repeatability and kinematic validity.

    Science.gov (United States)

    Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L

    2012-04-01

    Kinematic multi-segment foot models are still evolving, but have seen increased use in clinical and research settings. The addition of kinetics may increase knowledge of foot and ankle function as well as influence multi-segment foot model evolution; however, previous kinetic models are too complex for clinical use. In this study we present a three-segment kinetic foot model and thorough evaluation of model performance during normal gait. In this first of two companion papers, model reference frames and joint centers are analyzed for repeatability, joint translations are measured, segment rigidity characterized, and sample joint angles presented. Within-tester and between-tester repeatability were first assessed using 10 healthy pediatric participants, while kinematic parameters were subsequently measured on 17 additional healthy pediatric participants. Repeatability errors were generally low for all sagittal plane measures as well as transverse plane Hindfoot and Forefoot segments (median < 3°), while the least repeatable orientations were the Hindfoot coronal plane and Hallux transverse plane. Joint translations were generally less than 2 mm in any one direction, while segment rigidity analysis suggested rigid body behavior for the Shank and Hindfoot, with the Forefoot violating the rigid body assumptions in terminal stance/pre-swing. Joint excursions were consistent with previously published studies. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. A New and Simple Practical Plane Dividing Hepatic Segment 2 and 3 of the Liver: Evaluation of Its Validity

    International Nuclear Information System (INIS)

    Lee, Ho Yun; Chung, Jin Wook; Lee, Jeong Min; Yoon, Chang Jin; Lee, Whal; Jae, Hwan Jun; Yin, Yong Hu; Kang, Sung Gwon; Park, Jae Hyung

    2007-01-01

    The conventional method of dividing hepatic segments 2 (S2) and 3 (S3) is subjective, and its CT interpretation is unclear. The purpose of our study was to test the validity of our hypothesis that the actual plane dividing S2 and S3 is a vertical plane equidistant from the S2 and S3 portal veins in clinical situations. We prospectively performed thin-section iodized-oil CT immediately after segmental chemoembolization of S2 or S3 in 27 consecutive patients and measured the angle of the intersegmental plane on sagittal multiplanar reformation (MPR) images to verify its vertical nature. Our hypothetical plane dividing S2 and S3 is vertical and equidistant from the S2 and S3 portal veins (vertical method). To clinically validate this, we retrospectively collected 102 patients with small solitary hepatocellular carcinomas (HCC) on S2 or S3 whose segmental location was confirmed angiographically. Two reviewers independently predicted the segmental location of each tumor at CT using the vertical method in blind trials. The agreement between CT interpretation and angiographic results was analyzed with Kappa values. We also compared the vertical method with the horizontal one. In MPR images, the average angle of the intersegmental plane was slanted 15 degrees anteriorly from the vertical plane. In predicting the segmental location of small HCC with the vertical method, the Kappa value between CT interpretation and angiographic result was 0.838 for reviewer 1 and 0.756 for reviewer 2. Inter-observer agreement was 0.918. The vertical method was superior to the horizontal method for localization of HCC in the left lobe (p < 0.0001 for reviewers 1 and 2). The proposed vertical plane equidistant from the S2 and S3 portal veins is simple to use and useful for dividing S2 and S3 of the liver.
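The Kappa values quoted above measure chance-corrected agreement between the CT-based prediction and the angiographic result; a minimal Cohen's kappa sketch (the labels and counts below are illustrative, not the study's data) is:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two categorical ratings of the same cases,
    e.g. the CT-based call (S2 vs. S3) against the angiographic reference."""
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n                      # observed agreement
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)   # chance agreement
    return (po - pe) / (1 - pe)

ct_call = ["S2", "S2", "S3", "S3"]
angio   = ["S2", "S2", "S3", "S2"]
k = cohens_kappa(ct_call, angio)  # po = 0.75, pe = 0.5 -> kappa = 0.5
```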

  9. Active debris removal GNC challenges over design and required ground validation

    Science.gov (United States)

    Colmenarejo, Pablo; Avilés, Marcos; di Sotto, Emanuele

    2015-06-01

    Because of the exponential growth of space debris, access to space in the medium-term future is considered to be seriously compromised, particularly within LEO polar Sun-synchronous orbits and within geostationary orbits. The active debris removal (ADR) application poses new and challenging requirements on: first, the new required Guidance, Navigation and Control (GNC) technologies and, second, how to validate these new technologies before applying them in real missions. There is no doubt about the strong safety and collision-risk aspects affecting real operational ADR missions. But it shall be considered that even ADR demonstration missions will be affected by a significant risk of collision during the demonstration, and that the ADR GNC systems/technologies to be used shall be well matured before being used/demonstrated in space. Specific and dedicated on-ground validation approaches, techniques and facilities are mandatory. The different ADR techniques can be roughly catalogued into three main groups (rigid capture, non-rigid capture and contactless). All of them have a strong impact on the GNC system of the active vehicle during the capture/proximity phase and, particularly, during the active vehicle/debris combo control phase after capture and during the de-orbiting phase. The main operational phases in an ADR scenario are: (1) ground-controlled phase (ADR vehicle and debris are far apart), (2) fine orbit synchronization phase (ADR vehicle reaches the debris ±V-bar), (3) short-range phase (along-track distance reduction to tens or hundreds of metres), (4) terminal approach/capture phase and (5) de-orbiting. While phases 1-3 are somewhat conventional and already addressed in detail during past/on-going studies related to rendezvous and/or formation flying, phases 4-5 are very specific and not mature in terms of the needed GNC technologies and hardware equipment.
GMV is currently performing different internal activities and ESA studies/developments related to ADR mission, GNC and

  10. A proposed strategy for the validation of ground-water flow and solute transport models

    International Nuclear Information System (INIS)

    Davis, P.A.; Goodrich, M.T.

    1991-01-01

    Ground-water flow and transport models can be thought of as a combination of conceptual and mathematical models and the data that characterize a given system. The judgment of the validity or invalidity of a model depends on both the adequacy of the data and the model structure (i.e., the conceptual and mathematical model). This report proposes a validation strategy for testing both components independently. The strategy is based on the philosophy that a model cannot be proven valid, only invalid or not invalid. In addition, the authors believe that a model should not be judged in the absence of its intended purpose. Hence, a flow and transport model may be invalid for one purpose but not invalid for another. 9 refs

  11. A Ground-Based Validation System of Teleoperation for a Space Robot

    Directory of Open Access Journals (Sweden)

    Xueqian Wang

    2012-10-01

    Full Text Available Teleoperation of space robots is very important for future on-orbit service. In order to ensure the task is accomplished successfully, ground experiments are required to verify the function and validity of the teleoperation system before a space robot is launched. In this paper, a ground-based validation subsystem is developed as part of a teleoperation system. The subsystem is mainly composed of four parts: the input verification module, the onboard verification module, the dynamic and image workstation, and the communication simulator. The input verification module, consisting of the hardware and software of the master, is used to verify the input ability. The onboard verification module, consisting of the same hardware and software as the onboard processor, is used to verify the processor's computing ability and execution schedule. In addition, the dynamic and image workstation calculates the dynamic response of the space robot and target, and generates emulated camera images, including the hand-eye cameras, global-vision camera and rendezvous camera. The communication simulator provides realistic communication conditions, i.e., time delays and communication bandwidth. Lastly, we integrated a teleoperation system and conducted many experiments on the system. Experiment results show that the ground system is very useful for verifying teleoperation technology.

  12. Hippocampal unified multi-atlas network (HUMAN): protocol and scale validation of a novel segmentation tool.

    Science.gov (United States)

    Amoroso, N; Errico, R; Bruno, S; Chincarini, A; Garuccio, E; Sensi, F; Tangaro, S; Tateo, A; Bellotti, R

    2015-11-21

    In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches, atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data-driven template, resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol-inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer's Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice[Formula: see text] and Dice[Formula: see text]). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
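In a multi-atlas pipeline the propagated labels must be fused into one segmentation; HUMAN fuses neural-network outputs, but the idea can be illustrated with the simplest fusion rule, a per-voxel majority vote (this stand-in is not the paper's method):

```python
import numpy as np

def majority_vote(label_maps):
    """Per-voxel majority vote over binary label maps propagated from
    several atlases -- the simplest form of the label fusion step."""
    votes = np.stack(label_maps).sum(axis=0)           # foreground votes per voxel
    return (2 * votes > len(label_maps)).astype(int)   # strict majority wins

# Three atlases voting on four voxels.
maps = [np.array([1, 1, 0, 1]),
        np.array([1, 0, 0, 1]),
        np.array([0, 1, 0, 1])]
fused = majority_vote(maps)  # -> [1, 1, 0, 1]
```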

  13. FUZZY CLUSTERWISE REGRESSION IN BENEFIT SEGMENTATION - APPLICATION AND INVESTIGATION INTO ITS VALIDITY

    NARCIS (Netherlands)

    STEENKAMP, JBEM; WEDEL, M

    This article describes a new technique for benefit segmentation, fuzzy clusterwise regression analysis (FCR). It combines clustering with prediction and is based on multiattribute models of consumer behavior. FCR is especially useful when the number of observations per subject is small, when the

  14. Ground Water Atlas of the United States: Segment 11, Delaware, Maryland, New Jersey, North Carolina, Pennsylvania, Virginia, West Virginia

    Science.gov (United States)

    Trapp, Henry; Horn, Marilee A.

    1997-01-01

    Segment 11 consists of the States of Delaware, Maryland, New Jersey, North Carolina, West Virginia, and the Commonwealths of Pennsylvania and Virginia. All but West Virginia border on the Atlantic Ocean or tidewater. Pennsylvania also borders on Lake Erie. Small parts of northwestern and north-central Pennsylvania drain to Lake Erie and Lake Ontario; the rest of the segment drains either to the Atlantic Ocean or the Gulf of Mexico. Major rivers include the Hudson, the Delaware, the Susquehanna, the Potomac, the Rappahannock, the James, the Chowan, the Neuse, the Tar, the Cape Fear, and the Yadkin-Peedee, all of which drain into the Atlantic Ocean, and the Ohio and its tributaries, which drain to the Gulf of Mexico. Although rivers are important sources of water supply for many cities, such as Trenton, N.J.; Philadelphia and Pittsburgh, Pa.; Baltimore, Md.; Washington, D.C.; Richmond, Va.; and Raleigh, N.C., one-fourth of the population, particularly the people who live on the Coastal Plain, depends on ground water for supply. Such cities as Camden, N.J.; Dover, Del.; Salisbury and Annapolis, Md.; Parkersburg and Weirton, W.Va.; Norfolk, Va.; and New Bern and Kinston, N.C., use ground water as a source of public supply. All the water in Segment 11 originates as precipitation. Average annual precipitation ranges from less than 36 inches in parts of Pennsylvania, Maryland, Virginia, and West Virginia to more than 80 inches in parts of southwestern North Carolina (fig. 1). In general, precipitation is greatest in mountainous areas (because water tends to condense from moisture-laden air masses as the air passes over the higher altitudes) and near the coast, where water vapor that has been evaporated from the ocean is picked up by onshore winds and falls as precipitation when it reaches the shoreline. 
Some of the precipitation returns to the atmosphere by evapotranspiration (evaporation plus transpiration by plants), but much of it either flows overland into streams as

  15. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduced WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relied on contrast: nonlinear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH were then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion loads. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods, and k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the supervised methods, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN, 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN, 0.67-0.72 for SVM), and did not need any training set.
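The spatial agreement score reported above, the similarity index (SI), is the Dice coefficient between the automated and the manual lesion masks. A minimal sketch (the function name and the toy masks are illustrative, not part of WHASA):

```python
def similarity_index(auto_mask, manual_mask):
    """Dice similarity index (SI) between two flat binary masks (0/1 per voxel)."""
    intersection = sum(1 for a, m in zip(auto_mask, manual_mask) if a and m)
    denom = sum(auto_mask) + sum(manual_mask)
    return 2.0 * intersection / denom if denom else 1.0

auto = [1, 1, 1, 1, 0, 0]    # automated segmentation
manual = [0, 1, 1, 1, 1, 0]  # manual delineation
print(similarity_index(auto, manual))  # 2*3/(4+4) = 0.75
```

An SI of 1.0 means perfect overlap; the 0.72 mean SI reported above indicates substantial but imperfect spatial agreement even when volumes agree closely (ICC 0.96).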

  16. GPM GROUND VALIDATION DUAL-FREQUENCY DUAL-POLARIZED DOPPLER RADAR (D3R) IFLOODS V1

    Data.gov (United States)

National Aeronautics and Space Administration — The GPM Ground Validation Dual-frequency Dual-polarized Doppler Radar (D3R) IFloodS data set contains radar reflectivity and Doppler velocity measurements. The D3R...

  17. Validation of automated supervised segmentation of multibeam backscatter data from the Chatham Rise, New Zealand

    Science.gov (United States)

    Hillman, Jess I. T.; Lamarche, Geoffroy; Pallentin, Arne; Pecher, Ingo A.; Gorman, Andrew R.; Schneider von Deimling, Jens

    2018-06-01

    Using automated supervised segmentation of multibeam backscatter data to delineate seafloor substrates is a relatively novel technique. Low-frequency multibeam echosounders (MBES), such as the 12-kHz EM120, present particular difficulties since the signal can penetrate several metres into the seafloor, depending on substrate type. We present a case study illustrating how a non-targeted dataset may be used to derive information from multibeam backscatter data regarding distribution of substrate types. The results allow us to assess limitations associated with low frequency MBES where sub-bottom layering is present, and test the accuracy of automated supervised segmentation performed using SonarScope® software. This is done through comparison of predicted and observed substrate from backscatter facies-derived classes and substrate data, reinforced using quantitative statistical analysis based on a confusion matrix. We use sediment samples, video transects and sub-bottom profiles acquired on the Chatham Rise, east of New Zealand. Inferences on the substrate types are made using the Generic Seafloor Acoustic Backscatter (GSAB) model, and the extents of the backscatter classes are delineated by automated supervised segmentation. Correlating substrate data to backscatter classes revealed that backscatter amplitude may correspond to lithologies up to 4 m below the seafloor. Our results emphasise several issues related to substrate characterisation using backscatter classification, primarily because the GSAB model does not only relate to grain size and roughness properties of substrate, but also accounts for other parameters that influence backscatter. Better understanding these limitations allows us to derive first-order interpretations of sediment properties from automated supervised segmentation.
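The quantitative check of predicted against observed substrate described above is a standard confusion-matrix analysis. A sketch of the two usual summary statistics, overall accuracy and Cohen's kappa; the 2-class matrix is illustrative, not the paper's actual counts:

```python
def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows: observed substrate class, columns: predicted backscatter class)."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    row_sums = [sum(row) for row in confusion]
    col_sums = [sum(col) for col in zip(*confusion)]
    # chance agreement expected from the marginal class frequencies
    expected = sum(r * c for r, c in zip(row_sums, col_sums)) / total ** 2
    return observed, (observed - expected) / (1 - expected)

# Illustrative matrix: 8/10 and 9/11 ground-truth samples classified correctly
acc, kappa = accuracy_and_kappa([[8, 2], [1, 9]])
print(acc, kappa)  # 0.85 0.7
```

Kappa corrects the raw accuracy for chance agreement, which matters when one substrate class dominates the seafloor samples.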

  18. Simulating Deformations of MR Brain Images for Validation of Atlas-based Segmentation and Registration Algorithms

    OpenAIRE

    Xue, Zhong; Shen, Dinggang; Karacali, Bilge; Stern, Joshua; Rottenberg, David; Davatzikos, Christos

    2006-01-01

    Simulated deformations and images can act as the gold standard for evaluating various template-based image segmentation and registration algorithms. Traditional deformable simulation methods, such as the use of analytic deformation fields or the displacement of landmarks followed by some form of interpolation, are often unable to construct rich (complex) and/or realistic deformations of anatomical organs. This paper presents new methods aiming to automatically simulate realistic inter- and in...

  19. Development of a histologically validated segmentation protocol for the hippocampal body.

    Science.gov (United States)

    Steve, Trevor A; Yasuda, Clarissa L; Coras, Roland; Lail, Mohjevan; Blumcke, Ingmar; Livy, Daniel J; Malykhin, Nikolai; Gross, Donald W

    2017-08-15

    Recent findings have demonstrated that hippocampal subfields can be selectively affected in different disease states, which has led to efforts to segment the human hippocampus with in vivo magnetic resonance imaging (MRI). However, no studies have examined the histological accuracy of subfield segmentation protocols. The presence of MRI-visible anatomical landmarks with known correspondence to histology represents a fundamental prerequisite for in vivo hippocampal subfield segmentation. In the present study, we aimed to: 1) develop a novel method for hippocampal body segmentation, based on two MRI-visible anatomical landmarks (stratum lacunosum moleculare [SLM] & dentate gyrus [DG]), and assess its accuracy in comparison to the gold standard direct histological measurements; 2) quantify the accuracy of two published segmentation strategies in comparison to the histological gold standard; and 3) apply the novel method to ex vivo MRI and correlate the results with histology. Ultra-high resolution ex vivo MRI was performed on six whole cadaveric hippocampal specimens, which were then divided into 22 blocks and histologically processed. The hippocampal bodies were segmented into subfields based on histological criteria and subfield boundaries and areas were directly measured. A novel method was developed using mean percentage of the total SLM distance to define subfield boundaries. Boundary distances and subfield areas on histology were then determined using the novel method and compared to the gold standard histological measurements. The novel method was then used to determine ex vivo MRI measures of subfield boundaries and areas, which were compared to histological measurements. For direct histological measurements, the mean percentages of total SLM distance were: Subiculum/CA1 = 9.7%, CA1/CA2 = 78.4%, CA2/CA3 = 97.5%. 
When applied to histology, the novel method provided accurate measures for CA1/CA2 (ICC = 0.93) and CA2/CA3 (ICC = 0.97) boundaries, but not for the
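The novel boundary rule above can be applied directly: the mean percentages of total SLM distance are the values reported in the study, while the function name and the 20 mm example are illustrative:

```python
# Mean subfield boundary positions as percentages of total SLM distance,
# from the direct histological measurements reported in the study.
BOUNDARY_PCT = {"Subiculum/CA1": 9.7, "CA1/CA2": 78.4, "CA2/CA3": 97.5}

def boundary_positions(total_slm_mm):
    """Predicted subfield boundary positions (mm along the SLM) for one specimen."""
    return {name: total_slm_mm * pct / 100.0 for name, pct in BOUNDARY_PCT.items()}

print(boundary_positions(20.0))
# e.g. for a 20 mm SLM: Subiculum/CA1 at ~1.94 mm, CA1/CA2 at ~15.68 mm, CA2/CA3 at ~19.5 mm
```

Because only the total SLM distance is needed, the same rule transfers from histology to ex vivo (and potentially in vivo) MRI where the SLM is visible.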

  20. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    Science.gov (United States)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop algorithms with a unique dataset, which could influence the development with its particularities. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, going from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. Datasets provide various space configurations and present numerous different occluding objects, for example desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  1. Validation of OMI erythemal doses with multi-sensor ground-based measurements in Thessaloniki, Greece

    Science.gov (United States)

    Zempila, Melina Maria; Fountoulakis, Ilias; Taylor, Michael; Kazadzis, Stelios; Arola, Antti; Koukouli, Maria Elissavet; Bais, Alkiviadis; Meleti, Chariklia; Balis, Dimitrios

    2018-06-01

The aim of this study is to validate the Ozone Monitoring Instrument (OMI) erythemal dose rates using ground-based measurements in Thessaloniki, Greece. In the Laboratory of Atmospheric Physics of the Aristotle University of Thessaloniki, a Yankee Environmental System UVB-1 radiometer measures the erythemal dose rates every minute, and a Norsk Institutt for Luftforskning (NILU) multi-filter radiometer provides multi-filter-based irradiances that were used to derive erythemal dose rates for the period 2005-2014. Both these datasets were independently validated against collocated UV irradiance spectra from a Brewer MkIII spectrophotometer. Cloud detection was performed based on measurements of the global horizontal radiation from a Kipp & Zonen pyranometer and from NILU measurements in the visible range. The satellite versus ground observation validation was performed taking into account the effect of temporal averaging, limitations related to OMI quality control criteria, cloud conditions, the solar zenith angle and atmospheric aerosol loading. Aerosol optical depth was also retrieved using a collocated CIMEL sunphotometer in order to assess its impact on the comparisons. The effect of satellite versus ground-based differences in total ozone columns on the erythemal dose comparisons was also investigated. Since most of the public awareness alerts are based on UV Index (UVI) classifications, an analysis and assessment of the OMI capability for retrieving UVIs was also performed. An overestimation of the OMI erythemal product by 3-6% and 4-8% with respect to ground measurements is observed when examining overpass and noontime estimates, respectively. The comparisons revealed a relatively small solar zenith angle dependence, with the OMI data showing a slight dependence on aerosol load, especially at high aerosol optical depth values. A mean underestimation of 2% in OMI total ozone columns under cloud-free conditions was found to lead to an overestimation in OMI erythemal

  2. Validation of OMI UV measurements against ground-based measurements at a station in Kampala, Uganda

    Science.gov (United States)

    Muyimbwa, Dennis; Dahlback, Arne; Stamnes, Jakob; Hamre, Børge; Frette, Øyvind; Ssenyonga, Taddeo; Chen, Yi-Chun

    2015-04-01

We present solar ultraviolet (UV) irradiance data measured with a NILU-UV instrument at a ground site in Kampala (0.31°N, 32.58°E), Uganda for the period 2005-2014. The data were analyzed and compared with UV irradiances inferred from the Ozone Monitoring Instrument (OMI) for the same period. Kampala is located on the shores of Lake Victoria, Africa's largest fresh water lake, which may influence the climate and weather conditions of the region. Also, there is a heavy use of old cars, which may contribute to a high anthropogenic loading of absorbing aerosols. The OMI surface UV algorithm does not account for absorbing aerosols, which may lead to systematic overestimation of surface UV irradiances inferred from OMI satellite data. We retrieved UV index values from OMI UV irradiances and validated them against the ground-based UV index values obtained from NILU-UV measurements. The UV index values were found to follow a seasonal pattern similar to that of the clouds and the rainfall. OMI-inferred UV index values were overestimated with a mean bias of about 28% under all-sky conditions, but the mean bias was reduced to about 8% under clear-sky conditions when only days with radiation modification factor (RMF) greater than 65% were considered. However, when days with RMF greater than 70, 75, and 80% were considered, OMI-inferred UV index values were found to agree with the ground-based UV index values to within 5, 3, and 1%, respectively. In the validation we identified clouds/aerosols, which were present in 88% of the measurements, as the main cause of the OMI-inferred overestimation of the UV index.
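The mean bias quoted in these satellite-versus-ground comparisons is the average relative difference between OMI and ground-based values. A minimal sketch; the numbers are toy values chosen to reproduce the ~28% figure, not actual measurements:

```python
def mean_bias_percent(satellite, ground):
    """Mean relative bias (%) of satellite retrievals against ground-based values."""
    rel = [(s - g) / g for s, g in zip(satellite, ground)]
    return 100.0 * sum(rel) / len(rel)

omi_uvi = [12.8, 6.4, 3.2]     # illustrative OMI-inferred UV index values
ground_uvi = [10.0, 5.0, 2.5]  # illustrative NILU-UV-derived values
print(mean_bias_percent(omi_uvi, ground_uvi))  # ~28% overestimation
```

Filtering the same statistic to near-clear-sky days (high RMF) is what shrinks the bias from 28% toward 1-8% in the study above.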

  3. A semi-automated volumetric software for segmentation and perfusion parameter quantification of brain tumors using 320-row multidetector computed tomography: a validation study

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Soo Young; Suh, Sangil; Ryoo, Inseon; Park, Arim; Seol, Hae Young [Korea University Guro Hospital, Department of Radiology, Seoul (Korea, Republic of); Noh, Kyoung Jin [Soonchunhyang University, Department of Electronic Engineering, Asan (Korea, Republic of); Shim, Hackjoon [Toshiba Medical Systems Korea Co., Seoul (Korea, Republic of)

    2017-05-15

We developed semi-automated volumetric software, NPerfusion, to segment brain tumors and quantify perfusion parameters on whole-brain CT perfusion (WBCTP) images. The purpose of this study was to assess the feasibility of the software and to validate its performance compared with manual segmentation. Twenty-nine patients with pathologically proven brain tumors who underwent preoperative WBCTP between August 2012 and February 2015 were included. Three perfusion parameters, arterial flow (AF), equivalent blood volume (EBV), and Patlak flow (PF, which is a measure of permeability of capillaries), of brain tumors were generated by a commercial software and then quantified volumetrically by NPerfusion, which also semi-automatically segmented tumor boundaries. The quantification was validated by comparison with that of manual segmentation in terms of the concordance correlation coefficient and Bland-Altman analysis. With NPerfusion, we successfully performed segmentation and quantified whole volumetric perfusion parameters of all 29 brain tumors that showed consistent perfusion trends with previous studies. The validation of the perfusion parameter quantification exhibited almost perfect agreement with manual segmentation, with Lin concordance correlation coefficients (ρ_c) for AF, EBV, and PF of 0.9988, 0.9994, and 0.9976, respectively. On Bland-Altman analysis, most differences between this software and manual segmentation on the commercial software were within the limit of agreement. NPerfusion successfully performs segmentation of brain tumors and calculates perfusion parameters of brain tumors. We validated this semi-automated segmentation software by comparing it with manual segmentation. NPerfusion can be used to calculate volumetric perfusion parameters of brain tumors from WBCTP. (orig.)
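The agreement statistic used here, Lin's concordance correlation coefficient, penalizes both poor correlation and systematic offset between the two measurement series. A minimal sketch of the statistic itself (not the NPerfusion implementation):

```python
def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n   # population variances
    vy = sum((yi - my) ** 2 for yi in y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

print(lin_ccc([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0: perfect concordance
print(lin_ccc([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # < 1: same shape, constant offset
```

Unlike Pearson's r, which would be 1.0 in both examples, ρ_c drops whenever one method is biased relative to the other, which is why values of 0.9976-0.9994 indicate near-interchangeable measurements.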

  4. A semi-automated volumetric software for segmentation and perfusion parameter quantification of brain tumors using 320-row multidetector computed tomography: a validation study.

    Science.gov (United States)

    Chae, Soo Young; Suh, Sangil; Ryoo, Inseon; Park, Arim; Noh, Kyoung Jin; Shim, Hackjoon; Seol, Hae Young

    2017-05-01

We developed semi-automated volumetric software, NPerfusion, to segment brain tumors and quantify perfusion parameters on whole-brain CT perfusion (WBCTP) images. The purpose of this study was to assess the feasibility of the software and to validate its performance compared with manual segmentation. Twenty-nine patients with pathologically proven brain tumors who underwent preoperative WBCTP between August 2012 and February 2015 were included. Three perfusion parameters, arterial flow (AF), equivalent blood volume (EBV), and Patlak flow (PF, which is a measure of permeability of capillaries), of brain tumors were generated by a commercial software and then quantified volumetrically by NPerfusion, which also semi-automatically segmented tumor boundaries. The quantification was validated by comparison with that of manual segmentation in terms of the concordance correlation coefficient and Bland-Altman analysis. With NPerfusion, we successfully performed segmentation and quantified whole volumetric perfusion parameters of all 29 brain tumors that showed consistent perfusion trends with previous studies. The validation of the perfusion parameter quantification exhibited almost perfect agreement with manual segmentation, with Lin concordance correlation coefficients (ρ_c) for AF, EBV, and PF of 0.9988, 0.9994, and 0.9976, respectively. On Bland-Altman analysis, most differences between this software and manual segmentation on the commercial software were within the limit of agreement. NPerfusion successfully performs segmentation of brain tumors and calculates perfusion parameters of brain tumors. We validated this semi-automated segmentation software by comparing it with manual segmentation. NPerfusion can be used to calculate volumetric perfusion parameters of brain tumors from WBCTP.

  5. Classification in hyperspectral images by independent component analysis, segmented cross-validation and uncertainty estimates

    Directory of Open Access Journals (Sweden)

    Beatriz Galindo-Prieto

    2018-02-01

Full Text Available Independent component analysis combined with various strategies for cross-validation, uncertainty estimates by jack-knifing and critical Hotelling's T2 limits estimation, proposed in this paper, is used for classification purposes in hyperspectral images. To the best of our knowledge, the combined approach of methods used in this paper has not previously been applied to hyperspectral imaging analysis for interpretation and classification in the literature. The data analysis performed here aims to distinguish between four different types of plastics, some of them containing brominated flame retardants, from their near infrared hyperspectral images. The results showed that the approach used here can be successfully applied for unsupervised classification. A comparison of validation approaches, especially leave-one-out cross-validation and a regions-of-interest validation scheme, is also presented.

  6. The Electromagnetic Field for a PEC Wedge Over a Grounded Dielectric Slab: 1. Formulation and Validation

    Science.gov (United States)

    Daniele, Vito G.; Lombardi, Guido; Zich, Rodolfo S.

    2017-12-01

Complex scattering problems often involve composite structures in which wedges and penetrable substrates may interact in the near field. In this paper (Part 1), together with its companion paper (Part 2), we study the canonical problem constituted by a Perfectly Electrically Conducting (PEC) wedge lying on a grounded dielectric slab, with a comprehensive mathematical model based on the application of the Generalized Wiener-Hopf Technique (GWHT) with the help of equivalent circuital representations for linear homogeneous regions (angular and layered regions). The proposed procedure is valid for the general case, and the papers focus on E-polarization. The solution is obtained using analytical and semianalytical approaches that reduce the Wiener-Hopf factorization to integral equations. Several numerical test cases validate the proposed method. The scope of Part 1 is to present the method and its validation applied to the problem. The companion paper Part 2 focuses on the properties of the solution, and it presents physical and engineering insights such as Geometrical Theory of Diffraction (GTD)/Uniform Theory of Diffraction (UTD) coefficients, total far fields, modal fields, and excitation of surface and leaky waves for different kinds of sources. The structure is of interest in antenna technologies and electromagnetic compatibility (tip on a substrate with guiding and antenna properties).

  7. Status Update on the GPM Ground Validation Iowa Flood Studies (IFloodS) Field Experiment

    Science.gov (United States)

    Petersen, Walt; Krajewski, Witold

    2013-04-01

    The overarching objective of integrated hydrologic ground validation activities supporting the Global Precipitation Measurement Mission (GPM) is to provide better understanding of the strengths and limitations of the satellite products, in the context of hydrologic applications. To this end, the GPM Ground Validation (GV) program is conducting the first of several hydrology-oriented field efforts: the Iowa Flood Studies (IFloodS) experiment. IFloodS will be conducted in the central to northeastern part of Iowa in Midwestern United States during the months of April-June, 2013. Specific science objectives and related goals for the IFloodS experiment can be summarized as follows: 1. Quantify the physical characteristics and space/time variability of rain (rates, DSD, process/"regime") and map to satellite rainfall retrieval uncertainty. 2. Assess satellite rainfall retrieval uncertainties at instantaneous to daily time scales and evaluate propagation/impact of uncertainty in flood-prediction. 3. Assess hydrologic predictive skill as a function of space/time scales, basin morphology, and land use/cover. 4. Discern the relative roles of rainfall quantities such as rate and accumulation as compared to other factors (e.g. transport of water in the drainage network) in flood genesis. 5. Refine approaches to "integrated hydrologic GV" concept based on IFloodS experiences and apply to future GPM Integrated GV field efforts. These objectives will be achieved via the deployment of the NASA NPOL S-band and D3R Ka/Ku-band dual-polarimetric radars, University of Iowa X-band dual-polarimetric radars, a large network of paired rain gauge platforms with attendant soil moisture and temperature probes, a large network of both 2D Video and Parsivel disdrometers, and USDA-ARS gauge and soil-moisture measurements (in collaboration with the NASA SMAP mission). The aforementioned measurements will be used to complement existing operational WSR-88D S-band polarimetric radar measurements

  8. Validation of MOPITT carbon monoxide using ground-based Fourier transform infrared spectrometer data from NDACC

    Science.gov (United States)

    Buchholz, Rebecca R.; Deeter, Merritt N.; Worden, Helen M.; Gille, John; Edwards, David P.; Hannigan, James W.; Jones, Nicholas B.; Paton-Walsh, Clare; Griffith, David W. T.; Smale, Dan; Robinson, John; Strong, Kimberly; Conway, Stephanie; Sussmann, Ralf; Hase, Frank; Blumenstock, Thomas; Mahieu, Emmanuel; Langerock, Bavo

    2017-06-01

The Measurements of Pollution in the Troposphere (MOPITT) satellite instrument provides the longest continuous dataset of carbon monoxide (CO) from space. We perform the first validation of MOPITT version 6 retrievals using total column CO measurements from ground-based remote-sensing Fourier transform infrared spectrometers (FTSs). Validation uses data recorded at 14 stations that span a wide range of latitudes (80° N to 78° S) in the Network for the Detection of Atmospheric Composition Change (NDACC). MOPITT measurements are spatially co-located with each station, and different vertical sensitivities between instruments are accounted for by using MOPITT averaging kernels (AKs). All three MOPITT retrieval types are analyzed: thermal infrared (TIR-only), joint thermal and near infrared (TIR-NIR), and near infrared (NIR-only). Generally, MOPITT measurements overestimate CO relative to FTS measurements, but the bias is typically less than 10 %. Mean bias is 2.4 % for TIR-only, 5.1 % for TIR-NIR, and 6.5 % for NIR-only. The TIR-NIR and NIR-only products consistently produce a larger bias and lower correlation than the TIR-only. Validation performance of MOPITT for TIR-only and TIR-NIR retrievals over land or water scenes is equivalent. The four MOPITT detector element pixels are validated separately to account for their different uncertainty characteristics. Pixel 1 produces the highest standard deviation and lowest correlation for all three MOPITT products. However, for TIR-only and TIR-NIR, the error-weighted average that includes all four pixels often provides the best correlation, indicating compensating pixel biases and well-captured error characteristics. We find that MOPITT bias does not depend on latitude but rather is influenced by the proximity to rapidly changing atmospheric CO. MOPITT bias drift has been bounded geographically to within ±0.5 % yr⁻¹ or lower at almost all locations.
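The error-weighted average across the four detector pixels mentioned above is presumably an inverse-variance weighting; a sketch under that assumption (the pixel values and 1-sigma uncertainties are illustrative, not MOPITT data):

```python
def error_weighted_mean(values, errors):
    """Inverse-variance (error-weighted) average of per-pixel retrievals."""
    weights = [1.0 / e ** 2 for e in errors]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Four illustrative pixel retrievals with their uncertainties; the noisier
# "pixel 1" is automatically down-weighted in the combined estimate.
pixels = [2.10, 1.95, 2.00, 2.05]
sigmas = [0.40, 0.10, 0.10, 0.10]
print(error_weighted_mean(pixels, sigmas))
```

Down-weighting the high-variance pixel is consistent with the finding that the combined average often outperforms any single pixel: compensating biases average out while the weights reflect each pixel's error characteristics.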

  9. Automated cerebellar segmentation: Validation and application to detect smaller volumes in children prenatally exposed to alcohol

    Directory of Open Access Journals (Sweden)

    Valerie A. Cardenas

    2014-01-01

    Discussion: These results demonstrate excellent reliability and validity of automated cerebellar volume and mid-sagittal area measurements, compared to manual measurements. These data also illustrate that this new technology for automatically delineating the cerebellum leads to conclusions regarding the effects of prenatal alcohol exposure on the cerebellum consistent with prior studies that used labor intensive manual delineation, even with a very small sample.

  10. Validation of neural spike sorting algorithms without ground-truth information.

    Science.gov (United States)

    Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F

    2016-05-01

The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms.
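One simple ground-truth-free stability measure in the spirit of this suite (the authors' exact metrics are not specified in the abstract) is pair-counting agreement between two sorter runs, i.e. the Rand index over the two event labelings, which is invariant to arbitrary relabeling of units:

```python
def pairwise_stability(labels_a, labels_b):
    """Rand-index agreement between two runs of a spike sorter: the fraction of
    event pairs that both runs treat the same way (co-assigned or separated)."""
    n = len(labels_a)
    agree = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1
            agree += (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
    return agree / total

run1 = [0, 0, 1, 1, 2]
run2 = [5, 5, 7, 7, 9]   # same partition of events, different unit IDs
print(pairwise_stability(run1, run2))  # 1.0: fully stable despite relabeling
```

A sorter whose unit assignments shuffle between reruns on perturbed data scores well below 1.0, flagging units that should not be trusted without manual review.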

  11. Monitoring Ground Subsidence in Hong Kong via Spaceborne Radar: Experiments and Validation

    Directory of Open Access Journals (Sweden)

    Yuxiao Qin

    2015-08-01

Full Text Available The persistent scatterer interferometry (PSI) technique is gradually becoming known for its capability of providing up to millimeter accuracy of measurement on ground displacement. Nevertheless, there is still considerable doubt regarding its correctness or accuracy. In this paper, we carried out an experiment corroborating the capability of the PSI technique with the help of a traditional survey method in the urban area of Hong Kong, China. Seventy-three TerraSAR-X (TSX) and TanDEM-X (TDX) images spanning over four years are used for the data processing. There are three aims of this study. The first is to generate a displacement map of urban Hong Kong and to check for spots with possible ground movements. This information will be provided to the local surveyors so that they can check these specific locations. The second is to validate whether the accuracy of the PSI technique can indeed reach the millimeter level in this real application scenario. For validating the accuracy of PSI, four corner reflectors (CRs) were installed at a construction site on reclaimed land in Hong Kong. They were manually moved up or down by a few to tens of millimeters, and the values derived from the PSI analysis were compared to the true values. The experiment, carried out in non-ideal conditions, nevertheless demonstrated that millimeter accuracy can be achieved by the PSI technique. The last is to evaluate the advantages and limitations of the PSI technique. Overall, the PSI technique can be extremely useful if used in collaboration with other techniques, so that the advantages can be highlighted and the drawbacks avoided.

  12. Validation of ozone monitoring instrument ultraviolet index against ground-based UV index in Kampala, Uganda.

    Science.gov (United States)

    Muyimbwa, Dennis; Dahlback, Arne; Ssenyonga, Taddeo; Chen, Yi-Chun; Stamnes, Jakob J; Frette, Øyvind; Hamre, Børge

    2015-10-01

    The Ozone Monitoring Instrument (OMI) overpass solar ultraviolet (UV) indices have been validated against the ground-based UV indices derived from Norwegian Institute for Air Research UV measurements in Kampala (0.31° N, 32.58° E, 1200 m), Uganda for the period between 2005 and 2014. An excessive use of old cars, which would imply a high loading of absorbing aerosols, could cause the OMI retrieval algorithm to overestimate the surface UV irradiances. The UV index values were found to follow a seasonal pattern with maximum values in March and October. Under all-sky conditions, the OMI retrieval algorithm was found to overestimate the UV index values with a mean bias of about 28%. When only days with radiation modification factor greater than or equal to 65%, 70%, 75%, and 80% were considered, the mean bias between ground-based and OMI overpass UV index values was reduced to 8%, 5%, 3%, and 1%, respectively. The overestimation of the UV index by the OMI retrieval algorithm was found to be mainly due to clouds and aerosols.

  13. Cross Validation of Rain Drop Size Distribution between GPM and Ground Based Polarmetric radar

    Science.gov (United States)

    Chandra, C. V.; Biswas, S.; Le, M.; Chen, H.

    2017-12-01

Dual-frequency precipitation radar (DPR) on board the Global Precipitation Measurement (GPM) core satellite has reflectivity measurements at two independent frequencies, Ku- and Ka-band. Dual-frequency retrieval algorithms have been developed traditionally through forward, backward, and recursive approaches. However, these algorithms suffer from the "dual-value" problem when they retrieve medium volume diameter from the dual-frequency ratio (DFR) in the rain region. To this end, a hybrid method has been proposed to perform raindrop size distribution (DSD) retrieval for GPM using a linear constraint on the DSD along the rain profile to avoid the "dual-value" problem (Le and Chandrasekar, 2015). In the current GPM level 2 algorithm (Iguchi et al. 2017, Algorithm Theoretical Basis Document), the Solver module retrieves a vertical profile of drop size distribution from dual-frequency observations and path-integrated attenuations. The algorithm details can be found in Seto et al. (2013). On the other hand, ground-based polarimetric radars have long been used to estimate drop size distributions (e.g., Gorgucci et al. 2002). In addition, coincident GPM and ground-based observations have been cross-validated using careful overpass analysis. In this paper, we perform cross validation on raindrop size distribution retrievals from three sources, namely the hybrid method, the standard products from the Solver module, and DSD retrievals from ground polarimetric radars. The results are presented from two NEXRAD radars located in Dallas-Fort Worth, Texas (i.e., the KFWS radar) and Melbourne, Florida (i.e., the KMLB radar). The results demonstrate the ability of DPR observations to produce DSD estimates, which can be used subsequently to generate global DSD maps. References: Seto, S., T. Iguchi, T. Oki, 2013: The basic performance of a precipitation retrieval algorithm for the Global Precipitation Measurement mission's single/dual-frequency radar measurements. IEEE Transactions on Geoscience and

  14. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation

    International Nuclear Information System (INIS)

    Elbakri, Idris A; Fessler, Jeffrey A

    2003-01-01

    This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications

  15. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation.

    Science.gov (United States)

    Elbakri, Idris A; Fessler, Jeffrey A

    2003-08-07

    This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.

  16. Comparison of five cluster validity indices performance in brain [18 F]FET-PET image segmentation using k-means.

    Science.gov (United States)

    Abualhaj, Bedor; Weng, Guoyang; Ong, Melissa; Attarwala, Ali Asgar; Molina, Flavia; Büsing, Karen; Glatting, Gerhard

    2017-01-01

    Dynamic [18F]fluoro-ethyl-L-tyrosine positron emission tomography ([18F]FET-PET) is used to identify tumor lesions for radiotherapy treatment planning, to differentiate glioma recurrence from radiation necrosis and to classify glioma grade. To segment different regions in the brain, k-means cluster analysis can be used. The main disadvantage of k-means is that the number of clusters must be pre-defined. In this study, we therefore compared different cluster validity indices for automated and reproducible determination of the optimal number of clusters based on the dynamic PET data. The k-means algorithm was applied to dynamic [18F]FET-PET images of 8 patients. The Akaike information criterion (AIC), WB, I, modified Dunn's and Silhouette indices were compared on their ability to determine the optimal number of clusters based on requirements for an adequate cluster validity index. To check the reproducibility of k-means, the coefficients of variation (CVs) of the objective function values (OFVs; sum of squared Euclidean distances within each cluster) were calculated using 100 random centroid initialization replications (RCI100) for 2 to 50 clusters. k-means was performed independently on three neighboring slices containing tumor for each patient to investigate the stability of the optimal number of clusters within them. To check the independence of the validity indices of the number of voxels, cluster analysis was applied after duplication of a slice selected from each patient. CVs of index values were calculated at the optimal number of clusters using RCI100 to investigate the reproducibility of the validity indices. To check whether the indices have a single extremum, visual inspection was performed on the replication with minimum OFV from RCI100. The maximum CV of OFVs was 2.7 × 10⁻² over all patients. The optimal number of clusters given by the modified Dunn's and Silhouette indices was 2 or 3, leading to a very poor segmentation. The WB and I indices suggested in
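
As a minimal illustration of validity-index-driven selection of the number of clusters, the sketch below implements a tiny 1-D k-means with deterministic initialization and a Silhouette index, then picks the k that maximizes it on toy data. This is illustrative only; the study ran k-means on dynamic PET voxel data with 100 random initializations and compared several indices:

```python
import numpy as np

def kmeans_1d(x, k, iters=50):
    # Deterministic quantile initialization keeps the toy example reproducible.
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

def silhouette(x, labels):
    # Mean silhouette over all points; singleton clusters score 0 by convention.
    scores = []
    for i, xi in enumerate(x):
        same = x[labels == labels[i]]
        if len(same) == 1:
            scores.append(0.0)
            continue
        a = np.abs(same - xi).sum() / (len(same) - 1)   # mean intra-cluster distance
        b = min(np.abs(x[labels == j] - xi).mean()      # nearest other cluster
                for j in set(labels) if j != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Three well-separated toy "uptake" clusters; the index should recover k = 3.
x = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2, 20.0, 20.1, 20.2])
best_k = max(range(2, 5), key=lambda k: silhouette(x, kmeans_1d(x, k)))
print(best_k)  # -> 3
```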

  17. Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.

    Science.gov (United States)

    Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki

    2014-11-01

    Error propagation in Earth's atmospheric, oceanic, and land surface parameters of satellite products caused by misclassification of the cloud mask is a critical issue for improving the accuracy of satellite products. Thus, characterizing the accuracy of the cloud mask is important for investigating the influence of the cloud mask on satellite products. In this study, we proposed a method for validating cloud masks derived from multiwavelength satellite data using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data was developed using a sky index and a brightness index. Then, cloud masks derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data by two cloud-screening algorithms (i.e., MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagation caused by misclassification of the MOD35 and CLAUDIA cloud masks on MODIS-derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using sky camera data. The influence of the error propagation by the MOD35 cloud mask on the MODIS-derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than for the CLAUDIA cloud mask; the influence of the error propagation by the CLAUDIA cloud mask on MODIS-derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask.
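
The opposite biases of the two screening algorithms (MOD35 leaning "cloudy", CLAUDIA leaning "clear") can be expressed as simple confusion counts against the GSC reference. All flag values below are hypothetical, chosen only to show the two tendencies:

```python
# Hypothetical 1 = cloudy, 0 = clear flags for the same pixels; the real study
# compares MOD35 and CLAUDIA masks against the ground sky-camera (GSC) mask.
gsc     = [1, 1, 0, 0, 1, 0, 1, 0]   # ground-based reference
mod35   = [1, 1, 1, 0, 1, 1, 1, 0]   # tends to call ambiguous pixels cloudy
claudia = [1, 0, 0, 0, 1, 0, 0, 0]   # tends to call ambiguous pixels clear

def confusion(ref, test):
    """Count agreement categories between a reference mask and a test mask."""
    pairs = list(zip(ref, test))
    return {"hit":         pairs.count((1, 1)),
            "miss":        pairs.count((1, 0)),
            "false_alarm": pairs.count((0, 1)),
            "correct_neg": pairs.count((0, 0))}

print(confusion(gsc, mod35))    # more false alarms (cloudy bias)
print(confusion(gsc, claudia))  # more misses (clear bias)
```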

  18. Design and validation of inert homemade explosive simulants for ground penetrating radar

    Science.gov (United States)

    VanderGaast, Brian W.; McFee, John E.; Russell, Kevin L.; Faust, Anthony A.

    2015-05-01

    The Canadian Armed Forces (CAF) identified a requirement for inert simulants to act as improvised, or homemade, explosives (IEs) when training on, or evaluating, ground penetrating radar (GPR) systems commonly used in the detection of buried landmines and improvised explosive devices (IEDs). In response, Defence R&D Canada (DRDC) initiated a project to develop IE simulant formulations using commonly available inert materials. These simulants are intended to approximate the expected GPR response of common ammonium nitrate-based IEs, in particular ammonium nitrate/fuel oil (ANFO) and ammonium nitrate/aluminum (ANAl). The complex permittivity over the range of electromagnetic frequencies relevant to standard GPR systems was measured for bulk quantities of these IEs, which had been fabricated at DRDC Suffield Research Centre. Following these measurements, the published literature was examined to find benign materials with a similar complex permittivity as well as other desirable physical properties, such as low toxicity, thermal stability, and commercial availability, in order to select candidates for subsequent simulant formulation. Suitable simulant formulations were identified for ANFO, with resulting complex permittivities measured to be within acceptable limits of target values. These IE formulations will now undergo end-user trials with CAF operators in order to confirm their utility. Investigations into ANAl simulants continue. This progress report outlines the development program, simulant design, and current validation results.

  19. TEMIS UV product validation using NILU-UV ground-based measurements in Thessaloniki, Greece

    Science.gov (United States)

    Zempila, Melina-Maria; van Geffen, Jos H. G. M.; Taylor, Michael; Fountoulakis, Ilias; Koukouli, Maria-Elissavet; van Weele, Michiel; van der A, Ronald J.; Bais, Alkiviadis; Meleti, Charikleia; Balis, Dimitrios

    2017-06-01

    This study aims to cross-validate ground-based and satellite-based models of three photobiological UV effective dose products: the Commission Internationale de l'Éclairage (CIE) erythemal UV, the production of vitamin D in the skin, and DNA damage, using high-temporal-resolution surface-based measurements of solar UV spectral irradiances from a synergy of instruments and models. The satellite-based Tropospheric Emission Monitoring Internet Service (TEMIS; version 1.4) UV daily dose data products were evaluated over the period 2009 to 2014 with ground-based data from a Norsk Institutt for Luftforskning (NILU)-UV multifilter radiometer located at the northern midlatitude super-site of the Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki (LAP/AUTh), in Greece. For the NILU-UV effective dose rates retrieval algorithm, a neural network (NN) was trained to learn the nonlinear functional relation between NILU-UV irradiances and collocated Brewer-based photobiological effective dose products. Then the algorithm was subjected to sensitivity analysis and validation. The correlation of the NN estimates with target outputs was high (r = 0.988 to 0.990) and with a very low bias (0.000 to 0.011 in absolute units), proving the robustness of the NN algorithm. For further evaluation of the NILU NN-derived products, retrievals of the vitamin D and DNA-damage effective doses from a collocated Yankee Environmental Systems (YES) UVB-1 pyranometer were used. For cloud-free days, differences in the derived UV doses are within 2 % for all UV dose products, revealing the reference quality of the ground-based UV doses at Thessaloniki from the NILU-UV NN retrievals. The TEMIS UV doses used in this study are derived from ozone measurements by the SCIAMACHY/Envisat and GOME2/MetOp-A satellite instruments, over the European domain in combination with the SEVIRI/Meteosat-based diurnal cycle of the cloud cover fraction per 0.5° × 0.5° (lat × long) grid cells. TEMIS
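
The reported validation statistics (correlation r and absolute bias) can be reproduced for any paired series with a few lines of plain Python. The paired values below are invented placeholders, not the study's data:

```python
import math

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def mean_bias(a, b):
    """Mean difference a - b (in the same absolute units as the inputs)."""
    return sum(x - y for x, y in zip(a, b)) / len(a)

# Hypothetical paired daily doses (NN estimate vs. Brewer target, arbitrary units)
nn     = [1.02, 2.05, 2.98, 4.01, 5.00]
brewer = [1.00, 2.00, 3.00, 4.00, 5.00]
print(round(pearson_r(nn, brewer), 3), round(mean_bias(nn, brewer), 3))
```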

  20. TEMIS UV product validation using NILU-UV ground-based measurements in Thessaloniki, Greece

    Directory of Open Access Journals (Sweden)

    M.-M. Zempila

    2017-06-01

    Full Text Available This study aims to cross-validate ground-based and satellite-based models of three photobiological UV effective dose products: the Commission Internationale de l'Éclairage (CIE) erythemal UV, the production of vitamin D in the skin, and DNA damage, using high-temporal-resolution surface-based measurements of solar UV spectral irradiances from a synergy of instruments and models. The satellite-based Tropospheric Emission Monitoring Internet Service (TEMIS; version 1.4) UV daily dose data products were evaluated over the period 2009 to 2014 with ground-based data from a Norsk Institutt for Luftforskning (NILU)-UV multifilter radiometer located at the northern midlatitude super-site of the Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki (LAP/AUTh), in Greece. For the NILU-UV effective dose rates retrieval algorithm, a neural network (NN) was trained to learn the nonlinear functional relation between NILU-UV irradiances and collocated Brewer-based photobiological effective dose products. Then the algorithm was subjected to sensitivity analysis and validation. The correlation of the NN estimates with target outputs was high (r = 0.988 to 0.990) and with a very low bias (0.000 to 0.011 in absolute units), proving the robustness of the NN algorithm. For further evaluation of the NILU NN-derived products, retrievals of the vitamin D and DNA-damage effective doses from a collocated Yankee Environmental Systems (YES) UVB-1 pyranometer were used. For cloud-free days, differences in the derived UV doses are within 2 % for all UV dose products, revealing the reference quality of the ground-based UV doses at Thessaloniki from the NILU-UV NN retrievals. The TEMIS UV doses used in this study are derived from ozone measurements by the SCIAMACHY/Envisat and GOME2/MetOp-A satellite instruments, over the European domain in combination with the SEVIRI/Meteosat-based diurnal cycle of the cloud cover fraction per 0.5° × 0.5

  1. Modified ground-truthing: an accurate and cost-effective food environment validation method for town and rural areas.

    Science.gov (United States)

    Caspi, Caitlin Eicher; Friebur, Robin

    2016-03-17

    A major concern in food environment research is the lack of accuracy in commercial business listings of food stores, which are convenient and commonly used. Accuracy concerns may be particularly pronounced in rural areas. Ground-truthing, or on-site verification, has been deemed the necessary standard to validate business listings, but researchers perceive this process to be costly and time-consuming. This study calculated the accuracy and cost of ground-truthing three town/rural areas in Minnesota, USA (an area of 564 miles, or 908 km), and simulated a modified validation process to increase efficiency without compromising accuracy. For traditional ground-truthing, all streets in the study area were driven, while the route and geographic coordinates of food stores were recorded. The process required 1510 miles (2430 km) of driving and 114 staff hours. The ground-truthed list of stores was compared with commercial business listings, which had an average positive predictive value (PPV) of 0.57 and sensitivity of 0.62 across the three sites. Using observations from the field, a modified process was proposed in which only the streets located within central commercial clusters (the 1/8 mile or 200 m buffer around any cluster of 2 stores) would be validated. Modified ground-truthing would have yielded an estimated PPV of 1.00 and sensitivity of 0.95, and would have resulted in a reduction of approximately 88 % in mileage costs. We conclude that ground-truthing is necessary in town/rural settings. The modified ground-truthing process, with excellent accuracy at a fraction of the costs, suggests a new standard and warrants further evaluation.
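
PPV and sensitivity as used here are straightforward set ratios between the commercial listing and the ground-truthed store list. The store names below are hypothetical:

```python
# Hypothetical store lists: commercial listing vs. stores found by driving
listing      = {"Store A", "Store B", "Store C", "Store D"}   # business database
ground_truth = {"Store B", "Store C", "Store D", "Store E"}   # observed on-site

true_pos = listing & ground_truth
ppv         = len(true_pos) / len(listing)       # share of listed stores that exist
sensitivity = len(true_pos) / len(ground_truth)  # share of real stores that are listed

print(ppv, sensitivity)  # 0.75 0.75
```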

  2. Validation of the CrIS fast physical NH3 retrieval with ground-based FTIR

    Directory of Open Access Journals (Sweden)

    E. Dammers

    2017-07-01

    Full Text Available Presented here is the validation of the CrIS (Cross-track Infrared Sounder) fast physical NH3 retrieval (CFPR) column and profile measurements using ground-based Fourier transform infrared (FTIR) observations. We use the total columns and profiles from seven FTIR sites in the Network for the Detection of Atmospheric Composition Change (NDACC) to validate the satellite data products. The overall FTIR and CrIS total columns have a positive correlation of r = 0.77 (N = 218) with very little bias (a slope of 1.02). Binning the comparisons by total column amount, for concentrations larger than 1.0 × 10¹⁶ molecules cm⁻², i.e. ranging from moderate to polluted conditions, the relative difference is on average ∼0–5 % with a standard deviation of 25–50 %, which is comparable to the estimated retrieval uncertainties in both CrIS and the FTIR. For the smallest total column range (< 1.0 × 10¹⁶ molecules cm⁻²), where there are a large number of observations at or near the CrIS noise level (detection limit), the absolute differences between CrIS and the FTIR total columns show a slight positive column bias. The CrIS and FTIR profile comparison differences are mostly within the range of the single-level retrieved profile values from estimated retrieval uncertainties, showing average differences in the range of ∼20 to 40 %. The CrIS retrievals typically show good vertical sensitivity down into the boundary layer, which typically peaks at ∼850 hPa (∼1.5 km). At this level the median absolute difference is 0.87 (std = ±0.08) ppb, corresponding to a median relative difference of 39 % (std = ±2 %). Most of the absolute and relative profile comparison differences are in the range of the estimated retrieval uncertainties. At the surface, where CrIS typically has lower sensitivity, it tends to overestimate in low-concentration conditions and underestimate

  3. Validation of strong-motion stochastic model using observed ground motion records in north-east India

    Directory of Open Access Journals (Sweden)

    Dipok K. Bora

    2016-03-01

    Full Text Available We focused on validating the applicability of semi-empirical techniques (spectral models and stochastic simulation) for the estimation of ground-motion characteristics in the northeastern region (NER) of India. In the present study, it is assumed that the point-source approximation in the far field is valid. The one-dimensional stochastic point-source seismological model of Boore (1983) (Boore, D.M., 1983. Stochastic simulation of high-frequency ground motions based on seismological models of the radiated spectra. Bulletin of the Seismological Society of America, 73, 1865–1894) is used for modelling the acceleration time histories. In total, ground-motion records of 30 earthquakes with magnitudes between MW 4.2 and 6.2 in NER India from March 2008 to April 2013 are used for this study. We considered peak ground acceleration (PGA) and pseudospectral acceleration (response spectrum) amplitudes with a 5% damping ratio at three fundamental natural periods, namely 0.3, 1.0, and 3.0 s. The spectral models, which work well for PGA, overestimate the pseudospectral acceleration. It seems that there is a strong influence of local site amplification and crustal attenuation (kappa), which control spectral amplitudes at different frequencies. The results would allow analysing regional peculiarities of ground-motion excitation and propagation and updating seismic hazard assessment, using both probabilistic and deterministic approaches.

  4. Comparison of EISCAT and ionosonde electron densities: application to a ground-based ionospheric segment of a space weather programme

    Directory of Open Access Journals (Sweden)

    J. Lilensten

    2005-01-01

    Full Text Available Space weather applications require real-time data and wide-area observations from both ground- and space-based instrumentation. From space, the global navigation satellite system - GPS - is an important tool. From the ground, the incoherent scatter (IS) radar technique permits a direct measurement up to the topside region, while ionosondes give good measurements of the lower part of the ionosphere. An important issue is the intercalibration of these various instruments. In this paper, we address the intercomparison of the EISCAT IS radar and two ionosondes located at Tromsø (Norway), at times when GPS measurements were also available. We show that even EISCAT data calibrated using ionosonde data can lead to different values of total electron content (TEC) when compared to that obtained from GPS.

  5. Multimodal Navigation in Endoscopic Transsphenoidal Resection of Pituitary Tumors Using Image-Based Vascular and Cranial Nerve Segmentation: A Prospective Validation Study.

    Science.gov (United States)

    Dolati, Parviz; Eichberg, Daniel; Golby, Alexandra; Zamani, Amir; Laws, Edward

    2016-11-01

    Transsphenoidal surgery (TSS) is the most common approach for the treatment of pituitary tumors. However, misdirection, vascular damage, intraoperative cerebrospinal fluid leakage, and optic nerve injuries are all well-known complications, and the risk of adverse events is more likely in less-experienced hands. This prospective study was conducted to validate the accuracy of image-based segmentation coupled with neuronavigation in localizing neurovascular structures during TSS. Twenty-five patients with a pituitary tumor underwent preoperative 3-T magnetic resonance imaging (MRI), and MRI images loaded into the navigation platform were used for segmentation and preoperative planning. After patient registration and subsequent surgical exposure, each segmented neural or vascular element was validated by manual placement of the navigation probe or Doppler probe on or as close as possible to the target. Preoperative segmentation of the internal carotid artery and cavernous sinus matched with the intraoperative endoscopic and micro-Doppler findings in all cases. Excellent correspondence between image-based segmentation and the endoscopic view was also evident at the surface of the tumor and at the tumor-normal gland interfaces. Image guidance assisted the surgeons in localizing the optic nerve and chiasm in 64% of cases. The mean accuracy of the measurements was 1.20 ± 0.21 mm. Image-based preoperative vascular and neural element segmentation, especially with 3-dimensional reconstruction, is highly informative preoperatively and potentially could assist less-experienced neurosurgeons in preventing vascular and neural injury during TSS. In addition, the accuracy found in this study is comparable to previously reported neuronavigation measurements. This preliminary study is encouraging for future prospective intraoperative validation with larger numbers of patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Comparison of vertical ground reaction forces during overground and treadmill running. A validation study

    NARCIS (Netherlands)

    Kluitenberg, Bas; Bredeweg, Steef W.; Zijlstra, Sjouke; Zijlstra, Wiebren; Buist, Ida

    2012-01-01

    Background: One major drawback in measuring ground-reaction forces during running is that it is time consuming to get representative ground-reaction force (GRF) values with a traditional force platform. An instrumented force measuring treadmill can overcome the shortcomings inherent to overground

  7. Survivability enhancement study for C³I/BM (communications, command, control and intelligence/battle management) ground segments: Final report

    Energy Technology Data Exchange (ETDEWEB)

    1986-10-30

    This study involves a concept developed by the Fairchild Space Company which is directly applicable to the Strategic Defense Initiative (SDI) Program as well as other national security programs requiring reliable, secure and survivable telecommunications systems. The overall objective of this study program was to determine the feasibility of combining and integrating long-lived, compact, autonomous isotope power sources with fiber optic and other types of ground segments of the SDI communications, command, control and intelligence/battle management (C³I/BM) system in order to significantly enhance the survivability of those critical systems, especially against the potential threats of electromagnetic pulse(s) (EMP) resulting from high altitude nuclear weapon explosion(s). 28 figs., 2 tabs.

  8. A validation of ground ambulance pre-hospital times modeled using geographic information systems.

    Science.gov (United States)

    Patel, Alka B; Waters, Nigel M; Blanchard, Ian E; Doig, Christopher J; Ghali, William A

    2012-10-03

    Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were longer than the average previously reported by 7-8 minutes. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area. 
The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a
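
The four-component pre-hospital time model described above is additive, which makes the modelling assumption easy to state in code. A minimal sketch follows; the interval values are illustrative placeholders, not the study's medians:

```python
# Hedged sketch: total pre-hospital time as the sum of the four intervals the
# study records (activation, response, on-scene, transport). In GIS-based
# models, the transport and response intervals are estimated from the road
# network while activation and on-scene intervals are assumed constants.
def prehospital_time(activation, response, on_scene, transport):
    """All intervals in minutes; returns total pre-hospital time in minutes."""
    return activation + response + on_scene + transport

total = prehospital_time(activation=2.0, response=7.5, on_scene=22.0, transport=12.0)
print(total)  # 43.5 minutes
```

The study's finding that actual on-scene intervals ran 7-8 minutes longer than published assumptions shows how sensitive the total is to the assumed constants.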

  9. Validity of single term energy expression for ground state rotational band of even-even nuclei

    International Nuclear Information System (INIS)

    Sharma, S.; Kumar, R.; Gupta, J.B.

    2005-01-01

    Full text: There are large numbers of empirical studies of the gs band of even-even nuclei in various mass regions. The Bohr-Mottelson energy expression is E(I) = AX + BX² + CX³ + ..., where X = I(I+1). The anharmonic vibrator energy expression is E(I) = aI + bI² + cI³. The SF model has the energy expression E(I) = pX + qI + rXI, where the terms represent the rotational, vibrational and rotation-vibration interaction energy, respectively. The validity of the various two-term energy expressions had been tested by Sharma for the light, medium and heavy mass regions using R_I vs. R_4 plots (where spin I = 6, 8, 10, 12), which are parameter independent. It was also noted that the goodness of an energy expression can be judged from the minimum input of energies (i.e., only 2 parameters) and the predictability of the model up to high spins. Recently, Gupta et al. proposed a single term energy expression (SSTE) which was applied to the rare earth region. This proposed power law reflects the unity of rotation and vibration in a different way and was successful in explaining the structure of the gs band. It will be useful to test the single term energy expression for the light and heavy mass regions. The single term expression for the energy of the ground state band can be written as E_I = a×I^b, where the index b and the coefficient a are constants for the band. The values of b_I and a_I are given by b_I = log(R_I)/log(I/2) and a_I = E_I/I^b. The following results were obtained: 1) a sharp variation in the value of the index b at a given spin is an indication of a change in the shape of the nucleus; 2) the value of E_I/I^b is fairly constant with spin below back-bending, which reflects the stability of the shape with spin; 3) the proposed power law is successful in explaining the structure of the gs band of nuclei
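
The single-term power law can be checked numerically. For an ideal rigid rotor (a hypothetical spectrum with E_I ∝ I(I+1)) the index b comes out near 1.74, while a pure vibrator (E_I ∝ I) would give b = 1; the constant A below is an invented scale:

```python
import math

def power_law_index(e_i, e_2, spin):
    """b_I = log(R_I) / log(I/2) with R_I = E_I / E_2 (single-term power law)."""
    return math.log(e_i / e_2) / math.log(spin / 2)

def power_law_coeff(e_i, spin, b):
    """a_I = E_I / I^b."""
    return e_i / spin ** b

# Hypothetical ideal rotor: E_I = A * I * (I + 1), A = 15 keV (invented scale)
A = 15.0
e2, e4 = A * 2 * 3, A * 4 * 5   # 90 keV, 300 keV
b4 = power_law_index(e4, e2, 4)
print(round(b4, 3))  # ~1.737 for a pure rotor; b -> 1 for a pure vibrator
```

Since b depends only on the energy ratio R_I, the index is independent of the overall scale A, which is why it serves as a shape indicator.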

  10. Validation of Satellite AOD Data with the Ground PM10 Data over Islamabad Pakistan

    Science.gov (United States)

    Bulbul, Gufran; Shahid, Imran

    2016-07-01

    health. In this study, concentrations of PM10 will be monitored at different sites in the H-12 sector and on Kashmir Highway, Islamabad, using a high volume air sampler, and their chemical characterization will be done using energy dispersive XRF. The first application of satellite remote sensing for aerosol monitoring began in the mid-1970s to detect desert particles above the ocean using data from the Landsat, GOES, and AVHRR remote sensing satellites. Maps of Aerosol Optical Depth (AOD) over the ocean were produced using the 0.63 µm channel of the Advanced Very High Resolution Radiometer (AVHRR). Aerosol properties were retrieved using AVHRR. The usable range of wavelengths (shorter and longer wavelengths) for the remote sensing of aerosol particles is mostly restricted by ozone and gaseous absorption. The purpose of the study is to validate the satellite Aerosol Optical Depth (AOD) data at the regional and local scale for Pakistan. Objectives: • To quantify the concentration of PM10 • To investigate their elemental composition • To find out their possible sources • Validation with MODIS satellite AOD. Methodology: PM10 concentration will be measured at different sites of NUST Islamabad, Pakistan, using a high volume air sampler: air sampling equipment capable of sampling high volumes of air (typically 57,000 ft³ or 1,600 m³) at high flow rates (typically 1.13 m³/min or 40 ft³/min) over an extended sampling duration (typically 24 hrs). The sampling period will be 24 hours. Particles in the PM10 size range are then collected on the filter(s) during the specified 24-h sampling period. Each sample filter will be weighed before and after sampling to determine the net weight (mass) gain of the collected PM10 sample (40 CFR Part 50, Appendix M, US EPA). The next step will be chemical characterization. Element concentrations will be determined by the energy dispersive X-ray fluorescence (ED-XRF) technique. The ED-XRF system uses an X-ray tube to

  11. Local spectral anisotropy is a valid cue for figure–ground organization in natural scenes

    OpenAIRE

    Ramenahalli, Sudarshan; Mihalas, Stefan; Niebur, Ernst

    2014-01-01

    An important step in the process of understanding visual scenes is their organization into different perceptual objects, which requires figure-ground segregation. The determination of which side of an occlusion boundary is figure (closer to the observer) and which is ground (further away from the observer) is made through a combination of global cues, like convexity, and local cues, like T-junctions. We here focus on a novel set of local cues in the intensity patterns along occlusion boundaries which...

  12. A Ground-based validation of GOSAT-observed atmospheric CO2 in Inner-Mongolian grasslands

    International Nuclear Information System (INIS)

    Qin, X; Lei, L; Zeng, Z; Kawasaki, M; Oohasi, M

    2014-01-01

    Atmospheric carbon dioxide (CO2) is a long-lived greenhouse gas that significantly contributes to global warming. Long-term and continuous measurements of atmospheric CO2 to investigate its global distribution and concentration variations are important for accurately understanding its potential climatic effects. Satellite measurements from space can offer atmospheric CO2 data for climate change research. For that, ground-based measurements are required for validating and improving the precision of satellite-measured CO2. We implemented an observation experiment of CO2 column densities in the Xilinguole grasslands in Inner Mongolia, China, using a ground-based measurement system, which mainly consists of an optical spectrum analyzer (OSA), a sun tracker and a notebook controller. Measurements from our ground-based system were analyzed and compared with those from the Greenhouse gases Observing SATellite (GOSAT). The ground-based measurements had an average value of 389.46 ppm, which was 2.4 ppm larger than that from GOSAT, with a standard deviation of 3.4 ppm. This result is slightly larger than the difference between GOSAT and the Total Carbon Column Observing Network (TCCON). This study highlights the usefulness of the ground-based OSA measurement system for analyzing atmospheric CO2 column densities, which is expected to supplement the current TCCON network.

  13. Complexity in the validation of ground-water travel time in fractured flow and transport systems

    International Nuclear Information System (INIS)

    Davies, P.B.; Hunter, R.L.; Pickens, J.F.

    1991-02-01

    Ground-water travel time is a widely used concept in site assessment for radioactive waste disposal. While ground-water travel time was originally conceived to provide a simple performance measure for evaluating repository sites, its definition in many flow and transport environments is ambiguous. The US Department of Energy siting guidelines (10 CFR 960) define ground-water travel time as the time required for a unit volume of water to travel between two locations, calculated by dividing travel-path length by the quotient of average ground-water flux and effective porosity. Defining a meaningful effective porosity in a fractured porous material is a significant problem. Although the Waste Isolation Pilot Plant (WIPP) is not subject to specific requirements for ground-water travel time, travel times have been computed under a variety of model assumptions. Recently completed model analyses for WIPP illustrate the difficulties in applying a ground-water travel-time performance measure to flow and transport in fractured, fully saturated flow systems. 12 refs., 4 figs
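The 10 CFR 960 definition quoted above amounts to travel time = path length / (flux / effective porosity). A minimal sketch with hypothetical numbers illustrates why the choice of effective porosity dominates the result in fractured media:

```python
def groundwater_travel_time(path_length_m, darcy_flux_m_per_yr, effective_porosity):
    """Ground-water travel time per the 10 CFR 960 definition:
    path length divided by the average linear velocity, where the
    velocity is the Darcy flux divided by the effective porosity."""
    velocity = darcy_flux_m_per_yr / effective_porosity
    return path_length_m / velocity

# Hypothetical values: 5 km path, 0.05 m/yr flux, matrix porosity 0.1
t_matrix = groundwater_travel_time(5000.0, 0.05, 0.1)
# Same path and flux, but a fracture porosity of 0.001
t_fracture = groundwater_travel_time(5000.0, 0.05, 0.001)
print(t_matrix, t_fracture)  # roughly 10000 vs 100 years
```

Moving the effective porosity from a matrix value (0.1) to a fracture value (0.001) changes the computed travel time by two orders of magnitude, which is exactly the ambiguity the abstract describes for fractured porous media.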

  14. Complexity in the validation of ground-water travel time in fractured flow and transport systems

    International Nuclear Information System (INIS)

    Davies, P.B.; Hunter, R.L.; Pickens, J.F.

    1991-01-01

    Ground-water travel time is a widely used concept in site assessment for radioactive waste disposal. While ground-water travel time was originally conceived to provide a simple performance measure for evaluating repository sites, its definition in many flow and transport environments is ambiguous. The U.S. Department of Energy siting guidelines (10 CFR 960) define ground-water travel time as the time required for a unit volume of water to travel between two locations, calculated by dividing travel-path length by the quotient of average ground-water flux and effective porosity. Defining a meaningful effective porosity in a fractured porous material is a significant problem. Although the Waste Isolation Pilot Plant (WIPP) is not subject to specific requirements for ground-water travel time, travel times have been computed under a variety of model assumptions. Recently completed model analyses for WIPP illustrate the difficulties in applying a ground-water travel-time performance measure to flow and transport in fractured, fully saturated flow systems. Computer code used: SWIFT II (flow and transport code). 4 figs., 12 refs

  15. Atlas-based automatic segmentation of head and neck organs at risk and nodal target volumes: a clinical validation

    International Nuclear Information System (INIS)

    Daisne, Jean-François; Blumhofer, Andreas

    2013-01-01

    Intensity modulated radiotherapy for head and neck cancer necessitates accurate definition of organs at risk (OAR) and clinical target volumes (CTV). This crucial step is time consuming and prone to inter- and intra-observer variations. Automatic segmentation by atlas deformable registration may help to reduce time and variations. We aim to test a new commercial atlas algorithm for automatic segmentation of OAR and CTV in both ideal and clinical conditions. The updated Brainlab automatic head and neck atlas segmentation was tested on 20 patients: 10 cN0-stages (ideal population) and 10 unselected N-stages (clinical population). Following manual delineation of OAR and CTV, automatic segmentation of the same set of structures was performed and afterwards manually corrected. Dice Similarity Coefficient (DSC), Average Surface Distance (ASD) and Maximal Surface Distance (MSD) were calculated for “manual to automatic” and “manual to corrected” volume comparisons. In both groups, automatic segmentation saved about 40% of the corresponding manual segmentation time. This effect was more pronounced for OAR than for CTV. Editing the automatically obtained contours significantly improved DSC, ASD and MSD. Large distortions of normal anatomy or lack of iodine contrast were the limiting factors. The updated Brainlab atlas-based automatic segmentation tool for head and neck cancer patients is timesaving but still necessitates review and corrections by an expert

  16. Atlas-based automatic segmentation of head and neck organs at risk and nodal target volumes: a clinical validation.

    Science.gov (United States)

    Daisne, Jean-François; Blumhofer, Andreas

    2013-06-26

    Intensity modulated radiotherapy for head and neck cancer necessitates accurate definition of organs at risk (OAR) and clinical target volumes (CTV). This crucial step is time consuming and prone to inter- and intra-observer variations. Automatic segmentation by atlas deformable registration may help to reduce time and variations. We aim to test a new commercial atlas algorithm for automatic segmentation of OAR and CTV in both ideal and clinical conditions. The updated Brainlab automatic head and neck atlas segmentation was tested on 20 patients: 10 cN0-stages (ideal population) and 10 unselected N-stages (clinical population). Following manual delineation of OAR and CTV, automatic segmentation of the same set of structures was performed and afterwards manually corrected. Dice Similarity Coefficient (DSC), Average Surface Distance (ASD) and Maximal Surface Distance (MSD) were calculated for "manual to automatic" and "manual to corrected" volume comparisons. In both groups, automatic segmentation saved about 40% of the corresponding manual segmentation time. This effect was more pronounced for OAR than for CTV. Editing the automatically obtained contours significantly improved DSC, ASD and MSD. Large distortions of normal anatomy or lack of iodine contrast were the limiting factors. The updated Brainlab atlas-based automatic segmentation tool for head and neck cancer patients is timesaving but still necessitates review and corrections by an expert.
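The Dice Similarity Coefficient reported in this validation is twice the overlap of two binary masks divided by the sum of their voxel counts. A minimal sketch, with made-up masks standing in for an automatic and a manual contour:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

auto = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
manual = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
print(dice(auto, manual))  # 6/7, about 0.857
```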

  17. Validation of new CFD release by Ground-Coupled Heat Transfer Test Cases

    Directory of Open Access Journals (Sweden)

    Sehnalek Stanislav

    2017-01-01

    Full Text Available This article presents a validation of ANSYS Fluent against the IEA BESTEST Task 34 ground-coupled heat transfer test cases. The article starts with an overview of the topic, then describes the steady-state cases used for validation and their implementation in the CFD code. It concludes with the simulated results and a comparison against simulation software already validated by the IEA. The validation shows high correlation both with an older version of ANSYS and with the other major software. The paper ends with a discussion and an outline of future research.

  18. Local spectral anisotropy is a valid cue for figure-ground organization in natural scenes.

    Science.gov (United States)

    Ramenahalli, Sudarshan; Mihalas, Stefan; Niebur, Ernst

    2014-10-01

    An important step in the process of understanding visual scenes is their organization into distinct perceptual objects, which requires figure-ground segregation. The determination of which side of an occlusion boundary is figure (closer to the observer) and which is ground (further away from the observer) is made through a combination of global cues, like convexity, and local cues, like T-junctions. We here focus on a novel set of local cues in the intensity patterns along occlusion boundaries which we show to differ between figure and ground. Image patches are extracted from natural scenes from two standard image sets along the boundaries of objects, and spectral analysis is performed separately on figure and ground. On the figure side, oriented spectral power orthogonal to the occlusion boundary significantly exceeds that parallel to the boundary. This "spectral anisotropy" is present only for higher spatial frequencies, and absent on the ground side. The difference in spectral anisotropy between the two sides of an occlusion border predicts which is the figure and which the background with an accuracy exceeding 60% per patch. Spectral anisotropy of nearby locations along the boundary co-varies but is largely independent over larger distances, which allows combining results from different image regions. Given the low cost of this strictly local computation, we propose that spectral anisotropy along occlusion boundaries is a valuable cue for figure-ground segregation. A database of images and extracted patches labeled for figure and ground is made freely available. Copyright © 2014 Elsevier Ltd. All rights reserved.
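The spectral-anisotropy cue can be sketched as comparing high-spatial-frequency FFT power orthogonal versus parallel to the occlusion boundary. This is a hypothetical reconstruction of the idea only; the patch size, frequency cutoff, and the assumption of a vertical boundary are illustrative choices, not the authors' published pipeline:

```python
import numpy as np

def oriented_power(patch, high_freq_frac=0.5):
    """High-spatial-frequency power orthogonal and parallel to an
    occlusion boundary assumed to run vertically through the patch
    (orthogonal = variation along x, parallel = variation along y)."""
    power = np.abs(np.fft.fft2(patch)) ** 2
    n0, n1 = patch.shape
    fy, fx = np.meshgrid(np.fft.fftfreq(n0), np.fft.fftfreq(n1), indexing="ij")
    high = np.hypot(fx, fy) > high_freq_frac * 0.5  # keep higher frequencies only
    ortho = power[high & (np.abs(fx) > np.abs(fy))].sum()
    para = power[high & (np.abs(fy) > np.abs(fx))].sum()
    return ortho, para

# A texture that varies across a vertical boundary is anisotropic:
x = np.arange(32)
patch = np.tile(np.sin(2 * np.pi * 10 * x / 32), (32, 1))
ortho, para = oriented_power(patch)
print(ortho > para)  # True: power concentrated orthogonal to the boundary
```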

  19. Ground Water Atlas of the United States: Segment 13, Alaska, Hawaii, Puerto Rico, and the U.S. Virgin Islands

    Science.gov (United States)

    Miller, James A.; Whitehead, R.L.; Oki, Delwyn S.; Gingerich, Stephen B.; Olcott, Perry G.

    1997-01-01

    Alaska is the largest State in the Nation and has an area of about 586,400 square miles, or about one-fifth the area of the conterminous United States. The State is geologically and topographically diverse and is characterized by wild, scenic beauty. Alaska contains abundant natural resources, including ground water and surface water of chemical quality that is generally suitable for most uses. The central part of Alaska is drained by the Yukon River and its tributaries, the largest of which are the Porcupine, the Tanana, and the Koyukuk Rivers. The Yukon River originates in northwestern Canada and, like the Kuskokwim River, which drains a large part of southwestern Alaska, discharges into the Bering Sea. The Noatak River in northwestern Alaska discharges into the Chukchi Sea. Major rivers in southern Alaska include the Susitna and the Matanuska Rivers, which discharge into Cook Inlet, and the Copper River, which discharges into the Gulf of Alaska. North of the Brooks Range, the Colville and the Sagavanirktok Rivers and numerous smaller streams discharge into the Arctic Ocean. In 1990, Alaska had a population of about 552,000 and, thus, is one of the least populated States in the Nation. Most of the population is concentrated in the cities of Anchorage, Fairbanks, and Juneau, all of which are located in lowland areas. The mountains, the frozen Arctic desert, the interior plateaus, and the areas covered with glaciers lack major population centers. Large parts of Alaska are uninhabited and much of the State is public land. Ground-water development has not occurred over most of these remote areas. The Hawaiian islands are the exposed parts of the Hawaiian Ridge, which is a large volcanic mountain range on the sea floor. Most of the Hawaiian Ridge is below sea level (fig. 31). The State of Hawaii consists of a group of 132 islands, reefs, and shoals that extend for more than 1,500 miles from southeast to northwest across the central Pacific Ocean between about 155

  20. The role of oscillatory brain activity in object processing and figure-ground segmentation in human vision.

    Science.gov (United States)

    Kinsey, K; Anderson, S J; Hadjipapas, A; Holliday, I E

    2011-03-01

    'figure/ground' stimulation suggest a possible dual role for gamma rhythms in visual object coding, and provide general support of the binding-by-synchronization hypothesis. As the power changes in alpha and beta activity were largely independent of the spatial location of the target, however, we conclude that their role in object processing may relate principally to changes in visual attention. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. Validation of CALIPSO space-borne-derived attenuated backscatter coefficient profiles using a ground-based lidar in Athens, Greece

    Directory of Open Access Journals (Sweden)

    R. E. Mamouri

    2009-09-01

    Full Text Available We present initial aerosol validation results for the space-borne lidar CALIOP (onboard the CALIPSO satellite) Level 1 attenuated backscatter coefficient profiles, using coincident observations performed with a ground-based lidar in Athens, Greece (37.9° N, 23.6° E). A multi-wavelength ground-based backscatter/Raman lidar system has been operating since 2000 at the National Technical University of Athens (NTUA) in the framework of the European Aerosol Research LIdar NETwork (EARLINET), the first lidar network for tropospheric aerosol studies on a continental scale. Since July 2006, a total of 40 coincident aerosol ground-based lidar measurements were performed over Athens during CALIPSO overpasses. The ground-based measurements were performed each time CALIPSO overpassed the station location within a maximum distance of 100 km. The duration of the ground-based lidar measurements was approximately two hours, centred on the satellite overpass time. From the analysis of the ground-based/satellite correlative lidar measurements, a mean bias with respect to the CALIPSO profiles of the order of 22% for daytime measurements and of 8% for nighttime measurements was found for altitudes between 3 and 10 km. The mean bias becomes much larger for altitudes lower than 3 km (of the order of 60%), which is attributed to the increase of aerosol horizontal inhomogeneity within the Planetary Boundary Layer, resulting in the observation of possibly different air masses by the two instruments. In cases of aerosol layers underlying Cirrus clouds, comparison results for aerosol tropospheric profiles become worse. This is attributed to the significant multiple scattering effects in Cirrus clouds experienced by CALIPSO, which result in an attenuation less than that measured by the ground-based lidar.

  2. Validation of GOME (ERS-2) NO2 vertical column data with ground-based measurements at Issyk-Kul (Kyrgyzstan)

    Science.gov (United States)

    Ionov, D.; Sinyakov, V.; Semenov, V.

    Since 1995, global monitoring of atmospheric nitrogen dioxide has been carried out through measurements of the nadir-viewing GOME spectrometer aboard the ERS-2 satellite. Continuous validation of these data by means of comparisons with well-controlled ground-based measurements is important to ensure the quality of GOME data products and improve the related retrieval algorithms. At the station of Issyk-Kul (Kyrgyzstan), ground-based spectroscopic observations of the NO2 vertical column have been made since 1983. The station is located on the northern shore of Issyk-Kul lake, 1650 meters above sea level (42.6 N, 77.0 E). The site is equipped with a grating spectrometer for twilight measurements of zenith-scattered solar radiation in the visible range, and applies the DOAS technique to retrieve the NO2 vertical column. It is included in the list of NDSC stations as a complementary one. The present study is focused on validation of GOME NO2 vertical column data, based on an 8-year comparison with correlative ground-based measurements at the Issyk-Kul station in 1996-2003. Within the investigation, the agreement of both individual and monthly averaged GOME measurements with corresponding twilight ground-based observations is examined. Such agreement is analyzed with respect to different conditions (season, sun elevation), temporal/spatial criteria choice (actual overpass location, correction for diurnal variation) and data processing (GDP version 2.7, 3.0). In addition, NO2 vertical columns were integrated from simultaneous stratospheric profile measurements by the NASA HALOE and SAGE-II/III satellite instruments and introduced to explain the differences with ground-based observations. In particular cases, NO2 vertical profiles retrieved from the twilight ground-based measurements at Issyk-Kul were also included in the comparison. Overall, summertime GOME NO2 vertical columns were found to be systematically lower than ground-based data.
This work was supported by International Association

  3. Characterization of Personal Privacy Devices (PPD) radiation pattern impact on the ground and airborne segments of the local area augmentation system (LAAS) at GPS L1 frequency

    Science.gov (United States)

    Alkhateeb, Abualkair M. Khair

    Personal Privacy Devices (PPDs) are radio-frequency transmitters that intentionally transmit in a frequency band used by other devices for the purpose of denying service to those devices. These devices have shown the potential to interfere with the ground and air sub-systems of the Local Area Augmentation System (LAAS), a GPS-based navigation aid at commercial airports. The Federal Aviation Administration (FAA) is concerned by the potential impact of these devices on GPS navigation aids at airports and has commenced an activity to determine the severity of this threat. In support of this effort, the research in this dissertation has been conducted under FAA Cooperative Agreement 2011-G-012, to investigate the impact of these devices on the LAAS. In order to investigate the impact of PPD Radio Frequency Interference (RFI) on the ground and air sub-systems of the LAAS, the work presented in phase one of this research characterizes the vehicle's impact on the PPD's Effective Isotropic Radiated Power (EIRP). A study was conceived in this research to characterize PPD performance by examining the on-vehicle radiation patterns as a function of vehicle type, jammer type, jammer location inside a vehicle and jammer orientation at each location. Phase two characterized the GPS radiation pattern of the Multipath Limiting Antenna (MLA). An MLA has to meet stringent requirements for acceptable signal detection and multipath rejection. The ARL-2100 is the most recent MLA proposed for use in the LAAS ground segment. The ground-based antenna's radiation pattern was modeled via HFSS, a commercial off-the-shelf CAD-based modeling code with a full-wave electromagnetic software simulation package that uses finite element analysis. Phase three of this work studied the characteristics of the GPS radiation pattern on commercial aircraft.
The airborne GPS antenna was modeled and the resulting radiation pattern on

  4. Estimating and validating ground-based timber harvesting production through computer simulation

    Science.gov (United States)

    Jingxin Wang; Chris B. LeDoux

    2003-01-01

    Estimating ground-based timber harvesting systems production with an object oriented methodology was investigated. The estimation model developed generates stands of trees, simulates chain saw, drive-to-tree feller-buncher, swing-to-tree single-grip harvester felling, and grapple skidder and forwarder extraction activities, and analyzes costs and productivity. It also...

  5. A calibration system for measuring 3D ground truth for validation and error analysis of robot vision algorithms

    Science.gov (United States)

    Stolkin, R.; Greig, A.; Gilby, J.

    2006-10-01

    An important task in robot vision is that of determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. However, it is seldom possible to explicitly measure the accuracy of these algorithms, since the ground-truth camera positions and orientations at each frame in a video sequence are not available for comparison with the outputs of the proposed vision systems. A method is presented for generating real visual test data with complete underlying ground truth. The method enables the production of long video sequences, filmed along complicated six-degree-of-freedom trajectories, featuring a variety of objects and scenes, for which complete ground-truth data are known including the camera position and orientation at every image frame, intrinsic camera calibration data, a lens distortion model and models of the viewed objects. This work encounters a fundamental measurement problem—how to evaluate the accuracy of measured ground truth data, which is itself intended for validation of other estimated data. Several approaches for reasoning about these accuracies are described.
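With ground-truth camera poses available at every frame, tracking accuracy reduces to per-frame rotation and translation errors. A minimal sketch of such a metric (an illustrative formulation, not the authors' specific evaluation code):

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Per-frame tracking error against ground truth: the rotation
    error is the angle of R_est^T * R_gt (in degrees), the translation
    error the Euclidean distance between the camera positions."""
    dR = R_est.T @ R_gt
    cos_angle = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)), np.linalg.norm(t_est - t_gt)

# Identity estimate vs a ground-truth 90-degree rotation about z
R_gt = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
ang, terr = pose_errors(np.eye(3), np.zeros(3), R_gt, np.array([0.1, 0.0, 0.0]))
print(f"{ang:.1f} deg, {terr:.2f} m")  # 90.0 deg, 0.10 m
```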

  6. Multimodal Navigation in Endoscopic Transsphenoidal Resection of Pituitary Tumors using Image-based Vascular and Cranial Nerve Segmentation: A Prospective Validation Study

    Science.gov (United States)

    Dolati, Parviz; Eichberg, Daniel; Golby, Alexandra; Zamani, Amir; Laws, Edward

    2016-01-01

    Introduction Transsphenoidal surgery (TSS) is a well-known approach for the treatment of pituitary tumors. However, lateral misdirection and vascular damage, intraoperative CSF leakage, and optic nerve and vascular injuries are all well-known complications, and the risk of adverse events is more likely in less experienced hands. This prospective study was conducted to validate the accuracy of image-based segmentation in localization of neurovascular structures during TSS. Methods Twenty-five patients with pituitary tumors underwent preoperative 3T MRI, which included thin-sectioned 3D SPACE T2, 3D time-of-flight and MPRAGE sequences. Images were reviewed by an expert independent neuroradiologist. Imaging sequences were loaded into the BrainLab iPlanNet (16/25 cases) or Stryker (9/25 cases) image guidance platforms for segmentation and pre-operative planning. After patient registration into the neuronavigation system and subsequent surgical exposure, each segmented neural or vascular element was validated by manual placement of the navigation probe on or as close as possible to the target. The audible pulsations of the bilateral ICA were confirmed using a micro-Doppler probe. Results Pre-operative segmentation of the ICA and cavernous sinus matched the intra-operative endoscopic and micro-Doppler findings in all cases (Dice Similarity Coefficient = 1). This information reassured the surgeons with regard to the lateral extent of bone removal at the sellar floor and the limits of lateral exploration. Excellent correspondence between image-based segmentation and the endoscopic view was also evident at the surface of the tumor and at the tumor-normal gland interfaces. This assisted in preventing unnecessary removal of the normal pituitary gland.
Image-guidance assisted the surgeons in localizing the optic nerve and chiasm in 64% of the cases and the diaphragma sella in 52% of cases, which helped to determine the limits of upward exploration and to decrease the risk of CSF

  7. Concurrent Validity of Physiological Cost Index in Walking over Ground and during Robotic Training in Subacute Stroke Patients

    Directory of Open Access Journals (Sweden)

    Anna Sofia Delussu

    2014-01-01

    Full Text Available Physiological Cost Index (PCI) has been proposed to assess gait demand. The purpose of the study was to establish whether PCI is a valid indicator in subacute stroke patients of the energy cost of walking (ECW) in different walking conditions, that is, over ground and on the Gait Trainer (GT) with body weight support (BWS). The study tested if correlations exist between PCI and ECW, indicating validity of the measure and, by implication, validity of PCI. Six patients (patient group, PG) with subacute stroke and 6 healthy age- and size-matched subjects as a control group (CG) performed, in a random sequence on different days, walking tests overground and on the GT with 0, 30, and 50% BWS. There was a good to excellent correlation between PCI and ECW in the observed walking conditions: in PG the Pearson correlation was 0.919 (p < 0.001); in CG the Pearson correlation was 0.852 (p < 0.001). In conclusion, the highly significant correlations between PCI and ECW, in all the observed walking conditions, suggest that PCI is a valid outcome measure in subacute stroke patients.

  8. Concurrent validity of Physiological Cost Index in walking over ground and during robotic training in subacute stroke patients.

    Science.gov (United States)

    Delussu, Anna Sofia; Morone, Giovanni; Iosa, Marco; Bragoni, Maura; Paolucci, Stefano; Traballesi, Marco

    2014-01-01

    Physiological Cost Index (PCI) has been proposed to assess gait demand. The purpose of the study was to establish whether PCI is a valid indicator in subacute stroke patients of the energy cost of walking (ECW) in different walking conditions, that is, over ground and on the Gait Trainer (GT) with body weight support (BWS). The study tested if correlations exist between PCI and ECW, indicating validity of the measure and, by implication, validity of PCI. Six patients (patient group (PG)) with subacute stroke and 6 healthy age- and size-matched subjects as a control group (CG) performed, in a random sequence on different days, walking tests overground and on the GT with 0, 30, and 50% BWS. There was a good to excellent correlation between PCI and ECW in the observed walking conditions: in PG the Pearson correlation was 0.919 (p < 0.001); in CG the Pearson correlation was 0.852 (p < 0.001). In conclusion, the highly significant correlations between PCI and ECW, in all the observed walking conditions, suggest that PCI is a valid outcome measure in subacute stroke patients.
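PCI, as commonly defined (MacGregor's index), is the walking-induced heart-rate increase divided by walking speed, in beats per metre. A minimal sketch with made-up numbers; the study's actual protocol and values are not reproduced here:

```python
def pci(hr_walk_bpm, hr_rest_bpm, speed_m_per_min):
    """Physiological Cost Index in beats per metre:
    (walking HR - resting HR) / walking speed."""
    return (hr_walk_bpm - hr_rest_bpm) / speed_m_per_min

# Hypothetical subject: resting 70 bpm, walking 100 bpm at 40 m/min
print(pci(100, 70, 40))  # 0.75 beats/m
```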

  9. Volumetric analysis of pelvic hematomas after blunt trauma using semi-automated seeded region growing segmentation: a method validation study.

    Science.gov (United States)

    Dreizin, David; Bodanapally, Uttam K; Neerchal, Nagaraj; Tirada, Nikki; Patlas, Michael; Herskovits, Edward

    2016-11-01

    Manually segmented traumatic pelvic hematoma volumes are strongly predictive of active bleeding at conventional angiography, but the method is time intensive, limiting its clinical applicability. We compared volumetric analysis using semi-automated region growing segmentation to manual segmentation and diameter-based size estimates in patients with pelvic hematomas after blunt pelvic trauma. A 14-patient cohort was selected in an anonymous randomized fashion from a dataset of patients with pelvic binders at MDCT, collected retrospectively as part of a HIPAA-compliant IRB-approved study from January 2008 to December 2013. To evaluate intermethod differences, one reader (R1) performed three volume measurements using the manual technique and three volume measurements using the semi-automated technique. To evaluate interobserver differences for semi-automated segmentation, a second reader (R2) performed three semi-automated measurements. One-way analysis of variance was used to compare differences in mean volumes. Time effort was also compared. Correlation between the two methods as well as two shorthand appraisals (greatest diameter, and the ABC/2 method for estimating ellipsoid volumes) was assessed with Spearman's rho (r). Intraobserver variability was lower for semi-automated compared to manual segmentation, with standard deviations ranging between ±5-32 mL and ±17-84 mL, respectively (p = 0.0003). There was no significant difference in mean volumes between the two readers' semi-automated measurements (p = 0.83); however, means were lower for the semi-automated compared with the manual technique (manual: mean and SD 309.6 ± 139 mL; R1 semi-auto: 229.6 ± 88.2 mL, p = 0.004; R2 semi-auto: 243.79 ± 99.7 mL, p = 0.021). Despite differences in means, the correlation between the two methods was very strong and highly significant (r = 0.91, p hematoma volumes correlate strongly with manually segmented volumes. Since semi-automated segmentation

  10. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique to image cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The various proposed PET segmentation strategies were validated under ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted in the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
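The combination described above, a k-means background estimate plus a threshold set between background and maximum uptake, can be sketched as follows. The threshold fraction and the synthetic phantom are made-up illustrations, not the parameters calibrated on the NEMA IQ Phantom in the study:

```python
import numpy as np

def segment_mtv(pet, threshold_frac=0.4, iters=20):
    """Sketch of a fully automatic threshold-based MTV segmentation:
    a 2-class k-means on voxel intensities estimates the background,
    then voxels above background + threshold_frac * (max - background)
    are kept."""
    vals = pet.astype(float).ravel()
    centers = np.array([vals.min(), vals.max()])  # init: background vs lesion
    for _ in range(iters):
        labels = np.abs(vals[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = vals[labels == k].mean()
    background = centers.min()
    threshold = background + threshold_frac * (vals.max() - background)
    return pet >= threshold

# Synthetic "phantom": a 3x3 hot lesion on a uniform warm background
pet = np.full((8, 8), 1.0)
pet[2:5, 2:5] = 10.0
mask = segment_mtv(pet)
print(int(mask.sum()))  # 9
```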

  11. Pathology-based validation of FDG PET segmentation tools for volume assessment of lymph node metastases from head and neck cancer

    Energy Technology Data Exchange (ETDEWEB)

    Schinagl, Dominic A.X. [Radboud University Nijmegen Medical Centre, Department of Radiation Oncology, Nijmegen (Netherlands); Radboud University Nijmegen Medical Centre, Department of Radiation Oncology (874), P.O. Box 9101, Nijmegen (Netherlands); Span, Paul N.; Kaanders, Johannes H.A.M. [Radboud University Nijmegen Medical Centre, Department of Radiation Oncology, Nijmegen (Netherlands); Hoogen, Frank J.A. van den [Radboud University Nijmegen Medical Centre, Department of Otorhinolaryngology, Head and Neck Surgery, Nijmegen (Netherlands); Merkx, Matthias A.W. [Radboud University Nijmegen Medical Centre, Department of Oral and Maxillofacial Surgery, Nijmegen (Netherlands); Slootweg, Piet J. [Radboud University Nijmegen Medical Centre, Department of Pathology, Nijmegen (Netherlands); Oyen, Wim J.G. [Radboud University Nijmegen Medical Centre, Department of Nuclear Medicine, Nijmegen (Netherlands)

    2013-12-15

    FDG PET is increasingly incorporated into radiation treatment planning of head and neck cancer. However, there are only limited data on the accuracy of radiotherapy target volume delineation by FDG PET. The purpose of this study was to validate FDG PET segmentation tools for volume assessment of lymph node metastases from head and neck cancer against the pathological method as the standard. Twelve patients with head and neck cancer and 28 metastatic lymph nodes eligible for therapeutic neck dissection underwent preoperative FDG PET/CT. The metastatic lymph nodes were delineated on CT (Node_CT) and ten PET segmentation tools were used to assess FDG PET-based nodal volumes: interpreting FDG PET visually (PET_VIS), applying an isocontour at a standardized uptake value (SUV) of 2.5 (PET_SUV), two segmentation tools with a fixed threshold of 40% and 50%, and two adaptive threshold-based methods. The latter four tools were applied with the primary tumour as reference and also with the lymph node itself as reference. Nodal volumes were compared with the true volume as determined by pathological examination. Both Node_CT and PET_VIS showed good correlations with the pathological volume. PET segmentation tools using the metastatic node as reference all performed well but not better than PET_VIS. The tools using the primary tumour as reference correlated poorly with pathology. PET_SUV was unsatisfactory in 35% of the patients due to merging of the contours of adjacent nodes. FDG PET accurately estimates metastatic lymph node volume, but beyond the detection of lymph node metastases (staging), it has no added value over CT alone for the delineation of routine radiotherapy target volumes. If FDG PET is used in radiotherapy planning, treatment adaptation or response assessment, we recommend an automated segmentation method for purposes of reproducibility and interinstitutional comparison. (orig.)

  12. An Experimental Facility to Validate Ground Source Heat Pump Optimisation Models for the Australian Climate

    Directory of Open Access Journals (Sweden)

    Yuanshen Lu

    2017-01-01

    Full Text Available Ground source heat pumps (GSHPs) are one of the most widespread forms of geothermal energy technology. They utilise the near-constant temperature of the ground below the frost line to achieve energy efficiencies two or three times that of conventional air-conditioners, consequently allowing a significant offset in electricity demand for space heating and cooling. Relatively mature GSHP markets are established in Europe and North America. GSHP implementation in Australia, however, is limited, due to high capital cost, uncertainties regarding optimum designs for the Australian climate, and limited consumer confidence in the technology. Existing GSHP design standards developed in the Northern Hemisphere are likely to lead to suboptimal performance in Australia, where demand might be much more cooling-dominated. There is an urgent need to develop Australia's own GSHP system optimisation principles on top of the industry standards to provide the confidence needed to bring the GSHP market out of its infancy. To assist in this, the Queensland Geothermal Energy Centre of Excellence (QGECE) has commissioned a fully instrumented GSHP experimental facility in Gatton, Australia, as a publicly accessible demonstration of the technology and a platform for systematic studies of GSHPs, including optimisation of design and operations. This paper presents a brief review of current GSHP use in Australia, the technical details of the Gatton GSHP facility, and an analysis of the observed cooling performance of this facility to date.

  13. Use of a tibial accelerometer to measure ground reaction force in running: A reliability and validity comparison with force plates.

    Science.gov (United States)

    Raper, Damian P; Witchalls, Jeremy; Philips, Elissa J; Knight, Emma; Drew, Michael K; Waddington, Gordon

    2018-01-01

    The use of microsensor technologies to conduct research and implement interventions in sports and exercise medicine has increased recently. The objective of this paper was to determine the validity and reliability of the ViPerform as a measure of load compared to vertical ground reaction force (GRF) as measured by force plates. Absolute reliability assessment, with concurrent validity. 10 professional triathletes ran 10 trials over force plates with the ViPerform mounted on the mid portion of the medial tibia. Calculated vertical ground reaction force data from the ViPerform was matched to the same stride on the force plate. Bland-Altman (BA) plot of comparative measure of agreement was used to assess the relationship between the calculated load from the accelerometer and the force plates. Reliability was calculated by intra-class correlation coefficients (ICC) with 95% confidence intervals. BA plot indicates minimal agreement between the measures derived from the force plate and ViPerform, with variation at an individual participant plot level. Reliability was excellent (ICC=0.877; 95% CI=0.825-0.917) in calculating the same vertical GRF in a repeated trial. Standard error of measure (SEM) equalled 99.83 units (95% CI=82.10-119.09), which, in turn, gave a minimum detectable change (MDC) value of 276.72 units (95% CI=227.32-330.07). The ViPerform does not calculate absolute values of vertical GRF similar to those measured by a force plate. It does provide a valid and reliable calculation of an athlete's lower limb load at constant velocity. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
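The reported SEM and MDC are linked by the standard relation MDC95 = 1.96 x sqrt(2) x SEM, which can be checked directly against the values above (a pure-Python sketch, not the authors' code):

```python
import math

def mdc95(sem):
    """Minimum detectable change at 95 % confidence from a standard error of measure."""
    return 1.96 * math.sqrt(2) * sem

print(round(mdc95(99.83), 2))  # prints: 276.71, matching the reported 276.72 units
```

The small discrepancy from the published 276.72 is rounding in the reported SEM.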

  14. Modelling floor heating systems using a validated two-dimensional ground coupled numerical model

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Kragh, Jesper; Roots, Peter

    2005-01-01

    This paper presents a two-dimensional simulation model of the heat losses and temperatures in a slab on grade floor with floor heating which is able to dynamically model the floor heating system. The aim of this work is to be able to model, in detail, the influence from the floor construction...... the floor. This model can be used to design energy efficient houses with floor heating focusing on the heat loss through the floor construction and foundation. It is found that it is important to model the dynamics of the floor heating system to find the correct heat loss to the ground, and further......, that the foundation has a large impact on the energy consumption of buildings heated by floor heating. Consequently, this detail should be in focus when designing houses with floor heating....

  15. A spike sorting toolbox for up to thousands of electrodes validated with ground truth recordings in vitro and in vivo

    Science.gov (United States)

    Lefebvre, Baptiste; Deny, Stéphane; Gardella, Christophe; Stimberg, Marcel; Jetter, Florian; Zeck, Guenther; Picaud, Serge; Duebel, Jens

    2018-01-01

    In recent years, multielectrode arrays and large silicon probes have been developed to record simultaneously from hundreds to thousands of densely packed electrodes. However, they require novel methods to extract the spiking activity of large ensembles of neurons. Here, we developed a new toolbox to sort spikes from these large-scale extracellular data. To validate our method, we performed simultaneous extracellular and loose patch recordings in rodents to obtain ‘ground truth’ data, where the solution to this sorting problem is known for one cell. The performance of our algorithm was always close to the best expected performance, over a broad range of signal-to-noise ratios, in vitro and in vivo. The algorithm is entirely parallelized and has been successfully tested on recordings with up to 4225 electrodes. Our toolbox thus offers a generic solution to accurately sort spikes from up to thousands of electrodes. PMID:29557782
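Ground-truth validation of a spike sorter typically amounts to matching the sorter's detected spike times against the patch-recorded spike times within a small tolerance. A generic sketch of such a matcher (not part of the published toolbox; all names and numbers are illustrative):

```python
import numpy as np

def match_spikes(detected, truth, tol=0.001):
    """Fraction of ground-truth spike times matched by a detected spike
    within +/- tol seconds; each detected spike is used at most once."""
    detected = sorted(detected)
    hits = 0
    used = set()
    for t in truth:
        i = int(np.searchsorted(detected, t))
        # check the nearest candidates on either side of the insertion point
        for j in (i - 1, i):
            if 0 <= j < len(detected) and j not in used and abs(detected[j] - t) <= tol:
                used.add(j)
                hits += 1
                break
    return hits / len(truth)

truth = [0.010, 0.052, 0.097]          # patch-clamp ("ground truth") spike times, s
detected = [0.0102, 0.0521, 0.200]     # sorter output, s
print(match_spikes(detected, truth))   # 2 of 3 ground-truth spikes matched
```

The complement of this fraction gives the miss rate; unmatched detected spikes give the false-positive rate.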

  16. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    Science.gov (United States)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15 to NOAA-18 and NOAA-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers north-western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfalls measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer in winter seasonal conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams where the 183-WSL
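The POD, FAR and HK indices quoted above come from the standard 2x2 rain/no-rain contingency table (hits, misses, false alarms, correct negatives). A minimal sketch with hypothetical counts (not the NIMROD figures):

```python
def dichotomous_scores(hits, misses, false_alarms, correct_negatives):
    """Categorical verification scores for rain / no-rain detection.

    POD = hits / (hits + misses)                       (probability of detection)
    FAR = false_alarms / (hits + false_alarms)         (false alarm ratio)
    HK  = POD - POFD   with POFD = false_alarms / (false_alarms + correct_negatives)
    """
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    pofd = false_alarms / (false_alarms + correct_negatives)
    return pod, far, pod - pofd

# Hypothetical satellite-vs-radar contingency counts
pod, far, hk = dichotomous_scores(800, 200, 400, 8600)
print(round(pod, 2), round(far, 2), round(hk, 2))  # prints: 0.8 0.33 0.76
```

HK (the Hanssen-Kuipers discriminant) rewards detection while penalizing false alarms relative to the no-rain population, which is why it is lower than POD in the reported statistics.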

  17. Investigations and model validation of a ground-coupled heat pump for the combination with solar collectors

    International Nuclear Information System (INIS)

    Pärisch, Peter; Mercker, Oliver; Warmuth, Jonas; Tepe, Rainer; Bertram, Erik; Rockendorf, Gunter

    2014-01-01

    The operation of ground-coupled heat pumps in combination with solar collectors requires comprehensive knowledge of the heat pump behavior under non-standard conditions. In particular, higher temperatures and varying flow rates in comparison to non-solar systems have to be taken into account. Furthermore, the dynamic behavior becomes more important. At ISFH, steady-state and dynamic tests of a typical brine/water heat pump have been carried out in order to analyze its behavior under varying operation conditions. It has been shown that rising source temperatures only significantly increase the coefficient of performance (COP) if the source temperature is below 10–20 °C, depending on the temperature lift between source and sink. The flow rate, which has been varied both on the source and the sink side, only showed a minor influence on the exergetic efficiency. Additionally, a heat pump model for TRNSYS has been validated under non-standard conditions. The results are assessed by means of TRNSYS simulations. -- Highlights: • A brine/water heat pump was tested under steady-state and transient conditions. • Decline of exergetic efficiency at low temperature lifts, no influence of flow rate. • Expected improvement by reciprocating compressor and electronic expansion valve for solar assisted heat sources. • A TRNSYS black box model (YUM) was validated and a flow rate correction was proven. • The start-up behavior is a very important parameter for system simulations.

  18. The SCEC Broadband Platform: A Collaborative Open-Source Software Package for Strong Ground Motion Simulation and Validation

    Science.gov (United States)

    Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.

    2014-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100 Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low- and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. Then, the BBP calculates a number of goodness-of-fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a given event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results

  19. Assessing the Relative Performance of Microwave-Based Satellite Rain Rate Retrievals Using TRMM Ground Validation Data

    Science.gov (United States)

    Wolff, David B.; Fisher, Brad L.

    2011-01-01

    Space-borne microwave sensors provide critical rain information used in several global multi-satellite rain products, which in turn are used for a variety of important studies, including landslide forecasting, flash flood warning, data assimilation, climate studies, and validation of model forecasts of precipitation. This study employs four years (2003-2006) of satellite data to assess the relative performance and skill of SSM/I (F13, F14 and F15), AMSU-B (N15, N16 and N17), AMSR-E (Aqua) and the TRMM Microwave Imager (TMI) in estimating surface rainfall based on direct instantaneous comparisons with ground-based rain estimates from Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) sites at Kwajalein, Republic of the Marshall Islands (KWAJ) and Melbourne, Florida (MELB). The relative performance of each of these satellite estimates is examined via comparisons with space- and time-coincident GV radar-based rain rate estimates. Because underlying surface terrain is known to affect the relative performance of the satellite algorithms, the data for MELB was further stratified into ocean, land and coast categories using a 0.25° terrain mask. Of all the satellite estimates compared in this study, TMI and AMSR-E exhibited considerably higher correlations and skills in estimating/observing surface precipitation. While SSM/I and AMSU-B exhibited lower correlations and skills for each of the different terrain categories, the SSM/I absolute biases trended slightly lower than AMSR-E over ocean, where the observations from both emission and scattering channels were used in the retrievals. AMSU-B exhibited the least skill relative to GV in all of the relevant statistical categories, and an anomalous spike was observed in the probability distribution functions near 1.0 mm/hr. This statistical artifact appears to be related to attempts by algorithm developers to include some lighter rain rates, not easily detectable by its scatter-only frequencies. AMSU

  20. Metrology of ground-based satellite validation: co-location mismatch and smoothing issues of total ozone comparisons

    Directory of Open Access Journals (Sweden)

    T. Verhoelst

    2015-12-01

    Full Text Available Comparisons with ground-based correlative measurements constitute a key component in the validation of satellite data on atmospheric composition. The error budget of these comparisons contains not only the measurement errors but also several terms related to differences in sampling and smoothing of the inhomogeneous and variable atmospheric field. A versatile system for Observing System Simulation Experiments (OSSEs), named OSSSMOSE, is used here to quantify these terms. Based on the application of pragmatic observation operators onto high-resolution atmospheric fields, it allows a simulation of each individual measurement, and consequently also of the differences to be expected from spatial and temporal field variations between the two measurements making up a comparison pair. As a topical case study, the system is used to evaluate the error budget of total ozone column (TOC) comparisons between GOME-type direct fitting (GODFITv3) satellite retrievals from GOME/ERS2, SCIAMACHY/Envisat, and GOME-2/MetOp-A, and ground-based direct-sun and zenith-sky reference measurements such as those from Dobson, Brewer, and zenith-scattered light (ZSL-DOAS) instruments, respectively. In particular, the focus is placed on the GODFITv3 reprocessed GOME-2A data record vs. the ground-based instruments contributing to the Network for the Detection of Atmospheric Composition Change (NDACC). The simulations are found to reproduce the actual measurements almost to within the measurement uncertainties, confirming that the OSSE approach and its technical implementation are appropriate. This work reveals that many features of the comparison spread and median difference can be understood as due to metrological differences, even when using strict co-location criteria. In particular, sampling difference errors exceed measurement uncertainties regularly at most mid- and high-latitude stations, with values up to 10 % and more in extreme cases. Smoothing difference errors only

  1. Multi-granularity synthesis segmentation for high spatial resolution Remote sensing images

    International Nuclear Information System (INIS)

    Yi, Lina; Liu, Pengfei; Qiao, Xiaojun; Zhang, Xiaoning; Gao, Yuan; Feng, Boyan

    2014-01-01

    Traditional segmentation methods can only partition an image in a single granularity space, with segmentation accuracy limited to that single granularity space. This paper proposes a multi-granularity synthesis segmentation method for high spatial resolution remote sensing images based on a quotient space model. Firstly, we divide the whole image area into multiple granules (regions), where each region consists of ground objects that have a similar optimal segmentation scale, and then select and synthesize the sub-optimal segmentations of each region to get the final segmentation result. To validate this method, the land cover category map is used to guide the scale synthesis of multi-scale image segmentations for Quickbird image land use classification. Firstly, the image is coarsely divided into multiple regions, each belonging to a certain land cover category. Then multi-scale segmentation results are generated by the Mumford-Shah function based region merging method. For each land cover category, the optimal segmentation scale is selected by the supervised segmentation accuracy assessment method. Finally, the optimal scales of segmentation results are synthesized under the guidance of the land cover category. Experiments show that the multi-granularity synthesis segmentation can produce more accurate segmentation than that of a single granularity space and benefits the classification

  2. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors.

    Science.gov (United States)

    Lange, Maximilian; Dechant, Benjamin; Rebmann, Corinna; Vohland, Michael; Cuntz, Matthias; Doktor, Daniel

    2017-08-11

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure.
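NDVI itself is the simple band ratio (NIR - red) / (NIR + red), computed identically for the ground-based and satellite sensors being compared. A minimal sketch (reflectance values are hypothetical):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Hypothetical reflectances: dense deciduous canopy vs. bare soil
values = ndvi([0.45, 0.30], [0.05, 0.25])
print(values)
```

Dense vegetation yields NDVI near 0.8 here, while the soil-like pair stays near 0.1; phenological metrics are then extracted from the NDVI time series.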

  3. Validation of Atmosphere/Ionosphere Signals Associated with Major Earthquakes by Multi-Instrument Space-Borne and Ground Observations

    Science.gov (United States)

    Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Parrot, Michel; Liu, J. Y.; Yang, T. F.; Arellano-Baeza, Alonso; Kafatos, M.; Taylor, Patrick

    2012-01-01

    regions of the atmosphere and the modifications, by dc electric fields, in the ionosphere-atmosphere electric circuit. We retrospectively analyzed temporal and spatial variations of four different physical parameters (gas/radon counting rate, lineament changes, long-wave radiation transitions and ionospheric electron density/plasma variations) characterizing the state of the lithosphere/atmosphere coupling several days before the onset of the earthquakes. The validation process consisted of two phases: A. Case studies of seven recent major earthquakes: Japan (M9.0, 2011), China (M7.9, 2008), Italy (M6.3, 2009), Samoa (M7, 2009), Haiti (M7.0, 2010) and Chile (M8.8, 2010); and B. A continuous retrospective analysis performed over two different regions with high seismicity, Taiwan and Japan, for 2003-2009. Satellite, ground surface, and troposphere data were obtained from Terra/ASTER, Aqua/AIRS, POES, and ionospheric variations from DEMETER and COSMIC-I data. Radon and GPS/TEC data were obtained from monitoring sites in Taiwan, Japan and Italy and from global ionosphere maps (GIM), respectively. Our analysis of ground and satellite data during the occurrence of 7 global earthquakes has shown the presence of anomalies in the atmosphere. Our results for the Tohoku M9.0 earthquake show that on March 7th, 2011 (4 days before the main shock and 1 day before the M7.2 foreshock of March 8, 2011) a rapid increase of emitted infrared radiation was observed in the satellite data and an anomaly developed near the epicenter. The GPS/TEC data indicate an increase and variation in electron density reaching a maximum value on March 8. From March 3 to 11 a large increase in electron concentration was recorded at all four Japanese ground-based ionosondes, which returned to normal after the main earthquake. 
A similar approach for analyzing atmospheric and ionospheric parameters was applied for China (M7.9, 2008), Italy (M6.3, 2009), Samoa (M7, 2009), Haiti (M7.0, 2010) and Chile (M8.8, 2010).

  4. Development and Experimental Validation of a TRNSYS Dynamic Tool for Design and Energy Optimization of Ground Source Heat Pump Systems

    Directory of Open Access Journals (Sweden)

    Félix Ruiz-Calvo

    2017-09-01

    Full Text Available Ground source heat pump (GSHP) systems are an efficient technology for renewable heating and cooling in buildings. To optimize not only the design but also the operation of the system, a complete dynamic model becomes a highly useful tool, since it allows testing any design modifications and different optimization strategies without actually implementing them at the experimental facility. Usually, this type of system presents strongly dynamic operating conditions. Therefore, the model should be able to predict not only the steady-state behavior of the system but also the short-term response. This paper presents a complete GSHP system model based on an experimental facility located at Universitat Politècnica de València. The installation was constructed in the framework of a European collaborative project entitled GeoCool. The model, developed in TRNSYS, has been validated against experimental data, and it accurately predicts both the short- and long-term behavior of the system.

  5. Automatic segmentation of human cortical layer-complexes and architectural areas using diffusion MRI and its validation

    Directory of Open Access Journals (Sweden)

    Matteo Bastiani

    2016-11-01

    Full Text Available Recently, several magnetic resonance imaging contrast mechanisms have been shown to distinguish cortical substructure corresponding to selected cortical layers. Here, we investigate cortical layer and area differentiation by automated unsupervised clustering of high-resolution diffusion MRI data. Several groups of adjacent layers could be distinguished in human primary motor and premotor cortex. We then used the signature of diffusion MRI signals along cortical depth as a criterion to detect area boundaries and find borders at which the signature changes abruptly. We validate our clustering results by histological analysis of the same tissue. These results confirm earlier studies showing that diffusion MRI can probe layer-specific intracortical fiber organization and, moreover, suggest that it contains enough information to automatically classify architecturally distinct cortical areas. We discuss the strengths and weaknesses of the automatic clustering approach and its appeal for MR-based cortical histology.

  6. Modelling flow and heat transfer through unsaturated chalk - Validation with experimental data from the ground surface to the aquifer

    Science.gov (United States)

    Thiéry, Dominique; Amraoui, Nadia; Noyer, Marie-Luce

    2018-01-01

    During the winter and spring of 2000-2001, large floods occurred in northern France (Somme River Basin) and southern England (Patcham area of Brighton) in valleys developed on Chalk outcrops. The flood durations were particularly long (more than 3 months in the Somme Basin) and caused significant damage in both countries. To improve the understanding of groundwater flooding in Chalk catchments, an experimental site was set up in the Hallue basin, which is located in the Somme River Basin (France). The unsaturated fractured chalk formation overlying the Chalk aquifer was monitored to understand its reaction to long and heavy rainfall events when it reaches a near-saturation state. The water content and soil temperature were monitored to a depth of 8 m, and the matrix pressure was monitored down to the water table, 26.5 m below ground level. The monitoring extended over a 2.5-year period (2006-2008) under natural conditions and during two periods when heavy artificial infiltration was induced. The objective of this paper is to describe a vertical numerical flow model, based on Richards' equation and developed using these data, to simulate infiltrating rainwater flow from the ground surface to the saturated aquifer. The MARTHE computer code, which models the unsaturated-saturated continuum, was adapted to reproduce the monitored high-saturation periods. Composite constitutive functions (hydraulic conductivity-saturation and pressure-saturation) that integrate the increase in hydraulic conductivity near saturation and the extra available porosity resulting from fractures were introduced into the code. Using these composite constitutive functions, the model was able to accurately simulate the water contents and pressures at all depths over the entire monitored period, including the infiltration tests. The soil temperature was also accurately simulated at all depths, except during the infiltration tests, which contributes to the model validation. 
The model was used
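For context, the one-dimensional vertical form of Richards' equation that such unsaturated-flow models solve can be written as follows (standard textbook form, not reproduced from the paper):

```latex
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial z}\left[\, K(h)\left(\frac{\partial h}{\partial z} + 1\right) \right]
```

where θ is the volumetric water content, h the matric pressure head, K(h) the unsaturated hydraulic conductivity, and z the vertical coordinate (positive upward). The composite constitutive functions described above modify K(h) and the θ(h) retention curve near saturation to account for fracture flow.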

  7. Development and validation of a prognostic model incorporating texture analysis derived from standardised segmentation of PET in patients with oesophageal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Kieran G. [Cardiff University, Division of Cancer and Genetics, Cardiff (United Kingdom); Hills, Robert K. [Cardiff University, Haematology Clinical Trials Unit, Cardiff (United Kingdom); Berthon, Beatrice; Marshall, Christopher [Wales Research and Diagnostic PET Imaging Centre, Cardiff (United Kingdom); Parkinson, Craig; Spezi, Emiliano [Cardiff University, School of Engineering, Cardiff (United Kingdom); Lewis, Wyn G. [University Hospital of Wales, Department of Upper GI Surgery, Cardiff (United Kingdom); Crosby, Tom D.L. [Department of Oncology, Velindre Cancer Centre, Cardiff (United Kingdom); Roberts, Stuart Ashley [University Hospital of Wales, Department of Clinical Radiology, Cardiff (United Kingdom)

    2018-01-15

    This retrospective cohort study developed a prognostic model incorporating PET texture analysis in patients with oesophageal cancer (OC). Internal validation of the model was performed. Consecutive OC patients (n = 403) were chronologically separated into development (n = 302, September 2010-September 2014, median age = 67.0, males = 227, adenocarcinomas = 237) and validation cohorts (n = 101, September 2014-July 2015, median age = 69.0, males = 78, adenocarcinomas = 79). Texture metrics were obtained using a machine-learning algorithm for automatic PET segmentation. A Cox regression model including age, radiological stage, treatment and 16 texture metrics was developed. Patients were stratified into quartiles according to a prognostic score derived from the model. A p-value < 0.05 was considered statistically significant. Primary outcome was overall survival (OS). Six variables were significantly and independently associated with OS: age [HR = 1.02 (95% CI 1.01-1.04), p < 0.001], radiological stage [1.49 (1.20-1.84), p < 0.001], treatment [0.34 (0.24-0.47), p < 0.001], log(TLG) [5.74 (1.44-22.83), p = 0.013], log(Histogram Energy) [0.27 (0.10-0.74), p = 0.011] and Histogram Kurtosis [1.22 (1.04-1.44), p = 0.017]. The prognostic score demonstrated significant differences in OS between quartiles in both the development (χ² = 143.14, df = 3, p < 0.001) and validation cohorts (χ² = 20.621, df = 3, p < 0.001). This prognostic model can risk stratify patients and demonstrates the additional benefit of PET texture analysis in OC staging. (orig.)
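The prognostic score from such a Cox model is the linear predictor, the sum of beta_i * x_i with beta_i = log(HR_i); patients are then ranked into quartiles by this score. A sketch using the hazard ratios reported above (the patient's covariate values are hypothetical, and the study's exact covariate coding and centering are not known from the abstract):

```python
import math

# log-hazard-ratio coefficients recovered from the reported HRs
coef = {
    "age": math.log(1.02),
    "stage": math.log(1.49),
    "treatment": math.log(0.34),
    "log_tlg": math.log(5.74),
    "log_hist_energy": math.log(0.27),
    "hist_kurtosis": math.log(1.22),
}

def prognostic_score(patient):
    """Cox model linear predictor: sum of beta_i * x_i over the covariates."""
    return sum(coef[k] * x for k, x in patient.items())

# Hypothetical patient covariates (coding chosen purely for illustration)
patient = {"age": 67, "stage": 3, "treatment": 1,
           "log_tlg": 2.0, "log_hist_energy": -1.0, "hist_kurtosis": 4.0}
score = prognostic_score(patient)
print(round(score, 2))  # prints: 7.04
```

Higher scores correspond to higher modeled hazard; quartile cut-points are taken from the development cohort's score distribution.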

  8. Validation of low-volume enrichment protocols for detection of Escherichia coli O157 in raw ground beef components, using commercial kits.

    Science.gov (United States)

    Ahmed, Imtiaz; Hughes, Denise; Jenson, Ian; Karalis, Tass

    2009-03-01

    Testing of beef destined for use in ground beef products for the presence of Escherichia coli O157:H7 has become an important cornerstone of control and verification activities within many meat supply chains. Validation of the ability of methods to detect low levels of E. coli O157:H7 is critical to confidence in test systems. Many rapid methods have been validated against standard cultural methods for 25-g samples. In this study, a number of previously validated enrichment broths and commercially available test kits were validated for the detection of low numbers of E. coli O157:H7 in 375-g samples of raw ground beef component matrices using 1 liter of enrichment broth (large-sample:low-volume enrichment protocol). Standard AOAC International methods for 25-g samples in 225 ml of enrichment broth, using the same media, incubation conditions, and test kits, were used as reference methods. No significant differences were detected in the ability of any of the tests to detect low levels of E. coli O157:H7 in samples of raw ground beef components when enriched according to standard or large-sample:low-volume enrichment protocols. The use of large-sample:low-volume enrichment protocols provides cost savings for media and logistical benefits when handling and incubating large numbers of samples.

  9. Contour tracing for segmentation of mammographic masses

    International Nuclear Information System (INIS)

    Elter, Matthias; Held, Christian; Wittenberg, Thomas

    2010-01-01

    CADx systems have the potential to support radiologists in the difficult task of discriminating benign and malignant mammographic lesions. The segmentation of mammographic masses from the background tissue is an important module of CADx systems designed for the characterization of mass lesions. In this work, a novel approach to this task is presented. The segmentation is performed by automatically tracing the mass's contour between manually provided landmark points defined on the mass's margin. The performance of the proposed approach is compared to the performance of implementations of three state-of-the-art approaches based on region growing and dynamic programming. For an unbiased comparison of the different segmentation approaches, optimal parameters are selected for each approach by means of tenfold cross-validation and a genetic algorithm. Furthermore, segmentation performance is evaluated on a dataset of ROI and ground-truth pairs. The proposed method outperforms the three state-of-the-art methods. The benchmark dataset will be made available with publication of this paper and will be the first publicly available benchmark dataset for mass segmentation.
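Performance on ROI/ground-truth pairs is commonly summarized with an overlap score such as the Dice coefficient; a generic sketch (the paper's exact evaluation metric is not specified in the abstract):

```python
import numpy as np

def dice(seg, gt):
    """Dice overlap between a binary segmentation and its ground truth."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

# Tiny illustrative masks: 4 segmented pixels, 3 ground-truth pixels, 3 shared
seg = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
gt  = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
print(dice(seg, gt))  # 2*3 / (4+3) = 6/7, about 0.857
```

A Dice of 1.0 means perfect agreement; values above roughly 0.7 are often taken as acceptable mass segmentations.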

  10. Using simulated fluorescence cell micrographs for the evaluation of cell image segmentation algorithms.

    Science.gov (United States)

    Wiesmann, Veit; Bergler, Matthias; Palmisano, Ralf; Prinzen, Martin; Franz, Daniela; Wittenberg, Thomas

    2017-03-18

    Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability, which influences the validation results of automated cell segmentation pipelines. We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) An expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces the segmentation performance of that pipeline on real fluorescent cell micrographs. The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data are suited to evaluate image segmentation pipelines more efficiently and reproducibly than is possible with manually annotated real micrographs.

  11. A Dirichlet process mixture model for automatic (18)F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions.

    Science.gov (United States)

    Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco

    2016-05-01

    The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve was determined to
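
    The DPM approach described above can be sketched with scikit-learn's variational Bayesian mixture, which truncates a Dirichlet process prior so that the number of effective components is inferred from the data rather than fixed in advance. This is an illustrative stand-in, not the authors' R implementation; the function name `dpm_segment` and the choice of keeping only the highest-mean component as "tumor" are assumptions made for the sketch.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dpm_segment(uptake, max_components=10, seed=0):
    """Cluster voxel uptake values with a (truncated) Dirichlet-process
    mixture and return a boolean mask for the highest-uptake component."""
    x = uptake.reshape(-1, 1)
    dpm = BayesianGaussianMixture(
        n_components=max_components,  # truncation level, not a fixed K
        weight_concentration_prior_type="dirichlet_process",
        random_state=seed,
    ).fit(x)
    labels = dpm.predict(x)
    hot = int(np.argmax(dpm.means_.ravel()))  # highest-mean component
    return (labels == hot).reshape(uptake.shape)
```

    In practice the mask would be restricted to a region of interest around the lesion, as the study does; here the whole array is clustered for brevity.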

  12. A Dirichlet process mixture model for automatic {sup 18}F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions

    Energy Technology Data Exchange (ETDEWEB)

    Giri, Maria Grazia, E-mail: mariagrazia.giri@ospedaleuniverona.it; Cavedon, Carlo [Medical Physics Unit, University Hospital of Verona, P.le Stefani 1, Verona 37126 (Italy); Mazzarotto, Renzo [Radiation Oncology Unit, University Hospital of Verona, P.le Stefani 1, Verona 37126 (Italy); Ferdeghini, Marco [Nuclear Medicine Unit, University Hospital of Verona, P.le Stefani 1, Verona 37126 (Italy)

    2016-05-15

    Purpose: The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on {sup 18}F-fluorodeoxyglucose positron emission tomography ({sup 18}F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. Methods: The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10–37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Results: Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a

  13. A Dirichlet process mixture model for automatic 18F-FDG PET image segmentation: Validation study on phantoms and on lung and esophageal lesions

    International Nuclear Information System (INIS)

    Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco

    2016-01-01

    Purpose: The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. Methods: The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10–37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM-based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Results: Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve

  14. Bridging Ground Validation and Algorithms: Using Scattering and Integral Tables to Incorporate Observed DSD Correlations into Satellite Algorithms

    Science.gov (United States)

    Williams, C. R.

    2012-12-01

    The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follow a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, which determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves with individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual-frequency Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V
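
    The mass-weighted quantities underlying the Dm-Sm relationship can be computed directly from a binned DSD: Dm is the ratio of the fourth to the third moment, and Sm is the mass-weighted standard deviation about Dm. The helper below is a hedged sketch (names and binning are illustrative, not the Working Group's code); a Dm-Sm power law Sm = a·Dm^b could then be fit by linear regression in log-log space.

```python
import numpy as np

def mass_spectrum_stats(D, N):
    """Mass-weighted mean diameter Dm and mass-spectrum standard
    deviation Sm from a binned DSD N(D); D in mm, N in m^-3 mm^-1."""
    dD = np.gradient(D)                       # bin widths (non-uniform grids OK)
    m3 = np.sum(N * D**3 * dD)                # 3rd moment = mass normalization
    Dm = np.sum(N * D**4 * dD) / m3           # 4th / 3rd moment
    Sm = np.sqrt(np.sum(N * (D - Dm)**2 * D**3 * dD) / m3)
    return Dm, Sm
```

    For a gamma DSD N(D) = N0 D^mu exp(-Lambda D), the mass spectrum is itself gamma-distributed, so Dm = (mu + 4)/Lambda and Sm = sqrt(mu + 4)/Lambda, which provides an easy sanity check.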

  15. 3D ground‐motion simulations of Mw 7 earthquakes on the Salt Lake City segment of the Wasatch fault zone: Variability of long‐period (T≥1  s) ground motions and sensitivity to kinematic rupture parameters

    Science.gov (United States)

    Moschetti, Morgan P.; Hartzell, Stephen; Ramirez-Guzman, Leonardo; Frankel, Arthur; Angster, Stephen J.; Stephenson, William J.

    2017-01-01

    We examine the variability of long‐period (T≥1 s) earthquake ground motions from 3D simulations of Mw 7 earthquakes on the Salt Lake City segment of the Wasatch fault zone, Utah, from a set of 96 rupture models with varying slip distributions, rupture speeds, slip velocities, and hypocenter locations. Earthquake ruptures were prescribed on a 3D fault representation that satisfies geologic constraints and maintained distinct strands for the Warm Springs and for the East Bench and Cottonwood faults. Response spectral accelerations (SA; 1.5–10 s; 5% damping) were measured, and average distance scaling was well fit by a simple functional form that depends on the near‐source intensity level SA0(T) and a corner distance Rc: SA(R,T) = SA0(T)[1 + (R/Rc)]^(−1). Period‐dependent hanging‐wall effects were evident and increased the ground motions by factors of about 2–3, though the effects appeared partially attributable to differences in shallow site response for sites on the hanging wall and footwall of the fault. Comparisons with modern ground‐motion prediction equations (GMPEs) found that the simulated ground motions were generally consistent, except within deep sedimentary basins, where simulated ground motions were greatly underpredicted. Ground‐motion variability exhibited strong lateral variations and, at some sites, exceeded the ground‐motion variability indicated by GMPEs. The effects on the ground motions of changing the values of the five kinematic rupture parameters can largely be explained by three predominant factors: distance to high‐slip subevents, dynamic stress drop, and changes in the contributions from directivity. These results emphasize the need for further characterization of the underlying distributions and covariances of the kinematic rupture parameters used in 3D ground‐motion simulations employed in probabilistic seismic‐hazard analyses.
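
    The fitted distance-scaling form quoted above is simple enough to state as a one-line function. This is only a transcription of the functional form; SA0(T) and Rc must be supplied by the user, and the study's fitted values are not reproduced here.

```python
def sa_distance_scaling(R, sa0, rc):
    """Average distance scaling of spectral acceleration:
    SA(R, T) = SA0(T) * (1 + R/Rc)^(-1), with R, Rc in the same units."""
    return sa0 / (1.0 + R / rc)
```

    At R = 0 the function returns the near-source level SA0(T), and at R = Rc the predicted amplitude has fallen to half that level.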

  16. Managing Media: Segmenting Media Through Consumer Expectancies

    Directory of Open Access Journals (Sweden)

    Matt Eastin

    2014-04-01

    It has long been understood that consumers are motivated to use media differently. However, given the lack of comparative model analysis, this assumption is without empirical validation, and thus the orientation of segmentation from a media management perspective is without motivational grounds. Thus, evolving the literature on media consumption, the current study develops and compares models of media segmentation within the context of use. From this study, six models of media expectancies were constructed so that motivational differences between media (i.e., local and national newspapers, network and cable television, radio, and Internet) could be observed. Utilizing higher-order statistical analyses, the data indicate differences across a model comparison approach for media motivations. Furthermore, these differences vary across numerous demographic factors. Results afford theoretical advancement within the literature of consumer media consumption as well as provide media planners with insight into consumer choices.

  17. Functional Validation of an Alpha-Actinin-4 Mutation as a Potential Cause of an Aggressive Presentation of Adolescent Focal Segmental Glomerulosclerosis: Implications for Genetic Testing.

    Directory of Open Access Journals (Sweden)

    Di Feng

    Genetic testing in the clinic and research lab is becoming more routinely used to identify rare genetic variants. However, attributing these rare variants as the cause of disease in an individual patient remains challenging. Here, we report a patient who presented with nephrotic syndrome and focal segmental glomerulosclerosis (FSGS) with collapsing features at age 14. Despite treatment, her kidney disease progressed to end-stage within a year of diagnosis. Through genetic testing, a Y265H variant of unknown clinical significance in the alpha-actinin-4 gene (ACTN4) was identified. This variant has not been seen previously in FSGS patients, nor is it present in genetic databases. Her clinical presentation is different from previous descriptions of ACTN4-mediated FSGS, which is characterized by sub-nephrotic proteinuria and slow progression to end-stage kidney disease. We performed in vitro and cellular assays to characterize this novel ACTN4 variant before attributing causation. We found that ACTN4 with either Y265H or K255E (a known disease-causing mutation) increased the actin bundling activity of ACTN4 in vitro, was associated with the formation of intracellular aggregates, and increased podocyte contractile force. Despite the absence of a familial pattern of inheritance, these similar biological changes caused by the Y265H and K255E amino acid substitutions suggest that this new variant is potentially the cause of FSGS in this patient. Our studies highlight that functional validation in complement with genetic testing may be required to confirm the etiology of rare disease, especially in the setting of unusual clinical presentations.

  18. Smoke Management: Toward a Data Base to Validate PB-Piedmont - Numerical Simulation of Smoke on the Ground at Night

    Science.gov (United States)

    Gary L. Achtemeier

    1999-01-01

    The use of fire for controlled burning to meet objectives for silviculture or for ecosystem management carries the risk of liability for smoke. Near-ground smoke can degrade air quality, reduce visibility, aggravate health problems, and create a general nuisance. At night, smoke can locally limit visibility over roadways creating serious hazards to transportation. PB-...

  19. Validation of the AGDISP model for predicting airborne atrazine spray drift: a South African ground application case study

    CSIR Research Space (South Africa)

    Nsibande, SA

    2015-06-01

    Full Text Available Air dispersion software models for evaluating pesticide spray drift during application have been developed that can potentially serve as a cheaper convenient alternative to field monitoring campaigns. Such models require validation against field...

  20. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour, a connected set of boundary edge fragments in the edge map of the scene, around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.
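
    As a toy analogue of segmenting the region that contains the fixation point, the sketch below flood-fills outward from the fixation until it hits edge pixels, returning the enclosed region. This deliberately ignores the paper's actual machinery (polar edge maps, cue combination, depth boundaries) and only illustrates the idea of a region enclosed by a contour around the fixation; all names are assumptions.

```python
from collections import deque

def segment_from_fixation(edge_map, fixation):
    """Flood-fill the region containing the fixation point, stopping at
    edge pixels. edge_map: 2D grid of 0 (free) / 1 (edge); fixation: (row, col).
    Returns the set of (row, col) cells enclosed with the fixation."""
    rows, cols = len(edge_map), len(edge_map[0])
    seen = [[False] * cols for _ in range(rows)]
    seen[fixation[0]][fixation[1]] = True
    queue = deque([fixation])
    region = set()
    while queue:
        r, c = queue.popleft()
        region.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not seen[nr][nc] and edge_map[nr][nc] == 0:
                seen[nr][nc] = True
                queue.append((nr, nc))
    return region
```

    With a closed contour in the edge map, the returned region is exactly the interior containing the fixation; with a broken contour it leaks, which is why the paper insists the enclosing contour be a closed depth boundary.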

  1. Comparison of atlas-based techniques for whole-body bone segmentation

    DEFF Research Database (Denmark)

    Arabi, Hossein; Zaidi, Habib

    2017-01-01

    To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via a leave-one-out cross-validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean... Evaluation was carried out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as Dice similarity (DSC) and relative volume difference (RVD), considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth.
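
    The two reported metrics are straightforward to compute from binary masks. A minimal sketch (the function name is assumed): DSC is twice the overlap divided by the total volume of the two masks, and RVD is the signed volume difference relative to the ground truth.

```python
import numpy as np

def dice_and_rvd(seg, ref):
    """Dice similarity coefficient and relative volume difference between
    two boolean masks, with ref taken as the ground-truth bone mask."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    inter = np.logical_and(seg, ref).sum()
    dsc = 2.0 * inter / (seg.sum() + ref.sum())   # 1.0 = perfect overlap
    rvd = (seg.sum() - ref.sum()) / ref.sum()     # >0 = over-segmentation
    return dsc, rvd
```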

  2. In-Situ Load System for Calibrating and Validating Aerodynamic Properties of Scaled Aircraft in Ground-Based Aerospace Testing Applications

    Science.gov (United States)

    Commo, Sean A. (Inventor); Lynn, Keith C. (Inventor); Landman, Drew (Inventor); Acheson, Michael J. (Inventor)

    2016-01-01

    An In-Situ Load System for calibrating and validating aerodynamic properties of scaled aircraft in ground-based aerospace testing applications includes an assembly having upper and lower components that are pivotably interconnected. A test weight can be connected to the lower component to apply a known force to a force balance. The orientation of the force balance can be varied, and the measured forces from the force balance can be compared to applied loads at various orientations to thereby develop calibration factors.

  3. Essays in international market segmentation

    NARCIS (Netherlands)

    Hofstede, ter F.

    1999-01-01

    The primary objective of this thesis is to develop and validate new methodologies to improve the effectiveness of international segmentation strategies. The current status of international market segmentation research is reviewed in an introductory chapter, which provided a number of

  4. Validation of clinical acceptability of an atlas-based segmentation algorithm for the delineation of organs at risk in head and neck cancer

    Energy Technology Data Exchange (ETDEWEB)

    Hoang Duc, Albert K., E-mail: albert.hoangduc.ucl@gmail.com; McClelland, Jamie; Modat, Marc; Cardoso, M. Jorge; Mendelson, Alex F. [Center for Medical Image Computing, University College London, London WC1E 6BT (United Kingdom); Eminowicz, Gemma; Mendes, Ruheena; Wong, Swee-Ling; D’Souza, Derek [Radiotherapy Department, University College London Hospitals, 235 Euston Road, London NW1 2BU (United Kingdom); Veiga, Catarina [Department of Medical Physics and Bioengineering, University College London, London WC1E 6BT (United Kingdom); Kadir, Timor [Mirada Medical UK, Oxford Center for Innovation, New Road, Oxford OX1 1BY (United Kingdom); Ourselin, Sebastien [Centre for Medical Image Computing, University College London, London WC1E 6BT (United Kingdom)

    2015-09-15

    Purpose: The aim of this study was to assess whether clinically acceptable segmentations of organs at risk (OARs) in head and neck cancer can be obtained automatically and efficiently using the novel “similarity and truth estimation for propagated segmentations” (STEPS) compared to the traditional “simultaneous truth and performance level estimation” (STAPLE) algorithm. Methods: First, 6 OARs were contoured by 2 radiation oncologists in a dataset of 100 patients with head and neck cancer on planning computed tomography images. Each image in the dataset was then automatically segmented with STAPLE and STEPS using those manual contours. Dice similarity coefficient (DSC) was then used to compare the accuracy of these automatic methods. Second, in a blind experiment, three separate and distinct trained physicians graded manual and automatic segmentations into one of the following three grades: clinically acceptable as determined by universal delineation guidelines (grade A), reasonably acceptable for clinical practice upon manual editing (grade B), and not acceptable (grade C). Finally, STEPS segmentations graded B were selected and one of the physicians manually edited them to grade A. Editing time was recorded. Results: Significant improvements in DSC can be seen when using the STEPS algorithm on large structures such as the brainstem, spinal canal, and left/right parotid compared to the STAPLE algorithm (all p < 0.001). In addition, across all three trained physicians, manual and STEPS segmentation grades were not significantly different for the brainstem, spinal canal, parotid (right/left), and optic chiasm (all p > 0.100). In contrast, STEPS segmentation grades were lower for the eyes (p < 0.001). Across all OARs and all physicians, STEPS produced segmentations graded as well as manual contouring at a rate of 83%, giving a lower bound on this rate of 80% with 95% confidence. Reduction in manual interaction time was on average 61% and 93% when automatic
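
    STAPLE and STEPS estimate a consensus segmentation together with per-rater performance levels via expectation-maximization; as a much simpler point of reference, propagated atlas labels can be fused by a per-voxel majority vote. The sketch below shows only that baseline, not the STAPLE or STEPS algorithms themselves, and the function name is an assumption.

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse binary segmentations by per-voxel strict majority vote --
    a simple baseline for EM-based fusion such as STAPLE/STEPS."""
    stack = np.stack([np.asarray(m, bool) for m in label_maps])
    votes = stack.sum(axis=0)
    return votes * 2 > len(label_maps)   # True where a strict majority agrees
```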

  5. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Also traditional marketing theory has taken in consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its origin in other sciences, as for example biology, anthropology etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain a basic understanding of grouping people. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has for example investigated the positioning of fish in relation to other food products...

  6. Segmental Vitiligo.

    Science.gov (United States)

    van Geel, Nanja; Speeckaert, Reinhart

    2017-04-01

    Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Ground-motion modeling of the 1906 San Francisco earthquake, part I: Validation using the 1989 Loma Prieta earthquake

    Science.gov (United States)

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.

    2008-01-01

    We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitude and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) units (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI units (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.

  8. Multi-body simulation of a canine hind limb: model development, experimental validation and calculation of ground reaction forces

    Directory of Open Access Journals (Sweden)

    Wefstaedt Patrick

    2009-11-01

    Background: Among other causes, the long-term result of hip prostheses in dogs is determined by aseptic loosening. Prevention of prosthesis complications can be achieved by optimization of the tribological system, which finally results in improved implant duration. In this context, a computerized model for the calculation of hip joint loadings during different motions would be of benefit. As a first step in the development of such an inverse dynamic multi-body simulation (MBS) model, we here present the setup of a canine hind limb model applicable for the calculation of ground reaction forces. Methods: The anatomical geometries of the MBS model have been established using computed tomography (CT) and magnetic resonance imaging (MRI) data. The CT data were collected from the pelvis, femora, tibiae and pads of a mixed-breed adult dog. Geometric information about 22 muscles of the pelvic extremity of 4 mixed-breed adult dogs was determined using MRI. Kinematic and kinetic data obtained by motion analysis of a clinically healthy dog during a gait cycle (1 m/s) on an instrumented treadmill were used to drive the model in the multi-body simulation. Results and Discussion: The vertical ground reaction forces (z-direction) calculated by the MBS system show a maximum deviation of 1.75%BW for the left and 4.65%BW for the right hind limb from the treadmill measurements. The calculated peak ground reaction forces in z- and y-direction were found to be comparable to the treadmill measurements, whereas the curve characteristics of the forces in y-direction were not in complete alignment. Conclusion: It could be demonstrated that the developed MBS model is suitable for simulating ground reaction forces of dogs during walking. In forthcoming investigations the model will be developed further for the calculation of forces and moments acting on the hip joint during different movements, which can be of help in context with the in

  9. Fluence map segmentation

    International Nuclear Information System (INIS)

    Rosenwald, J.-C.

    2008-01-01

    The lecture addressed the following topics: 'Interpreting' the fluence map; The sequencer; Reasons for difference between desired and actual fluence map; Principle of 'Step and Shoot' segmentation; Large number of solutions for given fluence map; Optimizing 'step and shoot' segmentation; The interdigitation constraint; Main algorithms; Conclusions on segmentation algorithms (static mode); Optimizing intensity levels and monitor units; Sliding window sequencing; Synchronization to avoid the tongue-and-groove effect; Accounting for physical characteristics of MLC; Importance of corrections for leaf transmission and offset; Accounting for MLC mechanical constraints; The 'complexity' factor; Incorporating the sequencing into the optimization algorithm; Data transfer to the treatment machine; Interface between the R&V (record and verify) system and the accelerator; and Conclusions on fluence map segmentation (Segmentation is part of the overall inverse planning procedure; 'Step and Shoot' and 'Dynamic' options are available for most TPS (depending on accelerator model); The segmentation phase tends to come into the optimization loop; The physical characteristics of the MLC have a large influence on the final dose distribution; The IMRT plans (MU and relative dose distribution) must be carefully validated). (P.A.)
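
    The 'Step and Shoot' idea mentioned in the lecture, decomposing a fluence map into deliverable MLC segments, can be illustrated on a single leaf pair with a greedy 1D decomposition: repeatedly find a run of positive fluence, deliver its minimum as one static segment, and subtract. This is a didactic sketch, not one of the lecture's named algorithms; a real sequencer must also respect interdigitation and other MLC mechanical constraints across leaf pairs.

```python
def step_and_shoot_1d(profile):
    """Greedily decompose a 1D integer fluence profile into static
    segments (left, right, weight), with right exclusive, such that the
    weighted segments sum back to the original profile."""
    profile = list(profile)
    segments = []
    while any(v > 0 for v in profile):
        # locate the first maximal run of strictly positive fluence
        l = next(i for i, v in enumerate(profile) if v > 0)
        r = l
        while r < len(profile) and profile[r] > 0:
            r += 1
        w = min(profile[l:r])          # largest weight deliverable over the run
        segments.append((l, r, w))
        for i in range(l, r):
            profile[i] -= w            # remaining runs handled in later passes
    return segments
```

    Each pass zeroes at least one bixel in the run it touches, so the loop terminates; minimizing the number of segments or the total monitor units, as the lecture discusses, requires the more sophisticated algorithms it surveys.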

  10. Development of a ground segment for the scientific analysis of MIPAS/ENVISAT. Final report; Aufbau eines Bodensegments fuer die wissenschaftliche Auswertung von MIPAS/ENVISAT. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Stiller, G.P.; Clarmann, T. von; Fischer, H.; Grabowski, U.; Lutz, R.; Kiefer, M.; Milz, M.; Schulirsch, M.

    2001-11-30

    Based on the scientific work on the level-2 data analysis performed in the parallel project 07UFE10/6, a partly automated analysis system for the MIPAS/ENVISAT data has been developed. The system fulfils the scientific requirements in terms of high flexibility and the need for high effectivity and good computational performance. We expect that about 10% of all MIPAS spectral data can be exhaustively analysed with respect to the geophysical information they contain. The components of the system are a retrieval kernel consisting of a radiative transfer forward model and the inversion with respect to the geophysical parameters, a database system which stores and administrates the level-1, level-2, and additional data, automated pre- and post-processing modules, as well as a computer cluster consisting of 8 Compaq workstations and a RAID system at its core. The system is controlled via graphical user interfaces (GUIs). It makes it possible to analyse the MIPAS data with respect to ca. 45 trace species, their isotopomers and horizontally inhomogeneous distributions, non-LTE effects, and the microphysical properties of atmospheric particles, and it supports instrument characterisation and validation activities. (orig.)

  11. A 2.5D finite element and boundary element model for the ground vibration from trains in tunnels and validation using measurement data

    Science.gov (United States)

    Jin, Qiyun; Thompson, David J.; Lurcock, Daniel E. J.; Toward, Martin G. R.; Ntotsios, Evangelos

    2018-05-01

    A numerical model is presented for the ground-borne vibration produced by trains running in tunnels. The model makes use of the assumption that the geometry and material properties are invariant in the axial direction. It is based on the so-called two-and-a-half dimensional (2.5D) coupled Finite Element and Boundary Element methodology, in which a two-dimensional cross-section is discretised into finite elements and boundary elements and the third dimension is represented by a Fourier transform over wavenumbers. The model is applied to a particular case of a metro line built with a cast-iron tunnel lining. An equivalent continuous model of the tunnel is developed to allow it to be readily implemented in the 2.5D framework. The tunnel structure and the track are modelled using solid and beam finite elements while the ground is modelled using boundary elements. The 2.5D track-tunnel-ground model is coupled with a train consisting of several vehicles, which are represented by multi-body models. The response caused by the passage of a train is calculated as the sum of the dynamic component, excited by the combined rail and wheel roughness, and the quasi-static component, induced by the constant moving axle loads. Field measurements have been carried out to provide experimental validation of the model. These include measurements of the vibration of the rail, the tunnel invert and the tunnel wall. In addition, simultaneous measurements were made on the ground surface above the tunnel. Rail roughness and track characterisation measurements were also made. The prediction results are compared with measured vibration obtained during train passages, with good agreement.
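The core trick of the 2.5D approach described above is that the 2D cross-sectional problem is solved once per axial wavenumber, and the 3D response along the tunnel axis is then recovered by an inverse Fourier transform over those wavenumbers. The sketch below illustrates only that synthesis step; the transfer function H(k) is a made-up smooth stand-in for the per-wavenumber FE-BE solution, not the paper's model.

```python
import numpy as np

# 2.5D synthesis sketch: a 2D problem solved at each axial wavenumber k
# yields a transfer function H(k); the response along the axial coordinate x
# is the inverse Fourier transform of H over k.

n, dx = 1024, 0.5                        # samples and spacing along the tunnel axis (m)
k = 2 * np.pi * np.fft.rfftfreq(n, dx)   # axial wavenumbers (rad/m)
H = 1.0 / (1.0 + (k / 0.2) ** 2)         # assumed per-wavenumber response (illustrative)

u = np.fft.irfft(H, n=n) / dx            # response along the axis for a point harmonic load
x = np.fft.fftfreq(n, 1.0 / (n * dx))    # axial positions (m), in FFT order
```

Because H(k) is real, positive, and decays with k, the synthesized response is real and peaks directly above the load (x = 0), as expected physically.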

  12. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    Science.gov (United States)

    Martin, Spencer; Brophy, Mark; Palma, David; Louie, Alexander V.; Yu, Edward; Yaremko, Brian; Ahmad, Belal; Barron, John L.; Beauchemin, Steven S.; Rodrigues, George; Gaede, Stewart

    2015-02-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development.
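The STAPLE algorithm used above to build the multi-expert ground truth is an EM scheme: it alternates between estimating a soft consensus segmentation and re-estimating each rater's sensitivity and specificity against that consensus. A minimal binary-label sketch (voxels flattened to a vector, uniform prior; the full algorithm handles spatial priors and multi-label cases) is:

```python
import numpy as np

def staple_binary(D, prior=0.5, iters=50):
    """Minimal EM sketch of STAPLE for binary segmentations.
    D: (raters, voxels) array of 0/1 decisions.
    Returns consensus foreground probabilities and per-rater
    (sensitivity, specificity) estimates."""
    R, N = D.shape
    p = np.full(R, 0.9)   # sensitivity estimates
    q = np.full(R, 0.9)   # specificity estimates
    for _ in range(iters):
        # E-step: posterior probability that each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: re-estimate rater performance against the soft consensus
        p = np.clip((D * W).sum(axis=1) / W.sum(), 1e-6, 1 - 1e-6)
        q = np.clip(((1 - D) * (1 - W)).sum(axis=1) / (1 - W).sum(), 1e-6, 1 - 1e-6)
    return W, p, q

# synthetic example: three raters, two of whom each make one error
truth = np.array([1, 1, 1, 0, 0, 0, 1, 0])
raters = np.vstack([truth, truth, truth])
raters[1, 3] = 1   # one rater over-segments a voxel
raters[2, 0] = 0   # another rater misses a voxel
W, sens, spec = staple_binary(raters)
consensus = (W > 0.5).astype(int)
```

Unlike simple majority voting, the weighting adapts to each rater's estimated reliability, which is why STAPLE is preferred for constructing ground-truth estimates from physician contours.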

  13. A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging

    International Nuclear Information System (INIS)

    Martin, Spencer; Rodrigues, George; Gaede, Stewart; Brophy, Mark; Barron, John L; Beauchemin, Steven S; Palma, David; Louie, Alexander V; Yu, Edward; Yaremko, Brian; Ahmad, Belal

    2015-01-01

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours to expedite 4D-CT based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by 6 physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases providing auto-segmented GTVs and motion encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of auto-segmentation employed graph cuts for 3D-shape reconstruction and point-set registration-based analysis yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51  ±  1.92) to (97.27  ±  0.28)% volumetric overlap of the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracies with reduced variance for all patients ranging from 90.87 to 98.57% volumetric overlap of the ground truth volume. Additional metrics supported these observations with statistical significance. Accuracy of auto-segmentation was shown to be largely independent of selection of the initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter-/intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than that of current clinical practice and requires further development. (paper)

  14. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 (United States); Chen, Ken-Chung; Tang, Zhen [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Xia, James J., E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery, Shanghai Jiao Tong University School of Medicine, Shanghai Ninth People’s Hospital, Shanghai 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 and Department of Brain and Cognitive Engineering, Korea University, Seoul 02841 (Korea, Republic of)

    2016-01-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate 3D models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first-layer of random forest classifier that can select discriminative features for segmentation. Based on the first-layer of trained classifier, the probability maps are updated, which will be employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors’ method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method
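The layered classifier scheme described above (auto-context) can be sketched with scikit-learn: each layer's random forest sees the original appearance features plus the probability map produced by the previous layer. This is a deliberately reduced illustration on synthetic 1D "voxels"; the paper's context features are patches sampled from the probability maps, and a scalar per voxel is used here instead.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# toy stand-in for CBCT voxels: one weak appearance feature per voxel
n = 2000
truth = (rng.random(n) < 0.5).astype(int)
appearance = truth + rng.normal(0.0, 0.8, n)
prob_map = np.full(n, 0.5)                     # initial prior probability map

# sequence of classifiers: each layer is trained on appearance + current map
for layer in range(3):
    X = np.column_stack([appearance, prob_map])
    clf = RandomForestClassifier(n_estimators=50, random_state=layer)
    clf.fit(X, truth)
    prob_map = clf.predict_proba(X)[:, 1]      # updated map feeds the next layer

acc = np.mean((prob_map > 0.5) == truth)
```

In the real pipeline the initial probability map comes from majority voting over aligned expert-segmented atlases rather than a constant prior, and training/testing are of course performed on disjoint subjects.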

  15. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    International Nuclear Information System (INIS)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Chen, Ken-Chung; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate 3D models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first-layer of random forest classifier that can select discriminative features for segmentation. Based on the first-layer of trained classifier, the probability maps are updated, which will be employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors’ method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method

  16. ASSESSMENT OF SEA ICE FREEBOARD AND THICKNESS IN MCMURDO SOUND, ANTARCTICA, DERIVED BY GROUND VALIDATED SATELLITE ALTIMETER DATA

    Directory of Open Access Journals (Sweden)

    D. Price

    2012-07-01

    Full Text Available This investigation employs ICESat to derive freeboard measurements in McMurdo Sound in the western Ross Sea, Antarctica, for the period 2003-2009. Methods closely follow those previously presented in the literature but are complemented by a good understanding of general sea ice characteristics in the study region from extensive temporal ground investigations, albeit with limited spatial coverage. The aim of remote sensing applications in this area is to extend the good knowledge of sea ice characteristics within these limited areas to the wider McMurdo Sound and western Ross Sea region. The seven-year Austral spring (September, October, and November) investigation is presented for sea ice freeboard alone. An interannual comparison of mean freeboard indicates an increase in multiyear sea ice freeboard from 1.08 m in 2003 to 1.15 m in 2009, with positive and negative variation in between. No significant trend was detected for first-year sea ice freeboard. Further, an Envisat imagery investigation complements the freeboard assessment. The multiyear sea ice was observed to increase by 254% of its original 2003 area, as first-year sea ice persisted through the 2004 melt season into 2005. This maximum coverage then gradually diminished by 2009 to 20% above the original 2003 value. The mid-study-period increase is likely attributable to the passage of iceberg B-15A minimising oceanic pressures and preventing sea ice breakout in the region.
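The usual step from altimeter-derived freeboard to the thickness mentioned in the record title is hydrostatic (isostatic) equilibrium. A minimal sketch, using typical textbook densities rather than the values calibrated in the study:

```python
# Hydrostatic conversion of sea ice freeboard to thickness:
# weight of ice + snow equals the weight of displaced sea water,
#   rho_i*T + rho_s*h_s = rho_w*(T - F)  =>  T = (rho_w*F + rho_s*h_s)/(rho_w - rho_i)

RHO_W = 1027.0   # sea water density (kg/m^3), typical value
RHO_I = 925.0    # sea ice density (kg/m^3), typical value
RHO_S = 300.0    # snow density (kg/m^3), typical value

def thickness_from_freeboard(freeboard_m, snow_depth_m=0.0):
    """Ice thickness (m) from ice freeboard (m), assuming isostatic balance."""
    return (RHO_W * freeboard_m + RHO_S * snow_depth_m) / (RHO_W - RHO_I)

# multiyear ice freeboards reported for McMurdo Sound (snow term omitted here)
t_2003 = thickness_from_freeboard(1.08)
t_2009 = thickness_from_freeboard(1.15)
```

Note the strong sensitivity of the result to the assumed densities and snow load; the roughly tenfold freeboard-to-thickness ratio is why small freeboard errors propagate into large thickness errors.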

  17. M3 version 3.0: Verification and validation; Hydrochemical model of ground water at repository site

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, Javier B. (Dept. of Earth Sciences, Univ. of Zaragoza, Zaragoza (Spain)); Laaksoharju, Marcus (Geopoint AB, Sollentuna (Sweden)); Skaarman, Erik (Abscondo, Bromma (Sweden)); Gurban, Ioana (3D-Terra (Canada))

    2009-01-15

    Hydrochemical evaluation is a complex type of work that is carried out by specialists. The outcome of this work is generally presented as qualitative models and process descriptions of a site. To support and help to quantify the processes in an objective way, a multivariate mathematical tool entitled M3 (Multivariate Mixing and Mass balance calculations) has been constructed. The computer code can be used to trace the origin of the groundwater, and to calculate the mixing proportions and mass balances from groundwater data. The M3 code is a groundwater response model, which means that changes in the groundwater chemistry in terms of sources and sinks are traced in relation to an ideal mixing model. The complexity of the measured groundwater data determines the configuration of the ideal mixing model. Deviations from the ideal mixing model are interpreted as being due to reactions. Assumptions concerning important mineral phases altering the groundwater or uncertainties associated with thermodynamic constants do not affect the modelling because the calculations are solely based on the measured groundwater composition. M3 uses the opposite approach to that of many standard hydrochemical models. In M3, mixing is evaluated and calculated first. The constituents that cannot be described by mixing are described by reactions. The M3 model consists of three steps: the first is a standard principal component analysis, followed by mixing and finally mass balance calculations. The measured groundwater composition can be described in terms of mixing proportions (%), while the sinks and sources of an element associated with reactions are reported in mg/L. This report contains a set of verification and validation exercises with the intention of building confidence in the use of the M3 methodology. At the same time, clear answers are given to questions related to the accuracy and the precision of the results, including the inherent uncertainties and the errors that can be made
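The M3 idea, as described in the abstract, is to explain a measured groundwater sample first by mixing of reference end-member waters, and to attribute whatever mixing cannot explain to reactions (sources and sinks, in mg/L). A minimal sketch of the mixing and mass-balance steps is below; the end-member compositions are invented for illustration, the sum-to-one constraint is imposed with a heavily weighted row rather than a constrained solver, and M3's preceding principal component analysis step is omitted.

```python
import numpy as np

# rows: three hypothetical end-member waters; columns: Cl, Na, HCO3 (mg/L)
E = np.array([[19000.0, 10500.0, 140.0],   # brine
              [    5.0,    10.0,  90.0],   # meteoric water
              [ 6500.0,  3600.0, 120.0]])  # old saline water

sample = np.array([3200.0, 1900.0, 160.0]) # measured groundwater (mg/L)

# least-squares mixing proportions with sum(x) = 1 enforced by a weighted row
A = np.vstack([E.T, 1e6 * np.ones(3)])
b = np.append(sample, 1e6)
x, *_ = np.linalg.lstsq(A, b, rcond=None)

mixing = x / x.sum()              # mixing proportions (fractions of each end-member)
residual = sample - E.T @ x       # deviation from ideal mixing -> "reactions" (mg/L)
```

In the real code the proportions are additionally constrained to be non-negative, and the deviations are reported per element as gains (sources) or losses (sinks) relative to the ideal mixing model.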

  18. Evaluation of right ventricular function by coronary computed tomography angiography using a novel automated 3D right ventricle volume segmentation approach: a validation study.

    Science.gov (United States)

    Burghard, Philipp; Plank, Fabian; Beyer, Christoph; Müller, Silvana; Dörler, Jakob; Zaruba, Marc-Michael; Pölzl, Leo; Pölzl, Gerhard; Klauser, Andrea; Rauch, Stefan; Barbieri, Fabian; Langer, Christian-Ekkehardt; Schgoer, Wilfried; Williamson, Eric E; Feuchtner, Gudrun

    2018-06-04

    To evaluate right ventricle (RV) function by coronary computed tomography angiography (CTA) using a novel automated three-dimensional (3D) RV volume segmentation tool in comparison with clinical reference modalities. Twenty-six patients with severe end-stage heart failure [reduced left ventricle (LV) ejection fraction (EF)] underwent CTA, transthoracic echocardiography (TTE), and right heart invasive catheterisation (IC). Automated 3D RV volume segmentation was successful in 26 (100%) patients. Read-out time was 3 min 33 s (range, 1 min 50 s-4 min 33 s). RV EF by CTA was more strongly correlated with right atrial pressure (RAP) by IC (r = -0.595; p = 0.006) than with TAPSE (r = 0.366, p = 0.94). When comparing TAPSE with RAP by IC (r = -0.317, p = 0.231), a weak-to-moderate non-significant inverse correlation was found. Interobserver correlation was high (r = 0.96). Mean attenuation of the right atrium (RA) and right ventricle (RV) was 196.9 ± 75.3 and 217.5 ± 76.1 HU, respectively. Measurement of RV function by CTA using a novel 3D volumetric segmentation tool is fast and reliable when a dedicated biphasic injection protocol is applied. RV EF from CTA is a closer surrogate of RAP than TAPSE by TTE. • Evaluation of RV function by cardiac CTA using a novel 3D volume segmentation tool is fast and reliable. • A biphasic contrast agent injection protocol ensures homogeneous RV contrast attenuation. • Cardiac CT is a valuable alternative modality to CMR for the evaluation of RV function.

  19. Forward Modeling and validation of a new formulation to compute self-potential signals associated with ground water flow

    Directory of Open Access Journals (Sweden)

    A. Bolève

    2007-10-01

    Full Text Available The classical formulation of the coupled hydroelectrical flow in porous media is based on a linear formulation of two coupled constitutive equations for the electrical current density and the seepage velocity of the water phase that obey Onsager's reciprocity. This formulation shows that the streaming current density is controlled by the gradient of the fluid pressure of the water phase and a streaming current coupling coefficient that depends on the so-called zeta potential. Recently, a new formulation has been introduced in which the streaming current density is directly connected to the seepage velocity of the water phase and to the excess of electrical charge per unit pore volume in the porous material. The advantages of this formulation are numerous. First, this new formulation is more intuitive, not only in terms of establishing a constitutive equation for the generalized Ohm's law but also in specifying boundary conditions for the influence of the flow field upon the streaming potential. With the new formulation, the streaming potential coupling coefficient shows a decrease of its magnitude with permeability, in agreement with published results. The new formulation has been extended to the inertial laminar flow regime and to unsaturated conditions with applications to the vadose zone. This formulation is suitable for modelling self-potential signals in the field. We investigate infiltration of water from an agricultural ditch, vertical infiltration of water into a sinkhole, and preferential horizontal flow of ground water in a paleochannel. For the three cases reported in the present study, a good match is obtained between the finite element simulations and the field observations. Thus, this formulation could be useful for the inverse mapping of the geometry of groundwater flow from self-potential field measurements.
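The new formulation described above writes the streaming current density as j_s = Q_v u (excess charge per unit pore volume times seepage velocity) instead of the classical zeta-potential form. A hedged numerical sketch is below: the empirical Q_v(k) fit is one published relation (log10 Q_v = -9.2 - 0.82 log10 k, attributed here from memory and used only for illustration), and the fluid and rock parameters are generic, not those of the study.

```python
import numpy as np

eta = 1.0e-3            # water dynamic viscosity (Pa s)
sigma = 1.0e-2          # bulk electrical conductivity (S/m), illustrative
rho_g = 1000.0 * 9.81   # converts hydraulic head gradient to pressure gradient (Pa/m)

def excess_charge(k):
    """Empirical excess charge density Q_v (C/m^3) vs permeability k (m^2)."""
    return 10.0 ** (-9.2 - 0.82 * np.log10(k))

def coupling_coefficient(k):
    """Streaming potential coupling coefficient (V per m of head):
    with Darcy u = -(k/eta) grad p and j_s = Q_v * u,
    C = -Q_v * k * rho_g / (eta * sigma)."""
    return -excess_charge(k) * k * rho_g / (eta * sigma)

C_sand = coupling_coefficient(1.0e-12)   # a sand-like permeability
```

The resulting coupling coefficient is negative and of order millivolts per metre of head for sand-like permeability, which is the right order of magnitude for field self-potential work.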

  20. Mixed segmentation

    DEFF Research Database (Denmark)

    Hansen, Allan Grutt; Bonde, Anders; Aagaard, Morten

    content analysis and audience segmentation in a single-source perspective. The aim is to explain and understand target groups in relation to, on the one hand, emotional response to commercials or other forms of audio-visual communication and, on the other hand, living preferences and personality traits...

  1. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
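The abstract names an information-theoretic metric without specifying it; one standard choice in this family is the variation of information, VI = H(A|B) + H(B|A), which is zero exactly when two segmentations induce the same partition. The sketch below is a generic illustration of that idea, not a reconstruction of the authors' specific metric.

```python
import numpy as np
from math import log

def variation_of_information(a, b):
    """Variation of information between two label images:
    VI = H(A|B) + H(B|A); 0 means identical partitions."""
    a, b = np.ravel(a), np.ravel(b)
    vi = 0.0
    for la in np.unique(a):
        for lb in np.unique(b):
            pab = np.mean((a == la) & (b == lb))   # joint label probability
            if pab == 0.0:
                continue
            pa, pb = np.mean(a == la), np.mean(b == lb)
            # -p(a,b) * [log p(a,b)/p(a) + log p(a,b)/p(b)]
            vi -= pab * (log(pab / pa) + log(pab / pb))
    return vi

truth = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1]])
off_by_one = truth.copy()
off_by_one[0, 2] = 0                               # one mislabelled pixel
```

Comparing a candidate segmentation to ground truth with such a metric gives the "absolute performance given ground truth" mentioned in the abstract; comparing two candidate outputs to each other gives a relative score.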

  2. Proof-of-Concept of a Networked Validation Environment for Distributed Air/Ground NextGen Concepts

    Science.gov (United States)

    Grisham, James; Larson, Natalie; Nelson, Justin; Reed, Joshua; Suggs, Marvin; Underwood, Matthew; Papelis, Yiannis; Ballin, Mark G.

    2013-01-01

    The National Airspace System (NAS) must be improved to increase capacity, reduce flight delays, and minimize environmental impacts of air travel. NASA has been tasked with aiding the Federal Aviation Administration (FAA) in NAS modernization. Automatic Dependent Surveillance-Broadcast (ADS-B) is an enabling technology that is fundamental to realization of the Next Generation Air Transportation System (NextGen). Despite the 2020 FAA mandate requiring ADS-B Out equipage, airspace users are lacking incentives to equip with the requisite ADS-B avionics. A need exists to validate, in flight tests, advanced concepts of operation (ConOps) that rely on ADS-B and other data links without requiring costly equipage. A potential solution is presented in this paper. It is possible to emulate future data link capabilities using the existing in-flight Internet and reduced-cost test equipment. To establish proof-of-concept, a high-fidelity traffic operations simulation was modified to include a module that simulated Internet transmission of ADS-B messages. An advanced NASA ConOp, Flight Deck Interval Management (FIM), was used to evaluate technical feasibility. A preliminary assessment of the effects of latency and dropout rate on FIM was performed. Flight hardware that would be used by the proposed test environment was connected to the simulation so that data transfer from aircraft systems to test equipment could be verified. The results indicate that the FIM ConOp, and therefore many other advanced ConOps with equal or lesser response characteristics and data requirements, can be evaluated in flight using the proposed concept.
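The latency/dropout assessment mentioned above amounts to passing a surveillance message stream through an imperfect channel. The toy sketch below shows the mechanics of such a simulation module; the latency distribution, dropout rate, and message content are illustrative parameters, not values from the NASA study.

```python
import random

def through_channel(messages, mean_latency_s=0.5, dropout_rate=0.1, seed=1):
    """Pass (send_time, state) messages through a lossy, delaying channel.
    Returns surviving messages sorted by arrival time."""
    rng = random.Random(seed)
    delivered = []
    for t_sent, state in messages:
        if rng.random() < dropout_rate:
            continue                                   # packet lost
        delay = rng.expovariate(1.0 / mean_latency_s)  # random transit latency
        delivered.append((t_sent + delay, state))
    delivered.sort()                                   # arrival order, not send order
    return delivered

# one ADS-B-like state message per second for 100 s
stream = [(t, {"alt_ft": 35000, "gs_kt": 450}) for t in range(100)]
received = through_channel(stream)
loss = 1.0 - len(received) / len(stream)
```

Sweeping `mean_latency_s` and `dropout_rate` while feeding the delivered stream to the algorithm under test (here, a FIM speed-control law) is the basic experiment design the abstract describes.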

  3. Retrieval of nitrogen dioxide stratospheric profiles from ground-based zenith-sky UV-visible observations: validation of the technique through correlative comparisons

    Directory of Open Access Journals (Sweden)

    F. Hendrick

    2004-01-01

    Full Text Available A retrieval algorithm based on the Optimal Estimation Method (OEM) has been developed in order to provide vertical distributions of NO2 in the stratosphere from ground-based (GB) zenith-sky UV-visible observations. It has been applied to observational data sets from the NDSC (Network for Detection of Stratospheric Change) stations of Harestua (60° N, 10° E) and Andøya (69° N, 16° E) in Norway. The information content and retrieval errors have been analyzed following a formalism used for characterizing ozone profiles retrieved from solar infrared absorption spectra. In order to validate the technique, the retrieved NO2 vertical profiles and columns have been compared to correlative balloon and satellite observations. Such extensive validation of the profile and column retrievals was not reported in previously published work on profiling from GB UV-visible measurements. A good agreement - generally better than 25% - has been found with the SAOZ (Système d'Analyse par Observations Zénithales) and DOAS (Differential Optical Absorption Spectroscopy) balloons. A similar agreement has been reached with correlative satellite data from the HALogen Occultation Experiment (HALOE) and Polar Ozone and Aerosol Measurement (POAM III) instruments above 25 km altitude. Below 25 km, a systematic underestimation - by up to 40% in some cases - of both HALOE and POAM III profiles by our GB profile retrievals has been observed, more likely pointing to a limitation of both satellite instruments at these altitudes. We have concluded that our study strengthens our confidence in the reliability of the retrieval of vertical distribution information from GB UV-visible observations and offers new perspectives in the use of GB UV-visible network data for validation purposes.
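The Optimal Estimation Method named above combines an a priori profile with the measurements through a linear (or linearized) forward model. A minimal linear OEM step, with an invented toy forward model rather than the NO2 retrieval setup, looks like this:

```python
import numpy as np

def oem_retrieve(y, K, xa, Sa, Se):
    """One linear OEM step: x_hat = xa + G (y - K xa), with gain
    G = (Sa^-1 + K^T Se^-1 K)^-1 K^T Se^-1.
    Also returns the averaging kernel A = G K (sensitivity to the true state)."""
    Sa_inv, Se_inv = np.linalg.inv(Sa), np.linalg.inv(Se)
    G = np.linalg.inv(Sa_inv + K.T @ Se_inv @ K) @ K.T @ Se_inv
    x_hat = xa + G @ (y - K @ xa)
    A = G @ K
    return x_hat, A

# toy 3-layer profile observed through a 2-channel forward model
K = np.array([[1.0, 0.5, 0.1],
              [0.2, 0.8, 0.4]])
xa = np.zeros(3)          # a priori profile
Sa = np.eye(3)            # a priori covariance
Se = 0.01 * np.eye(2)     # measurement noise covariance
x_true = np.array([1.0, 0.5, 0.2])
x_hat, A = oem_retrieve(K @ x_true, K, xa, Sa, Se)
```

The trace of the averaging kernel (the degrees of freedom for signal) is the quantity behind the "information content" analysis the abstract mentions: it says how many independent pieces of profile information the measurements actually constrain.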

  4. Validation of POLDER/ADEOS data using a ground-based lidar network: Preliminary results for semi-transparent and cirrus clouds

    Science.gov (United States)

    Chepfer, H.; Sauvage, L.; Flamant, P. H.; Pelon, J.; Goloub, P.; Brogniez, G.; spinhirne, J.; Lavorato, M.; Sugimoto, N.

    1998-01-01

    At mid and tropical latitudes, cirrus clouds are present more than 50% of the time in satellite observations. Due to their large spatial and temporal coverage, and their associated low temperatures, cirrus clouds have a major influence on the Earth-Ocean-Atmosphere energy balance through their effects on the incoming solar radiation and the outgoing infrared radiation. At present the impact of cirrus clouds on climate is well recognized but remains to be assessed more precisely, as their optical and radiative properties are not very well known. In order to understand the effects of cirrus clouds on climate, the optical and radiative characteristics of these clouds need to be determined accurately at different scales and locations, i.e. latitudes. Lidars are well suited to observing cirrus clouds: they can detect very thin and semi-transparent layers, and retrieve cloud geometrical properties (altitude and multiple layers) as well as radiative properties (optical depth, backscattering phase functions of ice crystals). Moreover, the linear depolarization ratio can give information on ice crystal shape. In addition, data collected with an airborne version of the POLDER (POLarization and Directionality of Earth Reflectances) instrument have shown that bidirectional polarized measurements can provide information on cirrus cloud microphysical properties (crystal shapes, preferred orientation in space). The spaceborne version, POLDER-1, was flown on the ADEOS-1 platform for 8 months (October 96 - June 97), and the next instrument, POLDER-2, will be launched in 2000 on ADEOS-2. The POLDER-1 cloud inversion algorithms are currently under validation. For cirrus clouds, a validation based on comparisons between cloud properties retrieved from POLDER-1 data and cloud properties inferred from a ground-based lidar network is currently under consideration. We present the first results of this validation.

  5. Validation of an enhanced knowledge-based method for segmentation and quantitative analysis of intrathoracic airway trees from three-dimensional CT images

    International Nuclear Information System (INIS)

    Sonka, M.; Park, W.; Hoffman, E.A.

    1995-01-01

    Accurate assessment of airway physiology, evaluated in terms of geometric changes, is critically dependent upon the accurate imaging and image segmentation of the three-dimensional airway tree structure. The authors have previously reported a knowledge-based method for three-dimensional airway tree segmentation from high resolution CT (HRCT) images. Here, they report a substantially improved version of the method. In the current implementation, the method consists of several stages. First, the lung borders are automatically determined in the three-dimensional set of HRCT data. The primary airway tree is semi-automatically identified. In the next stage, potential airways are determined in individual CT slices using a rule-based system that uses contextual information and a priori knowledge about pulmonary anatomy. Using three-dimensional connectivity properties of the pulmonary airway tree, the three-dimensional tree is constructed from the set of adjacent slices. The method's performance and accuracy were assessed in five 3D HRCT canine images. Computer-identified airways matched 226/258 observer-defined airways (87.6%); the computer method failed to detect the airways in the remaining 32 locations. By visual assessment of rendered airway trees, the experienced observers judged the computer-detected airway trees as highly realistic
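The final stage described above, assembling the 3D airway tree from per-slice candidates via 3D connectivity, can be sketched with a connected-component labeling step: candidates are accepted only if they belong to the same 3D component as the known primary airway. The volume and seed below are synthetic.

```python
import numpy as np
from scipy import ndimage

# stack of per-slice airway candidates (z, y, x)
vol = np.zeros((5, 8, 8), dtype=bool)
vol[:, 3:5, 3:5] = True          # candidates forming a continuous airway column
vol[2, 0, 0] = True              # isolated false-positive candidate in one slice

labels, _ = ndimage.label(vol)   # 3D connected components (6-connectivity)
seed = (0, 3, 3)                 # a voxel inside the semi-automatically
                                 # identified primary airway
airway = labels == labels[seed]  # keep only the component containing the seed
```

This reproduces the role of the connectivity stage: slice-level false positives that do not connect to the primary airway in 3D are discarded, which is where much of the method's specificity comes from.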

  6. Neural Scene Segmentation by Oscillatory Correlation

    National Research Council Canada - National Science Library

    Wang, DeLiang

    2000-01-01

    The segmentation of a visual scene into a set of coherent patterns (objects) is a fundamental aspect of perception, which underlies a variety of important tasks such as figure/ground segregation, and scene analysis...

  7. Estimates of evapotranspiration for riparian sites (Eucalyptus) in the Lower Murray -Darling Basin using ground validated sap flow and vegetation index scaling techniques

    Science.gov (United States)

    Doody, T.; Nagler, P. L.; Glenn, E. P.

    2014-12-01

Water accounting is becoming critical globally, and balancing consumptive water demands with environmental water requirements is especially difficult in arid and semi-arid regions. Within the Murray-Darling Basin (MDB) in Australia, riparian water use has not been assessed across broad scales. This study therefore aimed to apply and validate an existing U.S. riparian ecosystem evapotranspiration (ET) algorithm for the MDB river systems, to assist water resource managers in quantifying environmental water needs over wide ranges of niche conditions. Ground-based sap flow ET was correlated with remotely sensed predictions of ET to provide a method to scale annual rates of water consumption by riparian vegetation over entire irrigation districts. Sap flux was measured at nine locations on the Murrumbidgee River between July 2011 and June 2012. Remotely sensed ET was calculated using a combination of local meteorological estimates of potential ET (ETo) and rainfall and MODIS Enhanced Vegetation Index (EVI) from selected 250 m resolution pixels. The sap flow data correlated well with MODIS EVI. Sap flow ranged from 0.81 mm/day to 3.60 mm/day and corresponded to a MODIS-based ET range of 1.43 mm/day to 2.42 mm/day. We found that mean ET across sites could be predicted by EVI-ETo methods with a standard error of about 20% across sites, but that ET at any given site could vary much more due to differences in aquifer and soil properties among sites. Water use was within the expected range. We conclude that our algorithm developed for US arid land crops and riparian plants is applicable to this region of Australia. Future work includes the development of an adjusted algorithm using these sap flow validated results.
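
The scaling idea can be illustrated with a minimal one-coefficient calibration, ET ≈ c · EVI · ETo, fitted against sap-flow ET and then applied per pixel. This is a hedged sketch of the general EVI-ETo approach, not the authors' published algorithm, and all numeric values are invented.

```python
# Illustrative EVI-ETo scaling: calibrate a single slope c against
# ground-based sap-flow ET, then predict ET for any pixel.
def fit_scale(evi, eto, et_sap):
    """Least-squares slope through the origin for ET = c * EVI * ETo."""
    x = [e * p for e, p in zip(evi, eto)]
    num = sum(xi * yi for xi, yi in zip(x, et_sap))
    den = sum(xi * xi for xi in x)
    return num / den

def predict_et(c, evi, eto):
    """Scale potential ET (mm/day) by vegetation greenness."""
    return c * evi * eto

# Made-up calibration triples: (EVI, ETo mm/day, sap-flow ET mm/day)
c = fit_scale([0.2, 0.3, 0.4], [5.0, 6.0, 4.0], [1.0, 1.9, 1.6])
```

Published versions of this approach use more elaborate (e.g. saturating) functions of EVI; the linear form only shows the calibrate-then-scale structure.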

  8. Concurrent validity and reliability of using ground reaction force and center of pressure parameters in the determination of leg movement initiation during single leg lift.

    Science.gov (United States)

    Aldabe, Daniela; de Castro, Marcelo Peduzzi; Milosavljevic, Stephan; Bussey, Melanie Dawn

    2016-09-01

Evaluating postural adjustments during single leg lift requires identifying the initiation of heel lift (T1). Measuring T1 with a motion analysis system is the most reliable approach. However, this method involves considerable workspace and expensive cameras, and is time-consuming in laboratory set-up and data processing. The use of ground reaction force (GRF) and centre of pressure (COP) data is an alternative method, as its set-up and data processing are less time-consuming. Further, kinetic data are normally collected at sampling frequencies above 1000 Hz, whereas kinematic data are commonly captured at 50-200 Hz. This study describes the concurrent validity and reliability of GRF and COP measurements in determining T1, using a motion analysis system as the reference standard. Kinematic and kinetic data during single leg lift were collected from ten participants. GRF and COP data were collected using one and two force plates. Displacement of a single heel marker was captured by means of ten Vicon(©) cameras. Kinetic and kinematic data were collected using a sample frequency of 1000 Hz. Data were analysed in two stages: identification of key events in the kinetic data, and assessment of the concurrent validity of T1 based on the chosen key events against T1 provided by the kinematic data. The key event presenting the least systematic bias, along with a narrow 95% CI and limits of agreement against the reference standard T1, was the Baseline COPy event. The Baseline COPy event was obtained using one force plate and presented excellent between-tester reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
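
Assuming the baseline-type key event is detected as the first departure of the COPy signal from a band around its quiet-standing baseline, a minimal detector might look like the sketch below. The window length and the 3-SD threshold are illustrative assumptions, not the study's reported parameters.

```python
# Hedged sketch of a baseline-deviation event detector on a COPy trace.
def baseline_event_index(copy_signal, baseline_samples=100, k=3.0):
    """Return the first index where COPy leaves mean +/- k*SD of the baseline window."""
    base = copy_signal[:baseline_samples]
    mean = sum(base) / len(base)
    sd = (sum((v - mean) ** 2 for v in base) / len(base)) ** 0.5
    for i in range(baseline_samples, len(copy_signal)):
        if abs(copy_signal[i] - mean) > k * sd:
            return i  # candidate T1 in samples (divide by 1000 Hz for seconds)
    return None  # no event detected

# Synthetic trace: quiet baseline noise, then a deviation at sample 150.
sig = [0.01 * (-1) ** i for i in range(100)] + [0.0] * 50 + [0.2]
t1 = baseline_event_index(sig)
```

At a 1000 Hz sampling rate the returned index converts directly to milliseconds, which is why kinetic detection can localize T1 more finely than 50-200 Hz kinematics.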

  9. CO measurements from the ACE-FTS satellite instrument: data analysis and validation using ground-based, airborne and spaceborne observations

    Directory of Open Access Journals (Sweden)

    C. Clerbaux

    2008-05-01

Full Text Available The Atmospheric Chemistry Experiment (ACE) mission was launched in August 2003 to sound the atmosphere by solar occultation. Carbon monoxide (CO), a good tracer of pollution plumes and atmospheric dynamics, is one of the key species provided by the primary instrument, the ACE-Fourier Transform Spectrometer (ACE-FTS). This instrument performs measurements in both the CO 1-0 and 2-0 ro-vibrational bands, from which vertically resolved CO concentration profiles are retrieved, from the mid-troposphere to the thermosphere. This paper presents an updated description of the ACE-FTS version 2.2 CO data product, along with a comprehensive validation of these profiles using available observations (February 2004 to December 2006). We have compared the CO partial columns with ground-based measurements using Fourier transform infrared spectroscopy and millimeter wave radiometry, and the volume mixing ratio profiles with airborne (both high-altitude balloon flight and airplane) observations. CO satellite observations provided by nadir-looking instruments (MOPITT and TES) as well as limb-viewing remote sensors (MIPAS, SMR and MLS) were also compared with the ACE-FTS CO products. We show that the ACE-FTS measurements provide CO profiles with small retrieval errors (better than 5% from the upper troposphere to 40 km, and better than 10% above). These observations agree well with the correlative measurements, considering the rather loose coincidence criteria in some cases. Based on the validation exercise we assess the following uncertainties for the ACE-FTS measurement data: better than 15% in the upper troposphere (8–12 km), better than 30% in the lower stratosphere (12–30 km), and better than 25% from 30 to 100 km.

  10. Validation of S-NPP VIIRS Day-Night Band and M Bands Performance Using Ground Reference Targets of Libya 4 and Dome C

    Science.gov (United States)

    Chen, Xuexia; Wu, Aisheng; Xiong, Xiaoxiong; Lei, Ning; Wang, Zhipeng; Chiang, Kwofu

    2015-01-01

This paper provides methodologies developed and implemented by the NASA VIIRS Calibration Support Team (VCST) to validate the S-NPP VIIRS Day-Night Band (DNB) and M bands calibration performance. The Sensor Data Records produced by the Interface Data Processing Segment (IDPS) and the NASA Land Product Evaluation and Algorithm Testing Element (PEATE) are acquired from near-nadir overpasses of the Libya 4 desert and Dome C snow surfaces. In the past 3.5 years, the modulated relative spectral responses (RSR) changed with time, leading to a 3.8% increase in the DNB sensed solar irradiance and increases of 0.1% or less in the M4-M7 bands. After excluding data before April 5th, 2013, IDPS DNB radiance and reflectance data are consistent with Land PEATE data, with 0.6% or less difference for the Libya 4 site and 2% or less difference for the Dome C site. These differences are caused by inconsistent LUTs and algorithms used in calibration. At the Libya 4 site, the SCIAMACHY spectral and modulated RSR derived top of atmosphere (TOA) reflectances are compared with Land PEATE TOA reflectance, and they indicate decreases of 1.2% and 1.3%, respectively. The radiance of Land PEATE DNB is compared with the simulated radiance from aggregated M bands (M4, M5, and M7). These data trends match well, with 2% or less difference for the Libya 4 site and 4% or less difference for Dome C. This study demonstrates the consistent quality of DNB and M bands calibration for Land PEATE products during the operational period and for IDPS products after April 5th, 2013.

  11. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module, which permits communication between the various segments.

  12. NEPR Ground Validation Points 2015

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and videos taken in shallow water (0-35m) benthic habitats surrounding Northeast Puerto Rico and Culebra...

  13. OMI satellite observed formaldehyde column from 2006 to 2015 over Xishuangbanna, southwest China, and validation using ground based zenith-sky DOAS.

    Science.gov (United States)

    Liu, Rui; Feng, Tao; Wang, Shanshan; Shi, Chanzhen; Guo, Yanlin; Nan, Jialiang; Deng, Yun; Zhou, Bin

    2018-02-01

Formaldehyde (HCHO) provides a proxy for isoprene and biogenic volatile organic compound emissions, which play important roles in atmospheric chemical processes and climate change. Ground-based observation with zenith-sky DOAS was carried out in order to validate the HCHO columns from OMI. A good correlation of 0.71678 was found between the HCHO columns from the two sources. We then use the OMI HCHO columns from January 2006 to December 2015 to characterize the interannual variation and spatial distribution in Xishuangbanna. The HCHO concentration peaks appeared in March or April of each year, corresponding significantly to the intensive fire counts at the same time, which illustrates that high HCHO columns are strongly influenced by biomass burning in spring. Temperature and precipitation are also important factors in the seasonal variation when there is nearly no biomass burning. The spatial patterns over the past ten years strengthen the deductions from the temporal variation and show relationships with land cover and land use, elevation and population density. It is concluded that biogenic activity controls the background level of HCHO in Xishuangbanna, while biomass burning is the main driving force of high HCHO concentrations. Forests are a greater contributor to HCHO than rubber trees, which cover over 20% of the land in the region. Moreover, uncertainties from HCHO slant column retrieval and AMF calculation are discussed in detail. Copyright © 2017. Published by Elsevier B.V.
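
The reported agreement is a correlation between coincident ground-based and satellite column pairs; a minimal Pearson correlation sketch is given below. The column values used in the example call are invented placeholders, not data from the study.

```python
# Pearson correlation coefficient between two coincident time series,
# e.g. ground-based DOAS vs. OMI HCHO vertical columns.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Placeholder columns (arbitrary units): perfectly proportional pairs give r = 1.
r = pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```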

  14. Color image Segmentation using automatic thresholding techniques

    International Nuclear Information System (INIS)

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

In this paper, entropy and between-class variance based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold to segment images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.
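
The between-class variance criterion is the classic Otsu method. A minimal single-channel sketch over a gray-level histogram follows; in a color pipeline each channel (or color-space component) would be thresholded analogously.

```python
# Otsu-style threshold selection: pick the threshold t that maximizes the
# between-class variance  w0*w1*(mu0 - mu1)^2  over a gray-level histogram.
def otsu_threshold(hist):
    """Return t such that levels <= t form class 0 and levels > t form class 1."""
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t, h in enumerate(hist[:-1]):
        w0 += h / total          # class-0 probability mass
        cum += t * h / total     # class-0 cumulative first moment
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum / w0
        mu1 = (total_mean - cum) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A clearly bimodal histogram splits between the two modes.
t = otsu_threshold([10, 10, 0, 0, 0, 0, 10, 10])
```

The entropy-based (ME) criterion replaces the variance expression with the sum of the two class entropies but keeps the same exhaustive search over thresholds.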

  15. Quality assessment of the Ozone_cci Climate Research Data Package (release 2017 – Part 1: Ground-based validation of total ozone column data products

    Directory of Open Access Journals (Sweden)

    K. Garane

    2018-03-01

Full Text Available The GOME-type Total Ozone Essential Climate Variable (GTO-ECV) is a level-3 data record, which combines individual sensor products into one single cohesive record covering the 22-year period from 1995 to 2016, generated in the frame of the European Space Agency's Climate Change Initiative Phase II. It is based on level-2 total ozone data produced by the GODFIT (GOME-type Direct FITting) v4 algorithm as applied to the GOME/ERS-2, OMI/Aura, SCIAMACHY/Envisat and GOME-2/Metop-A and Metop-B observations. In this paper we examine whether GTO-ECV meets the specific requirements set by the international climate–chemistry modelling community for decadal stability, long-term and short-term accuracy. In the following, we present the validation of the 2017 release of the Climate Research Data Package Total Ozone Column (CRDP TOC) at both level 2 and level 3. The inter-sensor consistency of the individual level-2 data sets has mean differences generally within 0.5 % at moderate latitudes (±50°), whereas the level-3 data sets show mean differences with respect to the OMI reference data record that span between −0.2 ± 0.9 % (for GOME-2B) and 1.0 ± 1.4 % (for SCIAMACHY). Very similar findings are reported for the level-2 validation against independent ground-based TOC observations reported by Brewer, Dobson and SAOZ instruments: the mean bias between GODFIT v4 satellite TOC and the ground instrument is well within 1.0 ± 1.0 % for all sensors, the drift per decade spans between −0.5 % and 1.0 ± 1.0 % depending on the sensor, and the peak-to-peak seasonality of the differences ranges from ∼ 1 % for GOME and OMI to ∼ 2 % for SCIAMACHY. For the level-3 validation, our first goal was to show that the level-3 CRDP produces findings consistent with the level-2 individual sensor comparisons. We show a very good agreement with 0.5 to 2 % peak-to-peak amplitude for the monthly mean difference time series and a

  16. Quality assessment of the Ozone_cci Climate Research Data Package (release 2017) - Part 1: Ground-based validation of total ozone column data products

    Science.gov (United States)

    Garane, Katerina; Lerot, Christophe; Coldewey-Egbers, Melanie; Verhoelst, Tijl; Elissavet Koukouli, Maria; Zyrichidou, Irene; Balis, Dimitris S.; Danckaert, Thomas; Goutail, Florence; Granville, Jose; Hubert, Daan; Keppens, Arno; Lambert, Jean-Christopher; Loyola, Diego; Pommereau, Jean-Pierre; Van Roozendael, Michel; Zehner, Claus

    2018-03-01

The GOME-type Total Ozone Essential Climate Variable (GTO-ECV) is a level-3 data record, which combines individual sensor products into one single cohesive record covering the 22-year period from 1995 to 2016, generated in the frame of the European Space Agency's Climate Change Initiative Phase II. It is based on level-2 total ozone data produced by the GODFIT (GOME-type Direct FITting) v4 algorithm as applied to the GOME/ERS-2, OMI/Aura, SCIAMACHY/Envisat and GOME-2/Metop-A and Metop-B observations. In this paper we examine whether GTO-ECV meets the specific requirements set by the international climate-chemistry modelling community for decadal stability, long-term and short-term accuracy. In the following, we present the validation of the 2017 release of the Climate Research Data Package Total Ozone Column (CRDP TOC) at both level 2 and level 3. The inter-sensor consistency of the individual level-2 data sets has mean differences generally within 0.5 % at moderate latitudes (±50°), whereas the level-3 data sets show mean differences with respect to the OMI reference data record that span between -0.2 ± 0.9 % (for GOME-2B) and 1.0 ± 1.4 % (for SCIAMACHY). Very similar findings are reported for the level-2 validation against independent ground-based TOC observations reported by Brewer, Dobson and SAOZ instruments: the mean bias between GODFIT v4 satellite TOC and the ground instrument is well within 1.0 ± 1.0 % for all sensors, the drift per decade spans between -0.5 % and 1.0 ± 1.0 % depending on the sensor, and the peak-to-peak seasonality of the differences ranges from ˜ 1 % for GOME and OMI to ˜ 2 % for SCIAMACHY. For the level-3 validation, our first goal was to show that the level-3 CRDP produces findings consistent with the level-2 individual sensor comparisons. We show a very good agreement with 0.5 to 2 % peak-to-peak amplitude for the monthly mean difference time series and a negligible drift per decade of the differences in the Northern Hemisphere

  17. Cross-validation of IASI/MetOp derived tropospheric δD with TES and ground-based FTIR observations

    Science.gov (United States)

    Lacour, J.-L.; Clarisse, L.; Worden, J.; Schneider, M.; Barthlott, S.; Hase, F.; Risi, C.; Clerbaux, C.; Hurtmans, D.; Coheur, P.-F.

    2015-03-01

The Infrared Atmospheric Sounding Interferometer (IASI) flying onboard MetOp-A and MetOp-B is able to capture fine isotopic variations of the HDO to H2O ratio (δD) in the troposphere. Such observations at the high spatio-temporal resolution of the sounder are of great interest for improving our understanding of the mechanisms controlling humidity in the troposphere. In this study we aim to empirically assess the validity of our error estimation, previously evaluated theoretically. To achieve this, we compare IASI δD retrieved profiles with other available profiles of δD, from the TES infrared sounder onboard AURA and from three ground-based FTIR stations produced within the MUSICA project: the NDACC (Network for the Detection of Atmospheric Composition Change) sites Kiruna and Izaña, and the TCCON site Karlsruhe, which in addition to near-infrared TCCON spectra also records mid-infrared spectra. We describe the achievable level of agreement between the different retrievals and show that the theoretical errors are in good agreement with the empirical differences. The comparisons are made at different locations from tropical to Arctic latitudes, above sea and above land. Generally IASI and TES are similarly sensitive to δD in the free troposphere, which allows their measurements to be compared directly. At tropical latitudes, where IASI's sensitivity is lower than that of TES, we show that the agreement improves when taking the sensitivity of IASI into account in the TES retrieval. For the IASI-FTIR comparison, only direct comparisons are performed, because the sensitivity profiles of the two observing systems do not allow their differences in sensitivity to be taken into account. We identify a quasi-negligible bias (−3‰) in the free troposphere between IASI-retrieved δD and the bias-corrected TES data, but an important bias with the ground-based FTIR, reaching −47‰. We also suggest that model-satellite observation comparisons could be optimized with IASI thanks to its high

  18. Reflectance conversion methods for the VIS/NIR imaging spectrometer aboard the Chang'E-3 lunar rover: based on ground validation experiment data

    International Nuclear Information System (INIS)

    Liu Bin; Liu Jian-Zhong; Zhang Guang-Liang; Zou Yong-Liao; Ling Zong-Cheng; Zhang Jiang; He Zhi-Ping; Yang Ben-Yong

    2013-01-01

The second phase of the Chang'E Program (also named Chang'E-3) has the goal of landing and performing in-situ detection on the lunar surface. A VIS/NIR imaging spectrometer (VNIS) will be carried on the Chang'E-3 lunar rover to detect the distribution of lunar minerals and resources. VNIS is the first mission in history to perform in-situ spectral measurement on the surface of the Moon; its reflectance data are fundamental for the interpretation of lunar composition, and their quality greatly affects the accuracy of lunar element and mineral determination. Until now, in-situ detection by imaging spectrometers has only been performed by rovers on Mars. We first review reflectance conversion methods for rovers on Mars (Viking landers, Pathfinder and Mars Exploration rovers, etc.). Second, we discuss whether these conversion methods used on Mars can be applied to lunar in-situ detection. We also applied data from a laboratory bidirectional reflectance distribution function (BRDF) experiment using simulated lunar soil to test the applicability of this method. Finally, we modify the reflectance conversion methods used on Mars by considering differences between the environments on the Moon and Mars, and apply the methods to experimental data obtained from the ground validation of VNIS. These results were obtained by comparing reflectance data from the VNIS measured in the laboratory with those from a standard spectrometer obtained at the same time and under the same observing conditions. The shape and amplitude of the spectra fit well, and the spectral uncertainty parameters for most samples are within 8%, except for the ilmenite sample, which has a low albedo. In conclusion, our reflectance conversion method is suitable for lunar in-situ detection.

  19. Two-Column Aerosol Project (TCAP): Ground-Based Radiation and Aerosol Validation Using the NOAA Mobile SURFRAD Station Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Michalsky, Joseph [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States); Lantz, Kathy [Univ. of Colorado, Boulder, CO (United States)

    2016-05-01

The National Oceanic and Atmospheric Administration (NOAA) is preparing for the launch of the Geostationary Operational Environmental Satellite R-Series (GOES-R) satellite in 2015. This satellite will feature higher time (5-minute versus 30-minute sampling) and spatial resolution (0.5 km vs 1 km in the visible channel) than current GOES instruments provide. NOAA’s National Environmental Satellite Data and Information Service has funded the Global Monitoring Division at the Earth System Research Laboratory to provide ground-based validation data for many of the new and old products the new GOES instruments will retrieve, specifically related to radiation at the surface and aerosol and its extensive and intensive properties in the column. The Two-Column Aerosol Project (TCAP) had an emphasis on aerosol; therefore, we asked to be involved in this campaign to debug our new instrumentation and to provide a new capability that the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility’s Mobile Facilities (AMF) did not possess, namely surface albedo measurement out to 1625 nm. This gave us a chance to test remote operation of our new multi-filter rotating shadowband radiometer/multi-filter radiometer (MFRSR/MFR) combination. We did not deploy standard broadband shortwave and longwave radiation instrumentation because ARM does this as part of every AMF deployment. As it turned out, the ARM standard MFRSR had issues, and we were able to provide the aerosol column data for the first 2 months of the campaign, covering the summer flight phase of the deployment. Using these data, we were able to work with personnel at Pacific Northwest National Laboratory (PNNL) to retrieve not only aerosol optical depth (AOD) but also single-scattering albedo and the asymmetry parameter.

  20. A NDVI assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

Multiscale segmentation can effectively form boundaries for objects of different scales. However, for remote sensing images, which widely cover complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing images. Many experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. According to the different regions, which consist of different targets, different segmentation scale boundaries can be created. The experimental results show that the adaptive segmentation method based on NDVI can effectively create object boundaries for different ground objects in remote sensing images.
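
NDVI is computed per pixel from the red and near-infrared bands, and the method iterates on an NDVI similarity test between regions. The sketch below shows the index and such a test; the 0.1 similarity threshold is an assumed, illustrative value, not one taken from the paper.

```python
# Per-pixel NDVI from near-infrared and red reflectances, plus an
# illustrative region-similarity test of the kind the method iterates on.
def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); 0.0 when the denominator vanishes."""
    return (nir - red) / (nir + red) if (nir + red) != 0 else 0.0

def similar(ndvi_a, ndvi_b, threshold=0.1):
    """Decide whether two regions are NDVI-similar enough to share one scale."""
    return abs(ndvi_a - ndvi_b) <= threshold

veg = ndvi(0.5, 0.1)    # dense vegetation -> high NDVI
soil = ndvi(0.3, 0.25)  # bare soil -> NDVI near zero
```

Regions that fail the similarity test would be handed to a different (typically finer) segmentation scale in the adaptive scheme.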

  1. Bayesian segmentation of brainstem structures in MRI

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Van Leemput, Koen; Bhatt, Priyanka

    2015-01-01

    the brainstem structures in novel scans. Thanks to the generative nature of the scheme, the segmentation method is robust to changes in MRI contrast or acquisition hardware. Using cross validation, we show that the algorithm can segment the structures in previously unseen T1 and FLAIR scans with great accuracy...

  2. Validation of the IASI operational CH4 and N2O products using ground-based Fourier Transform Spectrometer: preliminary results at the Izaña Observatory (28ºN, 17ºW

    Directory of Open Access Journals (Sweden)

    Omaira García

    2014-01-01

Full Text Available Within the project VALIASI (VALidation of IASI level 2 products), the validation of the IASI operational atmospheric trace gas products (total column amounts of H2O, O3, CH4, N2O, CO2 and CO, as well as H2O and O3 profiles) will be carried out. Ground-based FTS (Fourier Transform Spectrometer) trace gas measurements made in the framework of NDACC (Network for the Detection of Atmospheric Composition Change) serve as the validation reference. In this work, we will present the validation methodology developed for this project and show the first intercomparison results obtained for the Izaña Atmospheric Observatory between 2008 and 2012. As an example, we will focus on two of the most important greenhouse gases, CH4 and N2O.

  3. Clinical Validation of Atlas-Based Auto-Segmentation of Multiple Target Volumes and Normal Tissue (Swallowing/Mastication) Structures in the Head and Neck

    International Nuclear Information System (INIS)

    Teguh, David N.; Levendag, Peter C.; Voet, Peter W.J.; Al-Mamgani, Abrahim; Han Xiao; Wolf, Theresa K.; Hibbard, Lyndon S.; Nowak, Peter; Akhiat, Hafid; Dirkx, Maarten L.P.; Heijmen, Ben J.M.; Hoogeman, Mischa S.

    2011-01-01

    Purpose: To validate and clinically evaluate autocontouring using atlas-based autosegmentation (ABAS) of computed tomography images. Methods and Materials: The data from 10 head-and-neck patients were selected as input for ABAS, and neck levels I-V and 20 organs at risk were manually contoured according to published guidelines. The total contouring times were recorded. Two different ABAS strategies, multiple and single subject, were evaluated, and the similarity of the autocontours with the atlas contours was assessed using Dice coefficients and the mean distances, using the leave-one-out method. For 12 clinically treated patients, 5 experienced observers edited the autosegmented contours. The editing times were recorded. The Dice coefficients and mean distances were calculated among the clinically used contours, autocontours, and edited autocontours. Finally, an expert panel scored all autocontours and the edited autocontours regarding their adequacy relative to the published atlas. Results: The time to autosegment all the structures using ABAS was 7 min/patient. No significant differences were observed in the autosegmentation accuracy for stage N0 and N+ patients. The multisubject atlas performed best, with a Dice coefficient and mean distance of 0.74 and 2 mm, 0.67 and 3 mm, 0.71 and 2 mm, 0.50 and 2 mm, and 0.78 and 2 mm for the salivary glands, neck levels, chewing muscles, swallowing muscles, and spinal cord-brainstem, respectively. The mean Dice coefficient and mean distance of the autocontours vs. the clinical contours was 0.8 and 2.4 mm for the neck levels and salivary glands, respectively. For the autocontours vs. the edited autocontours, the mean Dice coefficient and mean distance was 0.9 and 1.6 mm, respectively. The expert panel scored 100% of the autocontours as a “minor deviation, editable” or better. The expert panel scored 88% of the edited contours as good compared with 83% of the clinical contours. The total editing time was 66 min

  4. Clinical Validation of Atlas-Based Auto-Segmentation of Multiple Target Volumes and Normal Tissue (Swallowing/Mastication) Structures in the Head and Neck

    Energy Technology Data Exchange (ETDEWEB)

    Teguh, David N. [Department of Radiation Oncology, Erasmus Medical Center-Daniel den Hoed Cancer Center, Rotterdam (Netherlands); Levendag, Peter C., E-mail: p.levendag@erasmusmc.nl [Department of Radiation Oncology, Erasmus Medical Center-Daniel den Hoed Cancer Center, Rotterdam (Netherlands); Voet, Peter W.J.; Al-Mamgani, Abrahim [Department of Radiation Oncology, Erasmus Medical Center-Daniel den Hoed Cancer Center, Rotterdam (Netherlands); Han Xiao; Wolf, Theresa K.; Hibbard, Lyndon S. [Elekta-CMS Software, Maryland Heights, MO 63043 (United States); Nowak, Peter; Akhiat, Hafid; Dirkx, Maarten L.P.; Heijmen, Ben J.M.; Hoogeman, Mischa S. [Department of Radiation Oncology, Erasmus Medical Center-Daniel den Hoed Cancer Center, Rotterdam (Netherlands)

    2011-11-15

    Purpose: To validate and clinically evaluate autocontouring using atlas-based autosegmentation (ABAS) of computed tomography images. Methods and Materials: The data from 10 head-and-neck patients were selected as input for ABAS, and neck levels I-V and 20 organs at risk were manually contoured according to published guidelines. The total contouring times were recorded. Two different ABAS strategies, multiple and single subject, were evaluated, and the similarity of the autocontours with the atlas contours was assessed using Dice coefficients and the mean distances, using the leave-one-out method. For 12 clinically treated patients, 5 experienced observers edited the autosegmented contours. The editing times were recorded. The Dice coefficients and mean distances were calculated among the clinically used contours, autocontours, and edited autocontours. Finally, an expert panel scored all autocontours and the edited autocontours regarding their adequacy relative to the published atlas. Results: The time to autosegment all the structures using ABAS was 7 min/patient. No significant differences were observed in the autosegmentation accuracy for stage N0 and N+ patients. The multisubject atlas performed best, with a Dice coefficient and mean distance of 0.74 and 2 mm, 0.67 and 3 mm, 0.71 and 2 mm, 0.50 and 2 mm, and 0.78 and 2 mm for the salivary glands, neck levels, chewing muscles, swallowing muscles, and spinal cord-brainstem, respectively. The mean Dice coefficient and mean distance of the autocontours vs. the clinical contours was 0.8 and 2.4 mm for the neck levels and salivary glands, respectively. For the autocontours vs. the edited autocontours, the mean Dice coefficient and mean distance was 0.9 and 1.6 mm, respectively. The expert panel scored 100% of the autocontours as a 'minor deviation, editable' or better. The expert panel scored 88% of the edited contours as good compared with 83% of the clinical contours. The total editing time was 66 min
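
The Dice coefficient reported above measures the overlap between an autocontour A and a reference contour B as 2|A∩B| / (|A| + |B|). A minimal voxel-set sketch (the example voxel sets are invented):

```python
# Dice similarity coefficient between two segmentations given as voxel sets.
def dice(a, b):
    """2|A∩B| / (|A|+|B|); 1.0 for identical sets, 0.0 for disjoint ones."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty segmentations agree perfectly
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two 3-voxel contours sharing 2 voxels overlap with Dice = 2*2/(3+3) = 2/3.
score = dice({(0, 0), (0, 1), (1, 0)}, {(0, 1), (1, 0), (1, 1)})
```

The mean-distance metric used alongside Dice in the study instead averages surface-to-surface distances, so the two statistics penalize different kinds of contour disagreement.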

  5. Integration of sparse multi-modality representation and geometrical constraint for isointense infant brain segmentation.

    Science.gov (United States)

    Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H; Shen, Dinggang

    2013-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6-8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods.
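
A much-simplified stand-in for the patch-based step described above is nearest-patch label fusion: each target patch borrows the tissue label of its closest library patch. The real method instead solves a sparse coding problem over the whole library and adds geometrical constraints; the patches, labels, and distance below are purely illustrative.

```python
# Toy patch-based label propagation: label a target patch with the tissue
# class of its nearest library patch (sum of squared differences).
def ssd(p, q):
    """Sum of squared differences between two equal-length intensity patches."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def label_patch(target, library):
    """library: list of (patch, tissue_label) pairs from aligned atlas images."""
    best_patch, best_label = min(library, key=lambda entry: ssd(target, entry[0]))
    return best_label

# Invented 3-voxel patches: bright white matter (WM) vs. darker gray matter (GM).
library = [([0.9, 0.8, 0.9], "WM"), ([0.3, 0.2, 0.3], "GM")]
tissue = label_patch([0.85, 0.8, 0.9], library)
```

In the isointense phase, where a single modality is ambiguous, the actual method concatenates T1, T2, and diffusion-weighted patches so the library can still discriminate WM from GM.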

  6. Stochastic ground motion simulation

    Science.gov (United States)

    Rezaeian, Sanaz; Xiaodan, Sun; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan

    2014-01-01

Strong earthquake ground motion records are fundamental in engineering applications. Ground motion time series are used in response-history dynamic analysis of structural or geotechnical systems. In such analysis, the validity of predicted responses depends on the validity of the input excitations. Ground motion records are also used to develop ground motion prediction equations (GMPEs) for intensity measures such as spectral accelerations that are used in response-spectrum dynamic analysis. Despite the thousands of available strong ground motion records, there remains a shortage of records for large-magnitude earthquakes at short distances or in specific regions, as well as records that sample specific combinations of source, path, and site characteristics.

  7. Review of segmentation process in consumer markets

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2013-01-01

Although there has been considerable debate on market segmentation over five decades, attention has mostly been devoted to individual stages of the segmentation process. Stages such as segmentation base selection or segment profiling have been heavily covered in the extant literature, whereas stages such as implementation of the marketing strategy or market definition have received comparably little interest. Capitalizing on this shortcoming, this paper strives to close the gap and give each step of the segmentation process equal treatment. Hence, the objective of this paper is two-fold. First, a snapshot of the segmentation process in a step-by-step fashion will be provided. Second, each step (where possible) will be evaluated on chosen criteria by means of description, comparison, analysis and synthesis of 32 academic papers and 13 commercial typology systems. Ultimately, the segmentation stages will be discussed with empirical findings prevalent in the segmentation studies and, last but not least, suggestions calling for further investigation will be presented. This seven-step framework may assist when segmenting in practice, allowing for more confident targeting, which in turn might prepare the grounds for creating a differential advantage.

  8. Social Connectedness and Perceived Listening Effort in Adult Cochlear Implant Users: A Grounded Theory to Establish Content Validity for a New Patient-Reported Outcome Measure.

    Science.gov (United States)

    Hughes, Sarah E; Hutchings, Hayley A; Rapport, Frances L; McMahon, Catherine M; Boisvert, Isabelle

    2018-02-08

    Individuals with hearing loss often report a need for increased effort when listening, particularly in challenging acoustic environments. Despite audiologists' recognition of the impact of listening effort on individuals' quality of life, there are currently no standardized clinical measures of listening effort, including patient-reported outcome measures (PROMs). To generate items and content for a new PROM, this qualitative study explored the perceptions, understanding, and experiences of listening effort in adults with severe-profound sensorineural hearing loss before and after cochlear implantation. Three focus groups (1 to 3) were conducted. Purposive sampling was used to recruit 17 participants from a cochlear implant (CI) center in the United Kingdom. The participants included adults (n = 15, mean age = 64.1 years, range 42 to 84 years) with acquired severe-profound sensorineural hearing loss who satisfied the UK's national candidacy criteria for cochlear implantation and their normal-hearing significant others (n = 2). Participants were CI candidates who used hearing aids (HAs) and were awaiting CI surgery or CI recipients who used a unilateral CI or a CI and contralateral HA (CI + HA). Data from a pilot focus group conducted with 2 CI recipients were included in the analysis. The data, verbatim transcripts of the focus group proceedings, were analyzed qualitatively using constructivist grounded theory (GT) methodology. A GT of listening effort in cochlear implantation was developed from participants' accounts. The participants provided rich, nuanced descriptions of the complex and multidimensional nature of their listening effort. Interpreting and integrating these descriptions through GT methodology, listening effort was described as the mental energy required to attend to and process the auditory signal, as well as the effort required to adapt to, and compensate for, a hearing loss. 
Analyses also suggested that listening effort for most participants was

  9. Technical Note: Validation of Odin/SMR limb observations of ozone, comparisons with OSIRIS, POAM III, ground-based and balloon-borne instruments

    Directory of Open Access Journals (Sweden)

    F. Jégou

    2008-06-01

The Odin satellite carries two instruments capable of determining stratospheric ozone profiles by limb sounding: the Sub-Millimetre Radiometer (SMR) and the UV-visible spectrograph of the OSIRIS (Optical Spectrograph and InfraRed Imager System) instrument. A large number of ozone profile measurements were performed during the six years from November 2001 to present. This ozone dataset is used here to make quantitative comparisons with satellite measurements in order to assess the quality of the Odin/SMR ozone measurements. In a first step, we compare Swedish SMR retrievals version 2.1, French SMR ozone retrievals version 222 (both from the 501.8 GHz band), and the OSIRIS retrievals version 3.0, with the operational version 4.0 ozone product from POAM III (Polar Ozone and Aerosol Measurement). In a second step, we refine the Odin/SMR validation by comparisons with ground-based instruments and balloon-borne observations. We use observations carried out within the framework of the Network for the Detection of Atmospheric Composition Change (NDACC) and balloon flight missions conducted by the Canadian Space Agency (CSA), the Laboratoire de Physique et de Chimie de l'Environnement (LPCE, Orléans, France), and the Service d'Aéronomie (SA, Paris, France). Coincidence criteria were 5° in latitude × 10° in longitude, and 5 h in time for Odin/POAM III comparisons, 12 h for Odin/NDACC comparisons, and 72 h for Odin/balloon comparisons. An agreement is found with the POAM III experiment (10–60 km) within −0.3±0.2 ppmv (bias±standard deviation) for SMR (v222, v2.1) and within −0.5±0.2 ppmv for OSIRIS (v3.0). Odin ozone mixing ratio products are systematically slightly lower than the POAM III data and show an ozone maximum lower by 1–5 km in altitude. The comparisons with the NDACC data (10–34 km for ozonesondes, 10–50 km for lidars, 10–60 km for microwave instruments) yield a good agreement within −0.15±0.3 ppmv for the SMR data and −0.3±0.3 ppmv

  10. Marketing ambulatory care to women: a segmentation approach.

    Science.gov (United States)

    Harrell, G D; Fors, M F

    1985-01-01

    Although significant changes are occurring in health care delivery, in many instances the new offerings are not based on a clear understanding of market segments being served. This exploratory study suggests that important differences may exist among women with regard to health care selection. Five major women's segments are identified for consideration by health care executives in developing marketing strategies. Additional research is suggested to confirm this segmentation hypothesis, validate segmental differences and quantify the findings.

  11. NPOESS Interface Data Processing Segment Product Generation

    Science.gov (United States)

    Grant, K. D.

    2009-12-01

The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system: the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The NPOESS design allows centralized mission management and delivers high quality environmental products to military, civil and scientific users. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. The IDPS processes NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. The IDPS will process environmental data products beginning with the NPOESS Preparatory Project (NPP) and continuing through the lifetime of the NPOESS system. Within the overall NPOESS processing environment, the IDPS must process a data volume nearly 1000 times the size of current systems -- in one-quarter of the time. Further, it must support the calibration, validation, and data quality improvement initiatives of the NPOESS program to ensure the production of atmospheric and environmental products that meet strict requirements for accuracy and precision. This paper will describe the architecture approach that is necessary to meet these challenging, and seemingly exclusive, NPOESS IDPS design requirements, with a focus on the processing relationships required to generate the NPP products.

  12. NPOESS Interface Data Processing Segment (IDPS) Hardware

    Science.gov (United States)

    Sullivan, W. J.; Grant, K. D.; Bergeron, C.

    2008-12-01

The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system: the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The NPOESS design allows centralized mission management and delivers high quality environmental products to military, civil and scientific users. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. IDPS processes NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. IDPS will process environmental data products beginning with the NPOESS Preparatory Project (NPP) and continuing through the lifetime of the NPOESS system. Within the overall NPOESS processing environment, the IDPS must process a data volume several orders of magnitude larger than that of current systems -- in one-quarter of the time. Further, it must support the calibration, validation, and data quality improvement initiatives of the NPOESS program to ensure the production of atmospheric and environmental products that meet strict requirements for accuracy and precision. This poster will illustrate and describe the IDPS hardware architecture that is necessary to meet these challenging design requirements. In addition, it will illustrate the expandability features of the architecture in support of future data processing and data distribution needs.

  13. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using leave-one-out cross-validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
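The box-whisker rule behind the first method can be sketched as follows; the fence multiplier k = 1.5 is the conventional default, and the function name is illustrative, not from the paper:

```python
import numpy as np

def iqr_outliers(feature_values, k=1.5):
    """Flag cases whose feature value falls outside the box-whisker fences
    Q1 - k*IQR and Q3 + k*IQR (univariate, non-parametric)."""
    q1, q3 = np.percentile(feature_values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (feature_values < lo) | (feature_values > hi)
```

Cases flagged by such a rule on one or more features would then be reviewed as candidate segmentation failures.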

  14. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach.

    Science.gov (United States)

    Avendi, Michael R; Kheradvar, Arash; Jafarkhani, Hamid

    2017-12-01

This study aims to accurately segment the right ventricle (RV) from cardiac MRI using a fully automatic learning-based method. The proposed method uses deep learning algorithms, i.e., convolutional neural networks and stacked autoencoders, for automatic detection and initial segmentation of the RV chamber. The initial segmentation is then combined with deformable models to improve the accuracy and robustness of the process. We trained our algorithm using 16 cardiac MRI datasets of the MICCAI 2012 RV Segmentation Challenge database and validated our technique using the rest of the dataset (32 subjects). An average Dice metric of 82.5% along with an average Hausdorff distance of 7.85 mm were achieved for all the studied subjects. Furthermore, a high correlation and level of agreement with the ground truth contours for end-diastolic volume (0.98), end-systolic volume (0.99), and ejection fraction (0.93) were observed. Our results show that deep learning algorithms can be effectively used for automatic segmentation of the RV. Computed quantitative metrics of our method outperformed those of the existing techniques that participated in the MICCAI 2012 challenge, as reported by the challenge organizers. Magn Reson Med 78:2439-2448, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
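The Dice metric reported above (82.5%) measures volumetric overlap between an automatic mask and the ground-truth mask; a minimal implementation for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|).
    Returns 1.0 for perfect agreement (including two empty masks)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```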

  15. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI.

    Science.gov (United States)

    Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid

    2016-05-01

Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from ground-truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78 obtained by other methods, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Development of gait segmentation methods for wearable foot pressure sensors.

    Science.gov (United States)

    Crea, S; De Rossi, S M M; Donati, M; Reberšek, P; Novak, D; Vitiello, N; Lenzi, T; Podobnik, J; Munih, M; Carrozza, M C

    2012-01-01

We present an automated segmentation method based on the analysis of plantar pressure signals recorded from two synchronized wireless foot insoles. Given the strict limits on computational power and power consumption typical of wearable electronic components, our aim is to investigate the capability of a Hidden Markov Model machine-learning method to detect gait phases from the wearable pressure sensor signals at different levels of processing complexity. Therefore, three different datasets are developed: raw voltage values, calibrated sensor signals, and a calibrated estimation of total ground reaction force and position of the plantar center of pressure. The method is tested on a pool of 5 healthy subjects through leave-one-out cross-validation. The results show high classification performance when using the estimated biomechanical variables, on average 96%. Calibrated signals and raw voltage values show higher delays and dispersions in phase transition detection, suggesting lower reliability for online applications.
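At the core of HMM-based gait-phase detection is decoding the most likely phase sequence from per-frame observations. A generic Viterbi decoder in log space is sketched below; the paper's features, training procedure, and phase definitions are not reproduced here, and all argument names are illustrative:

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most-likely state (gait-phase) sequence for an HMM.
    log_emit: (T, S) per-frame emission log-likelihoods for S phases;
    log_trans: (S, S) transition log-probabilities; log_init: (S,)."""
    T, S = log_emit.shape
    delta = log_init + log_emit[0]          # best score ending in each state
    back = np.zeros((T, S), dtype=int)      # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans # scores[s, s'] = come from s to s'
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):           # trace back the best path
        path[t - 1] = back[t, path[t]]
    return path
```

In practice the emission model would be fit per subject fold under the leave-one-out protocol described above.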

  17. Segmented trapped vortex cavity

    Science.gov (United States)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls respectively. The segment may include first and second expansion joint features at distal first and second ends respectively of the segment. The segment may include a forward subcomponent including the cavity forward wall attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  18. Pavement management segment consolidation

    Science.gov (United States)

    1998-01-01

    Dividing roads into "homogeneous" segments has been a major problem for all areas of highway engineering. SDDOT uses Deighton Associates Limited software, dTIMS, to analyze life-cycle costs for various rehabilitation strategies on each segment of roa...

  19. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    Science.gov (United States)

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetectNet detection network is adapted to perform region of interest extraction from a complete CTA, and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance, without the need for human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.
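The 4-fold protocol over 13 volumes can be sketched as a simple index split in which each fold serves once as the test set; the interleaved assignment below is one common choice, not necessarily the authors':

```python
def kfold_indices(n_items, k=4):
    """Split item indices 0..n_items-1 into k folds; yield (train, test)
    index lists, with each fold used once for testing."""
    folds = [list(range(i, n_items, k)) for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((sorted(train), sorted(test)))
    return splits
```

With 13 volumes this gives fold sizes of 4, 3, 3 and 3, so every volume is tested exactly once.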

  20. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method.

    Science.gov (United States)

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Hara, Takeshi; Fujita, Hiroshi

    2017-10-01

    We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image. We simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentations in CT cases of different sizes that cover arbitrary scan regions without any adjustment. The proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (lumen and content inside of stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. 
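The majority voting over per-viewpoint 2D semantic segmentations can be sketched as follows, assuming three co-registered label volumes of identical shape produced by FCN inference along different viewpoints (function and argument names are illustrative):

```python
import numpy as np

def fuse_by_voting(pred_axial, pred_coronal, pred_sagittal, n_labels):
    """Majority vote per voxel over three integer label volumes; ties are
    resolved toward the lower label id by argmax."""
    stack = np.stack([pred_axial, pred_coronal, pred_sagittal])  # (3, Z, Y, X)
    counts = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return counts.argmax(axis=0)  # (Z, Y, X) fused label volume
```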

  1. Speaker segmentation and clustering

    OpenAIRE

    Kotti, M; Moschou, V; Kotropoulos, C

    2008-01-01

This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker

  2. Spinal segmental dysgenesis

    Directory of Open Access Journals (Sweden)

    N Mahomed

    2009-06-01

Spinal segmental dysgenesis is a rare congenital spinal abnormality, seen in neonates and infants, in which a segment of the spine and spinal cord fails to develop normally. The condition is segmental, with normal vertebrae above and below the malformation. It is commonly associated with various abnormalities affecting the heart, genitourinary system, gastrointestinal tract and skeletal system. We report two cases of spinal segmental dysgenesis and the associated abnormalities.

  3. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation

  4. Growth and inactivation of Salmonella enterica and Listeria monocytogenes in broth and validation in ground pork meat during simulated home storage abusive temperature and home pan-frying

    Directory of Open Access Journals (Sweden)

    Xiang eWang

    2015-10-01

Ground pork meat with natural microbiota, inoculated with low initial densities (1-10 or 10-100 CFU/g) of Salmonella enterica or Listeria monocytogenes, was stored under abusive temperature at 10°C and thermally treated by a simulated home pan-frying procedure. The growth and inactivation characteristics were also evaluated in broth. In ground pork meat, the population of S. enterica increased by less than one log after 12 days of storage at 10°C, whereas L. monocytogenes increased by 2.3 to 2.8 log units. No unusual intrinsic heat resistance of the pathogens was noted when tested in broth at 60°C, although shoulders were observed on the inactivation curves of L. monocytogenes. After growth of S. enterica and L. monocytogenes at 10°C for 5 days to levels of 1.95 log CFU/g and 3.10 log CFU/g, respectively, in ground pork meat, their inactivation in burgers subjected to a simulated home pan-frying was studied. After thermal treatment, S. enterica was undetectable, but L. monocytogenes was recovered in three out of six of the 25 g burger samples. Overall, the present study shows that growth and inactivation data obtained in broth are indicative but may underestimate as well as overestimate the behavior of pathogens, and thus need confirmation under food matrix conditions to assess food safety under reasonably foreseeable abusive storage conditions and usual home pan-frying of meat burgers in Belgium.

  5. Grounded theory.

    Science.gov (United States)

    Harris, Tina

    2015-04-29

    Grounded theory is a popular research approach in health care and the social sciences. This article provides a description of grounded theory methodology and its key components, using examples from published studies to demonstrate practical application. It aims to demystify grounded theory for novice nurse researchers, by explaining what it is, when to use it, why they would want to use it and how to use it. It should enable nurse researchers to decide if grounded theory is an appropriate approach for their research, and to determine the quality of any grounded theory research they read.

  6. A Hybrid Hierarchical Approach for Brain Tissue Segmentation by Combining Brain Atlas and Least Square Support Vector Machine

    Science.gov (United States)

    Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh

    2013-01-01

    In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-square support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using the toolbox FMRIB's automated segmentation tool integrated in the FSL software (FSL-FAST) developed in Oxford Centre for functional MRI of the brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for LS-SVM are selected from the registered brain atlas. The voxel intensities and spatial positions are selected as the two feature groups for training and test. SVM as a powerful discriminator is able to handle nonlinear classification problems; however, it cannot provide posterior probability. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from the simulated magnetic resonance imaging (MRI) using Brainweb MRI simulator and real data provided by Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparing to the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for the quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to corresponding ground truth. PMID:24696800
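The sigmoid mapping from SVM decision values to pseudo-probabilities mentioned above (Platt-style scaling) can be sketched as follows; the parameters A and B would normally be fitted on held-out data, and the default values here are placeholders:

```python
import numpy as np

def sigmoid_prob(decision_values, A=-1.0, B=0.0):
    """Map raw (LS-)SVM decision values f to pseudo-probabilities via
    p = 1 / (1 + exp(A*f + B)). With A < 0, larger decision values map
    to probabilities closer to 1."""
    return 1.0 / (1.0 + np.exp(A * np.asarray(decision_values) + B))
```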

  7. Ground-based remote sensing of HDO/H2O ratio profiles: introduction and validation of an innovative retrieval approach

    Science.gov (United States)

    Schneider, M.; Hase, F.; Blumenstock, T.

    2006-10-01

    We propose an innovative approach for analysing ground-based FTIR spectra which allows us to detect variabilities of lower and middle/upper tropospheric HDO/H2O ratios. We show that the proposed method is superior to common approaches. We estimate that lower tropospheric HDO/H2O ratios can be detected with a noise to signal ratio of 15% and middle/upper tropospheric ratios with a noise to signal ratio of 50%. The method requires the inversion to be performed on a logarithmic scale and to introduce an inter-species constraint. While common methods calculate the isotope ratio posterior to an independent, optimal estimation of the HDO and H2O profile, the proposed approach is an optimal estimator for the ratio itself. We apply the innovative approach to spectra measured continuously during 15 months and present, for the first time, an annual cycle of tropospheric HDO/H2O ratio profiles as detected by ground-based measurements. Outliers in the detected middle/upper tropospheric ratios are interpreted by backward trajectories.

  8. A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.

    Science.gov (United States)

    Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F

    2012-09-01

    Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.

  9. Cross-validation Methodology between Ground and GPM Satellite-based Radar Rainfall Product over Dallas-Fort Worth (DFW) Metroplex

    Science.gov (United States)

    Chen, H.; Chandrasekar, V.; Biswas, S.

    2015-12-01

Over the past two decades, a large number of rainfall products have been developed based on satellite, radar, and/or rain gauge observations. However, producing optimal rainfall estimates for a given region remains challenging due to the space-time variability of rainfall at many scales and the differing spatial and temporal sampling of different rainfall instruments. In order to produce high-resolution rainfall products for urban flash flood applications and to improve weather-sensing capability in the urban environment, the Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), in collaboration with the National Weather Service (NWS) and the North Central Texas Council of Governments (NCTCOG), has developed an urban radar remote sensing network in the DFW Metroplex. DFW is the largest inland metropolitan area in the U.S. and experiences a wide range of natural weather hazards such as flash floods and hailstorms. The DFW urban remote sensing network, centered on the deployment of eight dual-polarization X-band radars and an NWS WSR-88DP radar, is expected to provide impact-based warnings and forecasts for the benefit of public safety and the economy. High-resolution quantitative precipitation estimation (QPE) is one of the major goals of this urban test bed. In addition to ground radar-based rainfall estimation, satellite-based rainfall products for this area are also of interest for this study. A typical example is the rainfall-rate product produced by the Dual-frequency Precipitation Radar (DPR) onboard the Global Precipitation Measurement (GPM) Core Observatory satellite. Cross-comparison between ground- and space-based rainfall estimates is therefore critical to building an optimal regional rainfall system that can take advantage of the sampling differences of the different sensors. This paper presents the real-time high-resolution QPE system developed for the DFW urban radar network, which is based upon the combination of S-band WSR-88DP and X-band radar observations.
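Cross-comparison of collocated rainfall estimates typically reduces to a few summary statistics. The sketch below, with an invented function name and illustrative inputs, shows one plausible way to compute them with NumPy; it is not the DFW system's code.

```python
import numpy as np

def compare_rainfall(radar_mm, gauge_mm):
    """Compare collocated radar- and gauge-based rainfall accumulations (mm).

    Returns mean bias, RMSE, and Pearson correlation, the kind of
    statistics commonly used when cross-validating ground-radar or
    satellite QPE against reference observations. Function name and
    inputs are illustrative, not from the paper.
    """
    radar = np.asarray(radar_mm, dtype=float)
    gauge = np.asarray(gauge_mm, dtype=float)
    diff = radar - gauge
    bias = diff.mean()                       # mean over/under-estimation
    rmse = np.sqrt((diff ** 2).mean())       # typical error magnitude
    corr = np.corrcoef(radar, gauge)[0, 1]   # linear agreement
    return bias, rmse, corr

bias, rmse, corr = compare_rainfall([10.0, 22.0, 31.0], [12.0, 20.0, 30.0])
```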

  10. MAX-DOAS measurements in southern China: retrieval of aerosol extinctions and validation using ground-based in-situ data

    Directory of Open Access Journals (Sweden)

    X. Li

    2010-03-01

Full Text Available We performed MAX-DOAS measurements during the PRiDe-PRD2006 campaign in the Pearl River Delta region 50 km north of Guangzhou, China, for 4 weeks in June 2006, using an instrument sampling at 7 different elevation angles between 3° and 90°. During 9 cloud-free days, differential slant column densities (DSCDs) of the O4 (O2 dimer) absorptions between 351 nm and 389 nm were evaluated for 6 elevation angles. Here, we show that radiative transfer modeling of the DSCDs can be used to retrieve the aerosol extinction and the height of the boundary layer. A comparison of the aerosol extinction with simultaneously recorded, ground-based nephelometer data shows excellent agreement.

  11. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm, followed by thresholding and level-set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering and from the accuracy of Fuzzy C-means. The performance of the proposed approach was evaluated by comparing it with several state-of-the-art segmentation algorithms in terms of accuracy and processing time, with accuracy assessed against the ground truth of each processed image. The experimental results demonstrate the effectiveness of the proposed approach across a wide range of segmentation problems, improving segmentation quality and accuracy in minimal execution time.
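The K-means stage of such a pipeline can be sketched in a few lines of NumPy. This is an illustrative implementation of Lloyd's algorithm on raw pixel intensities only; the Fuzzy C-means, thresholding, and level-set stages of the paper are not reproduced here.

```python
import numpy as np

def kmeans_intensities(pixels, k, iters=50, seed=0):
    """Lloyd's K-means on a 1-D array of pixel intensities.

    A minimal sketch of the K-means stage only. Returns (centroids, labels),
    where labels[i] is the cluster index of pixels[i].
    """
    rng = np.random.default_rng(seed)
    pixels = np.asarray(pixels, dtype=float)
    centroids = rng.choice(pixels, size=k, replace=False)  # random init
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        labels = np.argmin(np.abs(pixels[:, None] - centroids[None, :]), axis=1)
        # recompute centroids as cluster means (keep old value if empty)
        new = np.array([pixels[labels == j].mean() if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```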

  12. Learning-based 3T brain MRI segmentation with guidance from 7T MRI labeling.

    Science.gov (United States)

    Deng, Minghui; Yu, Renping; Wang, Li; Shi, Feng; Yap, Pew-Thian; Shen, Dinggang

    2016-12-01

Segmentation of brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is crucial for brain structural measurement and disease diagnosis. Learning-based segmentation methods depend largely on the availability of good training ground truth. However, the commonly used 3T MR images are of insufficient image quality and often exhibit poor intensity contrast between WM, GM, and CSF. Therefore, they are not ideal for providing good ground-truth label data for training learning-based methods. Recent advances in ultrahigh-field 7T imaging make it possible to acquire images with excellent intensity contrast and signal-to-noise ratio. In this paper, the authors propose an algorithm based on random forests for segmenting 3T MR images by training a series of classifiers on reliable labels obtained semiautomatically from 7T MR images. The proposed algorithm iteratively refines the probability maps of WM, GM, and CSF via a cascade of random forest classifiers for improved tissue segmentation. The proposed method was validated on two datasets, i.e., 10 subjects collected at the authors' institution and 797 3T MR images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Specifically, for the mean Dice ratio over all 10 subjects, the proposed method achieved 94.52% ± 0.9%, 89.49% ± 1.83%, and 79.97% ± 4.32% for WM, GM, and CSF, respectively, which are significantly better than those of state-of-the-art methods, demonstrating the method's suitability for 3T brain MR image segmentation. © 2016 American Association of Physicists in Medicine.
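The Dice ratio used above for validation is straightforward to compute from binary label masks. A minimal sketch, not taken from the paper's implementation:

```python
import numpy as np

def dice(seg, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), the overlap measure used to score
    WM/GM/CSF segmentations against ground-truth labels.
    """
    seg = np.asarray(seg, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = seg.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: vacuous perfect agreement
    return 2.0 * np.logical_and(seg, truth).sum() / denom

# e.g. two masks of 3 voxels each sharing 2 voxels give 2*2/(3+3) ≈ 0.667
```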

  13. Detection and Segmentation of Small Trees in the Forest-Tundra Ecotone Using Airborne Laser Scanning

    Directory of Open Access Journals (Sweden)

    Marius Hauglin

    2016-05-01

Full Text Available Due to expected climate change and increased focus on forests as a potential carbon sink, it is of interest to map and monitor even marginal forests where trees exist close to their tolerance limits, such as small pioneer trees in the forest-tundra ecotone. Such small trees might indicate tree line migration and expansion of the forests into treeless areas. Airborne laser scanning (ALS) has been suggested and tested as a tool for this purpose, and in the present study a novel procedure for the identification and segmentation of small trees is proposed. The study was carried out in the Rollag municipality in southeastern Norway, where ALS data and field measurements of individual trees were acquired. The point density of the ALS data was eight points per m², and the field tree heights ranged from 0.04 to 6.3 m, with a mean of 1.4 m. The proposed method is based on an allometric model relating field-measured tree height to crown diameter, and another model relating field-measured tree height to ALS-derived height. These models are calibrated with local field data. Using these simple models, every positive above-ground height derived from the ALS data can be related to a crown diameter, and by assuming a circular crown shape, this crown diameter can be extended to a crown segment. Applying this model to all ALS echoes with a positive above-ground height value yields an initial map of possible circular crown segments. The final crown segments were then derived by applying a set of simple rules to this initial “map” of segments. The resulting segments were validated by comparison with field-measured crown segments. Overall, 46% of the field-measured trees were successfully detected. The detection rate increased with tree size. For trees with height >3 m the detection rate was 80%. The relatively large detection errors were partly due to inherent limitations in the ALS data; a substantial fraction of the smaller trees was hit by no or just a few
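The core of the proposed procedure, mapping each positive ALS height to a circular crown segment through a calibrated allometric model, can be sketched as follows. The linear model form and its coefficients here are invented for illustration; in the study they are calibrated from local field data.

```python
import math

# Hypothetical allometric model: crown diameter (m) as a linear function
# of tree height (m). These coefficients are assumptions for illustration;
# the study calibrates its models from local field measurements.
A, B = 0.5, 0.3  # crown_diameter = A + B * height

def crown_segment(height_m):
    """Map an ALS-derived above-ground height to a circular crown segment.

    Returns (crown_diameter_m, crown_area_m2), assuming a circular crown
    shape as in the proposed procedure.
    """
    d = A + B * height_m
    r = d / 2.0
    return d, math.pi * r ** 2
```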

14. Validation of the Atmospheric Chemistry Experiment (ACE) version 2.2 temperature using ground-based and space-borne measurements

    Directory of Open Access Journals (Sweden)

    R. J. Sica

    2008-01-01

Full Text Available An ensemble of space-borne and ground-based instruments has been used to evaluate the quality of the version 2.2 temperature retrievals from the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS). The agreement of ACE-FTS temperatures with other sensors is typically better than 2 K in the stratosphere and upper troposphere and 5 K in the lower mesosphere. There is evidence of a systematic high bias (roughly 3–6 K) in the ACE-FTS temperatures in the mesosphere, and a possible systematic low bias (roughly 2 K) in ACE-FTS temperatures near 23 km. Some ACE-FTS temperature profiles exhibit unphysical oscillations, a problem fixed in preliminary comparisons with temperatures derived using the next version of the ACE-FTS retrieval software. Though these relatively large oscillations in temperature can be on the order of 10 K in the mesosphere, retrieved volume mixing ratio profiles typically vary by less than a percent or so. Statistical comparisons suggest these oscillations occur in about 10% of the retrieved profiles. Analysis of a set of coincident lidar measurements suggests that the random error in ACE-FTS version 2.2 temperatures has a lower limit of about ±2 K.

  15. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    Energy Technology Data Exchange (ETDEWEB)

    Ren, X; Gao, H [Shanghai Jiao Tong University, Shanghai, Shanghai (China); Sharp, G [Massachusetts General Hospital, Boston, MA (United States)

    2015-06-15

Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas taken from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which the other images are registered to each chosen image and the DC is computed between the registered contour and the ground truth. Six strategies for measuring image similarity, including MI, are compared, with MI proving the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) an affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among four proposed strategies. Conclusion: MI has the highest correlation with DC and is therefore an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500)
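A common way to compute the mutual information used for atlas ranking is from the joint intensity histogram of two images. The sketch below is a generic implementation under an assumed binning, not the authors' code:

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Mutual information between two images, estimated from their joint
    intensity histogram. A standard sketch of the similarity measure used
    for post-registration atlas ranking; the bin count is an assumption.
    """
    h, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = h / h.sum()                       # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)     # marginal of img1
    py = pxy.sum(axis=0, keepdims=True)     # marginal of img2
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Higher values indicate stronger statistical dependence between the two intensity patterns, which is why MI is a natural post-registration similarity score.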

  16. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    International Nuclear Information System (INIS)

    Ren, X; Gao, H; Sharp, G

    2015-01-01

Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas taken from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which the other images are registered to each chosen image and the DC is computed between the registered contour and the ground truth. Six strategies for measuring image similarity, including MI, are compared, with MI proving the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) an affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among four proposed strategies. Conclusion: MI has the highest correlation with DC and is therefore an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500)

  17. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Full Text Available Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional single type of feature or multiple types of features-based algorithms, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method is also proved to be very competitive in comparison with other state-of-the-art segmentation algorithms.
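The PageRank idea the authors borrow can be illustrated with a plain power iteration over a region-similarity graph. How the graph is built from visual features is the paper's contribution and is not reproduced here; the adjacency matrix in the usage example is hypothetical.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank over a weighted similarity graph.

    Sketches the generic algorithm only; adj[i, j] would encode the
    similarity between image regions i and j in a segmentation setting.
    """
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    # build a column-stochastic transition matrix
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0  # guard against dangling nodes
    m = adj / col_sums
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = (1 - damping) / n + damping * m @ rank
        if np.abs(new - rank).sum() < tol:
            break
        rank = new
    return rank
```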

  18. Correlative 3D-imaging of Pipistrellus penis micromorphology: Validating quantitative microCT images with undecalcified serial ground section histomorphology.

    Science.gov (United States)

    Herdina, Anna Nele; Plenk, Hanns; Benda, Petr; Lina, Peter H C; Herzig-Straschil, Barbara; Hilgers, Helge; Metscher, Brian D

    2015-06-01

    Detailed knowledge of histomorphology is a prerequisite for the understanding of function, variation, and development. In bats, as in other mammals, penis and baculum morphology are important in species discrimination and phylogenetic studies. In this study, nondestructive 3D-microtomographic (microCT, µCT) images of bacula and iodine-stained penes of Pipistrellus pipistrellus were correlated with light microscopic images from undecalcified surface-stained ground sections of three of these penes of P. pipistrellus (1 juvenile). The results were then compared with µCT-images of bacula of P. pygmaeus, P. hanaki, and P. nathusii. The Y-shaped baculum in all studied Pipistrellus species has a proximal base with two club-shaped branches, a long slender shaft, and a forked distal tip. The branches contain a medullary cavity of variable size, which tapers into a central canal of variable length in the proximal baculum shaft. Both are surrounded by a lamellar and a woven bone layer and contain fatty marrow and blood vessels. The distal shaft consists of woven bone only, without a vascular canal. The proximal ends of the branches are connected with the tunica albuginea of the corpora cavernosa via entheses. In the penis shaft, the corpus spongiosum-surrounded urethra lies in a ventral grove of the corpora cavernosa, and continues in the glans under the baculum. The glans penis predominantly comprises an enlarged corpus spongiosum, which surrounds urethra and baculum. In the 12 studied juvenile and subadult P. pipistrellus specimens the proximal branches of the baculum were shorter and without marrow cavity, while shaft and distal tip appeared already fully developed. The present combination with light microscopic images from one species enabled a more reliable interpretation of histomorphological structures in the µCT-images from all four Pipistrellus species. © 2015 Wiley Periodicals, Inc.

  19. Local Stereo Matching Using Adaptive Local Segmentation

    NARCIS (Netherlands)

    Damjanovic, S.; van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan

    We propose a new dense local stereo matching framework for gray-level images based on an adaptive local segmentation using a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The
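The local-matching skeleton such frameworks build on can be sketched as a window-based cost minimization over candidate disparities. The sum-of-absolute-differences (SAD) example below is generic; the paper's adaptive local segmentation and dynamic threshold are not reproduced, and all names are illustrative.

```python
import numpy as np

def best_disparity(left, right, row, col, max_disp, win=1):
    """Pick the disparity minimizing the sum of absolute differences (SAD)
    over a (2*win+1)^2 window around (row, col) in the left image.

    Assumes rectified images and enough border so all windows are in
    bounds; a generic local-matching sketch, not the paper's method.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    patch_l = left[row - win:row + win + 1, col - win:col + win + 1]
    costs = []
    for d in range(max_disp + 1):
        c = col - d  # candidate match column in the right image
        patch_r = right[row - win:row + win + 1, c - win:c + win + 1]
        costs.append(np.abs(patch_l - patch_r).sum())
    return int(np.argmin(costs))
```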

  20. Segmentation, advertising and prices

    NARCIS (Netherlands)

    Galeotti, Andrea; Moraga González, José

This paper explores the implications of market segmentation for firm competitiveness. In contrast to earlier work, here market segmentation is minimal in the sense that it is based on consumer attributes that are completely unrelated to tastes. We show that when the market is comprised of two

  1. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    mechanisms may act on the level of gene expression, cell proliferation, tissue differentiation and organ system formation in individual segments. Accordingly, in some polychaete annelids the first three pairs of segmental peripheral neurons arise synchronously, while the metameric commissures of the ventral...

  2. Segmentation of culturally diverse visitors' values in forest recreation management

    Science.gov (United States)

    C. Li; H.C. Zinn; G.E. Chick; J.D. Absher; A.R. Graefe; Y. Hsu

    2007-01-01

The purpose of this study was to examine the potential utility of Hofstede's (1980) measure of cultural values for group segmentation in an ethnically diverse population in a forest recreation context, and to validate the values segmentation, if any, via socio-demographic and service-quality-related variables. In 2002, the visitors to the Angeles National Forest (ANF)...

  3. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jinzhong; Aristophanous, Michalis, E-mail: MAristophanous@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Beadle, Beth M.; Garden, Adam S. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Schwartz, David L. [Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States)

    2015-09-15

Purpose: To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Methods: Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation–maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the “ground truth” for quantitative evaluation. Results: The median multichannel segmented GTV of the primary tumor was 15.7 cm³ (range, 6.6–44.3 cm³), while the PET segmented GTV was 10.2 cm³ (range, 2.8–45.1 cm³). The median physician-defined GTV was 22.1 cm³ (range, 4.2–38.4 cm³). The median difference between the multichannel segmented and physician-defined GTVs was −10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was −19.2%, showing a statistically significant difference (p-value = 0.0037). The median Dice similarity coefficient between the multichannel segmented
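The expectation-maximization machinery behind such a Gaussian mixture model can be illustrated on scalar data. The sketch below is a stripped-down, single-channel, two-component EM without the Markov random fields of the paper, using an ad hoc min/max initialization:

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture (illustration only).

    The paper's segmentation uses a multichannel (CT/PET/MR) mixture model
    with Markov random fields; this sketch shows only the basic E and M
    updates on scalar data. Returns (weights, means, variances).
    """
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])       # crude but deterministic init
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```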

  4. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy.

    Science.gov (United States)

    Yang, Jinzhong; Beadle, Beth M; Garden, Adam S; Schwartz, David L; Aristophanous, Michalis

    2015-09-01

    To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation-maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the "ground truth" for quantitative evaluation. The median multichannel segmented GTV of the primary tumor was 15.7 cm(3) (range, 6.6-44.3 cm(3)), while the PET segmented GTV was 10.2 cm(3) (range, 2.8-45.1 cm(3)). The median physician-defined GTV was 22.1 cm(3) (range, 4.2-38.4 cm(3)). The median difference between the multichannel segmented and physician-defined GTVs was -10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was -19.2%, showing a statistically significant difference (p-value =0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was 0.75 (range, 0.55-0.84), and the

  5. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy

    International Nuclear Information System (INIS)

    Yang, Jinzhong; Aristophanous, Michalis; Beadle, Beth M.; Garden, Adam S.; Schwartz, David L.

    2015-01-01

Purpose: To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Methods: Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation–maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the “ground truth” for quantitative evaluation. Results: The median multichannel segmented GTV of the primary tumor was 15.7 cm³ (range, 6.6–44.3 cm³), while the PET segmented GTV was 10.2 cm³ (range, 2.8–45.1 cm³). The median physician-defined GTV was 22.1 cm³ (range, 4.2–38.4 cm³). The median difference between the multichannel segmented and physician-defined GTVs was −10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was −19.2%, showing a statistically significant difference (p-value = 0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was

  6. Methods for recognition and segmentation of active fault

    International Nuclear Information System (INIS)

    Hyun, Chang Hun; Noh, Myung Hyun; Lee, Kieh Hwa; Chang, Tae Woo; Kyung, Jai Bok; Kim, Ki Young

    2000-03-01

In order to identify and segment active faults, the literature of structural geology, paleoseismology, and geophysical exploration was reviewed. The existing structural geological criteria for segmenting active faults were examined; these are mostly based on normal fault systems, so additional criteria are needed for application to other types of fault systems. The definition of the seismogenic fault, the characteristics of fault activity, the criteria and results of fault segmentation studies, the relationship between segmented fault length and maximum displacement, and the estimation of the seismic risk of segmented faults were examined in the paleoseismic work. The history of earthquakes, such as the dynamic pattern of faults, the return period, and the magnitude of the maximum earthquake originated by fault activity, can be revealed by such studies. It is confirmed through various case studies that numerous geophysical exploration methods, including electrical resistivity, land seismic, marine seismic, ground-penetrating radar, magnetic, and gravity surveys, have been efficiently applied to the recognition and segmentation of active faults

  7. Active mask segmentation of fluorescence microscope images.

    Science.gov (United States)

    Srinivasa, Gowri; Fickus, Matthew C; Guo, Yusong; Linstedt, Adam D; Kovacević, Jelena

    2009-08-01

    We propose a new active mask algorithm for the segmentation of fluorescence microscope images of punctate patterns. It combines the (a) flexibility offered by active-contour methods, (b) speed offered by multiresolution methods, (c) smoothing offered by multiscale methods, and (d) statistical modeling offered by region-growing methods into a fast and accurate segmentation tool. The framework moves from the idea of the "contour" to that of "inside and outside," or masks, allowing for easy multidimensional segmentation. It adapts to the topology of the image through the use of multiple masks. The algorithm is almost invariant under initialization, allowing for random initialization, and uses a few easily tunable parameters. Experiments show that the active mask algorithm matches the ground truth well and outperforms the algorithm widely used in fluorescence microscopy, seeded watershed, both qualitatively, as well as quantitatively.

  8. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
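The region-growing half of such a combination admits a compact sketch: a breadth-first traversal that accepts 4-connected neighbors whose intensity stays within a tolerance of the seed value. This generic version (the tolerance rule and connectivity are assumptions) is not the authors' implementation:

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol):
    """Grow a region from a seed pixel, admitting 4-connected neighbors
    whose intensity differs from the seed value by at most tol.

    Returns a boolean mask of the grown region; a minimal sketch of the
    region-growing component of such semi-automatic pipelines.
    """
    image = np.asarray(image, dtype=float)
    mask = np.zeros(image.shape, dtype=bool)
    target = image[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(image[nr, nc] - target) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```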

  9. An objective evaluation framework for segmentation techniques of functional positron emission tomography studies

    CERN Document Server

    Kim, J; Eberl, S; Feng, D

    2004-01-01

    Segmentation of multi-dimensional functional positron emission tomography (PET) studies into regions of interest (ROI) exhibiting similar temporal behavior is useful in diagnosis and evaluation of neurological images. Quantitative evaluation plays a crucial role in measuring the segmentation algorithm's performance. Due to the lack of "ground truth" available for evaluating segmentation of clinical images, automated segmentation results are usually compared with manual delineation of structures which is, however, subjective, and is difficult to perform. Alternatively, segmentation of co-registered anatomical images such as magnetic resonance imaging (MRI) can be used as the ground truth to the PET segmentation. However, this is limited to PET studies which have corresponding MRI. In this study, we introduce a framework for the objective and quantitative evaluation of functional PET study segmentation without the need for manual delineation or registration to anatomical images of the patient. The segmentation ...

  10. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer markets for a chosen company and to present a goods offer suited to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes an evaluation of a questionnaire survey, discovering of market segment...

  11. A cognitively grounded measure of pronunciation distance.

    Directory of Open Access Journals (Sweden)

    Martijn Wieling

    Full Text Available In this study we develop pronunciation distances based on naive discriminative learning (NDL. Measures of pronunciation distance are used in several subfields of linguistics, including psycholinguistics, dialectology and typology. In contrast to the commonly used Levenshtein algorithm, NDL is grounded in cognitive theory of competitive reinforcement learning and is able to generate asymmetrical pronunciation distances. In a first study, we validated the NDL-based pronunciation distances by comparing them to a large set of native-likeness ratings given by native American English speakers when presented with accented English speech. In a second study, the NDL-based pronunciation distances were validated on the basis of perceptual dialect distances of Norwegian speakers. Results indicated that the NDL-based pronunciation distances matched perceptual distances reasonably well with correlations ranging between 0.7 and 0.8. While the correlations were comparable to those obtained using the Levenshtein distance, the NDL-based approach is more flexible as it is also able to incorporate acoustic information other than sound segments.
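The Levenshtein baseline mentioned above is the classic dynamic-programming edit distance over sound segments; a minimal sketch is below (NDL itself, with its asymmetric, learned distances, is more involved and is not reproduced here):

```python
def levenshtein(s, t):
    """Classic edit distance: minimum insertions, deletions and
    substitutions (each cost 1) turning string s into string t."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))          # distances for the empty prefix of s
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution / match
        prev = curr
    return prev[n]

print(levenshtein("kitten", "sitting"))  # → 3
```

Note that `levenshtein(s, t) == levenshtein(t, s)` by construction, which is exactly the symmetry the NDL-based measure is designed to relax.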

  12. Automated vessel shadow segmentation of fovea-centered spectral-domain images from multiple OCT devices

    Science.gov (United States)

    Wu, Jing; Gerendas, Bianca S.; Waldstein, Sebastian M.; Simader, Christian; Schmidt-Erfurth, Ursula

    2014-03-01

    Spectral-domain Optical Coherence Tomography (SD-OCT) is a non-invasive modality for acquiring high resolution, three-dimensional (3D) cross-sectional volumetric images of the retina and the subretinal layers. SD-OCT also allows the detailed imaging of retinal pathology, aiding clinicians in the diagnosis of sight-degrading diseases such as age-related macular degeneration (AMD) and glaucoma. Disease diagnosis, assessment, and treatment require a patient to undergo multiple OCT scans, possibly using different scanning devices, to accurately and precisely gauge disease activity, progression and treatment success. However, the use of OCT imaging devices from different vendors, combined with patient movement, may result in poor scan spatial correlation, potentially leading to incorrect patient diagnosis or treatment analysis. Image registration can be used to precisely compare disease states by registering differing 3D scans to one another. In order to align 3D scans from different time-points and vendors using registration, landmarks are required, the most obvious being the retinal vasculature. Presented here is a fully automated cross-vendor method to acquire retina vessel locations for OCT registration from fovea-centred 3D SD-OCT scans based on vessel shadows. Noise-filtered OCT scans are flattened based on vendor retinal layer segmentation, to extract the retinal pigment epithelium (RPE) layer of the retina. Voxel-based layer profile analysis and k-means clustering are used to extract candidate vessel shadow regions from the RPE layer. In conjunction, the extracted RPE layers are combined to generate a projection image featuring all candidate vessel shadows. Image processing methods for vessel segmentation of the OCT-constructed projection image are then applied to optimize the accuracy of OCT vessel shadow segmentation through the removal of false positive shadow regions such as those caused by exudates and cysts. Validation of segmented vessel shadows uses
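The k-means step that isolates dark vessel-shadow pixels in the RPE layer can be illustrated with a plain two-cluster 1-D k-means on a synthetic intensity profile. This is a hedged sketch, not the authors' pipeline; the helper and the profile values are invented for illustration:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Plain 1-D k-means; returns labels (0 = lowest center) and sorted centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    order = np.argsort(centers)
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[labels], centers[order]

# Synthetic RPE intensity profile: bright layer with two dark vessel shadows
profile = np.array([200, 210, 60, 55, 205, 198, 50, 58, 202, 207], dtype=float)
labels, centers = kmeans_1d(profile, k=2)
shadow_mask = labels == 0   # cluster 0 = darker cluster after sorting
print(shadow_mask.astype(int))  # → [0 0 1 1 0 0 1 1 0 0]
```

The real method clusters per-voxel layer-profile features rather than raw intensities, but the cluster-then-threshold logic is the same.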

  13. TU-H-CAMPUS-IeP3-01: Simultaneous PET Restoration and PET/CT Co-Segmentation Using a Variational Method

    International Nuclear Information System (INIS)

    Li, L; Tan, S; Lu, W

    2016-01-01

    Purpose: PET images are usually blurred due to the finite spatial resolution, while CT images suffer from low contrast. Segmenting a tumor from either a single PET or a single CT image is thus challenging. To make full use of the complementary information between PET and CT, we propose a novel variational method for simultaneous PET image restoration and PET/CT image co-segmentation. Methods: The proposed model was constructed based on the Γ-convergence approximation of the Mumford-Shah (MS) segmentation model for PET/CT co-segmentation. Moreover, a PET de-blurring process was integrated into the MS model to improve the segmentation accuracy. An interaction edge constraint term over the two modalities was specially designed to share the complementary information. The energy functional was iteratively optimized using an alternate minimization (AM) algorithm. The performance of the proposed method was validated on ten lung cancer cases and five esophageal cancer cases. The ground truth was manually delineated by an experienced radiation oncologist using the complementary visual features of PET and CT. The segmentation accuracy was evaluated by the Dice similarity index (DSI) and volume error (VE). Results: The proposed method achieved the expected restoration result for the PET images and satisfactory segmentation results for both PET and CT images. For the lung cancer dataset, the average DSI (0.72) increased by 0.17 and 0.40 relative to single-PET and single-CT segmentation. For the esophageal cancer dataset, the average DSI (0.85) increased by 0.07 and 0.43 relative to single-PET and single-CT segmentation. Conclusion: The proposed method took full advantage of the complementary information from PET and CT images. This work was supported in part by the National Cancer Institute Grant R01CA172638. Shan Tan and Laquan Li were supported in part by the National Natural Science Foundation of China, under Grant Nos. 60971112 and 61375018.

  14. Segmental tuberculosis verrucosa cutis

    Directory of Open Access Journals (Sweden)

    Hanumanthappa H

    1994-01-01

    Full Text Available A case of segmental Tuberculosis Verrucosa Cutis is reported in a 10-year-old boy. The condition resembled the ascending lymphangitic type of sporotrichosis. The lesions cleared on treatment with INH 150 mg daily for 6 months.

  15. Chromosome condensation and segmentation

    International Nuclear Information System (INIS)

    Viegas-Pequignot, E.M.

    1981-01-01

    Some aspects of chromosome condensation in mammals, humans especially, were studied by means of cytogenetic techniques of chromosome banding. Two complementary approaches were adopted: a study of normal condensation as early as prophase, and an analysis of chromosome segmentation induced by physical (temperature and γ-rays) or chemical agents (base analogues, antibiotics, ...) in order to identify the factors liable to affect condensation. Here 'segmentation' means an abnormal chromosome condensation appearing systematically and reproducibly. The study of normal condensation was made possible by the development of a technique based on cell synchronization by thymidine, yielding prophasic and prometaphasic cells. Besides, the possibility of inducing R-banding segmentations on these cells by BrdU (5-bromodeoxyuridine) allowed a much finer analysis of karyotypes. Another technique was developed using 5-ACR (5-azacytidine); it made it possible to induce a segmentation similar to the one obtained using BrdU and to identify heterochromatic areas rich in G-C base pairs [fr

  16. International EUREKA: Initialization Segment

    International Nuclear Information System (INIS)

    1982-02-01

    The Initialization Segment creates the starting description of the uranium market. The starting description includes the international boundaries of trade, the geologic provinces, resources, reserves, production, uranium demand forecasts, and existing market transactions. The Initialization Segment is designed to accept information of various degrees of detail, depending on what is known about each region. It must transform this information into a specific data structure required by the Market Segment of the model, filling in gaps in the information through a predetermined sequence of defaults and built-in assumptions. A principal function of the Initialization Segment is to create diagnostic messages indicating any inconsistencies in the data and explaining which assumptions were used to organize the data base. This permits the user to manipulate the data base until the user is satisfied that all the assumptions used are reasonable and that any inconsistencies have been resolved in a satisfactory manner

  17. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    Science.gov (United States)

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, when compared with the ground-truth segmentation performed by a radiologist.

  18. Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections

    DEFF Research Database (Denmark)

    Bertholet, Jenny; Wan, Hanlin; Toftegaard, Jakob

    2017-01-01

    segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP...... algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated...
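The template step described above, taking the position with the highest normalized cross-correlation (NCC) inside a search area, can be sketched as a brute-force NCC search. The toy "marker" image and template below are invented for illustration, not the CBCT projection data:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def match_template(image, template):
    """Return (row, col) of the top-left corner with the highest NCC."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = ncc(image[r:r + h, c:c + w], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

img = np.zeros((8, 8))
img[3:5, 4:6] = 1.0               # bright 2x2 "marker" in the projection
tmpl = np.zeros((4, 4))
tmpl[1:3, 1:3] = 1.0              # 2D template with the marker centered
print(match_template(img, tmpl))  # → (2, 3)
```

In the real algorithm the search is restricted to a window around the dynamic-programming (DP) position, which keeps the cost of the exhaustive scan low.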

  19. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  20. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Full Text Available Strategic planning of marketing activities is the basis of business success in modern business environment. Customers are not homogenous in their preferences and expectations. Formulating an adequate marketing strategy, focused on realization of company's strategic objectives, requires segmented approach to the market that appreciates differences in expectations and preferences of customers. One of significant activities in strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on the long term basis. At the same time, there is a need to revise and adapt marketing activities on the short term basis. There are number of criteria based on which market segmentation is performed. The paper will consider effectiveness and efficiency of different market segmentation criteria based on empirical research of customer expectations and preferences. The analysis will include traditional criteria and criteria based on behavioral model. The research implications will be analyzed from the perspective of selection of the most adequate market segmentation criteria in strategic planning of marketing activities.

  1. Applications of magnetic resonance image segmentation in neurology

    Science.gov (United States)

    Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu

    1999-05-01

    After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, hence making segmentation essential in modern image analysis. In this research project several PC-based software packages were developed in order to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals were integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.

  2. Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.

    Directory of Open Access Journals (Sweden)

    Stephen M Plaza

    Full Text Available The ability to automatically segment an image into distinct regions is a critical aspect in many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as is required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by the similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of the segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.
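One generic probabilistic measure of the kind that could direct proofreading to ambiguous regions is the per-pixel entropy of predicted class probabilities; the higher the entropy, the less certain the automated segmentation. This is an illustrative sketch, not the paper's specific measure, and the toy posterior values are invented:

```python
import numpy as np

def pixel_entropy(probs):
    """Shannon entropy (bits) per pixel.
    probs: array of shape (classes, H, W) with class probabilities summing to 1."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log2(p)).sum(axis=0)

# Two-class toy posterior: confident left half, ambiguous right half
probs = np.zeros((2, 2, 4))
probs[0, :, :2] = 0.99; probs[1, :, :2] = 0.01   # confident pixels
probs[0, :, 2:] = 0.55; probs[1, :, 2:] = 0.45   # uncertain pixels
H = pixel_entropy(probs)

# Rank pixels so manual correction visits the most uncertain ones first
review_order = np.argsort(H.ravel())[::-1]
print(bool(H[:, :2].max() < H[:, 2:].min()))  # → True
```

The ambiguous right half dominates the top of `review_order`, which is exactly the behavior a turn-around-time-minimizing workflow wants.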

  3. Quantitative Comparison of SPM, FSL, and Brainsuite for Brain MR Image Segmentation

    Directory of Open Access Journals (Sweden)

    Kazemi K

    2014-03-01

    Full Text Available Background: Accurate brain tissue segmentation from magnetic resonance (MR) images is an important step in the analysis of cerebral images. There are software packages which are used for brain segmentation. These packages usually contain a set of skull stripping, intensity non-uniformity (bias) correction and segmentation routines. Thus, assessment of the quality of the segmented gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) is needed for neuroimaging applications. Methods: In this paper, a performance evaluation of three widely used brain segmentation software packages, SPM8, FSL and Brainsuite, is presented. Segmentation with SPM8 has been performed in three frameworks: (i) default segmentation, (ii) SPM8 New-segmentation and (iii) a modified version using hidden Markov random fields as implemented in the SPM8-VBM toolbox. Results: The accuracy of the segmented GM, WM and CSF and the robustness of the tools against changes of image quality have been assessed using Brainweb simulated MR images and IBSR real MR images. The calculated similarity between the tissues segmented using the different tools and the corresponding ground truth shows variations in segmentation results. Conclusion: Few studies have investigated GM, WM and CSF segmentation. In those studies, the skull stripping and bias correction are performed separately and only the segmentation itself is evaluated. Thus, in this study, an assessment of the complete segmentation framework, consisting of the pre-processing and segmentation of these packages, is performed. The obtained results can assist users in choosing an appropriate segmentation software package for the neuroimaging application of interest.

  4. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) as hard segments and PTMO with a molecular weight of 2 900 g · mol-1 as soft segments. The aramide: PTMO segment ratio was increased from 1:1 to 2:1 thereby changing the structure from a high molecular weight multi-block

  5. Rediscovering market segmentation.

    Science.gov (United States)

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  6. Ground water

    International Nuclear Information System (INIS)

    Osmond, J.K.; Cowart, J.B.

    1982-01-01

    The subject is discussed under the headings: background and theory (introduction; fractionation in the hydrosphere; mobility factors; radioisotope evolution and aquifer classification; aquifer disequilibria and geochemical fronts); case studies (introduction; (a) conservative, and (b) non-conservative, behaviour); ground water dating applications (general requirements; radon and helium; radium isotopes; uranium isotopes). (U.K.)

  7. Ground water

    International Nuclear Information System (INIS)

    Osmond, J.K.; Cowart, J.B.

    1992-01-01

    The great variations in concentrations and activity ratios of ²³⁴U/²³⁸U in ground waters and the features causing elemental and isotopic mobility in the hydrosphere are discussed. Fractionation processes and their application to hydrology and other environmental problems such as earthquake, groundwater and aquifer dating are described. (UK)

  8. Multi-modal RGB–Depth–Thermal Human Body Segmentation

    DEFF Research Database (Denmark)

    Palmero, Cristina; Clapés, Albert; Bahnsen, Chris

    2016-01-01

    This work addresses the problem of human body segmentation from multi-modal visual cues as a first stage of automatic human behavior analysis. We propose a novel RGB-Depth-Thermal dataset along with a multi-modal segmentation baseline. The several modalities are registered using a calibration...... to other state-of-the-art methods, obtaining an overlap above 75% on the novel dataset when compared to the manually annotated ground-truth of human segmentations....

  9. Social discourses of healthy eating. A market segmentation approach.

    Science.gov (United States)

    Chrysochou, Polymeros; Askegaard, Søren; Grunert, Klaus G; Kristensen, Dorthe Brogård

    2010-10-01

    This paper proposes a framework of discourses regarding consumers' healthy eating as a useful conceptual scheme for market segmentation purposes. The objectives are: (a) to identify the appropriate number of health-related segments based on the underlying discursive subject positions of the framework, (b) to validate and further describe the segments based on their socio-demographic characteristics and attitudes towards healthy eating, and (c) to explore differences across segments in types of associations with food and health, as well as perceptions of food healthfulness. 316 Danish consumers participated in a survey that included measures of the underlying subject positions of the proposed framework, followed by a word association task that aimed to explore types of associations with food and health, and perceptions of food healthfulness. A latent class clustering approach revealed three consumer segments: the Common, the Idealists and the Pragmatists. Based on the addressed objectives, differences across the segments are described and implications of findings are discussed.

  10. A combined segmenting and non-segmenting approach to signal quality estimation for ambulatory photoplethysmography

    International Nuclear Information System (INIS)

    Wander, J D; Morris, D

    2014-01-01

    Continuous cardiac monitoring of healthy and unhealthy patients can help us understand the progression of heart disease and enable early treatment. Optical pulse sensing is an excellent candidate for continuous mobile monitoring of cardiovascular health indicators, but optical pulse signals are susceptible to corruption from a number of noise sources, including motion artifact. Therefore, before higher-level health indicators can be reliably computed, corrupted data must be separated from valid data. This is an especially difficult task in the presence of artifact caused by ambulation (e.g. walking or jogging), which shares significant spectral energy with the true pulsatile signal. In this manuscript, we present a machine-learning-based system for automated estimation of signal quality of optical pulse signals that performs well in the presence of periodic artifact. We hypothesized that signal processing methods that identified individual heart beats (segmenting approaches) would be more error-prone than methods that did not (non-segmenting approaches) when applied to data contaminated by periodic artifact. We further hypothesized that a fusion of segmenting and non-segmenting approaches would outperform either approach alone. Therefore, we developed a novel non-segmenting approach to signal quality estimation that we then utilized in combination with a traditional segmenting approach. Using this system we were able to robustly detect differences in signal quality as labeled by expert human raters (Pearson’s r = 0.9263). We then validated our original hypotheses by demonstrating that our non-segmenting approach outperformed the segmenting approach in the presence of contaminated signal, and that the combined system outperformed either individually. Lastly, as an example, we demonstrated the utility of our signal quality estimation system in evaluating the trustworthiness of heart rate measurements derived from optical pulse signals. (paper)
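The reported agreement with expert raters is a Pearson correlation; a minimal sketch of the coefficient on invented per-window quality scores (not the study's data) is below:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two sequences."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc * xc).sum() * (yc * yc).sum())

# Hypothetical automated signal-quality estimates vs. expert ratings per window
predicted = [0.90, 0.80, 0.20, 0.60, 0.10]
expert = [0.95, 0.70, 0.30, 0.65, 0.05]
r = pearson_r(predicted, expert)
print(round(float(r), 4))
```

A value near the paper's r = 0.9263 would indicate that the estimator tracks the human raters closely across both clean and artifact-contaminated windows.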

  11. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion stings has been a major public health problem in developing countries. Despite the high rate of death as a result of scorpion stings, few reports exist in the literature of intelligent devices and systems for automatic detection of scorpions. This paper proposes a digital image processing approach based on the fluorescing characteristics of scorpions under ultra-violet (UV) light for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination, followed by colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the acquired image. Two approaches to image segmentation have also been proposed in this work, namely, the simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results obtained show an average accuracy of 97.7% in correctly classifying the pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
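The green-channel thresholding step described above can be sketched directly. The synthetic image and the mean-based threshold below are illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np

def segment_green_channel(rgb, thresh=None):
    """Binary foreground mask from simple thresholding of the green channel.
    rgb: (H, W, 3) uint8 image; thresh defaults to the channel mean."""
    g = rgb[:, :, 1].astype(float)
    if thresh is None:
        thresh = g.mean()          # assumed default; the paper's threshold may differ
    return g > thresh

# Synthetic UV image: fluorescing "scorpion" pixels are green-dominant
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3, 1] = 220             # bright green 2x2 blob on a dark background
mask = segment_green_channel(img)
print(int(mask.sum()))             # → 4 foreground pixels
```

Because the fluorescence concentrates energy in the green channel, even this naive threshold separates the two non-overlapping classes on a dark UV background.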

  12. Ground Validation GPS for American Samoa

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This project is a cooperative effort among the National Ocean Service, National Centers for Coastal Ocean Science, Center for Coastal Monitoring and Assessment; the...

  13. Ground Validation GPS of the Mariana Archipelago

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This project is a cooperative effort among the National Ocean Service, National Centers for Coastal Ocean Science, Center for Coastal Monitoring and Assessment; the...

  14. Detailed Design of On-Board and Ground Segment

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Image processing, attitude determination, quaternion estimation, and performance test of short range camera for rendez-vous and docking of spacecraft.

  15. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for segmentation of document images with complex structure. This technique, based on the GLCM (Grey Level Co-occurrence Matrix), is used to segment this type of document into three regions, namely 'graphics', 'background' and 'text'. Very briefly, the method divides the document image into blocks of a size chosen after a series of tests, and then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the last step of segmentation is obtained by grouping connected pixels. Two performance measurements are reported for the graphics and text zones: a classification rate of 98.3% and a misclassification rate of 1.79%.
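A GLCM and two of the five textural parameters (energy and entropy) can be computed with a few lines of NumPy. The offset, grey-level quantization and toy blocks below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def glcm_features(block, levels=8):
    """Grey-level co-occurrence matrix for the horizontal offset (0, 1),
    plus two of the textural parameters above: energy and entropy."""
    q = (block.astype(float) / 256 * levels).astype(int)   # quantize grey levels
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):  # co-occurring pairs
        glcm[i, j] += 1
    p = glcm / glcm.sum()                                  # normalize to probabilities
    energy = (p ** 2).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return energy, entropy

flat = np.full((8, 8), 100, dtype=np.uint8)                       # uniform block
noisy = (np.arange(64).reshape(8, 8) * 37 % 256).astype(np.uint8)  # textured block
e1, h1 = glcm_features(flat)
e2, h2 = glcm_features(noisy)
print(bool(e1 > e2 and h1 < h2))  # uniform block: high energy, low entropy → True
```

Feature vectors of this kind, computed per block, are what the k-means step clusters into the 'graphics', 'background' and 'text' regions.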

  16. Superiority Of Graph-Based Visual Saliency GVS Over Other Image Segmentation Methods

    Directory of Open Access Journals (Sweden)

    Umu Lamboi

    2017-02-01

    Full Text Available Although inherently tedious, the segmentation of images and the evaluation of segmented images are critical in computer vision processes. One of the main challenges in image segmentation evaluation arises from the basic conflict between generality and objectivity. For general segmentation purposes, the lack of well-defined ground truth and segmentation accuracy measures limits the evaluation of specific applications. Subjective visual comparison of segmented images is the most common method of evaluating segmentation quality. This daunting task, however, limits the scope of segmentation evaluation to a few predetermined sets of images. As an alternative, supervised evaluation compares segmented images against manually-segmented or pre-processed benchmark images. Good evaluation methods not only allow for different comparisons but also for integration with target recognition systems for adaptive selection of appropriate segmentation granularity with improved recognition accuracy. Most of the current segmentation methods still lack satisfactory measures of effectiveness. Thus, this study proposed a supervised framework which uses visual saliency detection to quantitatively evaluate image segmentation quality. The new benchmark evaluator uses Graph-based Visual Saliency (GVS) to compare boundary outputs for manually segmented images. Using the Berkeley Segmentation Database, the proposed algorithm was tested against 4 other quantitative evaluation methods: Probabilistic Rand Index (PRI), Variation of Information (VOI), Global Consistency Error (GCE) and Boundary Detection Error (BDE). Based on the results, the GVS approach outperformed any of the other 4 independent standard methods in terms of visual saliency detection of images.

  17. Connecting textual segments

    DEFF Research Database (Denmark)

    Brügger, Niels

    2017-01-01

    In "Connecting textual segments: A brief history of the web hyperlink" Niels Brügger investigates the history of one of the most fundamental features of the web: the hyperlink. Based on the argument that the web hyperlink is best understood if it is seen as another step in a much longer and broader history than just the years of the emergence of the web, the chapter traces the history of how segments of text have deliberately been connected to each other by the use of specific textual and media features, from clay tablets, manuscripts on parchment, and print, among others, to hyperlinks on stand......

  18. Ground Pollution Science

    International Nuclear Information System (INIS)

    Oh, Jong Min; Bae, Jae Geun

    1997-08-01

    This book deals with ground pollution science and soil science: the classification of soil and fundamentals; ground pollution and humans; ground pollution and organic matter; ground pollution and the city environment; environmental problems of the earth and ground pollution; soil pollution and the development of geological features of the ground; ground pollution and landfill of waste; and cases of ground pollution measurement.

  19. Retinal Image Preprocessing: Background and Noise Segmentation

    Directory of Open Access Journals (Sweden)

    Usman Akram

    2012-09-01

    Full Text Available Retinal images are used for the automated screening and diagnosis of diabetic retinopathy. Retinal image quality must be improved for the detection of features and abnormalities, and for this purpose preprocessing of retinal images is vital. In this paper, we present a novel automated approach for preprocessing of colored retinal images. The proposed technique improves the quality of the input retinal image by separating the background and noisy areas from the overall image. It consists of coarse segmentation and fine segmentation. The standard retinal image databases Diaretdb0, Diaretdb1, DRIVE and STARE are used to validate our preprocessing technique. The experimental results show the validity of the proposed preprocessing technique.

  20. Spine segmentation from C-arm CT data sets: application to region-of-interest volumes for spinal interventions

    Science.gov (United States)

    Buerger, C.; Lorenz, C.; Babic, D.; Hoppenbrouwers, J.; Homan, R.; Nachabe, R.; Racadio, J. M.; Grass, M.

    2017-03-01

    Spinal fusion is a common procedure to stabilize the spinal column by fixating parts of the spine. In such procedures, metal screws are inserted through the patient's back into a vertebra, and the screws of adjacent vertebrae are connected by metal rods to generate a fixed bridge. In these procedures, 3D image guidance for intervention planning and outcome control is required. Here, for anatomical guidance, an automated approach for vertebra segmentation from C-arm CT images of the spine is introduced and evaluated. As a prerequisite, 3D C-arm CT images are acquired covering the vertebrae of interest. An automatic model-based segmentation approach is applied to delineate the outline of the vertebrae of interest. The segmentation approach is based on 24 partial models of the cervical, thoracic and lumbar vertebrae, which aggregate information about (i) the basic shape itself, (ii) trained features for image-based adaptation, and (iii) potential shape variations. Since the volume data sets generated by the C-arm system are limited to a certain region of the spine, the target vertebra, and hence the initial model position, is assigned interactively. The approach was trained and tested on 21 human cadaver scans. A 3-fold cross-validation against ground truth annotations yields overall mean segmentation errors of 0.5 mm for T1 to 1.1 mm for C6. The results are promising and show potential to support the clinician in pedicle screw path and rod planning, allowing accurate and reproducible insertions.

  1. Unsupervised motion-based object segmentation refined by color

    Science.gov (United States)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    ... The presented method has no problems with bifurcations. For the pixel-resolution segmentation itself we reclassify pixels so as to optimize an error norm which favours similarly coloured regions and straight edges. SEGMENTATION MEASURE To assist in the evaluation of the proposed algorithm we developed a quality metric. Because the problem does not have an exact specification, we decided to define a ground-truth output which we find desirable for a given input. We define the measure of segmentation quality as how different the segmentation is from the ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation separately. It also allows us to evaluate which parts of a frame suffer from oversegmentation or undersegmentation. The proposed algorithm has been tested on several typical sequences. CONCLUSIONS In this abstract we presented a new video segmentation method which performs well in segmenting multiple independently moving foreground objects from each other and from the background. It combines the strong points of both colour and motion segmentation in the way we expected. One of the weak points is that the segmentation method suffers from undersegmentation when adjacent objects display similar motion. In sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, and in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.
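The over/undersegmentation distinction drawn above can be illustrated with a simple count over the pairing of output and ground-truth labels. This is one plausible reading of the idea, not the authors' actual metric:

```python
import numpy as np

def over_under_segmentation(seg, gt):
    """Count over- and undersegmentation against a ground-truth label map.
    A ground-truth region split across several output segments counts as
    oversegmentation; an output segment straddling several ground-truth
    regions counts as undersegmentation."""
    seg = np.asarray(seg).ravel()
    gt = np.asarray(gt).ravel()
    # Set of (segment label, ground-truth label) co-occurrences.
    pairs = set(zip(seg.tolist(), gt.tolist()))
    over = sum(1 for g in np.unique(gt)
               if sum(1 for _, g_ in pairs if g_ == g) > 1)
    under = sum(1 for s in np.unique(seg)
                if sum(1 for s_, _ in pairs if s_ == s) > 1)
    return over, under
```

A per-region variant of the same counts would localize which parts of a frame are over- or undersegmented.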

  2. Segmentation in cinema perception.

    Science.gov (United States)

    Carroll, J M; Bever, T G

    1976-03-12

    Viewers perceptually segment moving picture sequences into their cinematically defined units: excerpts that follow short film sequences are recognized faster when the excerpt originally came after a structural cinematic break (a cut or change in the action) than when it originally came before the break.

  3. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured and non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...

  4. Unsupervised Image Segmentation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Mikeš, Stanislav

    2014-01-01

    Roč. 36, č. 4 (2014), s. 23-23 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : unsupervised image segmentation Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2014/RO/haindl-0434412.pdf

  5. Benchmark for license plate character segmentation

    Science.gov (United States)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of much research in past years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task of ALPR is the license plate character segmentation (LPCS) step, because its effectiveness must be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of ALPR, together with an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2,000 Brazilian license plates comprising 14,000 alphanumeric symbols and their corresponding bounding-box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation of the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving accurate OCR.
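The abstract does not give the Jaccard-centroid formula, so the sketch below is only a plausible guess at the idea: the standard Jaccard (IoU) of two bounding boxes, damped by the displacement of their centroids. The damping term and function names are ours; the coefficient in the paper may be defined differently:

```python
def jaccard_boxes(a, b):
    """Jaccard (IoU) of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def jaccard_centroid(pred, gt):
    """Hypothetical Jaccard-centroid variant: IoU divided by one plus the
    centroid displacement normalized by the ground-truth diagonal, so boxes
    poorly centred on the ground truth score lower than plain IoU."""
    cpx, cpy = (pred[0] + pred[2]) / 2.0, (pred[1] + pred[3]) / 2.0
    cgx, cgy = (gt[0] + gt[2]) / 2.0, (gt[1] + gt[3]) / 2.0
    diag = ((gt[2] - gt[0]) ** 2 + (gt[3] - gt[1]) ** 2) ** 0.5
    shift = ((cpx - cgx) ** 2 + (cpy - cgy) ** 2) ** 0.5 / diag
    return jaccard_boxes(pred, gt) / (1.0 + shift)
```

For identical boxes both measures give 1; for a shifted box the centroid term drags the score below the plain IoU.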

  6. Communication grounding facility

    International Nuclear Information System (INIS)

    Lee, Gye Seong

    1998-06-01

    This book is about communication grounding facilities and is made up of twelve chapters. It covers general grounding and its purpose; materials, including thermal insulating materials; construction of grounding; super-strength grounding methods; grounding facilities, grounding methods and building insulation; switched grounding with No. 1A and LCR; grounding facilities of transmission lines; wireless facility grounding; grounding facilities in wireless base stations; grounding of power facilities; grounding of low-tension interior power wires; communication facilities of railroads; installation of arresters in apartments and houses; and installation of arresters on the introduction, together with earth conductivity and the measurement of introduction and grounding resistance.

  7. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  8. Boundary segmentation for fluorescence microscopy using steerable filters

    Science.gov (United States)

    Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2017-02-01

    Fluorescence microscopy is used to image multiple subcellular structures in living cells which are not readily observed using conventional optical microscopy. Moreover, two-photon microscopy is widely used to image structures deeper in tissue. Recent advancements in fluorescence microscopy have enabled the generation of large data sets of images at different depths, times, and spectral channels. Automatic object segmentation is therefore necessary, since manual segmentation would be inefficient and biased. However, automatic segmentation is still a challenging problem, as regions of interest may lack well-defined boundaries and may have non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis. The results from several data sets demonstrate that our method can segment tubular boundaries successfully. Moreover, our method performs better than other popular image segmentation methods when compared against ground truth data obtained via manual segmentation.
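The steerable-filter step rests on the classic steering property of first-order Gaussian derivatives: the derivative at any orientation is a linear combination of the x- and y-derivative basis responses, so one pair of convolutions serves every orientation. A minimal sketch of that property (ours, using scipy, not the authors' code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def steered_response(image, theta, sigma=2.0):
    """Gaussian-derivative response steered to angle theta (radians).
    gx and gy are the two basis responses; any orientation is obtained
    without further convolution."""
    gx = gaussian_filter(image, sigma, order=(0, 1))  # d/dx (axis 1)
    gy = gaussian_filter(image, sigma, order=(1, 0))  # d/dy (axis 0)
    return np.cos(theta) * gx + np.sin(theta) * gy
```

On a vertical edge, the response steered to 0 radians (across the edge) is strong while the response steered to pi/2 (along the edge) is essentially zero.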

  9. Adaptive attenuation of aliased ground roll using the shearlet transform

    Science.gov (United States)

    Hosseini, Seyed Abolfazl; Javaherian, Abdolrahim; Hassani, Hossien; Torabi, Siyavash; Sadri, Maryam

    2015-01-01

    Attenuation of ground roll is an essential step in seismic data processing. Spatial aliasing of the ground roll may cause the ground roll to overlap with reflections in the f-k domain. The shearlet transform is a directional, multidimensional transform that separates events with different dips and generates subimages at different scales and directions. In this study, the shearlet transform was used adaptively to attenuate aliased and non-aliased ground roll. After defining a filtering zone, the input shot record is divided into segments, each of which overlaps its adjacent segments. The shearlet transform is applied to each segment, and the subimages containing aliased and non-aliased ground roll, together with the locations of these events on each subimage, are selected adaptively. Based on these locations, a mute is applied to the selected subimages. After applying the inverse shearlet transform, the filtered segments are merged together using the Hanning function. This adaptive ground roll attenuation procedure was tested on synthetic data and on field shot records from the west of Iran. Analysis of the results using f-k spectra revealed that the non-aliased and most of the aliased ground roll were attenuated by the proposed adaptive procedure. We also applied the method to shot records of a 2D land survey, and the data sets before and after ground roll attenuation were stacked and compared. The stacked section after ground roll attenuation contained less linear ground roll noise and more continuous reflections than the stacked section before attenuation. The proposed method has some drawbacks, such as a longer run time compared with traditional methods such as f-k filtering, and reduced performance when the dip and frequency content of the aliased ground roll are the same as those of the reflections.
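The segment-overlap-and-Hanning-merge step is a form of overlap-add. A 1-D sketch of the merge (our simplification of the 2-D case in the abstract; function names are hypothetical), with the accumulated window weight normalized out so the taper introduces no amplitude bias:

```python
import numpy as np

def split_into_segments(trace, seg_len, hop):
    """Slice a trace into overlapping windows (no padding at the ends)."""
    return [trace[i:i + seg_len]
            for i in range(0, len(trace) - seg_len + 1, hop)]

def merge_overlapping_segments(segments, hop):
    """Recombine processed, overlapping 1-D segments with Hann weights,
    dividing by the accumulated weight at each sample."""
    seg_len = len(segments[0])
    out_len = hop * (len(segments) - 1) + seg_len
    w = np.hanning(seg_len)
    acc = np.zeros(out_len)
    wsum = np.zeros(out_len)
    for k, s in enumerate(segments):
        acc[k * hop:k * hop + seg_len] += w * s
        wsum[k * hop:k * hop + seg_len] += w
    return acc / np.maximum(wsum, 1e-12)
```

With unmodified segments, the merge reconstructs the original trace everywhere the window weight is non-zero (i.e. away from the two end samples, where the Hann taper is exactly zero).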

  10. Market segmentation: Venezuelan ADRs

    Directory of Open Access Journals (Sweden)

    Urbi Garay

    2012-12-01

    Full Text Available The controls on foreign exchange imposed by Venezuela in 2003 constitute a natural experiment that allows researchers to observe the effects of exchange controls on stock market segmentation. This paper provides empirical evidence that although the Venezuelan capital market as a whole was highly segmented before the controls were imposed, the shares of the firm CANTV were, through their American Depositary Receipts (ADRs), partially integrated with the global market. Following the imposition of the exchange controls, this integration was lost. The research also documents the spectacular and apparently contradictory rise experienced by the Caracas Stock Exchange during the serious economic crisis of 2003. It is argued that, as happened in Argentina in 2002, the rise in share prices occurred because the depreciation of the Bolívar in the parallel currency market increased the local price of the stocks that had associated ADRs, which were negotiated in dollars.

  11. Scintillation counter, segmented shield

    International Nuclear Information System (INIS)

    Olson, R.E.; Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  12. Head segmentation in vertebrates

    OpenAIRE

    Kuratani, Shigeru; Schilling, Thomas

    2008-01-01

    Classic theories of vertebrate head segmentation clearly exemplify the idealistic nature of comparative embryology prior to the 20th century. Comparative embryology aimed at recognizing the basic, primary structure that is shared by all vertebrates, either as an archetype or an ancestral developmental pattern. Modern evolutionary developmental (Evo-Devo) studies are also based on comparison, and therefore have a tendency to reduce complex embryonic anatomy into overly simplified patterns. Her...

  13. The accelerated site technology deployment program presents the segmented gate system

    International Nuclear Information System (INIS)

    Patteson, Raymond; Maynor, Doug; Callan, Connie

    2000-01-01

    The Department of Energy (DOE) is working to accelerate the acceptance and application of innovative technologies that improve the way the nation manages its environmental remediation problems. The DOE Office of Science and Technology established the Accelerated Site Technology Deployment Program (ASTD) to help accelerate the acceptance and implementation of new and innovative soil and ground water remediation technologies. Coordinated by the Department of Energy's Idaho Office, the ASTD Program reduces many of the classic barriers to the deployment of new technologies by involving government, industry, and regulatory agencies in the assessment, implementation, and validation of innovative technologies. The paper uses the example of the Segmented Gate System (SGS) to illustrate how the ASTD program works. The SGS was used to cost effectively separate clean and contaminated soil for four different radionuclides: plutonium, uranium, thorium, and cesium. Based on those results, it has been proposed to use the SGS at seven other DOE sites across the country

  14. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    Science.gov (United States)

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem, as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation, then fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization, can easily be trapped in local optima, and are usually time-consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods in both efficiency and effectiveness.

  15. Video segmentation using keywords

    Science.gov (United States)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    At the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieved promising results, but they still depend heavily on annotated frames to distinguish between background and foreground, and creating these frames accurately takes considerable time and effort. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions containing objects whose labels match the given keywords in the first frame. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which shows that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest wider testing, combining other methods, to improve this result in the future.

  16. 'Grounded' Politics

    DEFF Research Database (Denmark)

    Schmidt, Garbi

    2012-01-01

    ... play within one particular neighbourhood: Nørrebro in the Danish capital, Copenhagen. The article introduces the concept of grounded politics to analyse how groups of Muslim immigrants in Nørrebro use the space, relationships and history of the neighbourhood for identity political statements... The article further describes how national political debates over the Muslim presence in Denmark affect identity political manifestations within Nørrebro. By using Duncan Bell’s concept of mythscape (Bell, 2003), the article shows how some political actors idealize Nørrebro’s past to contest the present... ethnic and religious diversity of the neighbourhood and, further, to frame what they see as the deterioration of genuine Danish identity...

  17. Market segmentation in behavioral perspective.

    OpenAIRE

    Wells, V.K.; Chang, S.W.; Oliveira-Castro, J.M.; Pallister, J.

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847 consumers and a total of 76,682 individual purchases, brand choice and price and reinforcement responsiveness were assessed for each segment a...

  18. Fully automated chest wall line segmentation in breast MRI by using context information

    Science.gov (United States)

    Wu, Shandong; Weinstein, Susan P.; Conant, Emily F.; Localio, A. Russell; Schnall, Mitchell D.; Kontos, Despina

    2012-03-01

    Breast MRI has emerged as an effective modality for the clinical management of breast cancer. Evidence suggests that computer-aided applications can further improve the diagnostic accuracy of breast MRI. A critical and challenging first step for automated breast MRI analysis is to separate the breast as an organ from the chest wall. Manual segmentation or user-assisted interactive tools are inefficient, tedious, and error-prone, making them impractical for processing the large amounts of data from clinical trials. To address this challenge, we developed a fully automated and robust computerized segmentation method that intensively utilizes the context information of breast MR imaging and the morphological characteristics of breast tissue to accurately delineate the breast and chest wall boundary. A critical component is the joint application of anisotropic diffusion and bilateral image filtering to enhance the edge that corresponds to the chest wall line (CWL) and to reduce the effect of adjacent non-CWL tissues. A CWL voting algorithm is proposed based on CWL candidates yielded from multiple sequential MRI slices, in which a CWL representative is generated and used, through a dynamic time warping (DTW) algorithm, to filter out inferior candidates, leaving the optimal one. Our method is validated on a representative dataset of 20 3D unilateral breast MRI scans that span the full range of the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS) fibroglandular density categorization. A promising performance (average overlay percentage of 89.33%) is observed when the automated segmentation is compared to manually segmented ground truth obtained by an experienced breast imaging radiologist. The automated method runs time-efficiently, at ~3 minutes per breast MR image set (28 slices).
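The DTW step used to compare CWL candidates against the representative follows the classic dynamic-programming recurrence; a minimal 1-D sketch (ours, not the authors' implementation):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping between two 1-D
    sequences; returns the minimal cumulative |a_i - b_j| alignment cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Candidates whose warped distance to the representative curve exceeds a threshold would then be discarded; the thresholding itself is not shown here.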

  19. Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains.

    Science.gov (United States)

    Bricq, S; Collet, Ch; Armspach, J P

    2008-12-01

    In the frame of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images including partial volume effect, bias field correction, and information given by a probabilistic atlas. The proposed method takes into account neighborhood information using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is included to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors are incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor, and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images for which the ground truth is available. Comparison with other often-used techniques demonstrates the accuracy and robustness of this new Markovian segmentation scheme.
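The chain structure of an HMC model is what makes exact posterior inference tractable, via the forward-backward recursions. A small illustrative sketch for a generic hidden Markov chain (not the paper's full model, which adds partial volume, bias field and atlas terms):

```python
import numpy as np

def forward_backward(obs_lik, trans, prior):
    """Posterior state probabilities along a hidden Markov chain.
    obs_lik[t, k] = p(observation_t | state k); trans[i, j] = p(j | i).
    Returns gamma[t, k] = p(state_t = k | all observations)."""
    T, K = obs_lik.shape
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = prior * obs_lik[0]
    alpha[0] /= alpha[0].sum()            # scale for numerical stability
    for t in range(1, T):                 # forward pass
        alpha[t] = obs_lik[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):        # backward pass
        beta[t] = trans @ (obs_lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```

In the segmentation setting, the "observations" would be voxel intensities ordered along a space-filling scan of the volume, and the posterior mixture weights play the role of the soft tissue fractions.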

  20. A Model Ground State of Polyampholytes

    International Nuclear Information System (INIS)

    Wofling, S.; Kantor, Y.

    1998-01-01

    The ground state of randomly charged polyampholytes (polymers with positively and negatively charged groups along their backbone) is conjectured to have a structure similar to a necklace, made of weakly charged parts of the chain compacting into globules, connected by highly charged, stretched 'strings'. We attempt to quantify the qualitative necklace model by suggesting a zeroth-order approximation in which the longest neutral segment of the polyampholyte forms a globule, while the remaining part forms a tail. Expanding this approximation, we suggest a specific necklace-type structure for the ground state of randomly charged polyampholytes, where all the neutral parts of the chain compact into globules: the longest neutral segment compacts into a globule; in the remaining part of the chain, the second longest neutral segment compacts into a globule, then the third, and so on. A random sequence of charges is equivalent to a random walk, and a neutral segment is equivalent to a loop inside the random walk. We use analytical and Monte Carlo methods to investigate the size distribution of loops in a one-dimensional random walk. We show that the length of the nth longest neutral segment in a sequence of N monomers (or equivalently, the nth longest loop in a random walk of N steps) is proportional to N/n^2, while the mean number of neutral segments increases as √N. The polyampholyte ground state within our model is found to have an average linear size proportional to N^(1/3) and an average surface area proportional to N^(2/3)
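The longest neutral segment of a charge sequence is the longest zero-net-charge stretch, i.e. the longest loop of the equivalent random walk, and can be found in linear time with prefix sums (two equal prefix values bracket a zero-sum segment). A small sketch of that single step (ours; the paper's N/n² scaling is a statistical result over random sequences, not computed here):

```python
def longest_neutral_segment(charges):
    """Length of the longest contiguous segment with zero net charge.
    Tracks the first index at which each prefix sum occurs; a repeated
    prefix value at index i means the stretch since its first occurrence
    sums to zero."""
    first_seen = {0: -1}
    best, total = 0, 0
    for i, q in enumerate(charges):
        total += q
        if total in first_seen:
            best = max(best, i - first_seen[total])
        else:
            first_seen[total] = i
    return best
```

Iterating this on the residual parts of the chain would reproduce the globule-by-globule construction described in the abstract.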

  1. Segmenting the Adult Education Market.

    Science.gov (United States)

    Aurand, Tim

    1994-01-01

    Describes market segmentation and how the principles of segmentation can be applied to the adult education market. Indicates that applying segmentation techniques to adult education programs results in programs that are educationally and financially satisfying and serve an appropriate population. (JOW)

  2. Market Segmentation for Information Services.

    Science.gov (United States)

    Halperin, Michael

    1981-01-01

    Discusses the advantages and limitations of market segmentation as strategy for the marketing of information services made available by nonprofit organizations, particularly libraries. Market segmentation is defined, a market grid for libraries is described, and the segmentation of information services is outlined. A 16-item reference list is…

  3. Segmentation of DTI based on tensorial morphological gradient

    Science.gov (United States)

    Rittner, Leticia; de Alencar Lotufo, Roberto

    2009-02-01

    This paper presents a segmentation technique for diffusion tensor imaging (DTI). The technique is based on a tensorial morphological gradient (TMG), defined as the maximum dissimilarity over the neighborhood. Once this gradient is computed, the tensorial segmentation problem becomes a scalar one, which can be solved by conventional techniques such as the watershed transform and thresholding. Similarity functions, namely the dot product, the tensorial dot product, the J-divergence and the Frobenius norm, were compared in order to understand their differences regarding the measurement of tensor dissimilarities. The study showed that the dot product and the tensorial dot product are inappropriate for computation of the TMG, while the Frobenius norm and the J-divergence are both capable of measuring tensor dissimilarities, despite the distortion of the Frobenius norm, since it is not an affine-invariant measure. In order to validate the TMG as a solution for DTI segmentation, its computation was performed using distinct similarity measures and structuring elements. TMG results were also compared to fractional anisotropy. Finally, synthetic and real DTI were used in the method validation. Experiments showed that the TMG enables the segmentation of DTI by watershed transform or by a simple choice of threshold. The strength of the proposed segmentation method is its simplicity and robustness, consequences of the TMG computation. It enables the use not only of well-known algorithms and tools from mathematical morphology, but also of any other segmentation method to segment DTI, since the TMG computation transforms tensorial images into scalar ones.
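Computing the TMG with the Frobenius norm amounts to taking, at each pixel, the largest tensor dissimilarity within the structuring element. The sketch below uses centre-vs-neighbour pairs over a 3×3 neighbourhood, which may differ from the paper's exact pairwise definition:

```python
import numpy as np

def tensorial_morphological_gradient(field):
    """Scalar TMG of a 2-D tensor field of shape (H, W, 3, 3): at each
    pixel, the maximum Frobenius-norm dissimilarity ||T_p - T_q||_F
    between the centre tensor and its 8-neighbours."""
    h, w = field.shape[:2]
    tmg = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di or dj) and 0 <= ni < h and 0 <= nj < w:
                        # np.linalg.norm on a matrix is the Frobenius norm.
                        d = np.linalg.norm(field[i, j] - field[ni, nj])
                        tmg[i, j] = max(tmg[i, j], d)
    return tmg
```

On a homogeneous field the TMG vanishes; across a boundary between two tensor populations it is positive, so a watershed or a single threshold on the scalar map recovers the regions.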

  4. Automatic liver volume segmentation and fibrosis classification

    Science.gov (United States)

    Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit

    2018-02-01

    In this work, we present an automatic method for liver segmentation and fibrosis classification in liver computed tomography (CT) portal-phase scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including volume segmentation, texture feature extraction, and SVM-based classification. The data contain portal-phase CT examinations from 80 patients, taken with different scanners. Each examination has a matching Fibroscan grade. The dataset was subdivided into two groups: the first group contains healthy cases and mild fibrosis; the second group contains moderate fibrosis, severe fibrosis and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation, and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, can have great potential value in the clinic.
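The Dice index reported for the segmentation has a one-line definition, 2|A ∩ B| / (|A| + |B|); a minimal sketch for binary masks:

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```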

  5. A toolbox for multiple sclerosis lesion segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Roura, Eloy; Oliver, Arnau; Valverde, Sergi; Llado, Xavier [University of Girona, Computer Vision and Robotics Group, Girona (Spain); Cabezas, Mariano; Pareto, Deborah; Rovira, Alex [Vall d' Hebron University Hospital, Magnetic Resonance Unit, Dept. of Radiology, Barcelona (Spain); Vilanova, Joan C. [Girona Magnetic Resonance Center, Girona (Spain); Ramio-Torrenta, Lluis [Dr. Josep Trueta University Hospital, Institut d' Investigacio Biomedica de Girona, Multiple Sclerosis and Neuroimmunology Unit, Girona (Spain)

    2015-10-15

    Lesion segmentation plays an important role in the diagnosis and follow-up of multiple sclerosis (MS). This task is very time-consuming and subject to intra- and inter-rater variability. In this paper, we present a new tool for automated MS lesion segmentation using T1w and fluid-attenuated inversion recovery (FLAIR) images. Our approach is based on two main steps, initial brain tissue segmentation according to the gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) performed in T1w images, followed by a second step where the lesions are segmented as outliers to the normal apparent GM brain tissue on the FLAIR image. The tool has been validated using data from more than 100 MS patients acquired with different scanners and at different magnetic field strengths. Quantitative evaluation provided a better performance in terms of precision while maintaining similar results on sensitivity and Dice similarity measures compared with those of other approaches. Our tool is implemented as a publicly available SPM8/12 extension that can be used by both the medical and research communities. (orig.)
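The second step above, segmenting lesions as hyperintense outliers to the normal-appearing GM on the FLAIR image, can be sketched as a simple threshold rule. This is a minimal sketch assuming a mean-plus-alpha-standard-deviations outlier criterion; the function name and the exact criterion are illustrative, not the toolbox's actual API.

```python
import numpy as np

def lesion_outliers(flair, gm_mask, alpha=3.0):
    """Flag voxels whose FLAIR intensity is an outlier relative to the
    normal-appearing GM distribution (mean + alpha * std threshold).
    `flair` is the FLAIR volume; `gm_mask` is a boolean GM segmentation
    obtained from the T1w tissue-segmentation step."""
    gm_vals = flair[gm_mask]
    threshold = gm_vals.mean() + alpha * gm_vals.std()
    return flair > threshold
```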

  6. A toolbox for multiple sclerosis lesion segmentation

    International Nuclear Information System (INIS)

    Roura, Eloy; Oliver, Arnau; Valverde, Sergi; Llado, Xavier; Cabezas, Mariano; Pareto, Deborah; Rovira, Alex; Vilanova, Joan C.; Ramio-Torrenta, Lluis

    2015-01-01

    Lesion segmentation plays an important role in the diagnosis and follow-up of multiple sclerosis (MS). This task is very time-consuming and subject to intra- and inter-rater variability. In this paper, we present a new tool for automated MS lesion segmentation using T1w and fluid-attenuated inversion recovery (FLAIR) images. Our approach is based on two main steps, initial brain tissue segmentation according to the gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) performed in T1w images, followed by a second step where the lesions are segmented as outliers to the normal apparent GM brain tissue on the FLAIR image. The tool has been validated using data from more than 100 MS patients acquired with different scanners and at different magnetic field strengths. Quantitative evaluation provided a better performance in terms of precision while maintaining similar results on sensitivity and Dice similarity measures compared with those of other approaches. Our tool is implemented as a publicly available SPM8/12 extension that can be used by both the medical and research communities. (orig.)

  7. Automatic segmentation of the glenohumeral cartilages from magnetic resonance images

    International Nuclear Information System (INIS)

    Neubert, A.; Yang, Z.; Engstrom, C.; Xia, Y.; Strudwick, M. W.; Chandra, S. S.; Crozier, S.; Fripp, J.

    2016-01-01

    Purpose: Magnetic resonance (MR) imaging plays a key role in investigating early degenerative disorders and traumatic injuries of the glenohumeral cartilages. Subtle morphometric and biochemical changes of potential relevance to clinical diagnosis, treatment planning, and evaluation can be assessed from measurements derived from in vivo MR segmentation of the cartilages. However, segmentation of the glenohumeral cartilages, using approaches spanning manual to automated methods, is technically challenging, due to their thin, curved structure and overlapping intensities of surrounding tissues. Automatic segmentation of the glenohumeral cartilages from MR imaging is not at the same level compared to the weight-bearing knee and hip joint cartilages despite the potential applications with respect to clinical investigation of shoulder disorders. In this work, the authors present a fully automated segmentation method for the glenohumeral cartilages using MR images of healthy shoulders. Methods: The method involves automated segmentation of the humerus and scapula bones using 3D active shape models, the extraction of the expected bone–cartilage interface, and cartilage segmentation using a graph-based method. The cartilage segmentation uses localization, patient specific tissue estimation, and a model of the cartilage thickness variation. The accuracy of this method was experimentally validated using a leave-one-out scheme on a database of MR images acquired from 44 asymptomatic subjects with a true fast imaging with steady state precession sequence on a 3 T scanner (Siemens Trio) using a dedicated shoulder coil. The automated results were compared to manual segmentations from two experts (an experienced radiographer and an experienced musculoskeletal anatomist) using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD) metrics. Results: Accurate and precise bone segmentations were achieved with mean DSC of 0.98 and 0.93 for the humeral head

  8. Automatic segmentation of the glenohumeral cartilages from magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Neubert, A., E-mail: ales.neubert@csiro.au [School of Information Technology and Electrical Engineering, University of Queensland, Brisbane 4072, Australia and The Australian E-Health Research Centre, CSIRO Health and Biosecurity, Brisbane 4029 (Australia); Yang, Z. [School of Information Technology and Electrical Engineering, University of Queensland, Brisbane 4072, Australia and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190 (China); Engstrom, C. [School of Human Movement Studies, University of Queensland, Brisbane 4072 (Australia); Xia, Y.; Strudwick, M. W.; Chandra, S. S.; Crozier, S. [School of Information Technology and Electrical Engineering, University of Queensland, Brisbane 4072 (Australia); Fripp, J. [The Australian E-Health Research Centre, CSIRO Health and Biosecurity, Brisbane, 4029 (Australia)

    2016-10-15

    Purpose: Magnetic resonance (MR) imaging plays a key role in investigating early degenerative disorders and traumatic injuries of the glenohumeral cartilages. Subtle morphometric and biochemical changes of potential relevance to clinical diagnosis, treatment planning, and evaluation can be assessed from measurements derived from in vivo MR segmentation of the cartilages. However, segmentation of the glenohumeral cartilages, using approaches spanning manual to automated methods, is technically challenging, due to their thin, curved structure and overlapping intensities of surrounding tissues. Automatic segmentation of the glenohumeral cartilages from MR imaging is not at the same level compared to the weight-bearing knee and hip joint cartilages despite the potential applications with respect to clinical investigation of shoulder disorders. In this work, the authors present a fully automated segmentation method for the glenohumeral cartilages using MR images of healthy shoulders. Methods: The method involves automated segmentation of the humerus and scapula bones using 3D active shape models, the extraction of the expected bone–cartilage interface, and cartilage segmentation using a graph-based method. The cartilage segmentation uses localization, patient specific tissue estimation, and a model of the cartilage thickness variation. The accuracy of this method was experimentally validated using a leave-one-out scheme on a database of MR images acquired from 44 asymptomatic subjects with a true fast imaging with steady state precession sequence on a 3 T scanner (Siemens Trio) using a dedicated shoulder coil. The automated results were compared to manual segmentations from two experts (an experienced radiographer and an experienced musculoskeletal anatomist) using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD) metrics. Results: Accurate and precise bone segmentations were achieved with mean DSC of 0.98 and 0.93 for the humeral head

  9. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3d information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.
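The intensity adjustment described above can be sketched by dividing out the Lambertian shading term, so that the adjusted values approximate per-surface albedo. This is a minimal sketch assuming a single distant light source and Lambertian reflectance; the function and parameter names are illustrative.

```python
import numpy as np

def reflectivity_estimate(intensity, surface_normals, light_dir, eps=1e-3):
    """Divide out the Lambertian shading term (cosine of the angle between
    the estimated surface normal and the light direction) so that
    histogram-based segmentation sees approximately constant values per
    constant-reflectivity surface.
    `surface_normals`: (H, W, 3) unit normals estimated from the scene;
    `light_dir`: unit vector toward the light source."""
    shading = np.clip(surface_normals @ light_dir, eps, None)
    return intensity / shading
```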

  10. Validation of ACE-FTS v2.2 measurements of HCl, HF, CCl3F and CCl2F2 using space-, balloon- and ground-based instrument observations

    Directory of Open Access Journals (Sweden)

    C. Servais

    2008-10-01

    Hydrogen chloride (HCl) and hydrogen fluoride (HF) are respectively the main chlorine and fluorine reservoirs in the Earth's stratosphere. Their buildup resulted from the intensive use of man-made halogenated source gases, in particular CFC-11 (CCl3F) and CFC-12 (CCl2F2), during the second half of the 20th century. It is important to continue monitoring the evolution of these source gases and reservoirs, in support of the Montreal Protocol and also indirectly of the Kyoto Protocol. The Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) is a space-based instrument that has been performing regular solar occultation measurements of over 30 atmospheric gases since early 2004. In this validation paper, the HCl, HF, CFC-11 and CFC-12 version 2.2 profile data products retrieved from ACE-FTS measurements are evaluated. Volume mixing ratio profiles have been compared to observations made from space by MLS and HALOE, and from stratospheric balloons by SPIRALE, FIRS-2 and Mark-IV. Partial columns derived from the ACE-FTS data were also compared to column measurements from ground-based Fourier transform instruments operated at 12 sites. ACE-FTS data recorded from March 2004 to August 2007 have been used for the comparisons. These data are representative of a variety of atmospheric and chemical situations, with sounded air masses extending from the winter vortex to summer sub-tropical conditions. Typically, the ACE-FTS products are available in the 10–50 km altitude range for HCl and HF, and in the 7–20 and 7–25 km ranges for CFC-11 and -12, respectively. For both reservoirs, comparison results indicate an agreement generally better than 5–10% above 20 km altitude, when accounting for the known offset affecting HALOE measurements of HCl and HF. Larger positive differences are however found for comparisons with single profiles from FIRS-2 and SPIRALE. For CFCs, the few coincident measurements available suggest that the differences

  11. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    The notion of a ‘best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain, because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and a numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  12. Malignant pleural mesothelioma segmentation for photodynamic therapy planning.

    Science.gov (United States)

    Brahim, Wael; Mestiri, Makram; Betrouni, Nacim; Hamrouni, Kamel

    2018-04-01

    Medical imaging modalities such as computed tomography (CT), combined with computer-aided diagnostic processing, have already become an important part of clinical routine, especially for pleural diseases. The segmentation of the thoracic cavity represents an extremely important task in medical imaging for different reasons. Multiple features can be extracted by analyzing the thoracic cavity space, and these features are signs of pleural diseases, including malignant pleural mesothelioma (MPM), which is the main focus of our research. This paper presents a method that detects the MPM in the thoracic cavity and plans the photodynamic therapy in the preoperative phase. This is achieved by using a texture analysis of the MPM region combined with a thoracic cavity segmentation method. The algorithm to segment the thoracic cavity consists of multiple stages. First, the rib cage structure is segmented using various image processing techniques. We used the segmented rib cage to detect feature points which represent the thoracic cavity boundaries. Next, the proposed method segments the structures of the inner thoracic cage and fits 2D closed curves to the detected pleural cavity features in each slice. The missing bone structures are interpolated using prior knowledge from manual segmentation performed by an expert. Next, the tumor region is segmented inside the thoracic cavity using a texture analysis approach. Finally, the contact surface between the tumor region and the thoracic cavity curves is reconstructed in order to plan the photodynamic therapy. Using the adjusted output of the thoracic cavity segmentation method and the MPM segmentation method, we evaluated the contact surface generated from these two steps by comparing it to the ground truth. For this evaluation, we used 10 CT scans with pathologically confirmed MPM at stages 1 and 2. We obtained a high similarity rate between the manually planned surface and our proposed method. The average value of the Jaccard index
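The evaluation above compares a generated contact surface to ground truth using the Jaccard index. As a sketch for binary masks (the helper name is illustrative):

```python
import numpy as np

def jaccard_index(pred, truth):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, truth).sum() / union
```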

  13. Electrocardiogram ST-Segment Morphology Delineation Method Using Orthogonal Transformations.

    Directory of Open Access Journals (Sweden)

    Miha Amon

    Differentiation between ischaemic and non-ischaemic transient ST segment events of long-term ambulatory electrocardiograms is a persisting weakness in present ischaemia detection systems. Traditional ST segment level measuring is not a sufficiently precise technique, due to the single point of measurement and the severe noise which is often present. We developed a robust, noise-resistant, orthogonal-transformation-based delineation method, which allows tracing the shape of transient ST segment morphology changes from the entire ST segment in terms of diagnostic and morphologic feature-vector time series, and also allows further analysis. For these purposes, we developed a new Legendre-Polynomials-based Transformation (LPT) of the ST segment. Its basis functions have shapes similar to the typical transient changes of ST segment morphology categories during myocardial ischaemia (level, slope and scooping), thus providing direct insight into the types of time-domain morphology changes through the LPT feature-vector space. We also generated new Karhunen–Loève Transformation (KLT) ST segment basis functions using a robust covariance matrix constructed from the ST segment pattern vectors derived from the Long Term ST Database (LTST DB). As for the delineation of significant transient ischaemic and non-ischaemic ST segment episodes, we present a study on the representation of transient ST segment morphology categories, and an evaluation study on the classification power of the KLT- and LPT-based feature vectors to classify between ischaemic and non-ischaemic ST segment episodes of the LTST DB. Classification accuracy using the KLT and LPT feature vectors was 90% and 82%, respectively, when using the k-Nearest Neighbors (k = 3) classifier and 10-fold cross-validation. New sets of feature-vector time series for both transformations were derived for the records of the LTST DB, which is freely available on the PhysioNet website, and were contributed to the LTST DB. The
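Projecting an ST segment onto a Legendre polynomial basis yields a small feature vector whose low-order coefficients track level, slope and scooping. A minimal sketch using NumPy's Legendre least-squares fit (the paper's LPT has its own basis construction; this only illustrates the projection idea):

```python
import numpy as np
from numpy.polynomial import legendre

def st_feature_vector(st_samples, order=2):
    """Legendre-coefficient feature vector for one ST segment.
    Order 0 tracks the ST level, order 1 the slope, order 2 the scooping."""
    x = np.linspace(-1.0, 1.0, len(st_samples))  # normalized time axis
    return legendre.legfit(x, st_samples, order)
```

For a purely linear ST trend the order-2 (scooping) coefficient is near zero, so the three coefficients separate the morphology categories directly.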

  14. Segmentation and Location Computation of Bin Objects

    Directory of Open Access Journals (Sweden)

    C.R. Hema

    2008-11-01

    In this paper we present a stereo-vision-based system for segmentation and location computation of partially occluded objects in bin-picking environments. Algorithms to segment partially occluded objects and to find the object location [midpoint: x, y and z coordinates] with respect to the bin area are proposed. The z coordinate is computed using stereo images and neural networks. The proposed algorithms are tested using two neural network architectures, namely Radial Basis Function nets and simple feedforward nets. The training results of the feedforward nets are found to be more suitable for the current application. The proposed stereo vision system is interfaced with an Adept SCARA robot to perform bin-picking operations. The vision system is found to be effective for partially occluded objects in the absence of albedo effects. The results are validated through real-time bin-picking experiments on the Adept robot.

  15. Optimally segmented magnetic structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bahl, Christian; Bjørk, Rasmus

    We present a semi-analytical algorithm for magnet design problems, which calculates the optimal way to subdivide a given design region into uniformly magnetized segments. The availability of powerful rare-earth magnetic materials such as Nd-Fe-B has broadened the range of applications of permanent magnets [1][2]. However, the powerful rare-earth magnets are generally expensive, so both the scientific and industrial communities have devoted a lot of effort into developing suitable design methods. Even so, many magnet optimization algorithms either are based on heuristic approaches [3]… …is not available. We will illustrate the results for magnet design problems from different areas, such as electric motors/generators (as the example in the picture), beam focusing for particle accelerators and magnetic refrigeration devices.

  16. Using multimodal information for the segmentation of fluorescent micrographs with application to virology and microbiology.

    Science.gov (United States)

    Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas

    2011-01-01

    In order to improve the reproducibility and objectivity of fluorescence-microscopy-based experiments, and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaptation is usually achieved by manual adjustment of the segmentation methods' parameters, which is time-consuming and challenging for biologists with no knowledge of image processing. To avoid this, parameters of the presented methods automatically adapt to user-generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the currently used watershed-transform-based segmentation routine is replaced by a fast-marching level-set-based segmentation routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that incorporation of multimodal information improves segmentation quality for the presented fluorescent datasets.
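Automatically adapting parameters to user-generated ground truth amounts to a search over candidate settings scored against the annotation. A minimal grid-search sketch, scoring by Dice overlap (illustrative names; the actual system also chooses among candidate segmentation methods, not just parameters):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    s = a.sum() + b.sum()
    return 1.0 if s == 0 else 2.0 * np.logical_and(a, b).sum() / s

def select_best_parameters(image, ground_truth, segment, param_grid):
    """Run `segment(image, **params)` for each candidate setting and keep
    the setting whose result best matches the user-drawn ground truth."""
    return max(param_grid,
               key=lambda p: dice(segment(image, **p), ground_truth))
```

Once selected on the annotated subset, the winning settings are applied unchanged to the remaining images.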

  17. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  18. OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI.

    Science.gov (United States)

    Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2013-01-01

    Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve, for clinically relevant false positive rates of 1% and below, of 0.59% (95% CI: [0.50%, 0.67%]) at the voxel level. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS out-performed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist 66% (95% CI: [52%, 78
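The voxel-level probability model above is logistic regression over multimodal intensities. A bare-bones gradient-descent sketch with hypothetical helpers (OASIS itself uses richer covariates, smoothed volumes and intensity normalization, none of which are reproduced here):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression. `X` is (n_voxels,
    n_modalities) of normalized intensities; `y` holds manual lesion
    labels (0/1). Returns weights including a bias term."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient of log-loss
    return w

def lesion_probability(X, w):
    """Voxel-level probability of lesion presence under the fitted model."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))
```

Thresholding these probabilities (or sweeping the threshold) yields the binary segmentations and ROC-style evaluation described in the abstract.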

  19. Figure-ground segregation modulates apparent motion.

    Science.gov (United States)

    Ramachandran, V S; Anstis, S

    1986-01-01

    We explored the relationship between figure-ground segmentation and apparent motion. Results suggest that: static elements in the surround can eliminate apparent motion of a cluster of dots in the centre, but only if the cluster and surround have similar "grain" or texture; outlines that define occluding surfaces are taken into account by the motion mechanism; the brain uses a hierarchy of precedence rules in attributing motion to different segments of the visual scene. Being designated as "figure" confers a high rank in this scheme of priorities.

  20. Phasing multi-segment undulators

    International Nuclear Information System (INIS)

    Chavanne, J.; Elleaume, P.; Vaerenbergh, P. Van

    1996-01-01

    An important issue in the manufacture of multi-segment undulators as a source of synchrotron radiation or as a free-electron laser (FEL) is the phasing between successive segments. The state of the art is briefly reviewed, after which a novel pure permanent magnet phasing section that is passive and does not require any current is presented. The phasing section allows the introduction of a 6 mm longitudinal gap between each segment, resulting in complete mechanical independence and reduced magnetic interaction between segments. The tolerance of the longitudinal positioning of one segment with respect to the next is found to be 2.8 times lower than that of conventional phasing. The spectrum at all gaps and useful harmonics is almost unchanged when compared with a single-segment undulator of the same total length. (au) 3 refs

  1. SU-E-J-208: Fast and Accurate Auto-Segmentation of Abdominal Organs at Risk for Online Adaptive Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, V; Wang, Y; Romero, A; Heijmen, B; Hoogeman, M [Erasmus MC Cancer Institute, Rotterdam (Netherlands); Myronenko, A; Jordan, P [Accuray Incorporated, Sunnyvale, United States. (United States)

    2014-06-01

    Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Auto-segmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing the local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: Dice coefficient (Dc) and Hausdorff distance (Hd). The proposed method was benchmarked against translation and rigid alignment based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation) with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs, with both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. Conclusion: Based on the achieved accuracy and computational time we conclude that the investigated auto-segmentation
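The two evaluation metrics used above can be sketched directly: Dice on binary masks, and the symmetric Hausdorff distance on contour point sets (helper names are illustrative).

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    s = a.sum() + b.sum()
    return 1.0 if s == 0 else 2.0 * np.logical_and(a, b).sum() / s

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (e.g. contour
    voxel coordinates): the larger of the two directed distances."""
    pts_a = np.asarray(pts_a, float)
    pts_b = np.asarray(pts_b, float)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice rewards volumetric overlap while Hausdorff penalizes the single worst contour deviation, which is why both are reported together.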

  2. SU-E-J-208: Fast and Accurate Auto-Segmentation of Abdominal Organs at Risk for Online Adaptive Radiotherapy

    International Nuclear Information System (INIS)

    Gupta, V; Wang, Y; Romero, A; Heijmen, B; Hoogeman, M; Myronenko, A; Jordan, P

    2014-01-01

    Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Auto-segmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing the local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: Dice coefficient (Dc) and Hausdorff distance (Hd). The proposed method was benchmarked against translation and rigid alignment based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation) with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs, with both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. Conclusion: Based on the achieved accuracy and computational time we conclude that the investigated auto-segmentation

  3. Segmented heat exchanger

    Science.gov (United States)

    Baldwin, Darryl Dean; Willi, Martin Leo; Fiveland, Scott Byron; Timmons, Kristine Ann

    2010-12-14

    A segmented heat exchanger system for transferring heat energy from an exhaust fluid to a working fluid. The heat exchanger system may include a first heat exchanger for receiving incoming working fluid and the exhaust fluid. The working fluid and exhaust fluid may travel through at least a portion of the first heat exchanger in a parallel flow configuration. In addition, the heat exchanger system may include a second heat exchanger for receiving working fluid from the first heat exchanger and exhaust fluid from a third heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the second heat exchanger in a counter flow configuration. Furthermore, the heat exchanger system may include a third heat exchanger for receiving working fluid from the second heat exchanger and exhaust fluid from the first heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the third heat exchanger in a parallel flow configuration.

  4. International EUREKA: Market Segment

    International Nuclear Information System (INIS)

    1982-03-01

    The purpose of the Market Segment of the EUREKA model is to simultaneously project uranium market prices, uranium supply and purchasing activities. The regional demands are extrinsic. However, annual forward contracting activities to meet these demands as well as inventory requirements are calculated. The annual price forecast is based on relatively short term, forward balances between available supply and desired purchases. The forecasted prices and extrapolated price trends determine decisions related to exploration and development, new production operations, and the operation of existing capacity. Purchasing and inventory requirements are also adjusted based on anticipated prices. The calculation proceeds one year at a time. Conditions calculated at the end of one year become the starting conditions for the calculation in the subsequent year

  5. Probabilistic retinal vessel segmentation

    Science.gov (United States)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  6. Segmented rail linear induction motor

    Science.gov (United States)

    Cowan, Jr., Maynard; Marder, Barry M.

    1996-01-01

    A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces.

  7. Segmentation-Driven Tomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT... such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction......

  8. Automated medical image segmentation techniques

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2010-01-01

    Full Text Available Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits and limitations of methods currently available for segmentation of medical images.

  9. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    Full Text Available This paper presents efficient and portable implementations of a useful image segmentation technique based on a faster variant of the conventional connected-components algorithm, which we call parallel components. In the modern world, many doctors need image segmentation as a service for various purposes, and they expect such a system to run fast and securely. Conventional segmentation algorithms, despite several ongoing research efforts, are often not fast enough. We therefore propose a cluster computing environment for parallel image segmentation to provide faster results. This paper presents a real-time implementation of distributed image segmentation on a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single-address-space, distributed-memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The image segmentation algorithm makes use of an efficient cluster process which uses a novel approach for parallel merging. Our experimental results are consistent with the theoretical analysis and show faster execution times for segmentation than the conventional method. Our test data consist of different CT scan images from a medical database. More efficient implementations of image segmentation will likely result in even faster execution times.

  10. Rational Variety Mapping for Contrast-Enhanced Nonlinear Unsupervised Segmentation of Multispectral Images of Unstained Specimen

    Science.gov (United States)

    Kopriva, Ivica; Hadžija, Mirko; Popović Hadžija, Marijana; Korolija, Marina; Cichocki, Andrzej

    2011-01-01

    A methodology is proposed for nonlinear contrast-enhanced unsupervised segmentation of multispectral (color) microscopy images of principally unstained specimens. The methodology exploits spectral diversity and spatial sparseness to find anatomical differences between materials (cells, nuclei, and background) present in the image. It consists of rth-order rational variety mapping (RVM) followed by matrix/tensor factorization. Sparseness constraint implies duality between nonlinear unsupervised segmentation and multiclass pattern assignment problems. Classes not linearly separable in the original input space become separable with high probability in the higher-dimensional mapped space. Hence, RVM mapping has two advantages: it takes implicitly into account nonlinearities present in the image (ie, they are not required to be known) and it increases spectral diversity (ie, contrast) between materials, due to increased dimensionality of the mapped space. This is expected to improve performance of systems for automated classification and analysis of microscopic histopathological images. The methodology was validated using RVM of the second and third orders of the experimental multispectral microscopy images of unstained sciatic nerve fibers (nervus ischiadicus) and of unstained white pulp in the spleen tissue, compared with a manually defined ground truth labeled by two trained pathophysiologists. The methodology can also be useful for additional contrast enhancement of images of stained specimens. PMID:21708116
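
The core of the mapping idea above can be sketched as follows: a multispectral pixel is lifted to all monomials up to a chosen order, which is the essence of an rth-order rational variety mapping. This is an illustrative sketch only (function name and parameters are assumptions); the paper's full pipeline additionally applies matrix/tensor factorization to the mapped data.

```python
from itertools import combinations_with_replacement

def rvm_features(pixel, order=2):
    """Lift a pixel (x1, ..., xn) to all monomials of degree <= order.

    Classes that are not linearly separable in the input space often
    become separable in this higher-dimensional mapped space.
    """
    feats = [1.0]  # degree-0 (constant) term
    for d in range(1, order + 1):
        for combo in combinations_with_replacement(pixel, d):
            prod = 1.0
            for v in combo:
                prod *= v
            feats.append(prod)
    return feats

# An RGB pixel (3 channels) lifted to order 2:
print(len(rvm_features([0.2, 0.5, 0.9])))  # 1 + 3 + 6 = 10 features
```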

  11. PSNet: prostate segmentation on MRI based on a convolutional neural network.

    Science.gov (United States)

    Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei

    2018-04-01

    Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
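
The Dice similarity coefficient used above to compare PSNet output against the manual ground truth is a standard overlap metric; a generic sketch (not code from the paper) is:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    `pred` and `truth` are equal-length flat sequences of 0/1 labels.
    DSC = 2*|A ∩ B| / (|A| + |B|), where A and B are the foreground sets.
    """
    if len(pred) != len(truth):
        raise ValueError("masks must have the same number of pixels")
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: 8-pixel masks agreeing on 3 of 4 foreground pixels each.
pred  = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 1, 1, 0, 0, 0, 0]
print(dice_coefficient(pred, truth))  # 2*3 / (4+4) = 0.75
```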

  12. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Seoungjae Cho

    2014-01-01

    Full Text Available A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
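
A minimal sketch of the voxel quantization and lowermost-heightmap steps described above. The voxel size, the height-threshold criterion, and all names are illustrative assumptions; the paper's actual ground decision is based on counting voxels in voxel groups and minimizing neighbor comparisons.

```python
def lowermost_heightmap(points, voxel=0.5):
    """Quantize a 3D point cloud into voxels and keep, for each (x, y)
    column, only the lowest occupied voxel index (a 'lowermost heightmap'
    reducing non-overlapping voxels to two dimensions)."""
    columns = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel))
        zi = int(z // voxel)
        if key not in columns or zi < columns[key]:
            columns[key] = zi
    return columns

def ground_columns(heightmap, max_height_index=0):
    """Label a column as ground if its lowest voxel sits at or below a
    height threshold (a hypothetical criterion standing in for the
    paper's voxel-group counting)."""
    return {k for k, zi in heightmap.items() if zi <= max_height_index}

# Toy cloud: flat ground near z = 0 plus one obstacle at z = 2.
cloud = [(0.1, 0.1, 0.0), (0.6, 0.1, 0.1), (0.1, 0.6, 0.05), (0.6, 0.6, 2.0)]
hm = lowermost_heightmap(cloud)
print(sorted(ground_columns(hm)))  # [(0, 0), (0, 1), (1, 0)]
```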

  13. Automated 3D closed surface segmentation: application to vertebral body segmentation in CT images.

    Science.gov (United States)

    Liu, Shuang; Xie, Yiting; Reeves, Anthony P

    2016-05-01

    A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to the cortical surface segmentation of 460 vertebral bodies on 46 low-dose chest CT images, which can be potentially used for automated bone mineral density measurement and compression fracture detection. The target surface is realized by a closed triangular mesh, which thereby guarantees the enclosure. The surface vertices of the triangular mesh representation are constrained along radial trajectories that are uniformly distributed in 3D angle space. The segmentation is accomplished by determining for each radial trajectory the location of its intersection with the target surface. The surface is first initialized based on an input high confidence boundary image and then resolved progressively based on a dynamic attraction map in an order of decreasing degree of evidence regarding the target surface location. For the visual evaluation, the algorithm achieved acceptable segmentation for 99.35 % vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved overall mean Dice coefficient of 0.939 (with max [Formula: see text] 0.957, min [Formula: see text] 0.906 and standard deviation [Formula: see text] 0.011) using manual annotations as the ground truth. Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface with computation complexity and runtime that are linearly constrained by the total number of vertices of the triangular mesh representation.

  14. Statistical shape model with random walks for inner ear segmentation

    DEFF Research Database (Denmark)

    Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma

    2016-01-01

    is required. We propose a new framework for segmentation of micro-CT cochlear images using random walks combined with a statistical shape model (SSM). The SSM allows us to constrain the less contrasted areas and ensures valid inner ear shape outputs. Additionally, a topology preservation method is proposed...

  15. Ground water '89

    International Nuclear Information System (INIS)

    1989-01-01

    The proceedings of the 5th biennial symposium of the Ground Water Division of the Geological Society of South Africa are presented. The theme of the symposium was ground water and mining. Papers were presented on the following topics: ground water resources; ground water contamination; chemical analyses of ground water and mining and its influence on ground water. Separate abstracts were prepared for 5 of the papers presented. The remaining papers were considered outside the subject scope of INIS

  16. Volume measurements of individual muscles in human quadriceps femoris using atlas-based segmentation approaches.

    Science.gov (United States)

    Le Troter, Arnaud; Fouré, Alexandre; Guye, Maxime; Confort-Gouny, Sylviane; Mattei, Jean-Pierre; Gondin, Julien; Salort-Campana, Emmanuelle; Bendahan, David

    2016-04-01

    Atlas-based segmentation is a powerful method for automatic structural segmentation of several sub-structures in many organs. However, such an approach has rarely been used in the context of muscle segmentation, and so far no study has assessed such a method for the automatic delineation of individual muscles of the quadriceps femoris (QF). In the present study, we have evaluated a fully automated multi-atlas method and a semi-automated single-atlas method for the segmentation and volume quantification of the four muscles of the QF and for the QF as a whole. The study was conducted in 32 young healthy males, using high-resolution magnetic resonance images (MRI) of the thigh. The multi-atlas-based segmentation method was conducted in 25 subjects. Different non-linear registration approaches based on free-form deformable (FFD) and symmetric diffeomorphic normalization algorithms (SyN) were assessed. Optimal parameters of two fusion methods, i.e., STAPLE and STEPS, were determined on the basis of the highest Dice similarity index (DSI) considering manual segmentation (MSeg) as the ground truth. Validation and reproducibility of this pipeline were determined using another MRI dataset recorded in seven healthy male subjects on the basis of additional metrics such as the muscle volume similarity values, intraclass coefficient, and coefficient of variation. Both non-linear registration methods (FFD and SyN) were also evaluated as part of a single-atlas strategy in order to assess longitudinal muscle volume measurements. The multi- and the single-atlas approaches were compared for the segmentation and the volume quantification of the four muscles of the QF and for the QF as a whole. Considering each muscle of the QF, the DSI of the multi-atlas-based approach was high (0.87 ± 0.11) and the best results were obtained with the combination of two deformation fields resulting from the SyN registration method and the STEPS fusion algorithm. The optimal variables for FFD

  17. Region segmentation along image sequence

    International Nuclear Information System (INIS)

    Monchal, L.; Aubry, P.

    1995-01-01

    A method to extract regions in sequences of images is proposed. Regions are not matched from one image to the following one. Instead, the result of a region segmentation is used as an initialization to segment the following image and to track the region along the sequence. The image sequence is exploited as a spatio-temporal event. (authors). 12 refs., 8 figs

  18. Market segmentation using perceived constraints

    Science.gov (United States)

    Jinhee Jun; Gerard Kyle; Andrew Mowen

    2008-01-01

    We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...

  19. Market Segmentation: An Instructional Module.

    Science.gov (United States)

    Wright, Peter H.

    A concept-based introduction to market segmentation is provided in this instructional module for undergraduate and graduate transportation-related courses. The material can be used in many disciplines including engineering, business, marketing, and technology. The concept of market segmentation is primarily a transportation planning technique by…

  20. IFRS 8 – OPERATING SEGMENTS

    Directory of Open Access Journals (Sweden)

    BOCHIS LEONICA

    2009-05-01

    Full Text Available Segment reporting in accordance with IFRS 8 will be mandatory for annual financial statements covering periods beginning on or after 1 January 2009. The standard replaces IAS 14, Segment Reporting, from that date. The objective of IFRS 8 is to require

  1. Reduplication Facilitates Early Word Segmentation

    Science.gov (United States)

    Ota, Mitsuhiko; Skarabela, Barbora

    2018-01-01

    This study explores the possibility that early word segmentation is aided by infants' tendency to segment words with repeated syllables ("reduplication"). Twenty-four nine-month-olds were familiarized with passages containing one novel reduplicated word and one novel non-reduplicated word. Their central fixation times in response to…

  2. The Importance of Marketing Segmentation

    Science.gov (United States)

    Martin, Gillian

    2011-01-01

    The rationale behind marketing segmentation is to allow businesses to focus on their consumers' behaviors and purchasing patterns. If done effectively, marketing segmentation allows an organization to achieve its highest return on investment (ROI) in turn for its marketing and sales expenses. If an organization markets its products or services to…

  3. Model-based studies into ground water movement, with water density depending on salt content. Case studies and model validation with respect to the long-term safety of radwaste repositories. Final report

    International Nuclear Information System (INIS)

    Schelkes, K.

    1995-12-01

    Near-to-reality studies into ground water movement in the environment of planned radwaste repositories have to take into account that the flow conditions are influenced by the water density which in turn depends on the salt content. Based on results from earlier studies, computer programs were established that allow computation and modelling of ground water movement in salt water/fresh water systems, and the programs were tested and improved according to progress of the studies performed under the INTRAVAL international project. The computed models of ground water movement in the region of the Gorlebener Rinne showed for strongly simplified model profiles that the developing salinity distribution varies very sensitively in response to the applied model geometry, initial input data for salinity distribution, time frame of the model, and size of the transversal dispersion length. The WIPP 2 INTRAVAL experiment likewise studied a large-area ground water movement system influenced by salt water. Based on the concept of a hydraulically closed, regional ground water system (basin model), a sectional profile was worked out covering all relevant layers of the cap rock above the salt formation planned to serve as a repository. The model data derived to describe the salt water/fresh water movements in this profile resulted in essential enlargements and modifications of the ROCKFLOW computer program applied, (relating to input data for dispersion modelling, particle-tracker, computer graphics interface), and yielded important information for the modelling of such systems (relating to initial pressure data at the upper margin, network enhancement for important concentration boundary conditions, or treatment of permeability contrasts). (orig.) [de

  4. State-of-the-Art Methods for Brain Tissue Segmentation: A Review.

    Science.gov (United States)

    Dora, Lingraj; Agrawal, Sanjay; Panda, Rutuparna; Abraham, Ajith

    2017-01-01

    Brain tissue segmentation is one of the most sought-after research areas in medical image processing. It provides detailed quantitative brain analysis for accurate disease diagnosis, detection, and classification of abnormalities. It plays an essential role in discriminating healthy tissues from lesion tissues. Therefore, accurate disease diagnosis and treatment planning depend largely on the performance of the segmentation method used. In this review, we have studied the recent advances in brain tissue segmentation methods and their state of the art in neuroscience research. The review also highlights the major challenges faced during tissue segmentation of the brain. An effective comparison is made among state-of-the-art brain tissue segmentation methods. Moreover, a study of some of the validation measures used to evaluate different segmentation methods is also discussed. The brain tissue segmentation methodologies and experiments presented in this review are encouraging enough to attract researchers working in this field.

  5. Towards Autonomous Agriculture: Automatic Ground Detection Using Trinocular Stereovision

    Directory of Open Access Journals (Sweden)

    Annalisa Milella

    2012-09-01

    Full Text Available Autonomous driving is a challenging problem, particularly when the domain is unstructured, as in an outdoor agricultural setting. Thus, advanced perception systems are primarily required to sense and understand the surrounding environment recognizing artificial and natural structures, topology, vegetation and paths. In this paper, a self-learning framework is proposed to automatically train a ground classifier for scene interpretation and autonomous navigation based on multi-baseline stereovision. The use of rich 3D data is emphasized where the sensor output includes range and color information of the surrounding environment. Two distinct classifiers are presented, one based on geometric data that can detect the broad class of ground and one based on color data that can further segment ground into subclasses. The geometry-based classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate geometric appearance of 3D stereo-generated data with class labels. Then, it makes predictions based on past observations. It serves as well to provide training labels to the color-based classifier. Once trained, the color-based classifier is able to recognize similar terrain classes in stereo imagery. The system is continuously updated online using the latest stereo readings, thus making it feasible for long range and long duration navigation, over changing environments. Experimental results, obtained with a tractor test platform operating in a rural environment, are presented to validate this approach, showing an average classification precision and recall of 91.0% and 77.3%, respectively.
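
The average classification precision and recall reported above follow the standard definitions, sketched here for a binary "ground" label (generic metric code, not the authors' implementation):

```python
def precision_recall(predicted, actual):
    """Precision and recall for a binary ground/non-ground labeling.

    `predicted` and `actual` are equal-length sequences of booleans
    (True = ground). Precision = TP / (TP + FP); recall = TP / (TP + FN).
    """
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy labels: 3 true positives, 1 false positive, 1 false negative.
pred   = [True, True, True, True, False]
actual = [True, True, True, False, True]
print(precision_recall(pred, actual))  # (0.75, 0.75)
```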

  6. Segmental vitiligo with segmental morphea: An autoimmune link?

    Directory of Open Access Journals (Sweden)

    Pravesh Yadav

    2014-01-01

    Full Text Available An 18-year-old girl with segmental vitiligo involving the left side of the trunk and left upper limb with segmental morphea involving the right side of the trunk and right upper limb without any deeper involvement is illustrated. There was no history of preceding drug intake, vaccination, trauma, radiation therapy, infection, or hormonal therapy. Family history of stable vitiligo in her brother and a history of type II diabetes mellitus in the father were elicited. Screening for autoimmune diseases and antithyroid antibody was negative. An autoimmune link explaining the co-occurrence has been proposed. Cutaneous mosaicism could explain the presence of both the pathologies in a segmental distribution.

  7. TH-CD-202-05: DECT Based Tissue Segmentation as Input to Monte Carlo Simulations for Proton Treatment Verification Using PET Imaging

    International Nuclear Information System (INIS)

    Berndt, B; Wuerl, M; Dedes, G; Landry, G; Parodi, K; Tessonnier, T; Schwarz, F; Kamp, F; Thieke, C; Belka, C; Reiser, M; Sommer, W; Bauer, J; Verhaegen, F

    2016-01-01

    Purpose: To improve agreement of predicted and measured positron emitter yields in patients, after proton irradiation for PET-based treatment verification, using a novel dual energy CT (DECT) tissue segmentation approach, overcoming known deficiencies from single energy CT (SECT). Methods: DECT head scans of 5 trauma patients were segmented and compared to existing decomposition methods with a first focus on the brain. For validation purposes, three brain equivalent solutions [water, white matter (WM) and grey matter (GM) – equivalent with respect to their reference carbon and oxygen contents and CT numbers at 90kVp and 150kVp] were prepared from water, ethanol, sucrose and salt. The activities of all brain solutions, measured during a PET scan after uniform proton irradiation, were compared to Monte Carlo simulations. Simulation inputs were various solution compositions obtained from different segmentation approaches from DECT, SECT scans, and known reference composition. Virtual GM solution salt concentration corrections were applied based on DECT measurements of solutions with varying salt concentration. Results: The novel tissue segmentation showed qualitative improvements in %C for patient brain scans (ground truth unavailable). The activity simulations based on reference solution compositions agree with the measurement within 3–5% (4–8Bq/ml). These reference simulations showed an absolute activity difference between WM (20%C) and GM (10%C) to H2O (0%C) of 43 Bq/ml and 22 Bq/ml, respectively. Activity differences between reference simulations and segmented ones varied from −6 to 1 Bq/ml for DECT and −79 to 8 Bq/ml for SECT. Conclusion: Compared to the conventionally used SECT segmentation, the DECT based segmentation indicates a qualitative and quantitative improvement. In controlled solutions, a MC input based on DECT segmentation leads to better agreement with the reference. Future work will address the anticipated improvement of quantification

  8. TH-CD-202-05: DECT Based Tissue Segmentation as Input to Monte Carlo Simulations for Proton Treatment Verification Using PET Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Berndt, B; Wuerl, M; Dedes, G; Landry, G; Parodi, K [Ludwig-Maximilians-Universitaet Muenchen, Garching, DE (Germany); Tessonnier, T [Ludwig-Maximilians-Universitaet Muenchen, Garching, DE (Germany); Universitaetsklinikum Heidelberg, Heidelberg, DE (Germany); Schwarz, F; Kamp, F; Thieke, C; Belka, C; Reiser, M; Sommer, W [LMU Munich, Munich, DE (Germany); Bauer, J [Universitaetsklinikum Heidelberg, Heidelberg, DE (Germany); Heidelberg Ion-Beam Therapy Center, Heidelberg, DE (Germany); Verhaegen, F [Maastro Clinic, Maastricht (Netherlands)

    2016-06-15

    Purpose: To improve agreement of predicted and measured positron emitter yields in patients, after proton irradiation for PET-based treatment verification, using a novel dual energy CT (DECT) tissue segmentation approach, overcoming known deficiencies from single energy CT (SECT). Methods: DECT head scans of 5 trauma patients were segmented and compared to existing decomposition methods with a first focus on the brain. For validation purposes, three brain equivalent solutions [water, white matter (WM) and grey matter (GM) – equivalent with respect to their reference carbon and oxygen contents and CT numbers at 90kVp and 150kVp] were prepared from water, ethanol, sucrose and salt. The activities of all brain solutions, measured during a PET scan after uniform proton irradiation, were compared to Monte Carlo simulations. Simulation inputs were various solution compositions obtained from different segmentation approaches from DECT, SECT scans, and known reference composition. Virtual GM solution salt concentration corrections were applied based on DECT measurements of solutions with varying salt concentration. Results: The novel tissue segmentation showed qualitative improvements in %C for patient brain scans (ground truth unavailable). The activity simulations based on reference solution compositions agree with the measurement within 3–5% (4–8Bq/ml). These reference simulations showed an absolute activity difference between WM (20%C) and GM (10%C) to H2O (0%C) of 43 Bq/ml and 22 Bq/ml, respectively. Activity differences between reference simulations and segmented ones varied from −6 to 1 Bq/ml for DECT and −79 to 8 Bq/ml for SECT. Conclusion: Compared to the conventionally used SECT segmentation, the DECT based segmentation indicates a qualitative and quantitative improvement. In controlled solutions, a MC input based on DECT segmentation leads to better agreement with the reference. Future work will address the anticipated improvement of quantification

  9. Unsupervised Tattoo Segmentation Combining Bottom-Up and Top-Down Cues

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Josef D [ORNL

    2011-01-01

    Tattoo segmentation is challenging due to the complexity and large variance in tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin and then distinguish tattoo from the other skin via top-down prior in the image itself. Tattoo segmentation with unknown number of clusters is transferred to a figure-ground segmentation. We have applied our segmentation algorithm on a tattoo dataset and the results have shown that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  10. High-dynamic-range imaging for cloud segmentation

    Science.gov (United States)

    Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan

    2018-04-01

    Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
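
The multi-exposure idea above can be illustrated with a deliberately simplified radiance estimate followed by thresholding. This is a toy stand-in, not HDRCloudSeg: the exposure values, validity range, and threshold are assumptions, and real HDR pipelines also recover the camera response curve.

```python
def fuse_exposures(exposures, exposure_times):
    """Naive radiance-map estimate from multiple exposures of one scene.

    Each exposure is a list of pixel intensities in [0, 255]; dividing by
    exposure time and averaging only the well-exposed samples recovers an
    approximate radiance even where single shots clip.
    """
    n_pixels = len(exposures[0])
    radiance = []
    for i in range(n_pixels):
        samples = [img[i] / t for img, t in zip(exposures, exposure_times)
                   if 5 <= img[i] <= 250]  # drop under/over-exposed pixels
        radiance.append(sum(samples) / len(samples) if samples else 0.0)
    return radiance

def segment_clouds(radiance, threshold):
    """Binary cloud mask by thresholding the radiance map (illustrative)."""
    return [1 if r > threshold else 0 for r in radiance]

short = [10, 200, 40, 2]    # short exposure (t = 1): dark horizon clips low
long_ = [100, 255, 255, 20] # long exposure (t = 10): bright region saturates
radiance = fuse_exposures([short, long_], [1, 10])
print(segment_clouds(radiance, 15.0))  # [0, 1, 1, 0]
```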

  11. Foreground-background segmentation and attention: a change blindness study.

    Science.gov (United States)

    Mazza, Veronica; Turatto, Massimo; Umiltà, Carlo

    2005-01-01

    One of the most debated questions in visual attention research is what factors affect the deployment of attention in the visual scene? Segmentation processes are influential factors, providing candidate objects for further attentional selection, and the relevant literature has concentrated on how figure-ground segmentation mechanisms influence visual attention. However, another crucial process, namely foreground-background segmentation, seems to have been neglected. By using a change blindness paradigm, we explored whether attention is preferentially allocated to the foreground elements or to the background ones. The results indicated that unless attention was voluntarily deployed to the background, large changes in the color of its elements remained unnoticed. In contrast, minor changes in the foreground elements were promptly reported. Differences in change blindness between the two regions of the display indicate that attention is, by default, biased toward the foreground elements. This also supports the phenomenal observations made by Gestaltists, who demonstrated the greater salience of the foreground than the background.

  12. Robust Object Segmentation Using a Multi-Layer Laser Scanner

    Science.gov (United States)

    Kim, Beomseong; Choi, Baehoon; Yoo, Minkyun; Kim, Hyunju; Kim, Euntai

    2014-01-01

    The major problem in an advanced driver assistance system (ADAS) is the proper use of sensor measurements and recognition of the surrounding environment. To this end, there are several types of sensors to consider, one of which is the laser scanner. In this paper, we propose a method to segment the measurement of the surrounding environment as obtained by a multi-layer laser scanner. In the segmentation, a full set of measurements is decomposed into several segments, each representing a single object. Sometimes a ghost is detected due to the ground or fog, and the ghost has to be eliminated to ensure the stability of the system. The proposed method is implemented on a real vehicle, and its performance is tested in a real-world environment. The experiments show that the proposed method demonstrates good performance in many real-life situations. PMID:25356645
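
    As a rough illustration of decomposing a scan into per-object segments, the sketch below (our own distance-breakpoint baseline, not the paper's multi-layer method; units assumed to be meters) splits an ordered point sequence at large gaps:

```python
import numpy as np

def segment_scan(points, break_dist=0.5):
    """Split an ordered sequence of 2D laser-scan points into segments
    wherever the gap between consecutive points exceeds break_dist.
    Returns a list of point arrays, one per putative object."""
    points = np.asarray(points, dtype=float)
    gaps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    breaks = np.where(gaps > break_dist)[0] + 1
    return np.split(points, breaks)

# Two objects with ~0.1 m point spacing, separated by a 2.1 m gap.
obj_a = np.column_stack([np.arange(10) * 0.1, np.zeros(10)])
obj_b = np.column_stack([3.0 + np.arange(10) * 0.1, np.zeros(10)])
segments = segment_scan(np.vstack([obj_a, obj_b]))  # two 10-point segments
```

    Ghost suppression (ground or fog returns) would then be a filtering step applied to the resulting segments, which this sketch omits.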

  13. Statistical validation of individual fibre segmentation from tomograms and microscopy

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Dahl, Vedrana Andersen; Conradsen, Knut

    2018-01-01

    at different resolutions to optical microscopy (OM) and scanning electron microscopy (SEM), where we characterise fibres by their diameters and positions. In addition to comparing individual fibre diameters, we also model their spatial distribution, and compare the obtained model parameters. Our study shows...

  14. Using Predictability for Lexical Segmentation.

    Science.gov (United States)

    Çöltekin, Çağrı

    2017-09-01

    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
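
    A toy version of the predictability cue can be sketched as follows (our own minimal transitional-probability segmenter; the paper's incremental model is considerably richer):

```python
from collections import defaultdict

def transition_probs(stream):
    """Estimate forward transitional probabilities P(next | current)
    between consecutive sub-lexical units (here, syllables)."""
    pair_counts = defaultdict(int)
    unit_counts = defaultdict(int)
    for a, b in zip(stream, stream[1:]):
        pair_counts[(a, b)] += 1
        unit_counts[a] += 1
    return {pair: n / unit_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(stream, threshold):
    """Insert a word boundary wherever predictability drops below threshold."""
    tp = transition_probs(stream)
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tp[(a, b)] < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words

# Toy stream built from three two-syllable "words": ba-by, go-ing, ti-ger.
stream = ["ba", "by", "go", "ing", "ba", "by", "ti", "ger",
          "go", "ing", "ti", "ger", "ba", "by"]
# Within-word TP is 1.0 here, between-word TP is 0.5, so 0.75 separates them.
words = segment(stream, threshold=0.75)
```

    Boundaries are hypothesized where the next unit is poorly predicted by the current one, so within-word transitions stay unbroken.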

  15. The use of zeolites to generate PET phantoms for the validation of quantification strategies in oncology

    Energy Technology Data Exchange (ETDEWEB)

    Zito, Felicia; De Bernardi, Elisabetta; Soffientini, Chiara; Canzi, Cristina; Casati, Rosangela; Gerundini, Paolo; Baselli, Giuseppe [Nuclear Medicine Department, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, 20122 Milan (Italy); Bioengineering Department, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milan (Italy) and Tecnomed Foundation, University of Milano-Bicocca, via Pergolesi 33, 20900 Monza (Italy); Bioengineering Department, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milan (Italy); Nuclear Medicine Department, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, 20122 Milan (Italy); Bioengineering Department, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milan (Italy)

    2012-09-15

    Purpose: In recent years, segmentation algorithms and activity quantification methods have been proposed for oncological ¹⁸F-fluorodeoxyglucose (FDG) PET. A full assessment of these algorithms, necessary for a clinical transfer, requires a validation on data sets provided with a reliable ground truth as to the imaged activity distribution, which must be as realistic as possible. The aim of this work is to propose a strategy to simulate lesions of uniform uptake and irregular shape in an anthropomorphic phantom, with the possibility to easily obtain a ground truth as to lesion activity and borders. Methods: Lesions were simulated with samples of clinoptilolite, a family of natural zeolites of irregular shape, able to absorb aqueous solutions of ¹⁸F-FDG, available in a wide size range, and nontoxic. Zeolites were soaked in solutions of ¹⁸F-FDG for increasing times up to 120 min and their absorptive properties were characterized as a function of soaking duration, solution concentration, and zeolite dry weight. Saturated zeolites were wrapped in Parafilm, positioned inside an Alderson thorax-abdomen phantom and imaged with a PET-CT scanner. The ground truth for the activity distribution of each zeolite was obtained by segmenting high-resolution finely aligned CT images, on the basis of independently obtained volume measurements. The fine alignment between CT and PET was validated by comparing the CT-derived ground truth to a set of zeolites' PET threshold segmentations in terms of Dice index and volume error. Results: The soaking time necessary to achieve saturation increases with zeolite dry weight, with a maximum of about 90 min for the largest sample. At saturation, a linear dependence of the uptake normalized to the solution concentration on zeolite dry weight (R² = 0.988), as well as a uniform distribution of the activity over the entire zeolite volume from PET imaging, were demonstrated. These findings indicate that the ¹⁸F

  16. The use of zeolites to generate PET phantoms for the validation of quantification strategies in oncology

    International Nuclear Information System (INIS)

    Zito, Felicia; De Bernardi, Elisabetta; Soffientini, Chiara; Canzi, Cristina; Casati, Rosangela; Gerundini, Paolo; Baselli, Giuseppe

    2012-01-01

    Purpose: In recent years, segmentation algorithms and activity quantification methods have been proposed for oncological ¹⁸F-fluorodeoxyglucose (FDG) PET. A full assessment of these algorithms, necessary for a clinical transfer, requires a validation on data sets provided with a reliable ground truth as to the imaged activity distribution, which must be as realistic as possible. The aim of this work is to propose a strategy to simulate lesions of uniform uptake and irregular shape in an anthropomorphic phantom, with the possibility to easily obtain a ground truth as to lesion activity and borders. Methods: Lesions were simulated with samples of clinoptilolite, a family of natural zeolites of irregular shape, able to absorb aqueous solutions of ¹⁸F-FDG, available in a wide size range, and nontoxic. Zeolites were soaked in solutions of ¹⁸F-FDG for increasing times up to 120 min and their absorptive properties were characterized as a function of soaking duration, solution concentration, and zeolite dry weight. Saturated zeolites were wrapped in Parafilm, positioned inside an Alderson thorax–abdomen phantom and imaged with a PET–CT scanner. The ground truth for the activity distribution of each zeolite was obtained by segmenting high-resolution finely aligned CT images, on the basis of independently obtained volume measurements. The fine alignment between CT and PET was validated by comparing the CT-derived ground truth to a set of zeolites’ PET threshold segmentations in terms of Dice index and volume error. Results: The soaking time necessary to achieve saturation increases with zeolite dry weight, with a maximum of about 90 min for the largest sample. At saturation, a linear dependence of the uptake normalized to the solution concentration on zeolite dry weight (R² = 0.988), as well as a uniform distribution of the activity over the entire zeolite volume from PET imaging, were demonstrated. These findings indicate that the ¹⁸F-FDG solution is able to

  17. Fluorescence Image Segmentation by using Digitally Reconstructed Fluorescence Images

    OpenAIRE

    Blumer, Clemens; Vivien, Cyprien; Oertner, Thomas G; Vetter, Thomas

    2011-01-01

    In biological experiments fluorescence imaging is used to image living and stimulated neurons. But the analysis of fluorescence images is a difficult task. It is not possible to conclude the shape of an object from fluorescence images alone. Therefore, it is not feasible to obtain good manually segmented data, nor ground truth data, from fluorescence images. Supervised learning approaches are not possible without training data. To overcome these issues we propose to synthesize fluorescence images and call...

  18. The Hierarchy of Segment Reports

    Directory of Open Access Journals (Sweden)

    Danilo Dorović

    2015-05-01

    Full Text Available The article presents an attempt to find the connection between reports created for managers responsible for different business segments. To this end, a hierarchy of business reporting segments is proposed. This can lead to a better understanding of expenses that fall under the common responsibility of more than one manager, since such expenses should appear in more than one report. A cost structure defined per the business segment hierarchy can thus be established, yielding a new, unusual but relevant cost structure for management. Both could potentially bring new information benefits for management in the context of profit reporting.

  19. Segmental dilatation of the ileum

    Directory of Open Access Journals (Sweden)

    Tune-Yie Shih

    2017-01-01

    Full Text Available A 2-year-old boy was sent to the emergency department with the chief problem of abdominal pain for 1 day. He had just been discharged from the pediatric ward with the diagnosis of mycoplasmal pneumonia and paralytic ileus. After initial examinations and radiographic investigations, the impression was midgut volvulus. An emergency laparotomy was performed. Segmental dilatation of the ileum with volvulus was found. The operative procedure was resection of the dilated ileal segment with anastomosis. The postoperative recovery was uneventful. This unique abnormality of the gastrointestinal tract, segmental dilatation of the ileum, is described in detail and the literature is reviewed.

  20. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. Also, we have developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We will report on these computational studies and their experimental verification.
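
    The idea of accounting for correlated segments can be illustrated with a toy linear unfolding (the tridiagonal response matrix and leakage fraction are assumptions of ours, not one of the six methods compared in the paper):

```python
import numpy as np

def build_response(n_segments, leakage=0.2):
    """Tridiagonal response matrix: each segment's count rate picks up a
    fraction `leakage` of the activity in the segments just above/below
    (a crude stand-in for a collimator's imperfect field of view)."""
    R = np.eye(n_segments)
    for i in range(n_segments - 1):
        R[i, i + 1] = leakage
        R[i + 1, i] = leakage
    return R

true_activity = np.array([0.0, 5.0, 1.0, 0.0, 3.0])
R = build_response(5)
measured = R @ true_activity              # correlated segment measurements
naive = measured                          # treating segments as independent
corrected = np.linalg.solve(R, measured)  # unfolding the correlations
```

    Solving the linear system recovers the per-segment activities that the naive independent-segment reading distorts; end effects would enter as extra rows for segments scanned above and below the sample.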

  1. Ground water and energy

    Energy Technology Data Exchange (ETDEWEB)

    1980-11-01

    This national workshop on ground water and energy was conceived by the US Department of Energy's Office of Environmental Assessments. Generally, OEA needed to know what data are available on ground water, what information is still needed, and how DOE can best utilize what has already been learned. The workshop focussed on three areas: (1) ground water supply; (2) conflicts and barriers to ground water use; and (3) alternatives or solutions to the various issues relating to ground water. (ACR)

  2. What are Segments in Google Analytics

    Science.gov (United States)

    Segments find all sessions that meet a specific condition. You can then apply this segment to any report in Google Analytics (GA). Segments are a way of identifying sessions and users while filters identify specific events, like pageviews.

  3. Automatic labeling and segmentation of vertebrae in CT images

    Science.gov (United States)

    Rasoulian, Abtin; Rohling, Robert N.; Abolmaesumi, Purang

    2014-03-01

    Labeling and segmentation of the spinal column from CT images is a pre-processing step for a range of image-guided interventions. State-of-the-art techniques have focused either on image feature extraction or template matching for labeling of the vertebrae, followed by segmentation of each vertebra. Recently, statistical multi-object models have been introduced to extract common statistical characteristics among several anatomies. In particular, we have created models for segmentation of the lumbar spine which are robust, accurate, and computationally tractable. In this paper, we reconstruct a statistical multi-vertebrae pose+shape model and utilize it in a novel framework for labeling and segmentation of the vertebrae in a CT image. We validate our technique in terms of accuracy of the labeling and segmentation of CT images acquired from 56 subjects. The method correctly labels all vertebrae in 70% of patients and is only one level off for the remaining 30%. The mean distance error achieved for the segmentation is 2.1 +/- 0.7 mm.

  4. Segmentation of fluorescence microscopy cell images using unsupervised mining.

    Science.gov (United States)

    Du, Xian; Dua, Sumeet

    2010-05-28

    The accurate measurement of cell and nuclei contours is critical for the sensitive and specific detection of changes in normal cells in several medical informatics disciplines. Within microscopy, this task is facilitated using fluorescence cell stains, and segmentation is often the first step in such approaches. Due to the complex nature of cell tissues and problems inherent to microscopy, unsupervised mining approaches of clustering can be incorporated in the segmentation of cells. In this study, we have developed and evaluated the performance of multiple unsupervised data mining techniques in cell image segmentation. We adapt four distinctive, yet complementary, methods for unsupervised learning, including those based on k-means clustering, EM, Otsu's threshold, and GMAC. Validation measures are defined, and the performance of the techniques is evaluated both quantitatively and qualitatively using synthetic and recently published real data. Experimental results demonstrate that k-means, Otsu's threshold, and GMAC perform similarly, and have more precise segmentation results than EM. We report that EM has higher recall values and lower precision results from under-segmentation due to its Gaussian model assumption. We also demonstrate that these methods need spatial information to segment complex real cell images with a high degree of efficacy, as expected in many medical informatics applications.
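
    Of the four methods compared, Otsu's threshold is the simplest to sketch; a minimal numpy version (our own, not the authors' implementation) follows:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the intensity threshold maximizing between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # class-0 (background) weight
    w1 = 1 - w0                             # class-1 (foreground) weight
    cum_mean = np.cumsum(p * centers)
    mu_total = cum_mean[-1]
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (mu_total - cum_mean) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
    return centers[np.argmax(between)]

# Two well-separated intensity populations: background ~0.2, nuclei ~0.8.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
t = otsu_threshold(img)
mask = img > t   # foreground/background split
```

    On a clearly bimodal histogram like this, the threshold lands in the valley between the two modes; the paper's point is that real cell images additionally need spatial information.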

  5. IBES: A Tool for Creating Instructions Based on Event Segmentation

    Directory of Open Access Journals (Sweden)

    Katharina eMura

    2013-12-01

    Full Text Available Receiving informative, well-structured, and well-designed instructions supports performance and memory in assembly tasks. We describe IBES, a tool with which users can quickly and easily create multimedia, step-by-step instructions by segmenting a video of a task into segments. In a validation study we demonstrate that the step-by-step structure of the visual instructions created by the tool corresponds to the natural event boundaries, which are assessed by event segmentation and are known to play an important role in memory processes. In one part of the study, twenty participants created instructions based on videos of two different scenarios by using the proposed tool. In the other part of the study, ten and twelve participants respectively segmented videos of the same scenarios yielding event boundaries for coarse and fine events. We found that the visual steps chosen by the participants for creating the instruction manual had corresponding events in the event segmentation. The number of instructional steps was a compromise between the number of fine and coarse events. Our interpretation of results is that the tool picks up on natural human event perception processes of segmenting an ongoing activity into events and enables the convenient transfer into meaningful multimedia instructions for assembly tasks. We discuss the practical application of IBES, for example, creating manuals for differing expertise levels, and give suggestions for research on user-oriented instructional design based on this tool.

  6. IBES: a tool for creating instructions based on event segmentation.

    Science.gov (United States)

    Mura, Katharina; Petersen, Nils; Huff, Markus; Ghose, Tandra

    2013-12-26

    Receiving informative, well-structured, and well-designed instructions supports performance and memory in assembly tasks. We describe IBES, a tool with which users can quickly and easily create multimedia, step-by-step instructions by segmenting a video of a task into segments. In a validation study we demonstrate that the step-by-step structure of the visual instructions created by the tool corresponds to the natural event boundaries, which are assessed by event segmentation and are known to play an important role in memory processes. In one part of the study, 20 participants created instructions based on videos of two different scenarios by using the proposed tool. In the other part of the study, 10 and 12 participants respectively segmented videos of the same scenarios yielding event boundaries for coarse and fine events. We found that the visual steps chosen by the participants for creating the instruction manual had corresponding events in the event segmentation. The number of instructional steps was a compromise between the number of fine and coarse events. Our interpretation of results is that the tool picks up on natural human event perception processes of segmenting an ongoing activity into events and enables the convenient transfer into meaningful multimedia instructions for assembly tasks. We discuss the practical application of IBES, for example, creating manuals for differing expertise levels, and give suggestions for research on user-oriented instructional design based on this tool.

  7. Ground states of a spin-boson model

    International Nuclear Information System (INIS)

    Amann, A.

    1991-01-01

    Phase transitions with respect to ground states of a spin-boson Hamiltonian are investigated. The spin-boson model under discussion consists of one spin and infinitely many bosons with a dipole-type coupling. It is shown that the order parameter of the model vanishes with respect to arbitrary ground states if it vanishes with respect to ground states obtained as (biased) temperature-to-zero limits of thermal equilibrium states. The ground states of the latter special type have been investigated by H. Spohn. Spohn's respective phase diagrams are therefore valid for arbitrary ground states. Furthermore, disjointness of ground states in the broken symmetry regime is examined.

  8. CLG for Automatic Image Segmentation

    OpenAIRE

    Christo Ananth; S.Santhana Priya; S.Manisha; T.Ezhil Jothi; M.S.Ramasubhaeswari

    2017-01-01

    This paper proposes an automatic segmentation method which effectively combines the Active Contour Model, the Live Wire method and the Graph Cut approach (CLG). The aim of the Live Wire method is to give the user control over the segmentation process during execution. The Active Contour Model fits a statistical model of object shape and appearance, built during a training phase, to a new image. In the graph cut technique, each pixel is represented as a node and the distance between those nodes is rep...

  9. Market segmentation, targeting and positioning

    OpenAIRE

    Camilleri, Mark Anthony

    2017-01-01

    Businesses may not be in a position to satisfy all of their customers, every time. It may prove difficult to meet the exact requirements of each individual customer. People do not have identical preferences, so rarely does one product completely satisfy everyone. Many companies therefore adopt a strategy known as target marketing. This strategy involves dividing the market into segments and developing products or services for these segments. A target marketing strategy is focused on ...

  10. Recognition Using Classification and Segmentation Scoring

    National Research Council Canada - National Science Library

    Kimball, Owen; Ostendorf, Mari; Rohlicek, Robin

    1992-01-01

    .... We describe an approach to connected word recognition that allows the use of segmental information through an explicit decomposition of the recognition criterion into classification and segmentation scoring...

  11. Polarization image segmentation of radiofrequency ablated porcine myocardial tissue.

    Directory of Open Access Journals (Sweden)

    Iftikhar Ahmad

    Full Text Available Optical polarimetry has previously imaged the spatial extent of a typical radiofrequency ablated (RFA) lesion in myocardial tissue, exhibiting significantly lower total depolarization at the necrotic core compared to healthy tissue, and intermediate values at the RFA rim region. Here, total depolarization in ablated myocardium was used to segment the total depolarization image into three zones (core, rim and healthy). A local fuzzy thresholding algorithm was used for this multi-region segmentation, and then compared with a ground truth segmentation obtained from manual demarcation of RFA core and rim regions on the histopathology image. Quantitative comparison of the algorithm segmentation results was performed with evaluation metrics such as Dice similarity coefficient (DSC = 0.78 ± 0.02 and 0.80 ± 0.02), sensitivity (Sn = 0.83 ± 0.10 and 0.91 ± 0.08), specificity (Sp = 0.76 ± 0.17 and 0.72 ± 0.17) and accuracy (Acc = 0.81 ± 0.09 and 0.71 ± 0.10) for RFA core and rim regions, respectively. This automatic segmentation of parametric depolarization images suggests a novel application of optical polarimetry, namely its use in objective RFA image quantification.
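
    The four evaluation metrics reported above can be computed from binary masks as in this generic numpy sketch (the function name and the toy masks are ours):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice similarity, sensitivity, specificity and accuracy for a
    predicted binary mask against a ground-truth binary mask."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()      # true positives
    tn = np.logical_and(~pred, ~truth).sum()    # true negatives
    fp = np.logical_and(pred, ~truth).sum()     # false positives
    fn = np.logical_and(~pred, truth).sum()     # false negatives
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "Sn": tp / (tp + fn),
        "Sp": tn / (tn + fp),
        "Acc": (tp + tn) / (tp + tn + fp + fn),
    }

truth = np.zeros((10, 10), bool); truth[2:8, 2:8] = True  # 36-pixel lesion
pred = np.zeros((10, 10), bool); pred[3:8, 2:8] = True    # 30 px, all inside
m = segmentation_metrics(pred, truth)
```

    Note that DSC ignores true negatives, which is why it is preferred over accuracy when the region of interest is small relative to the image.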

  12. Pulse shapes and surface effects in segmented germanium detectors

    International Nuclear Information System (INIS)

    Lenz, Daniel

    2010-01-01

    It is well established that at least two neutrinos are massive. The absolute neutrino mass scale and the neutrino hierarchy are still unknown. In addition, it is not known whether the neutrino is a Dirac or a Majorana particle. The GERmanium Detector Array (GERDA) will be used to search for neutrinoless double beta decay of ⁷⁶Ge. The discovery of this decay could help to answer the open questions. In the GERDA experiment, germanium detectors enriched in the isotope ⁷⁶Ge are used as source and detector at the same time. The experiment is planned in two phases. In the first phase, existing detectors are deployed. In the second phase, additional detectors will be added. These detectors can be segmented. A low background index around the Q value of the decay is important to maximize the sensitivity of the experiment. This can be achieved through anti-coincidences between segments and through pulse shape analysis. The background index due to radioactive decays in the detector strings and the detectors themselves was estimated, using Monte Carlo simulations for a nominal GERDA Phase II array with 18-fold segmented germanium detectors. A pulse shape simulation package was developed for segmented high-purity germanium detectors. The pulse shape simulation was validated with data taken with a 19-fold segmented high-purity germanium detector. The main part of the detector is 18-fold segmented, 6-fold in the azimuthal angle and 3-fold in the height. A 19th segment of 5 mm thickness was created on the top surface of the detector. The detector was characterized and events with energy deposited in the top segment were studied in detail. It was found that the metalization close to the end of the detector is very important with respect to the length of the pulses observed. In addition, indications for n-type and p-type surface channels were found. (orig.)

  13. Pulse shapes and surface effects in segmented germanium detectors

    Energy Technology Data Exchange (ETDEWEB)

    Lenz, Daniel

    2010-03-24

    It is well established that at least two neutrinos are massive. The absolute neutrino mass scale and the neutrino hierarchy are still unknown. In addition, it is not known whether the neutrino is a Dirac or a Majorana particle. The GERmanium Detector Array (GERDA) will be used to search for neutrinoless double beta decay of ⁷⁶Ge. The discovery of this decay could help to answer the open questions. In the GERDA experiment, germanium detectors enriched in the isotope ⁷⁶Ge are used as source and detector at the same time. The experiment is planned in two phases. In the first phase, existing detectors are deployed. In the second phase, additional detectors will be added. These detectors can be segmented. A low background index around the Q value of the decay is important to maximize the sensitivity of the experiment. This can be achieved through anti-coincidences between segments and through pulse shape analysis. The background index due to radioactive decays in the detector strings and the detectors themselves was estimated, using Monte Carlo simulations for a nominal GERDA Phase II array with 18-fold segmented germanium detectors. A pulse shape simulation package was developed for segmented high-purity germanium detectors. The pulse shape simulation was validated with data taken with a 19-fold segmented high-purity germanium detector. The main part of the detector is 18-fold segmented, 6-fold in the azimuthal angle and 3-fold in the height. A 19th segment of 5 mm thickness was created on the top surface of the detector. The detector was characterized and events with energy deposited in the top segment were studied in detail. It was found that the metalization close to the end of the detector is very important with respect to the length of the pulses observed. In addition, indications for n-type and p-type surface channels were found. (orig.)

  14. Explicating Validity

    Science.gov (United States)

    Kane, Michael T.

    2016-01-01

    How we choose to use a term depends on what we want to do with it. If "validity" is to be used to support a score interpretation, validation would require an analysis of the plausibility of that interpretation. If validity is to be used to support score uses, validation would require an analysis of the appropriateness of the proposed…

  15. Quantification of the efficiency of segmentation methods on medical images by means of non-Euclidean distances

    International Nuclear Information System (INIS)

    Pastore, J; Moler, E; Ballarin, V

    2007-01-01

    To quantify the efficiency of a segmentation method, it is necessary to perform validation experiments, which generally consist of comparing the obtained result against the expected result. The most direct method of validation is a simple visual comparison between the automatic segmentation and a segmentation obtained manually by a specialist, but this method does not guarantee robustness. This work presents a new similarity parameter between a segmented object and a control object that combines a measurement of spatial similarity, through the Hausdorff metric, with the difference in the contour areas based on the symmetric difference between sets.
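
    The two ingredients of the proposed parameter, spatial similarity via the Hausdorff metric and the area of the symmetric difference, can be sketched in numpy (a brute-force illustration of ours, not the authors' implementation):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets of shape (n, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # all pairs
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def symmetric_difference_area(mask_a, mask_b):
    """Pixel count of the symmetric difference between two binary masks."""
    return int(np.logical_xor(mask_a, mask_b).sum())

# Ground-truth square vs. a segmentation shifted one pixel to the right.
truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True
seg = np.zeros((8, 8), bool); seg[2:6, 3:7] = True
pts_t = np.argwhere(truth).astype(float)
pts_s = np.argwhere(seg).astype(float)
hd = hausdorff(pts_t, pts_s)                 # 1.0 for a one-pixel shift
sd = symmetric_difference_area(truth, seg)   # 8 pixels differ
```

    Combining a boundary-sensitive distance with an area-based term is what lets the parameter penalize both misplaced contours and over/under-segmentation.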

  16. Methods of evaluating segmentation characteristics and segmentation of major faults

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok [Seoul National Univ., Seoul (Korea, Republic of)] (and others)

    2000-03-15

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, with the following results. One- and two-dimensional electrical surveys revealed clearly that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. A field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred 14,000 to 25,000 yrs BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsangnamdo may be a segment boundary.

  17. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated ... a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation ...
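
    The outlier view of segmentation can be sketched with a robust background fit (a median/MAD Gaussian simplification of ours, not the authors' large-scale hypothesis-testing procedure):

```python
import numpy as np

def segment_outliers(image, alpha=3.0):
    """Flag pixels that are improbable under an estimated background
    distribution (a Gaussian fitted robustly via median/MAD), i.e. treat
    the segment of interest as a set of outliers."""
    med = np.median(image)
    mad = np.median(np.abs(image - med))
    sigma = 1.4826 * mad   # MAD-to-std-dev factor for a Gaussian background
    return np.abs(image - med) > alpha * sigma

rng = np.random.default_rng(1)
img = rng.normal(100.0, 2.0, (32, 32))   # background pixels
img[10:14, 10:14] = 130.0                # bright particle
mask = segment_outliers(img)
```

    Because the background parameters are estimated robustly, the threshold adapts to the local intensity statistics rather than being fixed globally.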

  18. Methods of evaluating segmentation characteristics and segmentation of major faults

    International Nuclear Information System (INIS)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok

    2000-03-01

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, with the following results. One- and two-dimensional electrical surveys revealed clearly that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. A field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred 14,000 to 25,000 yrs BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsangnamdo may be a segment boundary.

  19. Electrical Subsurface Grounding Analysis

    International Nuclear Information System (INIS)

    J.M. Calle

    2000-01-01

    The purpose and objective of this analysis is to determine the present grounding requirements of the Exploratory Studies Facility (ESF) subsurface electrical system and to verify that the actual grounding system and devices satisfy the requirements

  20. Automatic aortic root segmentation in CTA whole-body dataset

    Science.gov (United States)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

    Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis disease. Typically, in this application a CTA data set is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and analyze the aortic root to determine if and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
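
    The agreement measure reported above, the Dice similarity index, is twice the overlap of two binary masks divided by the sum of their sizes; a minimal sketch (the function name is ours):

```python
import numpy as np

def dice_index(a, b):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

    A value of 0.965, as in the study, means the automatic contours and the analyst-corrected ground truth overlap almost completely.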

  1. Automatic segmentation of psoriasis lesions

    Science.gov (United States)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI for the assessment of lesions. Current algorithms can only handle single erythema or only deal with scaling segmentation; in practice, scaling and erythema are often mixed together. In order to segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. First, polarized light is applied, exploiting the skin's Tyndall effect, to eliminate reflections in the imaging, and the Lab color space is used to match human perception. Second, a sliding window and its sub-windows are used to extract texture and color features. In this step, an image-roughness feature is defined so that scaling can be easily separated from normal skin. Finally, random forests are used to ensure the generalization ability of the algorithm. This algorithm gives reliable segmentation results even when images have different lighting conditions and skin types. In the dataset offered by Union Hospital, more than 90% of images can be segmented accurately.
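
    The paper's image-roughness feature is not defined in the abstract; one plausible stand-in, labeled as our assumption, is the local standard deviation of intensity, which is high on scaly texture and near zero on smooth skin:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def roughness(image, size=5):
    """Hypothetical 'roughness' feature (our stand-in, not the paper's
    definition): local std of intensity from windowed first and second
    moments. Scaly skin scores high; smooth skin scores near zero."""
    img = image.astype(float)
    m1 = uniform_filter(img, size)          # local mean
    m2 = uniform_filter(img * img, size)    # local mean of squares
    return np.sqrt(np.maximum(m2 - m1 * m1, 0.0))
```

    Such a feature map would be computed per sliding window and fed to the random forest alongside the color features.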

  2. The ground based plan

    International Nuclear Information System (INIS)

    1989-01-01

    The paper presents a report of ''The Ground Based Plan'' of the United Kingdom Science and Engineering Research Council. The ground based plan is a plan for research in astronomy and planetary science by ground based techniques. The contents of the report contains a description of:- the scientific objectives and technical requirements (the basis for the Plan), the present organisation and funding for the ground based programme, the Plan, the main scientific features and the further objectives of the Plan. (U.K.)

  3. Skip segment Hirschsprung disease and Waardenburg syndrome

    Directory of Open Access Journals (Sweden)

    Erica R. Gross

    2015-04-01

    Full Text Available Skip segment Hirschsprung disease describes a segment of ganglionated bowel between two segments of aganglionated bowel. It is a rare phenomenon that is difficult to diagnose. We describe a recent case of skip segment Hirschsprung disease in a neonate with a family history of Waardenburg syndrome and the genetic profile that was identified.

  4. U.S. Army Custom Segmentation System

    Science.gov (United States)

    2007-06-01

    segmentation is individual or intergroup differences in response to marketing-mix variables. Presumptions about segments: different demands in a...product or service category; respond differently to changes in the marketing mix. Criteria for segments: the segments must exist in the environment

  5. Skip segment Hirschsprung disease and Waardenburg syndrome

    OpenAIRE

    Gross, Erica R.; Geddes, Gabrielle C.; McCarrier, Julie A.; Jarzembowski, Jason A.; Arca, Marjorie J.

    2015-01-01

    Skip segment Hirschsprung disease describes a segment of ganglionated bowel between two segments of aganglionated bowel. It is a rare phenomenon that is difficult to diagnose. We describe a recent case of skip segment Hirschsprung disease in a neonate with a family history of Waardenburg syndrome and the genetic profile that was identified.

  6. Constructivist Grounded Theory?

    Directory of Open Access Journals (Sweden)

    Barney G. Glaser, PhD, Hon. PhD

    2012-06-01

    Full Text Available Abstract: I refer to and use as scholarly inspiration Charmaz's excellent article on constructivist grounded theory as a tool for getting to the fundamental issues of why grounded theory is not constructivist. I show that constructivist data, if it exists at all, is a very, very small part of the data that grounded theory uses.

  7. Communication, concepts and grounding

    NARCIS (Netherlands)

    van der Velde, Frank; van der Velde, F.

    2015-01-01

    This article discusses the relation between communication and conceptual grounding. In the brain, neurons, circuits and brain areas are involved in the representation of a concept, grounding it in perception and action. In terms of grounding we can distinguish between communication within the brain

  8. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    Science.gov (United States)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observers variability. In this paper, we evaluated how this algorithm could accurately estimate the ground truth in PET imaging. Complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. Consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentations results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentations results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
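
    A minimal sketch of binary STAPLE (EM over a hidden true segmentation and per-rater sensitivity/specificity, after Warfield et al.) illustrates the algorithm being evaluated. This keeps the foreground prior fixed and is not the Computational Radiology Laboratory implementation or its specific configuration:

```python
import numpy as np

def staple(D, prior=0.5, iters=50):
    """Minimal binary STAPLE: EM estimation of the hidden true
    segmentation plus each rater's sensitivity p and specificity q.
    D is an (n_raters, n_voxels) 0/1 array; returns (W, p, q), where
    W is the per-voxel posterior foreground probability."""
    D = D.astype(float)
    R, N = D.shape
    p = np.full(R, 0.9)   # initial sensitivities
    q = np.full(R, 0.9)   # initial specificities
    for _ in range(iters):
        # E-step: log-likelihood of each voxel being foreground/background.
        la = np.log(prior) + (D.T * np.log(p) + (1 - D.T) * np.log(1 - p)).sum(1)
        lb = np.log(1 - prior) + (D.T * np.log(1 - q) + (1 - D.T) * np.log(q)).sum(1)
        W = 1.0 / (1.0 + np.exp(lb - la))
        # M-step: re-estimate rater performance from the soft truth.
        p = (D * W).sum(1) / W.sum()
        q = ((1 - D) * (1 - W)).sum(1) / (1 - W).sum()
        p, q = p.clip(1e-6, 1 - 1e-6), q.clip(1e-6, 1 - 1e-6)
    return W, p, q
```

    Thresholding W at 0.5 yields the consensus segmentation compared against manual delineations in the study.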

  9. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    International Nuclear Information System (INIS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Vermandel, Maximilien; Baillet, Clio

    2015-01-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observers variability. In this paper, we evaluated how this algorithm could accurately estimate the ground truth in PET imaging. Complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. Consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentations results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentations results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging. (paper)

  10. A systematic review of definitions and classification systems of adjacent segment pathology.

    Science.gov (United States)

    Kraemer, Paul; Fehlings, Michael G; Hashimoto, Robin; Lee, Michael J; Anderson, Paul A; Chapman, Jens R; Raich, Annie; Norvell, Daniel C

    2012-10-15

    Systematic review. To undertake a systematic review to determine how "adjacent segment degeneration," "adjacent segment disease," or clinical pathological processes that serve as surrogates for adjacent segment pathology are classified and defined in the peer-reviewed literature. Adjacent segment degeneration and adjacent segment disease are terms referring to degenerative changes known to occur after reconstructive spine surgery, most commonly at an immediately adjacent functional spinal unit. These can include disc degeneration, instability, spinal stenosis, facet degeneration, and deformity. The true incidence and clinical impact of degenerative changes at the adjacent segment is unclear because there is a lack of a universally accepted classification system that rigorously addresses clinical and radiological issues. A systematic review of the English language literature was undertaken and articles were classified using the Grades of Recommendation Assessment, Development, and Evaluation criteria. Results: Seven classification systems of spinal degeneration, including degeneration at the adjacent segment, were identified. None have been evaluated for reliability or validity specific to patients with degeneration at the adjacent segment. The ways in which terms related to adjacent segment "degeneration" or "disease" are defined in the peer-reviewed literature are highly variable. On the basis of the systematic review presented in this article, no formal classification system for either cervical or thoracolumbar adjacent segment disorders currently exists. No recommendations regarding the use of current classification of degeneration at any segments can be made based on the available literature. A new comprehensive definition for adjacent segment pathology (ASP, the now preferred terminology) has been proposed in this Focus Issue, which reflects the diverse pathology observed at functional spinal units adjacent to previous spinal reconstruction and balances

  11. A Novel Iris Segmentation Scheme

    Directory of Open Access Journals (Sweden)

    Chen-Chung Liu

    2014-01-01

    Full Text Available One of the key steps in the iris recognition system is the accurate iris segmentation from its surrounding noises including pupil, sclera, eyelashes, and eyebrows of a captured eye-image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate the outlier points to extract a more precise iris area from an eye-image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and delete these noises. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides a more effective and efficient iris segmentation than other conventional methods.
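
    Kåsa-style circle fitting reduces circle estimation to a linear least-squares problem by rewriting (x-a)² + (y-b)² = r² as 2ax + 2by + c = x² + y²; a minimal sketch (the function name is ours):

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Kåsa least-squares circle fit: solve the linearized system
    2ax + 2by + c = x^2 + y^2 for center (a, b) and c = r^2 - a^2 - b^2."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x * x + y * y
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a * a + b * b)
    return a, b, r
```

    Unlike a Hough transform, this needs no accumulator over the parameter space, which is why it is fast on candidate boundary points.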

  12. Document segmentation via oblique cuts

    Science.gov (United States)

    Svendsen, Jeremy; Branzan-Albu, Alexandra

    2013-01-01

    This paper presents a novel solution for the layout segmentation of graphical elements in Business Intelligence documents. We propose a generalization of the recursive X-Y cut algorithm, which allows for cutting along arbitrary oblique directions. An intermediate processing step consisting of line and solid region removal is also necessary due to the presence of decorative elements. The output of the proposed segmentation is a hierarchical structure which allows for the identification of primitives in pie and bar charts. The algorithm was tested on a database composed of charts from business documents. Results are very promising.
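
    The oblique generalization of the X-Y cut can be illustrated by projecting ink-pixel coordinates onto a cut normal and locating the widest whitespace gap in the resulting 1D profile; a simplified sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def best_cut(ink, theta, bins=64):
    """Find a whitespace cut along an oblique direction: project ink
    pixels onto the normal at angle theta (radians), histogram the
    projections, and return the midpoint and width (in bins) of the
    widest empty run. theta = 0 reproduces a vertical X-cut."""
    ys, xs = np.nonzero(ink)
    proj = xs * np.cos(theta) + ys * np.sin(theta)
    hist, edges = np.histogram(proj, bins=bins)
    empty = hist == 0
    best_len, best_mid, run = 0, None, 0
    for i, e in enumerate(empty):
        run = run + 1 if e else 0
        if run > best_len:
            best_len = run
            best_mid = 0.5 * (edges[i + 1 - run] + edges[i + 1])
    return best_mid, best_len
```

    A recursive segmenter would try several angles, cut at the best gap, and recurse on the two resulting regions.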

  13. Optimally segmented permanent magnet structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    We present an optimization approach which can be employed to calculate the globally optimal segmentation of a two-dimensional magnetic system into uniformly magnetized pieces. For each segment the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector......, with respect to a linear objective functional. We illustrate the approach with results for magnet design problems from different areas, such as a permanent magnet electric motor, a beam focusing quadrupole magnet for particle accelerators and a rotary device for magnetic refrigeration....

  14. Intercalary bone segment transport in treatment of segmental tibial defects

    International Nuclear Information System (INIS)

    Iqbal, A.; Amin, M.S.

    2002-01-01

    Objective: To evaluate the results and complications of intercalary bone segment transport in the treatment of segmental tibial defects. Design: This is a retrospective analysis of patients with segmental tibial defects who were treated with the intercalary bone segment transport method. Place and Duration of Study: The study was carried out at Combined Military Hospital, Rawalpindi from September 1997 to April 2001. Subjects and Methods: Thirteen patients were included in the study who had developed tibial defects either due to open fractures with bone loss or subsequent to bone debridement of infected non-unions. The mean bone defect was 6.4 cm and there were eight associated soft tissue defects. A locally made unilateral 'Naseer-Awais' (NA) fixator was used for bone segment transport. The distraction was done at the rate of 1 mm/day after 7-10 days of osteotomy. The patients were followed up fortnightly during distraction and monthly thereafter. The mean follow-up duration was 18 months. Results: The mean time in external fixation was 9.4 months. The mean 'healing index' was 1.47 months/cm. Satisfactory union was achieved in all cases. Six cases (46.2%) required bone grafting at the target site and in one of them grafting was required at the level of regeneration as well. All the wounds healed well with no residual infection. There was no residual leg length discrepancy of more than 20 mm and one angular deformity of more than 5 degrees. The commonest complication encountered was pin track infection, seen in 38% of Schanz screws applied. Loosening occurred in 6.8% of Schanz screws, requiring re-adjustment. Ankle joint contracture with equinus deformity and peroneal nerve paresis occurred in one case each. The functional results were graded as 'good' in seven, 'fair' in four, and 'poor' in two patients. Overall, thirteen patients had 31 (minor/major) complications with a ratio of 2.38 complications per patient.
To treat the bone defects and associated complications, a mean of

  15. Semiautomatic segmentation of liver metastases on volumetric CT images

    International Nuclear Information System (INIS)

    Yan, Jiayong; Schwartz, Lawrence H.; Zhao, Binsheng

    2015-01-01

    Purpose: Accurate segmentation and quantification of liver metastases on CT images are critical to surgery/radiation treatment planning and therapy response assessment. To date, there are no reliable methods to perform such segmentation automatically. In this work, the authors present a method for semiautomatic delineation of liver metastases on contrast-enhanced volumetric CT images. Methods: The first step is to manually place a seed region-of-interest (ROI) in the lesion on an image. This ROI will (1) serve as an internal marker and (2) assist in automatically identifying an external marker. With these two markers, lesion contour on the image can be accurately delineated using traditional watershed transformation. Density information will then be extracted from the segmented 2D lesion and help determine the 3D connected object that is a candidate of the lesion volume. The authors have developed a robust strategy to automatically determine internal and external markers for marker-controlled watershed segmentation. By manually placing a seed region-of-interest in the lesion to be delineated on a reference image, the method can automatically determine dual threshold values to approximately separate the lesion from its surrounding structures and refine the thresholds from the segmented lesion for the accurate segmentation of the lesion volume. This method was applied to 69 liver metastases (1.1–10.3 cm in diameter) from a total of 15 patients. An independent radiologist manually delineated all lesions and the resultant lesion volumes served as the “gold standard” for validation of the method’s accuracy. Results: The algorithm received a median overlap, overestimation ratio, and underestimation ratio of 82.3%, 6.0%, and 11.5%, respectively, and a median average boundary distance of 1.2 mm. Conclusions: Preliminary results have shown that volumes of liver metastases on contrast-enhanced CT images can be accurately estimated by a semiautomatic segmentation
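
    The dual-threshold and 3D connected-component stage of such a pipeline can be sketched as follows; deriving thresholds directly from a seed ROI's intensity statistics is a simplified stand-in for the paper's refinement procedure, and the names and margin parameter are ours:

```python
import numpy as np
from scipy import ndimage

def lesion_component(vol, seed, margin=1.5):
    """Sketch of one pipeline stage: take intensity statistics from a
    small ROI around the seed, build a dual-threshold mask, and keep
    only the 3D connected object that contains the seed voxel."""
    z, y, x = seed
    roi = vol[max(z - 2, 0):z + 3, max(y - 2, 0):y + 3,
              max(x - 2, 0):x + 3].astype(float)
    lo = roi.mean() - margin * roi.std()
    hi = roi.mean() + margin * roi.std()
    mask = (vol >= lo) & (vol <= hi)
    labels, _ = ndimage.label(mask)          # 3D connected components
    return labels == labels[z, y, x]
```

    In the actual method this candidate volume is further refined per slice with marker-controlled watershed segmentation.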

  16. Segmental Analysis of Chlorprothixene and Desmethylchlorprothixene in Postmortem Hair.

    Science.gov (United States)

    Günther, Kamilla Nyborg; Johansen, Sys Stybe; Wicktor, Petra; Banner, Jytte; Linnet, Kristian

    2018-06-26

    Analysis of drugs in hair differs from their analysis in other tissues due to the extended detection window, as well as the opportunity that segmental hair analysis offers for the detection of changes in drug intake over time. The antipsychotic drug chlorprothixene is widely used, but few reports exist on chlorprothixene concentrations in hair. In this study, we analyzed hair segments from 20 deceased psychiatric patients who had undergone chronic chlorprothixene treatment, and we report hair concentrations of chlorprothixene and its metabolite desmethylchlorprothixene. Three to six 1-cm long segments were analyzed per individual, corresponding to ~3-6 months of hair growth before death, depending on the length of the hair. We used a previously published and fully validated liquid chromatography-tandem mass spectrometry method for the hair analysis. The 10th-90th percentiles of chlorprothixene and desmethylchlorprothixene concentrations in all hair segments were 0.05-0.84 ng/mg and 0.06-0.89 ng/mg, respectively, with medians of 0.21 and 0.24 ng/mg, and means of 0.38 and 0.43 ng/mg. The estimated daily dosages ranged from 28 mg/day to 417 mg/day. We found a significant positive correlation between the concentration in hair and the estimated daily doses for both chlorprothixene (P = 0.0016, slope = 0.0044 [ng/mg hair]/[mg/day]) and the metabolite desmethylchlorprothixene (P = 0.0074). Concentrations generally decreased throughout the hair shaft from proximal to distal segments, with an average reduction in concentration from segment 1 to segment 3 of 24% for all cases, indicating that most of the individuals had been compliant with their treatment. We have provided some guidance regarding reference levels for chlorprothixene and desmethylchlorprothixene concentrations in hair from patients undergoing long-term chlorprothixene treatment.
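
    The reported dose-concentration relationship is a least-squares slope in units of (ng/mg hair)/(mg/day); a minimal sketch with made-up illustrative numbers, not the study's data:

```python
import numpy as np

def dose_concentration_fit(dose_mg_day, conc_ng_mg):
    """Least-squares line and Pearson r for hair concentration vs.
    estimated daily dose; the slope has units (ng/mg hair)/(mg/day)."""
    slope, intercept = np.polyfit(dose_mg_day, conc_ng_mg, 1)
    r = np.corrcoef(dose_mg_day, conc_ng_mg)[0, 1]
    return slope, intercept, r
```

    A significant positive slope, as found for both chlorprothixene and desmethylchlorprothixene, supports using hair levels as a coarse dose indicator.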

  17. Rigour and grounded theory.

    Science.gov (United States)

    Cooney, Adeline

    2011-01-01

    This paper explores ways to enhance and demonstrate rigour in a grounded theory study. Grounded theory is sometimes criticised for a lack of rigour. Beck (1993) identified credibility, auditability and fittingness as the main standards of rigour for qualitative research methods. These criteria were evaluated for applicability to a Straussian grounded theory study and expanded or refocused where necessary. The author uses a Straussian grounded theory study (Cooney, in press) to examine how the revised criteria can be applied when conducting a grounded theory study. Strauss and Corbin's (1998b) criteria for judging the adequacy of a grounded theory were examined in the context of the wider literature examining rigour in qualitative research studies in general and grounded theory studies in particular. A literature search for 'rigour' and 'grounded theory' was carried out to support this analysis. Criteria are suggested for enhancing and demonstrating the rigour of a Straussian grounded theory study. These include: cross-checking emerging concepts against participants' meanings, asking experts if the theory 'fits' their experiences, and recording detailed memos outlining all analytical and sampling decisions. Implications for research practice: The criteria identified have been expressed as questions to enable novice researchers to audit the extent to which they are demonstrating rigour when writing up their studies. However, it should not be forgotten that rigour is built into the grounded theory method through the inductive-deductive cycle of theory generation. Care in applying the grounded theory methodology correctly is the single most important factor in ensuring rigour.

  18. Hydrophilic segmented block copolymers based on poly(ethylene oxide) and monodisperse amide segments

    NARCIS (Netherlands)

    Husken, D.; Feijen, Jan; Gaymans, R.J.

    2007-01-01

    Segmented block copolymers based on poly(ethylene oxide) (PEO) flexible segments and monodisperse crystallizable bisester tetra-amide segments were made via a polycondensation reaction. The molecular weight of the PEO segments varied from 600 to 4600 g/mol and a bisester tetra-amide segment (T6T6T)

  19. TED: A Tolerant Edit Distance for segmentation evaluation.

    Science.gov (United States)

    Funke, Jan; Klein, Jonas; Moreno-Noguer, Francesc; Cardona, Albert; Cook, Matthew

    2017-02-15

    In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images, where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods. Copyright © 2016. Published by Elsevier Inc.
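
    A heavily simplified split/merge count over a label contingency table conveys the flavor of the TED; unlike the real measure it has no boundary-shift tolerance and no optimal matching, only a per-overlap voxel-count tolerance (our simplification, not the paper's algorithm):

```python
import numpy as np

def split_merge_count(gt, pred, tol=0):
    """Greedy split/merge error count inspired by the TED idea:
    overlaps of at most `tol` voxels are forgiven. A ground-truth
    segment covered by k surviving predicted segments costs k-1
    splits; a predicted segment covering k ground-truth segments
    costs k-1 merges."""
    pairs = np.stack([np.asarray(gt).ravel(), np.asarray(pred).ravel()])
    uniq, counts = np.unique(pairs, axis=1, return_counts=True)
    keep = counts > tol
    g, p = uniq[0, keep], uniq[1, keep]
    splits = sum(max(0, p[g == lab].size - 1) for lab in np.unique(g))
    merges = sum(max(0, g[p == lab].size - 1) for lab in np.unique(p))
    return splits, merges
```

    The actual TED instead searches for the minimal weighted edit sequence while allowing boundaries to shift within the stated tolerance.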

  20. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    Method for supervised segmentation of volumetric data. The method is trained from manual annotations, and these annotations make the method very flexible, which we demonstrate in our experiments. Our method infers label information locally by matching the pattern in a neighborhood around a voxel ...... to a dictionary, and hereby accounts for the volume texture....
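
    A toy version of the dictionary lookup described above: label each voxel by the center label of its nearest training patch under brute-force L2 matching (names and the patch layout are ours; the actual method is more elaborate):

```python
import numpy as np

def dictionary_segment(vol, dict_patches, dict_labels, r=1):
    """Toy dictionary-based volume segmentation: for each interior
    voxel, extract its (2r+1)^3 neighborhood, find the nearest
    dictionary patch (L2), and assign that patch's center label.
    dict_patches: (n, (2r+1)**3) rows; dict_labels: (n,)."""
    out = np.zeros(vol.shape, dtype=dict_labels.dtype)
    Z, Y, X = vol.shape
    for z in range(r, Z - r):
        for y in range(r, Y - r):
            for x in range(r, X - r):
                patch = vol[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1].ravel()
                d = ((dict_patches - patch) ** 2).sum(1)
                out[z, y, x] = dict_labels[np.argmin(d)]
    return out
```

    In practice the dictionary would be built from the manual annotations, and the matching accelerated (e.g. with a k-d tree) rather than brute-forced.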

  1. Multiple Segmentation of Image Stacks

    DEFF Research Database (Denmark)

    Smets, Jonathan; Jaeger, Manfred

    2014-01-01

    We propose a method for the simultaneous construction of multiple image segmentations by combining a recently proposed “convolution of mixtures of Gaussians” model with a multi-layer hidden Markov random field structure. The resulting method constructs for a single image several, alternative...

  2. Segmenting Trajectories by Movement States

    NARCIS (Netherlands)

    Buchin, M.; Kruckenberg, H.; Kölzsch, A.; Timpf, S.; Laube, P.

    2013-01-01

    Dividing movement trajectories according to different movement states of animals has become a challenge in movement ecology, as well as in algorithm development. In this study, we revisit and extend a framework for trajectory segmentation based on spatio-temporal criteria for this purpose. We adapt

  3. Segmental Colitis Complicating Diverticular Disease

    Directory of Open Access Journals (Sweden)

    Guido Ma Van Rosendaal

    1996-01-01

    Full Text Available Two cases of idiopathic colitis affecting the sigmoid colon in elderly patients with underlying diverticulosis are presented. Segmental resection has permitted close review of the histopathology in this syndrome which demonstrates considerable similarity to changes seen in idiopathic ulcerative colitis. The reported experience with this syndrome and its clinical features are reviewed.

  4. Leaf segmentation in plant phenotyping

    NARCIS (Netherlands)

    Scharr, Hanno; Minervini, Massimo; French, Andrew P.; Klukas, Christian; Kramer, David M.; Liu, Xiaoming; Luengo, Imanol; Pape, Jean Michel; Polder, Gerrit; Vukadinovic, Danijela; Yin, Xi; Tsaftaris, Sotirios A.

    2016-01-01

    Image-based plant phenotyping is a growing application area of computer vision in agriculture. A key task is the segmentation of all individual leaves in images. Here we focus on the most common rosette model plants, Arabidopsis and young tobacco. Although leaves do share appearance and shape

  5. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China); Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China); Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China); Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)

    2014-04-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT

  6. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    International Nuclear Information System (INIS)

    Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang; Chen, Ken Chung; Shen, Steve G. F.; Yan, Jin; Lee, Philip K. M.; Chow, Ben; Liu, Nancy X.; Xia, James J.; Shen, Dinggang

    2014-01-01

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating three-dimensional (3D) models for the diagnosis and treatment planning of patients with CMF deformities. However, due to poor image quality, including a very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: The authors propose a fully automated CBCT segmentation method that uses patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparison with the traditional registration strategy and a population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy in comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT
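The label-propagation step described above can be sketched in a few lines. This is an illustrative simplification: Gaussian patch-similarity weights stand in for the sparse-coding coefficients of the actual method, and patches are flat lists of intensities.

```python
import math

def propagate_label(target_patch, atlas_patches, atlas_labels, sigma=1.0):
    """Estimate a patient-specific atlas probability for one voxel.

    Similarity weights over aligned atlas patches stand in here for the
    sparse-coding coefficients used in the paper (simplified sketch).
    atlas_labels holds the centre-voxel label (1 = bone) of each patch.
    """
    weights = []
    for patch in atlas_patches:
        dist2 = sum((a - b) ** 2 for a, b in zip(target_patch, patch))
        weights.append(math.exp(-dist2 / (2 * sigma ** 2)))
    total = sum(weights) or 1.0
    # Probability of bone = similarity-weighted vote of the atlas labels
    return sum(w, )if False else sum(w * l for w, l in zip(weights, atlas_labels)) / total
```

A target patch resembling a "bone" atlas patch receives a probability close to 1; the convex segmentation framework would then combine such probabilities with image terms.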

  7. The 1981 Argentina ground data collection

    Science.gov (United States)

    Horvath, R.; Colwell, R. N. (Principal Investigator); Hicks, D.; Sellman, B.; Sheffner, E.; Thomas, G.; Wood, B.

    1981-01-01

    Over 600 fields in the corn, soybean, and wheat growing regions of the Argentine pampa were categorized by crop or cover type, and ancillary data, including crop calendars, historical crop production statistics, and certain cropping practices, were also gathered. A summary of the field work undertaken is included along with a country overview, a chronology of field trip planning and field work events, and the field work inventory of selected sample segments. LANDSAT images were annotated and used as the field work base, and several hundred ground and aerial photographs were taken. These items, along with segment descriptions, are presented. Meetings were held with officials of the State Secretariat of Agriculture (SEAG) and the National Commission on Space Investigations (CNIE), and their support of the program is described.

  8. Back Radiation Suppression through a Semitransparent Ground Plane for a mm-Wave Patch Antenna

    KAUST Repository

    Klionovski, Kirill; Shamim, Atif

    2017-01-01

    by a round semitransparent ground plane. The semitransparent ground plane has been realized using a low-cost carbon paste on a Kapton film. Experimental results match closely with those of simulations and validate the overall concept.

  9. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogeneous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  10. Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Martin Längkvist

    2016-04-01

    Full Text Available The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition tasks for remote sensing data.
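Using low-level per-pixel classes to clean up a higher-level segmentation is commonly done with a local majority vote; the sketch below is a generic illustration of that idea, not the paper's exact post-processing scheme.

```python
from collections import Counter

def majority_filter(labels, rows, cols, radius=1):
    """Smooth per-pixel class predictions with a local majority vote.

    labels is a flat row-major list of class labels for a rows x cols grid.
    Each output pixel takes the most common label in its (2*radius+1)^2
    neighbourhood, clipped at the image border.
    """
    out = [None] * (rows * cols)
    for r in range(rows):
        for c in range(cols):
            votes = Counter()
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        votes[labels[rr * cols + cc]] += 1
            out[r * cols + c] = votes.most_common(1)[0][0]
    return out
```

An isolated "water" pixel inside a "road" region, for instance, is voted away by its neighbours.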

  11. Left ventricle segmentation in cardiac MRI images using fully convolutional neural networks

    Science.gov (United States)

    Vázquez Romaguera, Liset; Costa, Marly Guimarães Fernandes; Romero, Francisco Perdigón; Costa Filho, Cicero Ferreira Fernandes

    2017-03-01

    According to the World Health Organization, cardiovascular diseases are the leading cause of death worldwide, accounting for 17.3 million deaths per year, a number that is expected to grow to more than 23.6 million by 2030. Most cardiac pathologies involve the left ventricle; therefore, estimation of several functional parameters from a prior segmentation of this structure can be helpful in diagnosis. Manual delineation is a time-consuming and tedious task that is also prone to high intra- and inter-observer variability. Thus, there is a need for an automated cardiac segmentation method to help facilitate the diagnosis of cardiovascular diseases. In this work we propose a deep fully convolutional neural network architecture to address this issue and assess its performance. The model was trained end to end in a supervised learning stage from whole cardiac MRI images and their ground truth to make a per-pixel classification. The Caffe deep learning framework and an NVIDIA Quadro K4200 graphics processing unit were used for its design, development and experimentation. The net architecture is: Conv64-ReLU (2x) - MaxPooling - Conv128-ReLU (2x) - MaxPooling - Conv256-ReLU (2x) - MaxPooling - Conv512-ReLU-Dropout (2x) - Conv2-ReLU - Deconv - Crop - Softmax. Training and testing were carried out using 5-fold cross validation with short-axis cardiac magnetic resonance images from the Sunnybrook Database. We obtained Dice scores of 0.92 and 0.90, Hausdorff distances of 4.48 and 5.43, Jaccard indices of 0.97 and 0.97, sensitivities of 0.92 and 0.90, and specificities of 0.99 and 0.99 (overall mean values with SGD and RMSProp, respectively).
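The Dice and Jaccard overlap measures reported above are straightforward to compute from binary masks; a minimal sketch for flat 0/1 lists:

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2.0 * inter / size if size else 1.0

def jaccard_index(pred, truth):
    """Jaccard index between two binary masks: |A∩B| / |A∪B|."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0
```

Both equal 1.0 for a perfect segmentation; Dice weights the overlap more generously than Jaccard for partial matches.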

  12. [Introduction to grounded theory].

    Science.gov (United States)

    Wang, Shou-Yu; Windsor, Carol; Yates, Patsy

    2012-02-01

    Grounded theory, first developed by Glaser and Strauss in the 1960s, was introduced into nursing education as a distinct research methodology in the 1970s. The theory is grounded in a critique of the dominant contemporary approach to social inquiry, which imposed "enduring" theoretical propositions onto study data. Rather than starting from a set theoretical framework, grounded theory relies on researchers distinguishing meaningful constructs from generated data and then identifying an appropriate theory. Grounded theory is thus particularly useful in investigating complex issues and behaviours not previously addressed and concepts and relationships in particular populations or places that are still undeveloped or weakly connected. Grounded theory data analysis processes include open, axial and selective coding levels. The purpose of this article was to explore the grounded theory research process and provide an initial understanding of this methodology.

  13. Ground Truth Collections at the MTI Core Sites

    International Nuclear Information System (INIS)

    Garrett, A.J.

    2001-01-01

    The Savannah River Technology Center (SRTC) selected 13 sites across the continental US and one site in the western Pacific to serve as the primary, or core, sites for collection of ground truth data for validation of MTI science algorithms. Imagery and ground truth data from several of these sites are presented in this paper. These sites are the Comanche Peak, Pilgrim and Turkey Point power plants, the Ivanpah playas, Crater Lake, Stennis Space Center and the Tropical Western Pacific ARM site on the island of Nauru. Ground truth data include water temperatures (bulk and skin), radiometric data, meteorological data and plant operating data. The organizations that manage these sites assist SRTC with its ground truth data collections and also give the MTI project a variety of ground truth measurements that they make for their own purposes. Collectively, the ground truth data from the 14 core sites constitute a comprehensive database for science algorithm validation

  14. The Grounded Theory Bookshelf

    Directory of Open Access Journals (Sweden)

    Vivian B. Martin, Ph.D.

    2005-03-01

    Full Text Available Bookshelf will provide critical reviews and perspectives on books on theory and methodology of interest to grounded theory. This issue includes a review of Heaton’s Reworking Qualitative Data, of special interest for some of its references to grounded theory as a secondary analysis tool; and Goulding’s Grounded Theory: A practical guide for management, business, and market researchers, a book that attempts to explicate the method and presents a grounded theory study that falls a little short of the mark of a fully elaborated theory. Reworking Qualitative Data, Janet Heaton (Sage, 2004). Paperback, 176 pages, $29.95. Hardcover also available.

  15. Hot Ground Vibration Tests

    Data.gov (United States)

    National Aeronautics and Space Administration — Ground vibration tests or modal surveys are routinely conducted to support flutter analysis for subsonic and supersonic vehicles. However, vibration testing...

  16. Fast prostate segmentation for brachytherapy based on joint fusion of images and labels

    Science.gov (United States)

    Nouranian, Saman; Ramezani, Mahdi; Mahdavi, S. Sara; Spadinger, Ingrid; Morris, William J.; Salcudean, Septimiu E.; Abolmaesumi, Purang

    2014-03-01

    Brachytherapy, one of the treatment methods for prostate cancer, is performed by implanting radioactive seeds inside the gland. The standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate, which are segmented in order to plan the appropriate seed placement. The segmentation process is usually performed either manually or semi-automatically and is associated with subjective errors because prostate visibility is limited in ultrasound images. The current segmentation process also limits the possibility of intra-operative delineation of the prostate to perform real-time dosimetry. In this paper, we propose a computationally inexpensive and fully automatic segmentation approach that takes advantage of previously segmented images to form a joint space of images and their segmentations. We utilize the joint Independent Component Analysis method to generate a model which is further employed to produce a probability map of the target segmentation. We evaluate this approach on the transrectal ultrasound volume images of 60 patients using a leave-one-out cross-validation approach. The results are compared with the manually segmented prostate contours that were used by clinicians to plan brachytherapy procedures. We show that the proposed approach is fast, with accuracy and precision comparable to those found in previous studies on TRUS segmentation.

  17. Atlas-based segmentation technique incorporating inter-observer delineation uncertainty for whole breast

    International Nuclear Information System (INIS)

    Bell, L R; Pogson, E M; Metcalfe, P; Holloway, L; Dowling, J A

    2017-01-01

    Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect, but fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated for a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC > 0.7, MASD < 9.3 mm) between the auto-segmented contours and CTV volumes. (paper)
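The leave-one-out protocol used here (each patient in turn is the test case while the remaining datasets build the atlas) can be sketched as:

```python
def leave_one_out(cases):
    """Yield (training_set, test_case) splits: each case in turn is held out
    while the rest would be used to build the atlas."""
    for i, test in enumerate(cases):
        yield cases[:i] + cases[i + 1:], test
```

For the 28-dataset cohort above, this yields 28 splits, each with 27 atlas-building datasets.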

  18. Semiautomated segmentation of blood vessels using ellipse-overlap criteria: Method and comparison to manual editing

    International Nuclear Information System (INIS)

    Shiffman, Smadar; Rubin, Geoffrey D.; Schraedley-Desmond, Pamela; Napel, Sandy

    2003-01-01

    Two-dimensional intensity-based methods for the segmentation of blood vessels from computed-tomography-angiography data often result in spurious segments that originate from other objects whose intensity distributions overlap with those of the vessels. When segmented images include spurious segments, additional methods are required to select segments that belong to the target vessels. We describe a method that allows experts to select vessel segments from sequences of segmented images with little effort. Our method uses ellipse-overlap criteria to differentiate between segments that belong to different objects and are separated in plane but are connected in the through-plane direction. To validate our method, we used it to extract vessel regions from volumes that were segmented via analysis of isolabel-contour maps, and showed that the difference between the results of our method and manually-edited results was within inter-expert variability. Although the total editing duration for our method, which included user-interaction and computer processing, exceeded that of manual editing, the extent of user interaction required for our method was about a fifth of that required for manual editing
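A simplified version of an ellipse-overlap connectivity test might look like the following. Reducing the criterion to axis-aligned ellipses and a centre-containment rule is an assumption for illustration, not the authors' exact criterion.

```python
def inside(ellipse, point):
    """True if point (x, y) lies inside an axis-aligned ellipse (cx, cy, a, b)."""
    cx, cy, a, b = ellipse
    x, y = point
    return ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0

def connected(e1, e2):
    """Through-plane connection test for segments in adjacent slices:
    here, segments connect if either ellipse's centre falls inside the other.
    A simplified stand-in for the paper's ellipse-overlap criterion."""
    return inside(e1, (e2[0], e2[1])) or inside(e2, (e1[0], e1[1]))
```

Chaining this test across slices links in-plane-separate vessel segments into one 3D structure while rejecting overlapping non-vessel objects whose cross-sections drift apart.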

  19. Ground-Based Telescope Parametric Cost Model

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
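A single-variable model of the kind mentioned above is typically a power law, cost = k·D^e, fitted by least squares in log-log space. A sketch of that fit follows; the coefficients the paper actually derives are not reproduced here.

```python
import math

def fit_power_law(diameters, costs):
    """Fit cost = k * D**e by ordinary least squares in log-log space,
    the usual way single-variable telescope cost curves are derived
    (the paper's multi-variable model adds further parameters)."""
    xs = [math.log(d) for d in diameters]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line is the exponent e
    e = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    # Intercept gives log(k)
    k = math.exp(my - e * mx)
    return k, e
```

Feeding in cost data generated from a known power law recovers its parameters exactly, which is a quick sanity check on the fit.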

  20. Coronary Arteries Segmentation Based on the 3D Discrete Wavelet Transform and 3D Neutrosophic Transform

    Directory of Open Access Journals (Sweden)

    Shuo-Tsung Chen

    2015-01-01

    Full Text Available Purpose. Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. Methods. The proposed segmentation method included 2 parts. First, 3D region growing was applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Results. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with the ground truth values obtained from the commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. Results indicate that the proposed method is better in terms of the efficiency analyzed. Conclusion. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.

  1. Coronary arteries segmentation based on the 3D discrete wavelet transform and 3D neutrosophic transform.

    Science.gov (United States)

    Chen, Shuo-Tsung; Wang, Tzung-Dau; Lee, Wen-Jeng; Huang, Tsai-Wei; Hung, Pei-Kai; Wei, Cheng-Yu; Chen, Chung-Ming; Kung, Woon-Man

    2015-01-01

    Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. The proposed segmentation method included 2 parts. First, 3D region growing was applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with the ground truth values obtained from the commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. Results indicate that the proposed method is better in terms of the efficiency analyzed. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
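The initial 3D region-growing step can be sketched as a flood fill with an intensity tolerance; representing the volume as a dict of voxel coordinates here is purely illustrative.

```python
from collections import deque

def region_grow(volume, seed, tol):
    """Grow a region from a seed voxel: 6-connected neighbours join while
    their intensity stays within tol of the seed intensity.
    volume maps (x, y, z) -> intensity; returns the set of region voxels."""
    seed_val = volume[seed]
    region = {seed}
    queue = deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = (x + dx, y + dy, z + dz)
            if nb in volume and nb not in region and abs(volume[nb] - seed_val) <= tol:
                region.add(nb)
                queue.append(nb)
    return region
```

Voxels whose intensity jumps beyond the tolerance (e.g. leaving the contrast-filled lumen) stop the growth, giving the rough vessel mask that the DWT/neutrosophic stage then refines.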

  2. Superpixel-based segmentation of muscle fibers in multi-channel microscopy.

    Science.gov (United States)

    Nguyen, Binh P; Heemskerk, Hans; So, Peter T C; Tucker-Kellogg, Lisa

    2016-12-05

    Confetti fluorescence and other multi-color genetic labelling strategies are useful for observing stem cell regeneration and for other problems of cell lineage tracing. One difficulty of such strategies is segmenting the cell boundaries, which is a very different problem from segmenting color images from the real world. This paper addresses the difficulties and presents a superpixel-based framework for segmentation of regenerated muscle fibers in mice. We propose to integrate an edge detector into a superpixel algorithm and customize the method for multi-channel images. The enhanced superpixel method outperforms the original and another advanced superpixel algorithm in terms of both boundary recall and under-segmentation error. Our framework was applied to cross-section and lateral section images of regenerated muscle fibers from confetti-fluorescent mice. Compared with "ground-truth" segmentations, our framework yielded median Dice similarity coefficients of 0.92 and higher. Our segmentation framework is flexible and provides very good segmentations of multi-color muscle fibers. We anticipate our methods will be useful for segmenting a variety of tissues in confetti-fluorescent mice and in mice with similar multi-color labels.
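Boundary recall, one of the two superpixel metrics cited above, measures how much of the ground-truth boundary is matched by a segmentation boundary within a small tolerance. A minimal sketch for flat row-major label grids:

```python
def boundaries(labels, rows, cols):
    """Pixels whose right or lower 4-neighbour carries a different label."""
    edges = set()
    for r in range(rows):
        for c in range(cols):
            here = labels[r * cols + c]
            if c + 1 < cols and labels[r * cols + c + 1] != here:
                edges.add((r, c))
            if r + 1 < rows and labels[(r + 1) * cols + c] != here:
                edges.add((r, c))
    return edges

def boundary_recall(seg, truth, rows, cols, d=1):
    """Fraction of ground-truth boundary pixels with a segmentation boundary
    pixel within Chebyshev distance d (simplified sketch of the metric)."""
    seg_b = boundaries(seg, rows, cols)
    truth_b = boundaries(truth, rows, cols)
    if not truth_b:
        return 1.0
    hit = sum(
        1 for (r, c) in truth_b
        if any(abs(r - rs) <= d and abs(c - cs) <= d for (rs, cs) in seg_b)
    )
    return hit / len(truth_b)
```

A segmentation identical to the ground truth scores 1.0; one that merges everything into a single region has no boundaries and scores 0.0.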

  3. Biased figure-ground assignment affects conscious object recognition in spatial neglect.

    Science.gov (United States)

    Eramudugolla, Ranmalee; Driver, Jon; Mattingley, Jason B

    2010-09-01

    Unilateral spatial neglect is a disorder of attention and spatial representation, in which early visual processes such as figure-ground segmentation have been assumed to be largely intact. There is evidence, however, that the spatial attention bias underlying neglect can bias the segmentation of a figural region from its background. Relatively few studies have explicitly examined the effect of spatial neglect on processing the figures that result from such scene segmentation. Here, we show that a neglect patient's bias in figure-ground segmentation directly influences his conscious recognition of these figures. By varying the relative salience of figural and background regions in static, two-dimensional displays, we show that competition between elements in such displays can modulate a neglect patient's ability to recognise parsed figures in a scene. The findings provide insight into the interaction between scene segmentation, explicit object recognition, and attention.

  4. Shape-specific perceptual learning in a figure-ground segregation task.

    Science.gov (United States)

    Yi, Do-Joon; Olson, Ingrid R; Chun, Marvin M

    2006-03-01

    What does perceptual experience contribute to figure-ground segregation? To study this question, we trained observers to search for symmetric dot patterns embedded in random dot backgrounds. Training improved shape segmentation, but learning did not completely transfer either to untrained locations or to untrained shapes. Such partial specificity persisted for a month after training. Interestingly, training on shapes in empty backgrounds did not help segmentation of the trained shapes in noisy backgrounds. Our results suggest that perceptual training increases the involvement of early sensory neurons in the segmentation of trained shapes, and that successful segmentation requires perceptual skills beyond shape recognition alone.

  5. FACTAR validation

    International Nuclear Information System (INIS)

    Middleton, P.B.; Wadsworth, S.L.; Rock, R.C.; Sills, H.E.; Langman, V.J.

    1995-01-01

    A detailed strategy to validate fuel channel thermal-mechanical behaviour codes for use in current power reactor safety analysis is presented. The strategy is derived from a validation process that has recently been adopted industry-wide. The discussion focuses on the validation plan for the code FACTAR, for application in assessing fuel channel integrity safety concerns during a large-break loss of coolant accident (LOCA). (author)

  6. Impact of consensus contours from multiple PET segmentation methods on the accuracy of functional volume delineation

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)

    2016-05-15

    The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview®). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology. It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine and
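Of the two consensus rules compared, the majority vote is the simpler: for binary masks it reduces to a per-voxel threshold on the vote count (STAPLE additionally weights each method by its estimated performance).

```python
def majority_vote(masks):
    """Consensus of several binary segmentation masks (flat 0/1 lists):
    a voxel is foreground when more than half of the methods mark it."""
    n = len(masks)
    return [1 if sum(vals) * 2 > n else 0 for vals in zip(*masks)]
```

With the three segmentation methods of the study, a voxel needs at least two votes to enter the consensus volume.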

  7. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.

    Science.gov (United States)

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-10-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our
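The MIL pooling at the heart of this framework scores a bag (video) by its best instance (segment), which also yields the weak localization. A minimal sketch of that pooling step, with instance scores assumed to come from some upstream classifier:

```python
def bag_score(instance_scores):
    """Multiple-instance rule: a bag (video) is as positive as its best
    instance (segment), i.e. the bag score is the max over instance scores."""
    return max(instance_scores)

def localize(instance_scores):
    """Weak localization: index of the highest-scoring segment,
    i.e. where in the video the (pain) event is most likely."""
    return max(range(len(instance_scores)), key=instance_scores.__getitem__)
```

This max-pooling view is what lets sequence-level labels supervise both detection and localization, as the abstract describes.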

  8. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    2015-01-01

    We present a method for supervised volumetric segmentation based on a dictionary of small cubes composed of pairs of intensity and label cubes. Intensity cubes are small image volumes where each voxel contains an image intensity. Label cubes are volumes with voxelwise probabilities for a given label. The segmentation process is done by matching a cube from the volume, of the same size as the dictionary intensity cubes, to the most similar intensity dictionary cube, and from the associated label cube we get voxel-wise label probabilities. Probabilities from overlapping cubes are averaged, and hereby we obtain a robust label probability encoding. The dictionary is computed from labeled volumetric image data based on weighted clustering. We experimentally demonstrate our method using two data sets from material science – a phantom data set of a solid oxide fuel cell simulation for detecting
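The matching step can be sketched as a nearest-neighbour lookup into the dictionary of paired cubes (cubes flattened to flat lists here for brevity):

```python
def nearest_label_probs(cube, dictionary):
    """Return the label-probability cube paired with the intensity cube
    closest (squared L2 distance) to the query cube.
    dictionary is a list of (intensity_cube, label_prob_cube) pairs,
    mirroring the paired cubes described in the abstract."""
    best = min(
        dictionary,
        key=lambda pair: sum((a - b) ** 2 for a, b in zip(cube, pair[0])),
    )
    return best[1]
```

In the full method, these per-cube probabilities from overlapping positions are then averaged voxel-wise to form the robust label encoding.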

  9. Compliance with Segment Disclosure Initiatives

    DEFF Research Database (Denmark)

    Arya, Anil; Frimor, Hans; Mittendorf, Brian

    2013-01-01

    Regulatory oversight of capital markets has intensified in recent years, with a particular emphasis on expanding financial transparency. A notable instance is efforts by the Financial Accounting Standards Board that push firms to identify and report performance of individual business units (segments). This paper seeks to address short-run and long-run consequences of stringent enforcement of and uniform compliance with these segment disclosure standards. To do so, we develop a parsimonious model wherein a regulatory agency promulgates disclosure standards and either permits voluntary … by increasing transparency and leveling the playing field. However, our analysis also demonstrates that in the long run, if firms are unable to use discretion in reporting to maintain their competitive edge, they may seek more destructive alternatives. Accounting for such concerns, in the long run, voluntary

  10. Segmental osteotomies of the maxilla.

    Science.gov (United States)

    Rosen, H M

    1989-10-01

    Multiple segment Le Fort I osteotomies provide the maxillofacial surgeon with the capabilities to treat complex dentofacial deformities existing in all three planes of space. Sagittal, vertical, and transverse maxillomandibular discrepancies as well as three-dimensional abnormalities within the maxillary arch can be corrected simultaneously. Accordingly, optimal aesthetic enhancement of the facial skeleton and a functional, healthy occlusion can be realized. What may be perceived as elaborate treatment plans are in reality conservative in terms of osseous stability and treatment time required. The close cooperation of an orthodontist well-versed in segmental orthodontics and orthognathic surgery is critical to the success of such surgery. With close attention to surgical detail, the complication rate inherent in such surgery can be minimized and the treatment goals achieved in a timely and predictable fashion.

  11. Individual Building Rooftop and Tree Crown Segmentation from High-Resolution Urban Aerial Optical Images

    Directory of Open Access Journals (Sweden)

    Jichao Jiao

    2016-01-01

    Full Text Available We segment buildings and trees from aerial photographs using superpixels, and we estimate tree parameters with a cost function proposed in this paper. A method based on image complexity is proposed to refine superpixel boundaries. To distinguish buildings from ground and trees from grass, salient feature vectors comprising colors, Features from Accelerated Segment Test (FAST) corners, and Gabor edges are extracted from the refined superpixels. These vectors are used to train a Naive Bayes classifier, which then labels each refined superpixel as object or nonobject. The properties of a tree, including its location and radius, are estimated by minimizing the cost function. Tree height is calculated from the shadow, using the sun angle and the time the image was taken. Our segmentation algorithm is compared with two other state-of-the-art segmentation algorithms, and the tree parameters obtained in this paper are compared to ground truth data. Experiments show that the proposed method segments trees and buildings appropriately, yielding higher precision and better recall rates, and that the tree parameters are in good agreement with the ground truth data.
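The shadow-based height estimate reduces to basic trigonometry: given the shadow length on the ground and the sun's elevation angle (derivable from capture time and location), height = shadow length × tan(elevation). A minimal sketch (the function name and values are illustrative, not from the paper):

```python
import math

def tree_height_from_shadow(shadow_length_m, sun_elevation_deg):
    """Estimate object height from its shadow length and the sun's
    elevation angle: height = shadow_length * tan(elevation)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# A 10 m shadow under a 45-degree sun implies a ~10 m tree.
print(round(tree_height_from_shadow(10.0, 45.0), 2))
```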

  12. Segmented fuel and moderator rod

    International Nuclear Information System (INIS)

    Doshi, P.K.

    1987-01-01

    This patent describes a continuous segmented fuel and moderator rod for use with a water cooled and moderated nuclear fuel assembly. The rod comprises: a lower fuel region containing a column of nuclear fuel; a moderator region, disposed axially above the fuel region. The moderator region has means for admitting and passing the water moderator therethrough for moderating an upper portion of the nuclear fuel assembly. The moderator region is separated from the fuel region by a water tight separator

  13. Segmentation of sows in farrowing pens

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Karstoft, Henrik; Pedersen, Lene Juul

    2014-01-01

    The correct segmentation of a foreground object in video recordings is an important task for many surveillance systems. The development of an effective and practical algorithm to segment sows in grayscale video recordings captured under commercial production conditions is described...
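As a rough illustration of foreground segmentation in grayscale video, thresholded background subtraction is the simplest baseline (this is a generic sketch, not the authors' algorithm; the threshold and toy frames are illustrative):

```python
import numpy as np

def segment_foreground(frame, background, threshold=25):
    """Label pixels whose grayscale intensity differs from a reference
    background image by more than `threshold` as foreground."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

bg = np.zeros((4, 4), dtype=np.uint8)
fr = bg.copy()
fr[1:3, 1:3] = 200          # a bright foreground region on a dark pen floor
mask = segment_foreground(fr, bg)
print(mask.sum())           # 4 foreground pixels
```

Commercial production conditions (changing light, occlusion) are precisely what make this naive baseline insufficient, motivating the more robust algorithm the paper describes.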

  14. H-Ransac a Hybrid Point Cloud Segmentation Combining 2d and 3d Data

    Science.gov (United States)

    Adam, A.; Chatzilari, E.; Nikolopoulos, S.; Kompatsiaris, I.

    2018-05-01

    In this paper, we present a novel 3D segmentation approach operating on point clouds generated from overlapping images. The aim of the proposed hybrid approach is to effectively segment co-planar objects by leveraging the structural information originating from the 3D point cloud and the visual information from the 2D images, without resorting to learning-based procedures. More specifically, the proposed hybrid approach, H-RANSAC, is an extension of the well-known RANSAC plane-fitting algorithm, incorporating an additional consistency criterion based on the results of 2D segmentation. Our expectation that the integration of 2D data into 3D segmentation will achieve more accurate results is validated experimentally in the domain of 3D city models. Results show that H-RANSAC can successfully delineate building components like main facades and windows, and provide more accurate segmentation results compared to the typical RANSAC plane-fitting algorithm.
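The underlying RANSAC plane fitting can be sketched as a minimal sample-and-score loop (this is the generic algorithm the paper extends, without the 2D consistency criterion; all parameters and data are illustrative):

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Fit a plane to a point cloud by repeatedly sampling three points
    and keeping the candidate plane with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        dist = np.abs((points - p0) @ n)  # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# 100 points on the z = 0 plane plus two outliers above it.
pts = np.column_stack([np.random.default_rng(1).random((100, 2)), np.zeros(100)])
pts = np.vstack([pts, [[0.5, 0.5, 2.0], [0.2, 0.8, 3.0]]])
mask = ransac_plane(pts)
print(mask.sum())  # 100 inliers on the dominant plane
```

H-RANSAC additionally rejects candidate planes whose inliers straddle different 2D segments, which is what lets it separate co-planar building components.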

  15. Fast and robust multi-atlas segmentation of brain magnetic resonance images

    DEFF Research Database (Denmark)

    Lötjönen, Jyrki Mp; Wolz, Robin; Koikkalainen, Juha R

    2010-01-01

    We introduce an optimised pipeline for multi-atlas brain MRI segmentation. Both accuracy and speed of segmentation are considered. We study different similarity measures used in non-rigid registration. We show that intensity differences for intensity normalised images can be used instead of stand......We introduce an optimised pipeline for multi-atlas brain MRI segmentation. Both accuracy and speed of segmentation are considered. We study different similarity measures used in non-rigid registration. We show that intensity differences for intensity normalised images can be used instead...... of standard normalised mutual information in registration without compromising the accuracy but leading to threefold decrease in the computation time. We study and validate also different methods for atlas selection. Finally, we propose two new approaches for combining multi-atlas segmentation and intensity...
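The cheaper similarity measure substituted for normalised mutual information can be sketched as a plain sum of squared intensity differences (a generic formulation, assuming the images are already intensity-normalised; the toy arrays are illustrative):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared intensity differences: a cheap registration
    similarity measure, valid once images are intensity-normalised."""
    return float(((a - b) ** 2).sum())

a = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.array([[0.0, 1.0], [0.5, 0.0]])
print(ssd(a, b))  # 0.25
```

Unlike mutual information, SSD needs no joint histogram, which is the source of the speedup the authors report.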

  16. Automatic segmentation of closed-contour features in ophthalmic images using graph theory and dynamic programming

    Science.gov (United States)

    Chiu, Stephanie J.; Toth, Cynthia A.; Bowes Rickman, Catherine; Izatt, Joseph A.; Farsiu, Sina

    2012-01-01

    This paper presents a generalized framework for segmenting closed-contour anatomical and pathological features using graph theory and dynamic programming (GTDP). More specifically, the GTDP method previously developed for quantifying retinal and corneal layer thicknesses is extended to segment objects such as cells and cysts. The presented technique relies on a transform that maps closed-contour features in the Cartesian domain into lines in the quasi-polar domain. The features of interest are then segmented as layers via GTDP. Application of this method to segment closed-contour features in several ophthalmic image types is shown. Quantitative validation experiments for retinal pigmented epithelium cell segmentation in confocal fluorescence microscopy images attest to the accuracy of the presented technique. PMID:22567602
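The core idea, unwrapping a closed contour into a roughly horizontal layer, can be sketched as a naive Cartesian-to-polar resampling (nearest-neighbour sampling with hypothetical parameters; the paper's actual quasi-polar transform may differ):

```python
import numpy as np

def polar_unwrap(image, center, n_angles=360, n_radii=100, max_r=None):
    """Resample an image around `center` (rows = radius, columns = angle)
    so a closed contour becomes a near-horizontal 'layer' that a
    layer-segmentation routine can trace left to right."""
    h, w = image.shape
    cy, cx = center
    max_r = max_r if max_r is not None else min(h, w) // 2
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(0, max_r, n_radii)
    rr = radii[:, None]
    ys = np.clip((cy + rr * np.sin(thetas)).astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(thetas)).astype(int), 0, w - 1)
    return image[ys, xs]

# A filled disc unwraps into a band whose upper edge is a straight line.
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2).astype(float)
polar = polar_unwrap(disc, (32, 32), max_r=31)
print(polar.shape)  # (100, 360)
```

After the unwrap, the closed boundary is a left-to-right path, so the existing GTDP layer-segmentation machinery applies unchanged.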

  17. Segmentation in local hospital markets.

    Science.gov (United States)

    Dranove, D; White, W D; Wu, L

    1993-01-01

    This study examines evidence of market segmentation on the basis of patients' insurance status, demographic characteristics, and medical condition in selected local markets in California in the years 1983 and 1989. Substantial differences exist in the probability that patients are admitted to particular hospitals based on insurance coverage, particularly Medicaid, and race. Segmentation based on insurance and race is related to hospital characteristics, but not the characteristics of the hospital's community. Medicaid patients are more likely to go to hospitals with lower costs and fewer service offerings. Privately insured patients go to hospitals offering more services, although cost concerns are increasing. Hispanic patients also go to low-cost hospitals, ceteris paribus. Results indicate little evidence of segmentation based on medical condition in either 1983 or 1989, suggesting that "centers of excellence" have yet to play an important role in patient choice of hospital. The authors found that distance matters, and that patients prefer nearby hospitals, more so for some medical conditions than others, in ways consistent with economic theories of consumer choice.

  18. The Effectiveness of the Common Grounds Instagram Account

    OpenAIRE

    Wifalin, Michelle

    2016-01-01

    The effectiveness of the Common Grounds Instagram account is the research problem addressed in this study. Instagram effectiveness is measured using the Customer Response Index (CRI), in which respondents are assessed at successive levels of response: awareness, comprehension, interest, intention, and action. These response levels are used to measure the effectiveness of the Common Grounds Instagram account. The theories used to support this research are marketing Public Relations theory, advertising theory, effecti...
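The CRI is conventionally computed as the product of the response rates at each stage of the hierarchy. A tiny sketch with hypothetical rates (not figures from the study):

```python
def cri(awareness, comprehend, interest, intentions, action):
    """Customer Response Index: the product of the response rates at
    each stage of the response hierarchy (all rates in [0, 1])."""
    return awareness * comprehend * interest * intentions * action

# Illustrative rates only: 90% aware, 80% of those comprehend, etc.
print(round(cri(0.9, 0.8, 0.7, 0.6, 0.5), 3))  # 0.151
```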

  19. Pesticides in Ground Water

    DEFF Research Database (Denmark)

    Bjerg, Poul Løgstrup

    1996-01-01

    Review of: Jack E. Barbash & Elizabeth A. Resek (1996). Pesticides in Ground Water. Distribution trends and governing factors. Ann Arbor Press, Inc. Chelsea, Michigan. pp 588.......Review of: Jack E. Barbash & Elizabeth A. Resek (1996). Pesticides in Ground Water. Distribution trends and governing factors. Ann Arbor Press, Inc. Chelsea, Michigan. pp 588....

  20. Roentgenological diagnosis of central segmental lung cancer

    International Nuclear Information System (INIS)

    Gurevich, L.A.; Fedchenko, G.G.

    1984-01-01

    Based on an analysis of the results of clinical and roentgenological examination of 268 patients, the roentgenological semiotics of segmental lung cancer is presented. Peculiarities of the X-ray picture of cancer in different segments of the lungs were revealed, depending on tumor site and growth type. For the syndrome of segmental darkening, comprehensive X-ray methods are proposed, chief among them tomography of the segmental bronchi.