WorldWideScience

Sample records for ground validation segment

  1. An individual and dynamic Body Segment Inertial Parameter validation method using ground reaction forces.

    Science.gov (United States)

    Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice

    2014-05-01

    Over the last decades, a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimates, but to our knowledge a real validation has never been completely successful, because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) and to confirm the results by comparing contact forces recalculated through inverse dynamics with those obtained by a force plate. Furthermore, the results are compared with the estimation method recently proposed by Dumas et al. (2007). Additionally, the results are cross-validated with a high-velocity overarm throwing movement. Across all conditions, higher correlations, smaller errors and smaller RMSE are found for the proposed BSIP estimation (IM), which shows its advantage over recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that it can offer a significant advantage over conventional methods.
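
    As an illustration of the comparison step described above, here is a minimal sketch (not the authors' code; the toy force traces and function names are assumptions) that scores contact forces recomputed through inverse dynamics against force-plate measurements using RMSE and Pearson correlation:

```python
import numpy as np

def validation_scores(grf_inverse_dynamics, grf_force_plate):
    """Return (RMSE, Pearson r) between recomputed and measured forces."""
    residual = grf_inverse_dynamics - grf_force_plate
    rmse = float(np.sqrt(np.mean(residual ** 2)))
    r = float(np.corrcoef(grf_inverse_dynamics, grf_force_plate)[0, 1])
    return rmse, r

# Toy data: a noisy vertical ground reaction force trace (N).
t = np.linspace(0.0, 1.0, 500)
measured = 800.0 * np.sin(np.pi * t) ** 2                    # force plate
recomputed = measured + np.random.normal(0.0, 15.0, t.size)  # inverse dynamics
print(validation_scores(recomputed, measured))
```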

  2. The LOFT Ground Segment

    DEFF Research Database (Denmark)

    Bozzo, E.; Antonelli, A.; Argan, A.;

    2014-01-01

    LOFT, the Large Observatory For X-ray Timing, is equipped with two instruments, the Large Area Detector (LAD) and the Wide Field Monitor (WFM). The LAD performs pointed observations of several targets per orbit (~90 minutes), providing roughly ~80 GB of proprietary data per day (the proprietary period will be 12 months). The WFM continuously monitors about 1/3 of the sky at a time and provides data for about ~100 sources a day, resulting in a total of ~20 GB of additional telemetry. In this contribution we summarize the planned organization of the LOFT ground segment (GS), as established in the mission Yellow Book. We describe the expected GS contributions from ESA and the LOFT consortium, and provide a review of the planned LOFT data products and the details of the data flow and archiving...

  3. The LOFT Ground Segment

    CERN Document Server

    Bozzo, E; Argan, A; Barret, D; Binko, P; Brandt, S; Cavazzuti, E; Courvoisier, T; Herder, J W den; Feroci, M; Ferrigno, C; Giommi, P; Götz, D; Guy, L; Hernanz, M; Zand, J J M in't; Klochkov, D; Kuulkers, E; Motch, C; Lumb, D; Papitto, A; Pittori, C; Rohlfs, R; Santangelo, A; Schmid, C; Schwope, A D; Smith, P J; Webb, N A; Wilms, J; Zane, S

    2014-01-01

    LOFT, the Large Observatory For X-ray Timing, was one of the ESA M3 mission candidates that completed their assessment phase at the end of 2013. LOFT is equipped with two instruments, the Large Area Detector (LAD) and the Wide Field Monitor (WFM). The LAD performs pointed observations of several targets per orbit (~90 minutes), providing roughly ~80 GB of proprietary data per day (the proprietary period will be 12 months). The WFM continuously monitors about 1/3 of the sky at a time and provides data for about ~100 sources a day, resulting in a total of ~20 GB of additional telemetry. The LOFT Burst alert System additionally identifies on-board bright impulsive events (e.g., Gamma-ray Bursts, GRBs) and broadcasts the corresponding position and trigger time to the ground using a dedicated system of ~15 VHF receivers. All WFM data are planned to be made public immediately. In this contribution we summarize the planned organization of the LOFT ground segment (GS), as established in the mission Yellow Book. We...

  4. The Envisat-1 ground segment

    Science.gov (United States)

    Harris, Ray; Ashton, Martin

    1995-03-01

    The European Space Agency (ESA) Earth Remote Sensing Satellite (ERS-1 and ERS-2) missions will be followed by the Polar Orbit Earth Mission (POEM) program. The first of the POEM missions will be Envisat-1. ESA has completed the design phase of the ground segment. This paper presents the main elements of that design. The main part of this paper is an overview of the Payload Data Segment (PDS), which is the core of the Envisat-1 ground segment, followed by two further sections which describe in more detail the facilities to be offered by the PDS for archiving and for user services. A further section describes some future issues for ground segment development. Logica was the prime contractor of a team of 18 companies which undertook the ESA-financed architectural design study of the Envisat-1 ground segment. The outputs of the study included detailed specifications of the components that will acquire, process, archive and disseminate the payload data, together with the functional designs of the flight operations and user data segments.

  5. Validation of the Carotid Intima-Media Thickness Variability: Can Manual Segmentations Be Trusted as Ground Truth?

    Science.gov (United States)

    Meiburger, Kristen M; Molinari, Filippo; Wong, Justin; Aguilar, Luis; Gallo, Diego; Steinman, David A; Morbiducci, Umberto

    2016-07-01

    The common carotid artery intima-media thickness (IMT) is widely accepted and used as an indicator of atherosclerosis. Recent studies, however, have found that the irregularity of the IMT along the carotid artery wall has a stronger correlation with atherosclerosis than the IMT itself. We set out to validate IMT variability (IMTV), a parameter defined to assess IMT irregularities along the wall. In particular, we analyzed whether or not manual segmentations of the lumen-intima and media-adventitia interfaces can be considered reliable in calculation of the IMTV parameter. To do this, we used a total of 60 simulated ultrasound images with a priori IMT and IMTV values. The images, simulated using the Fast And Mechanistic Ultrasound Simulation software, presented five different morphologies, four nominal IMT values and three different levels of variability along the carotid artery wall (no variability, small variability and large variability). Three experts traced the lumen-intima (LI) and media-adventitia (MA) profiles, and two automated algorithms were employed to obtain the LI and MA profiles. One expert also re-traced the LI and MA profiles to test intra-reader variability. The average IMTV measurements of the profiles used to simulate the longitudinal B-mode images were 0.002 ± 0.002, 0.149 ± 0.035 and 0.286 ± 0.068 mm for the cases of no variability, small variability and large variability, respectively. The IMTV measurements of one of the automated algorithms were statistically similar (p > 0.05, Wilcoxon signed rank) to these values when considering small and large variability, but not when considering no variability (p < 0.05). The manual measurements, by contrast, differed significantly from the a priori values, suggesting that manual segmentations cannot always be trusted as ground truth. On the other hand, our automated algorithm was found to be more reliable, indicating how automated techniques could therefore foster analysis of the carotid artery intima-media thickness irregularity.
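
    To make the IMTV parameter concrete, the sketch below treats IMT as the pointwise distance between the LI and MA contours and summarizes its variability along the wall; using the sample standard deviation as the summary statistic is a simplifying assumption, not necessarily the paper's exact definition:

```python
import numpy as np

def imtv(li_profile, ma_profile):
    """IMT variability (mm) from LI and MA contours sampled at the same
    longitudinal positions along the carotid wall."""
    imt = ma_profile - li_profile        # per-position intima-media thickness
    return float(np.std(imt, ddof=1))    # variability of IMT along the wall

# Toy wall: 0.7 mm nominal IMT with a small sinusoidal irregularity.
x = np.linspace(0.0, 30.0, 300)          # longitudinal position (mm)
li = np.zeros_like(x)
ma = 0.7 + 0.05 * np.sin(2 * np.pi * x / 10.0)
print(f"IMTV = {imtv(li, ma):.3f} mm")
```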

  6. Design of ground segments for small satellites

    Science.gov (United States)

    Mace, Guy

    1994-01-01

    New concepts must be implemented when designing a Ground Segment (GS) for small satellites to conform to their specific mission characteristics: low cost, one main instrument, spacecraft autonomy, optimized mission return, etc. This paper presents the key cost drivers of such ground segments, the main design features, and the comparison of various design options that can meet the user requirements.

  7. Figure-Ground Segmentation Using Factor Graphs.

    Science.gov (United States)

    Shen, Huiying; Coughlan, James; Ivanchenko, Volodymyr

    2009-06-04

    Foreground-background segmentation has recently been applied [26,12] to the detection and segmentation of specific objects or structures of interest from the background as an efficient alternative to techniques such as deformable templates [27]. We introduce a graphical model (i.e. Markov random field)-based formulation of structure-specific figure-ground segmentation based on simple geometric features extracted from an image, such as local configurations of linear features, that are characteristic of the desired figure structure. Our formulation is novel in that it is based on factor graphs, which are graphical models that encode interactions among arbitrary numbers of random variables. The ability of factor graphs to express interactions higher than pairwise order (the highest order encountered in most graphical models used in computer vision) is useful for modeling a variety of pattern recognition problems. In particular, we show how this property makes factor graphs a natural framework for performing grouping and segmentation, and demonstrate that the factor graph framework emerges naturally from a simple maximum entropy model of figure-ground segmentation. We cast our approach in a learning framework, in which the contributions of multiple grouping cues are learned from training data, and apply our framework to the problem of finding printed text in natural scenes. Experimental results are described, including a performance analysis that demonstrates the feasibility of the approach.

  8. SILEX ground segment control facilities and flight operations

    Science.gov (United States)

    Demelenne, Benoit; Tolker-Nielsen, Toni; Guillen, Jean-Claude

    1999-04-01

    The European Space Agency is going to conduct an inter-orbit link experiment which will connect a low Earth orbiting satellite and a geostationary satellite via optical terminals. This experiment has been called SILEX (Semiconductor Intersatellite Link Experiment). Two payloads have been built. One, called PASTEL (PASsager de TELecommunication), has been embarked on the French Earth observation satellite SPOT4, which was launched successfully in March 1998. The future European experimental data relay satellite ARTEMIS (Advanced Relay and TEchnology MISsion), which will route the data to ground, will carry the OPALE terminal (Optical Payload Experiment). The European Space Agency is responsible for the operation of both terminals. Due to the complexity and experimental character of this new optical technology, the development, preparation and validation of the ground segment control facilities required a long series of technical and operational qualification tests. This paper presents the operations concept and the early results of the PASTEL in-orbit operations.

  9. WSO-UV ground segment for observation optimisation

    Science.gov (United States)

    Basargina, O.; Sachkov, M.; Kazakevich, Y.; Kanev, E.; Sichevskij, S.

    2016-07-01

    The World Space Observatory-Ultraviolet (WSO-UV) is a Russian-Spanish space mission born as a response to the growing demand for UV facilities by the astronomical community. The main components of the WSO-UV Ground Segment, the Mission Control Centre and the Science Operation Centre, are being developed through international cooperation. In this paper the fundamental components of the WSO-UV ground segment are described, and approaches to optimizing the observatory scheduling problem are discussed.

  10. Foveated Figure-Ground Segmentation and Its Role in Recognition

    OpenAIRE

    Björkman, Mårten; Eklundh, Jan-Olof

    2005-01-01

    Figure-ground segmentation and recognition are two interrelated processes. In this paper we present a method for foveated segmentation and evaluate it in the context of a binocular real-time recognition system. Segmentation is solved as a binary labeling problem using priors derived from the results of a simplistic disparity method. Doing so we are able to cope with situations when the disparity range is very wide, situations that have rarely been considered, but appear frequently for narrow-fi...

  11. Reverse Classification Accuracy: Predicting Segmentation Performance in the Absence of Ground Truth.

    Science.gov (United States)

    Valindria, Vanya V; Lavdas, Ioannis; Bai, Wenjia; Kamnitsas, Konstantinos; Aboagye, Eric O; Rockall, Andrea G; Rueckert, Daniel; Glocker, Ben

    2017-08-01

    When integrating computational tools, such as automatic segmentation, into clinical practice, it is of utmost importance to be able to assess the level of accuracy on new data and, in particular, to detect when an automatic method fails. However, this is difficult to achieve due to the absence of ground truth. Segmentation accuracy on clinical data might be different from what is found through cross validation, because validation data are often used during incremental method development, which can lead to overfitting and unrealistic performance expectations. Before deployment, performance is quantified using different metrics, for which the predicted segmentation is compared with a reference segmentation, often obtained manually by an expert. But little is known about the real performance after deployment when a reference is unavailable. In this paper, we introduce the concept of reverse classification accuracy (RCA) as a framework for predicting the performance of a segmentation method on new data. In RCA, we take the predicted segmentation from a new image to train a reverse classifier, which is evaluated on a set of reference images with available ground truth. The hypothesis is that if the predicted segmentation is of good quality, then the reverse classifier will perform well on at least some of the reference images. We validate our approach on multi-organ segmentation with different classifiers and segmentation methods. Our results indicate that it is indeed possible to predict the quality of individual segmentations, in the absence of ground truth. Thus, RCA is ideal for integration into automatic processing pipelines in clinical routine and as a part of large-scale image analysis studies.
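
    The RCA procedure lends itself to a compact sketch. In the version below, a toy nearest-class-mean intensity model stands in for the reverse classifier (the paper evaluates several classifier choices); the function names and the binary single-organ setting are illustrative assumptions:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / max(int(a.sum() + b.sum()), 1)

def rca_quality(new_img, new_pred, references):
    """Predict the quality of new_pred (binary mask for new_img) without
    ground truth. references: list of (image, ground_truth_mask) pairs."""
    # "Train" the reverse classifier on the predicted segmentation:
    mu_fg = new_img[new_pred].mean()
    mu_bg = new_img[~new_pred].mean()
    scores = []
    for ref_img, ref_gt in references:
        # Apply it to each reference image: nearest class mean per pixel.
        seg = np.abs(ref_img - mu_fg) < np.abs(ref_img - mu_bg)
        scores.append(dice(seg, ref_gt))
    # Hypothesis: a good prediction yields a high best-case Dice score.
    return max(scores)
```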

  12. Comparison of algorithms for ultrasound image segmentation without ground truth

    Science.gov (United States)

    Sikka, Karan; Deserno, Thomas M.

    2010-02-01

    Image segmentation is a pre-requisite to medical image analysis. A variety of segmentation algorithms have been proposed, and most are evaluated on a small dataset or based on classification of a single feature. The lack of a gold standard (ground truth) further adds to the discrepancy in these comparisons. This work proposes a new methodology for comparing image segmentation algorithms without ground truth by building a matrix called region-correlation matrix. Subsequently, suitable distance measures are proposed for quantitative assessment of similarity. The first measure takes into account the degree of region overlap or identical match. The second considers the degree of splitting or misclassification by using an appropriate penalty term. These measures are shown to satisfy the axioms of a quasi-metric. They are applied for a comparative analysis of synthetic segmentation maps to show their direct correlation with human intuition of similar segmentation. Since ultrasound images are difficult to segment and usually lack a ground truth, the measures are further used to compare the recently proposed spectral clustering algorithm (encoding spatial and edge information) with standard k-means over abdominal ultrasound images. Improving the parameterization and enlarging the feature space for k-means steadily increased segmentation quality to that of spectral clustering.
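
    The region-correlation matrix itself is straightforward to construct; the sketch below builds it for two integer label maps, with a simple best-overlap score standing in for the paper's two quasi-metric distance measures:

```python
import numpy as np

def region_correlation_matrix(seg_a, seg_b):
    """Entry (i, j) counts pixels shared by region i of seg_a and
    region j of seg_b (integer label maps of equal shape)."""
    m = np.zeros((seg_a.max() + 1, seg_b.max() + 1), dtype=np.int64)
    np.add.at(m, (seg_a.ravel(), seg_b.ravel()), 1)
    return m

def overlap_score(m):
    """Fraction of pixels falling in the best-matching region pairs."""
    return m.max(axis=1).sum() / m.sum()

a = np.array([[0, 0, 1], [0, 1, 1], [2, 2, 2]])
b = np.array([[0, 0, 1], [0, 1, 1], [1, 2, 2]])
m = region_correlation_matrix(a, b)
print(m, overlap_score(m))
```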

  13. The Validity of Divergent Grounded Theory Method

    Directory of Open Access Journals (Sweden)

    Martin Nils Amsteus PhD

    2014-02-01

    The purpose of this article is to assess whether divergence of grounded theory method may be considered valid. A review of literature provides a basis for understanding and evaluating grounded theory. The principles and nature of grounded theory are synthesized along with theoretical and practical implications. It is deduced that for a theory to be truly grounded in empirical data, the method resulting in the theory should be the equivalent of pure induction. Therefore, detailed, specified, stepwise a priori procedures may be seen as unbidden or arbitrary. It is concluded that divergent grounded theory can be considered valid. The author argues that securing methodological transparency through the description of the actual principles and procedures employed, as well as tailoring them to the particular circumstances, is more important than adhering to predetermined stepwise procedures. A theoretical foundation is provided from which diverse theoretical developments and methodological procedures may be developed, judged, and refined based on their own merits.

  14. The IXV Ground Segment design, implementation and operations

    Science.gov (United States)

    Martucci di Scarfizzi, Giovanni; Bellomo, Alessandro; Musso, Ivano; Bussi, Diego; Rabaioli, Massimo; Santoro, Gianfranco; Billig, Gerhard; Gallego Sanz, José María

    2016-07-01

    The Intermediate eXperimental Vehicle (IXV) is an ESA re-entry demonstrator that performed, on 11 February 2015, a successful re-entry demonstration mission. The project objectives were the design, development, manufacturing and on-ground and in-flight verification of an autonomous European lifting and aerodynamically controlled re-entry system. For the IXV mission a dedicated Ground Segment was provided. The main subsystems of the IXV Ground Segment were: the IXV Mission Control Center (MCC), from where monitoring of the vehicle was performed, as well as support during pre-launch and recovery phases; the IXV Ground Stations, used to cover the IXV mission by receiving spacecraft telemetry and forwarding it toward the MCC; and the IXV Communication Network, deployed to support the operations of the IXV mission by interconnecting all remote sites with the MCC, supporting data, voice and video exchange. This paper describes the concept, architecture, development, implementation and operations of the ESA Intermediate eXperimental Vehicle (IXV) Ground Segment and outlines the main operations and lessons learned during the preparation and successful execution of the IXV mission.

  15. Superpixel Cut for Figure-Ground Image Segmentation

    Science.gov (United States)

    Yang, Michael Ying; Rosenhahn, Bodo

    2016-06-01

    Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as Min-Cut. Therefore, we propose a novel cost function that simultaneously minimizes the inter-class similarity while maximizing the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, which requires no high-level knowledge such as shape priors and scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.

  16. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, L.F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  17. NASA's mobile satellite communications program; ground and space segment technologies

    Science.gov (United States)

    Naderi, F.; Weber, W. J.; Knouse, G. H.

    1984-10-01

    This paper describes the Mobile Satellite Communications Program of the United States National Aeronautics and Space Administration (NASA). The program's objectives are to facilitate the deployment of the first generation commercial mobile satellite by the private sector, and to technologically enable future generations by developing advanced and high risk ground and space segment technologies. These technologies are aimed at mitigating severe shortages of spectrum, orbital slot, and spacecraft EIRP which are expected to plague the high capacity mobile satellite systems of the future. After a brief introduction of the concept of mobile satellite systems and their expected evolution, this paper outlines the critical ground and space segment technologies. Next, the Mobile Satellite Experiment (MSAT-X) is described. MSAT-X is the framework through which NASA will develop advanced ground segment technologies. An approach is outlined for the development of conformal vehicle antennas, spectrum and power-efficient speech codecs, and modulation techniques for use in the non-linear faded channels and efficient multiple access schemes. Finally, the paper concludes with a description of the current and planned NASA activities aimed at developing complex large multibeam spacecraft antennas needed for future generation mobile satellite systems.

  18. GOES-R Ground Segment Technical Reference Model

    Science.gov (United States)

    Krause, R. G.; Burnett, M.; Khanna, R.

    2012-12-01

    The NOAA Geostationary Operational Environmental Satellite-R Series (GOES-R) Ground Segment Project (GSP) has developed a Technical Reference Model (TRM) to support the documentation of technologies that could form the basis for a set of requirements supporting the evolution towards a NESDIS enterprise ground system. The architecture and technologies in this TRM can be applied or extended to other ground systems for planning and development. The TRM maps GOES-R technologies to the Office of Management and Budget's (OMB) Federal Enterprise Architecture (FEA) Consolidated Reference Model (CRM) V 2.3 Technical Services Standard (TSS). The FEA TRM categories are the framework for the GOES-R TRM. This poster will present the GOES-R TRM.

  19. Ground truth delineation for medical image segmentation based on Local Consistency and Distribution Map analysis.

    Science.gov (United States)

    Cheng, Irene; Sun, Xinyao; Alsufyani, Noura; Xiong, Zhihui; Major, Paul; Basu, Anup

    2015-01-01

    Computer-aided detection (CAD) systems have been increasingly deployed for medical applications in recent years with the goal of speeding up tedious tasks and improving precision. Among others, segmentation is an important component in CAD systems as a preprocessing step to help recognize patterns in medical images. In order to assess the accuracy of a CAD segmentation algorithm, comparison with ground truth data is necessary. To date, ground truth delineation relies mainly on contours that are either manually defined by clinical experts or automatically generated by software. In this paper, we propose a systematic ground truth delineation method based on a Local Consistency Set Analysis approach, which can be used to establish an accurate ground truth representation or, if ground truth is available, to assess the accuracy of a CAD-generated segmentation algorithm. We validate our computational model using medical data. Experimental results demonstrate the robustness of our approach. In contrast to current methods, our model also provides consistency information at the distributed boundary pixel level, and thus is invariant to global compensation error.

  1. Precipitation Ground Validation over the Oceans

    Science.gov (United States)

    Klepp, C.; Bakan, S.

    2012-04-01

    State-of-the-art satellite-derived and reanalysis-based precipitation climatologies show remarkably large differences in detection, amount, variability and temporal behavior of precipitation over the oceans. The uncertainties are largest for light precipitation within the ITCZ and for cold-season high-latitude precipitation including snowfall. Our HOAPS (Hamburg Ocean Atmosphere Parameters and Fluxes from Satellite data, www.hoaps.org) precipitation retrieval exhibits fairly high accuracy in such regions compared to our ground validation data. However, the statistical basis for a conclusive validation has to be significantly improved with comprehensive ground validation efforts, and existing in-situ instruments are not designed for precipitation measurements under high wind speeds on moving ships. To largely improve the ground validation data basis of precipitation over the oceans, especially for snow, the systematic data collection effort of the Initiative Pro Klima-funded project at the KlimaCampus Hamburg uses automated shipboard optical disdrometers, called ODM470, that are capable of measuring liquid and solid precipitation on moving ships with high accuracy. The main goal of this project is to constrain the precipitation retrievals for HOAPS and the new Global Precipitation Measurement (GPM) satellite constellation. Currently, three instruments are long-term mounted: on the German research icebreaker R/V Polarstern (Alfred Wegener Institut) since June 2010, on R/V Akademik Ioffe (P.P. Shirshov Institute of Oceanology, RAS, Moscow, Russia) since September 2010, and on R/V Maria S. Merian (Brise Research, University of Hamburg) since December 2011. Three more instruments will follow shortly on further ships. The core regions for these long-term precipitation measurements comprise the Arctic Ocean, the Nordic Seas, the Labrador Sea, the subtropical Atlantic trade wind regions, the Caribbean, the ITCZ, and the Southern Oceans as far south as Antarctica. This...

  2. Microstrip Resonator for High Field MRI with Capacitor-Segmented Strip and Ground Plane

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy; Boer, Vincent; Petersen, Esben Thade

    2017-01-01

    A microstrip resonator for high-field MRI is presented, based on segmenting the strip and ground plane of the resonator with series capacitors. The design equations for capacitors providing a symmetric current distribution are derived. The performance of two types of segmented resonators is investigated experimentally. To the authors' knowledge, a microstrip resonator where both strip and ground plane are capacitor-segmented is shown here for the first time.

  3. The Galileo Ground Segment Integrity Algorithms: Design and Performance

    Directory of Open Access Journals (Sweden)

    Carlos Hernández Medel

    2008-01-01

    Galileo, the European Global Navigation Satellite System, will provide its users with highly accurate global positioning services and the associated integrity information. The element in charge of the computation of integrity messages within the Galileo Ground Mission Segment is the integrity processing facility (IPF), which is developed by GMV Aerospace and Defence. The main objective of this paper is twofold: to present the integrity algorithms implemented in the IPF and to show the performance achieved with the IPF software prototype, including aspects such as the implementation of the Galileo overbounding concept, the impact of safety requirements on the algorithm design, including the threat models for the so-called feared events, and finally the performance achieved with real GPS and simulated Galileo scenarios.

  4. Simulation of MR angiography imaging for validation of cerebral arteries segmentation algorithms.

    Science.gov (United States)

    Klepaczko, Artur; Szczypiński, Piotr; Deistung, Andreas; Reichenbach, Jürgen R; Materka, Andrzej

    2016-12-01

    Accurate vessel segmentation of magnetic resonance angiography (MRA) images is essential for computer-aided diagnosis of cerebrovascular diseases such as stenosis or aneurysm. The ability of a segmentation algorithm to correctly reproduce the geometry of the arterial system should be expressed quantitatively and observer-independently to ensure objectivity of the evaluation. This paper introduces a methodology for validating vessel segmentation algorithms using a custom-designed MRA simulation framework. For this purpose, a realistic reference model of an intracranial arterial tree was developed based on a real Time-of-Flight (TOF) MRA data set. With this specific geometry blood flow was simulated and a series of TOF images was synthesized using various acquisition protocol parameters and signal-to-noise ratios. The synthesized arterial tree was then reconstructed using a level-set segmentation algorithm available in the Vascular Modeling Toolkit (VMTK). Moreover, to demonstrate the versatile application of the proposed methodology, validation was also performed for two alternative techniques: a multi-scale vessel enhancement filter and the Chan-Vese variant of the level-set-based approach, as implemented in the Insight Segmentation and Registration Toolkit (ITK). The segmentation results were compared against the reference model. The accuracy in determining the vessels' centerline courses was very high for each tested segmentation algorithm (mean error rate = 5.6% when using VMTK). However, the estimated radii exhibited deviations from ground truth values with mean error rates ranging from 7% up to 79%, depending on the vessel size, image acquisition and segmentation method. We demonstrated the practical application of the designed MRA simulator as a reliable tool for quantitative validation of MRA image processing algorithms that provides objective, reproducible results and is observer independent.

  5. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    Directory of Open Access Journals (Sweden)

    Hans Supèr

    Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections that connect surrounding neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. By contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.

  6. Data processing and visualisation in the Rosetta Science Ground Segment

    Science.gov (United States)

    Geiger, Bernhard

    2016-09-01

    Rosetta is the first space mission to rendezvous with a comet. The spacecraft encountered its target 67P/Churyumov-Gerasimenko in 2014 and currently escorts the comet through a complete activity cycle during perihelion passage. The Rosetta Science Ground Segment (RSGS) is in charge of planning and coordinating the observations carried out by the scientific instruments on board the Rosetta spacecraft. We describe the data processing system implemented at the RSGS in order to support data analysis and science operations planning. The system automatically retrieves and processes telemetry data in near real-time. The generated products include spacecraft and instrument housekeeping parameters, scientific data for some instruments, and derived quantities. Based on spacecraft and comet trajectory information a series of geometric variables are calculated in order to assess the conditions for scheduling the observations of the scientific instruments and analyse the respective measurements obtained. Images acquired by the Rosetta Navigation Camera are processed and distributed in near real-time to the instrument team community. A quicklook web-page displaying the images allows the RSGS team to monitor the state of the comet and the correct acquisition and downlink of the images. Consolidated datasets are later delivered to the long-term archive.

  7. Is age still valid for segmenting e-shoppers?

    OpenAIRE

    Agudo Peregrina, Ángel; Hernández García, Ángel; Acquila Natale, Emiliano

    2015-01-01

    This study examines the differences in the acceptance and use of electronic commerce by end consumers, segmented into three groups according to their age. The UTAUT2 provides the theoretical framework, with the addition of three constructs from the e-commerce literature: perceived risk, product risk, and perceived trust. Responses to an online survey by 817 Spanish Internet shoppers validate the research model. An omnibus test of group differences precedes the assessment of four multigroup analy...

  8. Retina Lesion and Microaneurysm Segmentation using Morphological Reconstruction Methods with Ground-Truth Data

    Energy Technology Data Exchange (ETDEWEB)

    Karnowski, Thomas Paul [ORNL; Govindaswamy, Priya [Oak Ridge National Laboratory (ORNL); Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK); Abramoff, M.D. [University of Iowa

    2008-01-01

    In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data was used to create post-processing filters to separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions was achieved on two data sets of 86 and 1296 images.

  9. Retina Lesion and Microaneurysm Segmentation using Morphological Reconstruction Methods with Ground-Truth Data

    Energy Technology Data Exchange (ETDEWEB)

    Karnowski, Thomas Paul [ORNL; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [ORNL; Muthusamy Govindasamy, Vijaya Priya [ORNL

    2009-09-01

    In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data was used to create post-processing filters to separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions was achieved on two data sets of 86 and 1296 images.

  10. Nordic Seas Precipitation Ground Validation Project

    Science.gov (United States)

    Klepp, Christian; Bumke, Karl; Bakan, Stephan; Andersson, Axel

    2010-05-01

    A thorough knowledge of global ocean precipitation is an indispensable prerequisite for understanding the water cycle in the global climate system. However, reliable detection of precipitation over the global oceans, especially of solid precipitation, remains a challenging task. This is true for both passive microwave remote sensing and reanalysis-based model estimates. The satellite-based HOAPS (Hamburg Ocean Atmosphere Parameters and Fluxes from Satellite Data) climatology contains fields of precipitation, evaporation and the resulting freshwater flux along with 12 additional atmospheric parameters over the global ice-free ocean between 1987 and 2005. Except for the NOAA Pathfinder SST, all basic state variables are calculated from SSM/I passive microwave radiometer measurements. HOAPS contains three main data subsets that originate from one common pixel-level data source. Gridded 0.5 degree monthly, pentad and twice-daily data products are freely available from www.hoaps.org. The optical disdrometer ODM 470 is a ground validation instrument capable of measuring rain and snowfall on ships even under high wind speeds. It was used for the first time over the Nordic Seas during the LOFZY 2005 campaign. A dichotomous verification for these snowfall events resulted in a perfect score between the disdrometer, a precipitation detector and a shipboard observer's log. The disdrometer data is further point-to-area collocated against precipitation from three satellite-derived climatologies: HOAPS-3, the Global Precipitation Climatology Project (GPCP) one degree daily (1DD) data set, and the Goddard Profiling algorithm, version 2004 (GPROF 2004). Only the HOAPS precipitation turns out to be overall consistent with the disdrometer data, resulting in an accuracy of 0.96. The collocated data comprise light precipitation events below 1 mm/h; therefore two LOFZY case studies with high precipitation rates are presented that still indicate plausible results. Overall, this...
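
    For reference, the dichotomous (yes/no) verification mentioned above reduces to a 2x2 contingency table; the sketch below (variable names and the toy data are illustrative) computes the accuracy score alongside two other standard categorical statistics:

```python
import numpy as np

def dichotomous_scores(obs, det):
    """obs, det: boolean arrays (reference says precipitation / detector
    says precipitation). Returns standard categorical verification scores."""
    hits = np.sum(obs & det)
    misses = np.sum(obs & ~det)
    false_alarms = np.sum(~obs & det)
    correct_negatives = np.sum(~obs & ~det)
    return {
        "accuracy": (hits + correct_negatives) / obs.size,
        "POD": hits / max(hits + misses, 1),                # prob. of detection
        "FAR": false_alarms / max(hits + false_alarms, 1),  # false alarm ratio
    }

# Toy example: 20 paired yes/no observations.
obs = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0], bool)
det = np.array([1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0], bool)
print(dichotomous_scores(obs, det))
```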

  11. Management of the science ground segment for the Euclid mission

    Science.gov (United States)

    Zacchei, Andrea; Hoar, John; Pasian, Fabio; Buenadicha, Guillermo; Dabin, Christophe; Gregorio, Anna; Mansutti, Oriana; Sauvage, Marc; Vuerli, Claudio

    2016-07-01

    Euclid is an ESA mission aimed at understanding the nature of dark energy and dark matter by simultaneously using two probes (weak lensing and baryon acoustic oscillations). The mission will observe galaxies and clusters of galaxies out to z ~ 2, in a wide extra-galactic survey covering 15,000 deg², plus a deep survey covering an area of 40 deg². The payload is composed of two instruments, an imager in the visible domain (VIS) and an imager-spectrometer (NISP) covering the near-infrared. The launch is planned for Q4 of 2020. The elements of the Euclid Science Ground Segment (SGS) are the Science Operations Centre (SOC) operated by ESA and nine Science Data Centres (SDCs) in charge of data processing, provided by the Euclid Consortium (EC), formed by over 110 institutes spread across 15 countries. SOC and the EC started a tight collaboration several years ago in order to design and develop a single, cost-efficient and truly integrated SGS. The distributed nature, the size of the data set, and the needed accuracy of the results are the main challenges expected in the design and implementation of the SGS. In particular, the huge volume of data (not only Euclid data but also ground-based data) to be processed in the SDCs will require distributed storage to avoid data migration across SDCs. This paper describes the management challenges that the Euclid SGS is facing while dealing with such complexity. The main aspect is related to the organisation of a geographically distributed software development team. In principle algorithms and code are developed in a large number of institutes, while data is actually processed at fewer centres (the national SDCs) where the operational computational infrastructures are maintained. The software produced for data handling, processing and analysis is built within a common development environment defined by the SGS System Team, common to SOC and EC SGS, which has already been active for several years. The code is built incrementally through...

  12. Development of access-based metrics for site location of ground segment in LEO missions

    Directory of Open Access Journals (Sweden)

    Hossein Bonyan Khamseh

    2010-09-01

    The classical metrics of ground segment site location do not take account of the pattern of ground segment access to the satellite. In this paper, based on the pattern of access between the ground segment and the satellite, two metrics for site location of ground segments in Low Earth Orbit (LEO) missions were developed. The two developed access-based metrics are the total accessibility duration and the longest accessibility gap in a given period of time. It is shown that the repeatability cycle is the minimum necessary time interval to study the steady behavior of the two proposed metrics. System and subsystem characteristics of the satellite represented by each of the metrics are discussed. Incorporation of the two proposed metrics, along with the classical ones, in the ground segment site location process results in financial savings in the satellite development phase and reduces the minimum required level of in-orbit autonomy of the satellite. To show the effectiveness of the proposed metrics, simulation results are included for illustration.
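
    A minimal sketch of the two metrics, assuming access windows are given as sorted, non-overlapping (start, end) times in seconds within one repeatability cycle; the pass times below are invented for illustration:

```python
def access_metrics(windows, cycle_length):
    """Total accessibility duration and longest accessibility gap over one
    repeatability cycle. windows: sorted (start, end) pairs in seconds."""
    total_duration = sum(end - start for start, end in windows)
    gaps = [next_start - prev_end
            for (_, prev_end), (next_start, _) in zip(windows, windows[1:])]
    # Wrap-around gap from the last window to the first of the next cycle.
    gaps.append(cycle_length - windows[-1][1] + windows[0][0])
    return total_duration, max(gaps)

# Toy LEO-style example: three ground station passes over a 6000 s cycle.
passes = [(300, 900), (2400, 3000), (4500, 5100)]
print(access_metrics(passes, 6000))   # -> (1800, 1500)
```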

  13. Robust entropy-guided image segmentation for ground detection in GPR

    Science.gov (United States)

    Roberts, J.; Shkolnikov, Y.; Varsanik, J.; Chevalier, T.

    2013-06-01

    Identifying the ground within a ground penetrating radar (GPR) image is a critical component of automatic and assisted target detection systems. As these systems are deployed to more challenging environments they encounter rougher terrain and less-ideal data, both of which can cause standard ground detection methods to fail. This paper presents a means of improving the robustness of ground detection by adapting a technique from image processing in which images are segmented by local entropy. This segmentation provides the rough location of the air-ground interface, which can then act as a "guide" for more precise but fragile techniques. The effectiveness of this two-step "coarse/fine" entropy-guided detection strategy is demonstrated on GPR data from very rough terrain, and its application beyond the realm of GPR data processing is discussed.
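
    The coarse step of this strategy can be sketched as below, assuming scikit-image is available; the disk radius and the Otsu threshold are illustrative choices, not the paper's parameters:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def coarse_ground_mask(gpr_image):
    """gpr_image: 2D float array scaled to [0, 1]. Returns a binary mask of
    high-entropy regions, a rough locator for the air-ground interface."""
    ent = entropy(img_as_ubyte(gpr_image), disk(5))   # local entropy map
    return ent > threshold_otsu(ent)                  # coarse segmentation

def interface_row_per_column(mask):
    """First high-entropy row in each column approximates the interface."""
    return np.argmax(mask, axis=0)
```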

  14. Objective Performance Evaluation of Video Segmentation Algorithms with Ground-Truth

    Institute of Scientific and Technical Information of China (English)

    杨高波; 张兆扬

    2004-01-01

    While the development of particular video segmentation algorithms has attracted considerable research interest, relatively little effort has been devoted to providing a methodology for evaluating their performance. In this paper, we propose a methodology to objectively evaluate video segmentation algorithms with ground truth, based on computing the deviation of segmentation results from the reference segmentation. Four different metrics, based respectively on pixel classification, edges, relative foreground area and relative position, are combined to address spatial accuracy. Temporal coherency is evaluated by utilizing the difference in spatial accuracy between successive frames. The experimental results show the feasibility of our approach. Moreover, it is computationally more efficient than previous methods. It can be applied to provide an offline ranking among different segmentation algorithms and to optimally set the parameters for a given algorithm.
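
    A minimal sketch of the two evaluation layers described above, restricted for brevity to a single pixel-classification metric (the paper combines four spatial metrics, whose exact forms and weights are not reproduced here):

```python
import numpy as np

def spatial_accuracy(result, reference):
    """result, reference: (frames, H, W) binary masks.
    Returns the per-frame fraction of pixels agreeing with the reference."""
    return (result == reference).mean(axis=(1, 2))

def temporal_coherency(accuracy_per_frame):
    """Deviation of spatial accuracy between successive frames; small values
    indicate temporally coherent segmentation quality."""
    return np.abs(np.diff(accuracy_per_frame))

# Toy check on random masks (4 frames of 32x32).
rng = np.random.default_rng(0)
ref = rng.random((4, 32, 32)) > 0.5
res = ref.copy()
res[:, :4, :] = ~res[:, :4, :]        # corrupt four rows in every frame
acc = spatial_accuracy(res, ref)
print(acc, temporal_coherency(acc))   # constant accuracy, zero deviation
```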

  15. Behavior of full-scale concrete segmented pipelines under permanent ground displacements

    Science.gov (United States)

    Kim, Junhee; O'Connor, Sean; Nadukuru, Srinivasa; Lynch, Jerome P.; Michalowski, Radoslaw; Green, Russell A.; Pour-Ghaz, Mohammed; Weiss, W. Jason; Bradshaw, Aaron

    2010-03-01

    Concrete pipelines are one of the most popular underground lifelines used for the transportation of water resources. Unfortunately, this critical infrastructure system remains vulnerable to ground displacements during seismic and landslide events. Ground displacements may induce significant bending, shear, and axial forces in concrete pipelines and eventually lead to joint failures. In order to understand and model the typical failure mechanisms of concrete segmented pipelines, large-scale experimentation is necessary to explore structural and soil-structure behavior during ground faulting. This paper reports on the experimentation of a reinforced concrete segmented pipeline using the unique capabilities of the NEES Lifeline Experimental and Testing Facilities at Cornell University. Five segments of a full-scale commercial concrete pressure pipe (244 cm long and 37.5 cm in diameter) are constructed as a segmented pipeline under compacted granular soil in the facility test basin (13.4 m long and 3.6 m wide). Ground displacements are simulated through translation of half of the test basin. A dense array of sensors including LVDTs, strain gages, and load cells is installed along the length of the pipeline to measure the pipeline response while the ground is incrementally displaced. Accurate measures of pipeline displacements and strains are captured up to the compressive and flexural failure of the pipeline joints.

  16. Intuitive Terrain Reconstruction Using Height Observation-Based Ground Segmentation and 3D Object Boundary Estimation

    Directory of Open Access Journals (Sweden)

    Sungdae Sim

    2012-12-01

    Mobile robot operators must make rapid decisions based on information about the robot’s surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot’s array of sensors, but some upper parts of objects are beyond the sensors’ measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.

  17. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation.

    Science.gov (United States)

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-12-12

    Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment. This means that terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and these parts are missing in the terrain reconstruction result. This result is an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image, before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is faster than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
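
    The height-histogram stage of this method can be sketched as below (the Gibbs-Markov random field refinement is omitted; the bin width and the tolerance around the dominant bin are illustrative assumptions):

```python
import numpy as np

def ground_mask_from_heights(z, bin_width=0.1, tolerance=0.2):
    """z: 1D array of point heights (m). Labels as ground all points close
    to the dominant height bin, a proxy for the estimated ground range."""
    bins = np.arange(z.min(), z.max() + bin_width, bin_width)
    hist, edges = np.histogram(z, bins=bins)
    peak = np.argmax(hist)                            # dominant height bin
    ground_z = 0.5 * (edges[peak] + edges[peak + 1])  # its centre height
    return np.abs(z - ground_z) <= tolerance

# Toy scene: flat ground near z = 0 plus sparse points on taller objects.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0.0, 0.05, 5000),
                    rng.uniform(0.5, 8.0, 800)])
print(ground_mask_from_heights(z).mean())   # fraction labeled ground
```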

  18. MONTE CARLO SIMULATION FOR MODELING THE EFFECT OF GROUND SEGMENT LOCATION ON IN-ORBIT RESPONSIVENESS OF LEO SUNSYNCHRONOUS SATELLITES

    Institute of Scientific and Technical Information of China (English)

    M. Navabi; Hossein Bonyan Khamseh

    2011-01-01

    Responsiveness is a challenge for space systems to sustain competitive advantage over alternative non-spaceborne technologies. For a satellite in its operational orbit, in-orbit responsiveness is defined as the capability of the satellite to respond to a given demand in a timely manner. In this paper, it is shown that Average Wait Time (AWT) to pick up user demand from the ground segment is the appropriate metric to evaluate the effect of ground segment location on the in-orbit responsiveness of Low Earth Orbit (LEO) sun-synchronous satellites. This metric depends on the pattern of ground segment access to the satellite and the distribution of user demands in the time domain. A mathematical model is presented to determine the pattern of ground segment access to the satellite, and the concept of the cumulative distribution function is used to simulate the distribution of user demands for markets with different total demand scenarios. Monte Carlo simulations are employed to take account of uncertainty in the distribution and total volume of user demands. Sampling error and standard deviation are used to ensure the validity of the AWT metric obtained from Monte Carlo simulations. Incorporation of the proposed metric in the ground segment site location process results in more responsive satellite systems which, in turn, lead to greater customer satisfaction levels and attractiveness of spaceborne systems for different applications. Finally, simulation results for a case study are presented.
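
    A minimal Monte Carlo sketch of the AWT metric, with uniform demand arrivals standing in for the paper's demand distribution functions; the access windows and sample count are invented for illustration:

```python
import numpy as np

def average_wait_time(windows, cycle_length, n_demands=100_000, seed=0):
    """windows: sorted (start, end) access intervals (s) within one cycle.
    Returns the mean wait until the next access for random demand times."""
    rng = np.random.default_rng(seed)
    demands = rng.uniform(0.0, cycle_length, n_demands)
    starts = np.array([s for s, _ in windows])
    ends = np.array([e for _, e in windows])
    waits = np.empty(n_demands)
    for i, t in enumerate(demands):
        if np.any((t >= starts) & (t <= ends)):
            waits[i] = 0.0                    # demand arrives during a pass
        else:
            upcoming = starts[starts > t]
            # Wrap to the first pass of the next cycle when none remains.
            waits[i] = upcoming[0] - t if upcoming.size else \
                       cycle_length - t + starts[0]
    return waits.mean()

print(average_wait_time([(300, 900), (2400, 3000), (4500, 5100)], 6000))
```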

  19. Software performance in segmenting ground-glass and solid components of subsolid nodules in pulmonary adenocarcinomas.

    Science.gov (United States)

    Cohen, Julien G; Goo, Jin Mo; Yoo, Roh-Eul; Park, Chang Min; Lee, Chang Hyun; van Ginneken, Bram; Chung, Doo Hyun; Kim, Young Tae

    2016-12-01

    To evaluate the performance of software in segmenting ground-glass and solid components of subsolid nodules in pulmonary adenocarcinomas. Seventy-three pulmonary adenocarcinomas manifesting as subsolid nodules were included. Two radiologists measured the maximal axial diameter of the ground-glass components on lung windows and that of the solid components on lung and mediastinal windows. Nodules were segmented using software by applying five (-850 HU to -650 HU) and nine (-130 HU to -500 HU) attenuation thresholds. We compared the manual and software measurements of ground-glass and solid components with pathology measurements of tumour and invasive components. Segmentation of ground-glass components at a threshold of -750 HU yielded mean differences of +0.06 mm (p = 0.83; 95 % limits of agreement, -4.51 to 4.67) and -2.32 mm (p < 0.05) compared with manual and pathology measurements, respectively. Mean differences between the software (at -350 HU) and pathology measurements and between the manual (lung and mediastinal windows) and pathology measurements were -0.12 mm (p = 0.74; -5.73 to 5.55), 0.15 mm (p = 0.73; -6.92 to 7.22), and -1.14 mm (p < 0.05), respectively. Software segmentation of ground-glass and solid components in subsolid nodules showed no significant difference with pathology. • Software can effectively segment ground-glass and solid components in subsolid nodules. • Software measurements show no significant difference with pathology measurements. • Manual measurements are more accurate on lung windows than on mediastinal windows.

  20. Cross Validation of Spaceborne and Ground Polarimetric Radar Snowfall Retrievals

    Science.gov (United States)

    Wen, Y.; Hong, Y.; Cao, Q.; Kirstetter, P.; Gourley, J. J.; Zhang, J.

    2013-12-01

    Snow, as a primary contributor to regional and even global water budgets, is of critical importance to our society. For large-scale weather monitoring and global climate studies, satellite-based snowfall observations with ground validation have become highly desirable. Ground-based polarimetric weather radar is a powerful validation tool that provides physical insight into the development and interpretation of spaceborne snowfall retrievals. This study aims to compare and resolve discrepancies in snowfall detection and estimation between the Cloud Profiling Radar (CPR) on board NASA's CloudSat satellite and the new polarimetric National Mosaic and Multi-sensor QPE (NMQ) system (Q3) developed by OU and NOAA/NSSL scientists. The Global Precipitation Measurement mission, with its core satellite scheduled for launch in 2014, will carry active and passive microwave instrumentation anticipated to detect and estimate snowfall or snowpack. This study will potentially serve as the basis for global validation of space-based snowfall products and also invite synergistic development of coordinated space-ground multisensor snowfall products.

  1. Characters Segmentation of Cursive Handwritten Words based on Contour Analysis and Neural Network Validation

    Directory of Open Access Journals (Sweden)

    Fajri Kurniawan

    2011-04-01

    This paper presents a robust algorithm to identify the letter boundaries in images of unconstrained handwritten words. The proposed algorithm is based on vertical contour analysis: a pre-segmentation is generated by analyzing the vertical contours from right to left. Unwanted segmentation points are then removed using neural network validation, which improves segmentation accuracy. The experiments are performed on the IAM benchmark database. The results show that the proposed algorithm is capable of accurately locating the letter boundaries of unconstrained handwritten words.

  2. New approach for validating the segmentation of 3D data applied to individual fibre extraction

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2017-01-01

    We present two approaches for validating the segmentation of 3D data. The first approach consists of comparing the amount of estimated material to a value provided by the manufacturer. The second approach consists of comparing the segmented results to those obtained from imaging modalities that provide a better resolution and therefore a more accurate segmentation. The imaging modalities used for comparison are scanning electron microscopy, optical microscopy and synchrotron CT. The validation methods are applied to assess the segmentation of individual fibres from X-ray microtomograms.

  3. Data on the verification and validation of segmentation and registration methods for diffusion MRI

    Directory of Open Access Journals (Sweden)

    Oscar Esteban

    2016-09-01

    Full Text Available The verification and validation of segmentation and registration methods is a necessary assessment in the development of new processing methods. However, verification and validation of diffusion MRI (dMRI) processing methods is challenging due to the lack of gold-standard data. The data described here are related to the research article entitled “Surface-driven registration method for the structure-informed segmentation of diffusion MR images” [1], in which publicly available data are used to derive gold-standard reference data to validate and evaluate segmentation and registration methods in dMRI.

  4. Seismic fragility formulations for segmented buried pipeline systems including the impact of differential ground subsidence

    Energy Technology Data Exchange (ETDEWEB)

    Pineda Porras, Omar Andrey [Los Alamos National Laboratory; Ordaz, Mario [UNAM, MEXICO CITY

    2009-01-01

    Though Differential Ground Subsidence (DGS) impacts the seismic response of segmented buried pipelines and augments their vulnerability, fragility formulations to estimate repair rates under such a condition are not available in the literature. Physical models to estimate pipeline seismic damage considering other cases of permanent ground deformation (e.g. faulting, tectonic uplift, liquefaction, and landslides) have been extensively reported, but this is not the case for DGS. The refined study of two important phenomena in Mexico City - the 1985 Michoacan earthquake scenario and the sinking of the city due to ground subsidence - has contributed to the analysis of the interrelation of pipeline damage, ground motion intensity, and DGS; from the analysis of the 48-inch pipeline network of Mexico City's Water System, fragility formulations for segmented buried pipeline systems are proposed for two DGS levels. The novel parameter PGV²/PGA, where PGV is peak ground velocity and PGA is peak ground acceleration, is used as the seismic parameter in these formulations, since it has shown better correlation with pipeline damage than PGV alone according to previous studies. By comparing the proposed fragilities, it is concluded that a change in the DGS level (from Low-Medium to High) could increase the pipeline repair rates (number of repairs per kilometer) by factors ranging from 1.3 to 2.0, with higher seismic intensities corresponding to lower factors.
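
    The abstract does not publish the functional form of the fragility curves; the sketch below only illustrates how the PGV²/PGA parameter and the reported 1.3-2.0 DGS factors would enter a power-law repair-rate model. The coefficients a and b are hypothetical placeholders:

    ```python
    def pipeline_repair_rate(pgv_cm_s, pga_cm_s2, a=0.05, b=1.0):
        """Repair rate (repairs/km) as a function of PGV**2/PGA.

        a and b are hypothetical coefficients for a power-law shape commonly
        used in pipeline fragility work; the paper's fitted values differ.
        """
        intensity = pgv_cm_s ** 2 / pga_cm_s2   # velocity-like intensity measure
        return a * intensity ** b

    # Illustrative DGS effect: the paper reports factors of 1.3-2.0 between
    # the Low-Medium and High differential ground subsidence levels.
    rate_low_dgs = pipeline_repair_rate(30.0, 150.0)
    rate_high_dgs = 1.3 * rate_low_dgs  # lower bound of the reported range
    ```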

  5. Phantom-based ground-truth generation for cerebral vessel segmentation and pulsatile deformation analysis

    Science.gov (United States)

    Schetelig, Daniel; Säring, Dennis; Illies, Till; Sedlacik, Jan; Kording, Fabian; Werner, René

    2016-03-01

    Hemodynamic and mechanical factors of the vascular system are assumed to play a major role in understanding, e.g., the initiation, growth and rupture of cerebral aneurysms. Among those factors, cardiac cycle-related pulsatile motion and deformation of cerebral vessels currently attract much interest. However, imaging of those effects requires high spatial and temporal resolution and remains challenging, and so does the analysis of the acquired images: flow velocity changes and contrast media inflow cause vessel intensity variations in related temporally resolved computed tomography and magnetic resonance angiography data over the cardiac cycle and impede the application of intensity threshold-based segmentation and subsequent motion analysis. In this work, a flow phantom for the generation of ground-truth images for the evaluation of appropriate segmentation and motion analysis algorithms is developed. The acquired ground-truth data are used to illustrate the interplay between intensity fluctuations and (erroneous) motion quantification by standard threshold-based segmentation, and an adaptive threshold-based segmentation approach is proposed that alleviates the respective issues. The results of the phantom study are further demonstrated to be transferable to patient data.
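
    One simple adaptive-threshold strategy of the kind motivated above is to scale each frame's own intensity statistics rather than fixing a global cutoff. The percentile scheme below is an assumption for illustration, not necessarily the paper's exact approach:

    ```python
    import numpy as np

    def adaptive_vessel_masks(frames, percentile=90, fraction=0.5):
        """Threshold each time frame relative to its own intensity statistics.

        frames: iterable of 2D arrays over the cardiac cycle. A fixed global
        threshold fails when contrast inflow shifts vessel intensity; scaling
        a per-frame bright-voxel reference is one simple adaptive remedy.
        """
        masks = []
        for frame in frames:
            reference = np.percentile(frame, percentile)  # per-frame reference
            masks.append(frame >= fraction * reference)
        return masks
    ```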

  6. Local Histogram of Figure/Ground Segmentations for Dynamic Background Subtraction

    Directory of Open Access Journals (Sweden)

    Bineng Zhong

    2010-01-01

    Full Text Available We propose a novel feature, the local histogram of figure/ground segmentations, for robust and efficient background subtraction (BGS) in dynamic scenes (e.g., waving trees, ripples in water, illumination changes, camera jitter). We represent each pixel as a local histogram of figure/ground segmentations, which aims at combining several candidate solutions produced by simple BGS algorithms to obtain a more reliable and robust feature for BGS. The background model of each pixel is constructed as a group of weighted adaptive local histograms of figure/ground segmentations, which describe the structure properties of the surrounding region. This is a natural fusion because multiple complementary BGS algorithms can be used to build background models for scenes. Moreover, the correlation of image variations at neighboring pixels is explicitly utilized to achieve robust detection performance, since neighboring pixels tend to be similarly affected by environmental effects (e.g., dynamic scenes). Experimental results demonstrate the robustness and effectiveness of the proposed method by comparison with four representatives of the state of the art in BGS.
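
    A naive sketch of the feature construction: each pixel receives, for every candidate BGS algorithm, the local foreground fraction in a small window (a one-bin-per-algorithm local histogram). The window handling and normalisation are assumptions; the paper's weighted adaptive background model is not reproduced here:

    ```python
    import numpy as np

    def local_fg_histograms(fg_maps, window=5):
        """Per-pixel local histogram over several figure/ground segmentations.

        fg_maps: list of binary (H x W) maps, one per simple BGS algorithm.
        For each pixel, the mean foreground vote of each algorithm inside a
        window x window neighbourhood forms one feature-vector component.
        """
        h, w = fg_maps[0].shape
        r = window // 2
        feats = np.zeros((h, w, len(fg_maps)))
        for k, fg in enumerate(fg_maps):
            padded = np.pad(fg.astype(float), r)
            for y in range(h):
                for x in range(w):
                    feats[y, x, k] = padded[y:y + window, x:x + window].mean()
        return feats  # compared over time against adaptive background histograms
    ```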

  8. Validating a health consumer segmentation model: behavioral and attitudinal differences in disease prevention-related practices.

    Science.gov (United States)

    Wolff, Lisa S; Massett, Holly A; Maibach, Edward W; Weber, Deanne; Hassmiller, Susan; Mockenhaupt, Robin E

    2010-03-01

    While researchers have typically segmented audiences by demographic or behavioral characteristics, psychobehavioral segmentation schemes may be more useful for developing targeted health information and programs. Previous research described a four-segment psychobehavioral segmentation scheme, and a 10-item screening instrument used to identify the segments, based predominantly on people's orientation to their health (active vs. passive) and their degree of independence in health care decision making (independent vs. dependent). This study builds on that prior research by assessing the screening instrument's validity with an independent dataset and exploring whether people with distinct psychobehavioral orientations have different disease prevention attitudes and preferences for receiving information in the primary care setting. Data come from 1,650 respondents to a national mail panel survey. Using the screening instrument, respondents were segmented into four groups: independent actives, doctor-dependent actives, independent passives, and doctor-dependent passives. Consistent with the earlier research, there were clear differences in health-related attitudes and behaviors among the four segments. Members of three segments appear quite receptive to receiving disease prevention information and assistance from professionals in the primary care setting. Our findings provide further indication that the screening instrument and corresponding segmentation strategy may offer a simple, effective tool for targeting and tailoring information and other health programming to the unique characteristics of distinct audience segments.

  9. Image segmentation techniques for improved processing of landmine responses in ground-penetrating radar data

    Science.gov (United States)

    Torrione, Peter A.; Collins, Leslie

    2007-04-01

    As ground penetrating radar (GPR) sensor phenomenology improves, more advanced statistical processing approaches become applicable to the problem of landmine detection in GPR data. Most previous studies on landmine detection in GPR data have focused on the application of statistics- and physics-based prescreening algorithms, new feature extraction approaches, and improved feature classification techniques. In the typical framework, prescreening algorithms provide spatial location information of anomalous responses in down-track/cross-track coordinates, and feature extraction algorithms are then tasked with generating low-dimensional, information-bearing feature sets from these spatial locations. However, in time-domain GPR, a significant portion of the data collected at prescreener-flagged locations may be unrelated to the true anomaly responses, e.g. the ground bounce response, or responses either temporally "before" or "after" the anomalous response. The ability to segment the information-bearing region of the GPR image from the background of the image may thus provide improved performance for feature-based processing of anomaly responses. In this work we explore the application of Markov random fields (MRFs) to the problem of anomaly/background segmentation in GPR data. Preliminary results suggest the potential for improved feature extraction and overall performance gains via the application of image segmentation approaches prior to feature extraction.
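
    As a concrete, reproducible stand-in for the MRF optimisation, the sketch below labels pixels with iterated conditional modes (ICM) under a two-class Potts prior. The abstract does not specify the actual energy terms, so the data model and the beta weight are assumptions:

    ```python
    import numpy as np

    def icm_segment(image, beta=1.5, n_iter=5):
        """Two-class MRF labelling by iterated conditional modes (ICM).

        image: 2D float array, assumed standardised (zero mean, unit variance)
        so that the squared-intensity data term and the Potts smoothness term
        (weighted by beta) are on comparable scales.
        """
        mu = np.array([image.mean() - image.std(), image.mean() + image.std()])
        labels = (image > image.mean()).astype(int)
        for _ in range(n_iter):
            for y in range(1, image.shape[0] - 1):
                for x in range(1, image.shape[1] - 1):
                    neighbours = (labels[y - 1, x], labels[y + 1, x],
                                  labels[y, x - 1], labels[y, x + 1])
                    costs = []
                    for lab in (0, 1):
                        data = (image[y, x] - mu[lab]) ** 2
                        smooth = beta * sum(lab != n for n in neighbours)
                        costs.append(data + smooth)
                    labels[y, x] = int(np.argmin(costs))
        return labels  # 1 = bright anomaly-like class, 0 = background
    ```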

  10. Lumbar segmental instability: a criterion-related validity study of manual therapy assessment

    Directory of Open Access Journals (Sweden)

    Chapple Cathy

    2005-11-01

    Full Text Available Abstract. Background: Musculoskeletal physiotherapists routinely assess lumbar segmental motion during the clinical examination of a patient with low back pain. The validity of manual assessment of segmental motion has not, however, been adequately investigated. Methods: In this prospective, multi-centre, pragmatic, diagnostic validity study, 138 consecutive patients with recurrent or chronic low back pain (R/CLBP) were recruited. Physiotherapists with post-graduate training in manual therapy performed passive accessory intervertebral motion tests (PAIVMs) and passive physiological intervertebral motion tests (PPIVMs). Consenting patients were referred for flexion-extension radiographs. Sagittal angular rotation and sagittal translation of each lumbar spinal motion segment were measured from these radiographs and compared to a reference range derived from a study of 30 asymptomatic volunteers. Motion beyond two standard deviations from the reference mean was considered diagnostic of rotational lumbar segmental instability (LSI) and translational LSI. Accuracy and validity of the clinical assessments were expressed using sensitivity, specificity, and likelihood ratio statistics with 95% confidence intervals (CI). Results: Only translational LSI was found to be significantly associated with R/CLBP (p …). Conclusion: This study provides the first evidence reporting the concurrent validity of manual tests for the detection of abnormal sagittal planar motion. PAIVMs and PPIVMs are highly specific, but not sensitive, for the detection of translational LSI. Likelihood ratios resulting from positive test results were only moderate. This research indicates that manual clinical examination procedures have moderate validity for detecting segmental motion abnormality.

  11. The potential of ground gravity measurements to validate GRACE data

    Directory of Open Access Journals (Sweden)

    D. Crossley

    2003-01-01

    Full Text Available New satellite missions are returning high-precision, time-varying satellite measurements of the Earth's gravity field. The GRACE mission is now in its calibration/validation phase and first results of the gravity field solutions are imminent. We consider here the possibility of external validation using data from the superconducting gravimeters in the European sub-array of the Global Geodynamics Project (GGP) as 'ground truth' for comparison with GRACE. This is a pilot study in which we use 14 months of 1-hour data from the beginning of GGP (1 July 1997 to 30 August 1998), when the Potsdam instrument was relocated to South Africa. There are 7 stations clustered in west central Europe, and one station, Metsahovi, in Finland. We remove local tides, polar motion, local and global air pressure, and instrument drift and then decimate to 6-hour samples. We see large variations in the time series of 5–10 µgal between even some neighboring stations, but there are also common features that correlate well over the 427-day period. The 8 stations are used to interpolate a minimum-curvature (gridded) surface that extends over the geographical region. This surface shows time and spatial coherency at the level of 2–4 µgal over the first half of the data and 1–2 µgal over the latter half. The mean value of the surface clearly shows a rise in European gravity of about 3 µgal over the first 150 days and a fairly constant value for the rest of the data. The accuracy of this mean is estimated at 1 µgal, which compares favorably with GRACE predictions for wavelengths of 500 km or less. Preliminary studies of hydrology loading over Western Europe show the difficulty of correlating the local hydrology, which can be highly variable, with large-scale gravity variations. Key words: GRACE, satellite gravity, superconducting gravimeter, GGP, ground truth

  12. GPM ground validation via commercial cellular networks: an exploratory approach

    Science.gov (United States)

    Rios Gaona, Manuel Felipe; Overeem, Aart; Leijnse, Hidde; Brasjen, Noud; Uijlenhoet, Remko

    2016-04-01

    The suitability of commercial microwave link networks for ground validation of GPM (Global Precipitation Measurement) data is evaluated here. Two state-of-the-art rainfall products are compared over the land surface of the Netherlands for a period of 7 months: rainfall maps from commercial cellular communication networks and Integrated Multi-satellite Retrievals for GPM (IMERG). Commercial microwave link networks are nowadays a core component of telecommunications worldwide. Rainfall rates can be retrieved from measurements of the attenuation between transmitting and receiving antennas. If adequately set up, these networks enable rainfall monitoring tens of meters above the ground at high spatiotemporal resolutions (temporal sampling of seconds to tens of minutes, and spatial sampling of hundreds of meters to tens of kilometers). The GPM mission is the successor of TRMM (the Tropical Rainfall Measuring Mission). For two years now, IMERG has offered rainfall estimates across the globe (180°W - 180°E and 60°N - 60°S) at a spatiotemporal resolution of 0.1° x 0.1° every 30 min. These two data sets are compared against a Dutch gauge-adjusted radar data set, considered to be the ground truth given its accuracy, spatiotemporal resolution and availability. The suitability of microwave link networks for satellite rainfall evaluation is of special interest given the independent character of this technique, its high spatiotemporal resolution and its availability. These are valuable assets for water management and the modeling of floods, landslides, and weather extremes, especially in places where rain gauge networks are scarce or poorly maintained, or where weather radar networks are too expensive to acquire and/or maintain.
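
    Link-based rain retrieval rests on the k-R power law, k = a·R^b, relating specific attenuation to rain rate; a minimal inversion looks like the sketch below. The coefficients a and b are placeholder values and must be taken from ITU-R tables for the actual link frequency and polarisation:

    ```python
    def rain_rate_from_link(attenuation_db, path_length_km, a=0.35, b=0.87):
        """Invert the k-R power law k = a * R**b for one microwave link.

        attenuation_db: rain-induced attenuation along the link (dB), i.e.
        after removing the dry-weather baseline. a and b here are only
        indicative placeholders for a tens-of-GHz link.
        """
        specific_attenuation = attenuation_db / path_length_km  # k, in dB/km
        return (specific_attenuation / a) ** (1.0 / b)          # R, in mm/h
    ```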

  13. Numerical simulation and experimental validation of aircraft ground deicing model

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2016-05-01

    Full Text Available Aircraft ground deicing plays an important role in guaranteeing aircraft safety. In practice, most airports generally use as much deicing fluid as possible to remove the ice, which wastes deicing fluid and pollutes the environment. Therefore, a model of aircraft ground deicing should be built to establish the foundation for subsequent research, such as the optimization of deicing fluid consumption. In this article, the heat balance of the deicing process is depicted, and a dynamic model of the deicing process is provided based on an analysis of the deicing mechanism. In the dynamic model, the surface temperature of the deicing fluid and the ice thickness are regarded as the state parameters, while the fluid flow rate, the initial temperature, and the injection time of the deicing fluid are treated as control parameters. Ignoring the heat exchange between the deicing fluid and the environment, a simplified model is obtained. The rationality of the simplified model is verified by numerical simulation, and the impacts of the flow rate, the initial temperature and the injection time on the deicing process are investigated. To verify the model, a semi-physical experiment system is established, consisting of a low-constant-temperature test chamber, an ice simulation system, a deicing fluid heating and spraying system, a simulated wing, test sensors, and a computer measurement and control system. The actual test data verify the validity of the dynamic model and the accuracy of the simulation analysis.
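
    A toy forward integration shows how the state parameters (ice thickness, fluid surface temperature) and control parameters (flow rate, initial temperature, injection time) interact in such a model. The melt coefficient and cooling term below are hypothetical, not the article's calibrated heat balance:

    ```python
    def simulate_deicing(ice_mm, fluid_temp_c, flow_rate, dt=1.0, steps=600):
        """Toy forward integration of a simplified deicing heat balance.

        State: ice thickness (mm) and fluid surface temperature (degC);
        controls: flow rate and initial fluid temperature. The coefficients
        are assumed for illustration only.
        """
        k = 1e-3  # mm of ice melted per (degC * flow unit * s); hypothetical
        temp = fluid_temp_c
        for _ in range(steps):
            melt = k * max(temp, 0.0) * flow_rate * dt
            ice_mm = max(ice_mm - melt, 0.0)
            temp -= 0.01 * dt  # fluid cools as it gives up heat; assumed rate
            if ice_mm == 0.0:
                break
        return ice_mm, temp
    ```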

  14. In situ validation of segmented SAR satellite scenes of young Arctic thin landfast sea ice

    Science.gov (United States)

    Gerland, S.; Negrel, J.; Doulgeris, A. P.; Akbari, V.; Lauknes, T. R.; Rouyet, L.; Storvold, R.

    2016-12-01

    The use of satellite remote sensing techniques for the observation and monitoring of the polar regions has increased in recent years due to the ability to cover larger areas than can be covered by ground measurements. However, in situ data remain mandatory for the validation of such data. In April 2016 an Arctic fieldwork campaign was conducted at Kongsfjorden, Svalbard. Ground measurements from this campaign are used together with satellite data acquisitions to improve the identification of young sea ice types from satellite data. This work was carried out in combination with the Norwegian Polar Institute's long-term monitoring of Svalbard fast ice, and with partner institutes in the Centre for Integrated Remote Sensing and Forecasting for Arctic Operations (CIRFA). Thin ice types are generally more difficult to investigate than thicker ice, because ice of only a few centimetres thickness does not allow scientists to stand and work on it; identifying it on radar scenes will make it easier to study and monitor. Four high-resolution 25 km x 25 km Radarsat-2 quad-pol scenes were obtained, coincident in space and time with the in situ measurements. The field teams used a variety of methods, including ice thickness transects, ice salinity measurements, ground-based radar imaging from the coast and UAV-based photography, to identify the different thin ice types, their location and their evolution in time. Sampling of the thinnest ice types was managed from a small boat. In addition, iceberg positions were recorded with GPS and photographed to enable us to quantify their contribution to the radar response. Thin ice from 0.02 to 0.18 m thickness was sampled at a total of nine ice stations. The ice had no or only a thin snow layer. The GPS positions and tracks and ice characteristics are then compared to the Radarsat-2 scenes, and the radar responses of the different thin ice types in the quad-pol scenes are identified. The first segmentation results of the scenes present a good

  15. Stereo visualization in the ground segment tasks of the science space missions

    Science.gov (United States)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

    The ground segment is one of the key components of any science space mission. Its functionality substantially defines the scientific effectiveness of the experiment as a whole. A distinguishing feature (in contrast to the other information systems of scientific space projects) is the interaction between the researcher and the project information system in order to interpret the data obtained during experiments. The ability to visualize the data being processed is therefore an essential prerequisite for ground segment software, and the use of modern technological solutions and approaches in this area will increase the science return in general and provide a framework for the creation of new experiments. Visualization of the data being processed mostly relies on 2D and 3D graphics, reflecting the capabilities of traditional visualization tools. Stereo visualization methods are also actively used for some tasks, but their usage is usually limited to areas such as the visualization of virtual and augmented reality, remote sensing data processing and the like. The low prevalence of stereo visualization methods in science ground segment tasks is primarily explained by the extremely high cost of the necessary hardware. Recently, however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have appeared. It therefore seems promising to use stereo visualization as an instrument for the investigation of a wide range of problems, mainly for the stereo visualization of complex physical processes as well as mathematical abstractions and models. This article describes an attempt to use this approach. It covers the details and problems of using stereo visualization (page-flip method based on the NVIDIA 3D Vision Kit and GeForce graphics processors) to display datasets of magnetospheric satellite onboard measurements, and in the development of software for manual stereo matching.

  16. Ground validation of DPR precipitation rate over Italy using H-SAF validation methodology

    Science.gov (United States)

    Puca, Silvia; Petracca, Marco; Sebastianelli, Stefano; Vulpiani, Gianfranco

    2017-04-01

    The H-SAF project (Satellite Application Facility on Support to Operational Hydrology and Water Management, funded by EUMETSAT) is aimed at retrieving key hydrological parameters such as precipitation, soil moisture and snow cover. Within the H-SAF consortium, the Product Precipitation Validation Group (PPVG) evaluates the accuracy of instantaneous and accumulated precipitation products with respect to ground radar and rain gauge data, adopting the same methodology (using a Unique Common Code) throughout Europe. The adopted validation methodology can be summarized in the following steps: (1) ground data (radar and rain gauge) quality control; (2) spatial interpolation of rain gauge measurements; (3) up-scaling of radar data to the satellite native grid; (4) temporal comparison of satellite and ground-based precipitation products; and (5) production and evaluation of continuous and multi-categorical statistical scores for long time series and case studies. The statistical scores are evaluated on the satellite product's native grid. With the advent of the GPM era starting in March 2014, more new global precipitation products are available. The validation methodology developed in H-SAF is easily applicable to different precipitation products. In this work, we have validated instantaneous precipitation data estimated from the DPR (Dual-frequency Precipitation Radar) instrument on board the GPM-CO (Global Precipitation Measurement Core Observatory) satellite. In particular, we have analyzed the near-surface and estimated precipitation fields collected in the 2A-Level product for 3 different scans (NS, MS and HS). The Italian radar mosaic managed by the National Department of Civil Protection, available operationally every 10 minutes, is used as the ground reference. The results obtained highlight the capability of the DPR to identify precipitation areas properly, with higher accuracy in estimating stratiform precipitation (especially for the HS). An
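
    Step (5) of the methodology produces, among others, multi-categorical scores from matched satellite/ground pairs; a minimal sketch for three standard scores is shown below (the H-SAF Unique Common Code itself is not reproduced, and the 0.1 mm/h rain/no-rain threshold is an assumption):

    ```python
    import numpy as np

    def categorical_scores(sat_rain, ground_rain, threshold=0.1):
        """Categorical skill scores for co-located satellite/ground rain rates.

        sat_rain, ground_rain: 1D arrays of matched rain rates (mm/h) on the
        satellite native grid. Assumes at least one event in each denominator.
        """
        sat = np.asarray(sat_rain) >= threshold
        gnd = np.asarray(ground_rain) >= threshold
        hits = np.sum(sat & gnd)
        misses = np.sum(~sat & gnd)
        false_alarms = np.sum(sat & ~gnd)
        pod = hits / (hits + misses)                # probability of detection
        far = false_alarms / (hits + false_alarms)  # false alarm ratio
        csi = hits / (hits + misses + false_alarms) # critical success index
        return pod, far, csi
    ```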

  17. Development and Validation of an Automatic Segmentation Algorithm for Quantification of Intracerebral Hemorrhage.

    Science.gov (United States)

    Scherer, Moritz; Cordes, Jonas; Younsi, Alexander; Sahin, Yasemin-Aylin; Götz, Michael; Möhlenbruch, Markus; Stock, Christian; Bösel, Julian; Unterberg, Andreas; Maier-Hein, Klaus; Orakcioglu, Berk

    2016-11-01

    ABC/2 is still widely accepted for volume estimation in spontaneous intracerebral hemorrhage (ICH) despite known limitations, which potentially accounts for controversial outcome-study results. The aim of this study was to establish and validate an automatic segmentation algorithm, allowing quick and accurate quantification of ICH. A segmentation algorithm implementing first- and second-order statistics, texture, and threshold features was trained on manual segmentations with a random-forest methodology. Quantitative data from the algorithm, manual segmentations, and ABC/2 were evaluated for agreement in a study sample (n=28) and validated in an independent sample not used for algorithm training (n=30). ABC/2 volumes were significantly larger compared with either manual or algorithm values, whereas no significant differences were found between the latter (p …). Agreement of the algorithm with manual segmentations was high (concordance correlation coefficient 0.95 [lower 95% confidence interval 0.91]) and superior to that of ABC/2 (concordance correlation coefficient 0.77 [95% confidence interval 0.64]). Validation confirmed agreement in the independent sample (algorithm concordance correlation coefficient 0.99 [95% confidence interval 0.98], ABC/2 concordance correlation coefficient 0.82 [95% confidence interval 0.72]). The algorithm was closer to the respective manual segmentations than ABC/2 in 52/58 cases (89.7%). An automatic segmentation algorithm for volumetric analysis of spontaneous ICH was developed and validated in this study. Algorithm measurements showed strong agreement with manual segmentations, whereas ABC/2 exhibited its limitations, yielding inaccurate overestimations of ICH volume. The refined, yet time-efficient, quantification of ICH by the algorithm may facilitate the evaluation of clot volume as an outcome predictor and trigger for surgical interventions in the clinical setting. © 2016 American Heart Association, Inc.
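
    For reference, the ABC/2 estimate against which the algorithm is compared is a simple ellipsoid approximation; a one-line implementation:

    ```python
    def abc_over_2(a_mm, b_mm, c_mm):
        """Classic ABC/2 ellipsoid approximation of ICH volume.

        a: largest axial diameter, b: diameter perpendicular to a on the same
        slice, c: vertical extent. Returns volume in mL for inputs in mm.
        """
        return (a_mm * b_mm * c_mm) / 2.0 / 1000.0

    # Example: a 50 x 30 x 40 mm clot gives a 30 mL estimate; the study shows
    # such estimates systematically overestimate voxel-wise segmented volumes.
    ```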

  18. Evolution of the JPSS Ground Project Calibration and Validation System

    Science.gov (United States)

    Chander, G.; Jain, P.

    2014-12-01

    The Joint Polar Satellite System (JPSS) is the National Oceanic and Atmospheric Administration's (NOAA) next-generation operational Earth observation program that acquires and distributes global environmental data from multiple polar-orbiting satellites. The JPSS Program plays a critical role in NOAA's mission to understand and predict changes in weather, climate, ocean, and coastal environments, which supports the nation's economy and protects lives and property. The National Aeronautics and Space Administration (NASA) is acquiring and implementing JPSS, comprised of flight and ground systems, on behalf of NOAA. The JPSS satellites are planned to fly in an afternoon orbit and will provide operational continuity of satellite-based observations and products for the NOAA Polar-orbiting Operational Environmental Satellites (POES) and the Suomi National Polar-orbiting Partnership (SNPP) satellite. The Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) system is a NOAA system developed and deployed by the JPSS Ground Project to support calibration and validation (Cal/Val); algorithm integration, investigation, and tuning; and data quality monitoring. It is a mature, deployed system that supports the SNPP mission and has been in operation since the SNPP launch. This paper discusses the major re-architecture for Block 2.0 that incorporates SNPP lessons learned, describes the architecture of the system, and demonstrates how GRAVITE has evolved as a system with increased performance. It is a robust, reliable, maintainable, scalable, and secure system that supports development, test, and production strings, replaces proprietary and custom software, uses open source software, and is compliant with NASA and NOAA standards. [Pending NASA Goddard Applied Engineering & Technology Directorate (AETD) Approval]

  19. Theoretical validation of ground-based microwave ozone observations

    Directory of Open Access Journals (Sweden)

    P. Ricaud

    Full Text Available Ground-based microwave measurements of the diurnal and seasonal variations of ozone at 42±4.5 and 55±8 km are validated by comparison with results from a zero-dimensional photochemical model and a two-dimensional (2D) chemical/radiative/dynamical model, respectively. O3 diurnal amplitudes measured in Bordeaux are shown to be in agreement with theory to within 5%. For the seasonal analysis of the O3 variation at 42±4.5 km, the 2D model underestimates the yearly averaged ozone concentration compared with the measurements. A double-maximum oscillation (~3.5%) is measured in Bordeaux, with an extended maximum in September and a maximum in February, whilst the 2D model predicts only a single large maximum (17%) in August and a pronounced minimum in January. Evidence suggests that dynamical transport causes the winter O3 maximum by the propagation of planetary waves, a phenomenon not explicitly reproduced by the 2D model. At 55±8 km, the modeled yearly averaged O3 concentration is in very good agreement with the measured yearly average. A strong annual oscillation is both measured and modeled, with differences in the amplitude shown to be exclusively linked to temperature fields.

  20. Ground Water Atlas of the United States: Segment 8, Montana, North Dakota, South Dakota, Wyoming

    Science.gov (United States)

    Whitehead, R.L.

    1996-01-01

    The States of Montana, North Dakota, South Dakota, and Wyoming compose the 392,764-square-mile area of Segment 8, which is in the north-central part of the continental United States. The area varies topographically from the high rugged mountain ranges of the Rocky Mountains in western Montana and Wyoming to the gently undulating surface of the Central Lowland in eastern North Dakota and South Dakota (fig. 1). The Black Hills in southwestern South Dakota and northeastern Wyoming interrupt the uniformity of the intervening Great Plains. Segment 8 spans the Continental Divide, which is the drainage divide that separates streams that generally flow westward from those that generally flow eastward. The area of Segment 8 is drained by the following major rivers or river systems: the Green River drains southward to join the Colorado River, which ultimately discharges to the Gulf of California; the Clark Fork and the Kootenai Rivers drain generally westward by way of the Columbia River to discharge to the Pacific Ocean; the Missouri River system and the North Platte River drain eastward and southeastward to the Mississippi River, which discharges to the Gulf of Mexico; and the Red River of the North and the Souris River drain northward through Lake Winnipeg to ultimately discharge to Hudson Bay in Canada. These rivers and their tributaries are an important source of water for public-supply, domestic and commercial, agricultural, and industrial uses. Much of the surface water has long been appropriated for agricultural use, primarily irrigation, and for compliance with downstream water pacts. Reservoirs store some of the surface water for flood control, irrigation, power generation, and recreational purposes. Surface water is not always available when and where it is needed, and ground water is the only other source of supply. Ground water is obtained primarily from wells completed in unconsolidated-deposit aquifers that consist mostly of sand and gravel, and from wells

  1. Validity of automated choroidal segmentation in SS-OCT and SD-OCT

    NARCIS (Netherlands)

    L. Zhang (Li); G.H.S. Buitendijk (Gabrielle); K. Lee (Kyungmoo); M. Sonka (Milan); H. Springelkamp (Henriët); A. Hofman (Albert); J.R. Vingerling (Hans); R.F. Mullins (Robert F.); C.C.W. Klaver (Caroline); M.D. Abràmoff (Michael)

    2015-01-01

    textabstractPURPOSE. To evaluate the validity of a novel fully automated three-dimensional (3D) method capable of segmenting the choroid from two different optical coherence tomography scanners: swept-source OCT (SS-OCT) and spectral-domain OCT (SD-OCT). METHODS. One hundred eight subjects were

  2. Ground Water Atlas of the United States: Segment 1, California, Nevada

    Science.gov (United States)

    Planert, Michael; Williams, John S.

    1995-01-01

    California and Nevada compose Segment 1 of the Ground Water Atlas of the United States. Segment 1 is a region of pronounced physiographic and climatic contrasts. From the Cascade Mountains and the Sierra Nevada of northern California, where precipitation is abundant, to the Great Basin in Nevada and the deserts of southern California, which have the most arid environments in the United States, few regions exhibit such a diversity of topography or environment. Since the discovery of gold in the mid-1800's, California has experienced a population, industrial, and agricultural boom unrivaled by that of any other State. Water needs in California are very large, and the State leads the United States in agricultural and municipal water use. The demand for water exceeds the natural water supply in many agricultural and nearly all urban areas. As a result, water is impounded by reservoirs in areas of surplus and transported to areas of scarcity by an extensive network of aqueducts. Unlike California, which has a relative abundance of water, development in Nevada has been limited by a scarcity of recoverable freshwater. The Truckee, the Carson, the Walker, the Humboldt, and the Colorado Rivers are the only perennial streams of significance in the State. The individual basin-fill aquifers, which together compose the largest known ground-water reserves, receive little annual recharge and are easily depleted. Nevada is sparsely populated, except for the Las Vegas, the Reno-Sparks, and the Carson City areas, which rely heavily on imported water for public supplies. Although important to the economy of Nevada, agriculture has not been developed to the same degree as in California due, in large part, to a scarcity of water. Some additional ground-water development might be possible in Nevada through prudent management of the basin-fill aquifers and increased utilization of ground water in the little-developed carbonate-rock aquifers that underlie the eastern one-half of the State

  3. Ground Water Atlas of the United States: Segment 3, Kansas, Missouri, Nebraska

    Science.gov (United States)

    Miller, James A.; Appel, Cynthia L.

    1997-01-01

    The three States-Kansas, Missouri, and Nebraska-that comprise Segment 3 of this Atlas are in the central part of the United States. The major rivers that drain these States are the Niobrara, the Platte, the Kansas, the Arkansas, and the Missouri; the Mississippi River is the eastern boundary of the area. These rivers supply water for many uses but ground water is the source of slightly more than one-half of the total water withdrawn for all uses within the three-State area. The aquifers that contain the water consist of consolidated sedimentary rocks and unconsolidated deposits that range in age from Cambrian through Quaternary. This chapter describes the geology and hydrology of each of the principal aquifers throughout the three-State area. Some water enters Segment 3 as inflow from rivers and aquifers that cross the segment boundaries, but precipitation, as rain and snow, is the primary source of water within the area. Average annual precipitation (1951-80) increases from west to east and ranges from about 16 to 48 inches (fig. 1). The climate of the western one-third of Kansas and Nebraska, where the average annual precipitation generally is less than 20 inches per year, is considered to be semiarid. This area receives little precipitation chiefly because it is distant from the Gulf of Mexico, which is the principal source of moisture-laden air for the entire segment, but partly because it is located in the rain shadow of the Rocky Mountains. Average annual precipitation is greatest in southeastern Missouri. Much of the precipitation is returned to the atmosphere by evapotranspiration, which is the combination of evaporation from the land surface and surface-water bodies, and transpiration from plants. Some of the precipitation either flows directly into streams as overland runoff or percolates into the soil and then moves downward into aquifers where it is stored for a time and subsequently released as base flow to streams. Average annual runoff, which is the

  4. Validating global hydrological models by ground and space gravimetry

    Institute of Scientific and Technical Information of China (English)

    ZHOU JiangCun; SUN HePing; XU JianQiao

    2009-01-01

    The long-term continuous gravity observations obtained by the superconducting gravimeters (SG) at seven globally distributed stations are comprehensively analyzed. After removing the signals related to the Earth's tides and variations in the Earth's rotation, the gravity residuals are used to describe the seasonal fluctuations in the gravity field. Meanwhile, the gravity changes due to air pressure loading are theoretically modeled from measurements of the local air pressure, and those due to land water and nontidal ocean loading are calculated according to the corresponding numerical models. The numerical results show that the gravity changes due to both air pressure and land water loading are as large as 100×10⁻⁹ m s⁻² in magnitude, and those due to nontidal ocean loading are about 10×10⁻⁹ m s⁻² in coastal areas. On the other hand, the monthly averaged gravity variations over the areas surrounding the stations are derived from the spherical harmonic coefficients of the GRACE-recovered gravity fields, using a Gaussian smoothing technique in which the radius is set to 600 km. Comparing the land-water-induced gravity variations, the SG observations (after removal of tides, polar motion effects, air pressure and nontidal ocean loading effects) and the GRACE-derived gravity variations with one another, it is inferred that both ground- and space-based gravity observations can effectively detect the seasonal gravity variations, with a magnitude of 100×10⁻⁹ m s⁻², induced by land water loading. This implies that high-precision gravimetry is an effective technique for validating the reliability of hydrological models.

  5. General introduction on payloads, ground segment and data application of Fengyun 3A

    Institute of Scientific and Technical Information of China (English)

    Peng ZHANG; Jun YANG; Chaohua DONG; Naimeng LU; Zhongdong YANG; Jinmin SHI

    2009-01-01

    Fengyun 3 series are the second-generation polar-orbiting meteorological satellites of China. The first satellite of the Fengyun 3 series, FY-3A, is a research and development satellite with 11 payloads onboard. FY-3A was launched successfully at 11 a.m. on May 27, 2008. Since the launch, FY-3A data have been applied to services for the flood season and the Beijing 2008 Olympic Games. In this paper, the platform, payloads, and ground segment designs are introduced. Some typical images from the on-orbit commissioning test are presented. Improvements of FY-3A in Earth observation are summarized at the end by comparison with FY-1D, the last satellite of the Fengyun 1 series.

  6. Construct validity of center of rotation in differentiating of lumbar segmental instability patients.

    Science.gov (United States)

    Taghipour-Darzi, Mohammad; Ebrahimi-Takamjani, Esmail; Salavati, Mahyar; Mobini, Bahram; Zekavat, Hajar; Beneck, George J

    2015-01-01

    Lumbar Segmental Instability (LSI) is a subgroup of nonspecific Low Back Pain (NSLBP) without any accepted diagnostic tool as a gold standard. Some authors emphasize quality measures such as the centre of rotation (COR), but the construct validity of this measure has not been established. Therefore, the purpose of the present study was to evaluate the concurrent and convergent validity of the COR in differentiating LSI. A total of 66 male volunteers participated in three groups, named the control, NSLBP and LSI groups, based on clinical examination. Patients were diagnosed as LSI according to the screening criteria adopted by Hicks et al. Study variables included the CORs of the lumbar segments in the sagittal plane. Three x-rays were taken in neutral, flexion and extension positions. The variables were calculated using CARA software. ANOVA and the Tukey test were utilized in the statistical analysis. ANOVA results demonstrated that the mean differences between the three groups for the COR of the L4 motion segment in the y-axis (p = 0.008) and the L5 motion segment in the y-axis (p = 0.005) were significant. The Tukey test showed a significant difference for the COR of the L4 motion segment in the y-axis between the LSI and healthy groups (p = 0.038) and between the LSI and NSLBP groups (p = 0.009). For the COR of the L5 motion segment in the y-axis, the Tukey test demonstrated that the mean differences between the LSI and healthy groups (p = 0.028) and between the LSI and NSLBP groups (p = 0.007) were significant. The Tukey test did not show any significant difference between the NSLBP and healthy groups for the COR of the L4 (p = 0.852) and L5 (p = 0.871) motion segments in the y-axis. Based on the present study's results, the COR has the ability to differentiate patients with signs and symptoms of LSI from other NSLBP patients and healthy subjects. However, more research is needed to develop and support the results of this study.
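
    A centre of rotation of the kind analysed here can be recovered from flexion-extension radiographs with the classical Reuleaux construction: the COR is the intersection of the perpendicular bisectors of two landmark displacement vectors. The sketch below is a generic implementation of that construction, not the CARA software used in the study:

    ```python
    import numpy as np

    def reuleaux_cor(p1_a, p1_b, p2_a, p2_b):
        """Centre of rotation from two landmarks seen in two postures.

        p1_a/p1_b: 2D landmark positions in the flexion radiograph;
        p2_a/p2_b: the same landmarks in the extension radiograph.
        Note: a pure translation makes the bisectors parallel (singular solve).
        """
        def bisector(p, q):
            mid = (p + q) / 2.0
            d = q - p
            normal = np.array([-d[1], d[0]])  # perpendicular to displacement
            return mid, normal

        m1, n1 = bisector(np.asarray(p1_a, float), np.asarray(p2_a, float))
        m2, n2 = bisector(np.asarray(p1_b, float), np.asarray(p2_b, float))
        # Solve m1 + t*n1 == m2 + s*n2 for the parameters (t, s).
        A = np.column_stack([n1, -n2])
        t, _ = np.linalg.solve(A, m2 - m1)
        return m1 + t * n1
    ```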

  7. A DXA validated geometric model for the calculation of body segment inertial parameters of young females.

    Science.gov (United States)

    Winter, Samantha Lee; Forrest, Sarah Michelle; Wallace, Joanne; Challis, John H

    2017-08-08

    The purpose of this study was to validate a new geometric solids model, developed to address the lack of female-specific models for body segment inertial parameter estimation. A second aim was to determine the effect of reducing the number of geometric solids used to model the limb segments on model accuracy. The 'full' model comprised 56 geometric solids, the 'reduced' 31, and the 'basic' 16. Predicted whole-body inertial parameters were compared with direct measurements (reaction board, scales), and predicted segmental parameters with those estimated from whole-body DXA scans for 28 females. The percentage root mean square error (%RMSE) for whole-body volume was …; overall, more geometric solids are required to more accurately model the trunk.

  8. Validation of connectivity-based thalamic segmentation with direct electrophysiologic recordings from human sensory thalamus.

    Science.gov (United States)

    Elias, W Jeffrey; Zheng, Zhong A; Domer, Paul; Quigg, Mark; Pouratian, Nader

    2012-02-01

    Connectivity-based segmentation has been used to identify functional gray matter subregions that are not discernable on conventional magnetic resonance imaging. However, the accuracy and reliability of this technique has only been validated using indirect means. In order to provide direct electrophysiologic validation of connectivity-based thalamic segmentations within human subjects, we assess the correlation of atlas-based thalamic anatomy, connectivity-based thalamic maps, and somatosensory evoked thalamic potentials in two adults with medication-refractory epilepsy who were undergoing intracranial EEG monitoring with intrathalamic depth and subdural cortical strip electrodes. MRI with atlas-derived localization was used to delineate the anatomic boundaries of the ventral posterolateral (VPL) nucleus of the thalamus. Somatosensory evoked potentials with intrathalamic electrodes physiologically identified a discrete region of phase reversal in the ventrolateral thalamus. Finally, DTI was obtained so that probabilistic tractography and connectivity-based segmentation could be performed to correlate the region of thalamus linked to sensory areas of the cortex, namely the postcentral gyrus. We independently utilized these three different methods in a blinded fashion to localize the "sensory" thalamus, demonstrating a high-degree of reproducible correlation between electrophysiologic and connectivity-based maps of the thalamus. This study provides direct electrophysiologic validation of probabilistic tractography-based thalamic segmentation. Importantly, this study provides an electrophysiological basis for using connectivity-based segmentation to further study subcortical anatomy and physiology while also providing the clinical basis for targeting deep brain nuclei with therapeutic stimulation. Finally, these direct recordings from human thalamus confirm early inferences of a sensory thalamic component of the N18 waveform in somatosensory evoked potentials.

  9. Design and Experimental Validation of a Simple Controller for a Multi-Segment Magnetic Crawler Robot

    Science.gov (United States)

    2015-04-01

    X., "Development of a wall climbing robot for ship rust removal," Int. Conf. on Mechatronics and Automation (ICMA), 4610-4615 (2009). [6] Leon...Design and experimental validation of a simple controller for a multi-segment magnetic crawler robot Leah Kelley*a, Saam Ostovari**b, Aaron B...magnetic crawler robot has been designed for ship hull inspection. In its simplest version, passive linkages that provide two degrees of relative

  10. Numerical simulation and experimental validation of aircraft ground deicing model

    OpenAIRE

    2016-01-01

    Aircraft ground deicing plays an important role of guaranteeing the aircraft safety. In practice, most airports generally use as many deicing fluids as possible to remove the ice, which causes the waste of the deicing fluids and the pollution of the environment. Therefore, the model of aircraft ground deicing should be built to establish the foundation for the subsequent research, such as the optimization of the deicing fluid consumption. In this article, the heat balance of the deicing proce...

  11. Validation of model-based pelvis bone segmentation from MR images for PET/MR attenuation correction

    Science.gov (United States)

    Renisch, S.; Blaffert, T.; Tang, J.; Hu, Z.

    2012-02-01

    With the recent introduction of combined Magnetic Resonance Imaging (MRI) / Positron Emission Tomography (PET) systems, the generation of attenuation maps for PET based on MR images gained substantial attention. One approach for this problem is the segmentation of structures on the MR images with subsequent filling of the segments with respective attenuation values. Structures of particular interest for the segmentation are the pelvis bones, since those are among the most heavily absorbing structures for many applications, and they can serve at the same time as valuable landmarks for further structure identification. In this work the model-based segmentation of the pelvis bones on gradient-echo MR images is investigated. A processing chain for the detection and segmentation of the pelvic bones is introduced, and the results are evaluated using CT-generated "ground truth" data. The results indicate that a model based segmentation of the pelvis bone is feasible with moderate requirements to the pre- and postprocessing steps of the segmentation.

  12. The CryoSat-2 Payload Data Ground Segment and Data Processing

    Science.gov (United States)

    Frommknecht, Bjoern; Parrinello, Tommaso; Badessi, Stefano; Mizzi, Loretta; Torroni, Vittorio

    2016-04-01

    Selected as the first Earth Explorer Opportunity mission and following the launch failure of CryoSat-1 in 2005, the CryoSat-2 mission was launched on 8 April 2010; it is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice over a baseline 3-year period. The main CryoSat-2 mission objectives can be summarised as the determination of regional and basin-scale trends in perennial Arctic sea ice thickness and mass, and of the regional and total contributions of the Antarctic and Greenland ice sheets to global sea level. The observations made over the lifetime of the mission will therefore provide conclusive evidence as to whether there is a trend towards diminishing polar ice cover and consequently improve our understanding of the relationship between ice and global climate change. The scope of this paper is to describe the present configuration of the CryoSat-2 ground segment and its main functions in satisfying the CryoSat-2 mission requirements. In particular, the paper highlights the current status of the processing of the SIRAL instrument L1b and L2 products, for both ocean and ice, in terms of completeness and availability. Additional information is also given on the current status and planned evolutions of the PDGS, including product and processor updates and associated reprocessing campaigns.

  13. Measured and estimated ground reaction forces for multi-segment foot models.

    Science.gov (United States)

    Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L; Richards, James G

    2010-12-01

    Accurate measurement of ground reaction forces under discrete areas of the foot is important in the development of more advanced foot models, which can improve our understanding of foot and ankle function. To overcome current equipment limitations, a few investigators have proposed combining a pressure mat with a single force platform and using a proportionality assumption to estimate subarea shear forces and free moments. In this study, two adjacent force platforms were used to evaluate the accuracy of the proportionality assumption on a three segment foot model during normal gait. Seventeen right feet were tested using a targeted walking approach, isolating two separate joints: transverse tarsal and metatarsophalangeal. Root mean square (RMS) errors in shear forces up to 6% body weight (BW) were found using the proportionality assumption, with the highest errors (peak absolute errors up to 12% BW) occurring between the forefoot and toes in terminal stance. The hallux exerted a small braking force in opposition to the propulsive force of the forefoot, which was unaccounted for by the proportionality assumption. While the assumption may be suitable for specific applications (e.g. gait analysis models), it is important to understand that some information on foot function can be lost. The results help highlight possible limitations of the assumption. Measured ensemble average subarea shear forces during normal gait are also presented for the first time.
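
    The proportionality assumption evaluated in this study has a very compact form: a subarea's shear is the total shear scaled by that subarea's share of the vertical load. A minimal sketch (function name assumed):

    ```python
    def subarea_shear(total_shear, subarea_vertical, total_vertical):
        """Proportionality assumption for multi-segment foot models.

        F_shear,sub = F_shear,total * (F_vert,sub / F_vert,total),
        with the vertical shares taken from a pressure mat. The study shows
        this can miss opposing shear between the forefoot and toes.
        """
        if total_vertical == 0:
            return 0.0
        return total_shear * (subarea_vertical / total_vertical)
    ```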

  14. Ship Grounding on Rock - II. Validation and Application

    DEFF Research Database (Denmark)

    Simonsen, Bo Cerup

    1997-01-01

    with errors less than 10%. The rock penetration to fracture is predicted with errors of 10-15%. The sensitivity to uncertain input parameters is discussed. Analysis of an accidental grounding that was recorded in 1975 also shows that the theoretical model can reproduce the observed damage. Finally...

  15. Characters Segmentation of Cursive Handwritten Words based on Contour Analysis and Neural Network Validation

    Directory of Open Access Journals (Sweden)

    Fajri Kurniawan

    2013-09-01

    Full Text Available This paper presents a robust algorithm to identify the letter boundaries in images of unconstrained handwritten words. The proposed algorithm is based on vertical contour analysis and is performed to generate a pre-segmentation by analyzing the vertical contours from right to left. The unwanted segmentation points are reduced using neural network validation to improve the accuracy of segmentation. The neural network is utilized to validate segmentation points. The experiments are performed on the IAM benchmark database. The results show that the proposed algorithm is capable of accurately locating the letter boundaries of unconstrained handwritten words.

  16. Automatic segmentation of ground-glass opacities in lung CT images by using Markov random field-based algorithms.

    Science.gov (United States)

    Zhu, Yanjie; Tan, Yongqing; Hua, Yanqing; Zhang, Guozhen; Zhang, Jianguo

    2012-06-01

    Chest radiologists rely on the segmentation and quantitative analysis of ground-glass opacities (GGO) to perform imaging diagnoses that evaluate the disease severity or recovery stage of diffuse parenchymal lung diseases. However, it is computationally difficult to segment and analyze patterns of GGO compared with other lung diseases, since GGO usually do not have clear boundaries. In this paper, we present a new approach which automatically segments GGO in lung computed tomography (CT) images using algorithms derived from Markov random field theory. Further, we systematically evaluate the performance of the algorithms in segmenting GGO in lung CT images under different situations. CT image studies from 41 patients with diffuse lung diseases were enrolled in this research. The local distributions were modeled with both simple and adaptive (AMAP) models of maximum a posteriori (MAP). For best segmentation, we used the simulated annealing algorithm with a Gibbs sampler to solve the combinatorial optimization problem of the MAP estimators, and we applied a knowledge-guided strategy to reduce false-positive regions. We achieved AMAP-based GGO segmentation results of 86.94%, 94.33%, and 94.06% in average sensitivity, specificity, and accuracy, respectively, and we evaluated the performance using radiologists' subjective evaluation and quantitative analysis and diagnosis. We also compared the results of AMAP-based GGO segmentation with those of support vector machine-based methods, and we discuss the reliability and other issues of AMAP-based GGO segmentation. Our research results demonstrate the acceptability and usefulness of AMAP-based GGO segmentation for assisting radiologists in detecting GGO in high-resolution CT diagnostic procedures.

  17. Validating Microwave-Based Satellite Rain Rate Retrievals Over TRMM Ground Validation Sites

    Science.gov (United States)

    Fisher, B. L.; Wolff, D. B.

    2008-12-01

    Multi-channel passive microwave instruments are commonly used today to probe the structure of rain systems and to estimate surface rainfall from space. Until the advent of meteorological satellites and the development of remote sensing techniques for measuring precipitation from space, there was no observational system capable of providing accurate estimates of surface precipitation on global scales. Since the early 1970s, microwave measurements from satellites have provided quantitative estimates of surface rainfall by observing the emission and scattering processes due to the existence of clouds and precipitation in the atmosphere. This study assesses the relative performance of microwave precipitation estimates from seven polar-orbiting satellites and the TRMM TMI using four years (2003-2006) of instantaneous radar rain estimates obtained from Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) sites at Kwajalein, Republic of the Marshall Islands (KWAJ), and Melbourne, Florida (MELB). The seven polar orbiters carry three different sensor types: SSM/I (F13, F14 and F15), AMSU-B (N15, N16 and N17), and AMSR-E. The TMI aboard the TRMM satellite flies in a sun-asynchronous orbit between 35°S and 35°N latitude. The rain information from these satellites is combined and used to generate several multi-satellite rain products, namely the Goddard TRMM Multi-satellite Precipitation Analysis (TMPA), NOAA's CPC Morphing Technique (CMORPH) and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN). Instantaneous rain rates derived from each sensor were matched to the GV estimates in time and space at a resolution of 0.25 degrees. The study evaluates the measurement and error characteristics of the various satellite estimates through inter-comparisons with GV radar estimates. The GV rain observations provide an empirical ground-based reference for assessing the relative performance of each sensor and sensor

  18. Validation of a Torso-Mounted Accelerometer for Measures of Vertical Oscillation and Ground Contact Time During Treadmill Running.

    Science.gov (United States)

    Watari, Ricky; Hettinga, Blayne; Osis, Sean; Ferber, Reed

    2016-06-01

    The purpose of this study was to validate measures of vertical oscillation (VO) and ground contact time (GCT) derived from a commercially available, torso-mounted accelerometer compared with single-marker kinematics and kinetic ground reaction force (GRF) data. Twenty-two semi-elite runners ran on an instrumented treadmill while GRF data (1000 Hz) and three-dimensional kinematics (200 Hz) were collected for 60 s across 5 different running speeds ranging from 2.7 to 3.9 m/s. Measurement agreement was assessed by Bland-Altman plots with 95% limits of agreement and by the concordance correlation coefficient (CCC). The accelerometer had excellent CCC agreement (> 0.97) with marker kinematics, but only moderate agreement with GRF-derived VO measures, which it overestimated by 16.27 mm to 17.56 mm. The GCT measures from the accelerometer had very good CCC agreement with GRF data, with less than 6 ms of mean bias at higher speeds. These results indicate that a torso-mounted accelerometer provides valid and accurate measures of torso-segment VO, but both a marker placed on the torso and the accelerometer yield systematic overestimations of center-of-mass VO. Measures of GCT from the accelerometer are valid when compared with GRF data, particularly at faster running speeds.
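
    Both agreement statistics used in this study are easy to reproduce. The sketch below computes the Bland-Altman bias with 95% limits of agreement and Lin's concordance correlation coefficient for paired measures; the sample vertical-oscillation values are invented, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired measures."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def lin_ccc(a, b):
    """Lin's concordance correlation coefficient."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    s = np.cov(a, b, bias=True)                     # population covariance
    return 2 * s[0, 1] / (s[0, 0] + s[1, 1] + (a.mean() - b.mean()) ** 2)

# Hypothetical paired vertical-oscillation measures (mm)
acc = np.array([83.1, 90.4, 77.8, 95.2, 88.0, 81.5])   # accelerometer
grf = np.array([66.0, 73.5, 61.2, 78.9, 70.7, 65.1])   # force-plate reference
bias, loa = bland_altman(acc, grf)
print(f"bias {bias:.2f} mm, LoA {loa[0]:.2f} to {loa[1]:.2f} mm, "
      f"CCC {lin_ccc(acc, grf):.3f}")
```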

  19. A validated active contour method driven by parabolic arc model for detection and segmentation of mitochondria.

    Science.gov (United States)

    Tasel, Serdar F; Mumcuoglu, Erkan U; Hassanpour, Reza Z; Perkins, Guy

    2016-06-01

    Recent studies reveal that mitochondria take substantial responsibility in cellular functions that are closely related to aging diseases caused by degeneration of neurons. These studies emphasize that the membrane and crista morphology of a mitochondrion should receive attention in order to investigate the link between mitochondrial function and its physical structure. Electron microscope tomography (EMT) allows analysis of the inner structures of mitochondria by providing highly detailed visual data from large volumes. Computerized segmentation of mitochondria with minimum manual effort is essential to accelerate the study of mitochondrial structure/function relationships. In this work, we improved and extended our previous attempts to detect and segment mitochondria from transmission electron microscopy (TEM) images. A parabolic arc model was utilized to extract membrane structures. Then, curve-energy-based active contours were employed to obtain roughly outlined candidate mitochondrial regions. Finally, a validation process was applied to obtain the final segmentation data. A 3D extension of the algorithm is also presented in this paper. Our method achieved an average F-score of 0.84. The average Dice Similarity Coefficient and boundary error were measured as 0.87 and 14 nm, respectively.

  1. Ground Water Atlas of the United States: Segment 4, Oklahoma, Texas

    Science.gov (United States)

    Ryder, Paul D.

    1996-01-01

    The two States, Oklahoma and Texas, that compose Segment 4 of this Atlas are located in the south-central part of the Nation. These States are drained by numerous rivers and streams, the largest being the Arkansas, the Canadian, the Red, the Sabine, the Trinity, the Brazos, the Colorado, and the Pecos Rivers and the Rio Grande. Many of these rivers and their tributaries supply large amounts of water for human use, mostly in the eastern parts of the two States. The large perennial streams in the east with their many associated impoundments coincide with areas that have dense populations. Large metropolitan areas such as Oklahoma City and Tulsa, Okla., and Dallas, Fort Worth, Houston, and Austin, Tex., are supplied largely or entirely by surface water. However, in 1985 more than 7.5 million people, or about 42 percent of the population of the two States, depended on ground water as a source of water supply. The metropolitan areas of San Antonio and El Paso, Tex., and numerous smaller communities depend largely or entirely on ground water for their source of supply. The ground water is contained in aquifers that consist of unconsolidated deposits and consolidated sedimentary rocks. This chapter describes the geology and hydrology of each of the principal aquifers throughout the two-State area. Precipitation is the source of all the water in Oklahoma and Texas. Average annual precipitation ranges from about 8 inches per year in southwestern Texas to about 56 inches per year in southeastern Texas (fig. 1). In general, precipitation increases rather uniformly from west to east in the two States. Much of the precipitation either flows directly into rivers and streams as overland runoff or indirectly as base flow that discharges from aquifers where the water has been stored for some time. Accordingly, the areal distribution of average annual runoff from 1951 to 1980 (fig. 2) reflects that of average annual precipitation. Average annual runoff in the two-State area ranges

  2. Development and validation of intracranial thrombus segmentation on CT angiography in patients with acute ischemic stroke.

    Directory of Open Access Journals (Sweden)

    Emilie M M Santos

    Thrombus characterization is increasingly considered important in predicting treatment success for patients with acute ischemic stroke. The lack of intensity contrast between the thrombus and surrounding tissue in CT images makes manual delineation a difficult and time-consuming task. Our aim was to develop an automated method for thrombus measurement on CT angiography and validate it against manual delineation. Automated thrombus segmentation was achieved using image intensity and a vascular shape prior derived from the segmentation of the contralateral artery. In 53 patients with acute ischemic stroke due to proximal intracranial arterial occlusion, automated length and volume measurements were performed. Accuracy was assessed by comparison with the inter-observer variation of manual delineations using intraclass correlation coefficients and Bland-Altman analyses. The automated method successfully segmented the thrombus for all 53 patients. The intraclass correlations of automated and manual length and volume measurements were 0.89 and 0.84. Bland-Altman analyses yielded a bias (limits of agreement) of -0.4 (-8.8, 7.7) mm and 8 (-126, 141) mm³ for length and volume, respectively. This was comparable to the best inter-observer agreement, with intraclass correlation coefficients of 0.90 and 0.85 and a bias (limits of agreement) of -0.1 (-11.2, 10.9) mm and -17 (-216, 185) mm³. The method facilitates automated thrombus segmentation for accurate length and volume measurements, is relatively fast and requires minimal user input, while being insensitive to high hematocrit levels and vascular calcifications. Furthermore, it has the potential to assess thrombus characteristics of low-density thrombi.

  3. How human resource organization can enhance space information acquisition and processing: the experience of the VENESAT-1 ground segment

    Science.gov (United States)

    Acevedo, Romina; Orihuela, Nuris; Blanco, Rafael; Varela, Francisco; Camacho, Enrique; Urbina, Marianela; Aponte, Luis Gabriel; Vallenilla, Leopoldo; Acuña, Liana; Becerra, Roberto; Tabare, Terepaima; Recaredo, Erica

    2009-12-01

    Built in cooperation with the P.R. of China and launched on October 29th, 2008, the so-called VENESAT-1 (Simón Bolívar Satellite) is the first telecommunication satellite of the Bolivarian Republic of Venezuela. It operates in the C band (covering Central America, the Caribbean region and most of South America), the Ku band (Bolivia, Cuba, Dominican Republic, Haiti, Paraguay, Uruguay, Venezuela) and the Ka band (Venezuela). The launch of VENESAT-1 represents the starting point for Venezuela as an active player in the field of space science and technology. In order to fulfill mission requirements and to guarantee the satellite's health, local professionals must provide continuous monitoring, orbit calculation, maneuver preparation and execution, data preparation and processing, as well as database management at the VENESAT-1 Ground Segment, which includes both a primary and a backup site. In summary, data processing and real-time data management are part of the daily activities performed by the personnel at the ground segment. Using published and unpublished information, this paper presents how human resource organization can enhance space information acquisition and processing, by analyzing the proposed organizational structure for the VENESAT-1 Ground Segment. We have found that the proposed units within the organizational structure reflect 3 key issues for mission management: satellite operations, ground operations, and site maintenance. The proposed organization is simple (3 hierarchical levels and 7 units), and communication channels seem efficient in terms of facilitating information acquisition, processing, storage, flow and exchange. Furthermore, the proposal includes a manual containing the full description of personnel responsibilities and profiles, which efficiently allocates the management and operation of key software for satellite operation such as the Real-time Data Transaction Software (RDTS), Data Management Software (DMS), and Carrier Spectrum Monitoring Software (CSM

  4. The validation index: a new metric for validation of segmentation algorithms using two or more expert outlines with application to radiotherapy planning.

    Science.gov (United States)

    Juneja, Prabhjot; Evans, Philp M; Harris, Emma J

    2013-08-01

    Validation is required to ensure that automated segmentation algorithms are suitable for radiotherapy target definition. In the absence of a true segmentation, algorithmic segmentation is validated against expert outlining of the region of interest. Multiple experts are used to overcome inter-expert variability. Several approaches have been studied in the literature, but the most appropriate way to combine the information from multiple expert outlines into a single validation metric is unclear. None considers a metric that can be tailored to case-specific requirements in radiotherapy planning. The validation index (VI), a new validation metric which uses the experts' level of agreement, was developed. A control parameter was introduced for the validation of segmentations required for different radiotherapy scenarios: for targets close to organs-at-risk and for difficult-to-discern targets, where large variation between experts is expected. VI was evaluated using two simulated idealized cases and data from two clinical studies. VI was compared with the commonly used pair-wise Dice similarity coefficient (DSC) and found to be more sensitive than the pair-wise DSC to changes in agreement between experts. VI was shown to be adaptable to specific radiotherapy planning scenarios.

  5. Subscale Validation of the Subsurface Active Filtration of Exhaust (SAFE) Approach to the NTP Ground Testing

    Science.gov (United States)

    Marshall, William M.; Borowski, Stanley K.; Bulman, Mel; Joyner, Russell; Martin, Charles R.

    2015-01-01

    Nuclear thermal propulsion (NTP) has been recognized as an enabling technology for missions to Mars and beyond. However, one of the key challenges of developing a nuclear thermal rocket is conducting verification and development tests on the ground. A number of ground test options are presented, with the Sub-surface Active Filtration of Exhaust (SAFE) method identified as a preferred path forward for the NTP program. The SAFE concept utilizes the natural soil characteristics present at the Nevada National Security Site to provide a natural filter for nuclear rocket exhaust during ground testing. A validation approach for the SAFE concept is presented, utilizing a non-nuclear sub-scale hydrogen/oxygen rocket seeded with detectable radioisotopes. Additionally, some alternative ground test concepts, based upon the SAFE concept, are presented. Finally, an overview of the ongoing discussions on developing a ground test campaign is presented.

  6. Empirical Validation of Objective Functions in Feature Selection Based on Acceleration Motion Segmentation Data

    Directory of Open Access Journals (Sweden)

    Jong Gwan Lim

    2015-01-01

    Recent change in evaluation criteria from accuracy alone to a trade-off with time delay has inspired multivariate energy-based approaches to motion segmentation using acceleration. The essence of multivariate approaches lies in the construction of highly dimensional energy, which requires feature subset selection in machine learning. Filter methods are preferred because they are fast; however, their poorer estimates are a main concern. This paper aims at the empirical validation of three objective functions for filter approaches, the Fisher discriminant ratio, multiple correlation (MC), and mutual information (MI), through two subsequent experiments. With respect to the 63 possible subsets of 6 variables for acceleration motion segmentation, the three functions, in addition to a theoretical measure, are compared with two wrappers, k-nearest neighbor and Bayes classifiers, in general statistics and in strongly relevant variable identification by social network analysis. Then four kinds of newly proposed multivariate energy are compared with a conventional univariate approach in terms of accuracy and time delay. Finally, it appears that MC and MI are acceptable enough to match the estimates of the two wrappers, and multivariate approaches are justified by our analytic procedures.
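
    Of the three filter objectives, the Fisher discriminant ratio is a one-liner and mutual information is available off the shelf. A minimal sketch on a synthetic two-class feature matrix (the data and feature semantics are invented):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_ratio(x, y):
    """Fisher discriminant ratio of one feature for binary labels."""
    x0, x1 = x[y == 0], x[y == 1]
    return (x0.mean() - x1.mean()) ** 2 / (x0.var() + x1.var() + 1e-12)

rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 2, n)                    # motion vs. rest labels
X = np.column_stack([
    rng.normal(1.5 * y, 1.0),                # informative energy feature
    rng.normal(0.0, 1.0, n),                 # pure-noise feature
])
fisher = [fisher_ratio(X[:, j], y) for j in range(X.shape[1])]
mi = mutual_info_classif(X, y, random_state=0)
print("Fisher:", np.round(fisher, 3), "MI:", np.round(mi, 3))
```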

  7. Validation of Aura OMI by Aircraft and Ground-Based Measurements

    Science.gov (United States)

    McPeters, R. D.; Petropavlovskikh, I.; Kroon, M.

    2006-12-01

    Both aircraft-based and ground-based measurements have been used to validate ozone measurements by the OMI instrument on Aura. Three Aura Validation Experiment (AVE) flight campaigns have been conducted: in November 2004 and June 2005 with the NASA WB57, and in January/February 2005 with the NASA DC-8. On these flights, validation of OMI was primarily done using data from the CAFS (CCD Actinic Flux Spectroradiometer) instrument, which is used to measure total column ozone above the aircraft. These measurements are used to differentiate changes in stratospheric ozone from changes in total column ozone. Also, changes in ozone over high clouds measured by OMI were checked during a flight over tropical storm Arlene on June 11th. Ground-based measurements were made during the SAUNA campaign in Sodankylä, Finland, in March and April 2006. Both total column ozone and the ozone vertical distribution were validated.

  8. Contrast validation test for retrieval method of high frequency ground wave radar

    Institute of Scientific and Technical Information of China (English)

    WANG Hailong; GUO Peifang; HAN Shuzong; XIE Qiang; ZHOU Liangming

    2005-01-01

    On the basis of the working principles of high-frequency ground wave radar for the retrieval of ocean wave and sea wind elements, this paper systematically studies the data obtained from a contrast validation test conducted in the Zhoushan sea area of Zhejiang Province in October 2000, in order to validate the accuracy of OSMAR2000 for wave and wind parameters and to analyze the possible errors introduced when OSMAR2000 is used to retrieve ocean parameters.

  9. A Ground-Based Validation System of Teleoperation for a Space Robot

    OpenAIRE

    Xueqian Wang; Houde Liu; Wenfu Xu; Bin Liang; Yingchun Zhang

    2012-01-01

    Teleoperation of space robots is very important for future on‐orbit service. In order to assure the task is accomplished successfully, ground experiments are required to verify the function and validity of the teleoperation system before a space robot is launched. In this paper, a ground‐based validation subsystem is developed as a part of a teleoperation system. The subsystem is mainly composed of four parts: the input verification module, the onboard verification module, the dynamic and ima...

  10. Consolidated Ground Segment Requirements for a UHF Radar for the ESSAS

    Science.gov (United States)

    Muller, Florent; Vera, Juan

    2009-03-01

    ESA has launched a nine-month study to define the requirements associated with the ground segment of a UHF (300-3000 MHz) radar system. The study has been awarded in open competition to a consortium led by Onera, associated with the Spanish company Indra and its sub-contractor Deimos. After a phase of consolidation of the requirements, different monostatic and bistatic radar concepts will be proposed and evaluated. Two concepts will be selected for further design studies. ESA will then select the best one for detailed design as well as cost and performance evaluation. The aim of this paper is to present the results of the first phase of the study, concerning the consolidation of the radar system requirements. The main mission for the system is to build and maintain a catalogue of the objects in low Earth orbit (apogee lower than 2000 km) in an autonomous way, for different sizes of objects, depending on the future successive development phases of the project. The final step must give the capability of detecting and tracking 10 cm objects, with a possible upgrade to 5 cm objects. A demonstration phase must be defined for 1 m objects. These different steps will be considered during all phases of the study. Taking this mission and these steps as a starting point, the first phase defined a set of requirements for the radar system; it was finished at the end of January 2009. The first part of the paper describes the constraints derived from the targets and their environment. Orbiting objects have a given distribution in space, and their observability and detectability are based on it as well as on the location of the radar system. They also depend on natural propagation phenomena, especially ionospheric effects, and on the characteristics of the objects. The second part focuses on the mission itself. To carry out the mission, objects must be detected and tracked regularly to refresh the associated orbital parameters

  11. Identifying food-related life style segments by a cross-culturally valid scaling device

    DEFF Research Database (Denmark)

    Brunsø, Karen; Grunert, Klaus G.

    1994-01-01

    We present a new view of life style, based on a cognitive perspective, which makes life style specific to certain areas of consumption. The specific area of consumption studied here is food, resulting in a concept of food-related life style. An instrument is developed that can measure food-related life style in a cross-culturally valid way. To this end, we have collected a pool of 202 items, collected data in three countries, and constructed scales based on cross-culturally stable patterns. These scales have then been subjected to a number of tests of reliability and validity. We have then applied the set of scales to a fourth country, Germany, based on a representative sample of 1000 respondents. The scales had, with a few exceptions, moderately good reliabilities. A cluster analysis led to the identification of 5 segments, which differed on all 23 scales.

  12. Feedback enhances feedforward figure-ground segmentation by changing firing mode

    OpenAIRE

    Hans Supèr; August Romeo

    2011-01-01

    In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-gro

  13. Validation of Broadband Ground Motion Simulations for Japanese Crustal Earthquakes by the Recipe

    Science.gov (United States)

    Iwaki, A.; Maeda, T.; Morikawa, N.; Miyake, H.; Fujiwara, H.

    2015-12-01

    The Headquarters for Earthquake Research Promotion (HERP) of Japan has organized the broadband ground motion simulation method into a standard procedure called the "recipe" (HERP, 2009). In the recipe, the source rupture is represented by the characterized source model (Irikura and Miyake, 2011). The broadband ground motion time histories are computed by a hybrid approach: the 3-D finite-difference method (Aoi et al. 2004) for the long-period (> 1 s) range and the stochastic Green's function method (Dan and Sato, 1998; Dan et al. 2000) for the short-period (< 1 s) range, using a 3-D velocity structure model. As the engineering significance of scenario earthquake ground motion prediction is increasing, thorough verification and validation are required for the simulation methods. This study presents the self-validation of the recipe for two MW6.6 crustal events in Japan, the 2000 Tottori and 2004 Chuetsu (Niigata) earthquakes. We first compare the simulated velocity time series with the observations. The main features of the velocity waveforms, such as the near-fault pulses and the large later phases on deep sediment sites, are well reproduced by the simulations. Then we evaluate 5% damped pseudo-acceleration spectra (PSA) in the framework of the SCEC Broadband Platform (BBP) validation (Dreger et al. 2015). The validation results are generally acceptable in the period range 0.1-10 s, whereas those in the shortest period range (0.01-0.1 s) are less satisfactory. We also evaluate the simulations with the 1-D velocity structure models used in the SCEC BBP validation exercise. Although the goodness-of-fit parameters for PSA do not significantly differ from those for the 3-D velocity structure model, noticeable differences in velocity waveforms are observed. Our results suggest the importance of 1) a well-constrained 3-D velocity structure model for broadband ground motion simulations and 2) evaluation of the time series of ground motion as well as response spectra.
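
    The 5%-damped PSA used in these goodness-of-fit evaluations can be computed directly from an acceleration time history with a single-degree-of-freedom integrator. The sketch below uses the textbook Newmark average-acceleration scheme on a synthetic record; it illustrates the metric only and is not the HERP recipe code.

```python
import numpy as np

def psa_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration spectrum of ground acceleration `ag` via the
    Newmark average-acceleration method (unit-mass SDOF oscillator)."""
    out = []
    for T in periods:
        w = 2.0 * np.pi / T
        k, c = w ** 2, 2.0 * zeta * w
        keff = k + 2.0 * c / dt + 4.0 / dt ** 2
        u = v = 0.0
        a = -ag[0]                       # initial acceleration with u = v = 0
        umax = 0.0
        for p in -ag[1:]:
            rhs = p + (4.0 / dt ** 2 + 2.0 * c / dt) * u + (4.0 / dt + c) * v + a
            un = rhs / keff
            vn = 2.0 * (un - u) / dt - v
            a = p - c * vn - k * un
            u, v = un, vn
            umax = max(umax, abs(u))
        out.append(w ** 2 * umax)        # PSA = omega^2 * max|u|
    return np.asarray(out)

dt = 0.01
t = np.arange(0.0, 10.0, dt)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.3 * t)  # synthetic record
print(np.round(psa_spectrum(ag, dt, np.logspace(-1, 1, 9)), 2))
```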

  14. Molecular species identification of Central European ground beetles (Coleoptera: Carabidae) using nuclear rDNA expansion segments and DNA barcodes

    Directory of Open Access Journals (Sweden)

    Raupach Michael J

    2010-09-01

    Background: The identification of vast numbers of unknown organisms using DNA sequences is becoming more and more important in ecological and biodiversity studies. In this context, a fragment of the mitochondrial cytochrome c oxidase I (COI) gene has been proposed as a standard DNA barcoding marker for the identification of organisms. Limitations of the COI barcoding approach can arise from its single-locus identification system, the effect of introgression events, incomplete lineage sorting, numts, heteroplasmy and maternal inheritance of intracellular endosymbionts. Consequently, the analysis of a supplementary nuclear marker system could be advantageous. Results: We tested the effectiveness of the COI barcoding region and of three nuclear ribosomal expansion segments in discriminating ground beetles of Central Europe, a diverse and well-studied invertebrate taxon. As nuclear markers we determined the 18S rDNA: V4, 18S rDNA: V7 and 28S rDNA: D3 expansion segments for 344 specimens of 75 species. Seventy-three (97%) of the analysed species could be accurately identified using COI, while the combined approach of all three nuclear markers provided resolution among 71 (95%) of the studied Carabidae. Conclusion: Our results confirm that the analysed nuclear ribosomal expansion segments in combination constitute a valuable and efficient supplement to classical DNA barcoding, helping to avoid potential pitfalls when only mitochondrial data are used. We also demonstrate the high potential of COI barcodes for the identification of even closely related carabid species.

  15. Development and validation of a segmentation-free polyenergetic algorithm for dynamic perfusion computed tomography.

    Science.gov (United States)

    Lin, Yuan; Samei, Ehsan

    2016-07-01

    Dynamic perfusion imaging can provide the morphologic details of the scanned organs as well as dynamic information on blood perfusion. However, due to the polyenergetic property of x-ray spectra, the beam hardening effect results in undesirable artifacts and inaccurate CT values. To address this problem, this study proposes a segmentation-free polyenergetic dynamic perfusion imaging algorithm (pDP) to provide superior perfusion imaging. A dynamic perfusion acquisition is usually composed of two phases, i.e., a precontrast phase and a postcontrast phase. In the precontrast phase, the attenuation properties of diverse base materials (e.g., in a thorax perfusion exam, base materials can include lung, fat, breast, soft tissue, bone, and metal implants) can be incorporated to reconstruct artifact-free precontrast images. If patient motion is negligible or can be corrected by registration, the precontrast images can then be employed as a priori information to derive linearized iodine projections from the postcontrast images. With the linearized iodine projections, iodine perfusion maps can be reconstructed directly without the influence of various factors, such as iodine location, patient size, x-ray spectrum, and background tissue type. A series of simulations were conducted on a dynamic iodine calibration phantom and a dynamic anthropomorphic thorax phantom to validate the proposed algorithm. The simulations with the dynamic iodine calibration phantom showed that the proposed algorithm could effectively eliminate the beam hardening effect and enable quantitative iodine map reconstruction across various influential factors. The error range of the iodine concentration factors ([Formula: see text]) was reduced from [Formula: see text] for filtered back-projection (FBP) to [Formula: see text] for pDP. The quantitative results of the simulations with the dynamic anthropomorphic thorax phantom indicated that the maximum error of iodine concentrations can be reduced from

  16. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    Science.gov (United States)

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and of body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analysis, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (the BI index) for the whole body and each of the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, and no systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and of body segments in children, including overweight individuals, although its application for estimating arm FFM in overweight individuals requires a certain modification.
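
    The prediction model here is ordinary simple regression of DXA-measured FFM on the BI index, (segment length)²/Z. A minimal sketch with invented calibration numbers for one segment:

```python
import numpy as np

# Hypothetical calibration data for one segment (e.g. the legs):
bi_index = np.array([18.4, 22.1, 25.7, 30.2, 33.8, 37.5])   # (length)^2/Z
ffm_dxa = np.array([4.1, 5.0, 5.9, 6.9, 7.6, 8.6])          # DXA FFM, kg

slope, intercept = np.polyfit(bi_index, ffm_dxa, 1)          # simple regression
pred = slope * bi_index + intercept
see = np.sqrt(np.sum((ffm_dxa - pred) ** 2) / (len(ffm_dxa) - 2))
r2 = np.corrcoef(bi_index, ffm_dxa)[0, 1] ** 2
print(f"FFM = {slope:.3f} * BI + {intercept:.3f}  (R^2 = {r2:.3f}, SEE = {see:.2f} kg)")
```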

  17. Segmenting and validating brain tissue definitions in the presence of varying tissue contrast.

    Science.gov (United States)

    Bansal, Ravi; Hao, Xuejun; Peterson, Bradley S

    2017-01-01

    We propose a method for segmenting brain tissue as either gray matter or white matter in the presence of varying tissue contrast, which can derive from either differential changes in tissue water content or increasing myelin content of white matter. Our method models the spatial distribution of intensities as a Markov Random Field (MRF) and estimates the parameters for the MRF model using a maximum likelihood approach. Although previously described methods have used similar models to segment brain tissue, our accurate modeling of the conditional probabilities of tissue intensities, together with adaptive estimation of tissue properties from local intensities, generates tissue definitions that are accurate and robust to variations in tissue contrast with age and across illnesses. Robustness to variations in tissue contrast is important for understanding normal brain development and for identifying the brain bases of neurological and psychiatric illnesses. We used simulated brains of varying tissue contrast to compare, both visually and quantitatively, the performance of our method with the performance of prior methods. We assessed the validity of the cortical definitions by associating cortical thickness with various demographic features, clinical measures, and medication use in our three large cohorts of participants who were either healthy or who had Bipolar Disorder (BD), Autism Spectrum Disorder (ASD), or familial risk for Major Depressive Disorder (MDD). We assessed the validity of the tissue definitions using synthetic brains and data for three large cohorts of individuals with various neuropsychiatric disorders. Visual inspection and quantitative analyses showed that our method accurately and robustly defined the cortical mantle in brain images with varying contrast. Furthermore, associating thickness with various demographic and clinical measures generated findings that were novel and supported by histological analyses or by previous MRI studies, thereby validating the cortical

  18. Technology Infusion of CodeSonar into the Space Network Ground Segment (RII07)

    Science.gov (United States)

    Benson, Markland

    2008-01-01

    The NASA Software Assurance Research Program (in part) performs studies on the feasibility of technologies for improving the safety, quality, reliability, cost, and performance of NASA software. This study considers the application of commercial automated source code analysis tools to mission-critical ground software that is in the operations and sustainment portion of the product lifecycle.

  19. Semiautomatic regional segmentation to measure orbital fat volumes in thyroid-associated ophthalmopathy. A validation study.

    Science.gov (United States)

    Comerci, M; Elefante, A; Strianese, D; Senese, R; Bonavolontà, P; Alfano, B; Bonavolontà, B; Brunetti, A

    2013-08-01

    This study was designed to validate a novel semi-automated segmentation method to measure regional intra-orbital fat tissue volume in Graves' ophthalmopathy. Twenty-four orbits from 12 patients with Graves' ophthalmopathy, 24 orbits from 12 controls, ten orbits from five MRI study simulations and two orbits from a digital model were used. Following manual region of interest definition of the orbital volumes performed by two operators with different levels of expertise, an automated procedure calculated intra-orbital fat tissue volumes (global and regional, with automated definition of four quadrants). In patients with Graves' disease, clinical activity score and degree of exophthalmos were measured and correlated with intra-orbital fat volumes. Operator performance was evaluated and statistical analysis of the measurements was performed. Accurate intra-orbital fat volume measurements were obtained with coefficients of variation below 5%. The mean operator difference in total fat volume measurements was 0.56%. Patients had significantly higher intra-orbital fat volumes than controls (p<0.001 using Student's t test). Fat volumes and clinical score were significantly correlated (p<0.001). The semi-automated method described here can provide accurate, reproducible intra-orbital fat measurements with low inter-operator variation and good correlation with clinical data.

  20. Ground Truth Observations of the Interior of a Rockglacier as Validation for Geophysical Monitoring Data Sets

    Science.gov (United States)

    Hilbich, C.; Roer, I.; Hauck, C.

    2007-12-01

    Monitoring the permafrost evolution in mountain regions is currently one of the important tasks in cryospheric studies as little data on past and present changes of the ground thermal regime and its material properties are available. In addition to recently established borehole temperature monitoring networks, techniques to determine and monitor the ground ice content have to be developed. A reliable quantification of ground ice is especially important for modelling the thermal evolution of frozen ground and for assessing the hazard potential due to thawing permafrost induced slope instability. Near surface geophysical methods are increasingly applied to detect and monitor ground ice occurrences in permafrost areas. Commonly, characteristic values of electrical resistivity and seismic velocity are used as indicators for the presence of frozen material. However, validation of the correct interpretation of the geophysical parameters can only be obtained through boreholes, and only regarding vertical temperature profiles. Ground truth of the internal structure and the ice content is usually not available. In this contribution we will present a unique data set from a recently excavated rockglacier near Zermatt/Valais in the Swiss Alps, where an approximately 5 m deep trench was cut across the rockglacier body for the construction of a ski track. Longitudinal electrical resistivity tomography (ERT) and refraction seismic tomography profiles were conducted prior to the excavation, yielding data sets for cross validation of commonly applied geophysical interpretation approaches in the context of ground ice detection. A recently developed 4-phase model was applied to calculate ice-, air- and unfrozen water contents from the geophysical data sets, which were compared to the ground truth data from the excavated trench. The obtained data sets will be discussed in the context of currently established geophysical monitoring networks in permafrost areas. In addition to the

  1. Demonstration/Validation of the Snap Sampler Passive Ground Water Sampling Device

    Science.gov (United States)

    2011-06-01

    direction, although the direction of groundwater movement locally is influenced by water-supply wells and by groundwater extraction and treatment systems ... range of analyte types. These included dissolved and total inorganics (including non-metal anions, metalloids, and metals) and four volatile organic

  2. Comparison of vertical ground reaction forces during overground and treadmill running. A validation study

    Directory of Open Access Journals (Sweden)

    Kluitenberg Bas

    2012-11-01

    Background: One major drawback in measuring ground-reaction forces during running is that it is time-consuming to get representative ground-reaction force (GRF) values with a traditional force platform. An instrumented force-measuring treadmill can overcome the shortcomings inherent to overground testing. The purpose of the current study was to determine the validity of an instrumented force-measuring treadmill for measuring vertical ground-reaction force parameters during running. Methods: Vertical ground-reaction forces of experienced runners (12 male, 12 female) were obtained during overground and treadmill running at slow, preferred and fast self-selected running speeds. For each runner, 7 mean vertical ground-reaction force parameters of the right leg were calculated based on five successful overground steps and 30 seconds of treadmill running data. Intraclass correlations (ICC(3,1)) and ratio limits of agreement (RLOA) were used for further analysis. Results: Qualitatively, the overground and treadmill ground-reaction force curves for heelstrike runners and non-heelstrike runners were very similar. Quantitatively, the time-related parameters and active peak showed excellent agreement (ICCs between 0.76 and 0.95, RLOA between 5.7% and 15.5%). The impact peak showed modest agreement (ICCs between 0.71 and 0.76, RLOA between 19.9% and 28.8%). The maximal and average loading rates showed modest to excellent ICCs (between 0.70 and 0.89), but RLOA were higher (between 34.3% and 45.4%). Conclusions: The results of this study demonstrated that the treadmill is a moderately to highly valid tool for the assessment of vertical ground-reaction forces during running for runners who showed a consistent landing strategy during overground and treadmill running. The high stride-to-stride variance during both overground and treadmill running demonstrates the importance of measuring sufficient steps for representative ground-reaction force values. Therefore, an
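
    ICC(3,1), the Shrout-Fleiss two-way mixed, single-measure, consistency coefficient used above, follows directly from a two-way ANOVA decomposition. A minimal sketch on invented paired data:

```python
import numpy as np

def icc_3_1(ratings):
    """Shrout & Fleiss ICC(3,1) for an (n targets x k methods) array."""
    r = np.asarray(ratings, float)
    n, k = r.shape
    grand = r.mean()
    bms = k * np.sum((r.mean(axis=1) - grand) ** 2) / (n - 1)    # targets
    sse = np.sum((r - r.mean(axis=1, keepdims=True)
                  - r.mean(axis=0, keepdims=True) + grand) ** 2)
    ems = sse / ((n - 1) * (k - 1))                              # residual
    return (bms - ems) / (bms + (k - 1) * ems)

# Hypothetical overground vs. treadmill values of one GRF parameter
ratings = np.array([[251, 247], [239, 243], [262, 258], [231, 236], [244, 246]])
print(f"ICC(3,1) = {icc_3_1(ratings):.3f}")
```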

  3. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation

    Science.gov (United States)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-01

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take respiratory motion into account and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, the segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In the phantom evaluation, the physical properties of the objects defined the gold standard. In the clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of the user's initial interaction; and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of the Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In the clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME_exp = 0 ± 3 mm; ΔME_clin = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA showed the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume
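
    The three overlap metrics reported above come straight from the voxel-wise confusion counts of a binary segmentation against a reference mask. A minimal sketch on synthetic masks:

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Dice similarity coefficient, positive predictive value, sensitivity."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    tp = np.sum(seg & ref)
    fp = np.sum(seg & ~ref)
    fn = np.sum(~seg & ref)
    return 2 * tp / (2 * tp + fp + fn), tp / (tp + fp), tp / (tp + fn)

ref = np.zeros((32, 32), bool); ref[8:24, 8:24] = True    # "consensus" mask
seg = np.zeros((32, 32), bool); seg[10:24, 8:26] = True   # algorithm output
print("DSC %.2f, PPV %.2f, Sen %.2f" % overlap_metrics(seg, ref))
```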

  4. Intracranial aneurysm segmentation in 3D CT angiography: Method and quantitative validation with and without prior noise filtering

    Energy Technology Data Exchange (ETDEWEB)

    Firouzian, Azadeh, E-mail: a.firouzian@erasmusmc.nl [Department of Medical Informatics, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Department of Radiology, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Manniesing, Rashindra, E-mail: r.manniesing@erasmusmc.nl [Department of Medical Informatics, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Department of Radiology, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Flach, Zwenneke H., E-mail: zwenneke.flach@gmail.com [Department of Radiology, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Risselada, Roelof, E-mail: r.risselada@erasmusmc.nl [Department of Medical Informatics, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Kooten, Fop van, E-mail: f.vankooten@erasmusmc.nl [Department of Neurology, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Sturkenboom, Miriam C.J.M., E-mail: m.sturkenboom@erasmusmc.nl [Department of Medical Informatics, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Department of Epidemiology, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Lugt, Aad van der, E-mail: a.vanderlugt@erasmusmc.nl [Department of Radiology, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Niessen, Wiro J., E-mail: w.niessen@erasmusmc.nl [Department of Medical Informatics, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Department of Radiology, Erasmus MC, University Medical Centre Rotterdam (Netherlands); Department of Imaging Science and Technology, Faculty of Applied Sciences, Delft University of Technology (Netherlands)

    2011-08-15

    Intracranial aneurysm volume and shape are important factors for predicting rupture risk, for pre-surgical planning and for follow-up studies. To obtain these parameters, manual segmentation can be employed; however, this is a tedious procedure, which is prone to inter- and intra-observer variability. Therefore there is a need for an automated method which is accurate, reproducible and reliable. This study aims to develop and validate an automated method for segmenting intracranial aneurysms in Computed Tomography Angiography (CTA) data. It is also investigated whether prior smoothing improves segmentation robustness and accuracy. The proposed segmentation method is implemented in the level set framework, more specifically Geodesic Active Surfaces, in which a surface is evolved to capture the aneurysmal wall via an energy minimization approach. The energy term is composed of three different image features, namely intensity, gradient magnitude and intensity variance. The method requires minimal user interaction, i.e. a single seed point inside the aneurysm needs to be placed, based on which image intensity statistics of the aneurysm are derived and used in defining the energy term. The method has been evaluated on 15 aneurysms in 11 CTA data sets by comparing the results to manual segmentations performed by two expert radiologists. Evaluation measures were Similarity Index, Average Surface Distance and Volume Difference. The results show that the automated aneurysm segmentation method is reproducible, and performs in the range of inter-observer variability in terms of accuracy. Smoothing by nonlinear diffusion with appropriate parameter settings prior to segmentation slightly improves segmentation accuracy.
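
    The level-set family used here can be demonstrated with scikit-image's morphological variant of geodesic active contours, which likewise grows a surface from a single seed toward intensity edges. The sketch below runs on a synthetic bright sphere; the preprocessing constants, seed and balloon settings are illustrative guesses, and this is not the authors' CTA pipeline (whose energy also folds in intensity-variance statistics).

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

rng = np.random.default_rng(3)

# Synthetic 3-D "CTA" volume: a bright aneurysm-like sphere in noise
zz, yy, xx = np.mgrid[:48, :48, :48]
vol = 1.0 * ((zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 < 10 ** 2)
vol += rng.normal(0.0, 0.15, vol.shape)

# Edge-stopping image and a small ball around a user-chosen seed point
gimage = inverse_gaussian_gradient(vol, alpha=100.0, sigma=2.0)
seed = (24, 24, 24)
init = (((zz - seed[0]) ** 2 + (yy - seed[1]) ** 2
         + (xx - seed[2]) ** 2) < 3 ** 2).astype(np.int8)

# Inflate the surface (balloon > 0) until it locks onto the edges
mask = morphological_geodesic_active_contour(
    gimage, 60, init_level_set=init, smoothing=1, balloon=1, threshold=0.6)
print("segmented voxels:", int(mask.sum()))
```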

  5. Space and ground segment performance of the FORMOSAT-3/COSMIC mission: four years in orbit

    Directory of Open Access Journals (Sweden)

    C.-J. Fong

    2011-01-01

    The FORMOSAT-3/COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) mission, consisting of six Low-Earth-Orbit (LEO) satellites, is the world's first demonstration constellation using radio occultation signals from Global Positioning System (GPS) satellites. The radio occultation signals are retrieved in near real-time for global weather/climate monitoring, numerical weather prediction, and space weather research. The mission has processed on average 1400 to 1800 high-quality atmospheric sounding profiles per day. The atmospheric radio occultation soundings are assimilated into operational numerical weather prediction models for global weather prediction, including typhoon/hurricane/cyclone forecasts. The radio occultation data have shown a positive impact on weather predictions at many national weather forecast centers. A proposed follow-on mission transitions the program from the current experimental research system to a significantly improved real-time operational system, which will reliably provide 8000 radio occultation soundings per day. The follow-on mission as planned will consist of 12 satellites with a data latency of 45 min, which will provide greatly enhanced opportunities for operational forecasts and scientific research. This paper addresses the FORMOSAT-3/COSMIC system and mission overview, the spacecraft and ground system performance after four years in orbit, the lessons learned from the encountered technical challenges and observations, and the expected design improvements for the new spacecraft and ground system.

  6. The validity of vertebral translation and rotation in differentiating patients with lumbar segmental instability.

    Science.gov (United States)

    Taghipour-Darzi, Mohammad; Takamjani, Esmail Ebrahimi; Salavati, Mahyar; Mobini, Bahram; Zekavat, Hajar

    2012-12-01

    Lumbar segmental instability (LSI) is a sub-group of non-specific low back pain (NSLBP) without any accepted diagnostic tool as a gold standard. Some authors emphasize clinical findings, and others focus on vertebral translation and rotation, but the construct validity of these measures had not been established. Therefore, the purpose of this study was to evaluate the convergent and known-group validity of vertebral translation and rotation in differentiating LSI from NSLBP and control subjects. Study variables included full-range and mid-range vertebral translation and rotation in the sagittal plane. Five x-rays were taken in the neutral, full flexion, full extension, mid-flexion and mid-extension positions of the lumbar spine. The variables were calculated using the Computer Aided Radiographic Analysis of Spine (CARA) software after scanning. Sixty-six male volunteers participated in three groups. Twenty-two subjects were in the control group, and 44 NSLBP subjects were divided into LSI and not-LSI groups according to the criteria adopted by Hicks et al. ANOVA and the Tukey test were used in the statistical analysis. The ANOVA results demonstrated that differences among the three groups for full-range translation and rotation were not significant. However, ANOVA demonstrated a significant difference in L4-5 mid-range translation and rotation (p < 0.05). The Tukey test showed a significant difference in L4-5 mid-range translation between the control (2.14 mm) and LSI (1.33 mm) groups (p < 0.05). For L4-5 mid-range rotation, the Tukey test demonstrated significant differences between the control (14.18°) and LSI (11.65°) groups (p < 0.05) and between the control and not-LSI (10.80°) groups (p < 0.05). On the basis of these results, full-range translation and rotation cannot differentiate LSI from not-LSI and control groups. Moreover, mid-range translation only differentiates control from LSI, whereas mid-range rotation differentiates control from both LSI and not-LSI.
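
    The omnibus and post-hoc tests used above are standard one-way ANOVA followed by Tukey's HSD; a minimal sketch with invented mid-range translation values for the three groups:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical L4-5 mid-range translation (mm) per group
control = np.array([2.3, 2.0, 2.1, 2.4, 1.9])
lsi = np.array([1.4, 1.2, 1.5, 1.3, 1.2])
not_lsi = np.array([1.9, 1.7, 2.0, 1.8, 1.6])

print(f_oneway(control, lsi, not_lsi))                # omnibus F test
values = np.concatenate([control, lsi, not_lsi])
groups = ["control"] * 5 + ["LSI"] * 5 + ["not LSI"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```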

  7. Validation of space/ground antenna control algorithms using a computer-aided design tool

    Science.gov (United States)

    Gantenbein, Rex E.

    1995-01-01

    The validation of the algorithms for controlling the space-to-ground antenna subsystem for Space Station Alpha is an important step in assuring reliable communications. These algorithms have been developed and tested using a simulation environment based on a computer-aided design tool that can provide a time-based execution framework with variable environmental parameters. Our work this summer has involved the exploration of this environment and the documentation of the procedures used to validate these algorithms. We have installed a variety of tools in a laboratory of the Tracking and Communications division for reproducing the simulation experiments carried out on these algorithms to verify that they do meet their requirements for controlling the antenna systems. In this report, we describe the processes used in these simulations and our work in validating the tests used.

  8. Validation of a training method for L2 continuous-speech segmentation

    NARCIS (Netherlands)

    Cutler, A.; Shanley, J.

    2010-01-01

    Recognising continuous speech in a second language is often unexpectedly difficult, as the operation of segmenting speech is so attuned to native-language structure. We report the initial steps in the development of a novel training method for second-language listening, focusing on speech segmentation an

  9. Climatological Processing and Product Development for the TRMM Ground Validation Program

    Science.gov (United States)

    Marks, D. A.; Kulie, M. S.; Robinson, M.; Silberstein, D. S.; Wolff, D. B.; Ferrier, B. S.; Amitai, E.; Fisher, B.; Wang, J.; Augustine, D.; Thiele, O.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The Tropical Rainfall Measuring Mission (TRMM) satellite was successfully launched in November 1997. The main purpose of TRMM is to sample tropical rainfall using the first active spaceborne precipitation radar. To validate TRMM satellite observations, a comprehensive Ground Validation (GV) Program has been implemented. The primary goal of TRMM GV is to provide basic validation of satellite-derived precipitation measurements over monthly climatologies for the following primary sites: Melbourne, FL; Houston, TX; Darwin, Australia; and Kwajalein Atoll, RMI. As part of the TRMM GV effort, research analysts at NASA Goddard Space Flight Center (GSFC) generate standardized rainfall products using quality-controlled ground-based radar data from the four primary GV sites. This presentation will provide an overview of TRMM GV climatological processing and product generation. A description of the data flow between the primary GV sites, NASA GSFC, and the TRMM Science and Data Information System (TSDIS) will be presented. The radar quality control algorithm, which features eight adjustable height and reflectivity parameters, and its effect on monthly rainfall maps will be described. The methodology used to create monthly, gauge-adjusted rainfall products for each primary site will also be summarized. The standardized monthly rainfall products are developed in discrete, modular steps with distinct intermediate products. A summary of recently reprocessed official GV rainfall products available for TRMM science users will be presented. Updated basic standardized product results involving monthly accumulation, Z-R relationships, and gauge statistics for each primary GV site will also be displayed.
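
    The reflectivity-to-rain-rate step behind such radar rainfall products is a power-law Z-R relationship, Z = aR^b. A minimal sketch of the conversion, using a common convective coefficient pair rather than the site-specific TRMM GV relationships:

```python
import numpy as np

def zr_rain_rate(dbz, a=300.0, b=1.4):
    """Convert reflectivity (dBZ) to rain rate (mm/h) via Z = a * R**b.
    a=300, b=1.4 is a common convective pair, assumed here for illustration."""
    z = 10.0 ** (np.asarray(dbz, float) / 10.0)   # dBZ -> Z in mm^6/m^3
    return (z / a) ** (1.0 / b)

print(np.round(zr_rain_rate([20, 30, 40, 50]), 2))   # mm/h
```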

  10. Comparisons of aerosol backscatter using satellite and ground lidars: implications for calibrating and validating spaceborne lidar

    Science.gov (United States)

    Gimmestad, Gary; Forrister, Haviland; Grigas, Tomas; O’Dowd, Colin

    2017-01-01

    The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) instrument on the polar orbiter Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) is an elastic backscatter lidar that produces a global uniformly-calibrated aerosol data set. Several Calibration/Validation (Cal/Val) studies for CALIOP conducted with ground-based lidars and CALIOP data showed large aerosol profile disagreements, both random and systematic. In an attempt to better understand these problems, we undertook a series of ground-based lidar measurements in Atlanta, Georgia, which did not provide better agreement with CALIOP data than the earlier efforts, but rather prompted us to investigate the statistical limitations of such comparisons. Meaningful Cal/Val requires intercomparison data sets with small enough uncertainties to provide a check on the maximum expected calibration error. For CALIOP total attenuated backscatter, reducing the noise to the required level requires averaging profiles along the ground track for distances of at least 1,500 km. Representative comparison profiles often cannot be acquired with ground-based lidars because spatial aerosol inhomogeneities introduce systematic error into the averages. These conclusions have implications for future satellite lidar Cal/Val efforts, because planned satellite lidars measuring aerosol backscatter, wind vector, and CO2 concentration profiles may all produce data requiring considerable along-track averaging for meaningful Cal/Val. PMID:28198389
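
    The underlying statistical point, that meaningful Cal/Val needs enough along-track averaging to suppress shot noise, can be shown numerically: averaging N independent noisy profiles shrinks the random error roughly as 1/sqrt(N). A toy sketch (profile shape, noise level and shot count are all invented):

```python
import numpy as np

rng = np.random.default_rng(2)

true_profile = np.linspace(2.0, 0.2, 100)    # idealized backscatter profile
n_shots = 4500                               # stand-in for a long track segment
shots = true_profile + rng.normal(0.0, 5.0, (n_shots, true_profile.size))

for n in (1, 100, n_shots):
    err = np.abs(shots[:n].mean(axis=0) - true_profile).mean()
    print(f"averaging {n:5d} profiles -> mean abs error {err:.3f}")
```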

  13. Validation of the Accuracy and Reliability of Culturing Intravascular Catheter Segments

    Science.gov (United States)

    1992-11-24

    catheters over guidewire using the Seldinger technique, bedside plating of catheter segments and preparation of segments for transport to the...physician(s) responsible for the patient's care, using strict aseptic technique. Sterile gowns and gloves, sterile barriers and caps were required...CULTURES: Catheter subsegments sent to the lab were cultured using the semiquantitative technique described by Maki. The catheter subsegments were

  14. A comparison of two commercial volumetry software programs in the analysis of pulmonary ground-glass nodules: Segmentation capability and measurement accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyung Jin; Park, Chang Min; Lee, Sang Min; Lee, Hyun Joo; Goo, Jin Mo [Dept. of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul (Korea, Republic of)

    2013-08-15

    To compare the segmentation capability of the 2 currently available commercial volumetry software programs with specific segmentation algorithms for pulmonary ground-glass nodules (GGNs) and to assess their measurement accuracy. In this study, 55 patients with 66 GGNs underwent unenhanced low-dose CT. GGN segmentation was performed by using 2 volumetry software programs (LungCARE, Siemens Healthcare; LungVCAR, GE Healthcare). Successful nodule segmentation was assessed visually and morphologic features of GGNs were evaluated to determine factors affecting segmentation by both types of software. In addition, the measurement accuracy of the software programs was investigated by using an anthropomorphic chest phantom containing simulated GGNs. The successful nodule segmentation rate was significantly higher in LungCARE (90.9%) than in LungVCAR (72.7%) (p = 0.012). Vascular attachment was a negatively influencing morphologic feature of nodule segmentation for both software programs. As for measurement accuracy, mean relative volume measurement errors in nodules ≥ 10 mm were 14.89% with LungCARE and 19.96% with LungVCAR. The mean relative attenuation measurement errors in nodules ≥ 10 mm were 3.03% with LungCARE and 5.12% with LungVCAR. LungCARE shows significantly higher segmentation success rates than LungVCAR. Measurement accuracy of volume and attenuation of GGNs is acceptable in GGNs ≥ 10 mm by both software programs.

  15. Satellite Cloud Data Validation through MAGIC Ground Observation and the S'COOL Project: Scientific Benefits grounded in Citizen Science

    Science.gov (United States)

    Crecelius, S.; Chambers, L. H.; Lewis, P. M.; Rogerson, T.

    2013-12-01

    The Students' Cloud Observations On-Line (S'COOL) Project was launched in 1997 as the Formal Education and Public Outreach arm of the Clouds and the Earth's Radiant Energy System (CERES) Mission. ROVER, the Citizen Scientist area of S'COOL, started in 2007 and allows participants to make 'roving' observations from any location, as opposed to a fixed, registered classroom. The S'COOL Project aids the CERES Mission in trying to answer the research question: 'What is the effect of clouds on the Earth's climate?' Participants from all 50 states, most U.S. Territories, and 63 countries have reported more than 100,500 observations to the S'COOL Project over the past 16 years. The Project is supported by an intuitive website that provides curriculum support and guidance through the observation steps: 1) request the satellite overpass schedule, 2) observe clouds, and 3) report cloud observations. The S'COOL website also hosts a robust database housing all participants' observations as well as the matching satellite data. While the S'COOL observation parameters are based on the data collected by 5 satellite missions, ground observations provide a unique perspective for data validation. Specifically, low- to mid-level clouds can be obscured by overcast high-level clouds, or difficult to observe from a satellite's perspective due to surface cover or albedo. In these cases, ground observations play an important role in filling the data gaps and providing a better global picture of our atmosphere and clouds. S'COOL participants, operating within the boundary layer, have an advantage when observing low-level clouds that affect the area we live in, regional weather patterns, and climate change. S'COOL's long-term data set provides a valuable resource to the scientific community in improving the 'poorly characterized and poorly represented [clouds] in climate and weather prediction models'. The MAGIC Team contacted S'COOL in early 2012 about making cloud observations as part of the MAGIC

  16. Validation of Macular Choroidal Thickness Measurements from Automated SD-OCT Image Segmentation.

    Science.gov (United States)

    Twa, Michael D; Schulle, Krystal L; Chiu, Stephanie J; Farsiu, Sina; Berntsen, David A

    2016-11-01

    Spectral domain optical coherence tomography (SD-OCT) imaging permits in vivo visualization of the choroid with micron-level resolution over wide areas and is of interest for studies of ocular growth and myopia control. We evaluated the speed, repeatability, and accuracy of a new image segmentation method to quantify choroid thickness compared to manual segmentation. Two macular volumetric scans (25 × 30°) were taken from 30 eyes of 30 young adult subjects in two sessions, 1 hour apart. A single rater manually delineated choroid thickness as the distance between Bruch's membrane and sclera across three B-scans (foveal, inferior, and superior-most scan locations). Manual segmentation was compared to an automated method based on graph theory, dynamic programming, and wavelet-based texture analysis. Segmentation performance comparisons included processing speed, choroid thickness measurements across the foveal horizontal midline, and measurement repeatability (95% limits of agreement, LoA). Subjects were healthy young adults (n = 30; 24 ± 2 years; mean ± SD; 63% female) with spherical equivalent refractive error of -3.46 ± 2.69 D (range: +2.62 to -8.50 D). Manual segmentation took 200 times longer than automated segmentation (780 vs. 4 seconds). Mean choroid thickness at the foveal center was 263 ± 24 μm (manual) and 259 ± 23 μm (automated), and this difference was not significant (p = 0.10). Regional segmentation errors across the foveal horizontal midline (±15°) were ≤9 μm (median) except for the nasal-most regions closest to the nasal peripapillary margin: 15 degrees (19 μm) and 12 degrees (16 μm) from the foveal center. Repeatability of choroidal thickness measurements was similar between segmentation methods (manual LoA: ±15 μm; automated LoA: ±14 μm). Automated segmentation of SD-OCT data by graph theory and dynamic programming is a fast, accurate, and reliable method to delineate the choroid. This approach will facilitate
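
    The 95% limits of agreement quoted above follow the usual Bland-Altman convention, mean difference ± 1.96 × SD of the paired differences. Below is a minimal sketch of that computation; the thickness values are invented for illustration, not taken from the study.

        import numpy as np

        def limits_of_agreement(a, b):
            """95% Bland-Altman limits of agreement for paired measurements."""
            d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
            half_width = 1.96 * d.std(ddof=1)   # sample SD of the differences
            return d.mean() - half_width, d.mean() + half_width

        # Foveal choroid thickness (micrometers) from two sessions, one method:
        session1 = np.array([263.0, 258.0, 270.0, 249.0, 266.0])
        session2 = np.array([260.0, 262.0, 268.0, 252.0, 261.0])
        print(limits_of_agreement(session1, session2))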

  17. Analysis of a kinetic multi-segment foot model. Part I: Model repeatability and kinematic validity.

    Science.gov (United States)

    Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L

    2012-04-01

    Kinematic multi-segment foot models are still evolving, but have seen increased use in clinical and research settings. The addition of kinetics may increase knowledge of foot and ankle function as well as influence multi-segment foot model evolution; however, previous kinetic models are too complex for clinical use. In this study we present a three-segment kinetic foot model and a thorough evaluation of model performance during normal gait. In this first of two companion papers, model reference frames and joint centers are analyzed for repeatability, joint translations are measured, segment rigidity is characterized, and sample joint angles are presented. Within-tester and between-tester repeatability were first assessed using 10 healthy pediatric participants, while kinematic parameters were subsequently measured on 17 additional healthy pediatric participants. Repeatability errors were generally low for all sagittal plane measures as well as for transverse plane Hindfoot and Forefoot segments. Segment rigidity analysis suggested rigid body behavior for the Shank and Hindfoot, with the Forefoot violating the rigid body assumptions in terminal stance/pre-swing. Joint excursions were consistent with previously published studies.

  18. A Ground-Based Validation System of Teleoperation for a Space Robot

    Directory of Open Access Journals (Sweden)

    Xueqian Wang

    2012-10-01

    Full Text Available Teleoperation of space robots is very important for future on-orbit service. In order to ensure that tasks are accomplished successfully, ground experiments are required to verify the function and validity of the teleoperation system before a space robot is launched. In this paper, a ground-based validation subsystem is developed as a part of a teleoperation system. The subsystem is mainly composed of four parts: the input verification module, the onboard verification module, the dynamic and image workstation, and the communication simulator. The input verification module, consisting of the hardware and software of the master, is used to verify the input ability. The onboard verification module, consisting of the same hardware and software as the onboard processor, is used to verify the processor's computing ability and execution schedule. In addition, the dynamic and image workstation calculates the dynamic response of the space robot and target, and generates emulated camera images, including the hand-eye cameras, global-vision camera and rendezvous camera. The communication simulator provides realistic communication conditions, i.e., time delays and communication bandwidth. Lastly, we integrated a teleoperation system and conducted many experiments on the system. Experimental results show that the ground system is very useful for verifying teleoperation technology.

  19. VERIFICATION & VALIDATION OF A SEMANTIC IMAGE TAGGING FRAMEWORK VIA GENERATION OF GEOSPATIAL IMAGERY GROUND TRUTH

    Energy Technology Data Exchange (ETDEWEB)

    Gleason, Shaun Scott [ORNL; Ferrell, Regina Kay [ORNL; Cheriyadat, Anil M [ORNL; Vatsavai, Raju [ORNL; Sari-Sarraf, Hamed [ORNL; Dema, Mesfin A [ORNL

    2011-01-01

    As a result of increasing geospatial image libraries, many algorithms are being developed to automatically extract and classify regions of interest from these images. However, limited work has been done to compare, validate and verify these algorithms due to the lack of datasets with high-accuracy ground truth annotations. In this paper, we present an approach to generate a large number of synthetic images accompanied by perfect ground truth annotation via learning scene statistics from a few training images through Maximum Entropy (ME) modeling. The ME model [1,2] embeds a Stochastic Context Free Grammar (SCFG) to model object attribute variations, combined with Markov Random Fields (MRF), with the final goal of modeling contextual relations between objects. Using this model, 3D scenes are generated by configuring a 3D object model to obey the learned scene statistics. Finally, these plausible 3D scenes are captured by ray tracing software to produce synthetic images with the corresponding ground truth annotations that are useful for evaluating the performance of a variety of image analysis algorithms.

  20. Testing alternative ground water models using cross-validation and other methods

    Science.gov (United States)

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
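
    The information criteria named above have closed forms. The sketch below ranks hypothetical alternative models with AICc and BIC in their common least-squares forms; the model names, parameter counts, and fit values are invented for illustration and are not from the study.

        import numpy as np

        def aicc(n, k, sse):
            """Corrected Akaike information criterion, least-squares form:
            AIC = n*ln(SSE/n) + 2k, plus the small-sample correction term."""
            aic = n * np.log(sse / n) + 2 * k
            return aic + 2 * k * (k + 1) / (n - k - 1)

        def bic(n, k, sse):
            """Bayesian information criterion, least-squares form."""
            return n * np.log(sse / n) + k * np.log(n)

        # Hypothetical alternative models: (number of parameters, weighted SSE)
        models = {"uniform_K": (3, 12.4), "zoned_K": (6, 7.9), "interpolated_K": (9, 7.1)}
        n_obs = 100
        for name, (k, sse) in models.items():   # lower values rank better
            print(f"{name:15s} AICc={aicc(n_obs, k, sse):7.2f} BIC={bic(n_obs, k, sse):7.2f}")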

  1. How Perception Guides Action: Figure-Ground Segmentation Modulates Integration of Context Features into S-R Episodes.

    Science.gov (United States)

    Frings, Christian; Rothermund, Klaus

    2017-03-23

    Perception and action are closely related. Responses are assumed to be represented in terms of their perceptual effects, allowing direct links between action and perception. In this regard, the integration of features of stimuli (S) and responses (R) into S-R bindings is a key mechanism for action control. Previous research focused on the integration of object features with response features while neglecting the context in which an object is perceived. In 3 experiments, we analyzed whether contextual features can also become integrated into S-R episodes. The data showed that a fundamental principle of visual perception, figure-ground segmentation, modulates the binding of contextual features. Only features belonging to the figure region of a context but not features forming the background were integrated with responses into S-R episodes, retrieval of which later on had an impact upon behavior. Our findings suggest that perception guides the selection of context features for integration with responses into S-R episodes. Results of our study have wide-ranging implications for an understanding of context effects in learning and behavior.

  2. Modeling short wave radiation and ground surface temperature: a validation experiment in the Western Alps

    Science.gov (United States)

    Pogliotti, P.; Cremonese, E.; Dallamico, M.; Gruber, S.; Migliavacca, M.; Morra di Cella, U.

    2009-12-01

    Permafrost distribution in high-mountain areas is influenced by topography (micro-climate) and by the high variability of ground cover conditions. Its monitoring is very difficult due to logistical problems like accessibility, costs, weather conditions and reliability of instrumentation. For these reasons physically-based modeling of surface rock/ground temperatures (GST) is fundamental for the study of mountain permafrost dynamics. With this awareness, a 1D version of the GEOtop model (www.geotop.org) is tested on several high-mountain sites and its accuracy in reproducing GST and incoming short wave radiation (SWin) is evaluated using independent field measurements. In order to describe the influence of topography, both flat and near-vertical sites with different aspects are considered. Since the validation of SWin is difficult on steep rock faces (due to the lack of direct measurements) and the validation of GST is difficult on flat sites (due to the presence of snow), the two parameters are validated as independent experiments: SWin only on flat morphologies, GST only on steep ones. The main purpose is to investigate the effect of: (i) the distance between the driving meteo station location and the simulation point location, (ii) cloudiness, (iii) simulation point aspect, (iv) winter/summer period. The temporal duration of model runs varies from 3 years for the SWin experiment to 8 years for the validation of GST. The model parameterization is constant and tuned for a common massive bedrock of crystalline rock like granite. The ground temperature profile is not initialized because rock temperature is measured at only 10 cm depth. A set of 9 performance measures is used for comparing model predictions and observations (including fractional mean bias (FB), coefficient of residual mass (CMR), mean absolute error (MAE), modelling efficiency (ME), and coefficient of determination (R2)). Results are very encouraging. For both experiments the distance (km) between the location of the driving meteo
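
    Most of the performance measures listed have standard definitions. The sketch below computes several of them under common conventions (Nash-Sutcliffe form for modelling efficiency, squared correlation for R2); these conventions may differ in detail from the exact formulas used in the study, and the temperature values are invented.

        import numpy as np

        def metrics(obs, pred):
            """A few common agreement measures between observations and predictions."""
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            resid = obs - pred
            mae = np.mean(np.abs(resid))                                 # mean absolute error
            fb = (pred.mean() - obs.mean()) / (0.5 * (pred.mean() + obs.mean()))  # fractional mean bias
            cmr = resid.sum() / obs.sum()                                # coefficient of residual mass
            me = 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)  # modelling efficiency
            r2 = np.corrcoef(obs, pred)[0, 1] ** 2                       # coefficient of determination
            return dict(MAE=mae, FB=fb, CMR=cmr, ME=me, R2=r2)

        # Example with synthetic ground surface temperatures (degrees C):
        obs = np.array([1.2, 0.8, -0.5, 2.1, 3.3, 2.8])
        pred = np.array([1.0, 1.1, -0.2, 2.4, 3.0, 2.5])
        print(metrics(obs, pred))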

  3. Gebiss: an ImageJ plugin for the specification of ground truth and the performance evaluation of 3d segmentation algorithms

    Directory of Open Access Journals (Sweden)

    Yee Kwo

    2011-06-01

    Full Text Available Background: Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissues, cells or subcellular compartments. Depending on the degree of user interaction, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches. Scenarios include the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods. Results: We have developed Gebiss, an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation. Conclusions: We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss/.

  4. Multi-segment and multi-ply overlapping process of multi coupled activities based on valid information evolution

    Science.gov (United States)

    Wang, Zhiliang; Wang, Yunxia; Qiu, Shenghai

    2013-01-01

    Complex product development inevitably involves design planning of multiple coupled activities, and overlapping these activities can potentially reduce product development time, but at the risk of additional cost. Although current research already considers the downstream task's information dependence on the upstream task, the overall design-process iteration caused by information interdependence between activities is hardly discussed, especially the impact of the valid-information accumulation process on that overall iteration. Moreover, most studies focus only on the single overlapping process of two activities and rarely take the multi-segment and multi-ply overlapping process of multiple coupled activities into account, especially the inherent link between product development time and cost that originates from such overlapping. To solve the above problems, and to address the insufficient accumulation of valid information in the overlapping process, a function of the valid information evolution (VIE) degree is constructed. Stochastic process theory is used to describe the design information exchange and the valid information accumulation in the overlapping segment, and planning models of the single overlapping segment are built. On this basis, by analyzing the overlapping processes and overlapping features of multi-coupled activities, multi-segment and multi-ply overlapping planning models are built. By sorting the overlapping processes and analyzing the construction of these planning models, two conclusions are obtained: (1) for multi-segment and multi-ply overlapping of multiple coupled activities, the total decrement of the task-set development time is the sum of the time decrements caused by basic overlapping segments, minus the sum of the time increments caused by multiple overlapping segments; (2) the total increment of development cost is the sum of the cost

  5. FUZZY CLUSTERWISE REGRESSION IN BENEFIT SEGMENTATION - APPLICATION AND INVESTIGATION INTO ITS VALIDITY

    NARCIS (Netherlands)

    STEENKAMP, JBEM; WEDEL, M

    1993-01-01

    This article describes a new technique for benefit segmentation, fuzzy clusterwise regression analysis (FCR). It combines clustering with prediction and is based on multiattribute models of consumer behavior. FCR is especially useful when the number of observations per subject is small, when the rel

  6. Hippocampal unified multi-atlas network (HUMAN): protocol and scale validation of a novel segmentation tool

    Science.gov (United States)

    Amoroso, N.; Errico, R.; Bruno, S.; Chincarini, A.; Garuccio, E.; Sensi, F.; Tangaro, S.; Tateo, A.; Bellotti, R.; the Alzheimer's Disease Neuroimaging Initiative

    2015-11-01

    In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer's Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice = 0.929 ± 0.003 on ADNI and Dice = 0.869 ± 0.002 on OASIS). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
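
    The propagation-and-fusion step in multi-atlas segmentation is often illustrated with a simple voxel-wise majority vote, and accuracy with the Dice coefficient reported above. Below is a minimal sketch of both; HUMAN's actual fusion uses trained neural networks, so the vote here is a simplified stand-in, and the masks are random toy data.

        import numpy as np

        def majority_vote(masks):
            """Fuse binary masks propagated from several atlases, shape (n, X, Y, Z):
            a voxel is foreground when more than half of the atlases agree."""
            return (masks.sum(axis=0) > masks.shape[0] / 2).astype(np.uint8)

        def dice(a, b):
            """Dice similarity coefficient between two binary masks."""
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        rng = np.random.default_rng(0)
        masks = rng.random((5, 32, 32, 32)) > 0.5   # five toy propagated label maps
        fused = majority_vote(masks)
        print(dice(fused, masks[0]))                # agreement with one atlas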

  7. Satellite Based Soil Moisture Product Validation Using NOAA-CREST Ground and L-Band Observations

    Science.gov (United States)

    Norouzi, H.; Campo, C.; Temimi, M.; Lakhankar, T.; Khanbilvardi, R.

    2015-12-01

    Soil moisture content is among the most important physical parameters in hydrology, climate, and environmental studies. Many microwave-based satellite observations have been utilized to estimate this parameter. The Advanced Microwave Scanning Radiometer 2 (AMSR2) is one of several remote sensors that collect daily information on land surface soil moisture. However, many factors such as ancillary data and vegetation scattering can affect the signal and the estimation. Therefore, this information needs to be validated against "ground-truth" observations. The NOAA Cooperative Remote Sensing Science and Technology (CREST) center at the City University of New York operates a site at Millbrook, NY with several in situ soil moisture probes and an L-Band radiometer similar to the one on the Soil Moisture Active Passive (SMAP) mission. This site is among the SMAP Cal/Val sites. Soil moisture was measured at seven different locations from 2012 to 2015; hydra probes are used at six of these locations. This study utilizes the in situ observations and the L-Band radiometer close to the ground (at 3 m height) to validate and compare soil moisture estimates from AMSR2. Analysis of the measurements and AMSR2 indicated a weak correlation with the hydra probes and a moderate correlation with Cosmic-ray Soil Moisture Observing System (COSMOS) probes. Several factors, including the mismatch between the satellite pixel size and point measurements, can cause these discrepancies. Interpolation techniques are used to expand the point measurements from six locations to the AMSR2 footprint. Finally, the effects of penetration depth on the microwave signal and of inconsistencies in ancillary data such as skin temperature are investigated to provide a better understanding of the analysis. The results show that the retrieval algorithm of AMSR2 is appropriate under certain circumstances. A similar validation study will be conducted for the SMAP mission. Keywords: Remote Sensing, Soil

  8. Validation and modeling of earthquake strong ground motion using a composite source model

    Science.gov (United States)

    Zeng, Y.

    2001-12-01

    Zeng et al. (1994) have proposed a composite source model for synthetic strong ground motion prediction. In that model, the source is taken as a superposition of circular subevents with a constant stress drop. The number of subevents and their radii follow a power law distribution equivalent to the Gutenberg and Richter magnitude-frequency relation for seismicity. The heterogeneous nature of the composite source model is characterized by its maximum subevent size and subevent stress drop. As rupture propagates through each subevent, it radiates a Brune pulse or a Sato and Hirasawa circular crack pulse. The method has proved successful in generating realistic strong motion seismograms in comparison with observations from earthquakes in California, the eastern US, Guerrero in Mexico, Turkey and India. The model has since been improved by including scattered waves from the small-scale heterogeneity structure of the earth, site-specific ground motion prediction using weak motion site amplification, and nonlinear soil response using geotechnical engineering models. Last year, I introduced an asymmetric circular rupture to improve the subevent source radiation and to provide a consistent rupture model between the overall fault rupture process and its subevents. In this study, I revisit the Landers, Loma Prieta, Northridge, Imperial Valley and Kobe earthquakes using the improved source model. The results show that the improved subevent ruptures provide an improved effect of rupture directivity compared to our previous studies. Additional validation includes comparison of synthetic strong ground motions to the observed ground accelerations from the Chi-Chi, Taiwan and Izmit, Turkey earthquakes. Since the method has evolved considerably since it was first proposed, I will also compare results between each major modification of the model and demonstrate its backward compatibility with any of its earlier simulation procedures.
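
    Subevent statistics of the kind described here, a number-size power law consistent with Gutenberg-Richter, can be drawn by inverse-transform sampling. The sketch below assumes a survival function N(>R) proportional to R**(-D) truncated between a minimum radius and the maximum subevent size; the parameter values are illustrative, not from the model's calibrations.

        import numpy as np

        def sample_subevent_radii(n, r_min, r_max, d_exp, rng=None):
            """Draw subevent radii from a truncated power law N(>R) ~ R**(-d_exp)
            via inverse-transform sampling between r_min and r_max."""
            rng = rng or np.random.default_rng()
            u = rng.random(n)
            a, b = r_min ** (-d_exp), r_max ** (-d_exp)
            return (a - u * (a - b)) ** (-1.0 / d_exp)

        # Illustrative values only: 500 subevents, radii 0.1-5 km, exponent 2
        radii = sample_subevent_radii(500, r_min=0.1, r_max=5.0, d_exp=2.0)
        print(radii.min(), radii.max(), np.median(radii))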

  9. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduces WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relies on contrast: nonlinear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH are then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion loads. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods; k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN, 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN, 0.67-0.72 for SVM), and did not need any training set.

  10. Accuracy Validation of an Automated Method for Prostate Segmentation in Magnetic Resonance Imaging.

    Science.gov (United States)

    Shahedi, Maysam; Cool, Derek W; Bauman, Glenn S; Bastian-Jordan, Matthew; Fenster, Aaron; Ward, Aaron D

    2017-03-24

    Three dimensional (3D) manual segmentation of the prostate on magnetic resonance imaging (MRI) is a laborious and time-consuming task that is subject to inter-observer variability. In this study, we developed a fully automatic segmentation algorithm for T2-weighted endorectal prostate MRI and evaluated its accuracy within different regions of interest using a set of complementary error metrics. Our dataset contained 42 T2-weighted endorectal MRI from prostate cancer patients. The prostate was manually segmented by one observer on all of the images and by two other observers on a subset of 10 images. The algorithm first coarsely localizes the prostate in the image using a template matching technique. Then, it defines the prostate surface using learned shape and appearance information from a set of training images. To evaluate the algorithm, we assessed the error metric values in the context of measured inter-observer variability and compared performance to that of our previously published semi-automatic approach. The automatic algorithm needed an average execution time of ∼60 s to segment the prostate in 3D. When compared to a single-observer reference standard, the automatic algorithm has an average mean absolute distance of 2.8 mm, Dice similarity coefficient of 82%, recall of 82%, precision of 84%, and volume difference of 0.5 cm³ in the mid-gland. Concordant with other studies, accuracy was highest in the mid-gland and lower in the apex and base. Loss of accuracy with respect to the semi-automatic algorithm was less than the measured inter-observer variability in manual segmentation for the same task.

  11. Validation of vertical ground reaction forces on individual limbs calculated from kinematics of horse locomotion.

    Science.gov (United States)

    Bobbert, Maarten F; Gómez Alvarez, Constanza B; van Weeren, P René; Roepstorff, Lars; Weishaupt, Michael A

    2007-06-01

    The purpose of this study was to determine whether individual limb forces could be calculated accurately from kinematics of trotting and walking horses. We collected kinematic data and measured vertical ground reaction forces on the individual limbs of seven Warmblood dressage horses, trotting at 3.4 m s⁻¹ and walking at 1.6 m s⁻¹ on a treadmill. First, using a segmental model, we calculated from kinematics the total ground reaction force vector and its moment arm relative to each of the hoofs. Second, for phases in which the body was supported by only two limbs, we calculated the individual reaction forces on these limbs. Third, we assumed that the distal limbs operated as linear springs, and determined their force-length relationships using calculated individual limb forces at trot. Finally, we calculated individual limb force-time histories from distal limb lengths. A good correspondence was obtained between calculated and measured individual limb forces. At trot, the average peak vertical reaction force on the forelimb was calculated to be 11.5 ± 0.9 N kg⁻¹ and measured to be 11.7 ± 0.9 N kg⁻¹, and for the hindlimb these values were 9.8 ± 0.7 N kg⁻¹ and 10.0 ± 0.6 N kg⁻¹, respectively. At walk, the average peak vertical reaction force on the forelimb was calculated to be 6.9 ± 0.5 N kg⁻¹ and measured to be 7.1 ± 0.3 N kg⁻¹, and for the hindlimb these values were 4.8 ± 0.5 N kg⁻¹ and 4.7 ± 0.3 N kg⁻¹, respectively. It was concluded that the proposed method of calculating individual limb reaction forces is sufficiently accurate to detect changes in loading reported in the literature for mild to moderate lameness at trot.
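
    The third and fourth steps of the method amount to fitting a linear spring F = k*(L0 - L) to the calculated trot forces and then predicting forces from limb length alone. A minimal numeric sketch follows; the force and length values are invented for illustration, not taken from the study.

        import numpy as np

        # Fit a linear spring F = k*(L0 - L) to calculated limb forces (N/kg)
        # and distal limb lengths (m) at trot; values here are illustrative.
        lengths = np.array([0.93, 0.91, 0.90, 0.89, 0.90, 0.92])
        forces = np.array([2.0, 5.5, 7.8, 9.6, 7.5, 3.1])

        slope, intercept = np.polyfit(lengths, forces, 1)
        k = -slope              # spring stiffness (force per unit shortening)
        L0 = intercept / k      # rest length at which the force is zero

        # Predict a force-time history from a measured length-time history;
        # clip at zero because a limb cannot pull on the ground.
        length_history = np.array([0.93, 0.91, 0.89, 0.90, 0.92])
        predicted = np.clip(k * (L0 - length_history), 0.0, None)
        print(k, L0, predicted)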

  12. Validation of automated supervised segmentation of multibeam backscatter data from the Chatham Rise, New Zealand

    Science.gov (United States)

    Hillman, Jess I. T.; Lamarche, Geoffroy; Pallentin, Arne; Pecher, Ingo A.; Gorman, Andrew R.; Schneider von Deimling, Jens

    2017-01-01

    Using automated supervised segmentation of multibeam backscatter data to delineate seafloor substrates is a relatively novel technique. Low-frequency multibeam echosounders (MBES), such as the 12-kHz EM120, present particular difficulties since the signal can penetrate several metres into the seafloor, depending on substrate type. We present a case study illustrating how a non-targeted dataset may be used to derive information from multibeam backscatter data regarding distribution of substrate types. The results allow us to assess limitations associated with low frequency MBES where sub-bottom layering is present, and test the accuracy of automated supervised segmentation performed using SonarScope® software. This is done through comparison of predicted and observed substrate from backscatter facies-derived classes and substrate data, reinforced using quantitative statistical analysis based on a confusion matrix. We use sediment samples, video transects and sub-bottom profiles acquired on the Chatham Rise, east of New Zealand. Inferences on the substrate types are made using the Generic Seafloor Acoustic Backscatter (GSAB) model, and the extents of the backscatter classes are delineated by automated supervised segmentation. Correlating substrate data to backscatter classes revealed that backscatter amplitude may correspond to lithologies up to 4 m below the seafloor. Our results emphasise several issues related to substrate characterisation using backscatter classification, primarily because the GSAB model does not only relate to grain size and roughness properties of substrate, but also accounts for other parameters that influence backscatter. Better understanding these limitations allows us to derive first-order interpretations of sediment properties from automated supervised segmentation.
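
    The quantitative check described above, predicted backscatter class versus observed substrate tallied in a confusion matrix, is commonly summarized by overall accuracy. Below is a generic sketch; the substrate class names and samples are invented for illustration, not the Chatham Rise classes.

        import numpy as np

        def confusion_matrix(observed, predicted, classes):
            """Rows: observed substrate; columns: class predicted from backscatter."""
            idx = {c: i for i, c in enumerate(classes)}
            m = np.zeros((len(classes), len(classes)), dtype=int)
            for o, p in zip(observed, predicted):
                m[idx[o], idx[p]] += 1
            return m

        classes = ["mud", "sand", "gravel"]   # illustrative substrate classes
        observed = ["mud", "mud", "sand", "gravel", "sand", "mud"]
        predicted = ["mud", "sand", "sand", "gravel", "sand", "mud"]
        m = confusion_matrix(observed, predicted, classes)
        overall_accuracy = np.trace(m) / m.sum()
        print(m, overall_accuracy)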

  13. Intracranial aneurysm segmentation in 3D CT angiography: method and quantitative validation

    Science.gov (United States)

    Firouzian, Azadeh; Manniesing, R.; Flach, Z. H.; Risselada, R.; van Kooten, F.; Sturkenboom, M. C. J. M.; van der Lugt, A.; Niessen, W. J.

    2010-03-01

    Accurately quantifying aneurysm shape parameters is of clinical importance, as it is an important factor in choosing the right treatment modality (i.e. coiling or clipping), in predicting rupture risk and operative risk and for pre-surgical planning. The first step in aneurysm quantification is to segment it from other structures that are present in the image. As manual segmentation is a tedious procedure and prone to inter- and intra-observer variability, there is a need for an automated method which is accurate and reproducible. In this paper a novel semi-automated method for segmenting aneurysms in Computed Tomography Angiography (CTA) data based on Geodesic Active Contours is presented and quantitatively evaluated. Three different image features are used to steer the level set to the boundary of the aneurysm, namely intensity, gradient magnitude and variance in intensity. The method requires minimum user interaction, i.e. clicking a single seed point inside the aneurysm which is used to estimate the vessel intensity distribution and to initialize the level set. The results show that the developed method is reproducible, and performs in the range of interobserver variability in terms of accuracy.

  14. Validation of MOPITT carbon monoxide using ground-based Fourier transform infrared spectrometer data from NDACC

    Science.gov (United States)

    Buchholz, Rebecca R.; Deeter, Merritt N.; Worden, Helen M.; Gille, John; Edwards, David P.; Hannigan, James W.; Jones, Nicholas B.; Paton-Walsh, Clare; Griffith, David W. T.; Smale, Dan; Robinson, John; Strong, Kimberly; Conway, Stephanie; Sussmann, Ralf; Hase, Frank; Blumenstock, Thomas; Mahieu, Emmanuel; Langerock, Bavo

    2017-06-01

    The Measurements of Pollution in the Troposphere (MOPITT) satellite instrument provides the longest continuous dataset of carbon monoxide (CO) from space. We perform the first validation of MOPITT version 6 retrievals using total column CO measurements from ground-based remote-sensing Fourier transform infrared spectrometers (FTSs). Validation uses data recorded at 14 stations of the Network for the Detection of Atmospheric Composition Change (NDACC) that span a wide range of latitudes (80° N to 78° S). MOPITT measurements are spatially co-located with each station, and different vertical sensitivities between instruments are accounted for by using MOPITT averaging kernels (AKs). All three MOPITT retrieval types are analyzed: thermal infrared (TIR-only), joint thermal and near infrared (TIR-NIR), and near infrared (NIR-only). Generally, MOPITT measurements overestimate CO relative to FTS measurements, but the bias is typically less than 10%. Mean bias is 2.4% for TIR-only, 5.1% for TIR-NIR, and 6.5% for NIR-only. The TIR-NIR and NIR-only products consistently produce a larger bias and lower correlation than the TIR-only. Validation performance of MOPITT for TIR-only and TIR-NIR retrievals over land or water scenes is equivalent. The four MOPITT detector element pixels are validated separately to account for their different uncertainty characteristics. Pixel 1 produces the highest standard deviation and lowest correlation for all three MOPITT products. However, for TIR-only and TIR-NIR, the error-weighted average that includes all four pixels often provides the best correlation, indicating compensating pixel biases and well-captured error characteristics. We find that MOPITT bias does not depend on latitude but rather is influenced by proximity to rapidly changing atmospheric CO. MOPITT bias drift is bounded geographically to within ±0.5% yr⁻¹ or lower at almost all locations.
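
    Accounting for different vertical sensitivities with averaging kernels typically uses the standard smoothing relation x_sm = x_a + A(x - x_a), applied to the FTS profile before columns are compared. The sketch below is schematic only: the grid, kernel, and profiles are toy values, and details of the real MOPITT processing (e.g., retrievals in log-VMR space) are omitted.

        import numpy as np

        n = 10                                        # retrieval levels (toy grid)
        x_a = np.full(n, 80.0)                        # a priori CO profile, ppbv
        x_fts = x_a + np.linspace(25.0, -5.0, n)      # FTS profile on the same grid
        A = 0.6 * np.eye(n)                           # toy averaging kernel matrix

        # Smoothing relation: what the satellite would see for the FTS profile.
        x_smoothed = x_a + A @ (x_fts - x_a)

        # Compare total columns with the same column operator (toy equal weights),
        # against a toy satellite-retrieved profile.
        h = np.full(n, 1.0 / n)
        x_sat = x_a + np.linspace(15.0, 0.0, n)
        bias_percent = 100.0 * (h @ x_sat - h @ x_smoothed) / (h @ x_smoothed)
        print(bias_percent)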

  15. Flight validation of ground-based assessment for control power requirements at high angles of attack

    Science.gov (United States)

    Ogburn, Marilyn E.; Ross, Holly M.; Foster, John V.; Pahle, Joseph W.; Sternberg, Charles A.; Traven, Ricardo; Lackey, James B.; Abbott, Troy D.

    1994-01-01

    A review is presented in viewgraph format of an ongoing NASA/U.S. Navy study to determine control power requirements at high angles of attack for the next generation high-performance aircraft. This paper focuses on recent flight test activities using the NASA High Alpha Research Vehicle (HARV), which are intended to validate results of previous ground-based simulation studies. The purpose of this study is discussed, and the overall program structure, approach, and objectives are described. Results from two areas of investigation are presented: (1) nose-down control power requirements and (2) lateral-directional control power requirements. Selected results which illustrate issues and challenges that are being addressed in the study are discussed including test methodology, comparisons between simulation and flight, and general lessons learned.

  16. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    Science.gov (United States)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps, and it is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets, not to develop them on a unique dataset whose particularities could influence the development. Indoor point clouds of different types of buildings are used as input for the developed algorithms, ranging from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and contain numerous occluding objects, such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results are illustrated. The analysis of the results provides an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.


  18. Monitoring Ground Subsidence in Hong Kong via Spaceborne Radar: Experiments and Validation

    Directory of Open Access Journals (Sweden)

    Yuxiao Qin

    2015-08-01

    Full Text Available The persistent scatterer interferometry (PSI) technique is gradually becoming known for its capability of providing up to millimeter accuracy of measurement on ground displacement. Nevertheless, there is still considerable doubt regarding its correctness and accuracy. In this paper, we carried out an experiment corroborating the capability of the PSI technique with the help of a traditional survey method in the urban area of Hong Kong, China. Seventy-three TerraSAR-X (TSX) and TanDEM-X (TDX) images spanning over four years are used for the data processing. There are three aims of this study. The first is to generate a displacement map of urban Hong Kong and to check for spots with possible ground movements; this information will be provided to local surveyors so that they can check these specific locations. The second is to validate whether the accuracy of the PSI technique can indeed reach the millimeter level in this real application scenario. For validating the accuracy of PSI, four corner reflectors (CRs) were installed at a construction site on reclaimed land in Hong Kong. They were manually moved up or down by a few to tens of millimeters, and the values derived from the PSI analysis were compared to the true values. The experiment, carried out in non-ideal conditions, nevertheless proved that millimeter accuracy can be achieved by the PSI technique. The last is to evaluate the advantages and limitations of the PSI technique. Overall, the PSI technique can be extremely useful if used in collaboration with other techniques, so that its advantages can be highlighted and its drawbacks avoided.

  19. Modelling floor heating systems using a validated two-dimensional ground coupled numerical model

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Kragh, Jesper; Roots, Peter

    2005-01-01

    This paper presents a two-dimensional simulation model of the heat losses and temperatures in a slab-on-grade floor with floor heating which is able to dynamically model the floor heating system. The aim of this work is to be able to model, in detail, the influence of the floor construction and foundation on the performance of the floor heating system. The ground coupled floor heating model is validated against measurements from a single-family house. The simulation model is coupled to a whole-building energy simulation model with inclusion of heat losses and heat supply to the room above the floor. This model can be used to design energy efficient houses with floor heating, focusing on the heat loss through the floor construction and foundation. It is found that it is important to model the dynamics of the floor heating system to find the correct heat loss to the ground, and further...

  20. A semi-automated volumetric software for segmentation and perfusion parameter quantification of brain tumors using 320-row multidetector computed tomography: a validation study

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Soo Young; Suh, Sangil; Ryoo, Inseon; Park, Arim; Seol, Hae Young [Korea University Guro Hospital, Department of Radiology, Seoul (Korea, Republic of); Noh, Kyoung Jin [Soonchunhyang University, Department of Electronic Engineering, Asan (Korea, Republic of); Shim, Hackjoon [Toshiba Medical Systems Korea Co., Seoul (Korea, Republic of)

    2017-05-15

    We developed a semi-automated volumetric software, NPerfusion, to segment brain tumors and quantify perfusion parameters on whole-brain CT perfusion (WBCTP) images. The purpose of this study was to assess the feasibility of the software and to validate its performance compared with manual segmentation. Twenty-nine patients with pathologically proven brain tumors who underwent preoperative WBCTP between August 2012 and February 2015 were included. Three perfusion parameters, arterial flow (AF), equivalent blood volume (EBV), and Patlak flow (PF, which is a measure of permeability of capillaries), of brain tumors were generated by a commercial software and then quantified volumetrically by NPerfusion, which also semi-automatically segmented tumor boundaries. The quantification was validated by comparison with that of manual segmentation in terms of the concordance correlation coefficient and Bland-Altman analysis. With NPerfusion, we successfully performed segmentation and quantified whole volumetric perfusion parameters of all 29 brain tumors that showed consistent perfusion trends with previous studies. The validation of the perfusion parameter quantification exhibited almost perfect agreement with manual segmentation, with Lin concordance correlation coefficients (ρc) for AF, EBV, and PF of 0.9988, 0.9994, and 0.9976, respectively. On Bland-Altman analysis, most differences between this software and manual segmentation on the commercial software were within the limit of agreement. NPerfusion successfully performs segmentation of brain tumors and calculates perfusion parameters of brain tumors. We validated this semi-automated segmentation software by comparing it with manual segmentation. NPerfusion can be used to calculate volumetric perfusion parameters of brain tumors from WBCTP. (orig.)
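
    The Lin concordance correlation coefficient used in the validation has a closed form, ρc = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2). A minimal sketch follows; the perfusion values are invented for illustration, not from the study.

        import numpy as np

        def lin_ccc(x, y):
            """Lin concordance correlation coefficient between two measurement series."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            cov = np.mean((x - x.mean()) * (y - y.mean()))
            return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

        # Example: a perfusion parameter quantified automatically vs. manually
        auto = np.array([52.1, 47.3, 60.8, 39.5, 55.0])
        manual = np.array([51.8, 47.9, 60.1, 40.2, 54.6])
        print(lin_ccc(auto, manual))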

  1. Combining 3D tracking and surgical instrumentation to determine the stiffness of spinal motion segments: a validation study.

    Science.gov (United States)

    Reutlinger, C; Gédet, P; Büchler, P; Kowal, J; Rudolph, T; Burger, J; Scheffler, K; Hasler, C

    2011-04-01

    The spine is a complex structure that provides motion in three directions: flexion and extension, lateral bending and axial rotation. So far, the investigation of the mechanical and kinematic behavior of the basic unit of the spine, a motion segment, has predominantly been the domain of in vitro experiments on spinal loading simulators. Most existing approaches to measuring spinal stiffness intraoperatively in an in vivo environment use a distractor. However, these concepts usually assume planar loading and motion. The objective of our study was to develop and validate an apparatus that allows intraoperative in vivo measurements to determine both the applied force and the resulting motion in three-dimensional space. The proposed setup combines force measurement with an instrumented distractor and motion tracking with an optoelectronic system. As the orientation of the applied force and the three-dimensional motion are known, not only force-displacement but also moment-angle relations could be determined. The validation was performed using three cadaveric lumbar ovine spines. The lateral bending stiffness of two motion segments per specimen was determined with the proposed concept and compared with the stiffness acquired on a spinal loading simulator, which was considered the gold standard. The mean values of the stiffness computed with the proposed concept were within a range of ±15% of the data obtained with the spinal loading simulator under applied loads of less than 5 Nm.
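
    With the applied moment and the resulting angle both known, a bending stiffness of the kind compared here reduces to the slope of the moment-angle relation. Below is a minimal sketch of extracting that slope with a linear fit; the angle and moment values are invented for illustration.

        import numpy as np

        # Lateral bending of one motion segment: applied moment (Nm) vs. resulting
        # intervertebral angle (deg), as could be derived from an instrumented
        # distractor combined with optoelectronic tracking.
        angle = np.array([0.0, 0.8, 1.6, 2.3, 3.1, 3.8])    # deg (illustrative)
        moment = np.array([0.0, 0.9, 1.9, 2.7, 3.7, 4.5])   # Nm (illustrative)

        stiffness, offset = np.polyfit(angle, moment, 1)    # slope in Nm/deg
        print(f"lateral bending stiffness: {stiffness:.2f} Nm/deg")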

  2. Identifying food-related life style segments by a cross-culturally valid scaling device

    DEFF Research Database (Denmark)

    Brunsø, Karen; Grunert, Klaus G.

    1994-01-01

    We present a new view of life style, based on a cognitive perspective, which makes life style specific to certain areas of consumption. The specific area of consumption studied here is food, resulting in a concept of food-related life style. An instrument is developed that can measure food-related life style...We then applied the set of scales to a fourth country, Germany, based on a representative sample of 1000 respondents. The scales had, with a few exceptions, moderately good reliabilities. A cluster analysis led to the identification of 5 segments, which differed on all 23 scales.

  3. Validation of five years (2003–2007) of SCIAMACHY CO total column measurements using ground-based spectrometer observations

    Directory of Open Access Journals (Sweden)

    A. M. Poberovskii

    2010-10-01

    Full Text Available This paper presents a validation study of SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) carbon monoxide (CO) total column measurements from the Iterative Maximum Likelihood Method (IMLM) algorithm using ground-based spectrometer observations from twenty surface stations for the five year time period of 2003–2007. Overall we find a good agreement between SCIAMACHY and ground-based observations for both mean values as well as seasonal variations. For high-latitude Northern Hemisphere stations absolute differences between SCIAMACHY and ground-based measurements are close to or fall within the SCIAMACHY CO 2σ precision of 0.2 × 10¹⁸ molecules/cm² (∼10%), indicating that SCIAMACHY can observe CO accurately at high Northern Hemisphere latitudes. For Northern Hemisphere mid-latitude stations the validation is complicated by the vicinity of emission sources for almost all stations, leading to higher ground-based measurements compared to SCIAMACHY CO within its typical sampling area of 8° × 8°. Comparisons with Northern Hemisphere mountain stations are hampered by elevation effects. After accounting for these effects, the validation provides satisfactory results. At Southern Hemisphere mid- to high latitudes SCIAMACHY is systematically lower than the ground-based measurements for 2003 and 2004, but for 2005 and later years the differences between SCIAMACHY and ground-based measurements fall within the SCIAMACHY precision. The 2003–2004 bias is consistent with previously reported results, although its origin remains under investigation. No other systematic spatial or temporal biases could be identified based on the validation presented in this paper. Validation results are robust with regard to the choices of the instrument-noise error filter, sampling area, and time averaging required for the validation of SCIAMACHY CO total column measurements. Finally, our results show that the spatial coverage of the ground

  4. Analysis System Design of Validity of Remote Sensing Image Segmentation Unit

    Institute of Scientific and Technical Information of China (English)

    刘兴权; 李全文

    2015-01-01

    With existing software, judging and analyzing the validity of remote sensing image segments is an a posteriori process: an appropriate result is obtained by comparing segmentations at different scales, which amounts to a static judgment of a finished segmentation. Such an approach prevents full use of the additional information generated during the dynamic segmentation process, such as the number of layers in a multi-scale segmentation, the segments themselves, and the relationships between them. However, the design of many segmentation strategies involves exactly this dynamic process information, and existing software provides no effective tool to evaluate the validity of segments created during the dynamic segmentation process. We therefore designed a software system that uses the ArcGIS Engine function interface to simulate the multi-scale segmentation process of remote sensing images and to judge the validity of segmentation units dynamically. On this basis, the simulation and verification of a multi-scale adaptive segmentation strategy is completed, providing a useful evaluation tool for checking whether a new segmentation strategy is reasonable.

  5. Validation of NH3 satellite observations by ground-based FTIR measurements

    Science.gov (United States)

    Dammers, Enrico; Palm, Mathias; Van Damme, Martin; Shephard, Mark; Cady-Pereira, Karen; Capps, Shannon; Clarisse, Lieven; Coheur, Pierre; Erisman, Jan Willem

    2016-04-01

    Global emissions of reactive nitrogen have increased to an unprecedented level due to human activities and are estimated to be a factor of four larger than pre-industrial levels. Concentration levels of NOx are declining, but ammonia (NH3) levels are increasing around the globe. While NH3 at its current concentrations poses significant threats to the environment and human health, relatively little is known about its total budget and global distribution. Surface observations are sparse, mainly available for north-western Europe, the United States and China, and limited by high costs and poor temporal and spatial resolution. Since the lifetime of atmospheric NH3 is short, on the order of hours to a few days, due to efficient deposition and fast conversion to particulate matter, the existing surface measurements are not sufficient to estimate global concentrations. Advanced space-based IR sounders such as the Tropospheric Emission Spectrometer (TES), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) enable global observations of atmospheric NH3 that help overcome some of the limitations of surface observations. However, satellite NH3 retrievals are complex and require extensive validation; to date only a few dedicated satellite NH3 validation campaigns have been performed, with limited spatial, vertical or temporal coverage. Recently a retrieval methodology was developed for ground-based Fourier Transform Infrared Spectroscopy (FTIR) instruments to obtain vertical concentration profiles of NH3. Here we show the applicability of retrieved columns from nine globally distributed stations with a range of NH3 pollution levels to validate satellite NH3 products.

  6. Automated cerebellar segmentation: Validation and application to detect smaller volumes in children prenatally exposed to alcohol

    Directory of Open Access Journals (Sweden)

    Valerie A. Cardenas

    2014-01-01

    Discussion: These results demonstrate excellent reliability and validity of automated cerebellar volume and mid-sagittal area measurements, compared to manual measurements. These data also illustrate that this new technology for automatically delineating the cerebellum leads to conclusions regarding the effects of prenatal alcohol exposure on the cerebellum consistent with prior studies that used labor intensive manual delineation, even with a very small sample.

  7. Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.

    Science.gov (United States)

    Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki

    2014-11-01

    Error propagation into the atmospheric, oceanic, and land surface parameters of satellite products caused by misclassification in the cloud mask is a critical issue for improving the accuracy of those products; characterizing the accuracy of the cloud mask is therefore important for investigating its influence on downstream products. In this study, we propose a method for validating cloud masks derived from multiwavelength satellite data using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data was developed using a sky index and a brightness index. Then, cloud masks derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data by two cloud-screening algorithms (MOD35 and CLAUDIA) were validated against the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagation caused by misclassification in the MOD35 and CLAUDIA cloud masks on MODIS-derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using the sky camera data. The influence of error propagation by the MOD35 cloud mask on MODIS-derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than for the CLAUDIA cloud mask; conversely, the influence of error propagation by the CLAUDIA cloud mask on MODIS-derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask.

  8. A semi-automated pipeline for the segmentation of rhesus macaque hippocampus: validation across a wide age range.

    Directory of Open Access Journals (Sweden)

    Michael R Hunsaker

    Full Text Available This report outlines a neuroimaging pipeline that allows a robust, high-throughput, semi-automated, template-based protocol for segmenting the hippocampus in rhesus macaque (Macaca mulatta) monkeys ranging from 1 week to 260 weeks of age. The semiautomated component of this approach minimizes user effort while concurrently maximizing the benefit of human expertise by requiring as few as 10 landmarks to be placed on images of each hippocampus to guide registration. Any systematic errors in the normalization process are corrected using a machine-learning algorithm that has been trained by comparing manual and automated segmentations to identify systematic errors. These methods result in high spatial overlap and reliability when compared with the results of manual tracing protocols. They also dramatically reduce the time to acquire data, an important consideration in large-scale neuroradiological studies involving hundreds of MRI scans. Importantly, other than the initial generation of the unbiased template, this approach requires only modest neuroanatomical training. It has been validated for high-throughput studies of rhesus macaque hippocampal anatomy across a broad age range.

  9. A semi-automated pipeline for the segmentation of rhesus macaque hippocampus: validation across a wide age range.

    Science.gov (United States)

    Hunsaker, Michael R; Amaral, David G

    2014-01-01

    This report outlines a neuroimaging pipeline that allows a robust, high-throughput, semi-automated, template-based protocol for segmenting the hippocampus in rhesus macaque (Macaca mulatta) monkeys ranging from 1 week to 260 weeks of age. The semiautomated component of this approach minimizes user effort while concurrently maximizing the benefit of human expertise by requiring as few as 10 landmarks to be placed on images of each hippocampus to guide registration. Any systematic errors in the normalization process are corrected using a machine-learning algorithm that has been trained by comparing manual and automated segmentations to identify systematic errors. These methods result in high spatial overlap and reliability when compared with the results of manual tracing protocols. They also dramatically reduce the time to acquire data, an important consideration in large-scale neuroradiological studies involving hundreds of MRI scans. Importantly, other than the initial generation of the unbiased template, this approach requires only modest neuroanatomical training. It has been validated for high-throughput studies of rhesus macaque hippocampal anatomy across a broad age range.

  10. Validation and downscaling of Advanced Scatterometer (ASCAT) soil moisture using ground measurements in the Western Cape, South Africa

    CSIR Research Space (South Africa)

    Moller, J

    2017-09-01

    Full Text Available …of Plant and Soil, DOI: 10.1080/02571862.2017.1318962. Validation and downscaling of Advanced Scatterometer (ASCAT) soil moisture using ground measurements in the Western Cape, South Africa. Moller J, Jovanovic N, Garcia CL, Bugan RDH, Mazvimavi D...

  11. Ground validation of oceanic snowfall detection in satellite climatologies during LOFZY

    Science.gov (United States)

    Klepp, Christian; Bumke, Karl; Bakan, Stephan; Bauer, Peter

    2010-08-01

    A thorough knowledge of global ocean precipitation is an indispensable prerequisite for understanding the water cycle in the global climate system. However, reliable detection of precipitation over the global oceans, especially of solid precipitation, remains a challenging task. This is true both for passive microwave remote sensing and for reanalysis-based model estimates. The optical disdrometer ODM 470 is a ground validation instrument capable of measuring rain and snowfall on ships even at high wind speeds. It was used for the first time over the Nordic Seas during the LOFZY 2005 campaign. A dichotomous verification of precipitation occurrence resulted in perfect correspondence between the disdrometer, a precipitation detector, and a shipboard observer's log. The disdrometer data are further point-to-area collocated against precipitation from the satellite-based Hamburg Ocean Atmosphere Parameters and fluxes from Satellite data (HOAPS) climatology. HOAPS precipitation turns out to be overall consistent with the disdrometer data, resulting in a detection accuracy of 0.96. The collocated data comprise light precipitation events below 1 mm h⁻¹; therefore two LOFZY case studies with high precipitation rates are presented that indicate plausible HOAPS satellite precipitation rates. Overall, this encourages longer-term measurements of ship-to-satellite collocated precipitation in the near future.
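
    The dichotomous (yes/no) verification quoted above reduces to a 2 × 2 contingency table. A minimal Python sketch; the occurrence flags are invented, not LOFZY data:

        # Detection accuracy from a 2x2 contingency table:
        # (hits + correct negatives) / total comparisons.
        def detection_accuracy(ground, satellite):
            pairs = list(zip(ground, satellite))
            hits = sum(1 for g, s in pairs if g and s)
            correct_neg = sum(1 for g, s in pairs if not g and not s)
            return (hits + correct_neg) / len(pairs)

        # 1 = precipitation occurred, 0 = none (illustrative values)
        disdrometer = [1, 0, 0, 1, 1, 0, 0, 0]
        hoaps       = [1, 0, 0, 1, 0, 0, 0, 0]
        print(detection_accuracy(disdrometer, hoaps))  # 0.875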

  12. TEMIS UV product validation using NILU-UV ground-based measurements in Thessaloniki, Greece

    Science.gov (United States)

    Zempila, Melina-Maria; van Geffen, Jos H. G. M.; Taylor, Michael; Fountoulakis, Ilias; Koukouli, Maria-Elissavet; van Weele, Michiel; van der A, Ronald J.; Bais, Alkiviadis; Meleti, Charikleia; Balis, Dimitrios

    2017-06-01

    This study aims to cross-validate ground-based and satellite-based models of three photobiological UV effective dose products: the Commission Internationale de l'Éclairage (CIE) erythemal UV, the production of vitamin D in the skin, and DNA damage, using high-temporal-resolution surface-based measurements of solar UV spectral irradiances from a synergy of instruments and models. The satellite-based Tropospheric Emission Monitoring Internet Service (TEMIS; version 1.4) UV daily dose data products were evaluated over the period 2009 to 2014 with ground-based data from a Norsk Institutt for Luftforskning (NILU)-UV multifilter radiometer located at the northern midlatitude super-site of the Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki (LAP/AUTh), in Greece. For the NILU-UV effective dose rates retrieval algorithm, a neural network (NN) was trained to learn the nonlinear functional relation between NILU-UV irradiances and collocated Brewer-based photobiological effective dose products. The algorithm was then subjected to sensitivity analysis and validation. The correlation of the NN estimates with target outputs was high (r = 0.988 to 0.990) and the bias very low (0.000 to 0.011 in absolute units), proving the robustness of the NN algorithm. For further evaluation of the NILU NN-derived products, retrievals of the vitamin D and DNA-damage effective doses from a collocated Yankee Environmental Systems (YES) UVB-1 pyranometer were used. For cloud-free days, differences in the derived UV doses are below 2 % for all UV dose products, demonstrating the reference quality of the ground-based UV doses at Thessaloniki from the NILU-UV NN retrievals. The TEMIS UV doses used in this study are derived from ozone measurements by the SCIAMACHY/Envisat and GOME2/MetOp-A satellite instruments over the European domain, in combination with the SEVIRI/Meteosat-based diurnal cycle of the cloud cover fraction per 0.5° × 0.5° (lat × long) grid cells. TEMIS
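
    The NN step described above is, in essence, a nonlinear regression from multifilter irradiances to Brewer-derived dose rates. A hedged Python sketch under assumed data shapes (five input channels, one dose target; the actual network architecture, features, and training data are not specified in the record):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.random((500, 5))                  # stand-in NILU-UV channel irradiances
        w = np.array([0.4, 0.3, 0.1, 0.15, 0.05])
        y = (X @ w) ** 1.2                        # stand-in nonlinear dose target

        nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        nn.fit(X, y)
        r = np.corrcoef(nn.predict(X), y)[0, 1]   # correlation with target outputs
        print(f"r = {r:.3f}")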

  13. Construct validity of RT3 accelerometer: A comparison of level-ground and treadmill walking at self-selected speeds

    Directory of Open Access Journals (Sweden)

    Paul Hendrick, MPhty

    2010-04-01

    Full Text Available This study examined differences in accelerometer output when subjects walked on level ground and on a treadmill. We asked 25 nondisabled participants to wear an RT3 triaxial accelerometer (StayHealthy, Inc; Monrovia, California) and walk at their "normal" and "brisk" walking speeds for 10 minutes. These activities were repeated on a treadmill, using the individual speeds from level-ground walking, on two occasions 1 week apart. Paired t-tests found a difference in RT3 accelerometer vector magnitude (VM) counts/min between the two walking speeds on both surfaces on days 1 and 2 (p < 0.05). We also found wide limits of agreement between level-ground and treadmill walking at both speeds. Measurement and discrimination of walking intensity employing RT3 accelerometer VM counts/min on the treadmill demonstrated reasonable validity and stability over two time points compared with level-ground walking.
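
    "Limits of agreement" above refers to the Bland-Altman method. A minimal Python sketch with invented counts, not study data:

        import numpy as np

        overground = np.array([310., 295., 330., 305., 320.])  # VM counts/min
        treadmill  = np.array([290., 300., 310., 280., 335.])

        diff = overground - treadmill
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)       # 95% limits of agreement
        print(f"bias {bias:.1f}, LoA [{bias - half_width:.1f}, {bias + half_width:.1f}]")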

  14. An Asian validation of the TIMI risk score for ST-segment elevation myocardial infarction.

    Directory of Open Access Journals (Sweden)

    Sharmini Selvarajah

    Full Text Available BACKGROUND: Risk stratification in ST-elevation myocardial infarction (STEMI) is important, such that the most resource-intensive strategy is used to achieve the greatest clinical benefit. This is essential in developing countries with wide variation in health care facilities, scarce resources and an increasing burden of cardiovascular diseases. This study sought to validate the Thrombolysis In Myocardial Infarction (TIMI) risk score for STEMI in a multi-ethnic developing country. METHODS: Data from a national, prospective, observational registry of acute coronary syndromes was used. The TIMI risk score was evaluated in 4701 patients who presented with STEMI. Model discrimination and calibration were tested in the overall population and in subgroups of patients at higher risk of mortality, i.e., diabetics and those with renal impairment. RESULTS: Compared to the TIMI population, this study population was younger, had more chronic conditions, more severe index events and received treatment later. The TIMI risk score was strongly associated with 30-day mortality. Discrimination was good for the overall study population (c statistic 0.785) and in the high-risk subgroups: diabetics (c statistic 0.764) and renal impairment (c statistic 0.761). Calibration was good for the overall study population and diabetics, with χ2 goodness-of-fit test p values of 0.936 and 0.983 respectively, but poor for those with renal impairment (χ2 goodness-of-fit test p value of 0.006). CONCLUSIONS: The TIMI risk score is valid and can be used for risk stratification of STEMI patients for better targeted treatment.
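
    The c statistic reported above is the area under the ROC curve of risk score versus outcome. A small Python sketch; the scores and outcomes are made up for demonstration:

        from sklearn.metrics import roc_auc_score

        timi_scores  = [2, 5, 8, 3, 1, 9, 4, 7, 6, 2]   # TIMI risk score per patient
        died_30_days = [0, 0, 1, 0, 0, 1, 0, 1, 1, 0]   # 30-day mortality outcome

        c_statistic = roc_auc_score(died_30_days, timi_scores)
        print(f"c statistic: {c_statistic:.3f}")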

  15. A Fast Method for Segmenting Indoor Obstacles from the Ground

    Institute of Scientific and Technical Information of China (English)

    卜燕; 王姮; 张华; 刘桂华; 李志雄

    2016-01-01

    Because the ground contains rich information, it is commonly used to provide environmental information for map building and navigation by indoor mobile robots. Light reflection interferes strongly with ground detection, and the ground is difficult to distinguish in environments of similar colour, so high-intensity reflection areas are treated as "defects" to be detected. Filling each defect with information from its surroundings effectively restores the colour uniformity of the ground. Colour segmentation is then performed using the joint HSV density, and the regional characteristics of the ground position yield an accurate segmentation of obstacles from the ground. Experiments show that the proposed approach is computationally simple, widely applicable, and accurate, and that it supports real-time obstacle avoidance for robots.

  16. Validation of the CrIS fast physical NH3 retrieval with ground-based FTIR

    Directory of Open Access Journals (Sweden)

    E. Dammers

    2017-07-01

    Full Text Available Presented here is the validation of the CrIS (Cross-track Infrared Sounder) fast physical NH3 retrieval (CFPR) column and profile measurements using ground-based Fourier transform infrared (FTIR) observations. We use the total columns and profiles from seven FTIR sites in the Network for the Detection of Atmospheric Composition Change (NDACC) to validate the satellite data products. The overall FTIR and CrIS total columns have a positive correlation of r = 0.77 (N = 218) with very little bias (a slope of 1.02). Binning the comparisons by total column amount, for concentrations larger than 1.0 × 10^16 molecules cm⁻², i.e. ranging from moderate to polluted conditions, the relative difference is on average ∼0–5 % with a standard deviation of 25–50 %, which is comparable to the estimated retrieval uncertainties in both CrIS and the FTIR. For the smallest total column range (< 1.0 × 10^16 molecules cm⁻²), where there are a large number of observations at or near the CrIS noise level (detection limit), the absolute differences between CrIS and the FTIR total columns show a slight positive column bias. The CrIS and FTIR profile comparison differences are mostly within the range of the single-level retrieved profile values from estimated retrieval uncertainties, showing average differences in the range of ∼20 to 40 %. The CrIS retrievals typically show good vertical sensitivity down into the boundary layer, typically peaking at ∼850 hPa (∼1.5 km). At this level the median absolute difference is 0.87 (std = ±0.08) ppb, corresponding to a median relative difference of 39 % (std = ±2 %). Most of the absolute and relative profile comparison differences are in the range of the estimated retrieval uncertainties. At the surface, where CrIS typically has lower sensitivity, it tends to overestimate in low-concentration conditions and underestimate
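
    The headline statistics above (correlation, regression slope, relative difference) are simple to reproduce. A Python sketch on invented column pairs, not the study's data:

        import numpy as np

        ftir = np.array([0.8, 1.5, 2.4, 3.1, 4.0]) * 1e16   # molecules cm^-2
        cris = np.array([0.9, 1.4, 2.6, 3.0, 4.2]) * 1e16

        r = np.corrcoef(ftir, cris)[0, 1]                   # correlation
        slope = np.polyfit(ftir, cris, 1)[0]                # regression slope
        rel_diff = 100.0 * (cris - ftir) / ftir             # relative difference, %
        print(f"r = {r:.2f}, slope = {slope:.2f}, mean rel. diff = {rel_diff.mean():.1f} %")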

  17. GPS and InSAR observations of ground deformation in the northern Malawi (Nyasa) rift from the SEGMeNT project

    Science.gov (United States)

    Durkin, W. J., IV; Pritchard, M. E.; Elliott, J.; Zheng, W.; Saria, E.; Ntambila, D.; Chindandali, P. R. N.; Nooner, S. L.; Henderson, S. T.

    2016-12-01

    We describe new ground deformation observations from the SEGMeNT (Study of Extension and maGmatism in Malawi aNd Tanzania) project spanning the northern sector of the Malawi (Nyasa) rift, which is one of the few places in the world suitable for a comprehensive study of early rifting processes. We installed 12 continuous GPS sensors spanning 700 km across the rift, including Tanzania, Malawi, and Zambia, to measure the width and gradient of the actively deforming zone. Most of these stations now have 3 or more years of data, although a few have shorter time series because of station vandalism. Spanning a smaller area, but with higher spatial resolution, we have created a time series of ground deformation using 150 interferograms from the Japanese ALOS-1 satellite spanning June 2007 to December 2010. We also present interferograms from other satellites, including ERS, Envisat, and Sentinel, spanning shorter time intervals. The observations include the 2009–2010 Karonga earthquake sequence and associated postseismic deformation as seen by multiple independent satellite lines of sight, which we model using a fault geometry determined from relocated aftershocks recorded by a local seismic array. We have not found any ground deformation at the Rungwe volcanic province from InSAR within our detection threshold (∼2 cm/yr), but we have observed localized seasonal ground movements exceeding 8 cm that are associated with subsidence in the dry season and uplift at the beginning of the wet season.

  18. Comparison of vertical ground reaction forces during overground and treadmill running. A validation study

    NARCIS (Netherlands)

    Kluitenberg, Bas; Bredeweg, Steef W.; Zijlstra, Sjouke; Zijlstra, Wiebren; Buist, Ida

    2012-01-01

    Background: One major drawback in measuring ground-reaction forces during running is that it is time consuming to get representative ground-reaction force (GRF) values with a traditional force platform. An instrumented force-measuring treadmill can overcome the shortcomings inherent to overground testing.

  20. A validation of ground ambulance pre-hospital times modeled using geographic information systems

    Directory of Open Access Journals (Sweden)

    Patel Alka B

    2012-10-01

    Full Text Available Background: Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. Methods: The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS-derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. Results: There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were longer than the average previously reported by 7–8 minutes. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area. Conclusions: The widespread use of generalized EMS pre

  1. A validation of ground ambulance pre-hospital times modeled using geographic information systems.

    Science.gov (United States)

    Patel, Alka B; Waters, Nigel M; Blanchard, Ian E; Doig, Christopher J; Ghali, William A

    2012-10-03

    Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were longer than the average previously reported by 7-8 minutes. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area. The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a
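
    The four-interval structure described in both versions of this record translates directly into a toy estimator. A Python sketch; the interval values below are placeholders, not the study's revised assumptions:

        # Total pre-hospital time as the sum of its four intervals (minutes).
        def prehospital_time_minutes(activation, response, on_scene, transport):
            return activation + response + on_scene + transport

        # GIS typically supplies the transport interval (network travel time);
        # the remaining intervals are the fixed assumptions the study re-estimates.
        estimate = prehospital_time_minutes(activation=1.5, response=7.0,
                                            on_scene=20.0, transport=9.3)
        print(f"estimated total: {estimate:.1f} min")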

  2. Validation of Satellite AOD Data with the Ground PM10 Data over Islamabad Pakistan

    Science.gov (United States)

    Bulbul, Gufran; Shahid, Imran

    2016-07-01

    health. In this study, concentrations of PM10 will be monitored at different sites in the H-12 sector and along Kashmir Highway, Islamabad, using a high-volume air sampler, and their chemical characterization will be done using energy-dispersive XRF. The first applications of satellite remote sensing for aerosol monitoring began in the mid-1970s, detecting desert particles above the ocean using data from the Landsat, GOES, and AVHRR satellites. Maps of Aerosol Optical Depth (AOD) over the ocean were produced using the 0.63 µm channel of the Advanced Very High Resolution Radiometer (AVHRR), and aerosol properties were retrieved from AVHRR data. The usable range of wavelengths (shorter and longer) for remote sensing of aerosol particles is mostly restricted by ozone and gaseous absorption. The purpose of the study is to validate satellite Aerosol Optical Depth (AOD) data at regional and local scales for Pakistan. The objectives are: to quantify the concentration of PM10; to investigate its elemental composition; to find possible sources; and to validate against MODIS satellite AOD. Methodology: PM10 concentrations will be measured at different sites of NUST Islamabad, Pakistan using a high-volume air sampler, an air-sampling instrument capable of sampling high volumes of air (typically 57,000 ft³ or 1,600 m³) at high flow rates (typically 1.13 m³/min or 40 ft³/min) over an extended sampling duration (typically 24 h). The sampling period will be 24 hours. Particles in the PM10 size range are collected on the filter(s) during the specified 24-h sampling period. Each sample filter will be weighed before and after sampling to determine the net weight (mass) gain of the collected PM10 sample (40 CFR Part 50, Appendix M, US EPA). The next step will be chemical characterization: element concentrations will be determined by the energy-dispersive X-ray fluorescence (ED-XRF) technique. The ED-XRF system uses an X-ray tube to

  3. Multimodal Navigation in Endoscopic Transsphenoidal Resection of Pituitary Tumors Using Image-Based Vascular and Cranial Nerve Segmentation: A Prospective Validation Study.

    Science.gov (United States)

    Dolati, Parviz; Eichberg, Daniel; Golby, Alexandra; Zamani, Amir; Laws, Edward

    2016-11-01

    Transsphenoidal surgery (TSS) is the most common approach for the treatment of pituitary tumors. However, misdirection, vascular damage, intraoperative cerebrospinal fluid leakage, and optic nerve injuries are all well-known complications, and adverse events are more likely in less-experienced hands. This prospective study was conducted to validate the accuracy of image-based segmentation coupled with neuronavigation in localizing neurovascular structures during TSS. Twenty-five patients with a pituitary tumor underwent preoperative 3-T magnetic resonance imaging (MRI), and the MRI images loaded into the navigation platform were used for segmentation and preoperative planning. After patient registration and subsequent surgical exposure, each segmented neural or vascular element was validated by manual placement of the navigation probe or Doppler probe on, or as close as possible to, the target. Preoperative segmentation of the internal carotid artery and cavernous sinus matched the intraoperative endoscopic and micro-Doppler findings in all cases. Excellent correspondence between image-based segmentation and the endoscopic view was also evident at the surface of the tumor and at the tumor-normal gland interfaces. Image guidance assisted the surgeons in localizing the optic nerve and chiasm in 64% of cases. The mean accuracy of the measurements was 1.20 ± 0.21 mm. Image-based preoperative vascular and neural element segmentation, especially with 3-dimensional reconstruction, is highly informative preoperatively and could potentially assist less-experienced neurosurgeons in preventing vascular and neural injury during TSS. In addition, the accuracy found in this study is comparable to previously reported neuronavigation measurements. This preliminary study is encouraging for future prospective intraoperative validation with larger numbers of patients.

  4. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

    Herd, Dean Wyatte, Kenneth Latimer, and Randy O'Reilly, Computational Cognitive Neuroscience Lab, Department of Psychology, University of Colorado at... (only a fragment of the abstract is recoverable): between figure and ground the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis...

  5. Denoising and Back Ground Clutter of Video Sequence using Adaptive Gaussian Mixture Model Based Segmentation for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    Shanmugapriya. K

    2014-01-01

    Full Text Available The human action recognition system first gathers images by simply querying the name of the action on a web image search engine such as Google or Yahoo. Based on the assumption that the set of retrieved images contains relevant images of the queried action, we construct a dataset of action images in an incremental manner. This yields a large image set, which includes images of actions taken from multiple viewpoints in a range of environments, performed by people with varying body proportions and different clothing. The images mostly present "key poses," since each image must convey the action with a single pose. To support this, the existing system first used an incremental image retrieval procedure to collect and clean up the training set needed to build the human pose classifiers. Several challenges come at the expense of this broad and representative data. First, the retrieved images are very noisy, since the Web is very diverse. Second, detecting and estimating the pose of humans in still images is more difficult than in videos, partly due to background clutter and the lack of a foreground mask: in videos, foreground segmentation can exploit motion cues to great benefit, whereas in still images the only cue at hand is appearance, so the model must address the various challenges associated with different forms of appearance. Therefore, for robust separation, this work proposes a segmentation algorithm based on Gaussian mixture models that is adaptive to lighting, shadow, and white balance. The algorithm processes video with or without noise and builds adaptive background models based on scene characteristics; it is a very effective technique for background modeling that classifies each pixel of a video frame as either background or foreground based on a probability distribution.
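
    A hedged Python sketch of adaptive Gaussian-mixture background subtraction in the spirit of the approach above, using OpenCV's MOG2 implementation (the paper's own adaptation rules may differ; the input file name is hypothetical):

        import cv2

        cap = cv2.VideoCapture("action_clip.mp4")   # hypothetical input file
        subtractor = cv2.createBackgroundSubtractorMOG2(
            history=500, varThreshold=16, detectShadows=True)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Each pixel is classified foreground/background from its per-pixel
            # mixture of Gaussians; shadows get an intermediate mask value.
            fg_mask = subtractor.apply(frame)

        cap.release()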

  6. The SCEC Broadband Platform: Open-Source Software for Strong Ground Motion Simulation and Validation

    Science.gov (United States)

    Goulet, C.; Silva, F.; Maechling, P. J.; Callaghan, S.; Jordan, T. H.

    2015-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform (BBP) is a carefully integrated collection of open-source scientific software programs that can simulate broadband (0-100 Hz) ground motions for earthquakes at regional scales. The BBP scientific software modules implement kinematic rupture generation, low- and high-frequency seismogram synthesis using wave propagation through 1D layered velocity structures, seismogram ground motion amplitude calculations, and goodness-of-fit measurements. These modules are integrated into a software system that provides user-defined, repeatable calculation of ground motion seismograms, using multiple alternative ground motion simulation methods, and software utilities that can generate plots, charts, and maps. The BBP has been developed over the last five years in a collaborative scientific, engineering, and software development project involving geoscientists, earthquake engineers, graduate students, and SCEC scientific software developers. The BBP can run earthquake rupture and wave propagation modeling software to simulate ground motions for well-observed historical earthquakes and to quantify how well the simulated broadband seismograms match the observed seismograms. The BBP can also run simulations for hypothetical earthquakes. In this case, users input an earthquake location and magnitude description, a list of station locations, and a 1D velocity model for the region of interest, and the BBP software then calculates ground motions for the specified stations. The SCEC BBP software released in 2015 can be compiled and run on recent Linux systems with GNU compilers. It includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, updated ground motion simulation methods, and a simplified command line user interface.

  7. The Effects of Highlighting, Validity, and Feature Type on Air-to-Ground Target Acquisition Performance.

    Science.gov (United States)

    2007-11-02

    [Figure residue; recoverable captions only: validity × target × lead-in interaction on initial response time (highlighted trials); Figure 3.10: validity × lead-in × target interaction; confirmation time]

  8. An Efficient Optical Observation Ground Network is the Fundamental basis for any Space Based Debris Observation Segment

    Science.gov (United States)

    Cibin, L.; Chiarini, M.; Annoni, G.; Milani, A.; Bernardi, F.; Dimare, L.; Valsecchi, G.; Rossi, A.; Ragazzoni, R.; Salinari, P.

    2013-08-01

    A strongly debated matter in the SSA community concerns the observation of space debris from space [1]. Our team has carried out preliminary studies of this topic for the LEO, MEO and GEO orbital belts, which highlight a fundamental point: to provide, at an acceptable cost-to-performance ratio, a capability unavailable from the ground, any space-based system must operate in tight collaboration with an efficient optical ground observation network. In this work we analyze the different functionalities that can be implemented with this approach for each orbital belt, noting the achievable targets in terms of population size as a function of the observed orbits. Further, a preliminary definition of the most interesting mission scenarios is presented, together with considerations and assessments of the observation strategy and payload (P/L) characteristics.

  9. Validation of CALIPSO space-borne-derived attenuated backscatter coefficient profiles using a ground-based lidar in Athens, Greece

    Directory of Open Access Journals (Sweden)

    R. E. Mamouri

    2009-09-01

    Full Text Available We present initial aerosol validation results for the space-borne lidar CALIOP (onboard the CALIPSO satellite) Level 1 attenuated backscatter coefficient profiles, using coincident observations performed with a ground-based lidar in Athens, Greece (37.9° N, 23.6° E). A multi-wavelength ground-based backscatter/Raman lidar system has been operating since 2000 at the National Technical University of Athens (NTUA) in the framework of the European Aerosol Research LIdar NETwork (EARLINET), the first lidar network for tropospheric aerosol studies on a continental scale. Since July 2006, a total of 40 coincident ground-based lidar measurements were performed over Athens during CALIPSO overpasses. The ground-based measurements were performed each time CALIPSO passed within a maximum distance of 100 km of the station. The duration of the ground-based lidar measurements was approximately two hours, centred on the satellite overpass time. From the analysis of the ground-based/satellite correlative lidar measurements, a mean bias of the order of 22% for daytime measurements and of 8% for nighttime measurements with respect to the CALIPSO profiles was found for altitudes between 3 and 10 km. The mean bias becomes much larger (of the order of 60%) for altitudes lower than 3 km, which is attributed to increased aerosol horizontal inhomogeneity within the Planetary Boundary Layer, resulting in the observation of possibly different air masses by the two instruments. In cases of aerosol layers underlying cirrus clouds, comparison results for aerosol tropospheric profiles become worse. This is attributed to the significant multiple-scattering effects in cirrus clouds experienced by CALIPSO, which result in an attenuation less than that measured by the ground-based lidar.

  10. Self-organizing strategy design and validation for integrated air-ground detection swarm

    Institute of Scientific and Technical Information of China (English)

    Meiyan An; Zhaokui Wang; Yulin Zhang

    2016-01-01

    A self-organized integrated air-ground detection swarm is tentatively applied to reentry-vehicle landing detection, such as searching for and rescuing a manned spaceship. The detection swarm consists of multiple unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs). The UAVs can access a detected object quickly owing to their high mobility, while the UGVs can investigate the object comprehensively thanks to the variety of equipment they carry. In addition, the integrated air-ground detection swarm is capable of detecting from the ground and the air simultaneously. To accomplish the coordination of the UGVs and UAVs, all of them are regarded as individuals of the artificial swarm. These individuals make control decisions independently of one another based on the self-organizing strategy. The overall requirements for the detection swarm are analyzed, and the theoretical model of the self-organizing strategy, based on a combined individual and environmental virtual function, is established. Numerical investigation proves that the self-organizing strategy is suitable and scalable for controlling the detection swarm. To further inspect engineering reliability, an experimental setup was established in the laboratory; the experimental demonstration shows that the self-organizing strategy drives the detection swarm to form a close-range, multiangular surveillance configuration around a landing spot.
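
    The record does not reproduce the combined individual/environmental virtual function itself; a generic artificial-potential-field sketch in Python conveys the flavor of such self-organizing control (the gains, terms, and update rule below are illustrative assumptions, not the paper's model):

        import numpy as np

        def swarm_step(positions, target, k_att=0.5, k_rep=0.2, d0=1.0, dt=0.1):
            """One update: attraction to the landing spot (environmental term)
            plus short-range pairwise repulsion (individual term)."""
            updated = []
            for i, p in enumerate(positions):
                force = k_att * (target - p)
                for j, q in enumerate(positions):
                    if i != j:
                        d = np.linalg.norm(p - q)
                        if d < d0:
                            force += k_rep * (p - q) / (d**2 + 1e-9)
                updated.append(p + dt * force)
            return updated

        agents = [np.array([0., 0.]), np.array([0.5, 0.2]), np.array([4., 3.])]
        spot = np.array([2., 2.])
        for _ in range(100):
            agents = swarm_step(agents, spot)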

  11. Validity and reliability of pressure-measurement insoles for vertical ground reaction force assessment in field situations.

    Science.gov (United States)

    Koch, Markus; Lunde, Lars-Kristian; Ernst, Michael; Knardahl, Stein; Veiersted, Kaj Bo

    2016-03-01

    This study aimed to test the validity and reliability of pressure-measurement insoles (medilogic® insoles) for measuring vertical ground reaction forces in field situations. Various weights were applied to and removed from the insoles in static mechanical tests. The force values measured simultaneously by the insoles and force plates were compared for 15 subjects simulating work activities. Reliability testing during the static mechanical tests yielded an average intraclass correlation coefficient of 0.998. Static loads led to a creeping pattern in the output force signal, and an individual load response could be observed for each insole. The average root mean square error between the insoles and force plates ranged from 6.6% to 17.7% in standing, walking, lifting and catching trials, and was 142.3% in kneeling trials. The results show that insoles may be an acceptable method for measuring vertical ground reaction forces in field studies, except for kneeling positions.
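
    The error measure quoted above is a root-mean-square error expressed relative to the reference signal. A minimal Python sketch with invented force samples:

        import numpy as np

        def rmse_percent(measured, reference):
            """RMSE of (measured - reference), as % of the mean reference signal."""
            err = np.asarray(measured) - np.asarray(reference)
            return 100.0 * np.sqrt(np.mean(err ** 2)) / np.mean(reference)

        insole      = [710., 820., 760., 905., 688.]   # vertical GRF, N (invented)
        force_plate = [700., 830., 790., 880., 700.]
        print(f"{rmse_percent(insole, force_plate):.1f} %")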

  12. Survivability enhancement study for C/sup 3/I/BM (communications, command, control and intelligence/battle management) ground segments: Final report

    Energy Technology Data Exchange (ETDEWEB)

    1986-10-30

    This study involves a concept developed by the Fairchild Space Company which is directly applicable to the Strategic Defense Initiative (SDI) Program as well as other national security programs requiring reliable, secure and survivable telecommunications systems. The overall objective of this study program was to determine the feasibility of combining and integrating long-lived, compact, autonomous isotope power sources with fiber optic and other types of ground segments of the SDI communications, command, control and intelligence/battle management (C/sup 3/I/BM) system in order to significantly enhance the survivability of those critical systems, especially against the potential threats of electromagnetic pulse(s) (EMP) resulting from high altitude nuclear weapon explosion(s). 28 figs., 2 tabs.

  13. Validation and understanding of Moderate Resolution Imaging Spectroradiometer aerosol products (C5) using ground-based measurements from the handheld Sun photometer network in China

    Science.gov (United States)

    Zhanqing Li; Feng Niu; Kwon-Ho Lee; Jinyuan Xin; Wei Min Hao; Bryce L. Nordgren; Yuesi Wang; Pucai Wang

    2007-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) currently provides the most extensive aerosol retrievals on a global basis, but validation is limited to a small number of ground stations. This study presents a comprehensive evaluation of Collection 4 and 5 MODIS aerosol products using ground measurements from the Chinese Sun Hazemeter Network (CSHNET). The...

  14. Dimensionless Maps for the Validity of Analytical Ground Heat Transfer Models for GSHP Applications

    Directory of Open Access Journals (Sweden)

    Paolo Conti

    2016-10-01

    Full Text Available This article provides plain and handy expressions for deciding the most suitable analytical model for the thermal analysis of the ground source in vertical ground-coupled heat pump applications. We perform a comprehensive dimensionless analysis of the reciprocal deviation among the classical infinite, finite, linear and cylindrical heat source models in purely conductive media. In addition, we complete the framework of possible borehole models with the "hollow" finite cylindrical heat source solution, still lacking in the literature. Analytical expressions are effective tools for both design and performance assessment: they provide practical and general indications of the thermal behavior of the ground with an advantageous tradeoff between calculation effort and solution accuracy. This notwithstanding, their applicability to any specific case is always subject to the coherence of the model assumptions, also in terms of length and time scales, with the case of interest. We propose several dimensionless criteria to evaluate when one model is practically equivalent to another, and handy maps that can be used for both design and performance analysis. Finally, we find that the finite line source represents the most suitable model for borehole heat exchangers (BHEs), as it is applicable to a wide range of space and time scales, practically providing the same results as more complex models.
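
    For orientation, the simplest of the models compared above, the infinite line source, has the closed form dT(r, t) = q/(4πk) · E1(r²/(4αt)). A Python sketch with illustrative parameter values (not taken from the article):

        import numpy as np
        from scipy.special import exp1  # exponential integral E1

        q = 50.0      # heat rate per unit borehole length, W/m
        k = 2.0       # ground thermal conductivity, W/(m K)
        alpha = 1e-6  # ground thermal diffusivity, m^2/s
        r = 0.06      # radial distance (borehole radius), m

        t = 3600.0 * 24 * 30   # one month, in seconds
        dT = q / (4 * np.pi * k) * exp1(r**2 / (4 * alpha * t))
        print(f"ground temperature rise after one month: {dT:.2f} K")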

  15. Space and ground segment performance and lessons learned of the FORMOSAT-3/COSMIC mission: four years in orbit

    Directory of Open Access Journals (Sweden)

    C.-J. Fong

    2011-06-01

    Full Text Available The FORMOSAT-3/COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) mission, consisting of six Low-Earth-Orbit (LEO) satellites, is the world's first demonstration constellation using radio occultation signals from Global Positioning System (GPS) satellites. The atmospheric profiles derived by processing the radio occultation signals are retrieved in near real time for global weather/climate monitoring, numerical weather prediction, and space weather research. The mission has processed, on average, 1400 to 1800 high-quality atmospheric sounding profiles per day. The atmospheric radio occultation data are assimilated into operational numerical weather prediction models for global weather prediction, including typhoon/hurricane/cyclone forecasts, and have shown a positive impact on weather predictions at many national weather forecast centers. A follow-on mission was proposed to transition the current experimental research mission into a significantly improved real-time operational mission that will reliably provide 8000 radio occultation soundings per day. The follow-on mission, as planned, will consist of 12 LEO satellites (compared to 6 for the current mission) with a data latency requirement of 45 min (compared to 3 h for the current mission), which will provide greatly enhanced opportunities for operational forecasts and scientific research. This paper addresses the FORMOSAT-3/COSMIC system and mission overview, the spacecraft and ground system performance after four years in orbit, the lessons learned from the technical challenges and observations encountered, and the expected design improvements for the FORMOSAT-7/COSMIC-2 spacecraft and ground system.

  16. Development and Ground-Test Validation of Fiber Optic Sensor Attachment Techniques for Hot Structures Applications

    Science.gov (United States)

    Piazza, Anthony; Hudson, Larry D.; Richards, W. Lance

    2005-01-01

    Fiber optic strain measurements: (a) successfully attached silica fiber optic sensors to both metallics and composites; (b) accomplished valid EFPI strain measurements to 1850 °F; (c) successfully attached EFPI sensors to large-scale hot structures; and (d) attached and thermally validated FBG bond and ε_app. Future development: (a) improve characterization of sensors on C-C and C-SiC substrates; (b) extend the application to other composites such as SiC-SiC; (c) assist development of the interferometer-based sapphire sensor currently being conducted under a Phase II SBIR; and (d) complete combined thermal/mechanical testing of FBGs on composite substrates in a controlled laboratory environment.

  17. Objective Performance Evaluation of Video Segmentation Algorithms with Ground Truth

    Institute of Scientific and Technical Information of China (English)

    杨高波; 张兆扬

    2004-01-01

    While the development of particular video segmentation algorithms has attracted considerable research interest, relatively little effort has been devoted to providing a methodology for evaluating their performance. In this paper, we propose a methodology to objectively evaluate video segmentation algorithms with ground truth, based on computing the deviation of segmentation results from a reference segmentation. Four different metrics, based respectively on misclassified pixels, edges, relative foreground area and relative position, are combined to address spatial accuracy. Temporal coherency is evaluated from the difference in spatial accuracy between successive frames. The experimental results show the feasibility of our approach. Moreover, it is computationally more efficient than previous methods. It can be applied to provide an offline ranking among different segmentation algorithms and to optimally set the parameters of a given algorithm.
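
    A minimal Python sketch of the two evaluation ideas above, pixel-level spatial accuracy against ground truth and frame-to-frame temporal coherency; the paper's actual metric definitions are richer than this:

        import numpy as np

        def spatial_accuracy(result, reference):
            """Fraction of pixels agreeing with the reference segmentation mask."""
            return float(np.mean(np.asarray(result) == np.asarray(reference)))

        def temporal_coherency(per_frame_accuracy):
            """Mean absolute change of spatial accuracy between successive frames
            (smaller means more temporally coherent)."""
            a = np.asarray(per_frame_accuracy)
            return float(np.abs(np.diff(a)).mean())

        # Toy 2-frame example with 4x4 binary masks
        ref = np.zeros((4, 4), int); ref[1:3, 1:3] = 1
        seg = ref.copy(); seg[0, 0] = 1                 # one misclassified pixel
        acc = [spatial_accuracy(seg, ref), spatial_accuracy(ref, ref)]
        print(acc, temporal_coherency(acc))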

  18. Validation of GOME-2/Metop total column water vapour with ground-based and in situ measurements

    Science.gov (United States)

    Kalakoski, Niilo; Kujanpää, Jukka; Sofieva, Viktoria; Tamminen, Johanna; Grossi, Margherita; Valks, Pieter

    2016-04-01

    The total column water vapour product from the Global Ozone Monitoring Experiment-2 on board the Metop-A and Metop-B satellites (GOME-2/Metop-A and GOME-2/Metop-B), produced by the Satellite Application Facility on Ozone and Atmospheric Chemistry Monitoring (O3M SAF), is compared with co-located radiosonde observations and global positioning system (GPS) retrievals. The validation is performed using data recently reprocessed by the GOME Data Processor (GDP) version 4.7. The time periods for the validation are January 2007–July 2013 (GOME-2A) and December 2012–July 2013 (GOME-2B). The radiosonde data are from the Integrated Global Radiosonde Archive (IGRA) maintained by the National Climatic Data Center (NCDC). The ground-based GPS observations from the COSMIC/SuomiNet network are used as a second independent data source. We find good general agreement between the GOME-2 and the radiosonde/GPS data. The median relative difference of GOME-2 with respect to the radiosonde observations is -2.7 % for GOME-2A and -0.3 % for GOME-2B. Against the GPS, the median relative differences are 4.9 % and 3.2 % for GOME-2A and B, respectively. For water vapour total columns below 10 kg m⁻², large wet biases are observed, especially against the GPS retrievals. Conversely, at values above 50 kg m⁻², GOME-2 generally underestimates relative to both ground-based observations.

  19. Space-borne detection of volcanic carbon dioxide anomalies: The importance of ground-based validation networks

    Science.gov (United States)

    Schwandner, F. M.; Carn, S. A.; Corradini, S.; Merucci, L.; Salerno, G.; La Spina, A.

    2012-04-01

    2011 activity we compare GOSAT custom re-processed target-mode observation CO2 data to SO2 data from the Ozone Monitoring Instrument (OMI), the Moderate-Resolution Imaging Spectroradiometer (MODIS), and ground-based SO2 measurements obtained by the FLAME ultraviolet scanning DOAS network, as well as ground-based multi-species measurements obtained by the FTIR technique. GOSAT CO2 data show an expected seasonal pattern, because the signal is dominated by ambient atmospheric CO2. However, some possibly significant variations do appear to exist before and during eruptive events. Besides cloud and aerosol effects and volcanic emission pulses, two further factors also seem to strongly affect the signal beyond seasonal variability: different altitude ranges of sensitivity for OMI and GOSAT appear to cause inverse signal correlations when the presence of clouds allows for multiple-scattering effects, and wintertime high-altitude snow cover enhances the reflected light yield in the suspected high-concentration column portions near the ground. The latter two effects may dominate between emission pulses, and their inverse correlations stand in contrast to magmatic events, which we suspect give rise to positive correlations. (2) Integration of space-borne and ground-based observations of volcanic CO2 emissions. Monitoring remote terrestrial volcanic point sources of CO2 from space and from the ground each has advantages and disadvantages. Advantages of satellite methods include homogeneous coverage potential, a single data format, and a largely unbiased, mostly global coverage potential. Advantages of ground-based observations include easier calibration and targeting, validation, and spatial resolution capacity. While cost plays a strong role in either approach, ground-based methods are often hampered by the personnel available to expand observations to global coverage, and by a patchwork of instrumentation types, coverage, availability, quality, and

  20. Inversion model validation of ground emissivity. Contribution to the development of SMOS algorithm

    CERN Document Server

    Demontoux, François; Ruffié, Gilles; Wigneron, Jean Pierre; Grant, Jennifer; Hernandez, Daniel Medina

    2007-01-01

    SMOS (Soil Moisture and Ocean Salinity) is the second 'Earth Explorer' mission to be developed within the 'Living Planet' programme of the European Space Agency (ESA). This satellite, carrying the first 2D interferometric radiometer at 1.4 GHz, will carry out the first planetary-scale mapping of soil moisture and ocean salinity. Forests are relatively opaque at this frequency, and knowledge of the moisture beneath them remains problematic. The effect of the vegetation can be corrected with a simple radiative model; nevertheless, simulations show that the effect of the litter on the emissivity of a litter + ground system is not negligible. Our objective is to highlight the effects of this layer on the total multi-layer system. This should lead to a simple analytical formulation of a litter model that can be integrated into the SMOS calculation algorithm. Radiometer measurements, coupled with laboratory dielectric characterizations of samples, can enable us to characterize...

  1. An Experimental Facility to Validate Ground Source Heat Pump Optimisation Models for the Australian Climate

    Directory of Open Access Journals (Sweden)

    Yuanshen Lu

    2017-01-01

    Full Text Available Ground source heat pumps (GSHPs) are one of the most widespread forms of geothermal energy technology. They utilise the near-constant temperature of the ground below the frost line to achieve energy efficiencies two or three times those of conventional air conditioners, consequently allowing a significant offset in electricity demand for space heating and cooling. Relatively mature GSHP markets are established in Europe and North America. GSHP implementation in Australia, however, is limited, owing to high capital cost, uncertainty about optimum designs for the Australian climate, and limited consumer confidence in the technology. Existing GSHP design standards developed in the Northern Hemisphere are likely to lead to suboptimal performance in Australia, where demand may be much more cooling-dominated. There is an urgent need to develop Australia's own GSHP system optimisation principles on top of the industry standards to provide the confidence needed to bring the GSHP market out of its infancy. To assist in this, the Queensland Geothermal Energy Centre of Excellence (QGECE) has commissioned a fully instrumented GSHP experimental facility in Gatton, Australia, as a publicly accessible demonstration of the technology and a platform for systematic studies of GSHPs, including optimisation of design and operations. This paper presents a brief review of current GSHP use in Australia, the technical details of the Gatton GSHP facility, and an analysis of the observed cooling performance of the facility to date.

  2. NI-18: MULTIMODAL NAVIGATION IN ENDOSCOPIC TRANS-SPHENOIDAL RESECTION OF PITUITARY TUMORS USING IMAGE-BASED VASCULAR AND CRANIAL NERVE SEGMENTATION: A PROSPECTIVE VALIDATION STUDY

    Science.gov (United States)

    Dolati, Parviz; Raber, Michael; Golby, Alexandra; Laws, Edward

    2014-01-01

    Trans-sphenoidal surgery (TSS) is a well-known approach for the treatment of pituitary tumors. However, in inexperienced hands, lateral misdirection, vascular damage, intraoperative CSF leakage, and optic nerve injury are all well-known complications of this procedure. This prospective study was conducted to validate the accuracy of image-based segmentation in localizing neurovascular structures during TSS. METHODS: Eight patients with pituitary tumors underwent preoperative 3T MRI, which included thin-sectioned 3D space T2, 3D time-of-flight and MPRAGE sequences. Images were reviewed by an expert independent neuroradiologist. Imaging sequences were loaded into BrainLab iPlanNet (6/8) and Stryker (2/8) for segmentation and preoperative planning. After patient registration to the intraoperative neuronavigation system and surgical exposure, each segmented neural or vascular element was validated by manual placement of the navigation probe. The pulses of the bilateral ICA were confirmed using micro-Doppler. RESULTS: Preoperative segmentation of the ICA and cavernous sinus matched the intraoperative endoscopic and micro-Doppler findings in all cases (Dice coefficient = 1). This information reassured surgeons regarding the lateral extent of bone removal at the sellar floor and the limits of lateral exploration. Perfect correspondence between image-based segmentation and the endoscopic view was also found at the surface of the tumor and the tumor-normal gland interfaces. This helped prevent unnecessary removal of the normal pituitary gland. Image guidance helped the surgeon localize the optic nerve and chiasm in 63% of cases and the diaphragma sellae in 50% of cases, which helped determine the limits of upward exploration and decrease the risk of CSF leakage. CONCLUSION: Image-based preoperative vascular and neural element segmentation, especially with 3D reconstruction, is highly informative preoperatively and helps young and inexperienced neurosurgeons to prevent
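
    The Dice coefficient quoted above measures overlap between two binary masks. A minimal Python sketch; the masks are invented:

        import numpy as np

        def dice(a, b):
            """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
            a = np.asarray(a, dtype=bool)
            b = np.asarray(b, dtype=bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        seg   = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
        truth = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
        print(f"Dice = {dice(seg, truth):.2f}")  # 0.86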

  3. Validation and Development of a New Automatic Algorithm for Time-Resolved Segmentation of the Left Ventricle in Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Jane Tufvesson

    2015-01-01

    Full Text Available Introduction. Manual delineation of the left ventricle is the clinical standard for quantification of cardiovascular magnetic resonance images, despite being time consuming and observer dependent. Previous automatic methods generally do not account for one major contributor to stroke volume, the long-axis motion. Therefore, the aim of this study was to develop and validate an automatic algorithm for time-resolved segmentation covering the whole left ventricle, including basal slices affected by long-axis motion. Methods. Ninety subjects imaged with a cine balanced steady-state free precession sequence were included in the study (training set n=40, test set n=50). Manual delineation was the reference standard, and second-observer analysis was performed in a subset (n=25). The automatic algorithm uses a deformable model with expectation-maximization, followed by automatic removal of papillary muscles and detection of the outflow tract. Results. The mean differences between automatic segmentation and manual delineation in the test set were EDV −11 mL, ESV 1 mL, EF −3%, and LVM 4 g. Conclusions. The automatic LV segmentation algorithm reached an accuracy comparable to interobserver variability for manual delineation, thereby bringing automatic segmentation one step closer to clinical routine. The algorithm and all images with manual delineations are available for benchmarking.
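
    The reported end-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (EF) follow directly from the time-resolved volume curve that such a segmentation produces. A minimal sketch under the assumption that per-frame LV volumes are already available (the volume curve below is synthetic):

```python
import numpy as np

def lv_metrics(volumes_ml: np.ndarray):
    """Global LV function metrics from a time-resolved volume curve.

    volumes_ml: segmented LV cavity volume (mL) per cine time frame,
    e.g. summed slice areas times slice thickness over the whole ventricle.
    """
    edv = float(volumes_ml.max())   # end-diastolic volume (mL)
    esv = float(volumes_ml.min())   # end-systolic volume (mL)
    sv = edv - esv                  # stroke volume (mL)
    ef = 100.0 * sv / edv           # ejection fraction (%)
    return edv, esv, sv, ef

# Hypothetical 20-frame volume curve oscillating between ~50 and ~130 mL
t = np.linspace(0.0, 2.0 * np.pi, 20)
print(lv_metrics(90.0 + 40.0 * np.cos(t)))
```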

  4. Ground Water Atlas of the United States: Segment 13, Alaska, Hawaii, Puerto Rico, and the U.S. Virgin Islands

    Science.gov (United States)

    Miller, James A.; Whitehead, R.L.; Oki, Delwyn S.; Gingerich, Stephen B.; Olcott, Perry G.

    1997-01-01

    Alaska is the largest State in the Nation and has an area of about 586,400 square miles, or about one-fifth the area of the conterminous United States. The State is geologically and topographically diverse and is characterized by wild, scenic beauty. Alaska contains abundant natural resources, including ground water and surface water of chemical quality that is generally suitable for most uses. The central part of Alaska is drained by the Yukon River and its tributaries, the largest of which are the Porcupine, the Tanana, and the Koyukuk Rivers. The Yukon River originates in northwestern Canada and, like the Kuskokwim River, which drains a large part of southwestern Alaska, discharges into the Bering Sea. The Noatak River in northwestern Alaska discharges into the Chukchi Sea. Major rivers in southern Alaska include the Susitna and the Matanuska Rivers, which discharge into Cook Inlet, and the Copper River, which discharges into the Gulf of Alaska. North of the Brooks Range, the Colville and the Sagavanirktok Rivers and numerous smaller streams discharge into the Arctic Ocean. In 1990, Alaska had a population of about 552,000 and, thus, is one of the least populated States in the Nation. Most of the population is concentrated in the cities of Anchorage, Fairbanks, and Juneau, all of which are located in lowland areas. The mountains, the frozen Arctic desert, the interior plateaus, and the areas covered with glaciers lack major population centers. Large parts of Alaska are uninhabited and much of the State is public land. Ground-water development has not occurred over most of these remote areas. The Hawaiian islands are the exposed parts of the Hawaiian Ridge, which is a large volcanic mountain range on the sea floor. Most of the Hawaiian Ridge is below sea level (fig. 31). The State of Hawaii consists of a group of 132 islands, reefs, and shoals that extend for more than 1,500 miles from southeast to northwest across the central Pacific Ocean between about 155

  5. Characterization of Personal Privacy Devices (PPD) radiation pattern impact on the ground and airborne segments of the local area augmentation system (LAAS) at GPS L1 frequency

    Science.gov (United States)

    Alkhateeb, Abualkair M. Khair

    Personal Privacy Devices (PPDs) are radio-frequency transmitters that intentionally transmit in a frequency band used by other devices with the purpose of denying service to those devices. These devices have shown the potential to interfere with the ground and air sub-systems of the Local Area Augmentation System (LAAS), a GPS-based navigation aid at commercial airports. The Federal Aviation Administration (FAA) is concerned by the potential impact of these devices on GPS navigation aids at airports and has commenced an activity to determine the severity of this threat. In support of this effort, the research in this dissertation has been conducted under FAA Cooperative Agreement 2011-G-012 to investigate the impact of these devices on the LAAS. In order to investigate the impact of PPD radio-frequency interference (RFI) on the ground and air sub-systems of the LAAS, the work presented in phase one of this research characterizes the vehicle's impact on the PPD's Effective Isotropic Radiated Power (EIRP). A study was conceived to characterize PPD performance by examining the on-vehicle radiation patterns as a function of vehicle type, jammer type, jammer location inside a vehicle, and jammer orientation at each location. Phase two characterized the GPS radiation pattern of the Multipath Limiting Antenna (MLA), which has to meet stringent requirements for acceptable signal detection and multipath rejection. The ARL-2100 is the most recent MLA proposed for use in the LAAS ground segment. The ground-based antenna's radiation pattern was modeled via HFSS, a commercial off-the-shelf CAD-based modeling code with a full-wave electromagnetic simulation package that uses finite element analysis. Phase three of this work studied the characteristics of the GPS radiation pattern on commercial aircraft. The airborne GPS antenna was modeled and the resulting radiation pattern on

  6. AATSR Land Surface Temperature Product Validation Using Ground Measurements in China and Implications for SLSTR

    Science.gov (United States)

    Zhou, Ji; Zmuda, Andy; Desnos, Yves-Louis; Ma, Jin

    2016-08-01

    Land surface temperature (LST) is one of the most important parameters at the interface between the earth's surface and the atmosphere. It acts as a sensitive indicator of climate change and is an essential input parameter for land surface models. Because of its strong variability at different spatial and temporal scales, satellite remote sensing provides the sole opportunity to acquire LSTs over large regions. Validation of the LST products is a necessary step before their application by the scientific community, and it is essential for developers seeking to improve the LST products.

  7. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique to image cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were mostly validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
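
    The combination of a background estimate with an adaptive threshold can be sketched compactly. The threshold fraction, the two-cluster background model, and the function below are illustrative assumptions, not the authors' calibrated parameters:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def mtv_threshold_segmentation(pet_roi: np.ndarray, frac: float = 0.4):
    """Sketch of a threshold-based MTV delineation with k-means background.

    The background level is taken as the lower of two k-means intensity
    clusters; the threshold is then set at background + frac * (max -
    background). `frac` is a hypothetical value, not the calibrated one.
    """
    vals = pet_roi.ravel().astype(float)
    centroids, _ = kmeans2(vals, 2, minit='++', seed=0)
    background = centroids.min()
    threshold = background + frac * (vals.max() - background)
    return pet_roi >= threshold, threshold

# MTV in mL then follows from the mask and the voxel volume, e.g. for
# hypothetical 2 x 2 x 2 mm voxels: mtv_ml = mask.sum() * 0.008
```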

  8. Validation of VIIRS Cloud Base Heights at Night Using Ground and Satellite Measurements over Alaska

    Science.gov (United States)

    NOH, Y. J.; Miller, S. D.; Seaman, C.; Forsythe, J. M.; Brummer, R.; Lindsey, D. T.; Walther, A.; Heidinger, A. K.; Li, Y.

    2016-12-01

    Knowledge of Cloud Base Height (CBH) is critical to describing cloud radiative feedbacks in numerical models and is of practical significance to aviation communities. We have developed a new CBH algorithm constrained by Cloud Top Height (CTH) and Cloud Water Path (CWP) by performing a statistical analysis of A-Train satellite data. It includes an extinction-based method for thin cirrus. In the algorithm, cloud geometric thickness is derived from upstream CTH and CWP input and subtracted from CTH to generate the topmost-layer CBH. The CBH information is a key parameter for an improved Cloud Cover/Layers product. The algorithm has been applied to the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi NPP spacecraft. Nighttime cloud optical properties for CWP are retrieved from the nighttime lunar cloud optical and microphysical properties (NLCOMP) algorithm, based on a lunar reflectance model for the VIIRS Day/Night Band (DNB), which measures nighttime visible light such as moonlight. The DNB has innovative capabilities to fill the polar-winter and nighttime gap in cloud observations, which has been an important shortfall of conventional radiometers. The CBH products have been intensively evaluated against CloudSat data. The results showed the new algorithm yields significantly improved performance over the original VIIRS CBH algorithm. However, since CloudSat is now operational during daytime only due to a battery anomaly, the nighttime performance has not been fully assessed. This presentation will show our approach to assessing the performance of the CBH algorithm at night. VIIRS CBHs are retrieved over the Alaska region from October 2015 to April 2016 using the Clouds from AVHRR Extended (CLAVR-x) processing system. Ground-based measurements from a ceilometer and a micropulse lidar at the Atmospheric Radiation Measurement (ARM) site on the North Slope of Alaska are used for the analysis. Local weather conditions are checked using temperature and precipitation
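
    The core CBH step (subtract a CWP-derived geometric thickness from CTH) can be sketched as below. The power-law thickness parameterization and its coefficients are hypothetical placeholders; the operational algorithm derives its statistical relationships from A-Train data:

```python
import numpy as np

def cloud_base_height(cth_km, cwp_gm2, a=0.9, b=0.4):
    """Sketch of a CTH/CWP-constrained cloud base height retrieval.

    Geometric thickness is modeled as dz = a * (CWP/100)**b km, a
    hypothetical power law; CBH = CTH - dz, floored at the surface.
    """
    dz_km = a * (np.asarray(cwp_gm2) / 100.0) ** b
    return np.maximum(np.asarray(cth_km) - dz_km, 0.0)

# An 8 km cloud top with a 250 g/m^2 water path -> base near 6.7 km
print(cloud_base_height(8.0, 250.0))
```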

  9. Distributed Disdrometer and Rain Gauge Measurement Infrastructure Developed for GPM Ground Validation

    Science.gov (United States)

    Petersen, W. A.; Bringi, V.; Carey, L. D.; Gatlin, P. N.; Phillips, D.; Schwaller, M.; Tokay, A.; Wingo, M.; Wolff, D. B.

    2010-12-01

    Global Precipitation Mission (GPM)retrieval algorithm validation requires datasets characterizing the 4-D structure, variability, and correlation properties of hydrometeor particle size distributions (PSD) and accumulations over satellite fields of view (FOV;tropospheric sounding data to refine GPM snowfall retrievals. The gauge and disdrometer instruments are being developed to operate autonomously when necessary using solar power and wireless communications. These systems will be deployed in numerous field campaigns through 2016. Planned deployment of these systems include field campaigns in Finland (2010), Oklahoma (2011), Canada (2012) and North Carolina (2013). GPM will also deploy 20 pairs of TBRGs within a 25 km2 region along the Virginia coast under NASA NPOL radar coverage in order to quantify errors in point-area rainfall measurements.

  10. Validation of ground-motion simulations for historical events using SDoF systems

    Science.gov (United States)

    Galasso, C.; Zareian, F.; Iervolino, I.; Graves, R.W.

    2012-01-01

    The study presented in this paper is among the first in a series of studies toward the engineering validation of the hybrid broadband ground-motion simulation methodology of Graves and Pitarka (2010). This paper provides a statistical comparison between the seismic demands of single-degree-of-freedom (SDoF) systems subjected to past events using simulations and actual recordings. A number of SDoF systems are selected considering the following: (1) 16 oscillation periods between 0.1 and 6 s; (2) the elastic case and four nonlinearity levels, from mildly inelastic to severely inelastic systems; and (3) two hysteretic behaviors, namely nondegrading–nonevolutionary and degrading–evolutionary. Demand spectra are derived in terms of peak and cyclic response, as well as their statistics, for four historical earthquakes: the 1979 Mw 6.5 Imperial Valley, 1989 Mw 6.8 Loma Prieta, 1992 Mw 7.2 Landers, and 1994 Mw 6.7 Northridge events.
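
    For the elastic case, the peak SDoF demand at a given period can be computed by direct time-stepping of the oscillator equation. Below is a minimal sketch using the standard Newmark average-acceleration scheme; the function, the toy input record, and the 5% damping default are illustrative, and the study's inelastic, degrading systems would additionally require a hysteretic model:

```python
import numpy as np

def sdof_peak_displacement(accel, dt, period, damping=0.05):
    """Peak elastic SDoF displacement via Newmark average acceleration.

    accel: ground acceleration history (m/s^2), dt: time step (s),
    period: oscillator period (s). Unit mass is assumed.
    """
    wn = 2.0 * np.pi / period
    m, c, k = 1.0, 2.0 * damping * wn, wn ** 2
    beta, gamma = 0.25, 0.5                 # unconditionally stable scheme
    p = -m * np.asarray(accel, float)       # effective support-motion force
    keff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    ca = m / (beta * dt) + gamma / beta * c
    cb = m / (2.0 * beta) + dt * (gamma / (2.0 * beta) - 1.0) * c
    u = v = 0.0
    a = p[0] / m                            # initial relative acceleration
    umax = 0.0
    for i in range(len(p) - 1):
        du = (p[i + 1] - p[i] + ca * v + cb * a) / keff
        dv = (gamma / (beta * dt) * du - gamma / beta * v
              + dt * (1.0 - gamma / (2.0 * beta)) * a)
        da = du / (beta * dt ** 2) - v / (beta * dt) - a / (2.0 * beta)
        u, v, a = u + du, v + dv, a + da
        umax = max(umax, abs(u))
    return umax

# Hypothetical record: decaying 2 Hz sine at 0.2 g; demand at T = 1 s
t = np.arange(0.0, 10.0, 0.01)
rec = 0.2 * 9.81 * np.sin(2 * np.pi * 2 * t) * np.exp(-0.3 * t)
print(sdof_peak_displacement(rec, 0.01, 1.0))
```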

  11. Pathology-based validation of FDG PET segmentation tools for volume assessment of lymph node metastases from head and neck cancer

    Energy Technology Data Exchange (ETDEWEB)

    Schinagl, Dominic A.X. [Radboud University Nijmegen Medical Centre, Department of Radiation Oncology, Nijmegen (Netherlands); Radboud University Nijmegen Medical Centre, Department of Radiation Oncology (874), P.O. Box 9101, Nijmegen (Netherlands); Span, Paul N.; Kaanders, Johannes H.A.M. [Radboud University Nijmegen Medical Centre, Department of Radiation Oncology, Nijmegen (Netherlands); Hoogen, Frank J.A. van den [Radboud University Nijmegen Medical Centre, Department of Otorhinolaryngology, Head and Neck Surgery, Nijmegen (Netherlands); Merkx, Matthias A.W. [Radboud University Nijmegen Medical Centre, Department of Oral and Maxillofacial Surgery, Nijmegen (Netherlands); Slootweg, Piet J. [Radboud University Nijmegen Medical Centre, Department of Pathology, Nijmegen (Netherlands); Oyen, Wim J.G. [Radboud University Nijmegen Medical Centre, Department of Nuclear Medicine, Nijmegen (Netherlands)

    2013-12-15

    FDG PET is increasingly incorporated into radiation treatment planning of head and neck cancer. However, there are only limited data on the accuracy of radiotherapy target volume delineation by FDG PET. The purpose of this study was to validate FDG PET segmentation tools for volume assessment of lymph node metastases from head and neck cancer against the pathological method as the standard. Twelve patients with head and neck cancer and 28 metastatic lymph nodes eligible for therapeutic neck dissection underwent preoperative FDG PET/CT. The metastatic lymph nodes were delineated on CT (Node_CT) and ten PET segmentation tools were used to assess FDG PET-based nodal volumes: interpreting FDG PET visually (PET_VIS), applying an isocontour at a standardized uptake value (SUV) of 2.5 (PET_SUV), two segmentation tools with a fixed threshold of 40 % and 50 %, and two adaptive threshold based methods. The latter four tools were applied with the primary tumour as reference and also with the lymph node itself as reference. Nodal volumes were compared with the true volume as determined by pathological examination. Both Node_CT and PET_VIS showed good correlations with the pathological volume. PET segmentation tools using the metastatic node as reference all performed well but not better than PET_VIS. The tools using the primary tumour as reference correlated poorly with pathology. PET_SUV was unsatisfactory in 35 % of the patients due to merging of the contours of adjacent nodes. FDG PET accurately estimates metastatic lymph node volume, but beyond the detection of lymph node metastases (staging), it has no added value over CT alone for the delineation of routine radiotherapy target volumes. If FDG PET is used in radiotherapy planning, treatment adaptation or response assessment, we recommend an automated segmentation method for purposes of reproducibility and interinstitutional comparison. (orig.)

  12. Ozone columns obtained by ground-based remote sensing in Kiev for Aura Ozone Measuring Instrument validation

    Science.gov (United States)

    Shavrina, A. V.; Pavlenko, Y. V.; Veles, A.; Syniavskyi, I.; Kroon, M.

    2007-12-01

    Ground-based observations with a Fourier transform spectrometer in the infrared region (FTIR) were performed in Kiev (Ukraine) during the time frames August-October 2005 and June-October 2006 within the Ozone Monitoring Instrument (OMI) validation project 2907, entitled "OMI validation by ground based remote sensing: ozone columns and profiles", in the frame of the international European Space Agency/Netherlands Agency for Aerospace Programmes/Royal Dutch Meteorological Institute OMI Announcement of Opportunity effort. Ozone column data for 2005 were obtained by modeling the ozone spectral band at 9.6 μm with the radiative transfer code MODTRAN3.5. Our total ozone column values were found to be lower than OMI Differential Optical Absorption Spectroscopy (DOAS) total ozone column data by 8-10 Dobson units (DU, 1 DU = 0.001 atm cm) on average, while our observations have a relatively small standard error of about 2 DU. Improved modeling of the ozone spectral band, now based on HITRAN-2004 spectral data as calculated by us, moves our results toward better agreement with the OMI DOAS total ozone column data. The observations made during 2006 with a modernized FTIR spectrometer and a higher signal-to-noise ratio were simulated by MODTRAN4 model computations. For the ozone column estimates, the Aqua Atmospheric Infrared Sounder satellite water vapor and temperature profiles were combined with the Aura Microwave Limb Sounder stratospheric ozone profiles and Tropospheric Emission Monitoring Internet Service-Koninklijk Nederlands Meteorologisch Instituut climatological profiles to create a priori input files for spectral modeling. The MODTRAN4 estimates of ozone columns from the 2006 observations compare rather well with the OMI total ozone column data: standard errors are 1.11 DU and 0.68 DU, and standard deviations are 8.77 DU and 5.37 DU, for OMI DOAS and the OMI Total Ozone Mapping Spectrometer, respectively.

  13. LLNL Calibration Program: Data Collection, Ground Truth Validation, and Regional Coda Magnitude

    Energy Technology Data Exchange (ETDEWEB)

    Myers, S C; Mayeda, K; Walter, C; Schultz, C; O' Boyle, J; Hofstetter, A; Rodgers, A; Ruppert, S

    2001-08-28

    Lawrence Livermore National Laboratory (LLNL) integrates and collects data for use in calibration of seismic detection, location, and identification. Calibration data are collected by (1) numerous seismic field efforts, many conducted under NNSA (ROA) and DTRA (PRDA) contracts, and (2) permanent seismic stations that are operated by national and international organizations. Local-network operators and international organizations (e.g. the International Seismological Centre) provide location and other source characterization (collectively referred to as source parameters) to LLNL, or LLNL determines these parameters from raw data. For each seismic event, LLNL rigorously characterizes the uncertainty of the source parameters. This validation process is used to identify events whose source parameters are accurate enough for use in calibration. LLNL has developed criteria for determining the accuracy of seismic locations and methods to characterize the covariance of calibration datasets. Although the most desirable calibration events are chemical and nuclear explosions with highly accurate locations and origin times, catalogues of naturally occurring earthquakes offer needed geographic coverage that is not provided by man-made sources. The issue in using seismically determined locations for calibration is validating the location accuracy. Sweeney (1998) presented a 50/90 teleseismic network-coverage criterion (50 defining phases and 90° maximum azimuthal gap) that generally results in 15-km maximum epicenter error. We have also conducted tests of recently proposed local/regional criteria and found that 10-km accuracy can be achieved by applying a 20/90 criterion. We continue to conduct tests that may validate less stringent criteria (which will produce more calibration events) while maintaining desirable location accuracy. Lastly, we examine methods of characterizing the covariance structure of calibration datasets. Each dataset is likely to be affected by distinct error
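
    Coverage criteria of the 50/90 kind can be checked mechanically from the station-to-event azimuths of the defining phases. A minimal sketch (the function name and the synthetic azimuths are illustrative):

```python
import numpy as np

def passes_coverage_criterion(azimuths_deg, min_phases=50, max_gap_deg=90.0):
    """Check a network-coverage criterion like Sweeney's 50/90.

    azimuths_deg: station-to-event azimuths (degrees) of the defining
    phases. Returns True if there are at least `min_phases` phases and the
    largest azimuthal gap between neighbours does not exceed `max_gap_deg`.
    """
    az = np.sort(np.asarray(azimuths_deg) % 360.0)
    if az.size < min_phases:
        return False
    gaps = np.diff(np.append(az, az[0] + 360.0))  # include wrap-around gap
    return bool(gaps.max() <= max_gap_deg)

# The 20/90 local/regional variant tested in the abstract, on toy azimuths
azimuths = np.random.default_rng(1).uniform(0, 360, 24)
print(passes_coverage_criterion(azimuths, min_phases=20, max_gap_deg=90.0))
```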

  15. Validation of stratospheric temperature profiles from a ground-based microwave radiometer with other techniques

    Science.gov (United States)

    Navas, Francisco; Kämpfer, Niklaus; Haefele, Alexander; Keckhut, Philippe; Hauchecorne, Alain

    2016-04-01

    Vertical profiles of atmospheric temperature trends have become recognized as an important indicator of climate change, because different climate forcing mechanisms exhibit distinct vertical warming and cooling patterns. For example, cooling of the stratosphere is an indicator of climate change, as it provides evidence of natural and anthropogenic climate forcing just as surface warming does. Despite its importance, our understanding of the observed stratospheric temperature trend and our ability to test simulations of the stratospheric response to emissions of greenhouse gases and ozone-depleting substances remain limited. One of the main reasons is that long-term stratospheric datasets are sparse and the trends obtained differ from one another. Different techniques allow the measurement of stratospheric temperature profiles, such as radiosondes, lidar, and satellites. The main advantage of microwave radiometers over these other instruments is their high temporal resolution combined with reasonably good spatial resolution. Moreover, measurement at a fixed location allows observation of local atmospheric dynamics over a long time period, which is crucial for climate research. This study presents an evaluation of stratospheric temperature profiles from a new ground-based microwave temperature radiometer (TEMPERA), built and designed at the University of Bern. The measurements from TEMPERA are compared with those from other techniques: in-situ measurements (radiosondes), active remote sensing (lidar), and passive remote sensing on board the Aura satellite (MLS). In addition, a statistical analysis of the stratospheric temperature obtained from four years of TEMPERA measurements has been performed. This analysis evidenced the capability of the TEMPERA radiometer to monitor stratospheric temperature over the long term. The detection of some sudden stratospheric warming (SSW) events during the analyzed period shows the necessity of these

  16. Problems and possibilities of astronauts—Ground communication content analysis validity check

    Science.gov (United States)

    Kanas, Nick; Gushin, Vadim; Yusupova, Anna

    The analysis of space crews' communication with mission control (MC) is a standard operational procedure of the psychological support group at the Institute for Biomedical Problems, Russia. For more than 20 years it has been used for monitoring the behavioral health of Russian crewmembers in space. We apply quantitative speech content analysis to reveal relationship dynamics within the group and between the crew and MC. We suggest that the features of an individual's communicative style reflect the psychological and emotional status and individuality of the communicator, including his coping strategy, as fixed by the POMS. Moreover, the appearance of certain psychological complexities becomes apparent both in POMS profile changes and in communicative style changes. As a result of the validity check we arrived at a new objective method of dynamic psychological monitoring of crews. This method would not take any of the astronauts' time, would not need any on-board equipment, and at the same time is based on real performance in space, i.e. the astronauts' communication with MC.

  17. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    Science.gov (United States)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS), flying on the NOAA-15–18 and NOAA-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper, together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) dichotomous statistics are used to evaluate the capability of the method to identify rain and no-rain clouds; 2) accuracy statistics are applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfall measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams, where the 183-WSL
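
    The dichotomous skill scores quoted (POD, FAR, HK) come from a 2x2 rain/no-rain contingency table built against the radar reference. A minimal sketch, assuming a hypothetical 0.1 mm/h detection threshold:

```python
import numpy as np

def dichotomous_scores(retrieved_rain, radar_rain, threshold=0.1):
    """Categorical rain/no-rain skill scores against a radar reference.

    hits/misses/false alarms are counted at a rain-rate threshold (mm/h,
    illustrative choice); POD, FAR and the Hanssen-Kuipers (HK)
    discriminant are returned.
    """
    est = np.asarray(retrieved_rain) >= threshold
    obs = np.asarray(radar_rain) >= threshold
    hits = np.sum(est & obs)
    misses = np.sum(~est & obs)
    false_alarms = np.sum(est & ~obs)
    correct_negatives = np.sum(~est & ~obs)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    pofd = false_alarms / (false_alarms + correct_negatives)
    return pod, far, pod - pofd          # HK = POD - POFD

# Toy matched samples (mm/h): retrieved vs. radar
est = np.array([0.0, 0.5, 2.0, 0.0, 1.2, 0.0])
obs = np.array([0.2, 0.6, 1.5, 0.0, 0.0, 0.3])
print(dichotomous_scores(est, obs))
```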

  18. The SCEC Broadband Platform: A Collaborative Open-Source Software Package for Strong Ground Motion Simulation and Validation

    Science.gov (United States)

    Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.

    2014-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100 Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low- and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. Then, the BBP calculates a number of goodness-of-fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for the event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, and the ability to compare simulation results

  19. Assessing the capability of numerical methods to predict earthquake ground motion: the Euroseistest verification and validation project

    Science.gov (United States)

    Chaljub, E. O.; Bard, P.; Tsuno, S.; Kristek, J.; Moczo, P.; Franek, P.; Hollender, F.; Manakou, M.; Raptakis, D.; Pitilakis, K.

    2009-12-01

    During the last decades, an important effort has been dedicated to developing accurate and computationally efficient numerical methods to predict earthquake ground motion in heterogeneous 3D media. The progress in methods and the increasing capability of computers have made it technically feasible to calculate realistic seismograms for frequencies of interest in seismic design applications. In order to foster the use of numerical simulation in practical prediction, it is important to (1) evaluate the accuracy of current numerical methods when applied to realistic 3D applications where no reference solution exists (verification) and (2) quantify the agreement between recorded and numerically simulated earthquake ground motion (validation). Here we report the results of the Euroseistest verification and validation project, an ongoing international collaborative effort organized jointly by the Aristotle University of Thessaloniki, Greece, the Cashima research project (supported by the French nuclear agency, CEA, and the Laue-Langevin institute, ILL, Grenoble), and the Joseph Fourier University, Grenoble, France. The project involves more than 10 international teams from Europe, Japan and the USA. The teams employ the Finite Difference Method (FDM), the Finite Element Method (FEM), the Global Pseudospectral Method (GPSM), the Spectral Element Method (SEM) and the Discrete Element Method (DEM). The project makes use of a new detailed 3D model of the Mygdonian basin (about 5 km wide, 15 km long, sediments reaching about 400 m depth, surface S-wave velocity of 200 m/s). The prime target is to simulate 8 local earthquakes with magnitudes from 3 to 5. In the verification, numerical predictions for frequencies up to 4 Hz for a series of models with increasing structural and rheological complexity are analyzed and compared using quantitative time-frequency goodness-of-fit criteria. Predictions obtained by one FDM team and the SEM team are close and different from other predictions

  20. Validation of ACE-FTS measurements of CFC-11, CFC-12, and HCFC-22 using ground-based FTIR spectrometers

    Science.gov (United States)

    Kolonjari, F.; Walker, K. A.; Mahieu, E.; Batchelor, R. L.; Bernath, P. F.; Boone, C.; Conway, S. A.; Dan, L.; Griffin, D.; Harrett, A.; Kasai, Y.; Kagawa, A.; Lindenmaier, R.; Strong, K.; Whaley, C.

    2013-12-01

    Satellite datasets can be an effective global monitoring tool for long-lived compounds in the atmosphere. The Atmospheric Chemistry Experiment (ACE) is a mission on board the Canadian satellite SCISAT-1. The primary instrument on SCISAT-1 is a high-resolution infrared Fourier transform spectrometer (ACE-FTS) capable of measuring a range of gases, including key chlorofluorocarbon (CFC) and hydrochlorofluorocarbon (HCFC) species. These families of species are of interest because of their significant contribution to anthropogenic ozone depletion and to global warming. To assess the quality of data derived from satellite measurements, validation using other data sources is essential. Ground-based Fourier transform infrared (FTIR) spectrometers are particularly useful for this purpose. In this study, five FTIR spectrometers located at four sites around the world are used to validate the CFC-11 (CCl3F), CFC-12 (CCl2F2), and HCFC-22 (CHClF2) profiles retrieved from ACE-FTS measurements. These species are related because HCFC-22 was the primary replacement for CFC-11 and CFC-12 in refrigerant and propellant applications. The FTIR spectrometers used in this study record solar absorption spectra at Eureka (Canada), Jungfraujoch (Switzerland), Poker Flat (USA), and Toronto (Canada). The retrievals of CFC-11, CFC-12, and HCFC-22 are not standard products for many of these instruments, and as such, a harmonization of retrieval parameters between the sites has been conducted. The retrievals of these species from the FTIR spectra are sensitive from the surface to approximately 20 km, while the ACE-FTS profiles extend from approximately 6 to 30 km. For each site, partial column comparisons between coincident measurements of the three species and a validation of the observed trends will be discussed.

  1. Validation of ACE and OSIRIS ozone and NO2 measurements using ground-based instruments at 80° N

    Directory of Open Access Journals (Sweden)

    A. Pazmino

    2012-05-01

    Full Text Available The Optical Spectrograph and Infra-Red Imager System (OSIRIS) and the Atmospheric Chemistry Experiment (ACE) have been taking measurements from space since 2001 and 2003, respectively. This paper presents intercomparisons between ozone and NO2 measured by the ACE and OSIRIS satellite instruments and by ground-based instruments at the Polar Environment Atmospheric Research Laboratory (PEARL), which is located at Eureka, Canada (80° N, 86° W) and is operated by the Canadian Network for the Detection of Atmospheric Change (CANDAC). The ground-based instruments included in this study are four zenith-sky differential optical absorption spectroscopy (DOAS) instruments, one Bruker Fourier transform infrared spectrometer (FTIR) and four Brewer spectrophotometers. Ozone total columns measured by the DOAS instruments were retrieved using new Network for the Detection of Atmospheric Composition Change (NDACC) guidelines and agree to within 3.2%. The DOAS ozone columns agree with the Brewer spectrophotometers with mean relative differences that are smaller than 1.5%. This suggests that for these instruments the new NDACC data guidelines were successful in producing a homogeneous and accurate ozone dataset at 80° N. Satellite 14–52 km ozone and 17–40 km NO2 partial columns within 500 km of PEARL were calculated for ACE-FTS Version 2.2 (v2.2 plus updates), ACE-FTS v3.0, ACE-MAESTRO (Measurements of Aerosol Extinction in the Stratosphere and Troposphere Retrieved by Occultation) v1.2 and OSIRIS SaskMART v5.0x ozone and Optimal Estimation v3.0 NO2 data products. The new ACE-FTS v3.0 and the validated ACE-FTS v2.2 partial columns are nearly identical, with mean relative differences of 0.0 ± 0.2% and −0.2 ± 0.1% for v2.2 minus v3.0 ozone and NO2, respectively. Ozone columns were constructed from 14–52 km satellite and 0–14 km ozonesonde partial columns and compared with the ground-based total column measurements. The satellite-plus-sonde measurements agree

  2. Validation of TRMM Precipitation Radar Through Comparison of its Multi-Year Measurements to Ground-Based Radar

    Science.gov (United States)

    Liao, Liang; Meneghini, Robert

    2010-01-01

    A procedure to accurately resample spaceborne and ground-based radar data is described and then applied to the measurements taken from the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) and the ground-based Weather Surveillance Radar-1988 Doppler (WSR-88D or WSR) for the validation of the PR measurements and estimates. Through comparisons with the well-calibrated, non-attenuated WSR at Melbourne, Florida for the period 1998-2007, the calibration of the PR aboard the TRMM satellite is checked using measurements near the storm top. Analysis of the results indicates that the PR, after taking into account differences in radar reflectivity factors between the PR and WSR, has a small positive bias of 0.8 dB relative to the WSR, implying the soundness of the PR calibration in view of the uncertainties involved in the comparisons. Comparisons between the PR and WSR reflectivities are also made near the surface for evaluation of the attenuation-correction procedures used in the PR algorithms. It is found that the PR attenuation is accurately corrected in stratiform rain but is underestimated in convective rain, particularly in heavy rain. Tests of the PR estimates of rainfall rate are conducted through comparisons in the overlap area between the TRMM overpass and the WSR scan. Analyses of the data are made both on a conditional basis, in which the instantaneous rain rates are compared only at those pixels where both the PR and WSR detect rain, and on an unconditional basis, in which the area-averaged rain rates are estimated independently for the PR and WSR. Results of the conditional rain comparisons show that the PR-derived rain is about 9% greater and 19% less than the WSR estimates for stratiform and convective storms, respectively. Overall, the PR tends to underestimate the conditional mean rain rate by 8% for all rain categories, a finding that conforms to the results of the area-averaged rain (unconditional) comparisons.
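
    Once both radars are resampled to a common grid, the storm-top calibration check reduces to a mean dB difference over matched samples. A minimal sketch (the detection threshold and function are illustrative, not the paper's exact procedure):

```python
import numpy as np

def reflectivity_bias_db(z_pr_dbz, z_wsr_dbz, min_dbz=18.0):
    """Mean PR-minus-WSR reflectivity bias on matched, resampled samples.

    Pairs below a hypothetical detection threshold (18 dBZ) are excluded;
    the calibration offset is then the mean dB difference.
    """
    pr = np.asarray(z_pr_dbz, float)
    wsr = np.asarray(z_wsr_dbz, float)
    valid = (pr >= min_dbz) & (wsr >= min_dbz)
    return float(np.mean(pr[valid] - wsr[valid]))

# Toy matched storm-top reflectivities (dBZ)
print(reflectivity_bias_db([25.1, 30.4, 28.0], [24.0, 29.8, 27.5]))
```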

  3. Assessing the Relative Performance of Microwave-Based Satellite Rain Rate Retrievals Using TRMM Ground Validation Data

    Science.gov (United States)

    Wolff, David B.; Fisher, Brad L.

    2011-01-01

    Space-borne microwave sensors provide critical rain information used in several global multi-satellite rain products, which in turn are used for a variety of important studies, including landslide forecasting, flash flood warning, data assimilation, climate studies, and validation of model forecasts of precipitation. This study employs four years (2003-2006) of satellite data to assess the relative performance and skill of SSM/I (F13, F14 and F15), AMSU-B (N15, N16 and N17), AMSR-E (Aqua) and the TRMM Microwave Imager (TMI) in estimating surface rainfall, based on direct instantaneous comparisons with ground-based rain estimates from Tropical Rainfall Measuring Mission (TRMM) Ground Validation (GV) sites at Kwajalein, Republic of the Marshall Islands (KWAJ) and Melbourne, Florida (MELB). The relative performance of each of these satellite estimates is examined via comparisons with space- and time-coincident GV radar-based rain rate estimates. Because the underlying surface terrain is known to affect the relative performance of the satellite algorithms, the data for MELB were further stratified into ocean, land and coast categories using a 0.25° terrain mask. Of all the satellite estimates compared in this study, TMI and AMSR-E exhibited considerably higher correlations and skills in estimating/observing surface precipitation. While SSM/I and AMSU-B exhibited lower correlations and skills for each of the different terrain categories, the SSM/I absolute biases trended slightly lower than AMSR-E over ocean, where the observations from both emission and scattering channels were used in the retrievals. AMSU-B exhibited the least skill relative to GV in all of the relevant statistical categories, and an anomalous spike was observed in the probability distribution functions near 1.0 mm/hr. This statistical artifact appears to be related to attempts by algorithm developers to include some lighter rain rates, not easily detectable by its scatter-only frequencies. AMSU

  4. Metrology of ground-based satellite validation: co-location mismatch and smoothing issues of total ozone comparisons

    Directory of Open Access Journals (Sweden)

    T. Verhoelst

    2015-12-01

    Full Text Available Comparisons with ground-based correlative measurements constitute a key component in the validation of satellite data on atmospheric composition. The error budget of these comparisons contains not only the measurement errors but also several terms related to differences in sampling and smoothing of the inhomogeneous and variable atmospheric field. A versatile system for Observing System Simulation Experiments (OSSEs), named OSSSMOSE, is used here to quantify these terms. Based on the application of pragmatic observation operators onto high-resolution atmospheric fields, it allows a simulation of each individual measurement, and consequently, also of the differences to be expected from spatial and temporal field variations between both measurements making up a comparison pair. As a topical case study, the system is used to evaluate the error budget of total ozone column (TOC) comparisons between GOME-type direct fitting (GODFITv3) satellite retrievals from GOME/ERS2, SCIAMACHY/Envisat, and GOME-2/MetOp-A, and ground-based direct-sun and zenith-sky reference measurements such as those from Dobsons, Brewers, and zenith-scattered light (ZSL-)DOAS instruments, respectively. In particular, the focus is placed on the GODFITv3 reprocessed GOME-2A data record vs. the ground-based instruments contributing to the Network for the Detection of Atmospheric Composition Change (NDACC). The simulations are found to reproduce the actual measurements almost to within the measurement uncertainties, confirming that the OSSE approach and its technical implementation are appropriate. This work reveals that many features of the comparison spread and median difference can be understood as due to metrological differences, even when using strict co-location criteria. In particular, sampling difference errors exceed measurement uncertainties regularly at most mid- and high-latitude stations, with values up to 10 % and more in extreme cases. Smoothing difference errors only

  5. Ground based measurements on reflectance towards validating atmospheric correction algorithms on IRS-P6 AWiFS data

    Science.gov (United States)

    Rani Sharma, Anu; Kharol, Shailesh Kumar; Kvs, Badarinath; Roy, P. S.

    In Earth observation, the atmosphere has a non-negligible influence on the visible and infrared radiation, strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosol generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes a loss of brightness in the scene as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction, which serves to remove the effects of molecular and aerosol scattering. In the present study, we have implemented a fast atmospheric correction algorithm for IRS-P6 AWiFS satellite data which can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and a simplified use of the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm has been tested on different IRS-P6 AWiFS false color composites (FCCs) covering the ICRISAT Farm, Patancheru, Hyderabad, India under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover, i.e. red soil, chickpea, groundnut and pigeon pea crops, were conducted to validate the algorithm, and a very good match was found between surface reflectance and atmospherically corrected reflectance for all spectral bands. Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with
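
    A LUT-based correction of this kind typically inverts the 6S-style relation rho_toa = rho_path + T * rho_s / (1 - s * rho_s), with path reflectance, total transmittance, and spherical albedo tabulated against aerosol optical depth (AOD). A minimal sketch; the LUT values below are hypothetical placeholders, whereas in practice they would be precomputed per band and geometry with the 6S code:

```python
import numpy as np

def surface_reflectance(rho_toa, aod, lut):
    """Invert TOA reflectance to surface reflectance with 6S-style LUTs.

    lut maps AOD to the three correction terms rho_path, trans (total
    two-way transmittance) and salb (spherical albedo), so that
    rho_toa = rho_path + trans * rho_s / (1 - salb * rho_s).
    """
    rho_p = np.interp(aod, lut["aod"], lut["rho_path"])
    trans = np.interp(aod, lut["aod"], lut["trans"])
    salb = np.interp(aod, lut["aod"], lut["salb"])
    y = (rho_toa - rho_p) / trans
    return y / (1.0 + salb * y)

# Hypothetical single-band LUT and a pixel with TOA reflectance 0.18
lut = {"aod": [0.05, 0.2, 0.5], "rho_path": [0.02, 0.05, 0.10],
       "trans": [0.90, 0.82, 0.70], "salb": [0.08, 0.12, 0.18]}
print(surface_reflectance(0.18, 0.3, lut))
```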

  6. Metrology of ground-based satellite validation: co-location mismatch and smoothing issues of total ozone comparisons

    Science.gov (United States)

    Verhoelst, T.; Granville, J.; Hendrick, F.; Köhler, U.; Lerot, C.; Pommereau, J.-P.; Redondas, A.; Van Roozendael, M.; Lambert, J.-C.

    2015-12-01

    Comparisons with ground-based correlative measurements constitute a key component in the validation of satellite data on atmospheric composition. The error budget of these comparisons contains not only the measurement errors but also several terms related to differences in sampling and smoothing of the inhomogeneous and variable atmospheric field. A versatile system for Observing System Simulation Experiments (OSSEs), named OSSSMOSE, is used here to quantify these terms. Based on the application of pragmatic observation operators onto high-resolution atmospheric fields, it allows a simulation of each individual measurement, and consequently, also of the differences to be expected from spatial and temporal field variations between both measurements making up a comparison pair. As a topical case study, the system is used to evaluate the error budget of total ozone column (TOC) comparisons between GOME-type direct fitting (GODFITv3) satellite retrievals from GOME/ERS2, SCIAMACHY/Envisat, and GOME-2/MetOp-A, and ground-based direct-sun and zenith-sky reference measurements such as those from Dobsons, Brewers, and zenith-scattered light (ZSL-)DOAS instruments, respectively. In particular, the focus is placed on the GODFITv3 reprocessed GOME-2A data record vs. the ground-based instruments contributing to the Network for the Detection of Atmospheric Composition Change (NDACC). The simulations are found to reproduce the actual measurements almost to within the measurement uncertainties, confirming that the OSSE approach and its technical implementation are appropriate. This work reveals that many features of the comparison spread and median difference can be understood as due to metrological differences, even when using strict co-location criteria. In particular, sampling difference errors exceed measurement uncertainties regularly at most mid- and high-latitude stations, with values up to 10 % and more in extreme cases. Smoothing difference errors only play a role in the

  7. Quantitative evaluation of six graph based semi-automatic liver tumor segmentation techniques using multiple sets of reference segmentation

    Science.gov (United States)

    Su, Zihua; Deng, Xiang; Chefd'hotel, Christophe; Grady, Leo; Fei, Jun; Zheng, Dong; Chen, Ning; Xu, Xiaodong

    2011-03-01

    Graph based semi-automatic tumor segmentation techniques have demonstrated great potential in efficiently measuring tumor size from CT images. Comprehensive and quantitative validation is essential to ensure the efficacy of graph based tumor segmentation techniques in clinical applications. In this paper, we present a quantitative validation study of six graph based 3D semi-automatic tumor segmentation techniques using multiple sets of expert segmentation. The six segmentation techniques are the Random Walk (RW), Watershed based Random Walk (WRW), LazySnapping (LS), GraphCut (GHC), GrabCut (GBC), and GrowCut (GWC) algorithms. The validation was conducted using clinical CT data of 29 liver tumors and four sets of expert segmentation. The performance of the six algorithms was evaluated using accuracy and reproducibility. The accuracy was quantified using the Normalized Probabilistic Rand Index (NPRI), which takes into account the variation of multiple expert segmentations. The reproducibility was evaluated by the change of the NPRI over 10 different sets of user initializations. Our results from the accuracy test demonstrated that RW (0.63) showed the highest NPRI value, compared to WRW (0.61), GWC (0.60), GHC (0.58), LS (0.57), and GBC (0.27). The results from the reproducibility test indicated that GBC is more sensitive to user initialization than the other five algorithms. Compared to previous tumor segmentation validation studies using one set of reference segmentation, our evaluation methods use multiple sets of expert segmentation to address the inter- and intra-rater variability issue in ground truth annotation, and provide a quantitative assessment for comparing different segmentation algorithms.
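
    The NPRI builds on the classical Rand index, which scores the pairwise labeling agreement of two segmentations. The sketch below shows only that underlying pairwise-agreement idea, not the probabilistic normalization over multiple experts used in the study:

```python
import numpy as np
from scipy.special import comb

def rand_index(seg_a, seg_b):
    """Plain Rand index between two label images (1.0 = identical)."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # Contingency table between the two labelings
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    cont = np.zeros((ia.max() + 1, ib.max() + 1), dtype=np.int64)
    np.add.at(cont, (ia, ib), 1)
    same_both = comb(cont, 2).sum()            # pairs co-labeled in both
    same_a = comb(cont.sum(axis=1), 2).sum()   # pairs co-labeled in A
    same_b = comb(cont.sum(axis=0), 2).sum()   # pairs co-labeled in B
    total = comb(n, 2)
    return float((total + 2 * same_both - same_a - same_b) / total)

# Toy label maps: identical except for one voxel
x = np.array([[1, 1, 2], [1, 2, 2]])
y = np.array([[1, 1, 2], [1, 2, 1]])
print(rand_index(x, y))
```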

  8. Rainfall measurements from cellular networks microwave links : an alternative ground reference for satellite validation and hydrology in Africa .

    Science.gov (United States)

    Gosset, Marielle; cazenave, frederic; Zougmore, françois; Doumounia, Ali; kacou, Modeste

    2015-04-01

    In many parts of the tropics, ground-based gauge networks are sparse and often degrading, and accessing these data for monitoring rainfall or for validating satellite products is sometimes difficult. Here, an alternative rainfall measurement technique is proposed and tested in West Africa, based on using commercial microwave links from cellular telephone networks to detect and quantify rainfall. Rainfall monitoring based on commercial terrestrial microwave links has been tested for the first time in Burkina Faso, in the Sahel. The rainfall regime is characterized by intense rainfall brought by mesoscale convective systems (MCSs) generated by deep organized convection. The region is subject to drought as well as dramatic floods associated with the intense rainfall provided by a few MCSs. The hydrometeorological risk is increasing and needs to be monitored. In collaboration with the national cellular phone operator, Telecel Faso, the attenuation on 29 km long microwave links operating at 7 GHz was monitored at a 1 s sampling rate for the 2012 monsoon season. The time series of attenuation is transformed into rain rates and compared with rain gauge data. The method is successful in quantifying rainfall: 95% of the rainy days are detected. The correlation with the daily rain gauge series is 0.8 and the seasonal bias is 5%. The correlation at the 5 min time step within each event is also high. We will present the quantitative results, discuss the uncertainties, and compare the time series and the 2D maps with those derived from a polarimetric radar. The results demonstrate the potential interest of exploiting national and regional wireless telecommunication networks to provide rainfall maps for various applications: urban hydrology, agro-hydrological risk monitoring, satellite validation, and development of combined rainfall products. We will also present the outcome of the first international Rain Cell Africa workshop, held in Ouagadougou in early 2015.
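
    Converting link attenuation to rain rate typically relies on the k-R power law, k = a * R**b, applied to the rain-induced specific attenuation along the path. A minimal sketch; the a, b coefficients are illustrative (ITU-R P.838-style values depend on frequency and polarization) and are not the study's calibrated ones:

```python
import numpy as np

def rain_rate_from_link(attenuation_db, baseline_db, length_km,
                        a=0.00265, b=1.31):
    """Path-averaged rain rate (mm/h) from microwave-link attenuation.

    The rain-induced specific attenuation k = (A - A_dry) / L (dB/km) is
    inverted through the power law k = a * R**b. The default a, b are
    illustrative placeholders for ~7 GHz, not calibrated constants.
    """
    k = np.maximum(np.asarray(attenuation_db) - baseline_db, 0.0) / length_km
    return (k / a) ** (1.0 / b)

# 3 dB of rain-induced attenuation on a 29 km link -> ~16 mm/h
print(rain_rate_from_link(8.0, baseline_db=5.0, length_km=29.0))
```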

  9. Segmentation of Color Images Based on Different Segmentation Techniques

    Directory of Open Access Journals (Sweden)

    Purnashti Bhosale

    2013-03-01

    Full Text Available In this paper, we propose a color image segmentation algorithm based on different segmentation techniques. We recognize background objects such as the sky, ground, and trees based on color and texture information, using various methods of segmentation. Segmentation techniques using different threshold methods, such as global and local techniques, are studied and compared with one another so as to choose the best technique for threshold segmentation. Further segmentation is done by using a clustering method and a graph cut method to improve the results of segmentation.

  10. Validation of the Cooray‐Rubinstein (C‐R) formula for a rough ground surface by using three‐dimensional (3‐D) FDTD

    National Research Council Canada - National Science Library

    Li, Dongshuai; Zhang, Qilin; Liu, Tao; Wang, Zhenhui

    2013-01-01

    In this paper, we have extended the Cooray-Rubinstein (C-R) approximate formula to a fractal rough ground surface and then validated its accuracy by using the three-dimensional (3-D) finite-difference time-domain (FDTD...

  11. Validation of Atmosphere/Ionosphere Signals Associated with Major Earthquakes by Multi-Instrument Space-Borne and Ground Observations

    Science.gov (United States)

    Ouzounov, Dimitar; Pulinets, Sergey; Hattori, Katsumi; Parrot, Michel; Liu, J. Y.; Yang, T. F.; Arellano-Baeza, Alonso; Kafatos, M.; Taylor, Patrick

    2012-01-01

    regions of the atmosphere and the modifications, by dc electric fields, in the ionosphere-atmosphere electric circuit. We retrospectively analyzed temporal and spatial variations of four different physical parameters (gas/radon counting rate, lineament changes, long-wave radiation transitions, and ionospheric electron density/plasma variations) characterizing the state of lithosphere/atmosphere coupling several days before the onset of the earthquakes. The validation process consists of two phases: A. case studies of seven recent major earthquakes: Japan (M9.0, 2011), China (M7.9, 2008), Italy (M6.3, 2009), Samoa (M7, 2009), Haiti (M7.0, 2010) and Chile (M8.8, 2010); and B. a continuous retrospective analysis performed over two different regions with high seismicity, Taiwan and Japan, for 2003-2009. Satellite, ground surface, and troposphere data were obtained from Terra/ASTER, Aqua/AIRS and POES, and ionospheric variations from DEMETER and COSMIC-1 data. Radon and GPS/TEC were obtained from monitoring sites in Taiwan, Japan and Italy and from global ionosphere maps (GIM), respectively. Our analysis of ground and satellite data during the occurrence of 7 global earthquakes has shown the presence of anomalies in the atmosphere. Our results for the Tohoku M9.0 earthquake show that on March 7th, 2011 (4 days before the main shock and 1 day before the M7.2 foreshock of March 8, 2011) a rapid increase of emitted infrared radiation was observed in the satellite data and an anomaly developed near the epicenter. The GPS/TEC data indicate an increase and variation in electron density reaching a maximum value on March 8. From March 3 to 11 a large increase in electron concentration was recorded at all four Japanese ground-based ionosondes, which returned to normal after the main earthquake. A similar approach for analyzing atmospheric and ionospheric parameters has been applied for China (M7.9, 2008), Italy (M6.3, 2009), Samoa (M7, 2009), Haiti (M7.0, 2010) and Chile (M8.8, 2010

  12. Development and Experimental Validation of a TRNSYS Dynamic Tool for Design and Energy Optimization of Ground Source Heat Pump Systems

    Directory of Open Access Journals (Sweden)

    Félix Ruiz-Calvo

    2017-09-01

    Full Text Available Ground source heat pump (GSHP) systems represent an efficient technology for renewable heating and cooling in buildings. To optimize not only the design but also the operation of the system, a complete dynamic model becomes a highly useful tool, since it allows testing any design modifications and different optimization strategies without actually implementing them at the experimental facility. Usually, this type of system presents strongly dynamic operating conditions. Therefore, the model should be able to predict not only the steady-state behavior of the system but also the short-term response. This paper presents a complete GSHP system model based on an experimental facility located at Universitat Politècnica de València. The installation was constructed in the framework of a European collaborative project entitled GeoCool. The model, developed in TRNSYS, has been validated against experimental data, and it accurately predicts both the short- and long-term behavior of the system.

  13. Validating MODIS and Sentinel-2 NDVI Products at a Temperate Deciduous Forest Site Using Two Independent Ground-Based Sensors.

    Science.gov (United States)

    Lange, Maximilian; Dechant, Benjamin; Rebmann, Corinna; Vohland, Michael; Cuntz, Matthias; Doktor, Daniel

    2017-08-11

    Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure.
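    As a reading aid, the NDVI itself and the style of in situ vs. satellite comparison reported above can be sketched as follows (all numbers are synthetic stand-ins, not the Sentinel-2/MODIS record):

    ```python
    # Compute NDVI from red/NIR reflectances and compare two NDVI time series.
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red)

    days = np.arange(120)
    # Hypothetical ground-sensor NDVI with a seasonal cycle, plus a noisy
    # "satellite" counterpart of the same quantity.
    ground = ndvi(nir=0.45 + 0.15 * np.sin(days / 20),
                  red=0.08 + 0.02 * np.random.rand(days.size))
    satellite = ground + np.random.normal(0.0, 0.02, days.size)

    r = np.corrcoef(ground, satellite)[0, 1]
    slope, intercept = np.polyfit(ground, satellite, 1)
    print(f"R^2 = {r**2:.3f}, slope = {slope:.2f}, intercept = {intercept:.3f}")
    ```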

  14. Segmentation and classification models validation area mapping of peat lands as initial value of Fuzzy Kohonen Clustering Network

    Science.gov (United States)

    Erwin; Saparudin; Fachrurrozi, Muhammad

    2017-04-01

    Ogan Komering Ilir (OKI) is located in the east of South Sumatra Province, at 2°30′-4°15′ S latitude and 104°20′-106°00′ E longitude. Digital images of the land were captured by the Landsat 8 satellite, path 124/row 062. Landsat 8 is a new-generation satellite carrying two sensors, the Operational Land Imager (OLI) and the Thermal Infra-Red Sensor (TIRS). In the pre-processing step, geometric correction, radiometric correction, and cropping of the digital images yield georeferenced imagery. Classification uses the maximum likelihood estimator algorithm. In the segmentation and classification process, grey values are spread evenly after applying a histogram technique. The entropy of the classified image is 7.42, the highest among the results, while the smallest entropy value, for the corrected mapping, is 6.39; all of the results show sufficiently high entropy values. The peatland classification achieves an overall accuracy of 94.0012% and an overall kappa value of 0.9230, so the classification result can be considered reliable.
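    For readers unfamiliar with the two figures of merit quoted above, the following sketch computes a Shannon entropy and an overall kappa on invented data (neither the image nor the confusion matrix is from the study):

    ```python
    # Shannon entropy of an image histogram and Cohen's kappa from a
    # confusion matrix (rows = reference classes, columns = classified).
    import numpy as np

    def shannon_entropy(img, bins=256):
        hist, _ = np.histogram(img, bins=bins)
        p = hist[hist > 0] / hist.sum()
        return -np.sum(p * np.log2(p))

    def overall_kappa(confusion):
        c = np.asarray(confusion, dtype=float)
        n = c.sum()
        po = np.trace(c) / n                               # observed agreement
        pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2  # chance agreement
        return (po - pe) / (1 - pe)

    cm = [[50, 2, 1],
          [3, 40, 2],
          [0, 1, 30]]   # hypothetical peat/non-peat/water counts
    print(overall_kappa(cm))  # values near 1 indicate strong agreement
    ```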

  15. Validation of ENVISAT/SCIAMACHY columnar methane by solar FTIR spectrometry at the Ground-Truthing Station Zugspitze

    Directory of Open Access Journals (Sweden)

    R. Sussmann

    2005-04-01

    Full Text Available Methane total-vertical column retrievals from ground-based solar FTIR measurements at the Permanent Ground-Truthing Station Zugspitze (47.42° N, 10.98° E, 2964 m a.s.l.), Germany, are used to validate column-averaged methane retrieved from ENVISAT/SCIAMACHY spectra by WFM-DOAS (WFMD) versions 0.4 and 0.41 for 153 days in 2003. Smoothing errors are estimated to be below 0.10% for FTIR and 0.14% for SCIAMACHY-WFMD retrievals and can be neglected for the assessment of observed bias and day-to-day scatter. In order to minimize the altitude-difference effect, dry-air column-averaged mixing ratios (XCH4) have been utilized. From the FTIR time series of XCH4 an atmospheric day-to-day variability of 1% was found, and a sinusoidal annual cycle with a ≈1.6% amplitude. To obtain the WFMD bias, a polynomial fitted to the FTIR series was used as a reference. The result is WFMD v0.4/FTIR=1.008±0.019 and WFMD v0.41/FTIR=1.058±0.008. WFMD v0.41 was significantly improved by a time-dependent bias correction. It can still not capture the natural day-to-day variability, i.e., the standard deviation calculated from the daily-mean values is 2.4% using averages within a 2000-km radius, and 2.7% for a 1000-km radius. These numbers are dominated by a residual time-dependent bias on the order of 3%/month. The latter can be reduced, e.g., from 2.4% to 1.6%, as shown by an empirical time-dependent bias correction. Standard deviations of the daily means, calculated from the individual measurements of each day, exclude time-dependent biases, thus showing the potential precision of WFMD daily means, i.e., 0.3% for a 2000-km selection radius and 0.6% for a 1000-km selection radius. Therefore, the natural variability could be captured under the prerequisite of further advanced time-dependent bias corrections, or the use of other channels where the icing issue is less prominent.
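    The bias estimation described above — a polynomial fitted to the FTIR series as the reference, then satellite/reference ratios — can be sketched like this (synthetic numbers, not the Zugspitze data):

    ```python
    # Fit a smooth reference to the ground-based series and express the
    # satellite bias as the mean and scatter of the daily ratio.
    import numpy as np

    day = np.arange(153)
    ftir = 1770 + 28 * np.sin(2 * np.pi * day / 365) \
        + np.random.normal(0, 6, day.size)                   # XCH4, ppb
    wfmd = 1.008 * ftir + np.random.normal(0, 15, day.size)  # hypothetical satellite XCH4

    reference = np.polyval(np.polyfit(day, ftir, 3), day)    # smooth FTIR reference
    ratio = wfmd / reference
    print(f"bias = {ratio.mean():.3f} +/- {ratio.std(ddof=1):.3f}")
    ```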

  16. Metrology of ground-based satellite validation: co-location mismatch and smoothing issues of total ozone comparisons

    Directory of Open Access Journals (Sweden)

    T. Verhoelst

    2015-08-01

    Full Text Available Comparisons with ground-based correlative measurements constitute a key component in the validation of satellite data on atmospheric composition. The error budget of these comparisons contains not only the measurement uncertainties but also several terms related to differences in sampling and smoothing of the inhomogeneous and variable atmospheric field. A versatile system for Observing System Simulation Experiments (OSSEs), named OSSSMOSE, is used here to quantify these terms. Based on the application of pragmatic observation operators onto high-resolution atmospheric fields, it allows a simulation of each individual measurement, and consequently also of the differences to be expected from spatial and temporal field variations between both measurements making up a comparison pair. As a topical case study, the system is used to evaluate the error budget of total ozone column (TOC) comparisons between, on the one hand, GOME-type direct fitting (GODFITv3) satellite retrievals from GOME/ERS2, SCIAMACHY/Envisat, and GOME-2/MetOp-A, and on the other hand direct-sun and zenith-sky reference measurements such as from Dobson, Brewer, and zenith-scattered-light (ZSL-DOAS) instruments, respectively. In particular, the focus is placed on the GODFITv3 reprocessed GOME-2A data record vs. the ground-based instruments contributing to the Network for the Detection of Atmospheric Composition Change (NDACC). The simulations are found to reproduce the actual measurements almost to within the measurement uncertainties, confirming that the OSSE approach and its technical implementation are appropriate. This work reveals that many features of the comparison spread and median difference can be understood as due to metrological differences, even when using strict co-location criteria. In particular, sampling difference errors exceed measurement uncertainties regularly at most mid- and high-latitude stations, with values up to 10 % and more in extreme cases. Smoothing

  17. Automatic segmentation of human cortical layer-complexes and architectural areas using diffusion MRI and its validation

    Directory of Open Access Journals (Sweden)

    Matteo Bastiani

    2016-11-01

    Full Text Available Recently, several magnetic resonance imaging contrast mechanisms have been shown to distinguish cortical substructure corresponding to selected cortical layers. Here, we investigate cortical layer and area differentiation by automated unsupervised clustering of high-resolution diffusion MRI data. Several groups of adjacent layers could be distinguished in human primary motor and premotor cortex. We then used the signature of diffusion MRI signals along cortical depth as a criterion to detect area boundaries and find borders at which the signature changes abruptly. We validate our clustering results by histological analysis of the same tissue. These results confirm earlier studies showing that diffusion MRI can probe layer-specific intracortical fiber organization and, moreover, suggest that it contains enough information to automatically classify architecturally distinct cortical areas. We discuss the strengths and weaknesses of the automatic clustering approach and its appeal for MR-based cortical histology.

  18. OMI/Aura UV product validation using NILU-UV ground-based measurements in Thessaloniki, Greece

    Science.gov (United States)

    Zempila, Melina-Maria; Koukouli, Maria-Elissavet; Bais, Alkiviadis; Fountoulakis, Ilias; Arola, Antti; Kouremeti, Natalia; Balis, Dimitris

    2016-09-01

    The main aim of this work is to evaluate the NASA EOS AURA Ozone Monitoring Instrument (OMI) UV irradiance estimates through ground-based measurements performed by a NILU-UV multichannel radiometer (NILU-UV) operating in Thessaloniki, Greece, for the time period between January 2005 and December 2014. NILU-UV multi-filter radiometers provide measurements in 5 UV wavelength bands with a full width at half maximum (FWHM) of approximately 10 nm and a temporal resolution of 1 min. An additional channel measuring Photosynthetically Active Radiation (PAR) is also incorporated into the instrument and is used for the stringent characterization of cloud-free instances. The OMI instrument estimates solar UV irradiances at four wavelengths close to those of the NILU-UV in Thessaloniki. Clear- and all-sky overpass-time, as well as solar local-noon time, UV estimates are provided by the NASA Aura Data Validation Center. Spectra measured by a collocated MKIII Brewer spectrophotometer with serial number 086 (Brewer #086) were utilized for the whole period (2005-2014) in order to estimate the NILU-UV irradiances at the OMI wavelengths and thereby provide a direct comparison and validation of the OMI UV estimates against the NILU-UV measurements. For the nominal comparisons, using un-flagged OMI data within a 50 km radius from Thessaloniki, the linear determination coefficient, R2, ranges between 0.91 and 0.97 for 305 nm and between 0.75 and 0.92 for 380 nm, depending on the choice of overpass or local-noon time data and the cloudiness flags. The best agreement is found for the clear-sky overpass-time comparisons as well as for both the PAR- and satellite-algorithm-deduced clear-sky overpass and local-noon comparisons at all wavelengths. The OMI irradiances were found to overestimate the NILU-UV observations in Thessaloniki by between ∼4.5% and 13.5% for 305 nm and between ∼1.5% and ∼10.0% for 310 nm, depending on the choice of time [overpass vs local noon

  19. Design of a white-light interferometric measuring system for co-phasing the primary mirror segments of the next generation of ground-based telescope

    Science.gov (United States)

    Song, Helun; Xian, Hao; Jiang, Wenhan; Rao, Changhui; Wang, Shengqian

    2007-12-01

    With the increase of telescope size, the manufacture of monolithic primaries becomes increasingly difficult. Instead, the use of segmented mirrors, where many individual mirrors (the segments) work together to provide good image quality and an aperture equivalent to that of a large monolithic mirror, is considered a more appropriate strategy. But with the introduction of large telescope mirrors composed of many individual segments, the problem of ensuring a smooth, continuous mirror surface (co-phased mirrors) becomes critical. One of the main problems arising in the co-phasing of a segmented-mirror telescope is the measurement of the vertical displacements between the individual segments (piston errors). For such mirrors to exhibit diffraction-limited performance, a phasing process is required to guarantee that the segments are positioned with an accuracy of a fraction of a wavelength of the incoming light. The measurements become especially complicated when the piston error is on the order of fractions of a wavelength. To meet the performance requirements, a novel method for phasing segmented-mirror optical systems is described. The phasing method is based on a high-aperture Michelson interferometer. The use of an interferometric technique allows the measurement of segment misalignment during daytime with high accuracy, which is a major design guideline. The innovation introduced in the optical design of the interferometer is the simultaneous use of both monochromatic and white-light sources, which allows the system to measure the piston error with an uncertainty of 6 nm over a 50 µm range. A description of the expected monochromatic and white-light interferograms and the feasibility of the phasing method are presented here.

  20. Multi-centre validation of an automatic algorithm for fast 4D myocardial segmentation in cine CMR datasets.

    Science.gov (United States)

    Queirós, Sandro; Barbosa, Daniel; Engvall, Jan; Ebbers, Tino; Nagel, Eike; Sarvari, Sebastian I; Claus, Piet; Fonseca, Jaime C; Vilaça, João L; D'hooge, Jan

    2016-10-01

    Quantitative analysis of cine cardiac magnetic resonance (CMR) images for the assessment of global left ventricular morphology and function remains a routine task in clinical cardiology practice. To date, this process requires user interaction and therefore prolongs the examination (i.e. cost) and introduces observer variability. In this study, we sought to validate the feasibility, accuracy, and time efficiency of a novel framework for automatic quantification of left ventricular global function in a clinical setting. Analyses of 318 CMR studies, acquired at the enrolment of patients in a multi-centre imaging trial (DOPPLER-CIP), were performed automatically, as well as manually. For comparative purposes, intra- and inter-observer variability was also assessed in a subset of patients. The extracted morphological and functional parameters were compared between both analyses, and time efficiency was evaluated. The automatic analysis was feasible in 95% of the cases (302/318) and showed good agreement with manually derived reference measurements, with small biases and narrow limits of agreement, particularly for end-diastolic volume (-4.08 ± 8.98 mL), end-systolic volume (1.18 ± 9.74 mL), and ejection fraction (-1.53 ± 4.93%). These results were comparable with the agreement between two independent observers. A complete automatic analysis took 5.61 ± 1.22 s, nearly 150 times faster than manual contouring (14 ± 2 min, P < 0.001), supporting fully automatic analysis of cine CMR images.
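    The agreement figures quoted above (bias ± spread per parameter) follow the usual Bland-Altman recipe, sketched here on invented volumes:

    ```python
    # Bias and 95% limits of agreement between automatic and manual values.
    import numpy as np

    manual = np.array([150.0, 120.0, 180.0, 95.0, 160.0])  # EDV in mL (hypothetical)
    auto = np.array([146.0, 118.0, 174.0, 93.0, 155.0])

    diff = auto - manual
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    print(f"bias = {bias:.2f} mL, limits of agreement = "
          f"[{bias - half_width:.2f}, {bias + half_width:.2f}] mL")
    ```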

  1. Validation of mathematical models for Salmonella growth in raw ground beef under dynamic temperature conditions representing loss of refrigeration.

    Science.gov (United States)

    McConnell, Jennifer A; Schaffner, Donald W

    2014-07-01

    Temperature is a primary factor in controlling the growth of microorganisms in food. The current U.S. Food and Drug Administration Model Food Code guidelines state that food can be kept out of temperature control for up to 4 h without qualifiers, or up to 6 h, if the food product starts at an initial 41 °F (5 °C) temperature and does not exceed 70 °F (21 °C) at 6 h. This project validates existing ComBase computer models for Salmonella growth under changing temperature conditions, using raw ground beef as a model system. A cocktail of Salmonella serovars isolated from different meat products (Salmonella Copenhagen, Salmonella Montevideo, Salmonella Typhimurium, Salmonella Saintpaul, and Salmonella Heidelberg) was made rifampin resistant and used for all experiments. Inoculated samples were held in a programmable water bath at 4.4 °C (40 °F) and subjected to linear temperature changes to different final temperatures over various lengths of time and then returned to 4.4 °C (40 °F). Maximum temperatures reached were 15.6, 26.7, or 37.8 °C (60, 80, or 100 °F), and the temperature increases took place over 4, 6, and 8 h, with varying cooling times. Our experiments show that when maximum temperatures were lower (15.6 or 26.7 °C), there was generally good agreement between the ComBase models and experiments: when temperature increases of 15.6 or 26.7 °C occurred over 8 h, experimental data were within 0.13 log CFU of the model predictions. When maximum temperatures were 37 °C, predictive models were fail-safe. Overall bias of the models was 1.11, and accuracy was 2.11. Our experiments show the U.S. Food and Drug Administration Model Food Code guidelines for holding food out of temperature control are quite conservative. Our research also shows that the ComBase models for Salmonella growth are accurate or fail-safe for dynamic temperature conditions as might be observed due to power loss from natural disasters or during transport out of
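    The "bias 1.11, accuracy 2.11" figures correspond to the bias and accuracy factors standard in predictive microbiology; a minimal sketch, assuming hypothetical predicted and observed growth values:

    ```python
    # Bias factor Bf = 10^mean(log10(pred/obs)); accuracy factor
    # Af = 10^mean(|log10(pred/obs)|). Bf > 1 means the model over-predicts
    # growth, i.e. errs on the fail-safe side.
    import numpy as np

    observed = np.array([2.1, 3.0, 4.2, 5.1])    # growth responses (invented)
    predicted = np.array([2.4, 3.3, 4.0, 6.0])

    log_ratio = np.log10(predicted / observed)
    bias_factor = 10 ** log_ratio.mean()
    accuracy_factor = 10 ** np.abs(log_ratio).mean()
    print(bias_factor, accuracy_factor)
    ```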

  2. Bridging Ground Validation and Algorithms: Using Scattering and Integral Tables to Incorporate Observed DSD Correlations into Satellite Algorithms

    Science.gov (United States)

    Williams, C. R.

    2012-12-01

    The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found correlations between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) that follow a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, which determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves with individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency-dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual-frequency Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V
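    Spelled out, Dm is the ratio of the fourth to the third moment of the DSD and Sm is the mass-weighted standard deviation about Dm; the sketch below computes both from a binned spectrum and fits the power law in log space (binning and data are illustrative assumptions, not GPM code):

    ```python
    # Mass-weighted DSD parameters and a Dm-Sm power-law fit.
    import numpy as np

    def dm_sm(D, N, dD):
        """D: bin centers (mm); N: N(D) in m^-3 mm^-1; dD: bin widths (mm)."""
        m3 = np.sum(D**3 * N * dD)            # 3rd moment, proportional to mass
        dm = np.sum(D**4 * N * dD) / m3       # mass-weighted mean diameter
        sm = np.sqrt(np.sum((D - dm) ** 2 * D**3 * N * dD) / m3)
        return dm, sm

    def fit_power_law(dms, sms):
        """Fit Sm = a * Dm**b by linear regression in log-log space."""
        b, log_a = np.polyfit(np.log(dms), np.log(sms), 1)
        return np.exp(log_a), b

    D = np.linspace(0.1, 5.0, 50)
    dD = np.full_like(D, D[1] - D[0])
    N = 8000 * np.exp(-3.0 * D)   # exponential DSD as a stand-in spectrum
    print(dm_sm(D, N, dD))
    ```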

  3. Calculation of broadband time histories of ground motion: Comparison of methods and validation using strong-ground motion from the 1994 Northridge earthquake

    Science.gov (United States)

    Hartzell, S.; Harmsen, S.; Frankel, A.; Larsen, S.

    1999-01-01

    This article compares techniques for calculating broadband time histories of ground motion in the near field of a finite fault by comparing synthetics with the strong-motion data set for the 1994 Northridge earthquake. Based on this comparison, a preferred methodology is presented. Ground-motion-simulation techniques are divided into two general methods: kinematic- and composite-fault models. Green's functions of three types are evaluated: stochastic, empirical, and theoretical. A hybrid scheme is found to give the best fit to the Northridge data. High frequencies (> 1 Hz) are calculated using a composite-fault model with a fractal subevent size distribution and stochastic, bandlimited, white-noise Green's functions. At frequencies below 1 Hz, theoretical elastic-wave-propagation synthetics introduce proper seismic-phase arrivals of body waves and surface waves. The 3D velocity structure more accurately reproduces record durations for the deep sedimentary basin structures found in the Los Angeles region. At frequencies above 1 Hz, scattering effects become important and wave propagation is more accurately represented by stochastic Green's functions. A fractal subevent size distribution for the composite-fault model ensures an ω⁻² spectral shape over the entire frequency band considered (0.1-20 Hz).
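    The hybrid scheme amounts to summing complementary low- and high-pass filtered synthetics around the 1 Hz crossover; a conceptual sketch (filter type and order are assumptions of this illustration, not the authors' matched filters):

    ```python
    # Combine deterministic (low-frequency) and stochastic (high-frequency)
    # synthetics with complementary Butterworth filters at the crossover.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def hybrid_broadband(det, sto, fs, f_cross=1.0, order=4):
        """det, sto: time histories on a common time base; fs in Hz."""
        b_lo, a_lo = butter(order, f_cross, btype="low", fs=fs)
        b_hi, a_hi = butter(order, f_cross, btype="high", fs=fs)
        return filtfilt(b_lo, a_lo, det) + filtfilt(b_hi, a_hi, sto)

    fs = 100.0
    t = np.arange(0.0, 40.0, 1.0 / fs)
    det = np.sin(2 * np.pi * 0.3 * t)        # stand-in deterministic synthetic
    sto = 0.1 * np.random.randn(t.size)      # stand-in stochastic synthetic
    broadband = hybrid_broadband(det, sto, fs)
    ```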

  4. Validation of a method of automatic segmentation for delineation of volumes in PET imaging for radiotherapy; Validacion de un metodo de segmentacion automatica para delineacion de volumenes en imagenes PET para radioterapia

    Energy Technology Data Exchange (ETDEWEB)

    Latorre Musoll, A.; Eudaldo Puell, T.; Ruiz Martinez, A.; Fernandez Leon, A.; Carrasco de Fez, P.; Jornet Sala, N.; Ribas Morales, M.

    2011-07-01

    Prior to the clinical use of PET imaging for the delineation of the BTV, a preliminary study on a phantom model was performed to validate the automatic segmentation tools based on activity-concentration thresholds, which are implemented both in the PET-CT equipment and in the Eclipse planning system.

  5. Validation of the Cooray-Rubinstein (C-R) formula for a rough ground surface by using three-dimensional (3-D) FDTD

    Science.gov (United States)

    Li, Dongshuai; Zhang, Qilin; Liu, Tao; Wang, Zhenhui

    2013-11-01

    In this paper, we have extended the Cooray-Rubinstein (C-R) approximate formula to a fractal rough ground surface and then validated its accuracy by using the three-dimensional (3-D) finite-difference time-domain (FDTD) method at distances of 50 m and 100 m from the lightning channel. The results show that the extended C-R formula has acceptable accuracy for predicting the lightning-radiated horizontal electric field above fractal rough, conducting ground, and its accuracy improves somewhat with higher earth conductivity. For instance, when the conductivity of the rough ground is 0.1 S/m, the error of the peak value predicted by the extended C-R formula is less than about 2.3%, while the error is less than about 6.7% for a conductivity of 0.01 S/m. The rough ground strongly affects the lightning horizontal field: the initial peak value of the horizontal field clearly decreases as the root-mean-square height of the rough ground increases at early times (within several microseconds of the beginning of the return stroke).
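    For reference, the C-R formula in its commonly cited frequency-domain form (notation assumed here; the subscript p marks fields computed over a perfectly conducting ground) is:

    ```latex
    E_r(h,\,j\omega) \;=\; E_{r,p}(h,\,j\omega)
      \;-\; H_{\varphi,p}(0,\,j\omega)\,
            \frac{c\,\mu_0}{\sqrt{\varepsilon_{rg} + \sigma_g/(j\omega\varepsilon_0)}}
    ```

    where ε_rg and σ_g are the relative permittivity and conductivity of the ground; the extension discussed above replaces the flat interface with a fractal rough one.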

  6. A cross validation study of deep brain stimulation targeting: from experts to atlas-based, segmentation-based and automatic registration algorithms.

    Science.gov (United States)

    Castro, F Javier Sanchez; Pollo, Claudio; Meuli, Reto; Maeder, Philippe; Cuisenaire, Olivier; Cuadra, Meritxell Bach; Villemure, Jean-Guy; Thiran, Jean-Philippe

    2006-11-01

    Validation of image registration algorithms is a difficult task and an open-ended problem, usually application-dependent. In this paper, we focus on deep brain stimulation (DBS) targeting for the treatment of movement disorders like Parkinson's disease and essential tremor. DBS involves implantation of an electrode deep inside the brain to electrically stimulate specific areas, shutting down the disease's symptoms. The subthalamic nucleus (STN) has turned out to be the optimal target for this kind of surgery. Unfortunately, the STN is in general not clearly distinguishable in common medical imaging modalities. Usual techniques to infer its location are the use of anatomical atlases and visible surrounding landmarks. Surgeons have to adjust the electrode intraoperatively using electrophysiological recordings and macrostimulation tests. We constructed a ground truth derived from specific patients whose STNs are clearly visible on magnetic resonance (MR) T2-weighted images. A patient is chosen as atlas for both the right and left sides. Then, by registering each patient with the atlas using different methods, several estimations of the STN location are obtained. Two studies are conducted using our proposed validation scheme: first, a comparison between different atlas-based and nonrigid registration algorithms, with an evaluation of their performance and usability for locating the STN automatically; second, a study of which visible surrounding structures influence the STN location. The two studies are cross-validated against each other and against the experts' variability. Using this scheme, we evaluated the experts' ability against the estimation error provided by the tested algorithms, and we demonstrated that automatic STN targeting is possible and as accurate as the expert-driven techniques currently used. We also show which structures have to be taken into account to accurately estimate the STN location.

  7. Validation of a ground motion synthesis and prediction methodology for the 1988, M=6.0, Saguenay Earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.; Jarpe, S.; Kasameyer, P.; Foxall, W.

    1998-01-01

    We model the 1988, M=6.0, Saguenay earthquake. We utilize an approach that has been developed to predict strong ground motion. This approach involves developing a set of rupture scenarios based upon bounds on rupture parameters. Rupture parameters include rupture geometry, hypocenter, rupture roughness, rupture velocity, healing velocity (rise times), slip distribution, asperity size and location, and slip vector. Scenario here refers to specific values of these parameters for a hypothesized earthquake. Synthetic strong ground motions are then generated for each rupture scenario. A sufficient number of scenarios are run to span the variability in strong ground motion due to the source uncertainties. By having a suite of rupture scenarios of hazardous earthquakes for a fixed magnitude and identifying the hazard to the site from the one-standard-deviation value of engineering parameters, we have introduced a probabilistic component to the deterministic hazard calculation. For this study we developed bounds on rupture scenarios from previous research on this earthquake. The time history closest to the observed ground motion was selected as a model for the Saguenay earthquake.

  8. Validation of the AGDISP model for predicting airborne atrazine spray drift: a South African ground application case study

    CSIR Research Space (South Africa)

    Nsibande, SA

    2015-06-01

    Full Text Available monitoring data in order for them to be employed with confidence, especially when they are used to implement regulatory measures or to evaluate potential human exposure levels. In this case study, off-target pesticide drift was monitored during ground...

  9. Functional Validation of an Alpha-Actinin-4 Mutation as a Potential Cause of an Aggressive Presentation of Adolescent Focal Segmental Glomerulosclerosis: Implications for Genetic Testing

    Science.gov (United States)

    Steinke, Julia M.; Krishnan, Ramaswamy; Birrane, Gabriel; Pollak, Martin R.

    2016-01-01

    Genetic testing in the clinic and research lab is becoming more routinely used to identify rare genetic variants. However, attributing these rare variants as the cause of disease in an individual patient remains challenging. Here, we report a patient who presented with nephrotic syndrome and focal segmental glomerulosclerosis (FSGS) with collapsing features at age 14. Despite treatment, her kidney disease progressed to end stage within a year of diagnosis. Through genetic testing, a Y265H variant of unknown clinical significance in the alpha-actinin-4 gene (ACTN4) was identified. This variant has not been seen previously in FSGS patients, nor is it present in genetic databases. Her clinical presentation is different from previous descriptions of ACTN4-mediated FSGS, which is characterized by sub-nephrotic proteinuria and slow progression to end-stage kidney disease. We performed in vitro and cellular assays to characterize this novel ACTN4 variant before attributing causation. We found that ACTN4 with either Y265H or K255E (a known disease-causing mutation) increased the actin-bundling activity of ACTN4 in vitro, was associated with the formation of intracellular aggregates, and increased podocyte contractile force. Despite the absence of a familial pattern of inheritance, these similar biological changes caused by the Y265H and K255E amino acid substitutions suggest that this new variant is potentially the cause of FSGS in this patient. Our studies highlight that functional validation in complement with genetic testing may be required to confirm the etiology of rare disease, especially in the setting of unusual clinical presentations. PMID:27977723

  10. In-Situ Load System for Calibrating and Validating Aerodynamic Properties of Scaled Aircraft in Ground-Based Aerospace Testing Applications

    Science.gov (United States)

    Commo, Sean A. (Inventor); Lynn, Keith C. (Inventor); Landman, Drew (Inventor); Acheson, Michael J. (Inventor)

    2016-01-01

    An In-Situ Load System for calibrating and validating aerodynamic properties of scaled aircraft in ground-based aerospace testing applications includes an assembly having upper and lower components that are pivotably interconnected. A test weight can be connected to the lower component to apply a known force to a force balance. The orientation of the force balance can be varied, and the measured forces from the force balance can be compared to applied loads at various orientations to thereby develop calibration factors.

  11. The Validation Activities of the APhoRISM EC 7FP Project, Aimed at Post Seismic Damage Mapping, Through a Combined Use of EOS and Ground Data

    Science.gov (United States)

    Devanthery, N.; Luzi, G.; Stramondo, S.; Bignami, C.; Pierdicca, N.; Wegmuller, U.; Romaniello, V.; Anniballe, R.; Piscini, A.; Albano, M.; Moro, M.; Crosetto, M.

    2016-08-01

    The estimation of damage after an earthquake using spaceborne remote sensing data is one of the main applications of the change detection methodologies widely discussed in the literature. APhoRISM (Advanced PRocedures for volcanIc and Seismic Monitoring) is a collaborative European Commission project (FP7-SPACE-2013-1) addressing the development of innovative methods using space and ground sensors to support the management and mitigation of seismic and volcanic risk. In this paper a novel approach to damage assessment, based on the use of a priori information derived from different sources in a preparedness phase, is described and a preliminary validation is shown.

  12. A presentation of ATR processing chain validation procedure of IR terminal guidance version of the AASM modular air-to-ground weapon

    Science.gov (United States)

    Duclos, D.; Quinquis, N.; Broda, G.; Galmiche, F.; Oudyi, F.; Coulon, N.; Cordier, D.; Sonier, C.

    2009-05-01

    Developed by Sagem (SAFRAN Group), the AASM is a modular Air-To-Ground "Fire and Forget" weapon designed to neutralise a large range of targets under all conditions. The AASM is composed of guidance and range enhancement kits that give bombs already in service new operational capabilities. The AASM guidance kit exists in two different versions. The IMU/GPS guidance version is able to achieve "ten-meter class" accuracy on target in all weather conditions. The IMU/GPS/IR guidance version is able to achieve "meter class" accuracy on target with poor-precision geographic designation or in a GPS-denied flight context, thanks to an IR sensor and a complex image processing chain. In this night/day IMU/GPS/IR version, the terminal guidance phase adjusts the missile navigation to the true target by matching the image viewed through the infrared sensor with a target model stored in the missile memory. This model will already have been drawn up on the ground using a mission planning system and, for example, a satellite image. This paper presents the main steps of the procedure applied to qualify the complete image processing chain of the AASM IMU/GPS/IR version, including open-loop validation of ATR algorithms on real and synthetic images, and closed-loop validation using the AASM simulation reference model.

  13. Multi-body simulation of a canine hind limb: model development, experimental validation and calculation of ground reaction forces

    Directory of Open Access Journals (Sweden)

    Wefstaedt Patrick

    2009-11-01

    Full Text Available Abstract Background Among other causes, the long-term result of hip prostheses in dogs is determined by aseptic loosening. Prevention of prosthesis complications can be achieved by an optimization of the tribological system, which finally results in improved implant duration. In this context a computerized model for the calculation of hip joint loadings during different motions would be of benefit. As a first step in the development of such an inverse dynamic multi-body simulation (MBS) model, we here present the setup of a canine hind limb model applicable for the calculation of ground reaction forces. Methods The anatomical geometries of the MBS model have been established using computed tomography (CT) and magnetic resonance imaging (MRI) data. The CT data were collected from the pelvis, femora, tibiae and pads of a mixed-breed adult dog. Geometric information about 22 muscles of the pelvic extremity of 4 mixed-breed adult dogs was determined using MRI. Kinematic and kinetic data obtained by motion analysis of a clinically healthy dog during a gait cycle (1 m/s) on an instrumented treadmill were used to drive the model in the multi-body simulation. Results and Discussion The vertical ground reaction forces (z-direction) calculated by the MBS system show a maximum deviation of 1.75%BW for the left and 4.65%BW for the right hind limb from the treadmill measurements. The calculated peak ground reaction forces in the z- and y-directions were found to be comparable to the treadmill measurements, whereas the curve characteristics of the forces in the y-direction were not in complete alignment. Conclusion In conclusion, it could be demonstrated that the developed MBS model is suitable for simulating ground reaction forces of dogs during walking. In forthcoming investigations the model will be developed further for the calculation of forces and moments acting on the hip joint during different movements, which can be of help in context with the in

  14. Validation of middle atmospheric campaign-based water vapour measured by the ground-based microwave radiometer MIAWARA-C

    Directory of Open Access Journals (Sweden)

    B. Tschanz

    2013-02-01

    Full Text Available Middle atmospheric water vapour can be used as a tracer for dynamical processes. It is mainly measured by satellite instruments and ground-based microwave radiometers. Ground-based instruments capable of measuring middle atmospheric water vapour are sparse but valuable as they complement satellite measurements, are relatively easy to maintain and have a long lifetime. MIAWARA-C is a ground-based microwave radiometer for middle atmospheric water vapour designed for use on measurement campaigns for both atmospheric case studies and instrument intercomparisons. MIAWARA-C's retrieval version 1.1 (v1.1) is set up in a way to provide a consistent data set even if the instrument is operated from different locations on a campaign basis. The sensitive altitude range for v1.1 extends from 4 hPa (37 km) to 0.017 hPa (75 km). MIAWARA-C measures two polarisations of the incident radiation in separate receiver channels and can therefore provide two independent measurements of the same air mass. The standard deviation of the difference between the profiles obtained from the two polarisations is in excellent agreement with the estimated random error of v1.1. In this paper, the quality of v1.1 data is assessed during two measurement campaigns: (1) five months of measurements in the Arctic (Sodankylä, 67.37° N/26.63° E) and (2) nine months of measurements at mid-latitudes (Zimmerwald, 46.88° N/7.46° E). For both campaigns MIAWARA-C's profiles are compared to measurements from the satellite experiments Aura MLS and MIPAS. In addition, comparisons to ACE-FTS and SOFIE are presented for the Arctic and to the ground-based radiometer MIAWARA for the mid-latitudinal campaign. In general all intercomparisons show high correlation coefficients, above 0.5 at altitudes above 45 km, confirming the ability of MIAWARA-C to monitor temporal variations on the order of days. The biases are generally below 10% and within the estimated systematic uncertainty of MIAWARA-C. No

  15. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Also traditional marketing theory has taken in consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its origin in other sciences, as for example biology, anthropology etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain... and analysed possible segments in the market. Results show that the statistical model used identified two segments - a segment of so-called "fish lovers" and another segment called "traditionalists". The "fish lovers" are very fond of eating fish and they actually prefer fish to other dishes...

  16. Development and validation of ultrasound-assisted solid-liquid extraction of phenolic compounds from waste spent coffee grounds.

    Science.gov (United States)

    Al-Dhabi, Naif Abdullah; Ponmurugan, Karuppiah; Maran Jeganathan, Prakash

    2017-01-01

    In this work, a Box-Behnken statistical experimental design (BBD) was adopted to evaluate and optimize ultrasound-assisted solid-liquid extraction (USLE) of phytochemicals from spent coffee grounds. The factors employed in this study are ultrasonic power, temperature, time and solid-liquid (SL) ratio. Individual and interactive effects of the independent variables on the extraction yield were depicted through mathematical models generated from the experimental data. The optimum process conditions determined are 244 W of ultrasonic power, a temperature of 40 °C, a time of 34 min and an SL ratio of 1:17 g/ml. Under these optimal conditions, the predicted values correlated with the experimental values at the 95% confidence level. This indicates the significance of the selected method for USLE of phytochemicals from SCG.
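    The BBD optimization reduces to fitting a second-order response surface to the coded design points and locating its optimum; a minimal sketch with invented yields (not the paper's data):

    ```python
    # Quadratic response-surface fit: y ~ b0 + sum bi*xi + cross terms + squares.
    import numpy as np
    from itertools import combinations

    def quad_design_matrix(X):
        n, k = X.shape
        cols = [np.ones(n)] + [X[:, i] for i in range(k)] \
             + [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)] \
             + [X[:, i] ** 2 for i in range(k)]
        return np.column_stack(cols)

    # Coded factors: ultrasonic power, temperature, time, SL ratio (invented runs)
    X = np.random.uniform(-1, 1, (27, 4))
    y = 20 - 3 * (X[:, 0] - 0.5) ** 2 - 2 * (X[:, 1] - 0.2) ** 2 \
        + np.random.normal(0, 0.3, 27)
    beta, *_ = np.linalg.lstsq(quad_design_matrix(X), y, rcond=None)
    ```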

  17. Validation of clinical acceptability of an atlas-based segmentation algorithm for the delineation of organs at risk in head and neck cancer

    Energy Technology Data Exchange (ETDEWEB)

    Hoang Duc, Albert K., E-mail: albert.hoangduc.ucl@gmail.com; McClelland, Jamie; Modat, Marc; Cardoso, M. Jorge; Mendelson, Alex F. [Center for Medical Image Computing, University College London, London WC1E 6BT (United Kingdom); Eminowicz, Gemma; Mendes, Ruheena; Wong, Swee-Ling; D’Souza, Derek [Radiotherapy Department, University College London Hospitals, 235 Euston Road, London NW1 2BU (United Kingdom); Veiga, Catarina [Department of Medical Physics and Bioengineering, University College London, London WC1E 6BT (United Kingdom); Kadir, Timor [Mirada Medical UK, Oxford Center for Innovation, New Road, Oxford OX1 1BY (United Kingdom); Ourselin, Sebastien [Centre for Medical Image Computing, University College London, London WC1E 6BT (United Kingdom)

    2015-09-15

    Purpose: The aim of this study was to assess whether clinically acceptable segmentations of organs at risk (OARs) in head and neck cancer can be obtained automatically and efficiently using the novel “similarity and truth estimation for propagated segmentations” (STEPS) compared to the traditional “simultaneous truth and performance level estimation” (STAPLE) algorithm. Methods: First, 6 OARs were contoured by 2 radiation oncologists in a dataset of 100 patients with head and neck cancer on planning computed tomography images. Each image in the dataset was then automatically segmented with STAPLE and STEPS using those manual contours. Dice similarity coefficient (DSC) was then used to compare the accuracy of these automatic methods. Second, in a blind experiment, three separate and distinct trained physicians graded manual and automatic segmentations into one of the following three grades: clinically acceptable as determined by universal delineation guidelines (grade A), reasonably acceptable for clinical practice upon manual editing (grade B), and not acceptable (grade C). Finally, STEPS segmentations graded B were selected and one of the physicians manually edited them to grade A. Editing time was recorded. Results: Significant improvements in DSC can be seen when using the STEPS algorithm on large structures such as the brainstem, spinal canal, and left/right parotid compared to the STAPLE algorithm (all p < 0.001). In addition, across all three trained physicians, manual and STEPS segmentation grades were not significantly different for the brainstem, spinal canal, parotid (right/left), and optic chiasm (all p > 0.100). In contrast, STEPS segmentation grades were lower for the eyes (p < 0.001). Across all OARs and all physicians, STEPS produced segmentations graded as well as manual contouring at a rate of 83%, giving a lower bound on this rate of 80% with 95% confidence. Reduction in manual interaction time was on average 61% and 93% when automatic
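    The DSC used above is the standard overlap measure between two masks; a minimal sketch with invented segmentations:

    ```python
    # Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|).
    import numpy as np

    def dice(a, b):
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    auto = np.zeros((64, 64), bool); auto[20:40, 20:40] = True
    manual = np.zeros((64, 64), bool); manual[22:42, 21:41] = True
    print(f"DSC = {dice(auto, manual):.3f}")   # 1.0 would be perfect overlap
    ```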

  18. Ground-based water vapor Raman lidar measurements up to the upper troposphere and lower stratosphere – Part 1: Instrument development, optimization, and validation

    Directory of Open Access Journals (Sweden)

    I. S. McDermid

    2011-08-01

    Full Text Available Recognizing the importance of water vapor in the upper troposphere and lower stratosphere (UT/LS) and the scarcity of high-quality, long-term measurements, JPL began the development of a powerful Raman lidar in 2005 to try to meet these needs. This development was endorsed by the Network for the Detection of Atmospheric Composition Change (NDACC) and the validation program for the EOS-Aura satellite. In this paper we review the stages in the instrumental development of the lidar and the conclusions from three validation campaigns: MOHAVE, MOHAVE-II, and MOHAVE 2009 (Measurements of Humidity in the Atmosphere and Validation Experiments). The data analysis, profile retrieval and calibration procedures, as well as additional results from MOHAVE-2009, are presented in detail in a companion paper (Leblanc et al., 2011a). Ultimately the lidar has demonstrated the capability to measure water vapor profiles from ~1 km above the ground to the lower stratosphere, reaching 14 km for 1-h integrated profiles and 21 km for 6-h integrated profiles, with a precision of 10% or better near 13 km and below, and an estimated accuracy of 5%.

  19. The CU Airborne MAX-DOAS instrument: ground based validation, and vertical profiling of aerosol extinction and trace gases

    Directory of Open Access Journals (Sweden)

    S. Baidar

    2012-09-01

    Full Text Available The University of Colorado Airborne Multi Axis Differential Optical Absorption Spectroscopy (CU AMAX-DOAS) instrument uses solar stray light remote sensing to detect and quantify multiple trace gases, including nitrogen dioxide (NO2), glyoxal (CHOCHO), formaldehyde (HCHO), water vapor (H2O), nitrous acid (HONO), iodine monoxide (IO), bromine monoxide (BrO), and oxygen dimers (O4), at multiple wavelengths (360 nm, 477 nm, 577 nm and 632 nm) simultaneously and sensitively in the open atmosphere. The instrument is unique in that it (1) includes measurements of solar stray light photons from nadir, zenith, and multiple elevation angles forward and below the plane by the same spectrometer/detector system, and (2) features a motion compensation system that decouples the telescope field of view (FOV) from aircraft movements in real-time (< 0.35° accuracy). Sets of solar stray light spectra collected from nadir to zenith scans provide some vertical profile information within 2 km above and below the aircraft altitude, and the vertical column density (VCD) below the aircraft is measured in nadir view. Maximum information about vertical profiles is derived simultaneously for trace gas concentrations and aerosol extinction coefficients over similar spatial scales and with a vertical resolution of typically 250 m during aircraft ascent/descent.

    The instrument is described, and data from flights over California during the CalNex and CARES air quality field campaigns are presented. Horizontal distributions of NO2 VCDs below the aircraft (maps) are sampled with typically 1 km resolution, and show good agreement with two ground-based CU MAX-DOAS instruments (slope 0.95 ± 0.09, R2 = 0.86). As a case study, vertical profiles of NO2, CHOCHO, HCHO, and H2O mixing ratios and aerosol extinction coefficients

  20. Retrieval and validation of O3 measurements from ground-based FTIR spectrometer at equatorial station: Addis Ababa, Ethiopia

    Science.gov (United States)

    Takele Kenea, S.; Mengistu Tsidu, G.; Blumenstock, T.; Hase, F.; von Clarmann, T.; Stiller, G. P.

    2012-09-01

    Since May 2009 high-resolution Fourier transform infrared (FTIR) solar absorption spectra have been recorded at Addis Ababa (9.01° N latitude, 38.76° E longitude, 2443 m altitude a.s.l.), Ethiopia. The vertical profiles and total column amounts of ozone (O3) are deduced from the spectra by using the retrieval code PROFFIT (V9.5) and the regularly determined instrumental line shape (ILS). A detailed error analysis of the O3 retrieval is performed. Averaging kernel analysis of the target gas shows that the major contribution to the retrieved information always comes from the measurement. We obtained 2.1 degrees of freedom for signal on average in the retrieval of O3 from the observed FTIR spectra. We have compared the FTIR retrievals of ozone volume mixing ratio (VMR) profiles and column amounts with coincident satellite observations from the Microwave Limb Sounder (MLS), the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), the Tropospheric Emission Spectrometer (TES), the Ozone Monitoring Instrument (OMI), the Atmospheric Infrared Sounder (AIRS) and the Global Ozone Monitoring Experiment (GOME-2) instrument. The mean relative differences are generally below +15% in the altitude range of 27 to 36 km for the comparison of VMR profiles with MLS and MIPAS, whereas the comparison with TES shows relative differences below 9.4%. Furthermore, the mean relative difference is positive above 31 km, suggesting a positive bias in the FTIR measurement of O3 VMR with respect to MLS, MIPAS and TES. The overall comparisons of column amounts show better agreement, exhibiting mean relative differences of the ground-based FTIR with respect to MLS and GOME-2 within +0.4% to +4.0%, with corresponding standard deviations of 2.2 to 4.3%, whereas, in the case of OMI, TES and AIRS, the mean relative differences are from -0.38 to -6.8%. Thus, the retrieved O3 VMR and column amounts from a tropical site, Addis Ababa, are found to exhibit

  1. The CU Airborne MAX-DOAS instrument: ground based validation, and vertical profiling of aerosol extinction and trace gases

    Science.gov (United States)

    Baidar, S.; Oetjen, H.; Coburn, S.; Dix, B.; Ortega, I.; Sinreich, R.; Volkamer, R.

    2012-09-01

    The University of Colorado Airborne Multi Axis Differential Optical Absorption Spectroscopy (CU AMAX-DOAS) instrument uses solar stray light remote sensing to detect and quantify multiple trace gases, including nitrogen dioxide (NO2), glyoxal (CHOCHO), formaldehyde (HCHO), water vapor (H2O), nitrous acid (HONO), iodine monoxide (IO), bromine monoxide (BrO), and oxygen dimers (O4) at multiple wavelengths (360 nm, 477 nm, 577 nm and 632 nm) simultaneously, and sensitively in the open atmosphere. The instrument is unique, in that it presents the first systematic implementation of MAX-DOAS on research aircraft, i.e. (1) includes measurements of solar stray light photons from nadir, zenith, and multiple elevation angles forward and below the plane by the same spectrometer/detector system, and (2) features a motion compensation system that decouples the telescope field of view (FOV) from aircraft movements in real-time (< 0.35° accuracy). As a case study, vertical profiles of NO2, CHOCHO, HCHO, and H2O mixing ratios and aerosol extinction coefficients, ɛ, at 477 nm calculated from O4 measurements from a low approach at Brackett airfield inside the South Coast Air Basin (SCAB) are presented. These profiles contain ~ 12 degrees of freedom (DOF) over a 3.5 km altitude range, independent of the signal-to-noise ratio at which the trace gas is detected. The boundary layer NO2 concentration and the integral of aerosol extinction over height (aerosol optical depth, AOD) agree well with a nearby ground-based in-situ NO2 measurement and AERONET station. The detection limits of NO2, CHOCHO, HCHO, ɛ360, ɛ477 from 30 s integration time spectra recorded forward of the plane are 5 ppt, 3 ppt, 100 ppt, 0.004 km-1, 0.002 km-1 in the free troposphere (FT), and 30 ppt, 16 ppt, 540 ppt, 0.012 km-1, 0.006 km-1 inside the boundary layer (BL), respectively. Mobile column observations of trace gases and aerosols are complementary to in-situ observations, and help bridge the spatial scales probed by ground-based observations, satellites, and predicted by atmospheric

  2. M3 version 3.0: Verification and validation; Hydrochemical model of ground water at repository site

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, Javier B. (Dept. of Earth Sciences, Univ. of Zaragoza, Zaragoza (Spain)); Laaksoharju, Marcus (Geopoint AB, Sollentuna (Sweden)); Skaarman, Erik (Abscondo, Bromma (Sweden)); Gurban, Ioana (3D-Terra (Canada))

    2009-01-15

    Hydrochemical evaluation is a complex type of work that is carried out by specialists. The outcome of this work is generally presented as qualitative models and process descriptions of a site. To support and help to quantify the processes in an objective way, a multivariate mathematical tool entitled M3 (Multivariate Mixing and Mass balance calculations) has been constructed. The computer code can be used to trace the origin of the groundwater, and to calculate the mixing proportions and mass balances from groundwater data. The M3 code is a groundwater response model, which means that changes in the groundwater chemistry in terms of sources and sinks are traced in relation to an ideal mixing model. The complexity of the measured groundwater data determines the configuration of the ideal mixing model. Deviations from the ideal mixing model are interpreted as being due to reactions. Assumptions concerning important mineral phases altering the groundwater or uncertainties associated with thermodynamic constants do not affect the modelling because the calculations are solely based on the measured groundwater composition. M3 uses the opposite approach to that of many standard hydrochemical models. In M3, mixing is evaluated and calculated first. The constituents that cannot be described by mixing are described by reactions. The M3 model consists of three steps: the first is a standard principal component analysis, followed by mixing and finally mass balance calculations. The measured groundwater composition can be described in terms of mixing proportions (%), while the sinks and sources of an element associated with reactions are reported in mg/L. This report contains a set of verification and validation exercises with the intention of building confidence in the use of the M3 methodology. At the same time, clear answers are given to questions related to the accuracy and the precision of the results, including the inherent uncertainties and the errors that can be made
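    The mixing step can be pictured as a small constrained least-squares problem: find non-negative end-member proportions that sum to one and best reproduce a measured sample, then read the residual as reactions. A conceptual sketch (end-member compositions are invented; this is not the M3 code):

    ```python
    # Non-negative mixing proportions with a soft sum-to-one constraint.
    import numpy as np
    from scipy.optimize import nnls

    # Columns = end members (e.g. glacial, marine, meteoric); rows = species.
    A = np.array([[10.0, 19000.0, 5.0],    # Cl  (mg/L), illustrative values
                  [2.0, 2600.0, 10.0],     # SO4
                  [5.0, 410.0, 15.0]])     # Na
    sample = np.array([6000.0, 900.0, 150.0])

    w = 1e4  # heavy weight enforcing sum(proportions) ~= 1
    A_aug = np.vstack([A, w * np.ones(A.shape[1])])
    b_aug = np.append(sample, w)
    proportions, _ = nnls(A_aug, b_aug)

    residual = sample - A @ proportions    # attributed to reactions, in mg/L
    print(proportions, residual)
    ```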

  3. Validation of SCIAMACHY NO2 Vertical Column Densities with Mt. Cimone and Stara Zagora Ground-Based Zenith Sky DOAS Observations

    Science.gov (United States)

    Kostadinov, I.; Petritoli, A.; Werner, R.; Valev, D.; Atanasov, At.; Bortoli, D.; Markova, T.; Ravegnani, F.; Palazzi, E.; Giovanelli, G.

    2004-08-01

    Ground-based zenith sky Differential Optical Absorption Spectroscopy (DOAS) measurements performed by means of GASCOD instruments at Mt. Cimone (44N 11E), Italy, and Stara Zagora (42N, 25E), Bulgaria, are used for validation of the SCIAMACHY NO2 vertical column density (vcd) of the ESA SCI_NL product retrieved with processor version 5.01. The results presented in this work concern satellite data for the July-December 2002 period. On this basis it is concluded that during the summer-autumn period the overall NO2 vcd above both stations is fairly well reproduced by the SCIAMACHY data, while towards the winter period they deviate from the seasonal behaviour of the NO2 vcd derived at both stations.

  5. In Pursuit of Improving Airburst and Ground Damage Predictions: Recent Advances in Multi-Body Aerodynamic Testing and Computational Tools Validation

    Science.gov (United States)

    Venkatapathy, Ethiraj; Gulhan, Ali; Aftosmis, Michael; Brock, Joseph; Mathias, Donovan; Need, Dominic; Rodriguez, David; Seltner, Patrick; Stern, Eric; Wiles, Sebastian

    2017-01-01

    An airburst from a large asteroid during entry can cause significant ground damage. The damage depends on the energy and the altitude of the airburst. Breakup of asteroids into fragments and their lateral spread have been observed. Modeling the underlying physics of fragmented bodies interacting at hypersonic speeds, and the resulting spread of fragments, is needed for a true predictive capability. Current models predict airburst damage using heuristic arguments and assumptions, such as pancaking, point-source explosive energy release at a pre-determined altitude, or an assumed fragmentation spread rate. A multi-year collaboration between the German Aerospace Center (DLR) and NASA has been established to develop validated computational tools to address the above challenge.

  6. Essays in International Market Segmentation

    NARCIS (Netherlands)

    Hofstede, ter F.

    1999-01-01

    The primary objective of this thesis is to develop and validate new methodologies to improve the effectiveness of international segmentation strategies. The current status of international market segmentation research is reviewed in an introductory chapter, which provides a number of methodological...

  8. Forward Modeling and validation of a new formulation to compute self-potential signals associated with ground water flow

    Directory of Open Access Journals (Sweden)

    A. Bolève

    2007-10-01

    The classical formulation of coupled hydroelectrical flow in porous media is based on two linear coupled constitutive equations, for the electrical current density and for the seepage velocity of the water phase, obeying Onsager's reciprocity. This formulation shows that the streaming current density is controlled by the gradient of the fluid pressure of the water phase and by a streaming current coupling coefficient that depends on the so-called zeta potential. Recently a new formulation has been introduced in which the streaming current density is directly connected to the seepage velocity of the water phase and to the excess of electrical charge per unit pore volume in the porous material. The advantages of this formulation are numerous. First, this new formulation is more intuitive, not only in terms of establishing a constitutive equation for the generalized Ohm's law but also in specifying boundary conditions for the influence of the flow field upon the streaming potential. With the new formulation, the magnitude of the streaming potential coupling coefficient decreases with permeability, in agreement with published results. The new formulation has been extended to the inertial laminar flow regime and to unsaturated conditions, with applications to the vadose zone. This formulation is suitable for modelling self-potential signals in the field. We investigate infiltration of water from an agricultural ditch, vertical infiltration of water into a sinkhole, and preferential horizontal flow of ground water in a paleochannel. For the three cases reported in the present study, a good match is obtained between the finite element simulations performed and the field observations. Thus, this formulation could be useful for the inverse mapping of the geometry of groundwater flow from self-potential field measurements.
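
    In compact form, the two constitutive choices contrasted above can be written as follows (a hedged summary in assumed notation, not equations copied from the paper): the classical form drives the streaming current density with the pore-pressure gradient through a coupling term tied to the zeta potential, while the new form ties it to the seepage velocity through the excess charge per unit pore volume.

        % classical: pressure-gradient form, coupling coefficient C tied to the zeta potential
        \mathbf{j}_s = -\,\ell\,\nabla p ,
        \qquad C = -\,\ell/\sigma \;\propto\; \zeta

        % new: excess charge per unit pore volume times the seepage velocity
        \mathbf{j}_s = \bar{Q}_v\,\mathbf{u}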

  9. Towards a first ground-based validation of aerosol optical depths from Sentinel-2 over the complex topography of the Alps

    Science.gov (United States)

    Marinelli, Valerio; Cremonese, Edoardo; Diémoz, Henri; Siani, Anna Maria

    2017-04-01

    The European Space Agency (ESA) is devoting notable effort to putting into operation a new generation of advanced Earth-observation satellites, the Sentinel constellation. In particular, Sentinel-2 hosts an instrumental payload consisting mainly of the MultiSpectral Instrument (MSI), an imaging sensor capable of acquiring high-resolution imagery of the Earth's surface and atmospheric reflectance at selected spectral bands, hence providing measurements complementary to ground-based radiometric stations. The latter can provide reference data for validating the estimates from spaceborne instruments such as Sentinel-2A (operating since October 2015), whose aerosol optical thickness (AOT) values can be obtained by correcting SWIR (2190 nm) reflectance with an improved dense dark vegetation (DDV) algorithm. In the northwestern European Alps (Saint-Christophe, 45.74°N, 7.36°E), a Prede POM-02 sun/sky aerosol photometer has been operating for several years within the EuroSkyRad network, run by the Environmental Protection Agency of Aosta Valley (ARPA Valle d'Aosta), gathering direct-sun and diffuse-sky radiance for retrieving columnar aerosol optical properties. This aerosol optical depth (AOD) dataset represents an optimal ground truth for the corresponding Sentinel-2 estimates obtained with the Sen2cor processor in the challenging environment of the Alps (complex topography, snow-covered surfaces). We show the deviations between the two measurement series and propose some corrections to enhance the overall accuracy of the satellite estimates.

  10. Development of a ground segment for the scientific analysis of MIPAS/ENVISAT. Final report; Aufbau eines Bodensegments fuer die wissenschaftliche Auswertung von MIPAS/ENVISAT. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Stiller, G.P.; Clarmann, T. von; Fischer, H.; Grabowski, U.; Lutz, R.; Kiefer, M.; Milz, M.; Schulirsch, M.

    2001-11-30

    Based on the scientific work on level-2 data analysis performed in the parallel project 07UFE10/6, a partly automated analysis system for the MIPAS/ENVISAT data has been developed. The system fulfils the scientific requirements for high flexibility and meets the need for high efficiency and good computational performance. We expect that about 10% of all MIPAS spectral data can be exhaustively analysed with respect to the geophysical information they contain. The components of the system are a retrieval kernel, consisting of a radiative transfer forward model and the inversion with respect to the geophysical parameters; a database system which stores and administrates the level-1, level-2, and additional data; automated pre- and post-processing modules; and a computer cluster consisting of 8 Compaq workstations and a RAID system as its core. The system is controlled via graphical user interfaces (GUIs). It allows the MIPAS data to be analysed with respect to ca. 45 trace species, their isotopomers and horizontally inhomogeneous distributions, non-LTE effects, and microphysical properties of atmospheric particles, and it supports instrument characterisation and validation activities. (orig.)

  11. Proof-of-Concept of a Networked Validation Environment for Distributed Air/Ground NextGen Concepts

    Science.gov (United States)

    Grisham, James; Larson, Natalie; Nelson, Justin; Reed, Joshua; Suggs, Marvin; Underwood, Matthew; Papelis, Yiannis; Ballin, Mark G.

    2013-01-01

    The National Airspace System (NAS) must be improved to increase capacity, reduce flight delays, and minimize the environmental impacts of air travel. NASA has been tasked with aiding the Federal Aviation Administration (FAA) in NAS modernization. Automatic Dependent Surveillance-Broadcast (ADS-B) is an enabling technology that is fundamental to the realization of the Next Generation Air Transportation System (NextGen). Despite the 2020 FAA mandate requiring ADS-B Out equipage, airspace users lack incentives to equip with the requisite ADS-B avionics. A need exists to validate, in flight tests, advanced concepts of operation (ConOps) that rely on ADS-B and other data links without requiring costly equipage. A potential solution is presented in this paper: it is possible to emulate future data-link capabilities using the existing in-flight Internet and reduced-cost test equipment. To establish proof-of-concept, a high-fidelity traffic operations simulation was modified to include a module that simulated Internet transmission of ADS-B messages. An advanced NASA ConOp, Flight Deck Interval Management (FIM), was used to evaluate technical feasibility, and a preliminary assessment of the effects of latency and dropout rate on FIM was performed. Flight hardware that would be used by the proposed test environment was connected to the simulation so that data transfer from aircraft systems to test equipment could be verified. The results indicate that the FIM ConOp, and therefore many other advanced ConOps with equal or lesser response characteristics and data requirements, can be evaluated in flight using the proposed concept.
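
    The latency and dropout assessment lends itself to a compact illustration. The sketch below (hypothetical names and parameter values, not the NASA simulation module) injects exponentially distributed latency and random dropouts into a stream of ADS-B-like messages:

        import random

        def degrade_link(messages, mean_latency_s=0.5, dropout_rate=0.05, seed=42):
            """messages: iterable of (send_time_s, payload) pairs."""
            rng = random.Random(seed)
            delivered = []
            for t, payload in messages:
                if rng.random() < dropout_rate:
                    continue                                   # message lost in transit
                delay = rng.expovariate(1.0 / mean_latency_s)  # random link latency
                delivered.append((t + delay, payload))
            delivered.sort(key=lambda m: m[0])                 # reorder by arrival time
            return delivered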

  12. Validation and Application of Skeletochronology for Age Determination of the Ryukyu Ground Gecko, Goniurosaurus kuroiwae (Squamata:Eublepharidae)

    Institute of Scientific and Technical Information of China (English)

    Takaki KURITA; Mamoru TODA

    2013-01-01

    Skeletochronology is a method commonly used for estimating the age of amphibians and reptiles in the wild. However, the number of lines of arrested growth (LAGs) does not necessarily reflect age in some species. We validated the applicability of this method to an endangered eublepharid gecko, Goniurosaurus kuroiwae, then inferred its longevity and age structures in wild populations. We classified young geckos into three groups using previously published data for early growth: Group 1 contained hatchlings before the first winter, Group 2 contained hatchlings after the first winter, and Group 3 included yearlings after the second winter. LAG numbers in these groups were then compared. All individuals in Group 1 possessed a single LAG, which was considered as a hatching line. Most individuals in Groups 2 and 3 possessed one and two additional LAGs, respectively (LAG1 and LAG2), corroborating the notion that LAGs are formed annually. A few geckos exhibited fewer LAGs than expected. Analysis of variations in LAG and marrow cavity diameter demonstrated that in animals with fewer LAGs, endosteal resorption or fusion of the hatching line and LAG1 had occurred. LAG2 was never lost by endosteal resorption and was identifiable by its diameter. Thus, the age of adult geckos could be determined by counting LAGs outward from LAG2. Application of this method to wild populations revealed that the longevity of this species is not less than 83 months, but that almost all individuals in fragmented habitats die before 50 months, suggesting lower population sustainability in such habitats.

  13. Retrieval of nitrogen dioxide stratospheric profiles from ground-based zenith-sky UV-visible observations: validation of the technique through correlative comparisons

    Directory of Open Access Journals (Sweden)

    F. Hendrick

    2004-01-01

    A retrieval algorithm based on the Optimal Estimation Method (OEM) has been developed in order to provide vertical distributions of NO2 in the stratosphere from ground-based (GB) zenith-sky UV-visible observations. It has been applied to observational data sets from the NDSC (Network for the Detection of Stratospheric Change) stations of Harestua (60° N, 10° E) and Andøya (69° N, 16° E) in Norway. The information content and retrieval errors have been analyzed following a formalism used for characterizing ozone profiles retrieved from solar infrared absorption spectra. In order to validate the technique, the retrieved NO2 vertical profiles and columns have been compared to correlative balloon and satellite observations. Such extensive validation of the profile and column retrievals was not reported in previously published work on profiling from GB UV-visible measurements. A good agreement - generally better than 25% - has been found with the SAOZ (Système d'Analyse par Observations Zénithales) and DOAS (Differential Optical Absorption Spectroscopy) balloons. A similar agreement has been reached with correlative satellite data from the HALogen Occultation Experiment (HALOE) and Polar Ozone and Aerosol Measurement (POAM III) instruments above 25 km of altitude. Below 25 km, a systematic underestimation - by up to 40% in some cases - of both HALOE and POAM III profiles by our GB profile retrievals has been observed, more likely pointing out a limitation of both satellite instruments at these altitudes. We have concluded that our study strengthens our confidence in the reliability of the retrieval of vertical distribution information from GB UV-visible observations and offers new perspectives in the use of GB UV-visible network data for validation purposes.

  14. Retrieval of nitrogen dioxide stratospheric profiles from ground-based zenith-sky UV-visible observations: validation of the technique through correlative comparisons

    Directory of Open Access Journals (Sweden)

    F. Hendrick

    2004-05-01

    A retrieval algorithm based on the Optimal Estimation Method (OEM) has been developed in order to provide vertical distributions of NO2 in the stratosphere from ground-based (GB) zenith-sky UV-visible observations. It has been applied to observational data sets from the NDSC (Network for the Detection of Stratospheric Change) stations of Harestua (60° N, 10° E) and Andøya (69.3° N, 16.1° E) in Norway. The information content and retrieval errors have been analyzed following a formalism used for characterizing ozone profiles retrieved from solar infrared absorption spectra. In order to validate the technique, the retrieved NO2 vertical profiles and columns have been compared to correlative balloon and satellite observations. Such extensive validation of the profile and column retrievals was not reported in previously published work on profiling from GB UV-visible measurements. A good agreement – generally better than 25% – has been found with the SAOZ (Système d'Analyse par Observations Zénithales) and DOAS (Differential Optical Absorption Spectroscopy) balloon data. A similar agreement has been reached with correlative satellite data from the HALogen Occultation Experiment (HALOE) and Polar Ozone and Aerosol Measurement (POAM III) instruments above 25 km of altitude. Below 25 km, a systematic overestimation of our retrieved profiles – by up to 50% in some cases – has been observed by both HALOE and POAM III, pointing out the limitation of the satellite solar occultation technique at these altitudes. We have concluded that our study strengthens our confidence in the reliability of the retrieval of vertical distribution information from GB UV-visible observations and offers new perspectives in the use of GB UV-visible network data for validation purposes.

  15. Acellular allogeneic nerve grafting combined with bone marrow mesenchymal stem cell transplantation for the repair of long-segment sciatic nerve defects: biomechanics and validation of mathematical models

    Directory of Open Access Journals (Sweden)

    Ya-jun Li

    2016-01-01

    We hypothesized that a chemically extracted acellular allogeneic nerve graft used in combination with bone marrow mesenchymal stem cell transplantation would be an effective treatment for long-segment sciatic nerve defects. To test this, we established rabbit models of 30 mm sciatic nerve defects, and treated them using either an autograft or a chemically decellularized allogeneic nerve graft with or without simultaneous transplantation of bone marrow mesenchymal stem cells. We compared the tensile properties, electrophysiological function and morphology of the damaged nerve in each group. Sciatic nerves repaired by the allogeneic nerve graft combined with stem cell transplantation showed better recovery than those repaired by the acellular allogeneic nerve graft alone, and produced similar results to those observed with the autograft. These findings confirm that a chemically extracted acellular allogeneic nerve graft combined with transplantation of bone marrow mesenchymal stem cells is an effective method of repairing long-segment sciatic nerve defects.

  16. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 (United States); Chen, Ken-Chung; Tang, Zhen [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Xia, James J., E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery, Shanghai Jiao Tong University School of Medicine, Shanghai Ninth People’s Hospital, Shanghai 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 and Department of Brain and Cognitive Engineering, Korea University, Seoul 02841 (Korea, Republic of)

    2016-01-15

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment CBCT images. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both appearance features from the CBCTs and context features from the initial probability maps to train the first layer of a random forest classifier that can select discriminative features for segmentation. Based on this first trained classifier layer, the probability maps are updated and employed to train the next layer of the random forest classifier. By iteratively training the subsequent random forest classifiers using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors' method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method for CBCT segmentation.
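
    The layered training loop described above can be sketched in a few lines. The following is an assumed simplification (scikit-learn voxel-wise features; the paper extracts richer context features from 3D probability maps), not the authors' implementation:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def train_sequential_rf(appearance, labels, prior_prob, n_layers=3):
            """appearance: (n_voxels, n_feat); labels: (n_voxels,) binary; prior_prob: (n_voxels,)."""
            prob, layers = np.asarray(prior_prob, float), []
            for _ in range(n_layers):
                X = np.column_stack([appearance, prob])   # appearance + context feature
                rf = RandomForestClassifier(n_estimators=100).fit(X, labels)
                prob = rf.predict_proba(X)[:, 1]          # updated probability map
                layers.append(rf)
            return layers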

  17. Fingerprint Segmentation

    OpenAIRE

    Jomaa, Diala

    2009-01-01

    In this thesis, a new algorithm is proposed to segment the foreground of the fingerprint from the image under consideration. The algorithm uses three features: mean, variance and coherence. Based on these features, a rule system is built to help the algorithm segment the image efficiently. In addition, the proposed algorithm combines split-and-merge with a modified Otsu method. Enhancement techniques such as Gaussian filtering and histogram equalization are applied to enhance and improve the...

  18. Derivation from the Landsat 7 NDVI and ground truth validation of LAI and interception storage capacity for wetland ecosystems in Biebrza Valley, Poland

    Science.gov (United States)

    Suliga, Joanna; Chormański, Jarosław; Szporak-Wasilewska, Sylwia; Kleniewska, Małgorzata; Berezowski, Tomasz; van Griensven, Ann; Verbeiren, Boud

    2015-10-01

    Wetlands are very valuable areas because they provide a wide range of ecosystem services; modeling of wetland areas is therefore highly relevant. However, the most widely used hydrological models were developed in the 1990s and are usually not adjusted to simulate wetland conditions. In the case of wetlands, including interception storage in a model's calculations is even more challenging, because literature data hardly exist. This study includes the computation of interception storage capacity based on a Landsat 7 image and ground-truth measurements conducted in the Biebrza Valley, Poland. The method was based on collecting and weighing dry, wet and fully saturated samples of sedges. During the experiments, measurements of fresh/dry biomass and leaf area index (LAI) were performed. The research was repeated three times during the same season (May, June and July 2013) to observe the temporal variability of the parameters. Ground-truth measurements were used to validate the estimates of parameters derived from images acquired in a period similar to that of the measurement campaigns. The major advantage of using remote sensing is the ability to obtain a spatially and temporally distributed estimate of the interception storage capacity over the whole area. Results from this study proved that the interception capacity of wetland vegetation changes considerably during the vegetation season (temporal variability) and reaches its maximum value when plants are fully developed. Different areas are characterized by different interception capacities depending on the plant species present (spatial variability). This research is framed within the INTREV and HiWET projects, funded respectively by the National Science Centre (NCN) in Poland and BELSPO STEREO III.

  19. Assessment of the stress response in Columbian ground squirrels: laboratory and field validation of an enzyme immunoassay for fecal cortisol metabolites.

    Science.gov (United States)

    Bosson, Curtis O; Palme, Rupert; Boonstra, Rudy

    2009-01-01

    Stress responses play a critical role in the ecology and demography of wild animals, and the analysis of fecal hormone metabolites is a powerful noninvasive method to assess the role of stress. We characterized the metabolites of injected radiolabeled cortisol in the urine and feces of Columbian ground squirrels and validated an enzyme immunoassay for measuring fecal cortisol metabolites (FCM) with a 5α-3β,11β-diol structure by stimulation and suppression of adrenocortical activity and by evaluation of the circadian pattern of FCM excretion. In addition, we also evaluated the impact of capture, handling, and acclimation to the laboratory on FCM. Cortisol is highly metabolized, with virtually none being excreted, and of the radiolabeled cortisol injected, 31% was recovered in urine and 6.5% in feces. The lag time between cortisol injection and its appearance in urine and feces was 4.5 ± 0.82 (SE) h and 7.0 ± 0.53 (SE) h, respectively. FCM levels varied over the day, reflecting circadian variation in endogenous cortisol. Dexamethasone decreased FCM levels by 33%, and ACTH increased them by 255%. Trapping and housing initially increased FCM levels and decreased body mass, but these reversed within 3-7 d, indicating acclimation. Finally, FCM levels were modestly repeatable over time (r = 0.57) in wild, live-trapped, nonbreeding animals, indicating that FCMs provide a measure of the squirrel's stress-axis state. This assay provides a robust noninvasive assessment of the stress response of the Columbian ground squirrel and will facilitate an integration of its life history and physiology.

  20. Cross-validation of IASI/MetOp derived tropospheric δD with TES and ground-based FTIR observations

    Directory of Open Access Journals (Sweden)

    J.-L. Lacour

    2014-11-01

    The Infrared Atmospheric Sounding Interferometer (IASI), flying on board MetOp-A and MetOp-B, is able to capture fine isotopic variations of the HDO to H2O ratio (δD) in the troposphere. Such observations at the high spatio-temporal resolution of the sounder are of great interest to improve our understanding of the mechanisms controlling humidity in the troposphere. In this study we aim to empirically assess the validity of our error estimation previously evaluated theoretically. To achieve this, we compare IASI δD retrieved profiles with other available profiles of δD, from the TES infrared sounder onboard AURA and from three ground-based FTIR stations produced within the MUSICA project: the NDACC (Network for the Detection of Atmospheric Composition Change) sites Kiruna and Izaña, and the TCCON site Karlsruhe, which in addition to near-infrared TCCON spectra also records mid-infrared spectra. We describe the achievable level of agreement between the different retrievals and show that the theoretical errors are in good agreement with the empirical differences. The comparisons are made at different locations from tropical to Arctic latitudes, above sea and above land. Generally IASI and TES are similarly sensitive to δD in the free troposphere, which allows their measurements to be compared directly. At tropical latitudes, where IASI's sensitivity is lower than that of TES, we show that the agreement improves when taking the sensitivity of IASI into account in the TES retrieval. For the IASI-FTIR comparison, only direct comparisons are performed because of the similar sensitivities. We identify a quasi-negligible bias (−3‰) in the free troposphere between IASI-retrieved δD and the bias-corrected TES values, but an important bias with respect to the ground-based FTIR, reaching −47‰. We also suggest that model-satellite observation comparisons could be optimized with IASI thanks to its high spatial and temporal sampling.
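
    For reference, δD is the standard per-mil depletion of the HDO/H2O ratio relative to Vienna Standard Mean Ocean Water (VSMOW); a small illustrative helper:

        # delta-D in per mil relative to VSMOW; R is the HDO/H2O isotope ratio
        R_VSMOW = 3.1152e-4   # = 2 x (D/H)_VSMOW, with (D/H)_VSMOW = 155.76e-6

        def delta_d(r_sample):
            return (r_sample / R_VSMOW - 1.0) * 1000.0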

  1. Comparison of thyroid segmentation techniques for 3D ultrasound

    Science.gov (United States)

    Wunderling, T.; Golla, B.; Poudel, P.; Arens, C.; Friebe, M.; Hansen, C.

    2017-02-01

    The segmentation of the thyroid in ultrasound images is a field of active research. The thyroid is a gland of the endocrine system and regulates several body functions. Measuring the volume of the thyroid is regular practice in diagnosing pathological changes. In this work, we compare three approaches for semi-automatic thyroid segmentation in freehand-tracked three-dimensional ultrasound images. The approaches are based on level sets, graph cuts and feature classification. For validation, sixteen 3D ultrasound records were created with ground truth segmentations, which we make publicly available. The properties analyzed are the Dice coefficient when compared against the ground truth reference and the amount of interaction required. Our results show that in terms of Dice coefficient, all algorithms perform similarly. For interaction, however, each algorithm has advantages over the others. The graph cut-based approach gives the practitioner direct influence on the final segmentation. Level set and feature classifier require less interaction, but offer less control over the result. All three compared methods show promising results for future work and provide several possible extensions.
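
    The Dice coefficient used in the comparison is the standard overlap metric; a minimal helper (illustrative, not the authors' evaluation code):

        import numpy as np

        def dice(seg, gt):
            """2|A∩B| / (|A|+|B|) for binary masks seg and gt."""
            seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
            denom = seg.sum() + gt.sum()
            return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0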

  2. Ground measurements of the hemispherical-directional reflectance of Arctic snow covered tundra for the validation of satellite remote sensing products

    Science.gov (United States)

    Ball, C. P.; Marks, A. A.; Green, P.; Mac Arthur, A.; Fox, N.; King, M. D.

    2013-12-01

    Surface albedo is the hemispherically and wavelength-integrated reflectance over the visible, near-infrared and shortwave-infrared regions of the solar spectrum. The albedo of Arctic snow can be in excess of 0.8, and it is a critical component in the global radiation budget because it determines the proportion of solar radiation absorbed and reflected over a large part of the Earth's surface. We present here our first results on the angularly resolved surface reflectance of Arctic snow at high solar zenith angles (~80°), suitable for the validation of satellite remote sensing products. The hemispherical-directional reflectance factor (HDRF) of Arctic snow-covered tundra was measured using the GonioRAdiometric Spectrometer System (GRASS) during a three-week field campaign in Ny-Ålesund, Svalbard, in March/April 2013. The measurements provide one of the few existing HDRF datasets at high solar zenith angles for wind-blown Arctic snow-covered tundra (conditions typical of the Arctic region), and the first ground-based measure of HDRF at Ny-Ålesund. The HDRF was recorded under clear-sky conditions at 10° intervals in view zenith and 30° intervals in view azimuth, for several typical sites, over a wavelength range of 400-1500 nm at 1 nm resolution. Satellite sensors such as MODIS, AVHRR and VIIRS offer a method to monitor the surface albedo with high spatial and temporal resolution. However, snow reflectance is anisotropic and depends on the view and illumination angles and on the wavelength of the incident light. Spaceborne sensors subtend a discrete angle to the target surface and measure radiance over a limited number of narrow spectral bands. Therefore, the derivation of the surface albedo requires accurate knowledge of the surface's bidirectional reflectance as a function of wavelength. The ultimate accuracy to which satellite sensors are able to measure snow surface properties such as albedo is dependent on the accuracy of the BRDF model, which can only be assessed...

  3. Estimates of evapotranspiration for riparian sites (Eucalyptus) in the Lower Murray -Darling Basin using ground validated sap flow and vegetation index scaling techniques

    Science.gov (United States)

    Doody, T.; Nagler, P. L.; Glenn, E. P.

    2014-12-01

    Water accounting is becoming critical globally, and balancing consumptive water demands with environmental water requirements is especially difficult in arid and semi-arid regions. Within the Murray-Darling Basin (MDB) in Australia, riparian water use has not been assessed across broad scales. This study therefore aimed to apply and validate an existing U.S. riparian ecosystem evapotranspiration (ET) algorithm for the MDB river systems, to assist water resource managers in quantifying environmental water needs over wide ranges of niche conditions. Ground-based sap flow ET was correlated with remotely sensed predictions of ET to provide a method to scale annual rates of water consumption by riparian vegetation over entire irrigation districts. Sap flux was measured at nine locations on the Murrumbidgee River between July 2011 and June 2012. Remotely sensed ET was calculated using a combination of local meteorological estimates of potential ET (ETo) and rainfall and the MODIS Enhanced Vegetation Index (EVI) from selected 250 m resolution pixels. The sap flow data correlated well with MODIS EVI. Sap flow ranged from 0.81 mm/day to 3.60 mm/day and corresponded to a MODIS-based ET range of 1.43 mm/day to 2.42 mm/day. We found that mean ET across sites could be predicted by EVI-ETo methods with a standard error of about 20%, but that ET at any given site could vary much more due to differences in aquifer and soil properties among sites. Water use was within the expected range. We conclude that our algorithm, developed for US arid-land crops and riparian plants, is applicable to this region of Australia. Future work includes the development of an adjusted algorithm using these sap flow-validated results.
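
    The scaling step pairs ground-based sap flow ET with EVI and ETo. A hedged sketch of one possible calibration follows (a linear form chosen for illustration; published EVI-ET algorithms often use an exponential saturation term instead, and all names are assumptions):

        import numpy as np

        def fit_et_scaling(evi, eto, sap_flow_et):
            """Least-squares fit of ET ~ a * EVI * ETo + b."""
            evi, eto = np.asarray(evi, float), np.asarray(eto, float)
            X = np.column_stack([evi * eto, np.ones_like(evi)])
            coef, *_ = np.linalg.lstsq(X, np.asarray(sap_flow_et, float), rcond=None)
            return coef          # (a, b)

        def predict_et(evi, eto, coef):
            return coef[0] * np.asarray(evi) * np.asarray(eto) + coef[1]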

  4. CO measurements from the ACE-FTS satellite instrument: data analysis and validation using ground-based, airborne and spaceborne observations

    Directory of Open Access Journals (Sweden)

    C. Clerbaux

    2007-10-01

    The Atmospheric Chemistry Experiment (ACE) mission was launched in August 2003 to sound the atmosphere by solar occultation. Carbon monoxide (CO), a good tracer of pollution plumes and atmospheric dynamics, is one of the key species provided by the primary instrument, the ACE-Fourier Transform Spectrometer (ACE-FTS). This instrument performs measurements in both the CO 1-0 and 2-0 ro-vibrational bands, from which vertically resolved CO concentration profiles are retrieved, from the mid-troposphere to the thermosphere. This paper presents an updated description of the ACE-FTS version 2.2 CO data product, along with a comprehensive validation of these profiles using available observations (February 2004 to December 2006). We have compared the CO partial columns with ground-based measurements using Fourier transform infrared spectroscopy and millimeter-wave radiometry, and the volume mixing ratio profiles with airborne (both high-altitude balloon flight and airplane) observations. CO satellite observations provided by nadir-looking instruments (MOPITT and TES) as well as limb-viewing remote sensors (MIPAS, SMR and MLS) were also compared with the ACE-FTS CO products. We show that the ACE-FTS measurements provide CO profiles with small retrieval errors (better than 5% from the upper troposphere to 40 km, and better than 10% above). These observations agree well with the correlative measurements, considering the rather loose coincidence criteria in some cases. Based on the validation exercise, we assign the following uncertainties to the ACE-FTS measurement data: better than 15% in the upper troposphere (8–12 km), better than 30% in the lower stratosphere (12–30 km), and better than 25% from 30 to 100 km.

  6. Concurrent validity and reliability of using ground reaction force and center of pressure parameters in the determination of leg movement initiation during single leg lift.

    Science.gov (United States)

    Aldabe, Daniela; de Castro, Marcelo Peduzzi; Milosavljevic, Stephan; Bussey, Melanie Dawn

    2016-09-01

    Evaluations of postural adjustment during single leg lift require identification of the initiation of heel lift (T1). T1 measured by means of a motion analysis system is the most reliable approach. However, this method requires considerable workspace, expensive cameras, and substantial time for data processing and laboratory set-up. The use of ground reaction force (GRF) and centre of pressure (COP) data is an alternative method, as its data processing and set-up are less time-consuming. Further, kinetic data are normally collected using sampling frequencies higher than 1000 Hz, whereas kinematic data are commonly captured at 50-200 Hz. This study describes the concurrent validity and reliability of GRF and COP measurements in determining T1, using a motion analysis system as the reference standard. Kinematic and kinetic data during single leg lift were collected from ten participants. GRF and COP data were collected using one and two force plates. The displacement of a single heel marker was captured by means of ten Vicon© cameras. Kinetic and kinematic data were collected at a sampling frequency of 1000 Hz. Data were analysed in two stages: identification of key events in the kinetic data, and assessment of the concurrent validity of T1 based on the chosen key events against T1 provided by the kinematic data. The key event presenting the least systematic bias, along with a narrow 95% CI and limits of agreement against the reference standard T1, was the Baseline COPy event. The Baseline COPy event was obtained using one force plate and presented excellent between-tester reliability.
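
    A baseline-deviation detector illustrates how a key event such as Baseline COPy can be turned into a T1 estimate (the threshold, baseline window and names are assumptions for illustration, not the authors' exact criterion):

        import numpy as np

        def detect_onset(cop_y, fs=1000, baseline_s=0.5, k=3.0):
            """First time (s) the COPy signal leaves its baseline band of k SDs."""
            cop_y = np.asarray(cop_y, float)
            n0 = int(baseline_s * fs)                 # baseline window in samples
            mu, sd = cop_y[:n0].mean(), cop_y[:n0].std()
            above = np.abs(cop_y - mu) > k * sd
            return int(np.argmax(above)) / fs if above.any() else None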

  7. Myocardium at risk in ST-segment elevation myocardial infarction: comparison of T2-weighted edema imaging with the MR-assessed endocardial surface area and validation against angiographic scoring.

    Science.gov (United States)

    Fuernau, Georg; Eitel, Ingo; Franke, Vinzenz; Hildebrandt, Lysann; Meissner, Josefine; de Waha, Suzanne; Lurz, Philipp; Gutberlet, Matthias; Desch, Steffen; Schuler, Gerhard; Thiele, Holger

    2011-09-01

    The objective of this study was to assess the area at risk (AAR) in ST-segment elevation myocardial infarction with 2 different cardiac magnetic resonance (CMR) imaging methods and to compare them with the validated angiographic Alberta Provincial Project for Outcome Assessment in Coronary Heart Disease Score (APPROACH-score) in a large consecutive patient cohort. Edema imaging with T2-weighted CMR and the endocardial surface area (ESA) assessed by late gadolinium enhancement have been introduced as relatively new methods for AAR assessment in ST-segment elevation myocardial infarction. However, data on the utility and validation of these techniques are limited. A total of 197 patients undergoing primary percutaneous coronary intervention in acute ST-segment elevation myocardial infarction were included. AAR (assessed with T2-weighted edema imaging and the ESA method), infarct size, and myocardial salvage (AAR minus infarct size) were determined by CMR 2 to 4 days after primary angioplasty. Angiographic AAR scoring was performed by use of the APPROACH-score. All measurements were done offline by blinded observers. The AAR assessed by T2-weighted imaging showed good correlation with the angiographic AAR (r = 0.87; p < 0.001), whereas the AAR assessed by ESA showed a dependence on the myocardial salvage index. In contrast, no dependence of T2-weighted edema imaging or the APPROACH-score on the myocardial salvage index was seen. The AAR can be reliably assessed by T2-weighted CMR, whereas assessment of the AAR by ESA seems to be dependent on the degree of myocardial salvage, thereby underestimating the AAR in patients with high myocardial salvage such as aborted infarction. Thus, assessment of the AAR with the ESA method cannot be recommended. (Myocardial Salvage and Contrast Dye Induced Nephropathy Reduction by N-Acetylcystein [LIPSIA-N-ACC]; NCT00463749). Copyright © 2011 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  8. NIR spectroscopic method for the in-line moisture assessment during drying in a six-segmented fluid bed dryer of a continuous tablet production line: Validation of quantifying abilities and uncertainty assessment.

    Science.gov (United States)

    Fonteyne, Margot; Arruabarrena, Julen; de Beer, Jacques; Hellings, Mario; Van Den Kerkhof, Tom; Burggraeve, Anneleen; Vervaet, Chris; Remon, Jean Paul; De Beer, Thomas

    2014-11-01

    This study focuses on the thorough validation of an in-line NIR-based moisture quantification method in the six-segmented fluid bed dryer of a continuous from-powder-to-tablet manufacturing line (ConsiGma™ 25, GEA Pharma Systems nv, Wommelgem, Belgium). The moisture assessment ability of an FT-NIR spectrometer (Matrix™-F Duplex, Bruker Optics Ltd, UK) equipped with a fiber-optic Lighthouse Probe™ (LHP, GEA Pharma Systems nv, Wommelgem, Belgium) was investigated. Although NIR spectroscopy is a widely used technique for in-process moisture determination, only a minority of NIR spectroscopy methods are thoroughly validated. A moisture quantification PLS model was developed: twenty calibration experiments were conducted, during which spectra were collected at-line and then regressed against the corresponding residual moisture values obtained via Karl Fischer measurements. The developed NIR moisture quantification model was then validated by calculating accuracy profiles on the basis of the analysis results of independent in-line validation experiments. Furthermore, as the aim of the NIR method is to replace the destructive, time-consuming Karl Fischer titration, it was statistically demonstrated that the new NIR method performs at least as well as the Karl Fischer reference method.
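
    The calibration step (spectra regressed against Karl Fischer values) is a textbook PLS problem; a hedged sketch with scikit-learn follows (the component count and all names are assumptions, not the validated method):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        def calibrate_moisture(spectra, kf_moisture, n_components=3):
            """Fit PLS of NIR spectra vs. Karl Fischer moisture; report RMSECV."""
            y = np.asarray(kf_moisture, float)
            pls = PLSRegression(n_components=n_components).fit(spectra, y)
            pred = cross_val_predict(pls, spectra, y, cv=5).ravel()
            rmsecv = float(np.sqrt(np.mean((pred - y) ** 2)))
            return pls, rmsecv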

  9. LIF LiDAR high resolution ground truth data, suitable to validate medium-resolution bands of MODIS/Terra radiometer in case of inner waterbody ecological monitoring

    Science.gov (United States)

    Pelevin, Vadim; Zavialov, Peter; Zlinszky, Andras; Khimchenko, Elizaveta; Toth, Viktor; Kremenetskiy, Vyacheslav

    2017-04-01

    The report is based on field measurements on Lake Balaton, Hungary, in September 2008, obtained with the portable Light Induced Fluorescence (LIF) LiDAR UFL-8. The instrument was tested in natural lake waters and validated against conventional contact measurements. We had the opportunity to compare our results with MODIS/Terra spectroradiometer satellite images received at the satellite monitoring station of the Eötvös Loránd University (Budapest, Hungary), in an attempt at LiDAR-based calibration of the satellite's medium-resolution band data. Water quality parameters were surveyed with the UFL-8 in a time interval very close to the satellite overpass. High-resolution maps of the spatial distributions of chlorophyll-a, chromophoric dissolved organic matter and total suspended sediments were obtained. Our results show that the resolution provided by laboratory measurements on a few water samples does not resemble actual conditions in the lake, and that it is more efficient to measure these parameters less accurately but with better spatial coverage using the LiDAR. The UFL instrument has great potential for collecting ground-truth data for satellite remote sensing of these parameters. Its measurement accuracy is comparable to classic water sample measurements, its measurement speed is high, and large areas can be surveyed in a time interval very close to the satellite overpass.

  10. Validation of S-NPP VIIRS Day-Night band and M bands performance using ground reference targets of Libya 4 and Dome C

    Science.gov (United States)

    Chen, Xuexia; Wu, Aisheng; Xiong, Xiaoxiong; Lei, Ning; Wang, Zhipeng; Chiang, Kwofu

    2015-09-01

    This paper presents methodologies developed and implemented by the NASA VIIRS Calibration Support Team (VCST) to validate the S-NPP VIIRS Day-Night Band (DNB) and M-band calibration performance. The Sensor Data Records produced by the Interface Data Processing Segment (IDPS) and the NASA Land Product Evaluation and Algorithm Testing Element (PEATE) are acquired from near-nadir overpasses of the Libya 4 desert and Dome C snow surfaces. In the past 3.5 years, the modulated relative spectral responses (RSR) have changed with time, leading to a 3.8% increase in the DNB-sensed solar irradiance and increases of 0.1% or less for the M4-M7 bands. After excluding data before April 5th, 2013, IDPS DNB radiance and reflectance data are consistent with Land PEATE data to within 0.6% for the Libya 4 site and within 2% for the Dome C site. These differences are caused by inconsistent LUTs and algorithms used in calibration. For the Libya 4 site, the SCIAMACHY spectral and modulated RSR derived top-of-atmosphere (TOA) reflectances are compared with the Land PEATE TOA reflectance, and they indicate decreases of 1.2% and 1.3%, respectively. The radiance of the Land PEATE DNB is compared with the simulated radiance from aggregated M bands (M4, M5, and M7). These data trends match well, with 2% or less difference for the Libya 4 site and 4% or less for Dome C. This study demonstrates the consistent quality of DNB and M-band calibration for Land PEATE products during the operational period and for IDPS products after April 5th, 2013.

  12. NEPR Ground Validation Points 2015

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater photos and videos taken in shallow water (0-35m) benthic habitats surrounding Northeast Puerto Rico and Culebra...

  13. Validation of satellite-based noontime UVI with NDACC ground-based instruments: influence of topography, environment and satellite overpass time

    Science.gov (United States)

    Brogniez, Colette; Auriol, Frédérique; Deroo, Christine; Arola, Antti; Kujanpää, Jukka; Sauvage, Béatrice; Kalakoski, Niilo; Riku Aleksi Pitkänen, Mikko; Catalfamo, Maxime; Metzger, Jean-Marc; Tournois, Guy; Da Conceicao, Pierre

    2016-12-01

    Spectral solar UV radiation measurements are performed in France using three spectroradiometers located at very different sites. One is installed in Villeneuve d'Ascq, in the north of France (VDA); it is an urban site in a topographically flat region. Another instrument is installed at the Observatoire de Haute-Provence, located in the southern French Alps (OHP); it is a rural mountainous site. The third instrument is installed in Saint-Denis, Réunion Island (SDR); it is a coastal urban site on a small mountainous island in the southern tropics. The three instruments are affiliated with the Network for the Detection of Atmospheric Composition Change (NDACC) and carry out routine measurements to monitor spectral solar UV radiation and enable derivation of the UV index (UVI). The ground-based UVI values observed at solar noon are compared to similar quantities derived from Ozone Monitoring Instrument (OMI, onboard the Aura satellite) and second Global Ozone Monitoring Experiment (GOME-2, onboard the Metop-A satellite) measurements for validation of these satellite-based products. The present study concerns the period from 2009 to September 2012, the date of the implementation of a new OMI processing tool. The new version (v1.3) introduces a correction for absorbing aerosols that were not considered in the old version (v1.2). Both versions of the OMI UVI products were available before September 2012 and are used to assess the improvement of the new processing tool. On average, estimates from the satellite instruments overestimate surface UVI at solar noon. Under cloudless conditions, the satellite-derived estimates of UVI compare satisfactorily with ground-based data: the median relative bias is less than 8% at VDA and 4% at SDR for both OMI v1.3 and GOME-2, and about 6% for OMI v1.3 and 2% for GOME-2 at OHP. The correlation between satellite-based and ground-based data is better at VDA and OHP (about 0.99) than at SDR (0.96) for both space-borne instruments. For all...
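
    The median relative bias quoted above is straightforward to compute; an illustrative helper (names are assumptions):

        import numpy as np

        def median_relative_bias(sat, ground):
            """Median of (satellite - ground) / ground, in percent."""
            sat, ground = np.asarray(sat, float), np.asarray(ground, float)
            return 100.0 * np.median((sat - ground) / ground)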

  14. Performance evaluation of automated segmentation software on optical coherence tomography volume data.

    Science.gov (United States)

    Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E; Debuc, Delia Cabrera

    2016-05-01

    Over the past two decades, a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of widely available retinal features observed in clinical settings. In addition, there does not exist an appropriate OCT dataset with ground truth that reflects the realities of everyday retinal features observed in clinical settings. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, the validation of segmentation algorithms has usually been performed by comparison with manual labelings from each study, and there has been a lack of a common ground truth. Therefore, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of retinal tissue in OCT images. It also evaluates and compares the performance of these software tools against a common ground truth.

  15. Cross-validation of IASI/MetOp derived tropospheric δD with TES and ground-based FTIR observations

    Science.gov (United States)

    Lacour, J.-L.; Clarisse, L.; Worden, J.; Schneider, M.; Barthlott, S.; Hase, F.; Risi, C.; Clerbaux, C.; Hurtmans, D.; Coheur, P.-F.

    2015-03-01

    The Infrared Atmospheric Sounding Interferometer (IASI), flying onboard MetOp-A and MetOp-B, is able to capture fine isotopic variations of the HDO to H2O ratio (δD) in the troposphere. Such observations at the high spatio-temporal resolution of the sounder are of great interest to improve our understanding of the mechanisms controlling humidity in the troposphere. In this study we aim to empirically assess the validity of our error estimation previously evaluated theoretically. To achieve this, we compare IASI δD retrieved profiles with other available profiles of δD, from the TES infrared sounder onboard AURA and from three ground-based FTIR stations produced within the MUSICA project: the NDACC (Network for the Detection of Atmospheric Composition Change) sites Kiruna and Izaña, and the TCCON site Karlsruhe, which in addition to near-infrared TCCON spectra also records mid-infrared spectra. We describe the achievable level of agreement between the different retrievals and show that the theoretical errors are in good agreement with the empirical differences. The comparisons are made at different locations from tropical to Arctic latitudes, above sea and above land. Generally IASI and TES are similarly sensitive to δD in the free troposphere, which allows one to compare their measurements directly. At tropical latitudes, where IASI's sensitivity is lower than that of TES, we show that the agreement improves when taking into account the sensitivity of IASI in the TES retrieval. For the IASI-FTIR comparison, only direct comparisons are performed because the sensitivity profiles of the two observing systems do not allow their differences in sensitivity to be taken into account. We identify a quasi-negligible bias (−3‰) in the free troposphere between IASI-retrieved δD and the bias-corrected TES values, but an important bias with respect to the ground-based FTIR, reaching −47‰. We also suggest that model-satellite observation comparisons could be optimized with IASI thanks to its high spatial and temporal sampling.

  16. Analysis of global and regional CO burdens measured from space between 2000 and 2009 and validated by ground-based solar tracking spectrometers

    Directory of Open Access Journals (Sweden)

    L. Yurganov

    2010-04-01

    Interannual variations in AIRS- and MOPITT-retrieved CO burdens are validated, corrected, and compared with CO emissions from wild fires from the Global Fire Emission Dataset (GFED2) inventory. Validation of daily mean CO total column (TC) retrievals from MOPITT version 3 and AIRS version 5 is performed through comparisons with archived TC data from the Network for the Detection of Atmospheric Composition Change (NDACC) ground-based Fourier Transform Spectrometers (FTS) between March 2000 and December 2007. MOPITT V3 retrievals exhibit an increasing temporal bias with a rate of 1.4–1.8% per year; thus far, AIRS retrievals appear to be more stable. For the lowest CO values in the Southern Hemisphere (SH), AIRS TC retrievals overestimate FTS TC by 20%. MOPITT's bias and standard deviation do not depend on the absolute values of CO TC. Empirical corrections are derived for the AIRS and MOPITT retrievals based on the observed annually averaged bias versus the FTS TC. The recently published MOPITT V4 is found to be in good agreement with MOPITT V3 as corrected by us (with the exception of the 2000–2001 period). With these corrections, CO burdens from AIRS V5 and MOPITT V3 (as well as MOPITT V4) come into good agreement in the mid-latitudes of the Northern Hemisphere (NH) and in the tropical belt. In the SH, agreement between AIRS and MOPITT CO burdens is better for the larger CO TC in austral winter and worse in austral summer, when CO TC are smaller. Before July 2008, all variations in retrieved CO burden can be explained by changes in fire emissions. After July 2008, global and tropical CO burdens decreased until October before recovering by the beginning of 2009. The NH CO burden also decreased, but reached a minimum in January 2009 before starting to recover. The decrease in tropical CO burdens is explained by lower-than-usual fire emissions in South America and Indonesia. This decrease in tropical emissions also accounts for most of the change in the global CO burden. However, no...
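
    An empirical correction for a linear temporal drift of the kind reported for MOPITT V3 could look like the following (the functional form and the 1.6%/yr mid-range rate are assumptions for illustration, not the authors' published correction):

        import numpy as np

        def correct_drift(tc, years_since_ref, rate_per_year=0.016):
            """Remove a linear relative drift from total-column retrievals."""
            tc = np.asarray(tc, float)
            return tc / (1.0 + rate_per_year * np.asarray(years_since_ref, float))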

  17. Methane cross-validation between three Fourier transform spectrometers: SCISAT ACE-FTS, GOSAT TANSO-FTS, and ground-based FTS measurements in the Canadian high Arctic

    Science.gov (United States)

    Holl, Gerrit; Walker, Kaley A.; Conway, Stephanie; Saitoh, Naoko; Boone, Chris D.; Strong, Kimberly; Drummond, James R.

    2016-05-01

    We present cross-validation of remote sensing measurements of methane profiles in the Canadian high Arctic. Accurate and precise measurements of methane are essential to understand quantitatively its role in the climate system and in global change. Here, we show a cross-validation between three data sets: two from spaceborne instruments and one from a ground-based instrument. All are Fourier transform spectrometers (FTSs). We consider the Canadian SCISAT Atmospheric Chemistry Experiment (ACE)-FTS, a solar occultation infrared spectrometer operating since 2004, and the thermal infrared band of the Japanese Greenhouse Gases Observing Satellite (GOSAT) Thermal And Near infrared Sensor for carbon Observation (TANSO)-FTS, a nadir/off-nadir scanning FTS instrument operating at solar and terrestrial infrared wavelengths, since 2009. The ground-based instrument is a Bruker 125HR Fourier transform infrared (FTIR) spectrometer, measuring mid-infrared solar absorption spectra at the Polar Environment Atmospheric Research Laboratory (PEARL) Ridge Laboratory at Eureka, Nunavut (80° N, 86° W) since 2006. For each pair of instruments, measurements are collocated within 500 km and 24 h. An additional collocation criterion based on potential vorticity values was found not to significantly affect differences between measurements. Profiles are regridded to a common vertical grid for each comparison set. To account for differing vertical resolutions, ACE-FTS measurements are smoothed to the resolution of either PEARL-FTS or TANSO-FTS, and PEARL-FTS measurements are smoothed to the TANSO-FTS resolution. Differences for each pair are examined in terms of profile and partial columns. During the period considered, the number of collocations for each pair is large enough to obtain a good sample size (from several hundred to tens of thousands depending on pair and configuration). Considering full profiles, the degrees of freedom for signal (DOFS) are between 0.2 and 0.7 for TANSO-FTS and...
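
    The smoothing of a higher-resolution profile to a coarser instrument's resolution is the standard averaging-kernel operation (cf. Rodgers and Connor, 2003); the names below are illustrative:

        import numpy as np

        def smooth_profile(x_high, x_apriori, avg_kernel):
            """x_s = x_a + A (x_h - x_a): degrade x_high to the coarser resolution."""
            x_high, x_apriori = np.asarray(x_high, float), np.asarray(x_apriori, float)
            return x_apriori + np.asarray(avg_kernel, float) @ (x_high - x_apriori)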

  18. Control of groundwater pH during bioremediation: Improvement and validation of a geochemical model to assess the buffering potential of ground silicate minerals

    Science.gov (United States)

    Lacroix, Elsa; Brovelli, Alessandro; Holliger, Christof; Barry, D. A.

    2014-05-01

    Accurate control of groundwater pH is of critical importance for in situ biological treatment of chlorinated solvents. The use of ground silicate minerals mixed with groundwater is an appealing buffering strategy as silicate minerals may act as long-term sources of alkalinity. In a previous study, we developed a geochemical model for evaluation of the pH buffering capacity of such minerals. The model included the main microbial processes driving groundwater acidification as well as mineral dissolution. In the present study, abiotic mineral dissolution experiments were conducted with five silicate minerals (andradite, diopside, fayalite, forsterite, nepheline). The goal of the study was to validate the model and to test the buffering capacity of the candidate minerals identified previously. These five minerals increased the pH from acidic to neutral and slightly basic values. The model was revised and improved to better represent the experimental observations. In particular, the experiments revealed the importance of secondary mineral precipitation for the buffering potential of silicates, a process not included in the original formulation. The main secondary phases likely to precipitate were identified through model calibration, as well as the degree of saturation at which they formed. The predictions of the revised geochemical model were in good agreement with the observations, with a correlation coefficient higher than 0.9 in most cases. This study confirmed the potential of silicates to act as pH control agents and showed the reliability of the geochemical model, which can be used as a design tool for field applications.

  19. Two-Column Aerosol Project (TCAP): Ground-Based Radiation and Aerosol Validation Using the NOAA Mobile SURFRAD Station Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Michalsky, Joseph [National Oceanic and Atmospheric Administration (NOAA), Boulder, CO (United States); Lantz, Kathy [Univ. of Colorado, Boulder, CO (United States)

    2016-05-01

    The National Oceanic and Atmospheric Administration (NOAA) is preparing for the launch of the Geostationary Operational Environmental Satellite R-Series (GOES-R) satellite in 2015. This satellite will feature higher temporal resolution (5-minute versus 30-minute sampling) and higher spatial resolution (0.5 km versus 1 km in the visible channel) than current GOES instruments provide. NOAA’s National Environmental Satellite Data and Information Service has funded the Global Monitoring Division at the Earth System Research Laboratory to provide ground-based validation data for many of the new and old products the new GOES instruments will retrieve, specifically related to radiation at the surface and to aerosol and its extensive and intensive properties in the column. The Two-Column Aerosol Project (TCAP) had an emphasis on aerosol; therefore, we asked to be involved in this campaign to debug our new instrumentation and to provide a new capability that the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility’s Mobile Facilities (AMF) did not possess, namely surface albedo measurement out to 1625 nm. This gave us a chance to test remote operation of our new multi-filter rotating shadowband radiometer/multi-filter radiometer (MFRSR/MFR) combination. We did not deploy standard broadband shortwave and longwave radiation instrumentation because ARM does this as part of every AMF deployment. As it turned out, the ARM standard MFRSR had issues, and we were able to provide the aerosol column data for the first 2 months of the campaign covering the summer flight phase of the deployment. Using these data, we were able to work with personnel at Pacific Northwest National Laboratory (PNNL) to retrieve not only aerosol optical depth (AOD) but also single-scattering albedo and the asymmetry parameter.

  20. Reflectance conversion methods for the VIS/NIR imaging spectrometer aboard the Chang'E-3 lunar rover: based on ground validation experiment data

    Institute of Scientific and Technical Information of China (English)

    Bin Liu; Jian-Zhong Liu; Guang-Liang Zhang; Zong-Cheng Ling; Jiang Zhang; Zhi-Ping He; Ben-Yong Yang

    2013-01-01

    The second phase of the Chang'E Program (also named Chang'E-3) has the goal of landing and performing in-situ detection on the lunar surface. A VIS/NIR imaging spectrometer (VNIS) will be carried on the Chang'E-3 lunar rover to detect the distribution of lunar minerals and resources. VNIS is the first mission in history to perform in-situ spectral measurement on the surface of the Moon; its reflectance data are fundamental for the interpretation of lunar composition, and their quality greatly affects the accuracy of lunar element and mineral determination. Until now, in-situ detection by imaging spectrometers has only been performed by rovers on Mars. We first review reflectance conversion methods for rovers on Mars (Viking landers, Pathfinder and Mars Exploration rovers, etc.). Secondly, we discuss whether these conversion methods used on Mars can be applied to lunar in-situ detection. We also applied data from a laboratory bidirectional reflectance distribution function (BRDF) experiment using simulated lunar soil to test the applicability of this method. Finally, we modify reflectance conversion methods used on Mars by considering differences between environments on the Moon and Mars and apply the methods to experimental data obtained from the ground validation of VNIS. These results were obtained by comparing reflectance data from the VNIS measured in the laboratory with those from a standard spectrometer obtained at the same time and under the same observing conditions. The shape and amplitude of the spectrum fits well, and the spectral uncertainty parameters for most samples are within 8%, except for the ilmenite sample, which has a low albedo. In conclusion, our reflectance conversion method is suitable for lunar in-situ detection.
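
    The panel-based conversion used on the Mars rovers reviewed above, and adapted here for the Moon, amounts to ratioing target counts against a calibration panel of known reflectance imaged under the same illumination. Below is a minimal sketch under that assumption; all counts, values and names are hypothetical.

        import numpy as np

        def to_reflectance(dn_target, dn_panel, panel_reflectance):
            # Ratio the target counts against a calibration panel imaged under
            # the same illumination, then scale by the panel's known reflectance.
            return dn_target / dn_panel * panel_reflectance

        dn_soil = np.array([1200.0, 1900.0, 2400.0, 2600.0])   # raw counts, target
        dn_panel = np.array([4000.0, 4200.0, 4300.0, 4300.0])  # raw counts, panel
        r_panel = 0.99                                         # near-Lambertian panel

        print(to_reflectance(dn_soil, dn_panel, r_panel))      # reflectance factors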

  2. [Segmental neurofibromatosis].

    Science.gov (United States)

    Zulaica, A; Peteiro, C; Pereiro, M; Pereiro Ferreiros, M; Quintas, C; Toribio, J

    1989-01-01

    Four cases of segmental neurofibromatosis (SNF) are reported. It is a rare entity considered to be a localized variant of neurofibromatosis (NF), Riccardi's type V. Two patients are male and two are female. The lesions were located on the head in one patient and on the trunk in the other three. No family history or transmission to progeny was observed. The other organs were unaffected.

  3. Validation of the IASI operational CH4 and N2O products using ground-based Fourier Transform Spectrometer: preliminary results at the Izaña Observatory (28° N, 17° W)

    Directory of Open Access Journals (Sweden)

    Omaira García

    2014-01-01

    Full Text Available Within the project VALIASI (VALidation of IASI level 2 products), the validation of the IASI operational atmospheric trace gas products (total column amounts of H2O, O3, CH4, N2O, CO2 and CO, as well as H2O and O3 profiles) will be carried out. Ground-based FTS (Fourier Transform Spectrometer) trace gas measurements made in the framework of NDACC (Network for the Detection of Atmospheric Composition Change) serve as the validation reference. In this work, we will present the validation methodology developed for this project and show the first intercomparison results obtained for the Izaña Atmospheric Observatory between 2008 and 2012. As examples, we focus on two of the most important greenhouse gases, CH4 and N2O.

  4. Validation of an image registration and segmentation method to measure stent graft motion on ECG-gated CT using a physical dynamic stent graft model

    Science.gov (United States)

    Koenrades, Maaike A.; Struijs, Ella M.; Klein, Almar; Kuipers, Henny; Geelkerken, Robert H.; Slump, Cornelis H.

    2017-03-01

    The application of endovascular aortic aneurysm repair has expanded over the last decade. However, the long-term performance of stent grafts, in particular durable fixation and sealing to the aortic wall, remains the main concern of this treatment. The sealing and fixation are challenged at every heartbeat by downward and radial pulsatile forces. Yet knowledge of the cardiac-induced dynamics of implanted stent grafts is sparse, as these dynamics are not measured in routine clinical follow-up. Such knowledge is particularly relevant to perform fatigue tests, to predict failure in the individual patient and to improve stent graft designs. Using a physical dynamic stent graft model in an anthropomorphic phantom, we have evaluated the performance of our previously proposed segmentation and registration algorithm to detect periodic motion of stent grafts on ECG-gated (3D+t) CT data. Abdominal aortic motion profiles were simulated in two series of Gaussian-based patterns with different amplitudes and frequencies. Experiments were performed on a 64-slice CT scanner with a helical scan protocol and retrospective gating. Motion patterns as estimated by our algorithm were compared to motion patterns obtained from optical camera recordings of the physical stent graft model in motion. Absolute errors of the patterns' amplitude were smaller than 0.28 mm. Even the motion pattern with an amplitude of 0.23 mm was measured, although the amplitude of motion was overestimated by the algorithm by 43%. We conclude that the algorithm performs well for measurement of stent graft motion in the mm and sub-mm range. This ultimately is expected to aid in patient-specific risk assessment and in improving stent graft designs.

  5. VALIDATION OF COOKING TIMES AND TEMPERATURES FOR THERMAL INACTIVATION OF YERSINIA PESTIS STRAINS KIM5 AND CDC-A1112 IN GROUND BEEF

    Science.gov (United States)

    The thermal stability of Yersinia pestis inoculated into retail ground beef (25 per cent fat) and heated in a temperature-controlled water bath or cooked on commercial grills was evaluated. Irradiated ground beef (3-g portions) was inoculated with ca. 6.7 log10 CFU/g of Y. pestis strain KIM5 and hea...
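
    The abstract is truncated, but studies of this kind are typically summarized by D-values under the classical log-linear survival model log10 N(t) = log10 N0 - t/D. The sketch below fits that model to invented survivor counts; it illustrates the analysis, and is not data or code from the study.

        import numpy as np

        t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # heating time (min), invented
        log_n = np.array([6.7, 5.9, 5.1, 4.3, 3.5])  # log10 CFU/g survivors, invented

        slope, intercept = np.polyfit(t, log_n, 1)   # fit log-linear survival curve
        d_value = -1.0 / slope                       # minutes per 1-log10 reduction
        print(f"D-value: {d_value:.2f} min")         # -> D-value: 1.25 min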

  6. Mixed segmentation

    DEFF Research Database (Denmark)

    Bonde, Anders; Aagaard, Morten; Hansen, Allan Grutt

    This book is about using recent developments in the fields of data analytics and data visualization to frame new ways of identifying target groups in media communication. Based on a mixed-methods approach, the authors combine psychophysiological monitoring (galvanic skin response) with textual...... content analysis and audience segmentation in a single-source perspective. The aim is to explain and understand target groups in relation to, on the one hand, emotional response to commercials or other forms of audio-visual communication and, on the other hand, living preferences and personality traits...

  7. Estimation of Carbon Budgets for Croplands by Combining High Resolution Remote Sensing Data with a Crop Model and Validation Ground Data

    Science.gov (United States)

    Mangiarotti, S.; Veloso, A.; Ceschia, E.; Tallec, T.; Dejoux, J. F.

    2015-12-01

    Croplands occupy large areas of Earth's land surface, playing a key role in the terrestrial carbon cycle. Hence, it is essential to quantify and analyze the carbon fluxes from those agro-ecosystems, since they contribute to climate change and are impacted by the environmental conditions. In this study we propose a regional modeling approach that combines high spatial and temporal resolution (HSTR) optical remote sensing data with a crop model and a large set of in-situ measurements for model calibration and validation. The study area is located in southwest France, and the model that we evaluate, called SAFY-CO2, is a semi-empirical one based on Monteith's light-use efficiency theory and adapted for simulating the components of the net ecosystem CO2 fluxes (NEE) and of the annual net ecosystem carbon budgets (NECB) at a daily time step. The approach is based on the assimilation of satellite-derived green area index (GAI) maps for calibrating a number of the SAFY-CO2 parameters linked to crop phenology. HSTR data from the Formosat-2 and SPOT satellites were used to produce the GAI maps. The experimental data set includes eddy covariance measurements of net CO2 fluxes from two experimental sites, partitioned into gross primary production (GPP) and ecosystem respiration (Reco). It also includes measurements of GAI, biomass and yield between 2005 and 2011, focusing on the winter wheat crop. The results showed that the SAFY-CO2 model correctly reproduced the biomass production, its dynamics and the yield (relative errors about 24%) in contrasting climatic, environmental and management conditions. The net CO2 flux components estimated with the model were overall in agreement with the ground data, presenting good correlations (R² about 0.93 for GPP, 0.77 for Reco and 0.86 for NEE). The evaluation of the modelled NECB for the different site-years highlighted the importance of having accurate estimates of each component of the NECB. Future works aim at considering
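
    The core of Monteith's light-use efficiency theory, on which SAFY-CO2 builds, can be written as GPP = PAR × fAPAR × LUE. The snippet below is a minimal sketch of this relation with invented parameter values; it is not the SAFY-CO2 implementation.

        import numpy as np

        par = np.array([8.5, 9.2, 10.1, 7.8])       # incident PAR (MJ m-2 day-1)
        fapar = np.array([0.35, 0.52, 0.68, 0.71])  # absorbed fraction, e.g. from GAI
        lue = 2.8                                   # light-use efficiency (g C MJ-1)

        gpp = par * fapar * lue                     # daily gross primary production
        print(gpp)                                  # g C m-2 day-1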

  8. Technical Note: Validation of Odin/SMR limb observations of ozone, comparisons with OSIRIS, POAM III, ground-based and balloon-borne instruments

    Directory of Open Access Journals (Sweden)

    F. Jégou

    2008-01-01

    Full Text Available The Odin satellite carries two instruments capable of determining stratospheric ozone profiles by limb sounding: the Sub-Millimetre Radiometer (SMR) and the UV-visible spectrograph of the OSIRIS (Optical Spectrograph and InfraRed Imager System) instrument. A large number of ozone profile measurements were performed during six years, from November 2001 to the present. This ozone dataset is used here to make quantitative comparisons with satellite measurements in order to assess the quality of the Odin/SMR ozone measurements. In a first step, we compare Swedish SMR retrievals version 2.1, French SMR ozone retrievals version 222 (both from the 501.8 GHz band), and the OSIRIS retrievals version 3.0, with the operational version 4.0 ozone product from POAM III (Polar Ozone Atmospheric Measurement). In a second step, we refine the Odin/SMR validation by comparisons with ground-based instruments and balloon-borne observations. We use observations carried out within the framework of the Network for Detection of Atmospheric Composition Change (NDACC) and balloon flight missions conducted by the Canadian Space Agency (CSA), the Laboratoire de Physique et de Chimie de l'Environnement (LPCE, Orléans, France), and the Service d'Aéronomie (SA, Paris, France). Coincidence criteria were 5° in latitude × 10° in longitude, and 5 h in time for Odin/POAM III comparisons, 12 h for Odin/NDACC comparisons, and 72 h for Odin/balloon comparisons. Agreement is found with the POAM III experiment (10–60 km) within −0.3±0.2 ppmv (bias ± standard deviation) for SMR (v222, v2.1) and within −0.5±0.2 ppmv for OSIRIS (v3.0). Odin ozone mixing ratio products are systematically slightly lower than the POAM III data and show an ozone maximum lower by 1–5 km in altitude. The comparisons with the NDACC data (10–34 km for ozonesondes, 10–50 km for lidars, 10–60 km for microwave instruments) yield good agreement within −0.15±0.3 ppmv for the SMR data and −0.3±0.3 ppmv

  9. Automatic segmentation of kidneys from non-contrast CT images using efficient belief propagation

    Science.gov (United States)

    Liu, Jianfei; Linguraru, Marius George; Wang, Shijun; Summers, Ronald M.

    2013-03-01

    CT colonography (CTC) can increase the chance of detecting high-risk lesions not only within the colon but anywhere in the abdomen, at low cost. Extracolonic findings such as calculi and masses are frequently found in the kidneys on CTC. Accurate kidney segmentation is an important step to detect extracolonic findings in the kidneys. However, noncontrast CTC images make the task of kidney segmentation substantially challenging because the intensity values of kidney parenchyma are similar to those of adjacent structures. In this paper, we present a fully automatic kidney segmentation algorithm to support extracolonic diagnosis from CTC data. It is built upon three major contributions: 1) localize kidney search regions by exploiting the segmented liver and spleen as well as body symmetry; 2) construct a probabilistic shape prior handling the issue of the kidney touching other organs; 3) employ efficient belief propagation on the shape prior to extract the kidneys. We evaluated the accuracy of our algorithm on five non-contrast CTC datasets with manual kidney segmentation as the ground truth. The Dice volume overlaps were 88%/89%, the root-mean-squared errors were 3.4 mm/2.8 mm, and the average surface distances were 2.1 mm/1.9 mm for the left/right kidney respectively. We also validated the robustness on 27 additional CTC cases, and 23 datasets were successfully segmented. In four problematic cases, the segmentation of the left kidney failed due to problems with the spleen segmentation. The results demonstrated that the proposed algorithm could automatically and accurately segment kidneys from CTC images, given the prior correct segmentation of the liver and spleen.
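
    The Dice volume overlap used above to score the kidney masks is simple to compute from binary arrays; a toy sketch follows (not the authors' code).

        import numpy as np

        def dice(a, b):
            # Dice volume overlap of two binary masks.
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        auto = np.zeros((10, 10), bool)
        auto[2:8, 2:8] = True                      # automatic segmentation (toy)
        manual = np.zeros((10, 10), bool)
        manual[3:9, 3:9] = True                    # manual ground truth (toy)
        print(f"Dice: {dice(auto, manual):.2f}")   # -> Dice: 0.69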

  10. Segmented blockcopolymers with uniform amide segments

    NARCIS (Netherlands)

    Husken, D.; Krijgsman, J.; Gaymans, R.J.

    2004-01-01

    Segmented blockcopolymers based on poly(tetramethylene oxide) (PTMO) soft segments and uniform crystallisable tetra-amide segments (TxTxT) are made via polycondensation. The PTMO soft segments, with a molecular weight of 1000 g/mol, are extended with terephthalic groups to a molecular weight of 6000

  11. Clinical Validation of Atlas-Based Auto-Segmentation of Multiple Target Volumes and Normal Tissue (Swallowing/Mastication) Structures in the Head and Neck

    Energy Technology Data Exchange (ETDEWEB)

    Teguh, David N. [Department of Radiation Oncology, Erasmus Medical Center-Daniel den Hoed Cancer Center, Rotterdam (Netherlands); Levendag, Peter C., E-mail: p.levendag@erasmusmc.nl [Department of Radiation Oncology, Erasmus Medical Center-Daniel den Hoed Cancer Center, Rotterdam (Netherlands); Voet, Peter W.J.; Al-Mamgani, Abrahim [Department of Radiation Oncology, Erasmus Medical Center-Daniel den Hoed Cancer Center, Rotterdam (Netherlands); Han Xiao; Wolf, Theresa K.; Hibbard, Lyndon S. [Elekta-CMS Software, Maryland Heights, MO 63043 (United States); Nowak, Peter; Akhiat, Hafid; Dirkx, Maarten L.P.; Heijmen, Ben J.M.; Hoogeman, Mischa S. [Department of Radiation Oncology, Erasmus Medical Center-Daniel den Hoed Cancer Center, Rotterdam (Netherlands)

    2011-11-15

    Purpose: To validate and clinically evaluate autocontouring using atlas-based autosegmentation (ABAS) of computed tomography images. Methods and Materials: The data from 10 head-and-neck patients were selected as input for ABAS, and neck levels I-V and 20 organs at risk were manually contoured according to published guidelines. The total contouring times were recorded. Two different ABAS strategies, multiple and single subject, were evaluated, and the similarity of the autocontours with the atlas contours was assessed using Dice coefficients and the mean distances, using the leave-one-out method. For 12 clinically treated patients, 5 experienced observers edited the autosegmented contours. The editing times were recorded. The Dice coefficients and mean distances were calculated among the clinically used contours, autocontours, and edited autocontours. Finally, an expert panel scored all autocontours and the edited autocontours regarding their adequacy relative to the published atlas. Results: The time to autosegment all the structures using ABAS was 7 min/patient. No significant differences were observed in the autosegmentation accuracy for stage N0 and N+ patients. The multisubject atlas performed best, with a Dice coefficient and mean distance of 0.74 and 2 mm, 0.67 and 3 mm, 0.71 and 2 mm, 0.50 and 2 mm, and 0.78 and 2 mm for the salivary glands, neck levels, chewing muscles, swallowing muscles, and spinal cord-brainstem, respectively. The mean Dice coefficient and mean distance of the autocontours vs. the clinical contours was 0.8 and 2.4 mm for the neck levels and salivary glands, respectively. For the autocontours vs. the edited autocontours, the mean Dice coefficient and mean distance was 0.9 and 1.6 mm, respectively. The expert panel scored 100% of the autocontours as a 'minor deviation, editable' or better. The expert panel scored 88% of the edited contours as good compared with 83% of the clinical contours. The total editing time was 66 min

  12. Stochastic ground motion simulation

    Science.gov (United States)

    Rezaeian, Sanaz; Xiaodan, Sun; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Siu-Kui Au, Ivan

    2014-01-01

    Strong earthquake ground motion records are fundamental in engineering applications. Ground motion time series are used in response-history dynamic analysis of structural or geotechnical systems. In such analysis, the validity of predicted responses depends on the validity of the input excitations. Ground motion records are also used to develop ground motion prediction equations (GMPEs) for intensity measures such as spectral accelerations that are used in response-spectrum dynamic analysis. Despite the thousands of available strong ground motion records, there remains a shortage of records for large-magnitude earthquakes at short distances or in specific regions, as well as records that sample specific combinations of source, path, and site characteristics.

  13. Combining Multiple Knowledge Sources for Discourse Segmentation

    CERN Document Server

    Litman, D J; Litman, Diane J.; Passonneau, Rebecca J.

    1995-01-01

    We predict discourse segment boundaries from linguistic features of utterances, using a corpus of spoken narratives as data. We present two methods for developing segmentation algorithms from training data: hand tuning and machine learning. When multiple types of features are used, results approach human performance on an independent test set (both methods), and using cross-validation (machine learning).

  14. Revealing Latent Value of Clinically Acquired CTs of Traumatic Brain Injury Through Multi-Atlas Segmentation in a Retrospective Study of 1,003 with External Cross-Validation.

    Science.gov (United States)

    Plassard, Andrew J; Kelly, Patrick D; Asman, Andrew J; Kang, Hakmook; Patel, Mayur B; Landman, Bennett A

    2015-03-20

    Medical imaging plays a key role in guiding treatment of traumatic brain injury (TBI) and for diagnosing intracranial hemorrhage; most commonly rapid computed tomography (CT) imaging is performed. Outcomes for patients with TBI are variable and difficult to predict upon hospital admission. Quantitative outcome scales (e.g., the Marshall classification) have been proposed to grade TBI severity on CT, but such measures have had relatively low value in staging patients by prognosis. Herein, we examine a cohort of 1,003 subjects admitted for TBI and imaged clinically to identify potential prognostic metrics using a "big data" paradigm. For all patients, a brain scan was segmented with multi-atlas labeling, and intensity/volume/texture features were computed in a localized manner. In a 10-fold cross-validation approach, the explanatory value of the image-derived features is assessed for length of hospital stay (days), discharge disposition (five point scale from death to return home), and the Rancho Los Amigos functional outcome score (Rancho Score). Image-derived features increased the predictive R² to 0.38 (from 0.18) for length of stay, to 0.51 (from 0.4) for discharge disposition, and to 0.31 (from 0.16) for Rancho Score (over models consisting only of non-imaging admission metrics, but including positive/negative radiological CT findings). This study demonstrates that high volume retrospective analysis of clinical imaging data can reveal imaging signatures with prognostic value. These targets are suited for follow-up validation and represent targets for future feature selection efforts. Moreover, the increase in prognostic value would improve staging for intervention assessment and provide more reliable guidance for patients.
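
    A 10-fold cross-validated R² of the kind reported above can be obtained as sketched below. The feature matrix, outcome and linear model are synthetic stand-ins, not the study's data or its actual regression.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import KFold, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1003, 20))   # stand-in admission + imaging features
        y = X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=1003)  # stand-in outcome

        cv = KFold(n_splits=10, shuffle=True, random_state=0)
        scores = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2")
        print(f"mean cross-validated R^2: {scores.mean():.2f}")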

  15. Semi-supervised segmentation of ultrasound images based on patch representation and continuous min cut.

    Directory of Open Access Journals (Sweden)

    Anca Ciurte

    Full Text Available Ultrasound segmentation is a challenging problem due to the inherent speckle and some artifacts like shadows, attenuation and signal dropout. Existing methods need to include strong priors like shape priors or analytical intensity models to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging settings, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates the limitation of fully automatic segmentation, that is, it is applicable to any kind of target and imaging settings. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which acts as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (94% Dice values on average), and the proposed algorithm performs favorably compared with the literature.

  16. Semi-supervised segmentation of ultrasound images based on patch representation and continuous min cut.

    Science.gov (United States)

    Ciurte, Anca; Bresson, Xavier; Cuisenaire, Olivier; Houhou, Nawal; Nedevschi, Sergiu; Thiran, Jean-Philippe; Cuadra, Meritxell Bach

    2014-01-01

    Ultrasound segmentation is a challenging problem due to the inherent speckle and some artifacts like shadows, attenuation and signal dropout. Existing methods need to include strong priors like shape priors or analytical intensity models to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging settings, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates the limitation of fully automatic segmentation, that is, it is applicable to any kind of target and imaging settings. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which acts as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (94% Dice values on average), and the proposed algorithm performs favorably compared with the literature.

  17. Multi-scale feature learning on pixels and super-pixels for seminal vesicles MRI segmentation

    Science.gov (United States)

    Gao, Qinquan; Asthana, Akshay; Tong, Tong; Rueckert, Daniel; Edwards, Philip "Eddie"

    2014-03-01

    We propose a learning-based approach to segment the seminal vesicles (SV) via random forest classifiers. The proposed discriminative approach relies on the decision forest using high-dimensional multi-scale context-aware spatial, textual and descriptor-based features at both pixel and super-pixel level. After affine transformation to a template space, the relevant high-dimensional multi-scale features are extracted and random forest classifiers are learned based on the masked region of the seminal vesicles from the most similar atlases. Using these classifiers, an intermediate probabilistic segmentation is obtained for the test images. Then, a graph-cut based refinement is applied to this intermediate probabilistic representation of each voxel to get the final segmentation. We apply this approach to segment the seminal vesicles from 30 MRI T2 training images of the prostate, which presents a particularly challenging segmentation task. The results show that the multi-scale approach and the augmentation of the pixel based features with the super-pixel based features enhances the discriminative power of the learnt classifier which leads to a better quality segmentation in some very difficult cases. The results are compared to the radiologist labeled ground truth using leave-one-out cross-validation. Overall, the Dice metric of 0.7249 and Hausdorff surface distance of 7.0803 mm are achieved for this difficult task.
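
    The random-forest stage can be sketched as follows: train a forest on labeled per-pixel feature vectors, then read off class probabilities as the intermediate probabilistic segmentation. The features, labels and parameters below are invented, and the multi-scale feature extraction and graph-cut refinement are omitted.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        features = rng.normal(size=(5000, 32))  # per-pixel feature vectors (invented)
        labels = (features[:, 0] + features[:, 1] > 0).astype(int)  # surrogate SV mask

        forest = RandomForestClassifier(n_estimators=100, random_state=1)
        forest.fit(features[:4000], labels[:4000])              # train on "atlas" pixels
        prob_sv = forest.predict_proba(features[4000:])[:, 1]   # probabilistic map
        print(f"fraction labelled SV: {(prob_sv > 0.5).mean():.2f}")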

  18. Demonstration and Validation of a Regenerated Cellulose Dialysis Membrane Diffusion Sampler for Monitoring Ground Water Quality and Remediation Progress at DoD Sites

    Science.gov (United States)

    2007-08-30

    ...with biodegradation was probably because of their longer deployment times, warmer ground-water temperatures, and proximity to high bacteria... ...high ionic strength waters and due to biodegradation were not significant when equilibration times in wells were one to two weeks. Water samples

  19. Bayesian segmentation of brainstem structures in MRI

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Van Leemput, Koen; Bhatt, Priyanka

    2015-01-01

    In this paper we present a method to segment four brainstem structures (midbrain, pons, medulla oblongata and superior cerebellar peduncle) from 3D brain MRI scans. The segmentation method relies on a probabilistic atlas of the brainstem and its neighboring brain structures. To build the atlas, we...... the brainstem structures in novel scans. Thanks to the generative nature of the scheme, the segmentation method is robust to changes in MRI contrast or acquisition hardware. Using cross validation, we show that the algorithm can segment the structures in previously unseen T1 and FLAIR scans with great accuracy...

  20. Anatomy-aware measurement of segmentation accuracy

    Science.gov (United States)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both Dice coefficient and Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only the measurement of individual users can change but also the ranking of users' segmentation skills may require reordering.
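
    One way to realize an anatomy-aware overlap measure is to weight each voxel by the clinical relevance of the zone it belongs to. The sketch below implements such a zone-weighted Dice; it illustrates the general idea and is not necessarily the exact formulation proposed in the paper.

        import numpy as np

        def weighted_dice(seg, truth, zone_weights):
            # Dice in which voxels of anatomically critical zones count more.
            inter = (zone_weights * np.logical_and(seg, truth)).sum()
            return 2.0 * inter / ((zone_weights * seg).sum()
                                  + (zone_weights * truth).sum())

        seg = np.zeros((8, 8), bool)
        seg[1:6, 1:6] = True            # algorithm output (toy)
        truth = np.zeros((8, 8), bool)
        truth[2:7, 2:7] = True          # consensus "master gold" (toy)
        weights = np.ones((8, 8))
        weights[3:5, 3:5] = 5.0         # a clinically critical internal zone
        print(f"{weighted_dice(seg, truth, weights):.2f}")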

  1. RSRM Segment Train Derailment and Recovery

    Science.gov (United States)

    Taylor Jr., Robert H.; McConnaugghey, Paul K.; Beaman, David E.; Moore, Dennis R.; Reed, Harry

    2008-01-01

    On May 2, 2007, a freight train carrying segments of the space shuttle's solid rocket boosters derailed in Myrtlewood, Alabama, after a rail trestle collapsed. The train was carrying Reusable Solid Rocket Motor (RSRM) 98 center and forward segments (STS-120) and RSRM 99 aft segments (STS-122). Initially, it was not known if the segments had been seriously damaged. Four segments dropped approximately 10 feet when the trestle collapsed, and one of those four rolled off the track onto its side. The exit cones and the other four segments, not yet on the trestle, remained on solid ground. ATK and NASA immediately dispatched an investigation and recovery team to determine the safety of the situation and eventually the usability of the segments and exit cones for flight. Instrumentation on each segment provided invaluable data to determine the acceleration loads imparted into each loaded segment and exit cone. This paper details the incident, the recovery plan, and the teamwork that created a success story that ended with the safe launch of STS-120 using the four center segments and the launch of STS-122 using the aft exit cone assemblies.

  2. Demonstration/Validation of the Snap Sampler Passive Ground Water Sampling Device for Sampling Inorganic Analytes at the Former Pease Air Force Base

    Science.gov (United States)

    2009-07-01

    ...it starts to undergo biodegradation. Also, because diffusion samplers typically require at least several days for equilibration to occur, they... (PAHs), and metals have been found in soils on the base. The ground water has been found to be contaminated with volatile organic compounds (VOCs) including trichloroethylene (TCE) and tetrachloroethylene (PCE). PAHs, pesticides, and heavy metals have been found in the

  3. Strategic market segmentation

    National Research Council Canada - National Science Library

    Maričić Branko R; Đorđević Aleksandar

    2015-01-01

    ..., requires a segmented approach to the market that appreciates differences in the expectations and preferences of customers. One of the significant activities in the strategic planning of marketing is market segmentation...

  4. Development of hedge operator based fuzzy divergence measure and its application in segmentation of chronic myelogenous leukocytes from microscopic image of peripheral blood smear.

    Science.gov (United States)

    Ghosh, Madhumala; Chakraborty, Chandan; Konar, Amit; Ray, Ajoy K

    2014-02-01

    This paper introduces a hedge operator based fuzzy divergence measure and its application to the segmentation of leukocytes in cases of chronic myelogenous leukemia using light microscopic images of peripheral blood smears. The concept of a modified discrimination measure is applied to develop the measure of divergence based on Shannon exponential entropy and Yager's measure of entropy. These two measures of divergence are compared with the existing literature and validated against ground truth images. Finally, it is found that the hedge operator based divergence measure using Yager's entropy achieves better segmentation accuracy, i.e., 98.29% for normal and 98.15% for chronic myelogenous leukocytes. Furthermore, the Jaccard index has been used to compare the segmented images with the ground truth ones, where it is found that the proposed scheme leads to a higher Jaccard index (0.39 for normal, 0.24 for chronic myelogenous leukemia).
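
    The Jaccard index used for the comparison above is computed from binary masks as in the following sketch; the masks are toy data, not leukocyte images.

        import numpy as np

        def jaccard(a, b):
            # Jaccard index (intersection over union) of two binary masks.
            a, b = a.astype(bool), b.astype(bool)
            union = np.logical_or(a, b).sum()
            return np.logical_and(a, b).sum() / union if union else 1.0

        segmented = np.zeros((12, 12), bool)
        segmented[3:9, 3:9] = True             # segmented leukocyte (toy)
        truth = np.zeros((12, 12), bool)
        truth[4:10, 4:10] = True               # ground truth (toy)
        print(f"Jaccard: {jaccard(segmented, truth):.2f}")   # -> Jaccard: 0.53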

  5. Growth and inactivation of Salmonella enterica and Listeria monocytogenes in broth and validation in ground pork meat during simulated home storage abusive temperature and home pan-frying

    Directory of Open Access Journals (Sweden)

Xiang Wang

    2015-10-01

    Full Text Available Ground pork meat with natural microbiota and inoculated with low initial densities (1-10 or 10-100 CFU/g) of Salmonella enterica or Listeria monocytogenes was stored at an abusive temperature of 10°C and thermally treated by a simulated home pan-frying procedure. The growth and inactivation characteristics were also evaluated in broth. In ground pork meat, the population of S. enterica increased by less than one log after 12 days of storage at 10°C, whereas L. monocytogenes increased by 2.3 to 2.8 log units. No unusual intrinsic heat resistance of the pathogens was noted when tested in broth at 60°C, although shoulders were observed on the inactivation curves of L. monocytogenes. After growth of S. enterica and L. monocytogenes at 10°C for 5 days to levels of 1.95 log CFU/g and 3.10 log CFU/g, respectively, in ground pork meat, their inactivation in burgers subjected to a simulated home pan-frying was studied. After thermal treatment, S. enterica was undetectable, but L. monocytogenes was recovered in three out of six of the 25 g burger samples. Overall, the present study shows that growth and inactivation data obtained in broths are indicative but may underestimate as well as overestimate the behavior of pathogens, and thus need confirmation under food matrix conditions to assess food safety under reasonably foreseeable abusive conditions of storage and usual home pan-frying of meat burgers in Belgium.

  6. The potential surface in the ground electronic state of HCP with the isomerization process: the validity of calculating potential surface with DFT methods

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The density functional theory (DFT) provides an effective way to calculate large cluster systems with moderate computational demands. We calculate potential energy surfaces (PES) with several different DFT approaches. The PES in the ground electronic state are related to HCP's isomerization process. The calculated PES are compared with the “experimental” PES obtained by fitting the experimental vibrational spectra and with that given by “accurate” quantum chemistry calculations requiring more expensive computation. The comparisons show that the potential surfaces calculated with DFT methods can reach an accuracy of better than 0.1 eV.

  7. Review of segmentation process in consumer markets

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2013-01-01

    Full Text Available Although there has been considerable debate on market segmentation over five decades, attention has mostly been devoted to single stages of the segmentation process. In doing so, stages such as segmentation base selection or segment profiling have been heavily covered in the extant literature, whereas stages such as implementation of the marketing strategy or market definition have received comparably little interest. Capitalizing on this shortcoming, this paper strives to close the gap and give each step of the segmentation process equal treatment. Hence, the objective of this paper is two-fold. First, a snapshot of the segmentation process is provided in a step-by-step fashion. Second, each step (where possible) is evaluated on chosen criteria by means of description, comparison, analysis and synthesis of 32 academic papers and 13 commercial typology systems. Ultimately, the segmentation stages are discussed in light of empirical findings prevalent in the segmentation studies, and suggestions calling for further investigation are presented. This seven-step framework may assist practitioners when segmenting in practice, allowing for more confident targeting, which in turn might prepare the ground for creating a differential advantage.

  8. Image segmentation based on competitive learning

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jing; LIU Qun; Baikunth Nath

    2004-01-01

    Image segmentation is a primary step in the image analysis of unexploded ordnance (UXO) detection by a ground penetrating radar (GPR) sensor, which is accompanied by a lot of noise and other elements that affect the recognition of the real target size. In this paper we bring forward a new approach: we treat the weight sets as target vector sets, which serve as new cues in semi-automatic segmentation to form the final image segmentation. The experimental results show that the target size measured with our method is much smaller than that measured with other methods and is close to the real size of the target.

  9. A geometric flow for segmenting vasculature in proton-density weighted MRI.

    Science.gov (United States)

    Descoteaux, Maxime; Collins, D Louis; Siddiqi, Kaleem

    2008-08-01

    Modern neurosurgery takes advantage of magnetic resonance images (MRI) of a patient's cerebral anatomy and vasculature for planning before surgery and guidance during the procedure. Dual echo acquisitions are often performed that yield proton-density (PD) and T2-weighted images to evaluate edema near a tumor or lesion. In this paper we develop a novel geometric flow for segmenting vasculature in PD images, which can also be applied to the easier cases of MR angiography data or Gadolinium enhanced MRI. Obtaining vasculature from PD data is of clinical interest since the acquisition of such images is widespread, the scanning process is non-invasive, and the availability of vessel segmentation methods could obviate the need for an additional angiographic or contrast-based sequence during preoperative imaging. The key idea is to first apply Frangi's vesselness measure [Frangi, A., Niessen, W., Vincken, K.L., Viergever, M.A., 1998. Multiscale vessel enhancement filtering. In: International Conference on Medical Image Computing and Computer Assisted Intervention, vol. 1496 of Lecture Notes in Computer Science, pp. 130-137] to find putative centerlines of tubular structures along with their estimated radii. This measure is then distributed to create a vector field which allows the flux maximizing flow algorithm of Vasilevskiy and Siddiqi [Vasilevskiy, A., Siddiqi, K., 2002. Flux maximizing geometric flows. IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (12), 1565-1578] to be applied to recover vessel boundaries. We carry out a qualitative validation of the approach on PD, MR angiography and Gadolinium enhanced MRI volumes and suggest a new way to visualize the segmentations in 2D with masked projections. We validate the approach quantitatively on a single-subject data set consisting of PD, phase contrast (PC) angiography and time of flight (TOF) angiography volumes, with an expert segmented version of the TOF volume viewed as the ground truth. We then
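
    The first stage, a multiscale vesselness map in the spirit of Frangi's measure, can be approximated with scikit-image's frangi filter, as sketched below on synthetic data. This is a stand-in for the original implementation; the flux maximizing flow itself is not reproduced here.

        import numpy as np
        from skimage.filters import frangi

        rng = np.random.default_rng(2)
        image = rng.normal(scale=0.2, size=(64, 64))
        image[30:34, :] += 3.0          # a synthetic bright tubular structure

        # Multiscale vesselness; the structure is bright, so black_ridges=False.
        vesselness = frangi(image, sigmas=range(1, 6), black_ridges=False)
        centerline_candidates = vesselness > 0.5 * vesselness.max()
        print(centerline_candidates.sum())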

  10. Segmentation Similarity and Agreement

    CERN Document Server

    Fournier, Chris

    2012-01-01

    We propose a new segmentation evaluation metric, called segmentation similarity (S), that quantifies the similarity between two segmentations as the proportion of boundaries that are not transformed when comparing them using edit distance, essentially using edit distance as a penalty function and scaling penalties by segmentation size. We propose several adapted inter-annotator agreement coefficients which use S that are suitable for segmentation. We show that S is configurable enough to suit a wide variety of segmentation evaluations, and is an improvement upon the state of the art. We also propose using inter-annotator agreement coefficients to evaluate automatic segmenters in terms of human performance.

  11. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
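
    The box-whisker method above amounts to flagging feature values outside the 1.5 × IQR fences. A minimal sketch on an invented per-subject feature:

        import numpy as np

        def iqr_outliers(x, k=1.5):
            # Flag values outside the box-whisker fences [Q1 - k*IQR, Q3 + k*IQR].
            q1, q3 = np.percentile(x, [25, 75])
            iqr = q3 - q1
            return (x < q1 - k * iqr) | (x > q3 + k * iqr)

        # Invented per-subject feature, e.g. a segmented tract volume in ml.
        volumes = np.array([4.1, 4.3, 3.9, 4.0, 4.2, 4.4, 9.8, 4.1])
        print(np.where(iqr_outliers(volumes))[0])   # -> [6], the suspected failure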

  12. Automated carotid artery intima layer regional segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Meiburger, Kristen M; Molinari, Filippo [Biolab, Department of Electronics, Politecnico di Torino, Torino (Italy); Acharya, U Rajendra [Department of ECE, Ngee Ann Polytechnic (Singapore); Saba, Luca [Department of Radiology, A.O.U. di Cagliari, Cagliari (Italy); Rodrigues, Paulo [Department of Computer Science, Centro Universitario da FEI, Sao Paulo (Brazil); Liboni, William [Neurology Division, Gradenigo Hospital, Torino (Italy); Nicolaides, Andrew [Vascular Screening and Diagnostic Centre, London (United Kingdom); Suri, Jasjit S, E-mail: filippo.molinari@polito.it [Fellow AIMBE, CTO, Global Biomedical Technologies Inc., CA (United States)

    2011-07-07

    Evaluation of the carotid artery wall is essential for the assessment of a patient's cardiovascular risk or for the diagnosis of cardiovascular pathologies. This paper presents a new, completely user-independent algorithm called carotid artery intima layer regional segmentation (CAILRS, a class of AtheroEdge™ systems), which automatically segments the intima layer of the far wall of the carotid ultrasound artery based on mean shift classification applied to the far wall. Further, the system extracts the lumen-intima and media-adventitia borders in the far wall of the carotid artery. Our new system is characterized and validated by comparing CAILRS borders with the manual tracings carried out by experts. The new technique is also benchmarked with a semi-automatic technique based on a first-order absolute moment edge operator (FOAM) and compared to our previous edge-based automated methods such as CALEX (Molinari et al 2010 J. Ultrasound Med. 29 399-418, 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CULEX (Delsanto et al 2007 IEEE Trans. Instrum. Meas. 56 1265-74, Molinari et al 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CALSFOAM (Molinari et al Int. Angiol. (at press)), and CAUDLES-EF (Molinari et al J. Digit. Imaging (at press)). Our multi-institutional database consisted of 300 longitudinal B-mode carotid images. In comparison to semi-automated FOAM, CAILRS showed an IMT bias of -0.035 ± 0.186 mm while FOAM showed -0.016 ± 0.258 mm. Our IMT was slightly underestimated with respect to the ground truth IMT, but showed uniform behavior over the entire database. CAILRS outperformed all four previous automated methods. The system's figure of merit was 95.6%, which was lower than that of the semi-automated method (98%), but higher than that of the other automated techniques.

  13. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI.

    Science.gov (United States)

    Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid

    2016-05-01

    Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.

  14. Marketing ambulatory care to women: a segmentation approach.

    Science.gov (United States)

    Harrell, G D; Fors, M F

    1985-01-01

    Although significant changes are occurring in health care delivery, in many instances the new offerings are not based on a clear understanding of market segments being served. This exploratory study suggests that important differences may exist among women with regard to health care selection. Five major women's segments are identified for consideration by health care executives in developing marketing strategies. Additional research is suggested to confirm this segmentation hypothesis, validate segmental differences and quantify the findings.

  15. Cross-validation Methodology between Ground and GPM Satellite-based Radar Rainfall Product over Dallas-Fort Worth (DFW) Metroplex

    Science.gov (United States)

    Chen, H.; Chandrasekar, V.; Biswas, S.

    2015-12-01

    Over the past two decades, a large number of rainfall products have been developed based on satellite, radar, and/or rain gauge observations. However, producing optimal rainfall estimates for a given region is still challenging due to the space-time variability of rainfall at many scales and the different spatial and temporal sampling of different rainfall instruments. In order to produce high-resolution rainfall products for urban flash flood applications and improve the weather sensing capability in urban environments, the Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), in collaboration with the National Weather Service (NWS) and the North Central Texas Council of Governments (NCTCOG), has developed an urban radar remote sensing network in the DFW Metroplex. DFW is the largest inland metropolitan area in the U.S. and experiences a wide range of natural weather hazards such as flash floods and hailstorms. The DFW urban remote sensing network, centered on the deployment of eight dual-polarization X-band radars and a NWS WSR-88DP radar, is expected to provide impacts-based warnings and forecasts for the benefit of public safety and the economy. High-resolution quantitative precipitation estimation (QPE) is one of the major goals of the development of this urban test bed. In addition to ground radar-based rainfall estimation, satellite-based rainfall products for this area are also of interest for this study. A typical example is the rainfall rate product produced by the Dual-frequency Precipitation Radar (DPR) onboard the Global Precipitation Measurement (GPM) Core Observatory satellite. Therefore, cross-comparison between ground- and space-based rainfall estimation is critical to building an optimal regional rainfall system, which can take advantage of the sampling differences of different sensors. This paper presents the real-time high-resolution QPE system developed for the DFW urban radar network, which is based upon the combination of S-band WSR-88DP and X
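
    A basic cross-comparison of collocated ground-radar and satellite rain-rate estimates, of the kind this test bed enables, reduces to bias and correlation statistics over matched pairs, as sketched below with synthetic values.

        import numpy as np

        ground = np.array([2.1, 5.4, 12.8, 0.6, 7.9, 3.3])     # mm/h, ground radar
        satellite = np.array([1.8, 6.1, 11.2, 0.9, 8.4, 2.9])  # mm/h, matched DPR pixels

        bias = (satellite - ground).mean()             # mean difference
        corr = np.corrcoef(ground, satellite)[0, 1]    # linear correlation
        print(f"mean bias: {bias:+.2f} mm/h, correlation: {corr:.3f}")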

  16. MAX-DOAS measurements in southern China: retrieval of aerosol extinctions and validation using ground-based in-situ data

    Directory of Open Access Journals (Sweden)

    X. Li

    2010-03-01

    Full Text Available We performed MAX-DOAS measurements during the PRiDe-PRD2006 campaign in the Pearl River Delta region 50 km north of Guangzhou, China, for 4 weeks in June 2006. We used an instrument sampling at 7 different elevation angles between 3° and 90°. During 9 cloud-free days, differential slant column densities (DSCDs) of O4 (O2 dimer) absorptions between 351 nm and 389 nm were evaluated for 6 elevation angles. Here, we show that radiative transfer modeling of the DSCDs can be used to retrieve the aerosol extinction and the height of the boundary layer. A comparison of the aerosol extinction with simultaneously recorded, ground-based nephelometer data shows excellent agreement.

  17. Validation of middle-atmospheric campaign-based water vapour measured by the ground-based microwave radiometer MIAWARA-C

    Directory of Open Access Journals (Sweden)

    B. Tschanz

    2013-07-01

    Full Text Available Middle atmospheric water vapour can be used as a tracer for dynamical processes. It is mainly measured by satellite instruments and ground-based microwave radiometers. Ground-based instruments capable of measuring middle-atmospheric water vapour are sparse but valuable as they complement satellite measurements, are relatively easy to maintain and have a long lifetime. MIAWARA-C is a ground-based microwave radiometer for middle-atmospheric water vapour designed for use on measurement campaigns for both atmospheric case studies and instrument intercomparisons. MIAWARA-C's retrieval version 1.1 (v1.1) is set up in such a way as to provide a consistent data set even if the instrument is operated from different locations on a campaign basis. The sensitive altitude range for v1.1 extends from 4 hPa (37 km) to 0.017 hPa (75 km). For v1.1 the estimated systematic error is approximately 10% for all altitudes. At lower altitudes it is dominated by uncertainties in the calibration; with altitude the influence of spectroscopic and temperature uncertainties increases. The estimated random error increases with altitude from 5 to 25%. MIAWARA-C measures two polarisations of the incident radiation in separate receiver channels, and can therefore provide two measurements of the same air mass with independent instrumental noise. The standard deviation of the difference between the profiles obtained from the two polarisations is in excellent agreement with the estimated random measurement error of v1.1. In this paper, the quality of v1.1 data is assessed for measurements obtained at two different locations: (1) a total of 25 months of measurements in the Arctic (Sodankylä, 67.37° N, 26.63° E) and (2) nine months of measurements at mid-latitudes (Zimmerwald, 46.88° N, 7.46° E). For both locations MIAWARA-C's profiles are compared to measurements from the satellite experiments Aura MLS and MIPAS. In addition, comparisons to ACE-FTS and SOFIE are presented for the

  18. A Novel Lateral Deployment Mechanism for Segmented Mirror/Solar Panel of Space Telescope

    Science.gov (United States)

    Thesiya, Dignesh; Srinivas, A. R.; Shukla, Piyush

    2015-09-01

Space telescopes require large-aperture primary mirrors to capture high-definition (HD) ground images while orbiting the Earth. The fairing volume of launch vehicles is limited, so the size of a monolithic mirror is constrained by the fairing, and solar panels are arranged in a petal formation in order to provide a greater power-to-volume ratio. This creates a need for deployable mirrors for space use. This paper presents a method for designing a new deployment mechanism for a segmented mirror. Details of the mechanism folding strategy, design of components, FE simulations, realization, and lab-model validation results are discussed in order to demonstrate the design using a prototype.

  19. Development of gait segmentation methods for wearable foot pressure sensors.

    Science.gov (United States)

    Crea, S; De Rossi, S M M; Donati, M; Reberšek, P; Novak, D; Vitiello, N; Lenzi, T; Podobnik, J; Munih, M; Carrozza, M C

    2012-01-01

We present an automated segmentation method based on the analysis of plantar pressure signals recorded from two synchronized wireless foot insoles. Given the strict limits on computational power and power consumption typical of wearable electronic components, our aim is to investigate the capability of a Hidden Markov Model machine-learning method to detect gait phases with different levels of complexity in the processing of the wearable pressure sensor signals. Three different datasets are therefore developed: raw voltage values, calibrated sensor signals, and a calibrated estimation of total ground reaction force and position of the plantar center of pressure. The method is tested on a pool of 5 healthy subjects through leave-one-out cross validation. The results show high classification performance when using the estimated biomechanical variables, 96% on average. Calibrated signals and raw voltage values show higher delays and dispersions in phase transition detection, suggesting lower reliability for online applications.
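    As a rough illustration of the gait-phase detection idea, the sketch below fits a four-state Gaussian HMM to synthetic insole features using the hmmlearn library. The library choice, the placeholder features, and the unsupervised training are all assumptions; the paper does not specify its implementation.

```python
import numpy as np
from hmmlearn import hmm  # stand-in HMM library; the paper's implementation differs

# Hypothetical features per time step: total vertical ground reaction force
# and anterior-posterior centre-of-pressure position from the insoles.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2))  # placeholder for calibrated insole features

# Four hidden states as a simple stand-in for gait phases
# (e.g. heel strike, flat foot, heel off, swing).
model = hmm.GaussianHMM(n_components=4, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(X)               # unsupervised fit on the feature stream
phases = model.predict(X)  # Viterbi decoding of the gait-phase sequence
print(np.bincount(phases))
```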

  20. Grounded theory.

    Science.gov (United States)

    Harris, Tina

    2015-04-29

    Grounded theory is a popular research approach in health care and the social sciences. This article provides a description of grounded theory methodology and its key components, using examples from published studies to demonstrate practical application. It aims to demystify grounded theory for novice nurse researchers, by explaining what it is, when to use it, why they would want to use it and how to use it. It should enable nurse researchers to decide if grounded theory is an appropriate approach for their research, and to determine the quality of any grounded theory research they read.

  1. Anatomy packing with hierarchical segments: an algorithm for segmentation of pulmonary nodules in CT images.

    Science.gov (United States)

    Tsou, Chi-Hsuan; Lor, Kuo-Lung; Chang, Yeun-Chung; Chen, Chung-Ming

    2015-05-14

    This paper proposes a semantic segmentation algorithm that provides the spatial distribution patterns of pulmonary ground-glass nodules with solid portions in computed tomography (CT) images. The proposed segmentation algorithm, anatomy packing with hierarchical segments (APHS), performs pulmonary nodule segmentation and quantification in CT images. In particular, the APHS algorithm consists of two essential processes: hierarchical segmentation tree construction and anatomy packing. It constructs the hierarchical segmentation tree based on region attributes and local contour cues along the region boundaries. Each node of the tree corresponds to the soft boundary associated with a family of nested segmentations through different scales applied by a hierarchical segmentation operator that is used to decompose the image in a structurally coherent manner. The anatomy packing process detects and localizes individual object instances by optimizing a hierarchical conditional random field model. Ninety-two histopathologically confirmed pulmonary nodules were used to evaluate the performance of the proposed APHS algorithm. Further, a comparative study was conducted with two conventional multi-label image segmentation algorithms based on four assessment metrics: the modified Williams index, percentage statistic, overlapping ratio, and difference ratio. Under the same framework, the proposed APHS algorithm was applied to two clinical applications: multi-label segmentation of nodules with a solid portion and surrounding tissues and pulmonary nodule segmentation. The results obtained indicate that the APHS-generated boundaries are comparable to manual delineations with a modified Williams index of 1.013. Further, the resulting segmentation of the APHS algorithm is also better than that achieved by two conventional multi-label image segmentation algorithms. The proposed two-level hierarchical segmentation algorithm effectively labelled the pulmonary nodule and its surrounding

  2. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method.

    Science.gov (United States)

    Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Hara, Takeshi; Fujita, Hiroshi

    2017-07-21

    We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image. We simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentations in CT cases of different sizes that cover arbitrary scan regions without any adjustment. The proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (lumen and content inside of stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. We propose a single network based on pixel-to-label deep learning to address the challenging
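    The fusion step of the voting scheme described above is straightforward to picture. Below is a minimal numpy sketch that fuses per-view label volumes by a voxel-wise majority vote; the shapes and the toy example are illustrative, not the authors' pipeline.

```python
import numpy as np

def vote_fusion(label_volumes):
    """Voxel-wise majority vote over label volumes predicted from
    different 2D viewing directions (axial, coronal, sagittal, ...).

    label_volumes : list of int arrays of identical 3D shape.
    Returns the fused 3D label volume.
    """
    stack = np.stack(label_volumes, axis=0)  # (n_views, z, y, x)
    n_labels = stack.max() + 1
    # Count votes per label and take the argmax at every voxel.
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Toy example: three "views" disagree at one voxel.
a = np.zeros((1, 1, 3), dtype=int); b = a.copy(); c = a.copy()
a[0, 0, 1] = 1; b[0, 0, 1] = 1; c[0, 0, 1] = 2
print(vote_fusion([a, b, c]))  # [[[0 1 0]]] -- label 1 wins 2:1
```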

  3. Pituitary Adenoma Segmentation

    CERN Document Server

    Egger, Jan; Kuhnt, Daniela; Freisleben, Bernd; Nimsky, Christopher

    2011-01-01

Sellar tumors account for approximately 10-15% of all intracranial neoplasms. The most common sellar lesion is the pituitary adenoma. Manual segmentation is a time-consuming process that can be shortened by using adequate algorithms. In this contribution, we present a segmentation method for pituitary adenoma. The method is based on an algorithm we developed recently in previous work, where the novel segmentation scheme was successfully used for segmentation of glioblastoma multiforme and provided an average Dice Similarity Coefficient (DSC) of 77%. This scheme is used for automatic adenoma segmentation. In our experimental evaluation, neurosurgeons with strong experience in the treatment of pituitary adenoma performed manual slice-by-slice segmentation of 10 magnetic resonance imaging (MRI) cases. Afterwards, the segmentations were compared with the segmentation results of the proposed method via the DSC. The average DSC for all data sets was 77.49% +/- 4.52%. Compared with a manual segmentation that took, on the...
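    The DSC quoted here is a standard overlap measure, DSC = 2|A ∩ B| / (|A| + |B|). A minimal implementation for binary masks might look like the following; the toy masks are illustrative only.

```python
import numpy as np

def dice_similarity(seg, truth):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|A n B| / (|A| + |B|), the overlap metric quoted above."""
    seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
    denom = seg.sum() + truth.sum()
    return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0

auto = np.array([[0, 1, 1], [0, 1, 0]])
manual = np.array([[0, 1, 1], [1, 1, 0]])
print(dice_similarity(auto, manual))  # 6/7 ~ 0.857
```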

  4. A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.

    Science.gov (United States)

    Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F

    2012-09-01

    Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.

  5. A hybrid hierarchical approach for brain tissue segmentation by combining brain atlas and least square support vector machine.

    Science.gov (United States)

    Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh

    2013-10-01

    In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-square support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using the toolbox FMRIB's automated segmentation tool integrated in the FSL software (FSL-FAST) developed in Oxford Centre for functional MRI of the brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for LS-SVM are selected from the registered brain atlas. The voxel intensities and spatial positions are selected as the two feature groups for training and test. SVM as a powerful discriminator is able to handle nonlinear classification problems; however, it cannot provide posterior probability. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from the simulated magnetic resonance imaging (MRI) using Brainweb MRI simulator and real data provided by Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparing to the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for the quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to corresponding ground truth.
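    The sigmoid mapping mentioned in this abstract is in the spirit of Platt scaling. Below is a minimal sketch on toy data, with scikit-learn's SVC standing in for the LS-SVM and illustrative (not fitted) sigmoid parameters; none of this is the authors' exact implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data standing in for (intensity, position) voxel features.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.repeat([0, 1], 100)

clf = SVC(kernel="rbf").fit(X, y)
f = clf.decision_function(X)  # raw SVM outputs, no posterior probability

def sigmoid_posterior(f, a=-1.0, b=0.0):
    """Platt-style mapping P(y=1|f) = 1 / (1 + exp(a*f + b)).
    a and b would normally be fitted on held-out data; the values here
    are illustrative only."""
    return 1.0 / (1.0 + np.exp(a * f + b))

print(sigmoid_posterior(f[:3]))  # pseudo-posteriors for the first voxels
```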

  6. Correlative 3D-imaging of Pipistrellus penis micromorphology: Validating quantitative microCT images with undecalcified serial ground section histomorphology.

    Science.gov (United States)

    Herdina, Anna Nele; Plenk, Hanns; Benda, Petr; Lina, Peter H C; Herzig-Straschil, Barbara; Hilgers, Helge; Metscher, Brian D

    2015-06-01

Detailed knowledge of histomorphology is a prerequisite for the understanding of function, variation, and development. In bats, as in other mammals, penis and baculum morphology are important in species discrimination and phylogenetic studies. In this study, nondestructive 3D-microtomographic (microCT, µCT) images of bacula and iodine-stained penes of Pipistrellus pipistrellus were correlated with light microscopic images from undecalcified surface-stained ground sections of three of these penes of P. pipistrellus (1 juvenile). The results were then compared with µCT-images of bacula of P. pygmaeus, P. hanaki, and P. nathusii. The Y-shaped baculum in all studied Pipistrellus species has a proximal base with two club-shaped branches, a long slender shaft, and a forked distal tip. The branches contain a medullary cavity of variable size, which tapers into a central canal of variable length in the proximal baculum shaft. Both are surrounded by a lamellar and a woven bone layer and contain fatty marrow and blood vessels. The distal shaft consists of woven bone only, without a vascular canal. The proximal ends of the branches are connected with the tunica albuginea of the corpora cavernosa via entheses. In the penis shaft, the corpus spongiosum-surrounded urethra lies in a ventral groove of the corpora cavernosa, and continues in the glans under the baculum. The glans penis predominantly comprises an enlarged corpus spongiosum, which surrounds urethra and baculum. In the 12 studied juvenile and subadult P. pipistrellus specimens the proximal branches of the baculum were shorter and without marrow cavity, while shaft and distal tip appeared already fully developed. The present combination with light microscopic images from one species enabled a more reliable interpretation of histomorphological structures in the µCT-images from all four Pipistrellus species.

  7. Grounded cognition.

    Science.gov (United States)

    Barsalou, Lawrence W

    2008-01-01

    Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.

  8. GPS Control Segment

    Science.gov (United States)

    2015-04-29

Luke J. Schaub, Chief, GPS Control Segment Division, Los Angeles AFB, El Segundo, CA 90245. GPS Control Segment, 29 Apr 15.

  9. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    Comparative molecular, developmental and morphogenetic analyses show that the three major segmented animal groups- Lophotrochozoa, Ecdysozoa and Vertebrata-use a wide range of ontogenetic pathways to establish metameric body organization. Even in the life history of a single specimen, different...... plasticity and potential evolutionary lability of segmentation nourishes the controversy of a segmented bilaterian ancestor versus multiple independent evolution of segmentation in respective metazoan lineages....

  10. Development of a microbial model for the combined effect of temperature and pH on spoilage of ground meat, and validation of the model under dynamic temperature conditions.

    Science.gov (United States)

    Koutsoumanis, K; Stamatiou, A; Skandamis, P; Nychas, G-J E

    2006-01-01

The changes in microbial flora and sensory characteristics of fresh ground meat (beef and pork) with pH values ranging from 5.34 to 6.13 were monitored at different isothermal storage temperatures (0 to 20 degrees C) under aerobic conditions. At all conditions tested, pseudomonads were the predominant bacteria, followed by Brochothrix thermosphacta, while the other members of the microbial association (e.g., lactic acid bacteria and Enterobacteriaceae) remained at lower levels. The results from microbiological and sensory analysis showed that changes in pseudomonad populations closely followed sensory changes during storage and could be used as a good index for spoilage of aerobically stored ground meat. The kinetic parameters (maximum specific growth rate [mu(max)] and duration of lag phase [lambda]) of the spoilage bacteria were modeled using a modified Arrhenius equation for the combined effect of temperature and pH. Meat pH affected growth of all spoilage bacteria except lactic acid bacteria. The "adaptation work," characterized by the product of mu(max) and lambda (mu(max) x lambda), was found to be unaffected by temperature for all tested bacteria but was affected by pH for pseudomonads and B. thermosphacta. For the latter bacteria, a negative linear correlation between ln(mu(max) x lambda) and meat pH was observed. The developed models were further validated under dynamic temperature conditions using different fluctuating temperature profiles. Graphical comparison between predicted and observed growth, and examination of the relative errors of the predictions, showed that the model predicted growth satisfactorily under dynamic conditions. Predicted shelf life based on pseudomonad growth was slightly shorter than the shelf life observed by sensory analysis, with a mean difference of 13.1%. The present study provides a "ready-to-use," well-validated model for predicting spoilage of aerobically stored ground meat. The use of the model by the meat industry can
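    The abstract names a modified Arrhenius equation for the combined temperature-pH effect but does not reproduce it. The sketch below shows one plausible extended-Arrhenius form with entirely assumed coefficients; it is *not* the authors' fitted equation, only an illustration of how such a growth-rate model responds to temperature and pH.

```python
import numpy as np

def mu_max(T_celsius, pH, A=2e13, Ea=75.0, c=0.8, pH_ref=5.3):
    """Illustrative extended-Arrhenius growth-rate model (nominally h^-1),
    *not* the authors' fitted equation: an Arrhenius temperature term
    multiplied by a simple exponential pH correction.

    T_celsius : storage temperature (deg C)
    Ea        : apparent activation energy (kJ/mol), assumed value
    A, c      : assumed pre-exponential factor and pH sensitivity
    """
    R = 8.314e-3  # kJ mol^-1 K^-1
    T = T_celsius + 273.15
    return A * np.exp(-Ea / (R * T)) * np.exp(c * (pH - pH_ref))

# Growth accelerates with temperature and (for pseudomonads) with meat pH.
print(mu_max(0.0, 5.34), mu_max(10.0, 5.34), mu_max(10.0, 6.13))
```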

  11. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation analy

  12. Haustral fold segmentation with curvature-guided level set evolution.

    Science.gov (United States)

    Zhu, Hongbin; Barish, Matthew; Pickhardt, Perry; Liang, Zhengrong

    2013-02-01

Human colon has complex structures, mostly because of the haustral folds. The folds are thin, flat protrusions on the colon wall, which complicate shape analysis for computer-aided detection (CAD) of colonic polyps. Fold segmentation may help reduce the structural complexity, and the folds can serve as an anatomic reference for computed tomographic colonography (CTC). Therefore, in this study, based on a model of the haustral fold boundaries, we developed a level-set approach to automatically segment the fold surfaces. To evaluate the developed fold segmentation algorithm, we first established the ground truth of haustral fold boundaries by experts' drawing on 15 patient CTC datasets without severe under/over colon distention from two medical centers. The segmentation algorithm successfully detected 92.7% of the folds in the ground truth. In addition to this sensitivity measure, we further developed a segmented-area ratio (SAR) merit, i.e., the ratio between the intersection and the union of the areas of the expert-drawn and the automatically segmented folds, to measure segmentation accuracy. The segmentation algorithm reached an average value of SAR = 86.2%, showing a good match with the ground truth on the fold surfaces. We believe the automatically segmented fold surfaces have the potential to benefit many postprocedures in CTC, such as CAD, taenia coli extraction, supine-prone registration, etc.
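    Reading the SAR merit as an intersection-over-union style overlap (which the abstract's wording suggests but does not state unambiguously), a minimal implementation on binary masks could look like this; the toy masks are illustrative.

```python
import numpy as np

def segmented_area_ratio(auto_mask, expert_mask):
    """Segmented-area ratio (SAR) read as the intersection over the
    union of the expert-drawn and automatically segmented fold areas
    (an IoU-style overlap measure)."""
    auto_mask = np.asarray(auto_mask, bool)
    expert_mask = np.asarray(expert_mask, bool)
    union = np.logical_or(auto_mask, expert_mask).sum()
    return np.logical_and(auto_mask, expert_mask).sum() / union if union else 1.0

auto = np.array([[1, 1, 0], [1, 0, 0]])
expert = np.array([[1, 1, 1], [0, 0, 0]])
print(segmented_area_ratio(auto, expert))  # 2 / 4 = 0.5
```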

  13. Polyp Segmentation in NBI Colonoscopy

    Science.gov (United States)

    Gross, Sebastian; Kennel, Manuel; Stehle, Thomas; Wulff, Jonas; Tischendorf, Jens; Trautwein, Christian; Aach, Til

    Endoscopic screening of the colon (colonoscopy) is performed to prevent cancer and to support therapy. During intervention colon polyps are located, inspected and, if need be, removed by the investigator. We propose a segmentation algorithm as a part of an automatic polyp classification system for colonoscopic Narrow-Band images. Our approach includes multi-scale filtering for noise reduction, suppression of small blood vessels, and enhancement of major edges. Results of the subsequent edge detection are compared to a set of elliptic templates and evaluated. We validated our algorithm on our polyp database with images acquired during routine colonoscopic examinations. The presented results show the reliable segmentation performance of our method and its robustness to image variations.

  14. Aorta Segmentation for Stent Simulation

    CERN Document Server

    Egger, Jan; Setser, Randolph; Renapuraar, Rahul; Biermann, Christina; O'Donnell, Thomas

    2011-01-01

    Simulation of arterial stenting procedures prior to intervention allows for appropriate device selection as well as highlights potential complications. To this end, we present a framework for facilitating virtual aortic stenting from a contrast computer tomography (CT) scan. More specifically, we present a method for both lumen and outer wall segmentation that may be employed in determining both the appropriateness of intervention as well as the selection and localization of the device. The more challenging recovery of the outer wall is based on a novel minimal closure tracking algorithm. Our aortic segmentation method has been validated on over 3000 multiplanar reformatting (MPR) planes from 50 CT angiography data sets yielding a Dice Similarity Coefficient (DSC) of 90.67%.

  15. Simultaneous Reconstruction and Segmentation with Class-Specific Priors

    DEFF Research Database (Denmark)

    Romanov, Mikhail

Studying the interior of objects using tomography often requires an image segmentation, such that different material properties can be quantified. This can for example be volume or surface area. Segmentation is typically done as an image analysis step after the image has been reconstructed. This thesis investigates computing the reconstruction and segmentation simultaneously. The advantage of this is that because the reconstruction and segmentation are computed jointly, reconstruction errors are not propagated to the segmentation step. Furthermore, the segmentation procedure can be used for regularizing the reconstruction process. The thesis provides models and algorithms for simultaneous reconstruction and segmentation, and their performance is empirically validated. Two methods of simultaneous reconstruction and segmentation are described in the thesis. Also, a method for parameter selection...

  16. What is a segment?

    Science.gov (United States)

    Hannibal, Roberta L; Patel, Nipam H

    2013-12-17

    Animals have been described as segmented for more than 2,000 years, yet a precise definition of segmentation remains elusive. Here we give the history of the definition of segmentation, followed by a discussion on current controversies in defining a segment. While there is a general consensus that segmentation involves the repetition of units along the anterior-posterior (a-p) axis, long-running debates exist over whether a segment can be composed of only one tissue layer, whether the most anterior region of the arthropod head is considered segmented, and whether and how the vertebrate head is segmented. Additionally, we discuss whether a segment can be composed of a single cell in a column of cells, or a single row of cells within a grid of cells. We suggest that 'segmentation' be used in its more general sense, the repetition of units with a-p polarity along the a-p axis, to prevent artificial classification of animals. We further suggest that this general definition be combined with an exact description of what is being studied, as well as a clearly stated hypothesis concerning the specific nature of the potential homology of structures. These suggestions should facilitate dialogue among scientists who study vastly differing segmental structures.

  17. Brain tumor segmentation based on a hybrid clustering technique

    Directory of Open Access Journals (Sweden)

    Eman Abdel-Maksoud

    2015-03-01

This paper presents an efficient image segmentation approach using the K-means clustering technique integrated with the Fuzzy C-means algorithm. It is followed by thresholding and level-set segmentation stages to provide accurate brain tumor detection. The proposed technique benefits from the minimal computation time of K-means clustering and from the accuracy of Fuzzy C-means. The performance of the proposed image segmentation approach was evaluated by comparing it with some state-of-the-art segmentation algorithms in terms of accuracy, processing time, and overall performance. Accuracy was evaluated by comparing the results with the ground truth of each processed image. The experimental results demonstrate the effectiveness of the proposed approach in handling a larger number of segmentation problems by improving segmentation quality and accuracy in minimal execution time.
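    A minimal stand-in for the first stage of such a hybrid pipeline is intensity clustering with K-means; the sketch below uses scikit-learn on a toy "slice" (the Fuzzy C-means refinement, thresholding, and level-set stages the paper adds are omitted).

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, n_clusters=4, random_state=0):
    """Cluster voxel intensities with K-means and return a label image.
    Only the K-means stage of the hybrid approach is sketched here."""
    intensities = image.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    labels = km.fit_predict(intensities)
    return labels.reshape(image.shape)

# Toy "MR slice": background, tissue, and a bright tumour-like blob.
img = np.zeros((64, 64)); img[20:40, 20:40] = 0.5; img[28:34, 28:34] = 1.0
img += np.random.default_rng(3).normal(0, 0.05, img.shape)
print(np.unique(kmeans_segment(img, n_clusters=3)))  # labels 0, 1, 2
```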

  18. Detection and Segmentation of Small Trees in the Forest-Tundra Ecotone Using Airborne Laser Scanning

    Directory of Open Access Journals (Sweden)

    Marius Hauglin

    2016-05-01

Full Text Available Due to expected climate change and increased focus on forests as a potential carbon sink, it is of interest to map and monitor even marginal forests where trees exist close to their tolerance limits, such as small pioneer trees in the forest-tundra ecotone. Such small trees might indicate tree line migrations and expansion of the forests into treeless areas. Airborne laser scanning (ALS) has been suggested and tested as a tool for this purpose, and in the present study a novel procedure for identification and segmentation of small trees is proposed. The study was carried out in the Rollag municipality in southeastern Norway, where ALS data and field measurements of individual trees were acquired. The point density of the ALS data was eight points per m2, and the field tree heights ranged from 0.04 to 6.3 m, with a mean of 1.4 m. The proposed method is based on an allometric model relating field-measured tree height to crown diameter, and another model relating field-measured tree height to ALS-derived height. These models are calibrated with local field data. Using these simple models, every positive above-ground height derived from the ALS data can be related to a crown diameter, and by assuming a circular crown shape, this crown diameter can be extended to a crown segment. Applying this model to all ALS echoes with a positive above-ground height value yields an initial map of possible circular crown segments. The final crown segments were then derived by applying a set of simple rules to this initial “map” of segments. The resulting segments were validated by comparison with field-measured crown segments. Overall, 46% of the field-measured trees were successfully detected. The detection rate increased with tree size. For trees with height >3 m the detection rate was 80%. The relatively large detection errors were partly due to the inherent limitations in the ALS data; a substantial fraction of the smaller trees was hit by no or just a few
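    The chaining of the two calibrated models (ALS height to field height, field height to crown diameter) is simple to sketch. The linear forms and every coefficient below are assumptions for illustration; the study fits its own local models.

```python
import numpy as np

def crown_diameter_from_als(als_height, a=0.5, b=0.2):
    """Chain the two calibrated models described in the abstract:
    ALS height -> field tree height -> crown diameter. The linear forms
    and coefficients here are assumed, not the study's fitted values."""
    tree_height = 1.05 * np.asarray(als_height) + 0.1  # assumed ALS-to-field model
    return a + b * tree_height                          # assumed height-to-crown model

heights = np.array([0.5, 1.4, 3.0])           # metres above ground
radii = crown_diameter_from_als(heights) / 2  # radii of circular crown segments
print(radii)
```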

  19. Graph-based pancreatic islet segmentation for early type 2 diabetes mellitus on histopathological tissue.

    Science.gov (United States)

    Floros, Xenofon; Fuchs, Thomas J; Rechsteiner, Markus P; Spinas, Giatgen; Moch, Holger; Buhmann, Joachim M

    2009-01-01

    It is estimated that in 2010 more than 220 million people will be affected by type 2 diabetes mellitus (T2DM). Early evidence indicates that specific markers for alpha and beta cells in pancreatic islets of Langerhans can be used for early T2DM diagnosis. Currently, the analysis of such histological tissues is manually performed by trained pathologists using a light microscope. To objectify classification results and to reduce the processing time of histological tissues, an automated computational pathology framework for segmentation of pancreatic islets from histopathological fluorescence images is proposed. Due to high variability in the staining intensities for alpha and beta cells, classical medical imaging approaches fail in this scenario. The main contribution of this paper consists of a novel graph-based segmentation approach based on cell nuclei detection with randomized tree ensembles. The algorithm is trained via a cross validation scheme on a ground truth set of islet images manually segmented by 4 expert pathologists. Test errors obtained from the cross validation procedure demonstrate that the graph-based computational pathology analysis proposed is performing competitively to the expert pathologists while outperforming a baseline morphological approach.

  20. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    Energy Technology Data Exchange (ETDEWEB)

    Ren, X; Gao, H [Shanghai Jiao Tong University, Shanghai, Shanghai (China); Sharp, G [Massachusetts General Hospital, Boston, MA (United States)

    2015-06-15

Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which other images are registered to each chosen image and DC is computed between registered contour and ground truth. Meanwhile, six strategies, including MI, are selected to measure the image similarity, with MI proving to be the best. Then given a target image to be segmented and an atlas, automatic segmentation consists of: (a) the affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of three deformed atlas images with highest MI values to form the segmented contour. Results: MI was found to be the best among six studied strategies in the sense that it had the highest positive correlation between similarity measure (e.g., MI values) and DC. For automated segmentation, the weighted image fusion of three deformed atlas images with highest MI values provided the highest DC among four proposed strategies. Conclusion: MI has the highest correlation with DC, and therefore is an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
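    The MI used for atlas ranking is the standard histogram-based estimator, MI = sum p(a,b) log(p(a,b) / (p(a) p(b))). A minimal sketch follows; the bin count and toy images are illustrative only.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images, usable
    for ranking deformed atlases against a target image."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pab = joint / joint.sum()
    pa = pab.sum(axis=1, keepdims=True)   # marginal of image A
    pb = pab.sum(axis=0, keepdims=True)   # marginal of image B
    nz = pab > 0
    return float((pab[nz] * np.log(pab[nz] / (pa @ pb)[nz])).sum())

rng = np.random.default_rng(4)
target = rng.random((64, 64))
print(mutual_information(target, target))                 # high: identical images
print(mutual_information(target, rng.random((64, 64))))   # near zero: unrelated
```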

  1. Voxel Based Segmentation of Large Airborne Topobathymetric LIDAR Data

    Science.gov (United States)

    Boerner, R.; Hoegner, L.; Stilla, U.

    2017-05-01

Point cloud segmentation and classification is currently a research highlight. Methods in this field create labelled data, where each point has additional class information. Current approaches generate a graph on the basis of all points in the point cloud, calculate or learn descriptors, and train a matcher from the descriptors to the corresponding classes. Since these approaches need to consider each point in the point cloud iteratively, they result in long calculation times for large point clouds. Therefore, large point clouds need a generalization to save computation time. One kind of generalization is to cluster the raw points into a 3D grid structure represented by small volume units (i.e., voxels) used for further processing. This paper introduces a method that uses such a voxel structure to cluster a large point cloud into ground and non-ground points. The proposed method for ground detection first marks ground voxels with a region growing approach. In a second step, non-ground voxels are searched for and filtered within the ground segment to reduce the effects of over-segmentation. This filter uses the probability that a voxel mostly consists of last pulses, together with a discrete gradient in a local neighbourhood. The result is the ground label as a first classification result and connected segments of non-ground points. The test area of the river Mangfall in Bavaria, Germany, is used for the first processing.
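    The voxelization step that makes this tractable is simple: map every point to an integer grid key so later stages (region growing, last-pulse statistics) operate on voxels rather than raw points. A minimal sketch, with an assumed voxel size:

```python
import numpy as np

def voxelize(points, voxel_size=1.0):
    """Cluster raw LiDAR points into a sparse voxel grid: each point is
    mapped to an integer (i, j, k) key; downstream steps then work on
    voxels instead of individual points."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    voxels = {}
    for key, pt in zip(map(tuple, keys), points):
        voxels.setdefault(key, []).append(pt)
    return voxels

# Toy cloud: three points, two of which share a voxel at 1 m resolution.
pts = np.array([[0.2, 0.3, 0.1], [0.7, 0.1, 0.4], [5.0, 5.0, 2.0]])
grid = voxelize(pts)
print(len(grid), sorted(grid))  # 2 voxels: (0, 0, 0) and (5, 5, 2)
```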

  2. Ground Wars

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Kleis

    Political campaigns today are won or lost in the so-called ground war--the strategic deployment of teams of staffers, volunteers, and paid part-timers who work the phones and canvass block by block, house by house, voter by voter. Ground Wars provides an in-depth ethnographic portrait of two...... infrastructures that utilize large databases with detailed individual-level information for targeting voters, and armies of dedicated volunteers and paid part-timers. Nielsen challenges the notion that political communication in America must be tightly scripted, controlled, and conducted by a select coterie...... of professionals. Yet he also quashes the romantic idea that canvassing is a purer form of grassroots politics. In today's political ground wars, Nielsen demonstrates, even the most ordinary-seeming volunteer knocking at your door is backed up by high-tech targeting technologies and party expertise. Ground Wars...

  3. A Cognitively Grounded Measure of Pronunciation Distance

    Science.gov (United States)

    Wieling, Martijn; Nerbonne, John; Bloem, Jelke; Gooskens, Charlotte; Heeringa, Wilbert; Baayen, R. Harald

    2014-01-01

    In this study we develop pronunciation distances based on naive discriminative learning (NDL). Measures of pronunciation distance are used in several subfields of linguistics, including psycholinguistics, dialectology and typology. In contrast to the commonly used Levenshtein algorithm, NDL is grounded in cognitive theory of competitive reinforcement learning and is able to generate asymmetrical pronunciation distances. In a first study, we validated the NDL-based pronunciation distances by comparing them to a large set of native-likeness ratings given by native American English speakers when presented with accented English speech. In a second study, the NDL-based pronunciation distances were validated on the basis of perceptual dialect distances of Norwegian speakers. Results indicated that the NDL-based pronunciation distances matched perceptual distances reasonably well with correlations ranging between 0.7 and 0.8. While the correlations were comparable to those obtained using the Levenshtein distance, the NDL-based approach is more flexible as it is also able to incorporate acoustic information other than sound segments. PMID:24416119

  4. A cognitively grounded measure of pronunciation distance.

    Science.gov (United States)

    Wieling, Martijn; Nerbonne, John; Bloem, Jelke; Gooskens, Charlotte; Heeringa, Wilbert; Baayen, R Harald

    2014-01-01

    In this study we develop pronunciation distances based on naive discriminative learning (NDL). Measures of pronunciation distance are used in several subfields of linguistics, including psycholinguistics, dialectology and typology. In contrast to the commonly used Levenshtein algorithm, NDL is grounded in cognitive theory of competitive reinforcement learning and is able to generate asymmetrical pronunciation distances. In a first study, we validated the NDL-based pronunciation distances by comparing them to a large set of native-likeness ratings given by native American English speakers when presented with accented English speech. In a second study, the NDL-based pronunciation distances were validated on the basis of perceptual dialect distances of Norwegian speakers. Results indicated that the NDL-based pronunciation distances matched perceptual distances reasonably well with correlations ranging between 0.7 and 0.8. While the correlations were comparable to those obtained using the Levenshtein distance, the NDL-based approach is more flexible as it is also able to incorporate acoustic information other than sound segments.

  5. A cognitively grounded measure of pronunciation distance.

    Directory of Open Access Journals (Sweden)

    Martijn Wieling

Full Text Available In this study we develop pronunciation distances based on naive discriminative learning (NDL). Measures of pronunciation distance are used in several subfields of linguistics, including psycholinguistics, dialectology and typology. In contrast to the commonly used Levenshtein algorithm, NDL is grounded in cognitive theory of competitive reinforcement learning and is able to generate asymmetrical pronunciation distances. In a first study, we validated the NDL-based pronunciation distances by comparing them to a large set of native-likeness ratings given by native American English speakers when presented with accented English speech. In a second study, the NDL-based pronunciation distances were validated on the basis of perceptual dialect distances of Norwegian speakers. Results indicated that the NDL-based pronunciation distances matched perceptual distances reasonably well with correlations ranging between 0.7 and 0.8. While the correlations were comparable to those obtained using the Levenshtein distance, the NDL-based approach is more flexible as it is also able to incorporate acoustic information other than sound segments.

  6. Keypoint Transfer Segmentation

    OpenAIRE

    Wachinger, C.; Toews, M.; Langs, G.; Wells, W.; Golland, P.

    2015-01-01

    We present an image segmentation method that transfers label maps of entire organs from the training images to the novel image to be segmented. The transfer is based on sparse correspondences between keypoints that represent automatically identified distinctive image locations. Our segmentation algorithm consists of three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ label maps. We introduce generative models for th...

  7. Extracting Urban Ground Object Information from Images and LiDAR Data

    Science.gov (United States)

    Yi, Lina; Zhao, Xuesheng; Li, Luan; Zhang, Guifeng

    2016-06-01

To deal with the problem of urban ground object information extraction, this paper proposes an object-oriented classification method using aerial imagery and LiDAR data. Firstly, we select the optimal segmentation scales of different ground objects and synthesize them to get accurate object boundaries. Then, the ReliefF algorithm is used to select the optimal feature combination and eliminate the Hughes phenomenon. Finally, a multiple classifier combination method is applied to obtain the classification result. In order to validate the feasibility of this method, two experimental regions in Stuttgart, Germany were selected (Regions A and B, covering 0.21 km2 and 1.1 km2, respectively). The aim of the first experiment, on Region A, is to determine the optimal segmentation scales and classification features; the overall accuracy of the classification reaches 93.3%. The purpose of the experiment on Region B is to validate the applicability of this method to a larger area, where it reaches an overall accuracy of 88.4%. The conclusion is that the proposed method performs accurately and efficiently for urban ground information extraction and is of high application value.

  8. Estimation of regeneration coverage in a temperate forest by 3D segmentation using airborne laser scanning data

    Science.gov (United States)

    Amiri, Nina; Yao, Wei; Heurich, Marco; Krzystek, Peter; Skidmore, Andrew K.

    2016-10-01

    Forest understory and regeneration are important factors in sustainable forest management. However, understanding their spatial distribution in multilayered forests requires accurate and continuously updated field data, which are difficult and time-consuming to obtain. Therefore, cost-efficient inventory methods are required, and airborne laser scanning (ALS) is a promising tool for obtaining such information. In this study, we examine a clustering-based 3D segmentation in combination with ALS data for regeneration coverage estimation in a multilayered temperate forest. The core of our method is a two-tiered segmentation of the 3D point clouds into segments associated with regeneration trees. First, small parts of trees (super-voxels) are constructed through mean shift clustering, a nonparametric procedure for finding the local maxima of a density function. In the second step, we form a graph based on the mean shift clusters and merge them into larger segments using the normalized cut algorithm. These segments are used to obtain regeneration coverage of the target plot. Results show that, based on validation data from field inventory and terrestrial laser scanning (TLS), our approach correctly estimates up to 70% of regeneration coverage across the plots with different properties, such as tree height and tree species. The proposed method is negatively impacted by the density of the overstory because of decreasing ground point density. In addition, the estimated coverage has a strong relationship with the overstory tree species composition.
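    A rough sketch of the first tier (mean-shift super-voxels) using scikit-learn's MeanShift on a toy point cloud follows. The bandwidth and the synthetic blobs are assumptions, and the second-tier normalized-cut merging is omitted.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Toy ALS point cloud: two small "regeneration trees" as Gaussian blobs.
rng = np.random.default_rng(5)
tree1 = rng.normal([2.0, 2.0, 0.8], 0.3, size=(60, 3))
tree2 = rng.normal([6.0, 5.0, 1.5], 0.3, size=(60, 3))
points = np.vstack([tree1, tree2])

# Mean shift finds modes of the point density; the resulting clusters
# play the role of super-voxels. The bandwidth is an assumed value that
# a real study would tune before merging clusters with normalized cuts.
ms = MeanShift(bandwidth=1.0).fit(points)
print(len(np.unique(ms.labels_)), "clusters")  # expect 2
```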

  9. A novel segmentation approach for noisy medical images using intuitionistic fuzzy divergence with neighbourhood-based membership function.

    Science.gov (United States)

    Jati, A; Singh, G; Koley, S; Konar, A; Ray, A K; Chakraborty, C

    2015-03-01

Medical image segmentation demands higher segmentation accuracy, especially when the images are affected by noise. This paper proposes a novel technique to segment medical images efficiently using intuitionistic fuzzy divergence-based thresholding. A neighbourhood-based membership function is defined here. The intuitionistic fuzzy divergence-based image thresholding technique using the neighbourhood-based membership functions yields less degradation of segmentation performance in noisy environments. Its ability to handle noisy images has been validated. The algorithm is independent of any parameter selection. Moreover, it provides robustness to both additive and multiplicative noise. The proposed scheme has been applied to three types of medical image datasets in order to establish its novelty and generality. The performance of the proposed algorithm has been compared with other standard algorithms, viz. Otsu's method, fuzzy C-means clustering, and fuzzy divergence-based thresholding, with respect to (1) noise-free images and (2) ground truth images labelled by experts/clinicians. Experiments show that the proposed methodology is effective, more accurate, and efficient for segmenting noisy images.

  10. Development, validation, and application of a novel LC-MS/MS trace analysis method for the simultaneous quantification of seven iodinated X-ray contrast media and three artificial sweeteners in surface, ground, and drinking water.

    Science.gov (United States)

    Ens, Waldemar; Senner, Frank; Gygax, Benjamin; Schlotterbeck, Götz

    2014-05-01

A new method for the simultaneous determination of iodinated X-ray contrast media (ICM) and artificial sweeteners (AS) by liquid chromatography-tandem mass spectrometry (LC-MS/MS) operated in positive and negative ionization switching mode was developed. The method was validated for surface, ground, and drinking water samples. In order to gain higher sensitivity, a 10-fold sample enrichment step using a Genevac EZ-2 plus centrifugal vacuum evaporator that provided excellent recoveries (90 ± 6 %) was selected for sample preparation. Limits of quantification below 10 ng/L were obtained for all compounds. Furthermore, sample preparation recoveries and matrix effects were investigated thoroughly for all matrix types. Considerable matrix effects were observed in surface water and could be compensated by the use of four stable isotope-labeled internal standards. Due to their persistence, fractions of diatrizoic acid, iopamidol, and acesulfame could pass through the whole drinking water production process and were also observed in drinking water. To monitor the fate and occurrence of these compounds, the validated method was applied to samples from different stages of the drinking water production process of the Industrial Works of Basel (IWB). Diatrizoic acid was found to be the most persistent compound, being eliminated by just 40 % during the whole drinking water treatment process, followed by iopamidol (80 % elimination) and acesulfame (85 % elimination). All other compounds were completely restrained and/or degraded by the soil and thus were not detected in groundwater. Additionally, a direct injection method without sample preparation achieving 3-20 ng/L limits of quantification was compared to the developed method.

  11. Acellular allogeneic nerve grafting combined with bone marrow mesenchymal stem cell transplantation for the repair of long-segment sciatic nerve defects:biomechanics and validation of mathematical models

    Institute of Scientific and Technical Information of China (English)

    Ya-jun Li; Bao-lin Zhao; Hao-ze Lv; Zhi-gang Qin; Min Luo

    2016-01-01

We hypothesized that a chemically extracted acellular allogeneic nerve graft used in combination with bone marrow mesenchymal stem cell transplantation would be an effective treatment for long-segment sciatic nerve defects. To test this, we established rabbit models of 30 mm sciatic nerve defects, and treated them using either an autograft or a chemically decellularized allogeneic nerve graft with or without simultaneous transplantation of bone marrow mesenchymal stem cells. We compared the tensile properties, electrophysiological function and morphology of the damaged nerve in each group. Sciatic nerves repaired by the allogeneic nerve graft combined with stem cell transplantation showed better recovery than those repaired by the acellular allogeneic nerve graft alone, and produced similar results to those observed with the autograft. These findings confirm that a chemically extracted acellular allogeneic nerve graft combined with transplantation of bone marrow mesenchymal stem cells is an effective method of repairing long-segment sciatic nerve defects.

  12. Auxiliary anatomical labels for joint segmentation and atlas registration

    Science.gov (United States)

    Gass, Tobias; Szekely, Gabor; Goksel, Orcun

    2014-03-01

This paper studies improving joint segmentation and registration by introducing auxiliary labels for anatomy that has similar appearance to the target anatomy while not being part of that target. Such auxiliary labels help avoid false positive labelling of non-target anatomy by resolving ambiguity. A known registration of a segmented atlas can help identify where a target segmentation should lie. Conversely, segmentations of anatomy in two images can help them be better registered. Joint segmentation and registration is thus a method that can leverage information from both registration and segmentation to help one another. It has received increasing attention recently in the literature. Often, merely a single organ of interest is labelled in the atlas. In the presence of other anatomical structures with similar appearance, this leads to ambiguity in intensity-based segmentation; for example, when segmenting individual bones in CT images where other bones share the same intensity profile. To alleviate this problem, we introduce automatic generation of additional labels in atlas segmentations, by marking similar-appearance non-target anatomy with an auxiliary label. Information from the auxiliary-labeled atlas segmentation is then incorporated by using a novel coherence potential, which penalizes differences between the deformed atlas segmentation and the target segmentation estimate. We validated this on a joint segmentation-registration approach that iteratively alternates between registering an atlas and segmenting the target image to find a final anatomical segmentation. The results show that automatic auxiliary labelling outperforms the same approach using single-label atlases, for both mandibular bone segmentation in 3D-CT and corpus callosum segmentation in 2D-MRI.

  13. Universal Numeric Segmented Display

    CERN Document Server

    Azad, Md Abul kalam; Kamruzzaman, S M

    2010-01-01

Segmented displays play a vital role in displaying numerals, although matrix displays are also used today because numerals have many curved edges that are better supported by a matrix layout. Since matrix displays are costly and complex to implement and also need more memory, segmented displays are generally used to display numerals. However, as no compact display architecture has yet been proposed to display numerals from multiple languages at the same time, this paper proposes a uniform display architecture to display digits from multiple languages and general mathematical expressions with higher accuracy and simplicity by using an 18-segment display, which is an improvement over the 16-segment display.
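    The record does not reproduce the paper's 18-segment layout, so as a baseline for how segment-pattern lookup tables work, the sketch below encodes decimal digits for the classic 7-segment display (bit i lights segment 'a'+i, common-cathode convention); the proposed architecture generalizes the same idea to more segments and scripts.

```python
# Classic 7-segment encodings (segments a-g as bits 0-6, common cathode),
# shown only as a baseline; the paper's 18-segment patterns would be wider.
SEVEN_SEG = {
    0: 0x3F, 1: 0x06, 2: 0x5B, 3: 0x4F, 4: 0x66,
    5: 0x6D, 6: 0x7D, 7: 0x07, 8: 0x7F, 9: 0x6F,
}

def segments_for(digit):
    """Return the list of lit segments ('a'..'g') for a decimal digit."""
    pattern = SEVEN_SEG[digit]
    return [chr(ord('a') + i) for i in range(7) if pattern >> i & 1]

print(segments_for(2))  # ['a', 'b', 'd', 'e', 'g']
```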

  14. Ground Validation GPS for American Samoa

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This project is a cooperative effort among the National Ocean Service, National Centers for Coastal Ocean Science, Center for Coastal Monitoring and Assessment; the...

  15. Wind-induced ground motion

    Science.gov (United States)

    Naderyan, Vahid; Hickey, Craig J.; Raspet, Richard

    2016-02-01

Wind noise is a problem in seismic surveys and can mask the seismic signals at low frequency. This research investigates ground motions caused by wind pressure and shear stress perturbations on the ground surface. A prediction of the ground displacement spectra using the measured ground properties and the predicted pressure and shear stress at the ground surface is developed. Field measurements are conducted at a site having a flat terrain and low ambient seismic noise. Triaxial geophones are deployed at different depths to study the wind-induced ground vibrations as a function of depth and wind velocity. Comparison of the predicted to the measured wind-induced ground displacement spectra shows good agreement for the vertical component but significant underprediction for the horizontal components. To validate the theoretical model, a test experiment is designed to exert controlled normal pressure and shear stress on the ground using a vertical and a horizontal mass-spring apparatus. This experiment verifies the linear elastic rheology and the quasi-static displacement assumptions of the model. The results indicate that the existing surface shear stress models significantly underestimate the wind shear stress at the ground surface, and that the amplitude of the fluctuating shear stress must be of the same order of magnitude as the normal pressure. Measurement results show that mounting the geophones flush with the ground provides a significant reduction in wind noise on all three components of the geophone. Further reduction in wind noise with depth of burial is small for depths up to 40 cm.

  16. Automatic metastatic brain tumor segmentation for stereotactic radiosurgery applications

    Science.gov (United States)

    Liu, Yan; Stojadinovic, Strahinja; Hrycushko, Brian; Wardak, Zabi; Lu, Weiguo; Yan, Yulong; Jiang, Steve B.; Timmerman, Robert; Abdulrahman, Ramzi; Nedzi, Lucien; Gu, Xuejun

    2016-12-01

The objective of this study is to develop an automatic segmentation strategy for efficient and accurate metastatic brain tumor delineation on contrast-enhanced T1-weighted (T1c) magnetic resonance images (MRI) for stereotactic radiosurgery (SRS) applications. The proposed four-step automatic brain metastases segmentation strategy is comprised of pre-processing, initial contouring, contour evolution, and contour triage. First, T1c brain images are preprocessed to remove the skull. Second, an initial tumor contour is created using a multi-scaled adaptive threshold-based bounding box and a super-voxel clustering technique. Third, the initial contours are evolved to the tumor boundary using a regional active contour technique. Fourth, all detected false-positive contours are removed with geometric characterization. The segmentation process was validated on a realistic virtual phantom containing Gaussian or Rician noise. For each type of noise distribution, five different noise levels were tested. Twenty-one cases from the multimodal brain tumor image segmentation (BRATS) challenge dataset and fifteen clinical metastases cases were also included in validation. Segmentation performance was quantified by the Dice coefficient (DC), normalized mutual information (NMI), structural similarity (SSIM), Hausdorff distance (HD), mean value of surface-to-surface distance (MSSD) and standard deviation of surface-to-surface distance (SDSSD). In the numerical phantom study, the evaluation yielded a DC of 0.98 ± 0.01, an NMI of 0.97 ± 0.01, an SSIM of 0.999 ± 0.001, an HD of 2.2 ± 0.8 mm, an MSSD of 0.1 ± 0.1 mm, and an SDSSD of 0.3 ± 0.1 mm. The validation on the BRATS data resulted in a DC of 0.89 ± 0.08, which outperforms the BRATS challenge algorithms. Evaluation on clinical datasets gave a DC of 0.86 ± 0.09, an NMI of 0.80 ± 0.11, an SSIM of 0.999 ± 0.001, an HD of 8
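    Among the contour-quality metrics quoted above, the symmetric Hausdorff distance is easy to compute from surface point sets with SciPy; a minimal sketch on toy contours follows.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(surface_a, surface_b):
    """Symmetric Hausdorff distance between two surface point sets,
    one of the contour-quality metrics (HD) quoted in the abstract."""
    return max(directed_hausdorff(surface_a, surface_b)[0],
               directed_hausdorff(surface_b, surface_a)[0])

# Toy contours: two squares offset by 1 mm along x.
a = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], dtype=float)
b = a + np.array([1.0, 0.0])
print(hausdorff(a, b))  # 1.0
```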

  17. From interpretation to segmentation

    NARCIS (Netherlands)

    Koning, A.R.; Lier, R.J. van

    2005-01-01

    In visual perception, part segmentation of an object is considered to be guided by image-based properties, such as occurrences of deep concavities in the outer contour. However, object-based properties can also provide information regarding segmentation. In this study, outer contours and interpretat

  18. Segmentation, advertising and prices

    NARCIS (Netherlands)

    Galeotti, Andrea; Moraga González, José

    This paper explores the implications of market segmentation on firm competitiveness. In contrast to earlier work, here market segmentation is minimal in the sense that it is based on consumer attributes that are completely unrelated to tastes. We show that when the market is comprised by two

  19. Segmentation, advertising and prices

    NARCIS (Netherlands)

    Galeotti, Andrea; Moraga González, José

    2008-01-01

    This paper explores the implications of market segmentation on firm competitiveness. In contrast to earlier work, here market segmentation is minimal in the sense that it is based on consumer attributes that are completely unrelated to tastes. We show that when the market is comprised by two consume

  20. Benign segmental bronchial obstruction

    Energy Technology Data Exchange (ETDEWEB)

    Loercher, U.

    1988-09-01

Benign segmental bronchial obstruction - mostly discovered on routine chest films - can be diagnosed well by CT. The specific findings in CT are the site of the bronchial obstruction, the mucocele, and the localized emphysema of the involved segment. Furthermore, CT allows a better approach to the underlying process.

  1. Hospital benefit segmentation.

    Science.gov (United States)

    Finn, D W; Lamb, C W

    1986-12-01

    Market segmentation is an important topic to both health care practitioners and researchers. The authors explore the relative importance that health care consumers attach to various benefits available in a major metropolitan area hospital. The purposes of the study are to test, and provide data to illustrate, the efficacy of one approach to hospital benefit segmentation analysis.

  2. Automated vessel shadow segmentation of fovea-centered spectral-domain images from multiple OCT devices

    Science.gov (United States)

    Wu, Jing; Gerendas, Bianca S.; Waldstein, Sebastian M.; Simader, Christian; Schmidt-Erfurth, Ursula

    2014-03-01

    Spectral-domain Optical Coherence Tomography (SD-OCT) is a non-invasive modality for acquiring high resolution, three-dimensional (3D) cross sectional volumetric images of the retina and the subretinal layers. SD-OCT also allows the detailed imaging of retinal pathology, aiding clinicians in the diagnosis of sight-degrading diseases such as age-related macular degeneration (AMD) and glaucoma [1]. Disease diagnosis, assessment, and treatment require a patient to undergo multiple OCT scans, possibly using different scanning devices, to accurately and precisely gauge disease activity, progression and treatment success. However, the use of OCT imaging devices from different vendors, combined with patient movement, may result in poor scan spatial correlation, potentially leading to incorrect patient diagnosis or treatment analysis. Image registration can be used to precisely compare disease states by registering differing 3D scans to one another. In order to align 3D scans from different time-points and vendors using registration, landmarks are required, the most obvious being the retinal vasculature. Presented here is a fully automated cross-vendor method to acquire retina vessel locations for OCT registration from fovea-centred 3D SD-OCT scans based on vessel shadows. Noise-filtered OCT scans are flattened based on vendor retinal layer segmentation, to extract the retinal pigment epithelium (RPE) layer of the retina. Voxel-based layer profile analysis and k-means clustering are used to extract candidate vessel shadow regions from the RPE layer. In conjunction, the extracted RPE layers are combined to generate a projection image featuring all candidate vessel shadows. Image processing methods for vessel segmentation of the OCT constructed projection image are then applied to optimize the accuracy of OCT vessel shadow segmentation through the removal of false positive shadow regions such as those caused by exudates and cysts. Validation of segmented vessel shadows uses
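
    As a rough illustration of the clustering step described above, the sketch below (not the authors' implementation) applies k-means with two clusters to per-column intensities of an already flattened RPE slab; the array shapes, k=2 and the darker-cluster rule are assumptions.

```python
# Hedged sketch: k-means candidate vessel-shadow detection on an RPE slab.
import numpy as np
from sklearn.cluster import KMeans

def vessel_shadow_candidates(rpe_slab):
    """rpe_slab: (n_bscans, depth, width) intensities of the extracted RPE layer.
    Returns a boolean (n_bscans, width) map of candidate shadow columns."""
    projection = rpe_slab.mean(axis=1)          # en-face projection image
    features = projection.reshape(-1, 1)        # one intensity feature per A-scan
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
    # Vessel shadows attenuate the RPE signal, so keep the darker cluster.
    means = [features[labels == k].mean() for k in (0, 1)]
    shadow_label = int(np.argmin(means))
    return (labels == shadow_label).reshape(projection.shape)
```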

  3. Segmentation of the human spinal cord.

    Science.gov (United States)

    De Leener, Benjamin; Taso, Manuel; Cohen-Adad, Julien; Callot, Virginie

    2016-04-01

    Segmenting the spinal cord contour is a necessary step for quantifying spinal cord atrophy in various diseases. Delineating gray matter (GM) and white matter (WM) is also useful for quantifying GM atrophy or for extracting multiparametric MRI metrics into specific WM tracts. Spinal cord segmentation in clinical research is not as developed as brain segmentation; however, with the substantial improvement of MR sequences adapted to spinal cord MR investigations, the field of spinal cord MR segmentation has advanced greatly within the last decade. Segmentation techniques with variable accuracy and degree of complexity have been developed and reported in the literature. In this paper, we review some of the existing methods for cord and WM/GM segmentation, including intensity-based, surface-based, and image-based methods. We also provide recommendations for validating spinal cord segmentation techniques, as it is important to understand the intrinsic characteristics of the methods and to evaluate their performance and limitations. Lastly, we illustrate some applications in the healthy and pathological spinal cord. One conclusion of this review is that robust and automatic segmentation is clinically relevant, as it would allow for longitudinal and group studies free from user bias as well as reproducible multicentric studies in large populations, thereby helping to further our understanding of the spinal cord pathophysiology and to develop new criteria for early detection of subclinical evolution for prognosis prediction and for patient management. Another conclusion is that at the present time, no single method adequately segments the cord and its substructure in all the cases encountered (abnormal intensities, loss of contrast, deformation of the cord, etc.). A combination of different approaches is thus advised for future developments, along with the introduction of probabilistic shape models. Maturation of standardized frameworks, multiplatform availability, inclusion

  4. Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections

    DEFF Research Database (Denmark)

    Bertholet, Jenny; Wan, Hanlin; Toftegaard, Jakob;

    2017-01-01

    algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated....... The mean 2D segmentation error of DP was reduced from 4.1 pixels to 3.0 pixels by DPTB, while the fraction of wrong segmentations was reduced from 17.4% to 6.8%. DPTB allowed rejection of uncertain segmentations as deemed by a low normalized cross-correlation coefficient and contrast-to-noise ratio....... For a rejection rate of 9.97%, the sensitivity in detecting wrong segmentations was 67% and the specificity was 94%. The accepted segmentations had a mean segmentation error of 1.8 pixels and 2.5% wrong segmentations....
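
    The rejection criterion mentioned in this record relies on a normalized cross-correlation coefficient; a minimal sketch of such a score is given below (illustrative only, not the paper's code; the patch extraction and the contrast-to-noise term are omitted).

```python
# Hedged sketch: zero-mean normalized cross-correlation (NCC) between a
# segmented marker patch and a template; a low NCC flags an uncertain
# segmentation for rejection.
import numpy as np

def ncc(patch, template):
    """NCC in [-1, 1] for two same-shaped 2-D arrays."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```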

  5. IMPROVED HYBRID SEGMENTATION OF BRAIN MRI TISSUE AND TUMOR USING STATISTICAL FEATURES

    Directory of Open Access Journals (Sweden)

    S. Allin Christe

    2010-08-01

    Full Text Available Medical image segmentation is the most essential and crucial process for facilitating the characterization and visualization of the structures of interest in medical images. A relevant application in neuroradiology is the segmentation of MRI data sets of the human brain into the structure classes gray matter, white matter, cerebrospinal fluid (CSF) and tumor. In this paper, brain image segmentation algorithms such as Fuzzy C-means (FCM) segmentation and Kohonen (K-means) segmentation were implemented. In addition, a new hybrid segmentation technique, namely Fuzzy Kohonen means of image segmentation based on statistical feature clustering, is proposed and implemented along with the standard pixel-value clustering method. The clustered segmented tissue images are compared with the ground truth, and their performance metrics are computed. It is found that the feature-based hybrid segmentation gives improved performance metrics and improved classification accuracy compared with pixel-based segmentation.
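
    For readers unfamiliar with the FCM baseline named above, a minimal intensity-only implementation is sketched below; the feature-based hybrid would replace the scalar intensities with statistical feature vectors. The parameter values are assumptions, not the paper's settings.

```python
# Minimal fuzzy C-means (FCM) sketch on 1-D pixel intensities.
import numpy as np

def fcm(x, c=4, m=2.0, iters=100, tol=1e-5, seed=0):
    """x: (n,) intensities. Returns (memberships (n, c), centers (c,))."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.size)      # random initial memberships
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(0) / um.sum(0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        p = 2.0 / (m - 1.0)
        u_new = (d ** -p) / (d ** -p).sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            return u_new, centers
        u = u_new
    return u, centers
```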

  6. An objective evaluation framework for segmentation techniques of functional positron emission tomography studies

    CERN Document Server

    Kim, J; Eberl, S; Feng, D

    2004-01-01

    Segmentation of multi-dimensional functional positron emission tomography (PET) studies into regions of interest (ROI) exhibiting similar temporal behavior is useful in diagnosis and evaluation of neurological images. Quantitative evaluation plays a crucial role in measuring the segmentation algorithm's performance. Due to the lack of "ground truth" available for evaluating segmentation of clinical images, automated segmentation results are usually compared with manual delineation of structures which is, however, subjective, and is difficult to perform. Alternatively, segmentation of co-registered anatomical images such as magnetic resonance imaging (MRI) can be used as the ground truth to the PET segmentation. However, this is limited to PET studies which have corresponding MRI. In this study, we introduce a framework for the objective and quantitative evaluation of functional PET study segmentation without the need for manual delineation or registration to anatomical images of the patient. The segmentation ...

  7. Features of the Deployed NPOESS Ground System

    Science.gov (United States)

    Smith, D.; Grant, K. D.; Route, G.; Heckmann, G.

    2009-12-01

    NOAA, DoD, and NASA are jointly acquiring the National Polar-orbiting Operational Environmental Satellite System (NPOESS) replacing the current NOAA Polar-orbiting Operational Environmental Satellites (POES) and the DoD's Defense Meteorological Satellite Program (DMSP). The NPOESS satellites will carry a suite of sensors to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere and space. The ground data processing segment is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence & Information Systems (IIS). The IDPS processes NPOESS satellite data to provide environmental data products (aka, Environmental Data Records or EDRs) to US NOAA and DoD processing centers. The IDPS will process EDRs beginning with the NPOESS Preparatory Project (NPP) and through the lifetime of the NPOESS system. The command and telemetry segment is the Command, Control and Communications Segment (C3S), also developed by Raytheon IIS. C3S is responsible for managing the overall NPOESS mission from control and status of the space and ground assets to ensuring delivery of timely, high quality data from the Space Segment (SS) to IDPS for processing. In addition, the C3S provides the globally distributed ground assets necessary to collect and transport mission, telemetry, and command data between the satellites and the processing locations. The C3S provides all functions required for day-to-day commanding and state-of-health monitoring of the NPP and NPOESS satellites, and delivery of SMD to each Central IDP for data products development and transfer to System subscribers. The C3S also monitors and reports system-wide health, status and data communications with external systems and between the NPOESS segments. The NPOESS C3S and IDPS ground segments have been delivered and transitioned to operations for NPP. C3S was transitioned to operations at the NOAA Satellite Operations Facility in Suitland MD in August

  8. Segmentation of histological structures for fractal analysis

    Science.gov (United States)

    Dixon, Vanessa; Kouznetsov, Alexei; Tambasco, Mauro

    2009-02-01

    Pathologists examine histology sections to make diagnostic and prognostic assessments regarding cancer based on deviations in cellular and/or glandular structures. However, these assessments are subjective and exhibit some degree of observer variability. Recent studies have shown that fractal dimension (a quantitative measure of structural complexity) has proven useful for characterizing structural deviations and exhibits great potential for automated cancer diagnosis and prognosis. Computing fractal dimension relies on accurate image segmentation to capture the architectural complexity of the histology specimen. For this purpose, previous studies have used techniques such as intensity histogram analysis and edge detection algorithms. However, care must be taken when segmenting pathologically relevant structures since improper edge detection can result in an inaccurate estimation of fractal dimension. In this study, we established a reliable method for segmenting edges from grayscale images. We used a Koch snowflake, an object of known fractal dimension, to investigate the accuracy of various edge detection algorithms and selected the most appropriate algorithm to extract the outline structures. Next, we created validation objects ranging in fractal dimension from 1.3 to 1.9 imitating the size, structural complexity, and spatial pixel intensity distribution of stained histology section images. We applied increasing intensity thresholds to the validation objects to extract the outline structures and observe the effects on the corresponding segmentation and fractal dimension. The intensity threshold yielding the maximum fractal dimension provided the most accurate fractal dimension and segmentation, indicating that this quantitative method could be used in an automated classification system for histology specimens.
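
    The fractal dimension referred to above is typically estimated by box counting on the segmented outline; a hedged sketch follows (illustrative only; the intensity-threshold selection step is not reproduced, and the box sizes are assumptions).

```python
# Hedged sketch: box-counting estimate of fractal dimension for a binary
# edge image (assumes the image is at least as large as the biggest box).
import numpy as np

def box_counting_dimension(edges, sizes=(2, 4, 8, 16, 32, 64)):
    """edges: 2-D boolean array of segmented outline pixels (non-empty)."""
    counts = []
    for s in sizes:
        h = (edges.shape[0] // s) * s
        w = (edges.shape[1] // s) * s
        blocks = edges[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
    # Slope of log N(s) versus log(1/s) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```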

  9. Block-o-Matic: a Web Page Segmentation Tool and its Evaluation

    OpenAIRE

    Sanoja, Andrés; Gançarski, Stéphane

    2013-01-01

    In this paper we present our prototype for web page segmentation, called Block-o-Matic, and its counterpart Block-o-Manual for manual segmentation. The main idea is to evaluate the correctness of the segmentation algorithm. Building a ground-truth database for evaluation can take days or months depending on the collection size; we address this with our manual segmentation tool, intended to minimize the time needed to annotate blocks in web pages. Both tools imp...

  10. An experimentally validated panel of subfamily-specific oligonucleotide primers (V alpha 1-w29/V beta 1-w24) for the study of human T cell receptor variable V gene segment usage by polymerase chain reaction.

    Science.gov (United States)

    Genevée, C; Diu, A; Nierat, J; Caignard, A; Dietrich, P Y; Ferradini, L; Roman-Roman, S; Triebel, F; Hercend, T

    1992-05-01

    We report here the characterization of a series of T cell receptor (TcR) V alpha or V beta subfamily-specific oligonucleotide primers. Criteria that have guided the design of each oligonucleotide include appropriate thermodynamic parameters as well as differential base-pairing scores with related and unrelated target sequences. The specificity of the oligonucleotides for each V alpha or V beta subfamily was tested by polymerase chain reaction (PCR) on both a series of TcR encoding plasmid DNA and clonal T cell populations. Unexpected cross-reactivities were observed with plasmid cDNA sequences corresponding to unrelated subfamily gene segments. This led to the synthesis of additional series of oligonucleotides to obtain a relevant panel. A series of V alpha 1-w29/V beta 1-w24 TcR subfamily-specific oligonucleotides was eventually selected which generates little, if any, cross-reactivity. The use of C alpha or C beta primers for the amplification of internal positive control templates (i.e. C beta for the V alpha series and C alpha for the V beta series) has been tested in PCR performed with cDNA derived from peripheral blood lymphocytes; it was shown not to alter the amplification of the V subfamily-specific DNA fragments. This panel of oligonucleotides will be helpful in the study of TcRV gene segment usage and, thus, may lead to a better characterization of T cell responses in physiological and pathological situations.

  11. Keypoint Transfer Segmentation.

    Science.gov (United States)

    Wachinger, C; Toews, M; Langs, G; Wells, W; Golland, P

    2015-01-01

    We present an image segmentation method that transfers label maps of entire organs from the training images to the novel image to be segmented. The transfer is based on sparse correspondences between keypoints that represent automatically identified distinctive image locations. Our segmentation algorithm consists of three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ label maps. We introduce generative models for the inference of keypoint labels and for image segmentation, where keypoint matches are treated as a latent random variable and are marginalized out as part of the algorithm. We report segmentation results for abdominal organs in whole-body CT and in contrast-enhanced CT images. The accuracy of our method compares favorably to common multi-atlas segmentation while offering a speed-up of about three orders of magnitude. Furthermore, keypoint transfer requires no training phase or registration to an atlas. The algorithm's robustness enables the segmentation of scans with highly variable field-of-view.

  12. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
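
    As a hedged illustration of the steerable, seed-driven idea described above, the sketch below uses scikit-image's random walker from user-placed seeds; the paper's exact coupling with region growing is not reproduced, and the beta value is an assumption.

```python
# Hedged sketch: seed-based random-walker delineation (scikit-image).
import numpy as np
from skimage.segmentation import random_walker

def segment_with_seeds(image, fg_seeds, bg_seeds, beta=130):
    """image: 2-D/3-D grayscale array; fg_seeds/bg_seeds: boolean masks of
    user-placed foreground (e.g., pancreas or cyst) and background seeds."""
    markers = np.zeros(image.shape, dtype=np.int32)
    markers[bg_seeds] = 1
    markers[fg_seeds] = 2
    labels = random_walker(image, markers, beta=beta, mode='bf')
    return labels == 2          # boolean mask of the structure of interest
```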

  13. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer markets for a chosen company and to present a suitable goods offer matched to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer markets, the consumer market, market segments and other terms. The second part describes the evaluation of a questionnaire survey, discovering of market's segment...

  14. Cluster Ensemble-based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Full Text Available Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional single type of feature or multiple types of features-based algorithms, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method is also proved to be very competitive in comparison with other state-of-the-art segmentation algorithms.

  15. Segmentation of sows in farrowing pens

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Karstoft, Henrik; Pedersen, Lene Juul

    2014-01-01

    The correct segmentation of a foreground object in video recordings is an important task for many surveillance systems. The development of an effective and practical algorithm to segment sows in grayscale video recordings captured under commercial production conditions is described. The segmentation algorithm combines a modified adaptive Gaussian mixture model for background subtraction with the boundaries of foreground objects, which are obtained by using a dyadic wavelet transform. This algorithm can accurately extract the shapes of a sow under complex environments, such as dynamic background and illumination changes as well as motionless foreground objects. About 97% of the segmented binary images in the validation data sets can be used to track sow behaviours, such as position, orientation and movement. The experimental results demonstrate that the proposed algorithm is able to provide a basis...

  16. Applications of magnetic resonance image segmentation in neurology

    Science.gov (United States)

    Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu

    1999-05-01

    After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, hence making segmentation essential in modern image analysis. In this research project several PC-based software tools were developed in order to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals were integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.

  17. Application of A Global-To-Beam Irradiance Model to the Satellite-Based NASA GEWEX SRB Data and Validation of the Results against the Ground-Based BSRN Data

    Science.gov (United States)

    Zhang, T.; Stackhouse, P. W., Jr.; Chandler, W.; Hoell, J. M.; Westberg, D. J.

    2012-12-01

    The NASA/GEWEX SRB (Surface Radiation Budget) project has produced a 24.5-year continuous global record of shortwave and longwave radiation flux dataset at TOA and the Earth's surface from satellite measurements. The time span of the data is from July 1983 to December 2007, and the spatial resolution is 1 degree latitude by 1 degree longitude. SRB products are available on 3-hourly, 3-hourly-monthly, daily and monthly time scales. The inputs to the models include: 1.) Cloud parameters derived from pixel-level DX product of the International Satellite Cloud Climatology Project (ISCCP); 2.) Temperature and moisture profiles of the atmosphere generated with the Goddard Earth Observing System model Version 4.0.3 (GEOS-4.0.3) from a 4-D data assimilation product of the Data Assimilation Office at NASA Goddard Space Flight Center; 3.) Atmospheric column ozone record constructed from the Total Ozone Mapping Spectrometer (TOMS) aboard Nimbus-7 (July 1983 - November 1994), from the Operational Vertical Sounder aboard the Television Infrared Observation Satellite (TIROS, TOVS) (December 1994 - October 1995), from Ozone Monitoring Instrument (OMI), and from Stratospheric Monitoring Ozone Blended Analysis (SMOBA) products; 4.) Surface albedos based on monthly climatological clear-sky albedos at the top of the atmosphere (TOA) which in turn were derived from the NASA Clouds and the Earth's Radiant Energy System (CERES) data during 2000-2005; 5.) Surface emissivities from a map developed at NASA Langley Research Center. The SRB global irradiances have been extensively validated against the ground-based BSRN (Baseline Surface Radiation Network), GEBA (Global Energy Balance Archive), and WRDC (World Radiation Data Centre) data, and generally good agreement is achieved. In this paper, we apply the DirIndex model, a modified version of the DirInt model, to the SRB 3-hourly global irradiances and derive the 3-hourly beam, or direct normal, irradiances. Daily and monthly mean direct

  18. Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.

    Directory of Open Access Journals (Sweden)

    Stephen M Plaza

    Full Text Available The ability to automatically segment an image into distinct regions is a critical aspect of many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as is required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by the similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of the segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.
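
    The probabilistic triage idea above can be illustrated with a simple stand-in: rank image tiles by the Shannon entropy of per-pixel class probabilities and send the most uncertain tiles to the proofreader first. This is a generic sketch, not the authors' measure; the tile size and the probability source are assumptions.

```python
# Hedged sketch: entropy-based uncertainty ranking for guided proofreading.
import numpy as np

def uncertainty_map(prob):
    """prob: (H, W, K) per-pixel class probabilities (summing to 1 over K).
    Returns per-pixel Shannon entropy; high values flag uncertain regions."""
    p = np.clip(prob, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def most_uncertain_tiles(prob, tile=64, top=10):
    """Rank non-overlapping tiles by mean entropy; returns (row, col) offsets."""
    ent = uncertainty_map(prob)
    h = (ent.shape[0] // tile) * tile
    w = (ent.shape[1] // tile) * tile
    tiles = ent[:h, :w].reshape(h // tile, tile, w // tile, tile).mean(axis=(1, 3))
    flat_order = np.argsort(tiles, axis=None)[::-1][:top]
    rows, cols = np.unravel_index(flat_order, tiles.shape)
    return [(int(r) * tile, int(c) * tile) for r, c in zip(rows, cols)]
```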

  19. Dermoscopic Image Segmentation using Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    L. P. Suresh

    2011-01-01

    Full Text Available Problem statement: Malignant melanoma is the most frequent type of skin cancer. Its incidence has been rapidly increasing over the last few decades. Medical image segmentation is the most essential and crucial process for facilitating the characterization and visualization of the structure of interest in medical images. Approach: This study explains the task of segmenting skin lesions in dermoscopy images based on intelligent systems such as fuzzy and neural network clustering techniques for the early diagnosis of malignant melanoma. The various intelligent-system-based clustering techniques used are the Fuzzy C Means Algorithm (FCM), Possibilistic C Means Algorithm (PCM), Hierarchical C Means Algorithm (HCM), C-means based Fuzzy Hopfield Neural Network, Adaline Neural Network and Regression Neural Network. Results: The segmented images are compared with the ground truth image using various parameters such as False Positive Error (FPE), False Negative Error (FNE), coefficient of similarity and spatial overlap, and their performance is evaluated. Conclusion: The experimental results show that the Hierarchical C Means (fuzzy) algorithm provides better segmentation than the other clustering algorithms (Fuzzy C Means, Possibilistic C Means, Adaline Neural Network, FHNN and GRNN). Thus the Hierarchical C Means approach can handle the uncertainties that exist in the data efficiently and is useful for lesion segmentation in a computer-aided diagnosis system to assist the clinical diagnosis of dermatologists.

  20. An attribute-based image segmentation method

    Directory of Open Access Journals (Sweden)

    M.C. de Andrade

    1999-07-01

    Full Text Available This work addresses a new image segmentation method founded on Digital Topology and Mathematical Morphology grounds. The ABA (attribute-based absorptions) transform can be viewed as a region-growing method by flooding simulation, working at the scale of the main structures of the image. In this method, the gray-level image is treated as a relief flooded from all its local minima, which are progressively detected and merged as the flooding takes place. Each local minimum is exclusively associated with one catchment basin (CB). The CB merging process is guided by their geometric parameters such as depth, area and/or volume. This solution enables the direct segmentation of the original image without the need for a preprocessing step or the explicit marker extraction step often required by other flooding simulation methods. Some examples of image segmentation employing the ABA transform are illustrated for uranium oxide samples. It is shown that the ABA transform presents very good segmentation results even in the presence of noisy images. Moreover, its use is often easier and faster when compared to similar image segmentation methods.
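
    A close standard analogue of the depth criterion described above can be built from off-the-shelf morphology: suppress minima shallower than a given depth, then flood. The sketch below is that analogue (h-minima plus watershed), not the authors' ABA transform, and the depth value is an assumption.

```python
# Hedged sketch: depth-filtered flooding with standard morphological tools.
import numpy as np
from scipy import ndimage
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def depth_filtered_flooding(gray, depth=10):
    """gray: 2-D grayscale image treated as a relief. Minima shallower than
    `depth` are suppressed so only significant catchment basins survive."""
    minima = h_minima(gray, depth)        # keep minima deeper than `depth`
    markers, _ = ndimage.label(minima)    # one marker per surviving minimum
    return watershed(gray, markers)       # label image of catchment basins
```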

  1. Segmentation of antiperspirants and deodorants

    OpenAIRE

    KRÁL, Tomáš

    2009-01-01

    The goal of this Master's thesis on the segmentation of antiperspirants and deodorants is to discover differences in consumer behaviour, determine and describe segments of consumers based on these differences, and propose a marketing strategy for the most attractive segments. The theoretical part describes market segmentation in general, the process of segmentation and segmentation criteria. The analytic part characterizes the Czech market for antiperspirants and deodorants, analyzes ACNielsen market data and d...

  2. a segmentation approach

    African Journals Online (AJOL)

    kirstam

    a visitor survey was conducted at the Cape Town International Jazz ... Keywords: dining motives, tipping, black diners, market segmentation, South .... and tipping behaviour as well as the findings from cross-cultural tipping and market.

  3. Segmental tuberculosis verrucosa cutis

    Directory of Open Access Journals (Sweden)

    Hanumanthappa H

    1994-01-01

    Full Text Available A case of segmental tuberculosis verrucosa cutis is reported in a 10-year-old boy. The condition resembled the ascending lymphangitic type of sporotrichosis. The lesions cleared on treatment with INH 150 mg daily for 6 months.

  4. Automatic Registration Method for Optical Remote Sensing Images with Large Background Variations Using Line Segments

    Directory of Open Access Journals (Sweden)

    Xiaolong Shi

    2016-05-01

    Full Text Available Image registration is an essential step in the process of image fusion, environment surveillance and change detection. Finding correct feature matches during the registration process proves to be difficult, especially for remote sensing images with large background variations (e.g., images taken pre and post an earthquake or flood). Traditional registration methods based on local intensity probably cannot maintain steady performance, as differences are significant in the same area of the corresponding images, and ground control points are not always available in many disaster images. In this paper, an automatic image registration method based on the line segments on the main shape contours (e.g., coastal lines, long roads and mountain ridges) is proposed for remote sensing images with large background variations, because the main shape contours can hold relatively more invariant information. First, a line segment detector called EDLines (Edge Drawing Lines), which was proposed by Akinlar et al. in 2011, is used to extract line segments from two corresponding images, and a line validation step is performed to remove meaningless and fragmented line segments. Then, a novel line segment descriptor with a new histogram binning strategy, which is robust to global geometrical distortions, is generated for each line segment based on the geometrical relationships, including both the locations and orientations of the remaining line segments relative to it. As a result of the invariance of the main shape contours, correct line segment matches will have similar descriptors and can be obtained by cross-matching among the descriptors. Finally, a spatial consistency measure is used to remove incorrect matches, and transformation parameters between the reference and sensed images can be figured out. Experiments with images from different types of satellite datasets, such as Landsat7, QuickBird, WorldView, and so on, demonstrate that the proposed algorithm is

  5. Image Segmentation by Discounted Cumulative Ranking on Maximal Cliques

    CERN Document Server

    Carreira, Joao; Sminchisescu, Cristian

    2010-01-01

    We propose a mid-level image segmentation framework that combines multiple figure-ground (FG) hypotheses, constrained at different locations and scales, into interpretations that tile the entire image. The problem is cast as optimization over sets of maximal cliques sampled from the graph connecting non-overlapping, putative figure-ground segment hypotheses. Potential functions over cliques combine unary Gestalt-based figure quality scores and pairwise compatibilities among spatially neighboring segments, constrained by T-junctions and the boundary interface statistics resulting from projections of real 3d scenes. Learning the model parameters is formulated as rank optimization, alternating between sampling image tilings and optimizing their potential function parameters. State-of-the-art results are reported on both the Berkeley and the VOC2009 segmentation datasets, where a 28% improvement was achieved.

  6. Market segmentation: venezuelan adrs

    OpenAIRE

    Urbi Garay; Maximiliano González

    2012-01-01

    The foreign exchange controls imposed by Venezuela in 2003, constitute a natural experiment that allows researchers to observe the effects of exchange controls on stock market segmentation. This paper provides empirical evidence that, although the Venezuelan capital market as a whole was highly segmented before the controls were imposed, shares in the firm CANTV were, through its American Depositary Receipts (ADRs), partially integrated with the global market. Following the imposition of the ...

  7. Adjacent segment disease.

    Science.gov (United States)

    Virk, Sohrab S; Niedermeier, Steven; Yu, Elizabeth; Khan, Safdar N

    2014-08-01

    EDUCATIONAL OBJECTIVES As a result of reading this article, physicians should be able to: 1. Understand the forces that predispose adjacent cervical segments to degeneration. 2. Understand the challenges of radiographic evaluation in the diagnosis of cervical and lumbar adjacent segment disease. 3. Describe the changes in biomechanical forces applied to adjacent segments of lumbar vertebrae with fusion. 4. Know the risk factors for adjacent segment disease in spinal fusion. Adjacent segment disease (ASD) is a broad term encompassing many complications of spinal fusion, including listhesis, instability, herniated nucleus pulposus, stenosis, hypertrophic facet arthritis, scoliosis, and vertebral compression fracture. The area of the cervical spine where most fusions occur (C3-C7) is adjacent to a highly mobile upper cervical region, and this contributes to the biomechanical stress put on the adjacent cervical segments postfusion. Studies have shown that after fusion surgery, there is increased load on adjacent segments. Definitive treatment of ASD is a topic of continuing research, but in general, treatment choices are dictated by patient age and degree of debilitation. Investigators have also studied the risk factors associated with spinal fusion that may predispose certain patients to ASD postfusion, and these data are invaluable for properly counseling patients considering spinal fusion surgery. Biomechanical studies have confirmed the added stress on adjacent segments in the cervical and lumbar spine. The diagnosis of cervical ASD is complicated given the imprecise correlation of radiographic and clinical findings. Although radiological and clinical diagnoses do not always correlate, radiographs and clinical examination dictate how a patient with prolonged pain is treated. Options for both cervical and lumbar spine ASD include fusion and/or decompression. Current studies are encouraging regarding the adoption of arthroplasty in spinal surgery, but more long

  8. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Full Text Available Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on the realization of a company's strategic objectives, requires a segmented approach to the market that appreciates differences in the expectations and preferences of customers. One significant activity in the strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis. At the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria based on which market segmentation is performed. The paper will consider the effectiveness and efficiency of different market segmentation criteria based on empirical research of customer expectations and preferences. The analysis will include traditional criteria and criteria based on a behavioral model. The research implications will be analyzed from the perspective of selecting the most adequate market segmentation criteria in the strategic planning of marketing activities.

  9. Skin Images Segmentation

    Directory of Open Access Journals (Sweden)

    Ali E. Zaart

    2010-01-01

    Full Text Available Problem statement: Image segmentation is a fundamental step in many applications of image processing. Skin cancer has been the most common of all new cancers detected each year. If detected at an early stage, skin cancer can mostly be cured by simple and economical treatment. An accurate segmentation of skin images can help the diagnosis by defining well the region of the cancer. The principal approach to segmentation is based on thresholding (classification), which is tied to the problem of threshold estimation. Approach: The objective of this study is to develop a method to segment skin images based on a mixture of Beta distributions. We assume that the data in skin images can be modeled by a mixture of Beta distributions. We used an unsupervised learning technique with the Beta distribution to estimate the statistical parameters of the data in the skin image and then estimate the thresholds for segmentation. Results: The proposed method of skin image segmentation was implemented and tested on different skin images. We obtained very good results in comparison with the same technique using the Gamma distribution. Conclusion: The experiments showed that the proposed method obtained very good results, but it requires more testing on different types of skin images.

  10. Real-time planar segmentation of depth images: from three-dimensional edges to segmented planes

    Science.gov (United States)

    Javan Hemmat, Hani; Bondarev, Egor; de With, Peter H. N.

    2015-09-01

    Real-time execution of processing algorithms for handling depth images in a three-dimensional (3-D) data framework is a major challenge. More specifically, considering depth images as point clouds and performing planar segmentation requires heavy computation, because available planar segmentation algorithms are mostly based on surface normals and/or curvatures, and, consequently, do not provide real-time performance. Aiming at the reconstruction of indoor environments, the spaces mainly consist of planar surfaces, so that a possible 3-D application would strongly benefit from a real-time algorithm. We introduce a real-time planar segmentation method for depth images avoiding any surface normal calculation. First, we detect 3-D edges in a depth image and generate line segments between the identified edges. Second, we fuse all the points on each pair of intersecting line segments into a plane candidate. Third and finally, we implement a validation phase to select planes from the candidates. Furthermore, various enhancements are applied to improve the segmentation quality. The GPU implementation of the proposed algorithm segments depth images into planes at the rate of 58 fps. Our pipeline-interleaving technique increases this rate up to 100 fps. With this throughput rate improvement, the application benefit of our algorithm may be further exploited in terms of quality and enhancing the localization.
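
    The plane-candidate step described above amounts to fitting a plane to the 3-D points gathered from a pair of intersecting line segments, with no per-point surface normals. A hedged least-squares sketch follows; the function names and the inlier tolerance are assumptions.

```python
# Hedged sketch: SVD plane fit for points fused from two intersecting
# line segments, plus a simple validation-phase inlier test.
import numpy as np

def fit_plane(points):
    """points: (n, 3) array. Returns (unit normal, centroid) of the
    least-squares plane via SVD of the centred coordinates."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid          # direction of smallest variance

def plane_inliers(points, normal, centroid, tol=0.01):
    """Points within `tol` (same units as the cloud) of the fitted plane."""
    return np.abs((points - centroid) @ normal) < tol
```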

  11. Social discourses of healthy eating. A market segmentation approach.

    Science.gov (United States)

    Chrysochou, Polymeros; Askegaard, Søren; Grunert, Klaus G; Kristensen, Dorthe Brogård

    2010-10-01

    This paper proposes a framework of discourses regarding consumers' healthy eating as a useful conceptual scheme for market segmentation purposes. The objectives are: (a) to identify the appropriate number of health-related segments based on the underlying discursive subject positions of the framework, (b) to validate and further describe the segments based on their socio-demographic characteristics and attitudes towards healthy eating, and (c) to explore differences across segments in types of associations with food and health, as well as perceptions of food healthfulness. A total of 316 Danish consumers participated in a survey that included measures of the underlying subject positions of the proposed framework, followed by a word association task that aimed to explore types of associations with food and health, and perceptions of food healthfulness. A latent class clustering approach revealed three consumer segments: the Common, the Idealists and the Pragmatists. Based on the addressed objectives, differences across the segments are described and implications of the findings are discussed.

  12. A Novel Multiresolution Fuzzy Segmentation Method on MR Image

    Institute of Scientific and Technical Information of China (English)

    ZHANG HongMei(张红梅); BIAN ZhengZhong(卞正中); YUAN ZeJian(袁泽剑); YE Min(叶敏); JI Feng(冀峰)

    2003-01-01

    Multiresolution-based magnetic resonance (MR) image segmentation has attracted attention for its ability to capture rich information across scales compared with conventional segmentation methods. In this paper, a new scale-space-based segmentation model is presented, where both the intra-scale and inter-scale properties are considered and formulated as two fuzzy energy functions. Meanwhile, a control parameter is introduced to adjust the contribution of the similarity character across scales and the clustering character within the scale. By minimizing the combined inter/intra energy function, the multiresolution fuzzy segmentation algorithm is derived. Then the coarse-to-fine segmentation is performed automatically and iteratively on a set of multiresolution images. The validity of the proposed algorithm is demonstrated by the test image and pathological MR images. Experiments show that by this approach the segmentation results, especially in the tumor area delineation, are more precise than those of the conventional fuzzy segmentation methods.

  13. Learning Recursive Segments for Discourse Parsing

    CERN Document Server

    Afantenos, Stergos; Muller, Philippe; Danlos, Laurence

    2010-01-01

    Automatically detecting discourse segments is an important preliminary step towards full discourse parsing. Previous research on discourse segmentation has relied on the assumption that elementary discourse units (EDUs) in a document always form a linear sequence (i.e., they can never be nested). Unfortunately, this assumption turns out to be too strong, for some theories of discourse, like SDRT, allow for nested discourse units. In this paper, we present a simple approach to discourse segmentation that is able to produce nested EDUs. Our approach builds on standard multi-class classification techniques combined with a simple repairing heuristic that enforces global coherence. Our system was developed and evaluated on the first round of annotations provided by the French Annodis project (an ongoing effort to create a discourse bank for French). Cross-validated on only 47 documents (1,445 EDUs), our system achieves encouraging performance results with an F-score of 73% for finding EDUs.

  14. 'Grounded' Politics

    DEFF Research Database (Denmark)

    Schmidt, Garbi

    2012-01-01

    play within one particular neighbourhood: Nørrebro in the Danish capital, Copenhagen. The article introduces the concept of grounded politics to analyse how groups of Muslim immigrants in Nørrebro use the space, relationships and history of the neighbourhood for identity political statements....... The article further describes how national political debates over the Muslim presence in Denmark affect identity political manifestations within Nørrebro. By using Duncan Bell’s concept of mythscape (Bell, 2003), the article shows how some political actors idealize Nørrebro’s past to contest the present...

  15. Rediscovering market segmentation.

    Science.gov (United States)

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  16. Spine segmentation from C-arm CT data sets: application to region-of-interest volumes for spinal interventions

    Science.gov (United States)

    Buerger, C.; Lorenz, C.; Babic, D.; Hoppenbrouwers, J.; Homan, R.; Nachabe, R.; Racadio, J. M.; Grass, M.

    2017-03-01

    Spinal fusion is a common procedure to stabilize the spinal column by fixating parts of the spine. In such procedures, metal screws are inserted through the patient's back into a vertebra, and the screws of adjacent vertebrae are connected by metal rods to generate a fixed bridge. In these procedures, 3D image guidance for intervention planning and outcome control is required. Here, for anatomical guidance, an automated approach for vertebra segmentation from C-arm CT images of the spine is introduced and evaluated. As a prerequisite, 3D C-arm CT images are acquired covering the vertebrae of interest. An automatic model-based segmentation approach is applied to delineate the outline of the vertebrae of interest. The segmentation approach is based on 24 partial models of the cervical, thoracic and lumbar vertebrae which aggregate information about (i) the basic shape itself, (ii) trained features for image-based adaptation, and (iii) potential shape variations. Since the volume data sets generated by the C-arm system are limited to a certain region of the spine, the target vertebra and hence the initial model position are assigned interactively. The approach was trained and tested on 21 human cadaver scans. A 3-fold cross validation against ground-truth annotations yields overall mean segmentation errors of 0.5 mm for T1 to 1.1 mm for C6. The results are promising and show potential to support the clinician in pedicle screw path and rod planning to allow accurate and reproducible insertions.

  17. Surface Figure Metrology for CELT Primary Mirror Segments

    Energy Technology Data Exchange (ETDEWEB)

    Sommargren, G; Phillion, D; Seppala, L; Lerner, S

    2001-02-27

    The University of California and the California Institute of Technology are currently studying the feasibility of building a 30-m segmented ground-based optical telescope called the California Extremely Large Telescope (CELT). The early ideas for this telescope were first described by Nelson and Mast and more recently refined by Nelson. In parallel, concepts for the fabrication of the primary segments were proposed by Mast, Nelson and Sommargren, where high-risk technologies were identified. One of these was the surface figure metrology needed for fabricating the aspheric mirror segments. This report addresses the advanced interferometry that will be needed to achieve 15 nm rms accuracy for mirror segments with aspheric departures as large as 35 mm peak-to-valley. For reasons of cost, size, measurement consistency and ease of operation, we believe it is desirable to have a single interferometer that can be universally applied to each and every mirror segment. Such an instrument is described in this report.

  18. Segmented conjugated polymers

    Indian Academy of Sciences (India)

    G Padmanaban; S Ramakrishnan

    2003-08-01

    Segmented conjugated polymers, wherein the conjugation is randomly truncated by varying lengths of non-conjugated segments, form an interesting class of polymers as they not only represent systems of varying stiffness, but also ones where the backbone can be construed as being made up of chromophores of varying excitation energies. The latter feature, especially when the chromophores are fluorescent, like in MEHPPV, makes these systems particularly interesting from the photophysics point of view. Segmented MEHPPV-x samples, where x represents the mole fraction of conjugated segments, were prepared by a novel approach that utilizes a suitable precursor wherein selective elimination of one of the two eliminatable groups is effected; the uneliminated units serve as conjugation truncations. Control of the composition x of the precursor therefore permits one to prepare segmented MEHPPV-x samples with varying levels of conjugation (elimination). Using fluorescence spectroscopy, we have seen that even in single isolated polymer chains, energy migration from the shorter (higher energy) chromophores to longer (lower energy) ones occurs – the extent of which depends on the level of conjugation. Further, by varying the solvent composition, it is seen that the extent of energy transfer and the formation of poorly emissive inter-chromophore excitons are greatly enhanced with increasing amounts of non-solvent. A typical S-shaped curve represents the variation of emission yields as a function of composition, suggestive of a cooperative collapse of the polymer coil, reminiscent of conformational transitions seen in biological macromolecules.

  19. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion stings has been a major public health problem in developing countries. Despite the high rate of death from scorpion stings, few reports exist in the literature on intelligent devices and systems for the automatic detection of scorpions. This paper proposes a digital image processing approach, based on the fluorescing characteristics of scorpions under ultraviolet (UV) light, for automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination and colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the acquired image. Two approaches to image segmentation have also been proposed in this work, namely, the simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results obtained show an average accuracy of 97.7% in correctly classifying the pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
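
    The simplest pipeline reported above (thresholding the green channel of the UV image) can be sketched in a few lines; Otsu's method is used here as a stand-in for the paper's threshold choice, which is an assumption.

```python
# Hedged sketch: green-channel thresholding of a UV image.
import numpy as np
from skimage.filters import threshold_otsu

def segment_scorpion(rgb_uv):
    """rgb_uv: (H, W, 3) UV-illuminated image. Scorpions fluoresce brightly
    under UV, so the bright class of the green channel is kept."""
    green = rgb_uv[..., 1].astype(float)
    return green > threshold_otsu(green)     # boolean scorpion mask
```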

  20. Segmented heterochromia in scalp hair.

    Science.gov (United States)

    Yoon, Kyeong Han; Kim, Daehwan; Sohn, Seonghyang; Lee, Won Soo

    2003-12-01

    Segmented heterochromia of scalp hair is characterized by the irregularly alternating segmentation of hair into dark and light bands and is known to be associated with iron deficiency anemia. The authors report the case of an 11-year-old boy with segmented heterochromia associated with iron deficiency anemia. After 11 months of iron replacement, the boy's segmented heterochromic hair recovered completely.

  1. Validation of ACE-FTS v2.2 measurements of HCl, HF, CCl3F and CCl2F2 using space-, balloon- and ground-based instrument observations

    Directory of Open Access Journals (Sweden)

    C. Servais

    2008-10-01

    Full Text Available Hydrogen chloride (HCl and hydrogen fluoride (HF are respectively the main chlorine and fluorine reservoirs in the Earth's stratosphere. Their buildup resulted from the intensive use of man-made halogenated source gases, in particular CFC-11 (CCl3F and CFC-12 (CCl2F2, during the second half of the 20th century. It is important to continue monitoring the evolution of these source gases and reservoirs, in support of the Montreal Protocol and also indirectly of the Kyoto Protocol. The Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS is a space-based instrument that has been performing regular solar occultation measurements of over 30 atmospheric gases since early 2004. In this validation paper, the HCl, HF, CFC-11 and CFC-12 version 2.2 profile data products retrieved from ACE-FTS measurements are evaluated. Volume mixing ratio profiles have been compared to observations made from space by MLS and HALOE, and from stratospheric balloons by SPIRALE, FIRS-2 and Mark-IV. Partial columns derived from the ACE-FTS data were also compared to column measurements from ground-based Fourier transform instruments operated at 12 sites. ACE-FTS data recorded from March 2004 to August 2007 have been used for the comparisons. These data are representative of a variety of atmospheric and chemical situations, with sounded air masses extending from the winter vortex to summer sub-tropical conditions. Typically, the ACE-FTS products are available in the 10–50 km altitude range for HCl and HF, and in the 7–20 and 7–25 km ranges for CFC-11 and -12, respectively. For both reservoirs, comparison results indicate an agreement generally better than 5–10% above 20 km altitude, when accounting for the known offset affecting HALOE measurements of HCl and HF. Larger positive differences are however found for comparisons with single profiles from FIRS-2 and SPIRALE. For CFCs, the few coincident measurements available suggest that the differences

  2. Validation of ACE-FTS v2.2 measurements of HCl, HF, CCl3F and CCl2F2 using space-, balloon- and ground-based instrument observations

    Directory of Open Access Journals (Sweden)

    C. Tétard

    2008-02-01

    Full Text Available Hydrogen chloride (HCl and hydrogen fluoride (HF are respectively the main chlorine and fluorine reservoirs in the Earth's stratosphere. Their buildup resulted from the intensive use of man-made halogenated source gases, in particular CFC-11 (CCl3F and CFC-12 (CCl2F2, during the second half of the 20th century. It is important to continue monitoring the evolution of these source gases and reservoirs, in support of the Montreal Protocol and also indirectly of the Kyoto Protocol. The Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS is a space-based instrument that has been performing regular solar occultation measurements of over 30 atmospheric gases since early 2004. In this validation paper, the HCl, HF, CFC-11 and CFC-12 version 2.2 profile data products retrieved from ACE-FTS measurements are evaluated. Volume mixing ratio profiles have been compared to observations made from space by MLS and HALOE, and from stratospheric balloons by SPIRALE, FIRS-2 and Mark-IV. Partial columns derived from the ACE-FTS data were also compared to column measurements from ground-based Fourier transform instruments operated at 12 sites. ACE-FTS data recorded from March 2004 to August 2007 have been used for the comparisons. These data are representative of a variety of atmospheric and chemical situations, with sounded air masses extending from the winter vortex to summer sub-tropical conditions. Typically, the ACE-FTS products are available in the 10–50 km altitude range for HCl and HF, and in the 7–20 and 7–25 km ranges for CFC-11 and CFC-12, respectively. For both reservoirs, comparison results indicate an agreement generally better than 5–10%, when accounting for the known offset affecting HALOE measurements of HCl and HF. Larger positive differences are however found for comparisons with single profiles from FIRS-2 and SPIRALE. For CFCs, the few coincident measurements available suggest that the differences probably remain

  3. Optimally segmented magnetic structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bahl, Christian; Bjørk, Rasmus;

    We present a semi-analytical algorithm for magnet design problems, which calculates the optimal way to subdivide a given design region into uniformly magnetized segments. The availability of powerful rare-earth magnetic materials such as Nd-Fe-B has broadened the range of applications of permanent magnets ... or are applicable only to analytically solvable geometries [4]. In addition, some questions remained fundamentally unanswered, such as how to segment a given design into N uniformly magnetized pieces. Our method calculates the globally optimal shape and magnetization direction of each segment inside a certain design area, with an optional constraint on the total amount of magnetic material. The method can be applied to any objective functional which is linear with respect to the field, and with any combination of linear materials. Being based on an analytical-optimization approach, the algorithm is not computationally ...

  4. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for segmentation of document images with complex structure. The technique is based on the grey level co-occurrence matrix (GLCM) and is used to segment this type of document into three regions, namely 'graphics', 'background' and 'text'. Very briefly, the method divides the document image into blocks of a size chosen after a series of tests, and then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the final segmentation is obtained by grouping connected pixels. Two performance measurements are reported for the graphics and text zones; we obtained a classification rate of 98.3% and a misclassification rate of 1.79%.
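
    A minimal sketch of the block-based pipeline described above, assuming grey-scale input; the block size, the grey-level quantization, and the exact Haralick-style feature formulas are illustrative choices rather than the paper's:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix
    from sklearn.cluster import KMeans

    def block_features(block, levels=32):
        # Quantize grey levels so the co-occurrence matrix stays small.
        q = (block.astype(float) / 256.0 * levels).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                            symmetric=True, normed=True)[:, :, 0, 0]
        p = glcm[glcm > 0]
        energy = np.sum(glcm ** 2)
        entropy = -np.sum(p * np.log2(p))
        # Sum and difference distributions for the two entropy variants.
        i, j = np.indices(glcm.shape)
        p_sum = np.bincount((i + j).ravel(), weights=glcm.ravel())
        p_dif = np.bincount(np.abs(i - j).ravel(), weights=glcm.ravel())
        sum_entropy = -np.sum(p_sum[p_sum > 0] * np.log2(p_sum[p_sum > 0]))
        dif_entropy = -np.sum(p_dif[p_dif > 0] * np.log2(p_dif[p_dif > 0]))
        return [energy, entropy, sum_entropy, dif_entropy, float(block.std())]

    def segment_document(image, block=32):
        # Classify non-overlapping blocks into 3 classes (text/background/graphics).
        feats, coords = [], []
        for y in range(0, image.shape[0] - block + 1, block):
            for x in range(0, image.shape[1] - block + 1, block):
                feats.append(block_features(image[y:y + block, x:x + block]))
                coords.append((y, x))
        labels = KMeans(n_clusters=3, n_init=10).fit_predict(np.array(feats))
        return labels, coords
    ```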

  5. Microscopic Halftone Image Segmentation

    Institute of Scientific and Technical Information of China (English)

    WANG Yong-gang; YANG Jie; DING Yong-sheng

    2004-01-01

    Microscopic halftone image recognition and analysis can provide quantitative evidence for printing quality control and fault diagnosis of printing devices, and halftone image segmentation is one of the significant steps in the procedure. Automatic segmentation of microscopic dots is realized with the Fuzzy C-Means (FCM) method, which takes account of the fuzziness of the halftone image and makes adequate use of its color information. Examples show that the technique is effective and simple, with better noise immunity than some common methods. In addition, the segmentation results obtained by the FCM in different color spaces are compared, indicating that the FCM in the f1f2f3 color space is superior to the rest.
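
    The FCM clustering at the core of this approach can be written compactly. The sketch below is a generic Fuzzy C-Means on per-pixel feature vectors, not the paper's exact f1f2f3-space implementation; the fuzzifier m and the stopping rule are standard defaults:

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
        # X: (n_samples, n_features) pixel feature vectors, e.g. color triplets.
        rng = np.random.default_rng(seed)
        U = rng.random((c, len(X)))
        U /= U.sum(axis=0)                       # memberships sum to 1 per pixel
        for _ in range(max_iter):
            Um = U ** m
            centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
            d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
            # Standard FCM membership update: u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)).
            U_new = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
            if np.abs(U_new - U).max() < tol:
                return centers, U_new
            U = U_new
        return centers, U
    ```

    For a color image one would run this on img.reshape(-1, 3) and take U.argmax(axis=0), reshaped back to the image grid, as the hard dot/background labels.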

  6. GPS Control Segment Improvements

    Science.gov (United States)

    2015-04-29

    GPS Control Segment Improvements. Presentation by Mr. Tim McIntyre, GPS Product Support Manager, GPS Ops Support and Sustainment Division, Space and Missile Systems Center, Air Force Space Command, Peterson AFB, CO, 29 April 2015.

  7. Statistical Images Segmentation

    Directory of Open Access Journals (Sweden)

    Corina Curilă

    2008-05-01

    Full Text Available This paper deals with fuzzy statistical image segmentation. We introduce a new hierarchical Markovian fuzzy hidden field model, which extends the classical Pérez and Heitz hard model to the fuzzy case. Two fuzzy statistical segmentation methods related to the proposed model are defined in this paper, and we show via simulations that they are competitive with, and in some cases better than, the classical Maximum Posterior Mode (MPM) based methods. Furthermore, they are faster, which should facilitate extensions to more than two hard classes in future work. In addition, the proposed model is applicable to multiscale segmentation and multiresolution image fusion problems.

  8. Validation of the use of organic acids and acidified sodium chlorite to reduce Escherichia coli O157 and Salmonella typhimurium in beef trim and ground beef in a simulated processing environment.

    Science.gov (United States)

    Harris, K; Miller, M F; Loneragan, G H; Brashears, M M

    2006-08-01

    A study was conducted to determine if acidified sodium chlorite (1,200 ppm) and acetic and lactic acids (2 and 4%) were effective in reducing foodborne pathogens in beef trim prior to grinding in a simulated processing environment. The reduction of Salmonella Typhimurium and Escherichia coli O157:H7 at high (4.0 log CFU/g) and low (1.0 log CFU/g) inoculation doses was evaluated at various processing steps, including the following: (i) in trim just after treatment application, (ii) in ground beef just after grinding, (iii) in ground beef 24 h after refrigerated storage, (iv) in ground beef 5 days after refrigerated storage, and (v) in ground beef 30 days after frozen storage. All antimicrobial treatments reduced the pathogens on the trim inoculated with the lower inoculation dose to nondetectable numbers in the trim and in the ground beef. There were significant reductions of both pathogens in the trim and in the ground beef inoculated with the high inoculation doses. On the trim itself, E. coli O157:H7 and Salmonella Typhimurium were reduced by 1.5 to 2.0 log cycles, with no differences among all treatments. In the ground beef, the organic acids were more effective in reducing both pathogens than the acidified sodium chlorite immediately after grinding, but after 1 day of storage, there were no differences among treatments. Overall, in the ground beef, there was a 2.5-log reduction of E. coli O157:H7 and a 1.5-log reduction of Salmonella Typhimurium that was sustained over time in refrigerated and frozen storage. Very few sensory differences between the control samples and the treated samples were detected by a consumer panel. Thus, antimicrobial treatments did not cause serious adverse sensory changes. Use of these antimicrobial treatments can be a promising intervention available to ground beef processors who currently have few interventions in their process.

  9. Ground-Based Lidar Measurements During the CALIPSO and Twilight Zone (CATZ) Campaign

    Science.gov (United States)

    Berkoff, Timothy; Qian, Li; Kleidman, Richard; Stewart, Sebastian; Welton, Ellsworth; Li, Zhu; Holben, Brent

    2008-01-01

    The CALIPSO and Twilight Zone (CATZ) field campaign was carried out between June 26th and August 29th of 2007 in the multi-state Maryland-Virginia-Pennsylvania region of the U.S. to study aerosol properties and cloud-aerosol interactions during overpasses of the CALIPSO satellite. Field work was conducted on selected days when CALIPSO ground tracks occurred in the region. Ground-based measurements included data from multiple Cimel sunphotometers that were placed at intervals along a segment of the CALIPSO ground-track. These measurements provided sky radiance and AOD measurements to enable joint inversions and comparisons with CALIPSO retrievals. As part of this activity, four ground-based lidars provided backscatter measurements (at 523 nm) in the region. Lidars at University of Maryland Baltimore County (Catonsville, MD) and Goddard Space Flight Center (Greenbelt, MD) provided continuous data during the campaign, while two micro-pulse lidar (MPL) systems were temporarily stationed at various field locations directly on CALIPSO ground-tracks. As a result, thirteen on-track ground-based lidar observations were obtained from eight different locations in the region. In some cases, nighttime CALIPSO coincident measurements were also obtained. In most studies reported to date, ground-based lidar validation efforts for CALIPSO rely on systems that are at fixed locations some distance away from the satellite ground-track. The CATZ ground-based lidar data provide an opportunity to examine vertical structure properties of aerosols and clouds both on and off-track simultaneously during a CALIPSO overpass. A table of available ground-based lidar measurements during this campaign will be presented, along with example backscatter imagery for a number of coincident cases with CALIPSO. Results indicate that even for ground-based measurements directly on-track, comparisons can still pose a challenge due to the differing spatio-temporal properties of the ground and satellite

  10. Numerical simulation and validation on heat exchange performance of pile spiral coil ground heat exchanger

    Institute of Scientific and Technical Information of China (English)

    杨卫波; 杨晶晶; 孔磊

    2016-01-01

    The new trends in energy saving and greenhouse gas reduction are expected to expand the utilization of shallow geothermal energy. The most popular way to exploit shallow geothermal resources is the ground coupled heat pump (GCHP) system, which uses the ground as a heat source. Because the underground temperature is rather constant compared with the ambient air temperature, the GCHP can achieve higher efficiency as well as more stable performance than traditional air source heat pumps. The GCHP system has therefore become increasingly popular in commercial and institutional buildings. In general, a vertical borehole ground heat exchanger (GHE) is the mainstream configuration of the GCHP system. However, the wide application of this type of GCHP technology has been limited by its high initial cost and the substantial land area required to install the GHE. For this reason, the foundation piles of buildings have been used in recent years as part of the GHE to reduce the cost of drilling boreholes and save the required land area. This innovative idea of utilizing what are usually called "energy piles" has led to notable progress in the field of GCHP systems. It has become particularly attractive because it lowers total cost and spatial requirements and offers a higher renewable contribution. In this paper, a novel configuration of an energy pile with a spiral coil was proposed. In order to investigate the effects of various factors on the heat exchange performance of the pile spiral coil GHE, a numerical model of the pile with a spiral coil was developed. Based on the numerical solution of the model, the effects of pile diameter, pile depth, spiral coil group number and soil type on the heat exchange rate and soil temperature distribution of the spiral pile GHE were analyzed. The results indicated that increasing the foundation pile diameter can improve the thermal storage capacity and thus enhance the heat exchange rate of the pile. But an increase in foundation pile diameter can also

  11. Engineering Tools and Validation Test Beds for New Telecommunication Satellite Multimedia Systems

    Science.gov (United States)

    Foix, V.; Taisant, J.-Ph.; Piau, P.; Thomasson, L.

    2002-01-01

    Satellite telecommunication and broadcasting systems have to adapt to the major evolutions introduced by the emergence of new multimedia services distributed by terrestrial networks. This major adaptation of satellite telecommunication systems implies the use of new technologies and standards, on board satellites and within the telecommunication ground segment. The deeper interaction between space and ground infrastructures induced by these evolutions also leads to additional system complexity. The definition, design and end-to-end validation of these satellite networks require dedicated engineering tools and validation test beds running the major elements of the telecommunication mission, e.g. the on-board and ground equipment implementing the various protocols and algorithms used in the system. Through two programmes, called respectively "Atelier Télécom du Futur" and "Multimedia System Validation Test Beds", CNES has been developing since early 2000 an advanced simulation tool and complementary test beds to support engineering activities and cover most of the end-to-end validation needs of these new satellite telecommunication multimedia systems. This communication presents the technical objectives, the rationale that led to proposing several complementary means, their main characteristics and their development status. Finally, the first results provided by these tools and test beds are presented.

  12. Benchmark for license plate character segmentation

    Science.gov (United States)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of much research in recent years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task in ALPR is the license plate character segmentation (LPCS) step, because its effectiveness must be (near) optimal to achieve a high recognition rate with the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of ALPR, within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2,000 Brazilian license plates consisting of 14,000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation of the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving accurate OCR.
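
    For reference, the plain Jaccard coefficient that the proposed Jaccard-centroid measure refines is the familiar intersection-over-union between a detected and a ground-truth bounding box; the centroid-aware variant itself is defined in the paper and is not reproduced here:

    ```python
    def jaccard(box_a, box_b):
        # Jaccard (IoU) between axis-aligned boxes given as (x1, y1, x2, y2).
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / float(area_a + area_b - inter)
    ```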

  13. A Study of the Method for Software-Development Management of the Herschel Science Ground Segment

    Institute of Scientific and Technical Information of China (English)

    张洁; 黄茂海

    2015-01-01

    Current software project management for space observatories in China adopts methods based on the waterfall model and on requirement management for long-term fixed requirements. These methods cannot meet the demands of developing the increasingly complex ground-based application systems of space-based observatories. In this paper we present a study of the software-development management method of the Herschel Science Ground Segment (HSGS), a first-class and successful model of software-development management worldwide. The HSGS uses a branched-development method. Based on an iterative model, the branched-development method adopts a Software Project Management Plan (SPMP) that is practically reasonable and applicable. The implementation of the method in the HSGS comprehensively meets the requirements of the HSGS and of the payloads of the entire project. The method is an open management approach capable of incorporating application requirements as use cases emerge in practice. With this method the HSGS changes the conventional situation in which the ground-based application system is developed only at the final stage of a space-observatory project. Instead, the HSGS works right from the payload-development stage and is frequently adjusted to meet changing requirements. The HSGS can thus always support data-analysis systems highly efficiently. The instrument engineers and scientists can be trained in the operation of the scientific instruments from the start of the project, reducing the chance of operational mistakes. Meanwhile, the software in the HSGS can be improved in the course of operations to ensure mission success. These merits of the HSGS are absent from the management of Chinese space projects. Our study of the HSGS thus shows a new method and a new line of thought for the software-engineering management of space-observatory projects in China.

  14. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    Comparative molecular, developmental and morphogenetic analyses show that the three major segmented animal groups - Lophotrochozoa, Ecdysozoa and Vertebrata - use a wide range of ontogenetic pathways to establish metameric body organization. Even in the life history of a single specimen, different m...

  15. [Segmental testicular infarction].

    Science.gov (United States)

    Ripa Saldías, L; Guarch Troyas, R; Hualde Alfaro, A; de Pablo Cárdenas, A; Ruiz Ramo, M; Pinós Paul, M

    2006-02-01

    We report the case of a 47-year-old man previously diagnosed with a left hydrocele. After recent mild left testicular pain, an ultrasonographic study revealed a solid hypoechoic testicular lesion surrounded by a large hydrocele, suggesting a testicular neoplasm. Radical inguinal orchiectomy was performed, and the pathologic study showed segmental testicular infarction. No malignancy was found. We review the literature on the topic.

  16. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured and non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separate regions with similar texture we use an implicit level sets...

  17. Pupil segmentation using active contour with shape prior

    Science.gov (United States)

    Ukpai, Charles O.; Dlay, Satnam S.; Woo, Wai L.

    2015-03-01

    Iris segmentation is the process of defining the valid part of the eye image used for further processing (feature extraction, matching and decision making). Segmentation of the iris usually starts with segmentation of the pupil boundary. Most pupil segmentation techniques are based on the assumption that the pupil is circular in shape. In this paper, we propose a new pupil segmentation technique which combines shape, location and spatial information for accurate and efficient segmentation of the pupil. Initially, the pupil's position and radius are estimated using a statistical approach and a circular Hough transform. In order to segment the irregular boundary of the pupil, an active contour model is initialized close to the estimated boundary using information from the first step, and segmentation is achieved using an energy-minimization-based active contour. Pre-processing and post-processing were carried out to remove noise and occlusions, respectively. Experimental results on CASIA V1.0 and V4.0 show that the proposed method is highly effective at segmenting irregular boundaries of the pupil.
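
    A rough sketch of the initialization step, using OpenCV's circular Hough transform to place the active contour near the pupil; all parameter values are illustrative guesses, not the paper's settings:

    ```python
    import cv2
    import numpy as np

    def estimate_pupil(gray):
        # Coarse pupil estimate on a grey-scale eye image.
        blurred = cv2.medianBlur(gray, 5)
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=2,
                                   minDist=gray.shape[0] // 2,
                                   param1=100, param2=30,
                                   minRadius=15, maxRadius=120)
        if circles is None:
            return None
        x, y, r = np.round(circles[0, 0]).astype(int)
        return x, y, r  # used to initialize the active contour near the boundary
    ```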

  18. Automated segmentation of the injured spleen.

    Science.gov (United States)

    Dandin, Ozgür; Teomete, Uygar; Osman, Onur; Tulum, Gökalp; Ergin, Tuncer; Sabuncuoglu, Mehmet Zafer

    2016-03-01

    To develop a novel automated method for segmentation of the injured spleen using morphological properties following abdominal trauma. Average attenuation of a normal spleen in computed tomography (CT) does not vary significantly between subjects. However, in the case of solid organ injury, the shape and attenuation of the spleen on CT may vary depending on the time and severity of the injury. Timely assessment of the severity and extent of the injury is of vital importance in the setting of trauma. We developed an automated computer-aided method for segmenting the injured spleen from CT scans of patients who had splenectomy due to abdominal trauma. We used ten subjects to train our computer-aided diagnosis (CAD) method. To validate the CAD method, we used twenty subjects in our testing group. Probabilistic atlases of the spleens were created using manually segmented data from ten CT scans. The organ location was modeled based on the position of the spleen with respect to the left side of the spine followed by the extraction of shape features. We performed the spleen segmentation in three steps. First, we created a mask of the spleen, and then we used this mask to segment the spleen. The third and final step was the estimation of the spleen edges in the presence of an injury such as laceration or hematoma. The traumatized spleens were segmented with a high degree of agreement with the radiologist-drawn contours. The spleen quantification led to [Formula: see text] volume overlap, [Formula: see text] Dice similarity index, [Formula: see text] precision/sensitivity, [Formula: see text] volume estimation error rate, [Formula: see text] average surface distance/root-mean-squared error. Our CAD method robustly segments the spleen in the presence of morphological changes such as laceration, contusion, pseudoaneurysm, active bleeding, periorgan and parenchymal hematoma, including subcapsular hematoma due to abdominal trauma. CAD of the splenic injury due to abdominal

  19. Segmentation in Tardigrada and diversification of segmental patterns in Panarthropoda.

    Science.gov (United States)

    Smith, Frank W; Goldstein, Bob

    2016-10-31

    The origin and diversification of segmented metazoan body plans has fascinated biologists for over a century. The superphylum Panarthropoda includes three phyla of segmented animals-Euarthropoda, Onychophora, and Tardigrada. This superphylum includes representatives with relatively simple and representatives with relatively complex segmented body plans. At one extreme of this continuum, euarthropods exhibit an incredible diversity of serially homologous segments. Furthermore, distinct tagmosis patterns are exhibited by different classes of euarthropods. At the other extreme, all tardigrades share a simple segmented body plan that consists of a head and four leg-bearing segments. The modular body plans of panarthropods make them a tractable model for understanding diversification of animal body plans more generally. Here we review results of recent morphological and developmental studies of tardigrade segmentation. These results complement investigations of segmentation processes in other panarthropods and paleontological studies to illuminate the earliest steps in the evolution of panarthropod body plans.

  20. Segmentation of human upper body movement using multiple IMU sensors.

    Science.gov (United States)

    Aoki, Takashi; Lin, Jonathan Feng-Shun; Kulic, Dana; Venture, Gentiane

    2016-08-01

    This paper proposes an approach for the segmentation of human body movements measured by inertial measurement unit sensors. Using the angular velocity and linear acceleration measurements directly, without converting to joint angles, we perform segmentation by formulating the problem as a classification problem, and training a classifier to differentiate between motion end-point and within-motion points. The proposed approach is validated with experiments measuring the upper body movement during reaching tasks, demonstrating classification accuracy of over 85.8%.
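
    A possible reading of the classification setup, assuming synchronized gyroscope and accelerometer streams of shape (T, 3): raw measurements in a sliding window around each sample form the feature vector, and an off-the-shelf classifier (a random forest here, chosen purely for illustration) separates end-points from within-motion samples:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(gyro, accel, half_width=10):
        # Stack raw angular velocity and linear acceleration over a sliding
        # window around each sample; no conversion to joint angles is needed.
        # half_width (in samples) is a guess, not the paper's value.
        T = len(gyro)
        feats = []
        for t in range(half_width, T - half_width):
            w = slice(t - half_width, t + half_width + 1)
            feats.append(np.concatenate([gyro[w].ravel(), accel[w].ravel()]))
        return np.array(feats)

    # Training: y holds 1 for motion end-points and 0 for within-motion samples,
    # aligned with samples half_width .. T - half_width - 1.
    # clf = RandomForestClassifier(n_estimators=100).fit(window_features(g, a), y)
    ```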

  1. Combining multi-atlas segmentation with brain surface estimation

    Science.gov (United States)

    Huo, Yuankai; Carass, Aaron; Resnick, Susan M.; Pham, Dzung L.; Prince, Jerry L.; Landman, Bennett A.

    2016-03-01

    Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitations in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from the multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects, are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.

  2. SPEED: the Segmented Pupil Experiment for Exoplanet Detection

    CERN Document Server

    Martinez, Patrice; Gouvret, Carole; Dejongue, Julien; Daban, Jean-Baptiste; Spang, Alain; Martinache, Frantz; Beaulieu, Mathilde; Janin-Potiron, Pierre; Abe, Lyu; Fantei-Caujolle, Yan; Mattei, Damien; Ottogali, Sebastien

    2014-01-01

    Searching for nearby exoplanets with direct imaging is one of the major scientific drivers for both space and ground-based programs. While the second generation of dedicated high-contrast instruments on 8-m class telescopes is about to greatly expand the sample of directly imaged planets, exploring the planetary parameter space to hitherto-unseen regions ideally down to Terrestrial planets is a major technological challenge for the forthcoming decades. This requires increasing spatial resolution and significantly improving high contrast imaging capabilities at close angular separations. Segmented telescopes offer a practical path toward dramatically enlarging telescope diameter from the ground (ELTs), or achieving optimal diameter in space. However, translating current technological advances in the domain of high-contrast imaging for monolithic apertures to the case of segmented apertures is far from trivial. SPEED (the segmented pupil experiment for exoplanet detection) is a new instrumental facility in deve...

  3. Statistical shape model with random walks for inner ear segmentation

    DEFF Research Database (Denmark)

    Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma

    2016-01-01

    Cochlear implants can restore hearing to completely or partially deaf patients. The intervention planning can be aided by providing a patient-specific model of the inner ear. Such a model has to be built from high resolution images with accurate segmentations. Thus, a precise segmentation is required. We propose a new framework for segmentation of micro-CT cochlear images using random walks combined with a statistical shape model (SSM). The SSM allows us to constrain the less contrasted areas and ensures valid inner ear shape outputs. Additionally, a topology preservation method is proposed...

  4. A Model for Web Page Usage Mining Based on Segmentation

    CERN Document Server

    Kuppusamy, K S

    2012-01-01

    The web page usage mining plays a vital role in enriching a page's content and structure based on the feedback received from users' interactions with the page. This paper proposes a model for micro-managing tracking activities by fine-tuning the mining from the page level to the segment level. The proposed model enables the web-master to identify the segments which receive more focus from users compared with others. The segment-level analytics of user actions provides an important metric for analysing the factors which facilitate the increase in traffic for the page. The empirical validation of the model is performed through a prototype implementation.

  5. Development of Image Segmentation Methods for Intracranial Aneurysms

    Directory of Open Access Journals (Sweden)

    Yuka Sen

    2013-01-01

    Full Text Available Though providing vital means for the visualization, diagnosis, and quantification of decision-making processes for the treatment of vascular pathologies, vascular segmentation remains a process that continues to be marred by numerous challenges. In this study, we validate eight aneurysms via the use of two existing segmentation methods: the Region Growing Threshold and the Chan-Vese model. These methods were evaluated by comparison of their results with a manual segmentation. Based upon this validation study, we propose a new Threshold-Based Level Set (TLS) method in order to overcome the existing problems. With the divergent methods of segmentation, we discovered that the volumes of the aneurysm models reached a maximum difference of 24%. The local arterial anatomy of the aneurysms was likewise found to significantly influence the results of these simulations. In contrast, the volume differences calculated via use of the TLS method remained relatively low, at only around 5%, thereby revealing the existence of inherent limitations in the application of cerebrovascular segmentation. The proposed TLS method holds the potential for utilisation in automatic aneurysm segmentation without the setting of a seed point or intensity threshold. This technique will further enable the segmentation of anatomically complex cerebrovascular shapes, thereby allowing for more accurate and efficient simulations of medical imagery.

  6. Market segmentation: Venezuelan ADRs

    Directory of Open Access Journals (Sweden)

    Urbi Garay

    2012-12-01

    Full Text Available The control on foreign exchange imposed by Venezuela in 2003 constitutes a natural experiment that allows researchers to observe the effects of exchange controls on stock market segmentation. This paper provides empirical evidence that although the Venezuelan capital market as a whole was highly segmented before the controls were imposed, the shares in the firm CANTV were, through their American Depositary Receipts (ADRs), partially integrated with the global market. Following the imposition of the exchange controls this integration was lost. The research also documents the spectacular and apparently contradictory rise experienced by the Caracas Stock Exchange during the serious economic crisis of 2003. It is argued that, as happened in Argentina in 2002, the rise in share prices occurred because the depreciation of the Bolívar in the parallel currency market increased the local price of the stocks that had associated ADRs, which were negotiated in dollars.

  7. Segmentation Using Symmetry Deviation

    DEFF Research Database (Denmark)

    Hollensen, Christian; Højgaard, L.; Specht, L.;

    2011-01-01

    ... segmentations on manual contours were evaluated using the concordance index and sensitivity for the hypopharyngeal patients. The resulting concordance index and sensitivity were compared with the result of using a threshold of 3 SUV, using a paired t-test. Results: The anatomical and symmetrical atlas was constructed ... and a concordance index and sensitivity of 0.43±0.15 and 0.56±0.18, respectively, were acquired. This was compared to segmentation using an absolute threshold of 3 SUV, which gave a concordance index and sensitivity of 0.41±0.16 and 0.51±0.19, respectively, yielding p-values of 0.33 and 0.01 in a paired t-test.

  8. Automated segmentation of atherosclerotic histology based on pattern classification

    Directory of Open Access Journals (Sweden)

    Arna van Engelen

    2013-01-01

    Full Text Available Background: Histology sections provide accurate information on atherosclerotic plaque composition, and are used in various applications. To our knowledge, no automated systems for plaque component segmentation in histology sections currently exist. Materials and Methods: We perform pixel-wise classification of fibrous, lipid, and necrotic tissue in Elastica von Gieson-stained histology sections, using features based on color channel intensity and local image texture and structure. We compare an approach where we train on independent data to an approach where we train on one or two sections per specimen in order to segment the remaining sections. We evaluate the results on segmentation accuracy in histology, and we use the obtained histology segmentations to train plaque component classification methods in ex vivo magnetic resonance imaging (MRI) and in vivo MRI and computed tomography (CT). Results: In leave-one-specimen-out experiments on 176 histology slices of 13 plaques, a pixel-wise accuracy of 75.7 ± 6.8% was obtained. This increased to 77.6 ± 6.5% when two manually annotated slices of the specimen to be segmented were used for training. Rank correlations of relative component volumes with manually annotated volumes were high in this situation (ρ = 0.82–0.98). Using the obtained histology segmentations to train plaque component classification methods in ex vivo MRI and in vivo MRI and CT resulted in similar image segmentations for training on the automated histology segmentations as for training on a fully manual ground truth. The size of the lipid-rich necrotic core was significantly smaller when training on fully automated histology segmentations than when manually annotated histology sections were used. This difference was reduced and not statistically significant when one or two slices per specimen were manually annotated for histology segmentation. Conclusions: Good histology segmentations can be obtained by automated segmentation

  9. Connecting textual segments

    DEFF Research Database (Denmark)

    Brügger, Niels

    2017-01-01

    In “Connecting textual segments: A brief history of the web hyperlink” Niels Brügger investigates the history of one of the most fundamental features of the web: the hyperlink. Based on the argument that the web hyperlink is best understood if it is seen as another step in a much longer and broader ... stand-alone computers and in local and global digital networks.

  10. Automated segmentation tool for brain infusions.

    Directory of Open Access Journals (Sweden)

    Kathryn Hammond Rosenbluth

    Full Text Available This study presents a computational tool for auto-segmenting the distribution of brain infusions observed by magnetic resonance imaging. Clinical usage of direct infusion is increasing as physicians recognize the need to attain high drug concentrations in the target structure with minimal off-target exposure. By co-infusing a Gadolinium-based contrast agent and visualizing the distribution in real time using magnetic resonance imaging, physicians can make informed decisions about when to stop or adjust the infusion. However, manual segmentation of the images is tedious and affected by subjective preferences for window levels, image interpolation and personal biases about where to delineate the edge of the sloped shoulder of the infusion. This study presents a computational technique that uses a Gaussian Mixture Model to efficiently classify pixels as belonging to either the high-intensity infusate or low-intensity background. The algorithm was implemented as a distributable plug-in for the widely used imaging platform OsiriX®. Four independent operators segmented fourteen anonymized datasets to validate the tool's performance. The datasets were intra-operative magnetic resonance images of infusions into the thalamus or putamen of non-human primates. The tool effectively reproduced the manual segmentation volumes, while significantly reducing intra-operator variability by 67±18%. The tool will be used to increase efficiency and reduce variability in upcoming clinical trials in neuro-oncology and gene therapy.
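
    The core classification step can be approximated with an off-the-shelf two-component Gaussian mixture, as a sketch only; the plug-in's pre- and post-processing and its exact model details are not reproduced:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def segment_infusion(volume):
        # One mixture component for the bright Gd-enhanced infusate,
        # one for the darker background; classify every voxel.
        x = volume.reshape(-1, 1).astype(float)
        gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
        bright = int(np.argmax(gmm.means_.ravel()))   # infusate = brighter class
        labels = gmm.predict(x) == bright
        return labels.reshape(volume.shape)           # binary infusate mask
    ```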

  11. A toolbox for multiple sclerosis lesion segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Roura, Eloy; Oliver, Arnau; Valverde, Sergi; Llado, Xavier [University of Girona, Computer Vision and Robotics Group, Girona (Spain); Cabezas, Mariano; Pareto, Deborah; Rovira, Alex [Vall d' Hebron University Hospital, Magnetic Resonance Unit, Dept. of Radiology, Barcelona (Spain); Vilanova, Joan C. [Girona Magnetic Resonance Center, Girona (Spain); Ramio-Torrenta, Lluis [Dr. Josep Trueta University Hospital, Institut d' Investigacio Biomedica de Girona, Multiple Sclerosis and Neuroimmunology Unit, Girona (Spain)

    2015-10-15

    Lesion segmentation plays an important role in the diagnosis and follow-up of multiple sclerosis (MS). This task is very time-consuming and subject to intra- and inter-rater variability. In this paper, we present a new tool for automated MS lesion segmentation using T1w and fluid-attenuated inversion recovery (FLAIR) images. Our approach is based on two main steps: initial brain tissue segmentation into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) performed on T1w images, followed by a second step where the lesions are segmented as outliers to the normal apparent GM brain tissue on the FLAIR image. The tool has been validated using data from more than 100 MS patients acquired with different scanners and at different magnetic field strengths. Quantitative evaluation provided a better performance in terms of precision while maintaining similar results on sensitivity and Dice similarity measures compared with those of other approaches. Our tool is implemented as a publicly available SPM8/12 extension that can be used by both the medical and research communities. (orig.)
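
    The second step can be illustrated as simple outlier thresholding relative to normal-appearing grey matter; the mean-plus-k-standard-deviations rule and the factor k are common choices assumed here, not necessarily the toolbox's exact criterion:

    ```python
    import numpy as np

    def lesion_candidates(flair, gm_mask, k=3.0):
        # Lesions appear as hyperintense outliers on FLAIR relative to the
        # grey-matter intensity distribution found in the first (T1w) step.
        gm = flair[gm_mask]                     # gm_mask: boolean GM segmentation
        threshold = gm.mean() + k * gm.std()    # assumed outlier rule
        return flair > threshold                # binary candidate lesion mask
    ```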

  12. Market segmentation in behavioral perspective.

    OpenAIRE

    Wells, V.K.; Chang, S. W.; Oliveira-Castro, J.M.; Pallister, J.

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847 consumers and from a total of 76,682 individual purchases, brand choice and price and reinforcement responsiveness were assessed for each segment a...

  13. Market Segmentation for Information Services.

    Science.gov (United States)

    Halperin, Michael

    1981-01-01

    Discusses the advantages and limitations of market segmentation as strategy for the marketing of information services made available by nonprofit organizations, particularly libraries. Market segmentation is defined, a market grid for libraries is described, and the segmentation of information services is outlined. A 16-item reference list is…

  14. Segmenting the Adult Education Market.

    Science.gov (United States)

    Aurand, Tim

    1994-01-01

    Describes market segmentation and how the principles of segmentation can be applied to the adult education market. Indicates that applying segmentation techniques to adult education programs results in programs that are educationally and financially satisfying and serve an appropriate population. (JOW)

  17. Segmentation of the heart and great vessels in CT images using a model-based adaptation framework.

    Science.gov (United States)

    Ecabert, Olivier; Peters, Jochen; Walker, Matthew J; Ivanc, Thomas; Lorenz, Cristian; von Berg, Jens; Lessick, Jonathan; Vembar, Mani; Weese, Jürgen

    2011-12-01

    Recently, model-based methods for the automatic segmentation of the heart chambers have been proposed. An important application of these methods is the characterization of the heart function. Heart models are, however, increasingly used for interventional guidance, making it necessary to also extract the attached great vessels. It is, for instance, important to extract the left atrium and the proximal part of the pulmonary veins to support guidance of ablation procedures for atrial fibrillation treatment. For cardiac resynchronization therapy, a heart model including the coronary sinus is needed. We present a heart model comprising the four heart chambers and the attached great vessels. By assigning individual linear transformations to the heart chambers and to short tubular segments building the great vessels, variable sizes of the heart chambers and bending of the vessels can be described in a consistent way. A configurable algorithmic framework that we call the adaptation engine matches the heart model automatically to cardiac CT angiography images in a multi-stage process. First, the heart is detected using a Generalized Hough Transformation. Subsequently, the heart chambers are adapted. This stage uses parametric as well as deformable mesh adaptation techniques. In the final stage, segments of the large vascular structures are successively activated and adapted. To optimize the computational performance, the adaptation engine can vary the mesh resolution and freeze already adapted mesh parts. The data used for validation were independent from the data used for model-building. Ground truth segmentations were generated for 37 CT data sets reconstructed at several cardiac phases from 17 patients. Segmentation errors were assessed for anatomical sub-structures, resulting in a mean surface-to-surface error ranging from 0.50 to 0.82 mm for the heart chambers and from 0.60 to 1.32 mm for the parts of the great vessels visible in the images.

  18. Validation of White-Matter Lesion Change Detection Methods on a Novel Publicly Available MRI Image Database.

    Science.gov (United States)

    Lesjak, Žiga; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2016-10-01

    Changes of white-matter lesions (WMLs) are good predictors of the progression of neurodegenerative diseases like multiple sclerosis (MS). The changes can be monitored based on longitudinal magnetic resonance (MR) imaging, and the need for their accurate and reliable quantification has led to the development of several automated MR image analysis methods. However, an objective comparison of the methods is difficult, because publicly unavailable validation datasets with ground truth and different sets of performance metrics were used. In this study, we acquired longitudinal MR datasets of 20 MS patients, in which brain regions were extracted, spatially aligned and intensity normalized. Two expert raters then delineated and jointly revised the WML changes on subtracted baseline and follow-up MR images to obtain ground truth WML segmentations. The main contribution of this paper is an objective, quantitative and systematic evaluation of two unsupervised and one supervised intensity based change detection method on the publicly available datasets with ground truth segmentations, using common pre- and post-processing steps and common evaluation metrics. In addition, different combinations of the two main steps of the studied change detection methods, i.e. dissimilarity map construction and its segmentation, were tested to identify the best performing combination.
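
    The two-stage structure under evaluation (dissimilarity map construction, then its segmentation) can be illustrated minimally on co-registered, intensity-normalized scans; the subtraction map and the k-sigma threshold below are generic stand-ins for the specific methods compared in the paper:

    ```python
    import numpy as np

    def change_map(baseline, followup, k=2.0):
        # Dissimilarity (subtraction) map between aligned, normalized scans.
        diff = followup.astype(float) - baseline.astype(float)
        thr = k * diff.std()          # symmetric threshold around zero
        growing = diff > thr          # new or enlarging lesion voxels
        shrinking = diff < -thr       # resolving lesion voxels
        return growing, shrinking
    ```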

  19. MRI Segmentation of the Human Brain: Challenges, Methods, and Applications

    Directory of Open Access Journals (Sweden)

    Ivana Despotović

    2015-01-01

    Full Text Available Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain’s anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation.

  20. MRI segmentation of the human brain: challenges, methods, and applications.

    Science.gov (United States)

    Despotović, Ivana; Goossens, Bart; Philips, Wilfried

    2015-01-01

    Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation.

  1. Segmentation and Location Computation of Bin Objects

    Directory of Open Access Journals (Sweden)

    C.R. Hema

    2008-11-01

    Full Text Available In this paper we present a stereo vision based system for segmentation and location computation of partially occluded objects in bin picking environments. Algorithms to segment partially occluded objects and to find the object location (midpoint: x, y and z coordinates) with respect to the bin area are proposed. The z coordinate is computed using stereo images and neural networks. The proposed algorithms are tested using two neural network architectures, namely Radial Basis Function nets and simple feedforward nets. The training results of the feedforward nets are found to be more suitable for the current application. The proposed stereo vision system is interfaced with an Adept SCARA Robot to perform bin picking operations. The vision system is found to be effective for partially occluded objects in the absence of albedo effects. The results are validated through real-time bin picking experiments on the Adept Robot.
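
    In the paper the z coordinate is learned by neural networks from stereo images; the textbook counterpart for calibrated, rectified cameras is the pinhole stereo relation z = f·B/d, shown here for comparison:

    ```python
    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        # z = f * B / d for a matched point seen in both rectified images:
        # focal length f in pixels, camera baseline B in metres,
        # disparity d in pixels between the left and right projections.
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px
    ```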

  2. Measuring segmented primary mirror WFE in the presence of vibration and thermal drift on the light-weighted JWST

    Science.gov (United States)

    Whitman, Tony L.; Dziak, Kenneth J.; Wells, Conrad; Olczak, Gene

    2012-09-01

    The light-weighted design of the Optical Telescope Element (OTE) of the James Webb Space Telescope (JWST) leads to additional sensitivity to vibration from the ground - an important consideration for the measurement uncertainty of the wavefront error (WFE) in the primary mirror. Furthermore, segmentation of the primary mirror leads to rigid-body movements of segment areas in the WFE. The ground vibrations are minimized with modifications to the test facility and by the architecture of the equipment supporting the load. Additional special test equipment (including strategically placed isolators, tunable mass dampers, and cryogenic magnetic dampers) mitigates the vibration and the response sensitivity before reaching the telescope. A multi-wavelength interferometer is designed and operated to accommodate the predicted residual vibration. Thermal drift also adds to the measurement variation. Test results of test equipment components, measurement theory, and finite element analysis combine to predict the test uncertainty in the future measurement of the primary mirror. The vibration input to the finite element model comes from accelerometer measurements of the facility with the environmental control pumps operating. One of the isolators has been built and tested to validate the dynamic performance. A preliminary model of the load support equipment and the OTE with the Integrated Science Instrument Module (ISIM) is complete. The performance of the add-on dampers has been established in previous applications. Operation of the multi-wavelength interferometer was demonstrated on a scaled hardware version of the JWST in an environment with vibration and thermal drift.

  3. Comparison of atlas-based techniques for whole-body bone segmentation.

    Science.gov (United States)

    Arabi, Hossein; Zaidi, Habib

    2017-02-01

    We evaluate the accuracy of whole-body bone extraction from whole-body MR images using a number of atlas-based segmentation methods. The motivation behind this work is to find the most promising approach for the purpose of MRI-guided derivation of PET attenuation maps in whole-body PET/MRI. To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via a leave-one-out cross validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean square distance (MSD) as image similarity measures for calculating the weighting factors, along with other atlas-dependent algorithms, such as (v) shape-based averaging (SBA) and (vi) Hofmann's pseudo-CT generation method. The performance evaluation of the different segmentation techniques was carried out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as Dice similarity (DSC) and relative volume difference (RVD), considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth. Considering the Dice criterion, global weighting atlas fusion methods provided moderate improvement of whole-body bone segmentation (DSC = 0.65 ± 0.05) compared to non-weighted IA (DSC = 0.60 ± 0.02). The local weighted atlas fusion approach using the MSD similarity measure outperformed the other strategies by achieving a DSC of 0.81 ± 0.03, while using the NCC and NMI measures resulted in a DSC of 0.78 ± 0.05 and 0.75 ± 0.04, respectively. Despite very long computation time, the extracted
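
    A sketch of the best-performing idea, local (voxel-wise) weighted atlas fusion with an MSD similarity measure; the patch size and the inverse-MSD weighting kernel are illustrative assumptions, not the paper's exact formulation:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_weighted_fusion(target, atlas_images, atlas_labels, patch=3, eps=1e-6):
        # Each atlas image is assumed already non-rigidly registered to the
        # target MRI; atlas_labels are the corresponding binary bone masks.
        votes = np.zeros(target.shape)
        total = np.zeros(target.shape)
        for img, lab in zip(atlas_images, atlas_labels):
            msd = uniform_filter((target.astype(float) - img) ** 2, size=patch)
            w = 1.0 / (msd + eps)        # low local MSD -> high weight
            votes += w * lab
            total += w
        return (votes / total) > 0.5     # similarity-weighted majority of votes
    ```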

  4. Signs of segmentation?

    DEFF Research Database (Denmark)

    Ilsøe, Anna

    2012-01-01

    This article addresses the contribution of decentralized collective bargaining to the development of different forms of flexicurity for different groups of employees on the Danish labour market. Based on five case studies of company-level bargaining on flexible working hours in Danish industry...... the text of the agreements. On the other hand, less flexible employees often face difficulties in meeting the demands of the agreements and may ultimately be forced to leave the company and rely on unemployment benefits and active labour market policies. In a flexicurity perspective, this development seems...... to imply a segmentation of the Danish workforce regarding hard and soft versions of flexicurity....

  5. Noncooperative Iris Segmentation

    Directory of Open Access Journals (Sweden)

    Elsayed Mostafa

    2012-01-01

    Full Text Available In noncooperative iris recognition one must deal with uncontrolled behavior of the subject as well as uncontrolled lighting conditions. That means eyelid and eyelash occlusion, non-uniform intensities, reflections, imperfect focus, and orientation, among others, are to be considered. To cope with this situation, a noncooperative iris segmentation algorithm is proposed, based on a numerically stable direct least squares fitting of ellipses model and a modified Chan-Vese model (local binary fitting energy) with variational level set formulation. The proposed algorithm is tested using CASIA-IrisV3.
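
    The ellipse-fitting ingredient can be sketched with OpenCV, which implements a related least-squares ellipse fit; the paper's numerically stable direct method is not reproduced in detail here:

    ```python
    import cv2
    import numpy as np

    def fit_iris_ellipse(edge_points):
        # edge_points: (N, 2) candidate boundary points, N >= 5 as required
        # by cv2.fitEllipse. Returns center, axis lengths, and rotation angle.
        pts = np.asarray(edge_points, dtype=np.float32)
        (cx, cy), (major, minor), angle = cv2.fitEllipse(pts)
        return (cx, cy), (major, minor), angle
    ```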

  6. Accurate Segmentation for Infrared Flying Bird Tracking

    Institute of Scientific and Technical Information of China (English)

    ZHENG Hong; HUANG Ying; LING Haibin; ZOU Qi; YANG Hao

    2016-01-01

    Bird strikes present a huge risk for air vehicles, especially since traditional airport bird surveillance is mainly dependent on inefficient human observation. For improving the effectiveness and efficiency of bird monitoring, computer vision techniques have been proposed to detect birds, determine bird flying trajectories, and predict aircraft takeoff delays. A flying bird with large deformation poses a great challenge to current tracking algorithms. We propose a segmentation based approach to enable tracking to adapt to the varying shape of the bird. The approach works by segmenting the object at a region of interest, which is determined by the object localization method and heuristic edge information. The segmentation is performed by a Markov random field, which is trained by foreground and background mixture Gaussian models. Experiments demonstrate that the proposed approach provides the ability to handle large deformations and outperforms the most state-of-the-art tracker in the infrared flying bird tracking problem.

  7. SU-E-J-208: Fast and Accurate Auto-Segmentation of Abdominal Organs at Risk for Online Adaptive Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, V; Wang, Y; Romero, A; Heijmen, B; Hoogeman, M [Erasmus MC Cancer Institute, Rotterdam (Netherlands); Myronenko, A; Jordan, P [Accuray Incorporated, Sunnyvale, United States. (United States)

    2014-06-01

    Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Auto-segmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing the local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: the Dice coefficient (Dc) and the Hausdorff distance (Hd). The proposed method was benchmarked against translation and rigid alignment based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation) with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, a Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs, with both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. Conclusion: Based on the achieved accuracy and computational time, we conclude that the investigated auto-segmentation
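
    The two metrics used above are easy to compute for binary masks; this generic sketch works in voxel units and would be scaled by the voxel spacing to obtain millimetres:

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(a, b):
        # Dice coefficient between two binary masks.
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def hausdorff(a, b):
        # Symmetric Hausdorff distance between the voxel coordinates of two
        # binary masks (in voxel units; multiply by spacing for millimetres).
        pa, pb = np.argwhere(a), np.argwhere(b)
        return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
    ```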

  8. Space/ground systems as cooperating agents

    Science.gov (United States)

    Grant, T. J.

    1994-01-01

    Within NASA and the European Space Agency (ESA) it is agreed that autonomy is an important goal for the design of future spacecraft and that this requires on-board artificial intelligence. NASA emphasizes deep space and planetary rover missions, while ESA considers on-board autonomy as an enabling technology for missions that must cope with imperfect communications. ESA's attention is on the space/ground system. A major issue is the optimal distribution of intelligent functions within the space/ground system. This paper describes the multi-agent architecture for space/ground systems (MAASGS) which would enable this issue to be investigated. A MAASGS agent may model a complete spacecraft, a spacecraft subsystem or payload, a ground segment, a spacecraft control system, a human operator, or an environment. The MAASGS architecture has evolved through a series of prototypes. The paper recommends that the MAASGS architecture should be implemented in the operational Dutch Utilization Center.

  9. OASIS is Automated Statistical Inference for Segmentation, with applications to multiple sclerosis lesion segmentation in MRI.

    Science.gov (United States)

    Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2013-01-01

    Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve for clinically relevant false positive rates of 1% and below of 0.59% (95% CI; [0.50%, 0.67%]) at the voxel level. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS out-performed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist 66% (95% CI: [52%, 78
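
    The core of the voxel-level model is ordinary logistic regression over stacked, intensity-normalized modalities. A much-simplified sketch follows; the published OASIS model additionally uses smoothed volumes and interaction terms, which we omit here:

```python
# Minimal sketch of voxel-wise lesion probability estimation, in the spirit
# of OASIS but deliberately simplified. Function names are ours.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_modalities(t1, t2, flair, pd):
    """Stack intensity-normalized volumes into an (n_voxels, 4) feature matrix."""
    return np.column_stack([v.ravel() for v in (t1, t2, flair, pd)])

def train_voxel_model(features, labels):
    """labels: binary manual lesion segmentation, flattened to match features."""
    return LogisticRegression(max_iter=1000).fit(features, labels.ravel())

def lesion_probability_map(model, features, shape):
    """Voxel-level probabilities of lesion presence, reshaped to the volume."""
    return model.predict_proba(features)[:, 1].reshape(shape)
```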

  10. The use of zeolites to generate PET phantoms for the validation of quantification strategies in oncology

    Energy Technology Data Exchange (ETDEWEB)

    Zito, Felicia; De Bernardi, Elisabetta; Soffientini, Chiara; Canzi, Cristina; Casati, Rosangela; Gerundini, Paolo; Baselli, Giuseppe [Nuclear Medicine Department, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, 20122 Milan (Italy); Bioengineering Department, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milan (Italy); Tecnomed Foundation, University of Milano-Bicocca, via Pergolesi 33, 20900 Monza (Italy)]

    2012-09-15

    Purpose: In recent years, segmentation algorithms and activity quantification methods have been proposed for oncological {sup 18}F-fluorodeoxyglucose (FDG) PET. A full assessment of these algorithms, necessary for a clinical transfer, requires a validation on data sets provided with a reliable ground truth as to the imaged activity distribution, which must be as realistic as possible. The aim of this work is to propose a strategy to simulate lesions of uniform uptake and irregular shape in an anthropomorphic phantom, with the possibility to easily obtain a ground truth as to lesion activity and borders. Methods: Lesions were simulated with samples of clinoptilolite, a family of natural zeolites of irregular shape, able to absorb aqueous solutions of {sup 18}F-FDG, available in a wide size range, and nontoxic. Zeolites were soaked in solutions of {sup 18}F-FDG for increasing times up to 120 min and their absorptive properties were characterized as function of soaking duration, solution concentration, and zeolite dry weight. Saturated zeolites were wrapped in Parafilm, positioned inside an Alderson thorax-abdomen phantom and imaged with a PET-CT scanner. The ground truth for the activity distribution of each zeolite was obtained by segmenting high-resolution finely aligned CT images, on the basis of independently obtained volume measurements. The fine alignment between CT and PET was validated by comparing the CT-derived ground truth to a set of zeolites' PET threshold segmentations in terms of Dice index and volume error. Results: The soaking time necessary to achieve saturation increases with zeolite dry weight, with a maximum of about 90 min for the largest sample. At saturation, a linear dependence of the uptake normalized to the solution concentration on zeolite dry weight (R{sup 2}= 0.988), as well as a uniform distribution of the activity over the entire zeolite volume from PET imaging were demonstrated. These findings indicate that the {sup 18}F

  11. Segmented Target Design

    Science.gov (United States)

    Merhi, Abdul Rahman; Frank, Nathan; Gueye, Paul; Thoennessen, Michael; MoNA Collaboration

    2013-10-01

    A proposed segmented target would improve decay energy measurements of neutron-unbound nuclei. Experiments like this have been performed at the National Superconducting Cyclotron Laboratory (NSCL) located at Michigan State University. Many different nuclei are produced in such experiments, some of which immediately decay into a charged particle and neutron. The charged particles are bent by a large magnet and measured by a suite of charged particle detectors. The neutrons are measured by the Modular Neutron Array (MoNA) and Large Multi-Institutional Scintillation Array (LISA). With the current target setup, a nucleus in a neutron-unbound state is produced with a radioactive beam impinged upon a beryllium target. The resolution of these measurements is very dependent on the target thickness since the nuclear interaction point is unknown. In a segmented target using alternating layers of silicon detectors and Be-targets, the Be-target in which the nuclear reaction takes place would be determined. Thus the experimental resolution would improve. This poster will describe the improvement over the current target along with the status of the design. Work supported by Augustana College and the National Science Foundation grant #0969173.

  12. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  14. Studies on the key parameters in segmental lining design

    Institute of Scientific and Technical Information of China (English)

    Zhenchang Guan; Tao Deng; Gang Wang; Yujing Jiang

    2015-01-01

    The uniform ring model and the shell-spring model for segmental lining design are reviewed in this article. The former is the most promising means to reflect the real behavior of segmental lining, while the latter is the most popular means in practice due to its simplicity. To understand the relationship and the difference between these two models, both are applied to the engineering practice of Fuzhou Metro Line I, where the key parameters used in both models are described and compared. The effective ratio of bending rigidity η, reflecting the relative stiffness between segmental lining and surrounding ground, and the transfer ratio of bending moment ξ, reflecting the relative stiffness between segment and joint, which are the two key parameters used in the uniform ring model, are especially emphasized. The reasonable values for these two key parameters are calibrated by comparing the bending moments calculated from the two models. Through case studies, it is concluded that the effective ratio of bending rigidity η increases significantly with good soil properties, increases slightly with increasing overburden, and decreases slightly with increasing water head. Meanwhile, the transfer ratio of bending moment ξ seems to relate only to the properties of the segmental lining itself and has little relation to the ground conditions. These results could facilitate the design practice for Fuzhou Metro Line I, and could also provide a reference for other projects with similar scenarios.
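
    For readers unfamiliar with how these two parameters enter the uniform ring model, the sketch below shows one common formulation (the modified routine method): the ring is analyzed with a reduced rigidity ηEI, and the computed moment M is redistributed between segment and joint by ξ. Both the formulation and the numbers are illustrative assumptions, not values from the article.

```python
# Illustrative only; formulas follow one common statement of the modified
# routine method, and the numeric values are made up.
def effective_rigidity(E, I, eta):
    """Reduced bending rigidity eta*E*I of the jointed ring."""
    return eta * E * I

def redistribute_moment(M, xi):
    """Segment is designed for (1 + xi)*M, joint for (1 - xi)*M."""
    return (1 + xi) * M, (1 - xi) * M

segment_M, joint_M = redistribute_moment(M=120.0, xi=0.3)  # kN*m, illustrative
print(segment_M, joint_M)  # 156.0 84.0
```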

  15. Towards Autonomous Agriculture: Automatic Ground Detection Using Trinocular Stereovision

    Directory of Open Access Journals (Sweden)

    Annalisa Milella

    2012-09-01

    Full Text Available Autonomous driving is a challenging problem, particularly when the domain is unstructured, as in an outdoor agricultural setting. Thus, advanced perception systems are primarily required to sense and understand the surrounding environment recognizing artificial and natural structures, topology, vegetation and paths. In this paper, a self-learning framework is proposed to automatically train a ground classifier for scene interpretation and autonomous navigation based on multi-baseline stereovision. The use of rich 3D data is emphasized where the sensor output includes range and color information of the surrounding environment. Two distinct classifiers are presented, one based on geometric data that can detect the broad class of ground and one based on color data that can further segment ground into subclasses. The geometry-based classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate geometric appearance of 3D stereo-generated data with class labels. Then, it makes predictions based on past observations. It serves as well to provide training labels to the color-based classifier. Once trained, the color-based classifier is able to recognize similar terrain classes in stereo imagery. The system is continuously updated online using the latest stereo readings, thus making it feasible for long range and long duration navigation, over changing environments. Experimental results, obtained with a tractor test platform operating in a rural environment, are presented to validate this approach, showing an average classification precision and recall of 91.0% and 77.3%, respectively.

  16. An objective evaluation of a segmented foot model.

    Science.gov (United States)

    Okita, Nori; Meyers, Steven A; Challis, John H; Sharkey, Neil A

    2009-07-01

    Segmented foot and ankle models divide the foot into multiple segments in order to obtain more meaningful information about its functional behavior in health and disease. The goal of this research was to objectively evaluate the fidelity of a generalized three-segment foot and ankle model defined using externally mounted markers. An established apparatus that reproduces the kinematics and kinetics of gait in cadaver lower extremities was used to independently examine the validity of the rigid body assumption and the magnitude of soft tissue artifact induced by skin-mounted markers. Stance phase simulations were conducted on ten donated limbs while recording the three-dimensional kinematic trajectories of skin-mounted and then bone-mounted marker constructs. Segment kinematics were compared to underlying bone kinematics to examine the rigid body assumption. Virtual markers were calculated from the bone-mounted marker set and then compared to the skin-mounted markers to examine soft tissue artifact. The shank and hindfoot segments behaved as rigid bodies. The forefoot segment violated the rigid body assumption, as evidenced by significant differences between motions of the first metatarsal and the forefoot segment, and relative motion between the first and fifth metatarsals. Motion vectors of the external skin markers relative to their virtual counterparts were no more than 3 mm in each direction, and 3-7 mm overall. Artifactual marker motion had mild effects on inter-segmental kinematics. Despite these errors, the segmented model appeared to perform reasonably well overall. The data presented here enable more informed interpretations of clinical findings using the segmented model approach.

  17. Validation of Mission Plans Through Simulation

    Science.gov (United States)

    St-Pierre, J.; Melanson, P.; Brunet, C.; Crabtree, D.

    2002-01-01

    The purpose of a spacecraft mission planning system is to automatically generate safe and optimized mission plans for a single spacecraft, or for several functioning in unison. The system verifies user input syntax, conformance to commanding constraints, absence of duty cycle violations, timing conflicts, state conflicts, etc. Present-day constraint-based systems with state-based predictive models use verification rules derived from expert knowledge. A familiar solution found in Mission Operations Centers is to complement the planning system with a high-fidelity spacecraft simulator. Often a dedicated workstation, the simulator is frequently used for operator training and procedure validation, and may be interfaced to actual control stations with command and telemetry links. While there are distinct advantages to having a planning system offer realistic operator training using the actual flight control console, physical verification of data transfer across layers, and procedure validation, experience has revealed some drawbacks and inefficiencies in ground segment operations. With these considerations, two simulation-based mission plan validation projects are under way at the Canadian Space Agency (CSA): RVMP and ViSION. The tools proposed in these projects will automatically run scenarios and provide execution reports to operations planning personnel, prior to actual command upload. This can provide an important safeguard against system or human errors that can only be detected with high-fidelity, interdependent spacecraft models running concurrently. The core element common to these projects is a spacecraft simulator, built with off-the-shelf components such as CAE's Real-Time Object-Based Simulation Environment (ROSE) technology, MathWorks' MATLAB/Simulink, and Analytical Graphics' Satellite Tool Kit (STK). To complement these tools, additional components were developed, such as an emulated Spacecraft Test and Operations Language (STOL) interpreter and CCSDS TM

  18. Fast Approximate Broadband Phase Retrieval for Segmented Systems

    Science.gov (United States)

    Jurling, Alden S.; Fienup, James R.

    2011-01-01

    Broadband phase retrieval is needed when: a) narrow spectral filters are unavailable; b) sources are dim; c) throughput is low due to misalignment; or d) exposure times are short, e.g., due to pointing instability (space) or atmospheric instability (ground-based AO). The traditional approach is computationally burdensome for extreme bandwidths. The approximate approach: a) substitutes a monochromatic model; b) blurs both model and data. Test case performance: a) approximately 270x reduction in computational cost for an FGS-like test case; b) good accuracy for a monolithic system; c) acceptable accuracy for segmented systems, with accuracy reduced by diffraction and by the higher-order segment model.

  19. Target segmentation in IR imagery using a wavelet-based technique

    Science.gov (United States)

    Sadjadi, Firooz A.

    1995-10-01

    Segmentation of ground-based targets embedded in clutter, obtained by airborne infrared (IR) imaging sensors, is one of the challenging problems in automatic target recognition. In this paper a new texture-based segmentation technique is presented that uses the statistics of 2D wavelet decomposition components of local sections of the image. A measure of statistical similarity is then used to segment the image and separate the target from the background. This technique has been applied to a set of real sequential IR imagery and is shown to produce a high degree of segmentation accuracy across varying ranges.
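
    A minimal sketch of the block-wise wavelet statistics idea, assuming the PyWavelets package; the window size, wavelet, and choice of statistics are our illustrative assumptions, not the paper's configuration:

```python
# Block-wise wavelet texture features: per-block energy of each 2D DWT
# subband, usable as local texture statistics for segmentation. Sketch only.
import numpy as np
import pywt

def wavelet_texture_features(image, window=16, wavelet="db2"):
    """Return a (rows, cols, 3) map of subband energies for a 2D image."""
    h, w = image.shape
    feats = []
    for i in range(0, h - window + 1, window):
        row = []
        for j in range(0, w - window + 1, window):
            block = image[i:i + window, j:j + window]
            _, (cH, cV, cD) = pywt.dwt2(block, wavelet)
            # Mean squared coefficient = energy of each detail subband.
            row.append([np.mean(c ** 2) for c in (cH, cV, cD)])
        feats.append(row)
    return np.array(feats)
```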

  20. Color Image Segmentation using Kohonen Self-Organizing Map (SOM)

    Directory of Open Access Journals (Sweden)

    I Komang Ariana

    2014-05-01

    Full Text Available Color image segmentation using a Kohonen Self-Organizing Map (SOM) is proposed in this study. RGB color space is used as the input to the SOM clustering process. The distance between weight vector and input vector in the learning and recognition stages of the SOM method is measured with the normalized Euclidean distance. The validity of the clustering result is then tested with the Davies-Bouldin Index (DBI) and Validity Measure (VM) to determine the optimal number of clusters. The clustering result for the optimal number of clusters is then processed with spatial operations, which eliminate noise and small regions formed by the clustering. This makes the segmentation process automatic and unsupervised. The segmentation results are close to human perception.
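
    A minimal sketch of SOM-based color clustering, using the MiniSom package as a stand-in for the authors' SOM; grid size, sigma, learning rate, and iteration count are illustrative, and the DBI/VM cluster selection and spatial post-processing are omitted:

```python
# Sketch only: label each pixel by its best-matching unit on a small SOM grid.
import numpy as np
from minisom import MiniSom

def som_color_segmentation(rgb_image, grid=(2, 2), iters=5000):
    pixels = rgb_image.reshape(-1, 3).astype(float) / 255.0  # RGB input space
    som = MiniSom(grid[0], grid[1], 3, sigma=0.5, learning_rate=0.25)
    som.train_random(pixels, iters)
    # Each pixel is assigned the grid coordinates of its winning neuron.
    winners = np.array([som.winner(p) for p in pixels])
    label_ids = winners[:, 0] * grid[1] + winners[:, 1]
    return label_ids.reshape(rgb_image.shape[:2])
```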

  1. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Seoungjae Cho

    2014-01-01

    Full Text Available A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
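
    A simplified sketch of the voxel quantization and lowermost-heightmap steps; the voxel size and height tolerance are invented values, and the neighboring-voxel comparison that gives the method its real-time speed is omitted:

```python
# Sketch only: quantize 3D points to voxels, keep the lowest occupied voxel
# per (x, y) column, and call points near that floor "ground".
import numpy as np

def segment_ground(points, voxel=0.2, height_tol=0.3):
    """points: (N, 3) LiDAR returns. Returns a boolean ground mask per point."""
    vox = np.floor(points / voxel).astype(int)  # quantize to voxel indices
    # Lowermost heightmap: minimum z voxel index per (x, y) column.
    heightmap = {}
    for x, y, z in vox:
        key = (x, y)
        if key not in heightmap or z < heightmap[key]:
            heightmap[key] = z
    # A point is ground if it lies within height_tol of its column's floor.
    floor_z = np.array([heightmap[(x, y)] for x, y, _ in vox]) * voxel
    return points[:, 2] - floor_z < height_tol
```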

  2. Segmented heat exchanger

    Science.gov (United States)

    Baldwin, Darryl Dean; Willi, Martin Leo; Fiveland, Scott Byron; Timmons, Kristine Ann

    2010-12-14

    A segmented heat exchanger system for transferring heat energy from an exhaust fluid to a working fluid. The heat exchanger system may include a first heat exchanger for receiving incoming working fluid and the exhaust fluid. The working fluid and exhaust fluid may travel through at least a portion of the first heat exchanger in a parallel flow configuration. In addition, the heat exchanger system may include a second heat exchanger for receiving working fluid from the first heat exchanger and exhaust fluid from a third heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the second heat exchanger in a counter flow configuration. Furthermore, the heat exchanger system may include a third heat exchanger for receiving working fluid from the second heat exchanger and exhaust fluid from the first heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the third heat exchanger in a parallel flow configuration.

  3. Schizophrenia as segmental progeria

    Science.gov (United States)

    Papanastasiou, Evangelos; Gaughran, Fiona; Smith, Shubulade

    2011-01-01

    Schizophrenia is associated with a variety of physical manifestations (i.e. metabolic, neurological) and despite psychotropic medication being blamed for some of these (in particular obesity and diabetes), there is evidence that schizophrenia itself confers an increased risk of physical disease and early death. The observation that schizophrenia and progeroid syndromes share common clinical features and molecular profiles gives rise to the hypothesis that schizophrenia could be conceptualized as a whole body disorder, namely a segmental progeria. Mammalian cells employ the mechanisms of cellular senescence and apoptosis (programmed cell death) as a means to control inevitable DNA damage and cancer. Exacerbation of those processes is associated with accelerated ageing and schizophrenia and this warrants further investigation into possible underlying biological mechanisms, such as epigenetic control of the genome. PMID:22048679

  4. Probabilistic retinal vessel segmentation

    Science.gov (United States)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  5. Learning evaluation of ultrasound image segmentation using combined measures

    Science.gov (United States)

    Fang, Mengjie; Luo, Yongkang; Ding, Mingyue

    2016-03-01

    Objective evaluation of medical image segmentation is an important step in establishing its validity and clinical applicability. Although many studies present segmentation methods for medical images, few study how to evaluate their results. This paper presents a learning-based evaluation method with combined measures, designed to be as close as possible to the clinicians' judgment; such an evaluation is more quantitative and precise for clinical diagnosis. In our experiment, the data set includes 120 segmentation results each for the lumen-intima boundary (LIB) and the media-adventitia boundary (MAB) of carotid ultrasound images. Fifteen measures drawn from the goodness method and the discrepancy method are first used to evaluate the segmentation results individually. The experimental results show that, compared with the discrepancy method, the accuracy of the goodness measures alone is poor. By combining the measures of the two methods, however, the average accuracy and the area under the receiver operating characteristic (ROC) curve of the two segmentation groups exceed 93% and 0.9, respectively. The results for MAB are better than those for LIB, which shows that this novel method can effectively evaluate segmentation results. Moreover, it lays the foundation for an unsupervised segmentation evaluation system.
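
    Schematically, the combination step reduces to scoring each segmentation with a weighted mix of measures and checking agreement with expert ratings via ROC analysis. A hedged sketch follows; the weighting scheme and names are ours, not the paper's:

```python
# Sketch only: combine per-result evaluation measures and summarize agreement
# with expert judgments by ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def combined_score(measures, weights):
    """Weighted combination of measures; measures is (n_results, n_measures)."""
    return np.asarray(measures) @ np.asarray(weights)

def evaluate(expert_ok, scores):
    """expert_ok: 1 if clinicians judged a segmentation acceptable, else 0.
    scores: the combined measure for each of the same segmentation results."""
    return roc_auc_score(expert_ok, scores)
```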

  6. Implicit Language Learning: Adults' Ability to Segment Words in Norwegian

    Science.gov (United States)

    Kittleson, Megan M.; Aguilar, Jessica M.; Tokerud, Gry Line; Plante, Elena; Asbjornsen, Arve E.

    2010-01-01

    Previous language learning research reveals that the statistical properties of the input offer sufficient information to allow listeners to segment words from fluent speech in an artificial language. The current pair of studies uses a natural language to test the ecological validity of these findings and to determine whether a listener's language…

  7. Automated medical image segmentation techniques

    Directory of Open Access Journals (Sweden)

    Sharma Ne