WorldWideScience

Sample records for ground truth automatic

  1. Development of mine explosion ground truth smart sensors

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Steven R. [Rocky Mountain Geophysics, Inc., Los Alamos, NM (United States); Harben, Phillip E. [Rocky Mountain Geophysics, Inc., Los Alamos, NM (United States); Jarpe, Steve [Jarpe Data Solutions, Prescott, AZ (United States); Harris, David B. [Deschutes Signal Processing, Maupin, OR (United States)

    2015-09-14

    Accurate seismo-acoustic source location is one of the fundamental aspects of nuclear explosion monitoring. Critical to improved location is the compilation of ground truth data sets for which origin time and location are accurately known. Substantial effort by the National Laboratories and other seismic monitoring groups has been undertaken to acquire and develop ground truth catalogs that form the basis of location efforts (e.g. Sweeney, 1998; Bergmann et al., 2009; Waldhauser and Richards, 2004). In particular, more GT1 (Ground Truth 1 km) events are required to improve three-dimensional velocity models that are currently under development. Mine seismicity can form the basis of accurate ground truth datasets. Although the location of mining explosions can often be accurately determined using array methods (e.g. Harris, 1991) and from overhead observations (e.g. MacCarthy et al., 2008), accurate origin time estimation can be difficult. Occasionally, mine operators will share shot time, location, explosion size and even shot configuration, but this is rarely done, especially in foreign countries. Additionally, shot times provided by mine operators are often inaccurate. An inexpensive ground truth event detector that could be mailed to a contact, placed in close proximity (< 5 km) to mining regions or earthquake aftershock regions, and that automatically transmits ground-truth parameters back would greatly aid in the development of ground truth datasets that could be used to improve nuclear explosion monitoring capabilities. We are developing an inexpensive, compact, lightweight smart sensor unit (or units) that could be used in the development of ground truth datasets for the purpose of improving nuclear explosion monitoring capabilities. The units must be easy to deploy, be able to operate autonomously for a significant period of time (> 6 months), and be inexpensive enough to be discarded after useful operations have expired (although this may not be part of our business
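    The abstract does not say which detection algorithm the smart sensor units would run; a common choice for autonomous seismic event detection is a short-term-average/long-term-average (STA/LTA) trigger. The sketch below is a minimal, illustrative NumPy version of such a trigger; the window lengths, threshold, and synthetic trace are assumptions, not project values.

```python
# Hypothetical STA/LTA event trigger such as a ground-truth smart sensor
# might run onboard; window lengths and threshold are illustrative only.
import numpy as np

def sta_lta_trigger(trace, fs, sta_win=1.0, lta_win=30.0, threshold=4.0):
    """Return sample indices where STA/LTA of the squared trace exceeds threshold."""
    nsta = int(sta_win * fs)
    nlta = int(lta_win * fs)
    energy = trace.astype(float) ** 2
    # Running means via cumulative sums (simple, not optimized).
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    # Align the two series so both end at the last sample.
    n = min(len(sta), len(lta))
    ratio = sta[-n:] / np.maximum(lta[-n:], 1e-12)
    return np.flatnonzero(ratio > threshold) + (len(trace) - n)

# Example: a noise trace with a burst ("shot") injected at 60 s.
fs = 100.0
t = np.arange(0, 120, 1 / fs)
trace = np.random.randn(t.size)
trace[int(60 * fs):int(61 * fs)] += 10 * np.random.randn(int(fs))
picks = sta_lta_trigger(trace, fs)
if picks.size:
    print("first trigger at %.2f s" % (picks[0] / fs))
```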

  2. Fast and Accurate Ground Truth Generation for Skew-Tolerance Evaluation of Page Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Okun Oleg

    2006-01-01

    Many image segmentation algorithms are known, but often there is an inherent obstacle in the unbiased evaluation of segmentation quality: the absence or lack of a common objective representation for segmentation results. Such a representation, known as the ground truth, is a description of what one should obtain as the result of ideal segmentation, independently of the segmentation algorithm used. The creation of ground truth is a laborious process and therefore any degree of automation is always welcome. Document image analysis is one of the areas where ground truths are employed. In this paper, we describe an automated tool called GROTTO intended to generate ground truths for skewed document images, which can be used for the performance evaluation of page segmentation algorithms. Some of these algorithms are claimed to be insensitive to skew (tilt of text lines). However, this fact is usually supported only by a visual comparison of what one obtains and what one should obtain, since ground truths are mostly available for upright images, that is, those without skew. As a result, the evaluation is both subjective (that is, prone to errors) and tedious. Our tool allows users to quickly and easily produce many sufficiently accurate ground truths that can be employed in practice, and therefore it facilitates automatic performance evaluation. The main idea is to utilize the ground truths available for upright images and the concept of the representative square [9] in order to produce the ground truths for skewed images. The usefulness of our tool is demonstrated through a number of experiments with real document images of complex layout.
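    The core idea above is to reuse ground truth from upright images under a known skew angle. As a purely geometric illustration (not the GROTTO tool itself), the sketch below rotates the corner coordinates of an upright ground-truth region about the page centre by the skew angle; the region, page size, and angle are made-up values.

```python
# Minimal geometric sketch (not the GROTTO tool): transform upright
# ground-truth region corners into the coordinates of a skewed page by
# rotating them about the page centre through the known skew angle.
import numpy as np

def rotate_regions(corners, angle_deg, page_w, page_h):
    """corners: (N, 2) array of (x, y) points from the upright ground truth."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    centre = np.array([page_w / 2.0, page_h / 2.0])
    return (np.asarray(corners, float) - centre) @ rot.T + centre

# One rectangular text-line region, skewed by 5 degrees (illustrative values).
region = [(100, 200), (500, 200), (500, 230), (100, 230)]
print(rotate_regions(region, 5.0, page_w=600, page_h=800))
```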

  3. Ground Truth Collections at the MTI Core Sites

    International Nuclear Information System (INIS)

    Garrett, A.J.

    2001-01-01

    The Savannah River Technology Center (SRTC) selected 13 sites across the continental US and one site in the western Pacific to serve as the primary or core sites for the collection of ground truth data for validation of MTI science algorithms. Imagery and ground truth data from several of these sites are presented in this paper. These sites are the Comanche Peak, Pilgrim and Turkey Point power plants, Ivanpah playas, Crater Lake, Stennis Space Center and the Tropical Western Pacific ARM site on the island of Nauru. Ground truth data include water temperatures (bulk and skin), radiometric data, meteorological data and plant operating data. The organizations that manage these sites assist SRTC with its ground truth data collections and also give the MTI project a variety of ground truth measurements that they make for their own purposes. Collectively, the ground truth data from the 14 core sites constitute a comprehensive database for science algorithm validation.

  4. Ground Truth Annotation in T Analyst

    DEFF Research Database (Denmark)

    2015-01-01

    This video shows how to annotate the ground truth tracks in the thermal videos. The ground truth tracks are produced to be able to compare them to tracks obtained from a Computer Vision tracking approach. The program used for annotation is T-Analyst, which is developed by Aliaksei Laureshyn, Ph...

  5. Ground-truth measurement systems

    Science.gov (United States)

    Serafin, R.; Seliga, T. A.; Lhermitte, R. M.; Nystuen, J. A.; Cherry, S.; Bringi, V. N.; Blackmer, R.; Heymsfield, G. M.

    1981-01-01

    Ground-truth measurements of precipitation and related weather events are an essential component of any satellite system designed for monitoring rainfall from space. Such measurements are required for testing, evaluation, and operations; they provide detailed information on the actual weather events, which can then be compared with satellite observations intended to provide both quantitative and qualitative information about them. Also, very comprehensive ground-truth observations should lead to a better understanding of precipitation fields and their relationships to satellite data. This process serves two very important functions: (a) aiding in the development and interpretation of schemes of analyzing satellite data, and (b) providing a continuing method for verifying satellite measurements.

  6. Validation of neural spike sorting algorithms without ground-truth information.

    Science.gov (United States)

    Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F

    2016-05-01

    The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms. Copyright © 2016 Elsevier B.V. All rights reserved.
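    As a rough illustration of the stability idea described above (not the paper's actual metrics), the sketch below perturbs the input features, reruns a sorter on the same detected events, and scores each reference unit by its best Jaccard overlap in the rerun; `sorter` is a hypothetical callable returning one integer label per event.

```python
# Toy stability-style validation sketch: perturb the data, rerun a sorter
# that labels a fixed set of detected events, match each reference unit to
# its best-overlapping unit in the rerun, and report per-unit agreement.
import numpy as np

def per_unit_stability(sorter, features, n_runs=5, noise_scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    ref = sorter(features)                       # reference labelling
    units = np.unique(ref)
    scores = np.zeros((n_runs, units.size))
    for r in range(n_runs):
        perturbed = features + noise_scale * rng.standard_normal(features.shape)
        labels = sorter(perturbed)
        for i, u in enumerate(units):
            mask = ref == u
            best = 0.0
            for v in np.unique(labels):          # best-matching rerun unit (Jaccard)
                inter = np.sum(mask & (labels == v))
                union = np.sum(mask | (labels == v))
                best = max(best, inter / union if union else 0.0)
            scores[r, i] = best
    return units, scores.mean(axis=0)            # mean stability per unit in [0, 1]
```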

  7. Eliciting Perceptual Ground Truth for Image Segmentation

    OpenAIRE

    Hodge, Victoria Jane; Eakins, John; Austin, Jim

    2006-01-01

    In this paper, we investigate human visual perception and establish a body of ground truth data elicited from human visual studies. We aim to build on the formative work of Ren, Eakins and Briggs who produced an initial ground truth database. Human subjects were asked to draw and rank their perceptions of the parts of a series of figurative images. These rankings were then used to score the perceptions, identify the preferred human breakdowns and thus allow us to induce perceptual rules for h...

  8. On the ground truth problem of malicious DNS traffic analysis

    DEFF Research Database (Denmark)

    Stevanovic, Matija; Pedersen, Jens Myrup; D’Alconzo, Alessandro

    2015-01-01

    algorithms at their core. These methods require accurate ground truth of both malicious and benign DNS traffic for model training as well as for the performance evaluation. This paper elaborates on the problem of obtaining such a ground truth and evaluates practices employed by contemporary detection methods...

  9. ON CONSTRUCTION OF A RELIABLE GROUND TRUTH FOR EVALUATION OF VISUAL SLAM ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Jan Bayer

    2016-11-01

    In this work we are concerned with the problem of localization accuracy evaluation of visual-based Simultaneous Localization and Mapping (SLAM) techniques. Quantitative evaluation of SLAM algorithm performance is usually done using the established metrics of Relative Pose Error and Absolute Trajectory Error, which require a precise and reliable ground truth. Such a ground truth is usually hard to obtain, since it requires an expensive external localization system. In this work we propose to use the SLAM algorithm itself to construct a reliable ground truth by offline frame-by-frame processing. The generated ground truth is suitable for evaluation of different SLAM systems, as well as for tuning the parametrization of the on-line SLAM. The presented practical experimental results indicate the feasibility of the proposed approach.
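    The Absolute Trajectory Error mentioned above is conventionally computed by rigidly aligning the estimated positions to the ground-truth positions and taking the RMSE of the residuals. The sketch below follows that standard recipe (Horn/Umeyama alignment, rotation and translation only); it is a generic illustration, not code from the paper.

```python
# Standard Absolute Trajectory Error (ATE): rigidly align the estimated
# trajectory to the ground truth, then report the RMSE of position residuals.
import numpy as np

def ate_rmse(est, gt):
    """est, gt: (N, 3) arrays of corresponding camera/robot positions."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # best-fit rotation
    t = mu_g - R @ mu_e                           # best-fit translation
    residuals = (R @ est.T).T + t - gt
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```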

  10. AMS Ground Truth Measurements: Calibration and Test Lines

    International Nuclear Information System (INIS)

    Wasiolek, P.

    2013-01-01

    Airborne gamma spectrometry is one of the primary techniques used to define the extent of ground contamination after a radiological incident. Its usefulness was demonstrated extensively during the response to the Fukushima nuclear power plant (NPP) accident in March-May 2011. To map ground contamination a set of scintillation detectors is mounted on an airborne platform (airplane or helicopter) and flown over contaminated areas. The acquisition system collects spectral information together with the aircraft position and altitude every second. To provide useful information to decision makers, the count rate data expressed in counts per second (cps) needs to be converted to the terrestrial component of the exposure rate 1 m above ground, or surface activity of isotopes of concern. This is done using conversion coefficients derived from calibration flights. During a large scale radiological event, multiple flights may be necessary and may require use of assets from different agencies. However, as the production of a single, consistent map product depicting the ground contamination is the primary goal, it is critical to establish very early into the event a common calibration line. Such a line should be flown periodically in order to normalize data collected from different aerial acquisition systems and potentially flown at different flight altitudes and speeds. In order to verify and validate individual aerial systems, the calibration line needs to be characterized in terms of ground truth measurements. This is especially important if the contamination is due to short-lived radionuclides. The process of establishing such a line, as well as necessary ground truth measurements, is described in this document.
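    As a purely illustrative sketch of the bookkeeping described above (the numbers are invented and no real conversion coefficients are implied), a conversion coefficient can be derived over the calibration line as the ratio of ground-truth exposure rate to airborne count rate and then applied to survey data.

```python
# Illustrative bookkeeping only (all numbers are made up): derive a
# conversion coefficient over the calibration line and apply it to a survey.
import numpy as np

ground_truth_uR_per_h = np.array([12.0, 15.5, 11.2, 14.8])      # 1 m above ground
airborne_cps          = np.array([5100., 6600., 4800., 6300.])  # same line segments

k = (ground_truth_uR_per_h / airborne_cps).mean()  # uR/h per cps at survey altitude
survey_cps = np.array([4200., 9700., 15300.])
print("terrestrial exposure rate estimate:", k * survey_cps, "uR/h")
```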

  11. AMS Ground Truth Measurements: Calibrations and Test Lines

    Energy Technology Data Exchange (ETDEWEB)

    Wasiolek, Piotr T. [National Security Technologies, LLC

    2015-12-01

    Airborne gamma spectrometry is one of the primary techniques used to define the extent of ground contamination after a radiological incident. Its usefulness was demonstrated extensively during the response to the Fukushima NPP accident in March-May 2011. To map ground contamination, a set of scintillation detectors is mounted on an airborne platform (airplane or helicopter) and flown over contaminated areas. The acquisition system collects spectral information together with the aircraft position and altitude every second. To provide useful information to decision makers, the count data, expressed in counts per second (cps), need to be converted to a terrestrial component of the exposure rate at 1 meter (m) above ground, or surface activity of the isotopes of concern. This is done using conversion coefficients derived from calibration flights. During a large-scale radiological event, multiple flights may be necessary and may require use of assets from different agencies. However, because production of a single, consistent map product depicting the ground contamination is the primary goal, it is critical to establish a common calibration line very early into the event. Such a line should be flown periodically in order to normalize data collected from different aerial acquisition systems that are potentially flown at different flight altitudes and speeds. In order to verify and validate individual aerial systems, the calibration line needs to be characterized in terms of ground truth measurements. This is especially important if the contamination is due to short-lived radionuclides. The process of establishing such a line, as well as necessary ground truth measurements, is described in this document.

  12. Is our Ground-Truth for Traffic Classification Reliable?

    DEFF Research Database (Denmark)

    Carela-Español, Valentín; Bujlow, Tomasz; Barlet-Ros, Pere

    2014-01-01

    . In order to evaluate these tools we have carefully built a labeled dataset of more than 500 000 flows, which contains traffic from popular applications. Our results present PACE, a commercial tool, as the most reliable solution for ground-truth generation. However, among the open-source tools available...

  13. Modified ground-truthing: an accurate and cost-effective food environment validation method for town and rural areas.

    Science.gov (United States)

    Caspi, Caitlin Eicher; Friebur, Robin

    2016-03-17

    A major concern in food environment research is the lack of accuracy in commercial business listings of food stores, which are convenient and commonly used. Accuracy concerns may be particularly pronounced in rural areas. Ground-truthing or on-site verification has been deemed the necessary standard to validate business listings, but researchers perceive this process to be costly and time-consuming. This study calculated the accuracy and cost of ground-truthing three town/rural areas in Minnesota, USA (an area of 564 miles, or 908 km), and simulated a modified validation process to increase efficiency without compromising accuracy. For traditional ground-truthing, all streets in the study area were driven, while the route and geographic coordinates of food stores were recorded. The process required 1510 miles (2430 km) of driving and 114 staff hours. The ground-truthed list of stores was compared with commercial business listings, which had an average positive predictive value (PPV) of 0.57 and sensitivity of 0.62 across the three sites. Using observations from the field, a modified process was proposed in which only the streets located within central commercial clusters (the 1/8 mile or 200 m buffer around any cluster of 2 stores) would be validated. Modified ground-truthing would have yielded an estimated PPV of 1.00 and sensitivity of 0.95, and would have resulted in a reduction of approximately 88 % of the mileage costs. We conclude that ground-truthing is necessary in town/rural settings. The modified ground-truthing process, with excellent accuracy at a fraction of the costs, suggests a new standard and warrants further evaluation.
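    A worked example of the two accuracy measures reported above, assuming each source has been reduced to a set of unique store identifiers; the store names are made up.

```python
# Worked example of the accuracy measures above: positive predictive value
# and sensitivity of a commercial listing against the ground-truthed stores.
listed = {"A", "B", "C", "D", "E"}           # commercial business listing
onsite = {"B", "C", "D", "F", "G", "H"}      # ground-truthed (observed) stores

tp = len(listed & onsite)                    # listed stores confirmed on site
ppv = tp / len(listed)                       # positive predictive value
sensitivity = tp / len(onsite)               # share of real stores captured
print(round(ppv, 2), round(sensitivity, 2))  # 0.6 0.5
```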

  14. Satellite markers: a simple method for ground truth car pose on stereo video

    Science.gov (United States)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    In the context of advanced safety systems, artificial prediction of the future location of other cars is a must. The remote estimation of car pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this specific context is associated with referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is the method to generate ground truth car pose only from video data. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system when it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, and the instantaneous spatial orientation for each camera at frame level.

  15. Modeling multiple time series annotations as noisy distortions of the ground truth: An Expectation-Maximization approach.

    Science.gov (United States)

    Gupta, Rahul; Audhkhasi, Kartik; Jacokes, Zach; Rozga, Agata; Narayanan, Shrikanth

    2018-01-01

    Studies of time-continuous human behavioral phenomena often rely on ratings from multiple annotators. Since the ground truth of the target construct is often latent, the standard practice is to use ad-hoc metrics (such as averaging annotator ratings). Despite being easy to compute, such metrics may not provide accurate representations of the underlying construct. In this paper, we present a novel method for modeling multiple time series annotations over a continuous variable that computes the ground truth by modeling annotator-specific distortions. We condition the ground truth on a set of features extracted from the data and further assume that the annotators provide their ratings as modifications of the ground truth, with each annotator having specific distortion tendencies. We train the model using an Expectation-Maximization based algorithm and evaluate it on a study involving natural interaction between a child and a psychologist, to predict confidence ratings of the children's smiles. We compare and analyze the model against two baselines where (i) the ground truth is considered to be the framewise mean of ratings from the various annotators and (ii) each annotator is assumed to bear a distinct time delay in annotation and their annotations are aligned before computing the framewise mean.
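    As a simplified illustration of the general idea (not the authors' exact EM formulation), the sketch below assumes each annotator applies a scale, an offset, and additive noise to a latent ground-truth series and alternates between updating the latent series and refitting the per-annotator distortion parameters.

```python
# Simplified sketch of estimating a latent ground-truth series from several
# annotators, assuming y_a(t) = s_a * z(t) + b_a + noise. Alternating
# (EM-style) updates; an illustration only, not the paper's model.
import numpy as np

def estimate_ground_truth(Y, n_iter=50):
    """Y: (A, T) array of A annotators' ratings over T frames."""
    A, T = Y.shape
    s, b, var = np.ones(A), np.zeros(A), np.ones(A)
    z = Y.mean(axis=0)                     # initialise with the frame-wise mean
    for _ in range(n_iter):
        # Update latent series given annotator parameters (precision-weighted).
        num = ((s / var)[:, None] * (Y - b[:, None])).sum(axis=0)
        den = (s ** 2 / var).sum()
        z = num / den
        # Refit each annotator's scale, offset and noise by regression on z.
        X = np.column_stack([z, np.ones(T)])
        for a in range(A):
            coef, *_ = np.linalg.lstsq(X, Y[a], rcond=None)
            s[a], b[a] = coef
            var[a] = max(np.mean((Y[a] - X @ coef) ** 2), 1e-8)
    return z, s, b, var
```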

  16. Sea Ice Thickness Measurement by Ground Penetrating Radar for Ground Truth of Microwave Remote Sensing Data

    Science.gov (United States)

    Matsumoto, M.; Yoshimura, M.; Naoki, K.; Cho, K.; Wakabayashi, H.

    2018-04-01

    Observation of sea ice thickness is one of the key issues in understanding the regional effect of global warming. One approach to monitoring sea ice over large areas is microwave remote sensing data analysis. However, ground truth is necessary to discuss the effectiveness of this kind of approach. The conventional method of acquiring ground truth for ice thickness is drilling the ice layer and directly measuring the thickness with a ruler. However, this method is destructive, time-consuming and limited in spatial resolution. Although there are several methods to acquire ice thickness in a non-destructive way, ground penetrating radar (GPR) can be an effective solution because it can discriminate the snow-ice and ice-sea water interfaces. In this paper, we carried out GPR measurements in Lake Saroma over a relatively large area (200 m by 300 m, approximately), aiming to obtain ground truth for remote sensing data. The GPR survey was conducted at 5 locations in the area. Direct measurement was also conducted simultaneously in order to calibrate the GPR data for thickness estimation and to validate the result. Although the GPR B-scan image obtained at 600 MHz contains a reflection which may come from a structure under the snow, the origin of the reflection is not obvious. Therefore, further analysis and interpretation of the GPR image, such as numerical simulation, additional signal processing and use of the 200 MHz antenna, are required before moving on to thickness estimation.
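    The abstract stops short of the thickness estimation step. For reference, the standard GPR conversion from two-way travel time to thickness uses the wave velocity in ice, v = c/sqrt(eps_r); the permittivity value in the sketch below is an illustrative assumption, not a value from the survey.

```python
# Standard GPR travel-time-to-thickness conversion (not taken from the paper):
# thickness = v * t_two_way / 2, with v = c / sqrt(eps_r).
C = 0.2998            # speed of light, m/ns
EPS_R_ICE = 3.2       # assumed relative permittivity of sea ice (illustrative)

def ice_thickness_m(two_way_time_ns, eps_r=EPS_R_ICE):
    v = C / eps_r ** 0.5              # wave speed in ice, m/ns
    return v * two_way_time_ns / 2.0  # one-way path length

print(round(ice_thickness_m(6.0), 2), "m")  # ~0.5 m for a 6 ns echo
```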

  17. Visualization of ground truth tracks for the video 'Tracking a "facer's" behavior in a public plaza'

    DEFF Research Database (Denmark)

    2015-01-01

    The video shows the ground truth tracks in GIS of all pedestrians in the video 'Tracking a "facer's" behavior in a public plaza'. The visualization was made using QGIS TimeManager.

  18. SEA ICE THICKNESS MEASUREMENT BY GROUND PENETRATING RADAR FOR GROUND TRUTH OF MICROWAVE REMOTE SENSING DATA

    Directory of Open Access Journals (Sweden)

    M. Matsumoto

    2018-04-01

    Observation of sea ice thickness is one of the key issues in understanding the regional effect of global warming. One approach to monitoring sea ice over large areas is microwave remote sensing data analysis. However, ground truth is necessary to discuss the effectiveness of this kind of approach. The conventional method of acquiring ground truth for ice thickness is drilling the ice layer and directly measuring the thickness with a ruler. However, this method is destructive, time-consuming and limited in spatial resolution. Although there are several methods to acquire ice thickness in a non-destructive way, ground penetrating radar (GPR) can be an effective solution because it can discriminate the snow-ice and ice-sea water interfaces. In this paper, we carried out GPR measurements in Lake Saroma over a relatively large area (200 m by 300 m, approximately), aiming to obtain ground truth for remote sensing data. The GPR survey was conducted at 5 locations in the area. Direct measurement was also conducted simultaneously in order to calibrate the GPR data for thickness estimation and to validate the result. Although the GPR B-scan image obtained at 600 MHz contains a reflection which may come from a structure under the snow, the origin of the reflection is not obvious. Therefore, further analysis and interpretation of the GPR image, such as numerical simulation, additional signal processing and use of the 200 MHz antenna, are required before moving on to thickness estimation.

  19. Ground truth data collection on mining industrial explosions registered by the International Monitoring System

    International Nuclear Information System (INIS)

    Ehl'tekov, A.Yu.; Gordon, V.P.; Firsov, V.A.; Chervyakov, V.B.

    2004-01-01

    The presentation is dedicated to organizational and technical issues connected with the task of timely notifying the Comprehensive Nuclear-Test-Ban Treaty Organization of large chemical explosions, including data on explosion location and time, on the quantity and type of explosive used, and also on the configuration and assumed purpose of the explosion. Explosions registered by the International Monitoring System are of special interest, since their data could be used for calibration of the monitoring system. Ground truth data collection and some explosion location results for Russian mining enterprises are given. Peculiarities of ground truth data collection for mining industrial explosions are considered. (author)

  20. A calibration system for measuring 3D ground truth for validation and error analysis of robot vision algorithms

    Science.gov (United States)

    Stolkin, R.; Greig, A.; Gilby, J.

    2006-10-01

    An important task in robot vision is that of determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. However, it is seldom possible to explicitly measure the accuracy of these algorithms, since the ground-truth camera positions and orientations at each frame in a video sequence are not available for comparison with the outputs of the proposed vision systems. A method is presented for generating real visual test data with complete underlying ground truth. The method enables the production of long video sequences, filmed along complicated six-degree-of-freedom trajectories, featuring a variety of objects and scenes, for which complete ground-truth data are known including the camera position and orientation at every image frame, intrinsic camera calibration data, a lens distortion model and models of the viewed objects. This work encounters a fundamental measurement problem—how to evaluate the accuracy of measured ground truth data, which is itself intended for validation of other estimated data. Several approaches for reasoning about these accuracies are described.

  1. SIR-C/X-SAR data calibration and ground truth campaign over the NASA-CB1 test-site

    International Nuclear Information System (INIS)

    Notarnicola, C.; Posa, F.; Refice, A.; Sergi, R.; Smacchia, P.; Casarano, D.; De Carolis, G.; Mattia, F.; Schena, V.D.

    2001-01-01

    During the Space Shuttle Endeavour mission in October 1994, a remote-sensing campaign was carried out with the objectives of both radiometric and polarimetric calibration and ground truth data acquisition of bare soils. This paper presents the results obtained in the experiment. Polarimetric cross-talk and channel imbalance values, as well as radiometric calibration parameters, have been found to be within the science requirements for SAR images. Regarding ground truth measurements, a wide spread in the height rms values and correlation lengths has been observed, which motivated a critical revisiting of surface parameter descriptors.

  2. Reference-free ground truth metric for metal artifact evaluation in CT images

    International Nuclear Information System (INIS)

    Kratz, Baerbel; Ens, Svitlana; Mueller, Jan; Buzug, Thorsten M.

    2011-01-01

    Purpose: In computed tomography (CT), metal objects in the region of interest introduce data inconsistencies during acquisition. Reconstructing these data results in an image with star-shaped artifacts induced by the metal inconsistencies. To enhance image quality, the influence of the metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches, a ground truth reference data set is needed. In technical evaluations, where phantoms can be measured with and without metal inserts, ground truth data can easily be obtained by a second reference data acquisition. Obviously, this is not possible for clinical data. Here, an alternative evaluation method is presented without the need for an additionally acquired reference data set. Methods: The proposed metric is based on an inherent ground truth for the comparison of metal artifacts as well as MAR methods, where no reference information in terms of a second acquisition is needed. The method is based on the forward projection of a reconstructed image, which is compared to the actually measured projection data. Results: The new evaluation technique is performed on phantom and on clinical CT data with and without MAR. The metric results are then compared with methods using a reference data set as well as with an expert-based classification. It is shown that the new approach is an adequate quantification technique for artifact strength in reconstructed metal or MAR CT images. Conclusions: The presented method works solely on the original projection data itself, which yields some advantages compared to distance measures in the image domain using two data sets. Besides this, no parameters have to be chosen manually. The new metric is a useful evaluation alternative when no reference data are available.
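    As a rough sketch of the general idea (not the paper's implementation), the reconstructed slice can be forward-projected and compared against the measured sinogram; here scikit-image's parallel-beam radon transform stands in for the scanner's forward projector and a mean absolute difference stands in for the paper's metric.

```python
# Sketch of a projection-consistency check: forward-project the reconstructed
# slice and compare it with the measured sinogram (shapes assumed to match).
import numpy as np
from skimage.transform import radon

def projection_consistency(recon_slice, measured_sinogram, theta):
    """recon_slice: 2D image; measured_sinogram: (detectors, angles); theta in degrees."""
    reproj = radon(recon_slice, theta=theta)       # stand-in forward projector
    assert reproj.shape == measured_sinogram.shape
    return np.mean(np.abs(reproj - measured_sinogram))
```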

  3. First- and third-party ground truth for key frame extraction from consumer video clips

    Science.gov (United States)

    Costello, Kathleen; Luo, Jiebo

    2007-02-01

    Extracting key frames (KF) from video is of great interest in many applications, such as video summary, video organization, video compression, and prints from video. KF extraction is not a new problem. However, the current literature has been focused mainly on sports or news video. In the consumer video space, the biggest challenges for key frame selection from consumer videos are the unconstrained content and the lack of any preimposed structure. In this study, we conduct ground truth collection of key frames from video clips taken by digital cameras (as opposed to camcorders) using both first- and third-party judges. The goals of this study are: (1) to create a reference database of video clips reasonably representative of the consumer video space; (2) to identify associated key frames by which automated algorithms can be compared and judged for effectiveness; and (3) to uncover the criteria used by both first- and third-party human judges so these criteria can influence algorithm design. The findings from these ground truths will be discussed.

  4. The evaluation of a population based diffusion tensor image atlas using a ground truth method

    Science.gov (United States)

    Van Hecke, Wim; Leemans, Alexander; D'Agostino, Emiliano; De Backer, Steve; Vandervliet, Evert; Parizel, Paul M.; Sijbers, Jan

    2008-03-01

    Purpose: Voxel based morphometry (VBM) is increasingly being used to detect diffusion tensor (DT) image abnormalities in patients for different pathologies. An important requisite for these VBM studies is the use of a high-dimensional, non-rigid coregistration technique, which is able to align both the spatial and the orientational information. Recent studies furthermore indicate that high-dimensional DT information should be included during coregistration for an optimal alignment. In this context, a population based DTI atlas is created that preserves the orientational DT information robustly and contains a minimal bias towards any specific individual data set. Methods: A ground truth evaluation method is developed using a single subject DT image that is deformed with 20 deformation fields. Thereafter, an atlas is constructed based on these 20 resulting images. Thereby, the non-rigid coregistration algorithm is based on a viscous fluid model and on mutual information. The fractional anisotropy (FA) maps as well as the DT elements are used as DT image information during the coregistration algorithm, in order to minimize the orientational alignment inaccuracies. Results: The population based DT atlas is compared with the ground truth image using accuracy and precision measures of spatial and orientational dependent metrics. Results indicate that the population based atlas preserves the orientational information in a robust way. Conclusion: A subject independent population based DT atlas is constructed and evaluated with a ground truth method. This atlas contains all available orientational information and can be used in future VBM studies as a reference system.

  5. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    International Nuclear Information System (INIS)

    Deeley, M A; Cmelak, A J; Malcolm, A W; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Ding, G X; Chen, A; Datteri, R; Noble, J H; Dawant, B M; Donnelly, E F; Yei, F; Koyama, T

    2011-01-01

    The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8-0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4-0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (-4.3, +5.4) mm for the automatic system to (-3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms.

  6. Field Ground Truthing Data Collector - a Mobile Toolkit for Image Analysis and Processing

    Science.gov (United States)

    Meng, X.

    2012-07-01

    Field Ground Truthing Data Collector is one of the four key components of the NASA funded ICCaRS project, being developed in Southeast Michigan. The ICCaRS ground truthing toolkit entertains comprehensive functions: 1) Field functions, including determining locations through GPS, gathering and geo-referencing visual data, laying out ground control points for AEROKAT flights, measuring the flight distance and height, and entering observations of land cover (and use) and health conditions of ecosystems and environments in the vicinity of the flight field; 2) Server synchronization functions, such as, downloading study-area maps, aerial photos and satellite images, uploading and synchronizing field-collected data with the distributed databases, calling the geospatial web services on the server side to conduct spatial querying, image analysis and processing, and receiving the processed results in field for near-real-time validation; and 3) Social network communication functions for direct technical assistance and pedagogical support, e.g., having video-conference calls in field with the supporting educators, scientists, and technologists, participating in Webinars, or engaging discussions with other-learning portals. This customized software package is being built on Apple iPhone/iPad and Google Maps/Earth. The technical infrastructures, data models, coupling methods between distributed geospatial data processing and field data collector tools, remote communication interfaces, coding schema, and functional flow charts will be illustrated and explained at the presentation. A pilot case study will be also demonstrated.

  7. Improving the Quality of Satellite Imagery Based on Ground-Truth Data from Rain Gauge Stations

    Directory of Open Access Journals (Sweden)

    Ana F. Militino

    2018-03-01

    Multitemporal imagery is by and large geometrically and radiometrically accurate, but the residual noise arising from cloud removal and other atmospheric and electronic effects can produce outliers that must be mitigated to properly exploit the remote sensing information. In this study, we show how ground-truth data from rain gauge stations can improve the quality of satellite imagery. To this end, a simulation study is conducted wherein outlier outbreaks of different sizes are spread and randomly introduced in the normalized difference vegetation index (NDVI) and the day and night land surface temperature (LST) of composite images from Navarre (Spain) between 2011 and 2015. To remove outliers, a new method called thin-plate splines with covariates (TpsWc) is proposed. This method consists of smoothing the median anomalies with a thin-plate spline model, whereby transformed ground-truth data are the external covariates of the model. The performance of the proposed method is measured with the square root of the mean square error (RMSE), calculated as the root of the pixel-by-pixel mean square differences between the original data and the data predicted with the TpsWc model and with a state-space model with and without covariates. The study shows that the use of ground-truth data reduces the RMSE in both the TpsWc model and the state-space model used for comparison purposes. The new method successfully removes the abnormal data while preserving the phenology of the raw data. The RMSE reduction percentage varies according to the derived variables (NDVI or LST), but reductions of up to 20% are achieved with the new proposal.
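    For reference, the RMSE as defined above is simply the root of the pixel-by-pixel mean squared difference between the original composite and the model prediction; a minimal sketch:

```python
# Pixel-by-pixel RMSE between an original composite and a model prediction,
# ignoring missing pixels (NaNs).
import numpy as np

def rmse(original, predicted):
    original, predicted = np.asarray(original, float), np.asarray(predicted, float)
    return np.sqrt(np.nanmean((original - predicted) ** 2))
```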

  8. Skepticism, truth as coherence, and constructivist epistemology: grounds for resolving the discord between science and religion?

    Science.gov (United States)

    Staver, John R.

    2010-03-01

    Science and religion exhibit multiple relationships as ways of knowing. These connections have been characterized as cousinly, mutually respectful, non-overlapping, competitive, proximate-ultimate, dominant-subordinate, and opposing-conflicting. Some of these ties create stress, and tension between science and religion represents a significant chapter in humans' cultural heritage before and since the Enlightenment. Truth, knowledge, and their relation are central to science and religion as ways of knowing, as social institutions, and to their interaction. In religion, truth is revealed through God's word. In science, truth is sought after via empirical methods. Discord can be viewed as a competition for social legitimization between two social institutions whose goals are explaining the world and how it works. Under this view, the root of the discord is truth as correspondence. In this concept of truth, knowledge corresponds to the facts of reality, and conflict is inevitable for many because humans want to ask which one—science or religion—gets the facts correct. But, the root paradox, also known as the problem of the criterion, suggests that seeking to know nature as it is represents a fruitless endeavor. The discord can be set on new ground and resolved by taking a moderately skeptical line of thought, one which employs truth as coherence and a moderate form of constructivist epistemology. Quantum mechanics and evolution as scientific theories and scientific research on human consciousness and vision provide support for this line of argument. Within a constructivist perspective, scientists would relinquish only the pursuit of knowing reality as it is. Scientists would retain everything else. Believers who hold that religion explains reality would come to understand that God never revealed His truth of nature; rather, He revealed His truth in how we are to conduct our lives.

  9. Ground-truth aerosol lidar observations: can the Klett solutions obtained from ground and space be equal for the same aerosol case?

    International Nuclear Information System (INIS)

    Ansmann, Albert

    2006-01-01

    Upcoming multiyear satellite lidar aerosol observations need strong support by a worldwide ground-truth lidar network. In this context the question arises as to whether the ground stations can deliver the same results as obtained from space when the Klett formalism is applied to elastic backscatter lidar data for the same aerosol case. This question is investigated based on simulations of observed cases of simple and complex aerosol layering. The results show that the differences between spaceborne and ground-based observations can be as large as 20% for the backscatter and extinction coefficients and the optimum estimates of the column lidar ratios. In cases with complex aerosol layering, the application of the two-layer approach can lead to similar results (space, ground) and accurate products provided that horizontally homogeneous aerosol conditions are given.

  10. A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique

    International Nuclear Information System (INIS)

    Dou, Tai H.; Thomas, David H.; O'Connell, Dylan P.; Lamb, James M.; Lee, Percy; Low, Daniel A.

    2015-01-01

    Purpose: To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process by a patient-specific breathing motion model, using the original free-breathing computed tomographic (CT) scans as ground truths. Methods: Sixteen lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields that mapped the reference image to the 24 nonreference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the pointwise discordance throughout the lungs. Results: Qualitative comparisons using image overlays showed excellent agreement between the simulated images and the original images. Even large 2-cm diaphragm displacements were very well modeled, as was sliding motion across the lung–chest wall boundary. The mean error across the patient cohort was 1.15 ± 0.37 mm, and the mean 95th percentile error was 2.47 ± 0.78 mm. Conclusion: The proposed ground truth–based technique provided voxel-by-voxel accuracy analysis that could identify organ-specific or tumor-specific motion modeling errors for treatment planning. Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5-dimensional CT technique was able to accurately reproduce the original helical CT scans, suggesting its
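    The reported numbers are a mean and a 95th percentile of voxel-wise discordance magnitudes. A minimal sketch of that summary, assuming a hypothetical array of residual displacement vectors from the registration:

```python
# Summary statistics of voxel-wise discordance between a simulated and an
# original scan; 'dvf_error_mm' is a hypothetical (N, 3) array of residual
# displacement vectors (mm) from the deformable registration.
import numpy as np

def error_summary(dvf_error_mm):
    mag = np.linalg.norm(np.asarray(dvf_error_mm, float), axis=1)
    return mag.mean(), np.percentile(mag, 95)   # mean error, 95th percentile error
```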

  11. A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique

    Energy Technology Data Exchange (ETDEWEB)

    Dou, Tai H., E-mail: tdou@mednet.ucla.edu; Thomas, David H.; O'Connell, Dylan P.; Lamb, James M.; Lee, Percy; Low, Daniel A.

    2015-11-15

    Purpose: To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process by a patient-specific breathing motion model, using the original free-breathing computed tomographic (CT) scans as ground truths. Methods: Sixteen lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields that mapped the reference image to the 24 nonreference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the pointwise discordance throughout the lungs. Results: Qualitative comparisons using image overlays showed excellent agreement between the simulated images and the original images. Even large 2-cm diaphragm displacements were very well modeled, as was sliding motion across the lung–chest wall boundary. The mean error across the patient cohort was 1.15 ± 0.37 mm, and the mean 95th percentile error was 2.47 ± 0.78 mm. Conclusion: The proposed ground truth–based technique provided voxel-by-voxel accuracy analysis that could identify organ-specific or tumor-specific motion modeling errors for treatment planning. Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5-dimensional CT technique was able to accurately reproduce the original helical CT scans, suggesting its

  12. FIELD GROUND TRUTHING DATA COLLECTOR – A MOBILE TOOLKIT FOR IMAGE ANALYSIS AND PROCESSING

    Directory of Open Access Journals (Sweden)

    X. Meng

    2012-07-01

    Field Ground Truthing Data Collector is one of the four key components of the NASA funded ICCaRS project, being developed in Southeast Michigan. The ICCaRS ground truthing toolkit entertains comprehensive functions: 1) Field functions, including determining locations through GPS, gathering and geo-referencing visual data, laying out ground control points for AEROKAT flights, measuring the flight distance and height, and entering observations of land cover (and use) and health conditions of ecosystems and environments in the vicinity of the flight field; 2) Server synchronization functions, such as, downloading study-area maps, aerial photos and satellite images, uploading and synchronizing field-collected data with the distributed databases, calling the geospatial web services on the server side to conduct spatial querying, image analysis and processing, and receiving the processed results in field for near-real-time validation; and 3) Social network communication functions for direct technical assistance and pedagogical support, e.g., having video-conference calls in field with the supporting educators, scientists, and technologists, participating in Webinars, or engaging discussions with other-learning portals. This customized software package is being built on Apple iPhone/iPad and Google Maps/Earth. The technical infrastructures, data models, coupling methods between distributed geospatial data processing and field data collector tools, remote communication interfaces, coding schema, and functional flow charts will be illustrated and explained at the presentation. A pilot case study will be also demonstrated.

  13. Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.

  14. Automatic aortic root segmentation in CTA whole-body dataset

    Science.gov (United States)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

    Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis disease. Typically, in this application a CTA data set is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and analyze the aortic root to determine if and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
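    For reference, the Dice similarity index quoted above is a standard overlap measure between two binary masks; a minimal sketch:

```python
# Dice similarity index between two binary segmentation masks of equal shape.
import numpy as np

def dice(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```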

  15. Strategies for cloud-top phase determination: differentiation between thin cirrus clouds and snow in manual (ground truth) analyses

    Science.gov (United States)

    Hutchison, Keith D.; Etherton, Brian J.; Topping, Phillip C.

    1996-12-01

    Quantitative assessments of the performance of automated cloud analysis algorithms require the creation of highly accurate, manual cloud, no cloud (CNC) images from multispectral meteorological satellite data. In general, the methodology to create ground truth analyses for the evaluation of cloud detection algorithms is relatively straightforward. However, when focus shifts toward quantifying the performance of automated cloud classification algorithms, the task of creating ground truth images becomes much more complicated, since these CNC analyses must differentiate between water and ice cloud tops while ensuring that inaccuracies in automated cloud detection are not propagated into the results of the cloud classification algorithm. The process of creating these ground truth CNC analyses may become particularly difficult when little or no spectral signature is evident between a cloud and its background, as appears to be the case when thin cirrus is present over snow-covered surfaces. In this paper, procedures are described that enhance the researcher's ability to manually interpret and differentiate between thin cirrus clouds and snow-covered surfaces in daytime AVHRR imagery. The methodology uses data in up to six AVHRR spectral bands, including an additional band derived from the daytime 3.7 micron channel, which has proven invaluable for the manual discrimination between thin cirrus clouds and snow. It is concluded that the 1.6 micron channel remains essential to differentiate between thin ice clouds and snow; however, this capability may be lost if the 3.7 micron data switch to a nighttime-only transmission with the launch of future NOAA satellites.

  16. How Many Subjects are Needed for a Visual Field Normative Database? A Comparison of Ground Truth and Bootstrapped Statistics.

    Science.gov (United States)

    Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K

    2018-03-01

    The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different 'x' and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different to the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
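    A minimal sketch of the bootstrap scheme described above: draw resamples of size x from the pool of healthy-subject sensitivities and summarise the mean, the 5th/95th percentile limits, and the SD across resamples. The resample count and seed are arbitrary.

```python
# Bootstrap summary of normative statistics for a given set size x.
import numpy as np

def bootstrap_limits(sensitivities, set_size, n_resamples=1000, seed=0):
    """sensitivities: 1-D array of pooled healthy-subject sensitivities (dB)."""
    rng = np.random.default_rng(seed)
    means, lows, highs, sds = [], [], [], []
    for _ in range(n_resamples):
        sample = rng.choice(sensitivities, size=set_size, replace=True)
        means.append(sample.mean())
        lows.append(np.percentile(sample, 5))
        highs.append(np.percentile(sample, 95))
        sds.append(sample.std(ddof=1))
    return {k: float(np.mean(v)) for k, v in
            dict(mean=means, p5=lows, p95=highs, sd=sds).items()}
```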

  17. Automated Breast Ultrasound for Ductal Pattern Reconstruction: Ground Truth File Generation and CADe Evaluation

    Science.gov (United States)

    Manousaki, D.; Panagiotopoulou, A.; Bizimi, V.; Haynes, M. S.; Love, S.; Kallergi, M.

    2017-11-01

    The purpose of this study was the generation of ground truth files (GTFs) of the breast ducts from 3D images of the Invenia™ Automated Breast Ultrasound System (ABUS) (GE Healthcare, Little Chalfont, UK) and the application of these GTFs for the optimization of the imaging protocol and the evaluation of a computer aided detection (CADe) algorithm developed for automated duct detection. Six lactating, nursing volunteers were scanned with the ABUS before and right after breastfeeding their infants. An expert in breast ultrasound generated rough outlines of the milk-filled ducts in the transaxial slices of all image volumes, and the final GTFs were created by using thresholding and smoothing tools in ImageJ. In addition, a CADe algorithm automatically segmented duct-like areas and its results were compared to the expert's GTFs by estimating true positive fraction (TPF) or % overlap. The CADe output differed significantly from the expert's, but both detected a smaller than expected volume of the ducts due to insufficient contrast (ducts were partially filled with milk), discontinuities, and artifacts. GTFs were used to modify the imaging protocol and improve the CADe method. In conclusion, electronic GTFs provide a valuable tool in the optimization of a tomographic imaging system, the imaging protocol, and the CADe algorithms. Their generation, however, is an extremely time-consuming, strenuous process, particularly for multi-slice examinations, and alternatives based on phantoms or simulations are highly desirable.

  18. Impact of the accuracy of automatic tumour functional volume delineation on radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Le Maitre, Amandine; Hatt, Mathieu; Pradier, Olivier; Cheze-le Rest, Catherine; Visvikis, Dimitris

    2012-01-01

Over the past few years several automatic and semi-automatic PET segmentation methods for target volume definition in radiotherapy have been proposed. The objective of this study is to compare different methods in terms of dosimetry. For such a comparison, a gold standard is needed. For this purpose, realistic GATE-simulated PET images were used. Three lung cases and three head and neck (H&N) cases were designed with various shapes, contrasts and heterogeneities. Four different segmentation approaches were compared: fixed and adaptive thresholds, a fuzzy C-means and the fuzzy locally adaptive Bayesian method. For each of these target volumes, an IMRT treatment plan was defined. The different algorithms and resulting plans were compared in terms of segmentation errors and ground-truth volume coverage using different metrics (V95, D95, homogeneity index and conformity index). The major differences between the threshold-based methods and automatic methods occurred in the most heterogeneous cases. Within the two groups, the major differences occurred for low contrast cases. For homogeneous cases, equivalent ground-truth volume coverage was observed for all methods but for more heterogeneous cases, significantly lower coverage was observed for threshold-based methods. Our study demonstrates that significant dosimetry errors can be avoided by using more advanced image-segmentation methods. (paper)
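
    Ground-truth volume coverage metrics such as V95 (the percentage of the ground-truth volume receiving at least 95% of the prescribed dose) and D95 (the dose received by at least 95% of that volume) can be computed directly from a dose grid and a ground-truth mask. The following is a hedged sketch with a synthetic dose distribution and a placeholder prescription; it is not the study's treatment-planning system.

```python
import numpy as np

def coverage_metrics(dose, gt_mask, prescribed_dose):
    """V95 (% of ground-truth voxels >= 95% of prescription) and D95 (Gy)."""
    target_dose = dose[gt_mask.astype(bool)]
    v95 = 100.0 * np.mean(target_dose >= 0.95 * prescribed_dose)
    d95 = np.percentile(target_dose, 5)   # dose exceeded by 95% of target voxels
    return v95, d95

# Toy example: 70 Gy prescription with small random dose fluctuations.
rng = np.random.default_rng(0)
dose = np.full((40, 40, 40), 70.0) + rng.normal(0, 1.5, (40, 40, 40))
gt = np.zeros_like(dose, dtype=bool)
gt[10:30, 10:30, 10:30] = True
print("V95 = %.1f%%, D95 = %.1f Gy" % coverage_metrics(dose, gt, 70.0))
```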

  19. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    Science.gov (United States)

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.

  20. Government Applications Task Force ground truth study of WAG 4

    International Nuclear Information System (INIS)

    Evers, T.K.; Smyre, J.L.; King, A.L.

    1997-06-01

    This report documents the Government Applications Task Force (GATF) Buried Waste Project. The project was initiated as a field investigation and verification of the 1994 Strategic Environmental Research and Development Program's (SERDP) Buried Waste Identification Project results. The GATF project team included staff from three US Department of Energy (DOE) Laboratories [Oak Ridge National Laboratory (ORNL), Los Alamos National Laboratory (LANL), and the Savannah River Technology Center (SRTC)] and from the National Exploitation Laboratory. Similar studies were conducted at each of the three DOE laboratories to demonstrate the effective use of remote sensing technologies. The three locations were selected to assess differences in buried waste signatures under various environmental conditions (i.e., climate, terrain, precipitation, geology, etc.). After a brief background discussion of the SERDP Project, this report documents the field investigation (ground truth) results from the 1994--1995 GATF Buried Waste Study at ORNL's Waste Area Grouping (WAG) 4. Figures for this report are located in Appendix A

  1. Automatic tracking of wake vortices using ground-wind sensor data

    Science.gov (United States)

    1977-01-03

Algorithms for automatic tracking of wake vortices using ground-wind anemometer data are developed. Methods of bad-data suppression, track initiation, and track termination are included. An effective sensor-failure detection-and-identification ...

  2. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
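
    The univariate box-whisker approach amounts to flagging any segmentation run whose feature value falls outside the Tukey fences of the cohort distribution. The snippet below is a generic sketch of that rule with hypothetical feature values; it is not the authors' implementation, and the 1.5-IQR whisker length is an assumption.

```python
import numpy as np

def boxplot_outliers(values, whisker=1.5):
    """Return indices of values outside the box-whisker fences (Tukey rule)."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - whisker * iqr, q3 + whisker * iqr
    return np.where((values < lo) | (values > hi))[0]

# Hypothetical per-subject feature (e.g., segmented CP volume in voxels):
feature = np.array([5210, 5345, 5102, 5290, 5180, 1920, 5230, 5315, 9050])
print("Suspected segmentation failures at indices:", boxplot_outliers(feature))
```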

  3. Community detection algorithm evaluation with ground-truth data

    Science.gov (United States)

    Jebabli, Malek; Cherifi, Hocine; Cherifi, Chantal; Hamouda, Atef

    2018-02-01

    Community structure is of paramount importance for the understanding of complex networks. Consequently, there is a tremendous effort in order to develop efficient community detection algorithms. Unfortunately, the issue of a fair assessment of these algorithms is a thriving open question. If the ground-truth community structure is available, various clustering-based metrics are used in order to compare it versus the one discovered by these algorithms. However, these metrics defined at the node level are fairly insensitive to the variation of the overall community structure. To overcome these limitations, we propose to exploit the topological features of the 'community graphs' (where the nodes are the communities and the links represent their interactions) in order to evaluate the algorithms. To illustrate our methodology, we conduct a comprehensive analysis of overlapping community detection algorithms using a set of real-world networks with known a priori community structure. Results provide a better perception of their relative performance as compared to classical metrics. Moreover, they show that more emphasis should be put on the topology of the community structure. We also investigate the relationship between the topological properties of the community structure and the alternative evaluation measures (quality metrics and clustering metrics). It appears clearly that they present different views of the community structure and that they must be combined in order to evaluate the effectiveness of community detection algorithms.
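
    One way to realize the 'community graph' idea, where nodes are communities and weighted edges count the links of the original network that run between them, is sketched below with networkx. The toy graph and membership assignment are illustrative only, not the real-world networks analyzed in the paper.

```python
import networkx as nx

def community_graph(G, membership):
    """Collapse graph G into a community graph.

    `membership` maps each node to its community id; edge weights count the
    original links running between communities (within-community links are
    dropped in this sketch).
    """
    C = nx.Graph()
    C.add_nodes_from(set(membership.values()))
    for u, v in G.edges():
        cu, cv = membership[u], membership[v]
        if cu != cv:                      # keep only inter-community links
            w = C[cu][cv]["weight"] + 1 if C.has_edge(cu, cv) else 1
            C.add_edge(cu, cv, weight=w)
    return C

# Toy example: two 4-node cliques joined by a single bridge edge.
G = nx.union(nx.complete_graph(4), nx.complete_graph(range(4, 8)))
G.add_edge(0, 4)
membership = {n: 0 if n < 4 else 1 for n in G.nodes()}
print(community_graph(G, membership).edges(data=True))
```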

  4. Automatic Barometric Updates from Ground-Based Navigational Aids

    Science.gov (United States)

    1990-03-12

Report of the US Department of Transportation, Federal Aviation Administration, Office of Safety, on automatic barometric updates from ground-based navigational aids. Only fragments of the abstract survive in this record: they refer to tighter vertical spacing controls, particularly for operations near Terminal Control Areas (TCAs), Airport Radar Service Areas (ARSAs), and military climb ..., and cite "..., E.F., Ruth, J.C., and Williges, B.H. (1987). Speech Controls and Displays," in Salvendy, G. (Ed.), Handbook of Human Factors/Ergonomics, New York: John Wiley.

  5. Ground Truth Studies - A hands-on environmental science program for students, grades K-12

    Science.gov (United States)

    Katzenberger, John; Chappell, Charles R.

    1992-01-01

    The paper discusses the background and the objectives of the Ground Truth Studies (GTSs), an activity-based teaching program which integrates local environmental studies with global change topics, utilizing remotely sensed earth imagery. Special attention is given to the five key concepts around which the GTS programs are organized, the pilot program, the initial pilot study evaluation, and the GTS Handbook. The GTS Handbook contains a primer on global change and remote sensing, aerial and satellite images, student activities, glossary, and an appendix of reference material. Also described is a K-12 teacher training model. International participation in the program is to be initiated during the 1992-1993 school year.

  6. A Robust Bayesian Truth Serum for Small Populations

    OpenAIRE

    Parkes, David C.; Witkowski, Jens

    2012-01-01

    Peer prediction mechanisms allow the truthful elicitation of private signals (e.g., experiences, or opinions) in regard to a true world state when this ground truth is unobservable. The original peer prediction method is incentive compatible for any number of agents n >= 2, but relies on a common prior, shared by all agents and the mechanism. The Bayesian Truth Serum (BTS) relaxes this assumption. While BTS still assumes that agents share a common prior, this prior need not be known to the me...

  7. Assessment of infrasound signals recorded on seismic stations and infrasound arrays in the western United States using ground truth sources

    Science.gov (United States)

    Park, Junghyun; Hayward, Chris; Stump, Brian W.

    2018-06-01

    Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. Ground truth sources consist of 28 long duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended duration signals generated by the rocket burn test (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple year time period.
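
    The short-term average/long-term average (STA/LTA) ratio at the core of the automated detections can be sketched as below. The window lengths, trigger threshold, and synthetic trace are placeholders rather than the parameters used for the Utah recordings.

```python
import numpy as np

def sta_lta(trace, sta_len, lta_len):
    """Simple STA/LTA ratio computed on the squared trace (signal energy)."""
    energy = trace.astype(float) ** 2
    sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="same")
    return sta / np.maximum(lta, 1e-12)

# Synthetic example: background noise with a short transient; flag ratio > 3.
rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 6000)
trace[3000:3100] += 8 * rng.normal(0, 1, 100)
ratio = sta_lta(trace, sta_len=50, lta_len=1000)
print("Trigger samples:", np.flatnonzero(ratio > 3.0)[:5], "...")
```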

  8. Ground truth methods for optical cross-section modeling of biological aerosols

    Science.gov (United States)

    Kalter, J.; Thrush, E.; Santarpia, J.; Chaudhry, Z.; Gilberry, J.; Brown, D. M.; Brown, A.; Carter, C. C.

    2011-05-01

Light detection and ranging (LIDAR) systems have demonstrated some capability to meet the needs of a fast-response standoff biological detection method for simulants in open air conditions. These systems are designed to exploit various cloud signatures, such as differential elastic backscatter, fluorescence, and depolarization in order to detect biological warfare agents (BWAs). However, because the release of BWAs in open air is forbidden, methods must be developed to predict candidate system performance against real agents. In support of such efforts, the Johns Hopkins University Applied Physics Lab (JHU/APL) has developed a modeling approach to predict the optical properties of agent materials from relatively simple, Biosafety Level 3-compatible bench top measurements. JHU/APL has fielded new ground truth instruments (in addition to standard particle sizers, such as the Aerodynamic Particle Sizer (APS) or GRIMM aerosol monitor (GRIMM)) to more thoroughly characterize the simulant aerosols released in recent field tests at Dugway Proving Ground (DPG). These instruments include the Scanning Mobility Particle Sizer (SMPS), the Ultraviolet Aerodynamic Particle Sizer (UVAPS), and the Aspect Aerosol Size and Shape Analyser (Aspect). The SMPS was employed as a means of measuring small-particle concentrations for more accurate Mie scattering simulations; the UVAPS, which measures size-resolved fluorescence intensity, was employed as a path toward fluorescence cross section modeling; and the Aspect, which measures particle shape, was employed as a path towards depolarization modeling.

  9. Evaluation of event-based algorithms for optical flow with ground-truth from inertial measurement sensor

    Directory of Open Access Journals (Sweden)

Bodo Rückauer

    2016-04-01

    Full Text Available In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS. For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240x180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS. This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera.
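
    When the perceived motion comes only from camera rotation, the ground-truth flow at each pixel follows the standard rotational flow-field equations for a pinhole camera and is independent of scene depth. The sketch below evaluates those equations over a 240x180 grid; the focal length, angular rates, and sign conventions are assumptions, not the DAVIS/IMU calibration used in the study.

```python
import numpy as np

def rotational_flow(x, y, f, wx, wy, wz):
    """Image velocity (pixels/s) induced by pure camera rotation.

    x, y are pixel coordinates relative to the principal point, f is the
    focal length in pixels, and (wx, wy, wz) are angular rates in rad/s.
    Standard rotational component of the pinhole optical-flow model.
    """
    u = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    v = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return u, v

# Example: 240x180 sensor, placeholder focal length and gyro rates.
xs, ys = np.meshgrid(np.arange(240) - 120, np.arange(180) - 90)
u, v = rotational_flow(xs, ys, f=200.0, wx=0.0, wy=0.2, wz=0.05)
print(u.shape, v.shape, float(u[90, 120]), float(v[90, 120]))
```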

  10. Optimal Recovery Trajectories for Automatic Ground Collision Avoidance Systems (Auto GCAS)

    Science.gov (United States)

    Suplisson, Angela W.

    The US Air Force recently fielded the F-16 Automatic Ground Collision Avoidance System (Auto GCAS). This system meets the operational requirements of being both aggressive and timely, meaning that extremely agile avoidance maneuvers will be executed at the last second to avoid the ground. This small window of automatic operation maneuvering in close proximity to the ground makes the problem challenging. There currently exists no similar Auto GCAS for manned military 'heavy' aircraft with lower climb performance such as transport, tanker, or bomber aircraft. The F-16 Auto GCAS recovery is a single pre-planned roll to wings-level and 5-g pull-up which is very effective for fighters due to their high g and climb performance, but it is not suitable for military heavy aircraft. This research proposes a new optimal control approach to the ground collision avoidance problem for heavy aircraft by mapping the aggressive and timely requirements of the automatic recovery to the optimal control formulation which includes lateral maneuvers around terrain. This novel mapping creates two ways to pose the optimal control problem for Auto GCAS; one as a Max Distance with a Timely Trigger formulation and the other as a Min Control with an Aggressive Trigger formulation. Further, the optimal path and optimal control admitted by these two formulations are demonstrated to be equivalent at the point the automatic recovery is initiated for the simplified 2-D case. The Min Control formulation was demonstrated to have faster computational speed and was chosen for the 3-D case. Results are presented for representative heavy aircraft scenarios against 3-D digital terrain. The Min Control formulation was then compared to a Multi-Trajectory Auto GCAS with five pre-planned maneuvers. Metrics were developed to quantify the improvement from using an optimal approach versus the pre-planned maneuvers. The proposed optimal Min Control method was demonstrated to require less control or trigger later

  11. Ground truth measurements plan for the Multispectral Thermal Imager (MTI) satellite

    Energy Technology Data Exchange (ETDEWEB)

    Garrett, A.J.

    2000-01-03

    Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), and the Savannah River Technology Center (SRTC) have developed a diverse group of algorithms for processing and analyzing the data that will be collected by the Multispectral Thermal Imager (MTI) after launch late in 1999. Each of these algorithms must be verified by comparison to independent surface and atmospheric measurements. SRTC has selected 13 sites in the continental U.S. for ground truth data collections. These sites include a high altitude cold water target (Crater Lake), cooling lakes and towers in the warm, humid southeastern US, Department of Energy (DOE) climate research sites, the NASA Stennis satellite Validation and Verification (V and V) target array, waste sites at the Savannah River Site, mining sites in the Four Corners area and dry lake beds in the southwestern US. SRTC has established mutually beneficial relationships with the organizations that manage these sites to make use of their operating and research data and to install additional instrumentation needed for MTI algorithm V and V.

  12. UAS-Borne Photogrammetry for Surface Topographic Characterization: A Ground-Truth Baseline for Future Change Detection and Refinement of Scaled Remotely-Sensed Datasets

    Science.gov (United States)

    Coppersmith, R.; Schultz-Fellenz, E. S.; Sussman, A. J.; Vigil, S.; Dzur, R.; Norskog, K.; Kelley, R.; Miller, L.

    2015-12-01

While long-term objectives of monitoring and verification regimes include remote characterization and discrimination of surficial geologic and topographic features at sites of interest, ground truth data is required to advance development of remote sensing techniques. Increasingly, it is desirable for these ground-based or ground-proximal characterization methodologies to be as nimble, efficient, non-invasive, and non-destructive as their higher-altitude airborne counterparts while ideally providing superior resolution. For this study, the area of interest is an alluvial site at the Nevada National Security Site intended for use in the Source Physics Experiment's (Snelson et al., 2013) second phase. Ground-truth surface topographic characterization was performed using a DJI Inspire 1 unmanned aerial system (UAS) flown at very low altitude, below the clouds. Within the area of interest, careful installation of surveyed ground control fiducial markers supplied necessary targets for field collection, and information for model georectification. The resulting model includes a Digital Elevation Model derived from 2D imagery. It is anticipated that this flexible and versatile characterization process will provide point cloud data resolution equivalent to a purely ground-based LiDAR scanning deployment (e.g., 1-2 cm horizontal and vertical resolution; e.g., Sussman et al., 2012; Schultz-Fellenz et al., 2013). In addition to drastically increasing time efficiency in the field, the UAS method also allows for more complete coverage of the study area when compared to ground-based LiDAR. Comparison and integration of these data with conventionally-acquired airborne LiDAR data from a higher-altitude (~ 450m) platform will aid significantly in the refinement of technologies and detection capabilities of remote optical systems to identify and detect surface geologic and topographic signatures of interest. This work includes a preliminary comparison of surface signatures detected from varying

  13. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    Science.gov (United States)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large amount of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site with different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating the new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels, in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds from different sensors of the same scene directly by considering the labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
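
    The voting step, assigning each occupied voxel the majority label of the reference points that fall inside it and then transferring that label to new points, can be sketched roughly as follows. The voxel size and array names are assumptions, and the octree multi-resolution handling is omitted for brevity.

```python
import numpy as np
from collections import Counter, defaultdict

def build_label_grid(ref_points, ref_labels, voxel_size):
    """Majority label per occupied voxel of the reference point cloud."""
    votes = defaultdict(Counter)
    for p, lab in zip(ref_points, ref_labels):
        key = tuple(np.floor(p / voxel_size).astype(int))
        votes[key][lab] += 1
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}

def annotate(new_points, label_grid, voxel_size, unknown=-1):
    """Label new points by the voxel they fall into (unknown if empty)."""
    keys = np.floor(new_points / voxel_size).astype(int)
    return np.array([label_grid.get(tuple(k), unknown) for k in keys])

# Toy example: reference cloud with two classes, new cloud to annotate.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 10, (1000, 3))
labels = (ref[:, 2] > 5).astype(int)          # class by height, for example
grid = build_label_grid(ref, labels, voxel_size=0.5)
new = rng.uniform(0, 10, (5, 3))
print(annotate(new, grid, voxel_size=0.5))
```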

  14. Towards Autonomous Agriculture: Automatic Ground Detection Using Trinocular Stereovision

    Directory of Open Access Journals (Sweden)

    Annalisa Milella

    2012-09-01

    Full Text Available Autonomous driving is a challenging problem, particularly when the domain is unstructured, as in an outdoor agricultural setting. Thus, advanced perception systems are primarily required to sense and understand the surrounding environment recognizing artificial and natural structures, topology, vegetation and paths. In this paper, a self-learning framework is proposed to automatically train a ground classifier for scene interpretation and autonomous navigation based on multi-baseline stereovision. The use of rich 3D data is emphasized where the sensor output includes range and color information of the surrounding environment. Two distinct classifiers are presented, one based on geometric data that can detect the broad class of ground and one based on color data that can further segment ground into subclasses. The geometry-based classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate geometric appearance of 3D stereo-generated data with class labels. Then, it makes predictions based on past observations. It serves as well to provide training labels to the color-based classifier. Once trained, the color-based classifier is able to recognize similar terrain classes in stereo imagery. The system is continuously updated online using the latest stereo readings, thus making it feasible for long range and long duration navigation, over changing environments. Experimental results, obtained with a tractor test platform operating in a rural environment, are presented to validate this approach, showing an average classification precision and recall of 91.0% and 77.3%, respectively.
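
    In the spirit of the geometry-based classifier, a heavily simplified ground test can be reduced to fitting a plane to training points labeled as ground and thresholding the residuals of new stereo points. The sketch below is only an illustration of that idea; the adaptive, self-learning aspects of the actual framework are not reproduced, and all thresholds are assumptions.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3D training points."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs                                   # (a, b, c)

def classify_ground(points, plane, max_dev=0.15):
    """Label points as ground (True) if close to the learned plane (meters)."""
    a, b, c = plane
    residual = np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
    return residual < max_dev

# Toy example: a flat patch used for training, then a mixed scene to classify.
rng = np.random.default_rng(0)
train = np.c_[rng.uniform(0, 5, (200, 2)), rng.normal(0.0, 0.02, 200)]
plane = fit_plane(train)
scene = np.array([[1.0, 1.0, 0.01], [2.0, 3.0, 0.9], [4.0, 0.5, -0.03]])
print(classify_ground(scene, plane))
```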

  15. Combining Ground-Truthing and Technology to Improve Accuracy in Establishing Children's Food Purchasing Behaviors.

    Science.gov (United States)

    Coakley, Hannah Lee; Steeves, Elizabeth Anderson; Jones-Smith, Jessica C; Hopkins, Laura; Braunstein, Nadine; Mui, Yeeli; Gittelsohn, Joel

Developing nutrition-focused environmental interventions for youth requires accurate assessment of where they purchase food. We have developed an innovative, technology-based method to improve the accuracy of food source recall among children using a tablet PC and ground-truthing methodologies. As part of the B'more Healthy Communities for Kids study, we mapped and digitally photographed every food source within a half-mile radius of 14 Baltimore City recreation centers. This food source database was then used with children from the surrounding neighborhoods to search for and identify the food sources they frequent. This novel integration of traditional data collection and technology enables researchers to gather highly accurate information on food source usage among children in Baltimore City. Funding is provided by the NICHD U-54 Grant #1U54HD070725-02.

  16. Shy and Ticklish Truths as Species of Scientific and Artistic Perception

    African Journals Online (AJOL)

    ... recognize a 'gay science' (Nietzsche) not as an eccentric construction of merely poetic insights and expressions, but as a necessary part of the fundamentals of knowledge. It is a truth of the human condition that its truths are grounded in a personal embodiment of individuality, ontogeny, momentariness and situationality.

  17. Automatic defect detection in video archives: application to Montreux Jazz Festival digital archives

    Science.gov (United States)

    Hanhart, Philippe; Rerabek, Martin; Ivanov, Ivan; Dufaux, Alain; Jones, Caryl; Delidais, Alexandre; Ebrahimi, Touradj

    2013-09-01

Archival of audio-visual databases has become an important discipline in multimedia. Various defects are typically present in such archives. Among those, one can mention recording related defects such as interference between audio and video signals, optical related artifacts, recording and play-out artifacts such as horizontal lines, and dropouts, as well as those due to digitization such as diagonal lines. An automatic or semi-automatic detection to identify such defects is useful, especially for large databases. In this paper, we propose two automatic algorithms for detection of horizontal and diagonal lines, as well as dropouts that are among the most typical artifacts encountered. We then evaluate the performance of these algorithms by making use of ground truth scores obtained by human subjects.

  18. A new device for acquiring ground truth on the absorption of light by turbid waters

    Science.gov (United States)

    Klemas, V. (Principal Investigator); Srna, R.; Treasure, W.

    1974-01-01

The author has identified the following significant results. A new device, called a Spectral Attenuation Board, has been designed and tested, which enables ERTS-1 sea truth collection teams to monitor the attenuation depths of three colors continuously, as the board is being towed behind a boat. The device consists of a 1.2 x 1.2 meter flat board held below the surface of the water at a fixed angle to the surface of the water. A camera mounted above the water takes photographs of the board. The resulting film image is analyzed by a micro-densitometer trace along the descending portion of the board. This yields information on the rate of attenuation of light penetrating the water column and the Secchi depth. Red and green stripes were painted on the white board to approximate band 4 and band 5 of the ERTS MSS so that information on the rate of absorption by the water column of light in these regions of the visible spectrum could be concurrently measured. It was found that information from a red, green, and white stripe may serve to fingerprint the composition of the water mass. A number of these devices, when automated, could also be distributed over a large region to provide a cheap method of obtaining valuable satellite ground truth data at preset time intervals.

  19. Automatic Evaluation of Photovoltaic Power Stations from High-Density RGB-T 3D Point Clouds

    Directory of Open Access Journals (Sweden)

    Luis López-Fernández

    2017-06-01

Full Text Available A low-cost unmanned aerial platform (UAV) equipped with RGB (Red, Green, Blue) and thermographic sensors is used for the acquisition of all the data needed for the automatic detection and evaluation of thermal pathologies on photovoltaic (PV) surfaces and geometric defects in the mounting on photovoltaic power stations. RGB imagery is used for the generation of a georeferenced 3D point cloud through digital image preprocessing, photogrammetric and computer vision algorithms. The point cloud is complemented with temperature values measured by the thermographic sensor and with intensity values derived from the RGB data in order to obtain a multidimensional product (5D: 3D geometry plus temperature and intensity on the visible spectrum). A segmentation workflow based on the proper integration of several state-of-the-art geomatic and mathematic techniques is applied to the 5D product for the detection and sizing of thermal pathologies and geometric defects in the mounting in the PV panels. It consists of a three-step segmentation procedure, involving first the geometric information, then the radiometric (RGB) information, and last the thermal data. No configuration of parameters is required. Thus, the methodology presented contributes to the automation of the inspection of PV farms, through the maximization of the exploitation of the data acquired in the different spectra (visible and thermal infrared bands). Results of the proposed workflow were compared with a ground truth generated according to currently established protocols and complemented with a topographic survey. The proposed methodology was able to detect all pathologies established by the ground truth without adding any false positives. Discrepancies in the measurement of damaged surfaces with respect to the established ground truth, which can reach 5% of the total panel surface for visual inspection by an expert operator, decrease to under 2% with the proposed methodology. The geometric evaluation

  20. Assessment of MTI Water Temperature Retrievals with Ground Truth from the Comanche Peak Steam Electric Station Cooling Lake

    International Nuclear Information System (INIS)

    Kurzeja, R.J.

    2002-01-01

Surface water temperatures calculated from Multispectral Thermal Imager (MTI) brightness temperatures and the robust retrieval algorithm, developed by the Los Alamos National Laboratory (LANL), are compared with ground truth measurements at the Squaw Creek reservoir at the Comanche Peak Steam Electric Station near Granbury, Texas. Temperatures calculated for thirty-four images covering the period May 2000 to March 2002 are compared with water temperatures measured at 10 instrumented buoy locations supplied by the Savannah River Technology Center. The data set was used to examine the effect of image quality on temperature retrieval as well as to document any bias between the sensor chip arrays (SCAs). A portion of the data set was used to evaluate the influence of proximity to shoreline on the water temperature retrievals. This study found errors in daytime water temperature retrievals of 1.8 C for SCA 2 and 4.0 C for SCA 1. The errors in nighttime water temperature retrievals were 3.8 C for SCA 1. Water temperature retrievals for nighttime appear to be related to image quality, with the largest positive bias for the highest quality images and the largest negative bias for the lowest quality images. The daytime data show no apparent relationship between water temperature retrieval error and image quality. The average temperature retrieval error near open water buoys was less than corresponding values for the near-shore buoys. After subtraction of the estimated error in the ground truth data, the water temperature retrieval error was 1.2 C for the open-water buoys compared to 1.8 C for the near-shore buoys. The open-water error is comparable to that found at Nauru.

  1. ARCOCT: Automatic detection of lumen border in intravascular OCT images.

    Science.gov (United States)

    Cheimariotis, Grigorios-Aris; Chatzizisis, Yiannis S; Koutkias, Vassilis G; Toutouzas, Konstantinos; Giannopoulos, Andreas; Riga, Maria; Chouvarda, Ioanna; Antoniadis, Antonios P; Doulaverakis, Charalambos; Tsamboulatidis, Ioannis; Kompatsiaris, Ioannis; Giannoglou, George D; Maglaveras, Nicos

    2017-11-01

Intravascular optical coherence tomography (OCT) is an invaluable tool for the detection of pathological features on the arterial wall and the investigation of post-stenting complications. Computational lumen border detection in OCT images is highly advantageous, since it may support rapid morphometric analysis. However, automatic detection is very challenging, since OCT images typically include various artifacts that impact image clarity, including features such as side branches and intraluminal blood presence. This paper presents ARCOCT, a segmentation method for fully-automatic detection of lumen border in OCT images. ARCOCT relies on multiple, consecutive processing steps, accounting for image preparation, contour extraction and refinement. In particular, for contour extraction ARCOCT employs the transformation of OCT images based on physical characteristics such as reflectivity and absorption of the tissue and, for contour refinement, local regression using weighted linear least squares and a 2nd degree polynomial model is employed to achieve artifact and small-branch correction as well as smoothness of the artery mesh. Our major focus was to achieve accurate contour delineation in the various types of OCT images, i.e., even in challenging cases with branches and artifacts. ARCOCT has been assessed in a dataset of 1812 images (308 from stented and 1504 from native segments) obtained from 20 patients. ARCOCT was compared against ground-truth manual segmentation performed by experts on the basis of various geometric features (e.g. area, perimeter, radius, diameter, centroid, etc.) and closed contour matching indicators (the Dice index, the Hausdorff distance and the undirected average distance), using standard statistical analysis methods. The proposed method was proven very efficient and close to the ground-truth, exhibiting no statistically significant differences for most of the examined metrics. ARCOCT allows accurate and fully-automated lumen border
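
    The contour-refinement step, a local quadratic regression over the lumen border, can be approximated with a Savitzky-Golay filter applied to the contour radius as a function of angle; this is an unweighted stand-in for the weighted least-squares fit the authors describe. The contour, window length, and artifact below are synthetic placeholders.

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical lumen contour in polar form: radius sampled at 360 angles.
theta = np.deg2rad(np.arange(360))
radius = 2.0 + 0.2 * np.sin(3 * theta)                  # smooth vessel shape
radius[100:110] += 0.8                                  # artifact / side branch

# Local quadratic smoothing (unweighted stand-in for the weighted fit),
# wrapping around the contour so the closed curve stays continuous.
smoothed = savgol_filter(radius, window_length=31, polyorder=2, mode="wrap")

# Back to Cartesian coordinates for the refined lumen border.
x, y = smoothed * np.cos(theta), smoothed * np.sin(theta)
print(float(radius[105]), float(smoothed[105]))
```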

  2. Behavioral pragmatism: No place for reality and truth

    Science.gov (United States)

    Barnes-Holmes, Dermot

    2000-01-01

    The current article begins by reviewing L. J. Hayes's claim that pragmatism relies on a correspondence-based truth criterion. To evaluate her claim, the concept of the observation sentence, proposed by the pragmatist philosopher W. V. Quine, is examined. The observation sentence appears to remove the issue of correspondence from Quine's pragmatist philosophy. Nevertheless, the issue of correspondence reemerges, as the problem of homology, when Quine appeals to agreement between or among observation sentences as the basis for truth. Quine also argues, however, that the problem of homology (i.e., correspondence) should be ignored on pragmatic grounds. Because the problem is simply ignored, but not resolved, there appears to be some substance to Hayes's claim that pragmatism relies ultimately on correspondence as a truth criterion. Behavioral pragmatism is then introduced to circumvent both Hayes's claim and Quine's implicit appeal to correspondence. Behavioral pragmatism avoids correspondence by appealing to the personal goals (i.e., the behavior) of the scientist or philosopher as the basis for establishing truth. One consequence of this approach, however, is that science and philosophy are robbed of any final or absolute objectives and thus may not be a satisfactory solution to philosophers. On balance, behavioral pragmatism avoids any appeal to correspondence-based truth, and thus it cannot be criticized for generating the same philosophical problems that have come to be associated with this truth criterion. PMID:22478346

  3. Unveiling the truth: warnings reduce the repetition-based truth effect.

    Science.gov (United States)

    Nadarevic, Lena; Aßfalg, André

    2017-07-01

    Typically, people are more likely to consider a previously seen or heard statement as true compared to a novel statement. This repetition-based "truth effect" is thought to rely on fluency-truth attributions as the underlying cognitive mechanism. In two experiments, we tested the nature of the fluency-attribution mechanism by means of warning instructions, which informed participants about the truth effect and asked them to prevent it. In Experiment 1, we instructed warned participants to consider whether a statement had already been presented in the experiment to avoid the truth effect. However, warnings did not significantly reduce the truth effect. In Experiment 2, we introduced control questions and reminders to ensure that participants understood the warning instruction. This time, warning reduced, but did not eliminate the truth effect. Assuming that the truth effect relies on fluency-truth attributions, this finding suggests that warned participants could control their attributions but did not disregard fluency altogether when making truth judgments. Further, we found no evidence that participants overdiscount the influence of fluency on their truth judgments.

  4. "#Factsmustfall"?--Education in a Post-Truth, Post-Truthful World

    Science.gov (United States)

    Horsthemke, Kai

    2017-01-01

    Taking its inspiration from the name of the recent "#FeesMustFall" movement on South African university campuses, this paper takes stock of the apparent disrepute into which truth, facts and also rationality have fallen in recent times. In the post-truth world, the blurring of borders between truth and deception, truthfulness and…

  5. Automatic Scheduling and Planning (ASAP) in future ground control systems

    Science.gov (United States)

    Matlin, Sam

    1988-01-01

    This report describes two complementary approaches to the problem of space mission planning and scheduling. The first is an Expert System or Knowledge-Based System for automatically resolving most of the activity conflicts in a candidate plan. The second is an Interactive Graphics Decision Aid to assist the operator in manually resolving the residual conflicts which are beyond the scope of the Expert System. The two system designs are consistent with future ground control station activity requirements, support activity timing constraints, resource limits and activity priority guidelines.

  6. Biometric correspondence between reface computerized facial approximations and CT-derived ground truth skin surface models objectively examined using an automated facial recognition system.

    Science.gov (United States)

    Parks, Connie L; Monson, Keith L

    2018-05-01

This study employed an automated facial recognition system as a means of objectively evaluating biometric correspondence between a ReFace facial approximation and the computed tomography (CT) derived ground truth skin surface of the same individual. High rates of biometric correspondence were observed, irrespective of rank class (Rk) or demographic cohort examined. Overall, 48% of the test subjects' ReFace approximation probes (n=96) were matched to his or her corresponding ground truth skin surface image at R1, a rank indicating a high degree of biometric correspondence and a potential positive identification. Identification rates improved with each successively broader rank class (R10=85%, R25=96%, and R50=99%), with 100% identification by R57. A sharp increase (39% mean increase) in identification rates was observed between R1 and R10 across most rank classes and demographic cohorts. In contrast, no significant (p > 0.05) performance differences were observed across demographic cohorts or CT scan protocols. Performance measures observed in this research suggest that ReFace approximations are biometrically similar to the actual faces of the approximated individuals and, therefore, may have potential operational utility in contexts in which computerized approximations are utilized as probes in automated facial recognition systems. Copyright © 2018. Published by Elsevier B.V.

  7. Evaluation of digital image correlation techniques using realistic ground truth speckle images

    International Nuclear Information System (INIS)

    Cofaru, C; Philips, W; Van Paepegem, W

    2010-01-01

    Digital image correlation (DIC) has been acknowledged and widely used in recent years in the field of experimental mechanics as a contactless method for determining full field displacements and strains. Even though several sub-pixel motion estimation algorithms have been proposed in the literature, little is known about their accuracy and limitations in reproducing complex underlying motion fields occurring in real mechanical tests. This paper presents a new method for evaluating sub-pixel motion estimation algorithms using ground truth speckle images that are realistically warped using artificial motion fields that were obtained following two distinct approaches: in the first, the horizontal and vertical displacement fields are created according to theoretical formulas for the given type of experiment while the second approach constructs the displacements through radial basis function interpolation starting from real DIC results. The method is applied in the evaluation of five DIC algorithms with results indicating that the gradient-based DIC methods generally have a quality advantage when using small sized blocks and are a better choice for calculating very small displacements and strains. The Newton–Raphson is the overall best performing method with a notable quality advantage when large block sizes are employed and in experiments where large strain fields are of interest
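
    The second approach to building ground-truth motion, interpolating sparse DIC displacement results into a dense field with radial basis functions, can be sketched with SciPy's RBF interpolator. The scattered displacements below are synthetic placeholders rather than real DIC output, and the kernel choice is an assumption.

```python
import numpy as np
from scipy.interpolate import Rbf

# Sparse, synthetic DIC-like displacements at scattered control points (pixels).
rng = np.random.default_rng(0)
xc, yc = rng.uniform(0, 512, 40), rng.uniform(0, 512, 40)
ux = 0.002 * xc + 0.1 * np.sin(yc / 80.0)          # horizontal displacement
uy = -0.001 * yc                                   # vertical displacement

# Radial basis function interpolants for each displacement component.
rbf_u = Rbf(xc, yc, ux, function="thin_plate")
rbf_v = Rbf(xc, yc, uy, function="thin_plate")

# Dense field on the full image grid, ready to warp a speckle image with it.
X, Y = np.meshgrid(np.arange(512), np.arange(512))
U, V = rbf_u(X, Y), rbf_v(X, Y)
print(U.shape, V.shape, float(U[256, 256]), float(V[256, 256]))
```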

  8. Status of the undisturbed mangroves at Brunei Bay, East Malaysia: a preliminary assessment based on remote sensing and ground-truth observations

    Directory of Open Access Journals (Sweden)

    Behara Satyanarayana

    2018-02-01

Full Text Available Brunei Bay, which receives freshwater discharge from four major rivers, namely Limbang, Sundar, Weston and Menumbok, hosts a luxuriant mangrove cover in East Malaysia. However, this relatively undisturbed mangrove forest has been less scientifically explored, especially in terms of vegetation structure, ecosystem services and functioning, and land-use/cover changes. In the present study, mangrove areal extent together with species composition and distribution at the four notified estuaries was evaluated through remote sensing (Advanced Land Observation Satellite—ALOS) and ground-truth (Point-Centred Quarter Method—PCQM) observations. As of 2010, the total mangrove cover was found to be ca. 35,183.74 ha, of which Weston and Menumbok occupied more than two-folds (58%), followed by Sundar (27%) and Limbang (15%). The medium resolution ALOS data were efficient for mapping dominant mangrove species such as Nypa fruticans, Rhizophora apiculata, Sonneratia caseolaris, S. alba and Xylocarpus granatum in the vicinity (accuracy: 80%). The PCQM estimates found a higher basal area at Limbang and Menumbok—suggestive of more mature vegetation, compared to Sundar and Weston. Mangrove stand structural complexity (derived from the complexity index) was also high in the order of Limbang > Menumbok > Sundar > Weston and supporting the perspective of less/undisturbed vegetation at two former locations. Both remote sensing and ground-truth observations have complementarily represented the distribution of Sonneratia spp. as pioneer vegetation at shallow river mouths, N. fruticans in the areas of strong freshwater discharge, R. apiculata in the areas of strong neritic incursion and X. granatum at interior/elevated grounds. The results from this study would be able to serve as strong baseline data for future mangrove investigations at Brunei Bay, including for monitoring and management purposes locally at present.

  9. Status of the undisturbed mangroves at Brunei Bay, East Malaysia: a preliminary assessment based on remote sensing and ground-truth observations

    Science.gov (United States)

    Izzaty Horsali, Nurul Amira; Mat Zauki, Nurul Ashikin; Otero, Viviana; Nadzri, Muhammad Izuan; Ibrahim, Sulong; Husain, Mohd-Lokman; Dahdouh-Guebas, Farid

    2018-01-01

    Brunei Bay, which receives freshwater discharge from four major rivers, namely Limbang, Sundar, Weston and Menumbok, hosts a luxuriant mangrove cover in East Malaysia. However, this relatively undisturbed mangrove forest has been less scientifically explored, especially in terms of vegetation structure, ecosystem services and functioning, and land-use/cover changes. In the present study, mangrove areal extent together with species composition and distribution at the four notified estuaries was evaluated through remote sensing (Advanced Land Observation Satellite—ALOS) and ground-truth (Point-Centred Quarter Method—PCQM) observations. As of 2010, the total mangrove cover was found to be ca. 35,183.74 ha, of which Weston and Menumbok occupied more than two-folds (58%), followed by Sundar (27%) and Limbang (15%). The medium resolution ALOS data were efficient for mapping dominant mangrove species such as Nypa fruticans, Rhizophora apiculata, Sonneratia caseolaris, S. alba and Xylocarpus granatum in the vicinity (accuracy: 80%). The PCQM estimates found a higher basal area at Limbang and Menumbok—suggestive of more mature vegetation, compared to Sundar and Weston. Mangrove stand structural complexity (derived from the complexity index) was also high in the order of Limbang > Menumbok > Sundar > Weston and supporting the perspective of less/undisturbed vegetation at two former locations. Both remote sensing and ground-truth observations have complementarily represented the distribution of Sonneratia spp. as pioneer vegetation at shallow river mouths, N. fruticans in the areas of strong freshwater discharge, R. apiculata in the areas of strong neritic incursion and X. granatum at interior/elevated grounds. The results from this study would be able to serve as strong baseline data for future mangrove investigations at Brunei Bay, including for monitoring and management purposes locally at present. PMID:29479500

  10. Status of the undisturbed mangroves at Brunei Bay, East Malaysia: a preliminary assessment based on remote sensing and ground-truth observations.

    Science.gov (United States)

    Satyanarayana, Behara; M Muslim, Aidy; Izzaty Horsali, Nurul Amira; Mat Zauki, Nurul Ashikin; Otero, Viviana; Nadzri, Muhammad Izuan; Ibrahim, Sulong; Husain, Mohd-Lokman; Dahdouh-Guebas, Farid

    2018-01-01

    Brunei Bay, which receives freshwater discharge from four major rivers, namely Limbang, Sundar, Weston and Menumbok, hosts a luxuriant mangrove cover in East Malaysia. However, this relatively undisturbed mangrove forest has been less scientifically explored, especially in terms of vegetation structure, ecosystem services and functioning, and land-use/cover changes. In the present study, mangrove areal extent together with species composition and distribution at the four notified estuaries was evaluated through remote sensing (Advanced Land Observation Satellite-ALOS) and ground-truth (Point-Centred Quarter Method-PCQM) observations. As of 2010, the total mangrove cover was found to be ca. 35,183.74 ha, of which Weston and Menumbok occupied more than two-folds (58%), followed by Sundar (27%) and Limbang (15%). The medium resolution ALOS data were efficient for mapping dominant mangrove species such as Nypa fruticans , Rhizophora apiculata , Sonneratia caseolaris , S. alba and Xylocarpus granatum in the vicinity (accuracy: 80%). The PCQM estimates found a higher basal area at Limbang and Menumbok-suggestive of more mature vegetation, compared to Sundar and Weston. Mangrove stand structural complexity (derived from the complexity index) was also high in the order of Limbang > Menumbok > Sundar > Weston and supporting the perspective of less/undisturbed vegetation at two former locations. Both remote sensing and ground-truth observations have complementarily represented the distribution of Sonneratia spp. as pioneer vegetation at shallow river mouths, N. fruticans in the areas of strong freshwater discharge, R. apiculata in the areas of strong neritic incursion and X. granatum at interior/elevated grounds. The results from this study would be able to serve as strong baseline data for future mangrove investigations at Brunei Bay, including for monitoring and management purposes locally at present.

  11. Land Use and Land Cover, Existing land use derived from orthoimagery. Ground-truthing from discussion with local plan commission members., Published in 2000, 1:12000 (1in=1000ft) scale, Portage County Government.

    Data.gov (United States)

NSGIC Local Govt | GIS Inventory — Land Use and Land Cover dataset current as of 2000. Existing land use derived from orthoimagery. Ground-truthing from discussion with local plan commission members.

  12. Fast, accurate, and robust automatic marker detection for motion correction based on oblique kV or MV projection image pairs

    International Nuclear Information System (INIS)

    Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den

    2010-01-01

Purpose: A robust and accurate method that allows the automatic detection of fiducial markers in MV and kV projection image pairs is proposed. The method allows automatic correction of inter- or intrafraction motion. Methods: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by the minimization of a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning is assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground-truth and on clinical data sets of 16 patients using manual marker annotations as ground-truth. Results: The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4±1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases. Detecting four markers in a pair of MV images

  13. Geographic information system for fusion and analysis of high-resolution remote sensing and ground truth data

    Science.gov (United States)

    Freeman, Anthony; Way, Jo Bea; Dubois, Pascale; Leberl, Franz

    1992-01-01

We seek to combine high-resolution remotely sensed data with models and ground truth measurements, in the context of a Geographical Information System, integrated with specialized image processing software. We will use this integrated system to analyze the data from two Case Studies, one at a boreal forest site, the other at a tropical forest site. We will assess the information content of the different components of the data, determine the optimum data combinations to study biogeophysical changes in the forest, assess the best way to visualize the results, and validate the models for the forest response to different radar wavelengths/polarizations. During the 1990's, unprecedented amounts of high-resolution images from space of the Earth's surface will become available to the applications scientist from the LANDSAT/TM series, European and Japanese ERS-1 satellites, RADARSAT and SIR-C missions. When the Earth Observation Systems (EOS) program is operational, the amount of data available for a particular site can only increase. The interdisciplinary scientist, seeking to use data from various sensors to study his site of interest, may be faced with massive difficulties in manipulating such large data sets, assessing their information content, determining the optimum combinations of data to study a particular parameter, visualizing his results and validating his model of the surface. The techniques to deal with these problems are also needed to support the analysis of data from NASA's current program of Multi-sensor Airborne Campaigns, which will also generate large volumes of data. In the Case Studies outlined in this proposal, we will have somewhat unique data sets. For the Bonanza Creek Experimental Forest (Case I) calibrated DC-8 SAR data and extensive ground truth measurements are already at our disposal. The data set shows documented evidence of temporal change. The Belize Forest Experiment (Case II) will produce calibrated DC-8 SAR and AVIRIS data, together with

  14. Southwest U.S. Seismo-Acoustic Network: An Autonomous Data Aggregation, Detection, Localization and Ground-Truth Bulletin for the Infrasound Community

    Science.gov (United States)

    Jones, K. R.; Arrowsmith, S.

    2013-12-01

    The Southwest U.S. Seismo-Acoustic Network (SUSSAN) is a collaborative project designed to produce infrasound event detection bulletins for the infrasound community for research purposes. We are aggregating a large, unique, near real-time data set with available ground truth information from seismo-acoustic arrays across New Mexico, Utah, Nevada, California, Texas and Hawaii. The data are processed in near real-time (~ every 20 minutes) with detections being made on individual arrays and locations determined for networks of arrays. The detection and location data are then combined with any available ground truth information and compiled into a bulletin that will be released to the general public directly and eventually through the IRIS infrasound event bulletin. We use the open source Earthworm seismic data aggregation software to acquire waveform data either directly from the station operator or via the Incorporated Research Institutions for Seismology Data Management Center (IRIS DMC), if available. The data are processed using InfraMonitor, a powerful infrasound event detection and localization software program developed by Stephen Arrowsmith at Los Alamos National Laboratory (LANL). Our goal with this program is to provide the infrasound community with an event database that can be used collaboratively to study various natural and man-made sources. We encourage participation in this program directly or by making infrasound array data available through the IRIS DMC or other means. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. R&A 5317326

  15. Automatic segmentation of the left ventricle in a cardiac MR short axis image using blind morphological operation

    Science.gov (United States)

    Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat

    2018-04-01

    Conventionally, cardiac MR image analysis is done manually. Automatic examination can replace the monotonous task of analyzing massive amounts of data to assess the global and regional function of the cardiac left ventricle (LV). This task is performed using MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon accurate delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method using the sum of absolute differences technique to localize the left ventricle. Blind morphological operations are proposed to segment and detect the LV contours of the epicardium and endocardium automatically. We evaluate the proposed method on the benchmark Sunnybrook dataset. Contours of the epicardium and endocardium are compared quantitatively to determine contour accuracy, and high matching values are observed. Similarity (overlap) between the automatic segmentation and the ground truth analysis given by an expert is high, with an index value of 91.30%. The proposed method for automatic segmentation gives better performance than existing techniques in terms of accuracy.
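
    To illustrate the sum-of-absolute-differences localization step mentioned above, the sketch below (not taken from the paper) slides a small template over an image and keeps the position with the lowest SAD score; the arrays are synthetic placeholders for real short-axis frames.

      import numpy as np

      def sad_localize(image, template):
          """Return the (row, col) position minimising the sum of absolute differences."""
          H, W = image.shape
          h, w = template.shape
          best_score, best_pos = np.inf, (0, 0)
          for r in range(H - h + 1):
              for c in range(W - w + 1):
                  score = np.abs(image[r:r + h, c:c + w] - template).sum()
                  if score < best_score:
                      best_score, best_pos = score, (r, c)
          return best_pos, best_score

      rng = np.random.default_rng(0)
      frame = rng.normal(0, 1, (64, 64))
      template = frame[20:36, 30:46].copy()     # pretend this patch is the LV template
      print(sad_localize(frame, template))      # -> ((20, 30), 0.0)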

  16. The Generation of Automatic Mapping for Buildings, Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    William Barragán Zaque

    2015-06-01

    The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá using high-resolution digital vertical aerial photographs and point clouds obtained with LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processes. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The results had an effectiveness of 97.2%, validated through ground-truthing.

  17. Automatic Measurement of Fetal Brain Development from Magnetic Resonance Imaging: New Reference Data.

    Science.gov (United States)

    Link, Daphna; Braginsky, Michael B; Joskowicz, Leo; Ben Sira, Liat; Harel, Shaul; Many, Ariel; Tarrasch, Ricardo; Malinger, Gustavo; Artzi, Moran; Kapoor, Cassandra; Miller, Elka; Ben Bashat, Dafna

    2018-01-01

    Accurate fetal brain volume estimation is of paramount importance in evaluating fetal development. The aim of this study was to develop an automatic method for fetal brain segmentation from magnetic resonance imaging (MRI) data, and to create for the first time a normal volumetric growth chart based on a large cohort. A semi-automatic segmentation method based on the Seeded Region Growing algorithm was developed and applied to MRI data of 199 typically developed fetuses between 18 and 37 weeks' gestation. The accuracy of the algorithm was tested against a sub-cohort of ground truth manual segmentations. A quadratic regression analysis was used to create normal growth charts. The sensitivity of the method to identify developmental disorders was demonstrated on 9 fetuses with intrauterine growth restriction (IUGR). The developed method showed high correlation with manual segmentation (r2 = 0.9183), is user independent, applicable with retrospective data, and is suggested for use in routine clinical practice. © 2017 S. Karger AG, Basel.
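
    A minimal sketch of how a quadratic-regression growth chart of the kind described above can be built, assuming normally distributed residuals for the percentile bands; the volumes below are synthetic, not the study's data.

      import numpy as np

      rng = np.random.default_rng(1)
      ga = rng.uniform(18, 37, 199)                                   # gestational age (weeks), synthetic
      vol = 0.9 * ga**2 - 20 * ga + 120 + rng.normal(0, 12, ga.size)  # synthetic brain volumes (cm^3)

      coeffs = np.polyfit(ga, vol, deg=2)      # quadratic regression
      fit = np.poly1d(coeffs)
      resid_sd = np.std(vol - fit(ga))

      grid = np.linspace(18, 37, 40)
      p05 = fit(grid) - 1.645 * resid_sd       # ~5th percentile, assuming Gaussian residuals
      p95 = fit(grid) + 1.645 * resid_sd
      print(np.round(coeffs, 3))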

  18. A procedure used for a ground truth study of a land use map of North Alabama generated from LANDSAT data

    Science.gov (United States)

    Downs, S. W., Jr.; Sharma, G. C.; Bagwell, C.

    1977-01-01

    A land use map of a five-county area in North Alabama was generated from LANDSAT data using a supervised classification algorithm. There was good overall agreement between the designated land use and known conditions, but there were also obvious discrepancies. In ground checking the map, two types of errors were encountered - shift and misclassification - and a method was developed to eliminate or greatly reduce the errors. Randomly selected study areas containing 2,525 pixels were analyzed. Overall, 76.3 percent of the pixels were correctly classified. A contingency coefficient of correlation was calculated to be 0.7, which is significant at the alpha = 0.01 level. The land use maps generated by computer from LANDSAT data are useful for overall land use assessment by regional agencies. However, care must be used when making detailed analyses of small areas. The procedure used for conducting the ground truth study, together with data from representative study areas, is presented.
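
    For readers unfamiliar with the statistic quoted above, the sketch below computes overall accuracy and Pearson's contingency coefficient from a confusion matrix; the matrix values are hypothetical, not the study's counts.

      import numpy as np
      from scipy.stats import chi2_contingency

      # Hypothetical confusion matrix: rows = classified land use, columns = ground truth
      conf = np.array([[820,  40,  30],
                       [ 60, 700,  55],
                       [ 45,  50, 725]])

      n = conf.sum()
      overall_accuracy = np.trace(conf) / n
      chi2, p, dof, expected = chi2_contingency(conf)
      C = np.sqrt(chi2 / (chi2 + n))           # Pearson's contingency coefficient

      print(f"accuracy = {overall_accuracy:.3f}, C = {C:.2f}, p = {p:.3g}")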

  19. Automatic bone detection and soft tissue aware ultrasound-CT registration for computer-aided orthopedic surgery.

    Science.gov (United States)

    Wein, Wolfgang; Karamalis, Athanasios; Baumgartner, Adrian; Navab, Nassir

    2015-06-01

    The transfer of preoperative CT data into the tracking system coordinates within an operating room is of high interest for computer-aided orthopedic surgery. In this work, we introduce a solution for intra-operative ultrasound-CT registration of bones. We have developed methods for fully automatic real-time bone detection in ultrasound images and global automatic registration to CT. The bone detection algorithm uses a novel bone-specific feature descriptor and was thoroughly evaluated on both in-vivo and ex-vivo data. A global optimization strategy aligns the bone surface, followed by a soft tissue aware intensity-based registration to provide higher local registration accuracy. We evaluated the system on femur, tibia and fibula anatomy in a cadaver study with human legs, where magnetically tracked bone markers were implanted to yield ground truth information. An overall median system error of 3.7 mm was achieved on 11 datasets. Global and fully automatic registration of bones acquired with ultrasound to CT is feasible, with bone detection and tracking operating in real time for immediate feedback to the surgeon.

  20. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    International Nuclear Information System (INIS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Vermandel, Maximilien; Baillet, Clio

    2015-01-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm could estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging. (paper)
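
    For orientation, a simplified binary STAPLE iteration (expectation-maximization over rater sensitivities and specificities, without the spatial prior used in full implementations) is sketched below; it is a generic illustration, not the Computational Radiology Laboratory code referred to in the record.

      import numpy as np

      def staple_binary(D, prior=0.5, n_iter=50):
          """Simplified binary STAPLE. D is an (R, N) array of R rater masks over N voxels."""
          R, N = D.shape
          p = np.full(R, 0.9)                            # initial sensitivities
          q = np.full(R, 0.9)                            # initial specificities
          for _ in range(n_iter):
              # E-step: posterior probability that each voxel belongs to the structure
              a = np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
              b = np.prod(np.where(D == 1, 1 - q[:, None], q[:, None]), axis=0)
              W = prior * a / (prior * a + (1 - prior) * b + 1e-12)
              # M-step: update each rater's performance parameters
              p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
              q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
          return W, p, q

      D = np.array([[1, 1, 1, 0, 0, 1, 0, 0, 1, 1],      # three hypothetical delineations
                    [1, 1, 0, 0, 0, 1, 0, 1, 1, 1],
                    [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]])
      W, p, q = staple_binary(D)
      consensus = (W > 0.5).astype(int)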

  1. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    Science.gov (United States)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians' manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm could estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% overlap). An improvement in accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.

  2. Comparative Analysis of Automatic Exudate Detection between Machine Learning and Traditional Approaches

    Science.gov (United States)

    Sopharak, Akara; Uyyanonvara, Bunyarit; Barman, Sarah; Williamson, Thomas

    To prevent blindness from diabetic retinopathy, periodic screening and early diagnosis are necessary. Due to the lack of expert ophthalmologists in rural areas, automated early detection of exudates (one visible sign of diabetic retinopathy) could help to reduce the incidence of blindness in diabetic patients. Traditional automatic exudate detection methods are based on specific parameter configurations, while the machine learning approaches, which seem more flexible, may be computationally expensive. A comparative analysis of traditional and machine learning approaches to exudate detection, namely mathematical morphology, fuzzy c-means clustering, the naive Bayesian classifier, the Support Vector Machine and the Nearest Neighbor classifier, is presented. Detected exudates are validated against expert ophthalmologists' hand-drawn ground truths. The sensitivity, specificity, precision, accuracy and time complexity of each method are also compared.
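
    The evaluation measures listed above reduce to pixel-wise counts against the hand-drawn ground truth; a small sketch with toy masks standing in for real fundus annotations:

      import numpy as np

      def exudate_metrics(pred, gt):
          """Pixel-wise sensitivity, specificity, precision and accuracy of a detection mask."""
          pred, gt = pred.astype(bool), gt.astype(bool)
          tp = np.sum(pred & gt)
          tn = np.sum(~pred & ~gt)
          fp = np.sum(pred & ~gt)
          fn = np.sum(~pred & gt)
          return {"sensitivity": tp / (tp + fn),
                  "specificity": tn / (tn + fp),
                  "precision":   tp / (tp + fp),
                  "accuracy":    (tp + tn) / (tp + tn + fp + fn)}

      gt   = np.zeros((8, 8), int); gt[2:5, 2:5] = 1     # toy ground-truth exudate region
      pred = np.zeros((8, 8), int); pred[2:5, 3:6] = 1   # toy detection result
      print(exudate_metrics(pred, gt))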

  3. Comparison Of Semi-Automatic And Automatic Slick Detection Algorithms For Jiyeh Power Station Oil Spill, Lebanon

    Science.gov (United States)

    Osmanoglu, B.; Ozkan, C.; Sunar, F.

    2013-10-01

    After air strikes on July 14 and 15, 2006, the Jiyeh Power Station started leaking oil into the eastern Mediterranean Sea. The power station is located about 30 km south of Beirut, and the slick covered about 170 km of coastline, threatening the neighboring countries Turkey and Cyprus. Due to the ongoing conflict between Israel and Lebanon, cleaning efforts could not start immediately, resulting in 12,000 to 15,000 tons of fuel oil leaking into the sea. In this paper we compare results from automatic and semi-automatic slick detection algorithms. The automatic detection method combines the probabilities calculated for each pixel from each image to obtain a joint probability, minimizing the adverse effects of the atmosphere on oil spill detection. The method can readily utilize X-, C- and L-band data where available. Furthermore, wind and wave speed observations can be used for a more accurate analysis. For this study, we utilize Envisat ASAR ScanSAR data. A probability map is generated based on the radar backscatter, the effect of wind and the dampening value. The semi-automatic algorithm is based on supervised classification. As the classifier, an Artificial Neural Network Multilayer Perceptron (ANN MLP) is used, since it is more flexible and efficient than the conventional maximum likelihood classifier for multisource and multi-temporal data. The learning algorithm for the ANN MLP is chosen as Levenberg-Marquardt (LM). Training and test data for supervised classification are composed from the textural information derived from the SAR images. This approach is semi-automatic because tuning the parameters of the classifier and composing the training data require human interaction. We point out the similarities and differences between the two methods and their results, as well as their advantages and disadvantages. Due to the lack of ground truth data, we compare the obtained results to each other, as well as to other published oil slick area assessments.

  4. Design of Wireless Automatic Synchronization for the Low-Frequency Coded Ground Penetrating Radar

    Directory of Open Access Journals (Sweden)

    Zhenghuan Xia

    2015-01-01

    Low-frequency coded ground penetrating radar (GPR) with a pair of wire dipole antennas has some advantages for deep detection. Due to the large distance between the two antennas, the synchronization design is a major challenge in implementing the GPR system. This paper proposes a simple and stable wireless automatic synchronization method based on our developed GPR system, which does not need any synchronization chips or modules and reduces the cost of the hardware system. The transmitter emits the synchronization preamble and pseudorandom binary sequence (PRBS) at an appropriate time interval, while the receiver automatically estimates the synchronization time and receives the returned signal from the underground targets. All the processes are performed in a single FPGA. The performance of the proposed synchronization method is validated experimentally.
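
    The details of the FPGA implementation are not given in the record; as a generic illustration of how a receiver can estimate the synchronization time from a known preamble, the sketch below matched-filters a noisy received trace against the transmitted PRBS (all signals synthetic).

      import numpy as np

      rng = np.random.default_rng(2)
      prbs = rng.integers(0, 2, 127) * 2 - 1                    # stand-in +/-1 preamble sequence
      true_delay = 340                                          # samples before the preamble arrives

      rx = np.concatenate([rng.normal(0, 0.5, true_delay),
                           prbs + rng.normal(0, 0.5, prbs.size),
                           rng.normal(0, 0.5, 200)])

      corr = np.correlate(rx, prbs, mode="valid")               # matched filter against the preamble
      estimated_delay = int(np.argmax(corr))
      print(estimated_delay)                                    # close to 340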

  5. Crowd-sourced data collection to support automatic classification of building footprint data

    Science.gov (United States)

    Hecht, Robert; Kalla, Matthias; Krüger, Tobias

    2018-05-01

    Human settlements are mainly formed by buildings with their different characteristics and usage. Despite the importance of buildings for the economy and society, complete regional or even national figures of the entire building stock and its spatial distribution are still hardly available. Available digital topographic data sets created by National Mapping Agencies or mapped voluntarily through a crowd via Volunteered Geographic Information (VGI) platforms (e.g. OpenStreetMap) contain building footprint information but often lack additional information on building type, usage, age or number of floors. For this reason, predictive modeling is becoming increasingly important in this context. The capabilities of machine learning allow for the prediction of building types and other building characteristics and thus, the efficient classification and description of the entire building stock of cities and regions. However, such data-driven approaches always require a sufficient amount of ground truth (reference) information for training and validation. The collection of reference data is usually cost-intensive and time-consuming. Experiences from other disciplines have shown that crowdsourcing offers the possibility to support the process of obtaining ground truth data. Therefore, this paper presents the results of an experimental study aiming at assessing the accuracy of non-expert annotations on street view images collected from an internet crowd. The findings provide the basis for a future integration of a crowdsourcing component into the process of land use mapping, particularly the automatic building classification.

  6. Frege on Truths, Truth and the True

    Directory of Open Access Journals (Sweden)

    Wolfgang Künne

    2008-08-01

    The founder of modern logic and grandfather of analytic philosophy was 70 years old when he published his paper 'Der Gedanke' (The Thought) in 1918. This essay contains some of Gottlob Frege's deepest and most provocative reflections on the concept of truth, and it will play a prominent role in my lectures. The plan for my lectures is as follows. What is it that is (primarily) true or false? 'Thoughts', is Frege's answer. In §1, I shall explain and defend this answer. In §2, I shall briefly consider his enthymematic argument for the conclusion that the word 'true' resists any attempt at defining it. In §3, I shall discuss his thesis that the thought that things are thus and so is identical with the thought that it is true that things are thus and so. The reasons we are offered for this thesis will be found wanting. In §4, I shall comment extensively on Frege's claim that, in a non-formal language like the one I am currently trying to speak, we can say whatever we want to say without ever using the word 'true' or any of its synonyms. I will reject the propositional-redundancy claim, endorse the assertive-redundancy claim and deny the connection Frege ascribes to them. In his classic 1892 paper 'Über Sinn und Bedeutung' (On Sense and Signification), Frege argues that truth-values are objects. In §5, I shall scrutinize his argument. In §6, I will show that in Frege's ideography (Begriffsschrift), truth, far from being redundant, is omnipresent. The final §7 is again on truth-bearers, this time as a topic in the theory of intentionality and in metaphysics. In the course of discussing Frege's views on the objecthood and objectivity of thoughts and the timelessness of truth(s), I will plead for a somewhat mitigated Platonism.

  7. Redefining Religious Truth as a Challenge for Philosophy of Religion

    NARCIS (Netherlands)

    Jonkers, P.H.A.I.

    2012-01-01

    One of the most important features of contemporary Western societies is the rise of (religious) pluralism. Whereas (philosophical) theism used to serve as a common ground to discuss the truth-claims of religion, this approach seems to have lost much of its plausibility. What I want to argue in this

  8. Automatic Detection and Positioning of Ground Control Points Using TerraSAR-X Multiaspect Acquisitions

    Science.gov (United States)

    Montazeri, Sina; Gisinger, Christoph; Eineder, Michael; Zhu, Xiao xiang

    2018-05-01

    Geodetic stereo Synthetic Aperture Radar (SAR) is capable of absolute three-dimensional localization of natural Persistent Scatterers (PSs), which allows for Ground Control Point (GCP) generation using only SAR data. The prerequisite for the method to achieve high-precision results is the correct detection of common scatterers in SAR images acquired from different viewing geometries. In this contribution, we describe three strategies for automatic detection of identical targets in SAR images of urban areas taken from different orbit tracks. Moreover, a complete workflow for automatic generation of a large number of GCPs using SAR data is presented, and its applicability is shown by exploiting TerraSAR-X (TS-X) high-resolution spotlight images over the city of Oulu, Finland, and a test site in Berlin, Germany.

  9. The Truth of Wikipedia

    Directory of Open Access Journals (Sweden)

    Nathaniel Tkacz

    2012-05-01

    What does it mean to assert that Wikipedia has a relation to truth? That there is, despite regular claims to the contrary, an entire apparatus of truth in Wikipedia? In this article, I show that Wikipedia has in fact two distinct relations to truth: one which is well known and forms the basis of existing popular and scholarly commentaries, and another which refers to equally well-known aspects of Wikipedia, but has not been understood in terms of truth. I demonstrate Wikipedia's dual relation to truth through a close analysis of the Neutral Point of View core content policy (and one of the project's 'Five Pillars'). I conclude by indicating what is at stake in the assertion that Wikipedia has a regime of truth and what bearing this has on existing commentaries.

  10. Ground-Truthing of Airborne LiDAR Using RTK-GPS Surveyed Data in Coastal Louisiana's Wetlands

    Science.gov (United States)

    Lauve, R. M.; Alizad, K.; Hagen, S. C.

    2017-12-01

    Airborne LiDAR (Light Detection and Ranging) data are used by engineers and scientists to create bare earth digital elevation models (DEMs), which are essential to modeling complex coastal, ecological, and hydrological systems. However, acquiring accurate bare earth elevations in coastal wetlands is difficult due to the density of marsh grasses, which prevents the sensor pulse from reflecting off the true ground surface. Previous work by Medeiros et al. [2015] developed a technique to assess LiDAR error and adjust elevations according to marsh vegetation density and index. The aim of this study is the collection of ground truth points and the investigation of the range of potential errors found in existing LiDAR datasets within coastal Louisiana's wetlands. Survey grids were mapped out in an area dominated by Spartina alterniflora, and a survey-grade Trimble Real Time Kinematic (RTK) GPS device was employed to measure bare earth ground elevations in the marsh system adjacent to Terrebonne Bay, LA. Elevations were obtained for 20 meter-spaced surveyed grid points and were used to generate a DEM. The comparison between the LiDAR-derived and surveyed-data DEMs yields an average difference of 23 cm with a maximum difference of 68 cm. Considering the local tidal range of 45 cm, these differences can introduce substantial error when the DEM is used for ecological modeling [Alizad et al., 2016]. Results from this study will be further analyzed and implemented in order to adjust LiDAR-derived DEMs closer to their true elevation across Louisiana's coastal wetlands. References: Alizad, K., S. C. Hagen, J. T. Morris, S. C. Medeiros, M. V. Bilskie, and J. F. Weishampel (2016), Coastal wetland response to sea-level rise in a fluvial estuarine system, Earth's Future, 4(11), 483-497, 10.1002/2016EF000385. Medeiros, S., S. Hagen, J. Weishampel, and J. Angelo (2015), Adjusting Lidar-Derived Digital Terrain Models in Coastal Marshes Based on Estimated Aboveground Biomass Density, Remote Sensing, 7
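
    The 23 cm average and 68 cm maximum quoted above come from differencing the two surfaces; a sketch of that comparison with hypothetical elevation grids:

      import numpy as np

      # Hypothetical co-registered 20 m grids of elevation (metres)
      lidar_dem = np.array([[0.45, 0.52, 0.61],
                            [0.48, 0.55, 0.66],
                            [0.50, 0.58, 0.70]])
      rtk_dem   = np.array([[0.25, 0.30, 0.35],
                            [0.27, 0.31, 0.40],
                            [0.28, 0.34, 0.43]])

      diff = lidar_dem - rtk_dem
      print(f"mean difference: {100 * diff.mean():.0f} cm, "
            f"max: {100 * diff.max():.0f} cm, "
            f"RMSE: {100 * np.sqrt((diff ** 2).mean()):.0f} cm")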

  11. Automatic design of digital synthetic gene circuits.

    Directory of Open Access Journals (Sweden)

    Mario A Marchisio

    2011-02-01

    De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input-output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions.

  12. Automatic Image-Based Plant Disease Severity Estimation Using Deep Learning.

    Science.gov (United States)

    Wang, Guan; Sun, Yu; Wang, Jianxin

    2017-01-01

    Automatic and accurate estimation of disease severity is essential for food security, disease management, and yield loss prediction. Deep learning, the latest breakthrough in computer vision, is promising for fine-grained disease severity classification, as the method avoids the labor-intensive feature engineering and threshold-based segmentation. Using the apple black rot images in the PlantVillage dataset, which are further annotated by botanists with four severity stages as ground truth, a series of deep convolutional neural networks are trained to diagnose the severity of the disease. The performances of shallow networks trained from scratch and deep models fine-tuned by transfer learning are evaluated systematically in this paper. The best model is the deep VGG16 model trained with transfer learning, which yields an overall accuracy of 90.4% on the hold-out test set. The proposed deep learning model may have great potential in disease control for modern agriculture.
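
    A minimal transfer-learning sketch in the spirit of the approach described above, using the Keras VGG16 weights; the input size, the choice of block to fine-tune, and the four-class head are assumptions for illustration, not the paper's exact training setup.

      import tensorflow as tf

      base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
      for layer in base.layers:
          layer.trainable = layer.name.startswith("block5")   # fine-tune only the last conv block

      model = tf.keras.Sequential([
          base,
          tf.keras.layers.GlobalAveragePooling2D(),
          tf.keras.layers.Dense(256, activation="relu"),
          tf.keras.layers.Dropout(0.5),
          tf.keras.layers.Dense(4, activation="softmax"),      # four severity stages
      ])
      model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                    loss="sparse_categorical_crossentropy", metrics=["accuracy"])
      # model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=20)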

  13. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach.

    Science.gov (United States)

    Avendi, Michael R; Kheradvar, Arash; Jafarkhani, Hamid

    2017-12-01

    This study aims to accurately segment the right ventricle (RV) from cardiac MRI using a fully automatic learning-based method. The proposed method uses deep learning algorithms, i.e., convolutional neural networks and stacked autoencoders, for automatic detection and initial segmentation of the RV chamber. The initial segmentation is then combined with the deformable models to improve the accuracy and robustness of the process. We trained our algorithm using 16 cardiac MRI datasets of the MICCAI 2012 RV Segmentation Challenge database and validated our technique using the rest of the dataset (32 subjects). An average Dice metric of 82.5% along with an average Hausdorff distance of 7.85 mm were achieved for all the studied subjects. Furthermore, a high correlation and level of agreement with the ground truth contours for end-diastolic volume (0.98), end-systolic volume (0.99), and ejection fraction (0.93) were observed. Our results show that deep learning algorithms can be effectively used for automatic segmentation of the RV. The computed quantitative metrics of our method outperformed those of the existing techniques that participated in the MICCAI 2012 challenge, as reported by the challenge organizers. Magn Reson Med 78:2439-2448, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
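
    The two agreement measures reported above can be computed from binary masks as follows; this sketch treats every labelled voxel as a surface point, which is a simplification of contour-based evaluation.

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def dice(a, b):
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.sum(a & b) / (a.sum() + b.sum())

      def hausdorff_mm(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
          pts_a = np.argwhere(mask_a) * np.asarray(spacing)   # voxel indices -> mm
          pts_b = np.argwhere(mask_b) * np.asarray(spacing)
          return max(directed_hausdorff(pts_a, pts_b)[0],
                     directed_hausdorff(pts_b, pts_a)[0])

      gt   = np.zeros((10, 64, 64), int); gt[3:7, 20:40, 20:40] = 1    # toy ground-truth RV mask
      auto = np.zeros((10, 64, 64), int); auto[3:7, 22:42, 21:41] = 1  # toy automatic mask
      print(dice(auto, gt), hausdorff_mm(auto, gt, spacing=(8.0, 1.4, 1.4)))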

  14. Dual-model automatic detection of nerve-fibres in corneal confocal microscopy images.

    Science.gov (United States)

    Dabbah, M A; Graham, J; Petropoulos, I; Tavakoli, M; Malik, R A

    2010-01-01

    Corneal Confocal Microscopy (CCM) imaging is a non-invasive surrogate for detecting, quantifying and monitoring diabetic peripheral neuropathy. This paper presents an automated method for detecting nerve-fibres from CCM images using a dual-model detection algorithm and compares the performance to well-established texture and feature detection methods. The algorithm comprises two separate models, one for the background and another for the foreground (nerve-fibres), which work interactively. Our evaluation shows significant improvement (p ≈ 0) in both error rate and signal-to-noise ratio of this model over the competitor methods. The automatic method is also evaluated in comparison with manual ground truth analysis in assessing diabetic neuropathy on the basis of nerve-fibre length, and shows a strong correlation (r = 0.92). Both analyses significantly separate diabetic patients from control subjects (p ≈ 0).

  15. Towards ground-truthing of spaceborne estimates of above-ground biomass and leaf area index in tropical rain forests

    Science.gov (United States)

    Köhler, P.; Huth, A.

    2010-05-01

    The canopy height of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or lidar. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI). The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree logging conditions for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. It is found that for undisturbed forest and a variety of disturbed forest situations AGB can be expressed as a power-law function of canopy height h (AGB = a·h^b) with an r2 ~ 60% at a spatial resolution of 20 m × 20 m (0.04 ha, also called plot size). The regression becomes significantly better for the hectare-wide analysis of the disturbed forest sites (r2 = 91%). There seems to be no functional dependency between LAI and canopy height, but there is also a linear correlation (r2 ~ 60%) between AGB and the area fraction in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a comparison of the simulations with permanent sampling plot data from the same region and with the large-scale forest inventory in Lambir. We conclude that the spaceborne remote sensing techniques have the potential to
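
    A short sketch of fitting the reported relationship AGB = a·h^b by linear regression in log-log space; the plot-level heights and biomass values below are synthetic stand-ins for the model output.

      import numpy as np

      rng = np.random.default_rng(3)
      h = rng.uniform(5, 45, 500)                                   # canopy height per 0.04 ha plot (m)
      agb = 0.8 * h**1.9 * np.exp(rng.normal(0, 0.3, h.size))       # synthetic above-ground biomass

      b, log_a = np.polyfit(np.log(h), np.log(agb), 1)              # fit log(AGB) = log(a) + b*log(h)
      a = np.exp(log_a)

      resid = np.log(agb) - (log_a + b * np.log(h))
      r2 = 1 - resid.var() / np.log(agb).var()
      print(f"a = {a:.2f}, b = {b:.2f}, r2 = {r2:.2f}")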

  16. Truth as a Mathematical Object

    Directory of Open Access Journals (Sweden)

    Jean-Yves Béziau

    2010-04-01

    In this paper we discuss in which sense truth is considered as a mathematical object in propositional logic. After clarifying how this concept is used in classical logic, through the notions of truth-table, truth-function and bivaluation, we examine some generalizations of it in non-classical logics: many-valued matrix semantics with three and four values, non-truth-functional bivalent semantics, Kripke possible world semantics.

  17. Truth Obviousness in Ancient Greek Philosophy

    Directory of Open Access Journals (Sweden)

    Halyna I. Budz

    2013-01-01

    The article examines the features of the axiomatic approach to the understanding of truth in ancient Greek philosophy. Truth in the works of the ancient philosophers has an axiomatic essence, based on the divine origin of truth. As truth has a divine origin, it exists in reality. The reality created by the gods is the solemn reality. Therefore, man's understanding of reality is a reflection of divine reality, which is true and intelligent. In the context of ancient Greek philosophy, to know truth is to know something existing in reality, in other words something truly existing, an eternal reality. Consequently, to know truth is to know the substantial basis of reality. That is why the justification of the origin of reality is at the same time the axiomatic doctrine of truth, because only the fundamental principle "truly" exists and is the truth itself. The idea of a fundamental principle in ancient Greek philosophy is the axiom, a universal principle, which is the base of reality as a substance from an ontological perspective and is realized as the truth from a gnosiological perspective. The fundamental principle, as the Greeks understand it, coincides with the truth; in other words, reality and thinking are identical. The idea of the source of reality is at the same time the universal criterion of world perception; in other words, it is the truth, which is perceived axiomatically.

  18. Truthful Monadic Abstractions

    DEFF Research Database (Denmark)

    Brock-Nannestad, Taus; Schürmann, Carsten

    2012-01-01

    indefinitely, finding neither a proof nor a disproof of a given subgoal. In this paper we characterize a family of truth-preserving abstractions from intuitionistic first-order logic to the monadic fragment of classical first-order logic. Because they are truthful, these abstractions can be used to disprove...

  19. Automatic Image-Based Plant Disease Severity Estimation Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Guan Wang

    2017-01-01

    Automatic and accurate estimation of disease severity is essential for food security, disease management, and yield loss prediction. Deep learning, the latest breakthrough in computer vision, is promising for fine-grained disease severity classification, as the method avoids the labor-intensive feature engineering and threshold-based segmentation. Using the apple black rot images in the PlantVillage dataset, which are further annotated by botanists with four severity stages as ground truth, a series of deep convolutional neural networks are trained to diagnose the severity of the disease. The performances of shallow networks trained from scratch and deep models fine-tuned by transfer learning are evaluated systematically in this paper. The best model is the deep VGG16 model trained with transfer learning, which yields an overall accuracy of 90.4% on the hold-out test set. The proposed deep learning model may have great potential in disease control for modern agriculture.

  20. Truth and Methods.

    Science.gov (United States)

    Dasenbrock, Reed Way

    1995-01-01

    Examines literary theory's displacing of "method" in the New Historicist criticism. Argues that Stephen Greenblatt and Lee Paterson imply that no objective historical truth is possible and as a result do not give methodology its due weight in their criticism. Questions the theory of "truth" advanced in this vein of literary…

  1. An existential theory of truth

    Directory of Open Access Journals (Sweden)

    Dale Cannon

    1993-01-01

    This article is an attempt to present a simplified account of the theory of truth expressed in the writings of certain existentialist writers - namely, Kierkegaard, Heidegger, Jaspers, and Marcel. It is designed to serve as a supplement to conventional textbook treatments of the nature of truth, which typically ignore the contributions that existentialists have made to the topic. An existential theory of truth stresses the epistemological (not ontological) indeterminateness of meaning and truth, apart from one's personal participation in determining them. Contrary to superficial interpretations, this theory does not do away either with a transcendent reality or with objectivity. What is rejected is anything that would circumvent the necessary task of participating, oneself, in the epistemological determination of truth.

  2. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI.

    Science.gov (United States)

    Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid

    2016-05-01

    Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. Validation metrics - percentage of good contours, Dice metric, average perpendicular distance and conformity - were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78 obtained by other methods, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
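
    Once the LV cavity is segmented, the clinical indices mentioned above follow from simple voxel counting; a sketch with hypothetical voxel spacing and toy masks:

      import numpy as np

      def lv_volume_ml(mask, voxel_dims_mm=(1.4, 1.4, 8.0)):
          """Cavity volume in ml from a binary short-axis segmentation mask."""
          voxel_ml = np.prod(voxel_dims_mm) / 1000.0        # mm^3 per voxel -> ml
          return float(mask.sum()) * voxel_ml

      def ejection_fraction(edv_ml, esv_ml):
          return 100.0 * (edv_ml - esv_ml) / edv_ml

      ed_mask = np.zeros((10, 128, 128), int); ed_mask[2:9, 40:90, 40:90] = 1   # toy ED segmentation
      es_mask = np.zeros((10, 128, 128), int); es_mask[3:8, 50:80, 50:80] = 1   # toy ES segmentation
      edv, esv = lv_volume_ml(ed_mask), lv_volume_ml(es_mask)
      print(f"EDV {edv:.0f} ml, ESV {esv:.0f} ml, EF {ejection_fraction(edv, esv):.0f}%")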

  3. Automatic orientation and 3D modelling from markerless rock art imagery

    Science.gov (United States)

    Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.

    2013-02-01

    This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple images. The Scale-Invariant Feature Transform (SIFT) and the Speeded Up Robust Features (SURF) will be assessed based on speed, number of features, matched features, and precision in image and object space, depending on the adopted hierarchical matching scheme. The influence of additionally applying Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms, aiming at minimum interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) will be assessed both in image and object space for the 3D modelling of a complex rock art shelter.
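
    A minimal OpenCV sketch of the SIFT detection and ratio-test matching step that replaces manual measurements in such a pipeline; the image file names are placeholders (SURF additionally requires the non-free opencv-contrib build).

      import cv2

      img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical image pair
      img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      matcher = cv2.BFMatcher(cv2.NORM_L2)
      good = []
      for pair in matcher.knnMatch(des1, des2, k=2):
          if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:   # Lowe's ratio test
              good.append(pair[0])

      print(f"{len(good)} putative correspondences")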

  4. Automated intraoperative calibration for prostate cancer brachytherapy

    International Nuclear Information System (INIS)

    Kuiran Chen, Thomas; Heffter, Tamas; Lasso, Andras; Pinter, Csaba; Abolmaesumi, Purang; Burdette, E. Clif; Fichtinger, Gabor

    2011-01-01

    Purpose: Prostate cancer brachytherapy relies on an accurate spatial registration between the implant needles and the TRUS image, called "calibration". The authors propose a new device and a fast, automatic method to calibrate the brachytherapy system in the operating room, with instant error feedback. Methods: A device was CAD-designed and precision-engineered, which mechanically couples a calibration phantom with an exact replica of the standard brachytherapy template. From real-time TRUS images acquired from the calibration device and processed by the calibration system, the coordinate transformation between the brachytherapy template and the TRUS images was computed automatically. The system instantly generated a report of the target reconstruction accuracy based on the current calibration outcome. Results: Four types of validation tests were conducted. First, 50 independent, real-time calibration trials yielded an average of 0.57 ± 0.13 mm line reconstruction error (LRE) relative to ground truth. Second, the averaged LRE was 0.37 ± 0.25 mm relative to ground truth in tests with six different commercial TRUS scanners operating at similar imaging settings. Furthermore, testing with five different commercial stepper systems yielded an average of 0.29 ± 0.16 mm LRE relative to ground truth. Finally, the system achieved an average of 0.56 ± 0.27 mm target registration error (TRE) relative to ground truth in needle insertion tests through the template in a water tank. Conclusions: The proposed automatic, intraoperative calibration system for prostate cancer brachytherapy has achieved high accuracy, precision, and robustness.

  5. The end of truth?

    OpenAIRE

    C. W. du Toit

    1997-01-01

    As we are approaching the end of the century, many ideas, systems, and certainties previously taken for granted seem to be questioned, altered and rejected. One of these is the notion of truth, which pervades the very fibre of Western thinking. Rejecting the relevant critique as simply a postmodern fad, this article proceeds to give attention to the questions regarding the end of religious, scientific, and metaphysical truth. Truth and power are dealt with, as well as the narrative nature of...

  6. THEOLOGICAL AND PHILOSOPHICAL THEORIES OF TRUTH

    Directory of Open Access Journals (Sweden)

    Hans-Peter Grosshans

    2012-02-01

    Examining some theological and philosophical theories of truth, the author concentrates his attention on the experience of giving concrete reality to the Christian discourse about truth, and at the same time contrasts this search with the attempts of philosophy to define truth. He draws the reader's attention to the understanding of truth in language and communication. In his article he discusses the essential theories of truth which are characteristic of Western philosophy: classical, correspondent, coherent, pragmatic, communicative and ontic. The author notes the specific traits of a theological understanding of truth and contends that it is based on an ontologically higher level than that of the classic definition of truth viewed simply in relation to reality and the understanding. The knowledge of God given to the Christian faith by the activity of the triune God is in itself perfect and therefore in no need of further development. It is on this basis that theology develops its knowledge of faith, sweeping aside everything which is not in accord with this fundamental affirmation of faith or with the witness of revealed truth.

  7. Automatic Quantification of Radiographic Wrist Joint Space Width of Patients With Rheumatoid Arthritis.

    Science.gov (United States)

    Huo, Yinghe; Vincken, Koen L; van der Heijde, Desiree; de Hair, Maria J H; Lafeber, Floris P; Viergever, Max A

    2017-11-01

    Objective: Wrist joint space narrowing is a main radiographic outcome of rheumatoid arthritis (RA). Yet, automatic radiographic wrist joint space width (JSW) quantification for RA patients has not been widely investigated. The aim of this paper is to present an automatic method to quantify the JSW of three wrist joints that are least affected by bone overlapping and are frequently involved in RA. These joints are located around the scaphoid bone, viz. the multangular-navicular, capitate-navicular-lunate, and radiocarpal joints. Methods: The joint space around the scaphoid bone is detected by using consecutive searches of separate path segments, where each segment location aids in constraining the subsequent one. For joint margin delineation, first the boundary not affected by X-ray projection is extracted, followed by a backtrace process to obtain the actual joint margin. The accuracy of the quantified JSW is evaluated by comparison with the manually obtained ground truth. Results: Two of the 50 radiographs used for evaluation of the method did not yield a correct path through all three wrist joints. The delineated joint margins of the remaining 48 radiographs were used for JSW quantification. It was found that 90% of the joints had a JSW deviating less than 20% from the mean JSW of manual indications, with the mean JSW error less than 10%. Conclusion: The proposed method is able to automatically quantify the JSW of radiographic wrist joints reliably. The proposed method may aid clinical researchers to study the progression of wrist joint damage in RA studies.

  8. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    International Nuclear Information System (INIS)

    Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang; Liou, Shu-Cheng; Nath, Ravinder; Liu Wu

    2013-01-01

    Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching needs to match object shapes that change significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computation load, because they all require an exhaustive search in the region of interest. The authors solve this problem by synergetic use of modern but well-tested computer vision and artificial intelligence techniques; specifically the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered as the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The

  9. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang; Liou, Shu-Cheng; Nath, Ravinder; Liu Wu [Department of Computer Science and Information Engineering, National Chung Cheng University, Taiwan, 62102 (China); Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut 06510-3220 (United States)

    2013-01-15

    Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching needs to match object shapes that change significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computation load, because they all require an exhaustive search in the region of interest. The authors solve this problem by synergetic use of modern but well-tested computer vision and artificial intelligence techniques; specifically the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered as the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The

  10. An Empirical Study of Atmospheric Correction Procedures for Regional Infrasound Amplitudes with Ground Truth.

    Science.gov (United States)

    Howard, J. E.

    2014-12-01

    This study focuses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000-4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind, as developed by Mutschlecner and Whitaker (1999), and uses the average wind speed between 45 and 55 km altitude in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested with the smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller-scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow-band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.

  11. Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer

    International Nuclear Information System (INIS)

    Thariat, Juliette; Ramus, Liliane; Maingon, Philippe; Odin, Guillaume; Gregoire, Vincent; Darcourt, Vincent; Guevara, Nicolas; Orlanducci, Marie-Helene; Marcie, Serge; Poissonnet, Gilles; Marcy, Pierre-Yves

    2012-01-01

    Purpose: To propose an automatic atlas-based segmentation framework for the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in only 30% of cases with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contours. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.
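
    The fusion step in a multi-atlas pipeline can be as simple as voxel-wise voting over the warped atlas labels; the sketch below shows majority voting as the most basic variant, not the specific fusion rule used by Dentalmaps.

      import numpy as np

      def majority_vote(warped_labels):
          """Fuse binary atlas segmentations already registered to the query image."""
          stack = np.stack([lab.astype(np.uint8) for lab in warped_labels])
          return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

      atlases = [np.random.default_rng(i).integers(0, 2, (4, 4)) for i in range(5)]  # toy warped labels
      print(majority_vote(atlases))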

  12. Image simulation for automatic license plate recognition

    Science.gov (United States)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
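
    A small OpenCV sketch of the second step (imposing capture distortions on a rendered plate); the text, geometry and noise levels are illustrative assumptions rather than parameters measured from real plate images.

      import cv2
      import numpy as np

      plate = np.full((120, 520, 3), 255, np.uint8)                      # blank synthetic plate
      cv2.putText(plate, "ABC 1234", (30, 85), cv2.FONT_HERSHEY_SIMPLEX, 2.5, (0, 0, 0), 6)

      h, w = plate.shape[:2]
      src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
      dst = np.float32([[10, 5], [w - 15, 8], [w - 5, h - 6], [5, h - 12]])  # mild perspective skew
      H = cv2.getPerspectiveTransform(src, dst)
      warped = cv2.warpPerspective(plate, H, (w, h))

      blurred = cv2.GaussianBlur(warped, (3, 3), 0)                      # stand-in for optics blur
      noisy = np.clip(blurred + np.random.normal(0, 8, blurred.shape), 0, 255).astype(np.uint8)
      cv2.imwrite("simulated_plate.png", noisy)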

  13. Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting

    Science.gov (United States)

    Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2012-02-01

    We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures, removes unlikely fissure candidates, and then fits a B-spline iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.
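
    An iterative fit-and-reject loop of the kind described above can be sketched in one dimension with SciPy's smoothing splines; the candidate points below are synthetic, and the 1-D profile stands in for the surface fitted in the paper.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(4)
      x = np.sort(rng.uniform(0, 100, 80))
      y = 0.02 * (x - 50) ** 2 + rng.normal(0, 1.0, x.size)        # fissure-like profile + noise
      outliers = rng.choice(x.size, 8, replace=False)
      y[outliers] += rng.uniform(8, 15, outliers.size)             # spurious candidate objects

      keep = np.ones(x.size, dtype=bool)
      for _ in range(5):                                           # iterative B-spline fit
          spline = UnivariateSpline(x[keep], y[keep], k=3, s=keep.sum())
          resid = np.abs(y - spline(x))
          keep = resid < 3.0 * resid[keep].std()                   # drop far-off candidates as noise

      print(f"{(~keep).sum()} candidates rejected")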

  14. Towards ground-truthing of spaceborne estimates of above-ground life biomass and leaf area index in tropical rain forests

    Directory of Open Access Journals (Sweden)

    P. Köhler

    2010-08-01

    The canopy height h of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or LIDAR. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground life biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI), and identify how correlation and uncertainty vary for two different spatial scales. The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree logging conditions for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. In both undisturbed and disturbed forests AGB can be expressed as a power-law function of canopy height h (AGB = a·h^b) with an r2 ~ 60% if data are analysed at a spatial resolution of 20 m × 20 m (0.04 ha, also called plot size). The correlation coefficient of the regression becomes significantly better in the disturbed forest sites (r2 = 91%) if data are analysed hectare-wide. There seems to be no functional dependency between LAI and canopy height, but there is also a linear correlation (r2 ~ 60%) between AGB and the area fraction of gaps in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a

  15. Towards ground-truthing of spaceborne estimates of above-ground life biomass and leaf area index in tropical rain forests

    Science.gov (United States)

    Köhler, P.; Huth, A.

    2010-08-01

    The canopy height h of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or LIDAR. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of a global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground life biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI), and identify how correlation and uncertainty vary for two different spatial scales. The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree-logging conditions for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. In both undisturbed and disturbed forests AGB can be expressed as a power-law function of canopy height h (AGB = a · h^b) with an r² ~ 60% if data are analysed at a spatial resolution of 20 m × 20 m (0.04 ha, also called plot size). The correlation coefficient of the regression becomes significantly better in the disturbed forest sites (r² = 91%) if data are analysed hectare-wide. There seems to be no functional dependency between LAI and canopy height, but there is a linear correlation (r² ~ 60%) between AGB and the area fraction of gaps in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a comparison of the simulations with permanent sampling plot (PSP) data from the same region and with the
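
    The reported allometry AGB = a · h^b can be recovered from plot-level data by a simple log-log regression; below is a hedged Python sketch on synthetic height/biomass values (the coefficients and noise model are illustrative assumptions, not the study's data).

      # Fit AGB = a * h**b by ordinary least squares in log-log space.
      import numpy as np

      rng = np.random.default_rng(1)
      h = rng.uniform(5.0, 45.0, 200)                          # canopy height per 20 m x 20 m plot [m]
      agb = 0.5 * h**1.8 * rng.lognormal(0.0, 0.3, h.size)     # synthetic AGB [t/ha]

      # log(AGB) = log(a) + b * log(h)  ->  linear regression on log-transformed data
      b, log_a = np.polyfit(np.log(h), np.log(agb), 1)
      pred = log_a + b * np.log(h)
      ss_res = np.sum((np.log(agb) - pred) ** 2)
      ss_tot = np.sum((np.log(agb) - np.log(agb).mean()) ** 2)
      r2 = 1.0 - ss_res / ss_tot

      print(f"a = {np.exp(log_a):.2f}, b = {b:.2f}, r^2 = {r2:.2f}")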

  16. Automatic generation of statistical pose and shape models for articulated joints.

    Science.gov (United States)

    Xin Chen; Graham, Jim; Hutchinson, Charles; Muir, Lindsay

    2014-02-01

    Statistical analysis of motion patterns of body joints is potentially useful for detecting and quantifying pathologies. However, building a statistical motion model across different subjects remains a challenging task, especially for a complex joint like the wrist. We present a novel framework for simultaneous registration and segmentation of multiple 3-D (CT or MR) volumes of different subjects at various articulated positions. The framework starts with a pose model generated from 3-D volumes captured at different articulated positions of a single subject (template). This initial pose model is used to register the template volume to image volumes from new subjects. During this process, the Grow-Cut algorithm is used in an iterative refinement of the segmentation of the bone along with the pose parameters. As each new subject is registered and segmented, the pose model is updated, improving the accuracy of successive registrations. We applied the algorithm to CT images of the wrist from 25 subjects, each at five different wrist positions and demonstrated that it performed robustly and accurately. More importantly, the resulting segmentations allowed a statistical pose model of the carpal bones to be generated automatically without interaction. The evaluation results show that our proposed framework achieved accurate registration with an average mean target registration error of 0.34 ±0.27 mm. The automatic segmentation results also show high consistency with the ground truth obtained semi-automatically. Furthermore, we demonstrated the capability of the resulting statistical pose and shape models by using them to generate a measurement tool for scaphoid-lunate dissociation diagnosis, which achieved 90% sensitivity and specificity.
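
    A statistical pose model of the kind generated here can be sketched, in much reduced form, as a PCA over stacked per-bone pose parameters. The sketch below (Python, random placeholder data) assumes a 6-parameter rigid pose per bone and is not the authors' registration/segmentation pipeline.

      # Build a statistical pose model as mean + principal modes of pose variation.
      import numpy as np

      def build_pose_model(poses, n_modes=3):
          """poses: (n_samples, n_params) array of stacked pose parameter vectors."""
          mean = poses.mean(axis=0)
          centered = poses - mean
          _, s, vt = np.linalg.svd(centered, full_matrices=False)   # SVD gives the modes
          modes = vt[:n_modes]                                      # (n_modes, n_params)
          stdevs = s[:n_modes] / np.sqrt(len(poses) - 1)
          return mean, modes, stdevs

      def synthesize_pose(mean, modes, stdevs, weights):
          """Generate a plausible pose as mean + sum_i weights[i] * stdev_i * mode_i."""
          return mean + (np.asarray(weights) * stdevs) @ modes

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          poses = rng.normal(size=(125, 8 * 6))    # e.g., 25 subjects x 5 positions, 8 carpal bones
          mean, modes, stdevs = build_pose_model(poses)
          new_pose = synthesize_pose(mean, modes, stdevs, weights=[1.0, -0.5, 0.0])
          print(new_pose.shape)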

  17. The vital role of transcendental truth in science.

    Science.gov (United States)

    Charlton, Bruce G

    2009-04-01

    I have come to believe that science depends for its long-term success on an explicit and pervasive pursuit of the ideal of transcendental truth. 'Transcendental' implies that a value is ideal and ultimate - it is aimed-at but can only imperfectly be known, achieved or measured. So, transcendental truth is located outside of science; beyond scientific methods, processes and peer consensus. Although the ultimate scientific authority of a transcendental value of truth was a view held almost universally by the greatest scientists throughout recorded history, modern science has all-but banished references to truth from professional scientific discourse - these being regarded as wishful, mystical and embarrassing at best, and hypocritical or manipulative at worst. With truth excluded, the highest remaining evaluation mechanism is 'professional consensus' or peer review - beyond which there is no higher court of appeal. Yet in Human accomplishment, Murray argues that cultures which foster great achievement need transcendental values (truth, beauty and virtue) to be a live presence in the culture; such that great artists and thinkers compete to come closer to the ideal. So a scientific system including truth as a live presence apparently performs better than a system which excludes truth. Transcendental truth therefore seems to be real in the pragmatic sense that it makes a difference. To restore the primacy of truth to science a necessary step would be to ensure that only truth-seekers were recruited to the key scientific positions, and to exclude from leadership those who are untruthful or exhibit insufficient devotion to the pursuit of truth. In sum, to remain anchored in its proper role, science should through 'truth talk' frequently be referencing normal professional practice to transcendental truth values. Ultimately, science should be conducted at every level, from top to bottom, on the basis of what Bronowski termed the 'habit of truth'. Such a situation currently

  18. Evaluating the truth brand.

    Science.gov (United States)

    Evans, W Douglas; Price, Simani; Blahut, Steven

    2005-03-01

    The American Legacy Foundation developed the truth campaign, an aspirational antismoking brand for adolescents. This study tested whether a multidimensional scale, brand equity in truth, mediates the relationship between campaign exposure and youth smoking. We collected brand equity responses from 2,306 youth on a nationally representative telephone survey. Factor analysis indicates that the scale has excellent psychometric properties and effectively measures brand equity. We developed a structural equation model to test the mediation hypothesis. Results show that brand equity mediates the relationship between truth and smoking. Analyses of potential confounders show this relationship is robust. Behavioral branding (brands about a behavior or a lifestyle) is an important public health strategy.

  19. Automatic Centerline Extraction of Covered Roads by Surrounding Objects from High Resolution Satellite Images

    Science.gov (United States)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

    This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered with multiple natural and artificial objects such as trees, vehicles, and shadows of buildings or trees. In order to achieve a precise road extraction, the method comprises three stages: classification of images based on the maximum likelihood algorithm to categorize images into the classes of interest; a modification process on the classified images using connected-component and morphological operators to extract pixels of desired objects by removing undesirable pixels of each class; and finally line extraction based on the RANSAC algorithm. In order to evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as a reference. Evaluation of the proposed method on representative test images shows completeness values ranging between 77% and 93%.
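
    The final RANSAC stage can be illustrated with a minimal sketch: repeatedly sample two candidate road pixels, hypothesize a line, and keep the hypothesis with the most inliers. The Python code below uses synthetic coordinates and an assumed 2-pixel inlier tolerance; it is not the authors' implementation.

      # Minimal RANSAC line fit over candidate road-centerline pixels.
      import numpy as np

      def ransac_line(points, n_iter=500, inlier_tol=2.0, rng=None):
          """points: (N, 2) array of (x, y) pixel coordinates. Returns (p0, d, inlier_mask)."""
          rng = np.random.default_rng(rng)
          best_count, best_mask, best_p0, best_d = -1, None, None, None
          for _ in range(n_iter):
              i, j = rng.choice(len(points), size=2, replace=False)
              p0, p1 = points[i], points[j]
              d = p1 - p0
              norm = np.linalg.norm(d)
              if norm < 1e-9:
                  continue
              d = d / norm
              diff = points - p0
              dist = np.abs(diff[:, 0] * d[1] - diff[:, 1] * d[0])   # perpendicular distance
              mask = dist < inlier_tol
              if mask.sum() > best_count:
                  best_count, best_mask, best_p0, best_d = mask.sum(), mask, p0, d
          return best_p0, best_d, best_mask

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          t = rng.uniform(0, 200, 150)
          road = np.stack([t, 0.5 * t + 20 + rng.normal(0, 1.0, t.size)], axis=1)
          clutter = rng.uniform(0, 200, size=(50, 2))     # leftover tree/vehicle/shadow pixels
          p0, d, inliers = ransac_line(np.vstack([road, clutter]))
          print(f"{inliers.sum()} inliers, direction {d}")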

  20. Withholding truth from patients.

    LENUS (Irish Health Repository)

    O'Sullivan, Elizabeth

    2012-01-31

    The issue of whether patients should always be told the truth regarding their diagnosis and prognosis has afforded much debate in healthcare literature. This article examines telling the truth from an ethical perspective. It puts forward arguments for and against being honest with patients, using a clinical example to illustrate each point.

  1. TRUTH AS DETERMINANT OF RELIGIOUS FAITH

    African Journals Online (AJOL)

    Admin

    of values like any other institution”. Our concern here is how religious truth that ought to be absolute has become relative, thus producing many different religions in the world. Relativity of Religious Truths as Determinant of Religious Faith. Truth has been defined as that which conforms to essential reality, but is it absolute?

  2. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    Science.gov (United States)

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA, and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance, without the need of human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.
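
    The Dice score reported above is the standard overlap measure 2|A ∩ B| / (|A| + |B|); a minimal Python sketch on synthetic masks (not the paper's data or network) is:

      # Dice overlap between a binary prediction and a ground-truth mask.
      import numpy as np

      def dice_score(pred, truth, eps=1e-8):
          pred = pred.astype(bool)
          truth = truth.astype(bool)
          inter = np.logical_and(pred, truth).sum()
          return 2.0 * inter / (pred.sum() + truth.sum() + eps)

      if __name__ == "__main__":
          rng = np.random.default_rng(4)
          truth = rng.random((64, 64, 64)) > 0.7            # synthetic thrombus mask
          pred = truth.copy()
          pred[rng.random(pred.shape) > 0.95] ^= True       # perturb a few voxels
          print(f"Dice = {dice_score(pred, truth):.3f}")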

  3. The use of the truth and deception in dementia care amongst general hospital staff.

    Science.gov (United States)

    Turner, Alex; Eccles, Fiona; Keady, John; Simpson, Jane; Elvish, Ruth

    2017-08-01

    Deceptive practice has been shown to be endemic in long-term care settings. However, little is known about the use of deception in dementia care within general hospitals and staff attitudes towards this practice. This study aimed to develop understanding of the experiences of general hospital staff and explore their decision-making processes when choosing whether to tell the truth or deceive a patient with dementia. This qualitative study drew upon a constructivist grounded theory approach to analyse data gathered from semi-structured interviews with a range of hospital staff. A model, grounded in participant experiences, was developed to describe their decision-making processes. Participants identified particular triggers that set in motion the need for a response. Various mediating factors influenced how staff chose to respond to these triggers. Overall, hospital staff were reluctant to either tell the truth or to lie to patients. Instead, 'distracting' or 'passing the buck' to another member of staff were preferred strategies. The issue of how truth and deception are defined was identified. The study adds to the growing research regarding the use of lies in dementia care by considering the decision-making processes for staff in general hospitals. Various factors influence how staff choose to respond to patients with dementia and whether deception is used. Similarities and differences with long-term dementia care settings are discussed. Clinical and research implications include: opening up the topic for further debate, implementing staff training about communication and evaluating the impact of these processes.

  4. The Incoherence of Post-Truth

    OpenAIRE

    Taylor, Dom

    2018-01-01

    Ostensibly, there has been a recent rise in ‘post-truth’ thinking (Higgins, 2016; Rochlin, 2017; Speed & Mannion, 2017; Suiter, 2016). The Oxford English Dictionary, which made ‘post-truth’ its word of the year for 2016, defines post-truth as “[r]elating to or denoting circumstances in which objective facts are less influential in shaping political debate or public opinion than appeals to emotion and personal belief” (“Post-truth,” 2017). Going into more detail, post-truth is described not ju...

  5. An existential theory of truth

    African Journals Online (AJOL)

    of the theory of truth expressed in the writings of certain existentialist writers ... gical) indeterminateness of meaning and truth, apart from one's .... dual human perspective and the unavoidable existential tasks of deciphering for oneself what is ...

  6. Small UAV Automatic Ground Collision Avoidance System Design Considerations and Flight Test Results

    Science.gov (United States)

    Sorokowski, Paul; Skoog, Mark; Burrows, Scott; Thomas, SaraKatie

    2015-01-01

    The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center Small Unmanned Aerial Vehicle (SUAV) Automatic Ground Collision Avoidance System (Auto GCAS) project demonstrated several important collision avoidance technologies. First, the SUAV Auto GCAS design included capabilities to take advantage of terrain avoidance maneuvers flying turns to either side as well as straight over terrain. Second, the design also included innovative digital elevation model (DEM) scanning methods. The combination of multi-trajectory options and new scanning methods demonstrated the ability to reduce the nuisance potential of the SUAV while maintaining robust terrain avoidance. Third, the Auto GCAS algorithms were hosted on the processor inside a smartphone, providing a lightweight hardware configuration for use in either the ground control station or on board the test aircraft. Finally, compression of DEM data for the entire Earth and successful hosting of that data on the smartphone was demonstrated. The SUAV Auto GCAS project demonstrated that together these methods and technologies have the potential to dramatically reduce the number of controlled flight into terrain mishaps across a wide range of aviation platforms with similar capabilities including UAVs, general aviation aircraft, helicopters, and model aircraft.
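
    The terrain-scanning idea can be illustrated with a heavily simplified, hypothetical sketch: project one candidate escape trajectory forward and test its altitude against a DEM grid plus a clearance buffer. None of the parameters, names or logic below come from the NASA system; they are assumptions for illustration only.

      # Check whether a straight climbing escape trajectory clears DEM terrain + buffer.
      import numpy as np

      def trajectory_clear(dem, cell_m, start_xy, alt_m, heading_rad, speed_ms,
                           climb_ms, horizon_s=30.0, dt=0.5, buffer_m=50.0):
          """Return True if the predicted trajectory stays above dem + buffer_m."""
          x, y = start_xy
          for t in np.arange(0.0, horizon_s, dt):
              px = x + speed_ms * t * np.cos(heading_rad)
              py = y + speed_ms * t * np.sin(heading_rad)
              i, j = int(py // cell_m), int(px // cell_m)
              if not (0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]):
                  break                                   # left the scanned DEM tile
              if alt_m + climb_ms * t < dem[i, j] + buffer_m:
                  return False
          return True

      if __name__ == "__main__":
          rng = np.random.default_rng(5)
          dem = 500 + 200 * rng.random((200, 200))        # synthetic terrain heights [m]
          ok = trajectory_clear(dem, cell_m=30.0, start_xy=(3000.0, 3000.0), alt_m=900.0,
                                heading_rad=0.3, speed_ms=30.0, climb_ms=3.0)
          print("escape trajectory clear:", ok)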

  7. Truth and the Capability of Learning

    Science.gov (United States)

    Hinchliffe, Geoffrey

    2007-01-01

    This paper examines learning as a capability, taking as its starting point the work of Amartya Sen and Martha Nussbaum. The paper is concerned to highlight the relation between learning and truth, and it does so by examining the idea of a genealogy of truth and also Donald Davidson's coherence theory. Thus the notion of truth is understood to be…

  8. Normativity and deflationary theories of truth

    Directory of Open Access Journals (Sweden)

    Bruno Mölder

    2008-12-01

    Full Text Available It has been argued that deflationary theories of truth stumble over the normativity of truth. This paper maintains that the normativity objection does not pose problems to at least one version of deflationism, minimalism. The rest of the paper discusses truth-related norms, showing that either they do not hold or they are not troublesome for deflationism.

  9. Japanese attitudes towards truth disclosure in cancer.

    Science.gov (United States)

    Tanida, N

    1994-03-01

    Despite the increasing concerns of truth disclosure, most cancer patients are not told the truth about their disease in Japan. The author has tried to provide some insight into this issue by evaluating results from questionnaires given to hospital patients, clients in a mass cancer survey, and doctors of a college hospital. Results showed that 72% of patients and 83% of clients wanted to be told the truth, but only 33% and 34% of them thought that the truth should be told to cancer patients. These attitudes of patients and clients regarding truth disclosure were more positive than those of the general public and health care workers in previous studies. At present, 13% of doctors inform cancer patients of their disease. These trends indicate that the Japanese attitude toward avoiding truth disclosure stems primarily from paternalism but is also influenced by social characteristics including insufficient understanding of this issue. Open discussion involving all factions of society is necessary to attain a better understanding of this issue and to promote eventual truth disclosure.

  10. A comparative study on change vector analysis based change ...

    Indian Academy of Sciences (India)

    Department of Electronics Engineering, Punjab Technical University, ... et al 1998), a semi-automatic double-window flexible pace search (DFPS) threshold ... tion, ground truth data availability, time and money constraints, knowledge and ...

  11. Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.

    Directory of Open Access Journals (Sweden)

    Stephen M Plaza

    Full Text Available The ability to automatically segment an image into distinct regions is a critical aspect in many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as is required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by the similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders-of-magnitude more time consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.
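
    A hedged sketch of the uncertainty-guided idea (not the authors' probabilistic measure): score each automatic merge decision by its binary entropy and route only the most uncertain decisions to a human proofreader. The probabilities below are random placeholders.

      # Rank candidate segment merges by uncertainty and build a review queue.
      import numpy as np

      def entropy(p, eps=1e-12):
          """Binary entropy of a merge probability: 1.0 at p=0.5, 0.0 at p in {0, 1}."""
          p = np.clip(p, eps, 1.0 - eps)
          return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

      def review_queue(merge_probs, budget):
          """Indices of the 'budget' most uncertain candidate merges, most uncertain first."""
          order = np.argsort(-entropy(np.asarray(merge_probs)))
          return order[:budget]

      if __name__ == "__main__":
          rng = np.random.default_rng(6)
          probs = rng.random(1000)          # classifier merge probabilities for 1000 boundaries
          queue = review_queue(probs, budget=25)
          print("review these boundaries first:", queue[:10])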

  12. Automatic structural parcellation of mouse brain MRI using multi-atlas label fusion.

    Directory of Open Access Journals (Sweden)

    Da Ma

    Full Text Available Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation, as well as to the original STAPLE framework.
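
    As a much-reduced stand-in for STEPS/STAPLE fusion, the sketch below performs similarity-weighted majority voting over propagated atlas labels (Python, random placeholder volumes). Weighting each atlas by a single global similarity value is an assumption for illustration; the actual framework uses locally normalised cross correlation and an EM-based STAPLE extension.

      # Similarity-weighted majority voting over propagated atlas label maps.
      import numpy as np

      def weighted_label_fusion(atlas_labels, weights, n_labels):
          """atlas_labels: (n_atlases, *vol_shape) ints; weights: (n_atlases,) similarities."""
          votes = np.zeros((n_labels,) + atlas_labels.shape[1:])
          for lab, w in zip(atlas_labels, weights):
              for k in range(n_labels):
                  votes[k] += w * (lab == k)
          return votes.argmax(axis=0)       # per-voxel label with the largest weighted vote

      if __name__ == "__main__":
          rng = np.random.default_rng(7)
          atlases = rng.integers(0, 4, size=(10, 32, 32, 32))   # 10 propagated parcellations, 4 labels
          ncc = rng.uniform(0.5, 1.0, 10)                        # per-atlas similarity to the target
          fused = weighted_label_fusion(atlases, ncc, n_labels=4)
          print(fused.shape, fused.max())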

  13. Assessment of Machine Learning Algorithms for Automatic Benthic Cover Monitoring and Mapping Using Towed Underwater Video Camera and High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Hassan Mohamed

    2018-05-01

    Full Text Available Benthic habitat monitoring is essential for many applications involving biodiversity, marine resource management, and the estimation of variations over temporal and spatial scales. Nevertheless, both automatic and semi-automatic analytical methods for deriving ecologically significant information from towed camera images are still limited. This study proposes a methodology that enables a high-resolution towed camera with a Global Navigation Satellite System (GNSS) to adaptively monitor and map benthic habitats. First, the towed camera finishes a pre-programmed initial survey to collect benthic habitat videos, which can then be converted to geo-located benthic habitat images. Second, an expert labels a number of benthic habitat images to class habitats manually. Third, attributes for categorizing these images are extracted automatically using the Bag of Features (BOF) algorithm. Fourth, benthic cover categories are detected automatically using Weighted Majority Voting (WMV) ensembles for Support Vector Machines (SVM), K-Nearest Neighbor (K-NN), and Bagging (BAG) classifiers. Fifth, WMV-trained ensembles can be used for categorizing more benthic cover images automatically. Finally, correctly categorized geo-located images can provide ground truth samples for benthic cover mapping using high-resolution satellite imagery. The proposed methodology was tested over Shiraho, Ishigaki Island, Japan, a heterogeneous coastal area. The WMV ensemble exhibited 89% overall accuracy for categorizing corals, sediments, seagrass, and algae species. Furthermore, the same WMV ensemble produced a benthic cover map using a Quickbird satellite image with 92.7% overall accuracy.
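
    A weighted-majority-voting ensemble over SVM, K-NN and Bagging classifiers can be sketched with scikit-learn as below. The random feature vectors stand in for BOF descriptors, and the voting weights are illustrative assumptions (in practice they would come from per-classifier validation performance).

      # Weighted majority voting over SVM, K-NN and Bagging classifiers.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import BaggingClassifier, VotingClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      # Placeholder "image feature" vectors with 4 benthic classes.
      X, y = make_classification(n_samples=400, n_features=60, n_informative=20,
                                 n_classes=4, n_clusters_per_class=1, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      ensemble = VotingClassifier(
          estimators=[("svm", SVC(kernel="rbf")),
                      ("knn", KNeighborsClassifier(n_neighbors=5)),
                      ("bag", BaggingClassifier(n_estimators=50, random_state=0))],
          voting="hard",
          weights=[2, 1, 1],   # assumed weights; normally derived from validation accuracy
      )
      ensemble.fit(X_tr, y_tr)
      print(f"overall accuracy: {ensemble.score(X_te, y_te):.2f}")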

  14. Ground-truthing the Foraminifera-bound Nitrogen Isotope Paleo-proxy in the Modern Sargasso Sea

    Science.gov (United States)

    Smart, S.; Ren, H. A.; Fawcett, S. E.; Conte, M. H.; Rafter, P. A.; Ellis, K. K.; Weigand, M. A.; Sigman, D. M.

    2016-02-01

    We present the nitrogen isotope ratios (δ15N) of planktonic foraminifera, a type of calcifying zooplankton, collected from surface ocean net tows, moored sediment traps and core-top sediments at the Bermuda Atlantic Time-series Study site in the Sargasso Sea between 2009 and 2013. Consistent with previous measurements from low-latitude core-top sediments, the annually averaged δ15N of organic matter bound within the shells of euphotic zone-dwelling foraminifera approximates that of thermocline nitrate, the dominant source of new nitrogen to Sargasso Sea surface waters. Based on net tow collections in the upper 200 m of the water column, we observe no systematic difference between the biomass δ15N and shell-bound δ15N of a given foraminifera species. For multiple species, the δ15N of net tow-collected upper ocean shells is lower than shells from sediment traps (by 0.5-2.1‰) and lower than shells from seafloor sediments (by 0.5-1.4‰). We are currently investigating whether these differences reflect actual processes affecting shell-bound δ15N or instead relate to the different time periods over which the three sample types integrate. The foraminiferal biomass δ15N time-series from the surface Sargasso Sea exhibits significant seasonal variations, with the lowest values in fall and the highest values in spring. The roles of hydrography, biogeochemistry, and ecosystem dynamics in driving these seasonal variations will be discussed. These data from the modern subtropical ocean form part of a greater effort to ground-truth the use of foram-bound δ15N to reconstruct past nutrient conditions, not only as a recorder of the isotopic composition of nitrogen supply in oligotrophic environments but also as a recorder of the degree of nitrate consumption in high-latitude regions such as the Southern Ocean.

  15. PSNet: prostate segmentation on MRI based on a convolutional neural network.

    Science.gov (United States)

    Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei

    2018-04-01

    Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.

  16. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G.; Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-01-01

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
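
    A hedged sketch of the learning component: a random forest regressor maps per-angle anatomical features to a beam score, and the highest-scoring angles are kept for planning. The feature dimensions, the 10-degree angular grid and the choice of seven beams are assumptions for illustration, not the paper's configuration.

      # Random forest regression from anatomical features to a per-angle beam score.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(8)
      n_plans, n_angles, n_feat = 120, 36, 12
      X = rng.normal(size=(n_plans * n_angles, n_feat))   # anatomical features per candidate angle
      y = rng.random(n_plans * n_angles)                  # beam scores derived from approved plans

      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

      # Score all 36 candidate angles of a new patient and keep the best ones.
      X_new = rng.normal(size=(n_angles, n_feat))
      scores = model.predict(X_new)
      best = np.argsort(-scores)[:7] * 10                 # e.g., 7 beams on a 10-degree grid
      print("selected gantry angles [deg]:", best)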

  17. Lying relies on the truth

    NARCIS (Netherlands)

    Debey, E.; De Houwer, J.; Verschuere, B.

    2014-01-01

    Cognitive models of deception focus on the conflict-inducing nature of the truth activation during lying. Here we tested the counterintuitive hypothesis that the truth can also serve a functional role in the act of lying. More specifically, we examined whether the construction of a lie can involve a

  18. Automatic coronary calcium scoring using noncontrast and contrast CT images

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Guanyu, E-mail: yang.list@seu.edu.cn; Chen, Yang; Shu, Huazhong [Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, No. 2, Si Pai Lou, Nanjing 210096 (China); Centre de Recherche en Information Biomédicale Sino-Français (LIA CRIBs), Nanjing 210096 (China); Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing 210096 (China); Ning, Xiufang; Sun, Qiaoyu [Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, No. 2, Si Pai Lou, Nanjing 210096 (China); Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing 210096 (China); Coatrieux, Jean-Louis [INSERM-U1099, Rennes F-35000 (France); Labotatoire Traitement du Signal et de l’Image (LTSI), Université de Rennes 1, Campus de Beaulieu, Bat. 22, Rennes 35042 Cedex (France); Centre de Recherche en Information Biomédicale Sino-Français (LIA CRIBs), Nanjing 210096 (China)

    2016-05-15

    Purpose: Calcium scoring is widely used to assess the risk of coronary heart disease (CHD). Accurate coronary artery calcification detection in noncontrast CT images is a prerequisite step for coronary calcium scoring. Currently, calcified lesions in the coronary arteries are manually identified by radiologists in clinical practice. Thus, in this paper, a fully automatic calcium scoring method was developed to alleviate the workload of radiologists or cardiologists. Methods: The challenge of automatic coronary calcification detection is to discriminate the calcification in the coronary arteries from the calcification in other tissues. Since the anatomy of the coronary arteries is difficult to observe in noncontrast CT images, the contrast CT image of the same patient is used to extract the regions of the aorta, heart, and coronary arteries. Then, a patient-specific region-of-interest (ROI) is generated in the noncontrast CT image according to the segmentation results in the contrast CT image. This patient-specific ROI focuses on the regions in the neighborhood of the coronary arteries for calcification detection, which can eliminate the calcifications in the surrounding tissues. A support vector machine classifier is finally applied to refine the results by removing possible image noise. Furthermore, the calcified lesions in the noncontrast images belonging to the different main coronary arteries are identified automatically using the labeling results of the extracted coronary arteries. Results: Forty datasets from four different CT machine vendors, provided by the MICCAI 2014 Coronary Calcium Scoring (orCaScore) Challenge, were used to evaluate the algorithm. The sensitivity and positive predictive value for the volume of detected calcifications are 0.989 and 0.948. Only one patient out of 40 patients had been assigned to the wrong risk category defined according to Agatston scores (0, 1–100, 101–300, >300) by comparing with the ground
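
    The risk categories mentioned at the end follow the standard Agatston bands; a trivial Python sketch of that final stratification step (the scoring itself, which weights lesion area by peak attenuation, is not reproduced here):

      # Map an Agatston score to the risk categories 0, 1-100, 101-300, >300.
      def agatston_category(score: float) -> str:
          if score <= 0:
              return "0"
          if score <= 100:
              return "1-100"
          if score <= 300:
              return "101-300"
          return ">300"

      if __name__ == "__main__":
          for s in (0, 42, 180, 512):
              print(s, "->", agatston_category(s))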

  19. AND THE TRUTH SHALL SET YOU FREE. TRUTH COMMISSIONS AND CIVIL-MILITARY RELATIONS

    Directory of Open Access Journals (Sweden)

    Michael DELOACH

    2015-04-01

    Full Text Available For societies suffering in the wake of a repressive regime, truth commissions may be a necessary compromise regarding the form of transitional justice pursued, but they can still play a far-reaching role in the democratization of civil-military relations. Because the perpetrators of past abuses are likely to continue to wield some level of power at the time of transition, prosecution of these members may be politically infeasible. Lacking the mandate to prosecute guilty parties or implement recommendations, truth commissions can still lay the foundation for a new era of civil-military relations. By distinguishing contemporary institutions from their past acts, revealing the patterns that allowed abuses to be carried out, and helping garner the political will for reforms, truth commissions can provide the impetus for the security sector reforms necessary to ensure a democratic future.

  20. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    Science.gov (United States)

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-03-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.

  1. A Bayesian truth serum for subjective data.

    Science.gov (United States)

    Prelec, Drazen

    2004-10-15

    Subjective judgments, an essential information source for science and policy, are problematic because there are no public criteria for assessing judgmental truthfulness. I present a scoring method for eliciting truthful subjective data in situations where objective truth is unknowable. The method assigns high scores not to the most common answers but to the answers that are more common than collectively predicted, with predictions drawn from the same population. This simple adjustment in the scoring criterion removes all bias in favor of consensus: Truthful answers maximize expected score even for respondents who believe that their answer represents a minority view.
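
    A hedged sketch of the commonly cited form of the scoring rule: each respondent earns an information score, the log of their chosen answer's actual frequency over the geometric mean of predicted frequencies, plus alpha times a prediction score. Zero-frequency handling and other details are simplified relative to the original paper, and the data below are random placeholders.

      # Simplified Bayesian Truth Serum scores for categorical answers.
      import numpy as np

      def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
          """answers: (n,) ints in [0, K); predictions: (n, K) predicted answer frequencies."""
          n, k = predictions.shape
          x_bar = np.bincount(answers, minlength=k) / n + eps       # actual endorsement frequencies
          y_bar = np.exp(np.log(predictions + eps).mean(axis=0))    # geometric mean of predictions
          info = np.log(x_bar[answers] / y_bar[answers])            # rewards "surprisingly common" answers
          pred = alpha * (x_bar * np.log((predictions + eps) / x_bar)).sum(axis=1)
          return info + pred

      if __name__ == "__main__":
          rng = np.random.default_rng(9)
          answers = rng.integers(0, 3, size=50)
          predictions = rng.dirichlet(np.ones(3), size=50)
          print(bts_scores(answers, predictions)[:5])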

  2. Goedel, truth and proof

    International Nuclear Information System (INIS)

    Peregrin, Jaroslav

    2007-01-01

    The usual way of interpreting Goedel's (1931) incompleteness result is as showing that there is a gap between truth and provability, i.e. that we can never prove everything that is true. Moreover, this result is supposed to show that there are unprovable truths which we can know to be true. This, so the story goes, shows that we are more than machines that are restricted to acting as proof systems. Hence our minds are 'not mechanical'

  3. Automatic detection and counting of cattle in UAV imagery based on machine vision technology (Conference Presentation)

    Science.gov (United States)

    Rahnemoonfar, Maryam; Foster, Jamie; Starek, Michael J.

    2017-05-01

    Beef production is the main agricultural industry in Texas, and livestock are managed in pasture and rangeland which are usually huge in size and not easily accessible by vehicles. The current research method for livestock location identification and counting is visual observation, which is very time consuming and costly. For animals on large tracts of land, manned aircraft may be necessary to count animals; this is noisy, disturbs the animals, and may introduce a source of error in counts. Such manual approaches are expensive, slow and labor intensive. In this paper we study the combination of small unmanned aerial vehicle (sUAV) and machine vision technology as a valuable solution to manual animal surveying. A fixed-wing UAV fitted with GPS and a digital RGB camera for photogrammetry was flown at the Welder Wildlife Foundation in Sinton, TX. Over 600 acres were flown in four UAS flights, and individual photographs were used to develop orthomosaic imagery. To detect animals in UAV imagery, a fully automatic technique was developed based on spatial and spectral characteristics of objects. This automatic technique can even detect small animals that are partially occluded by bushes. Experimental results in comparison to ground truth show the effectiveness of our algorithm.
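
    The detection step can be illustrated with scikit-image's multi-scale Determinant-of-Hessian blob detector, used here as a generic stand-in for the paper's method; the image tile, blob sizes and threshold are synthetic assumptions.

      # Multi-scale blob detection on a synthetic aerial tile containing bright "animals".
      import numpy as np
      from skimage.draw import disk
      from skimage.feature import blob_doh

      rng = np.random.default_rng(10)
      tile = rng.normal(0.1, 0.02, size=(400, 400))         # pasture-like background
      for _ in range(12):                                    # paint bright cattle-sized blobs
          rr, cc = disk((rng.integers(20, 380), rng.integers(20, 380)),
                        radius=6, shape=tile.shape)
          tile[rr, cc] = 0.9

      blobs = blob_doh(tile, min_sigma=3, max_sigma=12, threshold=0.002)
      print(f"detected {len(blobs)} candidate animals")      # rows are (row, col, scale)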

  4. TRUTH AS DETERMINANT OF RELIGIOUS FAITH

    African Journals Online (AJOL)

    Admin

    engendered by either the myths of the religion or the historical personages. The truth we intend to .... personage and source of Islamic religion was an orphan boy Muhammad, born ... The Fundamental Truth about Some Religions of the World.

  5. Heart Truth

    Science.gov (United States)


  6. The Logic of Truth in Paraconsistent Internal Realism

    Directory of Open Access Journals (Sweden)

    Manuel Bremer

    2008-08-01

    Full Text Available The paper discusses which modal principles should hold for a truth operator answering to the truth theory of internal realism. It turns out that the logic of truth in internal realism is isomorphic to the modal system S4.

  7. [Medicine and truth: between science and narrative].

    Science.gov (United States)

    Materia, Enrico; Baglio, Giovanni

    2009-01-01

    To which idea of truth may medicine refer? Evidence-based medicine (EBM) is rooted in the scientific truth. To explain the meaning and to trace the evolution of scientific truth, this article outlines the history of the Scientific Revolution and of the parable of Modernity, up to the arrival of pragmatism and hermeneutics. Here, the concept of truth becomes somehow discomfiting and the momentum leans towards the integration of different points of view. The fuzzy set theory for the definition of disease, as well as the shift from disease to syndrome (which has operational relevance for geriatrics), seems to refer to a more complex perspective on knowledge, albeit one that is less defined as compared to the nosology in use. Supporters of narrative medicine seek the truth in the interpretation of the patients' stories, and take advantage of the medical humanities to find the truth in words, feelings and contact with the patients. Hence, it is possible to mention the parresia, which is the frank communication espoused by stoicism and epicureanism, a technical and ethical quality which allows one to care in the proper way, a true discourse for one's own moral stance. Meanwhile, EBM and narrative medicine are converging towards a point at which medicine is considered a practical knowledge. It is the perspective of complexity that as a zeitgeist explains these multiple instances and proposes multiplicity and uncertainty as key referents for the truth and the practice of medicine.

  8. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-guided Partially-joint Regression Forest Model and Multi-scale Statistical Features

    Science.gov (United States)

    Zhang, Jun; Gao, Yaozong; Wang, Li; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Objective The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. Methods We propose a Segmentation-guided Partially-joint Regression Forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidences from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions to improve the digitization reliability. In addition, we propose a fast vector quantization (VQ) method to extract high-level multi-scale statistical features to describe a voxel's appearance, which has low dimensionality, high efficiency, and is also invariant to the local inhomogeneity caused by artifacts. Results Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2mm. Conclusion Our model has addressed challenges of both inter-patient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitalization. Significance Our automatic landmark digitization method can be used clinically to reduce the labor cost and also improve digitalization consistency. PMID:26625402

  9. STS, symmetry and post-truth.

    Science.gov (United States)

    Lynch, Michael

    2017-08-01

    This essay takes up a series of questions about the connection between 'symmetry' in Science and Technology Studies (STS) and 'post-truth' in contemporary politics. A recent editorial in this journal by Sergio Sismondo argues that current discussions of 'post-truth' have little to do with conceptions of 'symmetry' or with concerns about 'epistemic democracy' in STS, while others, such as Steve Fuller and Harry Collins, insist that there are such connections. The present essay discusses a series of questions about the meaning of 'post-truth' and 'symmetry', and the connections of those concepts to each other and to 'epistemic democracy'. The essay ends with a series of other questions about STS and contemporary politics, and an invitation to further discussions.

  10. Truth, not truce: "common ground" on abortion, a movement within both movements.

    Science.gov (United States)

    Kelly, J R

    1995-01-01

    This sociological study examines the "common ground" movement that arose among abortion activists in the US during the 1980s. The first application of the term "common ground" to joint efforts by abortion activists on both sides of the issue is traced, and its meaning to early organizers is described. Discussion continues on the complicated and elusive efforts on the part of grassroots organizations and conflict resolution groups to practice the common ground approach to abortion. The five characteristics of the seminal common ground group in St. Louis were that it resulted from a combined pro-life and pro-choice initiative, it involved activists who publicly distinguished common ground from moral compromise or political accommodation, the activists remained loyal to their abortion activities, the activists agreed to cooperate in efforts aimed at reducing the pressures on women to abort, and common ground involved identifying the overlaps in emerging social thinking. The conceptual difficulties involved with use of the term are included in the reasons given for its virtual disappearance from abortion reporting in the press, which was busy relaying incidents of violence at abortion clinics. The election of President Clinton also stole the momentum from the common ground movement. While the future of movements based on the concept of "common ground" as envisioned by the St. Louis group remains precarious, depending for success as it does on actually changing society, this use of the term bears witness that conflicting loyalties do not preclude the promotion of common good. This meaning of the term is worth pursuing in cultural controversies such as that posed by abortion.

  11. Go fly a kite : air truthing replaces ground truthing for environmental investigations

    Energy Technology Data Exchange (ETDEWEB)

    Eaton, S.

    2008-05-15

    This article discussed the use of kite aerial photography (KAP) in oil and gas exploration activities. KAP exhibits a minimal environmental footprint while providing high resolution airborne data of the Earth's surface in infrared and a variety of other media. The cost-effective technology is being employed by Alberta's oil and gas operators as well as by the environmental consulting sector. The kites fly at lower elevations than other remote sensing tools and yield better spatial resolution on the ground. KAP can map the Earth's surface at a scale of investigation on the order of 5 to 10 centimetres. The images are placed into a geo-referenced mosaic along with data from poorer resolution remote sensing tools. A KAP kit can be assembled for under $1000. By using infrared KAP images, operators are able to determine the health of muskeg and swamp areas and measure the rate of photosynthesis of plants. KAP is also used by reclamation groups to evaluate troublesome wellsites. The next generation of sensors will include radio-controlled drones and miniature aircraft. 6 figs.

  12. The Truth and Harriet Martineau: Interpreting a Life.

    Science.gov (United States)

    Weiner, Gaby

    This paper explores the difficulty of claims to truth in the analysis of the life of the Victorian feminist, reformer, educationist, and celebrity, Harriet Martineau (1802-76). She was widely known as a truthful person. For example, her contemporary, the poet Elizabeth Barrett Browning, wrote in 1845 that "her love of the truth is proverbial…

  13. Power and Truth in Foucault and Habermas

    OpenAIRE

    Oliveira, Amurabi; Universidade Federal de Santa Catarina

    2014-01-01

    Current paper examines how truth is enmeshed with power in Foucault's and Habermas's theories, highlighting similarities and differences within the two theoretical perspectives. If, on the one hand, truth in Foucault is based on a monologic imposition, on the other hand, Habermas insists on the dialogic understanding of the truth, although in both cases, related to power, at opposite positions, as Habermas himself points out in ‘The Philosophical Discourse of Modernity’. Foucault takes on a c...

  14. Truth, body and religion

    Directory of Open Access Journals (Sweden)

    Jarl-Thure Eriksson

    2011-01-01

    Full Text Available This paper is based on the words of welcome to the symposium on Religion and the Body on 16 June 2010. In a religious context ‘truth’ is like a mantra, a certain imperative to believe in sacred things. The concept of truth and falseness arises, when we as humans compare reality, as we experience it through our senses, with the representation we have in our memory, a comparison of new information with stored information. If we look for the truth, we have to search in the human mind. There we will also find religion.

  15. Ethics and Truth in Archival Research

    Science.gov (United States)

    Tesar, Marek

    2015-01-01

    The complexities of the ethics and truth in archival research are often unrecognised or invisible in educational research. This paper complicates the process of collecting data in the archives, as it problematises notions of ethics and truth in the archives. The archival research took place in the former Czechoslovakia and its turbulent political…

  16. Truth and falsehood an inquiry into generalized logical values

    CERN Document Server

    Shramko, Yaroslav

    2012-01-01

    Here is a thoroughly elaborated logical theory of generalized truth-values, presenting the idea of a trilattice of truth values - a specific algebraic structure with information ordering and two distinct logical orderings, one for truth and another for falsity.

  17. Heart Health: The Heart Truth Campaign 2009

    Science.gov (United States)

    ... one of the celebrities supporting this year's The Heart Truth campaign. Both R&B singer Ashanti (center) ...

  18. Estimating Daily Maximum and Minimum Land Air Surface Temperature Using MODIS Land Surface Temperature Data and Ground Truth Data in Northern Vietnam

    Directory of Open Access Journals (Sweden)

    Phan Thanh Noi

    2016-12-01

    Full Text Available This study aims to evaluate quantitatively the land surface temperature (LST) derived from MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily land air surface temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) and auxiliary data, solving the discontinuity problem of ground measurements. There are no previous studies for Vietnam that have integrated both TERRA and AQUA LST of daytime and nighttime for Ta estimation (using four MODIS LST datasets). In addition, to find out which variables are the most effective for describing the differences between LST and Ta, we have tested several popular methods, such as the Pearson correlation coefficient, stepwise selection, the Bayesian information criterion (BIC), adjusted R-squared and principal component analysis (PCA), on 14 variables (including LST products (four variables), NDVI, elevation, latitude, longitude, day length in hours, Julian day and four variables of the view zenith angle), and we then applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground truth temperature derived from 15 climate stations are time and regional topography dependent. The best results for Ta-max and Ta-min estimation were achieved when we combined both LST daytime and nighttime of TERRA and AQUA and data from the topography analysis.
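
    As a hedged illustration of the simpler end of such models, the sketch below regresses station Ta-max on four MODIS LST values plus two auxiliary variables with ordinary least squares; all values, coefficients and accuracies are random placeholders, not the study's data.

      # Linear regression of station Ta-max on MODIS LST and auxiliary variables.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_absolute_error
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(11)
      n = 3000
      X = np.column_stack([
          rng.normal(30, 5, n),       # TERRA daytime LST [deg C]
          rng.normal(22, 4, n),       # AQUA daytime LST [deg C]
          rng.normal(18, 3, n),       # TERRA nighttime LST [deg C]
          rng.normal(17, 3, n),       # AQUA nighttime LST [deg C]
          rng.uniform(100, 1500, n),  # station elevation [m]
          rng.uniform(10.5, 13.5, n)  # day length [h]
      ])
      ta_max = 0.6 * X[:, 0] + 0.2 * X[:, 1] - 0.003 * X[:, 4] + rng.normal(0, 1.5, n)

      X_tr, X_te, y_tr, y_te = train_test_split(X, ta_max, test_size=0.3, random_state=0)
      model = LinearRegression().fit(X_tr, y_tr)
      print(f"MAE on held-out stations: {mean_absolute_error(y_te, model.predict(X_te)):.2f} deg C")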

  19. Does the Truth Matter in Science?

    Science.gov (United States)

    Lipton, Peter

    2005-01-01

    Is science in the truth business, discovering ever more about an independent and largely unobservable world? Karl Popper and Thomas Kuhn, two of the most important figures in science studies in the 20th century, gave accounts of science that are in some tension with the truth view. Their central claims about science are considered here, along with…

  20. Objective Truth Institution in Criminal Procedure

    Directory of Open Access Journals (Sweden)

    Voltornist O. A.

    2012-11-01

    Full Text Available The article deals with the category of objective truth in criminal procedure and its importance for correct determination of criminal court procedure aims. The author also analyzes the bill draft offered by the RF Committee of Inquiry “On amending in the RF Criminal Procedure Code due to the implementation of objective truth institution in criminal procedure”

  1. Receiver operator characteristic (ROC) analysis without truth

    International Nuclear Information System (INIS)

    Henkelman, R.M.; Kay, I.; Bronskill, M.J.

    1990-01-01

    Receiver operator characteristic (ROC) analysis, the preferred method of evaluating diagnostic imaging tests, requires an independent assessment of the true state of disease, which can be difficult to obtain and is often of questionable accuracy. A new method of analysis is described which does not require independent truth data and which can be used when several accurate tests are being compared. This method uses correlative information to estimate the underlying model of multivariate normal distributions of disease-positive and disease-negative patients. The method is shown to give results equivalent to conventional ROC analysis in a comparison of computed tomography, radionuclide scintigraphy, and magnetic resonance imaging for liver metastasis. When independent truth is available, the method can be extended to incorporate truth data or to evaluate the consistency of the truth data with the imaging data.

  2. Enhanced Site Characterization of the 618-4 Burial Ground

    Energy Technology Data Exchange (ETDEWEB)

    Murray, Christopher J.; Last, George V.; Chien, Yi-Ju

    2001-09-25

    This report describes the results obtained from deployment of the Enhanced Site Characterization System (ESCS) at the Hanford Site's 618-4 Burial Ground. The objective of this deployment was to use advanced geostatistical methods to integrate and interpret geophysical and ground truth data, to map the physical types of waste materials present in unexcavated portions of the burial ground. One issue of particular interest was the number of drums (containing depleted uranium metal shavings or uranium-oxide powder) remaining in the burial ground and still requiring removal. Fuzzy adaptive resonance theory (ART), a neural network classification method, was used to cluster the study area into 3 classes based on their geophysical signatures. Multivariate statistical analyses and discriminant function analysis (DFA) indicated that the drum area as well as a second area (the SW anomaly) had similar geophysical signatures that were different from the rest of the burial ground. Further analysis of the drum area suggested that as many as 770 to 850 drums may remain in that area. Similarities between the geophysical signatures of the drum area and the SW anomaly suggested that excavation of the SW anomaly area also proceed with caution. Deployment of the ESCS technology was successful in integrating multiple geophysical variables and grouping these observations into clusters that are relevant for planning further excavation of the burial ground. However, the success of the technology could not be fully evaluated because reliable ground truth data were not available to enable calibration of the different geophysical signatures against actual waste types.

  3. This Is My (Post) Truth, Tell Me Yours

    Science.gov (United States)

    Powell, Martin

    2017-01-01

    This is a commentary on the article ‘The rise of post-truth populism in pluralist liberal democracies: challenges for health policy.’ It critically examines two of its key concepts: populism and ‘post-truth.’ This commentary argues that there are different types of populism, with unclear links to impacts, and that in some ways, ‘post-truth’ has resonances with arguments advanced in the period at the beginning of the British National Health Service (NHS). In short, ‘post-truth populism’ may be ‘déjà vu all over again,’ and there are multiple (post) truths: this is my (post) truth, tell me yours. PMID:29172380

  4. DESIGN AND DEVELOP A COMPUTER AIDED DESIGN FOR AUTOMATIC EXUDATES DETECTION FOR DIABETIC RETINOPATHY SCREENING

    Directory of Open Access Journals (Sweden)

    C. A. SATHIYAMOORTHY

    2016-04-01

    Full Text Available Diabetic retinopathy is a severe and widespread eye disease which can lead to blindness. One of the main symptoms preceding vision loss is exudates, and blindness could be prevented by applying an early screening process. In existing systems, a Fuzzy C-Means clustering technique is used for detecting the exudates for analysis. The main objective of this paper is to improve the efficiency of exudate detection in diabetic retinopathy images. To do this, a Three-Stage (TS) approach is introduced for detecting and extracting the exudates automatically from retinal images for screening diabetic retinopathy. TS operates on the image at three levels: pre-processing the image, enhancing the image, and detecting the exudates accurately. After successful detection, the detected exudates are classified using the GLCM method to determine the accuracy. The TS approach is implemented using MATLAB software, and its performance is evaluated by comparing the results with those of the existing approach and with hand-drawn ground truth images from an expert ophthalmologist.
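
    The GLCM step can be sketched with scikit-image (version 0.19 or later naming assumed): compute a grey-level co-occurrence matrix for a candidate region and derive standard texture properties for the classifier. The image patch below is synthetic.

      # GLCM texture features for a candidate exudate patch.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      rng = np.random.default_rng(12)
      patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # placeholder patch

      glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                          levels=256, symmetric=True, normed=True)
      features = {prop: graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")}
      print(features)    # feature vector that a classifier would consume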

  5. La verità scientifica - Scientific truth

    Directory of Open Access Journals (Sweden)

    Marco Mazzeo

    2012-10-01

    Full Text Available A scientific theory is not a speculation. On the contrary it is based on facts and observations. Nevertheless the facts and the observations are unable to show us the truth about the world. Indeed to understand the facts or even to discover them through experiments we need a starting theory about the world. Therefore the world is not only discovered by us, but we can say that it is created by our brain. Facts are the constraints for the possible theories and theories are creations of our minds to understand the facts. There are no facts without a theory in mind, and there are no scientific theories about the world without facts. It is obvious therefore that science cannot give any absolute truth but “only” temporary truths which will change with new discoveries and theories. The scientific truth is therefore unstable: after a few decades the concepts become unable to explain the new discoveries and become old, but the new concepts will include the old ones. This is called scientific progress. In this work we analyze all these points by discussing the historical development of gravitational theory from Aristotle to Newton.

  6. Automatic detection of axillary lymphadenopathy on CT scans of untreated chronic lymphocytic leukemia patients

    Science.gov (United States)

    Liu, Jiamin; Hua, Jeremy; Chellappa, Vivek; Petrick, Nicholas; Sahiner, Berkman; Farooqui, Mohammed; Marti, Gerald; Wiestner, Adrian; Summers, Ronald M.

    2012-03-01

    Patients with chronic lymphocytic leukemia (CLL) have an increased frequency of axillary lymphadenopathy. Pretreatment CT scans can be used to upstage patients at the time of presentation and post-treatment CT scans can reduce the number of complete responses. In the current clinical workflow, the detection and diagnosis of lymph nodes is usually performed manually by examining all slices of CT images, which can be time consuming and highly dependent on the observer's experience. A system for automatic lymph node detection and measurement is desired. We propose a computer aided detection (CAD) system for axillary lymph nodes on CT scans in CLL patients. The lung is first automatically segmented and the patient's body in the lung region is extracted to set the search region for lymph nodes. Multi-scale Hessian based blob detection is then applied to detect potential lymph nodes within the search region. Next, the detected potential candidates are segmented by a fast level set method. Finally, features are calculated from the segmented candidates and support vector machine (SVM) classification is utilized for false positive reduction. Two blobness features, Frangi's and Li's, are tested and their free-response receiver operating characteristic (FROC) curves are generated to assess system performance. We applied our detection system to 12 patients with 168 axillary lymph nodes measuring greater than 10 mm. All lymph nodes were manually labeled as ground truth. The system achieved sensitivities of 81% and 85% at 2 false positives per patient for Frangi's and Li's blobness, respectively.
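
    The candidate-generation and classification stages described above can be approximated, per 2-D slice, with off-the-shelf pieces: Determinant-of-Hessian blob detection from scikit-image and an SVM from scikit-learn. This is a simplified stand-in for the paper's 3-D multi-scale Hessian detector and feature set; the image, features and labels are synthetic.

```python
# Simplified 2-D stand-in for the CAD pipeline described above: multi-scale
# Hessian-based blob detection for candidate generation, then an SVM for
# false-positive reduction. Image, features and labels are synthetic.
import numpy as np
from skimage.feature import blob_doh           # Determinant-of-Hessian blobs
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# synthetic "CT slice" containing a few bright blobs on a dark background
yy, xx = np.mgrid[0:256, 0:256]
ct_slice = 0.05 * rng.random((256, 256))
for cy, cx, s in [(60, 80, 6), (150, 170, 10), (200, 60, 8)]:
    ct_slice += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * s ** 2))

# candidate generation: multi-scale Hessian-based blob detection
candidates = blob_doh(ct_slice, min_sigma=3, max_sigma=15, num_sigma=8)
# each row of `candidates` is (row, col, sigma)

# candidate classification: SVM trained on per-candidate features
# (random placeholders stand in for the blobness/intensity/shape features)
train_X = rng.normal(size=(200, 5))
train_y = rng.integers(0, 2, size=200)         # 1 = true lymph node
clf = SVC(kernel="rbf", probability=True).fit(train_X, train_y)

if len(candidates):
    cand_X = rng.normal(size=(len(candidates), 5))
    scores = clf.predict_proba(cand_X)[:, 1]   # swept to build the FROC curve
    print(len(candidates), np.round(scores, 2))
```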

  7. Fully automatic detection of deep white matter T1 hypointense lesions in multiple sclerosis

    Science.gov (United States)

    Spies, Lothar; Tewes, Anja; Suppa, Per; Opfer, Roland; Buchert, Ralph; Winkler, Gerhard; Raji, Alaleh

    2013-12-01

    A novel method is presented for fully automatic detection of candidate white matter (WM) T1 hypointense lesions in three-dimensional high-resolution T1-weighted magnetic resonance (MR) images. By definition, T1 hypointense lesions have similar intensity as gray matter (GM) and thus appear darker than surrounding normal WM in T1-weighted images. The novel method uses a standard classification algorithm to partition T1-weighted images into GM, WM and cerebrospinal fluid (CSF). As a consequence, T1 hypointense lesions are assigned an increased GM probability by the standard classification algorithm. The GM component image of a patient is then tested voxel-by-voxel against GM component images of a normative database of healthy individuals. Clusters (≥0.1 ml) of significantly increased GM density within a predefined mask of deep WM are defined as lesions. The performance of the algorithm was assessed on voxel level by a simulation study. A maximum dice similarity coefficient of 60% was found for a typical T1 lesion pattern with contrasts ranging from WM to cortical GM, indicating substantial agreement between ground truth and automatic detection. Retrospective application to 10 patients with multiple sclerosis demonstrated that 93 out of 96 T1 hypointense lesions were detected. On average 3.6 false positive T1 hypointense lesions per patient were found. The novel method is promising to support the detection of hypointense lesions in T1-weighted images which warrants further evaluation in larger patient samples.
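
    The voxel-by-voxel comparison against a normative database and the ≥0.1 ml cluster criterion can be sketched with NumPy and SciPy. The z-score test, the z > 3 threshold and the deep-WM mask below are illustrative assumptions, not the exact statistics of the paper.

```python
# Sketch of a voxelwise comparison of a patient GM map against a normative
# database, followed by the >= 0.1 ml cluster criterion. The z-score test,
# the z > 3 threshold and the deep-WM mask are illustrative assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
shape = (64, 64, 64)
voxel_volume_ml = 0.001                          # 1 mm^3 voxels -> 0.001 ml

controls = rng.normal(0.3, 0.05, size=(30,) + shape)   # normative GM maps
patient = rng.normal(0.3, 0.05, size=shape)
patient[20:26, 20:26, 20:26] += 0.4              # simulated T1 hypointense lesion

mu = controls.mean(axis=0)
sigma = controls.std(axis=0) + 1e-6
z = (patient - mu) / sigma                       # lesions show increased GM density

deep_wm_mask = np.ones(shape, dtype=bool)        # placeholder for the deep-WM mask
suspect = (z > 3.0) & deep_wm_mask

# keep only clusters of at least 0.1 ml
labels, n = ndimage.label(suspect)
sizes = ndimage.sum(suspect, labels, index=np.arange(1, n + 1))
keep_ids = np.where(sizes * voxel_volume_ml >= 0.1)[0] + 1
lesion_mask = np.isin(labels, keep_ids)
print("detected lesion voxels:", int(lesion_mask.sum()))
```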

  8. A framework for automatic information quality ranking of diabetes websites.

    Science.gov (United States)

    Belen Sağlam, Rahime; Taskaya Temizel, Tugba

    2015-01-01

    Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing the websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on the website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance and evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on Sentiwordnet, making it possible to work with biased and untrusted websites while, at the same time, ensuring content relevance. Measurement: The evaluation measurements used were Pearson correlation, true positives, false positives and accuracy. We tested the framework with a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results that are comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average with p < 0.001, which is greater than that of the other automated methods proposed in the literature (average r score of 0.33).
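
    The reported agreement between automated scores and ground truth can be reproduced in form with SciPy's Pearson correlation; the score vectors below are made-up placeholders.

```python
# Sketch of the evaluation step: Pearson correlation between automated quality
# scores and ground-truth ratings. The score vectors are made-up placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
ground_truth = rng.uniform(0, 10, size=55)              # e.g. expert quality ratings
automated = ground_truth + rng.normal(0, 2, size=55)    # framework's scores

r, p_value = pearsonr(automated, ground_truth)
print(f"r = {r:.2f}, p = {p_value:.3g}")
```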

  9. Verifiably Truthful Mechanisms

    DEFF Research Database (Denmark)

    Branzei, Simina; Procaccia, Ariel D.

    2015-01-01

    the computational sense). Our approach involves three steps: (i) specifying the structure of mechanisms, (ii) constructing a verification algorithm, and (iii) measuring the quality of verifiably truthful mechanisms. We demonstrate this approach using a case study: approximate mechanism design without money...

  10. Can partisan voting lead to truth?

    Science.gov (United States)

    Masuda, Naoki; Redner, S.

    2011-02-01

    We study an extension of the voter model in which each agent is endowed with an innate preference for one of two states that we term as 'truth' or 'falsehood'. Due to interactions with neighbors, an agent that innately prefers truth can be persuaded to adopt a false opinion (and thus be discordant with its innate preference) or the agent can possess an internally concordant 'true' opinion. Parallel states exist for agents that inherently prefer falsehood. We determine the conditions under which a population of such agents can ultimately reach a consensus for the truth, reach a consensus for falsehood, or reach an impasse where an agent tends to adopt the opinion that is in internal concordance with its innate preference with the outcome that consensus is never achieved.

  11. The Medawar Lecture 2004 the truth about science.

    Science.gov (United States)

    Lipton, Peter

    2005-06-29

    The attitudes of scientists towards the philosophy of science is mixed and includes considerable indifference and some hostility. This may be due in part to unrealistic expectation and to misunderstanding. Philosophy is unlikely directly to improve scientific practices, but scientists may find the attempt to explain how science works and what it achieves of considerable interest nevertheless. The present state of the philosophy of science is illustrated by recent work on the 'truth hypothesis', according to which, science is generating increasingly accurate representations of a mind-independent and largely unobservable world. According to Karl Popper, although truth is the aim of science, it is impossible to justify the truth hypothesis. According to Thomas Kuhn, the truth hypothesis is false, because scientists can only describe a world that is partially constituted by their own theories and hence not mind-independent. The failure of past scientific theories has been used to argue against the truth hypothesis; the success of the best current theories has been used to argue for it. Neither argument is sound.

  12. On authenticity: the question of truth in construction and autobiography.

    Science.gov (United States)

    Collins, Sara

    2011-12-01

    Freud was occupied with the question of truth and its verification throughout his work. He looked to archaeology for an evidence model to support his ideas on reconstruction. He also referred to literature regarding truth in reconstruction, where he saw shifts between historical fact and invention, and detected such swings in his own case histories. In his late work Freud pondered over the impossibility of truth in reconstruction by juxtaposing truth with 'probability'. Developments on the role of fantasy and myth in reconstruction and contemporary debates over objectivity have increasingly highlighted the question of 'truth' in psychoanalysis. I will argue that 'authenticity' is a helpful concept in furthering the discussion over truth in reconstruction. Authenticity denotes that which is genuine, trustworthy and emotionally accurate in a reconstruction, as observed within the immediacy of the analyst/patient interaction. As authenticity signifies genuineness in a contemporary context its origins are verifiable through the analyst's own observations of the analytic process itself. Therefore, authenticity is about the likelihood and approximation of historical truth rather than its certainty. In that respect it links with Freud's musings over 'probability'. Developments on writing 'truths' in autobiography mirror those in reconstruction, and lend corroborative support from another source. Copyright © 2011 Institute of Psychoanalysis.

  13. An inconvenient truth

    International Nuclear Information System (INIS)

    Al, Gore

    2007-01-01

    Our climate crisis may at times appear to be happening slowly, but in fact it is happening very quickly-and has become a true planetary emergency. The Chinese expression for crisis consists of two characters. The first is a symbol for danger; the second is a symbol for opportunity. In order to face down the danger that is stalking us and move through it, we first have to recognize that we are facing a crisis. So why is it that our leaders seem not to hear such clarion warnings? Are they resisting the truth because they know that the moment they acknowledge it, they will face a moral imperative to act? Is it simply more convenient to ignore the warnings? Perhaps, but inconvenient truths do not go away just because they are not seen. Indeed, when they are responded to, their significance does not diminish; it grows. (author)

  14. This Is My (Post) Truth, Tell Me Yours Comment on "The Rise of Post-truth Populism in Pluralist Liberal Democracies: Challenges for Health Policy".

    Science.gov (United States)

    Powell, Martin

    2017-05-15

    This is a commentary on the article 'The rise of post-truth populism in pluralist liberal democracies: challenges for health policy.' It critically examines two of its key concepts: populism and 'post truth.' This commentary argues that there are different types of populism, with unclear links to impacts, and that in some ways, 'post-truth' has resonances with arguments advanced in the period at the beginning of the British National Health Service (NHS). In short, 'post-truth populism' may be 'déjà vu all over again,' and there are multiple (post) truths: this is my (post) truth, tell me yours. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  15. Accuracy evaluation of automatic quantification of the articular cartilage surface curvature from MRI

    DEFF Research Database (Denmark)

    Folkesson, Jenny; Dam, Erik B; Olsen, Ole F

    2007-01-01

    for intersubject comparisons. Digital phantoms were created to establish the accuracy of the curvature estimation methods. RESULTS: A comparison of the two curvature estimation methods to ground truth yielded absolute pairwise differences of 1.1%, and 4.8%, respectively. The interscan reproducibility for the two...

  16. Effects of the truth FinishIt brand on tobacco outcomes

    OpenAIRE

    Evans, W. Douglas; Rath, Jessica M.; Hair, Elizabeth C.; Snider, Jeremy Williams; Pitzer, Lindsay; Greenberg, Marisa; Xiao, Haijun; Cantrell, Jennifer; Vallone, Donna

    2017-01-01

    Since 2000, the truth campaign has grown as a social marketing brand. Back then, truth employed branding to compete directly with the tobacco industry. In 2014, the launch of truth FinishIt reflected changes in the brand's strategy, the tobacco control environment, and youth/young adult behavior.Building on a previous validation study, the current study examined brand equity in truth FinishIt, as measured by validated multi-dimensional scales, and tobacco related attitudes, beliefs, and behav...

  17. The Philosophical Problem of Truth in Librarianship

    Science.gov (United States)

    Labaree, Robert V.; Scimeca, Ross

    2008-01-01

    The authors develop a framework for addressing the question of truth in librarianship and in doing so attempt to move considerations of truth closer to the core of philosophical debates within the profession. After establishing ways in which philosophy contributes to social scientific inquiry in library science, the authors examine concepts of…

  18. Review of commonly used remote sensing and ground-based ...

    African Journals Online (AJOL)

    This review provides an overview of the use of remote sensing data, the development of spectral reflectance indices for detecting plant water stress, and the usefulness of field measurements for ground-truthing purposes. Reliable measurements of plant water stress over large areas are often required for management ...

  19. Ground truthing for methane hotspots at Railroad Valley, NV - application to Mars

    Science.gov (United States)

    Detweiler, A. M.; Kelley, C. A.; Bebout, B.; McKay, C. P.; DeMarines, J.; Yates, E. L.; Iraci, L. T.

    2011-12-01

    .7%. Temperature and relative humidity sensors were placed in the playa at 5, 20, and 30 cm below the surface. Since the relative humidity neared 100% (down to 20 cm below the surface), high enough to support microbial life, the observed absence of methane production in the playa itself is likely due to the low POC content, compared to other methane-producing environments. The spatial distribution of methane in combination with the spectral reflectance at the RRV dry lakebed makes it a good Mars analog. The ground truthing and satellite calibration work accomplished at RRV is a good exercise in preparation to identifying the origins of methane observed in the atmosphere of Mars during the upcoming 2012 Mars Science Laboratory and 2016 ExoMars Trace Gas Orbiter missions.

  20. Automatic lung segmentation using control feedback system: morphology and texture paradigm.

    Science.gov (United States)

    Noor, Norliza M; Than, Joel C M; Rijal, Omar M; Kassim, Rosminah M; Yunus, Ashari; Zeki, Amir A; Anzidei, Michele; Saba, Luca; Suri, Jasjit S

    2015-03-01

    Interstitial Lung Disease (ILD) encompasses a wide array of diseases that share some common radiologic characteristics. When diagnosing such diseases, radiologists can be affected by heavy workload and fatigue, thus decreasing diagnostic accuracy. Automatic segmentation is the first step in implementing a Computer Aided Diagnosis (CAD) system that will help radiologists to improve diagnostic accuracy, thereby reducing manual interpretation. The proposed automatic segmentation uses an initial thresholding and morphology-based segmentation coupled with feedback that detects large deviations and applies a corrective segmentation. This feedback is analogous to a control system which allows detection of abnormal or severe lung disease and provides feedback to an online segmentation, improving the overall performance of the system. This feedback system encompasses a texture paradigm. In this study we examined 48 male and 48 female patients, consisting of 15 normal and 81 abnormal cases. A senior radiologist chose the five levels needed for ILD diagnosis. The results of segmentation were displayed by showing the comparison of the automated and ground truth boundaries (courtesy of ImgTracer™ 1.0, AtheroPoint™ LLC, Roseville, CA, USA). The left lung's segmentation performance was 96.52% for Jaccard Index, 98.21% for Dice Similarity, 0.61 mm for Polyline Distance Metric (PDM), -1.15% for Relative Area Error and 4.09% for Area Overlap Error. The right lung's segmentation performance was 97.24% for Jaccard Index, 98.58% for Dice Similarity, 0.61 mm for PDM, -0.03% for Relative Area Error and 3.53% for Area Overlap Error. Overall, the segmentation has a similarity of 98.4%. The proposed segmentation is an accurate and fully automated system.
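
    The overlap metrics quoted above (Jaccard index, Dice similarity, relative area error, area overlap error) can be computed from binary masks in a few lines of NumPy; the two toy masks stand in for the automated and ground-truth lung regions, and the area overlap error uses one common definition that may differ from the paper's.

```python
# Overlap metrics for binary masks, as quoted in the abstract: Jaccard index,
# Dice similarity, relative area error and area overlap error. The two toy
# masks stand in for the automated and ground-truth lung regions.
import numpy as np

auto = np.zeros((128, 128), dtype=bool)
truth = np.zeros((128, 128), dtype=bool)
auto[30:100, 20:90] = True
truth[32:102, 22:92] = True

inter = np.logical_and(auto, truth).sum()
union = np.logical_or(auto, truth).sum()

jaccard = inter / union
dice = 2 * inter / (auto.sum() + truth.sum())
relative_area_error = (auto.sum() - truth.sum()) / truth.sum() * 100
area_overlap_error = (1 - inter / union) * 100   # one common definition, assumed here

print(f"Jaccard {jaccard:.4f}  Dice {dice:.4f}  "
      f"RAE {relative_area_error:+.2f}%  AOE {area_overlap_error:.2f}%")
```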

  1. Automatic Segmentation and Online virtualCT in Head-and-Neck Adaptive Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Peroni, Marta, E-mail: marta.peroni@mail.polimi.it [Department of Bioengineering, Politecnico di Milano, Milano (Italy); Ciardo, Delia [Advanced Radiotherapy Center, European Institute of Oncology, Milano (Italy); Spadea, Maria Francesca [Department of Experimental and Clinical Medicine, Universita degli Studi Magna Graecia, Catanzaro (Italy); Riboldi, Marco [Department of Bioengineering, Politecnico di Milano, Milano (Italy); Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, Pavia (Italy); Comi, Stefania; Alterio, Daniela [Advanced Radiotherapy Center, European Institute of Oncology, Milano (Italy); Baroni, Guido [Department of Bioengineering, Politecnico di Milano, Milano (Italy); Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, Pavia (Italy); Orecchia, Roberto [Advanced Radiotherapy Center, European Institute of Oncology, Milano (Italy); Universita degli Studi di Milano, Milano (Italy); Medical Department, Centro Nazionale di Adroterapia Oncologica, Pavia (Italy)

    2012-11-01

    Purpose: The purpose of this work was to develop and validate an efficient and automatic strategy to generate online virtual computed tomography (CT) scans for adaptive radiation therapy (ART) in head-and-neck (HN) cancer treatment. Method: We retrospectively analyzed 20 patients, treated with intensity modulated radiation therapy (IMRT), for an HN malignancy. Different anatomical structures were considered: mandible, parotid glands, and nodal gross tumor volume (nGTV). We generated 28 virtualCT scans by means of nonrigid registration of simulation computed tomography (CTsim) and cone beam CT images (CBCTs), acquired for patient setup. We validated our approach by considering the real replanning CT (CTrepl) as ground truth. We computed the Dice coefficient (DSC), center of mass (COM) distance, and root mean square error (RMSE) between correspondent points located on the automatically segmented structures on CBCT and virtualCT. Results: Residual deformation between CTrepl and CBCT was below one voxel. Median DSC was around 0.8 for mandible and parotid glands, but only 0.55 for nGTV, because of the fairly homogeneous surrounding soft tissues and of its small volume. Median COM distance and RMSE were comparable with image resolution. No significant correlation between RMSE and initial or final deformation was found. Conclusion: The analysis provides evidence that deformable image registration may contribute significantly in reducing the need of full CT-based replanning in HN radiation therapy by supporting swift and objective decision-making in clinical practice. Further work is needed to strengthen algorithm potential in nGTV localization.

  2. Automatic segmentation and online virtualCT in head-and-neck adaptive radiation therapy.

    Science.gov (United States)

    Peroni, Marta; Ciardo, Delia; Spadea, Maria Francesca; Riboldi, Marco; Comi, Stefania; Alterio, Daniela; Baroni, Guido; Orecchia, Roberto

    2012-11-01

    The purpose of this work was to develop and validate an efficient and automatic strategy to generate online virtual computed tomography (CT) scans for adaptive radiation therapy (ART) in head-and-neck (HN) cancer treatment. We retrospectively analyzed 20 patients, treated with intensity modulated radiation therapy (IMRT), for an HN malignancy. Different anatomical structures were considered: mandible, parotid glands, and nodal gross tumor volume (nGTV). We generated 28 virtualCT scans by means of nonrigid registration of simulation computed tomography (CTsim) and cone beam CT images (CBCTs), acquired for patient setup. We validated our approach by considering the real replanning CT (CTrepl) as ground truth. We computed the Dice coefficient (DSC), center of mass (COM) distance, and root mean square error (RMSE) between correspondent points located on the automatically segmented structures on CBCT and virtualCT. Residual deformation between CTrepl and CBCT was below one voxel. Median DSC was around 0.8 for mandible and parotid glands, but only 0.55 for nGTV, because of the fairly homogeneous surrounding soft tissues and of its small volume. Median COM distance and RMSE were comparable with image resolution. No significant correlation between RMSE and initial or final deformation was found. The analysis provides evidence that deformable image registration may contribute significantly in reducing the need of full CT-based replanning in HN radiation therapy by supporting swift and objective decision-making in clinical practice. Further work is needed to strengthen algorithm potential in nGTV localization. Copyright © 2012 Elsevier Inc. All rights reserved.
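
    The three validation measures used in this study (Dice coefficient, center-of-mass distance and RMSE between corresponding points) can be sketched as below; the masks and point sets are synthetic placeholders in voxel units.

```python
# Sketch of the three validation measures used above: Dice coefficient,
# center-of-mass (COM) distance and RMSE between corresponding points.
# Masks and point sets are synthetic placeholders in voxel units.
import numpy as np
from scipy import ndimage

cbct_mask = np.zeros((60, 60, 60), dtype=bool)
virtualct_mask = np.zeros((60, 60, 60), dtype=bool)
cbct_mask[20:40, 20:40, 20:40] = True
virtualct_mask[22:42, 21:41, 20:40] = True

inter = np.logical_and(cbct_mask, virtualct_mask).sum()
dsc = 2 * inter / (cbct_mask.sum() + virtualct_mask.sum())

com_cbct = np.array(ndimage.center_of_mass(cbct_mask))
com_vct = np.array(ndimage.center_of_mass(virtualct_mask))
com_distance = np.linalg.norm(com_cbct - com_vct)

# RMSE between corresponding surface points (here: random paired points)
rng = np.random.default_rng(0)
points_cbct = rng.normal(size=(100, 3))
points_vct = points_cbct + rng.normal(0, 0.5, size=(100, 3))
rmse = np.sqrt(np.mean(np.sum((points_cbct - points_vct) ** 2, axis=1)))

print(f"DSC {dsc:.3f}  COM distance {com_distance:.2f} voxels  RMSE {rmse:.2f}")
```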

  3. On a Philosophical Motivation for Mutilating Truth Tables

    Directory of Open Access Journals (Sweden)

    Marcos Silva

    2016-06-01

    Full Text Available One of the reasons colours, or better the conceptual organisation of the colour system, could be relevant to the philosophy of logic is that they necessitate some mutilation of truth tables by restricting truth functionality. This paper argues that the so-called ‘Colour Exclusion Problem’, the first great challenge for Wittgenstein’s Tractatus, is a legitimate philosophical motivation for a systematic mutilation of truth tables. It shows how one can express, through these mutilations, some intensional logical relations usually expressed by the Aristotelian Square of Oppositions, as contrariety and subcontrariety.

  4. CRITERIA OF TRUTHFULNESS AND THE SCIENTIFIC QUALITY IN POST-MODERN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    Olga Mukha

    2012-02-01

    Full Text Available This article examines the criteria of truth in post-modern philosophy, taking into account the ways it is defined in both the classical and non-classical traditions. Specific to post-modern philosophy is the absence of a universal language of narration and the traditional methods in which knowledge is recognized as legitimate. Basing himself on these concepts, the author examines the problem of the ideal of scientific quality and the transformations this idea has undergone in contemporary philosophy. Truth is understood basically through two means which govern our relation to truth: the will to truth and the concern for truth. These also appear as defining factors of truth in various types of post-modern philosophy: social-operative, social-political, and aesthetic

  5. Truth and opinion in climate change discourse: the Gore-Hansen disagreement.

    Science.gov (United States)

    Russill, Chris

    2011-11-01

    In this paper, I discuss the "inconvenient truth" strategy of Al Gore. I argue that Gore's notion of truth upholds a conception of science and policy that narrows our understanding of climate change discourse. In one notable exchange, Gore and NASA scientist, James Hansen, disagreed about whether scientific statements based on Hansen's computer simulations were truth or opinion. This exchange is featured in An Inconvenient Truth, yet the disagreement is edited from the film and presented simply as an instance of Hansen speaking "inconvenient truth". In this article, I compare the filmic representation of Hansen's testimony with the congressional record. I place their exchange in a broader historical perspective on climate change disputation in order to discuss the implications of Gore's perspective on truth.

  6. DID RAMSEY EVER ENDORSE A REDUNDANCY THEORY OF TRUTH?

    Directory of Open Access Journals (Sweden)

    María J. Frápolli

    2013-11-01

    Full Text Available This paper deals with Ramsey´s theory of truth and its aim is twofold: on the one hand, it will explain what position about truth Ramsey actually defended, and, on the other hand, we will pursue Ramsey’s insight in the XXth century. When the name of Frank Ramsey is mentioned, one of the things that comes to mind is the theory of truth as redundancy. In the following pages we will argue that Ramsey never supported such a theory, but rather an analysis of truth noticeably similar to the prosentential account. In fact, the very word “pro-sentence” appears for the first time in the XXth Century in Ramsey´s unfinished work “The nature of truth”, written around 1929. Besides, we will show that the prosentential account of truth is a neglected trend throughout the history of analytic philosophy, even though relevant analytic philosophers, such as Prior, Strawson, Williams, Grover and Brandom, have endorsed it.

  7. Women's Heart Disease: Join the Heart Truth Community

    Science.gov (United States)

    Feature: Women's Heart Disease: Join The Heart Truth Community (Past Issues / Winter 2014). National Symbol: The centerpiece of The Heart Truth® is The Red Dress®, which was introduced ...

  8. Automatic polyp detection in colonoscopy videos

    Science.gov (United States)

    Yuan, Zijie; IzadyYazdanabadi, Mohammadhassan; Mokkapati, Divya; Panvalkar, Rujuta; Shin, Jae Y.; Tajbakhsh, Nima; Gurudu, Suryakanth; Liang, Jianming

    2017-02-01

    Colon cancer is the second cancer killer in the US [1]. Colonoscopy is the primary method for screening and prevention of colon cancer, but during colonoscopy a significant number (25% [2]) of polyps (precancerous abnormal growths inside of the colon) are missed; therefore, the goal of our research is to reduce the polyp miss-rate of colonoscopy. This paper presents a method to detect polyps automatically in a colonoscopy video. Our system has two stages: candidate generation and candidate classification. In candidate generation (stage 1), we chose 3,463 frames (including 1,718 with-polyp frames) from a real-time colonoscopy video database. We first applied pre-processing procedures, namely intensity adjustment, edge detection and morphology operations. We extracted each connected component (edge contour) as one candidate patch from the pre-processed image. With the help of ground truth (GT) images, 2 constraints were applied to each candidate patch, dividing and saving them into a polyp group and a non-polyp group. In candidate classification (stage 2), we trained and tested convolutional neural networks (CNNs) with the AlexNet architecture [3] to classify each candidate into the with-polyp or non-polyp class. Each with-polyp patch was processed by rotation, translation and scaling to achieve invariance and obtain a more robust CNN system. We applied leave-2-patients-out cross-validation to this model (4 of 6 cases were chosen as the training set and the remaining 2 as the testing set). The system accuracy and sensitivity are 91.47% and 91.76%, respectively.
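
    The leave-2-patients-out evaluation can be expressed with scikit-learn's grouped cross-validation. In the sketch below, a logistic regression stands in for the AlexNet CNN purely to illustrate the split logic; patch features, labels and patient IDs are placeholders.

```python
# Sketch of leave-2-patients-out cross-validation as described above.
# Patch features, labels and patient IDs are placeholders; a logistic
# regression stands in for the AlexNet CNN, only to illustrate the splits.
import numpy as np
from sklearn.model_selection import LeavePGroupsOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patches = 600
X = rng.normal(size=(n_patches, 20))            # stand-in for patch features
y = rng.integers(0, 2, size=n_patches)          # 1 = with-polyp patch
patient = rng.integers(0, 6, size=n_patches)    # 6 patients in total

lpgo = LeavePGroupsOut(n_groups=2)              # hold out every pair of patients
accs = []
for train_idx, test_idx in lpgo.split(X, y, groups=patient):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))

print(f"{len(accs)} folds, mean accuracy {np.mean(accs):.3f}")
```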

  9. The truth on journalism: relations between its practice and discourse

    Directory of Open Access Journals (Sweden)

    Daiane Bertasso Ribeiro

    2011-04-01

    Full Text Available This article proposes a theoretical approach to the relations that journalism establishes with the concept of truth. The notion of truth in Foucault leads the debate. This reflection centers on how journalism builds discursive strategies that produce effects of truth in its reports. The journalistic discourse presents itself as truthful, although the discourse through which it constructs the world is a result of rules, practices and values. The debate on “truth” allows us to comprehend the complexity and particularities of journalism as a discursive practice that is reflected in the social knowledge of reality.

  10. Multi-feature-based plaque characterization in ex vivo MRI trained by registration to 3D histology

    DEFF Research Database (Denmark)

    Engelen, Arna van; Niessen, Wiro J.; Klein, Stefan

    2012-01-01

    . The ground truth for fibrous, necrotic and calcified tissue was provided by histology and micro-CT in 12 carotid plaque specimens. Semi-automatic registration of a 3D stack of histological slices and micro-CT images to MRI allowed for 3D rotations and inplane deformations of histology. By basing voxelwise...

  11. Realism without truth: a review of Giere's science without laws and scientific perspectivism.

    Science.gov (United States)

    Hackenberg, Timothy D

    2009-05-01

    An increasingly popular view among philosophers of science is that of science as action-as the collective activity of scientists working in socially-coordinated communities. Scientists are seen not as dispassionate pursuers of Truth, but as active participants in a social enterprise, and science is viewed on a continuum with other human activities. When taken to an extreme, the science-as-social-process view can be taken to imply that science is no different from any other human activity, and therefore can make no privileged claims about its knowledge of the world. Such extreme views are normally contrasted with equally extreme views of classical science, as uncovering Universal Truth. In Science Without Laws and Scientific Perspectivism, Giere outlines an approach to understanding science that finds a middle ground between these extremes. He acknowledges that science occurs in a social and historical context, and that scientific models are constructions designed and created to serve human ends. At the same time, however, scientific models correspond to parts of the world in ways that can legitimately be termed objective. Giere's position, perspectival realism, shares important common ground with Skinner's writings on science, some of which are explored in this review. Perhaps most fundamentally, Giere shares with Skinner the view that science itself is amenable to scientific inquiry: scientific principles can and should be brought to bear on the process of science. The two approaches offer different but complementary perspectives on the nature of science, both of which are needed in a comprehensive understanding of science.

  12. REALISM WITHOUT TRUTH: A REVIEW OF GIERE'S SCIENCE WITHOUT LAWS AND SCIENTIFIC PERSPECTIVISM

    Science.gov (United States)

    Hackenberg, Timothy D

    2009-01-01

    An increasingly popular view among philosophers of science is that of science as action—as the collective activity of scientists working in socially-coordinated communities. Scientists are seen not as dispassionate pursuers of Truth, but as active participants in a social enterprise, and science is viewed on a continuum with other human activities. When taken to an extreme, the science-as-social-process view can be taken to imply that science is no different from any other human activity, and therefore can make no privileged claims about its knowledge of the world. Such extreme views are normally contrasted with equally extreme views of classical science, as uncovering Universal Truth. In Science Without Laws and Scientific Perspectivism, Giere outlines an approach to understanding science that finds a middle ground between these extremes. He acknowledges that science occurs in a social and historical context, and that scientific models are constructions designed and created to serve human ends. At the same time, however, scientific models correspond to parts of the world in ways that can legitimately be termed objective. Giere's position, perspectival realism, shares important common ground with Skinner's writings on science, some of which are explored in this review. Perhaps most fundamentally, Giere shares with Skinner the view that science itself is amenable to scientific inquiry: scientific principles can and should be brought to bear on the process of science. The two approaches offer different but complementary perspectives on the nature of science, both of which are needed in a comprehensive understanding of science. PMID:19949495

  13. Truthfulness in science teachers’ bodily and verbal actions

    DEFF Research Database (Denmark)

    Daugbjerg, Peer

    2013-01-01

    A dramaturgical approach to teacher’s personal bodily and verbal actions is applied through the vocabulary of truthfulness. Bodily and verbal actions have been investigated among Danish primary and lower secondary school science teachers based on their narratives and observations of their classroom...... actions. The analysis shows how science teachers engage truthfully in pupil relations through an effort of applying classroom management, among other things. In all, this indicates that if science education research wants to understand science teachers’ personal relations to teaching science it could...... be beneficial to address the truthfulness of science teachers’ narratives and actions....

  14. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    Science.gov (United States)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
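
    The average symmetric surface distance reported above can be computed from two binary masks with distance transforms; the masks below are toy placeholders and an isotropic 1 mm voxel spacing is assumed.

```python
# Average symmetric surface distance (ASSD) between two binary masks using
# distance transforms. Masks are toy placeholders; 1 mm isotropic voxel
# spacing is assumed.
import numpy as np
from scipy import ndimage

def surface(mask):
    """Boolean surface of a binary mask (voxels removed by a 1-voxel erosion)."""
    return mask & ~ndimage.binary_erosion(mask)

def assd(a, b, spacing=1.0):
    surf_a, surf_b = surface(a), surface(b)
    # distance from every voxel to the nearest surface voxel of the other mask
    dist_to_b = ndimage.distance_transform_edt(~surf_b) * spacing
    dist_to_a = ndimage.distance_transform_edt(~surf_a) * spacing
    d_ab = dist_to_b[surf_a]
    d_ba = dist_to_a[surf_b]
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))

auto = np.zeros((60, 60, 60), dtype=bool)
truth = np.zeros((60, 60, 60), dtype=bool)
auto[20:40, 20:40, 20:40] = True
truth[21:41, 20:40, 20:40] = True

print(f"ASSD = {assd(auto, truth):.2f} mm")
```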

  15. The Truth of Sacred Scripture

    Directory of Open Access Journals (Sweden)

    Tomasz Jelonek

    2006-09-01

    Full Text Available This article presents the history of the contradiction between science and the Bible and how it was resolved in the Dogmatic Constitution on Divine Revelation Dei Verbum of the Second Vatican Council. Since biblical truth was given to us “for the sake of our salvation,” and not in order to teach us natural science or history for their own sake, Sacred Scripture cannot be fairly judged to be in error when it sometimes presents historical or scientific truth in a less complete, less detailed, more popular, or more imprecise (i.e. merely approximate) fashion than would be acceptable in modern texts dedicated formally to those disciplines.

  16. The Metaphysical Assumptions of the Conception of Truth in Martin Smiglecki’s Logic

    Directory of Open Access Journals (Sweden)

    Tomasz Pawlikowski

    2015-06-01

    Full Text Available The central element of the concept of truth in Smiglecki’s Logica (1618) is his approach to formulating definitions. Where the establishing of the truth is concerned, he always points to compliance at the level of the community (conformitas) in respect of whether the intellectual recognition of a thing or things is in accordance with its intellectual equivalent, or the principles behind the latter, where these are understood as designating the corresponding idea inherent in the intellect of God. This is a form of the classical definition of truth --- similar to that used by St. Thomas Aquinas --- with a wide scope of applicability: to the field of existence (transcendental truth), to cognition and language (logical truth), and even to moral beliefs (moral rightness). Smiglecki distinguishes three types of truth: truth assigned to being, truth assigned to cognition, and truth assigned to moral convictions. Of these, the first is identified with transcendental truth, while the second is attributed not only to propositions and sentences, but also to concepts. The truth of concepts results from compliance with things by way of representation, while the truth of propositions and sentences issues from a compliance with things involving the implementation of some form of expression or other. Logical truth pertains to propositions rather than concepts. The kind of moral truth he writes about is what we would now be more likely to call “truthfulness”. With the exception of moral truth, which he defined as compliance of a statement with someone’s internal thoughts, Smiglecki considers every kind of truth to be a conditioned state of the object of knowledge. He says (a) that the ultimate object of reference of human cognitive functioning is a real being, absolutely true by virtue of compliance with its internal principles and their idea as present in the intellect of God, and (b) that the compatibility of human cognition with a real being is the ultimate

  17. A Robust Bayesian Truth Serum for Non-binary Signals

    OpenAIRE

    Radanovic, Goran; Faltings, Boi

    2013-01-01

    Several mechanisms have been proposed for incentivizing truthful reports of private signals owned by rational agents, among them the peer prediction method and the Bayesian truth serum. The robust Bayesian truth serum (RBTS) for small populations and binary signals is particularly interesting since it does not require a common prior to be known to the mechanism. We further analyze the problem of the common prior not known to the mechanism and give several results regarding the restrictions ...

  18. Descartes on the Creation of the Eternal Truths

    Directory of Open Access Journals (Sweden)

    Danielle Macbeth

    2017-06-01

    Full Text Available On 15 April 1630, in a letter to Mersenne, Descartes announced that on his view God creates the truths of mathematics. Descartes returned to the theme in subsequent letters and some of his Replies but nowhere is the view systematically developed and defended. It is not clear why Descartes came to espouse the creation doctrine, nor even what exactly it is. Some have argued that his motivation was theological, that God creates the eternal truths, including the truths of logic, because and insofar as God is omnipotent and the creator of all things. I develop and defend a different reading according to which Descartes was led to espouse the creation doctrine by a fundamental shift in his understanding of the correct mode of inquiry in metaphysics and mathematics: by 1630, the God-created truths came to play the role in inquiry that until then, in the Rules for the Direction of the Mind, had been played by images.

  19. The source of the truth bias: Heuristic processing?

    Science.gov (United States)

    Street, Chris N H; Masip, Jaume

    2015-06-01

    People believe others are telling the truth more often than they actually are; this is called the truth bias. Surprisingly, when a speaker is judged at multiple points across their statement the truth bias declines. Previous claims argue this is evidence of a shift from (biased) heuristic processing to (reasoned) analytical processing. In four experiments we contrast the heuristic-analytic model (HAM) with alternative accounts. In Experiment 1, the decrease in truth responding was not the result of speakers appearing more deceptive, but was instead attributable to the rater's processing style. Yet contrary to HAMs, across three experiments we found the decline in bias was not related to the amount of processing time available (Experiments 1-3) or the communication channel (Experiment 2). In Experiment 4 we found support for a new account: that the bias reflects whether raters perceive the statement to be internally consistent. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  20. Truth in Philosophy

    Directory of Open Access Journals (Sweden)

    Tibor R. Machan

    2011-03-01

    Full Text Available Can there be truth in philosophy? A problem: it is philosophy, its various schools, that advances what counts as true versus false, how to go about making the distinction. This is what I wish to focus on here and see if some coherent, sensible position could be reached on the topic.

  1. Keep Changing Your Beliefs, Aiming for the Truth

    NARCIS (Netherlands)

    Baltag, Alexandru; Smets, Sonja

    We investigate the process of truth-seeking by iterated belief revision with higher-level doxastic information. We elaborate further on the main results in Baltag and Smets (Proceedings of TARK, 2009a, Proceedings of WOLLIC'09 LNAI 5514, 2009b), applying them to the issue of convergence to truth. We

  2. Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.

    Science.gov (United States)

    Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej

    2015-09-01

    CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors, as well as missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth of CP rank and prevent the overfitting problem, even when a large amount of entries are missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
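
    The CP model and the completion task can be illustrated in plain NumPy: a rank-R tensor is a sum of outer products of factor-matrix columns, and completion is judged on held-out entries. The Bayesian inference with automatic rank determination itself is beyond this short sketch; sizes, rank and noise levels are arbitrary.

```python
# Illustration of the CP model and of scoring completion on missing entries:
# a rank-R tensor is a sum of outer products of factor-matrix columns. The
# Bayesian inference with automatic rank determination is not reproduced here;
# "estimated" factors are just perturbed copies of the true ones.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 20, 25, 30, 3                       # tensor sizes and CP rank

A = rng.normal(size=(I, R))
B = rng.normal(size=(J, R))
C = rng.normal(size=(K, R))
tensor = np.einsum('ir,jr,kr->ijk', A, B, C)     # rank-R ground-truth tensor

observed = rng.random(tensor.shape) > 0.7        # keep roughly 30% of the entries

# stand-in for estimated factors (a real method would fit them to the
# observed entries only)
A_hat = A + 0.05 * rng.normal(size=A.shape)
B_hat = B + 0.05 * rng.normal(size=B.shape)
C_hat = C + 0.05 * rng.normal(size=C.shape)
reconstruction = np.einsum('ir,jr,kr->ijk', A_hat, B_hat, C_hat)

missing = ~observed
rmse = np.sqrt(np.mean((reconstruction[missing] - tensor[missing]) ** 2))
print(f"RMSE on missing entries: {rmse:.4f}")
```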

  3. Initiating GrabCut by Color Difference for Automatic Foreground Extraction of Passport Imagery

    DEFF Research Database (Denmark)

    Sangüesa, Adriá Arbués; Jørgensen, Nicolai Krogh; Larsen, Christian Aagaard

    2016-01-01

    photo. Having gathered our own dataset and generated ground truth images, promising results are obtained in terms of F1-scores, with a maximum mean of 0.975 among all the images, improving the performance of GrabCut in all cases. Some future work directions are given for those unsolved issues that were...
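
    With OpenCV, GrabCut can be started from a mask rather than a user-drawn rectangle, and pre-seeding that mask from a colour-difference rule is one way to automate the initialisation in the spirit of this record. The synthetic portrait, border-colour heuristic and threshold below are assumptions for illustration, not the method evaluated above.

```python
# Sketch of initialising OpenCV's GrabCut from a colour-difference mask instead
# of a user-drawn rectangle. The synthetic portrait, border-colour heuristic and
# threshold are assumptions for illustration only.
import numpy as np
import cv2

# synthetic "passport photo": a head-and-shoulders shape on a plain backdrop (BGR)
img = np.full((240, 180, 3), (200, 180, 160), np.uint8)
cv2.circle(img, (90, 80), 40, (60, 70, 120), -1)             # stand-in head
cv2.rectangle(img, (50, 120), (130, 239), (90, 60, 50), -1)  # stand-in shoulders

# estimate the background colour from the image border
border = np.concatenate([img[0, :], img[-1, :], img[:, 0], img[:, -1]])
bg_colour = border.reshape(-1, 3).mean(axis=0)

# colour difference to the estimated background seeds the GrabCut mask
diff = np.linalg.norm(img.astype(np.float32) - bg_colour, axis=2)
mask = np.full(img.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)  # probable background
mask[diff > 60] = cv2.GC_PR_FGD                               # probable foreground

bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

foreground = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
cv2.imwrite("foreground.png", img * foreground[:, :, None])
```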

  4. An eigenvalue approach for the automatic scaling of unknowns in model-based reconstructions: Application to real-time phase-contrast flow MRI.

    Science.gov (United States)

    Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens

    2017-12-01

    The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
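
    The core numerical ingredient, estimating the maximal eigenvalue of a self-adjoint positive-definite operator by power iteration, is easy to sketch in NumPy; the matrix below is a random symmetric positive-definite stand-in for the scaling operator of the reconstruction.

```python
# Power iteration for the maximal eigenvalue of a self-adjoint positive-definite
# matrix, the numerical ingredient described above. The matrix is a random SPD
# stand-in for the scaling operator in the model-based reconstruction.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(200, 200))
A = M @ M.T + 200 * np.eye(200)            # symmetric positive definite

def max_eigenvalue(A, n_iter=100, tol=1e-10):
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)        # converges to the largest eigenvalue
        x = y / lam_new
        if abs(lam_new - lam) < tol * lam_new:
            break
        lam = lam_new
    return lam_new

print("power iteration:", max_eigenvalue(A))
print("numpy eigvalsh :", np.linalg.eigvalsh(A)[-1])
```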

  5. Helping medical students to acquire a deeper understanding of truth-telling.

    Science.gov (United States)

    Hurst, Samia A; Baroffio, Anne; Ummel, Marinette; Burn, Carine Layat

    2015-01-01

    Truth-telling is an important component of respect for patients' self-determination, but in the context of breaking bad news, it is also a distressing and difficult task. We investigated the long-term influence of a simulated patient-based teaching intervention, integrating learning objectives in communication skills and ethics into students' attitudes and concerns regarding truth-telling. We followed two cohorts of medical students from the preclinical third year to their clinical rotations (fifth year). Open-ended responses were analysed to explore medical students' reported difficulties in breaking bad news. This intervention was implemented during the last preclinical year of a problem-based medical curriculum, in collaboration between the doctor-patient communication and ethics programs. Over time, concerns such as empathy and truthfulness shifted from a personal to a relational focus. Whereas 'truthfulness' was a concern for the content of the message, 'truth-telling' included concerns on how information was communicated and how realistically it was received. Truth-telling required empathy, adaptation to the patient, and appropriate management of emotions, both for the patient's welfare and for a realistic understanding of the situation. Our study confirms that an intervention confronting students with a realistic situation succeeds in making them more aware of the real issues of truth-telling. Medical students deepened their reflection over time, acquiring a deeper understanding of the relational dimension of values such as truth-telling, and honing their view of empathy.

  6. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    Science.gov (United States)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
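
    The weighted k-nearest-neighbour step can be written directly with scikit-learn; the feature vectors and cloud-type labels below are placeholders for the 45 texture features and the observer-assigned classes.

```python
# Weighted k-nearest-neighbour classification of cloud type from texture
# features, as in the record above. Features and labels are placeholders for
# the 45 texture features and the observer-assigned ground-truth classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 45))                 # 45 texture features per sky image
y = rng.integers(0, 4, size=400)               # e.g. CB, TCU, stratiform, clear

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_tr, y_tr)
print("hold-out accuracy:", knn.score(X_te, y_te))
```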

  7. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action.

    Science.gov (United States)

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan

    2018-01-01

    Computer assisted technologies based on algorithmic software segmentation are an increasing topic of interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach to be used in clinical practice. In this retrospective, randomized, controlled trial the accuracy and accordance of the open-source based segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy, using 10 CT lower jaw data sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxels could be achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p 0.94) for any of the comparisons made between the two groups. Complete functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed by the presented interactive open-source based approach. In the cranio-maxillofacial complex the used method could represent an algorithmic alternative for image-based segmentation in the clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Due to an open-source basis the used method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with a
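
    The two headline metrics, Dice score and Hausdorff distance, can be computed from a segmentation and its ground truth with NumPy and SciPy; the masks below are toy placeholders and distances are reported in voxels.

```python
# Dice score and (symmetric) Hausdorff distance between an algorithmic
# segmentation and the manual ground truth, the two assessment parameters used
# above. Masks are toy placeholders; distances are in voxel units.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

seg = np.zeros((80, 80), dtype=bool)
truth = np.zeros((80, 80), dtype=bool)
seg[20:60, 25:65] = True
truth[22:62, 24:64] = True

inter = np.logical_and(seg, truth).sum()
dice = 2 * inter / (seg.sum() + truth.sum())

pts_seg = np.argwhere(seg)
pts_truth = np.argwhere(truth)
hd = max(directed_hausdorff(pts_seg, pts_truth)[0],
         directed_hausdorff(pts_truth, pts_seg)[0])

print(f"Dice {dice * 100:.1f}%  Hausdorff {hd:.2f} voxels")
```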

  8. News, truth and crime: the Westray disaster and its aftermath

    Energy Technology Data Exchange (ETDEWEB)

    McMullan, J.L. [Saint Mary's University, Halifax, NS (Canada). Department of Sociology and Criminology

    2005-07-01

    A study of the way the media portrayed the Westray Mine disaster and its aftermath over the period 1992 to 2002 is presented. The chapter titles are: power, discourse, and the production of news as truth; the explosion and its aftermath; studying the press and Westray; the press and the presentation of Westray's truth; and the politics of truth and the invisibility of corporate crime. News articles reporting the accident and its outcome were sampled, coded, and evaluated by content analysis. It is concluded that the various media represented alternative truths, but did not label the corporation as criminal. This was missing from the media's reporting of the disaster.

  9. Semi-automatic handling of meteorological ground measurements using WeatherProg: prospects and practical implications

    Science.gov (United States)

    Langella, Giuliano; Basile, Angelo; Bonfante, Antonello; De Mascellis, Roberto; Manna, Piero; Terribile, Fabio

    2016-04-01

    WeatherProg is a computer program for the semi-automatic handling of data measured at ground stations within a climatic network. The program performs a set of tasks ranging from gathering raw point-based sensor measurements to the production of digital climatic maps. Originally the program was developed as the baseline asynchronous engine for weather records management within the SOILCONSWEB Project (LIFE08 ENV/IT/000408), in which daily and hourly data were used to run water balance in the soil-plant-atmosphere continuum or pest simulation models. WeatherProg can be configured to automatically perform the following main operations: 1) data retrieval; 2) data decoding and ingestion into a database (e.g. SQL based); 3) data checking to recognize missing and anomalous values (using a set of differently combined checks including logical, climatological, spatial, temporal and persistence checks); 4) infilling of data flagged as missing or anomalous (deterministic or statistical methods); 5) spatial interpolation based on alternative/comparative methods such as inverse distance weighting, iterative regression kriging, and a weighted least squares regression (based on physiography), using an approach similar to PRISM; 6) data ingestion into a geodatabase (e.g. PostgreSQL+PostGIS or rasdaman). There is an increasing demand for digital climatic maps both for research and development (there is a gap between the many scientific modelling approaches that require digital climate maps and the available gauged measurements) and for practical applications (e.g. the need to improve the management of weather records, which in turn raises the support provided to farmers). The demand is particularly burdensome considering the requirement to handle climatic data at the daily (e.g. in soil hydrological modelling) or even at the hourly time step (e.g. risk modelling in phytopathology). The key advantage of WeatherProg is the ability to perform all the required operations and
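
    Of the interpolation options listed in step 5, inverse distance weighting is the simplest to sketch; the station coordinates, values and power parameter below are illustrative and unrelated to the project's data.

```python
# Minimal inverse distance weighting (IDW) interpolation of station values onto
# a grid, one of the spatial interpolation options listed in step 5 above.
# Station coordinates, values and the power parameter are illustrative.
import numpy as np

rng = np.random.default_rng(0)
stations = rng.uniform(0, 100, size=(25, 2))       # station x, y coordinates (km)
values = rng.normal(15, 3, size=25)                # e.g. daily mean temperature (degC)

def idw(points, vals, targets, power=2.0, eps=1e-12):
    # pairwise distances between every target location and every station
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    return (w * vals).sum(axis=1) / w.sum(axis=1)

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = idw(stations, values, grid).reshape(gx.shape)
print(field.shape, field.min(), field.max())
```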

  10. Some Notes (with Badiou and Žižek) on Event/Truth/Subject/Militant Community in Jean-Paul Sartre's Political Thought

    Directory of Open Access Journals (Sweden)

    Erik M. Vogt

    2015-12-01

    Full Text Available The main object of this paper is to examine the new philosophical frame proposed by Alain Badiou and Slavoj Žižek and to show that it implies some traces of Sartre's philosophical and political heritage. According to the project of Alain Badiou and Slavoj Žižek, one should no longer accept today's constellation of freedom, particularistic truth and democracy, but (re)inscribe the issues of freedom and universal truth into a political project that attempts to re-activate a thinking of revolution. Their thinking consists in the wager that it is still possible to provide a philosophical frame for this leftist emancipatory position that claims the dimension of the universal against the vicious circle of capitalist globalization-cum-particularization and, by following Marx's claim that there are formal affinities between the ambitions of emancipatory politics and the working mode of capitalism, takes up the struggle of universalism against globalization (capital). It is only through this struggle for the universal that the intertwined processes of a constant expansion of the automatism of capital and "a process of fragmentation into closed identities," accompanied by "the culturalist and relativist ideology" (Badiou), can be suspended. It is precisely this constellation of revolutionary act, universal truth, subject, and militant community that reveals some similarities with Sartre's concepts of the subject, the revolutionary action, the militant community a.o.

  11. Auditory Figure-Ground Segregation is Impaired by High Visual Load

    OpenAIRE

    Lavie, Nilli; Chait, Maria; Molloy, Katharine

    2017-01-01

    Figure-ground segregation is fundamental to listening in complex acoustic environments. An ongoing debate pertains to whether segregation requires attention or is 'automatic' and pre-attentive. In this magnetoencephalography (MEG) study we tested a prediction derived from Load Theory of attention (1) that segregation requires attention, but can benefit from the automatic allocation of any 'leftover' capacity under low load. Complex auditory scenes were modelled with Stochastic Figure Ground s...

  12. Bargaining for Truth and Reconciliation in South Africa: A Game ...

    African Journals Online (AJOL)

    Bargaining for Truth and Reconciliation in South Africa: A Game-Theoretic Analysis. ... Using game-theoretic analysis, the authors model the truth-amnesty game and predict the optimal commission strategy. ... AJOL African Journals Online.

  13. The Autobiographical Photo-textual Devices. Rhetorics and Truth

    Directory of Open Access Journals (Sweden)

    Roberta Coglitore

    2014-05-01

    Full Text Available The case of autobiographical photo-texts is to be analyzed, first of all, as an autobiographical writing that feels the need to express itself by other means; secondly, as a specific rhetorical practice that chooses the image, next to the word, as a further persuasive force; finally, as a very special case of icono-texts, which uses some variety of the connection between the verbal and the visual. It is not only a matter of analyzing how the cooperation between photographs and autobiographical writing works, that is, through which connectors – frames, white space, overlays and captions – but also of understanding what the functions of the photographs are in relation to literature. This is in order to understand what truth is affirmed in the examples chosen: Franca Valeri, Grégoire Bouillier, Roland Barthes, Winfried G. Sebald, Lalla Romano, Jovanotti, Edward Said, Azar Nafisi, Vladimir Nabokov, André Breton, Hannah Höch, Annie Ernaux. Do photographs expose, confirm, add to or resist the truth expressed by the literary side? And if narrative expresses the truth and the resistance to the truth of the author himself, what does the photo resist while showing?

  14. Design and Feasibility Testing of the truth FinishIt Tobacco Countermarketing Brand Equity Scale.

    Science.gov (United States)

    Evans, W Douglas; Rath, Jessica; Pitzer, Lindsay; Hair, Elizabeth C; Snider, Jeremy; Cantrell, Jennifer; Vallone, Donna

    2016-07-01

    The original truth campaign was a branded, national smoking prevention mass media effort focused on at-risk youth ages 12-17. Today the truth brand focuses on the goal of finishing tobacco (truth FinishIt). There have been significant changes in the tobacco control landscape, leading FinishIt to focus on 15- to 21-year-olds. The present article reports on formative research and media monitoring data collected to pilot test a new truth FinishIt brand equity scale. The goals of this study were to (a) content analyze truth FinishIt mass media ads, (b) assess truth's social media and followers' perceptions of truth's digital brand identity, and (c) develop and feasibility test a new version of the truth FinishIt brand equity scale using data from an existing Truth Initiative media monitoring study. Through factor analysis, we identified a brand equity scale, as in previous research, consisting of 4 main constructs: brand loyalty, leadership/satisfaction, personality, and awareness. Targeted truth attitudes and beliefs about social perceptions, acceptability, and industry-related beliefs were regressed on the higher order factor and each of the 4 individual brand equity factors. Ordinary least squares regression models generally showed associations in the expected directions (positive for anti-tobacco and negative for pro-tobacco) between targeted attitudes/beliefs and truth FinishIt brand equity. This study succeeded in developing and validating a new truth FinishIt brand equity scale. The scale may be a valuable metric for future campaign evaluation. Future studies should examine the effects of truth FinishIt brand equity on tobacco use behavioral outcomes over time.

  15. Bargaining for Truth and Reconciliation in South Africa: A Game ...

    African Journals Online (AJOL)

    Bargaining for Truth and Reconciliation in South Africa: A Game-Theoretic Analysis. ... Using game-theoretic analysis, the authors model the truth-amnesty game and predict the optimal commission strategy. ... AJOL African Journals Online.

  16. Automatic classification techniques for type of sediment map from multibeam sonar data

    Science.gov (United States)

    Zakariya, R.; Abdullah, M. A.; Che Hasan, R.; Khalil, I.

    2018-02-01

    A sediment map can provide important information for various applications such as oil drilling and environmental and pollution studies. A study on sediment mapping was conducted at a natural reef (rock) in Pulau Payar using Sound Navigation and Ranging (SONAR) technology, namely the Multibeam Echosounder R2-Sonic. This study aims to determine sediment type by obtaining backscatter and bathymetry data from the multibeam echosounder. Ground truth data were used to verify the classification produced. The methods used to analyze the ground truth samples consist of particle size analysis (PSA) and dry sieving; different analyses were carried out because of the different sizes of sediment sample obtained. The smaller fraction was analyzed using PSA with a CILAS analyzer, while the coarser fraction was analyzed by sieving. The multibeam acquisition included backscatter strength and bathymetry data, which were processed using QINSy, Qimera, and ArcGIS. This study shows the capability of multibeam data to differentiate four types of sediments: i) very coarse sand, ii) coarse sand, iii) very coarse silt, and iv) coarse silt. The accuracy was reported as 92.31% overall accuracy and a 0.88 kappa coefficient.
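
    The two agreement figures reported above (overall accuracy and the kappa coefficient) can be computed from a confusion matrix of predicted versus ground-truth sediment classes, as in the following sketch; the matrix values are hypothetical, not the Pulau Payar results.

    # Sketch of the two reported agreement measures (overall accuracy and kappa)
    # computed from a confusion matrix of predicted vs. ground-truth sediment classes.
    import numpy as np

    def overall_accuracy(cm):
        return np.trace(cm) / cm.sum()

    def cohen_kappa(cm):
        n = cm.sum()
        po = np.trace(cm) / n                                  # observed agreement
        pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
        return (po - pe) / (1.0 - pe)

    # Hypothetical 4-class confusion matrix (rows = ground truth, cols = predicted)
    cm = np.array([[30, 2, 0, 0],
                   [1, 25, 2, 0],
                   [0, 1, 20, 2],
                   [0, 0, 1, 16]])
    print(overall_accuracy(cm), cohen_kappa(cm))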

  17. Thoughts on Chemistry and Scientific Truth in Post-Factual Times.

    Science.gov (United States)

    Schreiner, Peter R

    2018-04-19

    "… The value and meaning of scientific truth has not been overcome by postmodernism or post-factual tendencies. Just because politics is mostly a representation of opinions, this does not imply that truth has become irrelevant. Quite the opposite, the value of truth is growing in turbulent times and for scientists it constitutes the currency of credibility and accountability …" Read more in the Guest Editorial by Peter R. Schreiner. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Truth telling in medical practice: students' opinions versus their observations of attending physicians' clinical practice.

    Science.gov (United States)

    Tang, Woung-Ru; Fang, Ji-Tseng; Fang, Chun-Kai; Fujimori, Maiko

    2013-07-01

    Truth telling or transmitting bad news is a problem that all doctors must frequently face. The purpose of this cross-sectional study was to investigate whether medical students' opinions of truth telling differed from their observations of attending physicians' actual clinical practice. The subjects were 275 medical clerks/interns at a medical center in northern Taiwan. Data were collected on medical students' opinions of truth telling, their observations of physicians' clinical practice, students' level of satisfaction with truth telling practiced by attending physicians, and cancer patients' distress level when they were told the truth. Students' truth-telling awareness was significantly higher than the clinical truth-telling practice of attending physicians, indicating a gap between medical students' opinions on truth telling and attending physicians' actual clinical practice. More research is needed to objectively assess physicians' truth telling in clinical practice and to study the factors affecting the method of truth telling used by attending physicians in clinical practice. Copyright © 2012 John Wiley & Sons, Ltd.

  19. Political Corruption as Deformities of Truth

    Directory of Open Access Journals (Sweden)

    Yann Allard-Tremblay

    2014-01-01

    Full Text Available This paper presents a conception of corruption informed by epistemic democratic theory. I first explain the view of corruption as a disease of the political body. Following this view, we have to consider the type of actions that debase a political entity of its constitutive principle in order to assess corruption. Accordingly, we need to consider what the constitutive principle of democracy is. This is the task I undertake in the second section, where I explicate democratic legitimacy. I present democracy as a procedure of social inquiry about what ought to be done that includes epistemic and practical considerations. In the third section, I argue that the problem of corruption for a procedural conception of democracy is that the epistemic value of the procedure is diminished by corrupted agents' lack of concern for truth. Corruption, according to this view, consists in two deformities of truth: lying and bullshit. These deformities corrupt since they conceal private interests under the guise of a concern for truth. In the fourth section, I discuss the difficulties a procedural account may face in formulating solutions to the problem of corruption.

  20. Automatic assessment of average diaphragm motion trajectory from 4DCT images through machine learning.

    Science.gov (United States)

    Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O

    2015-12-01

    To automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion study. We have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was performed, the discrete cosine transform (DCT) was applied to analyze the dVPS curves in the frequency domain. The dimensionality of the spectrum data was reduced by using several of the lowest frequency coefficients (f_v) to account for most of the spectrum energy (Σ f_v^2). The multiple linear regression (MLR) method was then applied to determine the weights of these frequencies by fitting the ground truth, the measured ADMT, which is represented by three pivot points of the diaphragm on each side. The 'leave-one-out' cross-validation method was employed to analyze the statistical performance of the prediction results in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. Seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower in 4DCT2 than 4DCT1, and is the lowest in 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique was employed to predict the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of the lung tumors
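
    The following sketch illustrates the general scheme described above (low-frequency DCT coefficients of each dVPS curve used as features in a multiple linear regression against the measured ADMT), using randomly generated stand-in data; it is not the authors' implementation.

    # Sketch of the frequency-domain feature + linear-regression idea described above:
    # keep the lowest DCT coefficients of each dVPS curve and fit them to measured
    # diaphragm positions with ordinary least squares. Data arrays are hypothetical.
    import numpy as np
    from scipy.fft import dct

    def dct_features(dvps_curves, n_coeffs=7):
        """dvps_curves: (n_samples, n_slices) -> (n_samples, n_coeffs) low-frequency features."""
        return dct(dvps_curves, type=2, norm='ortho', axis=1)[:, :n_coeffs]

    rng = np.random.default_rng(0)
    curves = rng.normal(size=(22, 120))          # e.g. 22 image sets, 120 slices each
    measured_admt = rng.normal(size=22)          # measured pivot-point displacement (mm)

    X = np.hstack([np.ones((22, 1)), dct_features(curves)])   # intercept + 7 coefficients
    weights, *_ = np.linalg.lstsq(X, measured_admt, rcond=None)
    predicted = X @ weights
    print(np.mean(predicted - measured_admt))    # mean prediction error on the fitted data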

  1. Experience, Poetry and Truth

    DEFF Research Database (Denmark)

    Gahrn-Andersen, Rasmus

    2017-01-01

    of philosophical thinking. Specifically, I show that, beneath a highly poetic and obscure prose, Jünger posits how subjective experience and poetry allow individuals to realize truth. I relate parts of Jünger’s insights to contributions by Husserl, Heidegger and Merleau-Ponty, arguing that Jünger offers a unique...

  2. Helping medical students to acquire a deeper understanding of truth-telling

    Directory of Open Access Journals (Sweden)

    Samia A. Hurst

    2015-11-01

    Full Text Available Problem: Truth-telling is an important component of respect for patients' self-determination, but in the context of breaking bad news, it is also a distressing and difficult task. Intervention: We investigated the long-term influence of a simulated patient-based teaching intervention, integrating learning objectives in communication skills and ethics, on students' attitudes and concerns regarding truth-telling. We followed two cohorts of medical students from the preclinical third year to their clinical rotations (fifth year). Open-ended responses were analysed to explore medical students' reported difficulties in breaking bad news. Context: This intervention was implemented during the last preclinical year of a problem-based medical curriculum, in collaboration between the doctor–patient communication and ethics programs. Outcome: Over time, concerns such as empathy and truthfulness shifted from a personal to a relational focus. Whereas 'truthfulness' was a concern for the content of the message, 'truth-telling' included concerns about how information was communicated and how realistically it was received. Truth-telling required empathy, adaptation to the patient, and appropriate management of emotions, both for the patient's welfare and for a realistic understanding of the situation. Lessons learned: Our study confirms that an intervention confronting students with a realistic situation succeeds in making them more aware of the real issues of truth-telling. Medical students deepened their reflection over time, acquiring a deeper understanding of the relational dimension of values such as truth-telling, and honing their view of empathy.

  3. IceMap250—Automatic 250 m Sea Ice Extent Mapping Using MODIS Data

    Directory of Open Access Journals (Sweden)

    Charles Gignac

    2017-01-01

    Full Text Available The sea ice cover in the North evolves at a rapid rate. To adequately monitor this evolution, tools with high temporal and spatial resolution are needed. This paper presents IceMap250, an automatic sea ice extent mapping algorithm using MODIS reflective/emissive bands. Hybrid cloud-masking using both the MOD35 mask and a visibility mask, combined with downscaling of Bands 3–7 to 250 m, are utilized to delineate sea ice extent using a decision tree approach. IceMap250 was tested on scenes from the freeze-up, stable cover, and melt seasons in the Hudson Bay complex, in Northeastern Canada. IceMap250's first product is a daily composite sea ice presence map at 250 m. Validation based on comparisons with photo-interpreted ground truth shows the ability of the algorithm to achieve high classification accuracy, with kappa values systematically over 90%. IceMap250's second product is a weekly clear sky map that provides a synthesis of 7 days of daily composite maps. This map, produced using a majority filter, makes the sea ice presence map even more accurate by filtering out the effects of isolated classification errors. The synthesis maps show spatial consistency through time when compared to passive microwave and national ice services maps.
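
    The weekly synthesis step can be pictured as a per-pixel majority vote over seven daily class maps; the sketch below is an illustrative version of such a filter with hypothetical class codes, not the IceMap250 code.

    # Sketch of a per-pixel majority vote over 7 daily classification maps, the kind
    # of filter used to build the weekly clear-sky synthesis described above.
    import numpy as np

    def weekly_majority(daily_maps):
        """daily_maps: (7, rows, cols) integer class maps -> (rows, cols) majority map."""
        stacked = daily_maps.reshape(daily_maps.shape[0], -1)
        n_classes = stacked.max() + 1
        # count votes per class for every pixel, then take the most frequent class
        counts = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
        return counts.argmax(axis=0).reshape(daily_maps.shape[1:])

    # Hypothetical class codes: 0 = no data/cloud, 1 = open water, 2 = sea ice
    week = np.random.default_rng(1).integers(0, 3, size=(7, 4, 4))
    print(weekly_majority(week))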

  4. The seventh servant: the implications of a truth drive in Bion's theory of 'O'.

    Science.gov (United States)

    Grotstein, James S

    2004-10-01

    Drawing upon Bion's published works on the subjects of truth, dreaming, alpha-function and transformations in 'O', the author independently postulates that there exists a 'truth instinctual drive' that subserves a truth principle, the latter of which is associated with the reality principle. Further, he suggests, following Bion's postulation, that 'alpha-function' and dreaming/phantasying constitute unconscious thinking processes and that they mediate the activity of this 'truth drive' (quest, pulsion), which the author hypothesizes constitutes another aspect of a larger entity that also includes the epistemophilic component drive. It purportedly seeks and transmits as well as includes what Bion (1965, pp. 147-9) calls 'O', the 'Absolute Truth, Ultimate Reality, O' (also associated with infinity, noumena or things-in-themselves, and 'godhead') (1970, p. 26). It is further hypothesized that the truth drive functions in collaboration with an 'unconscious consciousness' that is associated with the faculty of 'attention', which is also known as 'intuition'. It is responsive to internal psychical reality and constitutes Bion's 'seventh servant'. O, the ultimate landscape of psychoanalysis, has many dimensions, but the one that seems to interest Bion is that of the emotional experience of the analysand's and the analyst's 'evolving O' respectively (1970, p. 52) during the analytic session. The author thus hypothesizes that a sense of truth presents itself to the subject as a quest for truth which has the quality and force of an instinctual drive and constitutes the counterpart to the epistemophilic drive. This 'truth quest' or 'drive' is hypothesized to be the source of the generation of the emotional truth of one's ongoing experiences, both conscious and unconscious. It is proposed that emotions are beacons of truth in regard to the acceptance of reality. The concepts of an emotional truth drive and a truth principle would help us understand why analysands are able to

  5. A comparative study of automatic image segmentation algorithms for target tracking in MR‐IGRT

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J.; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa

    2016-01-01

    On‐board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real‐time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image‐guided radiotherapy (MR‐IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k‐means (FKM), k‐harmonic means (KHM), and reaction‐diffusion level set evolution (RD‐LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR‐TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR‐TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD‐LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR‐TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high‐contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR‐TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and

  6. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different
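
    As an illustration of the evaluation described above, the sketch below applies a simple intensity threshold and computes the target registration error as the distance between the centroids of the automatic and manual masks; the toy image and masks are hypothetical, and this is not the MR-IGRT vendor software.

    # Illustrative threshold segmentation and target registration error (TRE):
    # TRE is taken, as above, as the distance between the centroid of the manual ROI
    # and the centroid of the automatic ROI. Image and masks here are hypothetical.
    import numpy as np
    from scipy import ndimage

    def threshold_segment(image, level):
        return image >= level

    def tre(auto_mask, manual_mask):
        c_auto = np.array(ndimage.center_of_mass(auto_mask))
        c_manual = np.array(ndimage.center_of_mass(manual_mask))
        return np.linalg.norm(c_auto - c_manual)

    frame = np.zeros((64, 64)); frame[20:30, 22:34] = 1.0        # bright "kidney-like" blob
    manual = np.zeros((64, 64), bool); manual[21:31, 22:34] = True
    auto = threshold_segment(frame, 0.5)
    print(tre(auto, manual))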

  7. The Experience of Truth in Jazz Improvisation

    DEFF Research Database (Denmark)

    Olsen, Jens Skou

    2015-01-01

    This is a book on truth, experience, and the interrelations between these two fundamental philosophical notions. The questions of truth and experience have their roots at the very heart of philosophy, both historically and thematically. This book gives an insight into how philosophers working...... in the fields of philosophical phenomenology and hermeneutics respond to challenges posed by these questions, not only in relation to the history of philosophy, but to philosophy itself. The book contains texts written by distinguished professors and in particular by young scholars. It is the result...

  8. Truth, laws and the progress of science

    Directory of Open Access Journals (Sweden)

    Mauro Dorato

    2011-06-01

    Full Text Available In this paper I analyze the difficult question of the truth of mature scientific theories by tackling the problem of the truth of laws. After introducing the main philosophical positions in the field of scientific realism, I discuss and then counter the two main arguments against realism, namely the pessimistic meta-induction and the abstract and idealized character of scientific laws. I conclude by defending the view that well-confirmed physical theories are true only relatively to certain values of the variables that appear in the laws.

  9. How Does Telling the Truth Help Educational Action Research?

    Science.gov (United States)

    Blair, Erik

    2010-01-01

    A number of key constructs underpin educational action research. This paper focuses on the concept of "truth" and by doing so hopes to highlight some debate in this area. In reflecting upon what "truth" might mean to those involved in action research, I shall critically evaluate Thorndike's "Law of Effect" and Bruner's "Three Forms of…

  10. TESTING OF THE TRUTH IN ANDREI PLATONOV’S TALE THE FOUNDATION PIT

    Directory of Open Access Journals (Sweden)

    Marina Vladimirovna Zavarkina

    2014-11-01

    Full Text Available Analyzing the manuscript of Andrei Platonov's tale The Foundation Pit and his early journalism, the author traces the evolution of Platonov's views on the problem of the search for the truth. Analysis of the dynamic transcription of The Foundation Pit manuscript shows that by the end of the 1920s Platonov had abandoned rationalistic interpretations of the concept of «truth», including in its "Bogdanov's edition". Platonov increasingly questioned the capabilities of socialist science, based on materialism, to find out the truth about the world, and changed his views on work as the only method of learning the truth. He departed from the materialistic concept of learning and shifted towards the religious and philosophical tradition: in the end the formula of P. Florensky, "truth-estina (to be in existence)", becomes more and more essential, and a shift of the concept of "truth" from a cognitive-materialistic category ("invent" / "to do" truth) to an ontological and moral category is outlined.

  11. What Justice for Rwanda? Gacaca versus Truth Commission?

    OpenAIRE

    Reuchamps, Min

    2008-01-01

    In post-genocide Rwanda, in addition to gacaca courts, a truth commission is needed in order to promote justice and foster reconciliation. In the context of transitional justice, retributive justice, which seeks justice and focuses on the perpetrators, appears to be inadequate to lead a society towards reconciliation. Therefore, some forms of restorative justice, which emphasize the healing of the whole society, seem necessary. In Rwanda, gacaca courts and a truth commission are complementary...

  12. Sojourner Truth as an Essential Part of Rhetorical Theory.

    Science.gov (United States)

    Romans, Bevin A.

    To affirm Sojourner Truth as a powerful rhetor who advanced the equality and empowerment of women, a study examined several of her speeches on women's suffrage. Although the value of using such role models as Sojourner Truth has been demonstrated in various grade levels, and in the study of history and English, the approach is too seldom employed…

  13. Does Truth Exist? Insights from Applied Linguistics for the Rationalism/Postmodern Debate

    Science.gov (United States)

    Ross, David A.

    2008-01-01

    The question of whether or not truth exists is at the center of the rationalism versus postmodern debate. Noting the difficulty of defining truth, the author uses the principles of linguistics to show that semantic skewing has resulted in the concept of truth being encoded as a noun, while it is really an attribute (true). The introduction of a…

  14. Knowledge does not protect against illusory truth.

    Science.gov (United States)

    Fazio, Lisa K; Brashier, Nadia M; Payne, B Keith; Marsh, Elizabeth J

    2015-10-01

    In daily life, we frequently encounter false claims in the form of consumer advertisements, political propaganda, and rumors. Repetition may be one way that insidious misconceptions, such as the belief that vitamin C prevents the common cold, enter our knowledge base. Research on the illusory truth effect demonstrates that repeated statements are easier to process, and subsequently perceived to be more truthful, than new statements. The prevailing assumption in the literature has been that knowledge constrains this effect (i.e., repeating the statement "The Atlantic Ocean is the largest ocean on Earth" will not make you believe it). We tested this assumption using both normed estimates of knowledge and individuals' demonstrated knowledge on a postexperimental knowledge check (Experiment 1). Contrary to prior suppositions, illusory truth effects occurred even when participants knew better. Multinomial modeling demonstrated that participants sometimes rely on fluency even if knowledge is also available to them (Experiment 2). Thus, participants demonstrated knowledge neglect, or the failure to rely on stored knowledge, in the face of fluent processing experiences. (c) 2015 APA, all rights reserved).

  15. Automatically Identifying Fusion Events between GLUT4 Storage Vesicles and the Plasma Membrane in TIRF Microscopy Image Sequences

    Directory of Open Access Journals (Sweden)

    Jian Wu

    2015-01-01

    Full Text Available Quantitative analysis of the dynamic behavior of membrane-bound secretory vesicles has proven to be important in biological research. This paper proposes a novel approach to automatically identify the elusive fusion events between VAMP2-pHluorin labeled GLUT4 storage vesicles (GSVs) and the plasma membrane. Differentiation is implemented to detect the initiation of fusion events by modified forward subtraction of consecutive frames in the TIRFM image sequence. Spatially connected pixels in difference images brighter than a specified adaptive threshold are grouped into a distinct fusion spot. The vesicles are located at the intensity-weighted centroid of their fusion spots. To reveal the true in vivo nature of a fusion event, 2D Gaussian fitting for the fusion spot is used to derive the intensity-weighted centroid and the spot size during the fusion process. The fusion event and its termination can be determined according to the change of spot size. The method is evaluated on real experimental data with ground truth annotated by expert cell biologists. The evaluation results show that it can achieve relatively high accuracy, comparing favorably to the manual analysis, yet at a small fraction of the time.
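
    The detection steps described above (forward subtraction of consecutive frames, an adaptive threshold, grouping of connected bright pixels, and an intensity-weighted centroid per spot) can be sketched as follows on synthetic frames; this is only an illustrative outline, not the authors' pipeline, and it omits the 2D Gaussian fitting and the temporal termination logic.

    # Sketch of the detection steps described above: forward frame subtraction,
    # an adaptive threshold, grouping of connected bright pixels into candidate
    # fusion spots, and an intensity-weighted centroid per spot. Frames are synthetic.
    import numpy as np
    from scipy import ndimage

    def candidate_spots(prev_frame, frame, k=3.0, min_pixels=4):
        diff = frame.astype(float) - prev_frame.astype(float)   # forward subtraction
        thresh = diff.mean() + k * diff.std()                   # simple adaptive threshold
        labels, n = ndimage.label(diff > thresh)                # connected bright pixels
        spots = []
        for lab in range(1, n + 1):
            mask = labels == lab
            if mask.sum() < min_pixels:
                continue
            cy, cx = ndimage.center_of_mass(diff * mask)        # intensity-weighted centroid
            spots.append((cy, cx, int(mask.sum())))
        return spots

    rng = np.random.default_rng(2)
    f0 = rng.normal(100, 5, (128, 128))
    f1 = f0.copy(); f1[60:66, 70:76] += 80                      # a brightening "fusion" spot
    print(candidate_spots(f0, f1))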

  16. Literature and Truth : Imaginative Writing as a Medium for Ideas

    NARCIS (Netherlands)

    Lansdown, Richard

    2017-01-01

    In Literature and Truth Richard Lansdown continues a discussion concerning the truth-bearing status of imaginative literature that pre-dates Plato. The book opens with a general survey of contemporary approaches in philosophical aesthetics, and a discussion of the contribution to the question made

  17. Effects of the truth FinishIt brand on tobacco outcomes.

    Science.gov (United States)

    Evans, W Douglas; Rath, Jessica M; Hair, Elizabeth C; Snider, Jeremy Williams; Pitzer, Lindsay; Greenberg, Marisa; Xiao, Haijun; Cantrell, Jennifer; Vallone, Donna

    2018-03-01

    Since 2000, the truth campaign has grown as a social marketing brand. Back then, truth employed branding to compete directly with the tobacco industry. In 2014, the launch of truth FinishIt reflected changes in the brand's strategy, the tobacco control environment, and youth/young adult behavior. Building on a previous validation study, the current study examined brand equity in truth FinishIt, as measured by validated multi-dimensional scales, and tobacco related attitudes, beliefs, and behavior based on two waves of the Truth Longitudinal Cohort data from 2015 and 2016. A fixed effects logistic regression was used to estimate the change in brand equity between panel survey waves 3 and 4 on past 30-day smoking among ever and current smokers. Additional models determined the effects of brand equity predicting tobacco attitudes/use at follow up among the full sample. All analyses controlled for demographic factors. A one-point increase in the brand equity scale between the two waves was associated with a 66% greater chance of not smoking among ever smokers (OR 1.66, CI 1.11-2.48, p < 0.05) and an 80% greater chance of not smoking among current smokers (OR 1.80, CI 1.05-3.10, p < 0.05). Higher overall truth brand equity at wave 3 predicted less smoking at wave 4 and more positive anti-tobacco attitudes. Future research should examine long-term effects of brand equity on tobacco use and how tobacco control can optimize the use of branding in campaigns.

  18. [Truth telling and advance care planning at the end of life].

    Science.gov (United States)

    Hu, Wen-Yu; Yang, Chia-Ling

    2009-02-01

    One of the core values in terminal care is respect for patient 'autonomy'. This essay begins with a discussion of medical ethics principles and the Natural Death Act in Taiwan and then summarizes two medical ethical dilemmas, truth telling and advance care planning (ACP), faced in the development of hospice and palliative care in Taiwan. The terminal truth telling process incorporates the four basic principles of Assessment and preparation, Communication with family, Truth-telling process, and Support and follow up (the so-called "ACTs"). Many experts suggest practicing ACP by abiding by the following five steps: (1) presenting and illustrating topics; (2) facilitating a structured discussion; (3) completing documents with advance directives (ADs); (4) reviewing and updating ADs; and (5) applying ADs in clinical circumstances. Finally, the myths and challenges in truth telling and ADs include the influence of healthcare system procedures and priorities, inadequate communication skills, and the psychological barriers of medical staff. Good communication skills are critical to truth telling and ACP. Significant discussion about ACP should help engender mutual trust between patients and the medical staff who take the time to establish such relationships. Promoting patient autonomy by providing the opportunity of a good death is an important goal of truth telling and ACP, in which patients have opportunities to choose their terminal treatment.

  19. Cross-Cultural Differences in Children’s Choices, Categorizations, and Evaluations of Truths and Lies

    Science.gov (United States)

    Fu, Genyue; Xu, Fen; Cameron, Catherine Ann; Heyman, Gail; Lee, Kang

    2008-01-01

    This study examined cross-cultural differences and similarities in children’s moral understanding of individual- or collective-oriented lies and truths. Seven-, 9-, and 11-year-old Canadian and Chinese children were read stories about story characters facing moral dilemmas about whether to lie or tell the truth to help a group but harm an individual or vice versa. Participants chose to lie or to tell the truth as if they were the character (Experiments 1 and 2) and categorized and evaluated the story characters’ truthful and untruthful statements (Experiments 3 and 4). Most children in both cultures labeled lies as lies and truths as truths. The major cultural differences lay in choices and moral evaluations. Chinese children chose lying to help a collective but harm an individual, and they rated it less negatively than lying with opposite consequences. Chinese children rated truth telling to help an individual but harm a group less positively than the alternative. Canadian children did the opposite. These findings suggest that cross-cultural differences in emphasis on groups versus individuals affect children’s choices and moral judgments about truth and deception. PMID:17352539

  20. Micro CT based truth estimation of nodule volume

    Science.gov (United States)

    Kinnard, L. M.; Gavrielides, M. A.; Myers, K. J.; Zeng, R.; Whiting, B.; Lin-Gibson, S.; Petrick, N.

    2010-03-01

    With the advent of high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that there is variability associated with the patient, the software tool and the CT system. A primary goal of our current research efforts is to quantify the various sources of measurement error and, when possible, minimize their effects. In order to assess the bias of an estimate, the actual value, or "truth," must be known. In this work we investigate the reliability of micro CT to determine the "true" volume of synthetic nodules. The advantage of micro CT over other truthing methods is that it can provide both absolute volume and shape information in a single measurement. In the current study we compare micro CT volume truth to weight-density truth for spherical, elliptical, spiculated and lobulated nodules with diameters from 5 to 40 mm, and densities of -630 and +100 HU. The percent differences between micro CT and weight-density volume for -630 HU nodules range from [-21.7%, -0.6%] (mean= -11.9%) and the differences for +100 HU nodules range from [-0.9%, 3.0%] (mean=1.7%).
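
    The bias figures quoted above are percent differences of an estimated volume relative to the reference ("truth") volume; the following minimal sketch uses hypothetical numbers, not the study's measurements.

    # Minimal sketch of the comparison reported above: percent difference of a
    # volume estimate relative to a reference ("truth") volume. Values are hypothetical.
    def percent_difference(estimate_mm3, reference_mm3):
        return 100.0 * (estimate_mm3 - reference_mm3) / reference_mm3

    # e.g. micro CT volume vs. weight-density volume for one synthetic nodule
    print(percent_difference(estimate_mm3=512.0, reference_mm3=540.0))  # about -5.2 %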

  1. Truthful approximations to range voting

    DEFF Research Database (Denmark)

    Filos-Ratsika, Aris; Miltersen, Peter Bro

    We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...

  2. Effects of the truth FinishIt brand on tobacco outcomes

    Directory of Open Access Journals (Sweden)

    W. Douglas Evans

    2018-03-01

    Full Text Available Since 2000, the truth campaign has grown as a social marketing brand. Back then, truth employed branding to compete directly with the tobacco industry. In 2014, the launch of truth FinishIt reflected changes in the brand's strategy, the tobacco control environment, and youth/young adult behavior. Building on a previous validation study, the current study examined brand equity in truth FinishIt, as measured by validated multi-dimensional scales, and tobacco related attitudes, beliefs, and behavior based on two waves of the Truth Longitudinal Cohort data from 2015 and 2016. A fixed effects logistic regression was used to estimate the change in brand equity between panel survey waves 3 and 4 on past 30-day smoking among ever and current smokers. Additional models determined the effects of brand equity predicting tobacco attitudes/use at follow up among the full sample. All analyses controlled for demographic factors. A one-point increase in the brand equity scale between the two waves was associated with a 66% greater chance of not smoking among ever smokers (OR 1.66, CI 1.11–2.48, p<0.05) and an 80% greater chance of not smoking among current smokers (OR 1.80, CI 1.05–3.10, p<0.05). Higher overall truth brand equity at wave 3 predicted less smoking at wave 4 and more positive anti-tobacco attitudes. Being male, younger, and non-white predicted some of the tobacco related attitudes. Future research should examine long-term effects of brand equity on tobacco use and how tobacco control can optimize the use of branding in campaigns. Keywords: Tobacco, Smoking, Social marketing, Branding, Prevention

  3. Public relations and journalism: truth, trust, transparency and integrity

    OpenAIRE

    Davies, Frank

    2008-01-01

    Truth, trust, integrity and reputation are key concepts for understanding the relationship between journalists and public relations practitioners. This paper: first, considers the current debate on the inter-relationship between journalism and public relations; second, distinguishes varieties of public relations and journalism; third, analyses the Editorial Intelligence controversy; fourth, deconstructs aspects of "truth" and "trust" in the context of that debate; fifth, considers why the ...

  4. Culture, Truth, and Science After Lacan.

    Science.gov (United States)

    Gillett, Grant

    2015-12-01

    Truth and knowledge are conceptually related and there is a way of construing both that implies that they cannot be solely derived from a description that restricts itself to a set of scientific facts. In the first section of this essay, I analyse truth as a relation between a praxis, ways of knowing, and the world. In the second section, I invoke the third thing-the objective reality on which we triangulate as knowing subjects for the purpose of complex scientific endeavours like medical science and clinical care. Such praxes develop robust methods of "keeping in touch" with disease and illness (like biomarkers). An analysis drawing on philosophical semantics motivates the needed (anti-scientistic) account of meaning and truth (and therefore knowledge) and underpins the following argument: (i) the formulation and dissemination of knowledge rests on language; (ii) language is selective in what it represents in any given situation; (iii) the praxes of a given (sub)culture are based on this selectivity; but (iv) human health and illness involve whole human beings in a human life-world; therefore, (v) medical knowledge should reflectively transcend, where required, biomedical science towards a more inclusive view. Parts three and four argue that a post-structuralist (Lacanian) account of the human subject can avoid both scientism and idealism or unconstrained relativism.

  5. Love the Truth in the Franciscan School (XIIIth century)

    Directory of Open Access Journals (Sweden)

    Manuel Lázaro Pulido

    2013-11-01

    Full Text Available Love of the truth is a fundamental question in the Franciscan School. It has its origin in the Franciscan practical need to transmit the evangelical message to all men. The universality of the message inspires the concept of wisdom as a basis for loving the truth. The truth appears as an occasion of reference to God; the significatio is never subordinated to the res. The article presents the fundamental milestones of this construction from the origins of the Franciscan School to the end of the 13th century with Gonzalo Hispano, indicating the common points and the internal discussions of the School according to Anthony of Lisbon/Padua, Alexander of Hales, Odo Rigaldus, William of Melitona, Robert Grosseteste, Roger Bacon, Bonaventure, Matthew of Aquasparta, Peter John Olivi and Gonsalvus of Spain.

  6. Surface Properties and Characteristics of Mars Landing Sites from Remote Sensing Data and Ground Truth

    Science.gov (United States)

    Golombek, M. P.; Haldemann, A. F.; Simpson, R. A.; Furgason, R. L.; Putzig, N. E.; Huertas, A.; Arvidson, R. E.; Heet, T.; Bell, J. F.; Mellon, M. T.; McEwen, A. S.

    2008-12-01

    Surface characteristics at the six sites where spacecraft have successfully landed on Mars can be related favorably to their signatures in remotely sensed data from orbit and from the Earth. Comparisons of the rock abundance, types and coverage of soils (and their physical properties), thermal inertia, albedo, and topographic slope all agree with orbital remote sensing estimates and show that the materials at the landing sites can be used as ground truth for the materials that make up most of the equatorial and mid- to moderately high-latitude regions of Mars. The six landing sites sample two of the three dominant global thermal inertia and albedo units that cover ~80% of the surface of Mars. The Viking, Spirit, Mars Pathfinder, and Phoenix landing sites are representative of the moderate to high thermal inertia and intermediate to high albedo unit that is dominated by crusty, cloddy, blocky or frozen soils (duricrust that may be layered) with various abundances of rocks and bright dust. The Opportunity landing site is representative of the moderate to high thermal inertia and low albedo surface unit that is relatively dust free and composed of dark eolian sand and/or increased abundance of rocks. Rock abundance derived from orbital thermal differencing techniques in the equatorial regions agrees with that determined from rock counts at the surface and varies from ~3-20% at the landing sites. The size-frequency distributions of rocks >1.5 m diameter fully resolvable in HiRISE images of the landing sites follow exponential models developed from lander measurements of smaller rocks and are continuous with these rock distributions indicating both are part of the same population. Interpretation of radar data confirms the presence of load bearing, relatively dense surfaces controlled by the soil type at the landing sites, regional rock populations from diffuse scattering similar to those observed directly at the sites, and root-mean-squared slopes that compare favorably

  7. Competence and Performance in Belief-Desire Reasoning across Two Cultures: The Truth, the Whole Truth and Nothing but the Truth about False Belief?

    Science.gov (United States)

    Yazdi, Amir Amin; German, Tim P.; Defeyter, Margaret Anne; Siegal, Michael

    2006-01-01

    There is a change in false belief task performance across the 3-5 year age range, as confirmed in a recent meta-analysis [Wellman, H. M., Cross, D., & Watson, J. (2001). Meta-analysis of theory mind development: The truth about false-belief. "Child Development," 72, 655-684]. This meta-analysis identified several performance factors influencing…

  8. This Is My (Post) Truth, Tell Me Yours

    OpenAIRE

    Powell, Martin

    2017-01-01

    This is a commentary on the article ‘The rise of post-truth populism in pluralist liberal democracies: challenges for health policy.’ It critically examines two of its key concepts: populism and ‘post truth.’ This commentary argues that there are different types of populism, with unclear links to impacts, and that in some ways, ‘post-truth’ has resonances with arguments advanced in the period at the beginning of the British National Health Service (NHS). In short, ‘post-truth’ populism’ may b...

  9. Truth as determinant of religious faith | Emeng | Global Journal of ...

    African Journals Online (AJOL)

    This study investigates how varying religious truth has determined different religious faiths in the world. One God created all human kind and placed them in their different environments, but the allegiance, service, worship and honour to him varies due to the different truths at the foundations of the many faiths. This article ...

  10. THE RULE OF TRUTH AND “WHITE LIE” IN MODERN MEDICINE

    Directory of Open Access Journals (Sweden)

    Zhanna V. Chashina

    2016-06-01

    Full Text Available Introduction. The article focuses on a topical conflict in contemporary medical practice – truthful information for the patient. The problem is considered in historical perspective, bringing in the views of both domestic and foreign authors. Various opinions and arguments for and against providing the patient with information about his health are discussed by physicians to this day, despite the legalization of the provision of truthful information about the state of health. Materials and Methods. The material for the article was the ethical and regulatory documents that include provisions on the right of the patient to receive truthful information. On the basis of the dialectical approach, the object of the research is the rule of truthfulness in medicine, the analysis of which takes place within the framework of medical ethics and its modern model, bioethics. The application of an integrated approach allowed the problem to be considered from the positions of morality and law, society, medicine and the individual. The following methods were used: comparative-historical and axiological analysis, document analysis, and synthesis of the functionality, efficiency and appropriateness of the rule of veracity at the present stage of the development of medicine. Results. In the course of the study it was revealed that the rule of truthfulness in modern medical practice, justified from the perspective of the modern model of medical ethics (bioethics) and the law, is an inalienable right of the patient. In addition, an inextricable link is specified between the rule of truthfulness and the rule of informed consent. These provisions remove the controversial question of the presence of the "holy lie" in medicine. But due to the absolutely inviolable rule of "no harm" in medicine it is necessary to consider the ethical and legal aspects of the rule of veracity: the duty, the right, the opportunity and feasibility to speak the truth, allowing not only the law

  11. Towards a pragmatics of non-fictional narrative truth: Gricean and ...

    African Journals Online (AJOL)

    This paper focuses on a particular kind of truth that falls within this category, namely non-fictional narrative truth. “Narrative truth” is defined as a judgement of verisimilitude accorded to the meaning of a narrative as a whole. This narrative meaning is neither rationally nor empirically verifiable, but rather arrived at by a ...

  12. Truth and modes of cognition in Boethius: a Neoplatonic approach

    Directory of Open Access Journals (Sweden)

    José María Zamora Calvo

    2017-07-01

    Full Text Available Boethius does not accept the principle of realism that considers truth as the adaptation – or adequation – of the subject to the knowable object, and instead defends the view that knowledge should be studied by relating it to the capacity of the knowing subject. Thus, truth is relative to the faculty or level of knowledge at which we stand, since each faculty (each level of knowledge) has its own object: the material figure for the senses, the figure without matter for the imagination, the universal for reason and the simple form for intelligence. But this epistemological relativism is moderate, precisely because of its hierarchical character. Therefore, although in a sense truth is manifold, the perfect truth, proper to divine knowledge, includes and surpasses all others. In order to cement the architecture of this system of relativisation of knowledge, Boethius starts from a Neoplatonic interpretation of the simile of the line in the Republic (VI.510a-b) and Plato's Timaeus, but is not completely tied to it. Beings endowed with knowledge are ordered according to the Neoplatonic hierarchy of cosmic realities.

  13. The Social Sciences and their compromise with truth and justice

    Directory of Open Access Journals (Sweden)

    Mauro W. Barbosa de Almeida

    2015-06-01

    Full Text Available This paper discusses social scientists' responsibility in relation to justice and truth, based on the practical and theoretical experiences of the author in the field of Social Anthropology. Although the text addresses the Social Sciences from the perspective of Social Anthropology, it deals with topics in which the activities of researchers and activists require the cooperative action of lawyers, engineers and biologists alongside the work of sociologists and geographers – all of which is involved in situations where it is necessary to tell the truth and also to judge justice and injustice in social life. Notions of justice and truth are social scientists' weapons and cannot be abandoned in the hands of conservative thought.

  14. Evaluating the Limits of Network Topology Inference Via Virtualized Network Emulation

    Science.gov (United States)

    2015-06-01

    virtualized environment. First, we automatically build topological ground truth according to various network generation models and create emulated Cisco router networks by leveraging and modifying existing emulation software.
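
    Topological ground truth of this kind can be generated from standard random-graph models; the sketch below uses networkx generators as stand-ins for whatever network generation models were actually used, and the resulting edge list is the reference against which an inferred topology could later be scored.

    # Illustrative generation of topological "ground truth" from standard random-graph
    # models (not the study's emulation toolchain); networkx generators stand in for
    # whichever network generation models were actually used.
    import networkx as nx

    def make_ground_truth(model="barabasi_albert", n=50, seed=7):
        if model == "barabasi_albert":
            g = nx.barabasi_albert_graph(n, m=2, seed=seed)      # preferential attachment
        elif model == "erdos_renyi":
            g = nx.erdos_renyi_graph(n, p=0.08, seed=seed)       # uniform random edges
        else:
            raise ValueError(model)
        return sorted(g.edges())                                  # the reference edge list

    truth_edges = make_ground_truth()
    print(len(truth_edges))
    # An inference result could then be scored against truth_edges
    # (e.g. precision/recall of recovered links).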

  15. Truth Commissions, Education, and Positive Peace: An Analysis of Truth Commission Final Reports (1980-2015)

    Science.gov (United States)

    Paulson, Julia; Bellino, Michelle J.

    2017-01-01

    Transitional justice and education both occupy increasingly prominent space on the international peacebuilding agenda, though less is known about the ways they might reinforce one another to contribute towards peace. This paper presents a cross-national analysis of truth commission (TC) reports spanning 1980-2015, exploring the range of…

  16. Truth Commissions in Latin America. The hope of a new future

    Directory of Open Access Journals (Sweden)

    Nelson Molina Valencia

    2017-01-01

    Full Text Available This article presents the implementation of victims' right to the truth through the creation of eleven Commissions of Truth, established in Argentina, Chile, El Salvador, Guatemala, Uruguay, Peru, Paraguay, Colombia, Ecuador, Honduras and Brazil, which emerged as the product of peace agreements or transitional processes. The Commissions of Truth received the assignment to investigate violations of Human Rights and breaches of International Humanitarian Law by military dictatorships, authoritarian regimes or internal armed conflicts. This review shows that, in addition to the subjects that constitute the Commissions, they work due to eight conditions: determined duration; legitimacy; themes; working methodologies; media for the dissemination of results; attention to Disarmament, Demobilization and Reintegration processes; repair strategies; and requests for forgiveness and reconciliation. The Commissions of Truth, while transforming the conflicts they address, have not achieved, as a strategy, the integral promotion of coexistence and reconciliation.

  17. Finding a single point of truth

    Energy Technology Data Exchange (ETDEWEB)

    Sokolov, S.; Thijssen, H. [Autodesk Inc, Toronto, ON (Canada); Laslo, D.; Martin, J. [Autodesk Inc., San Rafael, CA (United States)

    2010-07-01

    Electric utilities collect large volumes of data at every level of their business, including SCADA, Smart Metering and Smart Grid initiatives, LIDAR and other 3D imagery surveys. Different types of database systems are used to store the information, rendering data flow within the utility business process extremely complicated. The industry trend has been to endure redundancy of data input and maintenance of multiple copies of the same data across different solution data sets. Efforts have been made to improve the situation with point to point interfaces, but with the tools and solutions available today, a single point of truth can be achieved. Consolidated and validated data can be published into a data warehouse at the right point in the process, making the information available to all other enterprise systems and solutions. This paper explained how the single point of truth spatial data warehouse and process automation services can be configured to streamline the flow of data within the utility business process using the initiate-plan-execute-close (IPEC) utility workflow model. The paper first discussed geospatial challenges faced by utilities and then presented the approach and technology aspects. It was concluded that adoption of systems and solutions that can function with and be controlled by the IPEC workflow can provide significant improvement for utility operations, particularly if those systems are coupled with the spatial data warehouse that reflects a single point of truth. 6 refs., 3 figs.

  18. Gödel, Truth and Proof

    Czech Academy of Sciences Publication Activity Database

    Peregrin, Jaroslav

    -, č. 82 (2007), s. 1-10 E-ISSN 1742-6596 R&D Projects: GA ČR(CZ) GA401/04/0117 Institutional research plan: CEZ:AV0Z90090514 Keywords: Gödel * incompleteness of arithmetic * proof vs. truth Subject RIV: AA - Philosophy; Religion http://www.iop.org/EJ/toc/1742-6596/82/1

  19. Fuzzy logic of quasi-truth an algebraic treatment

    CERN Document Server

    Di Nola, Antonio; Turunen, Esko

    2016-01-01

    This book presents the first algebraic treatment of quasi-truth fuzzy logic and covers the algebraic foundations of many-valued logic. It offers a comprehensive account of basic techniques and reports on important results showing the pivotal role played by perfect many-valued algebras (MV-algebras). It is well known that the first-order predicate Łukasiewicz logic is not complete with respect to the canonical set of truth values. However, it is complete with respect to all linearly ordered MV-algebras. As there are no simple linearly ordered MV-algebras in this case, infinitesimal elements of an MV-algebra are allowed to be truth values. The book presents perfect algebras as an interesting subclass of local MV-algebras and provides readers with the necessary knowledge and tools for formalizing the fuzzy concept of quasi-true and quasi-false. All basic concepts are introduced in detail to promote a better understanding of the more complex ones. It is an advanced and inspiring reference-guide for graduate s...

  20. Automatic classification of unexploded ordnance applied to Spencer Range live site for 5x5 TEMTADS sensor

    Science.gov (United States)

    Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon

    2013-06-01

    This paper details methods for automatic classification of Unexploded Ordnance (UXO) as applied to sensor data from the Spencer Range live site. The Spencer Range is a former military weapons range in Spencer, Tennessee. Electromagnetic Induction (EMI) sensing is carried out using the 5x5 Time-domain Electromagnetic Multi-sensor Towed Array Detection System (5x5 TEMTADS), which has 25 receivers and 25 co-located transmitters. Each transmitter is activated sequentially, followed by measurement of the magnetic field at all 25 receivers from 100 microseconds to 25 milliseconds. From these data, target extrinsic and intrinsic parameters are extracted using the Differential Evolution (DE) algorithm and the Ortho-Normalized Volume Magnetic Source (ONVMS) algorithm, respectively. Namely, the inversion provides x, y, and z locations and a time series of the total ONVMS principal eigenvalues, which are intrinsic properties of the objects. The eigenvalues are fit to a power-decay empirical model, the Pasion-Oldenburg model, providing three coefficients (k, b, and g) for each object. The objects are grouped geometrically into variably sized clusters in the k-b-g space using clustering algorithms. Clusters matching a priori characteristics are identified as Targets of Interest (TOI), and larger clusters are automatically subclustered. Ground truths (GT) at the center of each class are requested, and probability density functions are created for clusters that contain centroid TOI using a Gaussian Mixture Model (GMM). The probability functions are applied to all remaining anomalies. All objects with a UXO probability higher than a chosen threshold are placed in a ranked dig list. This prioritized list is scored and the results are presented and analyzed.
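
    A minimal sketch of the last two steps described above, assuming a common parameterization of the Pasion-Oldenburg decay, k·t^(-b)·exp(-g·t), fitted by linear least squares in log space; the synthetic decays, gate times and cluster count are illustrative, not the study's data:

```python
# Hypothetical sketch: recover (k, b, g) for each anomaly from its eigenvalue
# decay via a log-space linear fit, then cluster anomalies in k-b-g space.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_pasion_oldenburg(t, s):
    """Fit log s = log k - b*log t - g*t by linear least squares; return (k, b, g)."""
    A = np.column_stack([np.ones_like(t), -np.log(t), -t])
    coef, *_ = np.linalg.lstsq(A, np.log(s), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]

rng = np.random.default_rng(0)
t = np.geomspace(100e-6, 25e-3, 40)                  # 100 us to 25 ms time gates
true_params = [(5.0, 0.90, 40.0), (5.2, 0.95, 42.0), (4.8, 0.88, 39.0),
               (0.5, 1.40, 120.0), (0.6, 1.35, 110.0), (0.55, 1.45, 125.0)]
features = np.array([
    fit_pasion_oldenburg(t, k * t**(-b) * np.exp(-g * t) * (1 + 0.02 * rng.standard_normal(t.size)))
    for k, b, g in true_params])

# Two groups emerge in (k, b, g) space; a cluster matching known TOI
# signatures would then seed a class-conditional probability density.
gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
print(gmm.predict(features), np.round(gmm.means_, 2))
```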

  1. Full automatic fiducial marker detection on coil arrays for accurate instrumentation placement during MRI guided breast interventions

    Science.gov (United States)

    Filippatos, Konstantinos; Boehler, Tobias; Geisler, Benjamin; Zachmann, Harald; Twellmann, Thorsten

    2010-02-01

    With its high sensitivity, dynamic contrast-enhanced MR imaging (DCE-MRI) of the breast is today one of the first-line tools for early detection and diagnosis of breast cancer, particularly in the dense breast of young women. However, many relevant findings are very small or occult on targeted ultrasound images or mammography, so that MRI-guided biopsy is the only option for a precise histological work-up [1]. State-of-the-art software tools for computer-aided diagnosis of breast cancer in DCE-MRI data also offer means for image-based planning of biopsy interventions. One step in the MRI-guided biopsy workflow is the alignment of the patient position with the preoperative MR images. In these images, the location and orientation of the coil localization unit can be inferred from a number of fiducial markers, which for this purpose have to be manually or semi-automatically detected by the user. In this study, we propose a method for precise, fully automatic localization of fiducial markers, on the basis of which a virtual localization unit can subsequently be placed in the image volume for the purpose of determining the parameters for needle navigation. The method is based on adaptive thresholding for separating breast tissue from background, followed by rigid registration of marker templates. In an evaluation of 25 clinical cases comprising 4 different commercial coil array models and 3 different MR imaging protocols, the method yielded a sensitivity of 0.96 at a false positive rate of 0.44 markers per case. The mean distance between detected fiducial centers and ground truth annotated by a radiologist was 0.94 mm.
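
    The two core steps named above, foreground separation by thresholding followed by marker template matching, can be illustrated in 2D. This is a much simplified stand-in (the actual method registers marker templates rigidly against 3D volumes), and the image content, marker position and blob template are invented:

```python
# Much-simplified 2D stand-in: threshold the image to separate tissue from
# background, then locate a marker-like blob by normalized cross-correlation
# against a synthetic template (assumes scikit-image).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.feature import match_template

rng = np.random.default_rng(1)
image = rng.normal(0.1, 0.02, (128, 128))          # background noise
image[40:80, 30:90] += 0.5                          # "tissue" block

yy, xx = np.mgrid[-4:5, -4:5]
blob = np.exp(-(xx**2 + yy**2) / 6.0)               # marker model: small bright blob
image[58:67, 68:77] += blob                         # fiducial placed at (62, 72)

mask = image > threshold_otsu(image)                # adaptive foreground separation
corr = match_template(image * mask, blob, pad_input=True)
row, col = np.unravel_index(np.argmax(corr), corr.shape)
print("marker centre estimate:", (row, col))        # expected near (62, 72)
```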

  2. Demonstrator for Automatic Target Classification in SAR Imagery

    NARCIS (Netherlands)

    Wit, J.J.M. de; Broek, A.C. van den; Dekker, R.J.

    2006-01-01

    Due to the increasing use of unmanned aerial vehicles (UAV) for reconnaissance, surveillance, and target acquisition applications, the interest in synthetic aperture radar (SAR) systems is growing. In order to facilitate the processing of the enormous amount of SAR data on the ground, automatic

  3. Estimation of snowpack matching ground-truth data and MODIS satellite-based observations by using regression kriging

    Science.gov (United States)

    Juan Collados-Lara, Antonio; Pardo-Iguzquiza, Eulogio; Pulido-Velazquez, David

    2016-04-01

    The estimation of Snow Water Equivalent (SWE) is essential for an appropriate assessment of the available water resources in Alpine catchments. The hydrologic regime in these areas is dominated by the storage of water in the snowpack, which is discharged to rivers throughout the melt season. An accurate estimation of the resources is necessary for an appropriate analysis of system operation alternatives using basin-scale management models. In order to obtain an appropriate estimation of the SWE we need to know the spatial distribution of the snowpack and of snow density within the Snow Cover Area (SCA). Data for these snow variables can be extracted from in-situ point measurements and air-borne/space-borne remote sensing observations. Different interpolation and simulation techniques have been employed for the estimation of the cited variables. In this paper we propose to estimate the snowpack from a reduced number of ground-truth data (1 or 2 campaigns per year with 23 observation points from 2000-2014) and MODIS satellite-based observations in the Sierra Nevada Mountains (Southern Spain). Regression-based methodologies have been used to study the snowpack distribution using different kinds of explicative variables: geographic, topographic and climatic. 40 explicative variables were considered: the longitude, latitude, altitude, slope, eastness, northness, radiation, maximum upwind slope and some mathematical transformations of each of them [ln(v), v^-1, v^2, v^0.5]. Eight different regression model structures have been tested (combining 1, 2, 3 or 4 explicative variables): Y=B0+B1Xi (1); Y=B0+B1XiXj (2); Y=B0+B1Xi+B2Xj (3); Y=B0+B1Xi+B2XjXl (4); Y=B0+B1XiXk+B2XjXl (5); Y=B0+B1Xi+B2Xj+B3Xl (6); Y=B0+B1Xi+B2Xj+B3XlXk (7); Y=B0+B1Xi+B2Xj+B3Xl+B4Xk (8), where Y is the snow depth, (Xi, Xj, Xl, Xk) are the predictor variables (any of the 40 variables) and (B0, B1, B2, B3, B4) are the coefficients to be estimated. The ground data are employed to calibrate the multiple regressions. In
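
    As a rough illustration of the model-selection step only (not the authors' code or data), the sketch below scores the simplest structure, Y = B0 + B1·Xi, over a few invented explicative variables and the listed transforms, keeping the best fit by R²; the same loop extends to the multi-variable structures (2)-(8):

```python
# Illustrative sketch: exhaustively evaluate single-variable regressions over
# candidate explicative variables and their transforms, ranked by R^2.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 23                                               # ~ number of snow observation points
variables = {"altitude": rng.uniform(1500, 3400, n),
             "radiation": rng.uniform(100, 300, n),
             "slope": rng.uniform(1.0, 40.0, n)}
snow_depth = 0.002 * variables["altitude"] + 0.01 * variables["slope"] + rng.normal(0, 0.2, n)

# Transforms used in the paper: identity, ln(v), v^-1, v^2, v^0.5.
transforms = {"id": lambda v: v, "ln": np.log, "inv": lambda v: 1.0 / v,
              "sq": lambda v: v**2, "sqrt": np.sqrt}

def r_squared(x, y):
    """R^2 of the least-squares fit y = b0 + b1*x."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

scores = {(name, tname): r_squared(tf(values), snow_depth)
          for (name, values), (tname, tf) in itertools.product(variables.items(), transforms.items())}
best = max(scores, key=scores.get)
print("best single-variable model:", best, "R^2 =", round(scores[best], 3))
```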

  4. ASM Based Synthesis of Handwritten Arabic Text Pages

    Directory of Open Access Journals (Sweden)

    Laslo Dinges

    2015-01-01

    Full Text Available Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents and detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient natural ground-truthed data are available.

  5. ASM Based Synthesis of Handwritten Arabic Text Pages.

    Science.gov (United States)

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents and detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient natural ground-truthed data are available.
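
    Of the pipeline steps listed in these two records, the B-spline smoothing of a composed stroke is easy to illustrate. The sketch below assumes SciPy and uses an invented polyline standing in for an ASM-generated character contour:

```python
# Minimal sketch of the smoothing step only: fit a parametric cubic B-spline
# to a jagged polyline, then resample it densely.
import numpy as np
from scipy.interpolate import splprep, splev

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.3, 1.1, 0.6, 1.4])

tck, _ = splprep([x, y], s=0.05)          # cubic B-spline with light smoothing
u_fine = np.linspace(0, 1, 100)
x_s, y_s = splev(u_fine, tck)             # smooth, densely sampled stroke
print(len(x_s), round(float(x_s[0]), 3), round(float(y_s[-1]), 3))
```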

  6. Truthfulness in science teachers’ corporeal performances

    DEFF Research Database (Denmark)

    Daugbjerg, Peer

    2014-01-01

    , sincerity and trustworthiness in dealing with classroom management. Jane shows effort, fidelity and honesty in developing outdoor teaching. Simon shows transparency, objectivity and sincerity in his support of colleagues. By addressing the relations in the vocabulary of truthfulness the teachers...

  7. Multi-atlas-based automatic 3D segmentation for prostate brachytherapy in transrectal ultrasound images

    Science.gov (United States)

    Nouranian, Saman; Mahdavi, S. Sara; Spadinger, Ingrid; Morris, William J.; Salcudean, S. E.; Abolmaesumi, P.

    2013-03-01

    One of the commonly used treatment methods for early-stage prostate cancer is brachytherapy. The standard of care for planning this procedure is segmentation of contours from transrectal ultrasound (TRUS) images, which closely follow the prostate boundary. This process is currently performed either manually or using semi-automatic techniques. This paper introduces a fully automatic segmentation algorithm which uses a priori knowledge of contours in a reference data set of TRUS volumes. A non-parametric deformable registration method is employed to transform the atlas prostate contours to the target image coordinates. All atlas images are sorted based on their registration results, and the highest-ranked registration results are selected for decision fusion. A Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm is utilized to fuse labels from registered atlases and produce a segmented target volume. In this experiment, 50 patient TRUS volumes are obtained and a leave-one-out study is reported. We also compare our results with a state-of-the-art semi-automatic prostate segmentation method that has been clinically used for planning prostate brachytherapy procedures, and we show comparable accuracy and precision within a clinically acceptable runtime.
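
    The label-fusion step lends itself to a compact sketch. Below is a minimal binary STAPLE-style EM in NumPy, written from the published algorithm rather than the authors' code; the toy "atlas" decisions are invented, and a real use would operate on registered 3D label volumes:

```python
import numpy as np

def staple_binary(D, n_iter=50, prior=None):
    """Tiny STAPLE-style EM for binary labels.
    D: (n_voxels, n_raters) array of 0/1 decisions from registered atlases."""
    D = np.asarray(D, dtype=float)
    n_vox, n_rat = D.shape
    f = D.mean() if prior is None else prior          # global foreground prior
    p = np.full(n_rat, 0.9)                           # per-atlas sensitivities
    q = np.full(n_rat, 0.9)                           # per-atlas specificities
    for _ in range(n_iter):
        # E-step: posterior that each voxel is truly foreground.
        a = f * np.prod(np.where(D == 1, p, 1 - p), axis=1)
        b = (1 - f) * np.prod(np.where(D == 0, q, 1 - q), axis=1)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate each atlas' performance against the soft truth.
        p = (W[:, None] * D).sum(axis=0) / (W.sum() + 1e-12)
        q = ((1 - W)[:, None] * (1 - D)).sum(axis=0) / ((1 - W).sum() + 1e-12)
    return W, p, q

# Three registered atlas labelings of six voxels; atlases that agree with the
# estimated hidden truth receive more weight in the consensus.
D = np.array([[1, 1, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [1, 0, 1, 1, 0, 0]]).T
W, p, q = staple_binary(D)
print(np.round(W, 2), np.round(p, 2), np.round(q, 2))
```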

  8. Matters of Fact: Language, Science, and the Status of Truth in Late Colonial Korea

    Directory of Open Access Journals (Sweden)

    Christopher P. Hanscom

    2014-03-01

    Full Text Available This article addresses the status of the fact in literary and historical discourses in late colonial Korea, focusing on the elaboration of the relationship between scientific and literary truths primarily in the work of philosopher and critic Sŏ Insik (1906–?). It points to a growing tendency in late 1930s and early 1940s Korea to question the veracity of the fact (or of empiricism more broadly) in an environment where the enunciation of the colonial subject had been rendered problematic and objective statements had arguably lost their connection with social reality. In a period when the relationship between signifier and referent had come into question, how did this major critic understand the relationship between science and literature, or between truth and subjectivity? Sŏ warns against a simplistic apprehension of the notion of truth as unilaterally equivalent with what he calls “scientific truth” (kwahakchŏk chilli)—a nomological truth based on objective observation and confirmation by universal principles—and argues that a necessary complement to apparently objective truth is “literary truth” (munhakchŏk chinsil). Against the fixed, conceptual form of scientific thought, literary truth presents itself as an experiential truth that returns to the sensory world of the sociolinguistic subject (chuch’e) as a source of credibility.

  9. The socio-rhetorical force of 'truth talk' and lies: The case of 1 John ...

    African Journals Online (AJOL)

    This article canvassed Greek and Roman sources for discussions concerning truth talk and lies. It has investigated what social historians and/or anthropologists are saying about truth talking and lying and has developed a model that will examine the issue of truth and lying in socio-religious terms as defined by the ...

  10. Truthfulness in transplantation: non-heart-beating organ donation

    Directory of Open Access Journals (Sweden)

    Potts Michael

    2007-08-01

    Full Text Available Abstract The current practice of organ transplantation has been criticized on several fronts. The philosophical and scientific foundations for brain death criteria have been crumbling. In addition, donation after cardiac death, or non-heart-beating organ donation (NHBD), has been attacked on grounds that it mistreats the dying patient and uses that patient only as a means to an end for someone else's benefit. Verheijde, Rady, and McGregor attack the deception involved in NHBD, arguing that the donors are not dead and that potential donors and their families should be told that this is the case. Thus, they propose abandoning the dead donor rule and allowing NHBD with strict rules concerning adequate informed consent. Such honesty about NHBD should be welcomed. However, NHBD violates a fundamental end of medicine, nonmaleficence: "do no harm." Physicians should not be harming or killing patients, even if it is for the benefit of others. Thus, although Verheijde and his colleagues should be congratulated for calling for truthfulness about NHBD, they do not go far enough and call for an elimination of such an unethical procedure from the practice of medicine.

  11. Corporate truth: the limits to transparency

    National Research Council Canada - National Science Library

    Henriques, Adrian

    2007-01-01

    Contents excerpt: ... plc; What the directors' report should cover; Conditions of use of the Rio Tinto website; Practical complicity. Figures: The ring of truth; The global trend in non-fi...

  12. Evidence of the Impact of the truth FinishIt Campaign.

    Science.gov (United States)

    Vallone, Donna; Cantrell, Jennifer; Bennett, Morgane; Smith, Alexandria; Rath, Jessica M; Xiao, Haijun; Greenberg, Marisa; Hair, Elizabeth C

    2018-04-02

    Over the past decade, public education mass media campaigns have been shown to be successful in changing tobacco-related attitudes, intentions, and behaviors among youth and young adults. In 2014, the national truth® campaign re-launched a new phase of the campaign targeted at a broad audience of youth and young adults, aged 15-21, to help end the tobacco epidemic. The study sample for this analysis is drawn from the Truth Longitudinal Cohort (TLC), a probability-based, nationally representative cohort designed to evaluate the relationship between awareness of truth media messages and changes in targeted attitudes, beliefs, and behaviors over time. The sample for this study was limited to those with data at baseline and three subsequent follow-up surveys (n = 7536). Logistic regression models indicate that truth ad awareness is significantly associated with increases in targeted anti-tobacco attitudes as well as reduced intentions to smoke over time, holding constant baseline attitudes and intentions. Results also suggest a dose-response relationship in that higher levels of truth ad awareness were significantly associated with higher likelihood of reporting agreement across all five attitudinal constructs: anti-smoking imagery, anti-social smoking sentiment, anti-tobacco social movement, anti-tobacco industry sentiment, and independence. Longitudinal results indicate a significant dose-response relationship between awareness of the new phase of the truth campaign and campaign-targeted attitudes and intentions not to smoke among youth and young adults. Findings from this study confirm that a carefully designed anti-tobacco public education campaign aimed at youth and young adults is a key population-level intervention within the context of an expanding tobacco product landscape and a cluttered media environment. As tobacco use patterns shift and new products emerge, evidence-based public education campaigns can play a central role in helping the next generation to

  13. Love and Truth in Social Involvement of the Church

    Directory of Open Access Journals (Sweden)

    Henryk Szmulewicz

    2012-09-01

    Full Text Available This study begins with a brief outline of the essence of the encyclical Caritas in veritate. Benedict XVI expresses the desire for „the dialogue with the world”. He understands this dialogue as a special kind of service of the Church towards eternal love and truth, fully revealed in Christ. The dialogue of the Church with the world, in the spirit of love and truth, is accomplished every day at the level of so-called official relations. There are numerous opinions that in the past the Church repeatedly neglected the dialogue with the world. Indeed, Church historians point to examples of the fall of the authority of the Holy See in particular countries and circumstances. Similarly, the Church is the sign of objection in the contemporary world. Instructed by past experiences, the Church is aware that what is necessary for the renewal of culture and society is evangelical love and truth.

  14. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

    Science.gov (United States)

    Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep depending on the stage of sleep to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders the signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments on a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates, over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
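
    One of the two bivariate features named above, Relative Wavelet Entropy, can be sketched compactly. The sketch assumes the PyWavelets package, uses the common Kullback-Leibler-style definition of RWE between the relative wavelet energies of two channels, and illustrative wavelet/level choices rather than the paper's settings:

```python
# Hedged sketch of one bivariate feature: Relative Wavelet Entropy between
# two EEG channels, computed from their per-level relative wavelet energies.
import numpy as np
import pywt

def relative_wavelet_energy(x, wavelet="db4", level=5):
    """Relative energy per decomposition level (sums to 1)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c**2) for c in coeffs])
    return energies / energies.sum()

def relative_wavelet_entropy(x, y, **kwargs):
    """Kullback-Leibler-style divergence between the two energy distributions."""
    p = relative_wavelet_energy(x, **kwargs)
    q = relative_wavelet_energy(y, **kwargs)
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

rng = np.random.default_rng(3)
fs = 256
t = np.arange(0, 30, 1 / fs)                                            # one 30 s epoch
ch1 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)    # alpha-like channel
ch2 = np.sin(2 * np.pi * 2 * t) + 0.5 * rng.standard_normal(t.size)     # delta-like channel
print("RWE(ch1, ch2) =", round(relative_wavelet_entropy(ch1, ch2), 3))
```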

  15. Automatic extraction of road features in urban environments using dense ALS data

    Science.gov (United States)

    Soilán, Mario; Truong-Hong, Linh; Riveiro, Belén; Laefer, Debra

    2018-02-01

    This paper describes a methodology that automatically extracts semantic information from urban ALS data for urban parameterization and road network definition. First, building façades are segmented from the ground surface by combining knowledge-based information with both voxel and raster data. Next, heuristic rules and unsupervised learning are applied to the ground surface data to distinguish sidewalk and pavement points as a means for curb detection. Then, radiometric information is employed for road marking extraction. Using high-density ALS data from Dublin, Ireland, this fully automatic workflow was able to generate an F-score close to 95% for pavement and sidewalk identification at a resolution of 20 cm, and better than 80% for road marking detection.
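
    A simplified sketch of the radiometric step only: among points already labeled as ground, road-marking candidates can be flagged by their unusually high reflectance. The field names, intensity values and the 3-sigma rule below are assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10000
points = np.zeros(n, dtype=[("x", "f8"), ("y", "f8"), ("intensity", "f8"), ("is_ground", "?")])
points["x"], points["y"] = rng.uniform(0, 50, n), rng.uniform(0, 10, n)
points["intensity"] = rng.normal(30.0, 5.0, n)       # asphalt-like returns
points["is_ground"] = True
paint = rng.choice(n, 300, replace=False)            # synthetic road-marking returns
points["intensity"][paint] += 60.0                   # painted lines reflect strongly

ground = points[points["is_ground"]]
threshold = ground["intensity"].mean() + 3.0 * ground["intensity"].std()
markings = ground[ground["intensity"] > threshold]
print(f"{markings.size} candidate road-marking points (threshold {threshold:.1f})")
```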

  16. Addressing the social dimensions of citizen observatories: The Ground Truth 2.0 socio-technical approach for sustainable implementation of citizen observatories

    Science.gov (United States)

    Wehn, Uta; Joshi, Somya; Pfeiffer, Ellen; Anema, Kim; Gharesifard, Mohammad; Momani, Abeer

    2017-04-01

    Owing to ICT-enabled citizen observatories, citizens can take on new roles in environmental monitoring, decision making and co-operative planning, and environmental stewardship. And yet implementing advanced citizen observatories for data collection, knowledge exchange and interactions to support policy objectives is neither always easy nor successful, given the required commitment, trust, and data reliability concerns. Many efforts face problems with uptake and sustained engagement by citizens, limited scalability, unclear long-term sustainability and limited actual impact on governance processes. Similarly, to sustain the engagement of decision makers in citizen observatories, mechanisms are required from the start of the initiative in order to have them invest in and, hence, commit to and own the entire process. In order to implement sustainable citizen observatories, these social dimensions therefore need to be soundly managed. We provide empirical evidence of how the social dimensions of citizen observatories are being addressed in the Ground Truth 2.0 project, drawing on a range of relevant social science approaches. This project combines the social dimensions of citizen observatories with enabling technologies - via a socio-technical approach - so that their customisation and deployment are tailored to the envisaged societal and economic impacts of the observatories. The project consists of the demonstration and validation of six scaled-up citizen observatories in real operational conditions both in the EU and in Africa, with a specific focus on flora and fauna as well as water availability and water quality for land and natural resources management. The demonstration cases (4 EU and 2 African) cover the full 'spectrum' of citizen-sensed data usage and citizen engagement, and therefore allow testing and validation of the socio-technical concept for citizen observatories under a range of conditions.

  17. Automatic target detection using binary template matching

    Science.gov (United States)

    Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook

    2005-03-01

    This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to various lighting conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
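
    An illustrative sketch of the two ideas named above (not the authors' detector): adaptively binarize a CCD frame with Otsu's threshold, then score a binary target silhouette at every offset by the number of agreeing pixels. The frame, blob and template shape are invented; practical detectors accelerate the exhaustive scan, for example with coarse-to-fine search.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(5)
frame = rng.normal(0.2, 0.05, (60, 80))
frame[20:30, 40:60] += 0.6                           # bright, target-sized blob

binary = frame > threshold_otsu(frame)               # adaptive binarization
template = np.ones((10, 20), dtype=bool)             # crude binary target silhouette

th, tw = template.shape
scores = np.zeros((binary.shape[0] - th + 1, binary.shape[1] - tw + 1))
for r in range(scores.shape[0]):
    for c in range(scores.shape[1]):
        scores[r, c] = np.count_nonzero(binary[r:r + th, c:c + tw] == template)

best = np.unravel_index(np.argmax(scores), scores.shape)
print("best match (top-left corner):", best, "score:", int(scores[best]))  # near (20, 40)
```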

  18. The Science Behind the Academy for Eating Disorders' Nine Truths About Eating Disorders.

    Science.gov (United States)

    Schaumberg, Katherine; Welch, Elisabeth; Breithaupt, Lauren; Hübel, Christopher; Baker, Jessica H; Munn-Chernoff, Melissa A; Yilmaz, Zeynep; Ehrlich, Stefan; Mustelin, Linda; Ghaderi, Ata; Hardaway, Andrew J; Bulik-Sullivan, Emily C; Hedman, Anna M; Jangmo, Andreas; Nilsson, Ida A K; Wiklund, Camilla; Yao, Shuyang; Seidel, Maria; Bulik, Cynthia M

    2017-11-01

    In 2015, the Academy for Eating Disorders collaborated with international patient, advocacy, and parent organizations to craft the 'Nine Truths About Eating Disorders'. This document has been translated into over 30 languages and has been distributed globally to replace outdated and erroneous stereotypes about eating disorders with factual information. In this paper, we review the state of the science supporting the 'Nine Truths'. The literature supporting each of the 'Nine Truths' was reviewed, summarized and richly annotated. Most of the 'Nine Truths' arise from well-established foundations in the scientific literature. Additional evidence is required to further substantiate some of the assertions in the document. Future investigations are needed in all areas to deepen our understanding of eating disorders, their causes and their treatments. The 'Nine Truths About Eating Disorders' is a guiding document to accelerate global dissemination of accurate and evidence-informed information about eating disorders. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.

  19. An Exchange on "Truth and Methods."

    Science.gov (United States)

    Caughie, Pamela L.; Dasenbrock, Reed Way

    1996-01-01

    Takes issue with Reed Way Dasenbrock's criticism of literary theory and the terms under which literary interpretation and discussion take place. Presents Dasenbrock's reply, which discusses his understanding of certain terms (evidence, truth, debate), his description of the problem, and the logical contradictions he finds internal to…

  20. 76 FR 18354 - Truth in Lending

    Science.gov (United States)

    2011-04-04

    ... extent that a creditor imposed charges that were inconsistent with Regulation Z while the account was... amounts charged during the period the account was exempt or to provide disclosures regarding transactions...) amends the Truth in Lending Act (TILA) by increasing the threshold for exempt consumer credit...

  1. 76 FR 11319 - Truth in Lending

    Science.gov (United States)

    2011-03-02

    ... Congress enacted the Truth in Lending Act (TILA) based on findings that economic stability would be... Final Rule Congress enacted TILA based on findings that economic stability would be enhanced and... Economic Recovery Act of 2008 (HERA), also provides that its principal obligation limitations are subject...

  2. On Truth and Emancipation

    Directory of Open Access Journals (Sweden)

    Andreas Hjort Bundgaard

    2012-09-01

    Full Text Available This article has two main currents. First, it argues that an affinity or similarity can be identified between the philosophy of Gianni Vattimo (the so-called "Weak Thinking") and the "Discourse Theory" of Ernesto Laclau and Chantal Mouffe. The two theorizations are engaged with related problems, but have conceptualized them differently; they share central insights, but understand them with different vocabularies. The article furthermore illuminates what this affinity consists in, and it discusses the differences and similarities between the two theoretical positions. The second current of the article takes the 'postmodern' philosophical problems of anti-foundationalism and nihilism as its point of departure. It raises the questions of (1) how it is possible at the same time to take the critique of universality and objectivity seriously and still believe in the value of ethics and science; and (2) how we are to understand emancipation if there is no necessary relationship between truth and freedom. The article investigates the status, meaning and interconnection of the categories of truth, knowledge, ethics, politics and emancipation in the light of the absence of metaphysical first principles. The article concludes that (a) faith can constitute a "weak foundation" of knowledge and ethics; and (b) nihilism can be combined with the political and ethical ambitions of universal human emancipation and radical democracy.

  3. Rhetoric and Truth: A Note on Aristotle, Rhetoric 1355a 21-24

    Science.gov (United States)

    Grimaldi, William M. A.

    1978-01-01

    A passage from Aristotle is discussed and interpreted. Rhetoric represents truth and justice in any situation for the auditor through the use of language. The usefulness of rhetoric lies in its ability to assure an adequate and competent articulation of truth and justice. (JF)

  4. Climate science, truth, and democracy.

    Science.gov (United States)

    Keller, Evelyn Fox

    2017-08-01

    This essay was written almost ten years ago when the urgency of America's failure as a nation to respond to the threats of climate change first came to preoccupy me. Although the essay was never published in full, I circulated it informally in an attempt to provoke a more public engagement among my colleagues in the history, philosophy, and sociology of science. In particular, it was written in almost direct response to Philip Kitcher's own book, Science, Truth and Democracy (2001), in an attempt to clarify what was special about Climate Science in its relation to truth and democracy. Kitcher's response was immensely encouraging, and it led to an extended dialogue that resulted, first, in a course we co-taught at Columbia University, and later, to the book The Seasons Alter: How to Save Our Planet in Six Acts (W. W. Norton) published this spring. The book was finished just after the Paris Climate Accord, and it reflects the relative optimism of that moment. Unfortunately events since have begun to evoke, once again, the darker mood of this essay. I am grateful to Greg Radick for suggesting its publication. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. The ethics of truth in the era of technological civilization and the information

    Directory of Open Access Journals (Sweden)

    N. V. Nesprava

    2017-01-01

    Full Text Available The technological civilization and the information society give a modern person many advantages in various fields. However, at the same time, we do not see a decrease in hostility and injustice in society. Moreover, technological progress has led to the fact that human civilization faces a serious challenge posed by environmental disasters and the possibility of unprecedented, disastrous consequences of a World War III. In these circumstances, the search for the causes of the current crisis is topical, as is the development of concepts that can contribute to overcoming this crisis. One of the most promising theories in this context is the ethics of truth. In modern civilization the question of the importance of truth has been shifted to the periphery of intellectual discourse. Modern civilization uses only substitutes for this question: veracity in science, together with popular opinions circulating in the information sphere. However, these substitutes do not fully reflect the true content of the question of truth, which is not confined to veracity or to a celebrity's opinion. Analyzing the theories of H. Jonas, B. Hübner, V. Stepin and T. Voronina, the study argues that the processes by which the meaning of life is lost are intensifying in modern civilization. We argue that a lack of proper attention in modern civilization to the question of truth inevitably stimulates processes that dilute the meaning of human life. Based on the theories of G. Hegel, H.-G. Gadamer, J. Neidleman and T. Osborne, the study demonstrates that without reflecting on the question of truth one cannot come near to the transcendent, and thus one's life cannot be convincingly endowed with meaning, fullness and a purpose of existence. Understanding the question of truth is especially important in the conditions of modern civilization, because our civilization is facing the possibility of self-destruction as a result of its

  6. A Bridge to Reconciliation: A Critique of the Indian Residential School Truth Commission

    Directory of Open Access Journals (Sweden)

    Marc A. Flisfeder

    2010-05-01

    Full Text Available In the past year, the Government of Canada has established the Indian Residential Schools (IRS Truth and Reconciliation Commission (TRC to address the deleterious effect that the IRS system has had on Aboriginal communities. This paper argues that the TRC as an alternative dispute resolution mechanism is flawed since it focuses too much on truth at the expense of reconciliation. While the proliferation of historical truths is of great importance, without mapping a path to reconciliation, the Canadian public will simply learn about the mistakes of the past without addressing the residual, communal impacts of the IRS system that continue to linger. The Truth and Reconciliation Commission must therefore approach its mandate broadly and in a manner reminiscent of the Royal Commission on Aboriginal Peoples of 1996.

  7. Beauty, a road to the truth

    NARCIS (Netherlands)

    Kuipers, T.A.F.

    In this article I give a naturalistic-cum-formal analysis of the relation between beauty, empirical success, and truth. The analysis is based on the one hand on a hypothetical variant of the so-called 'mere-exposure effect' which has been more or less established in experimental psychology regarding

  8. A SEMI-AUTOMATIC RULE SET BUILDING METHOD FOR URBAN LAND COVER CLASSIFICATION BASED ON MACHINE LEARNING AND HUMAN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    H. Y. Gu

    2017-09-01

    Full Text Available A classification rule set, which comprises features and decision rules, is important for land cover classification. The selection of features and decision rules is usually based on an iterative trial-and-error approach in GEOBIA; however, this is time-consuming and has poor versatility. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets effectively, which overcomes the iterative trial-and-error approach. Human knowledge is used to address the shortcoming of existing machine learning methods, namely insufficient use of prior knowledge, and to improve the versatility of the rule sets. A two-step workflow is introduced: first, an initial rule set is built based on Random Forest and a CART decision tree; second, the initial rule set is analyzed and validated based on human knowledge, where a statistical confidence interval is used to determine its thresholds. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for land cover classification semi-automatically, and that there are static features for different land cover classes.
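
    A hedged sketch of the two steps, assuming scikit-learn and invented feature names (an NDVI-like index and an object height) standing in for the TOP/DSM-derived features: a CART tree proposes threshold rules, and a confidence interval of the feature within a class is then used to sanity-check the learned threshold.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(6)
n = 400
ndvi = np.concatenate([rng.normal(0.7, 0.1, n // 2), rng.normal(0.2, 0.1, n // 2)])
height = np.concatenate([rng.normal(8.0, 3.0, n // 2), rng.normal(0.5, 0.3, n // 2)])
X = np.column_stack([ndvi, height])
y = np.array(["tree"] * (n // 2) + ["pavement"] * (n // 2))

# Step 1: let a CART tree propose an initial rule set (thresholds on features).
cart = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(cart, feature_names=["ndvi", "height"]))

# Step 2 (in the spirit of the paper): check a learned threshold against a
# confidence interval of the feature within the class it is meant to isolate.
lo, hi = np.percentile(ndvi[y == "tree"], [2.5, 97.5])
print(f"95% interval of ndvi for class 'tree': [{lo:.2f}, {hi:.2f}]")
```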

  9. Automatic segmentation of meningioma from non-contrasted brain MRI integrating fuzzy clustering and region growing

    Directory of Open Access Journals (Sweden)

    Liao Chun-Chih

    2011-08-01

    Full Text Available Abstract Background In recent years, magnetic resonance imaging (MRI) has become important in brain tumor diagnosis. Using this modality, physicians can locate specific pathologies by analyzing differences in tissue character presented in different types of MR images. This paper uses an algorithm integrating fuzzy c-means (FCM) and region-growing techniques for automated tumor image segmentation from patients with meningioma. Only non-contrasted T1- and T2-weighted MR images are included in the analysis. The study's aims are to correctly locate tumors in the images, and to detect those situated in the midline position of the brain. Methods The study used non-contrasted T1- and T2-weighted MR images from 29 patients with meningioma. After FCM clustering, 32 groups of images from each patient group were put through the region-growing procedure for pixel aggregation. Later, using knowledge-based information, the system selected tumor-containing images from these groups and merged them into one tumor image. An alternative semi-supervised method was added at this stage for comparison with the automatic method. Finally, the tumor image was optimized by a morphology operator. Results from automatic segmentation were compared to the "ground truth" (GT) on a pixel level. Overall data were then evaluated using a quantified system. Results The quantified parameters, including the "percent match" (PM) and "correlation ratio" (CR), suggested a high match between GT and the present study's system, as well as a fair level of correspondence. The results were compatible with those from other related studies. The system successfully detected all of the tumors situated at the midline of the brain. Six cases failed in the automatic group. One also failed in the semi-supervised alternative. The remaining five cases presented noticeable edema inside the brain. In the 23 successful cases, the PM and CR values in the two groups were highly related. Conclusions Results indicated
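
    The FCM step can be sketched generically. The implementation below is a standard fuzzy c-means written from the textbook update equations (not the paper's pipeline, which additionally applies region growing and knowledge-based selection of tumor-containing clusters), run on toy two-channel intensity vectors:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features). Returns cluster centers and memberships U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U**m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]                  # weighted centroids
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        inv = dist**(-2.0 / (m - 1.0))                                  # membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Toy two-channel "T1/T2 intensity" vectors for three tissue-like populations.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal([0.2, 0.8], 0.05, (200, 2)),   # CSF-like
               rng.normal([0.6, 0.4], 0.05, (200, 2)),   # parenchyma-like
               rng.normal([0.9, 0.2], 0.05, (200, 2))])  # tumor-like
centers, U = fuzzy_c_means(X, c=3)
labels = U.argmax(axis=1)                                 # crisp seeds for region growing
print(np.round(centers, 2), np.bincount(labels))
```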

  10. Automatic Camera Orientation and Structure Recovery with Samantha

    Science.gov (United States)

    Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.

    2011-09-01

    SAMANTHA is a software package capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process either calibrated or uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, producing compelling point clouds, and quantitatively, by comparing them with laser scans serving as ground truth.

  11. 75 FR 81836 - Truth in Lending

    Science.gov (United States)

    2010-12-29

    ... the Truth in Lending Act (TILA) based on findings that economic stability would be enhanced and... MDIA is contained in Sections 2501 through 2503 of the Housing and Economic Recovery Act of 2008, Public Law 110-289, enacted on July 30, 2008. The MDIA was later amended by the Emergency Economic...

  12. Small-Scale Helicopter Automatic Autorotation : Modeling, Guidance, and Control

    NARCIS (Netherlands)

    Taamallah, S.

    2015-01-01

    Our research objective consists in developing a model-based, automatic safety recovery system for a small-scale helicopter Unmanned Aerial Vehicle (UAV) in autorotation, i.e. an engine-OFF flight condition, that safely flies and lands the helicopter at a pre-specified ground location. In pursuit

  13. A novel rumor diffusion model considering the effect of truth in online social media

    Science.gov (United States)

    Sun, Ling; Liu, Yun; Zeng, Qing-An; Xiong, Fei

    2015-12-01

    In this paper, we propose a model to investigate how truth affects rumor diffusion in online social media. Our model reveals a relation between rumor and truth — namely, when a rumor is diffusing, the truth about the rumor also diffuses with it. Two patterns by which agents identify the rumor, self-identification and passive learning, are taken into account. Combining theoretical proof and simulation analysis, we find that the threshold value of rumor diffusion is negatively correlated with the connectivity between nodes in the network and with the probability β of agents knowing the truth. Increasing β can reduce the maximum density of rumor spreaders and slow down the generation of new rumor spreaders. On the other hand, we conclude that the best rumor diffusion strategy must balance the probability of forwarding the rumor and the probability of agents losing interest in the rumor. A high rumor spread rate λ leads to a surge in truth dissemination, which greatly limits the diffusion of the rumor. Furthermore, in the case of unknown λ, increasing β can effectively reduce the maximum proportion of agents who do not know the truth, but cannot narrow the rumor diffusion range within a certain interval of β.
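
    To give a feel for the competition the abstract describes, here is a deliberately generic, well-mixed toy simulation (not the paper's model or parameters) in which ignorant agents can become rumor spreaders at rate λ or truth knowers at rate β, and spreaders who encounter the truth are converted:

```python
import numpy as np

def simulate(n=10000, lam=0.3, beta=0.1, steps=60, seed=8):
    rng = np.random.default_rng(seed)
    ignorant, rumor, truth = n - 10, 10, 0
    history = []
    for _ in range(steps):
        p_rumor = min(lam * rumor / n, 1.0)               # chance an ignorant hears the rumor
        p_truth = min(beta * (rumor + truth) / n, 1.0)    # truth diffuses alongside the rumor
        new_r = rng.binomial(ignorant, p_rumor)
        new_t = rng.binomial(ignorant - new_r, p_truth)
        converted = rng.binomial(rumor, min(beta * truth / n, 1.0))
        ignorant -= new_r + new_t
        rumor += new_r - converted
        truth += new_t + converted
        history.append((ignorant, rumor, truth))
    return np.array(history)

hist = simulate()
print("peak rumor spreaders:", hist[:, 1].max(), "| final truth knowers:", hist[-1, 2])
```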

  14. Experimental investigation of an accelerometer controlled automatic braking system

    Science.gov (United States)

    Dreher, R. C.; Sleeper, R. K.; Nayadley, J. R., Sr.

    1972-01-01

    An investigation was made to determine the feasibility of an automatic braking system for arresting the motion of an airplane by sensing and controlling braked wheel decelerations. The system was tested on a rotating drum dynamometer by using an automotive tire, wheel, and disk-brake assembly under conditions which included two tire loadings, wet and dry surfaces, and a range of ground speeds up to 70 knots. The controlling parameters were the rates at which brake pressure was applied and released and the Command Deceleration Level which governed the wheel deceleration by controlling the brake operation. Limited tests were also made with the automatic braking system installed on a ground vehicle in an effort to provide a more realistic proof of its feasibility. The results of this investigation indicate that a braking system which utilizes wheel decelerations as the control variable to restrict tire slip is feasible and capable of adapting to rapidly changing surface conditions.

  15. Truth and beauty in contemporary urban photography

    Directory of Open Access Journals (Sweden)

    Daniele Colistra

    2014-05-01

    Full Text Available Does the city still need photography? Or does it show itself more effectively through other forms of communication? The question brings us back almost two hundred years, to the time of the spread of the first daguerreotypes, when the query was: does the city still need painting? The question raises several other issues - truth and beauty, analogical and digital, truth and photo editing - that this essay examines by comparing some images. We are convinced that “the more we can speak of a picture, the more unlikely it is to speak of photography” (R. Barthes). The essay describes the work of some artists/photographers who have addressed the issue of urban photography, works in which the figurative and visionary component is based on the interaction of traditional shooting techniques and processes of digital post-production.

  16. Ground System Survivability Overview

    Science.gov (United States)

    2012-03-27

    Briefing excerpt (slide text): Avoidance; Blast Mitigation Optimization; Customer ILIR; RDT&E Funding 5.0%, 0.5%. GSS has a proven, technically proficient workforce that meets... Evaluation of Defensive-Aid Suites (ARMED); Common Automatic Fire Extinguishing System (CAFES); Transparent Armor Development; Ground Combat Vehicle... Survey; TRADOC (WFO, CNA, etc.); Voice of the Customer; Systems Engineering; Publish overarching MIL-STD, design guidelines, technical

  17. AUTOMATIC EXTRACTION AND TOPOLOGY RECONSTRUCTION OF URBAN VIADUCTS FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2015-08-01

    Full Text Available Urban viaducts are important infrastructure for the transportation system of a city. In this paper, an original method is proposed to automatically extract urban viaducts and reconstruct the topology of the viaduct network using only airborne LiDAR point cloud data. This greatly simplifies the labor-intensive procedure of viaduct extraction and reconstruction. In our method, the point cloud is first filtered to divide all points into ground points and non-ground points. A region-growing algorithm is adopted to find the viaduct points among the non-ground points, using features derived from general prescriptive design rules. Then, the viaduct points are projected into 2D images to extract the centerline of every viaduct, and cubic functions are generated by least-squares fitting to represent the passages of the viaducts, with which the topology of the viaduct network can be rebuilt by combining the height information. Finally, a topological graph of the viaduct network is produced. This fully automatic method can potentially benefit urban navigation applications and city model reconstruction.
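
    The centerline-fitting step maps naturally onto an ordinary least-squares polynomial fit. The sketch below (synthetic points, invented geometry) fits one passage as a cubic y = f(x) in the projected plane; deck heights would then be compared to decide which passage crosses over which:

```python
import numpy as np

rng = np.random.default_rng(9)
x = np.sort(rng.uniform(0.0, 200.0, 500))                  # along-road coordinate (m)
y_true = 5e-6 * (x - 100.0)**3 + 0.05 * x + 3.0            # gently curving passage
y = y_true + rng.normal(0.0, 0.5, x.size)                  # lane-width spread / ALS noise

coeffs = np.polyfit(x, y, deg=3)                           # cubic least-squares fit
centerline = np.poly1d(coeffs)
rmse = float(np.sqrt(np.mean((centerline(x) - y_true)**2)))
print("cubic coefficients:", np.round(coeffs, 6), "| RMSE vs. truth [m]:", round(rmse, 3))
```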

  18. Study on the Feasibility of RGB Substitute CIR for Automatic Removal Vegetation Occlusion Based on Ground Close-Range Building Images

    Science.gov (United States)

    Li, C.; Li, F.; Liu, Y.; Li, X.; Liu, P.; Xiao, B.

    2012-07-01

    3D building reconstruction based on ground remote sensing data (image, video and LiDAR) inevitably faces the problem that buildings are often occluded by vegetation, so automatically removing and repairing vegetation occlusion is a very important preprocessing step for image understanding, computer vision and digital photogrammetry. In traditional multispectral remote sensing, acquired from aeronautic and space platforms, indices based on the Red and Near-infrared (NIR) bands, such as the NDVI (Normalized Difference Vegetation Index), are useful to distinguish vegetation and clouds, amongst other targets. However, especially on ground platforms, CIR (Color Infra-Red) is little utilized by computer vision and digital photogrammetry, which usually only take true-color RGB into account. Therefore, whether or not CIR is necessary for vegetation segmentation is significant, in that most close-range cameras do not provide such an NIR band. Moreover, the CIE L*a*b* color space, obtained by transformation from RGB, seems to be of little interest to photogrammetrists despite its power in image classification and analysis. Therefore, CIE (L, a, b) features and a support vector machine (SVM) are suggested for vegetation segmentation as a substitute for CIR. Finally, experimental results on visual quality and automation are given. The conclusion is that it is feasible to remove and segment vegetation occlusion without an NIR band. This work should pave the way for texture reconstruction and repair in future 3D reconstruction.
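
    A hedged sketch of the suggested approach, assuming scikit-image and scikit-learn: RGB pixel samples are converted to CIE L*a*b* and an SVM separates vegetation from facade pixels. The "labeled" samples below are synthetic stand-ins for manually annotated patches:

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.svm import SVC

rng = np.random.default_rng(10)
green = np.clip(rng.normal([0.25, 0.55, 0.20], 0.05, (300, 3)), 0, 1)   # vegetation-like RGB
grey = np.clip(rng.normal([0.55, 0.52, 0.50], 0.05, (300, 3)), 0, 1)    # facade-like RGB
X_rgb = np.vstack([green, grey])
y = np.array([1] * 300 + [0] * 300)

# rgb2lab expects image-shaped input; treat the sample list as an Nx1 image.
X_lab = rgb2lab(X_rgb.reshape(-1, 1, 3)).reshape(-1, 3)

clf = SVC(kernel="rbf", gamma="scale").fit(X_lab, y)
print("training accuracy (Lab + SVM):", clf.score(X_lab, y))
```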

  19. AUTOMATIC AND GENERIC MOSAICING OF MULTISENSOR IMAGES: AN APPLICATION TO PLEIADES HR

    Directory of Open Access Journals (Sweden)

    F. Bignalet-Cazalet

    2012-07-01

    Full Text Available In the early phase of the Pleiades program, the CNES (the French Space Agency) specified and developed a fully automatic mosaicing processing unit in order to generate satellite image mosaics under operational conditions. This tool can automatically put each input image into a common geometry, homogenize the radiometry, and generate orthomosaics using stitching lines. As the image quality commissioning phase of Pleiades1A is ongoing, this mosaicing process is being tested for the first time under operational conditions. The newly launched French high-resolution satellite can acquire adjacent images for French Civil and Defense User Ground Segments. This paper presents the very first results of mosaicing Pleiades1A images. Beyond Pleiades’ use, our mosaicing tool can process a significant variety of images, including other satellite and airborne acquisitions, using automatically taken or external ground control points, offering time-based image superposition, and more. This paper also presents the design of the mosaicing tool and describes the processing workflow and the additional capabilities and applications.

  20. Lost Academic Souls and the Truth.

    Science.gov (United States)

    Birenbaum, William M.

    The connection between knowing the truth and some version of how men should live has always guided those who would lead the university. Walls around a campus or geographic isolation cannot prevent social pressures from affecting the institution. Colleges and universities have always been politicalized. The danger lies not in that fact but in the…

  1. Computer aided detection in prostate cancer diagnostics: A promising alternative to biopsy? A retrospective study from 104 lesions with histological ground truth.

    Directory of Open Access Journals (Sweden)

    Anika Thon

    Full Text Available Prostate cancer (PCa) diagnosis by means of multiparametric magnetic resonance imaging (mpMRI) is a current challenge for the development of computer-aided detection (CAD) tools. An innovative CAD software (Watson Elementary™) was proposed to achieve high sensitivity and specificity, as well as to allege a correlate to Gleason grade. To assess the performance of Watson Elementary™ in automated PCa diagnosis in our hospital's database of MRI-guided prostate biopsies. The evaluation was retrospective for 104 lesions (47 PCa, 57 benign) from 79 patients aged 64.61±6.64 years, using 3T T2-weighted imaging, Apparent Diffusion Coefficient (ADC) maps and dynamic contrast enhancement series. Watson Elementary™ utilizes signal intensity, diffusion properties and kinetic profile to compute a proportional Gleason grade predictor, termed Malignancy Attention Index (MAI). The analysis focused on (i) the CAD sensitivity and specificity to classify suspect lesions and (ii) the MAI correlation with the histopathological ground truth. The software revealed a sensitivity of 46.80% for PCa classification. The specificity for PCa was found to be 75.43%, with a positive predictive value of 61.11%, a negative predictive value of 63.23% and a false discovery rate of 38.89%. CAD classified PCa and benign lesions with equal probability (P 0.06, χ2 test). Accordingly, receiver operating characteristic analysis suggests a poor predictive value for MAI, with an area under the curve of 0.65 (P 0.02), which is not superior to the performance of board-certified observers. Moreover, MAI revealed no significant correlation with Gleason grade (P 0.60, Pearson's correlation). The tested CAD software for mpMRI analysis was a weak PCa biomarker in this dataset. Targeted prostate biopsy and histology remains the gold standard for prostate cancer diagnosis.

  2. Truth-telling contra perfectionist liberalism: Muslim parrhēsíastes in Denmark

    DEFF Research Database (Denmark)

    Renders, Johannes

    In this paper, I first offer a general outline and reflection on the notion of parrhēsía (truth-telling), as popularized by Foucault. Secondly, I discuss Foucault’s history of problematizations, with comments on what he called “games of truth” and the Cartesian conception of truth-telling. Thirdly......, I sketch a trend in the current Danish public and political sphere, defining the notion of “perfectionist liberalism” and how it translates to the Danish context, including concrete examples and notes on “liberal intolerance arguments”. Lastly, I address the condition of Muslim parrhēsíastes (truth...

  3. An inconvenient truth; Une verite qui derange

    Energy Technology Data Exchange (ETDEWEB)

    Al, Gore

    2007-01-15

    Our climate crisis may at times appear to be happening slowly, but in fact it is happening very quickly-and has become a true planetary emergency. The Chinese expression for crisis consists of two characters. The first is a symbol for danger; the second is a symbol for opportunity. In order to face down the danger that is stalking us and move through it, we first have to recognize that we are facing a crisis. So why is it that our leaders seem not to hear such clarion warnings? Are they resisting the truth because they know that the moment they acknowledge it, they will face a moral imperative to act? Is it simply more convenient to ignore the warnings? Perhaps, but inconvenient truths do not go away just because they are not seen. Indeed, when they are responded to, their significance does not diminish; it grows. (author)

  4. [The duty to tell the truth with regard to a person with Alzheimer's disease].

    Science.gov (United States)

    Neyen, Octavie; Cornet, Marielle; Zeringer, Marie; Neyen, Constance

    2014-01-01

    In the framework of a project relating to ethical questioning, pupils in their penultimate year at Mabillon des Ardennes high school gathered testimonies which revealed that the truth is sometimes hidden from people with Alzheimer's disease. Why is this right to the truth not always respected? In what circumstances does it happen? What are the reasons? What are the potential consequences? Reflection is required around the question of the respect of the right to the truth for people with cognitive disorders.

  5. Scientific revolution, incommensurability and truth in theories ...

    African Journals Online (AJOL)

    Scientific revolution, incommensurability and truth in theories: objection to Kuhn's perspective. ... AFRREV STECH: An International Journal of Science and Technology ... The core of our discussion is, ultimately, to provide a clearer and broader picture of the general characteristics of scientific revolution or theory change.

  6. A Linguistic Truth-Valued Temporal Reasoning Formalism and Its Implementation

    Science.gov (United States)

    Lu, Zhirui; Liu, Jun; Augusto, Juan C.; Wang, Hui

    Temporality and uncertainty are important features of many real-world systems. Solving problems in such systems requires the use of formal mechanisms such as logic systems, statistical methods or other reasoning and decision-making methods. In this paper, we propose a linguistic truth-valued temporal reasoning formalism to enable the management of both features concurrently, using a linguistic truth-valued logic and a temporal logic. We also provide a backward reasoning algorithm which allows the answering of user queries. A simple but realistic scenario in a smart home application is used to illustrate our work.
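
    For flavor only, here is a generic backward-chaining sketch over rules annotated with linguistic truth values mapped to numeric degrees and combined with min; the vocabulary, the example rules and facts, and the combination operator are invented for illustration and are not the formalism defined in the paper:

```python
# Generic backward chaining with linguistic truth values (toy smart-home rules).
DEGREES = {"false": 0.0, "possibly_false": 0.25, "unknown": 0.5,
           "possibly_true": 0.75, "true": 1.0}

facts = {"motion_kitchen@t1": "true", "stove_on@t1": "possibly_true"}
rules = [("cooking@t1", ["motion_kitchen@t1", "stove_on@t1"], "true"),
         ("fire_risk@t2", ["cooking@t1", "unattended@t2"], "possibly_true")]

def prove(goal, depth=5):
    """Return a numeric truth degree for `goal`, or 0.0 if it cannot be derived."""
    if goal in facts:
        return DEGREES[facts[goal]]
    if depth == 0:
        return 0.0
    best = 0.0
    for head, body, rule_tv in rules:
        if head == goal:
            body_tv = min((prove(b, depth - 1) for b in body), default=1.0)
            best = max(best, min(body_tv, DEGREES[rule_tv]))
    return best

print("cooking@t1  :", prove("cooking@t1"))     # min(true, possibly_true) -> 0.75
print("fire_risk@t2:", prove("fire_risk@t2"))   # unattended@t2 unprovable -> 0.0
```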

  7. Automatic spinal cord localization, robust to MRI contrasts using global curve optimization.

    Science.gov (United States)

    Gros, Charley; De Leener, Benjamin; Dupont, Sara M; Martin, Allan R; Fehlings, Michael G; Bakshi, Rohit; Tummala, Subhash; Auclair, Vincent; McLaren, Donald G; Callot, Virginie; Cohen-Adad, Julien; Sdika, Michaël

    2018-02-01

    mean square error of 1.02 mm. OptiC achieved superior results compared to a state-of-the-art spinal cord localization technique based on the Hough transform, especially on pathological cases with an averaged mean square error of 1.08 mm vs. 13.16 mm (Wilcoxon signed-rank test p-value < .01). Images containing brain regions were identified with a 99% precision, on which brain and spine regions were separated with a distance error of 9.37 mm compared to ground-truth. Validation results on a challenging dataset suggest that OptiC could reliably be used for subsequent quantitative analyses tasks, opening the door to more robust analysis on pathological cases. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. The Politics of Violence, Truth, and Reconciliation in the Arab Middle East

    DEFF Research Database (Denmark)

    This book treats political and cultural attempts to create truth and reconciliation processes in the Arab Middle East. It contains studies of Morocco, Algeria, Sudan, Lebanon, Palestine, Iraq and Syria.

  9. AUTOMATIC CAMERA ORIENTATION AND STRUCTURE RECOVERY WITH SAMANTHA

    Directory of Open Access Journals (Sweden)

    R. Gherardi

    2012-09-01

    Full Text Available SAMANTHA is a software package capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process both calibrated and uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, producing compelling point clouds, and quantitatively, by comparing them with laser scans serving as ground truth.
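
    To make the hierarchical organisation concrete, here is a schematic sketch of a balanced bottom-up merge of partial reconstructions; the merge step is a placeholder, and none of SAMANTHA's matching, autocalibration or bundle adjustment is modelled.

```python
# Schematic sketch of a balanced bottom-up merge of partial reconstructions.
# `merge` is a placeholder: SAMANTHA's feature matching, autocalibration and
# bundle adjustment are not modelled here.
def merge(a, b):
    return {"images": a["images"] + b["images"]}   # placeholder fusion

def hierarchical_reconstruction(image_names):
    nodes = [{"images": [name]} for name in image_names]   # leaves: single images
    while len(nodes) > 1:
        paired = [merge(nodes[i], nodes[i + 1]) for i in range(0, len(nodes) - 1, 2)]
        if len(nodes) % 2:                                  # carry an odd node upward
            paired.append(nodes[-1])
        nodes = paired                                      # internal nodes of the tree
    return nodes[0]                                         # root: the final reconstruction

print(hierarchical_reconstruction([f"img_{k}.jpg" for k in range(5)]))
```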

  10. Being asked to tell an unpleasant truth about another person activates anterior insula and medial prefrontal cortex.

    Science.gov (United States)

    Littlefield, Melissa M; Dietz, Martin J; Fitzgerald, Des; Knudsen, Kasper J; Tonks, James

    2015-01-01

    "Truth" has been used as a baseline condition in several functional magnetic resonance imaging (fMRI) studies of deception. However, like deception, telling the truth is an inherently social construct, which requires consideration of another person's mental state, a phenomenon known as Theory of Mind. Using a novel ecological paradigm, we examined blood oxygenation level dependent (BOLD) responses during social and simple truth telling. Participants (n = 27) were randomly divided into two competing teams. Post-competition, each participant was scanned while evaluating performances from in-group and out-group members. Participants were asked to be honest and were told that their evaluations would be made public. We found increased BOLD responses in the medial prefrontal cortex, bilateral anterior insula and precuneus when participants were asked to tell social truths compared to simple truths about another person. At the behavioral level, participants were slower at responding to social compared to simple questions about another person. These findings suggest that telling the truth is a nuanced cognitive operation that is dependent on the degree of mentalizing. Importantly, we show that the cortical regions engaged by truth telling show a distinct pattern when the task requires social reasoning.

  11. Prosecuting International Crimes at National Level: Lessons from the Argentine ‘Truth-Finding Trials’

    Directory of Open Access Journals (Sweden)

    Elena Maculan

    2012-01-01

    Full Text Available Truth-finding trials (juicios por la verdad) constitute a novel solution devised by the Argentine judicial system to cope with crimes committed by the past military dictatorship. This mechanism uses criminal courts as well as criminal procedure in order to investigate the truth about the dictatorship's crimes; however, the trials allow judges neither to establish criminal responsibility nor to punish the perpetrators of crimes. This limitation is due to the inability, imposed by the Full Stop and Due Obedience Laws, to prosecute the perpetrators of crimes. From the perspective of criminal law, truth-finding trials present two problematic features: firstly, their creation and regulation are set by judges, which has caused the development of many non-homogeneous local solutions and, secondly, their hybrid nature, which entails a possible subversion of conventional forms and goals in the context of the criminal trial. The paper also describes the current situation, since the Argentine impunity laws were declared unconstitutional and criminal proceedings reopened. The new framework provokes questions about the relationship between the reopened criminal trials and the truth-finding investigations, not only with regard to evidentiary issues but also with respect to the reason why the truth-finding investigations are still held. Finally, the shift from a non-punitive approach to the current full criminal accountability seems to suggest that truth-finding trials were merely a temporary solution, while the notion of the full prosecution and punishment of State crimes was never really set aside.

  12. Davidson, Dualism, and Truth

    Directory of Open Access Journals (Sweden)

    Nathaniel Goldberg

    2012-12-01

    Full Text Available Happy accidents happen even in philosophy. Sometimes our arguments yield insights despite missing their target, though when they do others can often spot it more easily. Consider the work of Donald Davidson. Few did more to explore connections among mind, language, and world. Now that we have critical distance from his views, however, we can see that Davidson’s accomplishments are not quite what they seem. First, while Davidson attacked the dualism of conceptual scheme and empirical content, he in fact illustrated a way to hold it. Second, while Davidson used the principle of charity to argue against the dualism, his argument in effect treats the principle as constitutive of a conceptual scheme. And third, while Davidson asserted that he cannot define what truth ultimately is—and while I do not disagree—his work nonetheless allows us to say more about truth than Davidson himself does. I aim to establish these three claims. Doing so enriches our understanding of issues central to the history of philosophy concerning how, if at all, to divvy up the mental or linguistic contribution, and the worldly contribution, to knowledge. As we see below, Davidson was right in taking his work to be one stage of a dialectic begun by Immanuel Kant. He was just wrong about what that stage is. Reconsidering Davidson’s views also moves the current debate forward, as they reveal a previously unrecognized yet intuitive notion of truth—even if Davidson himself remained largely unaware of it. We begin however with scheme/content dualism and Davidson’s argument against it.

  13. About the notion of truth in quantum mechanics

    International Nuclear Information System (INIS)

    Omnes, R.

    1991-01-01

    The meaning of truth in quantum mechanics is considered in order to respond to some objections raised by B. d'Espagnat against a logical interpretation of quantum mechanics recently proposed by the author. A complete answer is given. It is shown that not only can factual data be said to be true, but also some of their logical consequences, so that the definition of truth given by Heisenberg is both extended and refined. Some nontrue but reliable propositions may also be used, but they are somewhat arbitrary because of the complementarity principle. For instance, the propositions expressing wave packet reduction can be either true or reliable, according to the case under study. Separability is also discussed: as far as the true properties of an individual system are concerned, quantum mechanics is separable

  14. A logical approach to fuzzy truth hedges

    Czech Academy of Sciences Publication Activity Database

    Esteva, F.; Godo, L.; Noguera, Carles

    2013-01-01

    Roč. 232, č. 1 (2013), s. 366-385 ISSN 0020-0255 Institutional support: RVO:67985556 Keywords : Mathematical fuzzy logic * Standard completeness * Truth hedges Subject RIV: BA - General Mathematics Impact factor: 3.893, year: 2013 http://library.utia.cas.cz/separaty/2016/MTR/noguera-0469148.pdf

  15. Truth and (self) censorship in military memoirs

    NARCIS (Netherlands)

    Kleinreesink, E.; Soeters, J.M.M.L.

    2016-01-01

    It can be difficult for researchers from outside the military to gain access to the field. However, there is a rich source on the military that is readily available for every researcher: military memoirs. This source does provide some methodological challenges with regard to truth and (self)

  16. The Inconvenient Truth. Part 2

    International Nuclear Information System (INIS)

    Athanasiou, T.

    2007-01-01

    Essay-type of publication on what should happen next after Al Gore's presentations on the Inconvenient Truth about the impacts of climate change. The essay states in the first lines: 'We've seen the movie, so we know the first part - we're in trouble deep. And it's time, past time, for at least some of us to go beyond warning to planning, to start talking seriously about a global crash program to stabilize the climate

  17. Is socioeconomic status associated with awareness of and receptivity to the truth campaign?

    Science.gov (United States)

    Vallone, Donna M; Allen, Jane A; Xiao, Haijun

    2009-10-01

    The truth campaign is credited with preventing approximately 450,000 youth from starting to smoke, from 2000 through 2004 [Farrelly, M.C., Nonnemaker, J., Davis, K.C., Hussin, A., 2009. The Influence of the National Truth Campaign on Smoking Initiation. Am. J. Prev. Med. February 9 [Epub ahead of print

  18. Cost-utility analysis of the National truth campaign to prevent youth smoking.

    Science.gov (United States)

    Holtgrave, David R; Wunderink, Katherine A; Vallone, Donna M; Healton, Cheryl G

    2009-05-01

    In 2005, the American Journal of Public Health published an article that indicated that 22% of the overall decline in youth smoking that occurred between 1999 and 2002 was directly attributable to the truth social marketing campaign launched in 2000. A remaining key question about the truth campaign is whether the economic investment in the program can be justified by the public health outcomes; that question is examined here. Standard methods of cost and cost-utility analysis were employed in accordance with the U.S. Panel on Cost-Effectiveness in Health and Medicine; a societal perspective was employed. During 2000-2002, expenditures totaled just over $324 million to develop, deliver, evaluate, and litigate the truth campaign. The base-case cost-utility analysis result indicates that the campaign was cost saving; it is estimated that the campaign recouped its costs and that just under $1.9 billion in medical costs was averted for society. Sensitivity analysis indicated that the basic determination of cost effectiveness for this campaign is robust to substantial variation in input parameters. This study suggests that the truth campaign not only markedly improved the public's health but did so in an economically efficient manner.
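
    Using only the two figures quoted above, the headline "cost saving" conclusion can be reproduced with back-of-the-envelope arithmetic; the sketch ignores discounting, the QALY terms and the sensitivity ranges of the full analysis.

```python
# Back-of-the-envelope version of the cost-saving conclusion, using only the
# figures quoted in the abstract; QALYs, discounting and sensitivity ranges
# from the full analysis are omitted.
campaign_cost = 324e6          # total expenditures, 2000-2002 (USD)
averted_medical_costs = 1.9e9  # medical costs averted for society (USD)

net_cost = campaign_cost - averted_medical_costs
print(f"Net cost to society: ${net_cost / 1e9:.2f} billion")
# A negative net cost is what "cost saving" means here: the programme more than
# recoups its costs through averted medical spending before any health gains are valued.
```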

  19. Multi-modal RGB–Depth–Thermal Human Body Segmentation

    DEFF Research Database (Denmark)

    Palmero, Cristina; Clapés, Albert; Bahnsen, Chris

    2016-01-01

    This work addresses the problem of human body segmentation from multi-modal visual cues as a first stage of automatic human behavior analysis. We propose a novel RGB-Depth-Thermal dataset along with a multi-modal segmentation baseline. The several modalities are registered using a calibration...... to other state-of-the-art methods, obtaining an overlap above 75% on the novel dataset when compared to the manually annotated ground-truth of human segmentations....

  20. No funeral bells: Public reason in a 'post-truth' age.

    Science.gov (United States)

    Jasanoff, Sheila; Simmet, Hilton R

    2017-10-01

    The label 'post-truth' signals for many a troubling turn away from principles of enlightened government. The word 'post', moreover, implies a past when things were radically different and whose loss should be universally mourned. In this paper, we argue that this framing of 'post-truth' is flawed because it is ahistorical and ignores the co-production of knowledge and norms in political contexts. Debates about public facts are necessarily debates about social meanings, rooted in realities that are subjectively experienced as all-encompassing and complete, even when they are partial and contingent. Facts used in policy are normative in four ways: They are embedded in prior choices of which experiential realities matter, produced through processes that reflect institutionalized public values, arbiters of which issues are open to democratic contestation and deliberation, and vehicles through which polities imagine their collective futures. To restore truth to its rightful place in democracy, governments should be held accountable for explaining who generated public facts, in response to which sets of concerns, and with what opportunities for deliberation and closure.

  1. Twardowski On Truth

    Directory of Open Access Journals (Sweden)

    Peter Simons

    2009-10-01

    Full Text Available Of those students of Franz Brentano who went on to become professional philosophers, Kazimierz Twardowski (1866-1938 is much less well-known than his older contemporaries Edmund Husserl and Alexius Meinong. Yet in terms of the importance of his contribution to the history of philosophy, he ranks among Brentano’s students behind at most those two, possibly only behind Husserl. The chief contribution of Twardowski to global philosophy came indirectly, through the influence of his theory of truth on his students, and they on their students, and so on. The most important of these grandstudents is one whom Twardowski presumably knew but never taught, and whose adopted name is obtained by deleting four letters from his own: Tarski.

  2. Thoracic lymph node station recognition on CT images based on automatic anatomy recognition with an optimal parent strategy

    Science.gov (United States)

    Xu, Guoping; Udupa, Jayaram K.; Tong, Yubing; Cao, Hanqiang; Odhner, Dewey; Torigian, Drew A.; Wu, Xingyu

    2018-03-01

    Many papers have been published on the detection and segmentation of lymph nodes from medical images. However, it remains a challenging problem owing to low contrast with surrounding soft tissues and variations of lymph node size and shape on computed tomography (CT) images. This is particularly difficult on the low-dose CT of PET/CT acquisitions. In this study, we utilize our previous automatic anatomy recognition (AAR) framework to recognize the thoracic lymph node stations defined by the International Association for the Study of Lung Cancer (IASLC) lymph node map. The lymph node stations themselves are viewed as anatomic objects and are localized by using a one-shot method in the AAR framework. Two strategies are taken in this paper for integration into the AAR framework. The first is to combine some lymph node stations into composite lymph node stations according to their geometrical nearness. The other is to find the optimal parent (organ or union of organs) as an anchor for each lymph node station based on the recognition error, and thereby find an overall optimal hierarchy in which to arrange anchor organs and lymph node stations. Based on 28 contrast-enhanced thoracic CT image data sets for model building and 12 independent data sets for testing, our results show that thoracic lymph node stations can be localized within 2-3 voxels compared to the ground truth.
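
    A schematic illustration of the second strategy (choosing an optimal anchor for each station) follows; the station names, candidate parents and error values are made-up placeholders, not numbers from the study.

```python
# Illustrative sketch of the "optimal parent" idea: for each lymph node station,
# pick the anchor (organ or union of organs) with the smallest recognition error.
# The station names, candidate parents and error values are made up; in the paper
# they come from AAR recognition experiments.
recognition_error_mm = {
    "station_4R": {"trachea": 2.1, "aorta": 3.4, "trachea+aorta": 1.8},
    "station_7":  {"trachea": 2.9, "heart": 2.2, "trachea+heart": 2.5},
}

optimal_parent = {
    station: min(errors, key=errors.get)
    for station, errors in recognition_error_mm.items()
}
print(optimal_parent)  # {'station_4R': 'trachea+aorta', 'station_7': 'heart'}
```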

  3. CONCEPT OF "TRUTH" IN THE NOVEL OF MIKHAIL BULGAKOV'S MASTER AND MARGARITA

    Directory of Open Access Journals (Sweden)

    Anastasiya Sergeevna Korneenko

    2014-11-01

    Full Text Available The subject of our study is the concept of "truth" in Bulgakov's novel Master and Margarita. S. Stepanov gave the definition of a concept: the concept is like a clot of culture in human consciousness; that is the manner in which culture becomes a part of the mental world of a man. On the other hand, the concept is a way for a man, as an ordinary, normal person rather than a "creator of cultural values", to be included in the culture and, in some cases, to affect it. In the analysis of the concept of "truth" we start from the cultural-and-etymological understanding of the word "truth". The aim of the study was to conduct a comparative analysis of the three editions of the novel and to find out what semantic meanings Bulgakov deals with in each particular case and how this affects the transformation of the idea of the novel from edition to edition. Three editions of the novel were the material for the research: the first is "Engineer Hoof" (1930–1932), the second is "Grand Chancellor" (1932–1934) and the final version is "The Master and Margarita". In these three editions Yeshua involves everyone he is talking to in the dialogue. In the editions "Engineer Hoof" and "Grand Chancellor" M. A. Bulgakov uses the concept of "truth" in scientific-philosophical and sacred meanings (plans). In the final text Yeshua appears as a philosopher who is in search of the truth.

  4. Toward Automatic Georeferencing of Archival Aerial Photogrammetric Surveys

    Science.gov (United States)

    Giordano, S.; Le Bris, A.; Mallet, C.

    2018-05-01

    Images from archival aerial photogrammetric surveys are a unique and relatively unexplored means to chronicle 3D land-cover changes over the past 100 years. They provide a relatively dense temporal sampling of the territories with very high spatial resolution. Such time series image analysis is a mandatory baseline for a large variety of long-term environmental monitoring studies. The current bottleneck for accurate comparison between epochs is their fine georeferencing step. No fully automatic method has been proposed yet and existing studies are rather limited in terms of area and number of dates. The state of the art shows that the major challenge is the identification of ground references: cartographic coordinates and their position in the archival images. This task is performed manually and is extremely time-consuming. This paper proposes to use a photogrammetric approach, and states that the 3D information that can be computed is the key to full automation. Its original idea lies in a 2-step approach: (i) the computation of a coarse absolute image orientation; (ii) the use of the coarse Digital Surface Model (DSM) information for automatic absolute image orientation. It only relies on a recent orthoimage+DSM, used as master reference for all epochs. The coarse orthoimage, compared with such a reference, allows the identification of dense ground references and the coarse DSM provides their position in the archival images. Results on two areas and 5 dates show that this method is compatible with long and dense archival aerial image series. Satisfactory planimetric and altimetric accuracies are reported, with variations depending on the ground sampling distance of the images and the location of the Ground Control Points.

  5. TOWARD AUTOMATIC GEOREFERENCING OF ARCHIVAL AERIAL PHOTOGRAMMETRIC SURVEYS

    Directory of Open Access Journals (Sweden)

    S. Giordano

    2018-05-01

    Full Text Available Images from archival aerial photogrammetric surveys are a unique and relatively unexplored means to chronicle 3D land-cover changes over the past 100 years. They provide a relatively dense temporal sampling of the territories with very high spatial resolution. Such time series image analysis is a mandatory baseline for a large variety of long-term environmental monitoring studies. The current bottleneck for accurate comparison between epochs is their fine georeferencing step. No fully automatic method has been proposed yet and existing studies are rather limited in terms of area and number of dates. The state of the art shows that the major challenge is the identification of ground references: cartographic coordinates and their position in the archival images. This task is performed manually and is extremely time-consuming. This paper proposes to use a photogrammetric approach, and states that the 3D information that can be computed is the key to full automation. Its original idea lies in a 2-step approach: (i) the computation of a coarse absolute image orientation; (ii) the use of the coarse Digital Surface Model (DSM) information for automatic absolute image orientation. It only relies on a recent orthoimage+DSM, used as master reference for all epochs. The coarse orthoimage, compared with such a reference, allows the identification of dense ground references and the coarse DSM provides their position in the archival images. Results on two areas and 5 dates show that this method is compatible with long and dense archival aerial image series. Satisfactory planimetric and altimetric accuracies are reported, with variations depending on the ground sampling distance of the images and the location of the Ground Control Points.
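
    As a rough illustration of step (ii) in the two records above, the sketch below estimates the pixel shift between a synthetic archival orthoimage and a reference orthoimage by phase correlation. The synthetic images, the translation-only motion model and the omission of the DSM transfer step are all simplifying assumptions.

```python
import numpy as np

# Toy illustration of step (ii): estimate the 2D pixel shift between a coarse
# archival orthoimage and a recent reference orthoimage by phase correlation.
# Real georeferencing also needs rotation/scale handling and the DSM to transfer
# matched points into the archival image geometry; none of that is modelled here.
def estimate_shift(reference, coarse):
    """Return the (dy, dx) pixel shift of `coarse` relative to `reference`."""
    cross = np.fft.fft2(coarse) * np.conj(np.fft.fft2(reference))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

reference = np.zeros((64, 64)); reference[20:30, 20:30] = 1.0   # synthetic reference patch
archival = np.roll(reference, (3, -5), axis=(0, 1))             # same scene, shifted
print(estimate_shift(reference, archival))                      # -> [3, -5]
```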

  6. Chinese Children's Moral Evaluation of Lies and Truths-Roles of Context and Parental Individualism-Collectivism Tendencies.

    Science.gov (United States)

    Fu, Genyue; Brunet, Megan K; Lv, Yin; Ding, Xiaopan; Heyman, Gail D; Cameron, Catherine Ann; Lee, Kang

    2010-10-01

    The present study examined Chinese children's moral evaluations of truths and lies about one's own pro-social acts. Children ages 7, 9, and 11 were read vignettes in which a protagonist performs a good deed and is asked about it by a teacher, either in front of the class or in private. In response, the protagonist either tells a modest lie, which is highly valued by the Chinese culture, or tells an immodest truth, which violates the Chinese cultural norms about modesty. Children were asked to identify whether the protagonist's statement was the truth or a lie, and to evaluate how 'good' or 'bad' the statement was. Chinese children rated modest lies more positively than immodest truths, with this effect becoming more pronounced with age. Rural Chinese children and those with at least one nonprofessional parent rated immodest truths less positively when they were told in public rather than in private. Furthermore, Chinese children of parents with high collectivism scores valued modest lies more than did children of parents with low collectivism scores. These findings suggest that both macro- and micro-cultural factors contribute significantly to children's moral understanding of truth and lie telling.

  7. Optimal interconnect ATPG under a ground-bounce constraint

    NARCIS (Netherlands)

    Hollmann, H.D.L.; Marinissen, E.J.; Vermeulen, B.

    In order to prevent ground bounce, Automatic Test Pattern Generation (ATPG) algorithms for wire interconnects have recently been extended with the capability to restrict the maximal Hamming distance between any two consecutive test patterns to a user-defined integer, referred to as the
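
    The record is cut off here, but the constraint it describes (a user-defined bound on the Hamming distance, i.e. the number of simultaneously switching interconnect wires, between consecutive test patterns) can be illustrated with a small sketch. The walking-one pattern set and the greedy reordering heuristic are illustrative assumptions, not the paper's ATPG algorithm.

```python
# Illustrative sketch: greedily order a given set of interconnect test patterns so
# that consecutive patterns never differ in more than `max_switching` wires, the
# kind of ground-bounce constraint the record describes.  The pattern set and the
# greedy heuristic are assumptions, not the paper's ATPG algorithm.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def order_patterns(patterns, max_switching):
    remaining = list(patterns)
    ordered = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda p: hamming(p, ordered[-1]))  # closest next pattern
        if hamming(nxt, ordered[-1]) > max_switching:
            raise ValueError("greedy ordering violates the switching bound")
        ordered.append(nxt)
        remaining.remove(nxt)
    return ordered

walking_one = ["1000", "0100", "0010", "0001"]   # classic interconnect patterns
print(order_patterns(walking_one, max_switching=2))
```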

  8. A Truthful Incentive Mechanism for Online Recruitment in Mobile Crowd Sensing System

    Directory of Open Access Journals (Sweden)

    Xiao Chen

    2017-01-01

    Full Text Available We investigate emerging mobile crowd sensing (MCS systems, in which new cloud-based platforms sequentially allocate homogenous sensing jobs to dynamically-arriving users with uncertain service qualities. Given that human beings are selfish in nature, it is crucial yet challenging to design an efficient and truthful incentive mechanism to encourage users to participate. To address the challenge, we propose a novel truthful online auction mechanism that can efficiently learn to make irreversible online decisions on winner selections for new MCS systems without requiring previous knowledge of users. Moreover, we theoretically prove that our incentive possesses truthfulness, individual rationality and computational efficiency. Extensive simulation results under both real and synthetic traces demonstrate that our incentive mechanism can reduce the payment of the platform, increase the utility of the platform and social welfare.
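
    The mechanism itself is not given in the record, but the general shape of a truthful online recruitment rule can be sketched with a posted-price style policy: learn a threshold from an initial observation phase, then pay every later winner the threshold rather than their reported cost. The observation fraction, the median threshold and the budget handling are assumptions, and none of the paper's guarantees are claimed for this toy.

```python
import random

# Toy posted-price sketch in the spirit of the record: observe an initial fraction
# of arriving users to learn a price threshold, then recruit later users whose
# reported costs fall below it and pay them the threshold (not their report), so
# misreporting cannot change a winner's payment.  The observation fraction, the
# median rule and the budget handling are assumptions, not the paper's mechanism.
def online_recruit(arriving_costs, budget, observe_fraction=0.25):
    n_observe = max(1, int(len(arriving_costs) * observe_fraction))
    threshold = sorted(arriving_costs[:n_observe])[n_observe // 2]   # e.g. median observed cost
    winners, spent = [], 0.0
    for i, cost in enumerate(arriving_costs[n_observe:], start=n_observe):
        if cost <= threshold and spent + threshold <= budget:
            winners.append(i)
            spent += threshold
    return winners, spent

random.seed(0)
costs = [round(random.uniform(1, 10), 2) for _ in range(20)]
print(online_recruit(costs, budget=30.0))
```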

  9. A Truthful Incentive Mechanism for Online Recruitment in Mobile Crowd Sensing System.

    Science.gov (United States)

    Chen, Xiao; Liu, Min; Zhou, Yaqin; Li, Zhongcheng; Chen, Shuang; He, Xiangnan

    2017-01-01

    We investigate emerging mobile crowd sensing (MCS) systems, in which new cloud-based platforms sequentially allocate homogenous sensing jobs to dynamically-arriving users with uncertain service qualities. Given that human beings are selfish in nature, it is crucial yet challenging to design an efficient and truthful incentive mechanism to encourage users to participate. To address the challenge, we propose a novel truthful online auction mechanism that can efficiently learn to make irreversible online decisions on winner selections for new MCS systems without requiring previous knowledge of users. Moreover, we theoretically prove that our incentive possesses truthfulness, individual rationality and computational efficiency. Extensive simulation results under both real and synthetic traces demonstrate that our incentive mechanism can reduce the payment of the platform, increase the utility of the platform and social welfare.

  10. Segmentation editing improves efficiency while reducing inter-expert variation and maintaining accuracy for normal brain tissues in the presence of space-occupying lesions

    International Nuclear Information System (INIS)

    Deeley, M A; Chen, A; Cmelak, A; Malcolm, A; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Datteri, R D; Noble, J; Dawant, B M; Donnelly, E; Moretti, L

    2013-01-01

    Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years, the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumours in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: simultaneous truth and performance level estimation and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers’ segmentations from our previous study to edit. We found, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency with at least 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy. (paper)
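
    The accuracy and variance comparisons above rest on the Dice similarity coefficient between binary contours; for reference, a minimal NumPy version of that metric on toy masks is shown below (the masks are illustrative only).

```python
import numpy as np

# Minimal sketch of the Dice similarity coefficient between two binary masks,
# the overlap metric used to compare edited and ground-truth contours.
# The toy masks below are illustrative only.
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rater_mask = np.zeros((64, 64), dtype=np.uint8); rater_mask[20:40, 20:40] = 1
auto_mask  = np.zeros((64, 64), dtype=np.uint8); auto_mask[22:42, 22:42] = 1
print(f"Dice = {dice(rater_mask, auto_mask):.3f}")   # overlap of two offset 20x20 squares
```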

  11. Cross-Cultural Differences in Children's Choices, Categorizations, and Evaluations of Truths and Lies

    Science.gov (United States)

    Fu, Genyue; Xu, Fen; Cameron, Catherine Ann; Leyman, Gail; Lee, Kang

    2007-01-01

    This study examined cross-cultural differences and similarities in children's moral understanding of individual- or collective-oriented lies and truths. Seven-, 9-, and 11-year-old Canadian and Chinese children were read stories about story characters facing moral dilemmas about whether to lie or tell the truth to help a group but harm an…

  12. Extrametodical truth and ontology of the praxis: The mediator rationality of the phronesis

    Directory of Open Access Journals (Sweden)

    Gaetano Chiurazzi

    2016-05-01

    Full Text Available Gadamer’s vindication of the extra-methodical feature of truth in the human sciences put forward in Truth and Method does not mean a mere refusal of method: rather, it arises from the awareness that there are truths which are not reducible to the conditions of repeatability and commensurability set up by methodical thinking. In fact, the truths of the human sciences refer to the ontological dimension of the contingent and the accidental, i.e. to the dimension of the historical. In this essay I aim at highlighting this ontological dimension, which for Aristotle is eminently that of the praxis and of human action. I will show that such an ontology is a consequence of the logical and ontological discussions which crisscrossed Greek thought after the discovery of the incommensurable magnitudes. The ontology of praxis is an ontology which takes into account the “irrationality” represented by the contingent, the accidental, to which a new form of rationality corresponds: that of phrónesis. Phrónesis is in fact not a commensurative but a mediative rationality

  13. Visual truths of citizen reportage

    DEFF Research Database (Denmark)

    Allan, Stuart; Peters, Chris

    2015-01-01

    In striving to better understand issues associated with citizen contributions to newsmaking in crisis situations, this article identifies and elaborates four specific research problematics – bearing witness, technologies of truth-telling, mediating visualities and affectivities of othering...... – in order to recast more familiar modes of enquiry. Specifically, it provides an alternative heuristic to theorize the journalistic mediation of citizen imagery, and the myriad ways this process of negotiation maintains, repairs and at times disrupts the interstices of professional–amateur boundaries...

  14. Dydaktyki filozofii kłopoty z prawdą [The teaching of philosophy and its troubles with truth]

    Directory of Open Access Journals (Sweden)

    Zbigniew Zdunowski

    2014-03-01

    Full Text Available I. We can talk about a crisis of truth (in the sense of the classical correspondence theory) in contemporary philosophy (of the 20th century). II. In the wake of philosophy, the teaching of philosophy reconciled itself with the destruction of truth. The didactics of philosophy then resigned from placing the pursuit of truth among the objectives that philosophical education should serve. III. The absence and understatement of the disinterested pursuit of truth among the objectives or requirements of education impoverish philosophical education and cause damage to children, pupils and wards. IV. Therefore truth, understood as an epistemological category, a purpose of education and a moral value, should be restored to our didactic and educational activities.

  15. To Tell the Truth: The Challenge of Military Leadership

    National Research Council Canada - National Science Library

    Henderson, Jr, Ronald H

    1998-01-01

    The story of Regulus, while certainly apocryphal, nevertheless illustrates a fundamental tension of military leadership -- the moral imperative for military leaders to tell the truth, even when that...

  16. THE JOURNEY OF TRUTH: FROM PLATO TO ZOLA

    Directory of Open Access Journals (Sweden)

    Ribut Basuki

    1999-01-01

    Full Text Available Western theater theory and criticism is generally considered to be set forth by the Greeks. Plato was "the first theater critic" with his negative comments about theater owing to his idealistic views about "the truth." Then came Aristotle who used a different viewpoint from that of Plato, saying that there is "truth" in theater. However, hostile criticism on theater came back in the Middle Ages, championed by Tertulian before Aristotelian theory was revived by the neo-classicists such as Scaliger and Castelvetro. Theater theory and criticism discourse was then made more alive by the romanticists who disagreed with the neo-classicists' rigid rules on theater. As the influence of science became dominant in the theater world, naturalism and realism emerged and became the mainstream of theater theory and criticism until well into the twentieth century.

  17. Should physicians tell the truth without taking social complications into account? A striking case.

    Science.gov (United States)

    Avci, Ercan

    2018-03-01

    The principle of respect for autonomy requires informing patients adequately and appropriately about diagnoses, treatments, and prognoses. However, some clinical cases may cause ethical dilemmas regarding telling the truth. Especially under certain cultural, social, and religious circumstances, disclosing all the relevant information to all pertinent parties might create harmful effects. Even though the virtue of telling the truth is unquestionable, sometimes de facto conditions compel physicians to act paternalistically to protect the patient or patients from imminent dangers. This article, which aims to study the issue of whether a physician should always tell the truth, analyzes an interesting case that represents the detection of misattributed paternity during pre-transplant tests for a kidney transplant from the son to the father in Turkey, where social, cultural, and religious factors have a considerable impact on marital infidelity. After analyzing the concept of telling the truth and its relationship with paternalism and two major ethical theories, consequentialism and deontology, it is concluded that the value of the integrity of life and survival overrides the value of telling the truth. For this reason, in the case of a high possibility of severe and imminent threats, withholding some information is ethically justifiable.

  18. Facing the truth: An appraisal of the potential contributions ...

    African Journals Online (AJOL)

    Facing the truth: An appraisal of the potential contributions, paradoxes and challenges of implementing the United Nations conventions on Contracts for the International Sale of Goods (CISG) in Nigeria.

  19. Exudate-based diabetic macular edema detection in fundus images using publicly available datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME through the presence of exudation. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on the MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing (e.g., the classifier was trained on an independent dataset and tested on MESSIDOR). Our algorithm obtained an AUC between 0.88 and 0.94 depending on the dataset/features used. Additionally, it does not need ground truth at lesion level to reject false positives and is computationally efficient, as it generates a diagnosis on an average of 4.4 s (9.3 s, considering the optic nerve localization) per image on a 2.6 GHz platform with an unoptimized Matlab implementation.
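
    The evaluation pattern described above (train an image-level classifier on extracted features, report AUC on held-out or cross-dataset images) can be sketched as follows; the random feature vectors stand in for the colour, wavelet and lesion-segmentation features, and the classifier choice and labels are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Schematic sketch of the evaluation pattern in the record: train a classifier on
# per-image feature vectors and report the AUC on held-out images.  The random
# features stand in for the colour / wavelet / lesion-segmentation features, and
# the classifier choice and labels are assumptions.
rng = np.random.default_rng(0)
n_images, n_features = 400, 12
X = rng.normal(size=(n_images, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_images) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print(f"AUC = {roc_auc_score(y_te, scores):.2f}")
```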

  20. SU-C-BRA-06: Automatic Brain Tumor Segmentation for Stereotactic Radiosurgery Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y; Stojadinovic, S; Jiang, S; Timmerman, R; Abdulrahman, R; Nedzi, L; Gu, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: Stereotactic radiosurgery (SRS), which delivers a potent dose of highly conformal radiation to the target in a single fraction, requires accurate tumor delineation for treatment planning. We present an automatic segmentation strategy that synergizes intensity histogram thresholding, super-voxel clustering, and level-set-based contour evolving methods to efficiently and accurately delineate SRS brain tumors on contrast-enhanced T1-weighted (T1c) Magnetic Resonance Images (MRI). Methods: The developed auto-segmentation strategy consists of three major steps. Firstly, tumor sites are localized through 2D slice intensity histogram scanning. Then, super voxels are obtained through clustering the corresponding voxels in 3D with reference to the similarity metrics composited from spatial distance and intensity difference. The combination of the above two could generate the initial contour surface. Finally, a localized region active contour model is utilized to evolve the surface to achieve the accurate delineation of the tumors. The developed method was evaluated on numerical phantom data, synthetic BRATS (Multimodal Brain Tumor Image Segmentation challenge) data, and clinical patients’ data. The auto-segmentation results were quantitatively evaluated by comparing to ground truths with both volume and surface similarity metrics. Results: The DICE coefficient (DC) was used as a quantitative metric to evaluate the auto-segmentation in the numerical phantom with 8 tumors. DCs are 0.999±0.001 without noise, 0.969±0.065 with Rician noise and 0.976±0.038 with Gaussian noise. DC, NMI (Normalized Mutual Information), SSIM (Structural Similarity) and Hausdorff distance (HD) were calculated as the metrics for the BRATS and patients’ data. Assessment of BRATS data across 25 tumor segmentations yielded DC 0.886±0.078, NMI 0.817±0.108, SSIM 0.997±0.002, and HD 6.483±4.079 mm. Evaluation on 8 patients with a total of 14 tumor sites yielded DC 0.872±0.070, NMI 0.824±0
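
    Only the first stage of the pipeline described above (histogram-based thresholding of a contrast-enhanced slice followed by connected-component filtering) is sketched below, on a synthetic slice; the percentile threshold and size cut-off are assumptions, and the super-voxel clustering and level-set refinement stages are not modelled.

```python
import numpy as np
from scipy import ndimage

# Sketch of the first stage only: threshold a contrast-enhanced slice at a high
# intensity percentile and keep large connected components as tumour candidates.
# The synthetic slice, the 97th-percentile threshold and the 50-pixel size cut-off
# are assumptions; super-voxel clustering and level-set refinement are not modelled.
rng = np.random.default_rng(1)
slice_t1c = rng.normal(100.0, 10.0, size=(128, 128))
slice_t1c[40:60, 70:90] += 80.0                      # synthetic bright "tumour"

binary = slice_t1c > np.percentile(slice_t1c, 97)    # histogram-based threshold
labels, n = ndimage.label(binary)                    # connected components
sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
candidates = [i + 1 for i, s in enumerate(sizes) if s >= 50]
print(f"{len(candidates)} candidate region(s), largest = {int(sizes.max())} px")
```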

  1. Intelligible genders in scene: the cinema and the truth production about bodies

    Directory of Open Access Journals (Sweden)

    Luciene Galvão

    2014-06-01

    Full Text Available This paper aims to discuss how cinematographic language produces truths about men and women. Throughout the text, we use some iconic films as illustrations that convey notions of masculinity and femininity. The films we have chosen are works with distinct aesthetics and markets, able to raise issues related to gender and sexuality in discussions of romantic love, identity, homosexuality, violence and techniques for the confession of truths, among others. We analyze the films from Michel Foucault's perspective on sexuality and power relations and Judith Butler's perspective on intelligible genders. The plots of the films show that such truths are constantly negotiated, and further indicate that norms about sex, desire, pleasure, masculinity and femininity are not merely reproduced: their effects on private lives do not end when the film ends.

  2. SU-D-BRC-01: An Automatic Beam Model Commissioning Method for Monte Carlo Simulations in Pencil-Beam Scanning Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Qin, N; Shen, C; Tian, Z; Jiang, S; Jia, X [UT Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

    Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which could be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. The mean energy and the energy and spatial spreads are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions of a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of doses for those ideal pencil beams with weights of Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central axis depth dose and lateral profiles at several depths match corresponding measurements. An iterative algorithm combining the conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with
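
    The core fitting idea (match a measured depth dose with a Gaussian-weighted superposition of precomputed ideal pencil-beam doses) can be sketched as below; the toy range-energy relation, the synthetic "measurement" and the use of SciPy's generic least-squares solver instead of the paper's conjugate-gradient scheme are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy sketch of the commissioning idea: model the measured depth dose as a
# Gaussian-weighted superposition of precomputed ideal pencil-beam doses and fit
# the Gaussian's mean energy and spread.  The range-energy relation, the synthetic
# "measurement" and the use of SciPy's generic least-squares solver (instead of the
# paper's conjugate-gradient scheme) are all assumptions.
energies = np.linspace(80.0, 90.0, 41)               # MeV grid of ideal beams
depth = np.linspace(0.0, 60.0, 121)                  # mm

def ideal_depth_dose(E):
    rng = 0.6 * E                                    # crude stand-in for an MC ideal-beam dose
    return np.exp(-0.5 * ((depth - rng) / 2.0) ** 2) + 0.02 * depth / rng

ideal = np.stack([ideal_depth_dose(E) for E in energies])      # (n_energies, n_depths)

def model(params):
    mean_E, sigma_E = params
    w = np.exp(-0.5 * ((energies - mean_E) / sigma_E) ** 2)
    return (w / w.sum()) @ ideal                      # superposed central-axis depth dose

measured = model((85.0, 1.2))                         # synthetic "measurement"
fit = least_squares(lambda p: model(p) - measured, x0=(83.0, 2.0),
                    bounds=([80.0, 0.1], [90.0, 5.0]))
print("fitted mean energy and spread:", fit.x)        # approximately [85.0, 1.2]
```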

  3. Truth and victims’ rights: Towards a legal epistemology of international criminal justice

    OpenAIRE

    Aguilera, Edgar R.

    2013-01-01

    The author advances the thesis that the now well established international crime victims' right to know the truth creates an opportunity for an applied epistemology reflection regarding international criminal justice. At the heart of the project lies the author's argument that this victims' right -if taken seriously- implies both the right that the international criminal justice system's normative structures or legal frameworks and practices feature a truth-promoting profile, or in other word...

  4. What Truth in Lending Means to You.

    Science.gov (United States)

    Board of Governors of the Federal Reserve System, Washington, DC.

    Designed for the general public and possibly suitable also for high school economics students, this pamphlet discusses the provisions of the Truth in Lending Law. The act requires that creditors state credit charges in a uniform way. The pamphlet provides a brief description of finance charges and annual percentage rates. It also focuses on…

  5. Heart Health: Learn the Truth About Your Heart

    Science.gov (United States)

    Cover Story: Heart Health: Learn the Truth About Your Heart. Past Issues / Winter 2009. Photo: iStock. February is American Heart Month. Now is the time to make sure ...

  6. Florida's "truth" campaign: a counter-marketing, anti-tobacco media campaign.

    Science.gov (United States)

    Zucker, D; Hopkins, R S; Sly, D F; Urich, J; Kershaw, J M; Solari, S

    2000-05-01

    The "truth" campaign was created to change youth attitudes about tobacco and to reduce teen tobacco use throughout Florida by using youth-driven advertising, public relations, and advocacy. Results of the campaign include a 92 percent brand awareness rate among teens, a 15 percent rise in teens who agree with key attitudinal statements about smoking, a 19.4 percent decline in smoking among middle school students, and a 8.0 percent decline among high school students. States committed to results-oriented youth anti-tobacco campaigns should look to Florida's "truth" campaign as a model that effectively places youth at the helm of anti-tobacco efforts.

  7. Interdependence versus Truth and Justice: Lessons from Reconciliation Processes in Maluku

    Directory of Open Access Journals (Sweden)

    Diah Kusumaningrum

    2017-01-01

    Full Text Available Truth commissions and trials have been applauded as the way to move on from a violent past. Yet, some post-conflict societies managed to move toward reconciliation without the presence, or the effective presence, of such formal institutions. This article discusses a number of lessons learned from Maluku, where reconciliation took the interdependence path. Taking on an interpretive, emic approach, it elaborates on the sites and mechanisms of interdependence. It argues that interdependence can be as viable as truth and justice procedures in bringing about reconciliation.

  8. The impact of culture and religion on truth telling at the end of life.

    Science.gov (United States)

    de Pentheny O'Kelly, Clarissa; Urch, Catherine; Brown, Edwina A

    2011-12-01

    Truth telling, a cardinal rule in Western medicine, is not a globally shared moral stance. Honest disclosure of terminal prognosis and diagnosis is regarded as imperative in preparing for the end of life. Yet in many cultures, truth concealment is common practice. In collectivist Asian and Muslim cultures, illness is a shared family affair. Consequently, decision making is family centred, and beneficence and non-malfeasance play a dominant role in their ethical model, in contrast to patient autonomy in Western cultures. The 'four principles' are prevalent throughout Eastern and Western cultures; however, the weight with which they are considered and their understanding differ. The belief that a grave diagnosis or prognosis will extinguish hope in patients leads families to protect ill members from the truth. This denial of the truth, however, is linked with not losing faith in a cure. Thus, aggressive futile treatment can be expected. The challenge is to provide a health care service that is equable for all individuals in a given country. The British National Health Service provides care to all cultures but is bound by the legal principles and framework of the UK and aims for equity of provision by working within the UK ethical framework, with legal and ethical norms being explained to all patients and relatives. This requires truth telling about prognosis and efficacy of potential treatments so that unrealistic expectations are not raised.

  9. Problematizing Religious Truth: Implications for Public Education

    Science.gov (United States)

    Rosenblith, Suzanne; Priestman, Scott

    2004-01-01

    The question motivating this paper is whether or not there can be standards governing the evaluation of truth claims in religion. In other areas of study such as physics, math, history, and even value-laden realms like morality there is some widespread agreement as to what constitutes good thinking. If such a standard existed in religion, then our…

  10. Communicating Truthfully and Positively in Appraising Work Performance.

    Science.gov (United States)

    Pearce, C. Glenn; And Others

    1989-01-01

    Explores the issue of acceptable behavior for managers when giving feedback to their subordinates. Notes that feedback can be either truthful or untruthful, and can be communicated either positively or negatively. Describes the advantages and disadvantages for each feedback approach to work performance. (MM)

  11. 78 FR 25818 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2013-05-03

    ... BUREAU OF CONSUMER FINANCIAL PROTECTION 12 CFR Part 1026 [Docket No. CFPB-2012-0039] RIN 3170-AA28 Truth in Lending (Regulation Z) AGENCY: Bureau of Consumer Financial Protection. ACTION: Final rule; official interpretations. SUMMARY: The Bureau of Consumer Financial Protection (Bureau) issues this final...

  12. Authenticity, Post-truth and Populism

    Directory of Open Access Journals (Sweden)

    Vintilă Mihăilescu

    2017-11-01

    Full Text Available The article discusses the fake news phenomenon as a social act, analyzed together with what has caused it and what accompanies it: the culture of authenticity, digital communication and its specificity (emphasis on image, not on concepts), post-truth and populism (with its emotional dimension). The premise is that fake news is immanent to the social space, but in the context of globalization and under the development of information technology and social media, it has a greater social impact and carries higher risks for the society.

  13. Bibliographic Entity Automatic Recognition and Disambiguation

    CERN Document Server

    AUTHOR|(SzGeCERN)766022

    This master thesis reports an applied machine learning research internship done at the digital library of the European Organization for Nuclear Research (CERN). The way an author’s name may vary in its representation across scientific publications creates ambiguity when it comes to uniquely identifying an author; in the database of any scientific digital library, the same full name variation can be used by more than one author. This may occur even between authors from the same research affiliation. In this work, we built a machine learning based author name disambiguation solution. The approach consists in learning a distance function from ground-truth data, blocking publications of broadly similar author names, and clustering the publications using a semi-supervised strategy within each of the blocks. The main contributions of this work are twofold: first, improving the distance model by taking into account the (estimated) ethnicity of the author’s full name. Indeed, names from different ethnicities, for e...
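
    The blocking step mentioned in the abstract can be sketched as follows; the toy signatures and the surname-plus-initial key are illustrative, and the learned, ethnicity-aware distance function and the semi-supervised clustering inside each block are not reproduced.

```python
from collections import defaultdict

# Schematic sketch of the blocking step: group publication signatures by a
# normalised surname + first-initial key, so that the (learned) distance function
# and clustering only run within each block.  The toy signatures are made up, and
# the ethnicity-aware distance model of the thesis is not reproduced here.
signatures = [
    {"id": 1, "author": "Wang, L.",    "affiliation": "CERN"},
    {"id": 2, "author": "Wang, Li",    "affiliation": "CERN"},
    {"id": 3, "author": "Wang, Liang", "affiliation": "Tsinghua"},
    {"id": 4, "author": "Smith, J.",   "affiliation": "MIT"},
]

def block_key(author_name):
    surname, _, rest = author_name.partition(",")
    return f"{surname.strip().lower()}_{rest.strip()[:1].lower()}"

blocks = defaultdict(list)
for sig in signatures:
    blocks[block_key(sig["author"])].append(sig["id"])

print(dict(blocks))   # {'wang_l': [1, 2, 3], 'smith_j': [4]}
```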

  14. Towards an Automatic Framework for Urban Settlement Mapping from Satellite Images: Applications of Geo-referenced Social Media and One Class Classification

    Science.gov (United States)

    Miao, Zelang

    2017-04-01

    Currently, urban dwellers comprise more than half of the world's population and this percentage is still dramatically increasing. The explosive urban growth over the next two decades will have a profound long-term impact on people as well as the environment. Accurate and up-to-date delineation of urban settlements plays a fundamental role in defining planning strategies and in supporting sustainable development of urban settlements. In order to provide adequate data about urban extents and land covers, classifying satellite data has become a common practice, usually with accurate enough results. Indeed, a number of supervised learning methods have proven effective in urban area classification, but they usually depend on a large amount of training samples, whose collection is a time- and labor-intensive task. This issue becomes particularly serious when classifying large areas at the regional/global level. As an alternative to manual ground truth collection, in this work we use geo-referenced social media data. Cities and densely populated areas are extremely fertile ground for the production of individual geo-referenced data (such as GPS and social network data). Training samples derived from geo-referenced social media have several advantages: they are easy to collect; they are usually freely exploitable; and, finally, data from social media are spatially available in many locations, and with no doubt in most urban areas around the world. Despite these advantages, the selection of training samples from social media meets two challenges: 1) there are many duplicated points; 2) a method is required to automatically label them as "urban/non-urban". The objective of this research is to validate automatic sample selection from geo-referenced social media and its applicability in one-class classification for urban extent mapping from satellite images. The findings in this study shed new light on social media applications in the field of remote sensing.
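
    A minimal sketch of the workflow suggested above (deduplicate geo-referenced posts, then fit a one-class classifier on the urban samples they provide) is given below; the coordinates and the synthetic feature vectors are stand-ins, and the choice of a one-class SVM is an assumption rather than the method validated in the thesis.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Minimal sketch: deduplicate geo-referenced posts, treat them as positive "urban"
# samples, and fit a one-class classifier that can then flag pixels as urban or not.
# The coordinates and the synthetic 4-band feature vectors are stand-ins, and the
# one-class SVM is an assumed choice rather than the method validated in the thesis.
rng = np.random.default_rng(0)

posts = [(116.40, 39.90), (116.40, 39.90), (116.41, 39.91), (116.42, 39.89)]
unique_locations = sorted(set(posts))                # many posts share exact coordinates

# features sampled at the (urban) post locations would be extracted from imagery;
# here they are simply drawn from a reference distribution
urban_features = rng.normal(0.0, 1.0, size=(len(unique_locations) * 50, 4))
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(urban_features)

new_pixels = np.vstack([rng.normal(0.0, 1.0, (5, 4)),    # urban-like spectra
                        rng.normal(5.0, 1.0, (5, 4))])   # very different spectra
print(clf.predict(new_pixels))                           # +1 = urban-like, -1 = non-urban
```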

  15. Unearthing Truth: Forensic Anthropology, Translocal Memory, and “Provention” in Guatemala

    Directory of Open Access Journals (Sweden)

    Colette G. Mazzucelli

    2015-10-01

    Full Text Available This article deliberately examines the search for truth after decades of conflict in Guatemala. Excavations of mass gravesites and the painstaking exhumation processes carried out by professional forensic anthropology teams continue to allow families to locate lost relatives—reclaiming truth and supporting calls for justice. For Guatemalans, the search for truth now transcends national borders, especially among migrant communities in the United States. The family remains the central unit through which the work of Guatemalan forensic anthropologists is undertaken. In an effort to engender deeper insights about these exhumation processes from a social science perspective, this analysis promotes the use of specific “tools” in Guatemalan forensic anthropology investigations. The first is an exhumations concept map, which yields important questions meant to stimulate meaningful analysis. The second, Story Maps, is a technology application with the potential to mediate digital access to the emerging Guatemalan translocal space. The research in this analysis suggests that these “tools” strengthen Burton’s notion of “provention” in Guatemala.

  16. The Notion of Truth and Our Evolving Understanding of Sexual Harassment.

    Science.gov (United States)

    Recupero, Patricia R

    2018-03-01

    The notion of truth and its determination in legal proceedings is contingent on the cultural setting in which a claim is argued or disputed. Recent years have demonstrated a dramatic shift in the public dialogue concerning sexual harassment. This shift reflects changing cultural mores and standards in the workplace and society as a whole, particularly with respect to the validity of women's voices. The subjective reality experienced by victims of sexual harassment is inherently tied to the legal system's treatment of women throughout history. In determinations of truth, our understanding of which information and perspectives are relevant, and our expectations regarding the credibility of complainants and the accused, are undergoing a period of rapid change. The discourse surrounding the #MeToo movement suggests that the "reasonable-person" standard so often applied by courts is poorly suited to sexual-harassment litigation. As our understanding of what constitutes "severe," "pervasive," and "unwelcome" conduct continues to evolve, forensic psychiatrists must strive to uphold the values of respect for persons in the search for the truth. © 2018 American Academy of Psychiatry and the Law.

  17. Leadership for reconciliation: A Truth and Reconciliation Commission perspective

    Directory of Open Access Journals (Sweden)

    P. G. J. Meiring

    2002-08-01

    Full Text Available As important as the need for authentic leadership in the fields of politics, economy and education in Africa may be, the continent is also in dire need of leadership for reconciliation. Against the backdrop of the South African Truth and Reconciliation Commission (TRC), the author, who served on the Commission, discusses five characteristics of leaders for reconciliation. Leaders need to be: leaders with a clear understanding of the issues at stake; leaders with respect for the truth; leaders with a sense of justice; leaders with a comprehension of the dynamics of forgiveness; and leaders with a firm commitment. The insights and experiences of both the chairperson of the TRC, Desmond Tutu, and the deputy chair, Alex Boraine, form the backbone of the article.

  18. The Victim, the International Criminal Court and the Search for Truth: on the Interdependence and Incompatibility of Truths about Mass Atrocity

    NARCIS (Netherlands)

    Stolk, S.

    2015-01-01

    In the debate on the place of victims in international criminal proceedings, the 'search for truth' takes centre stage as an important concern of victims, international criminal tribunals and the wider international community. However, the various claims about the importance of telling and receiving

  19. Overcoming Relativism and Absolutism: Dewey's Ideals of Truth and Meaning in Philosophy for Children

    Science.gov (United States)

    Bleazby, Jennifer

    2011-01-01

    Different notions of truth imply and encourage different ideals of thinking, knowledge, meaning, and learning. Thus, these concepts have fundamental importance for educational theory and practice. In this paper, I intend to draw out and clarify the notions of truth, knowledge and meaning that are implied by P4C's pedagogical ideals. There is some…

  20. 75 FR 58505 - Regulation Z; Truth in Lending

    Science.gov (United States)

    2010-09-24

    .... Reasons for the Proposed Rule Congress enacted TILA based on findings that economic stability would be... and Regulation Z Congress enacted the Truth in Lending Act (TILA) based on findings that economic stability would be enhanced and competition among consumer credit providers would be strengthened by the...

  1. Ground Truthing Orbital Clay Mineral Observations with the APXS Onboard Mars Exploration Rover Opportunity

    Science.gov (United States)

    Schroeder, C.; Gellert, R.; VanBommel, S.; Clark, B. C.; Ming, D. W.; Mittlefehldt, D. S.; Yen, A. S.

    2016-01-01

    NASA's Mars Exploration Rover Opportunity has been exploring the approximately 22 km diameter Endeavour crater since 2011. Its rim segments predate the Hesperian-age Burns formation and expose Noachian-age material, which is associated with orbital Fe3+-Mg-rich clay mineral observations [1,2]. At the orders-of-magnitude smaller instrumental field of view available on the ground, the clay minerals were challenging to pinpoint on the basis of geochemical data because they appear to be the result of near-isochemical weathering of the local bedrock [3,4]. However, the APXS revealed a more complex mineral story as fracture fills and so-called red zones appear to contain more Al-rich clay minerals [5,6], which had not been observed from orbit. These observations are important to constrain clay mineral formation processes. More detail will be added as Opportunity is heading into her 10th extended mission, during which she will investigate Noachian bedrock that predates Endeavour crater, study sedimentary rocks inside Endeavour crater, and explore a fluid-carved gully. ESA's ExoMars rover will land on Noachian-age Oxia Planum where abundant Fe3+-Mg-rich clay minerals have been observed from orbit, but the story will undoubtedly become more complex once seen from the ground.

  2. Cognitive dissonance, social comparison, and disseminating untruthful or negative truthful eWOM messages

    OpenAIRE

    Liu, Y-L; Keng, Ching-Jui

    2014-01-01

    In this research we explored consumers' intentions to provide untruthful or negative truthful electronic word-of-mouth (eWOM) messages when undergoing conflicting cognitive dissonance and after experiencing social comparison. We recruited 480 Taiwanese Internet users to participate in a scenario-based experiment. The findings show that after making downward comparisons on the Internet, consumers with high cognitive dissonance were more inclined to disseminate negative truthful eWOM messages c...

  3. The Asse. On inconvenient truths and the suppression of disagreeable principles

    International Nuclear Information System (INIS)

    Gellermann, Rainer

    2016-01-01

    The retrieval of radioactive wastes and the closure of the repository Asse II is a very complex project, not only with respect to technical aspects but also with respect to public information. Information brochure No. 29, edited by the Bundesamt fuer Strahlenschutz (BfS), deals with the rather philosophical questions of knowledge and truth. The German expert on constitutional law Peter Bull answered the question of whether subjectively assumed health hazards could stand in the way of a reasonable solution: the public has to be expected to bear inconvenient truths. Clarification is necessary instead of misguided populism and the suppression of unwelcome findings.

  4. Methods of extending signatures and training without ground information. [data processing, pattern recognition

    Science.gov (United States)

    Henderson, R. G.; Thomas, G. S.; Nalepka, R. F.

    1975-01-01

    Methods of performing signature extension using LANDSAT-1 data are explored. The emphasis is on improving the performance and cost-effectiveness of large-area wheat surveys. Two methods, ASC and MASC, were developed. Two further methods, Ratio and RADIFF, previously used with aircraft data, were adapted to and tested on LANDSAT-1 data. An investigation into the sources and nature of between-scene data variations was included. Initial investigations into the selection of training fields without in situ ground truth were undertaken.

  5. THE USE OF UAS FOR ASSESSING AGRICULTURAL SYSTEMS IN A WETLAND IN TANZANIA IN THE DRY- AND WET-SEASON FOR SUSTAINABLE AGRICULTURE AND PROVIDING GROUND TRUTH FOR TERRA-SAR X DATA

    Directory of Open Access Journals (Sweden)

    H.-P. Thamm

    2013-08-01

    Full Text Available The paper describes the assessment of the vegetation and the land use systems of the Malinda Wetland in the Usambara Mountains in Tanzania with the parachute UAS (unmanned aerial system) SUSI 62. The area of investigation was around 8 km2. In two campaigns, one in the wet season and one in the dry season, approximately 2600 aerial photos of the wetland were taken using the parachute UAS SUSI 62; from these images, ortho-photos with a spatial resolution of 20 cm x 20 cm were computed with an advanced block bundle approach. The block bundles were geo-referenced using control points taken with differential GPS. A digital surface model (DSM) of the wetland was also created from the UAS photos. Using the ortho-photos it is possible to assess the different land use systems, and the differences in the phenology of the vegetation between wet and dry season can be investigated. In addition, the regionalisation of biomass samples on smaller test plots was possible. The ortho-photos and the DSM derived from the UAS proved to be a valuable ground truth for the interpretation of Terra-SAR X images. The campaigns demonstrated that SUSI 62 is a suitable, robust tool for obtaining this valuable information under harsh conditions.

  6. Armastusest: tõerežiimid, kultuurilised kujutelmad ja kehaline ilmakogemus / On Love: Regimes of Truth, Cultural Imaginaries and the Bodily Experience of Being in the World

    Directory of Open Access Journals (Sweden)

    Epp Annus

    2016-12-01

    Full Text Available Theses: The article proceeds from the thesis that love as a feeling is inseparable from how that feeling is articulated and understood, and analyses love as a cultural imaginary whose recognition takes place according to societal regimes of truth. I analyse the depiction of love in literary works with the help of Alain Badiou's models of love and add an "umwelt-love model", which emphasizes the inseparability of the beloved from the surrounding space: love encompasses not only the beloved as a clearly bounded bodily entity, but also how the beloved relates to the surrounding space and people. This article presents and analyses Western cultural models for speaking and thinking about love. According to Michel Foucault, each society establishes its regimes of truth: certain types of discourses are approved as truthful while others are declared unreliable. Each society includes mechanisms of control, which distinguish true statements from false, and assign some people (but not others) the authority to judge the true and the false, the acceptable and the unacceptable. Regimes of truth also establish paradigms for judging the truthfulness of love: according to the romantic regime, for example, love is something ephemeral, ungraspable and immeasurable, it transgresses established boundaries and norms; according to the pragmatic regime, by contrast, love can be expressed in economic terms and thus measured: a precious gift expresses commitment (else it would be a waste of money). There may be no common ground for one regime to concede legitimacy to a value asserted by a competing regime. In the view of the romantic regime, for example, the pragmatic regime might be judged as cynical and failing to grasp the essence of love – such weighing of feelings belongs to modern regimes of truth. Both romantic and pragmatic regimes of truth belong to the larger field of cultural imaginaries. Regimes of truth order and systematize the sphere of cultural

  7. Propositional matrices as alternative representation of truth values ...

    African Journals Online (AJOL)

    The paper considered the subject of representation of truth values in symbolic logic. An alternative representation was given based on the row and column properties of matrices, with the operations involving the logical connectives subjected to the laws of the algebra of propositions. Matrices of various propositions detailing ...
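
    Although the record is truncated, the core idea can be illustrated with a short, hypothetical sketch: represent each valuation of the atomic propositions as a row and each formula as a column, so that connectives act column-wise and the laws of the algebra of propositions can be checked on the matrix. The function and connective names below are illustrative, not taken from the paper.

      import itertools

      # Build a "propositional matrix": one row per valuation of the atomic
      # propositions, one column per formula of interest.
      def truth_matrix(n_atoms, formulas):
          rows = []
          for valuation in itertools.product([True, False], repeat=n_atoms):
              rows.append([f(*valuation) for f in formulas])
          return rows

      # Connectives as ordinary Boolean operations; laws of the algebra of
      # propositions (De Morgan, distributivity, ...) then hold column-wise.
      first   = lambda p, q: p
      second  = lambda p, q: q
      nand    = lambda p, q: not (p and q)
      implies = lambda p, q: (not p) or q

      for row in truth_matrix(2, [first, second, nand, implies]):
          print(row)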

  8. Extraction of Capillary Non-perfusion from Fundus Fluorescein Angiogram

    Science.gov (United States)

    Sivaswamy, Jayanthi; Agarwal, Amit; Chawla, Mayank; Rani, Alka; Das, Taraprasad

    Capillary Non-Perfusion (CNP) is a condition in diabetic retinopathy where blood ceases to flow to certain parts of the retina, potentially leading to blindness. This paper presents a solution for automatically detecting and segmenting CNP regions from fundus fluorescein angiograms (FFAs). CNPs are modelled as valleys, and a novel technique based on an extrema pyramid is presented for trough-based valley detection. The obtained valley points are used to segment the desired CNP regions by employing a variance-based region growing scheme. The proposed algorithm has been tested on 40 images and validated against expert-marked ground truth. In this paper, we present results of testing and validation of our algorithm against ground truth and compare the segmentation performance against two other methods. The performance of the proposed algorithm is presented as a receiver operating characteristic (ROC) curve. The area under this curve is 0.842 and the distance of the ROC curve from the ideal point (0,1) is 0.31. The proposed method for CNP segmentation was found to outperform the watershed [1] and heat-flow [2] based methods.
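
    A minimal sketch of the variance-based region growing step described above is given below, assuming a grayscale FFA stored as a NumPy array and a detected valley (trough) point used as the seed; the variance threshold and 4-connectivity are illustrative assumptions rather than parameters reported in the paper.

      import numpy as np
      from collections import deque

      def region_grow(image, seed, var_threshold=50.0):
          """Grow a region from `seed` while the region's intensity variance
          stays below `var_threshold` (illustrative criterion)."""
          h, w = image.shape
          mask = np.zeros((h, w), dtype=bool)
          mask[seed] = True
          values = [float(image[seed])]
          queue = deque([seed])
          while queue:
              y, x = queue.popleft()
              for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connectivity
                  ny, nx = y + dy, x + dx
                  if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                      candidate = values + [float(image[ny, nx])]
                      if np.var(candidate) < var_threshold:
                          mask[ny, nx] = True
                          values.append(float(image[ny, nx]))
                          queue.append((ny, nx))
          return mask

      # Example: grow a CNP-like dark region from a valley (trough) point.
      img = np.full((64, 64), 200.0)
      img[20:40, 20:40] = 40.0            # synthetic dark, non-perfused patch
      cnp_mask = region_grow(img, (30, 30))
      print(cnp_mask.sum(), "pixels segmented")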

  9. Archaeogeophysical data acquisition and analysis at Tel Burna, Israel: a valuable opportunity for ongoing ground-truth investigation and collaboration (Invited)

    Science.gov (United States)

    Pincus, J. A.

    2013-12-01

    , acquired in a zigzag east-west direction, proceeding south. The area extended from the present excavation border to the north and east. The following paper will discuss the method of data acquisition, post-processing, and analysis of the results. The final conclusions of the survey show a continuation of several key walls to the east, a valuable sub-surface tracing of the limestone bedrock, and the limit to which the archaeological material is present spatially in Area B to the north. These results play a major role in determining where to focus excavation efforts in the 2014 excavation season. This unique collaboration with the archaeological team and ongoing opportunity for archaeological ground-truthing will be documented and published as the site develops. As there is a limited presence of such data within the corpus of published archaeogeophysical research, we look forward to further investigations at the site in the coming years.

  10. The Value of Instruction for a Commitment to Truth.

    Science.gov (United States)

    Bugeja, Michael J.

    1997-01-01

    Describes the redesign of a media ethics course in which students analyze such topics as truth, falsehood, manipulation, temptation, unfairness, and power. Notes that students keep an ethics journal in the course, and discusses sample journal topics. (PA)

  11. Truth telling and informed consent: is "primum docere" the new motto of clinical practice?

    Science.gov (United States)

    Byk, Christian

    2007-09-01

    Autonomy has become in many countries a key concept in the patient-physician relationship, leaving the old paternalistic medical attitude out of the realm of common good clinical practice. Consequently, the validity of the informed consent procedure which is related to any medical intervention, should imply that the information given is true. Raising the question as such obliges us to consider the truth not for itself but in regard to the validity of the consent to a medical intervention. Although a clinical approach reveals that loyalty should guide the patient-physician relationship, there are still some situations in which informed consent and truth telling may be controversial: in some circumstances, the physician should or may not tell the truth, in others he can simply forget to tell.

  12. A Critical View of Kenya's Truth, Justice and Reconciliation Agenda

    DEFF Research Database (Denmark)

    Owiso, Michael

    2016-01-01

    assassinations, killings, torture, denial of basic needs, and other kinds of human rights abuses perpetrated under successive regimes since the country's independence in 1963. The truth, justice and reconciliation process, whose report was presented to the president on May 21, 2013, after four years of work.... The paper concludes that the approach so far taken is compromised by elite-motivated political purposes and may not foster reconciliation and build a stable Kenya. In so doing, the paper contributes to intellectual debate around truth commissions and their role in promoting democracy....

  13. Assessment of Electronic Circuits Reliability Using Boolean Truth Table Modeling Method

    International Nuclear Information System (INIS)

    EI-Shanshoury, A.I.

    2011-01-01

    This paper explores the use of the Boolean Truth Table Modeling Method (BTTM) in the analysis of qualitative data. It is widely used in certain fields, especially electrical and electronic engineering. Our work focuses on the evaluation of power supply circuit reliability using the BTTM, which involves systematic attempts to falsify and identify hypotheses on the basis of truth tables constructed from qualitative data. Reliability parameters such as the system's failure rates for the power supply case study are estimated. All possible state combinations (operating and failed states) of the major components in the circuit were listed and their effects on the overall system were studied
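
    As an illustration of the truth-table approach (not the paper's actual power supply circuit), the sketch below enumerates every operating/failed state combination of a few hypothetical components, keeps the rows in which an assumed series/parallel structure function still works, and sums their probabilities; the component names and failure probabilities are made up.

      import itertools

      # Hypothetical power-supply components with assumed failure probabilities.
      components = {"transformer": 0.01, "rectifier": 0.02,
                    "regulator_A": 0.05, "regulator_B": 0.05}

      def system_works(state):
          """Illustrative structure function: transformer and rectifier in series,
          the two regulators redundant (parallel)."""
          return (state["transformer"] and state["rectifier"]
                  and (state["regulator_A"] or state["regulator_B"]))

      names = list(components)
      reliability = 0.0
      for truth_row in itertools.product([True, False], repeat=len(names)):
          state = dict(zip(names, truth_row))          # one row of the truth table
          prob = 1.0
          for name, works in state.items():
              prob *= (1 - components[name]) if works else components[name]
          if system_works(state):
              reliability += prob

      print(f"system reliability = {reliability:.6f}")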

  14. An Audience Study on PTT Gossiping and the Politics of Truth

    Directory of Open Access Journals (Sweden)

    Szuping Lin

    2017-10-01

    Full Text Available Through the audience research approach of in-depth interviews with users of the internet PTT Gossiping forum, this paper employs the concept of “politics of truth” to examine the “gossip” culture in the forum as well as the operation of truth/power politics within it. User interviews provide an understanding of the meanings, operations, subject positions and social practices of the forum and how they interact with power politics in society. The gossip forum has expanded what gossip culture represents from collective imagination to social practices, renewing the perspectives derived from it, while engaging in a dialogue with the concept of politics of truth.

  15. 75 FR 58469 - Regulation Z; Truth in Lending

    Science.gov (United States)

    2010-09-24

    ... Congress enacted the Truth in Lending Act (TILA) based on findings that economic stability would be... 2503 of the Housing and Economic Recovery Act of 2008, Public Law 110-289, enacted on July 30, 2008. The MDIA was later amended by the Emergency Economic Stabilization Act of 2008, Public Law 110-343...

  16. From the clouds to the ground - snow precipitation patterns vs. snow accumulation patterns

    Science.gov (United States)

    Gerber, Franziska; Besic, Nikola; Mott, Rebecca; Gabella, Marco; Germann, Urs; Bühler, Yves; Marty, Mauro; Berne, Alexis; Lehning, Michael

    2017-04-01

    Knowledge about snow distribution and snow accumulation patterns is important and valuable for different applications such as the prediction of seasonal water resources or avalanche forecasting. Furthermore, accumulated snow on the ground is an important ground truth for validating meteorological and climatological model predictions of precipitation in high mountains and polar regions. Snow accumulation patterns are determined by many different processes from ice crystal nucleation in clouds to snow redistribution by wind and avalanches. In between, snow precipitation undergoes different dynamical and microphysical processes, such as ice crystal growth, aggregation and riming, which determine the growth of individual particles and thereby influence the intensity and structure of the snowfall event. In alpine terrain the interaction of different processes and the topography (e.g. lifting condensation and low level cloud formation, which may result in a seeder-feeder effect) may lead to orographic enhancement of precipitation. Furthermore, the redistribution of snow particles in the air by wind results in preferential deposition of precipitation. Even though orographic enhancement is addressed in numerous studies, the relative importance of micro-physical and dynamically induced mechanisms on local snowfall amounts and especially snow accumulation patterns is hardly known. To better understand the relative importance of different processes on snow precipitation and accumulation we analyze snowfall and snow accumulation between January and March 2016 in Davos (Switzerland). We compare MeteoSwiss operational weather radar measurements on Weissfluhgipfel to a spatially continuous snow accumulation map derived from airborne digital sensing (ADS) snow height for the area of Dischma valley in the vicinity of the weather radar. Additionally, we include snow height measurements from automatic snow stations close to the weather radar. Large-scale radar snow accumulation

  17. 78 FR 18795 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2013-03-28

    ... of credit at account opening. The consumer is also required to pay a cash advance fee that is equal... amount equal to any fees the consumer was required to pay with respect to the account that exceed 25... Regulation Z, which implements the Truth in Lending Act, and the Official Interpretations of the regulation...

  18. Discerning truth from deception: The sincere witness profile

    Directory of Open Access Journals (Sweden)

    Fiorella Giusberti

    2009-01-01

    Full Text Available During the last twenty years, we have witnessed a growing interest in the detection of verbal cues under deception. In this context, we focused our attention on the truth vs. deception topic in adults. In particular, we were interested in discrepant findings concerning some verbal indicators. The aim of the present study was to investigate whether different experimental designs may yield different results regarding the presence or absence of CBCA criteria. Forty participants were shown a video of a robbery and were asked to give a truthful and a deceitful statement of the criminal event. The participants' performances were recorded in order to analyze the content of the reports. Results showed more changes in verbal behaviour under the within-subjects design compared to the between-subjects one, though the presence/absence of some criteria was the same across the two statistical procedures. The different results yielded by between- and within-subjects analyses can provide some hints as regards the discrepancies in the deception literature on verbal cues. Implications for applied settings are discussed.

  19. Truth comes into play: Adorno, Kant and Aesthetics

    Directory of Open Access Journals (Sweden)

    Berta M. Pérez

    2009-10-01

    Full Text Available This paper starts from the observation that the rehabilitation of the aesthetic field's bond to knowledge and truth is central to Adorno's account of it. Adorno's criticism of Kantian aesthetic theory is thus reconstructed, starting from his dismissal of the separation Kant established between the aesthetic and the epistemological fields. This criticism, which accuses Kantian aesthetics of subjectivism, is ultimately traced back to Adorno's dissatisfaction with the transcendental approach for not being dialectical. At this point, the need to determine the specificity of his own understanding of the aesthetic and of dialectics becomes apparent. Surprisingly, an unexpected proximity is then discovered between his position and the one represented by Kantian aesthetics itself, a proximity that makes it possible to recognize in the latter features which point beyond the limits of the Critical System towards the Adornian approach. But this proximity also proves the immanence and legitimacy of the (ambiguous) Adornian criticism of Kant and, as a result, the unsatisfactory character of the Kantian determination of knowledge and truth.

  20. PENGEMBANGAN MEDIA PERMAINAN TRUTH AND DARE BERVISI SETS GUNA MEMOTIVASI BELAJAR SISWA

    Directory of Open Access Journals (Sweden)

    Mita Rosyda Attaqiana

    2017-02-01

    Full Text Available This study uses a Truth and Dare game card medium with a SETS vision, which invites students to connect and relate science and technology to their impacts on the environment and society. The purpose of this study was to determine the feasibility, practicality and effectiveness of the SETS-vision Truth and Dare game medium in teaching the chemistry of buffer solutions. The research design used was the Four-D Model, simplified to a Three-D Model consisting of Define, Design and Develop. The initial feasibility of the medium was determined by media experts and subject-matter experts consisting of lecturers and chemistry teachers; the final feasibility was determined from the research results. The effectiveness of the medium was determined from post-test results and student learning-motivation questionnaires, and its practicality from student and teacher questionnaires on the use of the SETS-vision Truth and Dare game medium. Based on the initial validation by the media and subject-matter experts and the analysis of the research results, the medium was judged highly feasible for chemistry teaching. Analysis of the student and teacher questionnaires placed the medium in the very practical category. The medium was judged effective because it helped 35 of 42 students reach the minimum mastery criterion (KKM), exceeding the expectation of 30 of 42 students. It can therefore be concluded that the SETS-vision Truth and Dare game medium can motivate students to achieve the basic competency on buffer solutions.

  1. Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections

    DEFF Research Database (Denmark)

    Bertholet, Jenny; Wan, Hanlin; Toftegaard, Jakob

    2017-01-01

    segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP...... algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated...
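
    The template-matching step can be sketched generically: given a 2D template rendered from the 3D marker model and a search area centred on the dynamic-programming (DP) position, the refined marker position is the pixel with the highest normalized cross-correlation. The code below is a plain NumPy illustration of that idea, not the authors' implementation; the search radius and the toy image are assumptions.

      import numpy as np

      def ncc(patch, template):
          """Normalized cross-correlation of two equally sized 2D arrays."""
          p = patch - patch.mean()
          t = template - template.mean()
          denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
          return (p * t).sum() / denom if denom > 0 else 0.0

      def refine_position(projection, template, dp_pos, search_radius=10):
          """Scan a window centred on the DP-segmented position and return the
          pixel with the highest NCC score."""
          th, tw = template.shape
          best_score, best_pos = -np.inf, dp_pos
          for dy in range(-search_radius, search_radius + 1):
              for dx in range(-search_radius, search_radius + 1):
                  y, x = dp_pos[0] + dy, dp_pos[1] + dx
                  patch = projection[y:y + th, x:x + tw]
                  if patch.shape != template.shape:
                      continue                      # window falls outside the image
                  score = ncc(patch, template)
                  if score > best_score:
                      best_score, best_pos = score, (y, x)
          return best_pos, best_score

      # Toy example: a bright 5x5 blob as the "marker", recovered near its true spot.
      proj = np.random.rand(100, 100)
      proj[40:45, 60:65] += 5.0
      tmpl = proj[40:45, 60:65].copy()
      print(refine_position(proj, tmpl, dp_pos=(38, 58)))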

  2. Localization of skeletal and aortic landmarks in trauma CT data based on the discriminative generalized Hough transform

    Science.gov (United States)

    Lorenz, Cristian; Hansis, Eberhard; Weese, Jürgen; Carolus, Heike

    2016-03-01

    Computed tomography is the modality of choice for poly-trauma patients to assess rapidly skeletal and vascular integrity of the whole body. Often several scans with and without contrast medium or with different spatial resolution are acquired. Efficient reading of the resulting extensive set of image data is vital, since it is often time critical to initiate the necessary therapeutic actions. A set of automatically found landmarks can facilitate navigation in the data and enables anatomy oriented viewing. Following this intention, we selected a comprehensive set of 17 skeletal and 5 aortic landmarks. Landmark localization models for the Discriminative Generalized Hough Transform (DGHT) were automatically created based on a set of about 20 training images with ground truth landmark positions. A hierarchical setup with 4 resolution levels was used. Localization results were evaluated on a separate test set, consisting of 50 to 128 images (depending on the landmark) with available ground truth landmark locations. The image data covers a large amount of variability caused by differences of field-of-view, resolution, contrast agent, patient gender and pathologies. The median localization error for the set of aortic landmarks was 14.4 mm and for the set of skeleton landmarks 5.5 mm. Median localization errors for individual landmarks ranged from 3.0 mm to 31.0 mm. The runtime performance for the whole landmark set is about 5s on a typical PC.
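
    The DGHT builds on the classical Generalized Hough Transform, in which every edge point casts votes for candidate landmark positions through a table of model offsets (the R-table) and the accumulator peak is taken as the landmark. The sketch below shows only that plain, unweighted 2D voting step; it omits the discriminative weighting, the learned models and the four-level hierarchy described above, and all coordinates are made up.

      import numpy as np

      def ght_vote(edge_points, r_table, image_shape):
          """Each edge point casts one vote per R-table offset; the accumulator
          peak is the most supported landmark position."""
          accumulator = np.zeros(image_shape, dtype=np.int32)
          for ey, ex in edge_points:
              for oy, ox in r_table:
                  vy, vx = ey + oy, ex + ox
                  if 0 <= vy < image_shape[0] and 0 <= vx < image_shape[1]:
                      accumulator[vy, vx] += 1
          return accumulator

      # R-table: offsets from model edge points to the landmark (reference point).
      r_table = [(5, 5), (5, -5), (-5, 5), (-5, -5)]

      # Edge points of a square whose corners sit 5 px from the true landmark (50, 50).
      edges = [(45, 45), (45, 55), (55, 45), (55, 55)]
      acc = ght_vote(edges, r_table, (100, 100))
      print(np.unravel_index(acc.argmax(), acc.shape))   # -> (50, 50)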

  3. Automatic tracking of implanted fiducial markers in cone beam CT projection images

    International Nuclear Information System (INIS)

    Marchant, T. E.; Skalski, A.; Matuszewski, B. J.

    2012-01-01

    Purpose: This paper describes a novel method for simultaneous intrafraction tracking of multiple fiducial markers. Although the proposed method is generic and can be adopted for a number of applications including fluoroscopy based patient position monitoring and gated radiotherapy, the tracking results presented in this paper are specific to tracking fiducial markers in a sequence of cone beam CT projection images. Methods: The proposed method is accurate and robust thanks to utilizing the mean shift and random sampling principles, respectively. The performance of the proposed method was evaluated with qualitative and quantitative methods, using data from two pancreatic cancer patients, one prostate cancer patient, and a moving phantom. The ground truth, for quantitative evaluation, was calculated based on manual tracking performed by three observers. Results: The average dispersion of marker position error calculated from the tracking results for pancreas data (six markers tracked over 640 frames, 3840 marker identifications) was 0.25 mm (at isocenter), compared with an average dispersion for the manual ground truth estimated at 0.22 mm. For prostate data (three markers tracked over 366 frames, 1098 marker identifications), the average error was 0.34 mm. The estimated tracking error in the pancreas data was < 1 mm (2 pixels) in 97.6% of cases where nearby image clutter was detected and in 100.0% of cases with no nearby image clutter. Conclusions: The proposed method has accuracy comparable to that of manual tracking and, in combination with the proposed batch postprocessing, superior robustness. Marker tracking in cone beam CT (CBCT) projections is useful for a variety of purposes, such as providing data for assessment of intrafraction motion, target tracking during rotational treatment delivery, motion correction of CBCT, and phase sorting for 4D CBCT.
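
    The mean-shift component can be illustrated with a minimal sketch: starting from a predicted marker position, the estimate is moved repeatedly to the intensity-weighted centroid of a local window until the shift becomes negligible. The window radius, convergence tolerance and synthetic frame are assumptions; the published method additionally uses random sampling for robustness and a batch post-processing step, which the sketch does not attempt.

      import numpy as np

      def mean_shift(weights, start, radius=7, tol=0.1, max_iter=50):
          """Move `start` to the intensity-weighted centroid of its local window
          until convergence (assumes the marker stays away from the image border)."""
          y, x = int(round(start[0])), int(round(start[1]))
          offs = np.arange(-radius, radius + 1)
          oy, ox = np.meshgrid(offs, offs, indexing="ij")
          for _ in range(max_iter):
              win = weights[y - radius:y + radius + 1, x - radius:x + radius + 1]
              total = win.sum()
              if total == 0:
                  break
              shift_y = (win * oy).sum() / total
              shift_x = (win * ox).sum() / total
              y, x = int(round(y + shift_y)), int(round(x + shift_x))
              if np.hypot(shift_y, shift_x) < tol:
                  break
          return y, x

      # Toy projection frame: a Gaussian blob (the marker) centred near (52, 48).
      yy, xx = np.mgrid[0:100, 0:100]
      frame = np.exp(-((yy - 52) ** 2 + (xx - 48) ** 2) / 8.0)
      print(mean_shift(frame, start=(49, 51)))   # converges close to (52, 48)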

  4. Freedom of Expression, Diversity, and Truth

    DEFF Research Database (Denmark)

    Kappel, Klemens; Hallsson, Bjørn Gunnar; Møller, Emil Frederik Lundbjerg

    2016-01-01

    The aim of this chapter is to examine how diversity benefits deliberation, information exchange and other socio-epistemic practices associated with free speech. We separate five distinct dimensions of diversity, and discuss a variety of distinct mechanisms by which various forms of diversity may...... be thought to have epistemically valuable outcomes. We relate these results to the moral justification of free speech. Finally, we characterise a collective action problem concerning the compliance with truth-conducive norms of deliberation, and suggest what may solve this problem....

  5. Automatic cloud coverage assessment of Formosat-2 image

    Science.gov (United States)

    Hsu, Kuo-Hsien

    2011-11-01

    The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisit mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images for the NSPO Image Processing System generally consists of two major steps. Firstly, an un-supervised K-means method is used for automatically estimating the cloud statistic of a Formosat-2 image. Secondly, cloud coverage is estimated from the Formosat-2 image by manual examination. A more accurate Automatic Cloud Coverage Assessment (ACCA) method would clearly increase the efficiency of the second step by providing a good prediction of the cloud statistic. In this paper, based mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method which comprises pre-processing and post-processing analysis. For the pre-processing analysis, the cloud statistic is determined using un-supervised K-means classification, Sobel's method, Otsu's method, non-cloudy pixel reexamination, and a cross-band filter method. A box-counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis, increasing the efficiency of the manual examination.
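
    One of the building blocks named above, Otsu's method, can be sketched in a few lines: it chooses the grey-level threshold that maximizes the between-class variance of the histogram, after which bright pixels can be flagged as cloud candidates and counted. This is a generic illustration on a synthetic band, not NSPO's processing chain.

      import numpy as np

      def otsu_threshold(band):
          """Return the threshold maximizing between-class variance (256 bins)."""
          hist, edges = np.histogram(band, bins=256)
          hist = hist.astype(float) / hist.sum()
          centers = 0.5 * (edges[:-1] + edges[1:])
          best_t, best_var = centers[0], -1.0
          for i in range(1, 256):
              w0, w1 = hist[:i].sum(), hist[i:].sum()
              if w0 == 0 or w1 == 0:
                  continue
              mu0 = (hist[:i] * centers[:i]).sum() / w0
              mu1 = (hist[i:] * centers[i:]).sum() / w1
              between = w0 * w1 * (mu0 - mu1) ** 2
              if between > best_var:
                  best_var, best_t = between, centers[i]
          return best_t

      # Synthetic band: dark land plus a bright "cloud" patch.
      band = np.random.normal(0.2, 0.05, (256, 256))
      band[60:120, 60:120] = np.random.normal(0.8, 0.05, (60, 60))
      t = otsu_threshold(band)
      cloud_fraction = (band > t).mean()
      print(f"threshold={t:.2f}, cloud coverage={100 * cloud_fraction:.1f}%")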

  6. Critical Inquiry for the Social Good: Methodological Work as a Means for Truth-Telling in Education

    Science.gov (United States)

    Kuntz, Aaron M.; Pickup, Austin

    2016-01-01

    This article questions the ubiquity of the term "critical" in methodological scholarship, calling for a renewed association of the term with projects concerned with social justice, truth-telling, and overt articulations of the social good. Drawing on Michel Foucault's work with parrhesia (or truth-telling) and Aristotle's articulation of…

  7. Understanding a sentence does not entail knowing its truth-conditions : Why the epistemological determination argument fails

    NARCIS (Netherlands)

    Cohnitz, Daniel; Kangilaski, Jaan

    The determination argument is supposed to show that a sentence's meaning is at least a truth-condition. This argument is supposed to rest on innocent premises that even a deflationist about truth can accept. The argument comes in two versions: one is metaphysical and the other is epistemological. In

  8. Which model of truth and reconciliation applies to former Yugoslavia?: Some thoughts on the closing panel discussion

    Directory of Open Access Journals (Sweden)

    Tošić Jelena

    2002-01-01

    Full Text Available This paper represents a reflection on the final panel discussion of the conference "Which Model of Truth and Reconciliation applies to ex-Yugoslavia?". By means of a sequential and hierarchical analysis of the argumentative structure of the discussion, the author «extracts» key dimensions of the expert discourse on truth and reconciliation using the case of this panel discussion. By identifying the points of consensus, the author seeks to describe the notion of truth and reconciliation as it emerged in the closing panel discussion of the conference.

  9. Distributed operating system for NASA ground stations

    Science.gov (United States)

    Doyle, John F.

    1987-01-01

    NASA ground stations are characterized by ever changing support requirements, so application software is developed and modified on a continuing basis. A distributed operating system was designed to optimize the generation and maintenance of those applications. Unusual features include automatic program generation from detailed design graphs, on-line software modification in the testing phase, and the incorporation of a relational database within a real-time, distributed system.

  10. Climate: 15 inconvenient truths

    International Nuclear Information System (INIS)

    Marko, Istvan E.; Furfari, Samuel; Masson, Henri; Preat, Alain; Debeil, Anne; Delory, Ludovic; Godefridi, Drieu; Myren, Lars; Ripa di Meana, Carlo

    2013-01-01

    Proposed by professionals of various disciplines, this book is considered the bible of climate sceptics. It proposes a synthesis of arguments that contest the prevailing views on climate. The authors show how, for the past fifteen years, reality has systematically contradicted the projections made by the IPCC and its numerous political and media relays. A first objective is therefore to reopen the debate on the climate issue in the face of a systematic monopolization of truth at the expense of an authentic scientific approach, and to restore a democratic debate. A second objective is to call into question the scientific character of the IPCC, the scientific views at the heart of its latest report, and the political, media and economic reception of IPCC reports

  11. Evaluating the effect of multiple sclerosis lesions on automatic brain structure segmentation

    Directory of Open Access Journals (Sweden)

    Sandra González-Villà

    2017-01-01

    Full Text Available In recent years, many automatic brain structure segmentation methods have been proposed. However, these methods are commonly tested with non-lesioned brains and the effect of lesions on their performance has not been evaluated. Here, we analyze the effect of multiple sclerosis (MS) lesions on three well-known automatic brain structure segmentation methods, namely, FreeSurfer, FIRST and multi-atlas fused by majority voting, which use learning-based, deformable and atlas-based strategies, respectively. To perform a quantitative analysis, 100 synthetic images of MS patients with a total of 2174 lesions are simulated on two public databases with available brain structure ground truth information (IBSR18 and MICCAI’12). The Dice similarity coefficient (DSC) differences and the volume differences between the healthy and the simulated images are calculated for the subcortical structures and the brainstem. We observe that the three strategies are affected when lesions are present. However, the effects of the lesions do not follow the same pattern; the lesions either make the segmentation method underperform or surprisingly augment the segmentation accuracy. The obtained results show that FreeSurfer is the method most affected by the presence of lesions, with DSC differences (generated − healthy) ranging from −0.11 ± 0.54 to 9.65 ± 9.87, whereas FIRST tends to be the most robust method when lesions are present (−2.40 ± 5.54 to 0.44 ± 0.94). Lesion location is not important for global strategies such as FreeSurfer or majority voting, where structure segmentation is affected wherever the lesions exist. On the other hand, FIRST is more affected when the lesions are overlaid or close to the structure of analysis. The structure most affected by the presence of lesions is the nucleus accumbens (from −1.12 ± 2.53 to 1.32 ± 4.00 for the left hemisphere and from −2.40 ± 5.54 to 9.65 ± 9.87 for the right hemisphere), whereas the
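
    The headline metric in this comparison, the Dice similarity coefficient, is straightforward to compute from two binary masks of the same structure; a small sketch with made-up arrays is shown below.

      import numpy as np

      def dice(mask_a, mask_b):
          """Dice similarity coefficient: DSC = 2*|A intersect B| / (|A| + |B|)."""
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      # Structure segmented on the healthy image vs. on the lesion-simulated image.
      healthy = np.zeros((10, 10), dtype=bool)
      healthy[2:8, 2:8] = True
      simulated = np.zeros((10, 10), dtype=bool)
      simulated[3:8, 2:8] = True
      print(f"DSC = {dice(healthy, simulated):.3f}")   # ~0.909 for these toy masks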

  12. Truth commissions and gender: A South African case study ...

    African Journals Online (AJOL)

    South Africa's gendered past was never substantially addressed by the South African Truth and Reconciliation Commission (TRC) despite attempts by women's groups to ensure its inclusion. The TRC's treatment of gender was in part constrained by its 'gender-blind' mandate, which ignored the different experiences and ...

  13. Learning from the crowd: Road infrastructure monitoring system

    Directory of Open Access Journals (Sweden)

    Johannes Masino

    2017-10-01

    To address this problem, methods have been developed to collect training data automatically for new vehicles, based on comparing the trajectories of untrained and trained vehicles. The results show that the method based on a k-dimensional tree and Euclidean distance performs best and is robust in transferring road surface information from one vehicle to another. Furthermore, this method offers the possibility of merging the output and road infrastructure information from multiple vehicles to enable a more robust and precise prediction of the ground truth.
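
    The best-performing variant described above, matching an untrained vehicle's trajectory to a trained vehicle's trajectory by nearest-neighbour search in a k-dimensional tree and accepting matches within a Euclidean distance bound, can be sketched with SciPy's cKDTree. The coordinates, labels and distance threshold below are illustrative, not values from the study.

      import numpy as np
      from scipy.spatial import cKDTree

      # (x, y) positions of a trained vehicle with road-condition labels.
      rng = np.random.default_rng(0)
      trained_xy = np.cumsum(rng.normal(0, 1, (200, 2)), axis=0)   # reference trajectory
      trained_labels = rng.integers(0, 2, 200)                     # e.g. 0 = smooth, 1 = damaged

      # Untrained vehicle drives roughly the same road with GPS noise.
      untrained_xy = trained_xy + rng.normal(0, 0.5, trained_xy.shape)

      tree = cKDTree(trained_xy)
      dist, idx = tree.query(untrained_xy, k=1)        # nearest trained sample per point

      max_dist = 2.0                                   # accept matches within 2 m (assumed)
      transferred = np.where(dist <= max_dist, trained_labels[idx], -1)
      print(f"{(transferred >= 0).mean():.0%} of points received a transferred label")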

  14. SUSTAINABLE DEVELOPMENT – HUMAN DEVELOPMENT CONNECTIONS IN THE POST-TRUTH ERA

    Directory of Open Access Journals (Sweden)

    ANDREEA CONSTANTINESCU

    2017-08-01

    Full Text Available Following the distancing of current policy from economic rigor and from ethical demands aimed at the redistribution of wealth, modern societies are parasitized by the post-truth treatment of actual facts. This distorts the shape and content of data of general interest, for example through the political distortion of scientific evidence proving anthropogenic climate change. Under these circumstances, the question of to what extent the economist's maxim that what you cannot measure you cannot manage still holds becomes entirely legitimate. Regarding sustainable development management, monitoring the degree of achievement of the Sustainable Development Goals is no longer sufficient to track progress in this area. Therefore, experts propose introducing as much qualitative data as possible which, combined with quantitative data, will enhance relevance and make the data harder to divert for political purposes. This paper follows this direction, trying to show that the real meaning of data can be protected by systemic analysis of all data originating from the monitoring of certain processes, which can be aggregated, with applicability in sustainable development. Thus, analyzing data on sustainable development together with data indicating the state of human development emphasizes, on the one hand, the intrinsic link between these concepts and, on the other, maintains the sense of sustainability even in the post-truth era.

  15. Constructing the truth: from 'confusion of tongues' to 'constructions in analysis'.

    Science.gov (United States)

    Press, Jacques

    2006-04-01

    The author hypothesizes that the papers Freud wrote in the period 1934-9 constitute a final turning point in his work resulting from an attempt to work through, by means of self-analysis, early traumatic elements reactivated by the conditions of his life in the 1930s. The author emphasizes that the ups and downs of Freud's relationship with Sándor Ferenczi and the mourning which followed his death in 1933 played an important role in this traumatic situation. He suggests that through these last works, Freud pursued a posthumous dialogue with Ferenczi. This working through led Freud, in Moses and monotheism, to an ultimate revision of his theory of trauma, a revision which the author examines in full, in the light of the works of the Egyptologist, Jan Assmann. A new analytical paradigm emerges: that of constructions in analysis developed in the article of the same name. The activity of construction appears as an alternative to the mutual analysis proposed by Ferenczi and is closely bound up with the notion of historical truth. In psychoanalysis, it would mean constructing a historical truth whose anchoring in the material truth of the past is essential, though it should not be confused with it.

  16. Truth and Credibility in Sincere Policy Analysis: Alternative Approaches for the Production of Policy-Relevant Knowledge.

    Science.gov (United States)

    Bozeman, Barry; Landsbergen, David

    1989-01-01

    Two competing approaches to policy analysis are distinguished: a credibility approach, and a truth approach. According to the credibility approach, the policy analyst's role is to search for plausible argument rather than truth. Each approach has pragmatic tradeoffs in fulfilling the goal of providing usable knowledge to decision makers. (TJH)

  17. Ground-atmosphere interactions at Gale

    Science.gov (United States)

    Renno, N. O.; Martinez, G.; Ramos, M.; Hallet, B.; Gómez, F. G.; Jun, I.; Fisk, M. R.; Gomez-Elvira, J.; Hamilton, V. E.; Mischna, M. A.; Sletten, R. S.; Martin-Torres, J.; De La Torre Juarez, M.; Vasavada, A. R.; Zorzano, M.

    2013-12-01

    We analyze variations in environmental parameters and regolith properties along Curiosity's track to determine the possible causes of an abrupt change in the thermal properties of the ground and the atmosphere observed around Sol 120, as the rover transitioned from an area of sandy soil (Rocknest) to an area of fractured bedrock terrain (Yellowknife). Curiosity is instrumented with the Rover Environmental Monitoring Station (REMS) and the Dynamic Albedo of Neutrons (DAN) sensors to measure the air temperature, the ground temperature, and the hydrogen content of the shallow subsurface along Curiosity's track. Analysis of the REMS data is used to estimate the regolith's heat budget. This analysis suggests that the abrupt decrease in the ground and atmosphere temperature and the difference between ground and air temperatures observed around Sol 120 is likely caused by an increase in the soil thermal inertia. The changes in thermal inertia have been known for some time, so their confirmation by the REMS package provides ground truthing. A new, unexpected finding is that the regolith water content, as indicated by DAN's detection of hydrogen content, is higher in the Yellowknife soil. Another interesting finding at this site is the holes and other signs of recent geological activity in the area of fractured terrain that may reflect large volumetric variations and facilitate gas exchange between the ground and atmosphere. Near-surface volumetric changes in soil and bedrock could reflect changes in the volume of subsurface H2O, or in the partitioning of H2O among its three phases. Volume increases could also result from salt crystal growth in rock pores and soil pores associated with the adsorption of water vapor. Crystallization in pores is a significant weathering process on Earth; it could well be active on Mars. Salts also inhibit the exchange of moisture between the ground and the atmosphere, and cement the soils of arid places such as in the McMurdo Dry Valleys in

  18. Islamic positivism and scientific truth: Qur'an and archeology in a creationist documentary film

    OpenAIRE

    Dupret , Baudouin; Gutron , Clémentine

    2016-01-01

    International audience; The ambition of “scientific creationism” is to prove that science actually confirms religion. This is especially true in the case of Muslim creationism, which adopts a reasoning of a syllogistic type: divine revelation is truth; good science confirms truth; divine revelation is henceforth scientifically proven. Harun Yahya is a prominent Muslim “creationist” whose website hosts many texts and documentary films, among which “Evidence of the true faith in historical sour...

  19. AUTOMATIC RETINA EXUDATES SEGMENTATION WITHOUT A MANUALLY LABELLED TRAINING SET

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy which can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, two new methods for the detection of exudates are presented which do not use a supervised learning step and therefore do not require ground-truthed lesion training sets, which are time consuming to create, difficult to obtain, and prone to human error. We introduce a new dataset of fundus images from various ethnic groups and levels of DME which we have made publicly available. We evaluate our algorithm with this dataset and compare our results with two recent exudate segmentation algorithms. In all of our tests, our algorithms perform better than or comparably to these algorithms, with an order-of-magnitude reduction in computational time.

  20. Self-actuating grapple automatically engages and releases loads from overhead cranes

    Science.gov (United States)

    Froehlich, J. A.; Karastas, G. A.

    1966-01-01

    Two-piece grapple mechanism consisting of a lift knob secured to the load and a grapple member connected to the crane or lift automatically disengages the load from the overhead lifting device when the load contacts the ground. The key feature is the sliding collar under the lift knob which enables the grapple latch to be stripped off over the lift knob.

  1. Using GPS-surveyed intertidal zones to determine the validity of shorelines automatically mapped by Landsat water indices

    Science.gov (United States)

    Kelly, Joshua T.; Gontz, Allen M.

    2018-03-01

    Satellite remote sensing has been used extensively in a variety of shoreline studies and validated using aerial photography. This ground truth method only represents an instantaneous depiction of the shoreline at the time of acquisition and does not take into account the spatial and temporal variability of the dynamic shoreline boundary. Landsat 8's Operational Land Imager sensor's capability to accurately delineate a shoreline is assessed by comparing all known Landsat water index-derived shorelines with two GPS-surveyed intertidal zones that coincide with the satellite flyover date, one of which had near-neap tide conditions. Seven indices developed for automatically classifying water pixels were evaluated for their ability to delineate shorelines. The shoreline is described here as the area above and below maximum low and high tide, otherwise known as the intertidal zone. The high-water line, or wet/dry sediment line, was chosen as the shoreline indicator to be mapped using a handheld GPS. The proportion of the Landsat-derived shorelines that fell within this zone and their alongshore profile lengths were calculated. The most frequently used water index and the predecessor to the Modified Normalized Difference Water Index (MNDWI), the Normalized Difference Water Index (NDWI), was found to be the least accurate by a significant margin. Other indices required calibration of their threshold value to achieve accurate results, thus diminishing their replicability for other regions. MNDWI was determined to be the best index for automated shoreline mapping, based on its superior accuracy and repeatable, stable threshold value.
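
    For reference, the two indices compared above differ only in the second band: NDWI combines green with near-infrared reflectance, while MNDWI replaces the near-infrared with shortwave infrared (Landsat 8 OLI bands 3, 5 and 6, respectively). The sketch below computes both on toy reflectance arrays and applies the common default threshold of 0 to flag water; this default is not necessarily the calibrated threshold discussed in the paper.

      import numpy as np

      def ndwi(green, nir):
          """NDWI = (green - NIR) / (green + NIR)."""
          return (green - nir) / (green + nir + 1e-12)

      def mndwi(green, swir1):
          """MNDWI = (green - SWIR1) / (green + SWIR1)."""
          return (green - swir1) / (green + swir1 + 1e-12)

      # Toy reflectance arrays: left half water (low NIR/SWIR), right half land.
      green = np.full((4, 8), 0.08); green[:, :4] = 0.10
      nir   = np.full((4, 8), 0.25); nir[:, :4]   = 0.04
      swir1 = np.full((4, 8), 0.20); swir1[:, :4] = 0.02

      water_mask = mndwi(green, swir1) > 0.0        # default threshold of 0
      print(ndwi(green, nir)[0])                    # per-pixel NDWI, first row
      print(water_mask[0])                          # water/land classification, first row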

  2. Ernst von Glasersfeld's Radical Constructivism and Truth as Disclosure

    Science.gov (United States)

    Joldersma, Clarence W.

    2011-01-01

    In this essay Clarence Joldersma explores radical constructivism through the work of its most well-known advocate, Ernst von Glasersfeld, who combines a sophisticated philosophical discussion of knowledge and truth with educational practices. Joldersma uses Joseph Rouse's work in philosophy of science to criticize the antirealism inherent in…

  3. Green gold : on variations of truth in plantation forestry

    NARCIS (Netherlands)

    Romeijn, P.

    1999-01-01

    The "variations of truth in plantation forestry" is a study on the Teakwood investment program. Teakwood offered the general public in The Netherlands the opportunity to directly invest in a teak plantation in Costa Rica. The program was pioneered in 1989 and truly gained momentum when it

  4. When is Deceptive Message Production More Effortful than Truth-Telling? A Baker's Dozen of Moderators.

    Science.gov (United States)

    Burgoon, Judee K

    2015-01-01

    Deception is thought to be more effortful than telling the truth. Empirical evidence from many quarters supports this general proposition. However, there are many factors that qualify and even reverse this pattern. Guided by a communication perspective, I present a baker's dozen of moderators that may alter the degree of cognitive difficulty associated with producing deceptive messages. Among sender-related factors are memory processes, motivation, incentives, and consequences. Lying increases activation of a network of brain regions related to executive memory, suppression of unwanted behaviors, and task switching that is not observed with truth-telling. High motivation coupled with strong incentives or the risk of adverse consequences also prompts more cognitive exertion, for truth-tellers and deceivers alike, to appear credible, with associated effects on performance and message production effort, depending on the magnitude of effort, communicator skill, and experience. Factors related to message and communication context include discourse genre, type of prevarication, expected response length, communication medium, preparation, and recency of target event/issue. These factors can attenuate the degree of cognitive taxation on senders so that truth-telling and deceiving are similarly effortful. Factors related to the interpersonal relationship among interlocutors include whether sender and receiver are cooperative or adversarial and how well-acquainted they are with one another. A final consideration is whether the unit of analysis is the utterance, turn at talk, episode, entire interaction, or series of interactions. Taking these factors into account should produce a more nuanced answer to the question of when deception is more difficult than truth-telling.

  5. Fake news and post-truth pronouncements in general and in early human development.

    Science.gov (United States)

    Grech, Victor

    2017-12-01

    Fake news and post-truth pronouncements are increasingly common, and are unfortunately also progressively being applied to the sciences, including the medical sciences. This editorial briefly reviews this unsavoury trend and highlights recent debunking of fake truths in early human development. Science is arguably the last metanarrative with any significant cachet in the postmodern period. We, as scientists, must strive to ensure that our work is transparent and of the highest possible standard so as to continue to uphold science's integrity and probity. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Truth-telling, decision-making, and ethics among cancer patients in nursing practice in China.

    Science.gov (United States)

    Ling, Dong-Lan; Yu, Hong-Jing; Guo, Hui-Ling

    2017-01-01

    Truth-telling toward terminally ill patients is a challenging ethical issue in healthcare practice. However, there are no existing ethical guidelines or frameworks provided for Chinese nurses in relation to decision-making on truth-telling of terminal illness, and the role of nurses is thus not explicit when they encounter this issue. The intention of this paper is to provide ethical guidelines or strategies with regard to decision-making on truth-telling of terminal illness for Chinese nurses. This paper initially presents a case scenario and then critically discusses the ethical issue in association with ethical principles and philosophical theories. Instead of focusing on attitudes toward truth disclosure, it aims to provide strategies regarding this issue for nurses. It highlights and discusses some of the relevant ethical assumptions around the perceived role of nurses in healthcare settings by focusing on nursing ethical virtues, nursing codes of ethics, and philosophical perspectives. Confucian culture is discussed to explicate that deontology does not consider family-oriented care in China. Treating each family individually to explore the family's beliefs and values on this issue is essential in healthcare practice, and nurses should tailor their own approach to individual needs regarding truth-telling in different situations. Moreover, the Chinese Code of Ethics should be modified to be more specific and applicable. Finally, a narrative ethics approach should be applied and teamwork between nurses, physicians and families should be established to support cancer patients and to ensure their autonomy and hope. Ethical considerations: This paper was approved by the Ethics Committee of The Second Affiliated Hospital of Guangzhou Medical University. The authors have obtained consent to use the case study and it has been anonymised to preserve the patient's confidentiality.

  7. Truth or meaning: Ricoeur versus Frei on biblical narrative ...

    African Journals Online (AJOL)

    Truth or meaning: Ricoeur versus Frei on biblical narrative. Of the theologians and philosophers now writing on biblical narrative, Hans Frei and Paul Ricoeur are probably the most prominent. It is significant that their views converge on important issues. Both are uncomfortable with hermeneutic theories that convert the text ...

  8. Ground penetrating radar utilization in exploring inadequate concrete covers in a new bridge deck

    Directory of Open Access Journals (Sweden)

    Md. Istiaque Hasan

    2014-01-01

    Full Text Available The cast-in-place, four-span reinforced concrete deck of a bridge near Roanoke, Texas, was recently completed. Due to possible construction errors, it was suspected that the concrete covers in the deck did not conform to the drawings and specifications. A full-scale non-destructive evaluation of the concrete covers was carried out using ground penetrating radar (GPR) equipment. Cover values were determined from the radargram generated from the scan, and the estimated covers were plotted on contour maps. Migration data can substitute for drilling-based ground truth data without compromising the concrete cover estimates, except for areas with very high cover values. Areas with high water content may result in inaccurate concrete dielectric constants. Based on the results, significant retrofitting of the bridge deck, such as an additional overlay, was recommended.
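
    The cover values read off a radargram follow from a simple two-way travel-time relation, depth = c * t / (2 * sqrt(eps_r)), where c is the speed of light in vacuum and eps_r the relative dielectric constant of the concrete; this is also why areas with high water content (and hence an uncertain eps_r) yield inaccurate covers. A small sketch with an assumed eps_r of 8 is given below.

      # Concrete cover estimation from GPR two-way travel time (illustrative values).
      C = 0.2998  # speed of light in vacuum, m/ns

      def cover_depth(two_way_time_ns, rel_dielectric=8.0):
          """depth = c * t / (2 * sqrt(eps_r)); eps_r of roughly 6-9 for dry concrete."""
          return C * two_way_time_ns / (2.0 * rel_dielectric ** 0.5)

      for t_ns in (1.0, 1.5, 2.0):                    # picked reflection times, ns
          d = cover_depth(t_ns)
          print(f"t = {t_ns:.1f} ns -> cover = {d * 1000:.0f} mm")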

  9. When deception becomes easy : The effects of task switching and goal neglect on the truth proportion effect

    NARCIS (Netherlands)

    van Bockstaele, B.; Wilhelm, C.; Meijer, E.; Debey, E.; Verschuere, B.

    2015-01-01

    Lying is typically more cognitively demanding than truth telling. Yet, recent cognitive models of lying propose that lying can be just as easy as truth telling, depending on contextual factors. In line with this idea, research has shown that the cognitive cost of deception decreases when people

  10. Automatic Imitation

    Science.gov (United States)

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  11. GNSS as a sea ice sensor - detecting coastal freeze states with ground-based GNSS-R

    Science.gov (United States)

    Strandberg, Joakim; Hobiger, Thomas; Haas, Rüdiger

    2017-04-01

    Based on the idea of using freely available signals for remote sensing, ground-based GNSS-reflectometry (GNSS-R) has found more and more applications in hydrology, oceanography, agriculture and other Earth sciences. GNSS-R is based on analysing the elevation dependent SNR patterns of GNSS signals, and traditionally only the oscillation frequency and phase have been studied to retrieve parameters from the reflecting surfaces. However, recently Strandberg et al. (2016) developed an inversion algorithm that has changed the paradigms of ground-based GNSS-R as it enables direct access to the radiometric properties of the reflector. Using the signal envelope and the rate at which the magnitude of the SNR oscillations are damped w.r.t. satellite elevation, the algorithm retrieves the roughness of the reflector surface amongst other parameters. Based on this idea, we demonstrate for the first time that a GNSS installation situated close to the coastline can detect the presence of sea-ice unambiguously. Using data from the GTGU antenna at the Onsala Space Observatory, Sweden, the time series of the derived damping parameter clearly matches the occurrence of ice in the bay where the antenna is situated. Our results were validated against visual inspection logs as well as with the help of ice charts from the Swedish Meteorological and Hydrological Institute. Our method is even sensitive to partial and intermediate ice formation stages, with clear difference in response between frazil ice and both open and solidly frozen water surfaces. As the GTGU installation is entirely built with standard geodetic equipment, the method can be applied directly to any coastal GNSS site, allowing analysis of both new and historical data. One can use the method as an automatic way of retrieving independent ground truth data for ice extent measurements for use in hydrology, cryosphere studies, and even societal interest fields such as sea transportation. Finally, the new method opens up for
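
    The SNR interferometric pattern underlying this kind of analysis is commonly modelled as a damped oscillation in the sine of the satellite elevation angle; fitting its amplitude, damping, reflector height and phase yields parameters like the damping value used here to flag ice. The sketch below fits such a generic model to synthetic data with SciPy and is only a simplified illustration of the idea, not the inversion algorithm of Strandberg et al. (2016).

      import numpy as np
      from scipy.optimize import curve_fit

      L1 = 0.1903  # GPS L1 wavelength in metres

      def snr_model(sin_e, amp, damping, height, phase):
          """Detrended SNR vs. sin(elevation): a damped interference oscillation."""
          return amp * np.exp(-damping * sin_e ** 2) * np.cos(4 * np.pi * height / L1 * sin_e + phase)

      # Synthetic arc: reflecting surface 3.2 m below the antenna, plus noise.
      rng = np.random.default_rng(1)
      sin_e = np.sin(np.radians(np.linspace(5, 30, 400)))
      snr = snr_model(sin_e, 40.0, 6.0, 3.2, 0.4) + rng.normal(0, 2.0, sin_e.size)

      # The initial height would normally come from a spectral (Lomb-Scargle) estimate;
      # here it is simply set near the expected value.
      popt, _ = curve_fit(snr_model, sin_e, snr, p0=[30.0, 4.0, 3.2, 0.0])
      print(f"fitted damping = {popt[1]:.2f}, reflector height = {popt[2]:.2f} m")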

  12. Truth and beauty in cosmology: does the Universe have an aesthetic?

    CERN Multimedia

    Impey, C

    2004-01-01

    "Astronomy is an empirical science, yet scientific definitions of truth and beauty are closely tied to the fact that mathematics appears to provide an accurate description of the physical Universe" (2 pages)

  13. Robust augmented reality registration method for localization of solid organs' tumors using CT-derived virtual biomechanical model and fluorescent fiducials.

    Science.gov (United States)

    Kong, Seong-Ho; Haouchine, Nazim; Soares, Renato; Klymchenko, Andrey; Andreiuk, Bohdan; Marques, Bruno; Shabat, Galyna; Piechaud, Thierry; Diana, Michele; Cotin, Stéphane; Marescaux, Jacques

    2017-07-01

    Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., CT scan). The virtual model can be superimposed to real-time images enabling transparency visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not take into account inner structures' deformations. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from CT scan. Kidneys were deformed and the shape changes were identified by tracking the fiducials, using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. Accuracy of prediction of pseudo-tumors' location was evaluated with a CT scan in the deformed status (ground truth). In vivo: fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on laparoscopic images by automatic registration of the fiducials. Biomechanical models were successfully generated and accurately superimposed on optical images. The mean measured distance between the estimated tumor by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in in vivo kidney and well visualized in near-infrared mode enabling accurate automatic registration of the virtual model on the laparoscopic images. Our preliminary experiments showed the potential of a biomechanical model with fluorescent
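
    The registration in this study is deformable (FEM-driven), but its starting point is the classical problem of aligning tracked fiducials with their model positions. As a simplified, hedged illustration of that core step only, the sketch below performs a rigid Kabsch (SVD) alignment of corresponding fiducial coordinates; the coordinates are synthetic and the deformation handling of the actual method is deliberately omitted.

    ```python
    # Rigid alignment of tracked fiducials to their CT-model positions (Kabsch).
    # The real pipeline couples such correspondences with a deformable FEM model.
    import numpy as np

    def kabsch(model_pts, tracked_pts):
        """Return rotation R and translation t minimizing ||R @ p + t - q||."""
        cm, ct = model_pts.mean(axis=0), tracked_pts.mean(axis=0)
        H = (model_pts - cm).T @ (tracked_pts - ct)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = ct - R @ cm
        return R, t

    # Six synthetic fiducials in model coordinates and their noisy tracked poses.
    rng = np.random.default_rng(1)
    model = rng.uniform(0, 50, size=(6, 3))
    angle = np.deg2rad(15)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    tracked = model @ R_true.T + np.array([5.0, -2.0, 1.0]) + rng.normal(0, 0.1, (6, 3))

    R, t = kabsch(model, tracked)
    print("max residual (mm):", round(float(np.abs(model @ R.T + t - tracked).max()), 3))
    ```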

  14. The Simple Truth about the Gender Pay Gap

    Science.gov (United States)

    American Association of University Women, 2014

    2014-01-01

    It's been said that men are paid more than women are paid over their lifetimes. But what does that mean? Are women paid less because they choose lower-paying jobs? Is it because more women work part time than men do? Or is it because women tend to be the primary caregivers for their children? AAUW's "The Simple Truth about the…

  15. On truth-telling and storytelling: Truth-seeking during research involving communities with an oral culture and a history of violent conflict

    Directory of Open Access Journals (Sweden)

    A G Vethuizen

    2014-12-01

    Full Text Available The aim of this article is to propose some principles and practices for truth-seeking during research into violent conflict. To achieve this aim, an argument is deployed by analysing the theoretical concepts “truth”, “myth” and “oral culture” as sources of knowledge. This conceptual analysis precedes a discussion on community-based participatory research (CBPR as a research methodology to access the knowledge of lived experiences embedded in the oral culture of the San community of Platfontein, near Kimberley, South Africa. It was found that CBPR contains good practices to use in research to judge the probable truth about disputes. The CBPR process is ideal for determining the accuracy of data in the context of a specific culture, considering the norms, spiritual influences and personal considerations of knowledge-holders that accompany a unique cosmology. A variety and equity of worldviews and perspectives of what happened during violent conflict successfully challenges hegemonic power relationships, paradigms and narratives, ultimately leading to informed judgements of what is probably true about a conflict. CBPR with the San of Platfontein revealed principles that can be used as guidelines for researching disputes where oral culture is involved.

  16. What is the Point? Ethics, Truth and the Tractatus

    DEFF Research Database (Denmark)

    Christensen, Anne-Marie Søndergaard

    2007-01-01

    discourse is shaped by both subjective and objective concerns. Moving on, I unfold the subjective side of ethics by drawing on Stanley Cavell's notion of the point of an utterance, while the objective side will be presented via Diamond's writing on the importance of truth in ethics. My goal is to argue...

  17. Truth and Fibble-Fable in Renaissance Satire

    Directory of Open Access Journals (Sweden)

    Branko Madžarevič

    2011-12-01

    Full Text Available Throughout the years of my close involvement with wor(lds’ transactions, as a translator, in the triangle of the Renaissance doctors Rabelais, Montaigne, and, yes, Louis-Ferdinand Céline, the French author of the novel Journey to the End of Night (1932, their views on satire can be considered from a rather unconventional angle. By means of an imaginary morbid epistolary medical council, the impromptu introduction tries to entangle this peculiar trio in a freewheeling alliance, leading to the assumption that every translation defies the interpretational ambiguities of the utopian Thelema motto “Do as you will”. In the satirical context of all source and target faces, it is always acting on the verge of the paradoxical encomium, the hypothetical pasticcio, and obscurantist reversals of the original text. Of course, the issue at stake here is one of the convolutions of Erasmus’ Praise of Folly, Rabelais’ utopian Thelema Abbey, and the German Epistles of Obscure Men in pathetically wretched Latin. This paper deals with Renaissance and humanist satire, focusing on Rabelais’ five books of Gargantua and Pantagruel (1532–1564 and the interplay between the ideas of truth, truthfulness, and seriousness. In addition, the paper deals with how the Renaissance spirit of this satirical contemporary and ally of ours challenges the issue of verbal boundaries and the materiality of language.

  18. Figure-ground segregation: A fully nonlocal approach.

    Science.gov (United States)

    Dimiccoli, Mariella

    2016-09-01

    We present a computational model that computes and integrates in a nonlocal fashion several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and it can be estimated without explicitly relying on object boundaries. The methodology is grounded on two elements: multi-directional linear voting and nonlinear diffusion. A first estimation of the figural status of each pixel is obtained as a result of a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas. Copyright © 2015 Elsevier Ltd. All rights reserved.
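
    The nonlinear diffusion step that enforces coherence of the figural-status estimates can be pictured with a generic Perona-Malik-style scheme; the sketch below is a stand-in for that idea, not the paper's specific formulation, and all parameter values are illustrative.

    ```python
    # Generic Perona-Malik-style nonlinear diffusion of a per-pixel
    # figural-status map: smooths homogeneous regions, limits flow across edges.
    import numpy as np

    def nonlinear_diffusion(u, n_iter=100, kappa=0.3, dt=0.2):
        u = u.astype(float).copy()
        for _ in range(n_iter):
            # differences toward the four neighbours (periodic borders for brevity)
            dN = np.roll(u, -1, axis=0) - u
            dS = np.roll(u,  1, axis=0) - u
            dE = np.roll(u, -1, axis=1) - u
            dW = np.roll(u,  1, axis=1) - u
            g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
            u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
        return u

    rng = np.random.default_rng(0)
    u0 = np.zeros((64, 64)); u0[:, 32:] = 1.0        # "figure" on the right half
    u0 += rng.normal(0, 0.15, u0.shape)              # noisy initial estimates
    u = nonlinear_diffusion(u0)
    print(round(float(u[:, :28].mean()), 2), round(float(u[:, 36:].mean()), 2))  # ~0 vs ~1
    ```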

  19. Alleviating Search Uncertainty through Concept Associations: Automatic Indexing, Co-Occurrence Analysis, and Parallel Computing.

    Science.gov (United States)

    Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.

    1998-01-01

    Grounded on object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…

  20. Automatic Deduction in Dynamic Geometry using Sage

    Directory of Open Access Journals (Sweden)

    Francisco Botana

    2012-02-01

    Full Text Available We present a symbolic tool that provides robust algebraic methods to handle automatic deduction tasks for a dynamic geometry construction. The main prototype has been developed as two different worksheets for the open source computer algebra system Sage, corresponding to two different ways of coding a geometric construction. In one worksheet, diagrams constructed with the open source dynamic geometry system GeoGebra are accepted. In this worksheet, Groebner bases are used to either compute the equation of a geometric locus in the case of a locus construction or to determine the truth of a general geometric statement included in the GeoGebra construction as a boolean variable. In the second worksheet, locus constructions coded using the common file format for dynamic geometry developed by the Intergeo project are accepted for computation. The prototype and several examples are provided for testing. Moreover, a third Sage worksheet is presented in which a novel algorithm to eliminate extraneous parts in symbolically computed loci has been implemented. The algorithm, based on a recent work on the Groebner cover of parametric systems, identifies degenerate components and extraneous adherence points in loci, both natural byproducts of general polynomial algebraic methods. Detailed examples are discussed.
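
    The truth of a geometric statement is decided algebraically with Groebner bases. As a minimal generic illustration of that idea (using SymPy rather than Sage, and a toy theorem that is not taken from the paper), one encodes the hypotheses as polynomials and checks whether the conclusion polynomial lies in the ideal they generate:

    ```python
    # Toy Groebner-basis deduction (SymPy, not Sage). Claim: the midpoint M of
    # the hypotenuse of the right triangle A=(0,0), B=(a,0), C=(0,b) is
    # equidistant from A and B.
    from sympy import symbols, groebner, expand

    a, b, x, y = symbols('a b x y')

    hypotheses = [2*x - a, 2*y - b]                             # M=(x, y) is the midpoint of BC
    conclusion = expand((x**2 + y**2) - ((x - a)**2 + y**2))    # |MA|^2 - |MB|^2

    G = groebner(hypotheses, x, y, a, b, order='lex')
    print(G.contains(conclusion))   # True: the conclusion reduces to 0 modulo G
    ```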

  1. Design and Fabrication of Automatic Glass Cutting Machine

    Science.gov (United States)

    Veena, T. R.; Kadadevaramath, R. S.; Nagaraj, P. M.; Madhusudhan, S. V.

    2016-09-01

    This paper deals with the design and fabrication of an automatic glass or mirror cutting machine. In order to increase cutting accuracy and production rate, and to decrease production time and the accidents caused by manual cutting of mirror or glass, this project aims at the development of an automatic machine which uses a programmable logic controller (PLC) to control the movement of the conveyor and the pneumatic circuit. In this machine, the work of the operator is to load and unload the mirror. The cutter used in this machine is a carbide wheel with its cutting edge ground to a V-shaped profile. The PLC controls the pneumatic cylinder, which in turn actuates the cutter along the glass; a fracture layer is formed, with a rib mark forming below the fracture layer and a crack forming below the rib mark. The machine elements are designed using CATIA V5R20, and the pneumatic circuit is designed using FESTO FLUID SIM software.

  2. Do citation systems represent theories of truth?

    Directory of Open Access Journals (Sweden)

    Betsy Van der Veer Martens

    2001-01-01

    Full Text Available This article suggests that the citation can be viewed not only as a "concept symbol" but also as a "boundary object". The scientific, legal, and patent citation systems in America are examined at the micro, meso, and macro levels in order to understand how they function as commodified theories of truth in contemporary knowledge representation. This approach also offers a meta-theoretical overview of existing citation research efforts in science, law, and technology that may be of interdisciplinary interest.

  3. Good things don't come easy (to mind): explaining framing effects in judgments of truth.

    Science.gov (United States)

    Hilbig, Benjamin E

    2012-01-01

    Recently, the general phenomenon of a positive-negative-asymmetry was extended to judgments of truth. That is, negatively framed statements were shown to receive substantially higher truth ratings than formally equivalent statements framed positively. However, the cognitive mechanisms underlying this effect are unknown, so far. In the current work, two potential accounts are introduced and tested against each other in three experiments: On the one hand, negative framing may induce increased elaboration and thereby persuasion. Alternatively, negative framing could yield faster retrieval or generation of evidence and thus influence subjective veracity via experiential fluency. Two experiments drawing on response latencies and one manipulating the delay between information acquisition and judgment provide support for the fluency-based account. Overall, results replicate and extend the negatively-biased framing effect in truth judgments and show that processing fluency may account for it. © 2011 Hogrefe Publishing

  4. Isfahan MISP Dataset.

    Science.gov (United States)

    Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein

    2017-01-01

    An online depository was introduced to share clinical ground truth with the public and to provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database management. The website was entitled "biosigdata.com." It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while retaining their rights (citation and fee). Commenting was also available for all datasets, and an automatic sitemap and semi-automatic SEO indexing were set up for the site. A comprehensive list of available websites for medical datasets is also presented as a supplementary file (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf).

  5. AUTOMATIC ADJUSTMENT OF WIDE-BASE GOOGLE STREET VIEW PANORAMAS

    Directory of Open Access Journals (Sweden)

    E. Boussias-Alexakis

    2016-06-01

    Full Text Available This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator on panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.
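
    On the assumption that the vanishing line of the façade plane has been estimated, the projective component of the distortion can be removed with the standard affine-rectification homography whose third row is that line; the sketch below shows this generic construction (it is not the paper's triangular prism projection), with made-up vanishing points.

    ```python
    # Affine rectification sketch: send the imaged line at infinity of the facade
    # back to infinity so that affine point operators become applicable.
    import numpy as np

    vp1 = np.array([1500.0, 620.0, 1.0])   # horizontal vanishing point (homogeneous)
    vp2 = np.array([-900.0, 580.0, 1.0])   # second vanishing point of the facade

    l_inf = np.cross(vp1, vp2)             # imaged line at infinity of the plane
    l_inf = l_inf / l_inf[2]

    # A homography whose third row equals l_inf maps that line to (0, 0, 1)^T.
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  l_inf])

    def rectify(points_xy):
        """Apply H to pixel coordinates (N x 2) and de-homogenize."""
        pts_h = np.hstack([points_xy, np.ones((len(points_xy), 1))])
        mapped = pts_h @ H.T
        return mapped[:, :2] / mapped[:, 2:3]

    print(rectify(np.array([[100.0, 200.0], [400.0, 250.0]])))
    ```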

  6. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy.

    Science.gov (United States)

    Yang, Jinzhong; Beadle, Beth M; Garden, Adam S; Schwartz, David L; Aristophanous, Michalis

    2015-09-01

    To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation-maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the "ground truth" for quantitative evaluation. The median multichannel segmented GTV of the primary tumor was 15.7 cm(3) (range, 6.6-44.3 cm(3)), while the PET segmented GTV was 10.2 cm(3) (range, 2.8-45.1 cm(3)). The median physician-defined GTV was 22.1 cm(3) (range, 4.2-38.4 cm(3)). The median difference between the multichannel segmented and physician-defined GTVs was -10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was -19.2%, showing a statistically significant difference (p-value =0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was 0.75 (range, 0.55-0.84), and the
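
    The core of the delineation is a multichannel Gaussian mixture model solved with EM (the Markov random field regularization is omitted here). A bare-bones, hedged sketch of that core on synthetic co-registered intensities could look like this:

    ```python
    # Bare-bones multichannel GMM segmentation sketch (no MRF, synthetic data).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Rows = voxels inside the manually defined mask; columns = (CT, PET, MR).
    background = rng.normal([40.0, 1.5, 300.0], [8.0, 0.5, 40.0], size=(4000, 3))
    tumour = rng.normal([55.0, 6.0, 420.0], [8.0, 1.0, 40.0], size=(800, 3))
    voxels = np.vstack([background, tumour])

    gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
    labels = gmm.fit_predict(voxels)

    # Call the component with the higher mean PET uptake the tumour class.
    tumour_component = int(np.argmax(gmm.means_[:, 1]))
    print("voxels labelled tumour:", int((labels == tumour_component).sum()))
    ```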

  7. On the semantics of conflict resolution in truth maintenance systems

    NARCIS (Netherlands)

    Jonker, C.M.

    A Truth Maintenance System (TMS) maintains a consistent state of belief given a set J of justifications, i.e. arguments for belief. To resolve contradictions, dependency-directed backtracking is performed. In this paper we introduce a method that can be used to track all

  8. Experimental Investigations of a Precision Sensor for an Automatic Weapons Stabilizer System

    Directory of Open Access Journals (Sweden)

    Igor Korobiichuk

    2016-12-01

    Full Text Available This paper presents the results of experimental investigations of a precision sensor for an automatic weapons stabilizer system. It also describes the experimental equipment used and the structure of the developed sensor. A weapons stabilizer is designed for automatic guidance of an armament unit in the horizontal and vertical planes when firing at ground and air targets that are quickly maneuvering, and at lower speeds when firing anti-tank missiles, as well as the bypass of construction elements by the armament unit, and the automatic tracking of moving targets when interacting with a fire control system. The results of experimental investigations have shown that the error of the precision sensor developed on the basis of a piezoelectric element is 6 × 10⁻¹⁰ m/s² under quasi-static conditions, and ~10⁻⁵ m/s² for mobile use. This paper defines metrological and calibration properties of the developed sensor.

  9. Mineral Potential in India Using Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) Data

    Science.gov (United States)

    Oommen, T.; Chatterjee, S.

    2017-12-01

    NASA and the Indian Space Research Organization (ISRO) are generating Earth surface features data using Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) within 380 to 2500 nm spectral range. This research focuses on the utilization of such data to better understand the mineral potential in India and to demonstrate the application of spectral data in rock type discrimination and mapping for mineral exploration by using automated mapping techniques. The primary focus area of this research is the Hutti-Maski greenstone belt, located in Karnataka, India. The AVIRIS-NG data was integrated with field analyzed data (laboratory scaled compositional analysis, mineralogy, and spectral library) to characterize minerals and rock types. An expert system was developed to produce mineral maps from AVIRIS-NG data automatically. The ground truth data from the study areas was obtained from the existing literature and collaborators from India. The Bayesian spectral unmixing algorithm was used in AVIRIS-NG data for endmember selection. The classification maps of the minerals and rock types were developed using support vector machine algorithm. The ground truth data was used to verify the mineral maps.
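
    The rock-type maps were produced with a support vector machine trained on labelled spectra. A minimal hedged sketch of that classification step, with synthetic spectra standing in for AVIRIS-NG pixels, is given below.

    ```python
    # Minimal SVM classification of pixel spectra into (synthetic) rock classes.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_bands = 60                                             # placeholder band count
    class_means = rng.uniform(0.1, 0.6, size=(3, n_bands))   # three synthetic classes
    X = np.vstack([m + rng.normal(0, 0.02, size=(200, n_bands)) for m in class_means])
    y = np.repeat(np.arange(3), 200)                         # ground-truth class labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0, gamma='scale'))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
    ```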

  10. Negotiating Nation-building and Citizenship through the Truth and ...

    African Journals Online (AJOL)

    This paper, therefore, seeks to interrogate the dramatic world(s) created using the material properties of the TRC in John Kani's Nothing but the Truth and Zakes Mda's The Bells of Amersfoort. The paper argues that the domination and manipulation of this public realm by the state at the expense of the individual is not only ...

  11. When is deceptive message production more effortful than truth-telling? A baker’s dozen of moderators

    Directory of Open Access Journals (Sweden)

    Judee K Burgoon

    2015-12-01

    Full Text Available Deception is thought to be more effortful than telling the truth. Empirical evidence from many quarters supports this general proposition. However, there are many factors that qualify and even reverse this pattern. Guided by a communication perspective, I present a baker’s dozen of moderators that may alter the degree of cognitive difficulty associated with producing deceptive messages. Among sender-related factors are memory processes, motivation, incentives, and consequences. Lying increases activation of a network of brain regions related to executive memory, suppression of unwanted behaviors, and task switching that is not observed with truth-telling. High motivation coupled with strong incentives or the risk of adverse consequences also prompts more cognitive exertion--for truth-tellers and deceivers alike--to appear credible, with associated effects on performance and message production effort, depending on the magnitude of effort, communicator skill and experience. Factors related to message and communication context include discourse genre, type of prevarication, expected response length, communication medium, preparation, and recency of target event/issue. These factors can attenuate the degree of cognitive taxation on senders so that truth-telling and deceiving are similarly effortful. Factors related to the interpersonal relationship among interlocutors include whether sender and receiver are cooperative or adversarial and how well-acquainted they are with one another. A final consideration is whether the unit of analysis is the utterance, turn at talk, episode, entire interaction, or series of interactions. Taking these factors into account should produce a more nuanced answer to the question of when deception is more difficult than truth-telling.

  12. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.

    Science.gov (United States)

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-10-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our
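
    A common way to realize the bag-of-segments idea (a hedged baseline, not the authors' exact MS-MIL learner) is to score every segment and take the maximum segment score as the bag prediction, which simultaneously localizes the most likely pain segment:

    ```python
    # Multiple-instance baseline: bag label = max over segment scores.
    # Segment descriptors and the scorer below are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_bag(contains_pain, n_segments=8):
        """Each row is a toy BoW descriptor of one temporal segment."""
        segs = rng.normal(0.0, 1.0, size=(n_segments, 20))
        if contains_pain:                        # only a few segments show the event
            segs[rng.integers(n_segments)] += 2.0
        return segs

    bags = [make_bag(i % 2 == 1) for i in range(60)]
    bag_labels = np.array([i % 2 for i in range(60)])

    # Naive instance-level training: propagate each bag's label to its segments,
    # then aggregate with a max at prediction time.
    X = np.vstack(bags)
    y = np.repeat(bag_labels, [len(b) for b in bags])
    scorer = LogisticRegression(max_iter=1000).fit(X, y)

    for bag, label in zip(bags[:4], bag_labels[:4]):
        seg_scores = scorer.predict_proba(bag)[:, 1]
        print(label, round(float(seg_scores.max()), 2), "peak segment:", int(seg_scores.argmax()))
    ```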

  13. QCD-aware partonic jet clustering for truth-jet flavour labelling

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, Andy; Pollard, Chris [University of Glasgow, School of Physics and Astronomy, Glasgow (United Kingdom)

    2016-02-15

    We present an algorithm for deriving partonic flavour labels to be applied to truth particle jets in Monte Carlo event simulations. The inputs to this approach are final pre-hadronisation partons, to remove dependence on unphysical details such as the order of matrix element calculation and shower generator frame recoil treatment. These are clustered using standard jet algorithms, modified to restrict the allowed pseudo-jet combinations to those in which tracked flavour labels are consistent with QCD and QED Feynman rules. The resulting algorithm is shown to be portable between the major families of shower generators, and largely insensitive to many possible systematic variations: it hence offers significant advantages over existing ad hoc labelling schemes. However, it is shown that contamination from multi-parton scattering simulations can disrupt the labelling results. Suggestions are made for further extension to incorporate more detailed QCD splitting function kinematics, robustness improvements, and potential uses for truth-level physics object definitions and tagging. (orig.)

  14. QCD-aware partonic jet clustering for truth-jet flavour labelling

    International Nuclear Information System (INIS)

    Buckley, Andy; Pollard, Chris

    2016-01-01

    We present an algorithm for deriving partonic flavour labels to be applied to truth particle jets in Monte Carlo event simulations. The inputs to this approach are final pre-hadronisation partons, to remove dependence on unphysical details such as the order of matrix element calculation and shower generator frame recoil treatment. These are clustered using standard jet algorithms, modified to restrict the allowed pseudo-jet combinations to those in which tracked flavour labels are consistent with QCD and QED Feynman rules. The resulting algorithm is shown to be portable between the major families of shower generators, and largely insensitive to many possible systematic variations: it hence offers significant advantages over existing ad hoc labelling schemes. However, it is shown that contamination from multi-parton scattering simulations can disrupt the labelling results. Suggestions are made for further extension to incorporate more detailed QCD splitting function kinematics, robustness improvements, and potential uses for truth-level physics object definitions and tagging. (orig.)

  15. Truth Seeded Reconstruction for Fast Simulation in the ATLAS Experiment

    CERN Document Server

    Jansky, Roland; Salzburger, Andreas

    The huge success of the ATLAS experiment for particle physics during Run 1 of the LHC would not have been possible without the production of vast amounts of simulated Monte Carlo data. However, the very detailed detector simulation is a highly CPU-intensive task, and thus resource shortages occurred. Motivated by this, great effort has been put into speeding up the simulation. As a result, other time-consuming parts became visible, one of which is the track reconstruction. This thesis describes one potential solution to the CPU-intensive reconstruction of simulated data: a newly designed truth-seeded reconstruction. At its basis is the idea to skip the pattern recognition altogether, instead utilizing the available (truth) information from the simulation to directly fit particle trajectories without searching for them. At the same time, tracking effects of the standard reconstruction need to be emulated. This approach is validated thoroughly and no critical deviations of the results compared to the standard reconst...

  16. A Hybrid Hierarchical Approach for Brain Tissue Segmentation by Combining Brain Atlas and Least Square Support Vector Machine

    Science.gov (United States)

    Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh

    2013-01-01

    In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-square support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using the toolbox FMRIB's automated segmentation tool integrated in the FSL software (FSL-FAST) developed in Oxford Centre for functional MRI of the brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for LS-SVM are selected from the registered brain atlas. The voxel intensities and spatial positions are selected as the two feature groups for training and test. SVM as a powerful discriminator is able to handle nonlinear classification problems; however, it cannot provide posterior probability. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from the simulated magnetic resonance imaging (MRI) using Brainweb MRI simulator and real data provided by Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparing to the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for the quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to corresponding ground truth. PMID:24696800
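
    The reported Dice and Jaccard similarity coefficients between a binary segmentation and the ground truth are straightforward to compute, for example:

    ```python
    # Dice and Jaccard similarity between a binary segmentation and ground truth.
    import numpy as np

    def dice(seg, gt):
        seg, gt = seg.astype(bool), gt.astype(bool)
        inter = np.logical_and(seg, gt).sum()
        return 2.0 * inter / (seg.sum() + gt.sum())

    def jaccard(seg, gt):
        seg, gt = seg.astype(bool), gt.astype(bool)
        inter = np.logical_and(seg, gt).sum()
        return inter / np.logical_or(seg, gt).sum()

    gt = np.zeros((64, 64), dtype=bool); gt[16:48, 16:48] = True
    seg = np.zeros_like(gt);             seg[20:52, 16:48] = True   # shifted by 4 px
    print(round(dice(seg, gt), 3), round(jaccard(seg, gt), 3))      # 0.875, 0.778
    ```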

  17. A practical algorithm for the retrieval of floe size distribution of Arctic sea ice from high-resolution satellite Synthetic Aperture Radar imagery

    Directory of Open Access Journals (Sweden)

    Byongjun Hwang

    2017-07-01

    Full Text Available In this study, we present an algorithm for summer sea ice conditions that semi-automatically produces the floe size distribution of Arctic sea ice from high-resolution satellite Synthetic Aperture Radar data. Currently, floe size distribution data from satellite images are very rare in the literature, mainly due to the lack of a reliable algorithm to produce such data. Here, we developed the algorithm by combining various image analysis methods, including Kernel Graph Cuts, distance transformation and watershed transformation, and a rule-based boundary revalidation. The developed algorithm has been validated against the ground truth that was extracted manually with the aid of 1-m resolution visible satellite data. Comprehensive validation analysis has shown both perspectives and limitations. The algorithm tends to fail to detect small floes (mostly less than 100 m in mean caliper diameter compared to ground truth, which is mainly due to limitations in water-ice segmentation. Some variability in the power law exponent of floe size distribution is observed due to the effects of control parameters in the process of de-noising, Kernel Graph Cuts segmentation, thresholds for boundary revalidation and image resolution. Nonetheless, the algorithm, for floes larger than 100 m, has shown a reasonable agreement with ground truth under various selections of these control parameters. Considering that the coverage and spatial resolution of satellite Synthetic Aperture Radar data have increased significantly in recent years, the developed algorithm opens a new possibility to produce large volumes of floe size distribution data, which is essential for improving our understanding and prediction of the Arctic sea ice cover
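
    The splitting of touching floes rests on a distance transform followed by watershed segmentation. A compact hedged sketch of just that step (omitting the Kernel Graph Cuts segmentation and the rule-based boundary revalidation) is shown below with a toy ice mask.

    ```python
    # Distance transform + watershed to split touching "floes" in a binary mask.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    # Toy binary ice mask: two overlapping disks standing in for touching floes.
    yy, xx = np.mgrid[0:120, 0:120]
    ice = ((xx - 45) ** 2 + (yy - 60) ** 2 < 30 ** 2) | \
          ((xx - 80) ** 2 + (yy - 60) ** 2 < 28 ** 2)

    distance = ndi.distance_transform_edt(ice)
    peaks = peak_local_max(distance, min_distance=20, labels=ice)  # one seed per floe
    markers = np.zeros(ice.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    labels = watershed(-distance, markers, mask=ice)
    areas = np.bincount(labels.ravel())[1:]            # pixel area per floe
    print("floes found:", int(labels.max()), "areas:", areas)
    ```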

  18. DMPD: MyDths and un-TOLLed truths: sensor, instructive and effector immunity to tuberculosis. [Dynamic Macrophage Pathway CSML Database]

    Lifescience Database Archive (English)

    Full Text Available MyDths and un-TOLLed truths: sensor, instructive and effector immunity to tuberculosis. Authors: Reiling N, Ehlers S, Holscher C. Publication: Immunol Lett. (Dynamic Macrophage Pathway CSML Database record 18191460; pathway diagram available in several formats: .svg, .html, .csml.)

  19. Incentives for Truthful Information Elicitation of Continuous Signals

    OpenAIRE

    Radanovic, Goran; Faltings, Boi

    2014-01-01

    We consider settings where a collective intelligence is formed by aggregating information contributed from many independent agents, such as product reviews, community sensing, or opinion polls. We propose a novel mechanism that elicits both private signals and beliefs. The mechanism extends the previous versions of the Bayesian Truth Serum (the original BTS, the RBTS, and the multi-valued BTS), by allowing small populations and non-binary private signals, while not requiring additional assump...

  20. Art and fiction are signals with indeterminate truth values.

    Science.gov (United States)

    Rabb, Nathaniel

    2017-01-01

    Menninghaus et al. distinguish art from fiction, but no current arguments or data suggest that the concept of art can be meaningfully circumscribed. This is a problem for aesthetic psychology. I sketch a solution by rejecting the distinction: Unlike most animal communication, in which signals are either true or false, art and fiction consist of signals without determinate truth values.

  1. Fiction and truth in the parallel lives of Plutarch

    Directory of Open Access Journals (Sweden)

    Analía V. Sapere

    2018-05-01

    Full Text Available In this paper we will investigate the occurrences of πλάσμα in Plutarch’s Parallel Lives. We will analyse the meanings and nuances of the word in different passages of the work (understood as ‘fiction, counterfeit, figment’, etc. in order to connect our conclusions with plutarchean theorizations about the problem of truth in historical narrative.

  2. Rheticus Displacement: an Automatic Geo-Information Service Platform for Ground Instabilities Detection and Monitoring

    Science.gov (United States)

    Chiaradia, M. T.; Samarelli, S.; Agrimano, L.; Lorusso, A. P.; Nutricato, R.; Nitti, D. O.; Morea, A.; Tijani, K.

    2016-12-01

    Rheticus® is an innovative cloud-based data and services hub able to deliver Earth Observation added-value products through automatic complex processes and a minimum interaction with human operators. This target is achieved by means of programmable components working as different software layers in a modern enterprise system which relies on SOA (service-oriented-architecture) model. Due to its architecture, where every functionality is well defined and encapsulated in a standalone component, Rheticus is potentially highly scalable and distributable allowing different configurations depending on the user needs. Rheticus offers a portfolio of services, ranging from the detection and monitoring of geohazards and infrastructural instabilities, to marine water quality monitoring, wildfires detection or land cover monitoring. In this work, we outline the overall cloud-based platform and focus on the "Rheticus Displacement" service, aimed at providing accurate information to monitor movements occurring across landslide features or structural instabilities that could affect buildings or infrastructures. Using Sentinel-1 (S1) open data images and Multi-Temporal SAR Interferometry techniques (i.e., SPINUA), the service is complementary to traditional survey methods, providing a long-term solution to slope instability monitoring. Rheticus automatically browses and accesses (on a weekly basis) the products of the rolling archive of ESA S1 Scientific Data Hub; S1 data are then handled by a mature running processing chain, which is responsible of producing displacement maps immediately usable to measure with sub-centimetric precision movements of coherent points. Examples are provided, concerning the automatic displacement map generation process, as well as the integration of point and distributed scatterers, the integration of multi-sensors displacement maps (e.g., Sentinel-1 IW and COSMO-SkyMed HIMAGE), the combination of displacement rate maps acquired along both ascending

  3. Application of remote sensing to the photogeologic mapping of the region of the Itatiaia alkaline complex. M.S. Thesis; [Minas Gerais, Rio De Janeiro, Sao Paulo, and Itatiaia, Brazil

    Science.gov (United States)

    Dejesusparada, N. (Principal Investigator); Rodrigues, J. E.

    1981-01-01

    Remote sensing methods applied to geologically complex areas, through the interaction of ground truth and information obtained from multispectral LANDSAT images and radar mosaics, were evaluated. The test area covers parts of the Minas Gerais, Rio de Janeiro and Sao Paulo states and contains the alkaline complex of Itatiaia and the surrounding Precambrian terrains. Geological and structural mapping was satisfactory; however, the lithological varieties which form the massifs could not be identified. Photogeological lineaments were mapped, some of which represent the boundaries of stratigraphic units. Automatic processing was used to classify sedimentary areas, which include the talus deposits of the alkaline massifs.

  4. The different worlds of labour and company law: truth or myth ...

    African Journals Online (AJOL)

    The different worlds of labour and company law: truth or myth? ... take due cognisance of both the labour and company law principles that may be pertinent, ... indicate how the different functions, theories and models of labour and company law ...

  5. The universe and the teacup the mathematics of truth and beauty

    CERN Document Server

    Cole, K C

    1998-01-01

    Filled with "a thousand fascinating facts and shrewd observations" (Martin Gardner, Los Angeles Times), this "beguiling and lucid book" (San Francisco Chronicle) demonstrates how the truth and beauty of everything, from relativity to rainbows, is all in the numbers. Line drawings.

  6. Harnessing Youth and Young Adult Culture: Improving the Reach and Engagement of the truth® Campaign.

    Science.gov (United States)

    Hair, Elizabeth; Pitzer, Lindsay; Bennett, Morgane; Halenar, Michael; Rath, Jessica; Cantrell, Jennifer; Dorrler, Nicole; Asche, Eric; Vallone, Donna

    2017-07-01

    The national youth and young adult tobacco prevention mass media campaign, truth®, relaunched in 2014 with the goal of creating "the generation that ends smoking." The objective of this study was to assess whether the strategy of airing truth ads during popular, culturally relevant televised events was associated with higher ad and brand awareness and increases in social media engagement. Awareness of six truth advertisements that aired during popular television events and self-reported social media engagement were assessed via cross-sectional online surveys of youth and young adults aged 15-21 years. Social engagement was also measured using separate Twitter and YouTube metrics. Logistic regression models predicted self-reported social engagement and any ad awareness, and a negative binomial regression predicted the total social media engagement across digital platforms. The study found that viewing a popular televised event was associated with higher odds of ad awareness and social engagement. The results also indicate that levels of social media engagement for an event period are greater than for a nonevent period. The findings demonstrate that premiering advertisements during a popular, culturally relevant televised event is associated with higher awareness of truth ads and increased social engagement related to the campaign, controlling for variables that might also influence the response to campaign messages.

  7. Optimal Recovery Trajectories for Automatic Ground Collision Avoidance Systems (Auto GCAS)

    Science.gov (United States)

    2015-03-01

    [No abstract indexed; the available excerpt contains only front-matter fragments and nomenclature entries, e.g. the aircraft bank angle µ with its upper and lower bounds µmax and µmin, and the aircraft turn rate ω.]

  8. Dsm Based Orientation of Large Stereo Satellite Image Blocks

    Science.gov (United States)

    d'Angelo, P.; Reinartz, P.

    2012-07-01

    High resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower resolution reference datasets (Landsat ETM+ Geocover and SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both DSM and ortho images. A scene-based method and a bundle-block-adjustment-based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 CARTOSAT-1 stereo pairs are available. Both methods are tested against independent ground truth. Checks against this ground truth indicate a lateral error of 10 meters.
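
    The affine RPC correction itself is a small linear estimation problem once GCPs are available; the sketch below fits such a 2D affine correction by plain least squares with invented numbers (the paper estimates it within DSM alignment and a bundle block adjustment, which is not reproduced here).

    ```python
    # Estimate a 2D affine RPC bias correction from ground control points.
    import numpy as np

    rng = np.random.default_rng(0)

    # Image coordinates predicted by the uncorrected RPC for a set of GCPs,
    # and the coordinates actually measured in the image (both synthetic).
    predicted = rng.uniform(0, 12000, size=(40, 2))
    A_true = np.array([[1.0002, 0.0001], [-0.0001, 0.9998]])
    b_true = np.array([6.5, -4.2])                       # a few pixels of bias
    measured = predicted @ A_true.T + b_true + rng.normal(0, 0.3, (40, 2))

    # Solve measured ≈ [predicted, 1] @ P for the 3x2 affine parameter matrix P.
    design = np.hstack([predicted, np.ones((len(predicted), 1))])
    P, *_ = np.linalg.lstsq(design, measured, rcond=None)

    residuals = design @ P - measured
    print("RMS residual (pixels):", round(float(np.sqrt((residuals ** 2).mean())), 2))
    ```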

  9. Automatically pairing measured findings across narrative abdomen CT reports.

    Science.gov (United States)

    Sevenster, Merlijn; Bozeman, Jeffrey; Cowhy, Andrea; Trost, William

    2013-01-01

    Radiological measurements are one of the key variables in widely adopted guidelines (WHO, RECIST) that standardize and objectivize response assessment in oncology care. Measurements are typically described in free-text, narrative radiology reports. We present a natural language processing pipeline that extracts measurements from radiology reports and pairs them with extracted measurements from prior reports of the same clinical finding, e.g., lymph node or mass. A ground truth was created by manually pairing measurements in the abdomen CT reports of 50 patients. A Random Forest classifier trained on 15 features achieved superior results in an end-to-end evaluation of the pipeline on the extraction and pairing task: precision 0.910, recall 0.878, F-measure 0.894, AUC 0.988. Representing the narrative content in terms of UMLS concepts did not improve results. Applications of the proposed technology include data mining, advanced search and workflow support for healthcare professionals managing radiological measurements.
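
    Before pairing, measurements have to be pulled out of the narrative text. A minimal hedged sketch of that extraction step (a single regular expression and invented sentences, not the deployed pipeline or the Random Forest pairing model) is:

    ```python
    # Minimal sketch of measurement extraction from narrative report text.
    import re

    MEASUREMENT = re.compile(
        r"(?P<dim1>\d+(?:\.\d+)?)\s*(?:x\s*(?P<dim2>\d+(?:\.\d+)?))?\s*(?P<unit>mm|cm)",
        flags=re.IGNORECASE,
    )

    report = ("Enlarged portacaval lymph node measuring 2.1 x 1.4 cm, previously "
              "1.6 x 1.1 cm. Hypodense hepatic lesion measures 8 mm, unchanged.")

    for m in MEASUREMENT.finditer(report):
        dims = [m.group("dim1")] + ([m.group("dim2")] if m.group("dim2") else [])
        print(dims, m.group("unit").lower())
    # Pairing current and prior findings is then framed as classification over
    # candidate measurement pairs (the paper trains a Random Forest for this).
    ```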

  10. Simple example of definitions of truth, validity, consistency, and completeness in quantum mechanics

    International Nuclear Information System (INIS)

    Benioff, P.

    1999-01-01

    Besides their use for efficient computation, quantum computers and quantum robots form a base for studying quantum systems that create valid physical theories using mathematics and physics. If quantum mechanics is universally applicable, then quantum mechanics must describe its own validation by these quantum systems. An essential part of this process is the development of a coherent theory of mathematics and quantum-mechanics together. It is expected that such a theory will include a coherent combination of mathematical logical concepts with quantum mechanics. That this might be possible is shown here by defining truth, validity, consistency, and completeness for a quantum-mechanical version of a simple (classical) expression enumeration machine described by Smullyan. Some of the expressions are chosen as sentences denoting the presence or absence of other expressions in the enumeration. Two of the sentences are self-referential. It is seen that, for an interpretation based on a Feynman path sum over expression paths, truth, consistency, and completeness for the quantum system have different properties than for the classical system. For instance, the truth of a sentence S is defined only on those paths containing S. It is undefined elsewhere. Also S and its negation can both be true provided they appear on separate paths. This satisfies the definition of consistency. The definitions of validity and completeness connect the dynamics of the system to the truth of the sentences. It is proved that validity implies consistency. It is seen that the requirements of validity and maximal completeness strongly restrict the allowable dynamics for the quantum system. Aspects of the existence of a valid, maximally complete dynamics are discussed. An exponentially efficient quantum computer is described that is also valid and complete for the set of sentences considered here. copyright 1999 The American Physical Society

  11. Views from the field: Truth seeking and gender: The Liberian ...

    African Journals Online (AJOL)

    to gender influence truth-seeking in a post-conflict situation? Following ... of how apartheid structured identities not simply along the fault lines of race, but also ... the Second World War in Europe and all black people (African, Coloured and ... and power of Afrikaner nationalism by means of an exclusive system of white.

  12. Entrance C - New Automatic Number Plate Recognition System

    CERN Multimedia

    2013-01-01

    Entrance C (Satigny) is now equipped with a latest-generation Automatic Number Plate Recognition (ANPR) system and a fast-action road gate. During the month of August, Entrance C will be continuously open from 7.00 a.m. to 7.00 p.m. (working days only). The security guards will open the gate as usual from 7.00 a.m. to 9.00 a.m. and from 5.00 p.m. to 7.00 p.m. For the rest of the working day (9.00 a.m. to 5.00 p.m.) the gate will operate automatically. Please observe the following points: stop at the STOP sign on the ground; position yourself next to the card reader for optimal recognition; motorcyclists must use their CERN card; cyclists may not activate the gate and should use the bicycle turnstile; keep a safe distance from the vehicle in front of you. If access is denied, please check that your vehicle regist...

  13. Ground Truth in Building Human Security

    Science.gov (United States)

    2012-11-01

    structured. This allows for the creation of one master matrix where the assessment results are collected for each AO and then weighted ... weighted criteria for a geographic region's value to overall policy aims, aids in decision-making where to best allocate resources when, as in this ... "Snow Leopards and Cadastres: Rare Sightings in Afghanistan," in Land and Post-Conflict Peacebuilding, Jon Unruh and Rhodri Williams, Eds.

  14. The Pedagogy of Heideggerian (Un)Truth: How Can We See Stars by Day in a Deep Dark Well?

    Science.gov (United States)

    Yu, Jie

    2014-01-01

    The question of truth as it relates to the teacher's role in the classroom raises not only issues of what and how we should teach, but challenges the very purpose of teaching. Since truth itself is a major question of phenomenology, the author chose to use the works of German philosopher Martin Heidegger for his phenomenological treatment of truth…

  15. Truth telling in a South African tertiary hospital | Vangu | South ...

    African Journals Online (AJOL)

    Introduction. Truth telling forms part of the contemporary debate in clinical bioethics and centres around the right of the patient to receive honest information concerning his or her medical condition/illness and the duty of the doctor to give this information to the patient. Many patients complain that they are not being informed, ...

  16. Truth or dare: expertise and risk governance

    International Nuclear Information System (INIS)

    Paterson, J.

    2002-01-01

    There is increasing evidence that the public is as concerned with the risks associated with technology as it is enthused by the opportunities that technology presents. Experts are increasingly referred to not so much for solutions to social problems per se, but paradoxically to problems attendant on technological solutions themselves. In these circumstances, there is an urgent need for the role of the expert to be clarified. While the public and political actors have essentially looked to experts for certainty in an uncertain world, this is precisely what scientific rationality cannot provide. The inherent modesty of science (exemplified, for example, by the need for falsifiability) must always be compromised at the point when a decision is made, when 'knowledge' becomes 'action'. There is accordingly a need to be clear about the status of scientific information or knowledge on the one hand, and the effect of the decision to act on the other - and hence the appropriate locus of responsibility. Analysing the process from expert advice through to political or economic decision can help to clarify the point at which misunderstanding arises, at which the inherently provisional truth of science is transformed into the effectively absolute truth implied by a decision to apply knowledge as technology. Recognizing that it is at this point that risks are run (as well as the opportunity for rewards created) may lead to greater clarity as to the respective roles. It may in turn offer some lessons as regards the design of risk governance arrangements and the place of experts in them. (author)

  17. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. [comp.

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
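
    As the entry notes, automatic differentiation propagates derivative values through the chain rule rather than manipulating symbols or differencing numerically. A minimal illustration of the forward mode with dual numbers (generic, not tied to any particular tool in the bibliography):

    ```python
    # Forward-mode automatic differentiation with dual numbers: every value
    # carries (value, derivative) and each operation applies the chain rule.
    import math

    class Dual:
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.dot + other.dot)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.dot * other.val + self.val * other.dot)  # product rule
        __rmul__ = __mul__

    def sin(x):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)         # chain rule

    def f(x):
        return 3 * x * x + sin(x)

    x = Dual(1.5, 1.0)            # seed dx/dx = 1
    y = f(x)
    print(y.val, y.dot)           # f(1.5) and f'(1.5) = 6*1.5 + cos(1.5)
    ```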

  18. Balancing truth-telling: relatives acting as translators for older adult cancer patients of Turkish or northwest African origin in Belgium.

    Science.gov (United States)

    van Eechoud, I; Grypdonck, M; Leman, J; Van Den Noortgate, N; Deveugele, M; Verhaeghe, S

    2017-09-01

    The first generation of Turkish and Northwest African immigrants in Belgium are ageing and at risk for developing cancer. Relatives play an important role and provide both emotional and practical care, including mental support and acting as a contact person and/or a translator for improving access to healthcare, as most patients and their spouses have only a limited command of the language. Although access to professional interpreters has been shown to be the best guarantee of quality healthcare, it is much more common for oncology health providers to work with relatives as interpreters than with professional interpreters. The aim of this study was to provide insight into the process wherein relatives balance truth-telling in translating for an older family member diagnosed with cancer. This was a qualitative research study, with elements of constructivist grounded theory. Twenty-eight loosely structured interviews were conducted. Most relatives consider it their responsibility to contribute to a positive attitude of the patient. Relatives decided to what extent to inform the patient, based on several motives and embedded in their assessment of the patient's emotional strength, understanding and need to be informed. What they decide influences the way they act as a translator and/or a contact person between the patient and health professional(s). Some considered it best to omit medical information while others considered it best to inform the patient fully. The results emphasise the importance for healthcare providers to take into account the complexity and unpredictable character of the process of balancing truth-telling when family members translate for their ill older relative. © 2016 John Wiley & Sons Ltd.

  19. Automatic measurement of target crossing speed

    Science.gov (United States)

    Wardell, Mark; Lougheed, James H.

    1992-11-01

    The motion of ground vehicle targets after a ballistic round is launched can be a major source of inaccuracy for small (handheld) anti-armour weapon systems. A method of automatically measuring the crossing component to compensate the fire control solution has been devised and tested against various targets in a range of environments. A photodetector array aligned with the sight's horizontal reticle obtains scene features, which are digitized and processed to separate target from sight motion. Relative motion of the target against the background is briefly monitored to deduce angular crossing rate and a compensating lead angle is introduced into the aim point. Research to gather quantitative data and optimize algorithm performance is described, and some results from field testing are presented.
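
    The compensation amounts to converting the target's drift across the detector array into an angular crossing rate and then into a lead angle over the round's time of flight. A back-of-the-envelope sketch of that arithmetic, with all sensor and ballistic numbers invented, is:

    ```python
    # Back-of-the-envelope crossing-rate compensation; all numbers are invented.
    PIXEL_IFOV_MRAD = 0.25      # angular size of one detector element, mrad

    def crossing_rate_mrad_s(pixel_shift, dt_s):
        """Angular rate of the target relative to the background."""
        return pixel_shift * PIXEL_IFOV_MRAD / dt_s

    def lead_angle_mrad(rate_mrad_s, time_of_flight_s):
        """Aim-point offset so that round and crossing target arrive together."""
        return rate_mrad_s * time_of_flight_s

    rate = crossing_rate_mrad_s(pixel_shift=12, dt_s=0.5)   # 6 mrad/s crossing rate
    print(lead_angle_mrad(rate, time_of_flight_s=1.3))      # 7.8 mrad of lead
    ```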

  20. Towards an automatic tool for resolution evaluation of mammographic images

    Energy Technology Data Exchange (ETDEWEB)

    De Oliveira, J. E. E. [FUMEC, Av. Alfonso Pena 3880, CEP 30130-009 Belo Horizonte - MG (Brazil); Nogueira, M. S., E-mail: juliae@fumec.br [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Pte. Antonio Carlos 6627, 31270-901, Belo Horizonte - MG (Brazil)

    2014-08-15

    Within the program for the Quality of Mammographies of the Public and Private Services of the State, and with an essentially educational character, an evaluation of image quality is carried out monthly using a breast phantom on each mammography unit. In view of this, this work proposes to develop a protocol for the automatic evaluation of the image quality of mammograms, so that the radiological protection and image quality requirements for the early detection of breast cancer are met. Specifically, image resolution is addressed and evaluated as part of the image quality evaluation program. Results show that, for the fourth resolution and using 28 phantom images with the ground truth established, the computer analysis of the resolution is promising and may be used as a tool for the assessment of image quality. (Author)

  1. Towards an automatic tool for resolution evaluation of mammographic images

    International Nuclear Information System (INIS)

    De Oliveira, J. E. E.; Nogueira, M. S.

    2014-08-01

    Within the program for the Quality of Mammographies of the Public and Private Services of the State, and with an essentially educational character, an evaluation of image quality is carried out monthly using a breast phantom on each mammography unit. In view of this, this work proposes to develop a protocol for the automatic evaluation of the image quality of mammograms, so that the radiological protection and image quality requirements for the early detection of breast cancer are met. Specifically, image resolution is addressed and evaluated as part of the image quality evaluation program. Results show that, for the fourth resolution and using 28 phantom images with the ground truth established, the computer analysis of the resolution is promising and may be used as a tool for the assessment of image quality. (Author)

  2. Therapeutic privilege: between the ethics of lying and the practice of truth.

    Science.gov (United States)

    Richard, Claude; Lajeunesse, Yvette; Lussier, Marie-Thérèse

    2010-06-01

    The 'right to the truth' involves disclosing all the pertinent facts to a patient so that an informed decision can be made. However, this concept of a 'right to the truth' entails certain ambiguities, especially since it is difficult to apply the concept in medical practice based mainly on current evidence-based data that are probabilistic in nature. Furthermore, in some situations, the doctor is confronted with a moral dilemma, caught between the necessity to inform the patient (principle of autonomy) and the desire to ensure the patient's well-being by minimising suffering (principle of beneficence). To comply with the principle of beneficence as well as the principle of non-maleficence 'to do no harm', the doctor may then feel obliged to turn to 'therapeutic privilege', using lies or deception to preserve the patient's hope, and psychological and moral integrity, as well as his self-image and dignity. There is no easy answer to such a moral dilemma. This article will propose a process that can fit into reflective practice, allowing the doctor to decide if the use of therapeutic privilege is justified when he is faced with these kinds of conflicting circumstances. We will present the conflict arising in practice in the context of the various theoretical orientations in ethics, and then we will suggest an approach for a 'practice of truth'. Last, we will situate this reflective method in the broader clinical context of medical practice viewed as a dialogic process.

  3. Application of ground-truth for classification and quantification of bird movements on migratory bird habitat initiative sites in southwest Louisiana: final report

    Science.gov (United States)

    Barrow, Wylie C.; Baldwin, Michael J.; Randall, Lori A.; Pitre, John; Dudley, Kyle J.

    2013-01-01

    This project was initiated to assess migrating and wintering bird use of lands enrolled in the Natural Resources Conservation Service’s (NRCS) Migratory Bird Habitat Initiative (MBHI). The MBHI program was developed in response to the Deepwater Horizon oil spill in 2010, with the goal of improving/creating habitat for waterbirds affected by the spill. In collaboration with the University of Delaware (UDEL), we used weather surveillance radar data (Sieges 2014), portable marine radar data, thermal infrared images, and visual observations to assess bird use of MBHI easements. Migrating and wintering birds routinely make synchronous flights near dusk (e.g., departure during migration, feeding flights during winter). Weather radars readily detect birds at the onset of these flights and have proven to be useful remote sensing tools for assessing bird-habitat relations during migration and determining the response of wintering waterfowl to wetland restoration (e.g., Wetlands Reserve Program lands). However, ground-truthing is required to identify radar echoes to species or species group. We designed a field study to ground-truth a larger-scale, weather radar assessment of bird use of MBHI sites in southwest Louisiana. We examined seasonal bird use of MBHI fields in fall, winter, and spring of 2011-2012. To assess diurnal use, we conducted total area surveys of MBHI sites in the afternoon, collecting data on bird species composition, abundance, behavior, and habitat use. In the evenings, we quantified bird activity at the MBHI easements and described flight behavior (i.e., birds landing in, departing from, circling, or flying over the MBHI tract). Our field sampling captured the onset of evening flights and spanned the period of collection of the weather radar data analyzed. Pre- and post-dusk surveys were conducted using a portable radar system and a thermal infrared camera. Landbirds, shorebirds, and wading birds were commonly found on MBHI fields during diurnal

  4. ROBIN: a platform for evaluating automatic target recognition algorithms: I. Overview of the project and presentation of the SAGEM DS competition

    Science.gov (United States)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    The last five years have seen a renewal of Automatic Target Recognition applications, mainly because of the latest advances in machine learning techniques. In this context, large collections of image datasets are essential for training algorithms as well as for their evaluation. Indeed, the recent proliferation of recognition algorithms, generally applied to slightly different problems, makes their comparison through clean evaluation campaigns necessary. The ROBIN project tries to fulfil these two needs by putting unclassified datasets, ground truths, competitions and metrics for the evaluation of ATR algorithms at the disposal of the scientific community. The scope of this project includes single- and multi-class generic target detection and generic target recognition, in military and security contexts. To our knowledge, it is the first time that a database of this importance (several hundred thousand visible and infrared hand-annotated images) has been publicly released. Funded by the French Ministry of Defence (DGA) and by the French Ministry of Research, ROBIN is one of the ten Techno-vision projects. Techno-vision is a large and ambitious government initiative for building evaluation means for computer vision technologies in various application contexts. ROBIN's consortium includes major companies and research centres involved in Computer Vision R&D in the field of defence: Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES. This paper, which first gives an overview of the whole project, focuses on one of ROBIN's key competitions, the SAGEM Defence Security database. This dataset contains more than eight hundred ground and aerial infrared images of six different vehicles in cluttered scenes including distracters. Two different sets of data are available for each target. The first set includes different views of each vehicle at close range in a "simple" background, and can be used to train algorithms. The second set

  5. Disclosing the truth: a dilemma between instilling hope and respecting patient autonomy in everyday clinical practice.

    Science.gov (United States)

    Sarafis, Pavlos; Tsounis, Andreas; Malliarou, Maria; Lahana, Eleni

    2013-12-20

    While medical ethics places a high value on providing truthful information to patients, disclosure practices are far from being the norm in many countries. Transmitting bad news remains a big problem that health care professionals face in their everyday clinical practice. Through a review of the relevant literature, an attempt is made to examine worldwide trends in this issue. Various electronic databases were searched by the authors, and 51 scientific articles were identified through systematic selection; this literature review is based on them. There are many parameters that lead to the concealment of truth. Factors related to doctors, patients and their close environment still maintain a strong resistance against disclosure of diagnosis and prognosis in terminally ill patients, while cultural influences lead to different approaches in various countries. Withholding the truth is mainly based on the fear of causing despair to patients. However, fostering spurious hope carries the danger of its total loss, and it can disturb the patient-doctor relationship.

  6. Children's Reasoning about Lie-Telling and Truth-Telling in Politeness Contexts

    Science.gov (United States)

    Heyman, Gail D.; Sweet, Monica A.; Lee, Kang

    2009-01-01

    Children's reasoning about lying and truth-telling was examined among participants ages 7-11 (total N = 181) with reference to conflicts between being honest and protecting the feelings of others. In Study 1, participants showed different patterns of evaluation and motivational inference in politeness contexts vs. transgression contexts: in…

  7. Values in Fritz Perls's Gestalt Therapy: On the Dangers of Half-Truths.

    Science.gov (United States)

    Cadwallader, Eva H.

    1984-01-01

    Examines some of the values in Perls's theory of psychotherapy, which his Gestalt Prayer epitomizes. Argues that at least five of the major value claims presupposed by his psychotherapeutic theory and practice are in fact dangerous half-truths. (JAC)

  8. Semi-Automatic Image Labelling Using Depth Information

    Directory of Open Access Journals (Sweden)

    Mostafa Pordel

    2015-05-01

    Full Text Available Image labeling tools help to extract objects within images to be used as ground truth for learning and testing in object detection processes. The inputs for such tools are usually RGB images. However, with new widely available low-cost sensors like the Microsoft Kinect, it is possible to use depth images in addition to RGB images. Despite many existing powerful tools for image labeling, there is a need for RGB-depth adapted tools. We present a new interactive labeling tool that partially automates image labeling, with two major contributions. First, the method extends the concept of image segmentation from RGB to RGB-depth using Fuzzy C-Means clustering, connected component labeling and superpixels, and generates bounding pixels to extract the desired objects. Second, it minimizes the interaction time needed for object extraction by doing an efficient segmentation in RGB-depth space. Very few clicks are needed for the entire procedure compared to existing tools. When the desired object is the closest object to the camera, which is often the case in robotics applications, no clicks at all are required to accurately extract the object.
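
    As a rough illustration of the zero-click case described above (the desired object is the closest one to the camera), the sketch below thresholds a depth image near its minimum value and keeps the largest connected component as the object mask. It uses SciPy's connected-component labelling as a stand-in for the paper's Fuzzy C-Means / superpixel pipeline, so the threshold and the overall procedure are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def closest_object_mask(depth, tolerance=0.15):
    """Extract a binary mask of the object closest to the camera.

    depth     : 2-D array of depth values (0 or NaN = no reading)
    tolerance : keep pixels within (1 + tolerance) * minimum valid depth
    """
    valid = np.where(np.isfinite(depth) & (depth > 0), depth, np.inf)
    d_min = valid.min()
    candidate = valid <= d_min * (1.0 + tolerance)

    # Connected-component labelling; keep the largest blob as the object.
    labels, n = ndimage.label(candidate)
    if n == 0:
        return np.zeros_like(candidate, dtype=bool)
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Toy example: a near "object" (1.0 m) in front of a 3.0 m background.
depth = np.full((120, 160), 3.0)
depth[40:80, 60:100] = 1.0
mask = closest_object_mask(depth)
print("object pixels:", int(mask.sum()))   # 40 * 40 = 1600
```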

  9. Autonomy, nudging and post-truth politics.

    Science.gov (United States)

    Keeling, Geoff

    2017-11-16

    In his excellent essay, 'Nudges in a post-truth world', Neil Levy argues that 'nudges to reason', or nudges which aim to make us more receptive to evidence, are morally permissible. A strong argument against the moral permissibility of nudging is that nudges fail to respect the autonomy of the individuals affected by them. Levy argues that nudges to reason do respect individual autonomy, such that the standard autonomy objection fails against nudges to reason. In this paper, I argue that Levy fails to show that nudges to reason respect individual autonomy. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  10. Digital Elevation Models of Patterned Ground in the Canadian Arctic and Implications for the Study of Mars

    Science.gov (United States)

    Knightly, P.; Murakami, Y.; Clarke, J.; Sizemore, H.; Siegler, M.; Rupert, S.; Chevrier, V.

    2017-12-01

    Patterned ground forms in periglacial zones from both expansion and contraction of permafrost by freeze-thaw and sub-freezing temperature changes, and has been observed on both Earth and Mars, from orbit and at the surface at the Phoenix and Viking 2 landing sites. The Phoenix mission to Mars studied patterned ground in the vicinity of the spacecraft, including the excavation of a trench revealing water-ice permafrost beneath the surface. A study of patterned ground at the Haughton impact structure on Devon Island used stereo-pair imaging and three-dimensional photographic models to catalog the type and occurrence of patterned ground in the study area. This image catalog was then used to provide new insight into photographic observations gathered by Phoenix. Stereo-pair imagery has been a valuable geoscience tool for decades and is an ideal tool for comparative planetary geology studies. Stereo-pair images captured on Devon Island were turned into digital elevation models (DEMs), and comparisons were noted between the permafrost and patterned ground environments of Earth and Mars, including variations in grain sorting, active layer thickness, and ice table depth. Recent advances in 360° cameras also enabled the creation of detailed, immersive site models of patterned ground at selected sites in Haughton crater on Devon Island. The information from this ground truth study will enable the development and refinement of existing models to better evaluate patterned ground on Mars and predict its evolution.

  11. Affect and/as Collective Resistance in a Post-Truth Moment

    Science.gov (United States)

    Castro Samayoa, Andrés; Nicolazzo, Z

    2017-01-01

    In this piece, we call upon the value of coalitional politics as a strategy of resistance in the face of an increasingly powerful post-truth regime. Reflecting on how the insidious effects of fact-denying discourses are echoed in our classrooms and in our roles as educators, we openly wonder: What might it mean for educators to use feelings as a guide…

  12. Truth or meaning: Ricoeur versus Frei on biblical narrative

    OpenAIRE

    Gary L. Comstock

    1989-01-01

    Of the theologians and philosophers now writing on biblical narrative, Hans Frei and Paul Ricoeur are probably the most prominent. It is significant that their views converge on important issues. Both are uncomfortable with hermeneutic theories that convert the text into an abstract philosophical system, an ideal typological structure, or a mere occasion for existential decision. Frei and Ricoeur seem knit together in a common en...

  13. Autonomy versus absolute truth in comparative law on family status of the child

    Directory of Open Access Journals (Sweden)

    Kovaček-Stanić Gordana

    2011-01-01

    Full Text Available The development of biology and medicine enables legal and biological maternity and paternity to coincide completely, owing to biomedical analysis, in particular DNA analysis. On the other hand, the development of biology and medicine causes a discrepancy between legal and biological maternity and paternity in the situation of biomedically assisted conception when donor genetic material is used. Thus, the autonomy of the parties gains in importance: legal parental relations are based on the will of the parties, so the principle of biological truth loses importance. The legal parents would then be the persons who participated in the process of biomedically assisted conception in order to have the child. The autonomy of the mother could be extended to allow for anonymous birth. Acknowledgement of paternity depends almost entirely on the will of the parties concerned. If the man acknowledges his paternity and the requisite consent is given, the man is considered to be the father. The biological truth is not examined. On the other hand, in proceedings for establishing and contesting maternity and paternity, the court is obliged to determine the biological truth, which may be based on DNA and other biomedical evidence. It could be said that the autonomy of the parties is limited by the requirement that maternity and paternity in such cases be established on the basis of the biological facts.

  14. Automatic control of human thermal comfort with a liquid-cooled garment

    Science.gov (United States)

    Kuznetz, L. H.

    1977-01-01

    Water cooling in a liquid-cooled garment is used to maintain the thermal comfort of crewmembers during extravehicular activity. The feasibility of a simple controller that operates automatically to maintain thermal comfort is established. Data on three test subjects are included to support the conclusion that heat balance can be maintained well within allowable medical limits. The controller concept was also successfully demonstrated for ground-based applications and shows potential for any task involving the use of liquid-cooled garments.
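
    The abstract does not give the control law, but a minimal sketch of such an automatic controller is a proportional loop that adjusts the coolant inlet temperature to drive the estimated rate of body heat storage toward zero. The gain and temperature limits below are illustrative assumptions, not values from the paper.

```python
def lcg_inlet_setpoint(heat_storage_rate, t_current, gain=0.02,
                       t_min=7.0, t_max=27.0):
    """Proportional update of liquid-cooled-garment inlet temperature (deg C).

    heat_storage_rate : estimated rate of body heat storage (W); positive
                        means the crew member is accumulating heat.
    t_current         : current inlet temperature setpoint (deg C).
    """
    # More stored heat -> colder water; gain and limits are assumptions.
    t_new = t_current - gain * heat_storage_rate
    return max(t_min, min(t_max, t_new))

# Example: 150 W of net heat storage pushes the setpoint down by 3 deg C.
print(lcg_inlet_setpoint(heat_storage_rate=150.0, t_current=20.0))  # 17.0
```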

  15. Design and development of an automated D.C. ground fault detection and location system for Cirus

    International Nuclear Information System (INIS)

    Marik, S.K.; Ramesh, N.; Jain, J.K.; Srivastava, A.P.

    2002-01-01

    Full text: The original design of the Cirus safety system provided for automatic detection of ground faults in the class I D.C. power supply system and their annunciation, followed by a delayed reactor trip. Identification of the faulty section had to be done manually by switching off the various sections one at a time, which required considerable shutdown time. Since the class I power supply feeds the safety control system, quick detection and location of ground faults in this supply is necessary, as these faults have the potential to bypass safety interlocks; hence the need for a new system for automatic location of the faulty section. Since such systems are not readily available in the market, in-house efforts were made to design and develop a plant-specific system, which has been installed and commissioned.
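
    The record contrasts the old manual hunt (switching off one section at a time) with automatic location of the faulty section. The sketch below only illustrates the sequential-isolation idea; the section identifiers, the ground-fault indicator, and the isolate/restore hooks are hypothetical and do not describe the plant-specific design that was actually built.

```python
def locate_faulty_section(sections, ground_fault_present, isolate, restore):
    """Sequentially isolate D.C. supply sections until the ground-fault
    indication clears, restoring every healthy section along the way.

    sections             : iterable of section identifiers
    ground_fault_present : callable() -> bool, the ground-fault alarm state
    isolate / restore    : callables taking a section id (hypothetical hooks)
    """
    for section in sections:
        isolate(section)
        if not ground_fault_present():
            return section            # fault disappeared with this section out
        restore(section)              # healthy section, put it back
    return None                       # fault not located in the given sections

# Simulated usage: section "S3" carries the fault.
state = {"isolated": set()}
print(locate_faulty_section(
    ["S1", "S2", "S3", "S4"],
    ground_fault_present=lambda: "S3" not in state["isolated"],
    isolate=lambda s: state["isolated"].add(s),
    restore=lambda s: state["isolated"].discard(s)))   # -> S3
```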

  16. Breaking new ground in mapping human settlements from space - The Global Urban Footprint

    Science.gov (United States)

    Esch, Thomas; Heldens, Wieke; Hirner, Andreas; Keil, Manfred; Marconcini, Mattia; Roth, Achim; Zeidler, Julian; Dech, Stefan; Strano, Emanuele

    2017-12-01

    Today, approximately 7.2 billion people inhabit the Earth and by 2050 this number will have risen to around nine billion, of which about 70% will be living in cities. Population growth and the related global urbanization pose one of the major challenges to a sustainable future. Hence, it is essential to understand the drivers, dynamics, and impacts of human settlement development. A key component in this context is the availability of an up-to-date and spatially consistent map of the location and distribution of human settlements. It is here that the Global Urban Footprint (GUF) raster map can make a valuable contribution. The new global GUF binary settlement mask offers a so far unprecedented spatial resolution of 0.4″ (∼ 12 m) that provides - for the first time - a complete picture of the entirety of urban and rural settlements. The GUF has been derived by means of a fully automated processing framework - the Urban Footprint Processor (UFP) - that was used to analyze a global coverage of more than 180,000 TanDEM-X and TerraSAR-X radar images with 3 m ground resolution collected in 2011-2012. The UFP consists of five main technical modules for data management, feature extraction, unsupervised classification, mosaicking and post-editing. Various quality assessment studies, determining the absolute GUF accuracy against ground truth data on the one hand and the relative accuracy compared with established settlement maps on the other, clearly indicate the added value of the new global GUF layer, in particular with respect to the representation of rural settlement patterns. The Kappa coefficient of agreement with absolute ground truth data, for instance, shows GUF accuracies that are frequently twice as high as those of established low-resolution maps. Generally, the GUF layer achieves an overall absolute accuracy of about 85%, with observed minima around 65% and maxima around 98%. The GUF will be provided open and free for any scientific use in
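
    For the accuracy figures quoted above, the Kappa coefficient of agreement is computed from the confusion matrix between the classified settlement mask and the ground truth. The sketch below shows the standard calculation for a binary settlement / non-settlement case; the sample counts are invented purely for illustration and have nothing to do with the GUF validation data.

```python
import numpy as np

def cohen_kappa(confusion):
    """Cohen's Kappa from a square confusion matrix (rows: truth, cols: map)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    return (observed - expected) / (1.0 - expected)

# Invented counts: truth rows = [settlement, non-settlement], map columns likewise.
cm = [[850, 150],
      [ 90, 910]]
print(f"overall accuracy: {np.trace(np.asarray(cm)) / np.sum(cm):.2%}")
print(f"kappa: {cohen_kappa(cm):.3f}")
```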

  17. Telling the truth: Don DeLillo in an age of amnesia and redress. DOI: 10.5007/2175-8026.2010n59p176

    Directory of Open Access Journals (Sweden)

    Marni Gauthier

    2010-03-01

    Full Text Available The December 2006 Iran Holocaust denial Conference and the international excoriation of it reveal a paradox of two cultural strands that are emblematic of the legacy of the twentieth century: official denial and historical amnesia on the one hand; and (inter)national attempts at truth telling and historical redress on the other. Massive violence–and associative denial–punctuate the entire twentieth century. Yet coordinated tenacious efforts at public acknowledgment of “what really happened”–a recurrent and insistent emphasis in this context of trials, reparations, and above all, truth commissions—and concomitant historical redress for state-sanctioned crimes is a particularly recent phenomenon, unique, in fact, to the 1990s. But it is not only political readers who address what Priscilla B. Hayner, in her exhaustive study of truth commissions, calls “unspeakable truths.” This essay addresses the incongruity between the recent global concern with truth telling, official apology, memory and historical redress on the one hand–an obsession that certainly includes the US—and American amnesia on the other. It is in the interstices of these two apposite late twentieth century phenomena–amnesia and truth telling; “history” distinct from “the truth of the past”; “official” opposed to “vernacular” memory — that, I argue, a new genre of historical novel develops and performs a vital cultural work: telling the truth in an age of amnesia and redress. Such novels engage the recalcitrant materials of historical experience to assert truth claims that in turn challenge nationalist histories and revise traditional mythologies. Among the foremost authors of this new “truth-telling” historical novel is Don DeLillo. Americana, the vital precursor to Libra and especially to Underworld, is the definitive harbinger of DeLillo’s third century of work that writes both within and against postmodernism. In these Cold-War era

  18. Information fusion performance evaluation for motion imagery data using mutual information: initial study

    Science.gov (United States)

    Grieggs, Samuel M.; McLaughlin, Michael J.; Ezekiel, Soundararajan; Blasch, Erik

    2015-06-01

    As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as in content-based image retrieval (CBIR). Imagery data is segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms, one that compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it will compute detection failure, false alarm, precision and recall metrics, background and foreground region statistics, as well as splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
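
    A hedged sketch of two of the metrics named above is given below: precision/recall of a detection mask against a binary ground-truth mask, and mutual information between two images computed from a joint grey-level histogram. The bin count, thresholds, and random test image are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

def precision_recall(detected, truth):
    """Precision and recall for binary foreground masks."""
    detected, truth = detected.astype(bool), truth.astype(bool)
    tp = np.logical_and(detected, truth).sum()
    precision = tp / max(detected.sum(), 1)
    recall = tp / max(truth.sum(), 1)
    return precision, recall

def mutual_information(img_a, img_b, bins=64):
    """Mutual information (bits) from a joint grey-level histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(128, 128))
print(precision_recall(frame > 128, frame > 120))
print(mutual_information(frame, frame))   # MI of an image with itself = its entropy
```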

  19. Truths and fallacies concerning radiation and its effects

    International Nuclear Information System (INIS)

    Wilkins, S.R.

    1984-01-01

    In childhood we learned many myths about radiation. For example, we were told that people exposed to x-rays would glow in the dark, become radioactive, or under the proper circumstances, turn into superhumans such as the "Hulk" or "Spiderman." Although these and other childhood myths are not taken seriously, many misconceptions still exist about the effects of ionizing radiation. Does exposure to radiation necessarily imply an ill fate? It is the intent of this chapter to highlight a few of the truths and fallacies concerning radiation and its effects

  20. The Truth About the Internet and Online Predators

    CERN Document Server

    Dingwell, Heath; Peterson, Fred L

    2011-01-01

    To help readers avoid and recognize risky behaviors, The Truth About the Internet and Online Predators explains many of the dangers associated with the Internet. The A-to-Z entries detail the social, legal, and personal risks of Internet use, while personal testimonies and question-and-answer sections provide readers with an inside look at common issues online. Entries include: bullies and cyberbullying; characteristics of online predators; chat rooms and instant messaging; Internet safety; parental control; peers and peer pressure; phishing and pharming; privacy issues; social networking Web

  1. Automatic Seamline Network Generation for Urban Orthophoto Mosaicking with the Use of a Digital Surface Model

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2014-12-01

    Full Text Available Intelligent seamline selection for image mosaicking is an area of active research in the fields of massive data processing, computer vision, photogrammetry and remote sensing. In mosaicking applications for digital orthophoto maps (DOMs, the visual transition in mosaics is mainly caused by differences in positioning accuracy, image tone and relief displacement of high ground objects between overlapping DOMs. Among these three factors, relief displacement, which prevents the seamless mosaicking of images, is relatively more difficult to address. To minimize visual discontinuities, many optimization algorithms have been studied for the automatic selection of seamlines to avoid high ground objects. Thus, a new automatic seamline selection algorithm using a digital surface model (DSM is proposed. The main idea of this algorithm is to guide a seamline toward a low area on the basis of the elevation information in a DSM. Given that the elevation of a DSM is not completely synchronous with a DOM, a new model, called the orthoimage elevation synchronous model (OESM, is derived and introduced. OESM can accurately reflect the elevation information for each DOM unit. Through the morphological processing of the OESM data in the overlapping area, an initial path network is obtained for seamline selection. Subsequently, a cost function is defined on the basis of several measurements, and Dijkstra’s algorithm is adopted to determine the least-cost path from the initial network. Finally, the proposed algorithm is employed for automatic seamline network construction; the effective mosaic polygon of each image is determined, and a seamless mosaic is generated. The experiments with three different datasets indicate that the proposed method meets the requirements for seamline network construction. In comparative trials, the generated seamlines pass through fewer ground objects with low time consumption.
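
    To make the least-cost step concrete, the sketch below runs Dijkstra's algorithm over a small elevation-derived cost grid so that the path (the seamline) prefers low ground. The 4-neighbour grid graph and the use of the destination cell's elevation as the edge cost are simplifying assumptions; they are not the cost function defined in the paper.

```python
import heapq
import numpy as np

def low_ground_path(cost, start, goal):
    """Dijkstra least-cost path on a 2-D cost grid (4-connected)."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue                      # stale heap entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:                  # walk the predecessor chain back
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Toy OESM-like grid: a high "building" ridge in the middle that the seamline avoids.
grid = np.ones((5, 7))
grid[1:4, 3] = 50.0
print(low_ground_path(grid, start=(2, 0), goal=(2, 6)))
```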

  2. Invariant protection of high-voltage electric motors of technological complexes at industrial enterprises at partial single-phase ground faults

    Science.gov (United States)

    Abramovich, B. N.; Sychev, Yu A.; Pelenev, D. N.

    2018-03-01

    The article presents the results of developing invariant protection for high-voltage motors against incomplete single-phase ground faults. It is established that existing current protections have low selectivity of action because of an inadmissible decrease in input signals when a short circuit occurs through a transient resistance. A structural-functional scheme and an algorithm of protective actions are developed in which the zero-sequence current signals of the protected connections are automatically corrected according to the degree of incompleteness of the ground fault. It is shown that automatic correction of the zero-sequence currents keeps the protection's sensitivity factor invariant as the transient resistance at the fault location varies. Application of invariant protection makes it possible to minimize the damage caused in 6-10 kV electrical installations of industrial enterprises by interruption of consumers' power supply and system breakdown, owing to the timely localization of emergency ground-fault modes.

  3. A reflection on Russell's ramified types and Kripke's hierarchy of truths

    NARCIS (Netherlands)

    Kamareddine, F.; Laan, T.D.L.

    1996-01-01

    Both in Kripke's Theory of Truth KTT [8] and Russell's Ramified Type Theory RTT [16, 9] we are confronted with some hierarchy. In RTT, we have a double hierarchy of orders and types. That is, the class of propositions is divided into different orders where a propositional function can only depend on

  4. Models for truth-telling in physician-patient encounters: what can we learn from Yoruba concept of Ooto?

    Science.gov (United States)

    Ewuoso, Cornelius

    2017-09-29

    Empirical studies have now established that many patients make clinical decisions based on models other than the Anglo-American model of truth-telling and patient autonomy. Some scholars also add that current medical ethics frameworks and recent proposals for enhancing communication in the health professional-patient relationship have not adequately accommodated these models. In certain clinical contexts where health professionals and patients are motivated by significant cultural and religious values, these current frameworks cannot prevent communication breakdown, which can, in turn, jeopardize patient care, cause undue distress to a patient, or negatively impact his/her relationship with the community. These empirical studies recommend that additional frameworks be developed around other models of truth-telling, frameworks that take seriously the significant value differences that sometimes exist between health professionals and patients, as well as patients' cultural/religious values and relational capacities. This paper contributes towards the development of one such framework. Specifically, it proposes a framework for truth-telling developed around an African model, drawing insights from the communitarian concept of ootọ́ among the Yoruba people of south-west Nigeria. I am optimistic that if this model is incorporated into current medical ethics codes and curricula, it will significantly enhance health professional-patient communication. © 2017 John Wiley & Sons Ltd.

  5. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Full Text Available Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, the payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.
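
    A tiny numerical illustration of the mechanism described above: under a progressive schedule, a boom-time rise in gross income is partly absorbed by a higher average tax rate, so disposable income (and hence spending) swings less than gross income does. The bracket thresholds and rates below are invented for illustration only.

```python
def progressive_tax(income,
                    brackets=((20_000, 0.10), (50_000, 0.25), (float("inf"), 0.40))):
    """Tax owed under a simple (invented) progressive bracket schedule."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        tax += max(0.0, min(income, upper) - lower) * rate
        lower = upper
    return tax

for gross in (50_000, 60_000):            # a 20% boom in gross income...
    net = gross - progressive_tax(gross)
    print(f"gross {gross:>7,} -> disposable {net:,.0f}")
# ...raises disposable income by only about 15%, automatically damping the cycle.
```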

  6. Teleoperated Visual Inspection and Surveillance with Unmanned Ground and Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Viatcheslav Tretyakov

    2008-11-01

    Full Text Available This paper introduces our robotic system named UGAV (Unmanned Ground-Air Vehicle), consisting of two semi-autonomous robot platforms, an Unmanned Ground Vehicle (UGV) and an Unmanned Aerial Vehicle (UAV). The paper focuses on three topics of inspection with the combined UGV and UAV: (A) teleoperated control by means of cell or smart phones, with a new concept of automatic configuration of the smart phone based on an RKI-XML description of the vehicle's control capabilities; (B) the camera and vision system, with a focus on real-time feature extraction, e.g. for tracking the UAV; and (C) the architecture and hardware of the UAV.

  7. An Inconvenient Truth: An Application of the Extended Parallel Process Model

    Science.gov (United States)

    Goodall, Catherine E.; Roberto, Anthony J.

    2008-01-01

    "An Inconvenient Truth" is an Academy Award-winning documentary about global warming presented by Al Gore. This documentary is appropriate for a lesson on fear appeals and the extended parallel process model (EPPM). The EPPM is concerned with the effects of perceived threat and efficacy on behavior change. Perceived threat is composed of an…

  8. HOW TRUTHFUL ARE WATER ACCOUNTING DATA?

    Directory of Open Access Journals (Sweden)

    Libor Ansorge

    2016-01-01

    Full Text Available Water accounting is an important tool for water managers. Many studies use official water accounting data or similar data for their assessments. In particular, large-scale studies and water footprint studies have limited opportunities for "in-situ" data collection. In many cases, the authors of such studies do not know the origin of the data and their limitations. Water accounting data are very often used for decision-making, water resource management, and planning in the water sector. This article tries to answer the question "How truthful are water accounting data?" For this task, water accounting in the agricultural sector of the Czech Republic was selected. Data on water withdrawals for agricultural purposes were analysed and compared with an estimate of water needs based on additional data on agricultural production.

  9. PROCESSING OF CRAWLED URBAN IMAGERY FOR BUILDING USE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    P. Tutzauer

    2017-05-01

    Full Text Available Recent years have shown a shift from purely geometric 3D city models to data with semantics. This is driven by new applications (e.g. Virtual/Augmented Reality) and also by the requirements of concepts like Smart Cities. However, essential urban semantic data such as building use categories are often not available. We present a first step in bridging this gap by proposing a pipeline that uses crawled urban imagery and links it with ground truth cadastral data as input for automatic building use classification. We aim to extract this city-relevant semantic information automatically from Street View (SV) imagery. Convolutional Neural Networks (CNNs) have proved to be extremely successful for image interpretation; however, they require a huge amount of training data. The main contribution of the paper is the automatic provision of such training datasets by linking semantic information, as already available from databases provided by national mapping agencies or city administrations, to the corresponding façade images extracted from SV. Finally, we present first investigations with a CNN and an alternative classifier as a proof of concept.
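
    A minimal sketch of the key step described above, attaching a cadastral use category to each crawled facade image so it can serve as a training label, is shown below. The file naming scheme, the CSV layout, and the helper name are hypothetical; the paper's actual pipeline links images to building footprints geometrically rather than by filename.

```python
import csv
from pathlib import Path

def build_training_index(image_dir, cadastre_csv, out_csv):
    """Join crawled facade images to cadastral building-use labels.

    Assumes (hypothetically) that each image is named <building_id>_<view>.jpg
    and that cadastre_csv has the columns: building_id, use_category.
    """
    with open(cadastre_csv, newline="") as f:
        use_by_id = {row["building_id"]: row["use_category"]
                     for row in csv.DictReader(f)}

    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image_path", "label"])
        for img in sorted(Path(image_dir).glob("*.jpg")):
            building_id = img.stem.split("_")[0]
            if building_id in use_by_id:      # skip images without a cadastral match
                writer.writerow([str(img), use_by_id[building_id]])

# Hypothetical usage:
# build_training_index("facades/", "cadastre.csv", "train_index.csv")
```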

  10. UE4Sim: A Photo-Realistic Simulator for Computer Vision Applications

    KAUST Repository

    Mueller, Matthias; Casser, Vincent; Lahoud, Jean; Smith, Neil; Ghanem, Bernard

    2017-01-01

    We present a photo-realistic training and evaluation simulator (UE4Sim) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates several state-of-the-art tracking algorithms with a benchmark evaluation tool, as well as a deep neural network (DNN) architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.

  11. Improvement of remote monitoring on water quality in a subtropical reservoir by incorporating grammatical evolution with parallel genetic algorithms into satellite imagery.

    Science.gov (United States)

    Chen, Li; Tan, Chih-Hung; Kao, Shuh-Ji; Wang, Tai-Sheng

    2008-01-01

    Parallel GEGA was constructed by incorporating grammatical evolution (GE) into a parallel genetic algorithm (GA) to improve reservoir water quality monitoring based on remote sensing images. A cruise was conducted to ground-truth chlorophyll-a (Chl-a) concentration longitudinally along the Feitsui Reservoir, the primary water supply for Taipei City in Taiwan. Empirical functions with multiple spectral parameters from Landsat 7 Enhanced Thematic Mapper (ETM+) data were constructed. GE, an evolutionary automatic programming system, automatically discovers complex nonlinear mathematical relationships between observed Chl-a concentrations and remotely sensed imagery. A GA was then used with GE to optimize the appropriate function type. Several parallel subpopulations were processed to enhance search efficiency during the GA optimization procedure. The parallel GEGA model was found to perform better than a traditional linear multiple regression (LMR) model, with lower estimation errors.
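
    As a simplified stand-in for the GEGA search described above, the sketch below uses a plain genetic algorithm to fit the coefficients of a fixed band-ratio regression for Chl-a. The fixed model form replaces grammatical evolution's automatic discovery of the function, and the synthetic "observations" are generated on the spot, so nothing here reproduces the study's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "cruise" data: two ETM+-like band reflectances and Chl-a observations.
bands = rng.uniform(0.02, 0.3, size=(40, 2))
chl_obs = 5.0 + 30.0 * bands[:, 0] / bands[:, 1] + rng.normal(0, 0.5, 40)

def predict(params, bands):
    a, b = params
    return a + b * bands[:, 0] / bands[:, 1]      # assumed band-ratio model

def rmse(params):
    return np.sqrt(np.mean((predict(params, bands) - chl_obs) ** 2))

# Plain GA: size-2 tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(-50, 50, size=(60, 2))
for _ in range(200):
    fitness = np.array([rmse(ind) for ind in pop])
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    winners = np.where((fitness[i] < fitness[j])[:, None], pop[i], pop[j])
    mates = winners[rng.permutation(len(winners))]
    alpha = rng.uniform(0, 1, (len(winners), 1))
    pop = alpha * winners + (1 - alpha) * mates + rng.normal(0, 0.5, winners.shape)

best = min(pop, key=rmse)       # should land near the generating values a=5, b=30
print("fitted (a, b):", np.round(best, 2), " RMSE:", round(rmse(best), 3))
```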

  12. UE4Sim: A Photo-Realistic Simulator for Computer Vision Applications

    KAUST Repository

    Mueller, Matthias

    2017-08-19

    We present a photo-realistic training and evaluation simulator (UE4Sim) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates several state-of-the-art tracking algorithms with a benchmark evaluation tool, as well as a deep neural network (DNN) architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.

  13. Sim4CV: A Photo-Realistic Simulator for Computer Vision Applications

    KAUST Repository

    Müller, Matthias

    2018-03-24

    We present a photo-realistic training and evaluation simulator (Sim4CV) (http://www.sim4cv.org) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates several state-of-the-art tracking algorithms with a benchmark evaluation tool, as well as a deep neural network architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.

  14. Truth in politics : rhetorical approaches to democratic deliberation in Africa and beyond

    NARCIS (Netherlands)

    Salazar, P.J.; Osha, S.; Binsbergen, van W.M.J.

    2004-01-01

    Democracy is about competing "truths". This is why "rhetoric" - the study of public deliberation and the training in public debate and argumentation - is part of democracy in development. This volume acclimatizes "rhetoric" to the philosophical scene in South Africa, and more generally in Africa as

  15. The Rise of Post-truth Populism in Pluralist Liberal Democracies: Challenges for Health Policy

    Science.gov (United States)

    Speed, Ewen; Mannion, Russell

    2017-01-01

    Recent years have witnessed the rise of populism and populist leaders, movements and policies in many pluralist liberal democracies, with Brexit and the election of Trump the two most recent high profile examples of this backlash against established political elites and the institutions that support them. This new populism is underpinned by a post-truth politics which is using social media as a mouthpiece for ‘fake news’ and ‘alternative facts’ with the intention of inciting fear and hatred of ‘the other’ and thereby helping to justify discriminatory health policies for marginalised groups. In this article, we explore what is meant by populism and highlight some of the challenges for health and health policy posed by the new wave of post-truth populism. PMID:28812811

  16. On the construction of reality and truth. Towards an epistemology of community social psychology.

    Science.gov (United States)

    Montero, Maritza

    2002-08-01

    An expression of community social psychology based on the need to transform social reality, and to consider people as the constructors of that reality, is examined from an epistemological point of view. Dualism, the position considering that object and subject are separate entities, and monism, the perspective stating that there is only one substance, are discussed. The consequences of both conceptions for community social psychology, and their incompatibility, as well as the notions of reality and truth, are analyzed. That analysis deals with the problems of defining reality, of separating subject and object of knowledge, of language's role, and of relativism and truth. Finally, a constructionist view of monism based on relatedness and action is proposed, stating the mutually influencing union of subject and object in the construction of reality.

  17. China QIUSHI SEEKING TRUTH no 3, 1 August 1988

    Science.gov (United States)

    1988-09-16

    [Partial contents and citation] "Legal Principles in Law Circles in Recent Years" (p 40); "A New 'Political Economy' Textbook Is Compiled in the Soviet Union" [Dan Zhu] (p 43); "...Analysis on Results" (hereafter called "Census"), published by Zhongguo Caizheng Jingji Chubanshe (China Finance and Economics Publishing...). HK2508082288 Beijing QIUSHI [SEEKING TRUTH] in Chinese, No 3, 1 Aug 88, p 44 [Article by Dan Zhu 0030 4554 of the Chinese Academy of Social Sciences]

  18. Academic Training: Telling the truth with statistics

    CERN Multimedia

    Françoise Benz

    2005-01-01

    2004-2005 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 21, 22, 23, 24 & 25 February from 11.00 to 12.00 hrs - Main Auditorium, bldg. 500 Telling the truth with statistics by G. D'Agostini / INFN, Roma, Italy The issue of evaluating and expressing the uncertainty in measurements, as well as that of testing hypotheses, is reviewed, with particular emphasis on the frontier cases typical of particle physics experiments. Fundamental aspects of probability will be addressed and the applications, solely based on probability theory, will cover several topics of practical interest, including counting experiments, upper/lower bounds, systematic errors, fits and comparison of hypotheses. ENSEIGNEMENT ACADEMIQUE ACADEMIC TRAINING Françoise Benz 73127 academic.training@cern.ch

  19. Populism, Exclusion, Post-truth. Some Conceptual Caveats

    Science.gov (United States)

    De Cleen, Benjamin

    2018-01-01

    In their editorial, Speed and Mannion identify two main challenges "the rise of post-truth populism" poses for health policy: the populist threat to inclusive healthcare policies, and the populist threat to well-designed health policies that draw on professional expertise and research evidence. This short comment suggests some conceptual clarifications that might help in thinking through more profoundly these two important issues. It argues that we should approach right-wing populism as a combination of a populist down/up (people/elite) axis with an exclusionary nationalist in/out (member/non-member) axis. And it raises some questions regarding the equation between populism, demagogy and the rejection of expertise and scientific knowledge. PMID:29524956

  20. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks

    Science.gov (United States)

    Cruz-Roa, Angel; Basavanhally, Ajay; González, Fabio; Gilmore, Hannah; Feldman, Michael; Ganesan, Shridar; Shih, Natalie; Tomaszewski, John; Madabhushi, Anant

    2014-03-01

    This paper presents a deep learning approach for automatic detection and visual analysis of invasive ductal carcinoma (IDC) tissue regions in whole slide images (WSI) of breast cancer (BCa). Deep learning approaches are learn-from-data methods involving computational modeling of the learning process. This approach is similar to how the human brain works, using different interpretation levels or layers of the most representative and useful features, resulting in a hierarchical learned representation. These methods have been shown to outpace traditional approaches on some of the most challenging problems in several areas, such as speech recognition and object detection. Invasive breast cancer detection is a time consuming and challenging task, primarily because it involves a pathologist scanning large swathes of benign regions to ultimately identify the areas of malignancy. Precise delineation of IDC in WSI is crucial to the subsequent estimation of tumor aggressiveness (grading) and prediction of patient outcome. DL approaches are particularly adept at handling these types of problems, especially if a large number of samples are available for training, which also helps ensure the generalizability of the learned features and classifier. The DL framework in this paper extends a number of convolutional neural networks (CNN) for visual semantic analysis of tumor regions for diagnosis support. The CNN is trained on a large number of image patches (tissue regions) from WSI to learn a hierarchical part-based representation. The method was evaluated on a WSI dataset from 162 patients diagnosed with IDC. 113 slides were selected for training and 49 slides were held out for independent testing. Ground truth for quantitative evaluation was provided via delineation of the cancer regions on the digitized slides by an expert pathologist. The experimental evaluation was designed to measure classifier accuracy in detecting IDC tissue regions in WSI. Our method yielded the best quantitative
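
    A minimal PyTorch sketch of the patch-level set-up described above, a small CNN that classifies tissue patches as IDC-positive or negative, is shown below. The patch size, layer sizes, and single training step are illustrative assumptions and are far smaller than the network and training regime evaluated in the paper.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Tiny CNN mapping 3-channel 50x50 tissue patches to IDC / non-IDC logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 50 -> 25
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 25 -> 12
        )
        self.classifier = nn.Linear(32 * 12 * 12, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One illustrative training step on random stand-in patches and labels.
model = PatchCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.rand(8, 3, 50, 50)            # stand-in for WSI tissue patches
labels = torch.randint(0, 2, (8,))            # stand-in for pathologist ground truth
loss = nn.CrossEntropyLoss()(model(patches), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("loss:", float(loss))
```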