WorldWideScience

Sample records for ground truth automatic

  1. Ground-truth measurement systems

    Science.gov (United States)

    Serafin, R.; Seliga, T. A.; Lhermitte, R. M.; Nystuen, J. A.; Cherry, S.; Bringi, V. N.; Blackmer, R.; Heymsfield, G. M.

    1981-01-01

    Ground-truth measurements of precipitation and related weather events are an essential component of any satellite system designed for monitoring rainfall from space. Such measurements are required for testing, evaluation, and operations; they provide detailed information on the actual weather events, which can then be compared with satellite observations intended to provide both quantitative and qualitative information about them. Also, very comprehensive ground-truth observations should lead to a better understanding of precipitation fields and their relationships to satellite data. This process serves two very important functions: (a) aiding in the development and interpretation of schemes of analyzing satellite data, and (b) providing a continuing method for verifying satellite measurements.
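    Function (b), continuous verification of satellite measurements against ground truth, amounts to comparing co-located estimates. A minimal sketch with hypothetical rainfall numbers (not from this record):

```python
import math

def verify(satellite, gauge):
    """Compare co-located satellite rainfall estimates with ground-truth
    gauge measurements; returns mean bias and RMSE (same units as input)."""
    diffs = [s - g for s, g in zip(satellite, gauge)]
    bias = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rmse

# Hypothetical rainfall totals in mm for five co-located points
bias, rmse = verify([10.0, 4.0, 0.0, 22.0, 7.0],
                    [ 9.0, 5.0, 0.0, 25.0, 6.0])
```

    A persistent bias points at a calibration problem in the retrieval scheme, while a large RMSE with small bias points at random error.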

  2. Ground Truth Annotation in T Analyst

    DEFF Research Database (Denmark)

    2015-01-01

    This video shows how to annotate the ground truth tracks in the thermal videos. The ground truth tracks are produced to be able to compare them to tracks obtained from a Computer Vision tracking approach. The program used for annotation is T-Analyst, which is developed by Aliaksei Laureshyn, Ph...

  3. Development of mine explosion ground truth smart sensors

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Steven R. [Rocky Mountain Geophysics, Inc., Los Alamos, NM (United States); Harben, Phillip E. [Rocky Mountain Geophysics, Inc., Los Alamos, NM (United States); Jarpe, Steve [Jarpe Data Solutions, Prescott, AZ (United States); Harris, David B. [Deschutes Signal Processing, Maupin, OR (United States)

    2015-09-14

    Accurate seismo-acoustic source location is one of the fundamental aspects of nuclear explosion monitoring. Critical to improved location is the compilation of ground truth data sets for which origin time and location are accurately known. Substantial effort by the National Laboratories and other seismic monitoring groups has been undertaken to acquire and develop ground truth catalogs that form the basis of location efforts (e.g. Sweeney, 1998; Bergmann et al., 2009; Waldhauser and Richards, 2004). In particular, more GT1 (Ground Truth 1 km) events are required to improve three-dimensional velocity models that are currently under development. Mine seismicity can form the basis of accurate ground truth datasets. Although the location of mining explosions can often be accurately determined using array methods (e.g. Harris, 1991) and from overhead observations (e.g. MacCarthy et al., 2008), accurate origin time estimation can be difficult. Occasionally, mine operators will share shot time, location, explosion size and even shot configuration, but this is rarely done, especially in foreign countries. Additionally, shot times provided by mine operators are often inaccurate. An inexpensive ground truth event detector that could be mailed to a contact, placed in close proximity (< 5 km) to mining regions or earthquake aftershock regions, and that automatically transmits back ground-truth parameters would greatly aid in the development of ground truth datasets that could be used to improve nuclear explosion monitoring capabilities. We are developing an inexpensive, compact, lightweight smart sensor unit (or units) that could be used in the development of ground truth datasets for the purpose of improving nuclear explosion monitoring capabilities.
The units must be easy to deploy, be able to operate autonomously for a significant period of time (> 6 months) and inexpensive enough to be discarded after useful operations have expired (although this may not be part of our business
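    The record does not specify the detection algorithm inside the smart sensor; a common choice for automatically picking an event onset (and hence a shot time) from a seismo-acoustic trace is a short-term-average/long-term-average (STA/LTA) trigger. The sketch below is an illustrative assumption, not the authors' design:

```python
def sta_lta_trigger(samples, sta_len=5, lta_len=20, threshold=4.0):
    """Return the index of the first sample whose STA/LTA ratio of absolute
    amplitude exceeds `threshold`, or None if no trigger occurs."""
    for i in range(lta_len, len(samples) - sta_len):
        lta = sum(abs(x) for x in samples[i - lta_len:i]) / lta_len
        sta = sum(abs(x) for x in samples[i:i + sta_len]) / sta_len
        if lta > 0 and sta / lta >= threshold:
            return i
    return None

# Hypothetical waveform: low-amplitude noise followed by an impulsive arrival
trace = [0.1, -0.1] * 15 + [2.0, -2.5, 3.0, -1.5, 1.0] + [0.2, -0.2] * 10
onset = sta_lta_trigger(trace)  # index near the impulsive arrival
```

    In a deployed unit the trigger index would be combined with the GPS timestamp of the sample to report the origin time.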

  4. Eliciting Perceptual Ground Truth for Image Segmentation

    OpenAIRE

    Hodge, Victoria Jane; Eakins, John; Austin, Jim

    2006-01-01

    In this paper, we investigate human visual perception and establish a body of ground truth data elicited from human visual studies. We aim to build on the formative work of Ren, Eakins and Briggs who produced an initial ground truth database. Human subjects were asked to draw and rank their perceptions of the parts of a series of figurative images. These rankings were then used to score the perceptions, identify the preferred human breakdowns and thus allow us to induce perceptual rules for h...

  5. Ground Truth Collections at the MTI Core Sites

    International Nuclear Information System (INIS)

    Garrett, A.J.

    2001-01-01

    The Savannah River Technology Center (SRTC) selected 13 sites across the continental US and one site in the western Pacific to serve as the primary or core sites for collection of ground truth data for validation of MTI science algorithms. Imagery and ground truth data from several of these sites are presented in this paper. These sites are the Comanche Peak, Pilgrim and Turkey Point power plants, the Ivanpah playas, Crater Lake, Stennis Space Center and the Tropical Western Pacific ARM site on the island of Nauru. Ground truth data include water temperatures (bulk and skin), radiometric data, meteorological data and plant operating data. The organizations that manage these sites assist SRTC with its ground truth data collections and also give the MTI project a variety of ground truth measurements that they make for their own purposes. Collectively, the ground truth data from the 14 core sites constitute a comprehensive database for science algorithm validation.

  6. On the ground truth problem of malicious DNS traffic analysis

    DEFF Research Database (Denmark)

    Stevanovic, Matija; Pedersen, Jens Myrup; D’Alconzo, Alessandro

    2015-01-01

    algorithms at their core. These methods require accurate ground truth of both malicious and benign DNS traffic for model training as well as for the performance evaluation. This paper elaborates on the problem of obtaining such a ground truth and evaluates practices employed by contemporary detection methods...

  7. Fast and Accurate Ground Truth Generation for Skew-Tolerance Evaluation of Page Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Okun Oleg

    2006-01-01

    Many image segmentation algorithms are known, but often there is an inherent obstacle in the unbiased evaluation of segmentation quality: the absence or lack of a common objective representation for segmentation results. Such a representation, known as the ground truth, is a description of what one should obtain as the result of ideal segmentation, independently of the segmentation algorithm used. The creation of ground truth is a laborious process and therefore any degree of automation is always welcome. Document image analysis is one of the areas where ground truths are employed. In this paper, we describe an automated tool called GROTTO intended to generate ground truths for skewed document images, which can be used for the performance evaluation of page segmentation algorithms. Some of these algorithms are claimed to be insensitive to skew (tilt of text lines). However, this fact is usually supported only by a visual comparison of what one obtains and what one should obtain, since ground truths are mostly available for upright images, that is, those without skew. As a result, the evaluation is both subjective, that is, prone to errors, and tedious. Our tool allows users to quickly and easily produce many sufficiently accurate ground truths that can be employed in practice and therefore it facilitates automatic performance evaluation. The main idea is to utilize the ground truths available for upright images and the concept of the representative square [9] in order to produce the ground truths for skewed images. The usefulness of our tool is demonstrated through a number of experiments with real document images of complex layout.
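    The geometric core of the main idea, producing skewed ground truth from upright ground truth, is a rotation of the ground-truth region coordinates by the skew angle. A minimal sketch (the representative-square bookkeeping of GROTTO is omitted; coordinates are illustrative):

```python
import math

def skew_ground_truth(corners, angle_deg, center=(0.0, 0.0)):
    """Rotate upright ground-truth polygon corners about `center` by the
    document skew angle, yielding the corresponding skewed ground truth."""
    a = math.radians(angle_deg)
    cx, cy = center
    out = []
    for x, y in corners:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(a) - dy * math.sin(a),
                    cy + dx * math.sin(a) + dy * math.cos(a)))
    return out

# Hypothetical upright text-line bounding box, skewed by 10 degrees
skewed = skew_ground_truth([(0, 0), (100, 0), (100, 20), (0, 20)], 10.0)
```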

  8. Is our Ground-Truth for Traffic Classification Reliable?

    DEFF Research Database (Denmark)

    Carela-Español, Valentín; Bujlow, Tomasz; Barlet-Ros, Pere

    2014-01-01

    . In order to evaluate these tools we have carefully built a labeled dataset of more than 500 000 flows, which contains traffic from popular applications. Our results present PACE, a commercial tool, as the most reliable solution for ground-truth generation. However, among the open-source tools available...

  9. AMS Ground Truth Measurements: Calibration and Test Lines

    International Nuclear Information System (INIS)

    Wasiolek, P.

    2013-01-01

    Airborne gamma spectrometry is one of the primary techniques used to define the extent of ground contamination after a radiological incident. Its usefulness was demonstrated extensively during the response to the Fukushima nuclear power plant (NPP) accident in March-May 2011. To map ground contamination a set of scintillation detectors is mounted on an airborne platform (airplane or helicopter) and flown over contaminated areas. The acquisition system collects spectral information together with the aircraft position and altitude every second. To provide useful information to decision makers, the count rate data expressed in counts per second (cps) needs to be converted to the terrestrial component of the exposure rate 1 m above ground, or surface activity of isotopes of concern. This is done using conversion coefficients derived from calibration flights. During a large scale radiological event, multiple flights may be necessary and may require use of assets from different agencies. However, as the production of a single, consistent map product depicting the ground contamination is the primary goal, it is critical to establish very early into the event a common calibration line. Such a line should be flown periodically in order to normalize data collected from different aerial acquisition systems and potentially flown at different flight altitudes and speeds. In order to verify and validate individual aerial systems, the calibration line needs to be characterized in terms of ground truth measurements. This is especially important if the contamination is due to short-lived radionuclides. The process of establishing such a line, as well as necessary ground truth measurements, is described in this document.
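    The cps-to-exposure-rate conversion described above can be sketched as follows; the background subtraction, the exponential altitude correction, and all numeric values are illustrative assumptions, not coefficients from this document:

```python
import math

def exposure_rate(cps, background_cps, conv_coeff, altitude_m,
                  ref_altitude_m, attenuation_per_m):
    """Convert an aerial gross count rate to the terrestrial exposure rate
    1 m above ground: subtract the non-terrestrial background, normalize
    the reading to the reference survey altitude with an exponential air
    attenuation model, then apply the calibration-flight coefficient
    (exposure-rate units per cps)."""
    net = cps - background_cps
    net_ref = net * math.exp(attenuation_per_m * (altitude_m - ref_altitude_m))
    return net_ref * conv_coeff

# Illustrative numbers only: 2500 cps gross, 400 cps background, flown at
# 120 m against a 100 m reference survey altitude
rate = exposure_rate(2500.0, 400.0, conv_coeff=0.01,
                     altitude_m=120.0, ref_altitude_m=100.0,
                     attenuation_per_m=0.007)
```

    Flying the common calibration line with each aerial system is precisely what pins down `conv_coeff` and the attenuation term for that system.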

  10. AMS Ground Truth Measurements: Calibrations and Test Lines

    Energy Technology Data Exchange (ETDEWEB)

    Wasiolek, Piotr T. [National Security Technologies, LLC

    2015-12-01

    Airborne gamma spectrometry is one of the primary techniques used to define the extent of ground contamination after a radiological incident. Its usefulness was demonstrated extensively during the response to the Fukushima NPP accident in March-May 2011. To map ground contamination, a set of scintillation detectors is mounted on an airborne platform (airplane or helicopter) and flown over contaminated areas. The acquisition system collects spectral information together with the aircraft position and altitude every second. To provide useful information to decision makers, the count data, expressed in counts per second (cps), need to be converted to the terrestrial component of the exposure rate at 1 meter (m) above ground, or to the surface activity of the isotopes of concern. This is done using conversion coefficients derived from calibration flights. During a large-scale radiological event, multiple flights may be necessary and may require use of assets from different agencies. However, because production of a single, consistent map product depicting the ground contamination is the primary goal, it is critical to establish a common calibration line very early into the event. Such a line should be flown periodically in order to normalize data collected from different aerial acquisition systems, potentially flown at different flight altitudes and speeds. In order to verify and validate individual aerial systems, the calibration line needs to be characterized in terms of ground truth measurements. This is especially important if the contamination is due to short-lived radionuclides. The process of establishing such a line, as well as necessary ground truth measurements, is described in this document.

  11. Government Applications Task Force ground truth study of WAG 4

    International Nuclear Information System (INIS)

    Evers, T.K.; Smyre, J.L.; King, A.L.

    1997-06-01

    This report documents the Government Applications Task Force (GATF) Buried Waste Project. The project was initiated as a field investigation and verification of the 1994 Strategic Environmental Research and Development Program's (SERDP) Buried Waste Identification Project results. The GATF project team included staff from three US Department of Energy (DOE) laboratories [Oak Ridge National Laboratory (ORNL), Los Alamos National Laboratory (LANL), and the Savannah River Technology Center (SRTC)] and from the National Exploitation Laboratory. Similar studies were conducted at each of the three DOE laboratories to demonstrate the effective use of remote sensing technologies. The three locations were selected to assess differences in buried waste signatures under various environmental conditions (i.e., climate, terrain, precipitation, geology, etc.). After a brief background discussion of the SERDP Project, this report documents the field investigation (ground truth) results from the 1994-1995 GATF Buried Waste Study at ORNL's Waste Area Grouping (WAG) 4. Figures for this report are located in Appendix A.

  12. Visualization of ground truth tracks for the video 'Tracking a "facer's" behavior in a public plaza'

    DEFF Research Database (Denmark)

    2015-01-01

    The video shows the ground truth tracks in GIS of all pedestrians in the video "Tracking a 'facer's' behavior in a public plaza". The visualization was made using QGIS TimeManager.

  13. Validation of neural spike sorting algorithms without ground-truth information.

    Science.gov (United States)

    Barnett, Alex H; Magland, Jeremy F; Greengard, Leslie F

    2016-05-01

    The throughput of electrophysiological recording is growing rapidly, allowing thousands of simultaneous channels, and there is a growing variety of spike sorting algorithms designed to extract neural firing events from such data. This creates an urgent need for standardized, automatic evaluation of the quality of neural units output by such algorithms. We introduce a suite of validation metrics that assess the credibility of a given automatic spike sorting algorithm applied to a given dataset. By rerunning the spike sorter two or more times, the metrics measure stability under various perturbations consistent with variations in the data itself, making no assumptions about the internal workings of the algorithm, and minimal assumptions about the noise. We illustrate the new metrics on standard sorting algorithms applied to both in vivo and ex vivo recordings, including a time series with overlapping spikes. We compare the metrics to existing quality measures, and to ground-truth accuracy in simulated time series. We provide a software implementation. Metrics have until now relied on ground-truth, simulated data, internal algorithm variables (e.g. cluster separation), or refractory violations. By contrast, by standardizing the interface, our metrics assess the reliability of any automatic algorithm without reference to internal variables (e.g. feature space) or physiological criteria. Stability is a prerequisite for reproducibility of results. Such metrics could reduce the significant human labor currently spent on validation, and should form an essential part of large-scale automated spike sorting and systematic benchmarking of algorithms.
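    The matching scheme below is an illustrative simplification of such stability metrics, not the authors' implementation: rerun the sorter, then score each unit of the first run by its best agreement (here, Jaccard overlap of spike times) with any unit of the second run:

```python
def unit_stability(run_a, run_b):
    """For each unit in `run_a` (dict: unit id -> spike sample times), report
    the best Jaccard agreement with any unit from a second run of the same
    sorter on the same data; stable units score near 1."""
    def jaccard(s, t):
        s, t = set(s), set(t)
        return len(s & t) / len(s | t) if s | t else 0.0
    return {u: max(jaccard(times, t) for t in run_b.values())
            for u, times in run_a.items()}

# Toy spike trains: unit 1 reproduces exactly across runs, unit 2 does not
run1 = {1: [10, 50, 90, 130], 2: [20, 60, 100]}
run2 = {"a": [10, 50, 90, 130], "b": [25, 61, 99, 140, 180]}
scores = unit_stability(run1, run2)
```

    A real metric would additionally tolerate small spike-time jitter and resolve the unit correspondence globally rather than greedily.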

  14. SEA ICE THICKNESS MEASUREMENT BY GROUND PENETRATING RADAR FOR GROUND TRUTH OF MICROWAVE REMOTE SENSING DATA

    Directory of Open Access Journals (Sweden)

    M. Matsumoto

    2018-04-01

    Observation of sea ice thickness is one of the key issues in understanding the regional effects of global warming. One approach to monitoring sea ice over large areas is microwave remote sensing data analysis. However, ground truth is necessary to assess the effectiveness of this kind of approach. The conventional method of acquiring ground truth for ice thickness is drilling through the ice layer and directly measuring the thickness with a ruler. However, this method is destructive, time-consuming and limited in spatial resolution. Although there are several methods to acquire ice thickness in a non-destructive way, ground penetrating radar (GPR) can be an effective solution because it can discriminate the snow-ice and ice-seawater interfaces. In this paper, we carried out GPR measurements in Lake Saroma over a relatively large area (approximately 200 m by 300 m), aiming to obtain ground truth for remote sensing data. The GPR survey was conducted at 5 locations in the area. Direct measurement was also conducted simultaneously in order to calibrate the GPR data for thickness estimation and to validate the result. Although the GPR B-scan image obtained at 600 MHz contains a reflection which may come from a structure under the snow, the origin of the reflection is not obvious. Therefore, further analysis and interpretation of the GPR image, such as numerical simulation, additional signal processing and use of a 200 MHz antenna, are required to move on to thickness estimation.

  15. Sea Ice Thickness Measurement by Ground Penetrating Radar for Ground Truth of Microwave Remote Sensing Data

    Science.gov (United States)

    Matsumoto, M.; Yoshimura, M.; Naoki, K.; Cho, K.; Wakabayashi, H.

    2018-04-01

    Observation of sea ice thickness is one of the key issues in understanding the regional effects of global warming. One approach to monitoring sea ice over large areas is microwave remote sensing data analysis. However, ground truth is necessary to assess the effectiveness of this kind of approach. The conventional method of acquiring ground truth for ice thickness is drilling through the ice layer and directly measuring the thickness with a ruler. However, this method is destructive, time-consuming and limited in spatial resolution. Although there are several methods to acquire ice thickness in a non-destructive way, ground penetrating radar (GPR) can be an effective solution because it can discriminate the snow-ice and ice-seawater interfaces. In this paper, we carried out GPR measurements in Lake Saroma over a relatively large area (approximately 200 m by 300 m), aiming to obtain ground truth for remote sensing data. The GPR survey was conducted at 5 locations in the area. Direct measurement was also conducted simultaneously in order to calibrate the GPR data for thickness estimation and to validate the result. Although the GPR B-scan image obtained at 600 MHz contains a reflection which may come from a structure under the snow, the origin of the reflection is not obvious. Therefore, further analysis and interpretation of the GPR image, such as numerical simulation, additional signal processing and use of a 200 MHz antenna, are required to move on to thickness estimation.
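    The thickness estimation and its calibration against the direct (drill) measurements reduce to the two-way travel time relation d = v·t/2. A minimal sketch with hypothetical values, not measurements from this survey:

```python
def ice_thickness(two_way_time_ns, velocity_m_per_ns):
    """Ice thickness from the GPR two-way travel time between the snow-ice
    and ice-water interfaces: one-way distance = v * t / 2."""
    return velocity_m_per_ns * two_way_time_ns / 2.0

def calibrate_velocity(two_way_time_ns, drilled_thickness_m):
    """Recover the effective wave velocity in ice from one co-located
    drill-hole measurement."""
    return 2.0 * drilled_thickness_m / two_way_time_ns

# Hypothetical calibration: 0.45 m of ice drilled where the GPR shows 5.3 ns
v = calibrate_velocity(5.3, 0.45)
thickness = ice_thickness(8.0, v)   # thickness estimate at another trace
```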

  16. Community detection algorithm evaluation with ground-truth data

    Science.gov (United States)

    Jebabli, Malek; Cherifi, Hocine; Cherifi, Chantal; Hamouda, Atef

    2018-02-01

    Community structure is of paramount importance for the understanding of complex networks. Consequently, there is a tremendous effort to develop efficient community detection algorithms. Unfortunately, the issue of a fair assessment of these algorithms remains an open question. If the ground-truth community structure is available, various clustering-based metrics are used to compare it with the one discovered by these algorithms. However, these metrics, defined at the node level, are fairly insensitive to variation of the overall community structure. To overcome these limitations, we propose to exploit the topological features of the 'community graphs' (where the nodes are the communities and the links represent their interactions) in order to evaluate the algorithms. To illustrate our methodology, we conduct a comprehensive analysis of overlapping community detection algorithms using a set of real-world networks with known a priori community structure. Results provide a better perception of their relative performance as compared to classical metrics. Moreover, they show that more emphasis should be put on the topology of the community structure. We also investigate the relationship between the topological properties of the community structure and the alternative evaluation measures (quality metrics and clustering metrics). It appears clearly that they present different views of the community structure and that they must be combined in order to evaluate the effectiveness of community detection algorithms.
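    Building the 'community graph' described above, with communities as nodes and inter-community links as weighted edges, can be sketched in plain Python (toy data, non-overlapping membership for simplicity):

```python
def community_graph(edges, membership):
    """Collapse a network into its community graph: one node per community,
    an edge between two communities for every inter-community link, with a
    weight counting those links."""
    weights = {}
    for u, v in edges:
        cu, cv = membership[u], membership[v]
        if cu != cv:                        # intra-community links are dropped
            key = tuple(sorted((cu, cv)))
            weights[key] = weights.get(key, 0) + 1
    return weights

# Toy network with three communities A, B, C
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1), (2, 5)]
membership = {1: "A", 2: "A", 3: "B", 4: "B", 5: "C", 6: "C"}
cg = community_graph(edges, membership)
```

    Topological properties of `cg` (degree distribution, density, and so on) can then be compared between the detected and the ground-truth community structure.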

  17. ON CONSTRUCTION OF A RELIABLE GROUND TRUTH FOR EVALUATION OF VISUAL SLAM ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Jan Bayer

    2016-11-01

    In this work we address the problem of evaluating the localization accuracy of visual-based Simultaneous Localization and Mapping (SLAM) techniques. Quantitative evaluation of SLAM algorithm performance is usually done using the established metrics of relative pose error and absolute trajectory error, which require a precise and reliable ground truth. Such a ground truth is usually hard to obtain, since it requires an expensive external localization system. In this work we propose to use the SLAM algorithm itself to construct a reliable ground truth by offline frame-by-frame processing. The generated ground truth is suitable for evaluation of different SLAM systems, as well as for tuning the parametrization of the on-line SLAM. The presented practical experimental results indicate the feasibility of the proposed approach.
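    Once an offline ground-truth trajectory is available, the absolute trajectory error reduces to an RMSE over time-synchronized positions. A minimal sketch (the usual rigid alignment step between the two trajectories is assumed to be already applied; numbers are hypothetical):

```python
import math

def ate_rmse(estimated, ground_truth):
    """Root-mean-square Euclidean distance between time-synchronized
    estimated and ground-truth positions (alignment already applied)."""
    sq = [sum((e - g) ** 2 for e, g in zip(p, q))
          for p, q in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical 2D trajectories: an on-line SLAM run vs. its offline ground truth
est = [(0.0, 0.0), (1.1, 0.0), (2.0, 0.2)]
gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
err = ate_rmse(est, gt)
```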

  18. Automated Breast Ultrasound for Ductal Pattern Reconstruction: Ground Truth File Generation and CADe Evaluation

    Science.gov (United States)

    Manousaki, D.; Panagiotopoulou, A.; Bizimi, V.; Haynes, M. S.; Love, S.; Kallergi, M.

    2017-11-01

    The purpose of this study was the generation of ground truth files (GTFs) of the breast ducts from 3D images of the Invenia™ Automated Breast Ultrasound System (ABUS) (GE Healthcare, Little Chalfont, UK) and the application of these GTFs to the optimization of the imaging protocol and the evaluation of a computer aided detection (CADe) algorithm developed for automated duct detection. Six lactating, nursing volunteers were scanned with the ABUS before and right after breastfeeding their infants. An expert in breast ultrasound generated rough outlines of the milk-filled ducts in the transaxial slices of all image volumes, and the final GTFs were created by using thresholding and smoothing tools in ImageJ. In addition, a CADe algorithm automatically segmented duct-like areas and its results were compared to the expert's GTFs by estimating the true positive fraction (TPF), or % overlap. The CADe output differed significantly from the expert's, but both detected a smaller than expected volume of the ducts due to insufficient contrast (ducts were partially filled with milk), discontinuities, and artifacts. GTFs were used to modify the imaging protocol and improve the CADe method. In conclusion, electronic GTFs provide a valuable tool in the optimization of a tomographic imaging system, the imaging protocol, and the CADe algorithms. Their generation, however, is an extremely time-consuming, strenuous process, particularly for multi-slice examinations, and alternatives based on phantoms or simulations are highly desirable.
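    The TPF (% overlap) comparison between the CADe output and the expert's GTFs can be sketched on flattened binary masks (toy data standing in for image volumes):

```python
def true_positive_fraction(cad_mask, gtf_mask):
    """TPF (% overlap): fraction of ground-truth-file voxels that the CADe
    segmentation also marks. Masks are flat sequences of 0/1."""
    overlap = sum(1 for c, g in zip(cad_mask, gtf_mask) if c and g)
    truth = sum(gtf_mask)
    return overlap / truth if truth else 0.0

# Toy 1-D masks standing in for flattened transaxial slices
gtf = [0, 1, 1, 1, 1, 0, 0, 1]
cad = [0, 1, 1, 0, 0, 0, 1, 1]
tpf = true_positive_fraction(cad, gtf)
```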

  19. Ground truth data collection on mining industrial explosions registered by the International Monitoring System

    International Nuclear Information System (INIS)

    Ehl'tekov, A.Yu.; Gordon, V.P.; Firsov, V.A.; Chervyakov, V.B.

    2004-01-01

    The presentation is dedicated to organizational and technical issues connected with the task of timely notification of the Comprehensive Nuclear-Test-Ban Treaty Organization about large chemical explosions, including data on explosion location and time, on the quantity and type of explosive used, and on the configuration and assumed purpose of the explosion. Explosions registered by the International Monitoring System are of special interest; their data could be used for calibration of the monitoring system. Ground truth data collection and some explosion location results for mining enterprises in Russia are given. Peculiarities of collecting ground truth data on mining industrial explosions are considered. (author)

  20. Satellite markers: a simple method for ground truth car pose on stereo video

    Science.gov (United States)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    Predicting the future location of other cars is a must for advanced safety systems. The remote estimation of car pose, and particularly of its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this specific context is associated with referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method for generating ground truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system while it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for 3D ground truth generation. In our case study, we focus on accurate heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, as well as the instantaneous spatial orientation of each camera at frame level.
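    Given two markers triangulated in 3D on the car, the heading angle reduces to the planar orientation of the rear-to-front baseline. A minimal sketch (the two-marker layout and the coordinates are assumptions for illustration, not the paper's setup):

```python
import math

def heading_angle_deg(front_marker, rear_marker):
    """Car heading: orientation of the rear-to-front marker baseline in the
    ground plane, in degrees, measured from the +x axis."""
    dx = front_marker[0] - rear_marker[0]
    dy = front_marker[1] - rear_marker[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical marker positions (x, y, z) in metres from stereo triangulation
heading = heading_angle_deg((5.0, 3.0, 1.4), (4.0, 2.0, 1.4))
```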

  1. Skepticism, truth as coherence, and constructivist epistemology: grounds for resolving the discord between science and religion?

    Science.gov (United States)

    Staver, John R.

    2010-03-01

    Science and religion exhibit multiple relationships as ways of knowing. These connections have been characterized as cousinly, mutually respectful, non-overlapping, competitive, proximate-ultimate, dominant-subordinate, and opposing-conflicting. Some of these ties create stress, and tension between science and religion represents a significant chapter in humans' cultural heritage before and since the Enlightenment. Truth, knowledge, and their relation are central to science and religion as ways of knowing, as social institutions, and to their interaction. In religion, truth is revealed through God's word. In science, truth is sought after via empirical methods. Discord can be viewed as a competition for social legitimization between two social institutions whose goals are explaining the world and how it works. Under this view, the root of the discord is truth as correspondence. In this concept of truth, knowledge corresponds to the facts of reality, and conflict is inevitable for many because humans want to ask which one—science or religion—gets the facts correct. But, the root paradox, also known as the problem of the criterion, suggests that seeking to know nature as it is represents a fruitless endeavor. The discord can be set on new ground and resolved by taking a moderately skeptical line of thought, one which employs truth as coherence and a moderate form of constructivist epistemology. Quantum mechanics and evolution as scientific theories and scientific research on human consciousness and vision provide support for this line of argument. Within a constructivist perspective, scientists would relinquish only the pursuit of knowing reality as it is. Scientists would retain everything else. Believers who hold that religion explains reality would come to understand that God never revealed His truth of nature; rather, He revealed His truth in how we are to conduct our lives.

  2. Go fly a kite : air truthing replaces ground truthing for environmental investigations

    Energy Technology Data Exchange (ETDEWEB)

    Eaton, S.

    2008-05-15

    This article discussed the use of kite aerial photography (KAP) in oil and gas exploration activities. KAP exhibits a minimal environmental footprint while providing high resolution airborne data of the Earth's surface in infrared and a variety of other media. The cost-effective technology is being employed by Alberta's oil and gas operators as well as by the environmental consulting sector. The kites fly at lower elevations than other remote sensing tools, and yield better spatial resolution on the ground. KAP can map the Earth's surface at a scale of investigation on the order of 5 to 10 centimetres. The images are placed into a geo-referenced mosaic along with poorer resolution remote sensing tools. A KAP kit can be assembled for under $1000. By using infrared KAP images, operators are able to determine the health of muskeg and swamp areas and measure the rate of photosynthesis of plants. KAP is also used by reclamation groups to evaluate troublesome wellsites. The next generation of sensors will include radio-controlled drones and miniature aircraft. 6 figs.

  3. The evaluation of a population based diffusion tensor image atlas using a ground truth method

    Science.gov (United States)

    Van Hecke, Wim; Leemans, Alexander; D'Agostino, Emiliano; De Backer, Steve; Vandervliet, Evert; Parizel, Paul M.; Sijbers, Jan

    2008-03-01

    Purpose: Voxel based morphometry (VBM) is increasingly being used to detect diffusion tensor (DT) image abnormalities in patients with different pathologies. An important requisite for these VBM studies is the use of a high-dimensional, non-rigid coregistration technique that is able to align both the spatial and the orientational information. Recent studies furthermore indicate that high-dimensional DT information should be included during coregistration for an optimal alignment. In this context, a population based DTI atlas is created that preserves the orientational DT information robustly and contains minimal bias towards any specific individual data set. Methods: A ground truth evaluation method is developed using a single subject DT image that is deformed with 20 deformation fields, after which an atlas is constructed from the 20 resulting images. The non-rigid coregistration algorithm is based on a viscous fluid model and on mutual information. The fractional anisotropy (FA) maps as well as the DT elements are used as DT image information during the coregistration algorithm, in order to minimize the orientational alignment inaccuracies. Results: The population based DT atlas is compared with the ground truth image using accuracy and precision measures of spatially and orientationally dependent metrics. Results indicate that the population based atlas preserves the orientational information in a robust way. Conclusion: A subject independent population based DT atlas is constructed and evaluated with a ground truth method. This atlas contains all available orientational information and can be used in future VBM studies as a reference system.
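    The FA maps used as coregistration features follow the standard formula on the tensor eigenvalues; a minimal per-voxel sketch (eigenvalues are illustrative):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy of a diffusion tensor from its eigenvalues:
    FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||, in [0, 1]."""
    mean = (l1 + l2 + l3) / 3.0
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den) if den else 0.0

fa_isotropic = fractional_anisotropy(1.0, 1.0, 1.0)   # no anisotropy
fa_fiber = fractional_anisotropy(1.7, 0.3, 0.2)       # strongly anisotropic
```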

  4. Improving the Quality of Satellite Imagery Based on Ground-Truth Data from Rain Gauge Stations

    Directory of Open Access Journals (Sweden)

    Ana F. Militino

    2018-03-01

    Full Text Available Multitemporal imagery is by and large geometrically and radiometrically accurate, but the residual noise arising from cloud removal and other atmospheric and electronic effects can produce outliers that must be mitigated to properly exploit the remote sensing information. In this study, we show how ground-truth data from rain gauge stations can improve the quality of satellite imagery. To this end, a simulation study is conducted wherein different sizes of outlier outbreaks are spread and randomly introduced in the normalized difference vegetation index (NDVI) and the day and night land surface temperature (LST) of composite images from Navarre (Spain) between 2011 and 2015. To remove outliers, a new method called thin-plate splines with covariates (TpsWc) is proposed. This method consists of smoothing the median anomalies with a thin-plate spline model, whereby transformed ground-truth data are the external covariates of the model. The performance of the proposed method is measured with the square root of the mean square error (RMSE), calculated as the root of the pixel-by-pixel mean square differences between the original data and the predicted data with the TpsWc model and with a state-space model with and without covariates. The study shows that the use of ground-truth data reduces the RMSE in both the TpsWc model and the state-space model used for comparison purposes. The new method successfully removes the abnormal data while preserving the phenology of the raw data. The RMSE reduction percentage varies according to the derived variables (NDVI or LST), but reductions of up to 20% are achieved with the new proposal.
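
    The record above scores each candidate model by the root of the pixel-by-pixel mean square differences between the original and the predicted composites. A minimal sketch of that RMSE criterion in plain Python; the toy NDVI values are illustrative, not from the study:

```python
import math

def rmse(original, predicted):
    """Root of the pixel-by-pixel mean square difference between two
    images, here flattened to plain lists of pixel values."""
    if len(original) != len(predicted):
        raise ValueError("images must have the same number of pixels")
    n = len(original)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(original, predicted)) / n)

# Toy NDVI composite: two pixels (0.10 and 0.95) are outlier outbreaks
# that a smoothing model such as TpsWc would be expected to repair.
original  = [0.61, 0.58, 0.10, 0.62, 0.95, 0.60]
predicted = [0.60, 0.58, 0.59, 0.62, 0.61, 0.60]
score = rmse(original, predicted)
```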

  5. Reference-free ground truth metric for metal artifact evaluation in CT images

    International Nuclear Information System (INIS)

    Kratz, Baerbel; Ens, Svitlana; Mueller, Jan; Buzug, Thorsten M.

    2011-01-01

    Purpose: In computed tomography (CT), metal objects in the region of interest introduce data inconsistencies during acquisition. Reconstructing these data results in an image with star shaped artifacts induced by the metal inconsistencies. To enhance image quality, the influence of the metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches a ground truth reference data set is needed. In technical evaluations, where phantoms can be measured with and without metal inserts, ground truth data can easily be obtained by a second reference data acquisition. Obviously, this is not possible for clinical data. Here, an alternative evaluation method is presented without the need for an additionally acquired reference data set. Methods: The proposed metric is based on an inherent ground truth for metal artifact evaluation as well as for MAR method comparison, where no reference information in terms of a second acquisition is needed. The method is based on the forward projection of a reconstructed image, which is compared to the actually measured projection data. Results: The new evaluation technique is performed on phantom and on clinical CT data with and without MAR. The metric results are then compared with methods using a reference data set as well as an expert-based classification. It is shown that the new approach is an adequate quantification technique for artifact strength in reconstructed metal or MAR CT images. Conclusions: The presented method works solely on the original projection data itself, which yields some advantages compared to distance measures in image domain using two data sets. Besides this, no parameters have to be manually chosen. The new metric is a useful evaluation alternative when no reference data are available.
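
    The core idea in the record above is to re-project the reconstructed image and compare it with the projection data that were actually measured. A toy illustration using only 0°/90° parallel-beam projections (row and column sums) and a mean-absolute-difference score; both are deliberate simplifications, not the authors' exact formulation:

```python
def forward_project(image):
    """Parallel-beam projections at 0 and 90 degrees: row sums then column sums."""
    rows = [sum(r) for r in image]
    cols = [sum(c) for c in zip(*image)]
    return rows + cols

def inconsistency(reconstructed, measured_projections):
    """Mean absolute difference between the re-projected reconstruction and
    the actually measured projection data (lower = fewer inconsistencies)."""
    reproj = forward_project(reconstructed)
    return sum(abs(a - b) for a, b in zip(reproj, measured_projections)) / len(reproj)

# Toy 2x2 case: a perfect reconstruction reproduces its own sinogram,
# an artifact-laden one does not.
measured = forward_project([[1.0, 2.0], [3.0, 4.0]])
perfect  = [[1.0, 2.0], [3.0, 4.0]]
artifact = [[1.0, 2.5], [2.5, 4.0]]
score_perfect  = inconsistency(perfect, measured)
score_artifact = inconsistency(artifact, measured)
```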

  6. Field Ground Truthing Data Collector - a Mobile Toolkit for Image Analysis and Processing

    Science.gov (United States)

    Meng, X.

    2012-07-01

    Field Ground Truthing Data Collector is one of the four key components of the NASA funded ICCaRS project, being developed in Southeast Michigan. The ICCaRS ground truthing toolkit entertains comprehensive functions: 1) Field functions, including determining locations through GPS, gathering and geo-referencing visual data, laying out ground control points for AEROKAT flights, measuring the flight distance and height, and entering observations of land cover (and use) and health conditions of ecosystems and environments in the vicinity of the flight field; 2) Server synchronization functions, such as, downloading study-area maps, aerial photos and satellite images, uploading and synchronizing field-collected data with the distributed databases, calling the geospatial web services on the server side to conduct spatial querying, image analysis and processing, and receiving the processed results in field for near-real-time validation; and 3) Social network communication functions for direct technical assistance and pedagogical support, e.g., having video-conference calls in field with the supporting educators, scientists, and technologists, participating in Webinars, or engaging discussions with other-learning portals. This customized software package is being built on Apple iPhone/iPad and Google Maps/Earth. The technical infrastructures, data models, coupling methods between distributed geospatial data processing and field data collector tools, remote communication interfaces, coding schema, and functional flow charts will be illustrated and explained at the presentation. A pilot case study will be also demonstrated.

  7. FIELD GROUND TRUTHING DATA COLLECTOR – A MOBILE TOOLKIT FOR IMAGE ANALYSIS AND PROCESSING

    Directory of Open Access Journals (Sweden)

    X. Meng

    2012-07-01

    Full Text Available Field Ground Truthing Data Collector is one of the four key components of the NASA funded ICCaRS project, being developed in Southeast Michigan. The ICCaRS ground truthing toolkit entertains comprehensive functions: 1) Field functions, including determining locations through GPS, gathering and geo-referencing visual data, laying out ground control points for AEROKAT flights, measuring the flight distance and height, and entering observations of land cover (and use) and health conditions of ecosystems and environments in the vicinity of the flight field; 2) Server synchronization functions, such as, downloading study-area maps, aerial photos and satellite images, uploading and synchronizing field-collected data with the distributed databases, calling the geospatial web services on the server side to conduct spatial querying, image analysis and processing, and receiving the processed results in field for near-real-time validation; and 3) Social network communication functions for direct technical assistance and pedagogical support, e.g., having video-conference calls in field with the supporting educators, scientists, and technologists, participating in Webinars, or engaging discussions with other-learning portals. This customized software package is being built on Apple iPhone/iPad and Google Maps/Earth. The technical infrastructures, data models, coupling methods between distributed geospatial data processing and field data collector tools, remote communication interfaces, coding schema, and functional flow charts will be illustrated and explained at the presentation. A pilot case study will be also demonstrated.

  8. Ground Truth Studies - A hands-on environmental science program for students, grades K-12

    Science.gov (United States)

    Katzenberger, John; Chappell, Charles R.

    1992-01-01

    The paper discusses the background and the objectives of the Ground Truth Studies (GTSs), an activity-based teaching program which integrates local environmental studies with global change topics, utilizing remotely sensed earth imagery. Special attention is given to the five key concepts around which the GTS programs are organized, the pilot program, the initial pilot study evaluation, and the GTS Handbook. The GTS Handbook contains a primer on global change and remote sensing, aerial and satellite images, student activities, glossary, and an appendix of reference material. Also described is a K-12 teacher training model. International participation in the program is to be initiated during the 1992-1993 school year.

  9. Modified ground-truthing: an accurate and cost-effective food environment validation method for town and rural areas.

    Science.gov (United States)

    Caspi, Caitlin Eicher; Friebur, Robin

    2016-03-17

    A major concern in food environment research is the lack of accuracy in commercial business listings of food stores, which are convenient and commonly used. Accuracy concerns may be particularly pronounced in rural areas. Ground-truthing or on-site verification has been deemed the necessary standard to validate business listings, but researchers perceive this process to be costly and time-consuming. This study calculated the accuracy and cost of ground-truthing three town/rural areas in Minnesota, USA (an area of 564 miles, or 908 km), and simulated a modified validation process to increase efficiency without compromising accuracy. For traditional ground-truthing, all streets in the study area were driven, while the route and geographic coordinates of food stores were recorded. The process required 1510 miles (2430 km) of driving and 114 staff hours. The ground-truthed list of stores was compared with commercial business listings, which had an average positive predictive value (PPV) of 0.57 and sensitivity of 0.62 across the three sites. Using observations from the field, a modified process was proposed in which only the streets located within central commercial clusters (the 1/8 mile or 200 m buffer around any cluster of 2 stores) would be validated. Modified ground-truthing would have yielded an estimated PPV of 1.00 and sensitivity of 0.95, and would have resulted in a reduction in approximately 88 % of the mileage costs. We conclude that ground-truthing is necessary in town/rural settings. The modified ground-truthing process, with excellent accuracy at a fraction of the costs, suggests a new standard and warrants further evaluation.
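
    The accuracy figures in the record above are positive predictive value (the share of listed stores that actually exist) and sensitivity (the share of on-site-verified stores that appear in the listing). A small sketch with hypothetical store identifiers:

```python
def listing_accuracy(listed, ground_truth):
    """PPV and sensitivity of a commercial business listing, scored
    against the set of stores verified on-site (the ground truth)."""
    listed, ground_truth = set(listed), set(ground_truth)
    true_positives = len(listed & ground_truth)
    ppv = true_positives / len(listed)            # listed stores that exist
    sensitivity = true_positives / len(ground_truth)  # real stores that were listed
    return ppv, sensitivity

# Hypothetical example: the commercial listing names four stores, three of
# which exist; driving the study area finds five stores in total.
ppv, sens = listing_accuracy({"A", "B", "C", "D"}, {"A", "B", "C", "E", "F"})
```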

  10. First- and third-party ground truth for key frame extraction from consumer video clips

    Science.gov (United States)

    Costello, Kathleen; Luo, Jiebo

    2007-02-01

    Extracting key frames (KF) from video is of great interest in many applications, such as video summary, video organization, video compression, and prints from video. KF extraction is not a new problem. However, current literature has been focused mainly on sports or news video. In the consumer video space, the biggest challenges for key frame selection from consumer videos are the unconstrained content and lack of any preimposed structure. In this study, we conduct ground truth collection of key frames from video clips taken by digital cameras (as opposed to camcorders) using both first- and third-party judges. The goals of this study are: (1) to create a reference database of video clips reasonably representative of the consumer video space; (2) to identify associated key frames by which automated algorithms can be compared and judged for effectiveness; and (3) to uncover the criteria used by both first- and third-party human judges so these criteria can influence algorithm design. The findings from these ground truths will be discussed.

  11. Modeling multiple time series annotations as noisy distortions of the ground truth: An Expectation-Maximization approach.

    Science.gov (United States)

    Gupta, Rahul; Audhkhasi, Kartik; Jacokes, Zach; Rozga, Agata; Narayanan, Shrikanth

    2018-01-01

    Studies of time-continuous human behavioral phenomena often rely on ratings from multiple annotators. Since the ground truth of the target construct is often latent, the standard practice is to use ad-hoc metrics (such as averaging annotator ratings). Despite being easy to compute, such metrics may not provide accurate representations of the underlying construct. In this paper, we present a novel method for modeling multiple time series annotations over a continuous variable that computes the ground truth by modeling annotator specific distortions. We condition the ground truth on a set of features extracted from the data and further assume that the annotators provide their ratings as modifications of the ground truth, with each annotator having specific distortion tendencies. We train the model using an Expectation-Maximization based algorithm and evaluate it on a study involving natural interaction between a child and a psychologist, to predict confidence ratings of the children's smiles. We compare and analyze the model against two baselines where: (i) the ground truth is considered to be the framewise mean of ratings from various annotators and (ii) each annotator is assumed to bear a distinct time delay in annotation and their annotations are aligned before computing the framewise mean.
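
    A stripped-down sketch of the idea in the record above: each annotator's rating is modeled as a linear distortion y_a = b_a·x + c_a of a latent ground truth x, and an EM-style loop alternates between re-estimating x (E-step) and refitting each annotator's distortion parameters (M-step). Without the feature conditioning used in the paper, x is only identifiable up to an affine transform; all numbers below are illustrative:

```python
def fit_line(x, y):
    """Ordinary least squares for y ~ b*x + c."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

def em_ground_truth(ratings, iters=20):
    """ratings[a][t] is annotator a's rating at time t."""
    n_ann, n_t = len(ratings), len(ratings[0])
    # Initialize the latent truth with the framewise mean of the ratings.
    truth = [sum(r[t] for r in ratings) / n_ann for t in range(n_t)]
    for _ in range(iters):
        params = [fit_line(truth, r) for r in ratings]                  # M-step
        truth = [sum((r[t] - c) / b for r, (b, c) in zip(ratings, params)) / n_ann
                 for t in range(n_t)]                                   # E-step
    return truth

latent = [0.0, 1.0, 2.0, 3.0, 4.0]
ratings = [[2.0 * x + 1.0 for x in latent],      # annotator gain 2.0, offset +1.0
           [0.5 * x - 0.2 for x in latent]]      # annotator gain 0.5, offset -0.2
truth = em_ground_truth(ratings)
params = [fit_line(truth, r) for r in ratings]   # recovered per-annotator distortions
```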

  12. A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique

    International Nuclear Information System (INIS)

    Dou, Tai H.; Thomas, David H.; O'Connell, Dylan P.; Lamb, James M.; Lee, Percy; Low, Daniel A.

    2015-01-01

    Purpose: To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process by patient-specific breathing motion model using the original free-breathing computed tomographic (CT) scans as ground truths. Methods: Sixteen lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields that mapped the reference image to the 24 nonreference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the pointwise discordance throughout the lungs. Results: Qualitative comparisons using image overlays showed excellent agreement between the simulated images and the original images. Even large 2-cm diaphragm displacements were very well modeled, as was sliding motion across the lung–chest wall boundary. The mean error across the patient cohort was 1.15 ± 0.37 mm, and the mean 95th percentile error was 2.47 ± 0.78 mm. Conclusion: The proposed ground truth–based technique provided voxel-by-voxel accuracy analysis that could identify organ-specific or tumor-specific motion modeling errors for treatment planning. Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5-dimensional CT technique was able to accurately reproduce the original helical CT scans, suggesting its

  13. A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique

    Energy Technology Data Exchange (ETDEWEB)

    Dou, Tai H., E-mail: tdou@mednet.ucla.edu; Thomas, David H.; O'Connell, Dylan P.; Lamb, James M.; Lee, Percy; Low, Daniel A.

    2015-11-15

    Purpose: To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process by patient-specific breathing motion model using the original free-breathing computed tomographic (CT) scans as ground truths. Methods: Sixteen lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields that mapped the reference image to the 24 nonreference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the pointwise discordance throughout the lungs. Results: Qualitative comparisons using image overlays showed excellent agreement between the simulated images and the original images. Even large 2-cm diaphragm displacements were very well modeled, as was sliding motion across the lung–chest wall boundary. The mean error across the patient cohort was 1.15 ± 0.37 mm, and the mean 95th percentile error was 2.47 ± 0.78 mm. Conclusion: The proposed ground truth–based technique provided voxel-by-voxel accuracy analysis that could identify organ-specific or tumor-specific motion modeling errors for treatment planning. Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5-dimensional CT technique was able to accurately reproduce the original helical CT scans, suggesting its
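
    The accuracy numbers reported above (mean error and 95th percentile error) can be computed from the per-voxel discrepancy magnitudes produced by the deformable registration. A minimal sketch using a nearest-rank percentile; the error magnitudes are made up for illustration:

```python
import math

def error_stats(magnitudes_mm):
    """Mean and nearest-rank 95th percentile of voxelwise error magnitudes (mm)."""
    s = sorted(magnitudes_mm)
    mean = sum(s) / len(s)
    p95 = s[max(0, math.ceil(0.95 * len(s)) - 1)]
    return mean, p95

# Made-up discrepancies: 100 voxels with errors 0.01 mm .. 1.00 mm.
errors = [i / 100 for i in range(1, 101)]
mean_err, p95_err = error_stats(errors)
```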

  14. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    Science.gov (United States)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating a ground truth dataset for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels, in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds from different sensors of the same scene directly by considering the labels of the 3D spaces in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
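
    A compact sketch of the voxel-labeling machinery described above: points from the annotated reference cloud vote for a per-voxel label, and a point cloud from another sensor (assumed already registered) is then labeled by voxel lookup. A single fixed resolution stands in for the paper's octree of multiple resolutions, and all coordinates and labels are illustrative:

```python
def voxel_key(point, resolution):
    """Integer 3D grid index of a point at the given voxel edge length."""
    return tuple(int(c // resolution) for c in point)

def build_label_grid(points, labels, resolution):
    """Majority-vote a semantic label for every occupied voxel."""
    votes = {}
    for p, label in zip(points, labels):
        counts = votes.setdefault(voxel_key(p, resolution), {})
        counts[label] = counts.get(label, 0) + 1
    return {key: max(counts, key=counts.get) for key, counts in votes.items()}

def annotate(points, grid, resolution, default="unlabeled"):
    """Label registered points from another sensor by voxel lookup."""
    return [grid.get(voxel_key(p, resolution), default) for p in points]

reference = [(0.2, 0.2, 0.1), (0.4, 0.9, 0.3), (1.5, 0.2, 0.0)]
ref_labels = ["facade", "facade", "ground"]
grid = build_label_grid(reference, ref_labels, resolution=1.0)
new_labels = annotate([(0.7, 0.5, 0.5), (1.1, 0.9, 0.2), (5.0, 5.0, 5.0)],
                      grid, resolution=1.0)
```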

  15. A new device for acquiring ground truth on the absorption of light by turbid waters

    Science.gov (United States)

    Klemas, V. (Principal Investigator); Srna, R.; Treasure, W.

    1974-01-01

    The author has identified the following significant results. A new device, called a Spectral Attenuation Board, has been designed and tested, which enables ERTS-1 sea truth collection teams to monitor the attenuation depths of three colors continuously, as the board is being towed behind a boat. The device consists of a 1.2 x 1.2 meter flat board held below the surface of the water at a fixed angle to the surface of the water. A camera mounted above the water takes photographs of the board. The resulting film image is analyzed by a micro-densitometer trace along the descending portion of the board. This yields information on the rate of attenuation of light penetrating the water column and the Secchi depth. Red and green stripes were painted on the white board to approximate band 4 and band 5 of the ERTS MSS so that information on the rate of light absorption by the water column of light in these regions of the visible spectrum could be concurrently measured. It was found that information from a red, green, and white stripe may serve to fingerprint the composition of the water mass. A number of these devices, when automated, could also be distributed over a large region to provide a cheap method of obtaining valuable satellite ground truth data at preset time intervals.

  16. SIR-C/X-SAR data calibration and ground truth campaign over the NASA-CB1 test-site

    International Nuclear Information System (INIS)

    Notarnicola, C.; Posa, F.; Refice, A.; Sergi, R.; Smacchia, P.; Casarano, D.; De Carolis, G.; Mattia, F.; Schena, V.D.

    2001-01-01

    During the Space Shuttle Endeavour mission in October 1994, a remote-sensing campaign was carried out with the objectives of both radiometric and polarimetric calibration and ground truth data acquisition of bare soils. This paper presents the results obtained in the experiment. Polarimetric cross-talk and channel imbalance values, as well as radiometric calibration parameters, have been found to be within the science requirements for SAR images. Regarding ground truth measurements, a wide spread in the height rms values and correlation lengths has been observed, which motivated a critical revisiting of surface parameter descriptors.

  17. Combining Ground-Truthing and Technology to Improve Accuracy in Establishing Children's Food Purchasing Behaviors.

    Science.gov (United States)

    Coakley, Hannah Lee; Steeves, Elizabeth Anderson; Jones-Smith, Jessica C; Hopkins, Laura; Braunstein, Nadine; Mui, Yeeli; Gittelsohn, Joel

    Developing nutrition-focused environmental interventions for youth requires accurate assessment of where they purchase food. We have developed an innovative, technology-based method to improve the accuracy of food source recall among children using a tablet PC and ground-truthing methodologies. As part of the B'more Healthy Communities for Kids study, we mapped and digitally photographed every food source within a half-mile radius of 14 Baltimore City recreation centers. This food source database was then used with children from the surrounding neighborhoods to search for and identify the food sources they frequent. This novel integration of traditional data collection and technology enables researchers to gather highly accurate information on food source usage among children in Baltimore City. Funding is provided by the NICHD U-54 Grant #1U54HD070725-02.

  18. A calibration system for measuring 3D ground truth for validation and error analysis of robot vision algorithms

    Science.gov (United States)

    Stolkin, R.; Greig, A.; Gilby, J.

    2006-10-01

    An important task in robot vision is that of determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. However, it is seldom possible to explicitly measure the accuracy of these algorithms, since the ground-truth camera positions and orientations at each frame in a video sequence are not available for comparison with the outputs of the proposed vision systems. A method is presented for generating real visual test data with complete underlying ground truth. The method enables the production of long video sequences, filmed along complicated six-degree-of-freedom trajectories, featuring a variety of objects and scenes, for which complete ground-truth data are known including the camera position and orientation at every image frame, intrinsic camera calibration data, a lens distortion model and models of the viewed objects. This work encounters a fundamental measurement problem—how to evaluate the accuracy of measured ground truth data, which is itself intended for validation of other estimated data. Several approaches for reasoning about these accuracies are described.

  19. Ground truth methods for optical cross-section modeling of biological aerosols

    Science.gov (United States)

    Kalter, J.; Thrush, E.; Santarpia, J.; Chaudhry, Z.; Gilberry, J.; Brown, D. M.; Brown, A.; Carter, C. C.

    2011-05-01

    Light detection and ranging (LIDAR) systems have demonstrated some capability to meet the needs of a fast-response standoff biological detection method for simulants in open air conditions. These systems are designed to exploit various cloud signatures, such as differential elastic backscatter, fluorescence, and depolarization in order to detect biological warfare agents (BWAs). However, because the release of BWAs in open air is forbidden, methods must be developed to predict candidate system performance against real agents. In support of such efforts, the Johns Hopkins University Applied Physics Lab (JHU/APL) has developed a modeling approach to predict the optical properties of agent materials from relatively simple, Biosafety Level 3-compatible bench top measurements. JHU/APL has fielded new ground truth instruments (in addition to standard particle sizers, such as the Aerodynamic particle sizer (APS) or GRIMM aerosol monitor (GRIMM)) to more thoroughly characterize the simulant aerosols released in recent field tests at Dugway Proving Ground (DPG). These instruments include the Scanning Mobility Particle Sizer (SMPS), the Ultraviolet Aerodynamic Particle Sizer (UVAPS), and the Aspect Aerosol Size and Shape Analyser (Aspect). The SMPS was employed as a means of measuring small-particle concentrations for more accurate Mie scattering simulations; the UVAPS, which measures size-resolved fluorescence intensity, was employed as a path toward fluorescence cross section modeling; and the Aspect, which measures particle shape, was employed as a path towards depolarization modeling.

  20. An Empirical Study of Atmospheric Correction Procedures for Regional Infrasound Amplitudes with Ground Truth.

    Science.gov (United States)

    Howard, J. E.

    2014-12-01

    This study focusses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000-4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind as developed by Mutschlecner and Whitaker (1999) and uses the average wind speed between 45-55 km altitudes in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested with the smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.
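
    The first (empirical) approach above normalizes observed amplitudes by the 45-55 km wind component along the propagation direction, in the spirit of Mutschlecner and Whitaker (1999). A sketch of that correction; the coefficient k below is purely an illustrative placeholder, since in practice it is fit empirically to the data:

```python
import math

def along_path_wind(wind_u, wind_v, azimuth_deg):
    """Wind component (m/s) along the propagation azimuth, with wind given as
    eastward (u) and northward (v) components and azimuth clockwise from north."""
    az = math.radians(azimuth_deg)
    return wind_u * math.sin(az) + wind_v * math.cos(az)

def correct_amplitude(observed, wind_u, wind_v, azimuth_deg, k=0.019):
    """Wind-corrected amplitude A_corr = A_obs * 10**(-k * v_d).
    k is a hypothetical value standing in for the fitted coefficient."""
    v_d = along_path_wind(wind_u, wind_v, azimuth_deg)
    return observed * 10 ** (-k * v_d)

# Eastward propagation (azimuth 90 deg) with a 10 m/s eastward stratospheric wind:
corrected = correct_amplitude(1.0, wind_u=10.0, wind_v=0.0, azimuth_deg=90.0)
```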

  1. Evaluation of digital image correlation techniques using realistic ground truth speckle images

    International Nuclear Information System (INIS)

    Cofaru, C; Philips, W; Van Paepegem, W

    2010-01-01

    Digital image correlation (DIC) has been acknowledged and widely used in recent years in the field of experimental mechanics as a contactless method for determining full field displacements and strains. Even though several sub-pixel motion estimation algorithms have been proposed in the literature, little is known about their accuracy and limitations in reproducing complex underlying motion fields occurring in real mechanical tests. This paper presents a new method for evaluating sub-pixel motion estimation algorithms using ground truth speckle images that are realistically warped using artificial motion fields that were obtained following two distinct approaches: in the first, the horizontal and vertical displacement fields are created according to theoretical formulas for the given type of experiment while the second approach constructs the displacements through radial basis function interpolation starting from real DIC results. The method is applied in the evaluation of five DIC algorithms with results indicating that the gradient-based DIC methods generally have a quality advantage when using small sized blocks and are a better choice for calculating very small displacements and strains. The Newton–Raphson is the overall best performing method with a notable quality advantage when large block sizes are employed and in experiments where large strain fields are of interest.
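
    The evaluation logic in the record above, boiled down to one dimension: warp a speckle signal with a known (ground truth) displacement, run a matching estimator, and measure the estimator's error against the known field. The integer-shift SSD matcher below is a deliberately crude stand-in for the sub-pixel DIC algorithms under test:

```python
def shift(signal, d):
    """Ground-truth warp: translate a 1D speckle signal by d samples (zero fill)."""
    n = len(signal)
    return [signal[i - d] if 0 <= i - d < n else 0.0 for i in range(n)]

def estimate_shift(reference, deformed, max_d=5):
    """Displacement minimizing the sum of squared differences (SSD)."""
    def ssd(d):
        return sum((a - b) ** 2 for a, b in zip(shift(reference, d), deformed))
    return min(range(-max_d, max_d + 1), key=ssd)

# Synthetic speckle pattern, warped by a known displacement of 2 samples.
speckle = [0.1, 0.9, 0.3, 0.7, 0.2, 0.8, 0.4, 0.6, 0.15, 0.95]
true_d = 2
deformed = shift(speckle, true_d)
error = estimate_shift(speckle, deformed) - true_d  # evaluation vs. ground truth
```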

  2. Ground truth measurements plan for the Multispectral Thermal Imager (MTI) satellite

    Energy Technology Data Exchange (ETDEWEB)

    Garrett, A.J.

    2000-01-03

    Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), and the Savannah River Technology Center (SRTC) have developed a diverse group of algorithms for processing and analyzing the data that will be collected by the Multispectral Thermal Imager (MTI) after launch late in 1999. Each of these algorithms must be verified by comparison to independent surface and atmospheric measurements. SRTC has selected 13 sites in the continental U.S. for ground truth data collections. These sites include a high altitude cold water target (Crater Lake), cooling lakes and towers in the warm, humid southeastern US, Department of Energy (DOE) climate research sites, the NASA Stennis satellite Validation and Verification (V and V) target array, waste sites at the Savannah River Site, mining sites in the Four Corners area and dry lake beds in the southwestern US. SRTC has established mutually beneficial relationships with the organizations that manage these sites to make use of their operating and research data and to install additional instrumentation needed for MTI algorithm V and V.

  3. Automatic Barometric Updates from Ground-Based Navigational Aids

    Science.gov (United States)

    1990-03-12

    Automatic Barometric Updates from Ground-Based Navigational Aids. US Department of Transportation, Federal Aviation Administration, Office of Safety. Only fragments of the scanned record are recoverable: "...tighter vertical spacing controls, particularly for operations near Terminal Control Areas (TCAs), Airport Radar Service Areas (ARSAs), military climb and..."; a cited reference is also legible: "...E.F., Ruth, J.C., and Williges, B.H. (1987). Speech Controls and Displays. In Salvendy, G., Ed., Handbook of Human Factors/Ergonomics, New York, John..."

  4. How Many Subjects are Needed for a Visual Field Normative Database? A Comparison of Ground Truth and Bootstrapped Statistics.

    Science.gov (United States)

    Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K

    2018-03-01

    The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different x and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different to the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
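
    The resampling procedure described above can be sketched as follows: draw bootstrap subsamples of a given set size from the full ("ground truth") cohort, compute the descriptive statistics on each, and average them across resamples to see how closely a given set size reproduces the full-cohort statistics. The cohort values below are simulated Gaussian sensitivities, not HFA data:

```python
import math
import random
import statistics

def percentile(values, q):
    """Nearest-rank percentile."""
    s = sorted(values)
    return s[max(0, math.ceil(q / 100 * len(s)) - 1)]

def bootstrap_limits(cohort, set_size, n_resamples=500, seed=1):
    """Average mean, 5th/95th percentiles, and SD over bootstrap subsamples
    of the given set size, drawn with replacement from the full cohort."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_resamples):
        s = [rng.choice(cohort) for _ in range(set_size)]
        stats.append((sum(s) / set_size, percentile(s, 5), percentile(s, 95),
                      statistics.pstdev(s)))
    return [sum(col) / n_resamples for col in zip(*stats)]

# Simulated "ground truth" cohort of 116 sensitivities (mean 30 dB, SD 2 dB).
rng = random.Random(0)
cohort = [rng.gauss(30.0, 2.0) for _ in range(116)]
small = bootstrap_limits(cohort, set_size=20)
large = bootstrap_limits(cohort, set_size=100)
```

With larger set sizes the bootstrapped mean converges toward the full-cohort mean and the 95th-percentile limit moves outward, mirroring the set-size effect the study quantifies.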

  5. Automatic Scheduling and Planning (ASAP) in future ground control systems

    Science.gov (United States)

    Matlin, Sam

    1988-01-01

    This report describes two complementary approaches to the problem of space mission planning and scheduling. The first is an Expert System or Knowledge-Based System for automatically resolving most of the activity conflicts in a candidate plan. The second is an Interactive Graphics Decision Aid to assist the operator in manually resolving the residual conflicts which are beyond the scope of the Expert System. The two system designs are consistent with future ground control station activity requirements, supporting activity timing constraints, resource limits, and activity priority guidelines.

  6. Towards ground-truthing of spaceborne estimates of above-ground biomass and leaf area index in tropical rain forests

    Science.gov (United States)

    Köhler, P.; Huth, A.

    2010-05-01

    The canopy height of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or lidar. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI). The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree-logging conditions for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. It is found that for undisturbed forest and a variety of disturbed forest situations AGB can be expressed as a power-law function of canopy height h (AGB = a·h^b) with an r2 ~ 60% for a spatial resolution of 20 m × 20 m (0.04 ha, also called plot size). The regression becomes significantly better for the hectare-wide analysis of the disturbed forest sites (r2 = 91%). There seems to be no functional dependency between LAI and canopy height, but there is also a linear correlation (r2 ~ 60%) between AGB and the area fraction in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a comparison of the simulations with permanent sampling plot data from the same region and with the large-scale forest inventory in Lambir. We conclude that the spaceborne remote sensing techniques have the potential to
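    A power law of the form AGB = a·h^b is conventionally fitted by linear regression in log-log space. A minimal sketch with synthetic, noise-free values (the coefficients below are invented for illustration, not the study's):

```python
import math

def fit_power_law(heights, agb):
    """Fit AGB = a * h^b by least squares on ln(AGB) = ln(a) + b*ln(h)."""
    xs = [math.log(h) for h in heights]
    ys = [math.log(v) for v in agb]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression is the exponent b
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)  # intercept back-transformed to the prefactor
    return a, b

# Synthetic canopy heights (m) and biomass values obeying AGB = 2 * h^1.5
heights = [10.0, 15.0, 20.0, 25.0, 30.0]
agb = [2.0 * h ** 1.5 for h in heights]
a, b = fit_power_law(heights, agb)
```

    With field or simulated data the fit is noisy, and the r2 of this log-log regression is the quantity the abstract reports (~60% at plot scale, 91% hectare-wide).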

  7. Surface Properties and Characteristics of Mars Landing Sites from Remote Sensing Data and Ground Truth

    Science.gov (United States)

    Golombek, M. P.; Haldemann, A. F.; Simpson, R. A.; Furgason, R. L.; Putzig, N. E.; Huertas, A.; Arvidson, R. E.; Heet, T.; Bell, J. F.; Mellon, M. T.; McEwen, A. S.

    2008-12-01

    Surface characteristics at the six sites where spacecraft have successfully landed on Mars can be related favorably to their signatures in remotely sensed data from orbit and from the Earth. Comparisons of the rock abundance, types and coverage of soils (and their physical properties), thermal inertia, albedo, and topographic slope all agree with orbital remote sensing estimates and show that the materials at the landing sites can be used as ground truth for the materials that make up most of the equatorial and mid- to moderately high-latitude regions of Mars. The six landing sites sample two of the three dominant global thermal inertia and albedo units that cover ~80% of the surface of Mars. The Viking, Spirit, Mars Pathfinder, and Phoenix landing sites are representative of the moderate to high thermal inertia and intermediate to high albedo unit that is dominated by crusty, cloddy, blocky or frozen soils (duricrust that may be layered) with various abundances of rocks and bright dust. The Opportunity landing site is representative of the moderate to high thermal inertia and low albedo surface unit that is relatively dust free and composed of dark eolian sand and/or increased abundance of rocks. Rock abundance derived from orbital thermal differencing techniques in the equatorial regions agrees with that determined from rock counts at the surface and varies from ~3-20% at the landing sites. The size-frequency distributions of rocks >1.5 m diameter fully resolvable in HiRISE images of the landing sites follow exponential models developed from lander measurements of smaller rocks and are continuous with these rock distributions indicating both are part of the same population. 
Interpretation of radar data confirms the presence of load bearing, relatively dense surfaces controlled by the soil type at the landing sites, regional rock populations from diffuse scattering similar to those observed directly at the sites, and root-mean-squared slopes that compare favorably

  8. Towards Autonomous Agriculture: Automatic Ground Detection Using Trinocular Stereovision

    Directory of Open Access Journals (Sweden)

    Annalisa Milella

    2012-09-01

    Full Text Available Autonomous driving is a challenging problem, particularly when the domain is unstructured, as in an outdoor agricultural setting. Thus, advanced perception systems are primarily required to sense and understand the surrounding environment, recognizing artificial and natural structures, topology, vegetation and paths. In this paper, a self-learning framework is proposed to automatically train a ground classifier for scene interpretation and autonomous navigation based on multi-baseline stereovision. The use of rich 3D data is emphasized where the sensor output includes range and color information of the surrounding environment. Two distinct classifiers are presented, one based on geometric data that can detect the broad class of ground and one based on color data that can further segment ground into subclasses. The geometry-based classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate the geometric appearance of 3D stereo-generated data with class labels. Then, it makes predictions based on past observations. It also serves to provide training labels to the color-based classifier. Once trained, the color-based classifier is able to recognize similar terrain classes in stereo imagery. The system is continuously updated online using the latest stereo readings, thus making it feasible for long range and long duration navigation, over changing environments. Experimental results, obtained with a tractor test platform operating in a rural environment, are presented to validate this approach, showing an average classification precision and recall of 91.0% and 77.3%, respectively.
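    The geometry-based stage can be illustrated with a much-simplified sketch: here "ground" is modeled as a single least-squares plane learned from a known-ground patch, and points are labeled by their residual height. This is an assumption for illustration; the paper's actual classifier uses richer geometric features and adapts online:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through an Nx3 point cloud."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def label_ground(points, coeffs, tol=0.1):
    """Label a point 'ground' if its height above the fitted plane is < tol (m)."""
    a, b, c = coeffs
    residual = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
    return np.abs(residual) < tol

# Synthetic stereo-like cloud: a flat noisy ground patch plus a raised obstacle
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(0, 10, (200, 2)), rng.normal(0.0, 0.02, 200)]
obstacle = np.c_[rng.uniform(4, 5, (20, 2)), rng.normal(1.0, 0.02, 20)]
cloud = np.vstack([ground, obstacle])
labels = label_ground(cloud, fit_plane(ground))  # train on the known-ground patch
```

    The self-supervised idea is that labels produced this way, from geometry alone, then train the color-based classifier.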

  9. Strategies for cloud-top phase determination: differentiation between thin cirrus clouds and snow in manual (ground truth) analyses

    Science.gov (United States)

    Hutchison, Keith D.; Etherton, Brian J.; Topping, Phillip C.

    1996-12-01

    Quantitative assessments of the performance of automated cloud analysis algorithms require the creation of highly accurate, manual cloud/no-cloud (CNC) images from multispectral meteorological satellite data. In general, the methodology to create ground truth analyses for the evaluation of cloud detection algorithms is relatively straightforward. However, when focus shifts toward quantifying the performance of automated cloud classification algorithms, the task of creating ground truth images becomes much more complicated, since these CNC analyses must differentiate between water and ice cloud tops while ensuring that inaccuracies in automated cloud detection are not propagated into the results of the cloud classification algorithm. The process of creating these ground truth CNC analyses may become particularly difficult when little or no spectral signature is evident between a cloud and its background, as appears to be the case when thin cirrus is present over snow-covered surfaces. In this paper, procedures are described that enhance the researcher's ability to manually interpret and differentiate between thin cirrus clouds and snow-covered surfaces in daytime AVHRR imagery. The methodology uses data in up to six AVHRR spectral bands, including an additional band derived from the daytime 3.7 micron channel, which has proven invaluable for the manual discrimination between thin cirrus clouds and snow. It is concluded that the 1.6 micron channel remains essential for differentiating between thin ice clouds and snow; however, this capability may be lost if the 3.7 micron data switch to nighttime-only transmission with the launch of future NOAA satellites.

  10. Ground-truth aerosol lidar observations: can the Klett solutions obtained from ground and space be equal for the same aerosol case?

    International Nuclear Information System (INIS)

    Ansmann, Albert

    2006-01-01

    Upcoming multiyear satellite lidar aerosol observations need strong support by a worldwide ground-truth lidar network. In this context the question arises as to whether the ground stations can deliver the same results as obtained from space when the Klett formalism is applied to elastic backscatter lidar data for the same aerosol case. This question is investigated based on simulations of observed cases of simple and complex aerosol layering. The results show that the differences between spaceborne and ground-based observations can be as large as 20% for the backscatter and extinction coefficients and the optimum estimates of the column lidar ratios. In cases with complex aerosol layering, the application of the two-layer approach can lead to similar results (space, ground) and accurate products provided that horizontally homogeneous aerosol conditions are given.
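    For reference, the "Klett formalism" mentioned above is the inversion of the single-scatter elastic lidar equation. One standard form of the backward (far-end) solution for the extinction coefficient, written for a range-independent lidar ratio and a boundary (calibration) range r_m, is (notation mine, not from the abstract):

```latex
S(r) = \ln\!\left[P(r)\,r^{2}\right], \qquad
\sigma(r) = \frac{\exp\!\left[S(r)-S(r_{m})\right]}
                 {\dfrac{1}{\sigma(r_{m})} + 2\displaystyle\int_{r}^{r_{m}} \exp\!\left[S(r')-S(r_{m})\right]\,\mathrm{d}r'}
```

    The ground-versus-space asymmetry probed in the abstract enters through the boundary value σ(r_m): a ground-based lidar sets it at the far (upper) end of the profile, a spaceborne lidar near the surface, so boundary-value errors propagate differently through the integral in the two geometries.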

  11. Assessment of infrasound signals recorded on seismic stations and infrasound arrays in the western United States using ground truth sources

    Science.gov (United States)

    Park, Junghyun; Hayward, Chris; Stump, Brian W.

    2018-06-01

    Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. Ground truth sources consist of 28 long duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended duration signals generated by the rocket burn test (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple year time period.
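    The triad-association step described above amounts to a plane-wave fit: two arrival-time differences across three stations determine the horizontal slowness vector, hence phase velocity and back azimuth. A minimal sketch; the station layout and wave parameters are invented for the example, and the sign/azimuth conventions are assumptions:

```python
import math

def triad_slowness(coords, times):
    """Plane-wave fit for a three-station triad.
    coords: [(east, north)] in km; times: arrival times in s.
    Returns (phase_velocity_km_s, back_azimuth_deg clockwise from north)."""
    (x0, y0), (x1, y1), (x2, y2) = coords
    t0, t1, t2 = times
    # Two independent time differences give a 2x2 system A @ s = dt
    a11, a12, b1 = x1 - x0, y1 - y0, t1 - t0
    a21, a22, b2 = x2 - x0, y2 - y0, t2 - t0
    det = a11 * a22 - a12 * a21
    sx = (b1 * a22 - b2 * a12) / det   # slowness east  (s/km)
    sy = (a11 * b2 - a21 * b1) / det   # slowness north (s/km)
    speed = 1.0 / math.hypot(sx, sy)
    back_az = math.degrees(math.atan2(-sx, -sy)) % 360.0  # toward the source
    return speed, back_az

# Synthetic check: a 0.34 km/s acoustic arrival from due east (back azimuth 90)
coords = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
s = (-1.0 / 0.34, 0.0)  # slowness vector of a westward-propagating wave
times = [s[0] * x + s[1] * y for x, y in coords]
speed, back_az = triad_slowness(coords, times)
```

    With many triads from the Delaunay network, filtering on physically plausible phase velocities (near the sound speed) is what associates detections with a given source region.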

  12. Ground truthing for methane hotspots at Railroad Valley, NV - application to Mars

    Science.gov (United States)

    Detweiler, A. M.; Kelley, C. A.; Bebout, B.; McKay, C. P.; DeMarines, J.; Yates, E. L.; Iraci, L. T.

    2011-12-01

    .7%. Temperature and relative humidity sensors were placed in the playa at 5, 20, and 30 cm below the surface. Since the relative humidity neared 100% (down to 20 cm below the surface), high enough to support microbial life, the observed absence of methane production in the playa itself is likely due to the low POC content, compared to other methane-producing environments. The spatial distribution of methane in combination with the spectral reflectance at the RRV dry lakebed makes it a good Mars analog. The ground truthing and satellite calibration work accomplished at RRV is a good exercise in preparation to identifying the origins of methane observed in the atmosphere of Mars during the upcoming 2012 Mars Science Laboratory and 2016 ExoMars Trace Gas Orbiter missions.

  13. A procedure used for a ground truth study of a land use map of North Alabama generated from LANDSAT data

    Science.gov (United States)

    Downs, S. W., Jr.; Sharma, G. C.; Bagwell, C.

    1977-01-01

    A land use map of a five county area in North Alabama was generated from LANDSAT data using a supervised classification algorithm. There was good overall agreement between the land use designated and known conditions, but there were also obvious discrepancies. In ground checking the map, two types of errors were encountered - shift and misclassification - and a method was developed to eliminate or greatly reduce the errors. Randomly selected study areas containing 2,525 pixels were analyzed. Overall, 76.3 percent of the pixels were correctly classified. A contingency coefficient of correlation was calculated to be 0.7 which is significant at the alpha = 0.01 level. The land use maps generated by computers from LANDSAT data are useful for overall land use by regional agencies. However, care must be used when making detailed analysis of small areas. The procedure used for conducting the ground truth study together with data from representative study areas is presented.
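    The contingency coefficient quoted above is derived from the chi-squared statistic of the classification table as C = sqrt(chi2 / (chi2 + N)). A sketch with an invented classified-versus-ground-truth table (not the study's counts):

```python
import math

def contingency_coefficient(table):
    """Pearson's contingency coefficient C = sqrt(chi2 / (chi2 + N))
    for an r x c table of counts (list of lists)."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return math.sqrt(chi2 / (chi2 + n))

# Hypothetical 2x2 confusion table (rows: true class, cols: mapped class)
table = [[900, 100],
         [150, 850]]
C = contingency_coefficient(table)
```

    C lies between 0 (independence) and an upper bound below 1 that depends on table size, which is why it is tested against a tabulated critical value at a chosen alpha.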

  14. Assessment of MTI Water Temperature Retrievals with Ground Truth from the Comanche Peak Steam Electric Station Cooling Lake

    International Nuclear Information System (INIS)

    Kurzeja, R.J.

    2002-01-01

    Surface water temperatures calculated from Multispectral Thermal Imager (MTI) brightness temperatures and the robust retrieval algorithm, developed by the Los Alamos National Laboratory (LANL), are compared with ground truth measurements at the Squaw Creek reservoir at the Comanche Peak Steam Electric Station near Granbury, Texas. Temperatures calculated for thirty-four images covering the period May 2000 to March 2002 are compared with water temperatures measured at 10 instrumented buoy locations supplied by the Savannah River Technology Center. The data set was used to examine the effect of image quality on temperature retrieval as well as to document any bias between the sensor chip arrays (SCAs). A portion of the data set was used to evaluate the influence of proximity to shoreline on the water temperature retrievals. This study found errors in daytime water temperature retrievals of 1.8 C for SCA 2 and 4.0 C for SCA 1. The errors in nighttime water temperature retrievals were 3.8 C for SCA 1. Water temperature retrievals for nighttime appear to be related to image quality, with the largest positive bias for the highest quality images and the largest negative bias for the lowest quality images. The daytime data show no apparent relationship between water temperature retrieval error and image quality. The average temperature retrieval error near open water buoys was less than corresponding values for the near-shore buoys. After subtraction of the estimated error in the ground truth data, the water temperature retrieval error was 1.2 C for the open-water buoys compared to 1.8 C for the near-shore buoys. The open-water error is comparable to that found at Nauru.

  15. Evaluation of event-based algorithms for optical flow with ground-truth from inertial measurement sensor

    Directory of Open Access Journals (Sweden)

    Bodo eRückauer

    2016-04-01

    Full Text Available In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS. For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240x180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS. This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera.
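    The gyro-derived ground truth rests on the standard motion-field equations for a purely rotating pinhole camera: for a pixel (x, y) relative to the principal point and focal length f, the flow depends only on the angular rates, not on scene depth. A sketch under the usual conventions; the axis signs are an assumption here and must be matched to the IMU's frame in practice:

```python
def rotational_flow(x, y, f, wx, wy, wz):
    """Motion-field flow (u, v) in pixels/s at pixel (x, y) relative to the
    principal point, focal length f (pixels), for pure camera rotation
    (wx, wy, wz) in rad/s. Standard pinhole motion-field equations;
    the camera-axis sign convention is an assumption."""
    u = (x * y / f) * wx - (f + x * x / f) * wy + y * wz
    v = (f + y * y / f) * wx - (x * y / f) * wy - x * wz
    return u, v

# Pure roll about the optical axis: the field is tangential,
# with speed growing linearly with radius from the principal point
u, v = rotational_flow(30.0, 40.0, 100.0, 0.0, 0.0, 0.2)
```

    Evaluating these equations with the DAVIS gyro rates at each event location gives the reference flow against which the event-based estimates are scored.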

  16. Truth, not truce: "common ground" on abortion, a movement within both movements.

    Science.gov (United States)

    Kelly, J R

    1995-01-01

    This sociological study examines the "common ground" movement that arose among abortion activists in the US during the 1980s. The first application of the term "common ground" to joint efforts by abortion activists on both sides of the issue is traced, and its meaning to early organizers is described. Discussion continues on the complicated and elusive efforts on the part of grassroots organizations and conflict resolution groups to practice the common ground approach to abortion. The five characteristics of the seminal common ground group in St. Louis were that it resulted from a combined pro-life and pro-choice initiative, it involved activists who publicly distinguished common ground from moral compromise or political accommodation, the activists remained loyal to their abortion activities, the activists agreed to cooperate in efforts aimed at reducing the pressures on women to abort, and common ground involved identifying the overlaps in emerging social thinking. The conceptual difficulties involved with use of the term are included in the reasons given for its virtual disappearance from abortion reporting in the press, which was busy relaying incidents of violence at abortion clinics. The election of President Clinton also stole the momentum from the common ground movement. While the future of movements based on the concept of "common ground" as envisioned by the St. Louis group remains precarious, depending for success as it does on actually changing society, this use of the term bears witness that conflicting loyalties do not preclude the promotion of common good. This meaning of the term is worth pursuing in cultural controversies such as that posed by abortion.

  17. Automatic tracking of wake vortices using ground-wind sensor data

    Science.gov (United States)

    1977-01-03

    Algorithms for automatic tracking of wake vortices using ground-wind anemometer data are developed. Methods of bad-data suppression, track initiation, and track termination are included. An effective sensor-failure detection-and-identification : ...

  18. Geographic information system for fusion and analysis of high-resolution remote sensing and ground truth data

    Science.gov (United States)

    Freeman, Anthony; Way, Jo Bea; Dubois, Pascale; Leberl, Franz

    1992-01-01

    We seek to combine high-resolution remotely sensed data with models and ground truth measurements, in the context of a Geographical Information System, integrated with specialized image processing software. We will use this integrated system to analyze the data from two Case Studies, one at a boreal forest site, the other a tropical forest site. We will assess the information content of the different components of the data, determine the optimum data combinations to study biogeophysical changes in the forest, assess the best way to visualize the results, and validate the models for the forest response to different radar wavelengths/polarizations. During the 1990's, unprecedented amounts of high-resolution images from space of the Earth's surface will become available to the applications scientist from the LANDSAT/TM series, European and Japanese ERS-1 satellites, RADARSAT and SIR-C missions. When the Earth Observation Systems (EOS) program is operational, the amount of data available for a particular site can only increase. The interdisciplinary scientist, seeking to use data from various sensors to study his site of interest, may be faced with massive difficulties in manipulating such large data sets, assessing their information content, determining the optimum combinations of data to study a particular parameter, visualizing his results and validating his model of the surface. The techniques to deal with these problems are also needed to support the analysis of data from NASA's current program of Multi-sensor Airborne Campaigns, which will also generate large volumes of data. In the Case Studies outlined in this proposal, we will have somewhat unique data sets. For the Bonanza Creek Experimental Forest (Case I) calibrated DC-8 SAR data and extensive ground truth measurements are already at our disposal. The data set shows documented evidence of temporal change.
The Belize Forest Experiment (Case II) will produce calibrated DC-8 SAR and AVIRIS data, together with

  19. Ground Truthing Orbital Clay Mineral Observations with the APXS Onboard Mars Exploration Rover Opportunity

    Science.gov (United States)

    Schroeder, C.; Gellert, R.; VanBommel, S.; Clark, B. C.; Ming, D. W.; Mittlefehldt, D. S.; Yen, A. S.

    2016-01-01

    NASA's Mars Exploration Rover Opportunity has been exploring approximately 22 km diameter Endeavour crater since 2011. Its rim segments predate the Hesperian-age Burns formation and expose Noachian-age material, which is associated with orbital Fe3+-Mg-rich clay mineral observations [1,2]. Moving to an orders of magnitude smaller instrumental field of view on the ground, the clay minerals were challenging to pinpoint on the basis of geochemical data because they appear to be the result of near-isochemical weathering of the local bedrock [3,4]. However, the APXS revealed a more complex mineral story as fracture fills and so-called red zones appear to contain more Al-rich clay minerals [5,6], which had not been observed from orbit. These observations are important to constrain clay mineral formation processes. More detail will be added as Opportunity is heading into her 10th extended mission, during which she will investigate Noachian bedrock that predates Endeavour crater, study sedimentary rocks inside Endeavour crater, and explore a fluid-carved gully. ESA's ExoMars rover will land on Noachian-age Oxia Planum where abundant Fe3+-Mg-rich clay minerals have been observed from orbit, but the story will undoubtedly become more complex once seen from the ground.

  20. Ground-Truthing of Airborne LiDAR Using RTK-GPS Surveyed Data in Coastal Louisiana's Wetlands

    Science.gov (United States)

    Lauve, R. M.; Alizad, K.; Hagen, S. C.

    2017-12-01

    Airborne LiDAR (Light Detection and Ranging) data are used by engineers and scientists to create bare earth digital elevation models (DEM), which are essential to modeling complex coastal, ecological, and hydrological systems. However, acquiring accurate bare earth elevations in coastal wetlands is difficult due to the density of marsh grasses that prevents the sensor's reflection off the true ground surface. Previous work by Medeiros et al. [2015] developed a technique to assess LiDAR error and adjust elevations according to marsh vegetation density and index. The aim of this study is the collection of ground truth points and the investigation of the range of potential errors found in existing LiDAR datasets within coastal Louisiana's wetlands. Survey grids were mapped out in an area dominated by Spartina alterniflora and a survey-grade Trimble Real Time Kinematic (RTK) GPS device was employed to measure bare earth ground elevations in the marsh system adjacent to Terrebonne Bay, LA. Elevations were obtained for 20 meter-spaced surveyed grid points and were used to generate a DEM. The comparison between LiDAR-derived and surveyed-data DEMs yields an average difference of 23 cm with a maximum difference of 68 cm. Considering the local tidal range of 45 cm, these differences can introduce substantial error when the DEM is used for ecological modeling [Alizad et al., 2016]. Results from this study will be further analyzed and implemented in order to adjust LiDAR-derived DEMs closer to their true elevation across Louisiana's coastal wetlands. References: Alizad, K., S. C. Hagen, J. T. Morris, S. C. Medeiros, M. V. Bilskie, and J. F. Weishampel (2016), Coastal wetland response to sea-level rise in a fluvial estuarine system, Earth's Future, 4(11), 483-497, 10.1002/2016EF000385. Medeiros, S., S. Hagen, J. Weishampel, and J. Angelo (2015), Adjusting Lidar-Derived Digital Terrain Models in Coastal Marshes Based on Estimated Aboveground Biomass Density, Remote Sensing, 7
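    The DEM comparison reported above reduces to elementwise elevation differences at co-located grid points. A trivial sketch with invented elevations (metres), not the study's data:

```python
def dem_difference_stats(lidar_z, survey_z):
    """Mean and maximum absolute elevation difference (same units as input)
    between LiDAR-derived and RTK-surveyed grid elevations."""
    diffs = [abs(a - b) for a, b in zip(lidar_z, survey_z)]
    return sum(diffs) / len(diffs), max(diffs)

# Illustrative co-located grid elevations in metres (not the study's values);
# LiDAR typically reads high in marsh because returns come off the canopy
lidar  = [0.55, 0.48, 0.62, 0.71]
survey = [0.30, 0.29, 0.35, 0.33]
mean_diff, max_diff = dem_difference_stats(lidar, survey)
```

    Comparing these statistics against the local tidal range is what determines whether the bias matters for inundation and marsh-response modeling.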

  1. Ground-truthing the Foraminifera-bound Nitrogen Isotope Paleo-proxy in the Modern Sargasso Sea

    Science.gov (United States)

    Smart, S.; Ren, H. A.; Fawcett, S. E.; Conte, M. H.; Rafter, P. A.; Ellis, K. K.; Weigand, M. A.; Sigman, D. M.

    2016-02-01

    We present the nitrogen isotope ratios (δ15N) of planktonic foraminifera, a type of calcifying zooplankton, collected from surface ocean net tows, moored sediment traps and core-top sediments at the Bermuda Atlantic Time-series Study site in the Sargasso Sea between 2009 and 2013. Consistent with previous measurements from low-latitude core-top sediments, the annually averaged δ15N of organic matter bound within the shells of euphotic zone-dwelling foraminifera approximates that of thermocline nitrate, the dominant source of new nitrogen to Sargasso Sea surface waters. Based on net tow collections in the upper 200 m of the water column, we observe no systematic difference between the biomass δ15N and shell-bound δ15N of a given foraminifera species. For multiple species, the δ15N of net tow-collected upper ocean shells is lower than shells from sediment traps (by 0.5-2.1‰) and lower than shells from seafloor sediments (by 0.5-1.4‰). We are currently investigating whether these differences reflect actual processes affecting shell-bound δ15N or instead relate to the different time periods over which the three sample types integrate. The foraminiferal biomass δ15N time-series from the surface Sargasso Sea exhibits significant seasonal variations, with the lowest values in fall and the highest values in spring. The roles of hydrography, biogeochemistry, and ecosystem dynamics in driving these seasonal variations will be discussed. These data from the modern subtropical ocean form part of a greater effort to ground-truth the use of foram-bound δ15N to reconstruct past nutrient conditions, not only as a recorder of the isotopic composition of nitrogen supply in oligotrophic environments but also as a recorder of the degree of nitrate consumption in high-latitude regions such as the Southern Ocean.

  2. Land Use and Land Cover, Existing land use derived from orthoimagery. Ground-truthing from discussion with local plan commission members., Published in 2000, 1:12000 (1in=1000ft) scale, Portage County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Land Use and Land Cover dataset current as of 2000. Existing land use derived from orthoimagery. Ground-truthing from discussion with local plan commission members..

  3. UAS-Borne Photogrammetry for Surface Topographic Characterization: A Ground-Truth Baseline for Future Change Detection and Refinement of Scaled Remotely-Sensed Datasets

    Science.gov (United States)

    Coppersmith, R.; Schultz-Fellenz, E. S.; Sussman, A. J.; Vigil, S.; Dzur, R.; Norskog, K.; Kelley, R.; Miller, L.

    2015-12-01

    While long-term objectives of monitoring and verification regimes include remote characterization and discrimination of surficial geologic and topographic features at sites of interest, ground truth data are required to advance development of remote sensing techniques. Increasingly, it is desirable for these ground-based or ground-proximal characterization methodologies to be as nimble, efficient, non-invasive, and non-destructive as their higher-altitude airborne counterparts while ideally providing superior resolution. For this study, the area of interest is an alluvial site at the Nevada National Security Site intended for use in the Source Physics Experiment's (Snelson et al., 2013) second phase. Ground-truth surface topographic characterization was performed using a DJI Inspire 1 unmanned aerial system (UAS), at very low altitude, below clouds. Within the area of interest, careful installation of surveyed ground control fiducial markers supplied necessary targets for field collection, and information for model georectification. The resulting model includes a Digital Elevation Model derived from 2D imagery. It is anticipated that this flexible and versatile characterization process will provide point cloud data resolution equivalent to a purely ground-based LiDAR scanning deployment (e.g., 1-2 cm horizontal and vertical resolution; e.g., Sussman et al., 2012; Schultz-Fellenz et al., 2013). In addition to drastically increasing time efficiency in the field, the UAS method also allows for more complete coverage of the study area when compared to ground-based LiDAR. Comparison and integration of these data with conventionally-acquired airborne LiDAR data from a higher-altitude (~ 450m) platform will aid significantly in the refinement of technologies and detection capabilities of remote optical systems to identify and detect surface geologic and topographic signatures of interest. This work includes a preliminary comparison of surface signatures detected from varying

  4. Estimation of snowpack matching ground-truth data and MODIS satellite-based observations by using regression kriging

    Science.gov (United States)

    Juan Collados-Lara, Antonio; Pardo-Iguzquiza, Eulogio; Pulido-Velazquez, David

    2016-04-01

    The estimation of Snow Water Equivalent (SWE) is essential for an appropriate assessment of the available water resources in Alpine catchments. The hydrologic regime in these areas is dominated by the storage of water in the snowpack, which is discharged to rivers throughout the melt season. An accurate estimation of the resources will be necessary for an appropriate analysis of the system operation alternatives using basin-scale management models. In order to obtain an appropriate estimation of the SWE we need to know the spatial distribution of the snowpack and snow density within the Snow Cover Area (SCA). Data for these snow variables can be extracted from in-situ point measurements and air-borne/space-borne remote sensing observations. Different interpolation and simulation techniques have been employed for the estimation of the cited variables. In this paper we propose to estimate snowpack from a reduced number of ground-truth data (1 or 2 campaigns per year with 23 observation points from 2000-2014) and MODIS satellite-based observations in the Sierra Nevada Mountains (Southern Spain). Regression-based methodologies have been used to study snowpack distribution using different kinds of explicative variables: geographic, topographic, climatic. 40 explicative variables were considered: the longitude, latitude, altitude, slope, eastness, northness, radiation, maximum upwind slope and some mathematical transformations of each of them (Ln(v), (v)^-1, (v)^2, (v)^0.5). Eight different structures of regression models have been tested (combining 1, 2, 3 or 4 explicative variables): Y=B0+B1Xi (1); Y=B0+B1XiXj (2); Y=B0+B1Xi+B2Xj (3); Y=B0+B1Xi+B2XjXl (4); Y=B0+B1XiXk+B2XjXl (5); Y=B0+B1Xi+B2Xj+B3Xl (6); Y=B0+B1Xi+B2Xj+B3XlXk (7); Y=B0+B1Xi+B2Xj+B3Xl+B4Xk (8). Where: Y is the snow depth; (Xi, Xj, Xl, Xk) are the prediction variables (any of the 40 variables); (B0, B1, B2, B3, B4) are the coefficients to be estimated.
The ground data are employed to calibrate the multiple regressions. In
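
    The regression structures listed above are ordinary least-squares fits once the candidate predictors (and their transforms) are tabulated. A minimal sketch of structures (2) and (3) on synthetic data (the predictor names and generating values below are hypothetical, not the study's):

```python
import numpy as np

def fit_ols(y, *predictors):
    """Ordinary least squares for Y = B0 + B1*X1 + ...; returns coefficients and R^2."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
    return beta, r2

rng = np.random.default_rng(0)
alt = rng.uniform(2000.0, 3400.0, 200)   # altitude in m (hypothetical predictor)
rad = rng.uniform(100.0, 300.0, 200)     # radiation (hypothetical predictor)
snow = 0.002 * alt + 0.01 * rad + rng.normal(0.0, 0.1, 200)  # synthetic snow depth

beta3, r2_3 = fit_ols(snow, alt, rad)    # structure (3): Y = B0 + B1*Xi + B2*Xj
beta2, r2_2 = fit_ols(snow, alt * rad)   # structure (2): Y = B0 + B1*Xi*Xj
```

    Comparing the R^2 (or a cross-validated error) across the eight structures and all predictor combinations is what the exhaustive search in the abstract amounts to.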

  5. Towards ground-truthing of spaceborne estimates of above-ground life biomass and leaf area index in tropical rain forests

    Directory of Open Access Journals (Sweden)

    P. Köhler

    2010-08-01

    Full Text Available The canopy height h of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or LIDAR. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of a global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground life biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI), and identify how correlation and uncertainty vary for two different spatial scales. The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree logging conditions for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. In both undisturbed and disturbed forests AGB can be expressed as a power-law function of canopy height h (AGB = a · h^b) with an r² ~ 60% if data are analysed at a spatial resolution of 20 m × 20 m (0.04 ha, also called plot size). The correlation coefficient of the regression becomes significantly better in the disturbed forest sites (r² = 91%) if data are analysed hectare-wide. There seems to exist no functional dependency between LAI and canopy height, but there is also a linear correlation (r² ~ 60%) between AGB and the area fraction of gaps in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a

  6. Towards ground-truthing of spaceborne estimates of above-ground life biomass and leaf area index in tropical rain forests

    Science.gov (United States)

    Köhler, P.; Huth, A.

    2010-08-01

    The canopy height h of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or LIDAR. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of a global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground life biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI), and identify how correlation and uncertainty vary for two different spatial scales. The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree logging conditions for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. In both undisturbed and disturbed forests AGB can be expressed as a power-law function of canopy height h (AGB = a · h^b) with an r² ~ 60% if data are analysed at a spatial resolution of 20 m × 20 m (0.04 ha, also called plot size). The correlation coefficient of the regression becomes significantly better in the disturbed forest sites (r² = 91%) if data are analysed hectare-wide. There seems to exist no functional dependency between LAI and canopy height, but there is also a linear correlation (r² ~ 60%) between AGB and the area fraction of gaps in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a comparison of the simulations with permanent sampling plot (PSP) data from the same region and with the
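
    A power-law relation of the form AGB = a · h^b is commonly estimated by linear regression in log-log space. A minimal sketch on synthetic data (the coefficients and scatter below are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.uniform(5.0, 50.0, 500)                       # canopy height in m, synthetic
agb = 2.5 * h**1.3 * np.exp(rng.normal(0.0, 0.2, 500))  # synthetic AGB with lognormal scatter

# log(AGB) = log(a) + b*log(h): ordinary least squares in log-log space
b_hat, log_a_hat = np.polyfit(np.log(h), np.log(agb), 1)
a_hat = np.exp(log_a_hat)

# Coefficient of determination of the log-log fit
pred = log_a_hat + b_hat * np.log(h)
ss_res = np.sum((np.log(agb) - pred) ** 2)
ss_tot = np.sum((np.log(agb) - np.log(agb).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

    Aggregating plots to coarser cells (e.g. 0.04 ha up to 1 ha) before fitting averages out small-scale scatter, which is consistent with the higher r² the study reports at the hectare scale.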

  7. Biometric correspondence between reface computerized facial approximations and CT-derived ground truth skin surface models objectively examined using an automated facial recognition system.

    Science.gov (United States)

    Parks, Connie L; Monson, Keith L

    2018-05-01

    This study employed an automated facial recognition system as a means of objectively evaluating biometric correspondence between a ReFace facial approximation and the computed tomography (CT) derived ground truth skin surface of the same individual. High rates of biometric correspondence were observed, irrespective of rank class (Rk) or demographic cohort examined. Overall, 48% of the test subjects' ReFace approximation probes (n = 96) were matched to his or her corresponding ground truth skin surface image at R1, a rank indicating a high degree of biometric correspondence and a potential positive identification. Identification rates improved with each successively broader rank class (R10 = 85%, R25 = 96%, and R50 = 99%), with 100% identification by R57. A sharp increase (39% mean increase) in identification rates was observed between R1 and R10 across most rank classes and demographic cohorts. In contrast, no significant (p > 0.05) performance differences were observed across demographic cohorts or CT scan protocols. Performance measures observed in this research suggest that ReFace approximations are biometrically similar to the actual faces of the approximated individuals and, therefore, may have potential operational utility in contexts in which computerized approximations are utilized as probes in automated facial recognition systems. Copyright © 2018. Published by Elsevier B.V.
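
    Rank-class identification rates such as R1 and R10 come from a cumulative match characteristic: each probe's gallery scores are ranked, and one checks whether the true mate falls within the top k. A toy sketch with a synthetic score matrix (not the actual recognition system used in the study):

```python
import numpy as np

def rank_k_rate(scores, k):
    """scores[i, j]: similarity of probe i to gallery subject j; subject i is probe i's true mate.
    Returns the fraction of probes whose true mate ranks within the top k."""
    hits = 0
    for i, row in enumerate(scores):
        # rank of the true mate = number of gallery scores strictly higher, plus one
        rank = 1 + int(np.sum(row > row[i]))
        if rank <= k:
            hits += 1
    return hits / len(scores)

rng = np.random.default_rng(2)
n = 100
scores = rng.normal(0.0, 1.0, (n, n))
scores[np.arange(n), np.arange(n)] += 2.0   # genuine pairs score higher on average

r1, r10 = rank_k_rate(scores, 1), rank_k_rate(scores, 10)
```

    By construction the rate is non-decreasing in k, which is why identification rates in the abstract improve with each broader rank class.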

  8. Status of the undisturbed mangroves at Brunei Bay, East Malaysia: a preliminary assessment based on remote sensing and ground-truth observations

    Directory of Open Access Journals (Sweden)

    Behara Satyanarayana

    2018-02-01

    Full Text Available Brunei Bay, which receives freshwater discharge from four major rivers, namely Limbang, Sundar, Weston and Menumbok, hosts a luxuriant mangrove cover in East Malaysia. However, this relatively undisturbed mangrove forest has been less scientifically explored, especially in terms of vegetation structure, ecosystem services and functioning, and land-use/cover changes. In the present study, mangrove areal extent together with species composition and distribution at the four notified estuaries was evaluated through remote sensing (Advanced Land Observation Satellite—ALOS) and ground-truth (Point-Centred Quarter Method—PCQM) observations. As of 2010, the total mangrove cover was found to be ca. 35,183.74 ha, of which Weston and Menumbok occupied more than two-folds (58%), followed by Sundar (27%) and Limbang (15%). The medium resolution ALOS data were efficient for mapping dominant mangrove species such as Nypa fruticans, Rhizophora apiculata, Sonneratia caseolaris, S. alba and Xylocarpus granatum in the vicinity (accuracy: 80%). The PCQM estimates found a higher basal area at Limbang and Menumbok—suggestive of more mature vegetation, compared to Sundar and Weston. Mangrove stand structural complexity (derived from the complexity index) was also high in the order of Limbang > Menumbok > Sundar > Weston, supporting the perspective of less/undisturbed vegetation at the two former locations. Both remote sensing and ground-truth observations have complementarily represented the distribution of Sonneratia spp. as pioneer vegetation at shallow river mouths, N. fruticans in the areas of strong freshwater discharge, R. apiculata in the areas of strong neritic incursion and X. granatum at interior/elevated grounds. The results from this study would be able to serve as strong baseline data for future mangrove investigations at Brunei Bay, including for monitoring and management purposes locally at present.

  9. Status of the undisturbed mangroves at Brunei Bay, East Malaysia: a preliminary assessment based on remote sensing and ground-truth observations

    Science.gov (United States)

    Izzaty Horsali, Nurul Amira; Mat Zauki, Nurul Ashikin; Otero, Viviana; Nadzri, Muhammad Izuan; Ibrahim, Sulong; Husain, Mohd-Lokman; Dahdouh-Guebas, Farid

    2018-01-01

    Brunei Bay, which receives freshwater discharge from four major rivers, namely Limbang, Sundar, Weston and Menumbok, hosts a luxuriant mangrove cover in East Malaysia. However, this relatively undisturbed mangrove forest has been less scientifically explored, especially in terms of vegetation structure, ecosystem services and functioning, and land-use/cover changes. In the present study, mangrove areal extent together with species composition and distribution at the four notified estuaries was evaluated through remote sensing (Advanced Land Observation Satellite—ALOS) and ground-truth (Point-Centred Quarter Method—PCQM) observations. As of 2010, the total mangrove cover was found to be ca. 35,183.74 ha, of which Weston and Menumbok occupied more than two-folds (58%), followed by Sundar (27%) and Limbang (15%). The medium resolution ALOS data were efficient for mapping dominant mangrove species such as Nypa fruticans, Rhizophora apiculata, Sonneratia caseolaris, S. alba and Xylocarpus granatum in the vicinity (accuracy: 80%). The PCQM estimates found a higher basal area at Limbang and Menumbok—suggestive of more mature vegetation, compared to Sundar and Weston. Mangrove stand structural complexity (derived from the complexity index) was also high in the order of Limbang > Menumbok > Sundar > Weston, supporting the perspective of less/undisturbed vegetation at the two former locations. Both remote sensing and ground-truth observations have complementarily represented the distribution of Sonneratia spp. as pioneer vegetation at shallow river mouths, N. fruticans in the areas of strong freshwater discharge, R. apiculata in the areas of strong neritic incursion and X. granatum at interior/elevated grounds. The results from this study would be able to serve as strong baseline data for future mangrove investigations at Brunei Bay, including for monitoring and management purposes locally at present. PMID:29479500

  10. Status of the undisturbed mangroves at Brunei Bay, East Malaysia: a preliminary assessment based on remote sensing and ground-truth observations.

    Science.gov (United States)

    Satyanarayana, Behara; M Muslim, Aidy; Izzaty Horsali, Nurul Amira; Mat Zauki, Nurul Ashikin; Otero, Viviana; Nadzri, Muhammad Izuan; Ibrahim, Sulong; Husain, Mohd-Lokman; Dahdouh-Guebas, Farid

    2018-01-01

    Brunei Bay, which receives freshwater discharge from four major rivers, namely Limbang, Sundar, Weston and Menumbok, hosts a luxuriant mangrove cover in East Malaysia. However, this relatively undisturbed mangrove forest has been less scientifically explored, especially in terms of vegetation structure, ecosystem services and functioning, and land-use/cover changes. In the present study, mangrove areal extent together with species composition and distribution at the four notified estuaries was evaluated through remote sensing (Advanced Land Observation Satellite-ALOS) and ground-truth (Point-Centred Quarter Method-PCQM) observations. As of 2010, the total mangrove cover was found to be ca. 35,183.74 ha, of which Weston and Menumbok occupied more than two-folds (58%), followed by Sundar (27%) and Limbang (15%). The medium resolution ALOS data were efficient for mapping dominant mangrove species such as Nypa fruticans, Rhizophora apiculata, Sonneratia caseolaris, S. alba and Xylocarpus granatum in the vicinity (accuracy: 80%). The PCQM estimates found a higher basal area at Limbang and Menumbok-suggestive of more mature vegetation, compared to Sundar and Weston. Mangrove stand structural complexity (derived from the complexity index) was also high in the order of Limbang > Menumbok > Sundar > Weston, supporting the perspective of less/undisturbed vegetation at the two former locations. Both remote sensing and ground-truth observations have complementarily represented the distribution of Sonneratia spp. as pioneer vegetation at shallow river mouths, N. fruticans in the areas of strong freshwater discharge, R. apiculata in the areas of strong neritic incursion and X. granatum at interior/elevated grounds. The results from this study would be able to serve as strong baseline data for future mangrove investigations at Brunei Bay, including for monitoring and management purposes locally at present.

  11. Heart Truth

    Science.gov (United States)

    ... health! Get a free badge or banner to post to your website or blog. Are you at risk for heart disease? Here's how to find out . Planning to use The Heart Truth logo? Check out our logo guidelines and downloads. ...

  12. Southwest U.S. Seismo-Acoustic Network: An Autonomous Data Aggregation, Detection, Localization and Ground-Truth Bulletin for the Infrasound Community

    Science.gov (United States)

    Jones, K. R.; Arrowsmith, S.

    2013-12-01

    The Southwest U.S. Seismo-Acoustic Network (SUSSAN) is a collaborative project designed to produce infrasound event detection bulletins for the infrasound community for research purposes. We are aggregating a large, unique, near real-time data set with available ground truth information from seismo-acoustic arrays across New Mexico, Utah, Nevada, California, Texas and Hawaii. The data are processed in near real-time (~ every 20 minutes) with detections being made on individual arrays and locations determined for networks of arrays. The detection and location data are then combined with any available ground truth information and compiled into a bulletin that will be released to the general public directly and eventually through the IRIS infrasound event bulletin. We use the open source Earthworm seismic data aggregation software to acquire waveform data either directly from the station operator or via the Incorporated Research Institutions for Seismology Data Management Center (IRIS DMC), if available. The data are processed using InfraMonitor, a powerful infrasound event detection and localization software program developed by Stephen Arrowsmith at Los Alamos National Laboratory (LANL). Our goal with this program is to provide the infrasound community with an event database that can be used collaboratively to study various natural and man-made sources. We encourage participation in this program directly or by making infrasound array data available through the IRIS DMC or other means. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. R&A 5317326

  13. Design of Wireless Automatic Synchronization for the Low-Frequency Coded Ground Penetrating Radar

    Directory of Open Access Journals (Sweden)

    Zhenghuan Xia

    2015-01-01

    Full Text Available Low-frequency coded ground penetrating radar (GPR) with a pair of wire dipole antennas has some advantages for deep detection. Due to the large distance between the two antennas, the synchronization design is a major challenge in implementing the GPR system. This paper proposes a simple and stable wireless automatic synchronization method based on our developed GPR system, which does not need any synchronization chips or modules and reduces the cost of the hardware system. The transmitter emits the synchronization preamble and pseudorandom binary sequence (PRBS) at an appropriate time interval, while the receiver automatically estimates the synchronization time and receives the signal returned from the underground targets. All the processes are performed in a single FPGA. The performance of the proposed synchronization method is validated experimentally.
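
    Receiver-side estimation of the synchronization time in a scheme like this typically amounts to cross-correlating the incoming samples against the known preamble/PRBS and locating the correlation peak. A simplified software sketch (not the authors' FPGA implementation; signal parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Known preamble: a +/-1 pseudorandom binary sequence shared by transmitter and receiver
prbs = rng.choice([-1.0, 1.0], size=127)

# Received stream: noise, with the preamble starting at an unknown offset
true_offset = 500
rx = rng.normal(0.0, 0.5, 2000)
rx[true_offset:true_offset + prbs.size] += prbs

# Sliding cross-correlation; the peak index estimates the synchronization time
corr = np.correlate(rx, prbs, mode="valid")
est_offset = int(np.argmax(corr))
```

    Because the PRBS has a sharp autocorrelation peak, the estimate stays robust even at low signal-to-noise ratios, which is what makes a chip-free synchronization design feasible.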

  14. Automatic Detection and Positioning of Ground Control Points Using TerraSAR-X Multiaspect Acquisitions

    Science.gov (United States)

    Montazeri, Sina; Gisinger, Christoph; Eineder, Michael; Zhu, Xiao xiang

    2018-05-01

    Geodetic stereo Synthetic Aperture Radar (SAR) is capable of absolute three-dimensional localization of natural Persistent Scatterers (PSs), which allows for Ground Control Point (GCP) generation using only SAR data. The prerequisite for the method to achieve high-precision results is the correct detection of common scatterers in SAR images acquired from different viewing geometries. In this contribution, we describe three strategies for the automatic detection of identical targets in SAR images of urban areas taken from different orbit tracks. Moreover, a complete workflow for the automatic generation of a large number of GCPs using SAR data is presented, and its applicability is shown by exploiting TerraSAR-X (TS-X) high-resolution spotlight images over the city of Oulu, Finland and a test site in Berlin, Germany.

  15. Optimal Recovery Trajectories for Automatic Ground Collision Avoidance Systems (Auto GCAS)

    Science.gov (United States)

    Suplisson, Angela W.

    The US Air Force recently fielded the F-16 Automatic Ground Collision Avoidance System (Auto GCAS). This system meets the operational requirements of being both aggressive and timely, meaning that extremely agile avoidance maneuvers will be executed at the last second to avoid the ground. This small window of automatic operation, maneuvering in close proximity to the ground, makes the problem challenging. There currently exists no similar Auto GCAS for manned military 'heavy' aircraft with lower climb performance, such as transport, tanker, or bomber aircraft. The F-16 Auto GCAS recovery is a single pre-planned roll to wings-level and 5-g pull-up, which is very effective for fighters due to their high g and climb performance, but it is not suitable for military heavy aircraft. This research proposes a new optimal control approach to the ground collision avoidance problem for heavy aircraft by mapping the aggressive and timely requirements of the automatic recovery to an optimal control formulation which includes lateral maneuvers around terrain. This novel mapping creates two ways to pose the optimal control problem for Auto GCAS: one as a Max Distance with a Timely Trigger formulation, the other as a Min Control with an Aggressive Trigger formulation. Further, the optimal path and optimal control admitted by these two formulations are demonstrated to be equivalent at the point the automatic recovery is initiated for the simplified 2-D case. The Min Control formulation was demonstrated to have faster computational speed and was chosen for the 3-D case. Results are presented for representative heavy aircraft scenarios against 3-D digital terrain. The Min Control formulation was then compared to a Multi-Trajectory Auto GCAS with five pre-planned maneuvers. Metrics were developed to quantify the improvement from using an optimal approach versus the pre-planned maneuvers. 
The proposed optimal Min Control method was demonstrated to require less control or trigger later

  16. A spike sorting toolbox for up to thousands of electrodes validated with ground truth recordings in vitro and in vivo

    Science.gov (United States)

    Lefebvre, Baptiste; Deny, Stéphane; Gardella, Christophe; Stimberg, Marcel; Jetter, Florian; Zeck, Guenther; Picaud, Serge; Duebel, Jens

    2018-01-01

    In recent years, multielectrode arrays and large silicon probes have been developed to record simultaneously between hundreds and thousands of electrodes packed with a high density. However, they require novel methods to extract the spiking activity of large ensembles of neurons. Here, we developed a new toolbox to sort spikes from these large-scale extracellular data. To validate our method, we performed simultaneous extracellular and loose patch recordings in rodents to obtain ‘ground truth’ data, where the solution to this sorting problem is known for one cell. The performance of our algorithm was always close to the best expected performance, over a broad range of signal-to-noise ratios, in vitro and in vivo. The algorithm is entirely parallelized and has been successfully tested on recordings with up to 4225 electrodes. Our toolbox thus offers a generic solution to accurately sort spikes for up to thousands of electrodes. PMID:29557782

  17. Estimating Daily Maximum and Minimum Land Air Surface Temperature Using MODIS Land Surface Temperature Data and Ground Truth Data in Northern Vietnam

    Directory of Open Access Journals (Sweden)

    Phan Thanh Noi

    2016-12-01

    Full Text Available This study aims to evaluate quantitatively the land surface temperature (LST) derived from MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily land air surface temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) and auxiliary data, solving the discontinuity problem of ground measurements. No previous study of Vietnam has integrated both TERRA and AQUA LST, daytime and nighttime, for Ta estimation (using four MODIS LST datasets). In addition, to find out which variables are the most effective in describing the differences between LST and Ta, we tested several popular methods, such as the Pearson correlation coefficient, stepwise regression, the Bayesian information criterion (BIC), adjusted R-squared and principal component analysis (PCA), on 14 variables (including the four LST products, NDVI, elevation, latitude, longitude, day length in hours, Julian day and four view zenith angle variables), and then applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground truth temperature derived from 15 climate stations are time and regional-topography dependent. The best results for Ta-max and Ta-min estimation were achieved when we combined both daytime and nighttime LST from TERRA and AQUA with data from the topography analysis.
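
    Variable screening with an information criterion, as described above, can be sketched as an exhaustive search over small OLS models ranked by BIC (synthetic data; the predictor names are hypothetical stand-ins for the study's variables):

```python
import itertools
import numpy as np

def bic_ols(y, X):
    """BIC of an OLS fit with intercept; X is (n, p) without the intercept column."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    k = A.shape[1]                       # number of estimated coefficients
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(4)
n = 300
names = ["lst_day", "ndvi", "elev", "jday", "noise"]   # hypothetical candidates
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0.0, 0.5, n)  # Ta depends on two of them

# Exhaustive search over all 2-variable models, ranked by BIC (lower is better)
best = min(itertools.combinations(range(5), 2), key=lambda c: bic_ols(y, X[:, c]))
best_names = [names[i] for i in best]
```

    The BIC's log(n) penalty on model size is what keeps the search from simply picking the largest model.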

  18. Small UAV Automatic Ground Collision Avoidance System Design Considerations and Flight Test Results

    Science.gov (United States)

    Sorokowski, Paul; Skoog, Mark; Burrows, Scott; Thomas, SaraKatie

    2015-01-01

    The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center Small Unmanned Aerial Vehicle (SUAV) Automatic Ground Collision Avoidance System (Auto GCAS) project demonstrated several important collision avoidance technologies. First, the SUAV Auto GCAS design included capabilities to take advantage of terrain avoidance maneuvers flying turns to either side as well as straight over terrain. Second, the design also included innovative digital elevation model (DEM) scanning methods. The combination of multi-trajectory options and new scanning methods demonstrated the ability to reduce the nuisance potential of the SUAV while maintaining robust terrain avoidance. Third, the Auto GCAS algorithms were hosted on the processor inside a smartphone, providing a lightweight hardware configuration for use in either the ground control station or on board the test aircraft. Finally, compression of DEM data for the entire Earth and successful hosting of that data on the smartphone was demonstrated. The SUAV Auto GCAS project demonstrated that together these methods and technologies have the potential to dramatically reduce the number of controlled flight into terrain mishaps across a wide range of aviation platforms with similar capabilities including UAVs, general aviation aircraft, helicopters, and model aircraft.

  19. Semi-automatic handling of meteorological ground measurements using WeatherProg: prospects and practical implications

    Science.gov (United States)

    Langella, Giuliano; Basile, Angelo; Bonfante, Antonello; De Mascellis, Roberto; Manna, Piero; Terribile, Fabio

    2016-04-01

    WeatherProg is a computer program for the semi-automatic handling of data measured at ground stations within a climatic network. The program performs a set of tasks ranging from gathering raw point-based sensor measurements to the production of digital climatic maps. Originally the program was developed as the baseline asynchronous engine for weather records management within the SOILCONSWEB Project (LIFE08 ENV/IT/000408), in which daily and hourly data were used to run water-balance models in the soil-plant-atmosphere continuum or pest simulation models. WeatherProg can be configured to automatically perform the following main operations: 1) data retrieval; 2) data decoding and ingestion into a database (e.g. SQL based); 3) data checking to recognize missing and anomalous values (using a set of differently combined checks, including logical, climatological, spatial, temporal and persistence checks); 4) infilling of data flagged as missing or anomalous (deterministic or statistical methods); 5) spatial interpolation based on alternative/comparative methods such as inverse distance weighting, iterative regression kriging, and weighted least squares regression (based on physiography), using an approach similar to PRISM; 6) data ingestion into a geodatabase (e.g. PostgreSQL+PostGIS or rasdaman). There is an increasing demand for digital climatic maps both for research and development (there is a gap between the many scientific modelling approaches that require digital climate maps and the available gauged measurements) and for practical applications (e.g. the need to improve the management of weather records, which in turn improves the support provided to farmers). The demand is particularly burdensome considering the requirement to handle climatic data at the daily time step (e.g. in soil hydrological modelling) or even at the hourly time step (e.g. risk modelling in phytopathology). The key advantage of WeatherProg is its ability to perform all the required operations and
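
    Of the interpolation methods listed in step 5, inverse distance weighting is the simplest to sketch (the station layout and values below are illustrative, not WeatherProg's data):

```python
import numpy as np

def idw(stations_xy, values, query_xy, power=2.0, eps=1e-12):
    """Inverse distance weighted estimate at query_xy from gauged station values."""
    d = np.linalg.norm(stations_xy - query_xy, axis=1)
    if np.any(d < eps):                  # query coincides with a station: return it exactly
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
temps = np.array([10.0, 12.0, 14.0, 16.0])
t_center = idw(stations, temps, np.array([0.5, 0.5]))   # equidistant: plain mean
```

    Regression-kriging or physiography-weighted regression, as mentioned above, would add covariates (elevation, aspect, etc.) on top of this purely distance-based scheme.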

  20. Addressing the social dimensions of citizen observatories: The Ground Truth 2.0 socio-technical approach for sustainable implementation of citizen observatories

    Science.gov (United States)

    Wehn, Uta; Joshi, Somya; Pfeiffer, Ellen; Anema, Kim; Gharesifard, Mohammad; Momani, Abeer

    2017-04-01

    Owing to ICT-enabled citizen observatories, citizens can take on new roles in environmental monitoring, decision making, co-operative planning, and environmental stewardship. And yet implementing advanced citizen observatories for data collection, knowledge exchange and interaction to support policy objectives is neither always easy nor always successful, given the required commitment and trust and concerns about data reliability. Many efforts face problems with uptake and sustained engagement by citizens, limited scalability, unclear long-term sustainability and limited actual impact on governance processes. Similarly, to sustain the engagement of decision makers in citizen observatories, mechanisms are required from the start of the initiative in order to have them invest in and, hence, commit to and own the entire process. In order to implement sustainable citizen observatories, these social dimensions therefore need to be soundly managed. We provide empirical evidence of how the social dimensions of citizen observatories are being addressed in the Ground Truth 2.0 project, drawing on a range of relevant social science approaches. This project combines the social dimensions of citizen observatories with enabling technologies - via a socio-technical approach - so that their customisation and deployment is tailored to the envisaged societal and economic impacts of the observatories. The project consists of the demonstration and validation of six scaled-up citizen observatories in real operational conditions, both in the EU and in Africa, with a specific focus on flora and fauna as well as water availability and water quality for land and natural resources management. The demonstration cases (4 EU and 2 African) cover the full 'spectrum' of citizen-sensed data usage and citizen engagement, and therefore allow testing and validation of the socio-technical concept for citizen observatories under a range of conditions.

  1. Computer aided detection in prostate cancer diagnostics: A promising alternative to biopsy? A retrospective study from 104 lesions with histological ground truth.

    Directory of Open Access Journals (Sweden)

    Anika Thon

    Full Text Available Prostate cancer (PCa) diagnosis by means of multiparametric magnetic resonance imaging (mpMRI) is a current challenge for the development of computer-aided detection (CAD) tools. An innovative CAD software (Watson Elementary™) was proposed to achieve high sensitivity and specificity, as well as to allege a correlate to Gleason grade. To assess the performance of Watson Elementary™ in automated PCa diagnosis in our hospital's database of MRI-guided prostate biopsies, the evaluation was retrospective for 104 lesions (47 PCa, 57 benign) from 79 patients (64.61 ± 6.64 years old) using 3T T2-weighted imaging, Apparent Diffusion Coefficient (ADC) maps and dynamic contrast enhancement series. Watson Elementary™ utilizes signal intensity, diffusion properties and kinetic profile to compute a proportional Gleason grade predictor, termed Malignancy Attention Index (MAI). The analysis focused on (i) the CAD sensitivity and specificity in classifying suspect lesions and (ii) the MAI correlation with the histopathological ground truth. The software revealed a sensitivity of 46.80% for PCa classification. The specificity for PCa was found to be 75.43%, with a positive predictive value of 61.11%, a negative predictive value of 63.23% and a false discovery rate of 38.89%. CAD classified PCa and benign lesions with equal probability (P = 0.06, χ² test). Accordingly, receiver operating characteristic analysis suggests a poor predictive value for MAI, with an area under the curve of 0.65 (P = 0.02), which is not superior to the performance of board-certified observers. Moreover, MAI revealed no significant correlation with Gleason grade (P = 0.60, Pearson's correlation). The tested CAD software for mpMRI analysis was a weak PCa biomarker in this dataset. Targeted prostate biopsy and histology remain the gold standard for prostate cancer diagnosis.
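
    The reported rates follow from a standard 2×2 confusion matrix. With the 47 PCa and 57 benign lesions, counts of TP = 22, FN = 25, TN = 43 and FP = 14 reproduce them (these counts are inferred from the reported percentages, not stated in the abstract):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "fdr": fp / (tp + fp),           # false discovery rate
    }

m = diagnostic_metrics(tp=22, fp=14, tn=43, fn=25)
```

    Evaluating `m` yields sensitivity ≈ 46.8%, specificity ≈ 75.4%, PPV ≈ 61.1%, NPV ≈ 63.2% and FDR ≈ 38.9%, matching the values quoted above.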

  2. Application of ground-truth for classification and quantification of bird movements on migratory bird habitat initiative sites in southwest Louisiana: final report

    Science.gov (United States)

    Barrow, Wylie C.; Baldwin, Michael J.; Randall, Lori A.; Pitre, John; Dudley, Kyle J.

    2013-01-01

    This project was initiated to assess migrating and wintering bird use of lands enrolled in the Natural Resources Conservation Service’s (NRCS) Migratory Bird Habitat Initiative (MBHI). The MBHI program was developed in response to the Deepwater Horizon oil spill in 2010, with the goal of improving/creating habitat for waterbirds affected by the spill. In collaboration with the University of Delaware (UDEL), we used weather surveillance radar data (Sieges 2014), portable marine radar data, thermal infrared images, and visual observations to assess bird use of MBHI easements. Migrating and wintering birds routinely make synchronous flights near dusk (e.g., departure during migration, feeding flights during winter). Weather radars readily detect birds at the onset of these flights and have proven to be useful remote sensing tools for assessing bird-habitat relations during migration and determining the response of wintering waterfowl to wetland restoration (e.g., Wetlands Reserve Program lands). However, ground-truthing is required to identify radar echoes to species or species group. We designed a field study to ground-truth a larger-scale, weather radar assessment of bird use of MBHI sites in southwest Louisiana. We examined seasonal bird use of MBHI fields in fall, winter, and spring of 2011-2012. To assess diurnal use, we conducted total area surveys of MBHI sites in the afternoon, collecting data on bird species composition, abundance, behavior, and habitat use. In the evenings, we quantified bird activity at the MBHI easements and described flight behavior (i.e., birds landing in, departing from, circling, or flying over the MBHI tract). Our field sampling captured the onset of evening flights and spanned the period of collection of the weather radar data analyzed. Pre- and post-dusk surveys were conducted using a portable radar system and a thermal infrared camera. Landbirds, shorebirds, and wading birds were commonly found on MBHI fields during diurnal

  3. Archaeogeophysical data acquisition and analysis at Tel Burna, Israel: a valuable opportunity for ongoing ground-truth investigation and collaboration (Invited)

    Science.gov (United States)

    Pincus, J. A.

    2013-12-01

    , acquired in a zigzag east-west direction, proceeding south. The area extended from the present excavation border to the north and east. The following paper will discuss the method of data acquisition, post-processing, and analysis of the results. The final conclusions of the survey show a continuation of several key walls to the east, a valuable sub-surface tracing of the limestone bedrock, and the limit to which the archaeological material is present spatially in Area B to the north. These results play a major role in determining where to focus excavation efforts in the 2014 excavation season. This unique collaboration with the archaeological team and ongoing opportunity for archaeological ground-truthing will be documented and published as the site develops. As there is a limited presence of such data within the corpus of published archaeogeophysical research, we look forward to further investigations at the site in the coming years.

  4. The Truth of Wikipedia

    Directory of Open Access Journals (Sweden)

    Nathaniel Tkacz

    2012-05-01

    Full Text Available What does it mean to assert that Wikipedia has a relation to truth? That there is, despite regular claims to the contrary, an entire apparatus of truth in Wikipedia? In this article, I show that Wikipedia has in fact two distinct relations to truth: one which is well known and forms the basis of existing popular and scholarly commentaries, and another which refers to equally well-known aspects of Wikipedia but has not been understood in terms of truth. I demonstrate Wikipedia's dual relation to truth through a close analysis of the Neutral Point of View core content policy (one of the project's 'Five Pillars'). I conclude by indicating what is at stake in the assertion that Wikipedia has a regime of truth and what bearing this has on existing commentaries.

  5. Rheticus Displacement: an Automatic Geo-Information Service Platform for Ground Instabilities Detection and Monitoring

    Science.gov (United States)

    Chiaradia, M. T.; Samarelli, S.; Agrimano, L.; Lorusso, A. P.; Nutricato, R.; Nitti, D. O.; Morea, A.; Tijani, K.

    2016-12-01

    Rheticus® is an innovative cloud-based data and services hub able to deliver Earth Observation added-value products through automatic complex processes and minimal interaction with human operators. This target is achieved by means of programmable components working as different software layers in a modern enterprise system that relies on the SOA (service-oriented architecture) model. Because every functionality is well defined and encapsulated in a standalone component, Rheticus is highly scalable and distributable, allowing different configurations depending on user needs. Rheticus offers a portfolio of services, ranging from the detection and monitoring of geohazards and infrastructural instabilities, to marine water quality monitoring, wildfire detection, and land cover monitoring. In this work, we outline the overall cloud-based platform and focus on the "Rheticus Displacement" service, aimed at providing accurate information to monitor movements occurring across landslide features or structural instabilities that could affect buildings or infrastructures. Using Sentinel-1 (S1) open data images and Multi-Temporal SAR Interferometry techniques (i.e., SPINUA), the service is complementary to traditional survey methods, providing a long-term solution to slope instability monitoring. Rheticus automatically browses and accesses (on a weekly basis) the products of the rolling archive of the ESA S1 Scientific Data Hub; S1 data are then handled by a mature processing chain, which is responsible for producing displacement maps immediately usable to measure movements of coherent points with sub-centimetric precision. Examples are provided concerning the automatic displacement map generation process, the integration of point and distributed scatterers, the integration of multi-sensor displacement maps (e.g., Sentinel-1 IW and COSMO-SkyMed HIMAGE), and the combination of displacement rate maps acquired along both ascending

  6. Optimal Recovery Trajectories for Automatic Ground Collision Avoidance Systems (Auto GCAS)

    Science.gov (United States)

    2015-03-01


  7. Frege on Truths, Truth and the True

    Directory of Open Access Journals (Sweden)

    Wolfgang Künne

    2008-08-01

    Full Text Available The founder of modern logic and grandfather of analytic philosophy was 70 years old when he published his paper 'Der Gedanke' (The Thought) in 1918. This essay contains some of Gottlob Frege's deepest and most provocative reflections on the concept of truth, and it will play a prominent role in my lectures. The plan for my lectures is as follows. What is it that is (primarily) true or false? 'Thoughts', is Frege's answer. In §1, I shall explain and defend this answer. In §2, I shall briefly consider his enthymematic argument for the conclusion that the word 'true' resists any attempt at defining it. In §3, I shall discuss his thesis that the thought that things are thus and so is identical with the thought that it is true that things are thus and so. The reasons we are offered for this thesis will be found wanting. In §4, I shall comment extensively on Frege's claim that, in a non-formal language like the one I am currently trying to speak, we can say whatever we want to say without ever using the word 'true' or any of its synonyms. I will reject the propositional-redundancy claim, endorse the assertive-redundancy claim and deny the connection Frege ascribes to them. In his classic 1892 paper 'Über Sinn und Bedeutung' (On Sense and Signification) Frege argues that truth-values are objects. In §5, I shall scrutinize his argument. In §6, I will show that in Frege's ideography (Begriffsschrift) truth, far from being redundant, is omnipresent. The final §7 is again on truth-bearers, this time as a topic in the theory of intentionality and in metaphysics. In the course of discussing Frege's views on the objecthood and objectivity of thoughts and the timelessness of truth(s), I will plead for a somewhat mitigated Platonism.

  8. Truth and Methods.

    Science.gov (United States)

    Dasenbrock, Reed Way

    1995-01-01

    Examines literary theory's displacing of "method" in the New Historicist criticism. Argues that Stephen Greenblatt and Lee Paterson imply that no objective historical truth is possible and as a result do not give methodology its due weight in their criticism. Questions the theory of "truth" advanced in this vein of literary…

  9. Withholding truth from patients.

    LENUS (Irish Health Repository)

    O'Sullivan, Elizabeth

    2012-01-31

    The issue of whether patients should always be told the truth regarding their diagnosis and prognosis has afforded much debate in healthcare literature. This article examines telling the truth from an ethical perspective. It puts forward arguments for and against being honest with patients, using a clinical example to illustrate each point.

  10. Truthful Monadic Abstractions

    DEFF Research Database (Denmark)

    Brock-Nannestad, Taus; Schürmann, Carsten

    2012-01-01

    indefinitely, finding neither a proof nor a disproof of a given subgoal. In this paper we characterize a family of truth-preserving abstractions from intuitionistic first-order logic to the monadic fragment of classical first-order logic. Because they are truthful, these abstractions can be used to disprove...

  11. Automatic vetting of planet candidates from ground based surveys: Machine learning with NGTS

    Science.gov (United States)

    Armstrong, David J.; Günther, Maximilian N.; McCormac, James; Smith, Alexis M. S.; Bayliss, Daniel; Bouchy, François; Burleigh, Matthew R.; Casewell, Sarah; Eigmüller, Philipp; Gillen, Edward; Goad, Michael R.; Hodgkin, Simon T.; Jenkins, James S.; Louden, Tom; Metrailler, Lionel; Pollacco, Don; Poppenhaeger, Katja; Queloz, Didier; Raynard, Liam; Rauer, Heike; Udry, Stéphane; Walker, Simon R.; Watson, Christopher A.; West, Richard G.; Wheatley, Peter J.

    2018-05-01

    State-of-the-art exoplanet transit surveys are producing ever-increasing quantities of data. Making the best use of this resource, whether in detecting interesting planetary systems or in determining accurate planetary population statistics, requires new automated methods. Here we describe a machine learning algorithm that forms an integral part of the pipeline for the NGTS transit survey, demonstrating the efficacy of machine learning in selecting planetary candidates from multi-night ground-based survey data. Our method uses a combination of random forests and self-organising maps to rank planetary candidates, achieving an AUC score of 97.6% in ranking 12,368 injected planets against 27,496 false positives in the NGTS data. We build on past examples by using injected transit signals to form a training set, a necessary development for applying similar methods to upcoming surveys. We also make the autovet code used to implement the algorithm publicly accessible. autovet is designed to perform machine-learned vetting of planetary candidates, and can utilise a variety of methods. The apparent robustness of machine learning techniques, whether on space-based or the qualitatively different ground-based data, highlights their importance to future surveys such as TESS and PLATO and the need to better understand their advantages and pitfalls in an exoplanetary context.
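    The AUC figure quoted above has a simple rank interpretation: the probability that a randomly chosen injected planet receives a higher vetting score than a randomly chosen false positive. A minimal sketch of that metric, using invented candidate scores rather than output from the actual autovet pipeline:

    ```python
    def auc_score(planet_scores, fp_scores):
        """AUC via its rank interpretation: the fraction of
        (planet, false positive) pairs in which the planet gets
        the higher score, counting ties as half a win."""
        wins = ties = 0
        for p in planet_scores:
            for f in fp_scores:
                if p > f:
                    wins += 1
                elif p == f:
                    ties += 1
        return (wins + 0.5 * ties) / (len(planet_scores) * len(fp_scores))

    # Toy vetting scores (hypothetical, standing in for random-forest output).
    planets = [0.9, 0.8, 0.6]
    false_positives = [0.7, 0.4, 0.3, 0.2]
    print(auc_score(planets, false_positives))  # 11 of 12 pairs correct: 0.9166666666666666
    ```

    A production pipeline would take the scores from a trained classifier; the metric itself is classifier-agnostic.
    
    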

  12. Evaluating the truth brand.

    Science.gov (United States)

    Evans, W Douglas; Price, Simani; Blahut, Steven

    2005-03-01

    The American Legacy Foundation developed the truth campaign, an aspirational antismoking brand for adolescents. This study tested whether a multidimensional scale, brand equity in truth, mediates the relationship between campaign exposure and youth smoking. We collected brand equity responses from 2,306 youth on a nationally representative telephone survey. Factor analysis indicates that the scale has excellent psychometric properties and effectively measures brand equity. We developed a structural equation model to test the mediation hypothesis. Results show that brand equity mediates the relationship between truth and smoking. Analyses of potential confounders show this relationship is robust. Behavioral branding (brands about a behavior or a lifestyle) is an important public health strategy.

  13. The end of truth?

    OpenAIRE

    C. W. du Toit

    1997-01-01

    As we are approaching the end of the century, many ideas, systems, and certainties, previously taken for granted, seem to be questioned, altered and rejected. One of these is the notion of truth, which pervades the very fibre of Western thinking. Rejecting the relevant critique as simply a postmodern fad, this article proceeds to give attention to the questions regarding the end of religious, scientific, and metaphysical truth. Truth and power are dealt with, as well as the narrative nature of...

  14. Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar

    Science.gov (United States)

    Maas, Christian; Schmalzl, Jörg

    2013-08-01

    Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape that depends on the depth and material of the object and on the surrounding material. To obtain these parameters, the shape of the hyperbola has to be fitted. In recent years several methods were developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real-time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas, we apply a simple Hough Transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough Transform, the detection system can also be implemented on normal field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems are needed as input for the learning algorithm.
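    The Hough stage described above votes over candidate apex parameters using the standard GPR hyperbola model, in which the two-way travel time of a point reflector at lateral position x0 and depth d is t(x) = (2/v) * sqrt(d^2 + (x - x0)^2) for wave velocity v. A minimal sketch of that voting, with synthetic picks and illustrative grids in place of real radargram data (the numbers are assumptions, not values from the paper):

    ```python
    import math
    from collections import Counter

    def hough_hyperbola(points, v, x0_grid, d_grid, tol=0.05):
        """Vote in (apex position x0, depth d) space: each picked point
        (x, t) supports every candidate apex whose modeled travel time
        t = (2/v) * sqrt(d^2 + (x - x0)^2) matches it within tol."""
        votes = Counter()
        for x, t in points:
            for x0 in x0_grid:
                for d in d_grid:
                    t_model = (2.0 / v) * math.sqrt(d * d + (x - x0) ** 2)
                    if abs(t_model - t) < tol:
                        votes[(x0, d)] += 1
        return votes.most_common(1)[0]  # best (apex, vote count)

    # Synthetic picks from a buried object at x0 = 2.0 m, depth 1.0 m,
    # with an assumed velocity of 0.1 m/ns.
    v = 0.1
    picks = [(x, (2.0 / v) * math.sqrt(1.0 + (x - 2.0) ** 2))
             for x in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5)]
    apex, votes = hough_hyperbola(picks, v, x0_grid=(1.5, 2.0, 2.5),
                                  d_grid=(0.5, 1.0, 1.5))
    print(apex, votes)  # (2.0, 1.0) 7
    ```

    In the paper's pipeline the Viola-Jones cascade would first restrict `points` to a small image region, which is what keeps this brute-force accumulation affordable on field computers.
    
    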

  15. Goedel, truth and proof

    International Nuclear Information System (INIS)

    Peregrin, Jaroslav

    2007-01-01

    The usual way of interpreting Goedel's (1931) incompleteness result is as showing that there is a gap between truth and provability, i.e. that we can never prove everything that is true. Moreover, this result is supposed to show that there are unprovable truths which we can know to be true. This, so the story goes, shows that we are more than machines that are restricted to acting as proof systems. Hence our minds are 'not mechanical'

  16. Evaluation of the Performance Characteristics of CGLSS II and U.S. NLDN Using Ground-Truth Data from Launch Complex 39B, Kennedy Space Center, Florida

    Science.gov (United States)

    Mata, C. T.; Mata, A. G.; Rakov, V. A.; Nag, A.; Saul, J.

    2012-01-01

    A new comprehensive lightning instrumentation system has been designed for Launch Complex 39B (LC39B) at the Kennedy Space Center, Florida. This new instrumentation system includes seven synchronized high-speed video cameras; current sensors installed on the nine downconductors of the new lightning protection system (LPS) for LC39B; four dH/dt, 3-axis measurement stations; and five dE/dt stations composed of two antennas each. The LPS received 8 direct lightning strikes (a total of 19 strokes) from March 31 through December 31, 2011. The measured peak currents and locations are compared to those reported by the Cloud-to-Ground Lightning Surveillance System (CGLSS II) and the National Lightning Detection Network (NLDN). Results of the comparison are presented and analyzed in this paper.

  17. THE USE OF UAS FOR ASSESSING AGRICULTURAL SYSTEMS IN A WETLAND IN TANZANIA IN THE DRY- AND WET-SEASON FOR SUSTAINABLE AGRICULTURE AND PROVIDING GROUND TRUTH FOR TERRA-SAR X DATA

    Directory of Open Access Journals (Sweden)

    H.-P. Thamm

    2013-08-01

    Full Text Available The paper describes the assessment of the vegetation and the land use systems of the Malinda Wetland in the Usambara Mountains in Tanzania with the parachute UAS (unmanned aerial system) SUSI 62. The area of investigation was around 8 km2. In two campaigns, one in the wet season and one in the dry season, approximately 2600 aerial photos of the wetland were taken using the parachute UAS SUSI 62; from these images, ortho-photos with a spatial resolution of 20 cm x 20 cm were computed with an advanced block bundle approach. The block bundles were geo-referenced using control points taken with differential GPS. A digital surface model (DSM) of the wetland was also created from the UAS photos. Using the ortho-photos it is possible to assess the different land use systems; the differences in the phenology of the vegetation between wet and dry season can be investigated. In addition, the regionalisation of biomass samples on smaller test plots was possible. The ortho-photos and the DSM derived from the UAS proved to be valuable ground truth for the interpretation of Terra-SAR X images. The campaigns demonstrated that SUSI 62 is a suitable, robust tool for obtaining valuable information under harsh conditions.

  18. Moball-Buoy Network: A Near-Real-Time Ground-Truth Distributed Monitoring System to Map Ice, Weather, Chemical Species, and Radiations, in the Arctic

    Science.gov (United States)

    Davoodi, F.; Shahabi, C.; Burdick, J.; Rais-Zadeh, M.; Menemenlis, D.

    2014-12-01

    This work was funded by the Cryospheric Sciences Program at NASA Headquarters. Recent observations of the Arctic have shown that sea ice has diminished drastically, consequently impacting the environment in the Arctic and beyond. Certain factors such as atmospheric anomalies, wind forces, temperature increase, and change in the distribution of cold and warm waters contribute to the sea ice reduction. However, current measurement capabilities lack the accuracy, temporal sampling, and spatial coverage required to effectively quantify each contributing factor and to identify other missing factors. Addressing the need for new measurement capabilities for the new Arctic regime, we propose a game-changing in-situ Arctic-wide distributed mobile monitoring system called the Moball-buoy Network. The Moball-buoy Network consists of a number of wind-propelled, self-powered inflatable spheres referred to as Moball-buoys. The Moball-buoys use their novel mechanical control and energy harvesting system to exploit the abundance of wind in the Arctic for controlled mobility and energy harvesting. They are equipped with an array of low-power, low-mass sensors and micro devices able to measure a wide range of environmental factors such as ice conditions, chemical species, wind vector patterns, cloud coverage, air temperature and pressure, electromagnetic fields, surface and subsurface water conditions, short- and long-wave radiation, bathymetry, and anthropogenic factors such as pollution. The stop-and-go motion capability enabled by their novel mechanics, and the heads-up cooperation control strategy at the core of the proposed distributed system, allow the sensor network to be reconfigured dynamically according to the priority of the parameters to be monitored. The large number of Moball-buoys with their ground-based, sea-based, satellite and peer-to-peer communication capabilities would constitute a wireless mesh network that provides an interface for a global

  19. Truth, body and religion

    Directory of Open Access Journals (Sweden)

    Jarl-Thure Eriksson

    2011-01-01

    Full Text Available This paper is based on the words of welcome to the symposium on Religion and the Body on 16 June 2010. In a religious context ‘truth’ is like a mantra, a certain imperative to believe in sacred things. The concept of truth and falseness arises when we as humans compare reality, as we experience it through our senses, with the representation we have in our memory, a comparison of new information with stored information. If we look for the truth, we have to search in the human mind. There we will also find religion.

  20. Truth in Philosophy

    Directory of Open Access Journals (Sweden)

    Tibor R. Machan

    2011-03-01

    Full Text Available Can there be truth in philosophy? A problem: it is philosophy, its various schools, that advances what counts as true versus false, how to go about making the distinction. This is what I wish to focus on here and see if some coherent, sensible position could be reached on the topic.

  1. Verifiably Truthful Mechanisms

    DEFF Research Database (Denmark)

    Branzei, Simina; Procaccia, Ariel D.

    2015-01-01

    the computational sense). Our approach involves three steps: (i) specifying the structure of mechanisms, (ii) constructing a verification algorithm, and (iii) measuring the quality of verifiably truthful mechanisms. We demonstrate this approach using a case study: approximate mechanism design without money...

  2. Experience, Poetry and Truth

    DEFF Research Database (Denmark)

    Gahrn-Andersen, Rasmus

    2017-01-01

    of philosophical thinking. Specifically, I show that, beneath a highly poetic and obscure prose, Jünger posits how subjective experience and poetry allow individuals to realize truth. I relate parts of Jünger’s insights to contributions by Husserl, Heidegger and Merleau-Ponty, arguing that Jünger offers a unique...

  3. An existential theory of truth

    African Journals Online (AJOL)

    of the theory of truth expressed in the writings of certain existentialist writers ... gical) indeterminateness of meaning and truth, apart from one's .... dual human perspective and the unavoidable existential tasks of deciphering for oneself what is ...

  4. Truth as a Mathematical Object

    Directory of Open Access Journals (Sweden)

    Jean-Yves Béziau

    2010-04-01

    Full Text Available In this paper we discuss in which sense truth is considered as a mathematical object in propositional logic. After clarifying how this concept is used in classical logic, through the notions of truth-table, truth-function and bivaluation, we examine some generalizations of it in non-classical logics: many-valued matrix semantics with three and four values, non-truth-functional bivalent semantics, Kripke possible world semantics.

  5. A novel approach for automatic snow depth estimation using UAV-taken images without ground control points

    Science.gov (United States)

    Mizinski, Bartlomiej; Niedzielski, Tomasz

    2017-04-01

    Recent developments in snow depth reconstruction based on remote sensing techniques include the use of photographs of snow-covered terrain taken by unmanned aerial vehicles (UAVs). There are several approaches that utilize visible-light photos (RGB) or near-infrared images (NIR). The majority of the methods in question are based on reconstructing the digital surface model (DSM) of the snow-covered area with the use of the Structure-from-Motion (SfM) algorithm and stereo-vision software. Having reconstructed the above-mentioned DSM, it is straightforward to calculate the snow depth map, which may be produced as the difference between the DSM of snow-covered terrain and the snow-free DSM, known as the reference surface. In order to use the aforementioned procedure, high spatial accuracy of the two DSMs must be ensured. Traditionally, this is done using ground control points (GCPs), either artificial or natural terrain features that are visible on aerial images, the coordinates of which are measured in the field using a Global Navigation Satellite System (GNSS) receiver by qualified personnel. The field measurements may be time-consuming (GCPs must be well distributed in the study area, so the field experts must travel over long distances) and dangerous (the field experts may be exposed to avalanche risk or cold). Thus, there is a need for methods that enable the above-mentioned automatic snow depth map production without the use of GCPs. One such attempt is presented in this paper, which describes a novel method based on real-time processing of snow-covered and snow-free dense point clouds produced by SfM. A two-stage georeferencing is proposed. The initial (low accuracy) stage assigns true geographic, and subsequently projected, coordinates to the two dense point clouds, while the initially-registered dense point clouds are matched using the iterative closest point (ICP) algorithm in the final (high accuracy) stage.
The
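    Once both dense clouds are georeferenced and gridded, the snow depth itself reduces to a cell-wise difference between the snow-covered DSM and the snow-free reference, with negative residuals (registration noise) clipped to zero. A minimal sketch with made-up elevation grids (the values are illustrative, not from the study):

    ```python
    def snow_depth_map(dsm_snow, dsm_reference):
        """Cell-wise snow depth: snow-covered surface minus snow-free
        reference surface, clipping negative residuals to 0 and
        rounding to centimetres for readability."""
        return [[round(max(s - r, 0.0), 2) for s, r in zip(row_s, row_r)]
                for row_s, row_r in zip(dsm_snow, dsm_reference)]

    # Hypothetical 2x3 elevation grids (metres above sea level).
    dsm_snow = [[101.2, 101.5, 100.9],
                [100.8, 101.1, 100.95]]
    dsm_ref  = [[100.0, 100.2, 100.1],
                [100.0, 100.3, 101.00]]
    print(snow_depth_map(dsm_snow, dsm_ref))
    # [[1.2, 1.3, 0.8], [0.8, 0.8, 0.0]]
    ```

    The hard part of the paper is not this subtraction but aligning the two clouds without GCPs; the ICP matching stage is what makes the simple difference above meaningful.
    
    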

  6. Ground Truth in Building Human Security

    Science.gov (United States)

    2012-11-01

    structured. This allows for the creation of one master matrix where the assessment results are collected for each AO and then weighted ... weighted criteria for a geographic region’s value to overall policy aims, aids in decision-making where to best allocate resources when, as in this...Snow Leopards and Cadastres: Rare Sightings in Afghanistan,” in Land and Post-Conflict Peacebuilding, Jon Unruh and Rhodri Williams, Eds

  7. A Robust Bayesian Truth Serum for Small Populations

    OpenAIRE

    Parkes, David C.; Witkowski, Jens

    2012-01-01

    Peer prediction mechanisms allow the truthful elicitation of private signals (e.g., experiences, or opinions) in regard to a true world state when this ground truth is unobservable. The original peer prediction method is incentive compatible for any number of agents n >= 2, but relies on a common prior, shared by all agents and the mechanism. The Bayesian Truth Serum (BTS) relaxes this assumption. While BTS still assumes that agents share a common prior, this prior need not be known to the me...

  8. An inconvenient truth

    International Nuclear Information System (INIS)

    Gore, Al

    2007-01-01

    Our climate crisis may at times appear to be happening slowly, but in fact it is happening very quickly-and has become a true planetary emergency. The Chinese expression for crisis consists of two characters. The first is a symbol for danger; the second is a symbol for opportunity. In order to face down the danger that is stalking us and move through it, we first have to recognize that we are facing a crisis. So why is it that our leaders seem not to hear such clarion warnings? Are they resisting the truth because they know that the moment they acknowledge it, they will face a moral imperative to act? Is it simply more convenient to ignore the warnings? Perhaps, but inconvenient truths do not go away just because they are not seen. Indeed, when they are responded to, their significance does not diminish; it grows. (author)

  9. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    Science.gov (United States)

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, by comparison with the ground-truth segmentation performed by a radiologist.

  10. On Truth and Emancipation

    Directory of Open Access Journals (Sweden)

    Andreas Hjort Bundgaard

    2012-09-01

    Full Text Available This article has two main currents. First, it argues that an affinity or similarity can be identified between the philosophy of Gianni Vattimo (the so-called “Weak Thinking”) and the “Discourse Theory” of Ernesto Laclau and Chantal Mouffe. The two theorizations are engaged with related problems, but have conceptualized them differently; they share central insights, but understand them with different vocabularies. The article furthermore illuminates what this affinity consists in, and it discusses the differences and similarities between the two theoretical positions. The second current of the article takes the ‘postmodern’ philosophical problems of anti-foundationalism and nihilism as its point of departure. It raises the questions of: (1) how it is possible at the same time to take the critique of universality and objectivity seriously and still believe in the value of ethics and science; and (2) how we are to understand emancipation if there is no necessary relationship between truth and freedom. The article investigates the status, meaning and interconnection of the categories of truth, knowledge, ethics, politics and emancipation in the light of the absence of metaphysical first principles. The article concludes that: (A) faith can constitute a “weak foundation” of knowledge and ethics; and (B) nihilism can be combined with the political and ethical ambitions of universal human emancipation and radical democracy.

  11. Lying relies on the truth

    NARCIS (Netherlands)

    Debey, E.; De Houwer, J.; Verschuere, B.

    2014-01-01

    Cognitive models of deception focus on the conflict-inducing nature of the truth activation during lying. Here we tested the counterintuitive hypothesis that the truth can also serve a functional role in the act of lying. More specifically, we examined whether the construction of a lie can involve a

  12. Twardowski On Truth

    Directory of Open Access Journals (Sweden)

    Peter Simons

    2009-10-01

    Full Text Available Of those students of Franz Brentano who went on to become professional philosophers, Kazimierz Twardowski (1866-1938) is much less well-known than his older contemporaries Edmund Husserl and Alexius Meinong. Yet in terms of the importance of his contribution to the history of philosophy, he ranks among Brentano’s students behind at most those two, possibly only behind Husserl. The chief contribution of Twardowski to global philosophy came indirectly, through the influence of his theory of truth on his students, and they on their students, and so on. The most important of these grand-students is one whom Twardowski presumably knew but never taught, and whose adopted name is obtained by deleting four letters from his own: Tarski.

  13. Climate: 15 inconvenient truths

    International Nuclear Information System (INIS)

    Marko, Istvan E.; Furfari, Samuel; Masson, Henri; Preat, Alain; Debeil, Anne; Delory, Ludovic; Godefridi, Drieu; Myren, Lars; Ripa di Meana, Carlo

    2013-01-01

    Proposed by professionals of various disciplines, this book is considered the bible of climate sceptics. It offers a synthesis of arguments that challenge the prevailing views in the domain of climate. The authors show how, for fifteen years, reality has systematically contradicted the projections made by the IPCC and its numerous political relays and media coverage. A first objective is therefore to reopen the debate on the climate issue in the face of a systematic monopolization of truth at the expense of an authentic scientific approach, and to restore a democratic debate. A second objective is to call into question the scientific character of the IPCC, the scientific views at the heart of the last report published by the IPCC, and the political, media and economic reception of IPCC reports

  14. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    International Nuclear Information System (INIS)

    Deeley, M A; Cmelak, A J; Malcolm, A W; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Ding, G X; Chen, A; Datteri, R; Noble, J H; Dawant, B M; Donnelly, E F; Yei, F; Koyama, T

    2011-01-01

    The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation (STAPLE) algorithm and a novel application of probability maps. The experts and the automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8-0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4-0.5. Similarly low DSCs have been reported previously, but without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (-4.3, +5.4) mm for the automatic system to (-3.9, +7.5) mm for the experts considered as a group. In a ranking of true positive rates at a 2 mm threshold from the simulated ground truth, computed over all structures, the automatic system ranked second of the nine raters. This work underscores the need for large-scale studies utilizing statistically robust numbers of patients and experts in evaluating the quality of automatic algorithms.
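    The Dice similarity coefficient used throughout this comparison is defined as 2|A∩B| / (|A| + |B|) for two segmentations A and B. A minimal sketch over voxel index sets (the voxel sets below are invented for illustration, not patient data):

    ```python
    def dice_coefficient(seg_a, seg_b):
        """Dice similarity coefficient between two segmentations given
        as sets of voxel indices: 1.0 = identical, 0.0 = disjoint."""
        a, b = set(seg_a), set(seg_b)
        if not a and not b:
            return 1.0  # two empty segmentations agree trivially
        return 2.0 * len(a & b) / (len(a) + len(b))

    # Two hypothetical raters marking the same small structure.
    rater_1 = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)}
    rater_2 = {(0, 1, 0), (0, 1, 1), (0, 2, 0), (0, 2, 1)}
    print(dice_coefficient(rater_1, rater_2))  # 2*2 / (4+4) = 0.5
    ```

    Because the denominator scales with total size, thin tubular structures such as the optic nerves are penalized heavily for small boundary disagreements, which is consistent with the low 0.4-0.5 DSC values reported above.
    
    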

  15. Davidson, Dualism, and Truth

    Directory of Open Access Journals (Sweden)

    Nathaniel Goldberg

    2012-12-01

    Full Text Available Happy accidents happen even in philosophy. Sometimes our arguments yield insights despite missing their target, though when they do others can often spot it more easily. Consider the work of Donald Davidson. Few did more to explore connections among mind, language, and world. Now that we have critical distance from his views, however, we can see that Davidson’s accomplishments are not quite what they seem. First, while Davidson attacked the dualism of conceptual scheme and empirical content, he in fact illustrated a way to hold it. Second, while Davidson used the principle of charity to argue against the dualism, his argument in effect treats the principle as constitutive of a conceptual scheme. And third, while Davidson asserted that he cannot define what truth ultimately is—and while I do not disagree—his work nonetheless allows us to say more about truth than Davidson himself does. I aim to establish these three claims. Doing so enriches our understanding of issues central to the history of philosophy concerning how, if at all, to divvy up the mental or linguistic contribution, and the worldly contribution, to knowledge. As we see below, Davidson was right in taking his work to be one stage of a dialectic begun by Immanuel Kant. He was just wrong about what that stage is. Reconsidering Davidson’s views also moves the current debate forward, as they reveal a previously unrecognized yet intuitive notion of truth—even if Davidson himself remained largely unaware of it. We begin however with scheme/content dualism and Davidson’s argument against it.

  16. An existential theory of truth

    Directory of Open Access Journals (Sweden)

    Dale Cannon

    1993-01-01

    Full Text Available This article is an attempt to present a simplified account of the theory of truth expressed in the writings of certain existentialist writers - namely, Kierkegaard, Heidegger, Jaspers, and Marcel. It is designed to serve as a supplement to conventional textbook treatments of the nature of truth, which typically ignore the contributions that existentialists have made to the topic. An existential theory of truth stresses the epistemological (not ontological) indeterminateness of meaning and truth, apart from one’s personal participation in determining them. Contrary to superficial interpretations, this theory does not do away either with a transcendent reality or with objectivity. What is rejected is anything that would circumvent the necessary task of participating, oneself, in the epistemological determination of truth.

  17. Shy and Ticklish Truths as Species of Scientific and Artistic Perception

    African Journals Online (AJOL)

    ... recognize a 'gay science' (Nietzsche) not as an eccentric construction of merely poetic insights and expressions, but as a necessary part of the fundamentals of knowledge. It is a truth of the human condition that its truths are grounded in a personal embodiment of individuality, ontogeny, momentariness and situationality.

  18. Behavioral pragmatism: No place for reality and truth

    Science.gov (United States)

    Barnes-Holmes, Dermot

    2000-01-01

    The current article begins by reviewing L. J. Hayes's claim that pragmatism relies on a correspondence-based truth criterion. To evaluate her claim, the concept of the observation sentence, proposed by the pragmatist philosopher W. V. Quine, is examined. The observation sentence appears to remove the issue of correspondence from Quine's pragmatist philosophy. Nevertheless, the issue of correspondence reemerges, as the problem of homology, when Quine appeals to agreement between or among observation sentences as the basis for truth. Quine also argues, however, that the problem of homology (i.e., correspondence) should be ignored on pragmatic grounds. Because the problem is simply ignored, but not resolved, there appears to be some substance to Hayes's claim that pragmatism relies ultimately on correspondence as a truth criterion. Behavioral pragmatism is then introduced to circumvent both Hayes's claim and Quine's implicit appeal to correspondence. Behavioral pragmatism avoids correspondence by appealing to the personal goals (i.e., the behavior) of the scientist or philosopher as the basis for establishing truth. One consequence of this approach, however, is that science and philosophy are robbed of any final or absolute objectives and thus may not be a satisfactory solution to philosophers. On balance, behavioral pragmatism avoids any appeal to correspondence-based truth, and thus it cannot be criticized for generating the same philosophical problems that have come to be associated with this truth criterion. PMID:22478346

  19. Redefining Religious Truth as a Challenge for Philosophy of Religion

    NARCIS (Netherlands)

    Jonkers, P.H.A.I.

    2012-01-01

    One of the most important features of contemporary Western societies is the rise of (religious) pluralism. Whereas (philosophical) theism used to serve as a common ground to discuss the truth-claims of religion, this approach seems to have lost much of its plausibility. What I want to argue in this

  20. Automatic aortic root segmentation in CTA whole-body dataset

    Science.gov (United States)

    Gao, Xinpei; Kitslaar, Pieter H.; Scholte, Arthur J. H. A.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke; Reiber, Johan H. C.

    2016-03-01

    Trans-catheter aortic valve replacement (TAVR) is an evolving technique for patients with serious aortic stenosis disease. Typically, in this application a CTA data set is obtained of the patient's arterial system from the subclavian artery to the femoral arteries, to evaluate the quality of the vascular access route and analyze the aortic root to determine if and which prosthesis should be used. In this paper, we concentrate on the automated segmentation of the aortic root. The purpose of this study was to automatically segment the aortic root in computed tomography angiography (CTA) datasets to support TAVR procedures. The method in this study includes 4 major steps. First, the patient's cardiac CTA image was resampled to reduce the computation time. Next, the cardiac CTA image was segmented using an atlas-based approach. The most similar atlas was selected from a total of 8 atlases based on its image similarity to the input CTA image. Third, the aortic root segmentation from the previous step was transferred to the patient's whole-body CTA image by affine registration and refined in the fourth step using a deformable subdivision surface model fitting procedure based on image intensity. The pipeline was applied to 20 patients. The ground truth was created by an analyst who semi-automatically corrected the contours of the automatic method, where necessary. The average Dice similarity index between the segmentations of the automatic method and the ground truth was found to be 0.965±0.024. In conclusion, the current results are very promising.
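
    The pipeline's second step selects the most similar of 8 atlases based on image similarity to the input CTA image; the abstract does not name the similarity metric, so the sketch below assumes normalized cross-correlation, a common choice for intensity-based atlas selection.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two images of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def select_atlas(target: np.ndarray, atlases: list) -> int:
    """Return the index of the atlas most similar to the target image."""
    scores = [ncc(target, atlas) for atlas in atlases]
    return int(np.argmax(scores))

# Synthetic demo: seven unrelated "atlases" plus one near-copy of the target.
rng = np.random.default_rng(0)
target = rng.random((32, 32))
atlases = [rng.random((32, 32)) for _ in range(7)]
atlases.append(target + 0.01 * rng.random((32, 32)))
print(select_atlas(target, atlases))  # 7: the near-copy wins
```

    The selected atlas's aortic root label would then be propagated to the patient image by affine registration, as the abstract describes.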

  1. Study on the Feasibility of RGB Substitute CIR for Automatic Removal Vegetation Occlusion Based on Ground Close-Range Building Images

    Science.gov (United States)

    Li, C.; Li, F.; Liu, Y.; Li, X.; Liu, P.; Xiao, B.

    2012-07-01

    Building 3D reconstruction based on ground remote sensing data (image, video and lidar) inevitably faces the problem that buildings are occluded by vegetation, so automatically removing and repairing vegetation occlusion is a very important preprocessing step for image understanding, computer vision and digital photogrammetry. In traditional multispectral remote sensing, acquired from aeronautic and space platforms, the Red and Near-infrared (NIR) bands, combined in indices such as NDVI (Normalized Difference Vegetation Index), are useful to distinguish vegetation and clouds, amongst other targets. However, especially on ground platforms, CIR (Color Infrared) is little used in computer vision and digital photogrammetry, which usually consider only true-color RGB. Whether CIR is necessary for vegetation segmentation therefore matters, because most close-range cameras lack such an NIR band. Moreover, the CIE L*a*b* color space, obtained by transforming RGB, has drawn little interest from photogrammetrists despite its power in image classification and analysis. Therefore, CIE (L, a, b) features and a support vector machine (SVM) are suggested for vegetation segmentation as a substitute for CIR. Finally, experimental results on visual effect and automation are given. The conclusion is that it is feasible to remove and segment vegetation occlusion without an NIR band. This work should pave the way for texture reconstruction and repair in future 3D reconstruction.
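
    NDVI needs an NIR band, but RGB-only vegetation indices exist. The sketch below uses the excess-green index (ExG = 2g - r - b on chromaticity-normalized channels), a standard NIR-free alternative; it is an illustration of the general idea, not the paper's CIE L*a*b* + SVM pipeline, and the threshold value is an assumption.

```python
import numpy as np

def excess_green_mask(rgb: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Segment vegetation from a true-color image without an NIR band.

    ExG = 2g - r - b on chromaticity-normalized channels; pixels whose
    greenness dominates are flagged as vegetation.
    """
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1, keepdims=True) + 1e-12
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    exg = 2 * g - r - b
    return exg > threshold

# Synthetic 2x2 image: one green "vegetation" pixel, three gray/red pixels.
img = np.array([[[40, 200, 40], [120, 120, 120]],
                [[200, 60, 60], [90, 100, 90]]], dtype=np.uint8)
print(excess_green_mask(img).astype(int))  # vegetation flagged only at top-left
```

    A learned classifier (e.g. an SVM on L*a*b* features, as the paper proposes) generalizes this hand-set threshold to varied illumination.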

  2. An automatic optimum number of well-distributed ground control lines selection procedure based on genetic algorithm

    Science.gov (United States)

    Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram

    2018-05-01

    The procedure of selecting an optimum number and best distribution of ground control information is important in order to reach accurate and robust registration results. This paper proposes a new general procedure based on a Genetic Algorithm (GA) which is applicable to all kinds of features (point, line, and areal features). However, linear features, due to their unique characteristics, are of interest in this investigation. This method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. The ones indicate the presence of a pair of conjugate lines as a GCL and zeros specify the absence. The chromosome length is considered equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (ones in each chromosome). Then, a limited number of Check Points (CPs) are used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until reaching a stopping criterion. The number and position of ones in the best chromosome indicate the selected GCLs among all conjugate lines. To evaluate the proposed method, a GeoEye and an Ikonos image are used over different areas of Iran. Comparing the results obtained by the proposed method in a traditional RFM with conventional methods that use all conjugate lines as GCLs shows a fivefold accuracy improvement (pixel-level accuracy) as well as the strength of the proposed method. To prevent an over-parametrization error in a traditional RFM due to the selection of a high number of improperly correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of the combination of the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, reliability, and reducing systematic
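
    The chromosome/fitness loop described above can be sketched compactly. The toy below is a heavily simplified illustration, not the authors' implementation: it selects point correspondences (not conjugate lines) and fits a plain 2D affine transform (not an RFM), but it keeps the paper's structure of binary chromosomes scored by checkpoint RMSE.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points."""
    A = np.hstack([src, np.ones((len(src), 1))])    # (n, 3)
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2)
    return coef

def rmse(coef, src, dst):
    """Root-mean-square checkpoint error of an affine transform."""
    A = np.hstack([src, np.ones((len(src), 1))])
    return float(np.sqrt(np.mean(np.sum((A @ coef - dst) ** 2, axis=1))))

def ga_select(src, dst, cp_src, cp_dst, pop=30, gens=40, p_mut=0.05):
    """Binary-chromosome GA: ones mark the control correspondences kept;
    fitness is the RMSE on independent check points (lower is better)."""
    n = len(src)
    population = rng.integers(0, 2, size=(pop, n))

    def fitness(chrom):
        idx = np.flatnonzero(chrom)
        if len(idx) < 3:             # an affine fit needs >= 3 correspondences
            return np.inf
        return rmse(fit_affine(src[idx], dst[idx]), cp_src, cp_dst)

    for _ in range(gens):
        scores = np.array([fitness(c) for c in population])
        parents = population[np.argsort(scores)][: pop // 2]     # elitist selection
        children = parents[rng.integers(0, len(parents), pop - len(parents))].copy()
        children[rng.random(children.shape) < p_mut] ^= 1        # bit-flip mutation
        population = np.vstack([parents, children])

    scores = np.array([fitness(c) for c in population])
    return population[np.argmin(scores)]

# Synthetic data: an exact affine transform, with three corrupted correspondences
# that a good chromosome should exclude.
src = rng.random((12, 2)) * 100
true = np.array([[1.01, 0.02], [-0.02, 0.99], [5.0, -3.0]])
dst = np.hstack([src, np.ones((12, 1))]) @ true
dst[:3] += 40 * rng.random((3, 2))            # gross outliers
cp_src = rng.random((5, 2)) * 100
cp_dst = np.hstack([cp_src, np.ones((5, 1))]) @ true
best = ga_select(src, dst, cp_src, cp_dst)
print(int(best.sum()), "correspondences selected")
```

    In the paper the fitted model is an RFM over conjugate lines, but the selection mechanism (binary chromosomes evaluated by checkpoint RMSE) is the same.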

  3. Impact of the accuracy of automatic tumour functional volume delineation on radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Le Maitre, Amandine; Hatt, Mathieu; Pradier, Olivier; Cheze-le Rest, Catherine; Visvikis, Dimitris

    2012-01-01

    Over the past few years several automatic and semi-automatic PET segmentation methods for target volume definition in radiotherapy have been proposed. The objective of this study is to compare different methods in terms of dosimetry. For such a comparison, a gold standard is needed. For this purpose, realistic GATE-simulated PET images were used. Three lung cases and three head-and-neck cases were designed with various shapes, contrasts and heterogeneities. Four different segmentation approaches were compared: fixed and adaptive thresholds, a fuzzy C-means and the fuzzy locally adaptive Bayesian method. For each of these target volumes, an IMRT treatment plan was defined. The different algorithms and resulting plans were compared in terms of segmentation errors and ground-truth volume coverage using different metrics (V95, D95, homogeneity index and conformity index). The major differences between the threshold-based methods and automatic methods occurred in the most heterogeneous cases. Within the two groups, the major differences occurred for low contrast cases. For homogeneous cases, equivalent ground-truth volume coverage was observed for all methods, but for more heterogeneous cases, significantly lower coverage was observed for threshold-based methods. Our study demonstrates that significant dosimetry errors can be avoided by using more advanced image-segmentation methods. (paper)
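
    The fixed and adaptive thresholds compared above can be sketched in a few lines. This is an illustration of the two schemes, not the study's code, and the 35% fraction is an illustrative value, not taken from the paper.

```python
import numpy as np

def fixed_threshold_mask(img, fraction=0.35):
    """Fixed threshold: keep voxels above a fixed fraction of the maximum
    uptake (the exact fraction here is illustrative)."""
    return img >= fraction * img.max()

def adaptive_threshold_mask(img, background, fraction=0.35):
    """Adaptive threshold: the same fraction, but measured between the
    local background level and the maximum uptake."""
    return img >= background + fraction * (img.max() - background)

# Synthetic 1D profile through a hot lesion on a warm background.
profile = np.array([1.0, 1.2, 4.0, 9.5, 10.0, 9.0, 3.5, 1.1])
print(fixed_threshold_mask(profile).astype(int))          # [0 0 1 1 1 1 1 0]
print(adaptive_threshold_mask(profile, 1.1).astype(int))  # [0 0 0 1 1 1 0 0]
```

    The adaptive variant contracts the segmented volume when the background is elevated, which is precisely where the study found threshold-based methods losing ground-truth coverage in heterogeneous cases.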

  4. The Truth of Sacred Scripture

    Directory of Open Access Journals (Sweden)

    Tomasz Jelonek

    2006-09-01

    Full Text Available Article presents the history of contradiction between science and the Bible and how it was solved in Dogmatic Constitution on Divine Revelation Dei Verbum of the II Vatican Council. Since biblical truth was given to us “for the sake of our salvation,” and not in order to teach us natural science or history for their own sake, Sacred Scripture cannot be fairly judged to be in error when it sometimes presents historical or scientific truth in a less complete, less detailed, more popular, or more imprecise (i.e. merely approximate fashion than would be acceptable in modern texts dedicated formally to those disciplines.

  5. Truthful approximations to range voting

    DEFF Research Database (Denmark)

    Filos-Ratsika, Aris; Miltersen, Peter Bro

    We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...

  6. TRUTH AS DETERMINANT OF RELIGIOUS FAITH

    African Journals Online (AJOL)

    Admin

    engendered by either the myths of the religion or the historical personages. The truth we intend to .... personage and source of Islamic religion was an orphan boy Muhammad, born ... The Fundamental Truth about Some Religions of the World.

  7. Heart Health: The Heart Truth Campaign 2009

    Science.gov (United States)

    ... one of the celebrities supporting this year's The Heart Truth campaign. Both R&B singer Ashanti (center) ...

  8. Truths, lies, and statistics.

    Science.gov (United States)

    Thiese, Matthew S; Walker, Skyler; Lindsey, Jenna

    2017-10-01

    Distribution of valuable research discoveries is needed for the continual advancement of patient care. Publication of, and subsequent reliance on, false study results would be detrimental to patient care. Unfortunately, research misconduct may originate from many sources. While there is evidence of ongoing research misconduct in all its forms, it is challenging to identify the actual occurrence of research misconduct, which is especially true for misconduct in clinical trials. Research misconduct is challenging to measure and there are few studies reporting the prevalence or underlying causes of research misconduct among biomedical researchers. Reported prevalence estimates of misconduct are probably underestimates, and range from 0.3% to 4.9%. There have been efforts to measure the prevalence of research misconduct; however, the relatively few published studies are not freely comparable because of varying characterizations of research misconduct and the methods used for data collection. There are some signs which may point to an increased possibility of research misconduct; however, there is a need for continued self-policing by biomedical researchers. There are existing resources to assist in ensuring appropriate statistical methods and preventing other types of research fraud. These include the "Statistical Analyses and Methods in the Published Literature", also known as the SAMPL guidelines, which help scientists determine the appropriate method of reporting various statistical methods; the "Strengthening Analytical Thinking for Observational Studies", or STRATOS, which emphasizes execution and interpretation of results; and the Committee on Publication Ethics (COPE), which was created in 1997 to deliver guidance about publication ethics. COPE has a sequence of views and strategies grounded in the values of honesty and accuracy.

  9. "#Factsmustfall"?--Education in a Post-Truth, Post-Truthful World

    Science.gov (United States)

    Horsthemke, Kai

    2017-01-01

    Taking its inspiration from the name of the recent "#FeesMustFall" movement on South African university campuses, this paper takes stock of the apparent disrepute into which truth, facts and also rationality have fallen in recent times. In the post-truth world, the blurring of borders between truth and deception, truthfulness and…

  10. Truth and the Capability of Learning

    Science.gov (United States)

    Hinchliffe, Geoffrey

    2007-01-01

    This paper examines learning as a capability, taking as its starting point the work of Amartya Sen and Martha Nussbaum. The paper is concerned to highlight the relation between learning and truth, and it does so by examining the idea of a genealogy of truth and also Donald Davidson's coherence theory. Thus the notion of truth is understood to be…

  11. Normativity and deflationary theories of truth

    Directory of Open Access Journals (Sweden)

    Bruno Mölder

    2008-12-01

    Full Text Available It has been argued that deflationary theories of truth stumble over the normativity of truth. This paper maintains that the normativity objection does not pose problems to at least one version of deflationism, minimalism. The rest of the paper discusses truth-related norms, showing that either they do not hold or they are not troublesome for deflationism.

  12. TRUTH AS DETERMINANT OF RELIGIOUS FAITH

    African Journals Online (AJOL)

    Admin

    of values like any other institution”. Our concern here is how religious truth that ought to be absolute has become relative thus producing many different religions in the world. Relativity of Religious Truths As Determinant. Of Religious Faith. Truth has been defined as that which conforms to essential reality, but is it absolute?

  13. Truth Obviousness in Ancient Greek Philosophy

    Directory of Open Access Journals (Sweden)

    Halyna I. Budz

    2013-01-01

    Full Text Available The article examines the features of the axiomatic approach to the truth understanding in ancient Greek philosophy. Truth in the works of ancient philosophers has an axiomatic essence, based on the divine origin of truth. As the truth has a divine origin, it is in reality. The reality, created by Gods, is the solemn reality. Therefore, understanding of reality by man is the display of divine reality, which is true and clever. In the context of ancient Greek philosophy, to know truth is to know something existing in reality, in other words, something truly existing, eternal reality. Consequently, to know truth is to know the substantial base of reality. That’s why the justification of the origin of reality is at the same time the axiomatic doctrine of truth, because only the fundamental principle “truly” exists and is the truth itself. The idea of the fundamental principle in ancient Greek philosophy is the axiom, the universal principle, which is the base of reality as a substance from an ontological perspective and is realized as the truth from a gnosiological perspective. The fundamental principle, as the Greeks understood it, coincides with the truth; in other words, reality and thinking are identical. The idea of the source of reality is at the same time the universal criterion of world perception; in other words, it is the truth, which is perceived axiomatically.

  14. The Inconvenient Truth. Part 2

    International Nuclear Information System (INIS)

    Athanasiou, T.

    2007-01-01

    Essay-type of publication on what should happen next after Al Gore's presentations on the Inconvenient Truth about the impacts of climate change. The essay states in the first lines: 'We've seen the movie, so we know the first part - we're in trouble deep. And it's time, past time, for at least some of us to go beyond warning to planning, to start talking seriously about a global crash program to stabilize the climate

  15. Visual truths of citizen reportage

    DEFF Research Database (Denmark)

    Allan, Stuart; Peters, Chris

    2015-01-01

    In striving to better understand issues associated with citizen contributions to newsmaking in crisis situations, this article identifies and elaborates four specific research problematics – bearing witness, technologies of truth-telling, mediating visualities and affectivities of othering ... – in order to recast more familiar modes of enquiry. Specifically, it provides an alternative heuristic to theorize the journalistic mediation of citizen imagery, and the myriad ways this process of negotiation maintains, repairs and at times disrupts the interstices of professional–amateur boundaries...

  16. Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy. In a large-scale screening environment DME can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.
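
    Wavelet-decomposition features like those mentioned above reduce an image patch to a few subband energies that a classifier can consume. The sketch below implements one level of a 2D Haar transform by hand; it is a generic illustration of such features, not the authors' feature set.

```python
import numpy as np

def haar2d_level1(img):
    """One level of a 2D Haar wavelet transform (LL, LH, HL, HH subbands)."""
    img = img.astype(float)
    a = (img[0::2] + img[1::2]) / 2       # vertical average
    d = (img[0::2] - img[1::2]) / 2       # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def subband_energies(img):
    """Mean energy per subband: a compact texture feature vector."""
    return [float(np.mean(s ** 2)) for s in haar2d_level1(img)]

# A smooth gradient patch vs. the same patch with high-frequency "lesion" texture.
rng = np.random.default_rng(1)
smooth = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
noisy = smooth + 0.5 * rng.random((8, 8))
print(subband_energies(smooth)[3] < subband_energies(noisy)[3])  # True: texture raises HH energy
```

    In practice these energies would be concatenated with colour statistics and segmentation-derived features before training the classifier.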

  17. Automatic defect detection in video archives: application to Montreux Jazz Festival digital archives

    Science.gov (United States)

    Hanhart, Philippe; Rerabek, Martin; Ivanov, Ivan; Dufaux, Alain; Jones, Caryl; Delidais, Alexandre; Ebrahimi, Touradj

    2013-09-01

    Archival of audio-visual databases has become an important discipline in multimedia. Various defects are typically present in such archives. Among those, one can mention recording-related defects such as interference between audio and video signals, optical artifacts, recording and play-out artifacts such as horizontal lines and dropouts, as well as those due to digitization, such as diagonal lines. An automatic or semi-automatic detection to identify such defects is useful, especially for large databases. In this paper, we propose two automatic algorithms for detection of horizontal and diagonal lines, as well as dropouts, which are among the most typical artifacts encountered. We then evaluate the performance of these algorithms by making use of ground truth scores obtained by human subjects.
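
    A persistent horizontal-line artifact shows up as an outlier in a frame's per-row intensity profile. The sketch below is a minimal illustration of that idea using a robust (median/MAD) z-score; it is not the paper's algorithm, and the threshold is an assumption.

```python
import numpy as np

def detect_horizontal_lines(frame, k=6.0):
    """Flag rows whose mean intensity deviates strongly from the rest.

    A bright or dark horizontal line is an outlier in the per-row mean
    profile; k is a robust z-score threshold (median/MAD based).
    """
    profile = frame.mean(axis=1)
    med = np.median(profile)
    mad = np.median(np.abs(profile - med)) + 1e-12
    score = np.abs(profile - med) / (1.4826 * mad)   # ~= |z| for Gaussian rows
    return np.flatnonzero(score > k)

# Synthetic frame with one injected bright line artifact.
rng = np.random.default_rng(7)
frame = 100 + 5 * rng.random((40, 64))
frame[17] += 80
print(detect_horizontal_lines(frame))  # [17]
```

    A diagonal-line detector would apply the same profiling idea along skewed projections of the frame rather than raw rows.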

  18. ARCOCT: Automatic detection of lumen border in intravascular OCT images.

    Science.gov (United States)

    Cheimariotis, Grigorios-Aris; Chatzizisis, Yiannis S; Koutkias, Vassilis G; Toutouzas, Konstantinos; Giannopoulos, Andreas; Riga, Maria; Chouvarda, Ioanna; Antoniadis, Antonios P; Doulaverakis, Charalambos; Tsamboulatidis, Ioannis; Kompatsiaris, Ioannis; Giannoglou, George D; Maglaveras, Nicos

    2017-11-01

    Intravascular optical coherence tomography (OCT) is an invaluable tool for the detection of pathological features on the arterial wall and the investigation of post-stenting complications. Computational lumen border detection in OCT images is highly advantageous, since it may support rapid morphometric analysis. However, automatic detection is very challenging, since OCT images typically include various artifacts that impact image clarity, including features such as side branches and intraluminal blood presence. This paper presents ARCOCT, a segmentation method for fully-automatic detection of the lumen border in OCT images. ARCOCT relies on multiple, consecutive processing steps, accounting for image preparation, contour extraction and refinement. In particular, for contour extraction ARCOCT employs the transformation of OCT images based on physical characteristics such as reflectivity and absorption of the tissue and, for contour refinement, local regression using weighted linear least squares and a 2nd degree polynomial model is employed to achieve artifact and small-branch correction as well as smoothness of the artery mesh. Our major focus was to achieve accurate contour delineation in the various types of OCT images, i.e., even in challenging cases with branches and artifacts. ARCOCT has been assessed in a dataset of 1812 images (308 from stented and 1504 from native segments) obtained from 20 patients. ARCOCT was compared against ground-truth manual segmentation performed by experts on the basis of various geometric features (e.g. area, perimeter, radius, diameter, centroid, etc.) and closed contour matching indicators (the Dice index, the Hausdorff distance and the undirected average distance), using standard statistical analysis methods. The proposed method was proven very efficient and close to the ground truth, exhibiting no statistically significant differences for most of the examined metrics. ARCOCT allows accurate and fully-automated lumen border
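
    The refinement step described above (local regression with weighted least squares and a 2nd-degree polynomial) can be sketched on a contour's radial profile. The window size and taper weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def refine_radii(radii, window=7, degree=2):
    """LOESS-style contour refinement: at each point, fit a weighted
    2nd-degree polynomial to a local window of the radial profile and
    replace the point with the fitted value."""
    n = len(radii)
    half = window // 2
    offsets = np.arange(-half, half + 1)
    weights = 1.0 - (np.abs(offsets) / (half + 1)) ** 3   # tapered window weights
    out = np.empty(n)
    for i in range(n):
        idx = (i + offsets) % n            # wrap around: the contour is closed
        coef = np.polyfit(offsets, radii[idx], degree, w=weights)
        out[i] = np.polyval(coef, 0.0)
    return out

theta = np.linspace(0, 2 * np.pi, 90, endpoint=False)
radii = 3.0 + 0.2 * np.sin(2 * theta)      # smooth lumen border (radius vs. angle)
radii[30] += 1.5                           # spike, e.g. a guide-wire artifact
refined = refine_radii(radii)
residual = abs(refined[30] - (3.0 + 0.2 * np.sin(2 * theta[30])))
print(residual < 0.8)  # True: the artifact spike is largely suppressed
```

    Because the fit is local and low-degree, genuine smooth variation of the border survives while isolated spikes from artifacts and small branches are pulled back toward the fitted curve.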

  19. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
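
    The first outlier-detection method above, the box-whisker plot, reduces to the standard interquartile-range rule. A minimal sketch (illustrative, not the authors' code), applied to a hypothetical per-subject quality feature:

```python
import numpy as np

def boxplot_outliers(values, whisker=1.5):
    """Univariate non-parametric outlier detection (box-whisker rule).

    A value is flagged when it falls more than `whisker` IQRs outside
    the first/third quartile, the standard box-plot criterion.
    """
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - whisker * iqr, q3 + whisker * iqr
    return np.flatnonzero((values < lo) | (values > hi))

# Hypothetical per-subject quality scores with one failed segmentation.
scores = [0.91, 0.88, 0.90, 0.93, 0.89, 0.92, 0.31, 0.90]
print(boxplot_outliers(scores))  # [6]
```

    The supervised alternative (LDA, LR, SVM, RFC in the paper) replaces this single-feature rule with a classifier trained on several such features at once.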

  20. A Bayesian truth serum for subjective data.

    Science.gov (United States)

    Prelec, Drazen

    2004-10-15

    Subjective judgments, an essential information source for science and policy, are problematic because there are no public criteria for assessing judgmental truthfulness. I present a scoring method for eliciting truthful subjective data in situations where objective truth is unknowable. The method assigns high scores not to the most common answers but to the answers that are more common than collectively predicted, with predictions drawn from the same population. This simple adjustment in the scoring criterion removes all bias in favor of consensus: Truthful answers maximize expected score even for respondents who believe that their answer represents a minority view.
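
    The scoring rule described above rewards answers that are more common than collectively predicted. The sketch below implements a simplified version of the "information score" component only (log of actual frequency over the geometric mean of predicted frequencies); it omits the prediction-score term of the full method and is an illustration, not Prelec's published scoring rule in full.

```python
import numpy as np

def information_scores(answers, predictions):
    """Simplified "surprisingly common" information score.

    answers:     (n,) int array, each respondent's chosen option
    predictions: (n, k) array, each respondent's predicted distribution
                 of answers in the population
    Score for option j: log(actual frequency / geometric mean of
    predicted frequencies); answers more common than predicted score high.
    """
    n, k = predictions.shape
    actual = np.bincount(answers, minlength=k) / n
    geo_pred = np.exp(np.log(np.clip(predictions, 1e-9, 1)).mean(axis=0))
    option_score = np.log(np.clip(actual, 1e-9, 1) / geo_pred)
    return option_score[answers]   # each respondent earns their option's score

# Three respondents: two pick option 0, one picks option 1; everyone
# predicts option 1 will dominate (80%).
answers = np.array([0, 0, 1])
predictions = np.tile([0.2, 0.8], (3, 1))
scores = information_scores(answers, predictions)
print(scores[0] > scores[2])  # True: option 0 is "surprisingly common"
```

    Note how consensus alone earns nothing: an answer scores well only when its actual frequency exceeds what the population itself predicted.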

  1. Climate science, truth, and democracy.

    Science.gov (United States)

    Keller, Evelyn Fox

    2017-08-01

    This essay was written almost ten years ago when the urgency of America's failure as a nation to respond to the threats of climate change first came to preoccupy me. Although the essay was never published in full, I circulated it informally in an attempt to provoke a more public engagement among my colleagues in the history, philosophy, and sociology of science. In particular, it was written in almost direct response to Philip Kitcher's own book, Science, Truth and Democracy (2001), in an attempt to clarify what was special about Climate Science in its relation to truth and democracy. Kitcher's response was immensely encouraging, and it led to an extended dialogue that resulted, first, in a course we co-taught at Columbia University, and later, to the book The Seasons Alter: How to Save Our Planet in Six Acts (W. W. Norton) published this spring. The book was finished just after the Paris Climate Accord, and it reflects the relative optimism of that moment. Unfortunately events since have begun to evoke, once again, the darker mood of this essay. I am grateful to Greg Radick for suggesting its publication. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. THEOLOGICAL AND PHILOSOPHICAL THEORIES OF TRUTH

    Directory of Open Access Journals (Sweden)

    Hans-Peter Grosshans

    2012-02-01

    Full Text Available Examining some theological and philosophical theories of truth, the author concentrates his attention on the experience of giving concrete reality to the Christian discourse about truth and at the same time contrasting this search with the attempts of philosophy to define truth. He draws the reader’s attention to the understanding of truth in language and communication. In his article he discusses the essential theories of truth which are characteristic of western philosophy: classical, correspondence, coherence, pragmatic, communicative and ontic. The author notes the specific traits of a theological understanding of truth and contends that it is based on an ontologically higher level than that of the classic definition of truth viewed simply in relation to reality and the understanding. The knowledge of God given to the Christian faith by the activity of the triune God is in itself perfect and therefore in no need of further development. It is on this basis that theology develops its knowledge of faith, sweeping aside everything which is not in accord with this fundamental affirmation of faith or with the witness of revealed truth.

  3. Objective Truth Institution in Criminal Procedure

    Directory of Open Access Journals (Sweden)

    Voltornist O. A.

    2012-11-01

    Full Text Available The article deals with the category of objective truth in criminal procedure and its importance for correct determination of the aims of criminal court procedure. The author also analyzes the bill draft offered by the RF Committee of Inquiry “On amending the RF Criminal Procedure Code due to the implementation of the objective truth institution in criminal procedure”

  4. Ethics and Truth in Archival Research

    Science.gov (United States)

    Tesar, Marek

    2015-01-01

    The complexities of the ethics and truth in archival research are often unrecognised or invisible in educational research. This paper complicates the process of collecting data in the archives, as it problematises notions of ethics and truth in the archives. The archival research took place in the former Czechoslovakia and its turbulent political…

  5. Does the Truth Matter in Science?

    Science.gov (United States)

    Lipton, Peter

    2005-01-01

    Is science in the truth business, discovering ever more about an independent and largely unobservable world? Karl Popper and Thomas Kuhn, two of the most important figures in science studies in the 20th century, gave accounts of science that are in some tension with the truth view. Their central claims about science are considered here, along with…

  6. The Philosophical Problem of Truth in Librarianship

    Science.gov (United States)

    Labaree, Robert V.; Scimeca, Ross

    2008-01-01

    The authors develop a framework for addressing the question of truth in librarianship and in doing so attempt to move considerations of truth closer to the core of philosophical debates within the profession. After establishing ways in which philosophy contributes to social scientific inquiry in library science, the authors examine concepts of…

  7. Truth and falsehood an inquiry into generalized logical values

    CERN Document Server

    Shramko, Yaroslav

    2012-01-01

    Here is a thoroughly elaborated logical theory of generalized truth-values, presenting the idea of a trilattice of truth values - a specific algebraic structure with information ordering and two distinct logical orderings, one for truth and another for falsity.

  8. LiDAR The Generation of Automatic Mapping for Buildings, Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    William Barragán Zaque

    2015-06-01

Full Text Available The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá, using high-resolution digital vertical aerial photographs and point clouds obtained with LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processing. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The results had an effectiveness of 97.2%, validated through ground-truthing.

  9. Automatic Imitation

    Science.gov (United States)

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  10. Image simulation for automatic license plate recognition

    Science.gov (United States)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
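The distortion-modeling step of such a framework can be sketched as follows; the separable Gaussian blur and additive sensor noise are illustrative stand-ins for the transformations whose parameters would be estimated from real plate images:

```python
import numpy as np

def simulate_capture(plate, blur_sigma=1.0, noise_sigma=8.0, seed=0):
    """Apply a simple camera-capture model to a synthetic plate image:
    Gaussian blur (separable 1-D kernel) followed by additive sensor noise.
    Parameter values are illustrative, not estimated from real plates."""
    rng = np.random.default_rng(seed)
    # Build a 1-D Gaussian kernel and blur separably (rows, then columns).
    radius = int(3 * blur_sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * blur_sigma**2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, plate)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    # Additive Gaussian noise, clipped back to the valid 8-bit range.
    noisy = blurred + rng.normal(0, noise_sigma, plate.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# A toy "plate": white background with a dark bar standing in for characters.
plate = np.full((40, 120), 255.0)
plate[15:25, 20:100] = 0.0
out = simulate_capture(plate)
```

In a full pipeline, many such degraded renderings (across fonts, layouts, and distortion settings) would feed the OCR training stage in place of captured plate images.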

  11. Japanese attitudes towards truth disclosure in cancer.

    Science.gov (United States)

    Tanida, N

    1994-03-01

Despite increasing concern about truth disclosure, most cancer patients in Japan are not told the truth about their disease. The author has tried to provide some insight into this issue by evaluating results from questionnaires given to hospital patients, clients in a mass cancer survey, and doctors of a college hospital. Results showed that 72% of patients and 83% of clients wanted to be told the truth, but only 33% and 34% of them thought that the truth should be told to cancer patients. These attitudes of patients and clients regarding truth disclosure were more positive than those of the general public and health care workers in previous studies. At present, 13% of doctors inform cancer patients of their disease. These trends indicate that the Japanese attitude toward avoiding truth disclosure stems primarily from paternalism but is also influenced by social characteristics, including an insufficient understanding of this issue. Open discussion involving all factions of society is necessary to attain a better understanding of this issue and to promote eventual truth disclosure.

  12. [Medicine and truth: between science and narrative].

    Science.gov (United States)

    Materia, Enrico; Baglio, Giovanni

    2009-01-01

To which idea of truth may medicine refer? Evidence-based medicine (EBM) is rooted in scientific truth. To explain the meaning and to trace the evolution of scientific truth, this article outlines the history of the Scientific Revolution and of the parable of Modernity, up to the arrival of pragmatism and hermeneutics. Here, the concept of truth becomes somehow discomfiting and the momentum leans towards the integration of different points of view. The fuzzy set theory for the definition of disease, as well as the shift from disease to syndrome (which has operational relevance for geriatrics), seems to refer to a more complex perspective on knowledge, albeit one that is less defined as compared to the nosology in use. Supporters of narrative medicine seek the truth in the interpretation of the patients' stories, and take advantage of the medical humanities to find the truth in words, feelings and contact with the patients. Hence, it is possible to mention parrhesia, the frank communication espoused by Stoicism and Epicureanism, a technical and ethical quality which allows one to care in the proper way, a true discourse for one's own moral stance. Meanwhile, EBM and narrative medicine are converging towards a point at which medicine is considered a practical knowledge. It is the perspective of complexity that, as a zeitgeist, explains these multiple instances and proposes multiplicity and uncertainty as key referents for the truth and the practice of medicine.

  13. Unveiling the truth: warnings reduce the repetition-based truth effect.

    Science.gov (United States)

    Nadarevic, Lena; Aßfalg, André

    2017-07-01

    Typically, people are more likely to consider a previously seen or heard statement as true compared to a novel statement. This repetition-based "truth effect" is thought to rely on fluency-truth attributions as the underlying cognitive mechanism. In two experiments, we tested the nature of the fluency-attribution mechanism by means of warning instructions, which informed participants about the truth effect and asked them to prevent it. In Experiment 1, we instructed warned participants to consider whether a statement had already been presented in the experiment to avoid the truth effect. However, warnings did not significantly reduce the truth effect. In Experiment 2, we introduced control questions and reminders to ensure that participants understood the warning instruction. This time, warning reduced, but did not eliminate the truth effect. Assuming that the truth effect relies on fluency-truth attributions, this finding suggests that warned participants could control their attributions but did not disregard fluency altogether when making truth judgments. Further, we found no evidence that participants overdiscount the influence of fluency on their truth judgments.

  14. Development of Mine Explosion Ground Truth Smart Sensors

    Science.gov (United States)

    2011-09-01

The two candidate sensors are the GS11-D by Oyo Geospace, used extensively in seismic monitoring of geothermal fields, and the Sensor Nederland SM-6 geophone. (Figure 4: preferred sensors and processor for the GTMS; (a) Sensor Nederland SM-6 geophone with emplacement spike.)

  15. A ship-borne meteorological station for ground truth measurements

    Digital Repository Service at National Institute of Oceanography (India)

    Desai, R.G.P.; Desa, B.A.E.

The paper shows that, with the high-performance components now available and modular software construction in a high-level language, it is possible to design equipment specially tailored to one's needs rapidly and cost-effectively.

  16. Outdoor surface temperature measurement: ground truth or lie?

    Science.gov (United States)

    Skauli, Torbjorn

    2004-08-01

    Contact surface temperature measurement in the field is essential in trials of thermal imaging systems and camouflage, as well as for scene modeling studies. The accuracy of such measurements is challenged by environmental factors such as sun and wind, which induce temperature gradients around a surface sensor and lead to incorrect temperature readings. In this work, a simple method is used to test temperature sensors under conditions representative of a surface whose temperature is determined by heat exchange with the environment. The tested sensors are different types of thermocouples and platinum thermistors typically used in field trials, as well as digital temperature sensors. The results illustrate that the actual measurement errors can be much larger than the specified accuracy of the sensors. The measurement error typically scales with the difference between surface temperature and ambient air temperature. Unless proper care is taken, systematic errors can easily reach 10% of this temperature difference, which is often unacceptable. Reasonably accurate readings are obtained using a miniature platinum thermistor. Thermocouples can perform well on bare metal surfaces if the connection to the surface is highly conductive. It is pointed out that digital temperature sensors have many advantages for field trials use.

  17. Global Ground Truth Data Set with Waveform and Arrival Data

    Science.gov (United States)

    2007-07-30

Events used in the inversion must be well-connected, i.e., stations observed many events in the cluster. To assess cluster connectivity, we employ a graph-based approach. In the RCA inversion we fix the pattern of relative hypocenters and origin times obtained from HDC, and locate the centroid of the cluster.

  18. The Incoherence of Post-Truth

    OpenAIRE

    Taylor, Dom

    2018-01-01

    Ostensibly, there has been a recent rise in ‘post-truth’ thinking (Higgins, 2016; Rochlin, 2017; Speed & Mannion, 2017; Suiter, 2016). The Oxford English Dictionary, which made ‘post-truth’ its word of the year for 2016, defines post-truth as “[r]elating to or denoting circumstances in which objective facts are less influential in shaping political debate or public opinion than appeals to emotion and personal belief” (“Post-truth,” 2017). Going into more detail, post-truth is described not ju...

  19. Power and Truth in Foucault and Habermas

    OpenAIRE

    Oliveira, Amurabi; Universidade Federal de Santa Catarina

    2014-01-01

Current paper examines how truth is enmeshed with power in Foucault's and Habermas's theories, highlighting similarities and differences between the two theoretical perspectives. If, on the one hand, truth in Foucault is based on a monologic imposition, on the other hand, Habermas insists on a dialogic understanding of truth; in both cases truth is related to power, though from opposite positions, as Habermas himself points out in ‘The Philosophical Discourse of Modernity’. Foucault takes on a c...

  20. Receiver operator characteristic (ROC) analysis without truth

    International Nuclear Information System (INIS)

    Henkelman, R.M.; Kay, I.; Bronskill, M.J.

    1990-01-01

    Receiver operator characteristic (ROC) analysis, the preferred method of evaluating diagnostic imaging tests, requires an independent assessment of the true state of disease, which can be difficult to obtain and is often of questionable accuracy. A new method of analysis is described which does not require independent truth data and which can be used when several accurate tests are being compared. This method uses correlative information to estimate the underlying model of multivariate normal distributions of disease-positive and disease-negative patients. The method is shown to give results equivalent to conventional ROC analysis in a comparison of computed tomography, radionuclide scintigraphy, and magnetic resonance imaging for liver metastasis. When independent truth is available, the method can be extended to incorporate truth data or to evaluate the consistency of the truth data with the imaging data
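Under the normal-distribution model assumed by such an analysis, each test's discriminating power can be summarized by the area under its ROC curve. A one-dimensional (binormal) sketch, with parameter values invented for illustration:

```python
import math

def binormal_auc(mu_neg, sd_neg, mu_pos, sd_pos):
    """Area under the ROC curve implied by a binormal model: the probability
    that a score drawn from the disease-positive distribution exceeds one
    drawn from the disease-negative distribution."""
    z = (mu_pos - mu_neg) / math.sqrt(sd_pos**2 + sd_neg**2)
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Equal unit variances, one-standard-deviation separation between populations:
auc = binormal_auc(0.0, 1.0, 1.0, 1.0)
# auc → Φ(1/√2) ≈ 0.760
```

In the truth-free setting described above, the distribution parameters would themselves be estimated from the correlations among the competing tests rather than from labeled cases.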

  1. Truthfulness in science teachers’ corporeal performances

    DEFF Research Database (Denmark)

    Daugbjerg, Peer

    2014-01-01

    , sincerity and trustworthiness in dealing with classroom management. Jane shows effort, fidelity and honesty in developing outdoor teaching. Simon shows transparency, objectivity and sincerity in his support of colleagues. By addressing the relations in the vocabulary of truthfulness the teachers...

  2. Crowd-sourced data collection to support automatic classification of building footprint data

    Science.gov (United States)

    Hecht, Robert; Kalla, Matthias; Krüger, Tobias

    2018-05-01

    Human settlements are mainly formed by buildings with their different characteristics and usage. Despite the importance of buildings for the economy and society, complete regional or even national figures of the entire building stock and its spatial distribution are still hardly available. Available digital topographic data sets created by National Mapping Agencies or mapped voluntarily through a crowd via Volunteered Geographic Information (VGI) platforms (e.g. OpenStreetMap) contain building footprint information but often lack additional information on building type, usage, age or number of floors. For this reason, predictive modeling is becoming increasingly important in this context. The capabilities of machine learning allow for the prediction of building types and other building characteristics and thus, the efficient classification and description of the entire building stock of cities and regions. However, such data-driven approaches always require a sufficient amount of ground truth (reference) information for training and validation. The collection of reference data is usually cost-intensive and time-consuming. Experiences from other disciplines have shown that crowdsourcing offers the possibility to support the process of obtaining ground truth data. Therefore, this paper presents the results of an experimental study aiming at assessing the accuracy of non-expert annotations on street view images collected from an internet crowd. The findings provide the basis for a future integration of a crowdsourcing component into the process of land use mapping, particularly the automatic building classification.
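A data-driven classifier of the kind described, trained on crowd-annotated reference labels, can be sketched as follows. The footprint features, labels, and the k-nearest-neighbour choice are illustrative assumptions, not the study's actual model:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Minimal k-nearest-neighbour classifier: each query footprint is
    assigned the majority label among its k closest training footprints."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Features per footprint: [area in m^2, compactness index].
# Labels (0 = residential, 1 = industrial) stand in for crowd annotations.
X_train = np.array([[120, 0.9], [150, 0.85], [2000, 0.3], [2500, 0.25]])
y_train = np.array([0, 0, 1, 1])
pred = knn_predict(X_train, y_train, np.array([[130, 0.88], [2200, 0.28]]), k=3)
# pred → [0, 1]
```

The point of the crowdsourcing experiment is precisely to supply `y_train` cheaply: non-expert street-view annotations replace costly expert-collected reference labels.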

  3. Comparison Of Semi-Automatic And Automatic Slick Detection Algorithms For Jiyeh Power Station Oil Spill, Lebanon

    Science.gov (United States)

    Osmanoglu, B.; Ozkan, C.; Sunar, F.

    2013-10-01

    After air strikes on July 14 and 15, 2006 the Jiyeh Power Station started leaking oil into the eastern Mediterranean Sea. The power station is located about 30 km south of Beirut and the slick covered about 170 km of coastline threatening the neighboring countries Turkey and Cyprus. Due to the ongoing conflict between Israel and Lebanon, cleaning efforts could not start immediately resulting in 12 000 to 15 000 tons of fuel oil leaking into the sea. In this paper we compare results from automatic and semi-automatic slick detection algorithms. The automatic detection method combines the probabilities calculated for each pixel from each image to obtain a joint probability, minimizing the adverse effects of atmosphere on oil spill detection. The method can readily utilize X-, C- and L-band data where available. Furthermore wind and wave speed observations can be used for a more accurate analysis. For this study, we utilize Envisat ASAR ScanSAR data. A probability map is generated based on the radar backscatter, effect of wind and dampening value. The semi-automatic algorithm is based on supervised classification. As a classifier, Artificial Neural Network Multilayer Perceptron (ANN MLP) classifier is used since it is more flexible and efficient than conventional maximum likelihood classifier for multisource and multi-temporal data. The learning algorithm for ANN MLP is chosen as the Levenberg-Marquardt (LM). Training and test data for supervised classification are composed from the textural information created from SAR images. This approach is semiautomatic because tuning the parameters of classifier and composing training data need a human interaction. We point out the similarities and differences between the two methods and their results as well as underlining their advantages and disadvantages. Due to the lack of ground truth data, we compare obtained results to each other, as well as other published oil slick area assessments.
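One simple way to combine the per-pixel probabilities from several acquisitions into a joint probability is independent-evidence (naive-Bayes) fusion; this is an illustrative rule, not necessarily the exact combination used in the paper:

```python
import numpy as np

def fuse_probabilities(prob_maps):
    """Fuse per-pixel slick probabilities from several SAR acquisitions into
    a joint probability, treating each acquisition as independent evidence."""
    prob_maps = np.asarray(prob_maps)
    p = np.prod(prob_maps, axis=0)          # evidence for "slick"
    q = np.prod(1.0 - prob_maps, axis=0)    # evidence for "no slick"
    return p / (p + q)

# Three single-image probability maps for a 2x2 patch (invented values).
maps = np.array([
    [[0.9, 0.4], [0.2, 0.8]],
    [[0.8, 0.5], [0.3, 0.7]],
    [[0.7, 0.6], [0.1, 0.9]],
])
joint = fuse_probabilities(maps)
```

Pixels that look slick-like in every image are reinforced, while one-off dark patches (e.g. transient low-wind areas) are suppressed, which is the stated motivation for combining images.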

  4. Automatic segmentation of the left ventricle in a cardiac MR short axis image using blind morphological operation

    Science.gov (United States)

    Irshad, Mehreen; Muhammad, Nazeer; Sharif, Muhammad; Yasmeen, Mussarat

    2018-04-01

Conventionally, cardiac MR image analysis is done manually. Automatic examination can replace the monotonous task of analyzing massive amounts of data to assess the global and regional function of the cardiac left ventricle (LV). This task is performed using MR images to calculate analytic cardiac parameters such as end-systolic volume, end-diastolic volume, ejection fraction, and myocardial mass. These analytic parameters depend upon genuine delineation of the epicardial, endocardial, papillary muscle, and trabeculation contours. In this paper, we propose an automatic segmentation method using the sum-of-absolute-differences technique to localize the left ventricle. Blind morphological operations are proposed to segment and detect the LV contours of the epicardium and endocardium automatically. We evaluate the proposed work on the benchmark Sunnybrook dataset. Contours of the epicardium and endocardium are compared quantitatively to determine contour accuracy, and high matching values are observed. Overlap between the automatic segmentation and the expert ground-truth analysis is high, with a similarity index of 91.30%. The proposed method for automatic segmentation gives better performance relative to existing techniques in terms of accuracy.
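Localization by the sum-of-absolute-differences technique amounts to exhaustive template matching. A minimal sketch, using a toy frame and template rather than real MR data:

```python
import numpy as np

def sad_localize(frame, template):
    """Locate a template (e.g. an LV patch) in a frame by exhaustive
    sum-of-absolute-differences matching; returns the best (row, col)."""
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            sad = np.abs(frame[r:r+th, c:c+tw] - template).sum()
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

frame = np.zeros((20, 20))
frame[8:12, 5:9] = 1.0          # bright blob standing in for the LV
template = np.ones((4, 4))
# sad_localize(frame, template) → (8, 5)
```

In practice the template would come from a reference slice or atlas, and the located window would seed the subsequent morphological segmentation.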

  5. Dual-model automatic detection of nerve-fibres in corneal confocal microscopy images.

    Science.gov (United States)

    Dabbah, M A; Graham, J; Petropoulos, I; Tavakoli, M; Malik, R A

    2010-01-01

    Corneal Confocal Microscopy (CCM) imaging is a non-invasive surrogate of detecting, quantifying and monitoring diabetic peripheral neuropathy. This paper presents an automated method for detecting nerve-fibres from CCM images using a dual-model detection algorithm and compares the performance to well-established texture and feature detection methods. The algorithm comprises two separate models, one for the background and another for the foreground (nerve-fibres), which work interactively. Our evaluation shows significant improvement (p approximately 0) in both error rate and signal-to-noise ratio of this model over the competitor methods. The automatic method is also evaluated in comparison with manual ground truth analysis in assessing diabetic neuropathy on the basis of nerve-fibre length, and shows a strong correlation (r = 0.92). Both analyses significantly separate diabetic patients from control subjects (p approximately 0).

  6. Automatic Centerline Extraction of Covered Roads by Surrounding Objects from High Resolution Satellite Images

    Science.gov (United States)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. To achieve precise road extraction, the method implements three stages: classification of images based on the maximum likelihood algorithm to categorize images into the classes of interest; modification of the classified images by connected-component and morphological operators, extracting pixels of desired objects by removing undesirable pixels of each class; and finally line extraction based on the RANSAC algorithm. To evaluate the performance of the proposed method, the generated results are compared with a ground-truth road map as a reference. Evaluation on representative test images shows completeness values ranging between 77% and 93%.
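The final RANSAC stage fits line hypotheses to candidate road pixels and keeps the one with the most inlier support. A minimal 2-D sketch with invented points:

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=1.0, seed=0):
    """Fit a 2-D line with RANSAC: repeatedly pick two points, count the
    points within `tol` perpendicular distance of the implied line, and
    keep the hypothesis with the most inliers (returned as a boolean mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        # Perpendicular distance of every point to the line through p and q.
        dist = np.abs(d[0] * (points[:, 1] - p[1]) - d[1] * (points[:, 0] - p[0])) / norm
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Collinear road pixels plus two outliers (e.g. a vehicle and a shadow).
pts = np.array([[x, 2.0 * x + 1.0] for x in range(10)] + [[3, 30], [7, -5]], dtype=float)
mask = ransac_line(pts)
# mask.sum() → 10 (the collinear road pixels)
```

Robustness to the two outliers is the reason RANSAC suits occluded roads: pixels from vehicles or shadows simply fail to join any well-supported line hypothesis.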

  7. Comparative Analysis of Automatic Exudate Detection between Machine Learning and Traditional Approaches

    Science.gov (United States)

    Sopharak, Akara; Uyyanonvara, Bunyarit; Barman, Sarah; Williamson, Thomas

To prevent blindness from diabetic retinopathy, periodic screening and early diagnosis are necessary. Due to the lack of expert ophthalmologists in rural areas, automated early detection of exudates (one visible sign of diabetic retinopathy) could help to reduce the number of blind diabetic patients. Traditional automatic exudate detection methods are based on specific parameter configurations, while machine learning approaches, which seem more flexible, may be computationally expensive. A comparative analysis of traditional and machine-learning exudate detection methods, namely mathematical morphology, fuzzy c-means clustering, the naive Bayesian classifier, the Support Vector Machine, and the Nearest Neighbor classifier, is presented. Detected exudates are validated against expert ophthalmologists' hand-drawn ground truths. The sensitivity, specificity, precision, accuracy, and time complexity of each method are also compared.
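The evaluation metrics named above are computed from the pixel-level confusion counts between detected exudates and the hand-drawn ground truth; for example:

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard evaluation metrics comparing detected exudate pixels
    against an expert's hand-drawn ground truth."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate (recall)
        "specificity": tn / (tn + fp),   # true negative rate
        "precision":   tp / (tp + fp),
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

# Invented confusion counts for illustration:
m = detection_metrics(tp=80, fp=20, tn=880, fn=20)
# m["sensitivity"] → 0.8, m["accuracy"] → 0.96
```

Because exudate pixels are rare compared to background, accuracy alone can look high even for a poor detector; reporting sensitivity and precision alongside it, as the paper does, guards against that.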

  8. Automatic Image-Based Plant Disease Severity Estimation Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Guan Wang

    2017-01-01

    Full Text Available Automatic and accurate estimation of disease severity is essential for food security, disease management, and yield loss prediction. Deep learning, the latest breakthrough in computer vision, is promising for fine-grained disease severity classification, as the method avoids the labor-intensive feature engineering and threshold-based segmentation. Using the apple black rot images in the PlantVillage dataset, which are further annotated by botanists with four severity stages as ground truth, a series of deep convolutional neural networks are trained to diagnose the severity of the disease. The performances of shallow networks trained from scratch and deep models fine-tuned by transfer learning are evaluated systemically in this paper. The best model is the deep VGG16 model trained with transfer learning, which yields an overall accuracy of 90.4% on the hold-out test set. The proposed deep learning model may have great potential in disease control for modern agriculture.

  9. Automatic Image-Based Plant Disease Severity Estimation Using Deep Learning.

    Science.gov (United States)

    Wang, Guan; Sun, Yu; Wang, Jianxin

    2017-01-01

    Automatic and accurate estimation of disease severity is essential for food security, disease management, and yield loss prediction. Deep learning, the latest breakthrough in computer vision, is promising for fine-grained disease severity classification, as the method avoids the labor-intensive feature engineering and threshold-based segmentation. Using the apple black rot images in the PlantVillage dataset, which are further annotated by botanists with four severity stages as ground truth, a series of deep convolutional neural networks are trained to diagnose the severity of the disease. The performances of shallow networks trained from scratch and deep models fine-tuned by transfer learning are evaluated systemically in this paper. The best model is the deep VGG16 model trained with transfer learning, which yields an overall accuracy of 90.4% on the hold-out test set. The proposed deep learning model may have great potential in disease control for modern agriculture.
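The transfer-learning recipe described above (keep a pre-trained backbone fixed, train only a new classification head on the severity labels) can be illustrated in miniature with plain NumPy; the features and labels below are toy stand-ins, not PlantVillage data or a real VGG16 backbone:

```python
import numpy as np

def train_linear_head(feats, labels, n_classes, lr=0.5, epochs=200):
    """Transfer learning in miniature: features from a frozen backbone are
    fixed, and only a new softmax classification head is trained on the
    severity labels, here with full-batch gradient descent."""
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.01, (feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        # Gradient of cross-entropy loss w.r.t. the head weights.
        W -= lr * feats.T @ (p - onehot) / len(feats)
    return W

# Toy "backbone features" for 4 leaf images and their severity labels.
feats = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
labels = np.array([0, 0, 1, 1])
W = train_linear_head(feats, labels, n_classes=2)
pred = np.argmax(feats @ W, axis=1)
```

Fine-tuning a real VGG16 works the same way at larger scale: the convolutional layers play the role of the fixed feature extractor, and the replaced top layers play the role of `W`.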

  10. STS, symmetry and post-truth.

    Science.gov (United States)

    Lynch, Michael

    2017-08-01

    This essay takes up a series of questions about the connection between 'symmetry' in Science and Technology Studies (STS) and 'post-truth' in contemporary politics. A recent editorial in this journal by Sergio Sismondo argues that current discussions of 'post-truth' have little to do with conceptions of 'symmetry' or with concerns about 'epistemic democracy' in STS, while others, such as Steve Fuller and Harry Collins, insist that there are such connections. The present essay discusses a series of questions about the meaning of 'post-truth' and 'symmetry', and the connections of those concepts to each other and to 'epistemic democracy'. The essay ends with a series of other questions about STS and contemporary politics, and an invitation to further discussions.

  11. Can partisan voting lead to truth?

    Science.gov (United States)

    Masuda, Naoki; Redner, S.

    2011-02-01

We study an extension of the voter model in which each agent is endowed with an innate preference for one of two states that we term 'truth' or 'falsehood'. Due to interactions with neighbors, an agent that innately prefers truth can be persuaded to adopt a false opinion (and thus be discordant with its innate preference) or the agent can possess an internally concordant 'true' opinion. Parallel states exist for agents that inherently prefer falsehood. We determine the conditions under which a population of such agents can ultimately reach a consensus for the truth, reach a consensus for falsehood, or reach an impasse where an agent tends to adopt the opinion that is in internal concordance with its innate preference with the outcome that consensus is never achieved.
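A minimal Monte Carlo sketch of such a model follows; the complete-graph topology, the adoption rule, and the bias parameter are illustrative choices, not the authors' exact dynamics:

```python
import numpy as np

def step(opinions, prefs, eps, rng):
    """One update of a partisan voter model on a complete graph: a random
    agent copies a random other agent's opinion, but adopting an opinion
    discordant with its innate preference succeeds only with probability
    1 - eps (eps is an illustrative bias parameter)."""
    n = len(opinions)
    i, j = rng.integers(n), rng.integers(n)
    if i == j:
        return
    new = opinions[j]
    # Concordant adoptions always succeed; discordant ones are resisted.
    if new == prefs[i] or rng.random() < 1 - eps:
        opinions[i] = new

rng = np.random.default_rng(1)
prefs = np.array([1] * 60 + [0] * 40)   # 60% innately prefer "truth" (1)
opinions = prefs.copy()
for _ in range(20000):
    step(opinions, prefs, eps=0.3, rng=rng)
frac_true = opinions.mean()             # fraction holding the "true" opinion
```

Sweeping `eps` in such a simulation exhibits the paper's three regimes: consensus on truth, consensus on falsehood, or a persistent mixed state in which agents cling to their innate preferences.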

  12. The Experience of Truth in Jazz Improvisation

    DEFF Research Database (Denmark)

    Olsen, Jens Skou

    2015-01-01

This is a book on truth, experience, and the interrelations between these two fundamental philosophical notions. The questions of truth and experience have their roots at the very heart of philosophy, both historically and thematically. This book gives an insight into how philosophers working in the fields of philosophical phenomenology and hermeneutics respond to challenges posed by these questions, not only in relation to the history of philosophy, but to philosophy itself. The book contains texts written by distinguished professors and in particular by young scholars. It is the result…

  13. Truth, laws and the progress of science

    Directory of Open Access Journals (Sweden)

    Mauro Dorato

    2011-06-01

    Full Text Available In this paper I analyze the difficult question of the truth of mature scientific theories by tackling the problem of the truth of laws. After introducing the main philosophical positions in the field of scientific realism, I discuss and then counter the two main arguments against realism, namely the pessimistic meta-induction and the abstract and idealized character of scientific laws. I conclude by defending the view that well-confirmed physical theories are true only relatively to certain values of the variables that appear in the laws.

  14. Protection of ground water at shallow geothermal power plants by means of an automatic leakage detection and liquid backwashing; Grundwasserschutz bei flachen Geothermieanlagen durch automatische Leckagenerkennung und Fluessigkeitsrueckspuelung

    Energy Technology Data Exchange (ETDEWEB)

    Wohnlich, Stefan; Scheliga, Roman [Bochum Univ. (Germany). Lehrstuhl fuer Angewandte Geologie; Bonin, Juergen [Umwelt und Technik, Xanten (Germany)

    2011-10-24

A protective device is examined with which the contamination of groundwater after an accident can be reduced to a minimum. The proposed device (geo-protector) detects a leak in two stages using pressure sensors. If the pressure falls below the lower minimum, the entire system is flushed with water and the brine is collected in a separate vessel, so that only drinking water escapes from the leak and further contamination of the groundwater with brine is avoided. To investigate the functionality and reliability of the geo-protector, a model of a geothermal power plant was built, with which leakages of varying sizes were simulated at different places along the geothermal probes. All simulated leakage scenarios were registered. The amount of brine discharged during the flushing process was at most 5% of the total volume of the system, and could be further reduced by means of a pressure reducer. The device thus reduces the brine outflow from a shallow geothermal system to a minimum, which can contribute significantly to the protection of groundwater.
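The two-stage pressure logic can be summarized as a small decision rule; the threshold values here are invented for illustration:

```python
def leak_response(pressure, warn_level=2.0, flush_level=1.5):
    """Two-stage leak logic of the kind described for the geo-protector:
    a drop below the warning level raises an alarm; a drop below the lower
    minimum pressure triggers flushing the loop with water so the brine is
    captured in a separate vessel. Thresholds (in bar) are illustrative."""
    if pressure < flush_level:
        return "flush_with_water"
    if pressure < warn_level:
        return "alarm"
    return "normal"

# leak_response(2.5) → "normal"
# leak_response(1.2) → "flush_with_water"
```

Splitting detection into two thresholds avoids flushing the loop on small pressure fluctuations while still guaranteeing a response before brine can reach the groundwater.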

  15. Automatic apparatus and data transmission for field response tests of the ground; Automatisation et teletransmission des donnees pour les tests de reponse du terrain

    Energy Technology Data Exchange (ETDEWEB)

    Laloui, L.; Steinmann, G.

    2004-07-01

This is the report on the third part of a development started in 1998 at the Swiss Federal Institute of Technology Lausanne (EPFL), Switzerland. Energy piles are increasingly used as heat exchangers and heat storage devices, as are geothermal probes. Their design and sizing is subject to some uncertainty because the planner has to estimate the thermal and mechanical properties of the ground surrounding the piles or probes. The aim of the project was to develop an apparatus for field measurements of the thermal and mechanical properties of an energy pile or a geothermal probe (thermal response tests). In the reported third phase of the project, the portable apparatus was equipped with a data transmission device using the Internet. Real-time data acquisition and supervision are now implemented and data processing has been improved. Another goal of the project was to obtain official accreditation of such response tests according to the European standard EN 45000. First operating experience from a test in Lyon, France is reported.

  16. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach.

    Science.gov (United States)

    Avendi, Michael R; Kheradvar, Arash; Jafarkhani, Hamid

    2017-12-01

    This study aims to accurately segment the right ventricle (RV) from cardiac MRI using a fully automatic learning-based method. The proposed method uses deep learning algorithms, i.e., convolutional neural networks and stacked autoencoders, for automatic detection and initial segmentation of the RV chamber. The initial segmentation is then combined with deformable models to improve the accuracy and robustness of the process. We trained our algorithm using 16 cardiac MRI datasets of the MICCAI 2012 RV Segmentation Challenge database and validated our technique using the rest of the dataset (32 subjects). An average Dice metric of 82.5% along with an average Hausdorff distance of 7.85 mm were achieved for all the studied subjects. Furthermore, a high correlation and level of agreement with the ground truth contours for end-diastolic volume (0.98), end-systolic volume (0.99), and ejection fraction (0.93) were observed. Our results show that deep learning algorithms can be effectively used for automatic segmentation of the RV. Computed quantitative metrics of our method outperformed those of the existing techniques that participated in the MICCAI 2012 challenge, as reported by the challenge organizers. Magn Reson Med 78:2439-2448, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
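The Dice metric reported in the abstract can be computed directly from two binary masks. The sketch below (the toy masks and function name are illustrative, not from the paper's code) shows the overlap formula on sets of pixel coordinates.

```python
# Dice overlap between a predicted mask and a ground-truth mask,
# each represented as a set of (row, col) pixel coordinates.
def dice(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|)."""
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

# Two 4x4 squares overlapping in a 2x2 corner: Dice = 2*4 / (16+16) = 0.25
pred = {(r, c) for r in range(4) for c in range(4)}
truth = {(r, c) for r in range(2, 6) for c in range(2, 6)}
score = dice(pred, truth)
```

A Dice of 82.5%, as reported, would correspond to masks sharing most of their area.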

  17. A logical approach to fuzzy truth hedges

    Czech Academy of Sciences Publication Activity Database

    Esteva, F.; Godo, L.; Noguera, Carles

    2013-01-01

    Roč. 232, č. 1 (2013), s. 366-385 ISSN 0020-0255 Institutional support: RVO:67985556 Keywords : Mathematical fuzzy logic * Standard completeness * Truth hedges Subject RIV: BA - General Mathematics Impact factor: 3.893, year: 2013 http://library.utia.cas.cz/separaty/2016/MTR/noguera-0469148.pdf

  18. Corporate truth: the limits to transparency

    National Research Council Canada - National Science Library

    Henriques, Adrian

    2007-01-01

    [Front matter and list-of-figures excerpts: "… plc"; "What the directors' report should cover"; "Conditions of use of the Rio Tinto website"; "Practical complicity"; figures 3.1 and 7.1–7.3, including "The ring of truth" and "The global trend in non-fi…"]

  19. Gödel, Truth and Proof

    Czech Academy of Sciences Publication Activity Database

    Peregrin, Jaroslav

    -, č. 82 (2007), s. 1-10 E-ISSN 1742-6596 R&D Projects: GA ČR(CZ) GA401/04/0117 Institutional research plan: CEZ:AV0Z90090514 Keywords : Gödel * incompleteness of arithmetic * proof vs. truth Subject RIV: AA - Philosophy ; Religion http://www.iop.org/EJ/toc/1742-6596/82/1

  20. La verità scientifica - Scientific truth

    Directory of Open Access Journals (Sweden)

    Marco Mazzeo

    2012-10-01

    Full Text Available A scientific theory is not mere speculation; on the contrary, it is based on facts and observations. Nevertheless, facts and observations alone cannot show us the truth about the world: to understand the facts, or even to discover them through experiments, we need a starting theory about the world. The world is therefore not only discovered by us; in a sense it is created by our brain. Facts are the constraints on possible theories, and theories are creations of our minds to understand the facts. There are no facts without a theory in mind, and there are no scientific theories about the world without facts. It is obvious, therefore, that science cannot give any absolute truth but "only" temporary truths, which will change with new discoveries and theories. Scientific truth is thus unstable: after a few decades the old concepts become unable to explain new discoveries, but the new concepts will include the old ones. This is called scientific progress. In this work we analyze all these points by discussing the historical development of the gravitational theory from Aristotle to Newton.

  1. An Exchange on "Truth and Methods."

    Science.gov (United States)

    Caughie, Pamela L.; Dasenbrock, Reed Way

    1996-01-01

    Takes issue with Reed Way Dasenbrock's criticism of literary theory and the terms under which literary interpretation and discussion take place. Presents Dasenbrock's reply, which discusses his understanding of certain terms (evidence, truth, debate), his description of the problem, and the logical contradictions he finds internal to…

  2. 76 FR 18354 - Truth in Lending

    Science.gov (United States)

    2011-04-04

    ... extent that a creditor imposed charges that were inconsistent with Regulation Z while the account was... amounts charged during the period the account was exempt or to provide disclosures regarding transactions...) amends the Truth in Lending Act (TILA) by increasing the threshold for exempt consumer credit...

  3. Lost Academic Souls and the Truth.

    Science.gov (United States)

    Birenbaum, William M.

    The connection between knowing the truth and some version of how men should live has always guided those who would lead the university. Walls around a campus or geographic isolation cannot prevent social pressures from affecting the institution. Colleges and universities have always been politicalized. The danger lies not in that fact but in the…

  4. What Truth in Lending Means to You.

    Science.gov (United States)

    Board of Governors of the Federal Reserve System, Washington, DC.

    Designed for the general public and possibly suitable also for high school economics students, this pamphlet discusses the provisions of the Truth in Lending Law. The act requires that creditors state credit charges in a uniform way. The pamphlet provides a brief description of finance charges and annual percentage rates. It also focuses on…

  5. Problematizing Religious Truth: Implications for Public Education

    Science.gov (United States)

    Rosenblith, Suzanne; Priestman, Scott

    2004-01-01

    The question motivating this paper is whether or not there can be standards governing the evaluation of truth claims in religion. In other areas of study such as physics, math, history, and even value-laden realms like morality there is some widespread agreement as to what constitutes good thinking. If such a standard existed in religion, then our…

  6. Beauty, a road to the truth

    NARCIS (Netherlands)

    Kuipers, T.A.F.

    In this article I give a naturalistic-cum-formal analysis of the relation between beauty, empirical success, and truth. The analysis is based on the one hand on a hypothetical variant of the so-called 'mere-exposure effect' which has been more or less established in experimental psychology regarding

  7. 76 FR 11319 - Truth in Lending

    Science.gov (United States)

    2011-03-02

    ... Congress enacted the Truth in Lending Act (TILA) based on findings that economic stability would be... Final Rule Congress enacted TILA based on findings that economic stability would be enhanced and... Economic Recovery Act of 2008 (HERA), also provides that its principal obligation limitations are subject...

  8. 75 FR 81836 - Truth in Lending

    Science.gov (United States)

    2010-12-29

    ... the Truth in Lending Act (TILA) based on findings that economic stability would be enhanced and... MDIA is contained in Sections 2501 through 2503 of the Housing and Economic Recovery Act of 2008, Public Law 110-289, enacted on July 30, 2008. The MDIA was later amended by the Emergency Economic...

  9. Truth and (self) censorship in military memoirs

    NARCIS (Netherlands)

    Kleinreesink, E.; Soeters, J.M.M.L.

    2016-01-01

    It can be difficult for researchers from outside the military to gain access to the field. However, there is a rich source on the military that is readily available for every researcher: military memoirs. This source does provide some methodological challenges with regard to truth and (self)

  10. Scientific revolution, incommensurability and truth in theories ...

    African Journals Online (AJOL)

    Scientific revolution, incommensurability and truth in theories: objection to Kuhn's perspective. ... AFRREV STECH: An International Journal of Science and Technology ... The core of our discussion is, ultimately, to provide a clearer and broader picture of the general characteristics of scientific revolution or theory change.

  11. Bargaining for Truth and Reconciliation in South Africa: A Game ...

    African Journals Online (AJOL)

    Bargaining for Truth and Reconciliation in South Africa: A Game-Theoretic Analysis. ... Using game-theoretic analysis, the authors model the truth-amnesty game and predict the optimal commission strategy. ... AJOL African Journals Online.

  13. Automatic polyp detection in colonoscopy videos

    Science.gov (United States)

    Yuan, Zijie; IzadyYazdanabadi, Mohammadhassan; Mokkapati, Divya; Panvalkar, Rujuta; Shin, Jae Y.; Tajbakhsh, Nima; Gurudu, Suryakanth; Liang, Jianming

    2017-02-01

    Colon cancer is the second leading cancer killer in the US [1]. Colonoscopy is the primary method for screening and prevention of colon cancer, but during colonoscopy a significant number (25% [2]) of polyps (precancerous abnormal growths inside the colon) are missed; therefore, the goal of our research is to reduce the polyp miss-rate of colonoscopy. This paper presents a method to detect polyps automatically in a colonoscopy video. Our system has two stages: candidate generation and candidate classification. In candidate generation (stage 1), we chose 3,463 frames (including 1,718 with-polyp frames) from a real-time colonoscopy video database. We first applied preprocessing procedures, namely intensity adjustment, edge detection and morphology operations. We then extracted each connected component (edge contour) as one candidate patch from the pre-processed image. With the help of ground truth (GT) images, two constraints were applied to each candidate patch, dividing and saving the patches into a polyp group and a non-polyp group. In candidate classification (stage 2), we trained and tested convolutional neural networks (CNNs) with the AlexNet architecture [3] to classify each candidate as with-polyp or non-polyp. Each with-polyp patch was augmented by rotation, translation and scaling to improve invariance and obtain a more robust CNN system. We applied leave-2-patients-out cross-validation to this model (4 of 6 cases were chosen as the training set and the remaining 2 as the testing set). The system accuracy and sensitivity are 91.47% and 91.76%, respectively.
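The leave-2-patients-out protocol described above can be sketched as follows; the patient identifiers are made up, and only the fold construction is shown, not the CNN training.

```python
from itertools import combinations

# Leave-2-patients-out cross-validation over 6 cases: every pair of
# patients serves once as the test set, the other 4 as the training set.
patients = ["p1", "p2", "p3", "p4", "p5", "p6"]
folds = [([p for p in patients if p not in pair], list(pair))
         for pair in combinations(patients, 2)]
# C(6, 2) = 15 folds, each with 4 training and 2 testing patients.
```

Splitting by patient rather than by frame prevents frames from the same colonoscopy appearing in both the training and testing sets.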

  14. Assessment of Machine Learning Algorithms for Automatic Benthic Cover Monitoring and Mapping Using Towed Underwater Video Camera and High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Hassan Mohamed

    2018-05-01

    Full Text Available Benthic habitat monitoring is essential for many applications involving biodiversity, marine resource management, and the estimation of variations over temporal and spatial scales. Nevertheless, both automatic and semi-automatic analytical methods for deriving ecologically significant information from towed camera images are still limited. This study proposes a methodology that enables a high-resolution towed camera with a Global Navigation Satellite System (GNSS) to adaptively monitor and map benthic habitats. First, the towed camera completes a pre-programmed initial survey to collect benthic habitat videos, which can then be converted to geo-located benthic habitat images. Second, an expert labels a number of benthic habitat images to classify the habitats manually. Third, attributes for categorizing these images are extracted automatically using the Bag of Features (BOF) algorithm. Fourth, benthic cover categories are detected automatically using Weighted Majority Voting (WMV) ensembles of Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), and Bagging (BAG) classifiers. Fifth, the WMV-trained ensembles can be used to categorize more benthic cover images automatically. Finally, correctly categorized geo-located images can provide ground truth samples for benthic cover mapping using high-resolution satellite imagery. The proposed methodology was tested over Shiraho, Ishigaki Island, Japan, a heterogeneous coastal area. The WMV ensemble exhibited 89% overall accuracy for categorizing corals, sediments, seagrass, and algae species. Furthermore, the same WMV ensemble produced a benthic cover map using a Quickbird satellite image with 92.7% overall accuracy.
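The weighted-majority-voting step in the abstract combines the labels of the base classifiers. A minimal sketch, assuming invented classifier weights and predictions (the paper's actual weighting scheme is not reproduced here):

```python
from collections import defaultdict

# Weighted Majority Voting across base classifiers (SVM, K-NN, Bagging
# in the abstract). Each classifier's vote counts with its weight.
def weighted_majority_vote(predictions, weights):
    """predictions: {classifier: label}; weights: {classifier: weight}."""
    tally = defaultdict(float)
    for clf, label in predictions.items():
        tally[label] += weights[clf]
    return max(tally, key=tally.get)

label = weighted_majority_vote(
    {"svm": "coral", "knn": "algae", "bagging": "coral"},
    {"svm": 0.9, "knn": 0.8, "bagging": 0.7})
# "coral" accumulates weight 1.6 versus 0.8 for "algae"
```

In practice the weights would typically reflect each base classifier's validation accuracy.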

  15. Automatic Evaluation of Photovoltaic Power Stations from High-Density RGB-T 3D Point Clouds

    Directory of Open Access Journals (Sweden)

    Luis López-Fernández

    2017-06-01

    Full Text Available A low-cost unmanned aerial platform (UAV) equipped with RGB (Red, Green, Blue) and thermographic sensors is used for the acquisition of all the data needed for the automatic detection and evaluation of thermal pathologies on photovoltaic (PV) surfaces and geometric defects in the mounting on photovoltaic power stations. RGB imagery is used for the generation of a georeferenced 3D point cloud through digital image preprocessing, photogrammetric and computer vision algorithms. The point cloud is complemented with temperature values measured by the thermographic sensor and with intensity values derived from the RGB data in order to obtain a multidimensional product (5D: 3D geometry plus temperature and intensity on the visible spectrum). A segmentation workflow based on the proper integration of several state-of-the-art geomatic and mathematic techniques is applied to the 5D product for the detection and sizing of thermal pathologies and geometric defects in the mounting of the PV panels. It consists of a three-step segmentation procedure, involving first the geometric information, then the radiometric (RGB) information, and last the thermal data. No configuration of parameters is required. Thus, the methodology presented contributes to the automation of the inspection of PV farms, through the maximization of the exploitation of the data acquired in the different spectra (visible and thermal infrared bands). Results of the proposed workflow were compared with a ground truth generated according to currently established protocols and complemented with a topographic survey. The proposed methodology was able to detect all pathologies established by the ground truth without adding any false positives. Discrepancies in the measurement of damaged surfaces with respect to the established ground truth, which can reach 5% of the total panel surface for visual inspection by an expert operator, decrease to under 2% with the proposed methodology.
The geometric evaluation
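As a toy illustration of the final (thermal) step of such a workflow, the sketch below flags points of a 5D cloud whose temperature exceeds the panel mean by a margin; the coordinates, values, and 3 K margin are all invented, not taken from the paper.

```python
# Each point: (x, y, z, rgb_intensity, temperature_celsius), i.e. the
# 5D product described in the abstract.
points = [
    (0.0, 0.0, 1.0, 120, 31.0),
    (0.1, 0.0, 1.0, 118, 30.5),
    (0.2, 0.0, 1.0, 119, 42.0),  # thermal pathology candidate
    (0.3, 0.0, 1.0, 121, 30.8),
]
mean_temp = sum(p[4] for p in points) / len(points)
hot_spots = [p for p in points if p[4] > mean_temp + 3.0]  # assumed 3 K margin
```

A real pipeline would first isolate each panel geometrically so that the mean is computed per panel rather than over the whole cloud.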

  16. The Truth and Harriet Martineau: Interpreting a Life.

    Science.gov (United States)

    Weiner, Gaby

    This paper explores the difficulty of claims to truth in the analysis of the life of the Victorian feminist, reformer, educationist, and celebrity, Harriet Martineau (1802-76). She was widely known as a truthful person. For example, her contemporary, the poet Elizabeth Barrett Browning, wrote in 1845 that "her love of the truth is proverbial…

  17. The Logic of Truth in Paraconsistent Internal Realism

    Directory of Open Access Journals (Sweden)

    Manuel Bremer

    2008-08-01

    Full Text Available The paper discusses which modal principles should hold for a truth operator answering to the truth theory of internal realism. It turns out that the logic of truth in internal realism is isomorphic to the modal system S4.

  18. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G.; Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-01-01

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
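The select-and-adjust idea can be caricatured as scoring candidate gantry angles with a learned regressor and greedily keeping well-separated high scorers. In this sketch `score_beam` is a dummy stand-in for the trained random forest, and the 30° minimum separation is an assumed constraint, not a value from the paper.

```python
# Dummy beam score: a stand-in for the learned random forest regressor.
# This toy function simply peaks at gantry angle 0 (illustrative only).
def score_beam(angle):
    return 1.0 - abs(((angle + 180) % 360) - 180) / 180.0

# Greedy selection: walk candidate angles in descending score order and
# keep an angle only if it is at least min_sep degrees (circularly) away
# from every angle already chosen.
def select_beams(n_beams, min_sep=30):
    chosen = []
    for angle in sorted(range(0, 360, 10), key=score_beam, reverse=True):
        if all(min(abs(angle - c), 360 - abs(angle - c)) >= min_sep
               for c in chosen):
            chosen.append(angle)
        if len(chosen) == n_beams:
            break
    return sorted(chosen)

chosen = select_beams(3)
```

The paper's optimization additionally models interbeam dependencies; this greedy separation rule is only the simplest possible surrogate for that step.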

  19. Truth and beauty in contemporary urban photography

    Directory of Open Access Journals (Sweden)

    Daniele Colistra

    2014-05-01

    Full Text Available Does the city still need photography? Or does it show itself more effectively through other forms of communication? The question takes us back almost two hundred years, to the time of the spread of the first daguerreotypes, when the query was: Does the city still need painting? The question raises several other issues - truth and beauty, analogue and digital, truth and photo editing - that this essay examines by comparing some images. We are convinced that "the more we can speak of a picture, the more unlikely it is to speak of photography" (R. Barthes). The essay describes the work of some artists/photographers who have addressed the issue of urban photography, works in which the figurative and visionary component is based on the interaction of traditional shooting techniques and processes of digital post-production.

  20. An inconvenient truth; Une verite qui derange

    Energy Technology Data Exchange (ETDEWEB)

    Al, Gore

    2007-01-15

    Our climate crisis may at times appear to be happening slowly, but in fact it is happening very quickly-and has become a true planetary emergency. The Chinese expression for crisis consists of two characters. The first is a symbol for danger; the second is a symbol for opportunity. In order to face down the danger that is stalking us and move through it, we first have to recognize that we are facing a crisis. So why is it that our leaders seem not to hear such clarion warnings? Are they resisting the truth because they know that the moment they acknowledge it, they will face a moral imperative to act? Is it simply more convenient to ignore the warnings? Perhaps, but inconvenient truths do not go away just because they are not seen. Indeed, when they are responded to, their significance does not diminish; it grows. (author)

  1. Do citation systems represent theories of truth?

    Directory of Open Access Journals (Sweden)

    Betsy Van der Veer Martens

    2001-01-01

    Full Text Available This article suggests that the citation can be viewed not only as a "concept symbol" but also as a "boundary object". The scientific, legal, and patent citation systems in America are examined at the micro, meso, and macro levels in order to understand how they function as commodified theories of truth in contemporary knowledge representation. This approach also offers a meta-theoretical overview of existing citation research efforts in science, law, and technology that may be of interdisciplinary interest.

  2. Political Corruption as Deformities of Truth

    Directory of Open Access Journals (Sweden)

    Yann Allard-Tremblay

    2014-01-01

    Full Text Available This paper presents a conception of corruption informed by epistemic democratic theory. I first explain the view of corruption as a disease of the political body. Following this view, we have to consider the type of actions that debase a political entity of its constitutive principle in order to assess corruption. Accordingly, we need to consider what the constitutive principle of democracy is. This is the task I undertake in the second section where I explicate democratic legitimacy. I present democracy as a procedure of social inquiry about what ought to be done that includes epistemic and practical considerations. In the third section, I argue that the problem of corruption for a procedural conception of democracy is that the epistemic value of the procedure is diminished by corrupted agents’ lack of concern for truth. Corruption, according to this view, consists in two deformities of truth: lying and bullshit. These deformities corrupt since they conceal private interests under the guise of a concern for truth. In the fourth section, I discuss the difficulties a procedural account may face in formulating solutions to the problem of corruption.

  3. Finding a single point of truth

    Energy Technology Data Exchange (ETDEWEB)

    Sokolov, S.; Thijssen, H. [Autodesk Inc, Toronto, ON (Canada); Laslo, D.; Martin, J. [Autodesk Inc., San Rafael, CA (United States)

    2010-07-01

    Electric utilities collect large volumes of data at every level of their business, including SCADA, Smart Metering and Smart Grid initiatives, LIDAR and other 3D imagery surveys. Different types of database systems are used to store the information, rendering data flow within the utility business process extremely complicated. The industry trend has been to endure redundancy of data input and maintenance of multiple copies of the same data across different solution data sets. Efforts have been made to improve the situation with point to point interfaces, but with the tools and solutions available today, a single point of truth can be achieved. Consolidated and validated data can be published into a data warehouse at the right point in the process, making the information available to all other enterprise systems and solutions. This paper explained how the single point of truth spatial data warehouse and process automation services can be configured to streamline the flow of data within the utility business process using the initiate-plan-execute-close (IPEC) utility workflow model. The paper first discussed geospatial challenges faced by utilities and then presented the approach and technology aspects. It was concluded that adoption of systems and solutions that can function with and be controlled by the IPEC workflow can provide significant improvement for utility operations, particularly if those systems are coupled with the spatial data warehouse that reflects a single point of truth. 6 refs., 3 figs.

  4. Culture, Truth, and Science After Lacan.

    Science.gov (United States)

    Gillett, Grant

    2015-12-01

    Truth and knowledge are conceptually related and there is a way of construing both that implies that they cannot be solely derived from a description that restricts itself to a set of scientific facts. In the first section of this essay, I analyse truth as a relation between a praxis, ways of knowing, and the world. In the second section, I invoke the third thing: the objective reality on which we triangulate as knowing subjects for the purpose of complex scientific endeavours like medical science and clinical care. Such praxes develop robust methods of "keeping in touch" with disease and illness (like biomarkers). An analysis drawing on philosophical semantics motivates the needed (anti-scientistic) account of meaning and truth (and therefore knowledge) and underpins the following argument: (i) the formulation and dissemination of knowledge rests on language; (ii) language is selective in what it represents in any given situation; (iii) the praxes of a given (sub)culture are based on this selectivity; but (iv) human health and illness involve whole human beings in a human life-world; therefore, (v) medical knowledge should reflectively transcend, where required, biomedical science towards a more inclusive view. Parts three and four argue that a post-structuralist (Lacanian) account of the human subject can avoid both scientism and idealism or unconstrained relativism.

  5. Knowledge does not protect against illusory truth.

    Science.gov (United States)

    Fazio, Lisa K; Brashier, Nadia M; Payne, B Keith; Marsh, Elizabeth J

    2015-10-01

    In daily life, we frequently encounter false claims in the form of consumer advertisements, political propaganda, and rumors. Repetition may be one way that insidious misconceptions, such as the belief that vitamin C prevents the common cold, enter our knowledge base. Research on the illusory truth effect demonstrates that repeated statements are easier to process, and subsequently perceived to be more truthful, than new statements. The prevailing assumption in the literature has been that knowledge constrains this effect (i.e., repeating the statement "The Atlantic Ocean is the largest ocean on Earth" will not make you believe it). We tested this assumption using both normed estimates of knowledge and individuals' demonstrated knowledge on a postexperimental knowledge check (Experiment 1). Contrary to prior suppositions, illusory truth effects occurred even when participants knew better. Multinomial modeling demonstrated that participants sometimes rely on fluency even if knowledge is also available to them (Experiment 2). Thus, participants demonstrated knowledge neglect, or the failure to rely on stored knowledge, in the face of fluent processing experiences. (c) 2015 APA, all rights reserved.

  6. Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.

    Science.gov (United States)

    Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej

    2015-09-01

    CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors or of missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent the overfitting problem, even when a large number of entries are missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
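For orientation, a CP model represents a tensor as a sum of R rank-1 outer products; the sketch below evaluates entries of a small rank-1 model (the factor values are arbitrary). Predicting unobserved entries from such a fitted model is the completion task the abstract addresses; the Bayesian inference itself is not reproduced here.

```python
import itertools

# CP model: T[i, j, k] = sum_r A[r][i] * B[r][j] * C[r][k],
# where A, B, C hold the R factor vectors for each mode.
def cp_entry(factors, idx):
    A, B, C = factors
    i, j, k = idx
    return sum(A[r][i] * B[r][j] * C[r][k] for r in range(len(A)))

# A rank-1 "ground truth": a = [1, 2], b = [1, 3], c = [2, 1].
factors = ([[1, 2]], [[1, 3]], [[2, 1]])
tensor = {idx: cp_entry(factors, idx)
          for idx in itertools.product(range(2), repeat=3)}
# e.g. tensor[(1, 1, 0)] = 2 * 3 * 2 = 12
```

Automatic rank determination, as in the paper, amounts to letting the inference prune factor vectors (columns) whose posterior magnitude shrinks toward zero.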

  7. A framework for automatic information quality ranking of diabetes websites.

    Science.gov (United States)

    Belen Sağlam, Rahime; Taskaya Temizel, Tugba

    2015-01-01

    Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing the websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on the website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance and evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on SentiWordNet, making it possible to work with biased and untrusted websites while, at the same time, ensuring content relevance. Measurement: The evaluation measurements used were Pearson correlation, true positives, false positives and accuracy. We tested the framework with a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results that are comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average, with p < 0.001, which is greater than that of the other automated methods proposed in the literature (average r score of 0.33).
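The Pearson correlation used above to compare automated rankings with the ground truth can be computed from first principles; the scores below are toy values, not the paper's data.

```python
from math import sqrt

# Pearson correlation between automated quality scores and
# ground-truth scores for the same items.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Four websites whose automated scores closely track the ground truth.
r = pearson([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
```

Values near +1 indicate that the automated ranking orders the websites almost exactly as the ground truth does.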

  8. AND THE TRUTH SHALL SET YOU FREE. TRUTH COMMISSIONS AND CIVIL-MILITARY RELATIONS

    Directory of Open Access Journals (Sweden)

    Michael DELOACH

    2015-04-01

    Full Text Available For societies suffering in the wake of a repressive regime, truth commissions may be a necessary compromise regarding the form of transitional justice pursued, but they can still play a far-reaching role in the democratization of civil-military relations. Because the perpetrators of past abuses are likely to continue to wield some level of power at the time of transition, prosecution of these members may be politically infeasible. Lacking the mandate to prosecute guilty parties or implement recommendations, truth commissions can still lay the foundation for a new era of civil-military relations. By distinguishing contemporary institutions from their past acts, revealing the patterns that allowed abuses to be carried out, and helping garner the political will for reforms, truth commissions can provide the impetus for the security sector reforms necessary to ensure a democratic future.

  9. Automatic coronary calcium scoring using noncontrast and contrast CT images

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Guanyu, E-mail: yang.list@seu.edu.cn; Chen, Yang; Shu, Huazhong [Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, No. 2, Si Pai Lou, Nanjing 210096 (China); Centre de Recherche en Information Biomédicale Sino-Français (LIA CRIBs), Nanjing 210096 (China); Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing 210096 (China); Ning, Xiufang; Sun, Qiaoyu [Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, No. 2, Si Pai Lou, Nanjing 210096 (China); Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing 210096 (China); Coatrieux, Jean-Louis [INSERM-U1099, Rennes F-35000 (France); Labotatoire Traitement du Signal et de l’Image (LTSI), Université de Rennes 1, Campus de Beaulieu, Bat. 22, Rennes 35042 Cedex (France); Centre de Recherche en Information Biomédicale Sino-Français (LIA CRIBs), Nanjing 210096 (China)

    2016-05-15

    Purpose: Calcium scoring is widely used to assess the risk of coronary heart disease (CHD). Accurate coronary artery calcification detection in noncontrast CT images is a prerequisite step for coronary calcium scoring. Currently, calcified lesions in the coronary arteries are manually identified by radiologists in clinical practice. Thus, in this paper, a fully automatic calcium scoring method was developed to alleviate the workload of radiologists and cardiologists. Methods: The challenge of automatic coronary calcification detection is to discriminate calcification in the coronary arteries from calcification in other tissues. Since the anatomy of the coronary arteries is difficult to observe in noncontrast CT images, the contrast CT image of the same patient is used to extract the regions of the aorta, heart, and coronary arteries. Then, a patient-specific region-of-interest (ROI) is generated in the noncontrast CT image according to the segmentation results in the contrast CT image. This patient-specific ROI focuses on the regions in the neighborhood of the coronary arteries for calcification detection, which can eliminate calcifications in the surrounding tissues. A support vector machine classifier is finally applied to refine the results by removing possible image noise. Furthermore, the calcified lesions in the noncontrast images belonging to the different main coronary arteries are identified automatically using the labeling results of the extracted coronary arteries. Results: Forty datasets from four different CT machine vendors, provided by the MICCAI 2014 Coronary Calcium Scoring (orCaScore) Challenge, were used to evaluate the algorithm. The sensitivity and positive predictive value for the volume of detected calcifications are 0.989 and 0.948. Only one of the 40 patients was assigned to the wrong risk category defined according to Agatston scores (0, 1–100, 101–300, >300) by comparing with the ground
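The risk stratification and volume-based detection metrics quoted in this record can be illustrated with a short sketch. The Agatston category boundaries (0, 1–100, 101–300, >300) come from the abstract; the function names and volume-based inputs are illustrative assumptions, not the authors' code:

```python
def agatston_risk_category(score):
    """Map an Agatston calcium score to the four risk categories used in
    the orCaScore challenge evaluation: 0, 1-100, 101-300, >300."""
    if score == 0:
        return "0"
    if score <= 100:
        return "1-100"
    if score <= 300:
        return "101-300"
    return ">300"

def volume_sensitivity_ppv(tp_ml, fp_ml, fn_ml):
    """Sensitivity and positive predictive value computed on calcification
    volume: true-positive, false-positive and false-negative volume in ml."""
    sensitivity = tp_ml / (tp_ml + fn_ml)
    ppv = tp_ml / (tp_ml + fp_ml)
    return sensitivity, ppv
```

A patient changes risk category only when the detection errors move the total score across one of the fixed boundaries, which is why a single misclassified patient out of 40 is a meaningful summary statistic.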

  10. The use of the truth and deception in dementia care amongst general hospital staff.

    Science.gov (United States)

    Turner, Alex; Eccles, Fiona; Keady, John; Simpson, Jane; Elvish, Ruth

    2017-08-01

    Deceptive practice has been shown to be endemic in long-term care settings. However, little is known about the use of deception in dementia care within general hospitals and staff attitudes towards this practice. This study aimed to develop understanding of the experiences of general hospital staff and explore their decision-making processes when choosing whether to tell the truth or deceive a patient with dementia. This qualitative study drew upon a constructivist grounded theory approach to analyse data gathered from semi-structured interviews with a range of hospital staff. A model, grounded in participant experiences, was developed to describe their decision-making processes. Participants identified particular triggers that set in motion the need for a response. Various mediating factors influenced how staff chose to respond to these triggers. Overall, hospital staff were reluctant to either tell the truth or to lie to patients. Instead, 'distracting' or 'passing the buck' to another member of staff were preferred strategies. The issue of how truth and deception are defined was identified. The study adds to the growing research regarding the use of lies in dementia care by considering the decision-making processes for staff in general hospitals. Various factors influence how staff choose to respond to patients with dementia and whether deception is used. Similarities and differences with long-term dementia care settings are discussed. Clinical and research implications include: opening up the topic for further debate, implementing staff training about communication and evaluating the impact of these processes.

  11. Automatic bone detection and soft tissue aware ultrasound-CT registration for computer-aided orthopedic surgery.

    Science.gov (United States)

    Wein, Wolfgang; Karamalis, Athanasios; Baumgartner, Adrian; Navab, Nassir

    2015-06-01

    The transfer of preoperative CT data into the tracking system coordinates within an operating room is of high interest for computer-aided orthopedic surgery. In this work, we introduce a solution for intra-operative ultrasound-CT registration of bones. We have developed methods for fully automatic real-time bone detection in ultrasound images and global automatic registration to CT. The bone detection algorithm uses a novel bone-specific feature descriptor and was thoroughly evaluated on both in-vivo and ex-vivo data. A global optimization strategy aligns the bone surface, followed by a soft tissue aware intensity-based registration to provide higher local registration accuracy. We evaluated the system on femur, tibia and fibula anatomy in a cadaver study with human legs, where magnetically tracked bone markers were implanted to yield ground truth information. An overall median system error of 3.7 mm was achieved on 11 datasets. Global and fully automatic registration of bones acquired with ultrasound to CT is feasible, with bone detection and tracking operating in real time for immediate feedback to the surgeon.

  12. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    Science.gov (United States)

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance without the need of human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.
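The Dice score and relative volume difference reported above are standard segmentation overlap metrics; a minimal numpy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * inter / denom if denom else 1.0

def relative_volume_difference(gt, seg):
    """Signed relative volume difference between ground truth and an
    automatic segmentation (positive means over-segmentation)."""
    gt = np.asarray(gt).astype(bool)
    seg = np.asarray(seg).astype(bool)
    return (seg.sum() - gt.sum()) / gt.sum()
```

A Dice score above 82% means the intersection covers more than 82% of the average mask size, while a near-zero relative volume difference indicates the total thrombus volume is preserved even if boundaries differ locally.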

  13. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-guided Partially-joint Regression Forest Model and Multi-scale Statistical Features

    Science.gov (United States)

    Zhang, Jun; Gao, Yaozong; Wang, Li; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Objective The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. Methods We propose a Segmentation-guided Partially-joint Regression Forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidences from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions to improve the digitization reliability. In addition, we propose a fast vector quantization (VQ) method to extract high-level multi-scale statistical features to describe a voxel's appearance, which has low dimensionality, high efficiency, and is also invariant to the local inhomogeneity caused by artifacts. Results Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2 mm. Conclusion Our model has addressed challenges of both inter-patient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitization. Significance Our automatic landmark digitization method can be used clinically to reduce the labor cost and also improve digitization consistency. PMID:26625402
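The per-landmark mean digitization error reported above is the mean Euclidean distance to ground truth across patients; a small numpy sketch (the array layout and function name are assumptions for illustration):

```python
import numpy as np

def mean_digitization_error(pred, gt):
    """Per-landmark mean Euclidean distance (e.g. in mm) between predicted
    and ground-truth landmark positions, averaged over patients.
    pred, gt: arrays of shape (n_patients, n_landmarks, 3)."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Euclidean distance per (patient, landmark), then mean over patients.
    return np.linalg.norm(pred - gt, axis=-1).mean(axis=0)
```

The "all less than 2 mm" claim then corresponds to every entry of the returned length-15 vector being below 2.0.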

  14. Fast, accurate, and robust automatic marker detection for motion correction based on oblique kV or MV projection image pairs

    International Nuclear Information System (INIS)

    Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den

    2010-01-01

    Purpose: A robust and accurate method that allows the automatic detection of fiducial markers in MV and kV projection image pairs is proposed. The method allows automatic correction of inter- or intrafraction motion. Methods: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by the minimization of a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning is assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground-truth and on clinical data sets of 16 patients using manual marker annotations as ground-truth. Results: The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4±1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases. Detecting four markers in a pair of MV images
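The globally optimal marker configuration found by dynamic programming can be sketched with a Viterbi-style recursion. This simplified version assumes one candidate must be chosen per marker, with unary (image-intensity) costs and pairwise (geometric) costs along a chain of consecutive markers; the paper's full configuration cost is richer, so this is a toy stand-in:

```python
import numpy as np

def best_configuration(unary, pairwise):
    """Viterbi-style dynamic program: pick one 3D candidate per marker so
    that the sum of unary costs plus pairwise costs between consecutive
    markers is globally minimal.
    unary: list of arrays, unary[m][c] = cost of candidate c for marker m.
    pairwise: list of arrays, pairwise[m][p, c] = geometric cost between
    candidate p of marker m and candidate c of marker m+1."""
    n = len(unary)
    cost = np.asarray(unary[0], dtype=float).copy()
    back = []
    for m in range(1, n):
        # total[p, c] = best cost ending in (marker m-1 = p, marker m = c).
        total = cost[:, None] + pairwise[m - 1] + np.asarray(unary[m])[None, :]
        back.append(total.argmin(axis=0))
        cost = total.min(axis=0)
    # Backtrack the globally optimal candidate indices.
    idx = [int(cost.argmin())]
    for bp in reversed(back):
        idx.append(int(bp[idx[-1]]))
    return list(reversed(idx)), float(cost.min())
```

Because each stage keeps the best cost for every candidate of the current marker, the recursion finds the global optimum without enumerating all candidate combinations, which matches the "globally optimal configuration is always found" property claimed for dynamic programming here.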

  15. The vital role of transcendental truth in science.

    Science.gov (United States)

    Charlton, Bruce G

    2009-04-01

    I have come to believe that science depends for its long-term success on an explicit and pervasive pursuit of the ideal of transcendental truth. 'Transcendental' implies that a value is ideal and ultimate - it is aimed-at but can only imperfectly be known, achieved or measured. So, transcendental truth is located outside of science; beyond scientific methods, processes and peer consensus. Although the ultimate scientific authority of a transcendental value of truth was a view held almost universally by the greatest scientists throughout recorded history, modern science has all-but banished references to truth from professional scientific discourse - these being regarded as wishful, mystical and embarrassing at best, and hypocritical or manipulative at worst. With truth excluded, the highest remaining evaluation mechanism is 'professional consensus' or peer review - beyond which there is no higher court of appeal. Yet in Human accomplishment, Murray argues that cultures which foster great achievement need transcendental values (truth, beauty and virtue) to be a live presence in the culture; such that great artists and thinkers compete to come closer to the ideal. So a scientific system including truth as a live presence apparently performs better than a system which excludes truth. Transcendental truth therefore seems to be real in the pragmatic sense that it makes a difference. To restore the primacy of truth to science a necessary step would be to ensure that only truth-seekers were recruited to the key scientific positions, and to exclude from leadership those who are untruthful or exhibit insufficient devotion to the pursuit of truth. In sum, to remain anchored in its proper role, science should through 'truth talk' frequently be referencing normal professional practice to transcendental truth values. Ultimately, science should be conducted at every level, from top to bottom, on the basis of what Bronowski termed the 'habit of truth'. 

  16. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang; Liou, Shu-Cheng; Nath, Ravinder; Liu Wu [Department of Computer Science and Information Engineering, National Chung Cheng University, Taiwan, 62102 (China); Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut 06510-3220 (United States)

    2013-01-15

    Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching must match an object shape that changes significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computational load because they all require an exhaustive search in the region of interest. The authors solve this problem by synergetic use of modern but well-tested computer vision and artificial intelligence techniques; specifically the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered as the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The
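The sequential mean-shift stage can be illustrated with a toy intensity-centroid version: the window repeatedly moves to the weighted centroid of the pixels it covers, so only a small neighborhood is examined per frame instead of an exhaustive search. The paper's tracker operates in a feature space after discriminant-analysis initialization, so this is only a simplified analogue:

```python
import numpy as np

def mean_shift_peak(img, start, radius=5, iters=20):
    """Toy mean-shift localizer: iteratively move a circular window to the
    intensity-weighted centroid of the pixels it covers, converging on a
    nearby local intensity peak (e.g. a marker-enhanced blob)."""
    y, x = float(start[0]), float(start[1])
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    for _ in range(iters):
        mask = (ys - y) ** 2 + (xs - x) ** 2 <= radius ** 2
        w = img * mask
        total = w.sum()
        if total == 0:
            break
        ny = (w * ys).sum() / total
        nx = (w * xs).sum() / total
        if abs(ny - y) < 1e-3 and abs(nx - x) < 1e-3:
            y, x = ny, nx
            break
        y, x = ny, nx
    return y, x
```

Seeding each frame's search at the previous frame's result exploits the temporal correlation between consecutive frames, which is what makes the sequential tracking "ultrafast" relative to whole-ROI template search.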

  17. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    International Nuclear Information System (INIS)

    Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang; Liou, Shu-Cheng; Nath, Ravinder; Liu Wu

    2013-01-01

    Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching must match an object shape that changes significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computational load because they all require an exhaustive search in the region of interest. The authors solve this problem by synergetic use of modern but well-tested computer vision and artificial intelligence techniques; specifically the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered as the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The

  18. Authenticity, Post-truth and Populism

    Directory of Open Access Journals (Sweden)

    Vintilă Mihăilescu

    2017-11-01

    The article discusses the fake news phenomenon as a social act, analyzed together with what has caused it and what accompanies it: the culture of authenticity, digital communication and its specificity (emphasis on image, not on concepts), post-truth and populism (with its emotional dimension). The premise is that fake news is immanent to the social space, but in the context of globalization and under the development of information technology and social media, it has a greater social impact and carries higher risks for society.

  19. Freedom of Expression, Diversity, and Truth

    DEFF Research Database (Denmark)

    Kappel, Klemens; Hallsson, Bjørn Gunnar; Møller, Emil Frederik Lundbjerg

    2016-01-01

    The aim of this chapter is to examine how diversity benefits deliberation, information exchange and other socio-epistemic practices associated with free speech. We separate five distinct dimensions of diversity, and discuss a variety of distinct mechanisms by which various forms of diversity may be thought to have epistemically valuable outcomes. We relate these results to the moral justification of free speech. Finally, we characterise a collective action problem concerning the compliance with truth-conducive norms of deliberation, and suggest what may solve this problem.

  20. MATHEMATICAL-Universe-Hypothesis(MUH) BECOME SCENARIO(MUS)!!! (NOT YET A THEORY) VIA 10-DIGITS[ 0 --> 9] SEPHIROT CREATION AUTOMATICALLY from DIGITS AVERAGED-PROBABILITY Newcomb-Benford LOG-Law; UTTER-SIMPLICITY!!!: It's a Jack-in-the-Box Universe: Accidental?/Purposeful?; EMET/TRUTH!!!

    Science.gov (United States)

    Siegel, Edward Carl-Ludwig

    2015-04-01

    Siegel(2012) 10-DIGITS[0 --> 9] AVERAGE PROBABILITY LOG-Law SCALE-INVARIANCE UTTER-SIMPLICITY: Kabbala SEPHIROT SCENARIO AUTOMATICALLY CREATES a UNIVERSE: (1) a big-bang[bosons(BEQS) created from Newcomb[Am.J.Math.4(1),39(1881;THE discovery of the QUANTUM!!!)-Poincare[Calcul des Probabilites,313(12)]-Weyl[Goett.Nach.(14);Math.Ann.77,313(16)] DIGITS AVERAGE STATISTICS LOG-Law[ = log(1 +1/d) = log([d +1]/d)] algebraic-inversion, (2)[initial (at first space-time point created) c = ∞ elongating to timelike-pencil spreading into finite-c light-cone] hidden-dark-energy (HDE)[forming at every-spacetime-point], (3) inflation[logarithm algebraic-inversion-to exponential], (4) hidden[in Siegel(87) ``COMPLEX quantum-statistics in (Nottale-Linde)FRACTAL-dimensions'' expansion around unit-circle/roots-of-unity]-dark-matter(HDM), (4)null massless bosons(E) --> Mellin-(light-speed squared)-transform/Englert-Higgs ``mechanism'' -->(timelike) massive fermions(m), (5) cosmic-microwave-background (CMB)[power-spectrum] Zipf-law HYPERBOLICITY, (6) supersymmetry(SUSY) [projective-geometry conic-sections/conics merging in R/ C projective-plane point at ∞]. UTTER-SIMPLICITY!!!

  1. SEPHIROT: Scenario for Universe-Creation AUTOMATICALLY from Digits On-Average Euler-Bernoulli-Kummer-Riemann-Newcomb-Poincare-Weyl-Benford-Kac-Raimi-Hill-Antonoff-Siegel ``Digit-Physics'' Logarithm-Law: ``It's a Jack-in-the-Box Universe'': EMET/TRUTH!!!

    Science.gov (United States)

    Siegel, Edward Carl-Ludwig; Young, Frederic; Wignall, Janis

    2013-04-01

    SEPHIROT: Siegel[http://fqxi.org/community/forum/topic/1553]: Ten-[0->9]-Digits; Average Log-Law SCALE-Invariance; Utter-Simplicity: ``Complexity'' (vs. ``Complicatedness''); Zipf-law/Hyperbolicity/ Inevitability SCENARIO AUTOMATICALLY CREATES & EVOLVES a UNIVERSE: inflation, a big-bang, bosons(E)->Mellin-(c2)-tranform->fermions(m), hidden-dark-energy(HDE), hidden-dark-matter (HDM), cosmic-microwave-background(CMB), supersymmetry(SUSY), PURPOSELY NO: theories,models,mechanisms,processes, parameters,assumptions,WHATSOEVER: It's a ``Jack-in-the-Box'' Universe!!!: ONLY VIA: Newcomb [Am.J.Math.4(1),39(1881)]QUANTUM-discovery!!!-Benford-Siegel-Antonoff[AMS.Joint-Mtg.(02)-Abs.#973-60-124!!!] inversion to ONLY BEQS with d=0 BEC: ``Digit-Physics''!; Log fixed-point invariance(s): [base=units=SCALE] of digits classic (not classical!) average [CAUSING] log statistical-correlations =log(1+1/d), with physics-crucial d=0 BEC singularity/pole, permits SEPHIROT!!!: ``digits are quanta are bosons because bosons are and always were digits!!!'': Digits = Bosons with d=0 BEC(!!!) & expansion to Zipf-law Hyperbolicity INEVITABILITY CMB!

  2. Automatic detection of axillary lymphadenopathy on CT scans of untreated chronic lymphocytic leukemia patients

    Science.gov (United States)

    Liu, Jiamin; Hua, Jeremy; Chellappa, Vivek; Petrick, Nicholas; Sahiner, Berkman; Farooqui, Mohammed; Marti, Gerald; Wiestner, Adrian; Summers, Ronald M.

    2012-03-01

    Patients with chronic lymphocytic leukemia (CLL) have an increased frequency of axillary lymphadenopathy. Pretreatment CT scans can be used to upstage patients at the time of presentation and post-treatment CT scans can reduce the number of complete responses. In the current clinical workflow, the detection and diagnosis of lymph nodes is usually performed manually by examining all slices of CT images, which can be time consuming and highly dependent on the observer's experience. A system for automatic lymph node detection and measurement is desired. We propose a computer aided detection (CAD) system for axillary lymph nodes on CT scans in CLL patients. The lung is first automatically segmented and the patient's body in lung region is extracted to set the search region for lymph nodes. Multi-scale Hessian based blob detection is then applied to detect potential lymph nodes within the search region. Next, the detected potential candidates are segmented by fast level set method. Finally, features are calculated from the segmented candidates and support vector machine (SVM) classification is utilized for false positive reduction. Two blobness features, Frangi's and Li's, are tested and their free-response receiver operating characteristic (FROC) curves are generated to assess system performance. We applied our detection system to 12 patients with 168 axillary lymph nodes measuring greater than 10 mm. All lymph nodes are manually labeled as ground truth. The system achieved sensitivities of 81% and 85% at 2 false positives per patient for Frangi's and Li's blobness, respectively.
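Multi-scale Hessian blob detection, as used here to propose lymph node candidates, can be illustrated in 2D at a single scale. This simplified sketch responds where both Hessian eigenvalues are negative (a bright, roughly isotropic blob); it is a toy analogue of the Frangi and Li blobness measures named in the abstract, not their exact formulas:

```python
import numpy as np

def hessian_blobness(img, sigma=2.0):
    """Single-scale 2D Hessian blob response: Gaussian-smooth the image,
    form the Hessian at every pixel, and respond with sqrt(|l1*l2|) where
    both eigenvalues are negative (bright blob); zero elsewhere."""
    # Separable Gaussian smoothing (numpy only).
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    sm = np.apply_along_axis(lambda m: np.convolve(m, g, mode="same"), 0, img)
    sm = np.apply_along_axis(lambda m: np.convolve(m, g, mode="same"), 1, sm)
    # Second derivatives via repeated finite differences.
    gy, gx = np.gradient(sm)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Eigenvalues of the per-pixel Hessian [[gyy, gyx], [gxy, gxx]].
    tr = gyy + gxx
    det = gyy * gxx - gyx * gxy
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    l1 = tr / 2.0 - disc
    l2 = tr / 2.0 + disc
    return np.where((l1 < 0) & (l2 < 0), np.sqrt(np.abs(l1 * l2)), 0.0)
```

A real detector evaluates this response over several sigmas and keeps the maximum, so that nodes of different sizes all produce strong candidates before the level-set segmentation and SVM false-positive reduction.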

  3. Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer

    International Nuclear Information System (INIS)

    Thariat, Juliette; Ramus, Liliane; Maingon, Philippe; Odin, Guillaume; Gregoire, Vincent; Darcourt, Vincent; Guevara, Nicolas; Orlanducci, Marie-Helene; Marcie, Serge; Poissonnet, Gilles; Marcy, Pierre-Yves

    2012-01-01

    Purpose: To propose an automatic atlas-based segmentation framework of the dental structures, called Dentalmaps, and to assess its accuracy and relevance to guide dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas–based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in 30% of cases only with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contour. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.
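The consensus step of a multi-atlas segmentation can be illustrated with per-voxel majority voting over the propagated atlas label maps. The record does not specify Dentalmaps' exact fusion rule, so this is a generic stand-in:

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse deformed atlas segmentations by per-voxel majority vote.
    label_maps: array-like of shape (n_atlases, ...) holding integer
    labels; returns the most frequent label at each voxel."""
    stack = np.asarray(label_maps)
    n_labels = int(stack.max()) + 1
    # Count votes for each label across the atlas axis, then pick the winner.
    counts = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return counts.argmax(axis=0)
```

Fusing many independently registered atlases is what makes the approach less sensitive to artifacts: a registration error in one atlas is usually outvoted by the others.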

  4. Fully automatic detection of deep white matter T1 hypointense lesions in multiple sclerosis

    Science.gov (United States)

    Spies, Lothar; Tewes, Anja; Suppa, Per; Opfer, Roland; Buchert, Ralph; Winkler, Gerhard; Raji, Alaleh

    2013-12-01

    A novel method is presented for fully automatic detection of candidate white matter (WM) T1 hypointense lesions in three-dimensional high-resolution T1-weighted magnetic resonance (MR) images. By definition, T1 hypointense lesions have similar intensity as gray matter (GM) and thus appear darker than surrounding normal WM in T1-weighted images. The novel method uses a standard classification algorithm to partition T1-weighted images into GM, WM and cerebrospinal fluid (CSF). As a consequence, T1 hypointense lesions are assigned an increased GM probability by the standard classification algorithm. The GM component image of a patient is then tested voxel-by-voxel against GM component images of a normative database of healthy individuals. Clusters (≥0.1 ml) of significantly increased GM density within a predefined mask of deep WM are defined as lesions. The performance of the algorithm was assessed on voxel level by a simulation study. A maximum dice similarity coefficient of 60% was found for a typical T1 lesion pattern with contrasts ranging from WM to cortical GM, indicating substantial agreement between ground truth and automatic detection. Retrospective application to 10 patients with multiple sclerosis demonstrated that 93 out of 96 T1 hypointense lesions were detected. On average 3.6 false positive T1 hypointense lesions per patient were found. The novel method is promising to support the detection of hypointense lesions in T1-weighted images which warrants further evaluation in larger patient samples.
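The detection rule described above, a voxel-wise test of GM density against the normative database followed by a cluster-size threshold inside the deep-WM mask, can be sketched as follows. Only the 0.1 ml cutoff comes from the abstract; the z-score threshold, voxel volume and function name are illustrative assumptions:

```python
import numpy as np
from collections import deque

def lesion_clusters(gm_z, mask, z_thresh=3.0, min_ml=0.1, voxel_ml=0.001):
    """Candidate T1-hypointense lesions: voxels inside a deep-WM mask whose
    GM density is significantly increased versus the normative database
    (z-score above z_thresh), grouped into 6-connected 3D clusters and
    kept only if the cluster volume reaches min_ml millilitres.
    Returns a list of clusters, each a list of voxel index tuples."""
    cand = (gm_z > z_thresh) & mask
    seen = np.zeros_like(cand, dtype=bool)
    clusters = []
    for start in zip(*np.nonzero(cand)):
        if seen[start]:
            continue
        q, comp = deque([start]), []
        seen[start] = True
        while q:  # breadth-first flood fill over the 6-neighbourhood
            v = q.popleft()
            comp.append(v)
            for d in range(3):
                for s in (-1, 1):
                    n = list(v)
                    n[d] += s
                    n = tuple(n)
                    if all(0 <= n[i] < cand.shape[i] for i in range(3)) \
                            and cand[n] and not seen[n]:
                        seen[n] = True
                        q.append(n)
        if len(comp) * voxel_ml >= min_ml:
            clusters.append(comp)
    return clusters
```

The cluster-volume threshold is what suppresses isolated noisy voxels while keeping spatially coherent lesions, consistent with the "clusters (≥0.1 ml)" criterion in the abstract.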

  5. Automatic detection and counting of cattle in UAV imagery based on machine vision technology (Conference Presentation)

    Science.gov (United States)

    Rahnemoonfar, Maryam; Foster, Jamie; Starek, Michael J.

    2017-05-01

    Beef production is the main agricultural industry in Texas, and livestock are managed on pasture and rangeland that are usually huge in size and not easily accessible by vehicles. The current research method for livestock location identification and counting is visual observation, which is very time consuming and costly. For animals on large tracts of land, manned aircraft may be necessary to count animals; these are noisy, disturb the animals, and may introduce a source of error in counts. Such manual approaches are expensive, slow and labor intensive. In this paper we study the combination of a small unmanned aerial vehicle (sUAV) and machine vision technology as a valuable solution to manual animal surveying. A fixed-wing UAV fitted with GPS and a digital RGB camera for photogrammetry was flown at the Welder Wildlife Foundation in Sinton, TX. Over 600 acres were flown in four UAS flights, and the individual photographs were used to develop orthomosaic imagery. To detect animals in the UAV imagery, a fully automatic technique was developed based on the spatial and spectral characteristics of objects. This automatic technique can even detect small animals that are partially occluded by bushes. Experimental results in comparison to ground truth show the effectiveness of our algorithm.

  6. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    Science.gov (United States)

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-03-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.
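The agreement between automatic and rater volumetry reported above (R = 0.83-0.96) is a Pearson correlation over the per-timepoint volume estimates, and inter-rater disagreement can be summarized alongside it; a minimal sketch with illustrative names:

```python
import numpy as np

def volume_agreement(auto_ml, manual_ml):
    """Pearson correlation R and mean absolute volume difference (ml)
    between an automatic method's volume estimates and a rater's,
    computed across acquisitions/time points."""
    auto_ml = np.asarray(auto_ml, dtype=float)
    manual_ml = np.asarray(manual_ml, dtype=float)
    r = float(np.corrcoef(auto_ml, manual_ml)[0, 1])
    mad = float(np.mean(np.abs(auto_ml - manual_ml)))
    return r, mad
```

Comparing the automatic-vs-rater disagreement with the rater-vs-rater disagreement, as the study does, checks whether the algorithm falls within the human inter-observer envelope.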

  7. Automatic orientation and 3D modelling from markerless rock art imagery

    Science.gov (United States)

    Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.

    2013-02-01

    This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and the generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple images. The Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) are assessed based on speed, number of features, matched features, and precision in image and object space, depending on the adopted hierarchical matching scheme. The influence of additionally applying Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms, aiming at minimal interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated, combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) are assessed both in image and object space for the 3D modelling of a complex rock art shelter.
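Area Based Matching with normalised cross-correlation, mentioned above, can be sketched as a template search scored by zero-mean NCC. Real pipelines restrict the search to a small window around the FBM prediction and refine with least squares matching; this exhaustive version is only a minimal illustration:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalised cross-correlation of two equal-size patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_match(image, template):
    """Exhaustive area-based matching: slide the template over the image
    and return the top-left offset with the highest NCC score."""
    th, tw = template.shape
    best, pos = -2.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best:
                best, pos = s, (y, x)
    return pos, best
```

Because NCC is invariant to affine intensity changes, applying it after feature-based matching tightens correspondences without being fooled by illumination differences between overlapping images.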

  8. Automatic Measurement of Fetal Brain Development from Magnetic Resonance Imaging: New Reference Data.

    Science.gov (United States)

    Link, Daphna; Braginsky, Michael B; Joskowicz, Leo; Ben Sira, Liat; Harel, Shaul; Many, Ariel; Tarrasch, Ricardo; Malinger, Gustavo; Artzi, Moran; Kapoor, Cassandra; Miller, Elka; Ben Bashat, Dafna

    2018-01-01

    Accurate fetal brain volume estimation is of paramount importance in evaluating fetal development. The aim of this study was to develop an automatic method for fetal brain segmentation from magnetic resonance imaging (MRI) data, and to create for the first time a normal volumetric growth chart based on a large cohort. A semi-automatic segmentation method based on the Seeded Region Growing algorithm was developed and applied to MRI data of 199 typically developed fetuses between 18 and 37 weeks' gestation. The accuracy of the algorithm was tested against a sub-cohort of ground truth manual segmentations. A quadratic regression analysis was used to create normal growth charts. The sensitivity of the method to identify developmental disorders was demonstrated on 9 fetuses with intrauterine growth restriction (IUGR). The developed method showed high correlation with manual segmentation (r2 = 0.9183). The method is user independent, applicable to retrospective data, and is suggested for use in routine clinical practice. © 2017 S. Karger AG, Basel.
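The Seeded Region Growing algorithm named above grows a segment outward from a seed, absorbing neighbours whose intensity is close to the region's running mean. A 2-D toy sketch of the idea (the image, seed, and tolerance are illustrative, not the study's implementation, which operates on 3-D MRI volumes):

```python
from collections import deque
import numpy as np

def seeded_region_grow(image, seed, tol=10):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    intensity is within `tol` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(image[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

# Toy image: a bright 3x3 "brain" on a dark background.
img = np.zeros((7, 7), dtype=float)
img[2:5, 2:5] = 100.0
region = seeded_region_grow(img, (3, 3), tol=10)
print(int(region.sum()))   # → 9 (the bright block, background excluded)
```

The region's voxel count times the voxel volume then yields the brain volume estimate used for the growth charts.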

  9. A Robust Bayesian Truth Serum for Non-binary Signals

    OpenAIRE

    Radanovic, Goran; Faltings, Boi

    2013-01-01

    Several mechanisms have been proposed for incentivizing truthful reports of private signals held by rational agents, among them the peer prediction method and the Bayesian truth serum. The robust Bayesian truth serum (RBTS) for small populations and binary signals is particularly interesting since it does not require a common prior to be known to the mechanism. We further analyze the setting in which the common prior is not known to the mechanism and give several results regarding the restrictions ...

  10. Truth Commissions, Education, and Positive Peace: An Analysis of Truth Commission Final Reports (1980-2015)

    Science.gov (United States)

    Paulson, Julia; Bellino, Michelle J.

    2017-01-01

    Transitional justice and education both occupy increasingly prominent space on the international peacebuilding agenda, though less is known about the ways they might reinforce one another to contribute towards peace. This paper presents a cross-national analysis of truth commission (TC) reports spanning 1980-2015, exploring the range of…

  11. To Tell the Truth: The Challenge of Military Leadership

    National Research Council Canada - National Science Library

    Henderson, Jr, Ronald H

    1998-01-01

    The story of Regulus, while certainly apocryphal, nevertheless illustrates a fundamental tension of military leadership -- the moral imperative for military leaders to tell the truth, even when that...

  12. The truth on journalism: relations between its practice and discourse

    Directory of Open Access Journals (Sweden)

    Daiane Bertasso Ribeiro

    2011-04-01

    Full Text Available This article proposes a theoretical approach to the relations that journalism establishes with the concept of truth. Foucault's notion of truth guides the debate. The reflection centers on how journalism builds discursive strategies that produce effects of truth in its reports. Journalistic discourse presents itself as truthful, although its discursive construction of the world is the result of rules, practices and values. The debate on "truth" allows us to comprehend the complexity and particularities of journalism as a discursive practice that is reflected in the social knowledge of reality.

  13. Truthfulness in transplantation: non-heart-beating organ donation

    Directory of Open Access Journals (Sweden)

    Potts Michael

    2007-08-01

    Full Text Available Abstract The current practice of organ transplantation has been criticized on several fronts. The philosophical and scientific foundations for brain death criteria have been crumbling. In addition, donation after cardiac death, or non-heart-beating organ donation (NHBD), has been attacked on the grounds that it mistreats the dying patient and uses that patient only as a means to an end for someone else's benefit. Verheijde, Rady, and McGregor attack the deception involved in NHBD, arguing that the donors are not dead and that potential donors and their families should be told that this is the case. Thus, they propose abandoning the dead donor rule and allowing NHBD with strict rules concerning adequate informed consent. Such honesty about NHBD should be welcomed. However, NHBD violates a fundamental end of medicine, nonmaleficence: "do no harm." Physicians should not be harming or killing patients, even if it is for the benefit of others. Thus, although Verheijde and his colleagues should be congratulated for calling for truthfulness about NHBD, they do not go far enough and call for the elimination of such an unethical procedure from the practice of medicine.

  14. Sojourner Truth as an Essential Part of Rhetorical Theory.

    Science.gov (United States)

    Romans, Bevin A.

    To affirm Sojourner Truth as a powerful rhetor who advanced the equality and empowerment of women, a study examined several of her speeches on women's suffrage. Although the value of using such role models as Sojourner Truth has been demonstrated in various grade levels, and in the study of history and English, the approach is too seldom employed…

  15. Literature and Truth : Imaginative Writing as a Medium for Ideas

    NARCIS (Netherlands)

    Lansdown, Richard

    2017-01-01

    In Literature and Truth Richard Lansdown continues a discussion concerning the truth-bearing status of imaginative literature that pre-dates Plato. The book opens with a general survey of contemporary approaches in philosophical aesthetics, and a discussion of the contribution to the question made

  16. On authenticity: the question of truth in construction and autobiography.

    Science.gov (United States)

    Collins, Sara

    2011-12-01

    Freud was occupied with the question of truth and its verification throughout his work. He looked to archaeology for an evidence model to support his ideas on reconstruction. He also referred to literature regarding truth in reconstruction, where he saw shifts between historical fact and invention, and detected such swings in his own case histories. In his late work Freud pondered over the impossibility of truth in reconstruction by juxtaposing truth with 'probability'. Developments on the role of fantasy and myth in reconstruction and contemporary debates over objectivity have increasingly highlighted the question of 'truth' in psychoanalysis. I will argue that 'authenticity' is a helpful concept in furthering the discussion over truth in reconstruction. Authenticity denotes that which is genuine, trustworthy and emotionally accurate in a reconstruction, as observed within the immediacy of the analyst/patient interaction. As authenticity signifies genuineness in a contemporary context its origins are verifiable through the analyst's own observations of the analytic process itself. Therefore, authenticity is about the likelihood and approximation of historical truth rather than its certainty. In that respect it links with Freud's musings over 'probability'. Developments on writing 'truths' in autobiography mirror those in reconstruction, and lend corroborative support from another source. Copyright © 2011 Institute of Psychoanalysis.

  17. Truth as determinant of religious faith | Emeng | Global Journal of ...

    African Journals Online (AJOL)

    This study investigates how varying religious truth has determined different religious faiths in the world. One God created all human kind and placed them in their different environments, but the allegiance, service, worship and honour to him varies due to the different truths at the foundations of the many faiths. This article ...

  18. Women's Heart Disease: Join the Heart Truth Community

    Science.gov (United States)

    Feature: Women's Heart Disease, Join The Heart Truth Community (Winter 2014). National Symbol: The centerpiece of The Heart Truth® is The Red Dress®, which was introduced ...

  19. How Does Telling the Truth Help Educational Action Research?

    Science.gov (United States)

    Blair, Erik

    2010-01-01

    A number of key constructs underpin educational action research. This paper focuses on the concept of "truth" and by doing so hopes to highlight some debate in this area. In reflecting upon what "truth" might mean to those involved in action research, I shall critically evaluate Thorndike's "Law of Effect" and Bruner's "Three Forms of…

  20. Keep Changing Your Beliefs, Aiming for the Truth

    NARCIS (Netherlands)

    Baltag, Alexandru; Smets, Sonja

    We investigate the process of truth-seeking by iterated belief revision with higher-level doxastic information. We elaborate further on the main results in Baltag and Smets (Proceedings of TARK, 2009a, Proceedings of WOLLIC'09 LNAI 5514, 2009b), applying them to the issue of convergence to truth. We

  1. Academic Training: Telling the truth with statistics

    CERN Multimedia

    Françoise Benz

    2005-01-01

    2004-2005 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 21, 22, 23, 24 & 25 February from 11.00 to 12.00 hrs - Main Auditorium, bldg. 500. Telling the truth with statistics, by G. D'Agostini / INFN, Roma, Italy. The issue of evaluating and expressing the uncertainty in measurements, as well as that of testing hypotheses, is reviewed, with particular emphasis on the frontier cases typical of particle physics experiments. Fundamental aspects of probability will be addressed and the applications, solely based on probability theory, will cover several topics of practical interest, including counting experiments, upper/lower bounds, systematic errors, fits and comparison of hypotheses. ENSEIGNEMENT ACADEMIQUE ACADEMIC TRAINING Françoise Benz 73127 academic.training@cern.ch

  2. Autonomy, nudging and post-truth politics.

    Science.gov (United States)

    Keeling, Geoff

    2017-11-16

    In his excellent essay, 'Nudges in a post-truth world', Neil Levy argues that 'nudges to reason', or nudges which aim to make us more receptive to evidence, are morally permissible. A strong argument against the moral permissibility of nudging is that nudges fail to respect the autonomy of the individuals affected by them. Levy argues that nudges to reason do respect individual autonomy, such that the standard autonomy objection fails against nudges to reason. In this paper, I argue that Levy fails to show that nudges to reason respect individual autonomy. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. HOW TRUTHFUL ARE WATER ACCOUNTING DATA?

    Directory of Open Access Journals (Sweden)

    Libor Ansorge

    2016-01-01

    Full Text Available Water accounting is an important tool for water managers. Many studies use official water accounting data or similar data for their assessments. In particular, large-scale studies and water footprint studies have limited opportunities for "in-situ" data collection. In many cases, the processors of studies do not know the origin of the data and their limitations. Water accounting data are very often used in decision-making processes, water resource management, and planning in the water sector. This article tries to answer the question "How truthful are water accounting data?" For this task, water accounting in the agriculture sector of the Czech Republic was selected. The data on water withdrawals for agricultural purposes were analysed and compared with estimates of water needs based on additional data on agricultural production.

  4. Truth as Humility Nourishing Compassion Through Wisdom.

    Science.gov (United States)

    Young, John L

    2018-03-01

    Among the strengths of forensic psychiatry as a profession is its ability to support lively discussion of critical questions, such as how to characterize its own essence and whether it belongs to the practice of medicine. The American Academy of Psychiatry and the Law is fortunate that Michael Norko has taken the occasion of his presidential address to describe in depth the results of the advanced stage of his probing on a truly spiritual level the fundamental place of compassion in the practice of forensic psychiatry. In so doing, he casts inevitable light on the seamless connections binding forensic psychiatry and medicine, particularly the importance for both of practicing compassion in our search for truth. © 2018 American Academy of Psychiatry and the Law.

  5. Truth or dare: expertise and risk governance

    International Nuclear Information System (INIS)

    Paterson, J.

    2002-01-01

    There is increasing evidence that the public is as concerned with the risks associated with technology as it is enthused by the opportunities that technology presents. Experts are increasingly referred to not so much for solutions to social problems per se, but paradoxically to problems attendant on technological solutions themselves. In these circumstances, there is an urgent need for the role of the expert to be clarified. While the public and political actors have essentially looked to experts for certainty in an uncertain world, this is precisely what scientific rationality cannot provide. The inherent modesty of science (exemplified, for example, by the need for falsifiability) must always be compromised at the point when a decision is made, when 'knowledge' becomes 'action'. There is accordingly a need to be clear about the status of scientific information or knowledge on the one hand, and the effect of the decision to act on the other - and hence the appropriate locus of responsibility. Analysing the process from expert advice through to political or economic decision can help to clarify the point at which misunderstanding arises, at which the inherently provisional truth of science is transformed into the effectively absolute truth implied by a decision to apply knowledge as technology. Recognizing that it is at this point that risks are run (as well as the opportunity for rewards created) may lead to greater clarity as to the respective roles. It may in turn offer some lessons as regards the design of risk governance arrangements and the place of experts in them. (author)

  6. Link Anchors in Images: Is there Truth?

    NARCIS (Netherlands)

    Aly, Robin; McGuinness, Kevin; Kleppe, Martijn; Ordelman, Roeland J.F.; O'Connor, Noel; de Jong, Franciska M.G.

    2012-01-01

    While automatic linking in text collections is well understood, little is known about links in images. In this work, we investigate two aspects of anchors, the origin of a link, in images: 1) the requirements of users for such anchors, e.g. the things users would like more information on, and 2)

  7. Automatic Quantification of Radiographic Wrist Joint Space Width of Patients With Rheumatoid Arthritis.

    Science.gov (United States)

    Huo, Yinghe; Vincken, Koen L; van der Heijde, Desiree; de Hair, Maria J H; Lafeber, Floris P; Viergever, Max A

    2017-11-01

    Objective: Wrist joint space narrowing is a main radiographic outcome of rheumatoid arthritis (RA). Yet, automatic radiographic wrist joint space width (JSW) quantification for RA patients has not been widely investigated. The aim of this paper is to present an automatic method to quantify the JSW of three wrist joints that are least affected by bone overlapping and are frequently involved in RA. These joints are located around the scaphoid bone, viz. the multangular-navicular, capitate-navicular-lunate, and radiocarpal joints. Methods: The joint space around the scaphoid bone is detected by using consecutive searches of separate path segments, where each segment location aids in constraining the subsequent one. For joint margin delineation, first the boundary not affected by X-ray projection is extracted, followed by a backtrace process to obtain the actual joint margin. The accuracy of the quantified JSW is evaluated by comparison with the manually obtained ground truth. Results: Two of the 50 radiographs used for evaluation of the method did not yield a correct path through all three wrist joints. The delineated joint margins of the remaining 48 radiographs were used for JSW quantification. It was found that 90% of the joints had a JSW deviating less than 20% from the mean JSW of manual indications, with the mean JSW error less than 10%. Conclusion: The proposed method is able to automatically quantify the JSW of radiographic wrist joints reliably, and may aid clinical researchers in studying the progression of wrist joint damage in RA studies.
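The 20% criterion in the results above can be evaluated, per joint, as a relative deviation of the automatic JSW from the mean manual JSW. A hedged sketch with invented measurements (the numbers and function name are illustrative, not the study's data):

```python
import numpy as np

def within_deviation(auto_jsw, manual_jsw, frac=0.20):
    """Fraction of joints whose automatically measured JSW deviates less
    than `frac` (default 20%) from the mean of the manual indications."""
    auto = np.asarray(auto_jsw, float)
    manual = np.asarray(manual_jsw, float)   # mean manual JSW per joint
    rel_err = np.abs(auto - manual) / manual
    return float((rel_err < frac).mean())

# Hypothetical JSW measurements (mm) for five joints:
manual_mean = [1.8, 2.1, 1.6, 2.4, 1.9]
automatic   = [1.7, 2.3, 1.2, 2.5, 1.95]
print(within_deviation(automatic, manual_mean))   # → 0.8 (4 of 5 joints)
```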

  8. Automatic design of digital synthetic gene circuits.

    Directory of Open Access Journals (Sweden)

    Mario A Marchisio

    2011-02-01

    Full Text Available De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input-output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions.
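The abstract describes generating circuit schemes from a user-specified truth table. A toy sketch of the core idea, matching a target truth table against a small library of Boolean gates (the gate library and table are illustrative; the actual tool also composes promoters and ribosome binding sites into multi-gate circuits and ranks them):

```python
from itertools import product

# Basic two-input gates assumed realizable as transcriptional logic
# (names illustrative).
GATES = {
    "AND":  lambda a, b: a and b,
    "OR":   lambda a, b: a or b,
    "NAND": lambda a, b: not (a and b),
    "NOR":  lambda a, b: not (a or b),
}

def matching_gates(truth_table):
    """Return the names of single two-input gates whose behaviour
    reproduces the given truth table {(a, b): output}."""
    hits = []
    for name, fn in GATES.items():
        if all(bool(fn(a, b)) == bool(out)
               for (a, b), out in truth_table.items()):
            hits.append(name)
    return hits

# Target: output is 1 only when both inputs are absent.
table = {(a, b): int(not (a or b)) for a, b in product([0, 1], repeat=2)}
print(matching_gates(table))   # → ['NOR']
```

Ranking several candidate schemes, as in the paper, would then compare such matches by, e.g., gate count or part availability.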

  9. Automatic generation of statistical pose and shape models for articulated joints.

    Science.gov (United States)

    Xin Chen; Graham, Jim; Hutchinson, Charles; Muir, Lindsay

    2014-02-01

    Statistical analysis of motion patterns of body joints is potentially useful for detecting and quantifying pathologies. However, building a statistical motion model across different subjects remains a challenging task, especially for a complex joint like the wrist. We present a novel framework for simultaneous registration and segmentation of multiple 3-D (CT or MR) volumes of different subjects at various articulated positions. The framework starts with a pose model generated from 3-D volumes captured at different articulated positions of a single subject (template). This initial pose model is used to register the template volume to image volumes from new subjects. During this process, the Grow-Cut algorithm is used in an iterative refinement of the segmentation of the bone along with the pose parameters. As each new subject is registered and segmented, the pose model is updated, improving the accuracy of successive registrations. We applied the algorithm to CT images of the wrist from 25 subjects, each at five different wrist positions and demonstrated that it performed robustly and accurately. More importantly, the resulting segmentations allowed a statistical pose model of the carpal bones to be generated automatically without interaction. The evaluation results show that our proposed framework achieved accurate registration with an average mean target registration error of 0.34 ±0.27 mm. The automatic segmentation results also show high consistency with the ground truth obtained semi-automatically. Furthermore, we demonstrated the capability of the resulting statistical pose and shape models by using them to generate a measurement tool for scaphoid-lunate dissociation diagnosis, which achieved 90% sensitivity and specificity.
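The registration accuracy above is reported as mean target registration error (TRE), i.e. the average Euclidean distance between corresponding landmarks after registration. A minimal sketch with invented landmark coordinates (not data from the study):

```python
import numpy as np

def mean_tre(points_a, points_b):
    """Mean target registration error: average Euclidean distance
    between corresponding 3-D landmarks (same units, e.g. mm)."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    return float(np.linalg.norm(a - b, axis=1).mean())

# Hypothetical carpal-bone landmarks, ground truth vs. registered (mm):
ground_truth = [[10.0, 5.0, 3.0], [12.5, 7.1, 4.2], [9.8, 6.0, 2.9]]
registered   = [[10.2, 5.1, 3.1], [12.4, 7.0, 4.4], [9.9, 6.3, 2.8]]
print(round(mean_tre(ground_truth, registered), 3))
```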

  10. On a Philosophical Motivation for Mutilating Truth Tables

    Directory of Open Access Journals (Sweden)

    Marcos Silva

    2016-06-01

    Full Text Available One of the reasons colours, or rather the conceptual organisation of the colour system, could be relevant to the philosophy of logic is that they necessitate some mutilation of truth tables by restricting truth functionality. This paper argues that the so-called 'Colour Exclusion Problem', the first great challenge for Wittgenstein's Tractatus, is a legitimate philosophical motivation for a systematic mutilation of truth tables. It shows how one can express, through these mutilations, some intensional logical relations usually expressed by the Aristotelian Square of Oppositions, such as contrariety and subcontrariety.

  11. Truthfulness in science teachers’ bodily and verbal actions

    DEFF Research Database (Denmark)

    Daugbjerg, Peer

    2013-01-01

    A dramaturgical approach to teachers' personal bodily and verbal actions is applied through the vocabulary of truthfulness. Bodily and verbal actions have been investigated among Danish primary and lower secondary school science teachers based on their narratives and observations of their classroom actions. The analysis shows how science teachers engage truthfully in pupil relations through an effort of applying classroom management, among other things. In all, this indicates that if science education research wants to understand science teachers' personal relations to teaching science, it could be beneficial to address the truthfulness of science teachers' narratives and actions.

  12. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI.

    Science.gov (United States)

    Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid

    2016-05-01

    Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
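The Dice metric used for validation above measures overlap between an automated mask and the ground truth (1.0 means perfect agreement). A minimal sketch with toy binary masks (the masks are illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy LV masks: automated contour shifted one column from ground truth.
gt   = np.zeros((8, 8), int); gt[2:6, 2:6]   = 1   # 16 pixels
auto = np.zeros((8, 8), int); auto[2:6, 3:7] = 1   # 12-pixel overlap
print(round(dice(gt, auto), 3))   # → 0.75
```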

  13. "Angels & Demons" - Distinguishing truth from fiction

    CERN Multimedia

    2005-01-01

    Dan Brown's best-selling novel "Angels & Demons" was published in French on 2 March. A web page on CERN's public site is dedicated to separating truth from fiction in this novel. After the extraordinary success of Dan Brown's "Da Vinci Code", one of his earlier novels "Angels & Demons", published in 2000, has now become a best seller and has generated a flood of questions about CERN. This detective story is about a secret society, the Illuminati, who wish to destroy the Vatican with an antimatter bomb stolen from - wait for it - CERN! Inevitably, CERN has been bombarded with calls about the technologies described in the novel that are supposed to be under development in the Laboratory. The Press Office has always explained that, even if the novel appears to be very informative, it is in fact a mixture of fact and fiction. For instance, according to the novel CERN is supposed to own a plane that can cover the distance between Massachusetts in the United States and Switzerland in just over an hour! ...

  14. Telling the truth in a crisis

    International Nuclear Information System (INIS)

    Shea, Jamie

    2002-01-01

    Full text: My presentation will focus on communicating in a crisis. Drawing on my experience as NATO's Spokesman during the Kosovo air campaign and, more recently, the fight against terrorism in Afghanistan, I will outline the pressures that the media place on organizations once they are in the media spotlight. I will discuss the importance of real time information, how to deal with mistakes and how to organize a media campaign in order to seize the initiative and put one's message across. What kind of organization do you need and how do you ensure that all the players are properly coordinated? What are the target audiences and how do you deal effectively with opponents seeking to discredit your handling of the crisis? My presentation will look at the respective merits of using TV, radio, newspapers and Internet as means of crisis communication. It will also highlight the increasing role of NGOs in setting the media agenda. In an environment dominated by PR executives and 'spin doctors', the challenge is to combine truth and credibility with organization and professionalism while staying calm under pressure. How NATO attempted to meet this challenge in Kosovo, learning from the inevitable mistakes it made along the way, will be a lesson that applies to any organization caught - voluntarily or involuntarily - in a sudden crisis. (author)

  15. Truth, Representation and Interpretation: The Popper Case

    Directory of Open Access Journals (Sweden)

    Gerard Stan

    2009-06-01

    Full Text Available The aim of this study is to determine several points of reference regarding the way in which Karl Popper built up his philosophical discourse. I locate two specific ways in which Popper interpreted and used ideas belonging to other philosophers. Thus I distinguish in Popper between a projective hermeneutics (where the author uses a thesis that forms part of his own philosophy in order to reconstruct and understand the ideas of another philosopher) and an ideological hermeneutics (where he uses a statement expressing an interest of the community whereof he is a member in order to interpret and reconstruct the text of another philosopher). In so doing I also highlight the considerable asymmetry between a representationalist hermeneutics and a projective and, respectively, an ideological one. Whereas in the first case the interpreter wishes to unveil a truth about the philosophical text, in the other two he is rather expressing a desire to talk about himself, his own beliefs and convictions, or about the beliefs of his community of reference.

  16. Climate: the truth and the false

    International Nuclear Information System (INIS)

    Masson-Delmotte, V.

    2011-01-01

    Climate sciences have become extraordinary instruments for the media and the politicians. The climate debates regularly put forward the question of our knowledge: what do we know exactly, what can we assert, and of what are we doubtful? A subsidiary question would be: are scientists telling us the truth? For the scientific community, there is no doubt that human activities are modifying the Earth's atmosphere composition and that the surface temperature of the Earth has globally warmed up since the beginning of the 20th century. We live on a planet with a finite dimension: fossil resources are not forever and wastes are accumulating. Is the greenhouse effect already changing climate? Is it the only factor which will control the climate evolution? What would be the natural rhythm of climate? Are scientists capable of unravelling the wheels of the climate machine? What is the situation of the climate change under way with respect to the turbulences of the past? What is an acceptable climate change? This book gives a clear answer to each of these questions as far as the answer is known. (J.S.)

  17. Facing the truth: An appraisal of the potential contributions ...

    African Journals Online (AJOL)

    Facing the truth: An appraisal of the potential contributions, paradoxes and challenges of implementing the United Nations conventions on Contracts for the International Sale of Goods (CISG) in Nigeria.

  18. This Is My (Post) Truth, Tell Me Yours

    Science.gov (United States)

    Powell, Martin

    2017-01-01

    This is a commentary on the article 'The rise of post-truth populism in pluralist liberal democracies: challenges for health policy.' It critically examines two of its key concepts: populism and 'post-truth.' This commentary argues that there are different types of populism, with unclear links to impacts, and that in some ways, 'post-truth' has resonances with arguments advanced in the period at the beginning of the British National Health Service (NHS). In short, 'post-truth populism' may be 'déjà vu all over again,' and there are multiple (post) truths: this is my (post) truth, tell me yours. PMID:29172380

  19. News, truth and crime: the Westray disaster and its aftermath

    Energy Technology Data Exchange (ETDEWEB)

    McMullan, J.L. [Saint Mary' s University, Halifax, NS (Canada). Department of Sociology and Criminology

    2005-07-01

    A study of the way the media portrayed the Westray Mine disaster and its aftermath over the period 1992 to 2002 is presented. The chapter titles are: power, discourse, and the production of news as truth; the explosion and its aftermath; studying the press and Westray; the press and the presentation of Westray's truth; and the politics of truth and the invisibility of corporate crime. News articles reporting the accident and its outcome were sampled, coded, and evaluated by content analysis. It is concluded that the various media represented alternative truths, but did not label the corporation as criminal. This was missing from the media's reporting of the disaster.

  20. Descartes on the Creation of the Eternal Truths

    Directory of Open Access Journals (Sweden)

    Danielle Macbeth

    2017-06-01

    Full Text Available On 15 April 1630, in a letter to Mersenne, Descartes announced that on his view God creates the truths of mathematics. Descartes returned to the theme in subsequent letters and some of his Replies, but nowhere is the view systematically developed and defended. It is not clear why Descartes came to espouse the creation doctrine, nor even what exactly it is. Some have argued that his motivation was theological, that God creates the eternal truths, including the truths of logic, because and insofar as God is omnipotent and the creator of all things. I develop and defend a different reading according to which Descartes was led to espouse the creation doctrine by a fundamental shift in his understanding of the correct mode of inquiry in metaphysics and mathematics: by 1630, the God-created truths came to play the role in inquiry that until then, in the Rules for the Direction of the Mind, had been played by images.

  1. The Value of Instruction for a Commitment to Truth.

    Science.gov (United States)

    Bugeja, Michael J.

    1997-01-01

    Describes the redesign of a media ethics course in which students analyze such topics as truth, falsehood, manipulation, temptation, unfairness, and power. Notes that students keep an ethics journal in the course, and discusses sample journal topics. (PA)

  2. The source of the truth bias: Heuristic processing?

    Science.gov (United States)

    Street, Chris N H; Masip, Jaume

    2015-06-01

    People believe others are telling the truth more often than they actually are; this is called the truth bias. Surprisingly, when a speaker is judged at multiple points across their statement the truth bias declines. Previous claims argue this is evidence of a shift from (biased) heuristic processing to (reasoned) analytical processing. In four experiments we contrast the heuristic-analytic model (HAM) with alternative accounts. In Experiment 1, the decrease in truth responding was not the result of speakers appearing more deceptive, but was instead attributable to the rater's processing style. Yet contrary to HAMs, across three experiments we found the decline in bias was not related to the amount of processing time available (Experiments 1-3) or the communication channel (Experiment 2). In Experiment 4 we found support for a new account: that the bias reflects whether raters perceive the statement to be internally consistent. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  3. Ground System Survivability Overview

    Science.gov (United States)

    2012-03-27

[Briefing-slide extraction residue; recoverable topics: blast mitigation optimization; Ground System Survivability (GSS) has a proven, technically proficient workforce; Evaluation of Defensive-Aid Suites (ARMED); Common Automatic Fire Extinguishing System (CAFES); transparent armor development; Ground Combat Vehicle survivability; publication of an overarching MIL-STD, design guidelines, and technical ...]

  4. What Justice for Rwanda? Gacaca versus Truth Commission?

    OpenAIRE

    Reuchamps, Min

    2008-01-01

    In post-genocide Rwanda, in addition to gacaca courts, a truth commission is needed in order to promote justice and foster reconciliation. In the context of transitional justice, retributive justice, which seeks justice and focuses on the perpetrators, appears to be inadequate to lead a society towards reconciliation. Therefore, some forms of restorative justice, which emphasize the healing of the whole society, seem necessary. In Rwanda, gacaca courts and a truth commission are complementary...

  5. Public relations and journalism: truth, trust, transparency and integrity

    OpenAIRE

    Davies, Frank

    2008-01-01

Truth, trust, integrity and reputation are key concepts for understanding the relationship between journalists and public relations practitioners. This paper: first, considers the current debate on the inter-relationship between journalism and public relations; second, distinguishes varieties of public relations and journalism; third, analyses the Editorial Intelligence controversy; fourth, deconstructs aspects of "truth" and "trust" in the context of that debate; fifth, considers why the ...

  6. Realism without truth: a review of Giere's science without laws and scientific perspectivism.

    Science.gov (United States)

    Hackenberg, Timothy D

    2009-05-01

    An increasingly popular view among philosophers of science is that of science as action-as the collective activity of scientists working in socially-coordinated communities. Scientists are seen not as dispassionate pursuers of Truth, but as active participants in a social enterprise, and science is viewed on a continuum with other human activities. When taken to an extreme, the science-as-social-process view can be taken to imply that science is no different from any other human activity, and therefore can make no privileged claims about its knowledge of the world. Such extreme views are normally contrasted with equally extreme views of classical science, as uncovering Universal Truth. In Science Without Laws and Scientific Perspectivism, Giere outlines an approach to understanding science that finds a middle ground between these extremes. He acknowledges that science occurs in a social and historical context, and that scientific models are constructions designed and created to serve human ends. At the same time, however, scientific models correspond to parts of the world in ways that can legitimately be termed objective. Giere's position, perspectival realism, shares important common ground with Skinner's writings on science, some of which are explored in this review. Perhaps most fundamentally, Giere shares with Skinner the view that science itself is amenable to scientific inquiry: scientific principles can and should be brought to bear on the process of science. The two approaches offer different but complementary perspectives on the nature of science, both of which are needed in a comprehensive understanding of science.

  7. Truth Space Method for Caching Database Queries

    Directory of Open Access Journals (Sweden)

    S. V. Mosin

    2015-01-01

Full Text Available We propose a new method of client-side data caching for relational databases with a central server and distant clients. Data are loaded into the client cache based on queries executed on the server. Every query has a corresponding DB table – the result of the query execution. These queries have a special form called a "universal relational query", based on three fundamental relational algebra operations: selection, projection and natural join. We note that such a form is the closest to natural language, and the majority of database search queries can be expressed in this way. Besides, this form allows us to analyze query correctness by checking the lossless join property. A subsequent query may be executed in a client's local cache if we can determine that the query result is entirely contained in the cache. For this we compare the truth spaces of the logical restrictions in a new user query with those of the queries already executed in the cache. Such a comparison can be performed analytically, without the need for additional database queries. This method may also be used to identify data lacking in the cache and execute the query on the server only for those data. To do this, the analytical approach is again used, which distinguishes our paper from existing technologies. We propose four theorems for testing the required conditions. The conditions of the first and third theorems allow us to determine the existence of the required data in the cache. The second and fourth theorems state the conditions for executing queries with the cache only. The problem of cache data actualization is not discussed in this paper; however, it can be solved by cataloging queries on the server and serving them with triggers in background mode. The article is published in the author's wording.
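The containment test the abstract describes (deciding analytically whether a new query's truth space lies inside a cached one) can be sketched for the simple case of conjunctive range restrictions. All names below (`TruthSpace`, `contains`) are illustrative, not from the paper:

```python
# Sketch of the analytical containment test for conjunctive range predicates.

class TruthSpace:
    """Truth space of a conjunction of range restrictions: attr -> (lo, hi)."""
    def __init__(self, bounds):
        self.bounds = dict(bounds)

    def contains(self, other):
        """True if every tuple satisfying `other` also satisfies `self`,
        i.e. the cached result is guaranteed to cover the new query."""
        for attr, (lo, hi) in self.bounds.items():
            # An attribute unrestricted by `other` spans the whole domain.
            o_lo, o_hi = other.bounds.get(attr, (float("-inf"), float("inf")))
            if o_lo < lo or o_hi > hi:
                return False
        return True

cached = TruthSpace({"age": (18, 65)})
new_q = TruthSpace({"age": (30, 40), "salary": (0, 1000)})
print(cached.contains(new_q))  # True: the new query only restricts further
```

If the test succeeds, the new query can be answered from the local cache alone; otherwise the missing region would have to be fetched from the server.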

  8. Towards an automatic tool for resolution evaluation of mammographic images

    Energy Technology Data Exchange (ETDEWEB)

    De Oliveira, J. E. E. [FUMEC, Av. Alfonso Pena 3880, CEP 30130-009 Belo Horizonte - MG (Brazil); Nogueira, M. S., E-mail: juliae@fumec.br [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Pte. Antonio Carlos 6627, 31270-901, Belo Horizonte - MG (Brazil)

    2014-08-15

Quality of Mammographies from the Public and Private Services of the State. With an essentially educational character, an evaluation of image quality is held monthly using a breast phantom on each piece of mammographic equipment. In view of this, this work proposes to develop a protocol for automatic evaluation of the image quality of mammograms, so that radiological protection and image quality requirements are met in the early detection of breast cancer. Specifically, image resolution is addressed and evaluated as part of the image quality evaluation program. Results show that, for the fourth resolution and using 28 phantom images with the ground truth established, computer analysis of the resolution is promising and may be used as a tool for the assessment of image quality. (Author)

  9. Towards an automatic tool for resolution evaluation of mammographic images

    International Nuclear Information System (INIS)

    De Oliveira, J. E. E.; Nogueira, M. S.

    2014-08-01

Quality of Mammographies from the Public and Private Services of the State. With an essentially educational character, an evaluation of image quality is held monthly using a breast phantom on each piece of mammographic equipment. In view of this, this work proposes to develop a protocol for automatic evaluation of the image quality of mammograms, so that radiological protection and image quality requirements are met in the early detection of breast cancer. Specifically, image resolution is addressed and evaluated as part of the image quality evaluation program. Results show that, for the fourth resolution and using 28 phantom images with the ground truth established, computer analysis of the resolution is promising and may be used as a tool for the assessment of image quality. (Author)

  10. Degrees of Truthfulness in Accepted Scientific Claims.

    Directory of Open Access Journals (Sweden)

    Ahmed Hassan Mabrouk

    2008-12-01

Full Text Available Abstract: Sciences adopt different methodologies in deriving claims and establishing theories. As a result, two accepted claims or theories belonging to two different sciences may not necessarily carry the same degree of truthfulness. Examining the different methodologies of deriving claims in the sciences of ʿaqīdah (Islamic creed), fiqh (Islamic jurisprudence) and physics, the study shows that ʿaqīdah provides a holistic understanding of the universe. Physics falls short of interpreting physical phenomena unless these phenomena are looked at through the ʿaqīdah holistic view. Left to itself, error may creep into the laws of physics through the methodology of conducting the physical experiments, misinterpretation of the experimental results, or acceptance of invalid assumptions. As for fiqh, it is found that, apart from apparent errors, fiqh views cannot be falsified. It is therefore useful to consider ʿaqīdah a master science which would permit all other sciences to live in harmony.

  11. Competence and Performance in Belief-Desire Reasoning across Two Cultures: The Truth, the Whole Truth and Nothing but the Truth about False Belief?

    Science.gov (United States)

    Yazdi, Amir Amin; German, Tim P.; Defeyter, Margaret Anne; Siegal, Michael

    2006-01-01

    There is a change in false belief task performance across the 3-5 year age range, as confirmed in a recent meta-analysis [Wellman, H. M., Cross, D., & Watson, J. (2001). Meta-analysis of theory mind development: The truth about false-belief. "Child Development," 72, 655-684]. This meta-analysis identified several performance factors influencing…

  12. Finding weak points automatically

    International Nuclear Information System (INIS)

    Archinger, P.; Wassenberg, M.

    1999-01-01

Operators of nuclear power stations have to carry out material tests on selected components at regular intervals. A fully automated test, which achieves clearly higher reproducibility than partially automated variants, would therefore provide a solution. In addition, a fully automated test reduces the radiation dose to the test person. (orig.) [de]

  13. IceMap250—Automatic 250 m Sea Ice Extent Mapping Using MODIS Data

    Directory of Open Access Journals (Sweden)

    Charles Gignac

    2017-01-01

Full Text Available The sea ice cover in the North evolves at a rapid rate. To adequately monitor this evolution, tools with high temporal and spatial resolution are needed. This paper presents IceMap250, an automatic sea ice extent mapping algorithm using MODIS reflective/emissive bands. Hybrid cloud-masking using both the MOD35 mask and a visibility mask, combined with downscaling of Bands 3–7 to 250 m, is used to delineate sea ice extent with a decision tree approach. IceMap250 was tested on scenes from the freeze-up, stable cover, and melt seasons in the Hudson Bay complex, in Northeastern Canada. IceMap250's first product is a daily composite sea ice presence map at 250 m. Validation based on comparisons with photo-interpreted ground truth shows the ability of the algorithm to achieve high classification accuracy, with kappa values systematically over 90%. IceMap250's second product is a weekly clear-sky map that synthesizes 7 days of daily composite maps. This map, produced using a majority filter, makes the sea ice presence map even more accurate by filtering out the effects of isolated classification errors. The synthesis maps show spatial consistency through time when compared to passive microwave and national ice services maps.
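The weekly clear-sky synthesis step (a per-pixel majority vote over 7 daily composite maps) can be sketched as follows; the class codes and no-data value are assumptions for illustration, not IceMap250's actual encoding:

```python
import numpy as np

# Per-pixel majority filter over daily maps (0 = water, 1 = ice, 255 = no data).
def weekly_majority(daily_maps, nodata=255):
    stack = np.stack(daily_maps)                      # (7, H, W)
    out = np.full(stack.shape[1:], nodata, np.uint8)
    for h in range(stack.shape[1]):
        for w in range(stack.shape[2]):
            vals = stack[:, h, w]
            vals = vals[vals != nodata]               # ignore cloudy/no-data days
            if vals.size:
                out[h, w] = np.bincount(vals).argmax()  # majority class
    return out

# Five days observing ice/water plus two all-water days, on a 2x2 grid.
days = [np.array([[1, 0], [255, 1]], np.uint8) for _ in range(5)] + \
       [np.array([[0, 0], [0, 0]], np.uint8) for _ in range(2)]
print(weekly_majority(days))
```

Isolated misclassifications on single days are voted out, which is exactly why the weekly product is more accurate than any daily composite.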

  14. DESIGN AND DEVELOP A COMPUTER AIDED DESIGN FOR AUTOMATIC EXUDATES DETECTION FOR DIABETIC RETINOPATHY SCREENING

    Directory of Open Access Journals (Sweden)

    C. A. SATHIYAMOORTHY

    2016-04-01

Full Text Available Diabetic retinopathy is a severe and widely spread eye disease which can lead to blindness. One of the main signs preceding vision loss is exudates, and blindness can be prevented by an early screening process. In existing systems, a Fuzzy C-Means clustering technique is used for detecting exudates for analysis. The main objective of this paper is to improve the efficiency of exudate detection in diabetic retinopathy images. To do this, a Three-Stage (TS) approach is introduced for automatically detecting and extracting exudates from retinal images for diabetic retinopathy screening. TS operates on the image at three levels: pre-processing the image, enhancing the image, and detecting the exudates accurately. After successful detection, the detected exudates are classified using the GLCM method to determine accuracy. The TS approach was implemented in MATLAB, and its performance is evaluated by comparing its results with those of the existing approach and with hand-drawn ground-truth images from an expert ophthalmologist.
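The GLCM classification step relies on co-occurrence texture features such as Haralick contrast. A minimal pure-NumPy sketch (the offset and grey-level count are illustrative choices, not the paper's parameters):

```python
import numpy as np

# Grey-level co-occurrence matrix (GLCM) and its contrast feature.
def glcm_contrast(img, levels=8, offset=(0, 1)):
    dy, dx = offset
    glcm = np.zeros((levels, levels), float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):                       # count co-occurring pairs
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                                # normalise to probabilities
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())         # Haralick contrast

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
print(round(glcm_contrast(img), 3))
```

High contrast indicates frequent large grey-level jumps between neighbouring pixels, which helps separate bright exudate regions from smooth background.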

  15. Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting

    Science.gov (United States)

    Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2012-02-01

    We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures, removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.
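The iterative fitting idea (fit a smoothing B-spline through fissure candidate points, discard the points farthest from the current fit, refit) can be sketched in 2D with SciPy; the thresholds, iteration count and x-parametrisation are assumptions, not the authors' settings:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def iterative_spline_fit(pts, n_iter=3, keep_frac=0.8, s=1.0):
    """Fit a smoothing B-spline, repeatedly dropping the worst-fitting points."""
    pts = np.asarray(pts, float)
    for _ in range(n_iter):
        order = np.argsort(pts[:, 0])                 # parametrise along x
        x, y = pts[order, 0], pts[order, 1]
        tck, u = splprep([x, y], s=s)
        fx, fy = splev(u, tck)
        resid = np.hypot(x - fx, y - fy)
        keep = resid <= np.quantile(resid, keep_frac)  # reject likely noise objects
        pts = np.column_stack([x[keep], y[keep]])
    return tck

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = np.sin(x) + rng.normal(0, 0.05, x.size)
y[5] += 3.0                                            # an off-fissure "noise" object
tck = iterative_spline_fit(np.column_stack([x, y]))
```

Because poorly fitting candidates are simply dropped between iterations, no explicit fissure-vs-noise classifier is needed, mirroring the property the abstract highlights.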

  16. The Medawar Lecture 2004 the truth about science.

    Science.gov (United States)

    Lipton, Peter

    2005-06-29

The attitudes of scientists towards the philosophy of science are mixed and include considerable indifference and some hostility. This may be due in part to unrealistic expectations and to misunderstanding. Philosophy is unlikely to improve scientific practices directly, but scientists may nevertheless find the attempt to explain how science works and what it achieves of considerable interest. The present state of the philosophy of science is illustrated by recent work on the 'truth hypothesis', according to which science is generating increasingly accurate representations of a mind-independent and largely unobservable world. According to Karl Popper, although truth is the aim of science, it is impossible to justify the truth hypothesis. According to Thomas Kuhn, the truth hypothesis is false, because scientists can only describe a world that is partially constituted by their own theories and hence not mind-independent. The failure of past scientific theories has been used to argue against the truth hypothesis; the success of the best current theories has been used to argue for it. Neither argument is sound.

  17. DID RAMSEY EVER ENDORSE A REDUNDANCY THEORY OF TRUTH?

    Directory of Open Access Journals (Sweden)

    María J. Frápolli

    2013-11-01

Full Text Available This paper deals with Ramsey's theory of truth and its aim is twofold: on the one hand, it will explain what position about truth Ramsey actually defended, and, on the other hand, we will pursue Ramsey's insight in the XXth century. When the name of Frank Ramsey is mentioned, one of the things that comes to mind is the theory of truth as redundancy. In the following pages we will argue that Ramsey never supported such a theory, but rather an analysis of truth noticeably similar to the prosentential account. In fact, the very word "pro-sentence" appears for the first time in the XXth century in Ramsey's unfinished work "The nature of truth", written around 1929. Besides, we will show that the prosentential account of truth is a neglected trend throughout the history of analytic philosophy, even though relevant analytic philosophers, such as Prior, Strawson, Williams, Grover and Brandom, have endorsed it.

  18. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    Science.gov (United States)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential for clinical usage with high effectiveness, robustness and efficiency.
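The two evaluation metrics reported (Dice overlap and average symmetric surface distance) can be sketched on binary masks as follows; this is a generic formulation on synthetic 2D data, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dice(a, b):
    """Dice overlap ratio between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def assd(a, b, spacing=1.0):
    """Average symmetric surface distance between two binary masks."""
    def surface(m):
        return m & ~binary_erosion(m)                 # boundary voxels only
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    da = distance_transform_edt(~sb) * spacing        # distance to b's surface
    db = distance_transform_edt(~sa) * spacing        # distance to a's surface
    return (da[sa].sum() + db[sb].sum()) / (sa.sum() + sb.sum())

a = np.zeros((20, 20), bool); a[5:15, 5:15] = True
b = np.zeros((20, 20), bool); b[6:16, 5:15] = True    # shifted by one voxel
print(round(dice(a, b), 3))
```

With real CT data the `spacing` argument would carry the millimetre voxel size, so ASSD comes out in mm as in the abstract.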

  19. Automatic Segmentation and Online virtualCT in Head-and-Neck Adaptive Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Peroni, Marta, E-mail: marta.peroni@mail.polimi.it [Department of Bioengineering, Politecnico di Milano, Milano (Italy); Ciardo, Delia [Advanced Radiotherapy Center, European Institute of Oncology, Milano (Italy); Spadea, Maria Francesca [Department of Experimental and Clinical Medicine, Universita degli Studi Magna Graecia, Catanzaro (Italy); Riboldi, Marco [Department of Bioengineering, Politecnico di Milano, Milano (Italy); Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, Pavia (Italy); Comi, Stefania; Alterio, Daniela [Advanced Radiotherapy Center, European Institute of Oncology, Milano (Italy); Baroni, Guido [Department of Bioengineering, Politecnico di Milano, Milano (Italy); Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, Pavia (Italy); Orecchia, Roberto [Advanced Radiotherapy Center, European Institute of Oncology, Milano (Italy); Universita degli Studi di Milano, Milano (Italy); Medical Department, Centro Nazionale di Adroterapia Oncologica, Pavia (Italy)

    2012-11-01

    Purpose: The purpose of this work was to develop and validate an efficient and automatic strategy to generate online virtual computed tomography (CT) scans for adaptive radiation therapy (ART) in head-and-neck (HN) cancer treatment. Method: We retrospectively analyzed 20 patients, treated with intensity modulated radiation therapy (IMRT), for an HN malignancy. Different anatomical structures were considered: mandible, parotid glands, and nodal gross tumor volume (nGTV). We generated 28 virtualCT scans by means of nonrigid registration of simulation computed tomography (CTsim) and cone beam CT images (CBCTs), acquired for patient setup. We validated our approach by considering the real replanning CT (CTrepl) as ground truth. We computed the Dice coefficient (DSC), center of mass (COM) distance, and root mean square error (RMSE) between correspondent points located on the automatically segmented structures on CBCT and virtualCT. Results: Residual deformation between CTrepl and CBCT was below one voxel. Median DSC was around 0.8 for mandible and parotid glands, but only 0.55 for nGTV, because of the fairly homogeneous surrounding soft tissues and of its small volume. Median COM distance and RMSE were comparable with image resolution. No significant correlation between RMSE and initial or final deformation was found. Conclusion: The analysis provides evidence that deformable image registration may contribute significantly in reducing the need of full CT-based replanning in HN radiation therapy by supporting swift and objective decision-making in clinical practice. Further work is needed to strengthen algorithm potential in nGTV localization.

  20. Automatic segmentation and online virtualCT in head-and-neck adaptive radiation therapy.

    Science.gov (United States)

    Peroni, Marta; Ciardo, Delia; Spadea, Maria Francesca; Riboldi, Marco; Comi, Stefania; Alterio, Daniela; Baroni, Guido; Orecchia, Roberto

    2012-11-01

    The purpose of this work was to develop and validate an efficient and automatic strategy to generate online virtual computed tomography (CT) scans for adaptive radiation therapy (ART) in head-and-neck (HN) cancer treatment. We retrospectively analyzed 20 patients, treated with intensity modulated radiation therapy (IMRT), for an HN malignancy. Different anatomical structures were considered: mandible, parotid glands, and nodal gross tumor volume (nGTV). We generated 28 virtualCT scans by means of nonrigid registration of simulation computed tomography (CTsim) and cone beam CT images (CBCTs), acquired for patient setup. We validated our approach by considering the real replanning CT (CTrepl) as ground truth. We computed the Dice coefficient (DSC), center of mass (COM) distance, and root mean square error (RMSE) between correspondent points located on the automatically segmented structures on CBCT and virtualCT. Residual deformation between CTrepl and CBCT was below one voxel. Median DSC was around 0.8 for mandible and parotid glands, but only 0.55 for nGTV, because of the fairly homogeneous surrounding soft tissues and of its small volume. Median COM distance and RMSE were comparable with image resolution. No significant correlation between RMSE and initial or final deformation was found. The analysis provides evidence that deformable image registration may contribute significantly in reducing the need of full CT-based replanning in HN radiation therapy by supporting swift and objective decision-making in clinical practice. Further work is needed to strengthen algorithm potential in nGTV localization. Copyright © 2012 Elsevier Inc. All rights reserved.
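The centre-of-mass (COM) distance used in both versions of this study can be sketched as follows; the masks are synthetic and the 1 mm isotropic spacing is an assumption:

```python
import numpy as np
from scipy.ndimage import center_of_mass

def com_distance(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Euclidean distance between the centres of mass of two binary masks."""
    ca = np.array(center_of_mass(mask_a)) * spacing   # voxel -> physical units
    cb = np.array(center_of_mass(mask_b)) * spacing
    return float(np.linalg.norm(ca - cb))

a = np.zeros((10, 10, 10), bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((10, 10, 10), bool); b[3:7, 2:6, 2:6] = True  # shifted one voxel
print(com_distance(a, b))  # 1.0
```

Here the metric would be evaluated between the structure segmented on the CBCT and the same structure on the virtualCT.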

  1. Automatic lung segmentation using control feedback system: morphology and texture paradigm.

    Science.gov (United States)

    Noor, Norliza M; Than, Joel C M; Rijal, Omar M; Kassim, Rosminah M; Yunus, Ashari; Zeki, Amir A; Anzidei, Michele; Saba, Luca; Suri, Jasjit S

    2015-03-01

Interstitial Lung Disease (ILD) encompasses a wide array of diseases that share some common radiologic characteristics. When diagnosing such diseases, radiologists can be affected by heavy workload and fatigue, thus decreasing diagnostic accuracy. Automatic segmentation is the first step in implementing Computer Aided Diagnosis (CAD), which will help radiologists to improve diagnostic accuracy, thereby reducing manual interpretation. The proposed automatic segmentation uses an initial thresholding- and morphology-based segmentation coupled with feedback that detects large deviations with a corrective segmentation. This feedback is analogous to a control system: it allows detection of abnormal or severe lung disease and provides feedback to an online segmentation, improving the overall performance of the system. The feedback system incorporates a texture paradigm. In this study we examined 48 male and 48 female patients, consisting of 15 normal and 81 abnormal cases. A senior radiologist chose the five levels needed for ILD diagnosis. The results of segmentation were displayed by comparing the automated and ground-truth boundaries (courtesy of ImgTracer™ 1.0, AtheroPoint™ LLC, Roseville, CA, USA). The left lung's segmentation performance was 96.52% for Jaccard Index, 98.21% for Dice Similarity, 0.61 mm for Polyline Distance Metric (PDM), -1.15% for Relative Area Error and 4.09% for Area Overlap Error. The right lung's segmentation performance was 97.24% for Jaccard Index, 98.58% for Dice Similarity, 0.61 mm for PDM, -0.03% for Relative Area Error and 3.53% for Area Overlap Error. Overall, the segmentation has a similarity of 98.4%. The proposed segmentation is an accurate and fully automated system.
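The first stage described (intensity thresholding followed by morphological clean-up and component selection) can be sketched with SciPy; the threshold value and synthetic data are assumptions, and the corrective texture-feedback loop is omitted:

```python
import numpy as np
from scipy import ndimage

def segment_lung(slice_hu, threshold=-400):
    """Threshold air-like voxels, clean up, keep the largest component."""
    mask = slice_hu < threshold                       # air-like voxels
    mask = ndimage.binary_opening(mask, np.ones((3, 3)))  # remove small speckle
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))      # largest connected region

img = np.full((32, 32), 40.0)                         # soft-tissue background (HU)
img[8:24, 6:20] = -800.0                              # synthetic "lung" region
seg = segment_lung(img)
print(int(seg.sum()))
```

A feedback stage as in the paper would then compare this mask against expected morphology and texture, triggering a corrective segmentation when the deviation is large.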

  2. Love the Truth in the Franciscan School (XIIIth century)

    Directory of Open Access Journals (Sweden)

    Manuel Lázaro Pulido

    2013-11-01

Full Text Available Love of the truth is a fundamental question in the Franciscan School. It has its origin in the Franciscan practical need to transmit the evangelical message to all men. The universality of the message inspires the concept of wisdom as a basis for love of the truth. The truth appears as an occasion of reference to God; the significatio is never subordinated to the res. The article sets out the fundamental milestones of this construction from the origins of the Franciscan School to the end of the 13th century with Gonsalvus of Spain, indicating the common points and the internal discussions of a School according to Anthony of Lisbon/Padua, Alexander of Hales, Odo Rigaldus, William of Melitona, Robert Grosseteste, Roger Bacon, Bonaventure, Matthew of Aquasparta, Peter John Olivi and Gonsalvus of Spain.

  3. The Social Sciences and their commitment to truth and justice

    Directory of Open Access Journals (Sweden)

    Mauro W. Barbosa de Almeida

    2015-06-01

Full Text Available This paper discusses the responsibility of social scientists in relation to justice and truth, based on the author's practical and theoretical experience in the field of social anthropology. Although the text addresses the social sciences from the perspective of social anthropology, it deals with topics in which the activities of researchers and activists require the cooperative action of lawyers, engineers and biologists alongside the work of sociologists and geographers – all of whom are involved in situations where it is necessary to tell the truth and also to judge justice and injustice in social life. Notions of justice and truth are social scientists' weapons, and they cannot be abandoned in the hands of conservative thought.

  4. Love and Truth in Social Involvement of the Church

    Directory of Open Access Journals (Sweden)

    Henryk Szmulewicz

    2012-09-01

Full Text Available This study begins with a brief outline of the essence of the whole encyclical Caritas in veritate. Benedict XVI expresses the desire for "dialogue with the world". He understands this dialogue as a special kind of service of the Church towards eternal love and truth, fully revealed in Christ. The dialogue of the Church with the world, in the spirit of love and truth, is accomplished every day at the level of so-called official relations. There are numerous opinions that in the past the Church repeatedly neglected dialogue with the world. Indeed, Church historians point out examples of the fall of the authority of the Holy See in particular countries and circumstances. Similarly, the Church is a sign of contradiction in the contemporary world. Instructed by past experience, the Church is aware that what is necessary for the renewal of culture and society is evangelical love and truth.

  5. This Is My (Post) Truth, Tell Me Yours

    OpenAIRE

    Powell, Martin

    2017-01-01

    This is a commentary on the article ‘The rise of post-truth populism in pluralist liberal democracies: challenges for health policy.’ It critically examines two of its key concepts: populism and ‘post truth.’ This commentary argues that there are different types of populism, with unclear links to impacts, and that in some ways, ‘post-truth’ has resonances with arguments advanced in the period at the beginning of the British National Health Service (NHS). In short, ‘post-truth’ populism’ may b...

  6. Automatic assessment of average diaphragm motion trajectory from 4DCT images through machine learning.

    Science.gov (United States)

    Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O

    2015-12-01

To automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion study. We have developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was performed, the discrete cosine transform (DCT) was applied to analyze the dVPS curves in the frequency domain. The dimensionality of the spectrum data was reduced by using the several lowest frequency coefficients (f_v) that account for most of the spectrum energy (Σf_v^2). The multiple linear regression (MLR) method was then applied to determine the weights of these frequencies by fitting the ground truth (the measured ADMT), represented by three pivot points of the diaphragm on each side. The 'leave-one-out' cross-validation method was employed to analyze the statistical performance of the prediction results in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. Seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error is lower in 4DCT2 than in 4DCT1, and is lowest in 4DCT1 and 4DCT2 combined. This frequency-analysis-based machine learning technique was employed to predict the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of lung tumors.
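The feature pipeline described (5-slice moving average of a dVPS curve, DCT, then multiple linear regression on the lowest-frequency coefficients) can be sketched on synthetic data; the array shapes and the regression target below are assumptions, not the patients' measurements:

```python
import numpy as np
from scipy.fft import dct

def dvps_features(curve, n_coeff=7):
    """5-slice moving average followed by the lowest DCT coefficients."""
    smooth = np.convolve(curve, np.ones(5) / 5, mode="valid")
    return dct(smooth, norm="ortho")[:n_coeff]        # keep lowest frequencies

rng = np.random.default_rng(1)
# 20 synthetic dVPS curves of 60 slices each -> 7-dimensional feature vectors.
X = np.array([dvps_features(rng.normal(size=60)) for _ in range(20)])
y = X @ rng.normal(size=7) + rng.normal(0, 0.01, 20)  # synthetic motion target
w, *_ = np.linalg.lstsq(X, y, rcond=None)             # MLR weights
pred = X @ w
print(round(float(np.corrcoef(pred, y)[0, 1]), 2))
```

In the study the regression target is the measured ADMT pivot points, and the fit quality would be checked with leave-one-out cross-validation rather than on the training data as here.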

  7. Review of commonly used remote sensing and ground-based ...

    African Journals Online (AJOL)

    This review provides an overview of the use of remote sensing data, the development of spectral reflectance indices for detecting plant water stress, and the usefulness of field measurements for ground-truthing purposes. Reliable measurements of plant water stress over large areas are often required for management ...

  8. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action.

    Science.gov (United States)

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan

    2018-01-01

    Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, limited personnel resources, or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable, and license-free segmentation approach for use in clinical practice. In this retrospective, randomized, controlled trial, the accuracy and accordance of the open-source segmentation algorithm GrowCut were assessed through comparison with the manually generated ground truth of the same anatomy, using 10 CT lower-jaw datasets from the clinical routine. Assessment parameters were segmentation time, volume, voxel number, Dice score, and Hausdorff distance. Overall, semi-automatic GrowCut segmentation times were about one minute. Mean Dice score values of over 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p 0.94) for any of the comparisons made between the two groups. Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Owing to its open-source basis, the method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with a
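    The two agreement metrics used in this trial, the Dice score and the Hausdorff distance, reduce to a few lines each. A minimal sketch for flat binary masks and 2-D point sets (helper names are illustrative):

```python
import math

def dice_score(a, b):
    """Dice similarity of two flat binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def directed(X, Y):
        # Worst-case distance from a point of X to its nearest point in Y.
        return max(min(math.dist(p, q) for q in Y) for p in X)
    return max(directed(A, B), directed(B, A))
```

    A Dice score above 0.85 between an algorithmic mask and the manual ground truth is the kind of overlap the abstract reports.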

  9. Automatic Photoelectric Telescope Service

    International Nuclear Information System (INIS)

    Genet, R.M.; Boyd, L.J.; Kissell, K.E.; Crawford, D.L.; Hall, D.S.; BDM Corp., McLean, VA; Kitt Peak National Observatory, Tucson, AZ; Dyer Observatory, Nashville, TN

    1987-01-01

    Automatic observatories have the potential of gathering sizable amounts of high-quality astronomical data at low cost. The Automatic Photoelectric Telescope Service (APT Service) has realized this potential and is routinely making photometric observations of a large number of variable stars. However, without observers to provide on-site monitoring, it was necessary to incorporate special quality checks into the operation of the APT Service at its multiple automatic telescope installation on Mount Hopkins. 18 references

  10. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Full Text Available Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, a progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, the payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  11. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. [comp.

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
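    The chain-rule propagation that distinguishes automatic differentiation from both symbolic and numeric differentiation can be illustrated with forward-mode dual numbers. A toy sketch, not drawn from any tool in the bibliography:

```python
class Dual:
    """Dual number carrying a value and its derivative (forward-mode AD)."""

    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        # Product rule propagates the derivative through each operation.
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

    __rmul__ = __mul__

def derivative(f, x):
    """d/dx f at x, via a dual number seeded with derivative 1."""
    return f(Dual(x, 1.0)).dot
```

    Because the derivative is propagated exactly through each elementary operation, the result carries no truncation error, unlike a finite-difference quotient.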

  12. Ernst von Glasersfeld's Radical Constructivism and Truth as Disclosure

    Science.gov (United States)

    Joldersma, Clarence W.

    2011-01-01

    In this essay Clarence Joldersma explores radical constructivism through the work of its most well-known advocate, Ernst von Glasersfeld, who combines a sophisticated philosophical discussion of knowledge and truth with educational practices. Joldersma uses Joseph Rouse's work in philosophy of science to criticize the antirealism inherent in…

  13. Views from the field Truth seeking and gender: The Liberian ...

    African Journals Online (AJOL)

    to gender influence truth-seeking in a post-conflict situation? Following ... of how apartheid structured identities not simply along the fault lines of race, but also ... the Second World War in Europe and all black people (African, Coloured and ... and power of Afrikaner nationalism by means of an exclusive system of white.

  14. Truth telling in a South African tertiary hospital | Vangu | South ...

    African Journals Online (AJOL)

    Introduction. Truth telling forms part of the contemporary debate in clinical bioethics and centres around the right of the patient to receive honest information concerning his or her medical condition/illness and the duty of the doctor to give this information to the patient. Many patients complain that they are not being informed, ...

  15. 78 FR 18795 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2013-03-28

    ... of credit at account opening. The consumer is also required to pay a cash advance fee that is equal... amount equal to any fees the consumer was required to pay with respect to the account that exceed 25... Regulation Z, which implements the Truth in Lending Act, and the Official Interpretations of the regulation...

  16. Communicating Truthfully and Positively in Appraising Work Performance.

    Science.gov (United States)

    Pearce, C. Glenn; And Others

    1989-01-01

    Explores the issue of acceptable behavior for managers when giving feedback to their subordinates. Notes that feedback can be either truthful or untruthful, and can be communicated either positively or negatively. Describes the advantages and disadvantages for each feedback approach to work performance. (MM)

  17. Truth or meaning: Ricoeur versus Frei on biblical narrative ...

    African Journals Online (AJOL)

    Truth or meaning: Ricoeur versus Frei on biblical narrative. Of the theologians and philosophers now writing on biblical narrative, Hans Frei and Paul Ricoeur are probably the most prominent. It is significant that their views converge on important issues. Both are uncomfortable with hermeneutic theories that convert the text ...

  18. Negotiating Nation-building and Citizenship through the Truth and ...

    African Journals Online (AJOL)

    This paper, therefore, seeks to interrogate the dramatic world(s) created using the material properties of the TRC in John Kani's Nothing but the Truth and Zakes Mda's The Bells of Amersfoort. The paper argues that the domination and manipulation of this public realm by the state at the expense of the individual is not only ...

  19. Truth commissions and gender: A South African case study ...

    African Journals Online (AJOL)

    South Africa's gendered past was never substantially addressed by the South African Truth and Reconciliation Commission (TRC) despite attempts by women's groups to ensure its inclusion.. The TRC's treatment of gender was in part constrained by its 'gender-blind' mandate, which ignored the different experiences and ...

  20. Art and fiction are signals with indeterminate truth values.

    Science.gov (United States)

    Rabb, Nathaniel

    2017-01-01

    Menninghaus et al. distinguish art from fiction, but no current arguments or data suggest that the concept of art can be meaningfully circumscribed. This is a problem for aesthetic psychology. I sketch a solution by rejecting the distinction: Unlike most animal communication, in which signals are either true or false, art and fiction consist of signals without determinate truth values.

  1. Truth and modes of cognition in Boethius: a Neoplatonic approach

    Directory of Open Access Journals (Sweden)

    José María Zamora Calvo

    2017-07-01

    Full Text Available Boethius does not accept the principle of realism that considers truth as the adaptation, or adequation, of the subject to the knowable object, and instead defends that knowledge should be studied by relating it to the capacity of the cognizing subject. Thus, truth is relative to the faculty or level of knowledge at which we stand, since each faculty, that is, each level of knowledge, has its own object: the material figure for the senses, the figure without matter for the imagination, the universal for reason, and the simple form for intelligence. But this epistemological relativism is moderate, precisely because of its hierarchical character. Therefore, although in a sense truth is manifold, the perfect truth, proper to divine knowledge, includes and surpasses all others. In order to cement the architecture of this system of relativisation of knowledge, Boethius starts from a Neoplatonic interpretation of the simile of the line in the Republic (VI.510a-b) and Plato's Timaeus, but is not completely tied to it. The beings endowed with knowledge are ordered according to the Neoplatonic hierarchy of cosmic realities.

  2. What is the Point? Ethics, Truth and the Tractatus

    DEFF Research Database (Denmark)

    Christensen, Anne-Marie Søndergaard

    2007-01-01

    discourse is shaped by both subjective and objective concerns. Moving on, I unfold the subjective side of ethics by drawing on Stanley Cavell's notion of the point of an utterance, while the objective side will be presented via Diamond's writing on the importance of truth in ethics. My goal is to argue...

  3. 78 FR 25818 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2013-05-03

    ... BUREAU OF CONSUMER FINANCIAL PROTECTION 12 CFR Part 1026 [Docket No. CFPB-2012-0039] RIN 3170-AA28 Truth in Lending (Regulation Z) AGENCY: Bureau of Consumer Financial Protection. ACTION: Final rule; official interpretations. SUMMARY: The Bureau of Consumer Financial Protection (Bureau) issues this final...

  4. Micro CT based truth estimation of nodule volume

    Science.gov (United States)

    Kinnard, L. M.; Gavrielides, M. A.; Myers, K. J.; Zeng, R.; Whiting, B.; Lin-Gibson, S.; Petrick, N.

    2010-03-01

    With the advent of high-resolution CT, three-dimensional (3D) methods for nodule volumetry have been introduced, with the hope that such methods will be more accurate and consistent than currently used planar measures of size. However, the error associated with volume estimation methods still needs to be quantified. Volume estimation error is multi-faceted in the sense that there is variability associated with the patient, the software tool and the CT system. A primary goal of our current research efforts is to quantify the various sources of measurement error and, when possible, minimize their effects. In order to assess the bias of an estimate, the actual value, or "truth," must be known. In this work we investigate the reliability of micro CT to determine the "true" volume of synthetic nodules. The advantage of micro CT over other truthing methods is that it can provide both absolute volume and shape information in a single measurement. In the current study we compare micro CT volume truth to weight-density truth for spherical, elliptical, spiculated and lobulated nodules with diameters from 5 to 40 mm, and densities of -630 and +100 HU. The percent differences between micro CT and weight-density volume for -630 HU nodules range from [-21.7%, -0.6%] (mean= -11.9%) and the differences for +100 HU nodules range from [-0.9%, 3.0%] (mean=1.7%).
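    The reported biases are signed percent differences of the estimated volume against the reference, together with an ideal-sphere volume for the spherical phantoms. A sketch of that bookkeeping (function names are illustrative):

```python
import math

def percent_difference(measured_mm3, reference_mm3):
    """Signed percent difference of a volume estimate vs. the truth value."""
    return 100.0 * (measured_mm3 - reference_mm3) / reference_mm3

def sphere_volume_mm3(diameter_mm):
    """Volume of an ideal spherical nodule: (pi/6) * d^3."""
    return math.pi * diameter_mm ** 3 / 6.0
```

    For example, a nodule whose micro CT volume comes out at 88.1% of its weight-density volume shows the -11.9% mean difference quoted for the -630 HU nodules.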

  5. Propositional matrices as alternative representation of truth values ...

    African Journals Online (AJOL)

    The paper considered the subject of representation of truth values in symbolic logic. An alternative representation was given based on the rows and columns properties of matrices, with the operations involving the logical connectives subjected to the laws of algebra of propositions. Matrices of various propositions detailing ...

  6. 75 FR 58505 - Regulation Z; Truth in Lending

    Science.gov (United States)

    2010-09-24

    .... Reasons for the Proposed Rule Congress enacted TILA based on findings that economic stability would be... and Regulation Z Congress enacted the Truth in Lending Act (TILA) based on findings that economic stability would be enhanced and competition among consumer credit providers would be strengthened by the...

  7. 75 FR 58469 - Regulation Z; Truth in Lending

    Science.gov (United States)

    2010-09-24

    ... Congress enacted the Truth in Lending Act (TILA) based on findings that economic stability would be... 2503 of the Housing and Economic Recovery Act of 2008, Public Law 110-289, enacted on July 30, 2008. The MDIA was later amended by the Emergency Economic Stabilization Act of 2008, Public Law 110-343...

  8. The Autobiographical Photo-textual Devices. Rhetorics and Truth

    Directory of Open Access Journals (Sweden)

    Roberta Coglitore

    2014-05-01

    Full Text Available The case of autobiographical photo-texts is to be analyzed, first of all, as autobiographical writing that feels the need to express itself by other means; secondly, as a specific rhetorical practice that chooses the image, next to the word, as a further persuasive force; finally, as a very special case of icono-texts, which uses some variety of the connection between the verbal and the visual. It is not only a matter of analyzing how the cooperation between photographs and autobiographical writing works, that is, through which connectors (frames, white space, overlays, and captions), but also of understanding what the functions of the photographs are in relation to literature. This is in order to understand what truth is affirmed in the examples chosen: Franca Valeri, Grégoire Bouillier, Roland Barthes, Winfried G. Sebald, Lalla Romano, Jovanotti, Edward Said, Azar Nafisi, Vladimir Nabokov, André Breton, Hannah Höch, Annie Ernaux. Do photographs expose, confirm, add to, or resist the truth expressed by the literary side? And if the narrative expresses the truth, and the resistance to the truth, of the author himself, what does the photograph resist while it shows?

  9. Green gold : on variations of truth in plantation forestry

    NARCIS (Netherlands)

    Romeijn, P.

    1999-01-01

    The "variations of truth in plantation forestry" is a study on the Teakwood investment program. Teakwood offered the general public in The Netherlands the opportunity to directly invest in a teak plantation in Costa Rica. The program was pioneered in 1989 and truly gained momentum when it

  10. Heart Health: Learn the Truth About Your Heart

    Science.gov (United States)

    February is American Heart Month. Now is the time to make sure ...

  11. On the semantics of conflict resolution in truth maintenance systems

    NARCIS (Netherlands)

    Jonker, C.M.

    A Truth Maintenance System (TMS) maintains a consistent state of belief given a set J of justifications, i.e. arguments for belief. To resolve contradictions, dependency-directed backtracking is performed. In this paper we introduce a method that can be used to track all

  12. Grounded theory.

    Science.gov (United States)

    Harris, Tina

    2015-04-29

    Grounded theory is a popular research approach in health care and the social sciences. This article provides a description of grounded theory methodology and its key components, using examples from published studies to demonstrate practical application. It aims to demystify grounded theory for novice nurse researchers, by explaining what it is, when to use it, why they would want to use it and how to use it. It should enable nurse researchers to decide if grounded theory is an appropriate approach for their research, and to determine the quality of any grounded theory research they read.

  13. REALISM WITHOUT TRUTH: A REVIEW OF GIERE'S SCIENCE WITHOUT LAWS AND SCIENTIFIC PERSPECTIVISM

    Science.gov (United States)

    Hackenberg, Timothy D

    2009-01-01

    An increasingly popular view among philosophers of science is that of science as action—as the collective activity of scientists working in socially-coordinated communities. Scientists are seen not as dispassionate pursuers of Truth, but as active participants in a social enterprise, and science is viewed on a continuum with other human activities. When taken to an extreme, the science-as-social-process view can be taken to imply that science is no different from any other human activity, and therefore can make no privileged claims about its knowledge of the world. Such extreme views are normally contrasted with equally extreme views of classical science, as uncovering Universal Truth. In Science Without Laws and Scientific Perspectivism, Giere outlines an approach to understanding science that finds a middle ground between these extremes. He acknowledges that science occurs in a social and historical context, and that scientific models are constructions designed and created to serve human ends. At the same time, however, scientific models correspond to parts of the world in ways that can legitimately be termed objective. Giere's position, perspectival realism, shares important common ground with Skinner's writings on science, some of which are explored in this review. Perhaps most fundamentally, Giere shares with Skinner the view that science itself is amenable to scientific inquiry: scientific principles can and should be brought to bear on the process of science. The two approaches offer different but complementary perspectives on the nature of science, both of which are needed in a comprehensive understanding of science. PMID:19949495

  14. The Metaphysical Assumptions of the Conception of Truth in Martin Smiglecki’s Logic

    Directory of Open Access Journals (Sweden)

    Tomasz Pawlikowski

    2015-06-01

    Full Text Available The central element of the concept of truth in Smiglecki's Logica (1618) is his approach to formulating definitions. Where the establishing of the truth is concerned, he always points to compliance (conformitas) in respect of whether the intellectual recognition of a thing or things is in accordance with its intellectual equivalent, or the principles behind the latter, where these are understood as designating the corresponding idea inherent in the intellect of God. This is a form of the classical definition of truth, similar to that used by St. Thomas Aquinas, with a wide scope of applicability: to the field of existence (transcendental truth), to cognition and language (logical truth), and even to moral beliefs (moral rightness). Smiglecki distinguishes three types of truth: truth assigned to being, truth assigned to cognition, and truth assigned to moral convictions. Of these, the first is identified with transcendental truth, while the second is attributed not only to propositions and sentences, but also to concepts. The truth of concepts results from compliance with things by way of representation, while the truth of propositions and sentences issues from a compliance with things involving the implementation of some form of expression or other. Logical truth pertains to propositions rather than concepts. The kind of moral truth he writes about is what we would now be more likely to call "truthfulness". With the exception of moral truth, which he defined as compliance of a statement with someone's internal thoughts, Smiglecki considers every kind of truth to be a conditioned state of the object of knowledge. He says (a) that the ultimate object of reference of human cognitive functioning is a real being, absolutely true by virtue of compliance with its internal principles and their idea as present in the intellect of God, and (b) that the compatibility of human cognition with a real being is the ultimate

  15. Effects of the truth FinishIt brand on tobacco outcomes

    OpenAIRE

    Evans, W. Douglas; Rath, Jessica M.; Hair, Elizabeth C.; Snider, Jeremy Williams; Pitzer, Lindsay; Greenberg, Marisa; Xiao, Haijun; Cantrell, Jennifer; Vallone, Donna

    2017-01-01

    Since 2000, the truth campaign has grown as a social marketing brand. Back then, truth employed branding to compete directly with the tobacco industry. In 2014, the launch of truth FinishIt reflected changes in the brand's strategy, the tobacco control environment, and youth/young adult behavior. Building on a previous validation study, the current study examined brand equity in truth FinishIt, as measured by validated multi-dimensional scales, and tobacco-related attitudes, beliefs, and behav...

  16. Neural Bases of Automaticity

    Science.gov (United States)

    Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F.; Logan, Gordon D.

    2018-01-01

    Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that across practice processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this…

  17. Automatic control systems engineering

    International Nuclear Information System (INIS)

    Shin, Yun Gi

    2004-01-01

    This book describes automatic control for electrical and electronic systems, covering the history of automatic control, the Laplace transform, block diagrams and signal flow diagrams, electrometers, linearization of systems, state space, state-space analysis of electric systems, sensors, hydraulic control systems, stability, the time response of linear dynamic systems, the concept of root locus, the procedure for drawing a root locus, frequency response, and the design of control systems.

  18. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e. automatically controlling the virtual...

  19. Automatic differentiation of functions

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1990-06-01

    Automatic differentiation is a method of computing derivatives of functions to any order in any number of variables. The functions must be expressible as combinations of elementary functions. When evaluated at specific numerical points, the derivatives have no truncation error and are automatically found. The method is illustrated by simple examples. Source code in FORTRAN is provided

  20. Initiating GrabCut by Color Difference for Automatic Foreground Extraction of Passport Imagery

    DEFF Research Database (Denmark)

    Sangüesa, Adriá Arbués; Jørgensen, Nicolai Krogh; Larsen, Christian Aagaard

    2016-01-01

    photo. Having gathered our own dataset and generated ground truth images, promising results are obtained in terms of F1-scores, with a maximum mean of 0.975 among all the images, improving the performance of GrabCut in all cases. Some future work directions are given for those unsolved issues that were...

  1. Accuracy evaluation of automatic quantification of the articular cartilage surface curvature from MRI

    DEFF Research Database (Denmark)

    Folkesson, Jenny; Dam, Erik B; Olsen, Ole F

    2007-01-01

    for intersubject comparisons. Digital phantoms were created to establish the accuracy of the curvature estimation methods. RESULTS: A comparison of the two curvature estimation methods to ground truth yielded absolute pairwise differences of 1.1%, and 4.8%, respectively. The interscan reproducibility for the two...

  2. The socio-rhetorical force of 'truth talk' and lies: The case of 1 John ...

    African Journals Online (AJOL)

    This article canvassed Greek and Roman sources for discussions concerning truth talk and lies. It has investigated what social historians and/or anthropologists are saying about truth talking and lying and has developed a model that will examine the issue of truth and lying in socio-religious terms as defined by the ...

  3. Does Truth Exist? Insights from Applied Linguistics for the Rationalism/Postmodern Debate

    Science.gov (United States)

    Ross, David A.

    2008-01-01

    The question of whether or not truth exists is at the center of the rationalism versus postmodern debate. Noting the difficulty of defining truth, the author uses the principles of linguistics to show that semantic skewing has resulted in the concept of truth being encoded as a noun, while it is really an attribute (true). The introduction of a…

  4. This Is My (Post) Truth, Tell Me Yours Comment on "The Rise of Post-truth Populism in Pluralist Liberal Democracies: Challenges for Health Policy".

    Science.gov (United States)

    Powell, Martin

    2017-05-15

    This is a commentary on the article 'The rise of post-truth populism in pluralist liberal democracies: challenges for health policy.' It critically examines two of its key concepts: populism and 'post truth.' This commentary argues that there are different types of populism, with unclear links to impacts, and that in some ways, 'post-truth' has resonances with arguments advanced in the period at the beginning of the British National Health Service (NHS). In short, 'post-truth' populism' may be 'déjà vu all over again,' and there are multiple (post) truths: this is my (post) truth, tell me yours. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  5. Automatic target detection using binary template matching

    Science.gov (United States)

    Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook

    2005-03-01

    This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to the various lighting conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
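    A minimal sketch of the binarize-then-match idea described above, with a mean-based adaptive threshold and an exhaustive sliding-window search (all names illustrative; the paper's actual thresholding and matching details are not reproduced here):

```python
def binarize(img, bias=0.0):
    """Adaptive binarization: threshold at the image mean plus a bias."""
    flat = [v for row in img for v in row]
    t = sum(flat) / len(flat) + bias
    return [[1 if v > t else 0 for v in row] for row in img]

def match_score(window, template):
    """Fraction of binary pixels on which window and template agree."""
    n = len(template) * len(template[0])
    hits = sum(w == t for wr, tr in zip(window, template)
               for w, t in zip(wr, tr))
    return hits / n

def best_match(img_bin, template):
    """Slide the template over the binary image; return (row, col, score)."""
    H, W = len(img_bin), len(img_bin[0])
    h, w = len(template), len(template[0])
    best = (0, 0, -1.0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            win = [row[c:c + w] for row in img_bin[r:r + h]]
            s = match_score(win, template)
            if s > best[2]:
                best = (r, c, s)
    return best
```

    Matching on binary rather than grayscale pixels is what makes the search cheap: each window comparison is a count of agreeing bits.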

  6. A SEMI-AUTOMATIC RULE SET BUILDING METHOD FOR URBAN LAND COVER CLASSIFICATION BASED ON MACHINE LEARNING AND HUMAN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    H. Y. Gu

    2017-09-01

    Full Text Available A classification rule set, which comprises features and decision rules, is important for land cover classification. The selection of features and decision rules is usually based on an iterative trial-and-error approach in GEOBIA; however, this is time-consuming and has poor versatility. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets efficiently, overcoming the iterative trial-and-error approach. Human knowledge is used to address the insufficient use of prior knowledge in existing machine learning methods and to improve the versatility of the rule sets. A two-step workflow is introduced: firstly, an initial rule is built based on Random Forest and the CART decision tree; secondly, the initial rule is analyzed and validated based on human knowledge, where a statistical confidence interval is used to determine its thresholds. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method could determine a rule set for land cover classification semi-automatically, and there are static features for different land cover classes.
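    The machine-learning half of the workflow amounts to learning decision-rule thresholds from labeled samples. The core of a CART-style split search on a single feature can be sketched as follows (binary labels, Gini impurity; illustrative only, not the paper's implementation):

```python
def best_split(values, labels):
    """Threshold on one feature minimizing weighted Gini impurity.

    values: feature values per sample; labels: 0/1 class per sample.
    Returns the threshold t for the rule `value <= t`.
    """
    def gini(ys):
        # Impurity of a binary subset: 2 p (1 - p).
        if not ys:
            return 0.0
        p = sum(ys) / len(ys)
        return 2.0 * p * (1.0 - p)

    best_t, best_g = None, float("inf")
    for t in sorted(set(values)):
        left = [y for v, y in zip(values, labels) if v <= t]
        right = [y for v, y in zip(values, labels) if v > t]
        g = (len(left) * gini(left) + len(right) * gini(right)) / len(values)
        if g < best_g:
            best_t, best_g = t, g
    return best_t
```

    The learned thresholds then become candidate decision rules, which the paper's second step validates against human knowledge via a statistical confidence interval.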

  7. Automatically Identifying Fusion Events between GLUT4 Storage Vesicles and the Plasma Membrane in TIRF Microscopy Image Sequences

    Directory of Open Access Journals (Sweden)

    Jian Wu

    2015-01-01

    Full Text Available Quantitative analysis of the dynamic behavior of membrane-bound secretory vesicles has proven to be important in biological research. This paper proposes a novel approach to automatically identify the elusive fusion events between VAMP2-pHluorin labeled GLUT4 storage vesicles (GSVs) and the plasma membrane. Frame differencing is implemented to detect the initiation of fusion events by modified forward subtraction of consecutive frames in the TIRFM image sequence. Spatially connected pixels in difference images brighter than a specified adaptive threshold are grouped into a distinct fusion spot. The vesicles are located at the intensity-weighted centroid of their fusion spots. To reveal the true in vivo nature of a fusion event, 2D Gaussian fitting of the fusion spot is used to derive the intensity-weighted centroid and the spot size during the fusion process. The fusion event and its termination can be determined according to the change of spot size. The method is evaluated on real experimental data with ground truth annotated by expert cell biologists. The evaluation results show that it can achieve relatively high accuracy, comparing favorably to manual analysis, yet at a small fraction of the time.
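    The forward-subtraction and intensity-weighted-centroid steps described above reduce to a few lines. A sketch on plain nested lists (function names and the threshold choice are illustrative; the paper additionally groups connected pixels and fits a 2D Gaussian):

```python
def frame_difference(curr, prev):
    """Forward subtraction of consecutive frames, keeping brightening only."""
    return [[max(c - p, 0) for c, p in zip(cr, pr)]
            for cr, pr in zip(curr, prev)]

def weighted_centroid(diff, thresh):
    """Intensity-weighted centroid of pixels brighter than the threshold.

    Returns (row, col) or None when no pixel exceeds the threshold
    (i.e. no candidate fusion event in this frame pair).
    """
    pts = [(r, c, v) for r, row in enumerate(diff)
           for c, v in enumerate(row) if v > thresh]
    total = sum(v for _, _, v in pts)
    if total == 0:
        return None
    return (sum(r * v for r, _, v in pts) / total,
            sum(c * v for _, c, v in pts) / total)
```

    The centroid localizes the vesicle at sub-pixel precision; tracking the spot size over subsequent frames is what signals the fusion event's termination.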

  8. Thoracic lymph node station recognition on CT images based on automatic anatomy recognition with an optimal parent strategy

    Science.gov (United States)

    Xu, Guoping; Udupa, Jayaram K.; Tong, Yubing; Cao, Hanqiang; Odhner, Dewey; Torigian, Drew A.; Wu, Xingyu

    2018-03-01

    Many papers have been published on the detection and segmentation of lymph nodes from medical images. However, this remains a challenging problem owing to low contrast with surrounding soft tissues and to the variations of lymph node size and shape on computed tomography (CT) images. It is particularly difficult on the low-dose CT of PET/CT acquisitions. In this study, we utilize our previous automatic anatomy recognition (AAR) framework to recognize the thoracic lymph node stations defined by the International Association for the Study of Lung Cancer (IASLC) lymph node map. The lymph node stations themselves are viewed as anatomic objects and are localized by using a one-shot method in the AAR framework. Two strategies are taken in this paper for integration into the AAR framework. The first is to combine some lymph node stations into composite lymph node stations according to their geometrical nearness. The other is to find the optimal parent (organ or union of organs) as an anchor for each lymph node station, based on the recognition error, and thereby to find an overall optimal hierarchy in which to arrange anchor organs and lymph node stations. Based on 28 contrast-enhanced thoracic CT image data sets for model building and 12 independent data sets for testing, our results show that thoracic lymph node stations can be localized to within 2-3 voxels of the ground truth.

  9. Ground Truth Data Used to Map the Benthic Habitats of Puerto Rico

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This project is a cooperative effort among the National Ocean Service, National Centers for Coastal Ocean Science, Center for Coastal Monitoring and Assessment; U.S....

  10. A study on ground truth data for impact damaged polymer matrix composites

    Science.gov (United States)

    Wallentine, Sarah M.; Uchic, Michael D.

    2018-04-01

    This study presents initial results toward correlative characterization of barely-visible impact damage (BVID) in unidirectional carbon fiber reinforced polymer matrix composite laminate plates using nondestructive ultrasonic testing (UT) and destructive serial sectioning microscopy. To produce damage consistent with BVID, plates were impacted using an instrumented drop-weight tower with pneumatic anti-rebound brake. High-resolution, normal-incidence, single-sided, pulse-echo, immersion UT scans were performed to verify and map internal damage after impact testing. UT C-scans were registered to optical images of the specimen via landmark registration and the use of an affine transformation, allowing location of internal damage in reference to the overall plate and enabling specimen preparation for subsequent serial sectioning. The impact-damaged region was extracted from each plate, prepared and mounted for materialographic sectioning. A modified RoboMet.3D version 2 was employed for serial sectioning and optical microscopy characterization of the impact damaged regions. Automated montage capture of sub-micron resolution, bright-field reflection, 12-bit monochrome optical images was performed over the entire specimen cross-section. These optical images were post-processed to produce 3D data sets, including segmentation to improve visualization of damage features. Impact-induced delaminations were analyzed and characterized using both serial sectioning and ultrasonic methods. Those results and conclusions are presented, as well as future direction of the current study.

  11. Dense reconstruction of brain-wide neuronal population close to the ground truth

    OpenAIRE

    Li, Yun; Zhou, Hang; Li, Shiwei; Li, Jing; Su, Lei; Li, Anan; Feng, Xiong; Li, Ning; Han, Jiacheng; Kang, Hongtao; Chen, Yijun; Fang, Wenqian; Liu, Yidong; Lin, Huimin; Jin, Sen

    2017-01-01

    Neurons are the basic structural and functional units of the brain; their projections and connections with other neurons provide a basic physical infrastructure for neural signal storage, allocation, processing, and integration. Recent technical progress allows for labeling and imaging specific neuronal populations at the single-axon level across a whole mouse brain. However, digital reconstruction of these individual neurons needs months of human labor, or sometimes is even an impossible task. Here w...

  12. Formation of Ground Truth Databases and Related Studies and Regional Seismic Monitoring Research

    Science.gov (United States)

    2006-06-01

    experiments (1997-1999) in the former Semipalatinsk test site, Proceedings of the 22nd Annual DoD/DOE Seismic Research Symposium, Vol. I, U.S. Department of Defense/Energy, 55-66. Kim, Won-Young (1998), Waveform Data Information Product: Calibration Explosions at Semipalatinsk Test Site, Kazakstan... from the aftershocks of a 100 ton chemical explosion at the Degelen, Kazakh Test Site on 22 August 1998 (Omega-1). Epicentral locations, based on P

  13. Precipitation measurements for earth-space communications: Accuracy requirements and ground-truth techniques

    Science.gov (United States)

    Ippolito, L. J.; Kaul, R.

    1981-01-01

    Rainfall is regarded as one of the more important meteorological observations, yet measurements of this most variable parameter must be made continuously, across large areas, and over the sea. Ships could not provide the needed resolution, nor could available radars provide the needed breadth of coverage. Microwave observations from the Nimbus-5 satellite offered some hope. Another possibility was suggested by the results of many comparisons between rainfall and the clouds seen in satellite pictures. Sequences of pictures from the first geostationary satellites were employed, and a general correspondence between rain and the convective clouds visible in satellite pictures was found. It was demonstrated that the agreement was best for growing clouds. The development of methods to infer GATE rainfall from geostationary satellite images is examined.

  14. Ground-truthing predicted indoor radon concentrations by using soil-gas radon measurements

    International Nuclear Information System (INIS)

    Reimer, G.M.

    2001-01-01

    Predicting indoor radon potential has gained in importance even as national radon programs began to wane. A cooperative study to produce radon potential maps was conducted by the Environmental Protection Agency (EPA), U.S. Geological Survey (USGS), Department of Energy (DOE), and Lawrence Berkeley Laboratory (LBL), with the latter taking the lead role. A county-wide predictive model was developed, based dominantly on the National Uranium Resource Evaluation (NURE) aerorad data and secondarily on geology, both small-scale data bases. However, that model breaks down in counties of complex geology and does not provide a means to evaluate the potential of an individual home or building site. Soil-gas radon measurements on a large scale are shown to provide information for estimating radon potential at individual sites and to sort out the complex geology so that the small-scale prediction index can be validated. An example from Frederick County, Maryland indicates a positive correlation between indoor measurements and soil-gas data. The method does not rely on a single measurement, but on a series that incorporates seasonal and meteorological considerations. (author)

  15. Ground truth evaluation of computer vision based 3D reconstruction of synthesized and real plant images

    DEFF Research Database (Denmark)

    Nielsen, Michael; Andersen, Hans Jørgen; Slaughter, David

    2007-01-01

    and finds the optimal hardware and light source setup before investing in expensive equipment and field experiments. It was expected to be a valuable tool to structure the otherwise incomprehensibly large information space and to see relationships between parameter configurations and crop features. Images...... of real plants with similar structural categories were annotated manually for comparison in order to validate the performance results on the synthesised images. The results showed substantial correlation between synthesized and real plants, but only when all error sources were accounted...... for in the simulation. However, there were exceptions where there were structural differences between the virtual plant and the real plant that were unaccounted for by its category. The test framework was evaluated to be a valuable tool to uncover information from complex data structures....

  16. Ground truth of (sub-)micrometre cometary dust - Results of MIDAS onboard Rosetta

    Science.gov (United States)

    Mannel, Thurid; Bentley, Mark; Schmied, Roland; Torkar, Klaus; Jeszenszky, Harald; Romsted, Jens; Levasseur-Regourd, A.; Weber, Iris; Jessberger, Elmar K.; Ehrenfreund, Pascale; Köberl, Christian; Havnes, Ove

    2016-10-01

    The investigation of comet 67P by Rosetta has allowed the comprehensive characterisation of pristine cometary dust particles ejected from the nucleus. Flying alongside the comet at distances as small as a few kilometres, and with a relative velocity of only centimetres per second, the Rosetta payload sampled almost unaltered dust. A key instrument to study this dust was MIDAS (the Micro-Imaging Dust Analysis System), a dedicated atomic force microscope that scanned the surfaces of hundreds of (sub-)micrometre sized particles in 3D with resolutions down to nanometres. This offers the unique opportunity to explore the morphology of the smallest cometary dust and expand our current knowledge about cometary material. Here we give an overview of dust collected and analysed by MIDAS and highlight its most important features. These include the ubiquitous agglomerate nature of the dust, which is found at all size scales, from the largest (>10 µm) through to the smallest. The dust particles analysed by MIDAS resemble primitive interplanetary dust, which is a strong argument for a common cometary origin.

  17. A new benchmark for pose estimation with ground truth from virtual reality

    DEFF Research Database (Denmark)

    Schlette, Christian; Buch, Anders Glent; Aksoy, Eren Erdal

    2014-01-01

    The development of programming paradigms for industrial assembly currently gets fresh impetus from approaches in human demonstration and programming-by-demonstration. Major low- and mid-level prerequisites for machine vision and learning in these intelligent robotic applications are pose estimation......, stereo reconstruction and action recognition. As a basis for the machine vision and learning involved, pose estimation is used for deriving object positions and orientations and thus target frames for robot execution. Our contribution introduces and applies a novel benchmark for typical multi...

  18. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth.

    Science.gov (United States)

    Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C

    2015-04-13

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance on the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  19. Towards a ground truth of AADT on using video data and tracking software?

    DEFF Research Database (Denmark)

    Øhlenschlæger, Rasmus; Lahrmann, Harry Spaabæk; B. Moeslund, Thomas

    precision on the direction parallel to the camera direction (8% and 3% deviations, respectively); it was less precise regarding transversal-driving vehicles (23% deviation). The off-the-shelf hardware had a significantly higher deviation regarding the two parallel directions (35% and 67% deviations, respectively) and a reasonable deviation regarding transversal-driving vehicles (11% deviation). It indicates that off-the-shelf hardware might need further calibration in general.

  20. Field experiment provides ground truth for surface nuclear magnetic resonance measurement

    Science.gov (United States)

    Knight, R.; Grunewald, E.; Irons, T.; Dlubac, K.; Song, Y.; Bachman, H.N.; Grau, B.; Walsh, D.; Abraham, J.D.; Cannia, J.

    2012-01-01

    The need for sustainable management of fresh water resources is one of the great challenges of the 21st century. Since most of the planet's liquid fresh water exists as groundwater, it is essential to develop non-invasive geophysical techniques to characterize groundwater aquifers. A field experiment was conducted in the High Plains Aquifer, central United States, to explore the mechanisms governing the non-invasive Surface NMR (SNMR) technology. We acquired both SNMR data and logging NMR data at a field site, along with lithology information from drill cuttings. This allowed us to directly compare the NMR relaxation parameter measured during logging, T2, to the relaxation parameter T2* measured using the SNMR method. The latter can be affected by inhomogeneity in the magnetic field, thus obscuring the link between the NMR relaxation parameter and the hydraulic conductivity of the geologic material. When the logging T2 data were transformed to pseudo-T2* data, by accounting for inhomogeneity in the magnetic field and instrument dead time, we found good agreement with T2* obtained from the SNMR measurement. These results, combined with the additional information about lithology at the site, allowed us to delineate the physical mechanisms governing the SNMR measurement. Such understanding is a critical step in developing SNMR as a reliable geophysical method for the assessment of groundwater resources.
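
    The T2-to-pseudo-T2* transform above can be sketched with the standard relaxation relation 1/T2* = 1/T2 + 1/T2_inh, where T2_inh lumps together dephasing caused by magnetic-field inhomogeneity. The T2_inh and dead-time values below are illustrative assumptions, not numbers from the study, and the study's actual dead-time correction is more involved than simply dropping fast-decaying components.

```python
def pseudo_t2_star(t2, t2_inh=0.1, dead_time=0.004):
    """Map a logging T2 (seconds) to an apparent T2* (seconds).

    Uses 1/T2* = 1/T2 + 1/T2_inh. Components with T2* shorter than the
    instrument dead time decay before recording starts, so they are
    treated as unobservable (None).
    """
    t2_star = 1.0 / (1.0 / t2 + 1.0 / t2_inh)
    return t2_star if t2_star > dead_time else None

# e.g. a logging T2 of 0.3 s with T2_inh = 0.1 s gives
# T2* = 1 / (1/0.3 + 1/0.1) = 0.075 s: inhomogeneity dominates the decay.
```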

  1. Assessing the Impact of Information Channels on the Understanding of Ground Truth

    Science.gov (United States)

    2012-06-01

    with equipment performance requirements. This change within the Army acquisition life cycle brought together seven fundamental domains of human and... [Recoverable fragments of a CPOF/radio message-log table:] Large groups continuing to assemble. 14:40, CPOF, Bravo Troop: Blackjack 6 made contact with Jabal Mayor; he was eager to speak with the CF commander... possibly at Miami airfield. Radio, Blackjack 6: Bengal X-ray this is Blackjack 6, made contact with Jabal Mayor, he was eager to speak with the

  2. The Krafla International Testbed (KMT): Ground Truth for the New Magma Geophysics

    Science.gov (United States)

    Brown, L. D.; Kim, D.; Malin, P. E.; Eichelberger, J. C.

    2017-12-01

    Recent developments in geophysics such as large-N seismic arrays, 4D (time-lapse) subsurface imaging, and joint inversion algorithms represent fresh approaches to delineating and monitoring magma in the subsurface. Drilling at Krafla, both past and proposed, offers unique opportunities to quantitatively corroborate and calibrate these new technologies. For example, dense seismic arrays are capable of passive imaging of magma systems with resolutions comparable to that achieved by more expensive (and often logistically impractical) controlled-source surveys such as those used in oil exploration. Fine details of the geometry of magma lenses, feeders, and associated fluid-bearing fracture systems on the scale of meters to tens of meters are now realistic targets for surface seismic surveys using ambient energy sources, as are detection of their temporal variations. Joint inversions, for example of seismic and MT measurements, offer the promise of tighter quantitative constraints on the physical properties of the various components of magma and related geothermal systems imaged by geophysics. However, the accuracy of such techniques will remain captive to academic debate without testing against real-world targets that have been directly sampled. Thus application of these new techniques both to guide future drilling at Krafla and to be calibrated against the resulting borehole observations of magma is an important step forward in validating geophysics for magma studies in general.

  3. Bibliographic Entity Automatic Recognition and Disambiguation

    CERN Document Server

    AUTHOR|(SzGeCERN)766022

    This master thesis reports on an applied machine learning research internship done at the digital library of the European Organization for Nuclear Research (CERN). The way an author’s name may vary in its representation across scientific publications creates ambiguity when it comes to uniquely identifying an author; in the database of any scientific digital library, the same full name variation can be used by more than one author. This may occur even between authors from the same research affiliation. In this work, we built a machine learning based author name disambiguation solution. The approach consists of learning a distance function from ground-truth data, blocking publications of broadly similar author names, and clustering the publications using a semi-supervised strategy within each of the blocks. The main contributions of this work are twofold: first, improving the distance model by taking into account the (estimated) ethnicity of the author’s full name. Indeed, names from different ethnicities, for e...
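
    The block-then-cluster pipeline described above can be sketched as follows. The blocking key (surname plus first initial) is a common choice but an assumption here, the greedy single-link clustering stands in for the thesis's semi-supervised strategy, and the naive affiliation-overlap distance stands in for the learned distance model; all records are invented examples.

```python
from itertools import groupby

def block_key(record):
    """Block publications by surname + first initial of the author name."""
    surname, _, given = record["name"].partition(", ")
    return (surname.lower(), given[:1].lower())

def cluster_block(records, dist, threshold=0.5):
    """Greedy single-link clustering within one block."""
    clusters = []
    for rec in records:
        for cluster in clusters:
            if any(dist(rec, other) < threshold for other in cluster):
                cluster.append(rec)
                break
        else:
            clusters.append([rec])
    return clusters

def affiliation_distance(a, b):
    # Hypothetical stand-in for the learned distance function.
    return 0.0 if a["affiliation"] == b["affiliation"] else 1.0

records = [
    {"name": "Wang, Wei", "affiliation": "CERN"},
    {"name": "Wang, Wei", "affiliation": "MIT"},
    {"name": "Wang, W.",  "affiliation": "CERN"},
]
records.sort(key=block_key)
for key, group in groupby(records, key=block_key):
    clusters = cluster_block(list(group), affiliation_distance)
```

    All three records fall into one block ("wang", "w"); the distance function then splits them into two author clusters, one per affiliation.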

  4. THE JOURNEY OF TRUTH: FROM PLATO TO ZOLA

    Directory of Open Access Journals (Sweden)

    Ribut Basuki

    1999-01-01

    Full Text Available Western theater theory and criticism is generally considered to be set forth by the Greeks. Plato was "the first theater critic" with his negative comments about theater owing to his idealistic views about "the truth." Then came Aristotle who used a different viewpoint from that of Plato, saying that there is "truth" in theater. However, hostile criticism on theater came back in the Middle Ages, championed by Tertulian before Aristotelian theory was revived by the neo-classicists such as Scaliger and Castelvetro. Theater theory and criticism discourse was then made more alive by the romanticists who disagreed with the neo-classicists' rigid rules on theater. As the influence of science became dominant in the theater world, naturalism and realism emerged and became the mainstream of theater theory and criticism until well into the twentieth century.

  5. About the notion of truth in quantum mechanics

    International Nuclear Information System (INIS)

    Omnes, R.

    1991-01-01

    The meaning of truth in quantum mechanics is considered in order to respond to some objections raised by B. d'Espagnat against a logical interpretation of quantum mechanics recently proposed by the author. A complete answer is given. It is shown that not only can factual data be said to be true, but also some of their logical consequences, so that the definition of truth given by Heisenberg is both extended and refined. Some nontrue but reliable propositions may also be used, but they are somewhat arbitrary because of the complementarity principle. For instance, the propositions expressing wave packet reduction can be either true or reliable, according to the case under study. Separability is also discussed: as far as the true properties of an individual system are concerned, quantum mechanics is separable

  6. Truth Seeded Reconstruction for Fast Simulation in the ATLAS Experiment

    CERN Document Server

    Jansky, Roland; Salzburger, Andreas

    The huge success of the ATLAS experiment for particle physics during Run 1 of the LHC would not have been possible without the production of vast amounts of simulated Monte Carlo data. However, the very detailed detector simulation is a highly CPU-intensive task, and thus resource shortages occurred. Motivated by this, great effort has been put into speeding up the simulation. As a result, other time-consuming parts became visible, one of which is the track reconstruction. This thesis describes one potential solution to the CPU-intensive reconstruction of simulated data: a newly designed truth-seeded reconstruction. At its basis is the idea to skip pattern recognition altogether, instead utilizing the available (truth) information from the simulation to directly fit particle trajectories without searching for them. At the same time, tracking effects of the standard reconstruction need to be emulated. This approach is validated thoroughly, and no critical deviations of the results compared to the standard reconst...

  7. Leadership for reconciliation: A Truth and Reconciliation Commission perspective

    Directory of Open Access Journals (Sweden)

    P. G. J. Meiring

    2002-08-01

    Full Text Available As important as the need for authentic leadership in the fields of politics, economy and education in Africa may be, the continent is also in dire need of leadership for reconciliation. Against the backdrop of the South African Truth and Reconciliation Commission (TRC), the author, who served on the Commission, discusses five characteristics of leaders for reconciliation. Leaders need to be: leaders with a clear understanding of the issues at stake; leaders with respect for the truth; leaders with a sense of justice; leaders with a comprehension of the dynamics of forgiveness; and leaders with a firm commitment. The insights and experiences of both the chairperson of the TRC, Desmond Tutu, and the deputy chair, Alex Boraine, form the backbone of the article.

  8. A comparative study of automatic image segmentation algorithms for target tracking in MR‐IGRT

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J.; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa

    2016-01-01

    On‐board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real‐time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image‐guided radiotherapy (MR‐IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k‐means (FKM), k‐harmonic means (KHM), and reaction‐diffusion level set evolution (RD‐LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR‐TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR‐TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD‐LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR‐TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high‐contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR‐TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and

  9. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different
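
    The two evaluation metrics used in this study can be computed directly from binary ROI masks. The sketch below is generic (function names and mask shapes are ours): the Dice coefficient measures overlap, and the TRE is taken, as described above, as the Euclidean distance between the centroids of the manual and automatic ROIs.

```python
import numpy as np

def dice(auto_mask, manual_mask):
    """Dice coefficient between an automatic and a manual ROI mask."""
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * intersection / (auto_mask.sum() + manual_mask.sum())

def tre(auto_mask, manual_mask):
    """Target registration error: distance between ROI centroids (pixels)."""
    c_auto = np.array(np.nonzero(auto_mask)).mean(axis=1)
    c_manual = np.array(np.nonzero(manual_mask)).mean(axis=1)
    return float(np.linalg.norm(c_auto - c_manual))
```

    For example, a 4x4 automatic ROI shifted one row off a 4x4 manual ROI gives Dice = 0.75 and TRE = 1.0 pixel.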

  10. Automatic segmentation of meningioma from non-contrasted brain MRI integrating fuzzy clustering and region growing

    Directory of Open Access Journals (Sweden)

    Liao Chun-Chih

    2011-08-01

    Full Text Available Abstract Background In recent years, magnetic resonance imaging (MRI) has become important in brain tumor diagnosis. Using this modality, physicians can locate specific pathologies by analyzing differences in tissue character presented in different types of MR images. This paper uses an algorithm integrating fuzzy c-means (FCM) and region growing techniques for automated tumor image segmentation from patients with meningioma. Only non-contrasted T1- and T2-weighted MR images are included in the analysis. The study's aims are to correctly locate tumors in the images, and to detect those situated in the midline position of the brain. Methods The study used non-contrasted T1- and T2-weighted MR images from 29 patients with meningioma. After FCM clustering, 32 groups of images from each patient group were put through the region-growing procedure for pixel aggregation. Later, using knowledge-based information, the system selected tumor-containing images from these groups and merged them into one tumor image. An alternative semi-supervised method was added at this stage for comparison with the automatic method. Finally, the tumor image was optimized by a morphology operator. Results from automatic segmentation were compared to the "ground truth" (GT) on a pixel level. Overall data were then evaluated using a quantified system. Results The quantified parameters, including the "percent match" (PM) and "correlation ratio" (CR), suggested a high match between GT and the present study's system, as well as a fair level of correspondence. The results were compatible with those from other related studies. The system successfully detected all of the tumors situated at the midline of brain. Six cases failed in the automatic group. One also failed in the semi-supervised alternative. The remaining five cases presented noticeable edema inside the brain. In the 23 successful cases, the PM and CR values in the two groups were highly related.
Conclusions Results indicated
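
    The pixel-level comparison against ground truth described above can be sketched with two simple metrics. The exact formulas for "percent match" (PM) and "correlation ratio" (CR) vary between papers; the definitions below (PM as the fraction of GT tumor pixels recovered, CR as the Pearson correlation of the binary maps) are plausible stand-ins, not taken from this study.

```python
import numpy as np

def percent_match(seg, gt):
    """Fraction of ground-truth tumor pixels also labeled by the system."""
    return np.logical_and(seg, gt).sum() / gt.sum()

def correlation_ratio(seg, gt):
    """Pearson correlation between the two binary pixel maps."""
    return float(np.corrcoef(seg.ravel().astype(float),
                             gt.ravel().astype(float))[0, 1])
```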

  11. Incentives for Truthful Information Elicitation of Continuous Signals

    OpenAIRE

    Radanovic, Goran; Faltings, Boi

    2014-01-01

    We consider settings where a collective intelligence is formed by aggregating information contributed from many independent agents, such as product reviews, community sensing, or opinion polls. We propose a novel mechanism that elicits both private signals and beliefs. The mechanism extends the previous versions of the Bayesian Truth Serum (the original BTS, the RBTS, and the multi-valued BTS), by allowing small populations and non-binary private signals, while not requiring additional assump...

  12. The Simple Truth about the Gender Pay Gap

    Science.gov (United States)

    American Association of University Women, 2014

    2014-01-01

    It's been said that men are paid more than women are paid over their lifetimes. But what does that mean? Are women paid less because they choose lower-paying jobs? Is it because more women work part time than men do? Or is it because women tend to be the primary caregivers for their children? AAUW's "The Simple Truth about the…

  13. Fiction and truth in the parallel lives of Plutarch

    Directory of Open Access Journals (Sweden)

    Analía V. Sapere

    2018-05-01

    Full Text Available In this paper we will investigate the occurrences of πλάσμα in Plutarch’s Parallel Lives. We will analyse the meanings and nuances of the word in different passages of the work (understood as ‘fiction, counterfeit, figment’, etc. in order to connect our conclusions with plutarchean theorizations about the problem of truth in historical narrative.

  14. Truth or meaning: Ricoeur versus Frei on biblical narrative

    OpenAIRE

    Gary L. Comstock

    1989-01-01

    Of the theologians and philosophers now writing on biblical narrative, Hans Frei and Paul Ricoeur are probably the most prominent. It is significant that their views converge on important issues. Both are uncomfortable with hermeneutic theories that convert the text into an abstract philosophical system, an ideal typological structure, or a mere occasion for existential decision. Frei and Ricoeur seem knit together in a common en...

  15. Fuzzy logic of quasi-truth an algebraic treatment

    CERN Document Server

    Di Nola, Antonio; Turunen, Esko

    2016-01-01

    This book presents the first algebraic treatment of quasi-truth fuzzy logic and covers the algebraic foundations of many-valued logic. It offers a comprehensive account of basic techniques and reports on important results showing the pivotal role played by perfect many-valued algebras (MV-algebras). It is well known that the first-order predicate Łukasiewicz logic is not complete with respect to the canonical set of truth values. However, it is complete with respect to all linearly ordered MV-algebras. As there are no simple linearly ordered MV-algebras in this case, infinitesimal elements of an MV-algebra are allowed to be truth values. The book presents perfect algebras as an interesting subclass of local MV-algebras and provides readers with the necessary knowledge and tools for formalizing the fuzzy concept of quasi true and quasi false. All basic concepts are introduced in detail to promote a better understanding of the more complex ones. It is an advanced and inspiring reference-guide for graduate s...

  16. Thai Automatic Speech Recognition

    National Research Council Canada - National Science Library

    Suebvisai, Sinaporn; Charoenpornsawat, Paisarn; Black, Alan; Woszczyna, Monika; Schultz, Tanja

    2005-01-01

    .... We focus on the discussion of the rapid deployment of ASR for Thai under limited time and data resources, including rapid data collection issues, acoustic model bootstrap, and automatic generation of pronunciations...

  17. Automatic Payroll Deposit System.

    Science.gov (United States)

    Davidson, D. B.

    1979-01-01

    The Automatic Payroll Deposit System in Yakima, Washington's Public School District No. 7, directly transmits each employee's salary amount for each pay period to a bank or other financial institution. (Author/MLF)

  18. Automatic Test Systems Acquisition

    National Research Council Canada - National Science Library

    1994-01-01

    We are providing this final memorandum report for your information and use. This report discusses the efforts to achieve commonality in standards among the Military Departments as part of the DoD policy for automatic test systems (ATS...

  19. Brand and automaticity

    OpenAIRE

    Liu, J.

    2008-01-01

    A presumption of most consumer research is that consumers endeavor to maximize the utility of their choices and are in complete control of their purchasing and consumption behavior. However, everyday life experience suggests that many of our choices are not all that reasoned or conscious. Indeed, automaticity, one facet of behavior, is indispensable to complete the portrait of consumers. Despite its importance, little attention is paid to how the automatic side of behavior can be captured and...

  20. Position automatic determination technology

    International Nuclear Information System (INIS)

    1985-10-01

    This book covers methods of position determination and their characteristics; control methods for position determination and key points of design; sensor selection for position detectors; position determination in digital control systems; the application of clutches and brakes in high-frequency position determination; automation techniques for position determination; position determination by electromagnetic clutch and brake, air cylinder, cam and solenoid; stop-position control of automatic guided vehicles; and stacker cranes and automatic transfer control.

  1. Automatic intelligent cruise control

    OpenAIRE

    Stanton, NA; Young, MS

    2006-01-01

    This paper reports a study on the evaluation of automatic intelligent cruise control (AICC) from a psychological perspective. It was anticipated that AICC would have an effect upon the psychology of driving—namely, make drivers feel they have less control, reduce their level of trust in the vehicle, and make them less situationally aware, but might also reduce workload and make driving less stressful. Drivers were asked to drive in a driving simulator under manual and automatic inte...

  2. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    Science.gov (United States)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
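
    The classification step described above, once the 45 textural features have been extracted, reduces to a distance-weighted vote among the k nearest training vectors. A minimal sketch of a weighted KNN classifier of that kind (illustrative only: the function name, the inverse-distance weighting and the tie handling are assumptions, not details from the paper):

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=5):
    """Classify feature vector x by an inverse-distance-weighted vote
    among its k nearest neighbours in X_train."""
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    idx = np.argsort(d)[:k]                   # k nearest neighbours
    w = 1.0 / (d[idx] + 1e-9)                 # closer points vote more
    votes = {}
    for label, weight in zip(y_train[idx], w):
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)
```

    In the paper, the training vectors would be the 45-dimensional texture features and the labels the observer-assigned cloud types (e.g. CB, TCU).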

  3. An eigenvalue approach for the automatic scaling of unknowns in model-based reconstructions: Application to real-time phase-contrast flow MRI.

    Science.gov (United States)

    Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens

    2017-12-01

    The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
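
    The scaling described above is expressed by the maximal eigenvalue of a self-adjoint, positive-definite matrix and solved by power iterations. A hedged sketch of that eigenvalue step (not the authors' implementation; the tolerance and iteration count are arbitrary choices):

```python
import numpy as np

def max_eigenvalue(A, iters=200, tol=1e-12):
    """Estimate the maximal eigenvalue of a self-adjoint positive-definite
    matrix A by power iterations with a Rayleigh-quotient estimate."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = A @ v
        lam_new = v @ w                  # Rayleigh quotient v^T A v
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            break
        lam = lam_new
    return lam_new
```

    The estimated eigenvalue can then be used to normalize the partial derivatives and regularization terms of the different unknowns against one another.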

  4. Full automatic fiducial marker detection on coil arrays for accurate instrumentation placement during MRI guided breast interventions

    Science.gov (United States)

    Filippatos, Konstantinos; Boehler, Tobias; Geisler, Benjamin; Zachmann, Harald; Twellmann, Thorsten

    2010-02-01

    With its high sensitivity, dynamic contrast-enhanced MR imaging (DCE-MRI) of the breast is today one of the first-line tools for early detection and diagnosis of breast cancer, particularly in the dense breast of young women. However, many relevant findings are very small or occult on targeted ultrasound images or mammography, so that MRI guided biopsy is the only option for a precise histological work-up [1]. State-of-the-art software tools for computer-aided diagnosis of breast cancer in DCE-MRI data also offer means for image-based planning of biopsy interventions. One step in the MRI guided biopsy workflow is the alignment of the patient position with the preoperative MR images. In these images, the location and orientation of the coil localization unit can be inferred from a number of fiducial markers, which for this purpose have to be manually or semi-automatically detected by the user. In this study, we propose a method for precise, fully automatic localization of fiducial markers, on which basis a virtual localization unit can subsequently be placed in the image volume for the purpose of determining the parameters for needle navigation. The method is based on adaptive thresholding for separating breast tissue from background, followed by rigid registration of marker templates. In an evaluation of 25 clinical cases comprising 4 different commercial coil array models and 3 different MR imaging protocols, the method yielded a sensitivity of 0.96 at a false positive rate of 0.44 markers per case. The mean distance deviation between detected fiducial centers and ground truth provided by a radiologist was 0.94 mm.
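
    The first stage of the proposed method, separating breast tissue from background by thresholding, can be illustrated with a global Otsu threshold; the paper's adaptive variant and the subsequent rigid registration of marker templates are not reproduced here. A sketch under those assumptions:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Pick the threshold that maximizes between-class variance of the
    intensity histogram (global Otsu; stands in for the paper's
    adaptive thresholding step)."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = hist.sum()
    cum_w = np.cumsum(hist)               # class-0 pixel counts
    cum_m = np.cumsum(hist * centers)     # class-0 intensity mass
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = cum_w[i - 1], total - cum_w[i - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[i - 1] / w0
        m1 = (cum_m[-1] - cum_m[i - 1]) / w1
        var = (w0 / total) * (w1 / total) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t
```

    Pixels above the returned threshold would be treated as tissue/marker candidates, the rest as background.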

  5. Automatic classification of unexploded ordnance applied to Spencer Range live site for 5x5 TEMTADS sensor

    Science.gov (United States)

    Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon

    2013-06-01

    This paper details methods for automatic classification of Unexploded Ordnance (UXO) as applied to sensor data from the Spencer Range live site. The Spencer Range is a former military weapons range in Spencer, Tennessee. Electromagnetic Induction (EMI) sensing is carried out using the 5x5 Time-domain Electromagnetic Multi-sensor Towed Array Detection System (5x5 TEMTADS), which has 25 receivers and 25 co-located transmitters. Every transmitter is activated sequentially, each followed by measuring the magnetic field in all 25 receivers, from 100 microseconds to 25 milliseconds. From these data target extrinsic and intrinsic parameters are extracted using the Differential Evolution (DE) algorithm and the Ortho-Normalized Volume Magnetic Source (ONVMS) algorithms, respectively. Namely, the inversion provides x, y, and z locations and a time series of the total ONVMS principal eigenvalues, which are intrinsic properties of the objects. The eigenvalues are fit to a power-decay empirical model, the Pasion-Oldenburg model, providing 3 coefficients (k, b, and g) for each object. The objects are grouped geometrically into variably-sized clusters, in the k-b-g space, using clustering algorithms. Clusters matching a priori characteristics are identified as Targets of Interest (TOI), and larger clusters are automatically subclustered. Ground Truths (GT) at the center of each class are requested, and probability density functions are created for clusters that have centroid TOI using a Gaussian Mixture Model (GMM). The probability functions are applied to all remaining anomalies. All objects of UXO probability higher than a chosen threshold are placed in a ranked dig list. This prioritized list is scored and the results are demonstrated and analyzed.
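
    The Pasion-Oldenburg model mentioned above is the three-parameter power decay A(t) = k * t**(-b) * exp(-t/g); taking logarithms makes the fit linear in log k, b and 1/g. A sketch of such a log-linear fit (the least-squares recipe is an assumption; the paper does not say how the coefficients are obtained):

```python
import numpy as np

def fit_pasion_oldenburg(t, amp):
    """Fit A(t) = k * t**(-b) * exp(-t/g) by least squares in log space:
    log A = log k - b*log t - t/g is linear in (log k, b, 1/g)."""
    y = np.log(amp)
    M = np.column_stack([np.ones_like(t), -np.log(t), -t])
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    k, b, g = np.exp(coef[0]), coef[1], 1.0 / coef[2]
    return k, b, g
```

    The resulting (k, b, g) triples per anomaly are what the abstract then clusters in k-b-g space.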

  6. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

    Science.gov (United States)

    Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep according to the stage of sleep to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments on a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates of over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
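
    One of the two synchronization features named above, Relative Wavelet Entropy, is in essence a Kullback-Leibler divergence between the relative wavelet-energy distributions of two channels. A sketch using a plain Haar decomposition (the paper's wavelet family, decomposition depth and channel pairing are not specified here; all are assumptions):

```python
import numpy as np

def haar_band_energies(x, levels=4):
    """Relative energy per decomposition band of a plain Haar transform
    (stand-in for the wavelet decomposition of one EEG channel)."""
    energies = []
    a = np.asarray(x, float)
    for _ in range(levels):
        a2 = a[: len(a) // 2 * 2].reshape(-1, 2)
        d = (a2[:, 0] - a2[:, 1]) / np.sqrt(2)   # detail coefficients
        a = (a2[:, 0] + a2[:, 1]) / np.sqrt(2)   # approximation
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))
    e = np.array(energies)
    return e / e.sum()

def relative_wavelet_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two relative-energy
    distributions: the RWE used as a bivariate synchronization feature."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))
```

    RWE is zero when the two channels distribute their energy identically across bands and grows as their spectral content diverges.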

  7. Truth telling in medical practice: students' opinions versus their observations of attending physicians' clinical practice.

    Science.gov (United States)

    Tang, Woung-Ru; Fang, Ji-Tseng; Fang, Chun-Kai; Fujimori, Maiko

    2013-07-01

    Truth telling or transmitting bad news is a problem that all doctors must frequently face. The purpose of this cross-sectional study was to investigate whether medical students' opinions of truth telling differed from their observations of attending physicians' actual clinical practice. The subjects were 275 medical clerks/interns at a medical center in northern Taiwan. Data were collected on medical students' opinions of truth telling, their observations of physicians' clinical practice, students' level of satisfaction with truth telling practiced by attending physicians, and cancer patients' distress level when they were told the truth. Students' truth-telling awareness was significantly higher than the clinical truth-telling practice of attending physicians, revealing a gap between medical students' opinions on truth telling and attending physicians' actual clinical practice. More research is needed to objectively assess physicians' truth telling in clinical practice and to study the factors affecting the method of truth telling used by attending physicians in clinical practice. Copyright © 2012 John Wiley & Sons, Ltd.

  8. Design and Feasibility Testing of the truth FinishIt Tobacco Countermarketing Brand Equity Scale.

    Science.gov (United States)

    Evans, W Douglas; Rath, Jessica; Pitzer, Lindsay; Hair, Elizabeth C; Snider, Jeremy; Cantrell, Jennifer; Vallone, Donna

    2016-07-01

    The original truth campaign was a branded, national smoking prevention mass media effort focused on at-risk youth ages 12-17. Today the truth brand focuses on the goal of finishing tobacco (truth FinishIt). There have been significant changes in the tobacco control landscape, leading FinishIt to focus on 15- to 21-year-olds. The present article reports on formative research and media monitoring data collected to pilot test a new truth FinishIt brand equity scale. The goals of this study were to (a) content analyze truth FinishIt mass media ads, (b) assess truth's social media and followers' perceptions of truth's digital brand identity, and (c) develop and feasibility test a new version of the truth FinishIt brand equity scale using data from an existing Truth Initiative media monitoring study. Through factor analysis, we identified a brand equity scale, as in previous research, consisting of 4 main constructs: brand loyalty, leadership/satisfaction, personality, and awareness. Targeted truth attitudes and beliefs about social perceptions, acceptability, and industry-related beliefs were regressed on the higher order factor and each of the 4 individual brand equity factors. Ordinary least squares regression models generally showed associations in the expected directions (positive for anti-tobacco and negative for pro-tobacco) between targeted attitudes/beliefs and truth FinishIt brand equity. This study succeeded in developing and validating a new truth FinishIt brand equity scale. The scale may be a valuable metric for future campaign evaluation. Future studies should examine the effects of truth FinishIt brand equity on tobacco use behavioral outcomes over time.

  9. Some Notes (with Badiou and Žižek on Event/Truth/Subject/Militant Community in Jean-Paul Sartre's Political Thought

    Directory of Open Access Journals (Sweden)

    Erik M. Vogt

    2015-12-01

    Full Text Available The main object of this paper is to examine the new philosophical frame proposed by Alain Badiou and Slavoj Žižek and to show that it carries traces of Sartre's philosophical and political heritage. According to the project of Alain Badiou and Slavoj Žižek, one should no longer accept today's constellation of freedom, particularistic truth and democracy, but should (re)inscribe the issues of freedom and universal truth into a political project that attempts to re-activate a thinking of revolution. Their thinking consists in the wager that it is still possible to provide a philosophical frame for this leftist emancipatory position that claims the dimension of the universal against the vicious circle of capitalist globalization-cum-particularization and, by following Marx's claim that there are formal affinities between the ambitions of emancipatory politics and the working mode of capitalism, takes up the struggle of universalism against globalization (capital). It is only through this struggle for the universal that the intertwined processes of a constant expansion of the automatism of capital and "a process of fragmentation into closed identities," accompanied by "the culturalist and relativist ideology" (Badiou), can be suspended. It is precisely this constellation of revolutionary act, universal truth, subject, and militant community that reveals some similarities with Sartre's concepts of the subject, revolutionary action, and the militant community, among others.

  10. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his honor in the Higher-Order and Symbolic Computation Journal in the years 2003 and 2005. Among them there are two papers by Bob: (i) a retrospective view of his research lines, and (ii) a proposal for future studies in the area of the automatic program derivation. The book also includes some papers by members of the IFIP Working Group 2.1 of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Automatic Program Development offers...

  11. CRITERIA OF TRUTHFULNESS AND THE SCIENTIFIC QUALITY IN POST-MODERN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    Olga Mukha

    2012-02-01

    Full Text Available This article examines the criteria of truth in post-modern philosophy, taking into account the ways it is defined in both the classical and non-classical traditions. Specific to post-modern philosophy is the absence of a universal language of narration and of the traditional methods by which knowledge is recognized as legitimate. Basing herself on these concepts, the author examines the problem of the ideal of scientific quality and the transformations this idea has undergone in contemporary philosophy. Truth is understood basically through two means which govern our relation to truth: the will to truth and the concern for truth. These also appear as defining factors of truth in various types of post-modern philosophy: social-operative, social-political, and aesthetic

  12. Truth and opinion in climate change discourse: the Gore-Hansen disagreement.

    Science.gov (United States)

    Russill, Chris

    2011-11-01

    In this paper, I discuss the "inconvenient truth" strategy of Al Gore. I argue that Gore's notion of truth upholds a conception of science and policy that narrows our understanding of climate change discourse. In one notable exchange, Gore and NASA scientist, James Hansen, disagreed about whether scientific statements based on Hansen's computer simulations were truth or opinion. This exchange is featured in An Inconvenient Truth, yet the disagreement is edited from the film and presented simply as an instance of Hansen speaking "inconvenient truth". In this article, I compare the filmic representation of Hansen's testimony with the congressional record. I place their exchange in a broader historical perspective on climate change disputation in order to discuss the implications of Gore's perspective on truth.

  13. The Truth About the Internet and Online Predators

    CERN Document Server

    Dingwell, Heath; Peterson, Fred L

    2011-01-01

    To help readers avoid and recognize risky behaviors, The Truth About the Internet and Online Predators explains many of the dangers associated with the Internet. The A-to-Z entries detail the social, legal, and personal risks of Internet use, while personal testimonies and question-and-answer sections provide readers with an inside look at common issues online. Entries include: bullies and cyberbullying; characteristics of online predators; chat rooms and instant messaging; Internet safety; parental control; peers and peer pressure; phishing and pharming; privacy issues; social networking Web...

  14. China QIUSHI SEEKING TRUTH no 3, 1 August 1988

    Science.gov (United States)

    1988-09-16

    Contents include: "Legal Principles in Law Circles in Recent Years" (p. 40); "A New 'Political Economy' Textbook Is Compiled in the Soviet Union" [Dan Zhu] (p. 43); and a discussion of a census "Analysis on Results" published by Zhongguo Caizheng Jingji Chubanshe (China Finance and Economics Publishing House). Source: Beijing QIUSHI [SEEKING TRUTH] in Chinese, No 3, 1 Aug 88, p 44; article by Dan Zhu of the Chinese Academy of Social Sciences.

  15. Logical frameworks for truth and abstraction: an axiomatic study

    CERN Document Server

    Cantini, A

    1996-01-01

    This English translation of the author's original work has been thoroughly revised, expanded and updated. The book covers logical systems known as type-free or self-referential. These traditionally arise from any discussion on logical and semantical paradoxes. This particular volume, however, is not concerned with paradoxes but with the investigation of type-free systems to show that: (i) there are rich theories of self-application, involving both operations and truth, which can serve as foundations for property theory and formal semantics; (ii) these theories provide a new outlook on classical

  16. Truths and fallacies concerning radiation and its effects

    International Nuclear Information System (INIS)

    Wilkins, S.R.

    1984-01-01

    In childhood we learned many myths about radiation. For example, we were told that people exposed to x-rays would glow in the dark, become radioactive, or under the proper circumstances, turn into superhumans such as the ''Hulk'' or ''Spiderman.'' Although these and other childhood myths are not taken seriously, many misconceptions still exist about the effects of ionizing radiation. Does exposure to radiation necessarily imply an ill fate? It is the intent of this chapter to highlight a few of the truths and fallacies concerning radiation and its effects

  17. Herbal medicines: old and new concepts, truths and misunderstandings

    Directory of Open Access Journals (Sweden)

    Fabio Carmona

    2013-02-01

    Full Text Available Men have been using herbal medicines for thousands of years. The advantages of this type of therapeutics include good availability, local cultural aspects, individual preferences, the increasing demand for natural and organic products, and the already validated synergistic effects of herbal medicines. However, ethically, the scope and limits of these drugs need to be established not only by ethnopharmacological evidences but also by scientific investigations, which confirm the therapeutic effects. With this study, we propose to discuss the possible advantages of using herbal medicines instead of purified compounds, the truth and myths about herbal medicines, drug discovery, and the implications for medical education and health care.

  18. xxi Century Latin American Journalists: Beyond Post-Truth Politics

    Directory of Open Access Journals (Sweden)

    Adriana Amado

    2017-09-01

    Full Text Available Considering the complexity of the circumstances that journalism is facing, academia should rethink the classical conceptions of the profession, especially when they reduce journalism to abstract models disregarding reporters' actual practices. The results of a global research project suggest a new definition of journalism: a profession caught between conflicting interests, which has lost the supremacy over news production it held in the past century. The aim is to broaden the debate and reconsider the role of the journalist in contemporary news reporting, beyond current concepts such as post-truth.

  19. Finding small OBDDs for incompletely specified truth tables is hard

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro; Kristensen, Jesper Torp

    2006-01-01

    We present an efficient reduction mapping undirected graphs G with n = 2^k vertices for integers k to tables of partially specified Boolean functions g: {0,1}^(4k+1) -> {0,1,*} so that for any integer m, G has a vertex colouring using m colours if and only if g has a consistent ordered binary decision diagram with at most (2m + 2)n^2 + 4n decision nodes. From this it follows that the problem of finding a minimum-sized consistent OBDD for an incompletely specified truth table is NP-hard and also hard to approximate.

  20. Restorative Justice and the South African Truth and Reconciliation Process

    DEFF Research Database (Denmark)

    Gade, Christian B.N.

    2013-01-01

    It has frequently been argued that the post-apartheid Truth and Reconciliation Commission (TRC) was committed to restorative justice (RJ), and that RJ has deep historical roots in African indigenous cultures by virtue of its congruence both with ubuntu and with African indigenous justice systems (AIJS) … when the South African Law Commission published an Issue Paper dealing with RJ. Furthermore, I show that neither the connection between RJ and ubuntu nor the connection between RJ and AIJS is as straightforward and unproblematic as often assumed.

  1. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS). We present a recent state of the art. The book shows the main problems of ADS, the difficulties, and the solutions provided by the community. It presents recent advances in ADS, as well as current applications and trends. The approaches are statistical, linguistic and symbolic. Several examples are included in order to clarify the theoretical concepts. The books currently available in the area of Automatic Document Summarization are not recent. Powerful algorithms have been develop...
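
    The statistical family of approaches the book surveys can be illustrated by the simplest extractive summarizer: score each sentence by the average corpus frequency of its words and keep the top-scoring sentences in document order. An illustrative sketch (not an algorithm from the book):

```python
import re
from collections import Counter

def summarize(text, n=2):
    """Frequency-based extractive summarization: return the n sentences
    whose words are, on average, most frequent in the whole text."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    def score(s):
        toks = re.findall(r'[a-z]+', s.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)
    keep = sorted(ranked[:n])            # restore document order
    return ' '.join(sentences[i] for i in keep)
```

    Real ADS systems add redundancy control, position features and linguistic analysis on top of this kind of scoring.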

  2. Automatic Ultrasound Scanning

    DEFF Research Database (Denmark)

    Moshavegh, Ramin

    on the user adjustments on the scanner interface to optimize the scan settings. This explains the huge interest in the subject of this PhD project entitled “AUTOMATIC ULTRASOUND SCANNING”. The key goals of the project have been to develop automated techniques to minimize the unnecessary settings on the scanners, and to improve the computer-aided diagnosis (CAD) in ultrasound by introducing new quantitative measures. Thus, four major issues concerning automation of medical ultrasound are addressed in this PhD project. They touch upon gain adjustments in ultrasound, automatic synthetic aperture image...

  3. Automatic NAA. Saturation activities

    International Nuclear Information System (INIS)

    Westphal, G.P.; Grass, F.; Kuhnert, M.

    2008-01-01

    A system for Automatic NAA is based on a list of specific saturation activities determined for one irradiation position at a given neutron flux and a single detector geometry. Originally compiled from measurements of standard reference materials, the list may also be extended by the calculation of saturation activities from k0 and Q0 factors, and the f and α values of the irradiation position. A systematic improvement of the SRM approach is currently being performed by pseudo-cyclic activation analysis, to reduce counting errors. From these measurements, the list of saturation activities is recalculated in an automatic procedure. (author)
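
    The extension of the list from k0 and Q0 factors rests on the standard k0-convention correction Q0(α) = (Q0 − 0.429)·Ēr^(−α) + 0.429/((2α+1)·0.55^α) and, under the Høgdahl convention, a saturation reaction rate per target atom R = Φ_th·σ0·(1 + Q0(α)/f). A hedged numerical sketch (function names and the worked numbers are illustrative, not values from this system):

```python
def q0_alpha(q0, e_r, alpha):
    """Epithermal-shape-corrected resonance-integral ratio Q0(alpha),
    standard k0 convention with a 0.55 eV cadmium cut-off energy;
    e_r is the effective resonance energy in eV."""
    return (q0 - 0.429) / e_r**alpha + 0.429 / ((2 * alpha + 1) * 0.55**alpha)

def saturation_rate_per_atom(phi_th, sigma0, f, q0, e_r, alpha):
    """Saturation reaction rate per target atom,
    R = phi_th * sigma0 * (1 + Q0(alpha)/f), where f is the
    thermal-to-epithermal flux ratio, phi_th is in n cm^-2 s^-1
    and sigma0 is the 2200 m/s cross-section in cm^2."""
    return phi_th * sigma0 * (1.0 + q0_alpha(q0, e_r, alpha) / f)
```

    At α = 0 the correction reduces to Q0 itself, which gives a quick sanity check on an implementation.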

  4. [The duty to tell the truth with regard to a person with Alzheimer's disease].

    Science.gov (United States)

    Neyen, Octavie; Cornet, Marielle; Zeringer, Marie; Neyen, Constance

    2014-01-01

    In the framework of a project relating to ethical questioning, pupils in their penultimate year at Mabillon des Ardennes high school gathered testimonies which revealed that the truth is sometimes hidden from people with Alzheimer's disease. Why is this right to the truth not always respected? In what circumstances does it happen? What are the reasons? What are the potential consequences? Reflection is required around the question of the respect of the right to the truth for people with cognitive disorders.

  5. Civil history and poetry, certainty and truth in Francis Bacon

    Directory of Open Access Journals (Sweden)

    Silvia Manzo

    2015-02-01

    Full Text Available This article aims at studying key components of Francis Bacon’s theory of history and of his work as practitioner of civil history, particularly in regard to truth and certainty in historical narratives. It compares Bacon’s theories of history and poetry, and the way in which he conceives their relation to certainty, truth and fiction. It analyzes mainly two sorts of writings. On the one hand, it investigates the programmatic texts where Bacon’s views of history and poetry are developed. On the other hand, it examines the finished and unfinished civil histories written by Bacon as historian. In addition, the article evaluates Bacon’s stances against the background of Renaissance and early modern English historiography. It concludes that although history and poetry constitute separate branches in Bacon’s classification of learning, they share important elements, in keeping with the view of poetry maintained by his contemporary Philip Sidney. Thus, Bacon included fictional patterns in his historical narrative and distinguished certainties from conjectures in a particular way. This attitude towards civil history shows a strong contrast to Bacon’s methodology for natural histories, which, in order to reach certainty, staunchly recommends to exclude any fictional narrative in reporting the facts of nature.

  6. Discerning truth from deception: The sincere witness profile

    Directory of Open Access Journals (Sweden)

    Fiorella Giusberti

    2009-01-01

    Full Text Available During the last twenty years, we have witnessed a growing interest in the detection of verbal cues of deception. In this context, we focused our attention on the truth vs. deception topic in adults. In particular, we were interested in discrepant findings concerning some verbal indicators. The aim of the present study was to investigate whether different experimental designs may yield different results regarding the presence or absence of CBCA criteria. Forty participants were shown a video of a robbery and were asked to give a truthful and a deceitful statement of the criminal event. The participants' performances were recorded in order to analyze the content of the reports. Results showed more changes in verbal behaviour under the within-subjects design compared to the between-subjects one, though the presence/absence of some criteria was the same across the two statistical procedures. The different results yielded by between- and within-subjects analyses can provide some hints as regards the discrepancy in the deception literature on verbal cues. Implications for applied settings are discussed.

  7. Truth comes into play: Adorno, Kant and Aesthetics

    Directory of Open Access Journals (Sweden)

    Berta M. Pérez

    2009-10-01

    Full Text Available This paper departs from the observation that the rehabilitation of the aesthetic field’s bond to knowledge and truth is central to Adorno’s account of aesthetics. Adorno’s criticism of Kantian aesthetic theory is thus reconstructed starting from his dismissal of the separation, established by Kant, between the aesthetic and epistemological fields. This criticism, which accuses Kantian aesthetics of subjectivism, is ultimately traced back to Adorno’s dissatisfaction with the transcendental approach for not being dialectical. At this point, the need to determine the specificity of his own understanding of the aesthetic and of dialectics becomes apparent. Surprisingly, an unexpected proximity is then discovered between his position and that represented by Kantian aesthetics itself, a proximity that reveals in the latter features which point beyond the limits of the critical system towards the Adornian approach. But this proximity also proves the immanence and legitimacy of the (ambiguous) Adornian criticism of Kant and, as a result, the unsatisfactory character of the Kantian determination of knowledge and truth.

  8. Truth and Fibble-Fable in Renaissance Satire

    Directory of Open Access Journals (Sweden)

    Branko Madžarevič

    2011-12-01

    Full Text Available Throughout the years of my close involvement with wor(l)ds’ transactions, as a translator, in the triangle of the Renaissance doctors Rabelais, Montaigne, and, yes, Louis-Ferdinand Céline, the French author of the novel Journey to the End of Night (1932, their views on satire can be considered from a rather unconventional angle. By means of an imaginary morbid epistolary medical council, the impromptu introduction tries to entangle this peculiar trio in a freewheeling alliance, leading to the assumption that every translation defies the interpretational ambiguities of the utopian Thelema motto “Do as you will”. In the satirical context of all source and target faces, it is always acting on the verge of the paradoxical encomium, the hypothetical pasticcio, and obscurantist reversals of the original text. Of course, the issue at stake here is one of the convolutions of Erasmus’ Praise of Folly, Rabelais’ utopian Thelema Abbey, and the German Epistles of Obscure Men in pathetically wretched Latin. This paper deals with Renaissance and humanist satire, focusing on Rabelais’ five books of Gargantua and Pantagruel (1532–1564) and the interplay between the ideas of truth, truthfulness, and seriousness. In addition, the paper deals with how the Renaissance spirit of this satirical contemporary and ally of ours challenges the issue of verbal boundaries and the materiality of language.

  9. Cliff : the automatized zipper

    NARCIS (Netherlands)

    Baharom, M.Z.; Toeters, M.J.; Delbressine, F.L.M.; Bangaru, C.; Feijs, L.M.G.

    2016-01-01

    It is our strong belief that fashion - more specifically apparel - can support us much more in our daily life than it currently does. The Cliff project takes the opportunity to create a generic automatized zipper. It is a response to the struggle of the elderly, people with physical disabilities, and

  10. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1989-01-01

    One way to analyse programs is to to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of input. We describe a system to derive such time bounds automatically using abstract...

  11. Automatic Oscillating Turret.

    Science.gov (United States)

    1981-03-01

    Final report, February 1978 to September 1980, on the Automatic Oscillating Turret System (report-form and table-of-contents fragments from the scanned document omitted; Appendix P-4 describes the oscillating bumper turret and its controls). Other criteria requirements were: 1. Turret controls inside cab. 2. Automatic oscillation with fixed elevation to range from 20° below the horizontal to

  12. Reactor component automatic grapple

    International Nuclear Information System (INIS)

    Greenaway, P.R.

    1982-01-01

    A grapple for handling nuclear reactor components in a medium such as liquid sodium which, upon proper seating and alignment of the grapple with the component as sensed by a mechanical logic integral to the grapple, automatically seizes the component. The mechanical logic system also precludes seizure in the absence of proper seating and alignment. (author)

  13. Automatic sweep circuit

    International Nuclear Information System (INIS)

    Keefe, D.J.

    1980-01-01

    An automatically sweeping circuit for searching for an evoked response in an output signal in time with respect to a trigger input is described. Digital counters are used to activate a detector at precise intervals, and monitoring is repeated for statistical accuracy. If the response is not found then a different time window is examined until the signal is found.

  14. Automatic sweep circuit

    Science.gov (United States)

    Keefe, Donald J.

    1980-01-01

    An automatically sweeping circuit for searching for an evoked response in an output signal in time with respect to a trigger input. Digital counters are used to activate a detector at precise intervals, and monitoring is repeated for statistical accuracy. If the response is not found then a different time window is examined until the signal is found.
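
    The windowed search that both records describe can be sketched in software terms. This is an illustrative analogy to the hardware circuit, not its actual design; the function name, window size, and threshold are all assumed:

```python
def find_evoked_response(signal, window, threshold):
    """Sweep successive time windows after a trigger until the mean
    amplitude in some window exceeds a threshold, mimicking the
    automatic sweep: examine one window, and if no response is
    found, move on to the next."""
    for start in range(0, len(signal) - window + 1, window):
        segment = signal[start:start + window]
        if sum(abs(s) for s in segment) / window >= threshold:
            return start  # window where the response was detected
    return None  # no window contained a response

# flat baseline with a burst beginning at sample 12
trace = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 5, 4, 3]
print(find_evoked_response(trace, window=4, threshold=2.0))  # -> 12
```

    In the circuit, the repetition "for statistical accuracy" would correspond to averaging several traces before applying the threshold test.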

  15. Recursive automatic classification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.
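
    A recursive classification scheme in this general spirit, not the authors' algorithm, can be sketched as an online centroid update in which each sample nudges its nearest class centre with a decaying gain, so the partition is refined recursively as data arrive (all names and constants are illustrative):

```python
import math

def recursive_classify(points, centres):
    """Recursive (online) classification sketch: each sample is
    assigned to the nearest class centre, which is then moved toward
    the sample with a decaying gain, a stochastic-approximation step
    of the kind analysed in convergence proofs."""
    counts = [1] * len(centres)
    for p in points:
        # assign the sample to the nearest centre
        j = min(range(len(centres)), key=lambda k: math.dist(p, centres[k]))
        counts[j] += 1
        gain = 1.0 / counts[j]  # decaying gain, as in recursive estimation
        centres[j] = tuple(c + gain * (x - c) for c, x in zip(centres[j], p))
    return centres

data = [(0.1, 0.0), (0.0, 0.2), (5.0, 5.1), (5.2, 4.9)]
print(recursive_classify(data, [(0.0, 0.0), (5.0, 5.0)]))
```

    With the 1/n gain each centre converges to the running mean of the samples assigned to it, which is the usual starting point for convergence analyses of such algorithms.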

  16. Automatic Commercial Permit Sets

    Energy Technology Data Exchange (ETDEWEB)

    Grana, Paul [Folsom Labs, Inc., San Francisco, CA (United States)

    2017-12-21

    Final report for Folsom Labs’ Solar Permit Generator project, which has successfully completed, resulting in the development and commercialization of a software toolkit within the cloud-based HelioScope software environment that enables solar engineers to automatically generate and manage draft documents for permit submission.

  17. Automatic Deduction in Dynamic Geometry using Sage

    Directory of Open Access Journals (Sweden)

    Francisco Botana

    2012-02-01

    Full Text Available We present a symbolic tool that provides robust algebraic methods to handle automatic deduction tasks for a dynamic geometry construction. The main prototype has been developed as two different worksheets for the open source computer algebra system Sage, corresponding to two different ways of coding a geometric construction. In one worksheet, diagrams constructed with the open source dynamic geometry system GeoGebra are accepted. In this worksheet, Groebner bases are used to either compute the equation of a geometric locus in the case of a locus construction or to determine the truth of a general geometric statement included in the GeoGebra construction as a boolean variable. In the second worksheet, locus constructions coded using the common file format for dynamic geometry developed by the Intergeo project are accepted for computation. The prototype and several examples are provided for testing. Moreover, a third Sage worksheet is presented in which a novel algorithm to eliminate extraneous parts in symbolically computed loci has been implemented. The algorithm, based on a recent work on the Groebner cover of parametric systems, identifies degenerate components and extraneous adherence points in loci, both natural byproducts of general polynomial algebraic methods. Detailed examples are discussed.

  18. Ground water

    International Nuclear Information System (INIS)

    Osmond, J.K.; Cowart, J.B.

    1982-01-01

    The subject is discussed under the headings: background and theory (introduction; fractionation in the hydrosphere; mobility factors; radioisotope evolution and aquifer classification; aquifer disequilibria and geochemical fronts); case studies (introduction; (a) conservative, and (b) non-conservative, behaviour); ground water dating applications (general requirements; radon and helium; radium isotopes; uranium isotopes). (U.K.)

  19. Ground water

    International Nuclear Information System (INIS)

    Osmond, J.K.; Cowart, J.B.

    1992-01-01

    The great variations in concentrations and activity ratios of ²³⁴U/²³⁸U in ground waters and the features causing elemental and isotopic mobility in the hydrosphere are discussed. Fractionation processes and their application to hydrology and other environmental problems such as earthquake, groundwater and aquifer dating are described. (UK)

  20. Electromagnetic shielding, grounding and protection against voltage surges in equipment for automatic soil-water measurements

    Directory of Open Access Journals (Sweden)

    Claudia F. A. Teixeira

    2004-04-01

    Full Text Available The objective of this work was to implement a system of electromagnetic shielding, grounding and protection against voltage surges for instrumentation used in automatic monitoring of soil water content. The experiment was carried out in Piracicaba, São Paulo State, Brazil, using a Tektronix cable tester (Model 1502 B, which operates on the principle of time domain reflectometry (TDR, and a Campbell Scientific Inc. (CSI data acquisition system (CR10X that controls and analyzes the waveforms produced by the cable tester. Volumetric water content was obtained indirectly by measuring the soil's dielectric constant through probes inserted in the soil. For electromagnetic shielding, the power and signal cables were run inside galvanized pipes linked together by means of copper wire, and "copperweld" rods, copper braid and bimetallic terminals were used for the grounding system. For the electrical and electronic protection system, a circuit breaker to isolate the supply circuit, a surge protector, uninterruptible power supplies ("no-breaks") and an earth-resistance meter were used. The results showed that the proposed system, built from material generally already available at agricultural research sites, provided effective protection.

  1. The seventh servant: the implications of a truth drive in Bion's theory of 'O'.

    Science.gov (United States)

    Grotstein, James S

    2004-10-01

    Drawing upon Bion's published works on the subjects of truth, dreaming, alpha-function and transformations in 'O', the author independently postulates that there exists a 'truth instinctual drive' that subserves a truth principle, the latter of which is associated with the reality principle. Further, he suggests, following Bion's postulation, that 'alpha-function' and dreaming/phantasying constitute unconscious thinking processes and that they mediate the activity of this 'truth drive' (quest, pulsion), which the author hypothesizes constitutes another aspect of a larger entity that also includes the epistemophilic component drive. It purportedly seeks and transmits as well as includes what Bion (1965, pp. 147-9) calls 'O', the 'Absolute Truth, Ultimate Reality, O' (also associated with infinity, noumena or things-in-themselves, and 'godhead') (1970, p. 26). It is further hypothesized that the truth drive functions in collaboration with an 'unconscious consciousness' that is associated with the faculty of 'attention', which is also known as 'intuition'. It is responsive to internal psychical reality and constitutes Bion's 'seventh servant'. O, the ultimate landscape of psychoanalysis, has many dimensions, but the one that seems to interest Bion is that of the emotional experience of the analysand's and the analyst's 'evolving O' respectively (1970, p. 52) during the analytic session. The author thus hypothesizes that a sense of truth presents itself to the subject as a quest for truth which has the quality and force of an instinctual drive and constitutes the counterpart to the epistemophilic drive. This 'truth quest' or 'drive' is hypothesized to be the source of the generation of the emotional truth of one's ongoing experiences, both conscious and unconscious. It is proposed that emotions are beacons of truth in regard to the acceptance of reality. 
The concepts of an emotional truth drive and a truth principle would help us understand why analysands are able to

  2. A multimodality segmentation framework for automatic target delineation in head and neck radiotherapy.

    Science.gov (United States)

    Yang, Jinzhong; Beadle, Beth M; Garden, Adam S; Schwartz, David L; Aristophanous, Michalis

    2015-09-01

    To develop an automatic segmentation algorithm integrating imaging information from computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI) to delineate target volume in head and neck cancer radiotherapy. Eleven patients with unresectable disease at the tonsil or base of tongue who underwent MRI, CT, and PET/CT within two months before the start of radiotherapy or chemoradiotherapy were recruited for the study. For each patient, PET/CT and T1-weighted contrast MRI scans were first registered to the planning CT using deformable and rigid registration, respectively, to resample the PET and magnetic resonance (MR) images to the planning CT space. A binary mask was manually defined to identify the tumor area. The resampled PET and MR images, the planning CT image, and the binary mask were fed into the automatic segmentation algorithm for target delineation. The algorithm was based on a multichannel Gaussian mixture model and solved using an expectation-maximization algorithm with Markov random fields. To evaluate the algorithm, we compared the multichannel autosegmentation with an autosegmentation method using only PET images. The physician-defined gross tumor volume (GTV) was used as the "ground truth" for quantitative evaluation. The median multichannel segmented GTV of the primary tumor was 15.7 cm³ (range, 6.6-44.3 cm³), while the PET segmented GTV was 10.2 cm³ (range, 2.8-45.1 cm³). The median physician-defined GTV was 22.1 cm³ (range, 4.2-38.4 cm³). The median difference between the multichannel segmented and physician-defined GTVs was -10.7%, not showing a statistically significant difference (p-value = 0.43). However, the median difference between the PET segmented and physician-defined GTVs was -19.2%, showing a statistically significant difference (p-value = 0.0037). The median Dice similarity coefficient between the multichannel segmented and physician-defined GTVs was 0.75 (range, 0.55-0.84), and the
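
    The Dice similarity coefficient used above to compare auto-segmented and physician-defined volumes can be sketched in pure Python; masks are modeled here as sets of voxel coordinates, and the function name and toy volumes are illustrative, not the study's data:

```python
def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks given as
    sets of voxel coordinates: 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(seg_a), set(seg_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

# toy 1-D "masks": physician-defined vs. auto-segmented voxels
gtv_manual = {(x, 0, 0) for x in range(0, 10)}
gtv_auto = {(x, 0, 0) for x in range(2, 12)}
print(dice_coefficient(gtv_manual, gtv_auto))  # overlap of 8 voxels -> 0.8
```

    A value of 1.0 means perfect overlap and 0.0 none, so the study's median of 0.75 indicates substantial but imperfect agreement with the physician contours.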

  3. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks

    Science.gov (United States)

    Cruz-Roa, Angel; Basavanhally, Ajay; González, Fabio; Gilmore, Hannah; Feldman, Michael; Ganesan, Shridar; Shih, Natalie; Tomaszewski, John; Madabhushi, Anant

    2014-03-01

    This paper presents a deep learning approach for automatic detection and visual analysis of invasive ductal carcinoma (IDC) tissue regions in whole slide images (WSI) of breast cancer (BCa). Deep learning approaches are learn-from-data methods involving computational modeling of the learning process. This approach is similar to how the human brain works, using different interpretation levels or layers of the most representative and useful features, resulting in a hierarchical learned representation. These methods have been shown to outperform traditional approaches on some of the most challenging problems in areas such as speech recognition and object detection. Invasive breast cancer detection is a time consuming and challenging task primarily because it involves a pathologist scanning large swathes of benign regions to ultimately identify the areas of malignancy. Precise delineation of IDC in WSI is crucial to the subsequent estimation of tumor aggressiveness grading and predicting patient outcome. DL approaches are particularly adept at handling these types of problems, especially if a large number of samples are available for training, which would also ensure the generalizability of the learned features and classifier. The DL framework in this paper extends a number of convolutional neural networks (CNN) for visual semantic analysis of tumor regions for diagnosis support. The CNN is trained over a large number of image patches (tissue regions) from WSI to learn a hierarchical part-based representation. The method was evaluated over a WSI dataset from 162 patients diagnosed with IDC. 113 slides were selected for training and 49 slides were held out for independent testing. Ground truth for quantitative evaluation was provided via delineation of the cancer region by an expert pathologist on the digitized slides. The experimental evaluation was designed to measure classifier accuracy in detecting IDC tissue regions in WSI. Our method yielded the best quantitative

  4. SU-C-BRA-06: Automatic Brain Tumor Segmentation for Stereotactic Radiosurgery Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y; Stojadinovic, S; Jiang, S; Timmerman, R; Abdulrahman, R; Nedzi, L; Gu, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: Stereotactic radiosurgery (SRS), which delivers a potent dose of highly conformal radiation to the target in a single fraction, requires accurate tumor delineation for treatment planning. We present an automatic segmentation strategy that synergizes intensity histogram thresholding, super-voxel clustering, and level-set-based contour evolution methods to efficiently and accurately delineate SRS brain tumors on contrast-enhanced T1-weighted (T1c) Magnetic Resonance Images (MRI). Methods: The developed auto-segmentation strategy consists of three major steps. Firstly, tumor sites are localized through 2D slice intensity histogram scanning. Then, super voxels are obtained by clustering the corresponding voxels in 3D with reference to similarity metrics composited from spatial distance and intensity difference. The combination of these two steps generates the initial contour surface. Finally, a localized region active contour model is utilized to evolve the surface to achieve accurate delineation of the tumors. The developed method was evaluated on numerical phantom data, synthetic BRATS (Multimodal Brain Tumor Image Segmentation challenge) data, and clinical patients’ data. The auto-segmentation results were quantitatively evaluated by comparison to ground truths with both volume and surface similarity metrics. Results: The Dice coefficient (DC) was used as a quantitative metric to evaluate the auto-segmentation in the numerical phantom with 8 tumors. DCs were 0.999±0.001 without noise, 0.969±0.065 with Rician noise and 0.976±0.038 with Gaussian noise. DC, NMI (Normalized Mutual Information), SSIM (Structural Similarity) and Hausdorff distance (HD) were calculated as the metrics for the BRATS and patients’ data. Assessment of BRATS data across 25 tumor segmentations yielded DC 0.886±0.078, NMI 0.817±0.108, SSIM 0.997±0.002, and HD 6.483±4.079 mm. Evaluation on 8 patients with a total of 14 tumor sites yielded DC 0.872±0.070, NMI 0.824±0
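
    The Hausdorff distance (HD) reported alongside DC above measures the worst-case disagreement between two contours. A minimal pure-Python sketch, with toy point sets standing in for tumor surfaces (all names and data are illustrative):

```python
import math

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets: the larger
    of the two directed distances max_a min_b ||a - b||, i.e. the worst
    distance from any point of one set to the other set."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))

# toy contours: two unit squares offset by 3 along x
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
shifted = [(x + 3, y) for x, y in square]
print(hausdorff_distance(square, shifted))  # -> 3.0
```

    Unlike the overlap-based DC, a single outlying contour point drives the HD up, which is why the two metrics are usually reported together.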

  5. Small-Scale Helicopter Automatic Autorotation : Modeling, Guidance, and Control

    NARCIS (Netherlands)

    Taamallah, S.

    2015-01-01

    Our research objective consists in developing a model-based automatic safety recovery system for a small-scale helicopter Unmanned Aerial Vehicle (UAV) in autorotation, i.e. an engine-OFF flight condition, that safely flies and lands the helicopter at a pre-specified ground location. In pursuit

  6. Demonstrator for Automatic Target Classification in SAR Imagery

    NARCIS (Netherlands)

    Wit, J.J.M. de; Broek, A.C. van den; Dekker, R.J.

    2006-01-01

    Due to the increasing use of unmanned aerial vehicles (UAV) for reconnaissance, surveillance, and target acquisition applications, the interest in synthetic aperture radar (SAR) systems is growing. In order to facilitate the processing of the enormous amount of SAR data on the ground, automatic

  7. Ground Pollution Science

    International Nuclear Information System (INIS)

    Oh, Jong Min; Bae, Jae Geun

    1997-08-01

    This book deals with ground pollution science and soil science, classification of soil and fundamentals, ground pollution and human, ground pollution and organic matter, ground pollution and city environment, environmental problems of the earth and ground pollution, soil pollution and development of geological features of the ground, ground pollution and landfill of waste, case of measurement of ground pollution.

  8. Automatic indexing, compiling and classification

    International Nuclear Information System (INIS)

    Andreewsky, Alexandre; Fluhr, Christian.

    1975-06-01

    A review of the principles of automatic indexing is followed by a comparison and summing-up of work by the authors and by a Soviet team from the Moscow INFORM-ELECTRO Institute. The mathematical and linguistic problems of the automatic building of thesauri and of automatic classification are examined [fr

  9. From the clouds to the ground - snow precipitation patterns vs. snow accumulation patterns

    Science.gov (United States)

    Gerber, Franziska; Besic, Nikola; Mott, Rebecca; Gabella, Marco; Germann, Urs; Bühler, Yves; Marty, Mauro; Berne, Alexis; Lehning, Michael

    2017-04-01

    Knowledge about snow distribution and snow accumulation patterns is important and valuable for different applications such as the prediction of seasonal water resources or avalanche forecasting. Furthermore, accumulated snow on the ground is an important ground truth for validating meteorological and climatological model predictions of precipitation in high mountains and polar regions. Snow accumulation patterns are determined by many different processes from ice crystal nucleation in clouds to snow redistribution by wind and avalanches. In between, snow precipitation undergoes different dynamical and microphysical processes, such as ice crystal growth, aggregation and riming, which determine the growth of individual particles and thereby influence the intensity and structure of the snowfall event. In alpine terrain the interaction of different processes and the topography (e.g. lifting condensation and low level cloud formation, which may result in a seeder-feeder effect) may lead to orographic enhancement of precipitation. Furthermore, the redistribution of snow particles in the air by wind results in preferential deposition of precipitation. Even though orographic enhancement is addressed in numerous studies, the relative importance of micro-physical and dynamically induced mechanisms on local snowfall amounts and especially snow accumulation patterns is hardly known. To better understand the relative importance of different processes on snow precipitation and accumulation we analyze snowfall and snow accumulation between January and March 2016 in Davos (Switzerland). We compare MeteoSwiss operational weather radar measurements on Weissfluhgipfel to a spatially continuous snow accumulation map derived from airborne digital sensing (ADS) snow height for the area of Dischma valley in the vicinity of the weather radar. Additionally, we include snow height measurements from automatic snow stations close to the weather radar. Large-scale radar snow accumulation

  10. Automatization of welding

    International Nuclear Information System (INIS)

    Iwabuchi, Masashi; Tomita, Jinji; Nishihara, Katsunori.

    1978-01-01

    Automatization of welding is one of the effective measures for securing a high degree of quality in nuclear power equipment, as well as for coping with the working environment at plant sites. Described here are the latest automatic welders in practical use for welding nuclear power apparatus at the factories of Toshiba and IHI: those for pipes and for lining tanks. The pipe welder performs buttering welding on the inside of the pipe end as the so-called IGSCC countermeasure, and the subsequent butt welding through the same controller. The lining tank welder is able to perform simultaneous welding of two parallel weld lines on a large thin-plate lining tank. Both types of welders are demonstrating excellent performance at the shops as well as at plant sites. (author)

  11. Automatic structural scene digitalization.

    Science.gov (United States)

    Tang, Rui; Wang, Yuhan; Cosker, Darren; Li, Wenbin

    2017-01-01

    In this paper, we present an automatic system for the analysis and labeling of structural scenes, i.e. floor plan drawings in Computer-Aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing-based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and real-time. The analysis system provides high accuracy and has also been evaluated on a public website that, on average, serves more than ten thousand effective uses per day and reaches a relatively high satisfaction rate.

  12. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
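
    The numerical-experiment method the book describes, evaluating a trend estimator on artificial series with a known trend, can be illustrated with an ordinary least-squares slope as a stand-in estimator (this is not one of the book's algorithms; all constants are illustrative):

```python
import random

def linear_trend(series):
    """Ordinary least-squares slope of a series against its index."""
    n = len(series)
    mx = (n - 1) / 2.0                 # mean of the indices 0..n-1
    my = sum(series) / n               # mean of the series values
    cov = sum((x - mx) * (y - my) for x, y in enumerate(series))
    var = sum((x - mx) ** 2 for x in range(n))
    return cov / var

# Monte Carlo experiment: artificial series with a known slope plus noise,
# repeated many times to measure the estimator's accuracy
random.seed(0)
true_slope = 0.5
estimates = []
for _ in range(200):
    series = [true_slope * t + random.gauss(0, 1) for t in range(100)]
    estimates.append(linear_trend(series))
mean_est = sum(estimates) / len(estimates)
print(mean_est)  # close to 0.5
```

    The spread of `estimates` around the known slope is exactly the kind of accuracy measure the Monte Carlo experiments in the book produce for their automatic algorithms.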

  13. Cross-Cultural Differences in Children’s Choices, Categorizations, and Evaluations of Truths and Lies

    Science.gov (United States)

    Fu, Genyue; Xu, Fen; Cameron, Catherine Ann; Heyman, Gail; Lee, Kang

    2008-01-01

    This study examined cross-cultural differences and similarities in children’s moral understanding of individual- or collective-oriented lies and truths. Seven-, 9-, and 11-year-old Canadian and Chinese children were read stories about story characters facing moral dilemmas about whether to lie or tell the truth to help a group but harm an individual or vice versa. Participants chose to lie or to tell the truth as if they were the character (Experiments 1 and 2) and categorized and evaluated the story characters’ truthful and untruthful statements (Experiments 3 and 4). Most children in both cultures labeled lies as lies and truths as truths. The major cultural differences lay in choices and moral evaluations. Chinese children chose lying to help a collective but harm an individual, and they rated it less negatively than lying with opposite consequences. Chinese children rated truth telling to help an individual but harm a group less positively than the alternative. Canadian children did the opposite. These findings suggest that cross-cultural differences in emphasis on groups versus individuals affect children’s choices and moral judgments about truth and deception. PMID:17352539

  14. Cross-Cultural Differences in Children's Choices, Categorizations, and Evaluations of Truths and Lies

    Science.gov (United States)

    Fu, Genyue; Xu, Fen; Cameron, Catherine Ann; Heyman, Gail; Lee, Kang

    2007-01-01

    This study examined cross-cultural differences and similarities in children's moral understanding of individual- or collective-oriented lies and truths. Seven-, 9-, and 11-year-old Canadian and Chinese children were read stories about story characters facing moral dilemmas about whether to lie or tell the truth to help a group but harm an…

  15. Towards a pragmatics of non-fictional narrative truth: Gricean and ...

    African Journals Online (AJOL)

    This paper focuses on a particular kind of truth that falls within this category, namely non-fictional narrative truth. “Narrative truth” is defined as a judgement of verisimilitude accorded to the meaning of a narrative as a whole. This narrative meaning is neither rationally nor empirically verifiable, but rather arrived at by a ...

  16. Overcoming Relativism and Absolutism: Dewey's Ideals of Truth and Meaning in Philosophy for Children

    Science.gov (United States)

    Bleazby, Jennifer

    2011-01-01

    Different notions of truth imply and encourage different ideals of thinking, knowledge, meaning, and learning. Thus, these concepts have fundamental importance for educational theory and practice. In this paper, I intend to draw out and clarify the notions of truth, knowledge and meaning that are implied by P4C's pedagogical ideals. There is some…

  17. Helping medical students to acquire a deeper understanding of truth-telling.

    Science.gov (United States)

    Hurst, Samia A; Baroffio, Anne; Ummel, Marinette; Burn, Carine Layat

    2015-01-01

    Truth-telling is an important component of respect for patients' self-determination, but in the context of breaking bad news, it is also a distressing and difficult task. We investigated the long-term influence of a simulated patient-based teaching intervention, integrating learning objectives in communication skills and ethics into students' attitudes and concerns regarding truth-telling. We followed two cohorts of medical students from the preclinical third year to their clinical rotations (fifth year). Open-ended responses were analysed to explore medical students' reported difficulties in breaking bad news. This intervention was implemented during the last preclinical year of a problem-based medical curriculum, in collaboration between the doctor-patient communication and ethics programs. Over time, concerns such as empathy and truthfulness shifted from a personal to a relational focus. Whereas 'truthfulness' was a concern for the content of the message, 'truth-telling' included concerns on how information was communicated and how realistically it was received. Truth-telling required empathy, adaptation to the patient, and appropriate management of emotions, both for the patient's welfare and for a realistic understanding of the situation. Our study confirms that an intervention confronting students with a realistic situation succeeds in making them more aware of the real issues of truth-telling. Medical students deepened their reflection over time, acquiring a deeper understanding of the relational dimension of values such as truth-telling, and honing their view of empathy.

  18. Is socioeconomic status associated with awareness of and receptivity to the truth campaign?

    Science.gov (United States)

    Vallone, Donna M; Allen, Jane A; Xiao, Haijun

    2009-10-01

    The truth campaign is credited with preventing approximately 450,000 youth from starting to smoke, from 2000 through 2004 [Farrelly, M.C., Nonnemaker, J., Davis, K.C., Hussin, A., 2009. The Influence of the National Truth Campaign on Smoking Initiation. Am. J. Prev. Med. February 9 [Epub ahead of print

  19. Rhetoric and Truth: A Note on Aristotle, Rhetoric 1355a 21-24

    Science.gov (United States)

    Grimaldi, William M. A.

    1978-01-01

    A passage from Aristotle is discussed and interpreted. Rhetoric represents truth and justice in any situation for the auditor through the use of language. The usefulness of rhetoric lies in its ability to assure an adequate and competent articulation of truth and justice. (JF)

  20. Automatic food decisions

    DEFF Research Database (Denmark)

    Mueller Loose, Simone

Consumers' food decisions are to a large extent shaped by automatic processes, which are either internally directed through learned habits and routines or externally influenced by context factors and visual information triggers. Innovative research methods such as eye tracking, choice experiments and food diaries allow us to better understand the impact of unconscious processes on consumers' food choices. Simone Mueller Loose will provide an overview of recent research insights into the effects of habit and context on consumers' food choices.

  1. Automatic LOD selection

    OpenAIRE

    Forsman, Isabelle

    2017-01-01

    In this paper a method to automatically generate transition distances for LOD, improving image stability and performance is presented. Three different methods were tested all measuring the change between two level of details using the spatial frequency. The methods were implemented as an optional pre-processing step in order to determine the transition distances from multiple view directions. During run-time both view direction based selection and the furthest distance for each direction was ...

  2. Populism, Exclusion, Post-truth. Some Conceptual Caveats

    Science.gov (United States)

    De Cleen, Benjamin

    2018-01-01

    In their editorial, Speed and Mannion identify two main challenges "the rise of post-truth populism" poses for health policy: the populist threat to inclusive healthcare policies, and the populist threat to well-designed health policies that draw on professional expertise and research evidence. This short comment suggests some conceptual clarifications that might help in thinking through more profoundly these two important issues. It argues that we should approach right-wing populism as a combination of a populist down/up (people/elite) axis with an exclusionary nationalist in/out (member/non-member) axis. And it raises some questions regarding the equation between populism, demagogy and the rejection of expertise and scientific knowledge. PMID:29524956

  3. Truth and reconciliation: Yes, please! No, thank you!

    Directory of Open Access Journals (Sweden)

    Kesić Vesna

    2002-01-01

Full Text Available The case of former Yugoslavia and its successor states is specific and somewhat different from other post-conflict societies. First, the retributive model of justice is carried out, or should be carried out, before the International Criminal Tribunal in The Hague. The question is how to start the process of searching for truth and reconciliation within and between the societies, groups and individuals of the newly established countries. No existing model, such as those of South Africa and some Latin American countries, can be applied here, because in this case we are talking about five states, of which at least three were at war. Also, the character of these conflicts spans the range from international conflict to internal aggression and civil war.

  4. The Effect of Telling Lies on Belief in the Truth

    Directory of Open Access Journals (Sweden)

    Danielle Polage

    2017-11-01

Full Text Available The current study looks at the effect of telling lies, in contrast to simply planning lies, on participants' belief in the truth. Participants planned and told a lie, planned to tell a lie but didn't tell it, told an unplanned lie, or neither planned nor told a lie (control) about events that did not actually happen to them. Participants attempted to convince researchers that all of the stories told were true. Results show that telling a lie plays a more important role in inflating belief scores than simply preparing the script of a lie. Cognitive dissonance may lead to motivated forgetting of information that does not align with the lie. This research suggests that telling lies may lead to confusion as to the veracity of the lie, leading to inflated belief scores.

  5. In search of truth: the regulatory necessity of validation

    International Nuclear Information System (INIS)

    Niederer, U.

    1991-01-01

A look at modern ideas of how scientific truth is achieved shows that theories are not really proved but accepted by a consensus of the experts, borne out by often-repeated experience showing a theory to work well. In the same sense, the acceptability of models in waste disposal is mostly based on consensus. To obtain the consensus of the relevant experts, including regulators, all models which considerably influence the results of a safety assessment have to be validated. This is particularly important for the models of geospheric migration, because scientific experience with the deep underground is scarce. Validation plays a special role in public acceptance, where regulators and other groups, which act as intermediaries between the public and the project manager, have to be convinced that all the relevant models are correct.

  6. Right to Truth: Dissapeared students in Ayotzinapa, Mexico

    Directory of Open Access Journals (Sweden)

    Gabriela Vargas Gómez

    2015-10-01

Full Text Available The disappearance of the 43 Ayotzinapa students in Iguala, Guerrero, will remain in the history of Mexico as one of the most serious violations of human rights, carried out by organized crime in complicity with public servants. The official version of events answers neither the many questions of the families nor those of society as a whole. Ensuring the right to truth for the victims, as well as for the national and international community, must be assumed by the Mexican State as an obligation, fulfilled with the guarantees that international principles require for enforced disappearances, both to repair the damage and to prevent its recurrence.

  7. Truth or beauty science and the quest for order

    CERN Document Server

    Orrell, David

    2012-01-01

In this sweeping book, applied mathematician and popular author David Orrell questions the promises and pitfalls of associating beauty with truth, showing how ideas of mathematical elegance have inspired, and have sometimes misled, scientists attempting to understand nature. Orrell shows how the ancient Greeks constructed a concept of the world based on musical harmony; later thinkers replaced this model with a program, based on Newton's "rational mechanics," to reduce the universe to a few simple equations. He then turns to current physical theories, such as supersymmetric string theory, which are again influenced by deep aesthetic principles. The book sheds new light on historical investigations as well as recent research, including the experiments ongoing at the Large Hadron Collider. Finally, broadening his discussion to other fields of research, including economics, architecture, and health, Orrell questions whether these aesthetic principles reflect an accurate way to explain and understand the structure...

  8. Truth or meaning: Ricoeur versus Frei on biblical narrative

    Directory of Open Access Journals (Sweden)

    Gary L. Comstock

    1989-01-01

    Full Text Available Truth or meaning: Ricoeur versus Frei on biblical narrative Of the theologians and philosophers now writing on biblical narrative, Hans Frei and Paul Ricoeur are probably the most prominent. It is significant that their views converge on important issues. Both are uncomfortable with hermeneutic theories that convert the text into an abstract philosophical system, an ideal typological structure, or a mere occasion for existential decision. Frei and Ricoeur seem knit together in a common enterprise; they appear to be building a single narrative theology. I argue that the appearance of symmetry is an illusion. There is a fundamental conflict between the ‘pure narrativism’ of Frei and the ‘impure narrativism’ of Ricoeur. I give reasons for thinking that Ricoeur’s is the stronger position.

  9. The Influence of the National truth campaign on smoking initiation.

    Science.gov (United States)

    Farrelly, Matthew C; Nonnemaker, James; Davis, Kevin C; Hussin, Altijani

    2009-05-01

    States and national organizations spend millions annually on antismoking campaigns aimed at youth. Much of the evidence for their effectiveness is based on cross-sectional studies. This study was designed to evaluate the effectiveness of a prominent national youth smoking-prevention campaign in the U.S. known as truth that was launched in February 2000. A nationally representative cohort of 8904 adolescents aged 12-17 years who were interviewed annually from 1997 to 2004 was analyzed in 2008. A quasi-experimental design was used to relate changes in smoking initiation to variable levels of exposure to antismoking messages over time and across 210 media markets in the U.S. A discrete-time hazard model was used to quantify the influence of media market delivery of TV commercials on smoking initiation, controlling for confounding influences. Based on the results of the hazard model, the number of youth nationally who were prevented from smoking from 2000 through 2004 was estimated. Exposure to the truth campaign is associated with a decreased risk of smoking initiation (relative risk=0.80, p=0.001). Through 2004, approximately 450,000 adolescents were prevented from trying smoking nationwide. Factors negatively associated with initiation include African-American race (relative risk=0.44, p<0.001), Hispanic ethnicity (relative risk=0.74, p<0.001), completing high school (relative risk=0.69, p<0.001), and living with both parents at baseline (OR=0.79, p<0.001). The current study strengthens the available evidence for antismoking campaigns as a viable strategy for preventing youth smoking.

  10. Communication grounding facility

    International Nuclear Information System (INIS)

    Lee, Gye Seong

    1998-06-01

This work concerns communication grounding facilities and is made up of twelve chapters. It covers: grounding in general, its purpose and materials, including thermal insulating material; construction of grounding; super-strength grounding methods; grounding facilities, grounding methods and insulation of buildings; switched grounding with No. 1A and LCR; grounding facilities for transmission lines; wireless facility grounding; grounding facilities in wireless base stations; grounding of power facilities; grounding of low-tension interior power wiring; communication facilities of railroads; installation of arresters in apartments and houses; and installation of arresters, with earth conductivity and measurement of grounding resistance.

  11. Towards an Automatic Framework for Urban Settlement Mapping from Satellite Images: Applications of Geo-referenced Social Media and One Class Classification

    Science.gov (United States)

    Miao, Zelang

    2017-04-01

Currently, urban dwellers comprise more than half of the world's population, and this percentage is still increasing dramatically. The explosive urban growth over the next two decades poses a long-term, profound impact on people as well as the environment. Accurate and up-to-date delineation of urban settlements plays a fundamental role in defining planning strategies and in supporting sustainable development of urban settlements. In order to provide adequate data about urban extents and land covers, classifying satellite data has become a common practice, usually with sufficiently accurate results. Indeed, a number of supervised learning methods have proven effective in urban area classification, but they usually depend on a large number of training samples, whose collection is a time- and labor-expensive task. This issue becomes particularly serious when classifying large areas at the regional/global level. As an alternative to manual ground truth collection, in this work we use geo-referenced social media data. Cities and densely populated areas are extremely fertile ground for the production of individual geo-referenced data (such as GPS and social network data). Training samples derived from geo-referenced social media have several advantages: they are easy to collect; they are usually freely exploitable; and data from social media are spatially available in many locations, certainly including most urban areas around the world. Despite these advantages, the selection of training samples from social media meets two challenges: 1) there are many duplicated points; 2) a method is required to automatically label them as "urban/non-urban". The objective of this research is to validate automatic sample selection from geo-referenced social media and its applicability in one-class classification for urban extent mapping from satellite images. The findings in this study shed new light on social media applications in the field of remote sensing.
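The two sample-selection challenges named above (deduplicating geo-referenced points and labelling them automatically) can be illustrated with a toy sketch. This is not the paper's pipeline; the coordinates, radius, and neighbour threshold below are invented for illustration, and density-based labelling is just one plausible way to pick positive "urban" samples for a one-class classifier.

```python
import numpy as np

# Hypothetical geo-referenced social-media points (lon, lat).
points = np.array([
    [103.85, 1.29], [103.85, 1.29],   # duplicated check-ins downtown
    [103.86, 1.30], [103.84, 1.28], [103.85, 1.30],
    [104.50, 1.80],                   # isolated rural point
])

# 1) Remove duplicated points.
unique_pts = np.unique(points, axis=0)

# 2) Auto-label a point as "urban" when enough neighbours fall within a
#    radius: a crude density proxy for selecting positive training samples.
radius, min_neighbours = 0.05, 2
d = np.linalg.norm(unique_pts[:, None, :] - unique_pts[None, :, :], axis=2)
neighbours = (d < radius).sum(axis=1) - 1     # exclude self
urban = neighbours >= min_neighbours          # boolean label per point
```

The points labelled `True` would then seed a one-class classifier applied to the satellite imagery; only the "urban" class needs labelled samples.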

  12. SU-D-BRC-01: An Automatic Beam Model Commissioning Method for Monte Carlo Simulations in Pencil-Beam Scanning Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Qin, N; Shen, C; Tian, Z; Jiang, S; Jia, X [UT Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which can be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. Mean energy, energy spread, and spatial spread are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions for a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of the doses for those ideal pencil beams, with weights in Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central-axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining the conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with
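The beam model described in the abstract, a Gaussian-weighted superposition of precomputed ideal-beam doses fitted against measurement, can be mimicked in a toy 1-D example. All curves and numbers below are fabricated stand-ins, and the brute-force grid search replaces the abstract's conjugate-gradient-plus-fitting algorithm; the sketch only shows that known energy-spread parameters can be recovered by matching the depth dose.

```python
import numpy as np

depths = np.linspace(0.0, 30.0, 300)       # cm
energies = np.arange(80.0, 90.01, 0.5)     # MeV grid of ideal beams

def ideal_depth_dose(E):
    """Fake Bragg-like depth dose for an ideal monoenergetic beam
    (a stand-in for a precomputed Monte Carlo curve)."""
    peak = 0.3 * E                          # toy range-energy relation
    return np.exp(-0.5 * ((depths - peak) / 0.8) ** 2) + 0.1 * (depths < peak)

basis = np.array([ideal_depth_dose(E) for E in energies])

def model_dose(mean_E, sigma_E):
    """Real-beam dose = Gaussian-weighted superposition of ideal doses."""
    w = np.exp(-0.5 * ((energies - mean_E) / sigma_E) ** 2)
    return (w / w.sum()) @ basis

# Pretend measurement, generated from known ground-truth parameters.
measured = model_dose(85.0, 1.2)

# Commissioning: find the parameters whose model best matches measurement.
grid = [(mE, sE) for mE in np.arange(82.0, 88.01, 0.25)
                 for sE in np.arange(0.5, 2.51, 0.1)]
best_E, best_sigma = min(
    grid, key=lambda p: np.sum((model_dose(*p) - measured) ** 2))
```

In this idealized setting the search recovers the ground-truth mean energy and energy spread, the analogue of the small parameter errors reported in the abstract.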

  13. Automatic spinal cord localization, robust to MRI contrasts using global curve optimization.

    Science.gov (United States)

    Gros, Charley; De Leener, Benjamin; Dupont, Sara M; Martin, Allan R; Fehlings, Michael G; Bakshi, Rohit; Tummala, Subhash; Auclair, Vincent; McLaren, Donald G; Callot, Virginie; Cohen-Adad, Julien; Sdika, Michaël

    2018-02-01

mean square error of 1.02 mm. OptiC achieved superior results compared to a state-of-the-art spinal cord localization technique based on the Hough transform, especially on pathological cases, with an average mean square error of 1.08 mm vs. 13.16 mm (Wilcoxon signed-rank test p-value < .01). Images containing brain regions were identified with 99% precision, and within them brain and spine regions were separated with a distance error of 9.37 mm compared to ground truth. Validation results on a challenging dataset suggest that OptiC could reliably be used for subsequent quantitative analysis tasks, opening the door to more robust analysis of pathological cases. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Automatic quantitative renal scintigraphy

    International Nuclear Information System (INIS)

    Valeyre, J.; Deltour, G.; Delisle, M.J.; Bouchard, A.

    1976-01-01

    Renal scintigraphy data may be analyzed automatically by the use of a processing system coupled to an Anger camera (TRIDAC-MULTI 8 or CINE 200). The computing sequence is as follows: normalization of the images; background noise subtraction on both images; evaluation of mercury 197 uptake by the liver and spleen; calculation of the activity fractions on each kidney with respect to the injected dose, taking into account the kidney depth and the results referred to normal values; edition of the results. Automation minimizes the scattering parameters and by its simplification is a great asset in routine work [fr

  15. AUTOMATIC FREQUENCY CONTROL SYSTEM

    Science.gov (United States)

    Hansen, C.F.; Salisbury, J.D.

    1961-01-10

A control is described for automatically matching the frequency of a resonant cavity to that of a driving oscillator. The driving oscillator is disconnected from the cavity and a secondary oscillator is actuated in which the cavity is the frequency-determining element. A low frequency is mixed with the output of the driving oscillator and the resultant lower and upper sidebands are separately derived. The frequencies of the sidebands are compared with the secondary oscillator frequency, deriving a servo control signal to adjust a tuning element in the cavity and match the cavity frequency to that of the driving oscillator. The driving oscillator may then be connected to the cavity.
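In idealized form, the servo action described here (derive a frequency error, then drive a tuning element until the cavity matches the driving oscillator) reduces to a proportional feedback loop. The sketch below is a generic illustration with made-up numbers, not a model of the patented sideband-mixing circuit.

```python
# Idealized proportional frequency servo: each cycle, the tuner moves the
# cavity resonance a fixed fraction of the measured frequency error.
f_drive = 100.000     # MHz, driving-oscillator frequency (made up)
f_cavity = 95.000     # MHz, initial cavity resonance (made up)
gain = 0.5            # loop gain per servo cycle

for _ in range(60):
    error = f_drive - f_cavity      # error from the frequency comparison
    f_cavity += gain * error        # adjust the tuning element

# The residual error shrinks geometrically (factor 1 - gain per cycle),
# so the cavity converges to the driving frequency.
```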

  16. Automatic dipole subtraction

    International Nuclear Information System (INIS)

    Hasegawa, K.

    2008-01-01

The Catani-Seymour dipole subtraction is a general procedure for treating infrared divergences in real emission processes at next-to-leading order in QCD. We automated the procedure in a computer code. The code is useful especially for processes with many parton legs. In this talk, we first explain the algorithm of the dipole subtraction and the overall structure of our code. After that we show results for some processes in which the infrared divergences of real emission processes are subtracted. (author)

  17. Automatic programmable air ozonizer

    International Nuclear Information System (INIS)

    Gubarev, S.P.; Klosovsky, A.V.; Opaleva, G.P.; Taran, V.S.; Zolototrubova, M.I.

    2015-01-01

In this paper we describe a compact, economical, easy-to-manage automatic air ozonizer developed at the Institute of Plasma Physics of the NSC KIPT. It is designed for sanitation and disinfection of premises and for cleaning the air of foreign odors. A distinctive feature of the developed device is that it generates a given concentration of ozone, approximately 0.7 of the maximum allowable concentration (MAC), and automatically maintains that level. This allows people to remain inside the treated premises during operation. A microprocessor controller was developed to control the operation of the ozonizer.

  18. 'Grounded' Politics

    DEFF Research Database (Denmark)

    Schmidt, Garbi

    2012-01-01

play within one particular neighbourhood: Nørrebro in the Danish capital, Copenhagen. The article introduces the concept of grounded politics to analyse how groups of Muslim immigrants in Nørrebro use the space, relationships and history of the neighbourhood for identity-political statements. The article further describes how national political debates over the Muslim presence in Denmark affect identity-political manifestations within Nørrebro. By using Duncan Bell's concept of mythscape (Bell, 2003), the article shows how some political actors idealize Nørrebro's past to contest the present ethnic and religious diversity of the neighbourhood and, further, to frame what they see as the deterioration of genuine Danish identity.

  19. Matters of Fact: Language, Science, and the Status of Truth in Late Colonial Korea

    Directory of Open Access Journals (Sweden)

    Christopher P. Hanscom

    2014-03-01

Full Text Available This article addresses the status of the fact in literary and historical discourses in late colonial Korea, focusing on the elaboration of the relationship between scientific and literary truths, primarily in the work of philosopher and critic Sŏ Insik (1906–?). It points to a growing tendency in late 1930s and early 1940s Korea to question the veracity of the fact (or of empiricism more broadly) in an environment where the enunciation of the colonial subject had been rendered problematic and objective statements had arguably lost their connection with social reality. In a period when the relationship between signifier and referent had come into question, how did this major critic understand the relationship between science and literature, or between truth and subjectivity? Sŏ warns against a simplistic apprehension of the notion of truth as unilaterally equivalent with what he calls “scientific truth” (kwahakchŏk chilli), a nomological truth based on objective observation and confirmation by universal principles, and argues that a necessary complement to apparently objective truth is “literary truth” (munhakchŏk chinsil). Against the fixed, conceptual form of scientific thought, literary truth presents itself as an experiential truth that returns to the sensory world of the sociolinguistic subject (chuch’e) as a source of credibility.

  20. Automatic personnel contamination monitor

    International Nuclear Information System (INIS)

    Lattin, Kenneth R.

    1978-01-01

United Nuclear Industries, Inc. (UNI) has developed an automatic personnel contamination monitor (APCM), which uniquely combines the design features of both portal and hand-and-shoe monitors. In addition, this prototype system has a number of new features, including: microcomputer control and readout, nineteen large-area gas flow detectors, real-time background compensation, self-checking for system failures, and card-reader identification and control. UNI's experience in operating the Hanford N Reactor, located in Richland, Washington, has shown the necessity of automatically monitoring plant personnel for contamination after they have passed through the procedurally controlled radiation zones. This final check ensures that each radiation zone worker has been properly checked before leaving company-controlled boundaries. Investigation of the commercially available portal and hand-and-shoe monitors indicated that they did not have the sensitivity or sophistication required for UNI's application; therefore, a development program was initiated, resulting in the subject monitor. Field testing shows good sensitivity to personnel contamination, with the majority of alarms showing contaminants on clothing, face and head areas. In general, the APCM has sensitivity comparable to portal survey instrumentation. The inherent stand-in, walk-on feature of the APCM not only makes it easy to use but also makes it difficult to bypass. (author)

  1. Thoughts on Chemistry and Scientific Truth in Post-Factual Times.

    Science.gov (United States)

    Schreiner, Peter R

    2018-04-19

    "… The value and meaning of scientific truth has not been overcome by postmodernism or post-factual tendencies. Just because politics is mostly a representation of opinions, this does not imply that truth has become irrelevant. Quite the opposite, the value of truth is growing in turbulent times and for scientists it constitutes the currency of credibility and accountability …" Read more in the Guest Editorial by Peter R. Schreiner. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Truth-telling contra perfectionist liberalism: Muslim parrhēsíastes in Denmark

    DEFF Research Database (Denmark)

    Renders, Johannes

In this paper, I first offer a general outline and reflection on the notion of parrhēsía (truth-telling), as popularized by Foucault. Secondly, I discuss Foucault’s history of problematizations, with comments on what he called “games of truth” and the Cartesian conception of truth-telling. Thirdly, I sketch a trend in the current Danish public and political sphere, defining the notion of “perfectionist liberalism” and how it translates to the Danish context, including concrete examples and notes on “liberal intolerance arguments”. Lastly, I address the condition of Muslim parrhēsíastes (truth

  3. Using GPS-surveyed intertidal zones to determine the validity of shorelines automatically mapped by Landsat water indices

    Science.gov (United States)

    Kelly, Joshua T.; Gontz, Allen M.

    2018-03-01

Satellite remote sensing has been used extensively in a variety of shoreline studies and validated using aerial photography. This ground truth method only represents an instantaneous depiction of the shoreline at the time of acquisition and does not take into account the spatial and temporal variability of the dynamic shoreline boundary. The capability of Landsat 8's Operational Land Imager sensor to accurately delineate a shoreline is assessed by comparing all known Landsat water index-derived shorelines with two GPS-surveyed intertidal zones that coincide with the satellite flyover date, one of which had near-neap tide conditions. Seven indices developed for automatically classifying water pixels were evaluated for their ability to delineate shorelines. The shoreline is described here as the area between maximum low and high tide, otherwise known as the intertidal zone. The high-water line, or wet/dry sediment line, was chosen as the shoreline indicator to be mapped using a handheld GPS. The proportion of the Landsat-derived shorelines that fell within this zone and their alongshore profile lengths were calculated. The most frequently used water index, the Normalized Difference Water Index (NDWI), predecessor to the Modified Normalized Difference Water Index (MNDWI), was found to be the least accurate by a significant margin. Other indices required calibration of their threshold value to achieve accurate results, thus diminishing their replicability in other regions. MNDWI was determined to be the best index for automated shoreline mapping, based on its superior accuracy and repeatable, stable threshold value.
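The two indices named in the abstract have simple band-ratio definitions: NDWI = (Green - NIR) / (Green + NIR) (McFeeters, 1996) and MNDWI = (Green - SWIR1) / (Green + SWIR1) (Xu, 2006), with water conventionally mapped where the index exceeds zero. A minimal sketch, using invented reflectance values rather than real Landsat 8 data:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters (1996) NDWI: (Green - NIR) / (Green + NIR)."""
    g, n = np.asarray(green, float), np.asarray(nir, float)
    return (g - n) / (g + n)

def mndwi(green, swir1):
    """Xu (2006) MNDWI: (Green - SWIR1) / (Green + SWIR1)."""
    g, s = np.asarray(green, float), np.asarray(swir1, float)
    return (g - s) / (g + s)

# Toy surface reflectances (Landsat 8 OLI green, NIR, SWIR1 bands):
# one water pixel, one dry-land pixel. Values are illustrative only.
green = np.array([0.06, 0.10])
nir   = np.array([0.02, 0.30])
swir1 = np.array([0.01, 0.25])

water_mask = mndwi(green, swir1) > 0   # common zero-threshold rule
```

The abstract's threshold-calibration point corresponds to replacing the fixed `> 0` rule with a scene-specific cutoff, which is exactly what hurts replicability across regions.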

  4. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von

    2004-01-01

Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples.
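The two-dimensional Thue-Morse example admits a very short construction. One common 2-D generalization (there are several) assigns to (m, n) the parity of the combined binary digit sums of the coordinates; membership in the resulting set depends only on the base-2 digits, which is what makes it 2-automatic. A sketch under that assumption:

```python
def digit_sum2(n: int) -> int:
    """Sum of the binary digits of n (number of 1 bits)."""
    return bin(n).count("1")

def t(m: int, n: int) -> int:
    """One common 2-D Thue-Morse value: parity of the combined
    binary digit sums of the two coordinates."""
    return (digit_sum2(m) + digit_sum2(n)) % 2

# The automatic set is D = {(m, n) : t(m, n) = 1}; membership depends
# only on the base-2 digits of m and n, read jointly by a finite automaton.
grid = [[t(m, n) for n in range(4)] for m in range(4)]

# Decimation/self-similarity: doubling a coordinate appends a 0 bit,
# so t(2m, 2n) == t(m, n) for all m, n.
```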

  5. On Telling a Lie to Reveal the Truth: Mongrel

    Directory of Open Access Journals (Sweden)

    James Clinton Oleson

    2017-07-01

    Full Text Available South African author William Dicey’s 2016 collection of essays, Mongrel, operates as a literary prism, refracting and clarifying literary and sociological elements of life. The book’s six essays grapple with a sprawling range of subjects, including: the elusive distinction between fiction and non-fiction, literary footnotes, the endeavor of writing, the search for truth, the citizen’s search for community, the relevance of ethnicity in post-apartheid society, the perpetuation of socioeconomic disadvantage, the tragedy of criminal justice, and collective moral culpability for climate change. History, economics, and practical ethics underscore the entire collection, and exogenous sources such as Nabokov’s Pale Fire and Coetzee’s Disgrace can lend depth to the works. The essays of Mongrel can be understood as six discrete works, but they can also be understood as a meta-narrative that takes as its object the sociological search for restored community and the literary quest for authenticity.

  6. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    Science.gov (United States)

    Kreysa, Helene; Kessler, Luise; Schweinberger, Stefan R

    2016-01-01

    A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  7. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    Directory of Open Access Journals (Sweden)

    Helene Kreysa

Full Text Available A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  8. An Indispensable Truth How Fusion Power Can Save the Planet

    CERN Document Server

    Chen, Francis F

    2011-01-01

    Both global warming and oil shortage can be solved by controlled fusion, a clean power source that will serve mankind for millennia. The idea of hydrogen fusion as well as its difficulties are presented in non-technical language to dispel the notion that fusion is always 50 years away. This book also summarizes the evidence for climate change and explains the principles of both fossil and "green" energy sources to show that fusion is the best alternative for central-station power in the near term as well as the far future. Praise for An Indispensable Truth: How Fusion Power Can Save the Planet: "In this study Professor Chen outlines the underlying physics, recent progress in achieving advanced plasmas and magnetic confinement, and hopes for the future. He recognizes the difficulties that remain in engineering a fusion reactor, but he remains optimistic regarding ultimate success, yet fearful of the consequences were we to fail."- James R. Schlesinger, former Chairman, Atomic Energy Commission; Director,...

  9. An inconvenient truth: treatment of displaced paediatric supracondylar humeral fractures.

    LENUS (Irish Health Repository)

    Donnelly, M

    2012-06-01

    The need for emergent management of displaced paediatric supracondylar humeral fractures is being questioned in the literature. Open reduction rates of up to 46% have been reported in the non-emergent management of these injuries. At our institution these fractures are managed as operative emergencies by senior personnel. To examine the ongoing need for this policy we reviewed our results. All patients managed over a five-year period with Gartland type IIB or III paediatric supracondylar humeral fractures were identified and a comprehensive chart and radiographic review undertaken. The mean time from injury to fracture reduction and stabilization was 6.6 h. Consultants performed or supervised 90% of cases. Open reduction was necessary in 5% of cases. Complications included a perioperative nerve injury rate of 6% and a superficial pin site infection rate of 3%. This study suggests that, despite the challenge to trauma on-call rostering, the emergency management of these injuries is advantageous to patients in units of our size. Based on the data presented here we continue our practice of emergent management. We suggest that units of a similar size to our own would show a benefit from an analogous policy, albeit an inconvenient truth.

  10. ROBIN: a platform for evaluating automatic target recognition algorithms: I. Overview of the project and presentation of the SAGEM DS competition

    Science.gov (United States)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    The last five years have seen a renewal of Automatic Target Recognition applications, mainly because of the latest advances in machine learning techniques. In this context, large collections of image datasets are essential for training algorithms as well as for their evaluation. Indeed, the recent proliferation of recognition algorithms, generally applied to slightly different problems, makes comparing them through clean evaluation campaigns necessary. The ROBIN project tries to fulfil these two needs by putting unclassified datasets, ground truths, competitions and metrics for the evaluation of ATR algorithms at the disposal of the scientific community. The scope of this project includes single- and multi-class generic target detection and generic target recognition, in military and security contexts. To our knowledge, it is the first time that a database of this importance (several hundred thousand visible and infrared hand-annotated images) has been publicly released. Funded by the French Ministry of Defence (DGA) and by the French Ministry of Research, ROBIN is one of the ten Techno-vision projects. Techno-vision is a large and ambitious government initiative for building evaluation means for computer vision technologies, for various application contexts. ROBIN's consortium includes major companies and research centres involved in Computer Vision R&D in the field of defence: Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES. This paper, which first gives an overview of the whole project, is focused on one of ROBIN's key competitions, the SAGEM Defence Security database. This dataset contains more than eight hundred ground and aerial infrared images of six different vehicles in cluttered scenes including distracters. Two different sets of data are available for each target. The first set includes different views of each vehicle at close range in a "simple" background, and can be used to train algorithms. The second set

  11. Automatic identification in mining

    Energy Technology Data Exchange (ETDEWEB)

    Puckett, D; Patrick, C [Mine Computers and Electronics Inc., Morehead, KY (United States)

    1998-06-01

    The feasibility of monitoring the locations and vital statistics of equipment and personnel in surface and underground mining operations has increased with advancements in radio frequency identification (RFID) technology. This paper addresses the use of RFID technology, which is relatively new to the mining industry, to track surface equipment in mine pits, loading points and processing facilities. Specific applications are discussed, including both simplified and complex truck tracking systems and an automatic pit ticket system. This paper concludes with a discussion of the future possibilities of using RFID technology in mining, including monitoring heart and respiration rates, body temperatures and exertion levels; monitoring repetitious movements for the study of work habits; and logging air quality via personnel sensors. 10 refs., 5 figs.

  12. Automatic quantitative metallography

    International Nuclear Information System (INIS)

    Barcelos, E.J.B.V.; Ambrozio Filho, F.; Cunha, R.C.

    1976-01-01

    The quantitative determination of metallographic parameters is analysed through the description of the Micro-Videomat automatic image analysis system, applied to the volumetric percentage of perlite in nodular cast irons, porosity and average grain size in high-density sintered pellets of UO2, and grain size of ferritic steel. Techniques adopted are described and results obtained are compared with the corresponding ones from the direct counting process: counting of systematic points (grid) to measure volume, and the intersections method, utilizing a circumference of known radius, for the average grain size. The technique adopted for nodular cast iron resulted from the small difference in optical reflectivity of graphite and perlite. Porosity evaluation of sintered UO2 pellets is also analyzed

  13. Semi-automatic fluoroscope

    International Nuclear Information System (INIS)

    Tarpley, M.W.

    1976-10-01

    Extruded aluminum-clad uranium-aluminum alloy fuel tubes must pass many quality control tests before irradiation in Savannah River Plant nuclear reactors. Nondestructive test equipment has been built to automatically detect high and low density areas in the fuel tubes using x-ray absorption techniques with a video analysis system. The equipment detects areas as small as 0.060-in. dia with 2 percent penetrameter sensitivity. These areas are graded as to size and density by an operator using electronic gages. Video image enhancement techniques permit inspection of ribbed cylindrical tubes and make possible the testing of areas under the ribs. Operation of the testing machine, the special low light level television camera, and analysis and enhancement techniques are discussed

  14. Automatic surveying techniques

    International Nuclear Information System (INIS)

    Sah, R.

    1976-01-01

    In order to investigate the feasibility of automatic surveying methods in a more systematic manner, the PEP organization signed a contract in late 1975 for TRW Systems Group to undertake a feasibility study. The completion of this study resulted in TRW Report 6452.10-75-101, dated December 29, 1975, which was largely devoted to an analysis of a survey system based on an Inertial Navigation System. This PEP note is a review and, in some instances, an extension of that TRW report. A second survey system, which employed an "Image Processing System", was also considered by TRW, and it will be reviewed in the last section of this note. 5 refs., 5 figs., 3 tabs

  15. AUTOMATIC ARCHITECTURAL STYLE RECOGNITION

    Directory of Open Access Journals (Sweden)

    M. Mathias

    2012-09-01

    Full Text Available Procedural modeling has proven to be a very valuable tool in the field of architecture. In the last few years, research has soared to automatically create procedural models from images. However, current algorithms for this process of inverse procedural modeling rely on the assumption that the building style is known. So far, the determination of the building style has remained a manual task. In this paper, we propose an algorithm which automates this process through classification of architectural styles from facade images. Our classifier first identifies the images containing buildings, then separates individual facades within an image and determines the building style. This information could then be used to initialize the building reconstruction process. We have trained our classifier to distinguish between several distinct architectural styles, namely Flemish Renaissance, Haussmannian and Neoclassical. Finally, we demonstrate our approach on various street-side images.

  16. Truth Commissions in Latin America. The hope of a new future

    Directory of Open Access Journals (Sweden)

    Nelson Molina Valencia

    2017-01-01

    Full Text Available This article presents the implementation of the victims' right to the truth through the creation of eleven Truth Commissions, established in Argentina, Chile, El Salvador, Guatemala, Uruguay, Peru, Paraguay, Colombia, Ecuador, Honduras and Brazil, which emerged as the product of peace agreements or transitional processes. The Truth Commissions received the assignment to investigate violations of Human Rights and breaches of International Humanitarian Law by military dictatorships, authoritarian regimes or internal armed conflicts. This review shows that, in addition to the subjects that constitute the Commissions, they work due to eight conditions: determined duration; legitimacy; themes; working methodologies; media of dissemination of results; attention to Disarmament, Demobilization and Reintegration processes; repair strategies; and requests for forgiveness and reconciliation. The Truth Commissions, while transforming the conflicts they address, have not achieved, as a strategy, the integral promotion of coexistence and reconciliation.

  17. Truth and beauty in cosmology: does the Universe have an aesthetic?

    CERN Multimedia

    Impey, C

    2004-01-01

    "Astronomy is an empirical science, yet scientific definitions of truth and beauty are closely tied to the fact that mathematics appears to provide an accurate description of the physical Universe" (2 pages)

  18. Values in Fritz Perls's Gestalt Therapy: On the Dangers of Half-Truths.

    Science.gov (United States)

    Cadwallader, Eva H.

    1984-01-01

    Examines some of the values in Perls's theory of psychotherapy, which his Gestalt Prayer epitomizes. Argues that at least five of the major value claims presupposed by his psychotherapeutic theory and practice are in fact dangerous half-truths. (JAC)

  19. Three Functions of the School Newspaper: The Truth Shop, The Persuasion Podium, The Pleasure Dome.

    Science.gov (United States)

    Campbell, Laurence R.

    This Quill and Scroll Study, which is illustrated with numerous tables, concerns the following subjects: The School Newspaper as a Truth Shop; The School Newspaper as a Pleasure Dome; and The School Newspaper as a Persuasion Podium. (DB)

  20. A Bridge to Reconciliation: A Critique of the Indian Residential School Truth Commission

    Directory of Open Access Journals (Sweden)

    Marc A. Flisfeder

    2010-05-01

    Full Text Available In the past year, the Government of Canada has established the Indian Residential Schools (IRS) Truth and Reconciliation Commission (TRC) to address the deleterious effect that the IRS system has had on Aboriginal communities. This paper argues that the TRC as an alternative dispute resolution mechanism is flawed, since it focuses too much on truth at the expense of reconciliation. While the proliferation of historical truths is of great importance, without mapping a path to reconciliation the Canadian public will simply learn about the mistakes of the past without addressing the residual, communal impacts of the IRS system that continue to linger. The Truth and Reconciliation Commission must therefore approach its mandate broadly and in a manner reminiscent of the Royal Commission on Aboriginal Peoples of 1996.

  1. Science, religion, and the quest for knowledge and truth: an Islamic perspective

    Science.gov (United States)

    Guessoum, Nidhal

    2010-03-01

    This article consists of two parts. The first one is to a large extent a commentary on John R. Staver's "Skepticism, truth as coherence, and constructivist epistemology: grounds for resolving the discord between science and religion?" The second part is a related overview of Islam's philosophy of knowledge and, to a certain degree, science. In responding to Staver's thesis, I rely strongly on my scientific education and habit of mind; I also partly found my views on my Islamic background, though I enlarge my scope to consider western philosophical perspectives as well. I differ with Staver in his definition of the nature, scope, and goals of religion (concisely, "explaining the world and how it works"), and I think this is the crux of the matter in attempting to resolve the perceived "discord" between science and religion. The heart of the problem is in the definition of the domains of action of science and religion, and I address this issue at some length, both generically and using Islamic principles, which are found to be very widely applicable. The concept of "reality," so important to Staver's thesis, is also critically reviewed. The philosophy of knowledge (and of science) in Islam is briefly reviewed in the aim of showing the great potential for harmony between the two "institutions" (religion and science), on the basis of the following philosophy: science describes nature, whereas religion gives us not only a philosophy of existence but also an interpretative cloak for the discoveries of science and for the meaning of the cosmos and nature. I conclude by insisting that though science and religion can be considered as two worldviews that propose to describe "reality" and to explain our existence and that of the world; they may come to compete for humans' minds and appear to enter into a conflicting position, but only if and when we confuse their domains and modes of action. 

  2. The Politics of Violence, Truth, and Reconciliation in the Arab Middle East

    DEFF Research Database (Denmark)

    This book treats political and cultural attempts to create truth and reconciliation processes in the Arab Middle East. It contains studies of Morocco, Algeria, Sudan, Lebanon, Palestine, Iraq and Syria.

  3. Cognitive dissonance, social comparison, and disseminating untruthful or negative truthful eWOM messages

    OpenAIRE

    Liu, Y-L; Keng, Ching-Jui

    2014-01-01

    In this research we explored consumers' intentions to provide untruthful or negative truthful electronic word-of-mouth (eWOM) messages when undergoing conflicting cognitive dissonance and after experiencing social comparison. We recruited 480 Taiwanese Internet users to participate in a scenario-based experiment. The findings show that after making downward comparisons on the Internet, consumers with high cognitive dissonance were more inclined to disseminate negative truthful eWOM messages c...

  4. Islamic positivism and scientific truth: Qur'an and archeology in a creationist documentary film

    OpenAIRE

    Dupret , Baudouin; Gutron , Clémentine

    2016-01-01

    The ambition of “scientific creationism” is to prove that science actually confirms religion. This is especially true in the case of Muslim creationism, which adopts a reasoning of a syllogistic type: divine revelation is truth; good science confirms truth; divine revelation is henceforth scientifically proven. Harun Yahya is a prominent Muslim “creationist” whose website hosts many texts and documentary films, among which “Evidence of the true faith in historical sour...

  5. Truth and victims’ rights: Towards a legal epistemology of international criminal justice

    OpenAIRE

    Aguilera, Edgar R.

    2013-01-01

    The author advances the thesis that the now well established international crime victims' right to know the truth creates an opportunity for an applied epistemology reflection regarding international criminal justice. At the heart of the project lies the author's argument that this victims' right, if taken seriously, implies both the right that the international criminal justice system's normative structures or legal frameworks and practices feature a truth-promoting profile, or in other word...

  6. A novel rumor diffusion model considering the effect of truth in online social media

    Science.gov (United States)

    Sun, Ling; Liu, Yun; Zeng, Qing-An; Xiong, Fei

    2015-12-01

    In this paper, we propose a model to investigate how truth affects rumor diffusion in online social media. Our model reveals a relation between rumor and truth: when a rumor is diffusing, the truth about the rumor also diffuses with it. Two patterns by which agents identify a rumor, self-identification and passive learning, are taken into account. Combining theoretical proof and simulation analysis, we find that the threshold value of rumor diffusion is negatively correlated with the connectivity between nodes in the network and the probability β of agents knowing the truth. Increasing β can reduce the maximum density of the rumor spreaders and slow down the generation speed of new rumor spreaders. On the other hand, we conclude that the best rumor diffusion strategy must balance the probability of forwarding the rumor and the probability of agents losing interest in it. A high spread rate λ of the rumor would lead to a surge in truth dissemination, which will greatly limit the diffusion of the rumor. Furthermore, in the case of unknown λ, increasing β can effectively reduce the maximum proportion of agents who do not know the truth, but cannot narrow the rumor diffusion range in a certain interval of β.
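The qualitative claim that increasing β suppresses the peak density of rumor spreaders can be illustrated with a toy agent-based simulation. This is a hedged sketch, not the paper's model: the homogeneous random mixing, the update rule, and the parameter names `lam` and `beta` are all assumptions made for the example.

```python
import random

def simulate_rumor(n=2000, k=8, lam=0.3, beta=0.1, steps=60, seed=7):
    """Toy rumor-vs-truth diffusion.

    States: 0 = ignorant, 1 = rumor spreader, 2 = knows the truth.
    lam  : per-contact probability a spreader passes the rumor to an ignorant
    beta : per-step probability an agent learns the truth (self-identification)
    Returns the rumor-spreader density at each step.
    """
    rng = random.Random(seed)
    state = [1] * 5 + [0] * (n - 5)          # seed five rumor spreaders
    spreader_density = []
    for _ in range(steps):
        new = state[:]
        for i in range(n):
            if state[i] == 0:
                # k random contacts stand in for network neighbours
                contacts = rng.sample(range(n), k)
                if any(state[j] == 1 for j in contacts) and rng.random() < lam:
                    new[i] = 1               # ignorant adopts the rumor
            if new[i] != 2 and rng.random() < beta:
                new[i] = 2                   # truth overrides rumor or ignorance
        state = new
        spreader_density.append(state.count(1) / n)
    return spreader_density
```

Comparing `max(simulate_rumor(beta=0.02))` with `max(simulate_rumor(beta=0.25))` reproduces the stated effect: the larger β gives a much lower peak spreader density.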

  7. TESTING OF THE TRUTH IN ANDREI PLATONOV’S TALE THE FOUNDATION PIT

    Directory of Open Access Journals (Sweden)

    Marina Vladimirovna Zavarkina

    2014-11-01

    Full Text Available Analyzing the manuscript of Andrei Platonov’s tale The Foundation Pit and his early journalism, the author traces the evolution of Platonov’s views on the problem of the search for the truth. Analysis of the dynamic transcription of The Foundation Pit manuscript enabled the author to show that by the end of the 1920s Platonov had abandoned rationalistic interpretations of the concept of «truth», including in its "Bogdanov’s edition". Platonov increasingly questioned the capability of socialist science, based on materialism, to find out the truth about the world, and changed his views on work as the only method of learning the truth. He departed from the materialistic concept of learning and shifted towards the religious and philosophical tradition: in the end the formula of P. Florensky, "truth-estina" (to be in existence), becomes more and more essential, and a shift of the concept of "truth" is outlined, from a cognitive-materialistic category ("invent"/"to do" truth) to an ontological and moral category.

  8. Prosecuting International Crimes at National Level: Lessons from the Argentine ‘Truth-Finding Trials’

    Directory of Open Access Journals (Sweden)

    Elena Maculan

    2012-01-01

    Full Text Available Truth-finding trials (juicios por la verdad) constitute a novel solution devised by the Argentine judicial system to cope with crimes committed by the past military dictatorship. This mechanism uses criminal courts as well as criminal procedure in order to investigate the truth about the dictatorship's crimes; however, the trials allow judges neither to establish criminal responsibility nor to punish the perpetrators of crimes. This limitation is due to the inability, imposed by the Full Stop and Due Obedience Laws, to prosecute the perpetrators of crimes. From the perspective of criminal law, truth-finding trials present two problematic features: firstly, their creation and regulation are set by judges, which has caused the development of many non-homogeneous local solutions and, secondly, their hybrid nature, which entails a possible subversion of conventional forms and goals in the context of the criminal trial. The paper also describes the current situation, since the Argentine impunity laws were declared unconstitutional and criminal proceedings reopened. The new framework provokes questions about the relationship between the reopened criminal trials and the truth-finding investigations, not only with regard to evidentiary issues but also with respect to the reason why the truth-finding investigations are still held. Finally, the shift from a non-punitive approach to the current full criminal accountability seems to suggest that truth-finding trials were merely a temporary solution, while the notion of the full prosecution and punishment of State crimes was never really set aside.

  9. [Truth telling and advance care planning at the end of life].

    Science.gov (United States)

    Hu, Wen-Yu; Yang, Chia-Ling

    2009-02-01

    One of the core values in terminal care is respect for patient 'autonomy'. This essay begins with a discussion of medical ethics principles and the Natural Death Act in Taiwan and then summarizes two medical ethical dilemmas, truth telling and advance care planning (ACP), faced in the development of hospice and palliative care in Taiwan. The terminal truth telling process incorporates the four basic principles of Assessment and preparation, Communication with family, Truth-telling process, and Support and follow up (the so-called "ACTs"). Many experts suggest practicing ACP by abiding by the following five steps: (1) presenting and illustrating topics; (2) facilitating a structured discussion; (3) completing documents with advance directives (ADs); (4) reviewing and updating ADs; and (5) applying ADs in clinical circumstances. Finally, the myths and challenges in truth telling and ADs include the influence of healthcare system procedures and priorities, inadequate communication skills, and the psychological barriers of medical staff. Good communication skills are critical to truth telling and ACP. Significant discussion about ACP should help engender mutual trust between patients and the medical staff who take the time to establish such relationships. Promoting patient autonomy by providing the opportunity of a good death is an important goal of truth telling and ACP, in which patients have opportunities to choose their terminal treatment.

  10. Telling the truth about nuclear inside the organisation

    International Nuclear Information System (INIS)

    Stiopol, Mihaela; Bilegan, losif Constantin

    2002-01-01

    Internal communication represents an important element of general communication and public relations activities, especially in the nuclear sector. A fundamental truth of public relations says that 'public relations start at home'. The meaning of this affirmation is that if internal information is not handled correctly, the organization cannot carry out real and coherent programs in connection with society. An objective is that when the members of an organization talk about it, they must represent 'a single voice, which presents the same coherent message'. One of the fundamental principles is that well-informed people are deeply motivated and the image of the organization is correctly conveyed to the media. There are four main directions of internal messages that must concentrate the interest of the personnel: - The scope and the perspective of the organization; - The role of each employee within the organization; - Activities (involvement in educational and recreational activities is necessary for employee morale); - Updated information. An employee who is well informed about the general communication policy of the company is more efficient. The means through which internal communication can be achieved are the following: - Printed materials; - Electronic means of communication (intranet); - Oral communication; - Advertising means. In connection with the above-mentioned ideas, the paper presents the 'NUCLEARELECTRICA' company's internal communication policy, answering the following questions: 1. What does internal communication mean? 2. Is internal communication necessary and why? 3. How can internal communication be carried out? 4. How did we achieve internal communication within our company? 5. What do we intend to realize in the future? (author)

  11. The truthful signalling hypothesis: an explicit general equilibrium model.

    Science.gov (United States)

    Hausken, Kjell; Hirshleifer, Jack

    2004-06-21

    In mating competition, the truthful signalling hypothesis (TSH), sometimes known as the handicap principle, asserts that higher-quality males signal while lower-quality males do not (or else emit smaller signals). Also, the signals are "believed", that is, females mate preferentially with higher-signalling males. Our analysis employs specific functional forms to generate analytic solutions and numerical simulations that illuminate the conditions needed to validate the TSH. Analytic innovations include: (1) A Mating Success Function indicates how female mating choices respond to higher and lower signalling levels. (2) A congestion function rules out corner solutions in which females would mate exclusively with higher-quality males. (3) A Malthusian condition determines equilibrium population size as related to per-capita resource availability. Equilibria validating the TSH are achieved over a wide range of parameters, though not universally. For TSH equilibria it is not strictly necessary that the high-quality males have an advantage in terms of lower per-unit signalling costs, but a cost difference in favor of the low-quality males cannot be too great if a TSH equilibrium is to persist. And although the literature has paid less attention to these points, TSH equilibria may also fail if: the quality disparity among males is too great, or the proportion of high-quality males in the population is too large, or if the congestion effect is too weak. Signalling being unprofitable in aggregate, it can take off from a no-signalling equilibrium only if the trait used for signalling is not initially a handicap, but instead is functionally useful at low levels. Selection for this trait sets in motion a bandwagon, whereby the initially useful indicator is pushed by male-male competition into the domain where it does indeed become a handicap.

  12. Experimental investigation of an accelerometer controlled automatic braking system

    Science.gov (United States)

    Dreher, R. C.; Sleeper, R. K.; Nayadley, J. R., Sr.

    1972-01-01

    An investigation was made to determine the feasibility of an automatic braking system for arresting the motion of an airplane by sensing and controlling braked wheel decelerations. The system was tested on a rotating drum dynamometer by using an automotive tire, wheel, and disk-brake assembly under conditions which included two tire loadings, wet and dry surfaces, and a range of ground speeds up to 70 knots. The controlling parameters were the rates at which brake pressure was applied and released and the Command Deceleration Level which governed the wheel deceleration by controlling the brake operation. Limited tests were also made with the automatic braking system installed on a ground vehicle in an effort to provide a more realistic proof of its feasibility. The results of this investigation indicate that a braking system which utilizes wheel decelerations as the control variable to restrict tire slip is feasible and capable of adapting to rapidly changing surface conditions.
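The apply/release scheme described (ramp brake pressure up while measured wheel deceleration is below the Command Deceleration Level, ramp it down otherwise) can be sketched as a toy bang-bang control loop. The linear "plant" (deceleration proportional to pressure), the gain, and all rates below are invented for illustration and are not values from the NASA test rig.

```python
def simulate_braking(command_decel=4.0, apply_rate=2.0, release_rate=4.0,
                     plant_gain=1.5, dt=0.01, steps=500):
    """Bang-bang sketch of deceleration-commanded braking.

    Pressure ramps up at apply_rate while wheel deceleration is below the
    Command Deceleration Level, and ramps down at release_rate once it meets
    or exceeds it, limiting tire slip. Returns the logged deceleration trace.
    """
    pressure = 0.0
    decel_log = []
    for _ in range(steps):
        wheel_decel = plant_gain * pressure   # toy wheel/brake response
        if wheel_decel < command_decel:
            pressure += apply_rate * dt       # apply phase
        else:
            pressure -= release_rate * dt     # release phase
        pressure = max(pressure, 0.0)
        decel_log.append(wheel_decel)
    return decel_log
```

With these numbers the logged deceleration ramps up to the 4.0 command level and then oscillates narrowly around it, the limit-cycle behaviour characteristic of apply/release braking.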

  13. Evaluating the effect of multiple sclerosis lesions on automatic brain structure segmentation

    Directory of Open Access Journals (Sweden)

    Sandra González-Villà

    2017-01-01

    Full Text Available In recent years, many automatic brain structure segmentation methods have been proposed. However, these methods are commonly tested with non-lesioned brains and the effect of lesions on their performance has not been evaluated. Here, we analyze the effect of multiple sclerosis (MS) lesions on three well-known automatic brain structure segmentation methods, namely, FreeSurfer, FIRST and multi-atlas fused by majority voting, which use learning-based, deformable and atlas-based strategies, respectively. To perform a quantitative analysis, 100 synthetic images of MS patients with a total of 2174 lesions are simulated on two public databases with available brain structure ground truth information (IBSR18 and MICCAI’12). The Dice similarity coefficient (DSC) differences and the volume differences between the healthy and the simulated images are calculated for the subcortical structures and the brainstem. We observe that the three strategies are affected when lesions are present. However, the effects of the lesions do not follow the same pattern; the lesions either make the segmentation method underperform or surprisingly augment the segmentation accuracy. The obtained results show that FreeSurfer is the method most affected by the presence of lesions, with DSC differences (generated − healthy) ranging from −0.11 ± 0.54 to 9.65 ± 9.87, whereas FIRST tends to be the most robust method when lesions are present (−2.40 ± 5.54 to 0.44 ± 0.94). Lesion location is not important for global strategies such as FreeSurfer or majority voting, where structure segmentation is affected wherever the lesions exist. On the other hand, FIRST is more affected when the lesions are overlaid or close to the structure of analysis. The most affected structure by the presence of lesions is the nucleus accumbens (from −1.12 ± 2.53 to 1.32 ± 4.00 for the left hemisphere and from −2.40 ± 5.54 to 9.65 ± 9.87 for the right hemisphere), whereas the
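The Dice similarity coefficient underlying the comparison is straightforward to compute from two binary masks. A minimal numpy sketch follows; the function names, the toy masks, and the generated-minus-healthy convention are our reading of the abstract, not the authors' code.

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0                      # both masks empty: perfect agreement
    return 2.0 * np.logical_and(seg, gt).sum() / denom

def dsc_difference(seg_generated, seg_healthy, gt):
    """DSC of the lesioned (generated) image minus DSC of the healthy image,
    both scored against the same ground-truth structure mask."""
    return dice(seg_generated, gt) - dice(seg_healthy, gt)
```

For example, two 4-pixel masks overlapping in 2 pixels give a DSC of 2·2/(4+4) = 0.5, and a negative `dsc_difference` indicates that lesions degraded the segmentation.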

  14. Enhanced Site Characterization of the 618-4 Burial Ground

    Energy Technology Data Exchange (ETDEWEB)

    Murray, Christopher J.; Last, George V.; Chien, Yi-Ju

    2001-09-25

    This report describes the results obtained from deployment of the Enhanced Site Characterization System (ESCS) at the Hanford Site's 618-4 Burial Ground. The objective of this deployment was to use advanced geostatistical methods to integrate and interpret geophysical and ground truth data, to map the physical types of waste materials present in unexcavated portions of the burial ground. One issue of particular interest was the number of drums (containing depleted uranium metal shavings or uranium-oxide powder) remaining in the burial ground and still requiring removal. Fuzzy adaptive resonance theory (ART), a neural network classification method, was used to cluster the study area into 3 classes based on their geophysical signatures. Multivariate statistical analyses and discriminant function analysis (DFA) indicated that the drum area as well as a second area (the SW anomaly) had similar geophysical signatures that were different from the rest of the burial ground. Further analysis of the drum area suggested that as many as 770 to 850 drums may remain in that area. Similarities between the geophysical signatures of the drum area and the SW anomaly suggested that excavation of the SW anomaly area also proceed with caution. Deployment of the ESCS technology was successful in integrating multiple geophysical variables and grouping these observations into clusters that are relevant for planning further excavation of the burial ground. However, the success of the technology could not be fully evaluated because reliable ground truth data were not available to enable calibration of the different geophysical signatures against actual waste types.

  15. The Victim, the International Criminal Court and the Search for Truth: on the Interdependence and Incompatibility of Truths about Mass Atrocity

    NARCIS (Netherlands)

    Stolk, S.

    2015-01-01

    In the debate on the place of victims in international criminal proceedings, the 'search for truth' takes centre stage as an important concern of victims, international criminal tribunals and the wider international community. However, the various claims about the importance of telling and receiving

  16. Automatic EEG spike detection.

    Science.gov (United States)

    Harner, Richard

    2009-10-01

    Since the 1970s, advances in science and technology during each succeeding decade have renewed the expectation of efficient, reliable automatic epileptiform spike detection (AESD). But even when reinforced with better, faster tools, clinically reliable unsupervised spike detection remains beyond our reach. Expert-selected spike parameters were the first and still most widely used for AESD. Thresholds for amplitude, duration, sharpness, rise-time, fall-time, after-coming slow waves, background frequency, and more have been used. It is still unclear which of these wave parameters are essential, beyond peak-to-peak amplitude and duration. Wavelet parameters are very appropriate to AESD but need to be combined with other parameters to achieve desired levels of spike detection efficiency. Artificial Neural Network (ANN) and expert-system methods may have reached peak efficiency. Support Vector Machine (SVM) technology focuses on outliers rather than centroids of spike and nonspike data clusters and should improve AESD efficiency. An exemplary spike/nonspike database is suggested as a tool for assessing parameters and methods for AESD and is available in CSV or Matlab formats from the author at brainvue@gmail.com. Exploratory Data Analysis (EDA) is presented as a graphic method for finding better spike parameters and for the step-wise evaluation of the spike detection process.
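    The classical expert-selected-parameter approach described here (thresholds on amplitude, duration and sharpness) can be sketched as follows; all threshold values are illustrative, not clinically validated:

```python
def detect_spikes(signal, fs, amp_uv=50.0, min_dur_ms=10.0, max_dur_ms=70.0,
                  min_slope_uv=20.0):
    """Toy threshold-based spike detector for one EEG channel (microvolts).

    Flags the peak of each supra-threshold run whose duration lies in an
    epileptiform-like range and whose leading edge is sufficiently sharp.
    Returns sample indices of detected spike peaks.
    """
    peaks, run = [], []
    for i, x in enumerate(list(signal) + [0.0]):   # sentinel closes last run
        if abs(x) >= amp_uv:
            run.append(i)
            continue
        if run:
            dur_ms = len(run) * 1000.0 / fs
            peak = max(run, key=lambda k: abs(signal[k]))
            lo = max(run[0] - 1, 0)                # include the rising step
            if run[-1] > lo:
                slope = max(abs(signal[k + 1] - signal[k])
                            for k in range(lo, run[-1]))
            else:
                slope = 0.0
            if min_dur_ms <= dur_ms <= max_dur_ms and slope >= min_slope_uv:
                peaks.append(peak)
            run = []
    return peaks
```

The record's point stands in the sketch itself: which of these thresholds are essential, and their values, remain open questions.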

  17. The ethics of truth in the era of technological civilization and the information

    Directory of Open Access Journals (Sweden)

    N. V. Nesprava

    2017-01-01

    Full Text Available The technological civilization and information society give a lot of advantages to a modern man in various fields. However, at the same time, we do not see a decrease in hostility and injustice in society. Moreover, technological progress has led to the fact that human civilization is on the verge of a serious challenge caused by environmental disasters and the possibility of unprecedented disastrous consequences of a World War III. In these circumstances, the search for the causes of the current crisis remains pressing, as does the development of concepts that can contribute to overcoming this crisis. One of the most promising theories in this context is the ethics of truth. In modern civilization the issue of the importance of truth is shifted to the periphery of intellectual discourse. Modern civilization uses only substitutes for the issue. Scientific veracity, together with popular opinions circulating in the information sphere, serves as such a substitute for truth. However, these substitutes do not fully reflect the true content of the issue of truth. The truth is not confined to the veracity of a celebrity’s opinion. Analyzing the theories of H. Jonas, B. Hübner, V. Stepin and T. Voronina, the study argues that the processes of losing the meaning of life are intensifying in modern civilization. We argue that a lack of proper attention in modern civilization to the issue of truth inevitably stimulates processes of diluting the meaning of human life. Based on the theories of G. Hegel, H.-G. Gadamer, J. Neidleman and T. Osborne, the study demonstrates that without thinking about the issue of truth one cannot approach the transcendent, and thus one's life cannot be convincingly endowed with meaning, fullness and a purpose of existence. Understanding the issue of truth is especially important in conditions of the modern civilization, because our civilization is facing the possibility of self-destruction as a result of its

  18. Distributed operating system for NASA ground stations

    Science.gov (United States)

    Doyle, John F.

    1987-01-01

    NASA ground stations are characterized by ever changing support requirements, so application software is developed and modified on a continuing basis. A distributed operating system was designed to optimize the generation and maintenance of those applications. Unusual features include automatic program generation from detailed design graphs, on-line software modification in the testing phase, and the incorporation of a relational database within a real-time, distributed system.

  19. Automatic measurement of target crossing speed

    Science.gov (United States)

    Wardell, Mark; Lougheed, James H.

    1992-11-01

    The motion of ground vehicle targets after a ballistic round is launched can be a major source of inaccuracy for small (handheld) anti-armour weapon systems. A method of automatically measuring the crossing component to compensate the fire control solution has been devised and tested against various targets in a range of environments. A photodetector array aligned with the sight's horizontal reticle obtains scene features, which are digitized and processed to separate target from sight motion. Relative motion of the target against the background is briefly monitored to deduce angular crossing rate and a compensating lead angle is introduced into the aim point. Research to gather quantitative data and optimize algorithm performance is described, and some results from field testing are presented.
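    In its simplest kinematic form, the compensation described here multiplies the measured angular crossing rate by the round's time of flight; the fielded algorithm is not specified in the abstract, so the functions and values below are an illustrative sketch only:

```python
def crossing_rate_mrad_s(lateral_speed_m_s, range_m):
    """Small-angle approximation of a target's angular crossing rate,
    given its lateral speed and range (assumed known here)."""
    return 1000.0 * lateral_speed_m_s / range_m

def lead_angle_mrad(crossing_rate, time_of_flight_s):
    """Lead angle to add to the aim point, assuming the measured angular
    crossing rate stays constant over the round's flight."""
    return crossing_rate * time_of_flight_s

# e.g. a vehicle crossing at 10 m/s at 1000 m, with a 1.5 s time of flight:
lead = lead_angle_mrad(crossing_rate_mrad_s(10.0, 1000.0), 1.5)  # 15 mrad
```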

  20. Automatically pairing measured findings across narrative abdomen CT reports.

    Science.gov (United States)

    Sevenster, Merlijn; Bozeman, Jeffrey; Cowhy, Andrea; Trost, William

    2013-01-01

    Radiological measurements are one of the key variables in widely adopted guidelines (WHO, RECIST) that standardize and objectivize response assessment in oncology care. Measurements are typically described in free-text, narrative radiology reports. We present a natural language processing pipeline that extracts measurements from radiology reports and pairs them with extracted measurements from prior reports of the same clinical finding, e.g., lymph node or mass. A ground truth was created by manually pairing measurements in the abdomen CT reports of 50 patients. A Random Forest classifier trained on 15 features achieved superior results in an end-to-end evaluation of the pipeline on the extraction and pairing task: precision 0.910, recall 0.878, F-measure 0.894, AUC 0.988. Representing the narrative content in terms of UMLS concepts did not improve results. Applications of the proposed technology include data mining, advanced search and workflow support for healthcare professionals managing radiological measurements.
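    The pipeline's actual grammar and its 15 pairing features are not given in the record; a toy regex extractor illustrates just the first step, pulling RECIST-style measurements out of narrative text (pattern and normalization are our own assumptions):

```python
import re

# Matches e.g. "2.5 x 1.5 cm" or "14 mm" in narrative report text.
MEASUREMENT = re.compile(
    r"(\d+(?:\.\d+)?)(?:\s*[x×]\s*(\d+(?:\.\d+)?))?\s*(cm|mm)\b",
    re.IGNORECASE)

def extract_measurements(report_text):
    """Return measurements as (long_axis_mm, short_axis_mm_or_None),
    normalized to millimetres, in order of appearance."""
    found = []
    for m in MEASUREMENT.finditer(report_text):
        scale = 10.0 if m.group(3).lower() == "cm" else 1.0
        long_ax = float(m.group(1)) * scale
        short_ax = float(m.group(2)) * scale if m.group(2) else None
        found.append((long_ax, short_ax))
    return found
```

Pairing a current measurement with its prior (the classifier's job in the paper) would then operate on features computed over such normalized tuples, e.g. size difference and textual context.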

  1. Automatic Camera Orientation and Structure Recovery with Samantha

    Science.gov (United States)

    Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.

    2011-09-01

    SAMANTHA is a software capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process either calibrated or uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, is inherently parallel, and is less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, producing compelling point clouds, and quantitatively, by comparing them with laser scans serving as ground truth.
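    The hierarchical organization described (images as leaves, partial reconstructions as internal nodes, merged bottom up) can be illustrated with a toy agglomerative grouping by feature overlap; SAMANTHA's actual matching and tree-balancing criteria are more sophisticated, so this is illustrative only:

```python
def build_hierarchy(image_features):
    """Greedy bottom-up grouping of images into a binary merge tree.

    image_features: dict name -> set of matched feature ids. Leaves are
    image names; internal nodes are (left, right) tuples, merged in order
    of decreasing feature overlap, mimicking the dendrogram that drives a
    hierarchical structure-from-motion pipeline.
    """
    nodes = [(name, set(feats)) for name, feats in image_features.items()]
    while len(nodes) > 1:
        # pick the pair of clusters sharing the most features
        i, j = max(((i, j) for i in range(len(nodes))
                    for j in range(i + 1, len(nodes))),
                   key=lambda ij: len(nodes[ij[0]][1] & nodes[ij[1]][1]))
        (ti, fi), (tj, fj) = nodes[i], nodes[j]
        merged = ((ti, tj), fi | fj)
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [merged]
    return nodes[0][0]   # root of the reconstruction tree
```

Because independent subtrees can be reconstructed concurrently, such a tree is what makes the approach inherently parallel and limits drift compared to a strictly sequential chain.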

  2. AUTOMATIC RETINA EXUDATES SEGMENTATION WITHOUT A MANUALLY LABELLED TRAINING SET

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision-threatening complication of diabetic retinopathy which can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, two new methods for the detection of exudates are presented which do not use a supervised learning step and therefore do not require ground-truthed lesion training sets, which are time-consuming to create, difficult to obtain, and prone to human error. We introduce a new dataset of fundus images from various ethnic groups and levels of DME which we have made publicly available. We evaluate our algorithm with this dataset and compare our results with two recent exudate segmentation algorithms. In all of our tests, our algorithms perform better than or comparably to the alternatives, with an order of magnitude reduction in computational time.
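    The record does not detail the two methods; a generic, unsupervised bright-lesion sketch based on local background estimation illustrates the core idea of detecting exudate-like pixels without any labelled training set (window size and margin are illustrative):

```python
def detect_bright_lesions(img, window=1, margin=40):
    """Flag pixels that stand out from a local median background estimate.

    img is a 2D list of grey levels; returns a same-shaped binary mask.
    No training data is involved: a pixel is flagged when it exceeds the
    median of its (2*window+1)^2 neighbourhood by more than `margin`.
    """
    h, w = len(img), len(img[0])
    mask = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            neigh = [img[rr][cc]
                     for rr in range(max(0, r - window), min(h, r + window + 1))
                     for cc in range(max(0, c - window), min(w, c + window + 1))]
            neigh.sort()
            background = neigh[len(neigh) // 2]    # local median
            mask[r][c] = 1 if img[r][c] - background > margin else 0
    return mask
```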

  3. AUTOMATIC ADJUSTMENT OF WIDE-BASE GOOGLE STREET VIEW PANORAMAS

    Directory of Open Access Journals (Sweden)

    E. Boussias-Alexakis

    2016-06-01

    Full Text Available This paper focuses on the issue of sparse matching in cases of extremely wide-base panoramic images such as those acquired by Google Street View in narrow urban streets. In order to effectively use affine point operators for bundle adjustment, panoramas must be suitably rectified to simulate affinity. To this end, a custom piecewise planar projection (triangular prism projection is applied. On the assumption that the image baselines run parallel to the street façades, the estimated locations of the vanishing lines of the façade plane allow effectively removing projectivity and applying the ASIFT point operator on panorama pairs. Results from comparisons with multi-panorama adjustment, based on manually measured image points, and ground truth indicate that such an approach, if further elaborated, may well provide a realistic answer to the matching problem in the case of demanding panorama configurations.

  4. AUTOMATIC CAMERA ORIENTATION AND STRUCTURE RECOVERY WITH SAMANTHA

    Directory of Open Access Journals (Sweden)

    R. Gherardi

    2012-09-01

    Full Text Available SAMANTHA is a software capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process both calibrated images or uncalibrated, in which case an autocalibration routine is run. Pictures are organized into a hierarchical tree which has single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom up until it reaches the root node, corresponding to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, less sensitive to the error accumulation causing drift. We have verified the quality of our reconstructions both qualitatively producing compelling point clouds and quantitatively, comparing them with laser scans serving as ground truth.

  5. VP-Nets : Efficient automatic localization of key brain structures in 3D fetal neurosonography.

    Science.gov (United States)

    Huang, Ruobing; Xie, Weidi; Alison Noble, J

    2018-04-23

    Three-dimensional (3D) fetal neurosonography is used clinically to detect cerebral abnormalities and to assess growth in the developing brain. However, manual identification of key brain structures in 3D ultrasound images requires expertise to perform and even then is tedious. Inspired by how sonographers view and interact with volumes during real-time clinical scanning, we propose an efficient automatic method to simultaneously localize multiple brain structures in 3D fetal neurosonography. The proposed View-based Projection Networks (VP-Nets) use three view-based Convolutional Neural Networks (CNNs) to simplify 3D localization by directly predicting 2D projections of the key structures onto three anatomical views. While designed for efficient use of data and GPU memory, the proposed VP-Nets allow for full-resolution 3D prediction. We investigated parameters that influence the performance of VP-Nets, e.g. depth and number of feature channels. Moreover, we demonstrate that the model can pinpoint the structure in 3D space by visualizing the trained VP-Nets, despite only 2D supervision being provided for a single stream during training. For comparison, we implemented two other baseline solutions based on Random Forest and 3D U-Nets. In the reported experiments, VP-Nets consistently outperformed other methods on localization. To test the importance of the loss function, two identical models were trained with binary cross-entropy and Dice coefficient loss, respectively. Our best VP-Net model achieved prediction center deviation: 1.8 ± 1.4 mm, size difference: 1.9 ± 1.5 mm, and 3D Intersection Over Union (IOU): 63.2 ± 14.7% when compared to the ground truth. To make the whole pipeline intervention free, we also implement a skull-stripping tool using 3D CNN, which achieves high segmentation accuracy. As a result, the proposed processing pipeline takes a raw ultrasound brain image as input, and outputs a skull-stripped image with five detected key brain
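    The central idea of supervising with 2D projections of a 3D structure rather than full 3D labels can be illustrated with simple max-projections onto the three orthogonal planes (the function name is ours; VP-Nets learn these maps with CNNs rather than computing them directly):

```python
def axis_projections(volume):
    """Max-project a 3D occupancy volume onto three orthogonal planes.

    volume[z][y][x] holds 0/1 occupancy. Returns (axial, coronal,
    sagittal) 2D maps, each obtained by collapsing one axis with max,
    mirroring the 2D projection targets used to supervise 3D localization.
    """
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    axial    = [[max(volume[z][y][x] for z in range(Z)) for x in range(X)]
                for y in range(Y)]                       # collapse z
    coronal  = [[max(volume[z][y][x] for y in range(Y)) for x in range(X)]
                for z in range(Z)]                       # collapse y
    sagittal = [[max(volume[z][y][x] for x in range(X)) for y in range(Y)]
                for z in range(Z)]                       # collapse x
    return axial, coronal, sagittal
```

A structure's 3D position is recoverable by intersecting the three 2D predictions, which is why 2D supervision per stream suffices to pinpoint the structure in 3D space.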

  6. Helping medical students to acquire a deeper understanding of truth-telling

    Directory of Open Access Journals (Sweden)

    Samia A. Hurst

    2015-11-01

    Full Text Available Problem: Truth-telling is an important component of respect for patients’ self-determination, but in the context of breaking bad news, it is also a distressing and difficult task. Intervention: We investigated the long-term influence of a simulated patient-based teaching intervention integrating learning objectives in communication skills and ethics on students’ attitudes and concerns regarding truth-telling. We followed two cohorts of medical students from the preclinical third year to their clinical rotations (fifth year). Open-ended responses were analysed to explore medical students’ reported difficulties in breaking bad news. Context: This intervention was implemented during the last preclinical year of a problem-based medical curriculum, in collaboration between the doctor–patient communication and ethics programs. Outcome: Over time, concerns such as empathy and truthfulness shifted from a personal to a relational focus. Whereas ‘truthfulness’ was a concern for the content of the message, ‘truth-telling’ included concerns on how information was communicated and how realistically it was received. Truth-telling required empathy, adaptation to the patient, and appropriate management of emotions, both for the patient's welfare and for a realistic understanding of the situation. Lessons learned: Our study confirms that an intervention confronting students with a realistic situation succeeds in making them more aware of the real issues of truth-telling. Medical students deepened their reflection over time, acquiring a deeper understanding of the relational dimension of values such as truth-telling, and honing their view of empathy.

  7. Something Happens in Room 13: Bringing Truths into the World

    Science.gov (United States)

    Grube, Vicky

    2015-01-01

    This qualitative study looks at how an art studio run by children in crisis impacts what we can learn about art and relationships. Room 13, an art studio on school grounds managed by children ages 7-11 years old, began in Scotland in the 1980s and is now worldwide. Room 13 young artists manage the studio, raise funds, and even hire an adult…

  8. Automatic structural parcellation of mouse brain MRI using multi-atlas label fusion.

    Directory of Open Access Journals (Sweden)

    Da Ma

    Full Text Available Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation, as well as to the original STAPLE framework.
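    The simple baseline that STEPS and STAPLE improve on is per-voxel majority voting over the propagated atlas labels, which can be sketched directly (the function name is ours):

```python
from collections import Counter

def majority_vote_fusion(label_maps):
    """Fuse propagated atlas segmentations by per-voxel majority voting.

    label_maps: list of equal-length flat label arrays, one per atlas,
    where labels are structure ids. Every atlas gets an equal,
    location-independent vote; STEPS/STAPLE instead weight atlases by
    local similarity and estimated performance.
    """
    fused = []
    for votes in zip(*label_maps):
        winner, _ = Counter(votes).most_common(1)[0]
        fused.append(winner)
    return fused
```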

  9. Ground water '89

    International Nuclear Information System (INIS)

    1989-01-01

    The proceedings of the 5th biennial symposium of the Ground Water Division of the Geological Society of South Africa are presented. The theme of the symposium was ground water and mining. Papers were presented on the following topics: ground water resources; ground water contamination; chemical analyses of ground water; and mining and its influence on ground water. Separate abstracts were prepared for 5 of the papers presented. The remaining papers were considered outside the subject scope of INIS

  10. Towards automatic exchange of information

    OpenAIRE

    Oberson, Xavier

    2015-01-01

    This article describes the various steps that led towards automatic exchange of information as the global standard and the issues that remain to be solved. First, the various competing models of exchanging information, such as Double Tax Treaties (DTTs), TIEAs, FATCA or EU Directives, are described to show how they interact with one another. Second, the so-called Rubik Strategy is summarized and compared with an automatic exchange of information (AEOI). The third part then describes ...

  11. Effects of the truth FinishIt brand on tobacco outcomes.

    Science.gov (United States)

    Evans, W Douglas; Rath, Jessica M; Hair, Elizabeth C; Snider, Jeremy Williams; Pitzer, Lindsay; Greenberg, Marisa; Xiao, Haijun; Cantrell, Jennifer; Vallone, Donna

    2018-03-01

    Since 2000, the truth campaign has grown as a social marketing brand. Back then, truth employed branding to compete directly with the tobacco industry. In 2014, the launch of truth FinishIt reflected changes in the brand's strategy, the tobacco control environment, and youth/young adult behavior. Building on a previous validation study, the current study examined brand equity in truth FinishIt, as measured by validated multi-dimensional scales, and tobacco-related attitudes, beliefs, and behavior based on two waves of the Truth Longitudinal Cohort data from 2015 and 2016. A fixed effects logistic regression was used to estimate the change in brand equity between panel survey waves 3 and 4 on past 30-day smoking among ever and current smokers. Additional models determined the effects of brand equity predicting tobacco attitudes/use at follow-up among the full sample. All analyses controlled for demographic factors. A one-point increase in the brand equity scale between the two waves was associated with a 66% greater chance of not smoking among ever smokers (OR 1.66, CI 1.11-2.48, p < .05). Findings demonstrate the effects of brand equity on tobacco use and how tobacco control can optimize the use of branding in campaigns.

  12. The Science Behind the Academy for Eating Disorders' Nine Truths About Eating Disorders.

    Science.gov (United States)

    Schaumberg, Katherine; Welch, Elisabeth; Breithaupt, Lauren; Hübel, Christopher; Baker, Jessica H; Munn-Chernoff, Melissa A; Yilmaz, Zeynep; Ehrlich, Stefan; Mustelin, Linda; Ghaderi, Ata; Hardaway, Andrew J; Bulik-Sullivan, Emily C; Hedman, Anna M; Jangmo, Andreas; Nilsson, Ida A K; Wiklund, Camilla; Yao, Shuyang; Seidel, Maria; Bulik, Cynthia M

    2017-11-01

    In 2015, the Academy for Eating Disorders collaborated with international patient, advocacy, and parent organizations to craft the 'Nine Truths About Eating Disorders'. This document has been translated into over 30 languages and has been distributed globally to replace outdated and erroneous stereotypes about eating disorders with factual information. In this paper, we review the state of the science supporting the 'Nine Truths'. The literature supporting each of the 'Nine Truths' was reviewed, summarized and richly annotated. Most of the 'Nine Truths' arise from well-established foundations in the scientific literature. Additional evidence is required to further substantiate some of the assertions in the document. Future investigations are needed in all areas to deepen our understanding of eating disorders, their causes and their treatments. The 'Nine Truths About Eating Disorders' is a guiding document to accelerate global dissemination of accurate and evidence-informed information about eating disorders. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.

  13. Should physicians tell the truth without taking social complications into account? A striking case.

    Science.gov (United States)

    Avci, Ercan

    2018-03-01

    The principle of respect for autonomy requires informing patients adequately and appropriately about diagnoses, treatments, and prognoses. However, some clinical cases may cause ethical dilemmas regarding telling the truth. Especially under certain cultural, social, and religious circumstances, disclosing all the relevant information to all pertinent parties might create harmful effects. Even though the virtue of telling the truth is unquestionable, sometimes de facto conditions compel physicians to act paternalistically to protect the patient or patients from imminent dangers. This article, which aims to study the issue of whether a physician should always tell the truth, analyzes a striking case involving the detection of misattributed paternity during pre-transplant tests for a kidney transplant from the son to the father in Turkey, where social, cultural, and religious factors have considerable impact on marital infidelity. After analyzing the concept of telling the truth and its relationship with paternalism and two major ethical theories, consequentialism and deontology, it is concluded that the value of the integrity of life and survival overrides the value of telling the truth. For this reason, in the case of a high possibility of severe and imminent threats, withholding some information is ethically justifiable.

  14. The impact of culture and religion on truth telling at the end of life.

    Science.gov (United States)

    de Pentheny O'Kelly, Clarissa; Urch, Catherine; Brown, Edwina A

    2011-12-01

    Truth telling, a cardinal rule in Western medicine, is not a globally shared moral stance. Honest disclosure of terminal prognosis and diagnosis are regarded as imperative in preparing for the end of life. Yet in many cultures, truth concealment is common practice. In collectivist Asian and Muslim cultures, illness is a shared family affair. Consequently, decision making is family centred, and beneficence and non-maleficence play a dominant role in their ethical model, in contrast to patient autonomy in Western cultures. The 'four principles' are prevalent throughout Eastern and Western cultures; however, the weight with which they are considered and the way they are understood differ. The belief that a grave diagnosis or prognosis will extinguish hope in patients leads families to protect ill members from the truth. This denial of the truth, however, is linked with not losing faith in a cure. Thus, aggressive futile treatment can be expected. The challenge is to provide a health care service that is equitable for all individuals in a given country. The British National Health Service provides care to all cultures but is bound by the legal principles and framework of the UK and aims for equity of provision by working within the UK ethical framework, with legal and ethical norms being explained to all patients and relatives. This requires truth telling about prognosis and efficacy of potential treatments so that unrealistic expectations are not raised.

  15. Auditory Figure-Ground Segregation is Impaired by High Visual Load

    OpenAIRE

    Lavie, Nilli; Chait, Maria; Molloy, Katharine

    2017-01-01

    Figure-ground segregation is fundamental to listening in complex acoustic environments. An ongoing debate pertains to whether segregation requires attention or is 'automatic' and pre-attentive. In this magnetoencephalography (MEG) study we tested a prediction derived from Load Theory of attention (1) that segregation requires attention, but can benefit from the automatic allocation of any 'leftover' capacity under low load. Complex auditory scenes were modelled with Stochastic Figure Ground s...

  16. Good things don't come easy (to mind): explaining framing effects in judgments of truth.

    Science.gov (United States)

    Hilbig, Benjamin E

    2012-01-01

    Recently, the general phenomenon of a positive-negative-asymmetry was extended to judgments of truth. That is, negatively framed statements were shown to receive substantially higher truth ratings than formally equivalent statements framed positively. However, the cognitive mechanisms underlying this effect are unknown, so far. In the current work, two potential accounts are introduced and tested against each other in three experiments: On the one hand, negative framing may induce increased elaboration and thereby persuasion. Alternatively, negative framing could yield faster retrieval or generation of evidence and thus influence subjective veracity via experiential fluency. Two experiments drawing on response latencies and one manipulating the delay between information acquisition and judgment provide support for the fluency-based account. Overall, results replicate and extend the negatively-biased framing effect in truth judgments and show that processing fluency may account for it. © 2011 Hogrefe Publishing

  17. A Truthful Incentive Mechanism for Online Recruitment in Mobile Crowd Sensing System

    Directory of Open Access Journals (Sweden)

    Xiao Chen

    2017-01-01

    Full Text Available We investigate emerging mobile crowd sensing (MCS) systems, in which new cloud-based platforms sequentially allocate homogenous sensing jobs to dynamically-arriving users with uncertain service qualities. Given that human beings are selfish in nature, it is crucial yet challenging to design an efficient and truthful incentive mechanism to encourage users to participate. To address the challenge, we propose a novel truthful online auction mechanism that can efficiently learn to make irreversible online decisions on winner selections for new MCS systems without requiring previous knowledge of users. Moreover, we theoretically prove that our incentive possesses truthfulness, individual rationality and computational efficiency. Extensive simulation results under both real and synthetic traces demonstrate that our incentive mechanism can reduce the payment of the platform, increase the utility of the platform and social welfare.

  18. A Truthful Incentive Mechanism for Online Recruitment in Mobile Crowd Sensing System.

    Science.gov (United States)

    Chen, Xiao; Liu, Min; Zhou, Yaqin; Li, Zhongcheng; Chen, Shuang; He, Xiangnan

    2017-01-01

    We investigate emerging mobile crowd sensing (MCS) systems, in which new cloud-based platforms sequentially allocate homogenous sensing jobs to dynamically-arriving users with uncertain service qualities. Given that human beings are selfish in nature, it is crucial yet challenging to design an efficient and truthful incentive mechanism to encourage users to participate. To address the challenge, we propose a novel truthful online auction mechanism that can efficiently learn to make irreversible online decisions on winner selections for new MCS systems without requiring previous knowledge of users. Moreover, we theoretically prove that our incentive possesses truthfulness, individual rationality and computational efficiency. Extensive simulation results under both real and synthetic traces demonstrate that our incentive mechanism can reduce the payment of the platform, increase the utility of the platform and social welfare.
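    The paper's actual mechanism is not reproduced in the record; as a generic illustration of the setting, the two-phase posted-price rule below observes some arrivals to learn a price and then hires later arrivals irrevocably. Posted prices are a standard way to obtain truthfulness, since a hired user's payment does not depend on that user's own report. All names and the sampling rule are our own assumptions:

```python
def online_recruit(arrivals, budget, sample_frac=0.5):
    """Toy two-phase online recruitment rule for a crowd-sensing platform.

    arrivals: list of (user_id, claimed_cost) in arrival order. Phase 1
    only observes a fraction of users to learn a posted price; phase 2
    irrevocably hires every later user whose claimed cost is at or below
    that price, while the budget lasts. Winners are paid the posted
    price, so misreporting one's cost cannot raise one's payment:
    a (simplified) truthfulness argument.
    """
    n_sample = max(1, int(len(arrivals) * sample_frac))
    observed = [cost for _, cost in arrivals[:n_sample]]
    price = min(observed)              # learned posted price (illustrative)
    hired, spent = [], 0.0
    for user, cost in arrivals[n_sample:]:
        if cost <= price and spent + price <= budget:
            hired.append(user)
            spent += price
    return hired, price
```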

  19. Intelligible genders in scene: the cinema and the truth production about bodies

    Directory of Open Access Journals (Sweden)

    Luciene Galvão

    2014-06-01

    Full Text Available This paper aims to discuss how cinematographic language produces truths about men and women. Throughout the text, we use as illustrations some iconic films that convey notions of masculinity and femininity. The films we have chosen are works with distinct aesthetics and markets, able to raise issues related to gender and sexuality in discussions of romantic love, identity, homosexuality, violence and confessional techniques for producing truths, among others. We analyze the films from Michel Foucault's perspective on sexuality and power relations and from Judith Butler's notion of intelligible genders. The plots of the films show that such truths are constantly negotiated, and further indicate that norms about sex, desire, pleasure, masculinity and femininity are not merely reproduced, as their effects on private lives do not end with the end of the film.

  20. THE RULE OF TRUTH AND “WHITE LIE” IN MODERN MEDICINE

    Directory of Open Access Journals (Sweden)

    Zhanna V. Chashina

    2016-06-01

    Full Text Available Introduction. The article focuses on a topical conflict in contemporary medical practice – truthful information for the patient. Analysis of the problem is considered in historical perspective, drawing on the views of both domestic and foreign authors. Various opinions and arguments for and against informing the patient about his or her health are debated by physicians to this day, despite the legalization of the provision of truthful information about the state of health. Materials and Methods. The material for the article comprised ethical and regulatory documents that include provisions on the patient's right to receive truthful information. On the basis of the dialectical approach, the object of the research is the rule of truthfulness in medicine, analyzed within the framework of medical ethics and its modern model, bioethics. The application of an integrated approach allowed us to consider the problem from the positions of morality and law, society, medicine and the individual. The following methods were used: comparative-historical analysis, axiological analysis and document analysis, together with a synthesis of the functionality, efficiency and appropriateness of the rule of veracity at the present stage of the development of medicine. Results. In the course of the study it was revealed that the rule of truthfulness in modern medical practice, justified from the perspective of the modern model of medical ethics (bioethics) and the law, constitutes an inalienable right of the patient. In addition, the study specifies the inextricable link between the rule of truthfulness and the rule of informed consent. These provisions remove the controversial question of the place of the “white lie” in medicine. However, owing to the absolutely inviolable rule of “do no harm” in medicine, it is necessary to consider the ethical and legal aspects of the rule of veracity: the duty, the right, the opportunity and the feasibility of speaking the truth, allowing not only the law