WorldWideScience

Sample records for ground truth automatic

  1. Semi-automatic ground truth generation using unsupervised clustering and limited manual labeling: Application to handwritten character recognition.

    Science.gov (United States)

    Vajda, Szilárd; Rangoni, Yves; Cecotti, Hubert

    2015-06-01

    For training supervised classifiers to recognize different patterns, large data collections with accurate labels are necessary. In this paper, we propose a generic, semi-automatic labeling technique for large handwritten character collections. In order to speed up the creation of a large-scale ground truth, the method combines unsupervised clustering and minimal expert knowledge. To exploit the potential discriminant complementarities across features, each character is projected into five different feature spaces. After clustering the images in each feature space, the human expert labels the cluster centers. Each data point inherits the label of its cluster's center. A majority (or unanimity) vote decides the label of each character image. The amount of human involvement (labeling) is strictly controlled by the number of clusters produced by the chosen clustering approach. To test the efficiency of the proposed approach, we compared and evaluated three state-of-the-art clustering methods (k-means, self-organizing maps, and growing neural gas) on the MNIST digit data set and a Lampung Indonesian character data set. Using a k-NN classifier, we show that manually labeling only 1.3% (MNIST) and 3.2% (Lampung) of the training data yields the same range of performance as a completely labeled data set.
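
As a rough illustration of the pipeline this abstract describes (cluster each feature space, have an expert label only the cluster centers, propagate those labels, then majority-vote across spaces), here is a minimal sketch on synthetic two-class data. The data, the random projections standing in for the five feature spaces, and the cluster count are all illustrative assumptions, not the authors' actual features or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for character images: two well-separated classes in 8-D.
X = np.vstack([rng.normal(0, 1, (100, 8)), rng.normal(6, 1, (100, 8))])
y_true = np.array([0] * 100 + [1] * 100)

def kmeans(F, k, iters=20, seed=0):
    """Plain Lloyd's k-means, returning centers and assignments."""
    r = np.random.default_rng(seed)
    centers = F[r.choice(len(F), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((F[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = F[assign == j].mean(0)
    return centers, assign

# Three hypothetical "feature spaces" via random linear projections
# (the paper uses five handcrafted feature sets; this is a placeholder).
votes = []
for s in range(3):
    P = rng.normal(size=(8, 4))
    F = X @ P
    centers, assign = kmeans(F, k=6, seed=s)
    # Simulated expert: label each cluster center by its nearest sample's
    # true label (in practice a human inspects the center images).
    center_label = np.empty(6, dtype=int)
    for j in range(6):
        nearest = np.argmin(((F - centers[j]) ** 2).sum(-1))
        center_label[j] = y_true[nearest]
    votes.append(center_label[assign])  # each point inherits its center's label

# Majority vote across feature spaces decides the final label.
y_pred = (np.array(votes).mean(0) > 0.5).astype(int)
print("labeled manually:", 3 * 6, "centers of", len(X), "samples")
print("agreement with true labels:", (y_pred == y_true).mean())
```

Only 18 "expert" labels are spent here, yet the propagated labels recover the full labeling on this easy synthetic set, which is the effect the abstract reports at scale.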

  2. Truth Maintenance in Automatic Planning.

    Science.gov (United States)

    The objective of this project was to explore the usefulness of incorporating truth maintenance system (TMS) technology into the design of planning... in the report. The six appendices describe the underlying research contributing to the design of the prototype system. Keywords: Truth maintenance, Planning search, Replanning, Nonmonotonic reasoning, Defeasible reasoning.

  3. SpinSat Mission Ground Truth Characterization

    Science.gov (United States)

    2014-09-01

    SpinSat Mission Ground Truth Characterization. Andrew Nicholas, Ted Finne, Ivan Galysh, Anthony Mai, Jim Yen, Naval Research Laboratory, Washington... mission overview, ground truth characterization and unique SSA observation opportunities of the mission. 1. MISSION CONCEPT: The Naval Research...

  4. Ground Truth Annotation in T Analyst

    DEFF Research Database (Denmark)

    2015-01-01

    This video shows how to annotate the ground truth tracks in the thermal videos. The ground truth tracks are produced to be able to compare them to tracks obtained from a Computer Vision tracking approach. The program used for annotation is T-Analyst, which is developed by Aliaksei Laureshyn, Ph...

  5. Ground Truth Annotation in T Analyst

    DEFF Research Database (Denmark)

    2015-01-01

    This video shows how to annotate the ground truth tracks in the thermal videos. The ground truth tracks are produced to be able to compare them to tracks obtained from a Computer Vision tracking approach. The program used for annotation is T-Analyst, which is developed by Aliaksei Laureshyn, Ph...

  6. Development of mine explosion ground truth smart sensors

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Steven R. [Rocky Mountain Geophysics, Inc., Los Alamos, NM (United States); Harben, Phillip E. [Rocky Mountain Geophysics, Inc., Los Alamos, NM (United States); Jarpe, Steve [Jarpe Data Solutions, Prescott, AZ (United States); Harris, David B. [Deschutes Signal Processing, Maupin, OR (United States)

    2015-09-14

    Accurate seismo-acoustic source location is one of the fundamental aspects of nuclear explosion monitoring. Critical to improved location is the compilation of ground truth data sets for which origin time and location are accurately known. Substantial efforts by the National Laboratories and other seismic monitoring groups have been undertaken to acquire and develop ground truth catalogs that form the basis of location efforts (e.g. Sweeney, 1998; Bergmann et al., 2009; Waldhauser and Richards, 2004). In particular, more GT1 (Ground Truth 1 km) events are required to improve three-dimensional velocity models that are currently under development. Mine seismicity can form the basis of accurate ground truth datasets. Although the location of mining explosions can often be accurately determined using array methods (e.g. Harris, 1991) and from overhead observations (e.g. MacCarthy et al., 2008), accurate origin time estimation can be difficult. Occasionally, mine operators will share shot time, location, explosion size and even shot configuration, but this is rarely done, especially in foreign countries. Additionally, shot times provided by mine operators are often inaccurate. An inexpensive ground-truth event detector that could be mailed to a contact, placed in close proximity (< 5 km) to mining regions or earthquake aftershock regions, and made to automatically transmit back ground-truth parameters would greatly aid the development of ground truth datasets that could be used to improve nuclear explosion monitoring capabilities. We are developing an inexpensive, compact, lightweight smart sensor unit (or units) that could be used in the development of ground truth datasets for the purpose of improving nuclear explosion monitoring capabilities. The units must be easy to deploy, be able to operate autonomously for a significant period of time (> 6 months) and be inexpensive enough to be discarded after useful operations have expired (although this may not be part of our business...

  7. The ground truth about metadata and community detection in networks

    CERN Document Server

    Peel, Leto; Clauset, Aaron

    2016-01-01

    Across many scientific domains, there is a common need to automatically extract a simplified view or a coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because such networks' links are formed explicitly based on the planted communities. However, there are no planted communities in real world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. Here, we show that metadata are not the same as ground truth, and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch the...

  8. The ground truth about metadata and community detection in networks.

    Science.gov (United States)

    Peel, Leto; Larremore, Daniel B; Clauset, Aaron

    2017-05-01

    Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system's components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks' links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures.
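
One simple way to quantify how strongly node metadata align with a detected partition, in the spirit of the abstract's closing point, is normalized mutual information (NMI). The sketch below is a generic, self-contained NMI computation on toy label vectors; it is not one of the two statistical techniques the paper itself introduces.

```python
import numpy as np
from collections import Counter

def nmi(a, b):
    """Normalized mutual information between two label vectors of equal length."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    def entropy(counts):
        return -sum((c / n) * np.log(c / n) for c in counts.values())
    mi = sum((c / n) * np.log(n * c / (pa[x] * pb[y]))
             for (x, y), c in pab.items())
    denom = np.sqrt(entropy(pa) * entropy(pb))
    return mi / denom if denom > 0 else 0.0

# Metadata perfectly aligned with the partition vs. metadata that ignores it.
communities = [0] * 50 + [1] * 50
aligned = [0] * 50 + [1] * 50
shuffled = ([0, 1] * 50)[:100]

print(nmi(communities, aligned))   # close to 1: metadata match the partition
print(nmi(communities, shuffled))  # close to 0: metadata carry no information
```

A near-zero score does not by itself say whether the metadata are irrelevant or the detection algorithm failed, which is precisely the ambiguity the paper's techniques are designed to tease apart.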

  9. Ground truth and benchmarks for performance evaluation

    Science.gov (United States)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Position System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  10. Ground Truth Collections at the MTI Core Sites

    Energy Technology Data Exchange (ETDEWEB)

    Garrett, A.J.

    2001-01-25

    The Savannah River Technology Center (SRTC) selected 13 sites across the continental US and one site in the western Pacific to serve as the primary or core site for collection of ground truth data for validation of MTI science algorithms. Imagery and ground truth data from several of these sites are presented in this paper. These sites are the Comanche Peak, Pilgrim and Turkey Point power plants, Ivanpah playas, Crater Lake, Stennis Space Center and the Tropical Western Pacific ARM site on the island of Nauru. Ground truth data includes water temperatures (bulk and skin), radiometric data, meteorological data and plant operating data. The organizations that manage these sites assist SRTC with its ground truth data collections and also give the MTI project a variety of ground truth measurements that they make for their own purposes. Collectively, the ground truth data from the 14 core sites constitute a comprehensive database for science algorithm validation.

  11. Development of Mine Explosion Ground Truth Smart Sensors

    Science.gov (United States)

    2011-09-01

    DEVELOPMENT OF MINE EXPLOSION GROUND TRUTH SMART SENSORS. Steven R. Taylor, Phillip E. Harben, Steve Jarpe, and David B. Harris, Rocky... improved location is the compilation of ground truth data sets for which origin time and location are accurately known. Substantial efforts by the... National Laboratories and seismic monitoring groups have been undertaken to acquire and develop ground truth catalogs that form the basis of location...

  12. A ship-borne meteorological station for ground truth measurements

    Digital Repository Service at National Institute of Oceanography (India)

    Desai, R.G.P.; Desa, B.A.E.

    Oceanographic upwelling studies required ground truth measurements of meteorological parameters and sea surface temperature to be made from a research vessel which did not have the necessary facilities. A ship-borne station was therefore designed...

  13. Ground truth delineation for medical image segmentation based on Local Consistency and Distribution Map analysis.

    Science.gov (United States)

    Cheng, Irene; Sun, Xinyao; Alsufyani, Noura; Xiong, Zhihui; Major, Paul; Basu, Anup

    2015-01-01

    Computer-aided detection (CAD) systems are being increasingly deployed for medical applications in recent years with the goal to speed up tedious tasks and improve precision. Among others, segmentation is an important component in CAD systems as a preprocessing step to help recognize patterns in medical images. In order to assess the accuracy of a CAD segmentation algorithm, comparison with ground truth data is necessary. To date, ground truth delineation relies mainly on contours that are either manually defined by clinical experts or automatically generated by software. In this paper, we propose a systematic ground truth delineation method based on a Local Consistency Set Analysis approach, which can be used to establish an accurate ground truth representation or, if ground truth is available, to assess the accuracy of a CAD-generated segmentation algorithm. We validate our computational model using medical data. Experimental results demonstrate the robustness of our approach. In contrast to current methods, our model also provides consistency information at the distributed boundary pixel level, and thus is invariant to global compensation error.

  14. Is our Ground-Truth for Traffic Classification Reliable?

    DEFF Research Database (Denmark)

    Carela-Español, Valentín; Bujlow, Tomasz; Barlet-Ros, Pere

    2014-01-01

    The validation of the different proposals in the traffic classification literature is a controversial issue. Usually, these works base their results on a ground-truth built from private datasets and labeled by techniques of unknown reliability. This makes the validation and comparison with other solutions an extremely difficult task. This paper aims to be a first step towards addressing the validation and trustworthiness problem of network traffic classifiers. We perform a comparison between 6 well-known DPI-based techniques, which are frequently used in the literature for ground-truth generation. In order to evaluate these tools we have carefully built a labeled dataset of more than 500 000 flows, which contains traffic from popular applications. Our results present PACE, a commercial tool, as the most reliable solution for ground-truth generation. However, among the open-source tools available...

  15. Ground truth data generation for skull-face overlay.

    Science.gov (United States)

    Ibáñez, O; Cavalli, F; Campomanes-Álvarez, B R; Campomanes-Álvarez, C; Valsecchi, A; Huete, M I

    2015-05-01

    Objective and unbiased validation studies over a significant number of cases are required to get a more solid picture of craniofacial superimposition reliability. It will not be possible to compare the performance of existing and upcoming methods for craniofacial superimposition without a common forensic database available to the research community. Skull-face overlay is a key task within craniofacial superimposition that has a direct influence on the subsequent task devoted to evaluating the skull-face relationships. In this work, we present the procedure to create such a dataset for the first time. We have also created a database with 19 skull-face overlay cases, for which we are working to resolve the legal issues that would allow us to make it public. The quantitative analysis made in the segmentation and registration stages, together with the visual assessment of the 19 face-to-face overlays, allows us to conclude that the results can be considered a gold standard. With such a ground truth dataset, a new horizon is opened for the development of new automatic methods whose performance can now be objectively measured and compared against previous and future proposals. Additionally, other uses are expected to be explored to better understand the visual evaluation process of craniofacial relationships in craniofacial identification. It could also be very useful as a starting point for further studies on the prediction of the resulting facial morphology after corrective or reconstructive intervention in maxillofacial surgery.

  16. Seismic Monitoring System Calibration Using Ground Truth Database

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Winston; Wagner, Robert

    2002-12-22

    Calibration of a seismic monitoring system remains a major issue due to the lack of ground truth information and uncertainties in the regional geological parameters. Rapid and accurate identification of seismic events is currently not feasible due to the absence of a fundamental framework allowing immediate access to ground truth information for many parts of the world. Precise location and high-confidence identification of regional seismic events are the primary objectives of monitoring research in seismology. In the Department of Energy Knowledge Base (KB), ground truth information addresses these objectives and will play a critical role for event relocation and identification using advanced seismic analysis tools. Maintaining the KB with systematic compilation and analysis of comprehensive sets of geophysical data from various parts of the world is vital. The goal of this project is to compile a comprehensive database for China using digital seismic waveform data that are currently unavailable. These data may be analyzed along with ground truth information as it becomes available. To date, arrival times for all regional phases have been determined for all events above Mb 4.5 that occurred in China in 2000 and 2001. Travel-time models are constructed to compare with existing models. Seismic attenuation models may be constructed to provide better understanding of regional wave propagation in China with spatial resolution that has not previously been obtained.

  17. On the ground truth problem of malicious DNS traffic analysis

    DEFF Research Database (Denmark)

    Stevanovic, Matija; Pedersen, Jens Myrup; D’Alconzo, Alessandro

    2015-01-01

    DNS is often abused by Internet criminals in order to provide flexible and resilient hosting of malicious content and reliable communication within their network architecture. The majority of detection methods targeting malicious DNS traffic are data-driven, most commonly having machine learning algorithms at their core. These methods require accurate ground truth of both malicious and benign DNS traffic for model training as well as for the performance evaluation. This paper elaborates on the problem of obtaining such a ground truth and evaluates practices employed by contemporary detection methods. Building upon the evaluation results, we propose a novel semi-manual labeling practice targeting agile DNS mappings, i.e. DNS queries that are used to reach a potentially malicious server characterized by fast-changing domain names and/or IP addresses. The proposed approach is developed with the purpose...

  18. Aeolian dunes as ground truth for atmospheric modeling on Mars

    Science.gov (United States)

    Hayward, R.K.; Titus, T.N.; Michaels, T.I.; Fenton, L.K.; Colaprete, A.; Christensen, P.R.

    2009-01-01

    Martian aeolian dunes preserve a record of atmosphere/surface interaction on a variety of scales, serving as ground truth for both Global Climate Models (GCMs) and mesoscale climate models, such as the Mars Regional Atmospheric Modeling System (MRAMS). We hypothesize that the location of dune fields, expressed globally by geographic distribution and locally by dune centroid azimuth (DCA), may record the long-term integration of atmospheric activity across a broad area, preserving GCM-scale atmospheric trends. In contrast, individual dune morphology, as expressed in slipface orientation (SF), may be more sensitive to localized variations in circulation, preserving topographically controlled mesoscale trends. We test this hypothesis by comparing the geographic distribution, DCA, and SF of dunes with output from the Ames Mars GCM and, at a local study site, with output from MRAMS. When compared to the GCM: 1) dunes generally lie adjacent to areas with strongest winds, 2) DCA agrees fairly well with GCM modeled wind directions in smooth-floored craters, and 3) SF does not agree well with GCM modeled wind directions. When compared to MRAMS modeled winds at our study site: 1) DCA generally coincides with the part of the crater where modeled mean winds are weak, and 2) SFs are consistent with some weak, topographically influenced modeled winds. We conclude that: 1) geographic distribution may be valuable as ground truth for GCMs, 2) DCA may be useful as ground truth for both GCM and mesoscale models, and 3) SF may be useful as ground truth for mesoscale models. Copyright 2009 by the American Geophysical Union.

  19. Comparison of algorithms for ultrasound image segmentation without ground truth

    Science.gov (United States)

    Sikka, Karan; Deserno, Thomas M.

    2010-02-01

    Image segmentation is a pre-requisite to medical image analysis. A variety of segmentation algorithms have been proposed, and most are evaluated on a small dataset or based on classification of a single feature. The lack of a gold standard (ground truth) further adds to the discrepancy in these comparisons. This work proposes a new methodology for comparing image segmentation algorithms without ground truth by building a matrix called region-correlation matrix. Subsequently, suitable distance measures are proposed for quantitative assessment of similarity. The first measure takes into account the degree of region overlap or identical match. The second considers the degree of splitting or misclassification by using an appropriate penalty term. These measures are shown to satisfy the axioms of a quasi-metric. They are applied for a comparative analysis of synthetic segmentation maps to show their direct correlation with human intuition of similar segmentation. Since ultrasound images are difficult to segment and usually lack a ground truth, the measures are further used to compare the recently proposed spectral clustering algorithm (encoding spatial and edge information) with standard k-means over abdominal ultrasound images. Improving the parameterization and enlarging the feature space for k-means steadily increased segmentation quality to that of spectral clustering.
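
A minimal sketch of the region-correlation idea: build a contingency ("region-correlation") matrix between two label maps, then derive an overlap-based score from it. The greedy-matching distance below is an illustrative stand-in for the paper's quasi-metrics, not their exact definitions.

```python
import numpy as np

def region_correlation(seg_a, seg_b):
    """Entry (i, j) counts pixels labeled region i in seg_a and region j in seg_b."""
    M = np.zeros((seg_a.max() + 1, seg_b.max() + 1), dtype=int)
    np.add.at(M, (seg_a.ravel(), seg_b.ravel()), 1)
    return M

def overlap_distance(seg_a, seg_b):
    """Toy distance: 1 minus the pixel fraction covered by a greedy
    one-to-one matching of regions (0 = identical segmentations)."""
    M = region_correlation(seg_a, seg_b).astype(float)
    matched = 0.0
    while M.any():
        i, j = np.unravel_index(np.argmax(M), M.shape)
        matched += M[i, j]
        M[i, :] = 0  # each region may be matched at most once
        M[:, j] = 0
    return 1.0 - matched / seg_a.size

a = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])
b = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])   # identical partition
c = np.array([[0, 0, 0, 1], [0, 0, 0, 1]])   # boundary shifted by one column
print(overlap_distance(a, b))  # 0.0
print(overlap_distance(a, c))  # 0.25 (2 of 8 pixels fall outside the matching)
```

The real measures additionally penalize region splitting, but the contingency matrix above is the shared starting point.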

  20. Reverse Classification Accuracy: Predicting Segmentation Performance in the Absence of Ground Truth.

    Science.gov (United States)

    Valindria, Vanya V; Lavdas, Ioannis; Bai, Wenjia; Kamnitsas, Konstantinos; Aboagye, Eric O; Rockall, Andrea G; Rueckert, Daniel; Glocker, Ben

    2017-08-01

    When integrating computational tools, such as automatic segmentation, into clinical practice, it is of utmost importance to be able to assess the level of accuracy on new data and, in particular, to detect when an automatic method fails. However, this is difficult to achieve due to the absence of ground truth. Segmentation accuracy on clinical data might be different from what is found through cross validation, because validation data are often used during incremental method development, which can lead to overfitting and unrealistic performance expectations. Before deployment, performance is quantified using different metrics, for which the predicted segmentation is compared with a reference segmentation, often obtained manually by an expert. But little is known about the real performance after deployment when a reference is unavailable. In this paper, we introduce the concept of reverse classification accuracy (RCA) as a framework for predicting the performance of a segmentation method on new data. In RCA, we take the predicted segmentation from a new image to train a reverse classifier, which is evaluated on a set of reference images with available ground truth. The hypothesis is that if the predicted segmentation is of good quality, then the reverse classifier will perform well on at least some of the reference images. We validate our approach on multi-organ segmentation with different classifiers and segmentation methods. Our results indicate that it is indeed possible to predict the quality of individual segmentations, in the absence of ground truth. Thus, RCA is ideal for integration into automatic processing pipelines in clinical routine and as a part of large-scale image analysis studies.
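
The RCA loop itself is simple to sketch: train a "reverse" classifier from the new image and its predicted segmentation, then score that classifier against reference images with known ground truth, taking the best score as a quality proxy. The toy below uses 1-D "images" and a nearest-class-mean classifier; both are illustrative simplifications, not the paper's actual classifiers or data.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def nearest_mean_classifier(img, seg):
    """Reverse classifier trained on one image + its (predicted) segmentation:
    classify each pixel by the nearer per-class mean intensity."""
    m0, m1 = img[seg == 0].mean(), img[seg == 1].mean()
    return lambda x: (np.abs(x - m1) < np.abs(x - m0)).astype(int)

rng = np.random.default_rng(1)

# Reference images with available ground truth (foreground is brighter).
refs = []
for _ in range(5):
    gt = (rng.random(200) < 0.4).astype(int)
    refs.append((gt * 2.0 + rng.normal(0, 0.3, 200), gt))

# A new image and two candidate predicted segmentations of unknown quality.
gt_new = (rng.random(200) < 0.4).astype(int)
img_new = gt_new * 2.0 + rng.normal(0, 0.3, 200)
candidates = {"good": gt_new.copy(),      # accurate prediction
              "bad": 1 - gt_new}          # failed (inverted) prediction

rca_scores = {}
for name, pred in candidates.items():
    clf = nearest_mean_classifier(img_new, pred)
    # RCA proxy: best Dice of the reverse classifier over the references.
    rca_scores[name] = max(dice(clf(img), gt) for img, gt in refs)
print(rca_scores)
```

The good prediction yields a reverse classifier that transfers well to the references (Dice near 1), while the failed prediction does not, which is how RCA flags poor segmentations without ground truth for the new image.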

  1. VERIFICATION & VALIDATION OF A SEMANTIC IMAGE TAGGING FRAMEWORK VIA GENERATION OF GEOSPATIAL IMAGERY GROUND TRUTH

    Energy Technology Data Exchange (ETDEWEB)

    Gleason, Shaun Scott [ORNL; Ferrell, Regina Kay [ORNL; Cheriyadat, Anil M [ORNL; Vatsavai, Raju [ORNL; Sari-Sarraf, Hamed [ORNL; Dema, Mesfin A [ORNL

    2011-01-01

    As a result of increasing geospatial image libraries, many algorithms are being developed to automatically extract and classify regions of interest from these images. However, limited work has been done to compare, validate and verify these algorithms due to the lack of datasets with high-accuracy ground truth annotations. In this paper, we present an approach to generate a large number of synthetic images accompanied by perfect ground truth annotation by learning scene statistics from a few training images through Maximum Entropy (ME) modeling. The ME model [1,2] embeds a Stochastic Context Free Grammar (SCFG) to model object attribute variations with Markov Random Fields (MRF), with the final goal of modeling contextual relations between objects. Using this model, 3D scenes are generated by configuring a 3D object model to obey the learned scene statistics. Finally, these plausible 3D scenes are captured by ray tracing software to produce synthetic images with the corresponding ground truth annotations that are useful for evaluating the performance of a variety of image analysis algorithms.

  2. AMS Ground Truth Measurements: Calibration and Test Lines

    Energy Technology Data Exchange (ETDEWEB)

    Wasiolek, P. [National Security Technologies, LLC. (NSTec), Mercury, NV (United States)

    2013-11-01

    Airborne gamma spectrometry is one of the primary techniques used to define the extent of ground contamination after a radiological incident. Its usefulness was demonstrated extensively during the response to the Fukushima nuclear power plant (NPP) accident in March-May 2011. To map ground contamination a set of scintillation detectors is mounted on an airborne platform (airplane or helicopter) and flown over contaminated areas. The acquisition system collects spectral information together with the aircraft position and altitude every second. To provide useful information to decision makers, the count rate data expressed in counts per second (cps) needs to be converted to the terrestrial component of the exposure rate 1 m above ground, or surface activity of isotopes of concern. This is done using conversion coefficients derived from calibration flights. During a large scale radiological event, multiple flights may be necessary and may require use of assets from different agencies. However, as the production of a single, consistent map product depicting the ground contamination is the primary goal, it is critical to establish very early into the event a common calibration line. Such a line should be flown periodically in order to normalize data collected from different aerial acquisition systems and potentially flown at different flight altitudes and speeds. In order to verify and validate individual aerial systems, the calibration line needs to be characterized in terms of ground truth measurements. This is especially important if the contamination is due to short-lived radionuclides. The process of establishing such a line, as well as necessary ground truth measurements, is described in this document.
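
The cps-to-exposure-rate conversion described above reduces to a background subtraction, an altitude normalization, and multiplication by a calibration coefficient. Every numeric value below (count rates, air attenuation coefficient, conversion coefficient, altitudes) is a made-up placeholder; real values come from the calibration-line flights and the ground truth measurements the document describes.

```python
import math

# Hypothetical acquisition over a calibration line (all values assumed):
cps_measured = 12000.0    # gross count rate (cps)
cps_background = 2000.0   # aircraft + cosmic background (cps)
altitude_m = 150.0        # radar altitude during the pass
ref_altitude_m = 100.0    # altitude the conversion coefficient refers to
mu_air = 0.007            # effective air attenuation coefficient (1/m)
coeff = 2.5e-4            # (uR/h) per cps at the reference altitude

# Subtract background, normalize the count rate to the reference
# altitude with an exponential air-attenuation model, then convert.
net_cps = cps_measured - cps_background
cps_at_ref = net_cps * math.exp(mu_air * (altitude_m - ref_altitude_m))
exposure_uR_h = coeff * cps_at_ref
print(round(exposure_uR_h, 2))  # -> 3.55 uR/h for these placeholder inputs
```

Flying the common calibration line with each aerial system amounts to re-deriving `coeff` (and checking `mu_air`) per system, which is what lets data from different platforms be merged into one consistent map.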

  3. Compositional Ground Truth of Diviner Lunar Radiometer Observations

    Science.gov (United States)

    Greenhagen, B. T.; Thomas, I. R.; Bowles, N. E.; Allen, C. C.; Donaldson Hanna, K. L.; Foote, E. J.; Paige, D. A.

    2012-01-01

    The Moon affords us a unique opportunity to "ground truth" thermal infrared (i.e. 3 to 25 micron) observations of an airless body. The Moon is the most accessible member of the most abundant class of solar system bodies, which includes Mercury, asteroids, and icy satellites. The Apollo samples returned from the Moon are the only extraterrestrial samples with known spatial context. And the Diviner Lunar Radiometer (Diviner) is the first instrument to globally map the spectral thermal emission of an airless body. Here we compare Diviner observations of Apollo sites to compositional and spectral measurements of Apollo lunar soil samples in simulated lunar environment (SLE).

  4. Fish farms at sea: the ground truth from Google Earth.

    Directory of Open Access Journals (Sweden)

    Pablo Trujillo

    In the face of global overfishing of wild-caught seafood, ocean fish farming has augmented the supply of fresh fish to western markets and become one of the fastest growing global industries. Accurate reporting of quantities of wild-caught fish has been problematic and we questioned whether similar discrepancies in data exist in statistics for farmed fish production. In the Mediterranean Sea, ocean fish farming is prevalent and stationary cages can be seen off the coasts of 16 countries using satellite imagery available through Google Earth. Using this tool, we demonstrate here that a few trained scientists now have the capacity to ground truth farmed fish production data reported by the Mediterranean countries. With Google Earth, we could examine 91% of the Mediterranean coast and count 248 tuna cages (circular cages >40 m diameter) and 20,976 other fish cages within 10 km offshore, the majority of which were off Greece (49%) and Turkey (31%). Combining satellite imagery with assumptions about cage volume, fish density, harvest rates, and seasonal capacity, we make a conservative approximation of ocean-farmed finfish production for 16 Mediterranean countries. Our overall estimate of 225,736 t of farmed finfish (not including tuna) in the Mediterranean Sea in 2006 is only slightly more than the United Nations Food and Agriculture Organization reports. The results demonstrate the reliability of recent FAO farmed fish production statistics for the Mediterranean as well as the promise of Google Earth to collect and ground truth data.

  5. Fish farms at sea: the ground truth from Google Earth.

    Science.gov (United States)

    Trujillo, Pablo; Piroddi, Chiara; Jacquet, Jennifer

    2012-01-01

    In the face of global overfishing of wild-caught seafood, ocean fish farming has augmented the supply of fresh fish to western markets and become one of the fastest growing global industries. Accurate reporting of quantities of wild-caught fish has been problematic and we questioned whether similar discrepancies in data exist in statistics for farmed fish production. In the Mediterranean Sea, ocean fish farming is prevalent and stationary cages can be seen off the coasts of 16 countries using satellite imagery available through Google Earth. Using this tool, we demonstrate here that a few trained scientists now have the capacity to ground truth farmed fish production data reported by the Mediterranean countries. With Google Earth, we could examine 91% of the Mediterranean coast and count 248 tuna cages (circular cages >40 m diameter) and 20,976 other fish cages within 10 km offshore, the majority of which were off Greece (49%) and Turkey (31%). Combining satellite imagery with assumptions about cage volume, fish density, harvest rates, and seasonal capacity, we make a conservative approximation of ocean-farmed finfish production for 16 Mediterranean countries. Our overall estimate of 225,736 t of farmed finfish (not including tuna) in the Mediterranean Sea in 2006 is only slightly more than the United Nations Food and Agriculture Organization reports. The results demonstrate the reliability of recent FAO farmed fish production statistics for the Mediterranean as well as the promise of Google Earth to collect and ground truth data.
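
The cage-count-to-production estimate is back-of-the-envelope arithmetic of the kind the abstract describes: counted cages times assumed volume, stocking density, harvest rate, and capacity. The cage count below is taken from the abstract, but every other parameter value is an illustrative assumption, not the paper's actual figure.

```python
# Placeholder parameters for a cage-to-tonnage estimate (all assumed):
n_cages = 20976              # non-tuna cages counted via Google Earth
cage_volume_m3 = 1000.0      # assumed average cage volume
density_kg_m3 = 15.0         # assumed stocking density at harvest
harvests_per_year = 1.0      # assumed harvest rate
capacity_utilisation = 0.7   # assumed fraction of cages in production

# kg per year, converted to tonnes.
production_t = (n_cages * cage_volume_m3 * density_kg_m3
                * harvests_per_year * capacity_utilisation) / 1000.0
print(f"{production_t:,.0f} t/yr")
```

The value of the exercise is less the point estimate than the transparency: each assumption is explicit and can be varied, which is what lets the satellite-derived figure be compared against the FAO statistics.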

  6. Ground-Truthing a Next Generation Snow Radar

    Science.gov (United States)

    Yan, S.; Brozena, J. M.; Gogineni, P. S.; Abelev, A.; Gardner, J. M.; Ball, D.; Liang, R.; Newman, T.

    2016-12-01

    During the early spring of 2016 the Naval Research Laboratory (NRL) performed a test of a next generation airborne snow radar over ground truth data collected on several areas of fast ice near Barrow, AK. The radar was developed by the Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas, and includes several improvements compared to their previous snow radar. The new unit combines the earlier Ku-band and snow radars into a single unit with an operating frequency spanning the entire 2-18 GHz band, an enormous bandwidth that provides the possibility of snow depth measurements with 1.5 cm range resolution. Additionally, the radar transmits on dual polarizations (H and V), and receives the signal through two orthogonally polarized Vivaldi arrays, each with 128 phase centers. The 8 sets of along-track phase centers are combined in hardware to improve SNR and narrow the beamwidth in the along-track direction, resulting in 8 cross-track effective phase centers which are separately digitized to allow for beam sharpening and forming in post-processing. Tilting the receive arrays 30 degrees from the horizontal also allows the formation of SAR images and the potential for estimating snow-water equivalent (SWE). Ground truth data (snow depth, density, salinity and SWE) were collected over several 60 m wide swaths that were subsequently overflown with the snow radar mounted on a Twin Otter. The radar could be operated in nadir (by beam steering the receive antennas to point beneath the aircraft) or side-looking modes. Results from the comparisons will be shown.
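The 1.5 cm figure quoted above is tied to the 16 GHz bandwidth through the standard pulse-compression relation dr = c/(2B). A small sketch of that formula; note the free-space limit comes out finer than 1.5 cm, and attributing the difference to window-function broadening is our assumption, not a statement from the abstract:

```python
# Theoretical range resolution of an ultra-wideband radar: dr = c / (2B).
# The 2-18 GHz unit above has B = 16 GHz; its quoted 1.5 cm resolution is
# coarser than this free-space limit, plausibly due to window-function
# broadening (a common ~1.5-2x factor) -- that interpretation is ours.

C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    return C / (2.0 * bandwidth_hz)

dr = range_resolution_m(16e9)
print(f"free-space limit: {dr * 100:.2f} cm")
```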

  7. Visualization of ground truth tracks for the video 'Tracking a "facer's" behavior in a public plaza'

    DEFF Research Database (Denmark)

    2015-01-01

    The video shows the ground truth tracks in GIS of all pedestrians in the video 'Tracking a "facer's" behavior in a public plaza'. The visualization was made using QGIS TimeManager.

  8. Visualization of ground truth tracks for the video 'Tracking a "facer's" behavior in a public plaza'

    DEFF Research Database (Denmark)

    2015-01-01

    The video shows the ground truth tracks in GIS of all pedestrians in the video 'Tracking a "facer's" behavior in a public plaza'. The visualization was made using QGIS TimeManager.

  9. Acoustic seafloor discrimination with echo shape parameters: A comparison with the ground truth

    NARCIS (Netherlands)

    Walree, P.A. van; Tȩgowski, J.; Laban, C.; Simons, D.G.

    2005-01-01

    Features extracted from echosounder bottom returns are compared with the ground truth in a North Sea survey area. The ground truth consists of 50 grab samples for which the grain size distribution and the gravel and shell contents were determined. Echo envelopes are analysed for two single-beam echosounders.

  10. Truth.

    Science.gov (United States)

    Mills, Jon

    2014-04-01

    What exactly do we mean by truth? Although the concept is nebulous across the array of theoretical perspectives in psychoanalysis, it is fundamental to all discourses. Is psychoanalysis in a position to offer a theory of truth despite the fact that at present it has no explicit, formal theory regarding the matter? A general metatheory is proposed here that allows for discrete categories and instantiations of truth as metacontextual appearance. In revisiting the ancient notion of aletheia as disclosedness or unconcealment, we may discover a distinct psychoanalytic contribution to truth conditioned on unconscious processes reappropriated from Heidegger's project of fundamental ontology. Construed as a dialectics of truth, this notion accords well with how psychoanalysts understand the dynamic unconscious and how it functions to both reveal and conceal. Given that clinical experience demonstrates the workings of dynamic unconscious activity, psychoanalytic theory may contribute a vocabulary relevant to philosophy by explicating the motives and mechanisms that create the appearances of contextual truth as such, phenomena whose causes have previously gone undescribed.

  11. Ground Truth Collections for Explosions in Northern Fennoscandia and Russia

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D B; Ringdal, F; Kremenetskaya, E; Mykkeltveit, S; Schweitzer, J.; Hauk, T; Asming, V; Rock, D; Lewis, P

    2003-07-28

    This project is providing ground-truth information on explosions conducted at the principal mines within 500 kilometers of the ARCES station, and is assembling a seismic waveform database for these events from local and regional stations. The principal mines of interest are in northwest Russia (Khibiny Massif, Olenogorsk, Zapolyarny, and Kovdor groups) and Sweden (Malmberget, Kiruna). These mines form a natural laboratory for examining the variation of mining explosion observations with source type, since they include colocated surface and underground mines and mines conducting a variety of different shot types. In September 2002 we deployed two lines of temporary stations from the Khibiny Massif through and to the north of the ARCES station. This deployment is producing data that will allow researchers to examine the variation of discriminants caused by varying source-receiver distance and the diversity of explosion types. To date, we have collected ground-truth information on 1,118 explosions in the Kola Peninsula, and have assembled waveform data for approximately 700 of these. The database includes waveforms from instruments temporarily deployed in the Khibiny Massif mines, from the Apatity network just outside of the Massif, from LVZ, KEV and ARCES, and from the stations deployed along the two lines into northern Norway. In this paper we present representative waveforms for several types of shots recorded at various regional distances. We have conducted a preliminary study of the variation of phase ratios as a function of source type. This study shows significant differences in Pd/Sn and Pd/Lg ratios for two types of mining explosions: surface ripple-fired explosions and compact underground explosions. Compact explosions are, typically, underground explosions of a few tons with only one or two short delays, and are the closest approximation to single, well-tamped explosions available in the Khibiny mines. The surface shots typically are much larger (ranging up
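The Pd/Sn and Pd/Lg discriminants mentioned above are amplitude ratios between regional seismic phases. A generic sketch of how such a log ratio can be formed from a waveform, with hypothetical window indices and a synthetic trace; this is not the study's actual measurement procedure:

```python
import numpy as np

# A generic P/S amplitude-ratio discriminant of the kind discussed above:
# RMS amplitude in a P-phase window over RMS amplitude in an S-phase window,
# usually reported as log10. Window bounds here are made-up sample indices.

def log_ps_ratio(trace, p_window, s_window):
    p = trace[p_window[0]:p_window[1]]
    s = trace[s_window[0]:s_window[1]]
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return np.log10(rms(p) / rms(s))

# Synthetic explosion-like trace: strong P energy, weaker S coda.
rng = np.random.default_rng(0)
trace = np.concatenate([3.0 * rng.standard_normal(200),   # "P" window
                        1.0 * rng.standard_normal(400)])  # "S" window
print(round(log_ps_ratio(trace, (0, 200), (200, 600)), 2))
```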

  12. Government Applications Task Force ground truth study of WAG 4

    Energy Technology Data Exchange (ETDEWEB)

    Evers, T.K.; Smyre, J.L.; King, A.L.

    1997-06-01

    This report documents the Government Applications Task Force (GATF) Buried Waste Project. The project was initiated as a field investigation and verification of the 1994 Strategic Environmental Research and Development Program's (SERDP) Buried Waste Identification Project results. The GATF project team included staff from three US Department of Energy (DOE) Laboratories [Oak Ridge National Laboratory (ORNL), Los Alamos National Laboratory (LANL), and the Savannah River Technology Center (SRTC)] and from the National Exploitation Laboratory. Similar studies were conducted at each of the three DOE laboratories to demonstrate the effective use of remote sensing technologies. The three locations were selected to assess differences in buried waste signatures under various environmental conditions (i.e., climate, terrain, precipitation, geology, etc.). After a brief background discussion of the SERDP Project, this report documents the field investigation (ground truth) results from the 1994--1995 GATF Buried Waste Study at ORNL's Waste Area Grouping (WAG) 4. Figures for this report are located in Appendix A.

  13. Interactive removal and ground truth for difficult shadow scenes.

    Science.gov (United States)

    Gong, Han; Cosker, Darren

    2016-09-01

    A user-centric method for fast, interactive, robust, and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. To perform detection, an on-the-fly learning approach is adopted guided by two rough user inputs for the pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame, which allows efficient estimation of nonuniform shadow illumination changes, resulting in accurate and robust removal. Another major contribution of this work is the first validated and multiscene category ground truth for shadow removal algorithms. This data set containing 186 images eliminates inconsistencies between shadow and shadow-free images and provides a range of different shadow types such as soft, textured, colored, and broken shadow. Using this data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed algorithm to outperform the state of the art across several measures and shadow categories. To complement our data set, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research.

  14. Improvement of Ground Truth Classification of Soviet Peaceful Nuclear Explosions

    Science.gov (United States)

    Mackey, K. G.; Fujita, K.; Bergman, E.

    2016-12-01

    From the 1960's through the late 1980's, the Soviet Union conducted 122 Peaceful Nuclear Explosions across its territory. These PNEs are now very important to the seismological community as so-called Ground Truth (GT) events. The PNE locations are widely distributed, thus GT0-1 locations, meaning that true location is known to within 1 km or better, are used as calibration events for developing seismic velocity models, model validation, seismic discrimination, etc. The nuclear monitoring/verification community generally utilizes published lists of PNE locations as known or verified GT events, though in reality there are errors and some PNEs are poorly located. We have determined or validated GT0-1 locations for 85 of the Soviet PNEs. Some PNEs published as GT1 or better also have larger errors. Our locations were determined using an integrated approach encompassing published open literature, analysis of satellite imagery and regional seismic data. We have visited and verified 10 PNE sites across Kazakhstan and Ukraine, allowing GPS coordinates to be obtained in the field.

  15. Interactive removal and ground truth for difficult shadow scenes

    Science.gov (United States)

    Gong, Han; Cosker, Darren

    2016-09-01

    A user-centric method for fast, interactive, robust and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. To perform detection, an on-the-fly learning approach is adopted guided by two rough user inputs for the pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame which allows efficient estimation of non-uniform shadow illumination changes, resulting in accurate and robust removal. Another major contribution of this work is the first validated and multi-scene category ground truth for shadow removal algorithms. This data set containing 186 images eliminates inconsistencies between shadow and shadow-free images and provides a range of different shadow types such as soft, textured, colored and broken shadow. Using this data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed new algorithm to outperform the state of the art across several measures and shadow categories. To complement our dataset, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research.

  16. Ground Truth Creation for Complex Clinical NLP Tasks - an Iterative Vetting Approach and Lessons Learned.

    Science.gov (United States)

    Liang, Jennifer J; Tsou, Ching-Huei; Devarakonda, Murthy V

    2017-01-01

    Natural language processing (NLP) holds the promise of effectively analyzing patient record data to reduce cognitive load on physicians and clinicians in patient care, clinical research, and hospital operations management. A critical need in developing such methods is the "ground truth" dataset needed for training and testing the algorithms. Beyond localizable, relatively simple tasks, ground truth creation is a significant challenge because medical experts, just as physicians in patient care, have to assimilate vast amounts of data in EHR systems. To mitigate the potential inaccuracies arising from these cognitive challenges, we present an iterative vetting approach for creating the ground truth for complex NLP tasks. In this paper, we present the methodology, and report on its use for an automated problem list generation task, its effect on the ground truth quality and system accuracy, and lessons learned from the effort.

  17. Reconciling Radar Remote-Sensing with MER Ground Truth

    Science.gov (United States)

    Haldemann, A. F. C.; Larsen, K. W.; Jurgens, R. F.; Golombek, M. P.

    2004-11-01

    The Goldstone Solar System Radar (GSSR) carried out Earth-based delay-Doppler radar observations of Mars with four receiving stations during the oppositions in 2001 and 2003, supporting Mars Exploration Rover landing site selection. This interferometric technique demonstrated radar mapping of Mars with a 5 km to 10 km spatial resolution. The data for both Gusev Crater and Meridiani Planum indicated smooth terrains, consistent with, but somewhat different from, previous lower spatial resolution Earth-based radar data. Now, with quantitative ground-truth roughness measurements by Spirit and Opportunity, along with THEMIS visible camera images, we can begin to reconcile these differing remote-sensing observations. For Gusev crater, older λ = 3.5 cm wavelength data did not directly sample the crater but were of nearby terrain of the same map unit as Gusev's floor. The reported Hagfors scattering model parameters were θrms = 4.7 ± 1.6 degrees and ρ0 = 0.04 ± 0.02. These quasi-specular parameters refer to roughness in the range 10λ to 100λ. The higher resolution data from 2003, averaged over the whole MER Gusev ellipse, were θrms = 1.3 (+1.0/-0.5) degrees and ρ0 = 0.02 ± 0.01. The ρ0 for the 5 km pixel where Spirit landed was like the average, but θrms = 1.6 (+1.0/-0.5) degrees. The roughness derived from stereo images from Spirit's first 30 sols, available on the PDS, implies near-nadir scattering from 3 m scales is dominant. We examine the spatial coverage of the older data, as well as other radar data, to reconcile the differing observations. For Meridiani, GSSR made direct observations at 3.5 cm at both 5 km resolution and at 10 × 150 km resolution in 2001. We will carry out our comparative analyses once rover navigation data beyond Eagle crater, obtained after Sol 58, are released to the PDS, and expect to have them for presentation at the meeting.

  18. Ground truth management system to support multispectral scanner /MSS/ digital analysis

    Science.gov (United States)

    Coiner, J. C.; Ungar, S. G.

    1977-01-01

    A computerized geographic information system for management of ground truth has been designed and implemented to relate MSS classification results to in situ observations. The ground truth system transforms, generalizes and rectifies ground observations to conform to the pixel size and shape of high resolution MSS aircraft data. These observations can then be aggregated for comparison to lower resolution sensor data. Construction of a digital ground truth array allows direct pixel by pixel comparison between classification results of MSS data and ground truth. By making comparisons, analysts can identify spatial distribution of error within the MSS data as well as usual figures of merit for the classifications. Use of the ground truth system permits investigators to compare a variety of environmental or anthropogenic data, such as soil color or tillage patterns, with classification results and allows direct inclusion of such data into classification operations. To illustrate the system, examples from classification of simulated Thematic Mapper data for agricultural test sites in North Dakota and Kansas are provided.
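The pixel-by-pixel comparison between classification results and the digital ground truth array described above is essentially a confusion matrix with an overall accuracy figure. A small sketch with illustrative class maps (the class codes and arrays are made up):

```python
import numpy as np

# Pixel-by-pixel comparison of a classification map against a rasterized
# ground-truth array: a confusion matrix plus overall accuracy. The class
# codes and the tiny 2x3 rasters below are purely illustrative.

def confusion_matrix(truth, predicted, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth.ravel(), predicted.ravel()):
        cm[t, p] += 1  # row = ground truth class, column = predicted class
    return cm

truth = np.array([[0, 0, 1], [1, 2, 2]])
pred  = np.array([[0, 1, 1], [1, 2, 0]])
cm = confusion_matrix(truth, pred, 3)
accuracy = np.trace(cm) / cm.sum()
print(cm)
print(f"overall accuracy: {accuracy:.2f}")
```

Off-diagonal entries localize which classes are confused, which is the "spatial distribution of error" analysis the record refers to when the comparison is kept per-pixel rather than aggregated.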

  19. Ground Truth Creation for Complex Clinical NLP Tasks – an Iterative Vetting Approach and Lessons Learned

    Science.gov (United States)

    Liang, Jennifer J.; Tsou, Ching-Huei; Devarakonda, Murthy V.

    2017-01-01

    Natural language processing (NLP) holds the promise of effectively analyzing patient record data to reduce cognitive load on physicians and clinicians in patient care, clinical research, and hospital operations management. A critical need in developing such methods is the “ground truth” dataset needed for training and testing the algorithms. Beyond localizable, relatively simple tasks, ground truth creation is a significant challenge because medical experts, just as physicians in patient care, have to assimilate vast amounts of data in EHR systems. To mitigate the potential inaccuracies arising from these cognitive challenges, we present an iterative vetting approach for creating the ground truth for complex NLP tasks. In this paper, we present the methodology, and report on its use for an automated problem list generation task, its effect on the ground truth quality and system accuracy, and lessons learned from the effort. PMID:28815130

  20. The Truth Before and After: Brain Potentials Reveal Automatic Activation of Event Knowledge during Sentence Comprehension.

    Science.gov (United States)

    Nieuwland, Mante S

    2015-11-01

    How does knowledge of real-world events shape our understanding of incoming language? Do temporal terms like "before" and "after" impact the online recruitment of real-world event knowledge? These questions were addressed in two ERP experiments, wherein participants read sentences that started with "before" or "after" and contained a critical word that rendered each sentence true or false (e.g., "Before/After the global economic crisis, securing a mortgage was easy/harder"). The critical words were matched on predictability, rated truth value, and semantic relatedness to the words in the sentence. Regardless of whether participants explicitly verified the sentences or not, false-after-sentences elicited larger N400s than true-after-sentences, consistent with the well-established finding that semantic retrieval of concepts is facilitated when they are consistent with real-world knowledge. However, although the truth judgments did not differ between before- and after-sentences, no such sentence N400 truth value effect occurred in before-sentences; instead, false-before-sentences elicited enhanced subsequent positive ERPs. The temporal term "before" itself elicited more negative ERPs at central electrode channels than "after." These patterns of results show that, irrespective of ultimate sentence truth value judgments, semantic retrieval of concepts is momentarily facilitated when they are consistent with the known event outcome compared to when they are not. However, this inappropriate facilitation incurs later processing costs as reflected in the subsequent positive ERP deflections. The results suggest that automatic activation of event knowledge can impede the incremental semantic processes required to establish sentence truth value.

  1. Retina Lesion and Microaneurysm Segmentation using Morphological Reconstruction Methods with Ground-Truth Data

    Energy Technology Data Exchange (ETDEWEB)

    Karnowski, Thomas Paul [ORNL; Govindaswamy, Priya [Oak Ridge National Laboratory (ORNL); Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK); Abramoff, M.D. [University of Iowa

    2008-01-01

    In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data was used to create post-processing filters to separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions were achieved on two data sets of 86 images and 1296 images.
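The final sentence reports sensitivity and specificity for the blob classification step (true lesion vs. nuisance blob). As a reminder of how those two figures are computed, a short sketch with made-up counts:

```python
# Sensitivity and specificity for a binary blob classifier of the kind
# described above (nuisance blob vs. true lesion). The counts are made up
# solely to illustrate the definitions.

def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # fraction of true lesions detected
    specificity = tn / (tn + fp)  # fraction of nuisance blobs rejected
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=90, fn=10, tn=90, fp=10)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```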

  2. Retina Lesion and Microaneurysm Segmentation using Morphological Reconstruction Methods with Ground-Truth Data

    Energy Technology Data Exchange (ETDEWEB)

    Karnowski, Thomas Paul [ORNL; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [ORNL; Muthusamy Govindasamy, Vijaya Priya [ORNL

    2009-09-01

    In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data was used to create post-processing filters to separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions were achieved on two data sets of 86 images and 1296 images.

  3. Soil moisture ground truth: Steamboat Springs, Colorado, site and Walden, Colorado, site

    Science.gov (United States)

    Jones, E. B.

    1976-01-01

    Ground-truth data taken at Steamboat Springs and Walden, Colorado in support of the NASA missions in these areas during the period March 8, 1976 through March 11, 1976 are presented. This includes the following information: snow course data for Steamboat Springs and Walden, snow pit and snow quality data for Steamboat Springs, and a soil moisture report.

  4. Designing of Ground Truth Annotated DBT-TU-JU Breast Thermogram Database towards Early Abnormality Prediction.

    Science.gov (United States)

    Bhowmik, Mrinal Kanti; Gogoi, Usha Rani; Majumdar, Gautam; Bhattacharjee, Debotosh; Datta, Dhritiman; Ghosh, Anjan Kumar

    2017-08-17

    The advancement of research in a specific area of clinical diagnosis crucially depends on the availability and quality of the radiology and other test-related databases, accompanied by ground truth and additional necessary medical findings. The paper describes the creation of the Department of Biotechnology-Tripura University-Jadavpur University (DBT-TU-JU) breast thermogram database. The objective of creating the DBT-TU-JU database is to provide a breast thermogram database that is annotated with the ground truth images of the suspicious regions. Along with the result of breast thermography, the database comprises the results of other breast imaging methodologies. A standard breast thermogram acquisition protocol suite comprising several critical factors has been designed for the collection of breast thermograms. Currently, the DBT-TU-JU database contains 1100 breast thermograms of 100 subjects. Due to the necessity of evaluating any breast abnormality detection system, this study emphasizes the generation of the ground truth images of the hotspot areas, whose presence in a breast thermogram signifies the presence of breast abnormality. With the generated ground truth images, we compared the results of six state-of-the-art image segmentation methods using five supervised evaluation metrics to identify the proficient segmentation methods for hotspot extraction. Based on the evaluation results, fractional-order Darwinian particle swarm optimization, region growing, mean shift, and fuzzy c-means clustering are found to be more efficient than k-means clustering and threshold-based segmentation methods.
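One example of the kind of supervised evaluation metric mentioned above (the study's exact five metrics are not listed in this record) is the Dice coefficient between a segmented hotspot mask and its ground-truth mask. A minimal sketch with tiny illustrative masks:

```python
import numpy as np

# Dice coefficient: a common supervised metric for scoring a segmentation
# mask against a ground-truth mask. The 2x3 masks below are illustrative.

def dice(mask_a, mask_b):
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

truth = np.array([[0, 1, 1], [0, 1, 0]])
seg   = np.array([[0, 1, 0], [0, 1, 1]])
print(round(dice(truth, seg), 2))
```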

  5. A new benchmark for pose estimation with ground truth from virtual reality

    DEFF Research Database (Denmark)

    Schlette, Christian; Buch, Anders Glent; Aksoy, Eren Erdal

    2014-01-01

    assembly tasks. Following the eRobotics methodology, a simulatable 3D representation of this platform was modelled in virtual reality. Based on a detailed camera and sensor simulation, we generated a set of benchmark images and point clouds with controlled levels of noise as well as ground truth data...

  6. Go fly a kite : air truthing replaces ground truthing for environmental investigations

    Energy Technology Data Exchange (ETDEWEB)

    Eaton, S.

    2008-05-15

    This article discussed the use of kite aerial photography (KAP) in oil and gas exploration activities. KAP exhibits a minimal environmental footprint while providing high resolution airborne data of the Earth's surface in infrared and a variety of other media. The cost-effective technology is being employed by Alberta's oil and gas operators as well as by the environmental consulting sector. The kites fly at lower elevations than other remote sensing tools, and yield better spatial resolution on the ground. KAP can map the Earth's surface at a scale of investigation on the order of 5 to 10 centimetres. The images are placed into a geo-referenced mosaic along with poorer resolution remote sensing tools. A KAP kit can be assembled for under $1000. By using infrared KAP images, operators are able to determine the health of muskeg and swamp areas and measure the rate of photosynthesis of plants. KAP is also used by reclamation groups to evaluate troublesome wellsites. The next generation of sensors will include radio-controlled drones and miniature aircraft. 6 figs.

  7. Comparison of MTI Water Temperatures with Ground Truth Measurements at Crater Lake, OR

    Energy Technology Data Exchange (ETDEWEB)

    Kurzeja, R.J.

    2002-12-09

    Water surface temperatures calculated with the Los Alamos National Laboratory Robust algorithm were compared with ground truth water temperature measurements near the Oregon State University buoy in Crater Lake, OR. Bulk water measurements at the OSU buoy were corrected for the skin temperature depression and temperature gradient in the top 10 cm of the water to find the water surface temperature for 18 MTI images from June 2000 to February 2002. The MTI robust temperatures were found to be biased by 0.1 C, with an RMS error of 1.9 C compared with the ground truth water surface temperatures. When corrected for the errors in the buoy temperatures, the RMS was reduced to 1.3 C. This RMS difference is greater than the 1 C found at the Pacific island of Nauru because of the greater variability in the lake temperature and the atmosphere at Crater Lake and the much smaller target area used in the comparison.
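The bias and RMS figures quoted above are the two standard summary statistics for such a satellite-versus-ground-truth comparison. A short sketch with synthetic temperatures (not the Crater Lake data):

```python
import numpy as np

# Bias (mean difference) and RMS difference between retrieved and
# ground-truth surface temperatures, the two statistics quoted above.
# The sample values are synthetic, not the Crater Lake measurements.

def bias_and_rms(retrieved, truth):
    diff = np.asarray(retrieved) - np.asarray(truth)
    return diff.mean(), np.sqrt(np.mean(diff ** 2))

retrieved = [10.2, 12.5, 8.9, 15.1]
truth     = [10.0, 13.0, 9.5, 14.8]
bias, rms = bias_and_rms(retrieved, truth)
print(f"bias={bias:+.2f} C, rms={rms:.2f} C")
```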

  8. ERTS-1 DCS technical support provided by Wallops Station. [ground truth stations and DCP repair depot

    Science.gov (United States)

    Smith, R.

    1975-01-01

    Wallops Station accepted the tasks of providing ground truth to several ERTS investigators, operating a DCP repair depot, designing and building an airborne DCP Data Acquisition System, and providing aircraft underflight support for several other investigators. Additionally, the data bank is generally available for use by ERTS and other investigators that have a scientific interest in data pertaining to the Chesapeake Bay area. Working with DCS has provided a means of evaluating the system as a data collection device possibly applicable to ongoing Earth Resources Program activities in the Chesapeake Bay area as well as providing useful data and services to other ERTS investigators. The two areas of technical support provided by Wallops, ground truth stations and repair for DCPs, are briefly discussed.

  9. More efficient ground truth ROI image coding technique: implementation and wavelet-based application analysis

    Institute of Scientific and Technical Information of China (English)

    KUMARAYAPA Ajith; ZHANG Ye

    2007-01-01

    In this paper, a more efficient, low-complexity and reliable region of interest (ROI) image codec for compressing smooth, low-texture remote sensing images is proposed. We explore the efficiency of the modified ROI codec with respect to a selected set of convenient wavelet filters, which is a novel method. Such an analysis of ROI coding experiments, spanning low bit rate lossy to high quality lossless reconstruction and including timing analysis, is useful for improving remote sensing ground truth surveillance efficiency in terms of time and quality. The subjective results [five-observer human visual system (HVS) evaluations using enhanced 3D picture-view Hyper memory display technology] and the objective results revealed that for faster ground truth ROI coding applications, the Symlet-4 adaptation performs better than Biorthogonal 4.4 and Biorthogonal 6.8. However, the discrete Meyer wavelet adaptation is the best solution for delayed ROI image reconstructions.

  10. Ground Truth Events with Source Geometry in Eurasia and the Middle East

    Science.gov (United States)

    2016-06-02

    in Saudi Arabia, 5 in Ethiopia, and 10 in Tanzania with magnitudes of 3 or greater. Source parameters were obtained through moment tensor inversions. Subject terms: seismic location; seismic ground truth; seismic moment tensor. [Report figure captions include: Figure 2, results from the grid search and moment tensor inversion for source mechanisms for events in Saudi Arabia.]

  11. SuReSim: simulating localization microscopy experiments from ground truth models.

    Science.gov (United States)

    Venkataramani, Varun; Herrmannsdörfer, Frank; Heilemann, Mike; Kuner, Thomas

    2016-04-01

    Super-resolution fluorescence microscopy has become a widely used tool in many areas of research. However, designing and validating super-resolution experiments to address a research question in a technically feasible and scientifically rigorous manner remains a fundamental challenge. We developed SuReSim, a software tool that simulates localization data of arbitrary three-dimensional structures represented by ground truth models, allowing users to systematically explore how changing experimental parameters can affect potential imaging outcomes.
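The core of a simulator like SuReSim can be caricatured in a few lines: sample fluorophore positions from a ground-truth structure, drop some detections, and add localization-precision noise. The sketch below is our own drastic simplification, not SuReSim's actual model, and every parameter value is an assumption:

```python
import numpy as np

# Toy simulation of localization-microscopy data from a ground-truth model:
# keep each ground-truth point with some detection probability, then add
# Gaussian localization error. All parameters are illustrative assumptions.

def simulate_localizations(ground_truth_xyz, precision_nm, detect_prob, rng):
    pts = np.asarray(ground_truth_xyz, dtype=float)
    keep = rng.random(len(pts)) < detect_prob            # missed detections
    noisy = pts[keep] + rng.normal(0.0, precision_nm,    # localization error
                                   size=(keep.sum(), 3))
    return noisy

rng = np.random.default_rng(42)
# Ground truth: 1000 points along a 1 um line (a crude "filament"), in nm.
line = np.column_stack([np.linspace(0, 1000, 1000),
                        np.zeros(1000), np.zeros(1000)])
locs = simulate_localizations(line, precision_nm=20.0, detect_prob=0.6, rng=rng)
print(len(locs), "localizations")
```

A real simulator additionally models labeling density, blinking kinetics, and the imaging PSF, which is exactly the parameter exploration the abstract describes.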

  12. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space

    Science.gov (United States)

    Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.

    2017-03-01

    Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction.
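Ground-truth disparity in such a dataset is linked to depth by stereo geometry. For the simple parallel-axis case the relation is d = f * B / Z; the paper's convergent, cyclotorted geometry is more involved, so the sketch below is only the textbook approximation, with made-up camera numbers:

```python
# Textbook parallel-axis stereo relation: disparity (pixels) = f * B / Z,
# with focal length f in pixels, baseline B and depth Z in meters.
# The values below (f = 800 px, 6.5 cm baseline, 0.5 m depth) are
# illustrative, not the dataset's actual camera parameters.

def disparity_px(focal_px, baseline_m, depth_m):
    return focal_px * baseline_m / depth_m

# An object at 0.5 m (well within peripersonal space):
print(round(disparity_px(800, 0.065, 0.5), 1))
```

Note the inverse dependence on depth: nearby (peripersonal) objects produce large disparities, which is why fixation-realistic geometry matters most at these distances.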

  14. Reference-free ground truth metric for metal artifact evaluation in CT images

    Energy Technology Data Exchange (ETDEWEB)

    Kratz, Baerbel; Ens, Svitlana; Mueller, Jan; Buzug, Thorsten M. [Institute of Medical Engineering, University of Luebeck, 23538 Luebeck (Germany)

    2011-07-15

    Purpose: In computed tomography (CT), metal objects in the region of interest introduce data inconsistencies during acquisition. Reconstructing these data results in an image with star-shaped artifacts induced by the metal inconsistencies. To enhance image quality, the influence of the metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches, a ground truth reference data set is needed. In technical evaluations, where phantoms can be measured with and without metal inserts, ground truth data can easily be obtained by a second reference data acquisition. Obviously, this is not possible for clinical data. Here, an alternative evaluation method is presented without the need for an additionally acquired reference data set. Methods: The proposed metric is based on an inherent ground truth for the comparison of metal artifacts as well as MAR methods, where no reference information in terms of a second acquisition is needed. The method is based on the forward projection of a reconstructed image, which is compared to the actually measured projection data. Results: The new evaluation technique is performed on phantom and on clinical CT data with and without MAR. The metric results are then compared with methods using a reference data set as well as an expert-based classification. It is shown that the new approach is an adequate quantification technique for artifact strength in reconstructed metal or MAR CT images. Conclusions: The presented method works solely on the original projection data itself, which yields some advantages compared to distance measures in the image domain using two data sets. Besides this, no parameters have to be chosen manually. The new metric is a useful evaluation alternative when no reference data are available.
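The core idea of the metric, re-projecting the reconstruction and comparing it against the measured projection data, can be sketched as follows. This is an illustrative toy parallel-beam projector, not the authors' implementation; the function names, the RMS choice, and the optional metal-trace mask are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Toy parallel-beam projector: rotate the image and sum along one axis."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def reference_free_metric(recon, measured_sino, angles_deg, metal_trace=None):
    """RMS difference between the re-projection of a reconstruction and the
    actually measured projection data; the metal trace can be masked out."""
    diff = forward_project(recon, angles_deg) - measured_sino
    if metal_trace is not None:
        diff = diff[~metal_trace]
    return float(np.sqrt(np.mean(diff ** 2)))
```

A perfect reconstruction re-projects exactly onto the measured data (metric 0); metal artifacts increase the discrepancy without any second acquisition being needed.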

  15. FIELD GROUND TRUTHING DATA COLLECTOR – A MOBILE TOOLKIT FOR IMAGE ANALYSIS AND PROCESSING

    Directory of Open Access Journals (Sweden)

    X. Meng

    2012-07-01

    Field Ground Truthing Data Collector is one of the four key components of the NASA-funded ICCaRS project, being developed in Southeast Michigan. The ICCaRS ground truthing toolkit provides comprehensive functions: (1) field functions, including determining locations through GPS, gathering and geo-referencing visual data, laying out ground control points for AEROKAT flights, measuring the flight distance and height, and entering observations of land cover (and use) and the health conditions of ecosystems and environments in the vicinity of the flight field; (2) server synchronization functions, such as downloading study-area maps, aerial photos and satellite images, uploading and synchronizing field-collected data with the distributed databases, calling the geospatial web services on the server side to conduct spatial querying, image analysis and processing, and receiving the processed results in the field for near-real-time validation; and (3) social network communication functions for direct technical assistance and pedagogical support, e.g., having video-conference calls in the field with the supporting educators, scientists, and technologists, participating in Webinars, or engaging in discussions with other learning portals. This customized software package is being built on Apple iPhone/iPad and Google Maps/Earth. The technical infrastructures, data models, coupling methods between distributed geospatial data processing and field data collector tools, remote communication interfaces, coding schema, and functional flow charts will be illustrated and explained in the presentation. A pilot case study will also be demonstrated.

  16. Ground Truth Studies - A hands-on environmental science program for students, grades K-12

    Science.gov (United States)

    Katzenberger, John; Chappell, Charles R.

    1992-01-01

    The paper discusses the background and the objectives of the Ground Truth Studies (GTSs), an activity-based teaching program which integrates local environmental studies with global change topics, utilizing remotely sensed earth imagery. Special attention is given to the five key concepts around which the GTS programs are organized, the pilot program, the initial pilot study evaluation, and the GTS Handbook. The GTS Handbook contains a primer on global change and remote sensing, aerial and satellite images, student activities, glossary, and an appendix of reference material. Also described is a K-12 teacher training model. International participation in the program is to be initiated during the 1992-1993 school year.

  17. New Ground Truth Capability from InSAR Time Series Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, S; Vincent, P; Yang, D

    2005-07-13

    We demonstrate that next-generation interferometric synthetic aperture radar (InSAR) processing techniques applied to existing data provide rich InSAR ground truth content for exploitation in seismic source identification. InSAR time series analyses utilize tens of interferograms and can be implemented in different ways. In one such approach, conventional InSAR displacement maps are inverted in a final post-processing step. Alternatively, computationally intensive data reduction can be performed with specialized InSAR processing algorithms. The typical final result of these approaches is a synthesized set of cumulative displacement maps. Examples from our recent work demonstrate that these InSAR processing techniques can provide appealing new ground truth capabilities. We construct movies showing the areal and temporal evolution of deformation associated with previous nuclear tests. In other analyses, we extract time histories of centimeter-scale surface displacement associated with tunneling. The potential exists to identify millimeter-per-year surface movements when sufficient data exist for InSAR techniques to isolate and remove phase signatures associated with digital elevation model errors and the atmosphere.
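The post-processing inversion mentioned above, turning pairwise interferogram measurements into per-date cumulative displacement, is at its simplest a least-squares problem. A minimal single-pixel sketch, assuming a connected network of date pairs and the first date as reference (this is not the authors' specific algorithm):

```python
import numpy as np

def invert_time_series(pairs, phases, n_dates):
    """Least-squares inversion of interferogram measurements (each spanning a
    date pair i -> j) into cumulative displacement per date, date 0 = 0."""
    A = np.zeros((len(pairs), n_dates - 1))
    for k, (i, j) in enumerate(pairs):
        if j > 0:
            A[k, j - 1] += 1.0   # displacement at the later date
        if i > 0:
            A[k, i - 1] -= 1.0   # minus displacement at the earlier date
    d, *_ = np.linalg.lstsq(A, np.asarray(phases, float), rcond=None)
    return np.concatenate([[0.0], d])
```

Applied per pixel over tens of interferograms, the solution vector is exactly the "synthesized set of cumulative displacement maps" the abstract describes.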

  18. Assessment of MTI Water Temperature Thermal Discharge Retrievals with Ground Truth

    Energy Technology Data Exchange (ETDEWEB)

    Kurzeja, R.J.

    2002-12-06

    Surface water temperatures calculated from Multispectral Thermal Imager (MTI) brightness temperatures and the robust retrieval algorithm, developed by the Los Alamos National Laboratory (LANL), are compared with ground truth measurements at a mid-latitude cold-water site along the Atlantic coast near Plymouth, MA. In contrast to the relative uniformity of the sea-surface temperature in the open ocean, the water temperature near Pilgrim exhibits strong spatial gradients and temporal variability. This made it critical that all images be accurately registered in order to extract temperature values at the six buoy locations. Sixteen images during a one-year period from August 2000 to July 2001 were selected for the study. The RMS error of Pilgrim water temperature is about 3.5 C for the 4 buoys located in open water. The RMS error of the combined temperatures from 3 of the open-water buoys is 2.8 C. The RMS error includes errors in the ground truth, the magnitude of which is estimated to range between 0.8 and 2.3 C. The two main components of this error are the warm-layer effect and spatial variability. The actual error in the MTI retrievals for Pilgrim daytime conditions is estimated to be between 2.7 and 3.4 C for individual buoys and between 1.7 and 2.7 C for the combined open-water buoys.

  19. Phantom-based ground-truth generation for cerebral vessel segmentation and pulsatile deformation analysis

    Science.gov (United States)

    Schetelig, Daniel; Säring, Dennis; Illies, Till; Sedlacik, Jan; Kording, Fabian; Werner, René

    2016-03-01

    Hemodynamic and mechanical factors of the vascular system are assumed to play a major role in understanding, e.g., the initiation, growth and rupture of cerebral aneurysms. Among those factors, cardiac cycle-related pulsatile motion and deformation of cerebral vessels currently attract much interest. However, imaging of those effects requires high spatial and temporal resolution and remains challenging, as does the analysis of the acquired images: flow velocity changes and contrast media inflow cause vessel intensity variations in related temporally resolved computed tomography and magnetic resonance angiography data over the cardiac cycle and impede application of intensity threshold-based segmentation and subsequent motion analysis. In this work, a flow phantom for the generation of ground-truth images for the evaluation of appropriate segmentation and motion analysis algorithms is developed. The acquired ground-truth data is used to illustrate the interplay between intensity fluctuations and (erroneous) motion quantification by standard threshold-based segmentation, and an adaptive threshold-based segmentation approach is proposed that alleviates the respective issues. The results of the phantom study are further demonstrated to be transferable to patient data.
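The abstract does not detail the adaptive scheme. As a minimal sketch of why a fixed intensity threshold fails under contrast inflow while a frame-adaptive one does not (the fraction-of-peak rule here is an assumption for illustration, not the authors' method):

```python
import numpy as np

def segment_fixed(frames, thresh):
    """Same global intensity threshold for every time frame."""
    return [f > thresh for f in frames]

def segment_adaptive(frames, fraction=0.5):
    """Per-frame threshold as a fraction of that frame's peak intensity, so
    inflow-related intensity fluctuations do not shrink or grow the mask."""
    return [f > fraction * f.max() for f in frames]
```

With a fixed threshold, a frame acquired at low contrast concentration loses the vessel entirely, which would be misread as motion or deformation; the adaptive variant keeps the segmentation stable across the cardiac cycle.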

  20. First- and third-party ground truth for key frame extraction from consumer video clips

    Science.gov (United States)

    Costello, Kathleen; Luo, Jiebo

    2007-02-01

    Extracting key frames (KF) from video is of great interest in many applications, such as video summary, video organization, video compression, and prints from video. KF extraction is not a new problem. However, current literature has been focused mainly on sports or news video. In the consumer video space, the biggest challenges for key frame selection from consumer videos are the unconstrained content and the lack of any preimposed structure. In this study, we conduct ground truth collection of key frames from video clips taken by digital cameras (as opposed to camcorders) using both first- and third-party judges. The goals of this study are: (1) to create a reference database of video clips reasonably representative of the consumer video space; (2) to identify associated key frames by which automated algorithms can be compared and judged for effectiveness; and (3) to uncover the criteria used by both first- and third-party human judges so these criteria can influence algorithm design. The findings from these ground truths will be discussed.

  1. Ground Truth Observations of the Interior of a Rockglacier as Validation for Geophysical Monitoring Data Sets

    Science.gov (United States)

    Hilbich, C.; Roer, I.; Hauck, C.

    2007-12-01

    Monitoring the permafrost evolution in mountain regions is currently one of the important tasks in cryospheric studies as little data on past and present changes of the ground thermal regime and its material properties are available. In addition to recently established borehole temperature monitoring networks, techniques to determine and monitor the ground ice content have to be developed. A reliable quantification of ground ice is especially important for modelling the thermal evolution of frozen ground and for assessing the hazard potential due to thawing permafrost induced slope instability. Near surface geophysical methods are increasingly applied to detect and monitor ground ice occurrences in permafrost areas. Commonly, characteristic values of electrical resistivity and seismic velocity are used as indicators for the presence of frozen material. However, validation of the correct interpretation of the geophysical parameters can only be obtained through boreholes, and only regarding vertical temperature profiles. Ground truth of the internal structure and the ice content is usually not available. In this contribution we will present a unique data set from a recently excavated rockglacier near Zermatt/Valais in the Swiss Alps, where an approximately 5 m deep trench was cut across the rockglacier body for the construction of a ski track. Longitudinal electrical resistivity tomography (ERT) and refraction seismic tomography profiles were conducted prior to the excavation, yielding data sets for cross validation of commonly applied geophysical interpretation approaches in the context of ground ice detection. A recently developed 4-phase model was applied to calculate ice-, air- and unfrozen water contents from the geophysical data sets, which were compared to the ground truth data from the excavated trench. The obtained data sets will be discussed in the context of currently established geophysical monitoring networks in permafrost areas. In addition to the

  2. SIR-C/X-SAR data calibration and ground truth campaign over the NASA-CB1 test-site

    Energy Technology Data Exchange (ETDEWEB)

    Notarnicola, C.; Posa, F.; Refice, A.; Sergi, R.; Smacchia, P. [Istituto Nazionale di Fisica della Materia and Dipartimento Interateneo di Fisica, Bari (Italy); Casarano, D. [ENEA, Centro Ricerche Trisaia, Rotondella, MT (Italy); De Carolis, G.; Mattia, F. [Istituto di Tecnologia Informatica Spaziale-Consiglio Nazionale delle Ricerche, Centro di Geodesia Spaziale G. Colombo, Terlecchia, MT (Italy); Schena, V.D. [Alenia Spazio, Rome (Italy)

    2001-02-01

    During the Space Shuttle Endeavour mission in October 1994, a remote-sensing campaign was carried out with the objectives of both radiometric and polarimetric calibration and ground truth data acquisition over bare soils. This paper presents the results obtained in the experiment. Polarimetric cross-talk and channel imbalance values, as well as radiometric calibration parameters, have been found to be within the science requirements for SAR images. Regarding ground truth measurements, a wide spread in the rms height values and correlation lengths has been observed, which motivated a critical revisiting of surface parameter descriptors.

  3. Ground Truth Collection for Mining Explosions in Northern Fennoscandia and Northwestern Russia

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D B; Ringdal, R; Kremenetskaya, E; Mykkeltveit, S; Rock, D W; Maercklin, N; Schweitzer, J; Hauk, T F; Lewis, J P

    2005-07-13

    We concluded comprehensive ground truth collection at the Khibiny, Olenegorsk, Kovdor, and Zapolyarnyi mines, and have basic information on 2,052 explosions. In the past two years we used this ground truth information to extract waveform data from the ARCES array and a number of regional stations (KEV, LVZ, APA) as well as from six stations that we deployed along two lines stretching between the Khibiny Massif mines and the region around the ARCES array. We calculated P/S ratios using the ARCES array data for many of these events comprising several source types (compact underground explosions, underground ripple-fired explosions, surface ripple-fired explosions). We found that the P/S ratios of small compact underground explosions in mines of the Khibiny Massif are systematically lower than the P/S ratios of large ripple-fired surface explosions. We had anticipated that smaller underground shots would appear more like single well-coupled explosions, thus having higher P/S ratios than large ripple-fired explosions. A possible explanation for this phenomenon is that the compact underground explosions in these mines are designed to fracture and drop a large quantity of ore from the ceiling of a horizontal shaft. The potential energy released by the falling ore may express as shear wave energy, which may be considerably greater than the (P wave) energy released directly by the explosive. We concluded the deployment of the six stations along the Khibiny-ARCES lines this past summer; this year we are examining the data from these stations to see how P/S ratios vary with range from the source. We have an update on the P/S ratio analysis contrasting different source types, with the addition of an analysis of range dependence using data from the temporary stations. The portable stations were redeployed in the fall of 2004 to the Kiruna and Malmberget underground mines in northern Sweden. The stations deployed in Malmberget also record events from the surface mining

  4. Objective Performance Evaluation of Video Segmentation Algorithms with Ground-Truth

    Institute of Scientific and Technical Information of China (English)

    杨高波; 张兆扬

    2004-01-01

    While the development of particular video segmentation algorithms has attracted considerable research interest, relatively little effort has been devoted to providing a methodology for evaluating their performance. In this paper, we propose a methodology to objectively evaluate video segmentation algorithms with ground-truth, which is based on computing the deviation of segmentation results from the reference segmentation. Four different metrics, based respectively on classified pixels, edges, relative foreground area, and relative position, are combined to address spatial accuracy. Temporal coherency is evaluated by utilizing the difference in spatial accuracy between successive frames. The experimental results show the feasibility of our approach. Moreover, it is computationally more efficient than previous methods. It can be applied to provide an offline ranking among different segmentation algorithms and to optimally set the parameters for a given algorithm.
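As a rough illustration of two of the ingredients, per-frame pixel-classification accuracy against the reference and its frame-to-frame stability, the following sketch can be read alongside the abstract (the paper's full four-metric combination is not reproduced; names are assumptions):

```python
import numpy as np

def spatial_accuracy(seg, ref):
    """Fraction of pixels classified identically to the reference mask."""
    return float((np.asarray(seg) == np.asarray(ref)).mean())

def temporal_coherency(segs, refs):
    """Mean absolute change of spatial accuracy between successive frames;
    lower values indicate a temporally stable segmentation quality."""
    acc = [spatial_accuracy(s, r) for s, r in zip(segs, refs)]
    return float(np.mean(np.abs(np.diff(acc))))
```

An algorithm whose accuracy oscillates frame to frame scores worse on temporal coherency even if its average spatial accuracy is high, which is the intuition the abstract describes.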

  5. The IMPACT project Polish Ground-Truth texts as a Djvu corpus

    Directory of Open Access Journals (Sweden)

    Janusz S. Bień

    2014-09-01

    The purpose of the paper is twofold. First, to describe the already implemented idea of DjVu corpora, i.e. corpora which consist of both scanned images and a transcription of the texts, with the words associated with their occurrences in the scans. Secondly, to present a case study of a corpus consisting of almost 5,000 pages of Polish historical texts dating from 1570 to 1756 (it is practically the very first corpus of historical Polish). The tools described have a universal character and are freely available under the GNU GPL license, hence they can also be used for other purposes.

  6. Towards a repository for standardized medical image and signal case data annotated with ground truth.

    Science.gov (United States)

    Deserno, Thomas M; Welter, Petra; Horsch, Alexander

    2012-04-01

    Validation of medical signal and image processing systems requires quality-assured, representative and generally acknowledged databases accompanied by appropriate reference (ground truth) and clinical metadata, which are composed laboriously for each project and are not shared with the scientific community. In our vision, such data will be stored centrally in an open repository. We propose an architecture for a standardized case data and ground truth information repository supporting the evaluation and analysis of computer-aided diagnosis based on (a) the Reference Model for an Open Archival Information System (OAIS) provided by the NASA Consultative Committee for Space Data Systems (ISO 14721:2003), (b) the Dublin Core Metadata Initiative (DCMI) Element Set (ISO 15836:2009), (c) the Open Archive Initiative (OAI) Protocol for Metadata Harvesting, and (d) the Image Retrieval in Medical Applications (IRMA) framework. In our implementation, a portal bunches all of the functionalities that are needed for data submission and retrieval. The complete life cycle of the data (define, create, store, sustain, share, use, and improve) is managed. Sophisticated search tools make it easier to use the datasets, which may be merged from different providers. An integrated history record guarantees reproducibility. A standardized creation report is generated with a permanent digital object identifier. This creation report must be referenced by all of the data users. Peer-reviewed e-publishing of these reports will create a reputation for the data contributors and will form de-facto standards regarding image and signal datasets. Good practice guidelines for validation methodology complement the concept of the case repository. This procedure will increase the comparability of evaluation studies for medical signal and image processing methods and applications.
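For illustration, a minimal Dublin-Core-style creation report for a dataset could be serialized as follows. The field selection and the flat `record` wrapper are assumptions for the sketch; the repository's actual schema, built on the DCMI element set and OAI-PMH, is richer.

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

def creation_report(title, creator, identifier, description):
    """Minimal Dublin-Core-style record for a dataset creation report
    (illustrative subset of the 15 DCMI elements)."""
    root = ET.Element("record")
    for tag, text in [("title", title), ("creator", creator),
                      ("identifier", identifier), ("description", description)]:
        el = ET.SubElement(root, f"{{{DC}}}{tag}")
        el.text = text
    return ET.tostring(root, encoding="unicode")
```

The `identifier` element is where the permanent digital object identifier mentioned in the abstract would go, so that data users can reference the report.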

  7. Ground truth and detection threshold from WWII naval clean-up in Denmark

    Science.gov (United States)

    Larsen, Tine B.; Dahl-Jensen, Trine; Voss, Peter

    2013-04-01

    The sea bed below the Danish territorial waters is still littered with unexploded mines and other ammunition from World War II. The mines were air-dropped by the RAF, and their positions are unknown. As the mines still pose a potential threat to fishery and other marine activities, the Admiral Danish Fleet under the Danish Navy searches for the mines and destroys them by detonation where they are found. The largest mines destroyed in this manner in 2012 are equivalent to 800 kg TNT each. The Seismological Service at the National Geological Survey of Denmark and Greenland is notified by the navy when ammunition in excess of 100 kg TNT is detonated. The notifications include information about position, detonation time, and the estimated amount of explosives. The larger explosions are clearly registered not only on the Danish seismographs, but also on seismographs in the neighbouring countries, including the large seismograph arrays in Norway, Sweden, and Finland. Until recently the information from the Danish navy was only utilized to rid the Danish earthquake catalogue of explosions. But the high-quality information provided by the navy enables us to use these ground truth events to assess the quality of our earthquake catalogue. The mines are scattered throughout the Danish territorial waters; thus we can use the explosions to test the accuracy of the determined epicentres in all parts of the country. For example, a detonation of 135 kg in Begstrup Vig in the central part of Denmark was located using Danish, Norwegian and Swedish stations with an accuracy of less than 2 km from ground truth. A systematic study of the explosions will sharpen our understanding of the seismicity in Denmark and result in a more detailed understanding of the detection threshold. Furthermore, the study will shed light on the sensitivity of the network to various seismograph outages.

  8. Towards improved characterization of northern wetlands (or other landscapes) by remote sensing - a rapid approach to collect ground truth data

    Science.gov (United States)

    Gålfalk, Magnus; Karlson, Martin; Crill, Patrick; Bastviken, David

    2017-04-01

    The calibration and validation of remote sensing land cover products is highly dependent on accurate ground truth data, which are costly and practically challenging to collect. This study evaluates a novel and efficient alternative to the field surveys and UAV imaging commonly applied for this task. The method consists of (i) a lightweight, waterproof, remote-controlled RGB camera mounted on an extendable monopod, used for acquiring wide-field images of the ground from a height of 4.5 meters, and (ii) a script for semi-automatic image classification. In the post-processing, the wide-field images are corrected for optical distortion and geometrically rectified so that the spatial resolution is the same over the surface area used for classification. The script distinguishes land surface components by color, brightness and spatial variability. The method was evaluated in wetland areas located around Abisko, northern Sweden. Proportional estimates of the six main surface components in the wetlands (wet and dry Sphagnum, shrub, grass, water, rock) were derived for 200 images, equivalent to 10 × 10 m field plots. These photo plots were then used as calibration data for a regional-scale satellite-based classification which separates the six wetland surface components using a Sentinel-1 time series. The method presented in this study is accurate, rapid, robust and cost-efficient in comparison to field surveys (time-consuming) and drone mapping (which requires low wind speeds and no rain, suffers from battery-limited flight times, has potential GPS/compass errors in the far north, and in some areas is prohibited by law).
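A heavily simplified sketch of per-pixel labelling by colour dominance and brightness follows. The published script also uses spatial variability, and the class rules and thresholds below are invented for illustration only:

```python
import numpy as np

def classify_plot(rgb):
    """Label each pixel of a rectified plot image (values in 0..1) by simple
    colour/brightness rules; classes and thresholds are hypothetical."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    brightness = rgb.mean(axis=-1)
    labels = np.full(rgb.shape[:2], "rock", dtype=object)
    labels[(g > r) & (g > b)] = "grass"    # green-dominated pixels
    labels[(b > r) & (b > g)] = "water"    # blue-dominated pixels
    labels[brightness < 0.15] = "shadow"   # very dark pixels, applied last
    return labels

def class_fractions(labels):
    """Proportional estimate of each surface component within the plot."""
    vals, counts = np.unique(labels, return_counts=True)
    return dict(zip(vals, counts / labels.size))
```

The per-plot fractions produced this way are the kind of proportional estimates the abstract feeds into the Sentinel-1-based regional classification as calibration data.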

  9. Micro-earthquakes "Just Underneath" Seismic Stations as Ground Truth Events: Application to the 2008 Wenchuan Aftershock Sequence

    Institute of Scientific and Technical Information of China (English)

    Liu Chun; Wu Zhongliang; Jiang Changsheng

    2008-01-01

    Analyzing the aftershock sequence of the 2008 Wenchuan earthquake, we considered 26 micro-earthquakes "just underneath" seismic stations. Making use of such special station-event configurations to determine the depth of these micro-earthquakes provided accurate relocation of aftershocks with a reference set of "ground truth (GT) events".

  10. An Upscaling Algorithm to Obtain the Representative Ground Truth of LAI Time Series in Heterogeneous Land Surface

    Directory of Open Access Journals (Sweden)

    Yuechan Shi

    2015-09-01

    Upscaling in situ leaf area index (LAI) measurements to the footprint scale is important for the validation of medium-resolution remote sensing products. However, surface heterogeneity and the temporal variation of vegetation make this difficult. In this study, a two-step upscaling algorithm was developed to obtain the representative ground truth of LAI time series in heterogeneous surfaces based on in situ LAI data measured by the wireless sensor network (WSN) observation system. Since heterogeneity within a site usually arises from the mixture of vegetation and non-vegetation surfaces, the spatial heterogeneity of vegetation and land cover types were considered separately. Representative LAI time series of vegetation surfaces were obtained by upscaling in situ measurements using an optimal weighted combination method, incorporating the expectation maximization (EM) algorithm to derive the weights. The ground truth of LAI over the whole site could then be determined using an area-weighted combination of the representative LAIs of the different land cover types. The algorithm was evaluated using a dataset collected in the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) experiment. The proposed algorithm can effectively obtain the representative ground truth of LAI time series in heterogeneous cropland areas. Using the normal method of averaging LAI measurements to represent the heterogeneous surface produced a root mean square error (RMSE) of 0.69, whereas the proposed algorithm provided RMSE = 0.032 using 23 sampling points. The proposed ground-truth derivation method was implemented to validate four major LAI products.
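The two-step combination described above can be sketched as below, with user-supplied point weights standing in for the EM-derived ones; the function name and data layout are assumptions for illustration:

```python
import numpy as np

def representative_lai(lai_by_cover, point_weights, area_fractions):
    """Two-step upscaling: (1) weighted average of the in situ point time
    series within each land cover type, (2) area-weighted average across
    cover types to get the site-level ground-truth LAI time series."""
    site = 0.0
    for cover, series in lai_by_cover.items():   # series: (n_points, n_times)
        w = np.asarray(point_weights[cover], float)
        w = w / w.sum()                          # normalize within the cover type
        cover_mean = w @ np.asarray(series, float)
        site = site + area_fractions[cover] * cover_mean
    return site
```

In the paper the within-cover weights come from an EM estimation of each sampling point's representativeness; here they are simply passed in to keep the sketch self-contained.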

  11. MTI Ground Truth Collection Ivanpah Dry Lake Bed, California, May, July, and August 2002

    Energy Technology Data Exchange (ETDEWEB)

    David L. Hawley

    2002-10-01

    A multi-agency collaboration successfully completed a series of ground truth measurements at the Ivanpah Dry Lake bed during FY 2002. Four collection attempts were made: two in May, one in July, and one in August. The objective was to collect ground-based measurements and airborne data during Multispectral Thermal Imager satellite overpasses. The measurements were to aid in the calibration of the satellite data and in algorithm validation. The Remote Sensing Laboratory, Las Vegas, Nevada; the National Aeronautics and Space Administration; Los Alamos National Laboratory; and the University of Arizona participated in the effort. Field instrumentation included a sun photometer on loan from the University of Arizona and the Remote Sensing Laboratory's radiosonde weather balloon, weather station, thermal infrared radiometers, and spectral radiometer. In addition, three reflectance panels were deployed; certain tests used water baths set at two different temperatures. Local weather data as well as sky photography were collected. May presented several excellent days; however, it was later learned that tasking for the satellite was not available. A combination of cloud cover, wind, and dusty conditions limited useful data collections to two days, August 28 and 29. Despite less-than-ideal weather conditions, the data for the Multispectral Thermal Imager calibration were obtained. A unique set of circumstances also allowed data collection during overpasses of the LANDSAT7 and ASTER satellites.

  12. Ground truth methods for optical cross-section modeling of biological aerosols

    Science.gov (United States)

    Kalter, J.; Thrush, E.; Santarpia, J.; Chaudhry, Z.; Gilberry, J.; Brown, D. M.; Brown, A.; Carter, C. C.

    2011-05-01

    Light detection and ranging (LIDAR) systems have demonstrated some capability to meet the needs of a fast-response standoff biological detection method for simulants in open-air conditions. These systems are designed to exploit various cloud signatures, such as differential elastic backscatter, fluorescence, and depolarization, in order to detect biological warfare agents (BWAs). However, because the release of BWAs in open air is forbidden, methods must be developed to predict candidate system performance against real agents. In support of such efforts, the Johns Hopkins University Applied Physics Lab (JHU/APL) has developed a modeling approach to predict the optical properties of agent materials from relatively simple, Biosafety Level 3-compatible bench-top measurements. JHU/APL has fielded new ground truth instruments (in addition to standard particle sizers, such as the Aerodynamic Particle Sizer (APS) or GRIMM aerosol monitor) to more thoroughly characterize the simulant aerosols released in recent field tests at Dugway Proving Ground (DPG). These instruments include the Scanning Mobility Particle Sizer (SMPS), the Ultraviolet Aerodynamic Particle Sizer (UVAPS), and the Aspect Aerosol Size and Shape Analyser (Aspect). The SMPS was employed as a means of measuring small-particle concentrations for more accurate Mie scattering simulations; the UVAPS, which measures size-resolved fluorescence intensity, was employed as a path toward fluorescence cross-section modeling; and the Aspect, which measures particle shape, was employed as a path toward depolarization modeling.

  13. Ground Truth Collection for Mining Explosions in Northern Fennoscandia and Russia

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D; Ringdal, F; Kremenetskaya, E; Mykkeltveit, S; Rock, D E; Schweitzer, J; Hauk, T; Lewis, J

    2004-07-15

    Analysis of data from our deployments and ground truth collection in northern Fennoscandia and northwestern Russia shows systematic variations in the P/S ratios of different types of explosions. The fact that this fundamental discriminant varies with firing practice is not in itself surprising - such variations probably contribute to the spread in P/S ratios normally observed for ripple-fired explosions. However, the nature of the variations is sometimes counterintuitive. Last year [Harris, 2003] we found that the P/S ratios of small compact underground explosions in mines of the Khibiny Massif are systematically lower than the P/S ratios of large ripple-fired surface explosions. We had anticipated that smaller underground shots would be more like single well-coupled explosions, thus having higher P/S ratios than large ripple-fired explosions. We are now performing a more extensive analysis of the data, including compact and large ripple-fired explosions at additional mines and different types of explosions: small surface shots and large ripple-fired underground explosions. Our data are more complete as a result of an additional year of collection and allow a more complete sampling of the signals in range from the source. As of this writing we have measured Pn/Lg ratios on a larger number of explosions of three types: compact underground explosions, surface ripple-fired explosions, and now underground ripple-fired explosions. We find that both types of underground explosions have systematically lower P/S ratios than surface ripple-fired shots; this effect is most pronounced in the 4-8 Hz frequency band. This result appears to be due to relatively diminished shear wave excitation by the surface explosions. We speculate that the relatively large shear phases in underground explosions may be caused by large amounts of rockfall in these events, which are designed to collapse the ceilings of tunnels. We have continued comprehensive ground truth collection at the Khibiny

  14. On Solving the Problem of Identifying Unreliable Sensors Without a Knowledge of the Ground Truth: The Case of Stochastic Environments.

    Science.gov (United States)

    Yazidi, Anis; Oommen, B John; Goodwin, Morten

    2016-04-28

    The purpose of this paper is to propose a solution to an extremely pertinent problem, namely, that of identifying unreliable sensors (in a domain of reliable and unreliable ones) without any knowledge of the ground truth. This fascinating paradox can be formulated in simple terms as trying to identify stochastic liars without any additional information about the truth. Though apparently impossible, we will show that it is feasible to solve the problem, a claim that is counter-intuitive in and of itself. One aspect of our contribution is to show how redundancy can be introduced, and how it can be effectively utilized in resolving this paradox. Legacy work and the reported literature (for example, on the so-called weighted majority algorithm) have merely addressed assessing the reliability of a sensor by comparing its reading to the ground truth, either in an online or an offline manner. Unfortunately, the fundamental assumption of revealing the ground truth cannot always be guaranteed (or even expected) in many real-life scenarios. While some extensions of the Condorcet jury theorem [9] can lead to a probabilistic guarantee on the quality of the fused process, they do not provide a solution to the unreliable sensor identification problem. The essence of our approach involves studying the agreement of each sensor with the rest of the sensors, and not comparing the reading of the individual sensors with the ground truth, as advocated in the literature. Under some mild conditions on the reliability of the sensors, we can prove that we can, indeed, filter out the unreliable ones. Our approach leverages the power of the theory of learning automata (LA) so as to gradually learn the identity of the reliable and unreliable sensors. To achieve this, we resort to a team of LA, where a distinct automaton is associated with each sensor. The solution provided here has been subjected to rigorous experimental tests, and the results presented are, in our opinion, both novel and
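The core idea above - scoring each sensor by its agreement with the other sensors rather than with any ground truth - can be illustrated with a simplified sketch. This uses plain majority agreement in place of the paper's learning-automata scheme, and the sensor accuracies are invented for the example:

```python
import random

def agreement_scores(readings):
    """For each sensor, the fraction of observations on which it agrees
    with the majority vote of the *other* sensors; no ground truth used."""
    n = len(readings[0])
    scores = [0.0] * n
    for row in readings:
        for i in range(n):
            others = [row[j] for j in range(n) if j != i]
            majority = 1 if 2 * sum(others) > len(others) else 0
            if row[i] == majority:
                scores[i] += 1.0 / len(readings)
    return scores

random.seed(0)
truth = [random.randint(0, 1) for _ in range(2000)]
# Sensors 0-3 report the truth 90% of the time; sensor 4 is a stochastic liar.
acc = [0.9, 0.9, 0.9, 0.9, 0.3]
readings = [[b if random.random() < acc[i] else 1 - b for i in range(5)]
            for b in truth]
scores = agreement_scores(readings)
suspect = min(range(5), key=lambda i: scores[i])  # lowest agreement with peers
```

With enough reliable sensors in the pool, the liar's agreement score separates cleanly from the rest, which is the redundancy argument the abstract alludes to.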

  15. An Empirical Study of Atmospheric Correction Procedures for Regional Infrasound Amplitudes with Ground Truth.

    Science.gov (United States)

    Howard, J. E.

    2014-12-01

    This study focuses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000-4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind as developed by Mutschlecner and Whitaker (1999) and uses the average wind speed between 45-55 km altitudes in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested here with the smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller-scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.
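The first method above has the shape of a log-linear wind normalization; a minimal sketch follows, where the coefficient k is a placeholder chosen for illustration, not the value Mutschlecner and Whitaker fitted to their large-explosion data:

```python
def wind_corrected_amplitude(amp, v_wind, k=0.019):
    """Normalize an observed infrasound amplitude for stratospheric wind.

    amp    -- observed amplitude (arbitrary units)
    v_wind -- average 45-55 km wind speed (m/s) along the propagation
              direction (positive downwind)
    k      -- empirical coefficient; 0.019 s/m is illustrative only
    """
    return amp * 10.0 ** (-k * v_wind)

# Downwind propagation enhances amplitudes, so the correction shrinks them;
# upwind propagation attenuates them, so the correction boosts them.
downwind = wind_corrected_amplitude(1.0, 40.0)
upwind = wind_corrected_amplitude(1.0, -40.0)
```

The semi-empirical variant described in the abstract keeps this functional form but substitutes the wind speed at the ray turning height for the 45-55 km average.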

  16. Ground truth measurements plan for the Multispectral Thermal Imager (MTI) satellite

    Energy Technology Data Exchange (ETDEWEB)

    Garrett, A.J.

    2000-01-03

    Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), and the Savannah River Technology Center (SRTC) have developed a diverse group of algorithms for processing and analyzing the data that will be collected by the Multispectral Thermal Imager (MTI) after launch late in 1999. Each of these algorithms must be verified by comparison to independent surface and atmospheric measurements. SRTC has selected 13 sites in the continental U.S. for ground truth data collections. These sites include a high altitude cold water target (Crater Lake), cooling lakes and towers in the warm, humid southeastern US, Department of Energy (DOE) climate research sites, the NASA Stennis satellite Validation and Verification (V and V) target array, waste sites at the Savannah River Site, mining sites in the Four Corners area and dry lake beds in the southwestern US. SRTC has established mutually beneficial relationships with the organizations that manage these sites to make use of their operating and research data and to install additional instrumentation needed for MTI algorithm V and V.

  17. The Gold Standard Paradox in Digital Image Analysis: Manual Versus Automated Scoring as Ground Truth.

    Science.gov (United States)

    Aeffner, Famke; Wilson, Kristin; Martin, Nathan T; Black, Joshua C; Hendriks, Cris L Luengo; Bolon, Brad; Rudmann, Daniel G; Gianani, Roberto; Koegler, Sally R; Krueger, Joseph; Young, G Dave

    2017-09-01

    Novel therapeutics often target complex cellular mechanisms. Increasingly, quantitative methods like digital tissue image analysis (tIA) are required to evaluate correspondingly complex biomarkers to elucidate subtle phenotypes that can inform treatment decisions with these targeted therapies. These tIA systems need a gold standard, or reference method, to establish analytical validity. Conventional, subjective histopathologic scores assigned by an experienced pathologist are the gold standard in anatomic pathology and are an attractive reference method. The pathologist's score can establish the ground truth to assess a tIA solution's analytical performance. The paradox of this validation strategy, however, is that tIA is often used to assist pathologists to score complex biomarkers precisely because it is more objective and reproducible than manual evaluation alone, overcoming known biases in a human's visual evaluation of tissue, and because it can generate endpoints that cannot be generated by a human observer. This manuscript reviews the literature of the past decades on traditional subjective pathology scoring paradigms and the cognitive and visual traps known to affect them, and discusses how those traps may impact characterization of tIA-assisted scoring accuracy, sensitivity, and specificity. Awareness of the gold standard paradox is necessary when using traditional pathologist scores to analytically validate a tIA tool, because image analysis is used specifically to overcome known sources of bias in the visual assessment of tissue sections.

  18. Automatic Scheduling and Planning (ASAP) in future ground control systems

    Science.gov (United States)

    Matlin, Sam

    1988-01-01

    This report describes two complementary approaches to the problem of space mission planning and scheduling. The first is an Expert System or Knowledge-Based System for automatically resolving most of the activity conflicts in a candidate plan. The second is an Interactive Graphics Decision Aid to assist the operator in manually resolving the residual conflicts which are beyond the scope of the Expert System. The two system designs are consistent with future ground control station activity requirements and support activity timing constraints, resource limits, and activity priority guidelines.

  19. A Comprehensive Laboratory Study to Improve Ground Truth Calibration of Remotely Sensed Near-Surface Soil Moisture

    Science.gov (United States)

    Babaeian, E.; Tuller, M.; Sadeghi, M.; Sheng, W.; Jones, S. B.

    2016-12-01

    Optical satellite and airborne remote sensing (RS) have been widely applied for characterization of large-scale surface soil moisture distributions. However, despite the excellent spatial resolution of RS data, the electromagnetic radiation within the optical bands (400-2500 nm) penetrates the soil profile only to a depth of a few millimeters; hence obtained moisture estimates are limited to the soil surface region. Furthermore, moisture sensor networks employed for ground truth calibration of RS observations commonly exhibit very limited spatial resolution, which consequently leads to significant discrepancies between RS and ground truth observations. To better understand the relationship between surface and near-surface soil moisture, we employed a benchtop hyperspectral line-scan imaging system to generate high resolution surface reflectance maps during evaporation from soil columns filled with source soils covering a wide textural range and instrumented with a novel time domain reflectometry (TDR) sensor array that allows monitoring of near surface moisture at 0.5-cm resolution. A recently developed physical model for surface soil moisture predictions from shortwave infrared reflectance was applied to estimate surface soil moisture from surface reflectance and to explore the relationship between surface and near-surface moisture distributions during soil drying. Preliminary results are very promising and their applicability for ground truth calibration of RS observations will be discussed.

  20. Sunrise-driven movements of dust on the Moon: Apollo 12 Ground-truth measurements

    Science.gov (United States)

    O'Brien, Brian J.; Hollick, Monique

    2015-12-01

    The first sunrise after Apollo 12 astronauts left the Moon caused dust storms across the site where rocket exhausts had disrupted about 2000 kg of smooth fine dust. The next few sunrises started progressively weaker dust storms, and the eastern horizon brightened, adding to direct sunlight for half an hour. These ground truth measurements were made 100 cm above the surface by the 270 g Apollo 12 Dust Detector Experiment we invented in 1966. Dust deposited on the horizontal solar cell during the two lunar days after the first sunrise was almost 30% of the total it then measured over 6 years. The vertical east-facing solar cell measured horizon brightening on 14 of the first 17 lunations, with none detected on the following 61 lunar days. Based on over 2 million such measurements, we propose a new qualitative model of sunrise-driven transport of individual dust particles freed by Apollo 12 activities from strong particle-to-particle cohesive forces. Each sunrise caused sudden surface charging which, during the first few hours, freshly mobilised and lofted the dust remaining free, microscopically smoothing the disrupted local areas. Evidence of the reliability of the measurements includes consistency among all 6 sensors throughout an eclipse. We caution Google Lunar XPrize competitors and others planning missions to the Moon and large airless asteroids that, after a spacecraft lands, dust hazards may occur after each of the first few sunrises. Mechanical problems in its first such period stranded the Chinese lunar rover Yutu in 2014, although we would not yet claim that the causes were dust. On the other hand, sunrise-driven microscopic smoothing of disturbed areas may offer regular natural mitigation of the dust consequences of mining lunar resources and reduce fears that many expeditions might cause excessive fine dust globally around the Moon.

  1. Using tissue residues in aquatic animals to ground-truth ecological risk assessments

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, T.M. [EA Engineering, Science and Technology, Inc., Hunt Valley, MD (United States)

    1995-12-31

    Ecological risks are often estimated by comparing expected environmental exposures (EEE) to a toxicity benchmark (TB). This comparison is expressed quantitatively as a Hazard Quotient (HQ) by dividing the EEE by the TB. HQs are designed to be environmentally conservative to minimize false negatives. This conservatism allows risk managers to conclude with a high degree of certainty that HQs < 1.0 represent acceptable ecological risks. HQs > 1.0, however, are more difficult to interpret. Current state of the practice is to conduct a weight-of-evidence analysis in which multiple lines of information are considered in addition to the HQ results. This analysis indicates to the risk manager what level of certainty he or she should place in the HQ results. One line of evidence is the presence/absence of chemicals of concern in the tissues of field-collected receptors. This presentation will demonstrate how residue data were used to ground-truth HQs generated for an ecological risk assessment conducted at a DOD facility in the southeastern US. For some receptor-chemical combinations, tissue residues refuted the substantial ecological risks suggested by elevated HQs. In other instances, tissue residues corroborated the HQ results, suggesting where true ecological hazards exist. Specific examples will be given of cases in which the use of tissue residue data is inappropriate. Finally, this presentation will suggest how tissue residue data can be incorporated into a weight-of-evidence Risk Characterization that is consistent with EPA's Framework for Ecological Risk Assessment and draft Ecological Risk Assessment Guidance for Superfund.
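The Hazard Quotient arithmetic described above reduces to a one-line computation; the exposure and benchmark values below are made up for illustration:

```python
def hazard_quotient(eee, tb):
    """Hazard Quotient: expected environmental exposure (EEE) divided by
    the toxicity benchmark (TB). HQ < 1.0 is read as acceptable risk;
    HQ > 1.0 triggers a weight-of-evidence review, not a hazard verdict."""
    return eee / tb

# Hypothetical exposure of 2.0 ug/L against a benchmark of 8.0 ug/L
hq = hazard_quotient(2.0, 8.0)  # 0.25 -> acceptable ecological risk
```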

  2. Towards Autonomous Agriculture: Automatic Ground Detection Using Trinocular Stereovision

    Directory of Open Access Journals (Sweden)

    Annalisa Milella

    2012-09-01

    Autonomous driving is a challenging problem, particularly when the domain is unstructured, as in an outdoor agricultural setting. Advanced perception systems are therefore required to sense and understand the surrounding environment, recognizing artificial and natural structures, topology, vegetation, and paths. In this paper, a self-learning framework is proposed to automatically train a ground classifier for scene interpretation and autonomous navigation based on multi-baseline stereovision. The use of rich 3D data is emphasized, where the sensor output includes range and color information of the surrounding environment. Two distinct classifiers are presented: one based on geometric data that can detect the broad class of ground, and one based on color data that can further segment ground into subclasses. The geometry-based classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate the geometric appearance of 3D stereo-generated data with class labels. Then, it makes predictions based on past observations. It also serves to provide training labels to the color-based classifier. Once trained, the color-based classifier is able to recognize similar terrain classes in stereo imagery. The system is continuously updated online using the latest stereo readings, making it feasible for long-range, long-duration navigation over changing environments. Experimental results, obtained with a tractor test platform operating in a rural environment, are presented to validate this approach, showing an average classification precision and recall of 91.0% and 77.3%, respectively.

  3. Ground truth : vertical seismic profile data enables geophysicists to image ahead of the drill bit

    Energy Technology Data Exchange (ETDEWEB)

    Eaton, S. [SR ECO Consultants Inc., Calgary, AB (Canada)

    2001-08-01

    This paper presented a new technology which makes it possible to obtain a vertical seismic profile (VSP) of a wellbore via a wireline tool. Downhole seismic is of extreme importance in cases when there is a discrepancy between the geology in the well and surface seismic data and when drilling has gone deeper than the prognosis for oil or gas. Once VSP data are interpreted, the decision can be made to either abandon the well or sidetrack it to an optimum target position. The VSP data give the geophysicist the opportunity to recalibrate the processing of conventional 2-D or 3-D surface seismic data while drilling. Crucial assumptions for the velocity fields can be tested. This new technology links geology and geophysics, making it possible to quantify subsurface reservoir parameters and to obtain downhole seismic that provides a higher frequency and spatial resolution than conventional surface seismic surveys. The energy source for downhole seismic is situated at ground level. The signal then travels down into the earth where it is recorded in the subsurface by a vertical array of geophones situated in the wellbore. Some of the signal travels past the bottom of the borehole, through the underlying layers that still have to be drilled. Geophysicists with PanCanadian Petroleum Ltd. and Baker Atlas state that a VSP gives ground truth because the acquired data enables the geophysicist to image ahead of the drill bit. VSP is the ultimate tool in interval velocity and time to depth conversion. Downhole seismic has 25 per cent higher frequencies than surface seismic. The technology has been successfully used by Talisman Energy Inc., to drill Foothills wells in the Monkman Pass area of northeastern British Columbia. VSP data can be used to predict formation pressures, porosities, lithologies or rock types, and fluid content. The technology has been useful in the drilling of hostile holes offshore Sable Island in Nova Scotia where wells can cost up to $30 million. VSPs are

  4. Towards ground-truthing of spaceborne estimates of above-ground biomass and leaf area index in tropical rain forests

    Science.gov (United States)

    Köhler, P.; Huth, A.

    2010-05-01

    The canopy height of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or lidar. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of global monitoring of the above-ground carbon content on land. In the absence of adequate field data, we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI). The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree-logging conditions, for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. It is found that for undisturbed forest and a variety of disturbed forest situations, AGB can be expressed as a power-law function of canopy height h (AGB = a·h^b) with an r² of ~60% for a spatial resolution of 20 m×20 m (0.04 ha, also called plot size). The regression becomes significantly better for the hectare-wide analysis of the disturbed forest sites (r² = 91%). There seems to be no functional dependency between LAI and canopy height, but there is a linear correlation (r² ~60%) between AGB and the area fraction in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a comparison of the simulations with permanent sampling plot data from the same region and with the large-scale forest inventory in Lambir. We conclude that the spaceborne remote sensing techniques have the potential to
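The reported AGB-height relation is a power law, so it can be fitted by ordinary least squares in log-log space. A self-contained sketch on noise-free synthetic data follows; the coefficients a = 2.5 and b = 1.8 are invented for the example, not values from the study:

```python
import math

def fit_power_law(heights, agb):
    """Fit AGB = a * h**b by linear least squares on (ln h, ln AGB)."""
    xs = [math.log(h) for h in heights]
    ys = [math.log(m) for m in agb]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic canopy heights (m) and biomasses generated with a = 2.5, b = 1.8
hs = [5.0, 10.0, 20.0, 30.0, 40.0]
ms = [2.5 * h ** 1.8 for h in hs]
a, b = fit_power_law(hs, ms)
```

On real plot data the residual scatter in log-log space gives the r² values quoted in the abstract.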

  5. Towards ground-truthing of spaceborne estimates of above-ground biomass and leaf area index in tropical rain forests

    Directory of Open Access Journals (Sweden)

    P. Köhler

    2010-05-01

    The canopy height of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or lidar. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of global monitoring of the above-ground carbon content on land. In the absence of adequate field data, we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI). The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree-logging conditions, for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. It is found that for undisturbed forest and a variety of disturbed forest situations, AGB can be expressed as a power-law function of canopy height h (AGB = a·h^b) with an r² of ~60% for a spatial resolution of 20 m×20 m (0.04 ha, also called plot size). The regression becomes significantly better for the hectare-wide analysis of the disturbed forest sites (r² = 91%). There seems to be no functional dependency between LAI and canopy height, but there is a linear correlation (r² ~60%) between AGB and the area fraction in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a comparison of the simulations with permanent sampling plot data from the same region and with the large-scale forest inventory in Lambir.

  6. Surface Properties and Characteristics of Mars Landing Sites from Remote Sensing Data and Ground Truth

    Science.gov (United States)

    Golombek, M. P.; Haldemann, A. F.; Simpson, R. A.; Furgason, R. L.; Putzig, N. E.; Huertas, A.; Arvidson, R. E.; Heet, T.; Bell, J. F.; Mellon, M. T.; McEwen, A. S.

    2008-12-01

    Surface characteristics at the six sites where spacecraft have successfully landed on Mars can be related favorably to their signatures in remotely sensed data from orbit and from the Earth. Comparisons of the rock abundance, types and coverage of soils (and their physical properties), thermal inertia, albedo, and topographic slope all agree with orbital remote sensing estimates and show that the materials at the landing sites can be used as ground truth for the materials that make up most of the equatorial and mid- to moderately high-latitude regions of Mars. The six landing sites sample two of the three dominant global thermal inertia and albedo units that cover ~80% of the surface of Mars. The Viking, Spirit, Mars Pathfinder, and Phoenix landing sites are representative of the moderate to high thermal inertia and intermediate to high albedo unit that is dominated by crusty, cloddy, blocky or frozen soils (duricrust that may be layered) with various abundances of rocks and bright dust. The Opportunity landing site is representative of the moderate to high thermal inertia and low albedo surface unit that is relatively dust free and composed of dark eolian sand and/or increased abundance of rocks. Rock abundance derived from orbital thermal differencing techniques in the equatorial regions agrees with that determined from rock counts at the surface and varies from ~3-20% at the landing sites. The size-frequency distributions of rocks >1.5 m diameter fully resolvable in HiRISE images of the landing sites follow exponential models developed from lander measurements of smaller rocks and are continuous with these rock distributions indicating both are part of the same population. Interpretation of radar data confirms the presence of load bearing, relatively dense surfaces controlled by the soil type at the landing sites, regional rock populations from diffuse scattering similar to those observed directly at the sites, and root-mean-squared slopes that compare favorably

  7. LRO Camera Imaging of the Moon: Apollo 17 and other Sites for Ground Truth

    Science.gov (United States)

    Jolliff, B. L.; Wiseman, S. M.; Robinson, M. S.; Lawrence, S.; Denevi, B. W.; Bell, J. F.

    2009-12-01

    One of the fundamental goals of the Lunar Reconnaissance Orbiter (LRO) is the determination of mineralogic and compositional distributions and their relation to geologic features on the Moon’s surface. Through a combination of imaging with the LRO narrow-angle cameras and wide-angle camera (NAC, WAC), very fine-scale geologic features are resolved with better than meter-per-pixel resolution (NAC) and correlated to spectral variations mapped with the lower resolution, 7-band WAC (400-m/pix, ultraviolet bands centered at 321 and 360 nm; 100-m/pix, visible bands centered at 415, 566, 604, 643, and 689 nm). Keys to understanding spectral variations in terms of composition, and relationships between compositional variations and surface geology, are ground-truth sites where surface compositions and mineralogy, as well as geology and geologic history, are well known. The Apollo 17 site is especially useful because the site geology includes a range of features from high-Ti mare basalts to Serenitatis-Basin-related massifs containing basin impact-melt breccia and feldspathic highlands materials, and a regional black and orange pyroclastic deposit. Moreover, relative and absolute ages of these features are known. In addition to rock samples, astronauts collected well-documented soil samples at 22 different sample locations across this diverse area. Many of these sample sites can be located in the multispectral data using the co-registered NAC images. Digital elevation data are used to normalize illumination geometry and thus fully exploit the multispectral data and compare derived compositional parameters for different geologic units. Regolith characteristics that are known in detail from the Apollo 17 samples, such as maturity and petrography of mineral, glass, and lithic components, contribute to spectral variations and are considered in the assessment of spectral variability at the landing site. In this work, we focus on variations associated with the ilmenite content

  8. How Precise Are Preinterventional Measurements Using Centerline Analysis Applications? Objective Ground Truth Evaluation Reveals Software-Specific Centerline Characteristics.

    Science.gov (United States)

    Hoegen, Philipp; Wörz, Stefan; Müller-Eschner, Matthias; Geisbüsch, Philipp; Liao, Wei; Rohr, Karl; Schmitt, Matthias; Rengier, Fabian; Kauczor, Hans-Ulrich; von Tengg-Kobligk, Hendrik

    2017-08-01

    To evaluate different centerline analysis applications using objective ground truth from realistic aortic aneurysm phantoms with precisely defined geometry and centerlines to overcome the lack of unknown true dimensions in previously published in vivo validation studies. Three aortic phantoms were created using computer-aided design (CAD) software and a 3-dimensional (3D) printer. Computed tomography angiograms (CTAs) of phantoms and 3 patients were analyzed with 3 clinically approved and 1 research software application. The 3D centerline coordinates, intraluminal diameters, and lengths were validated against CAD ground truth using a dedicated evaluation software platform. The 3D centerline position mean error ranged from 0.7±0.8 to 2.9±2.5 mm between tested applications. All applications calculated centerlines significantly different from ground truth. Diameter mean errors varied from 0.5±1.2 to 1.1±1.0 mm among 3 applications, but exceeded 8.0±11.0 mm with one application due to an unsteady distortion of luminal dimensions along the centerline. All tested commercially available software tools systematically underestimated centerline total lengths by -4.6±0.9 mm to -10.4±4.3 mm (maximum error -14.6 mm). Applications with the highest 3D centerline accuracy yielded the most precise diameter and length measurements. One clinically approved application did not provide reproducible centerline-based analysis results, while another approved application showed length errors that might influence stent-graft choice and procedure success. The variety and specific characteristics of endovascular aneurysm repair planning software tools require scientific evaluation and user awareness.

  9. Implementation of a ground truth process for development of a submerged aquatic vegetation (SAV) mapping protocol using hyperspectral imagery

    Science.gov (United States)

    Hall, Carlton R.; Bostater, Charles R., Jr.; Virnstein, Robert W.

    2006-09-01

    Protocol development for science based mapping of submerged aquatic vegetation (SAV) requires comprehensive ground truth data describing the full range of variability observed in the target. The Indian River Lagoon, Florida, extends along 250 km of the east central Florida coast adjacent to the Atlantic Ocean. The lagoon crosses the transition zone between the Caribbean and Carolinian zoogeographic provinces, making it highly diverse. For large scale mapping and management of SAV, four common and three uncommon species of seagrass (Tracheophyta) and three broad groups of macroalgae - red algae (Rhodophyta), green algae (Chlorophyta), and brown algae (Phaeophyta) - are recognized. Based on technical and cost limitations we established twenty 7-10 km long flight transects for collection of 1.2 m2 spatial resolution hyperspectral imagery covering the length of the lagoon. Emphasis was placed on the area near the Sebastian River and adjacent Sebastian Inlet. Twenty-six 40 m long ground truth transects were established in the lagoon using 1 m2 white panels to mark each transect end. Each transect target was located in the field using high-precision GPS. Transects were positioned to cover a range of depths, SAV densities, mixed and monotypic species beds, water quality conditions, and general sediment types. A 3 m wide by 30 m long grid was centered on each transect to avoid spectral influences of the white targets. Water depth, species of seagrasses, estimates of vegetation cover percentage, estimates of epiphytic density, and measured canopy height were recorded for each 1 m2 (n=90). This target based grid arrangement allows for identification and extraction of pixel based hyperspectral signatures corresponding to individual ground truth grid cells without significant concern for rectification and registration error.

  10. Modeling Truth Existence in Truth Discovery.

    Science.gov (United States)

    Zhi, Shi; Zhao, Bo; Tong, Wenzhu; Gao, Jing; Yu, Dian; Ji, Heng; Han, Jiawei

    2015-08-01

    When integrating information from multiple sources, it is common to encounter conflicting answers to the same question. Truth discovery aims to infer the most accurate and complete integrated answers from conflicting sources. In some cases, there exist questions for which the true answers are excluded from the candidate answers provided by all sources. Without any prior knowledge, these questions, named no-truth questions, are difficult to distinguish from the questions that have true answers, named has-truth questions. In particular, these no-truth questions degrade the precision of the answer integration system. We address this challenge by introducing source quality, which is made up of three fine-grained measures: silent rate, false spoken rate, and true spoken rate. By incorporating these three measures, we propose a probabilistic graphical model, which simultaneously infers truth as well as source quality without any a priori training involving ground truth answers. Moreover, since inferring this graphical model requires parameter tuning of the prior of truth, we propose an initialization scheme based upon a quantity named the truth existence score, which synthesizes two indicators, namely, participation rate and consistency rate. Compared with existing methods, our method can effectively filter out no-truth questions, which results in more accurate source quality estimation. Consequently, our method provides more accurate and complete answers to both has-truth and no-truth questions. Experiments on three real-world datasets illustrate the notable advantage of our method over existing state-of-the-art truth discovery methods.
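    The source-quality idea in this record can be illustrated with a stripped-down truth-discovery loop, in which answer estimates and source reliabilities are inferred jointly, each refining the other. This is a minimal sketch of the generic technique only, not the paper's probabilistic graphical model (the silent/false/true spoken rates are not modeled here), and all data and names below are illustrative.

```python
from collections import defaultdict

def truth_discovery(claims, n_iters=10):
    """Iterative weighted-voting truth discovery.

    claims: dict mapping question -> {source: answer}.
    Sources start equally trusted; each round the answer with the
    highest total source weight becomes the current truth, and each
    source's weight is reset to its agreement rate with those truths.
    """
    sources = {s for votes in claims.values() for s in votes}
    weight = dict.fromkeys(sources, 1.0)
    truths = {}
    for _ in range(n_iters):
        # Step 1: weighted-majority answer per question.
        for q, votes in claims.items():
            score = defaultdict(float)
            for s, a in votes.items():
                score[a] += weight[s]
            truths[q] = max(score, key=score.get)
        # Step 2: re-estimate each source's reliability.
        for s in sources:
            answered = [(q, a) for q, votes in claims.items()
                        for src, a in votes.items() if src == s]
            hits = sum(truths[q] == a for q, a in answered)
            weight[s] = hits / len(answered) if answered else 0.5
    return truths, weight

# Three sources answer three questions; source "A" is always right.
claims = {
    "capital_fr": {"A": "Paris", "B": "Paris", "C": "Lyon"},
    "capital_de": {"A": "Berlin", "B": "Bonn", "C": "Berlin"},
    "capital_it": {"A": "Rome", "B": "Rome", "C": "Milan"},
}
truths, weights = truth_discovery(claims)
# truths -> Paris / Berlin / Rome; weights["A"] -> 1.0
```

    A no-truth question (one whose true answer no source offers) defeats this simple scheme, since some wrong candidate still wins the vote; that is exactly the failure mode the paper's truth-existence modeling is designed to catch.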

  11. Yuma proving grounds automatic UXO detection using biomorphic robots

    Energy Technology Data Exchange (ETDEWEB)

    Tilden, M.W.

    1996-07-01

    The current variety and dispersion of Unexploded Ordnance (UXO) pose a daunting technological problem for current sensory and extraction techniques. The bottom line is that the only way to ensure a live UXO has been found and removed is to step on it. As this is an upsetting proposition for biological organisms like animals, farmers, or Yuma field personnel, this paper details a non-biological approach to developing inexpensive, automatic machines that will find, tag, and may eventually remove UXO from a variety of terrains by several proposed methods. The Yuma proving grounds (Arizona) have been pelted with bombs, mines, missiles, and shells since the 1940s. The idea of automatic machines that can clean up after such testing is an old one but as yet unrealized because of the daunting cost, power, and complexity requirements of capable robot mechanisms. A researcher at Los Alamos National Laboratory has invented and developed a new variety of living robots that are solar powered, legged, autonomous, adaptive to massive damage, and very inexpensive. This technology, called Nervous Networks (Nv), allows for the creation of capable walking mechanisms (known as Biomorphic robots, or Biomechs for short) that, rather than working from task principles, follow a survival-based design philosophy. This allows Nv-based machines to continue doing work even after multiple limbs and sensors have been removed or damaged, and to dynamically negotiate complex terrains as an emergent property of their operation (fighting to proceed, as it were). They are not programmed, and indeed, the twelve-transistor Nv controller keeps their electronic cost well below that of most pocket radios. It is suspected that advanced forms of these machines, deployed in huge numbers, may be an interesting, capable solution to the problem of general and specific UXO identification, tagging, and removal.

  12. Evaluation of event-based algorithms for optical flow with ground-truth from inertial measurement sensor

    Directory of Open Access Journals (Sweden)

    Bodo Rückauer

    2016-04-01

    In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240x180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera.

  13. High-Resolution Precipitation Mapping in a Mountainous Watershed: Ground Truth for Evaluating Uncertainty in a National Precipitation Dataset

    Science.gov (United States)

    Daly, C.; Slater, M. E.; Roberti, J. A.; Laseter, S. H.; Swift, L. W.

    2016-12-01

    A 69-station, densely-spaced rain gauge network was maintained over the period 1951-1958 in the Coweeta Hydrologic Laboratory, located in the southern Appalachians in western North Carolina, USA. This unique dataset was used to develop the first digital seasonal and annual precipitation maps for the Coweeta basin, using elevation regression functions and residual interpolation. It was found that a 10-m elevation grid filtered to an approximately 7-km effective wavelength explained the most variance in precipitation (R2 = 0.82-0.95). A "dump zone" of locally high precipitation a short distance downwind from the mountain crest marking the southern border of the basin was the main feature that was not explained well by the precipitation-elevation relationship. These data and maps provided a rare "ground-truth" for estimating uncertainty in the national-scale Parameter-elevation Relationships on Independent Slopes Model (PRISM) precipitation grids for this location and time period. Differences between PRISM and ground-truth were compared to uncertainty estimates produced by the PRISM model and cross-validation errors. Potential sources of uncertainty in the national PRISM grids were evaluated, including the effects of coarse grid resolution, limited station data, and imprecise station locations. The PRISM national grids matched closely (within five percent) with the Coweeta dataset. The PRISM regression prediction interval, which includes the influence of stations in an area of tens of km around a given location, overestimated the local error at Coweeta (12-20 percent). Offsetting biases and generally low error rates made it difficult to isolate major sources of uncertainty in the PRISM grids. However, station density and selection, and mis-location of stations were identified as likely sources of error. The methods used in this study can be repeated in other areas where high-density data exist to gain a more comprehensive picture of the uncertainties in national
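    The mapping recipe described above (regress precipitation on filtered elevation, then interpolate the station residuals) can be sketched in a few lines. This is an illustration of the generic technique only, not the PRISM code; the function name and the inverse-distance residual scheme are assumptions of this sketch.

```python
import numpy as np

def map_precip(st_xy, st_precip, st_elev, grid_xy, grid_elev, p=2.0):
    """Elevation-regression precipitation mapping with residual
    interpolation: fit precip = a + b*elevation at the stations,
    spread the station residuals onto the grid by inverse-distance
    weighting, and add them back to the regression surface."""
    b, a = np.polyfit(st_elev, st_precip, 1)        # slope, intercept
    resid = st_precip - (a + b * st_elev)           # station residuals
    # Inverse-distance weights from every grid point to every station.
    d = np.linalg.norm(grid_xy[:, None, :] - st_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** p
    w /= w.sum(axis=1, keepdims=True)
    return a + b * grid_elev + w @ resid            # regression + residuals
```

    Note that in the study itself the elevation predictor is the ~7-km filtered grid rather than the raw 10-m DEM; filtering is what lets the regression capture terrain-scale rather than hillslope-scale precipitation gradients.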

  14. Assessment of MTI Water Temperature Retrievals with Ground Truth from the Comanche Peak Steam Electric Station Cooling Lake

    Energy Technology Data Exchange (ETDEWEB)

    Kurzeja, R.J.

    2002-12-09

    Surface water temperatures calculated from Multispectral Thermal Imager (MTI) brightness temperatures and the robust retrieval algorithm, developed by the Los Alamos National Laboratory (LANL), are compared with ground truth measurements at the Squaw Creek reservoir at the Comanche Peak Steam Electric Station near Granbury, Texas. Temperatures calculated for thirty-four images covering the period May 2000 to March 2002 are compared with water temperatures measured at 10 instrumented buoy locations supplied by the Savannah River Technology Center. The data set was used to examine the effect of image quality on temperature retrieval as well as to document any bias between the sensor chip arrays (SCAs). A portion of the data set was used to evaluate the influence of proximity to shoreline on the water temperature retrievals. This study found errors in daytime water temperature retrievals of 1.8 C for SCA 2 and 4.0 C for SCA 1. The error in nighttime water temperature retrievals was 3.8 C for SCA 1. Nighttime water temperature retrievals appear to be related to image quality, with the largest positive bias for the highest quality images and the largest negative bias for the lowest quality images. The daytime data show no apparent relationship between water temperature retrieval error and image quality. The average temperature retrieval error near open-water buoys was less than corresponding values for the near-shore buoys. After subtraction of the estimated error in the ground truth data, the water temperature retrieval error was 1.2 C for the open-water buoys compared to 1.8 C for the near-shore buoys. The open-water error is comparable to that found at Nauru.

  15. Evaluation of Event-Based Algorithms for Optical Flow with Ground-Truth from Inertial Measurement Sensor.

    Science.gov (United States)

    Rueckauer, Bodo; Delbruck, Tobi

    2016-01-01

    In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera.
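    For the pure-rotation case described above, the ground-truth flow follows from the rotational term of the standard (Longuet-Higgins) motion-field equations, and is independent of scene depth. The sketch below is a generic illustration, not the authors' implementation; the axis sign conventions are an assumption and must be matched to the actual camera/IMU mounting.

```python
def rotational_flow(x, y, omega, f=1.0):
    """Ground-truth optical flow (u, v) at image point (x, y) for a
    camera undergoing pure rotation omega = (wx, wy, wz) in rad/s,
    e.g. taken from the IMU gyro rates. Translation is assumed zero,
    so no depth information is needed."""
    wx, wy, wz = omega
    xn, yn = x / f, y / f                            # normalized coordinates
    u = wx * xn * yn - wy * (1 + xn ** 2) + wz * yn  # horizontal flow
    v = wx * (1 + yn ** 2) - wy * xn * yn - wz * xn  # vertical flow
    return f * u, f * v                              # back to pixel units

# Roll about the optical axis: zero flow at the image center,
# tangential flow away from it.
print(rotational_flow(0.0, 0.0, (0.0, 0.0, 1.0)))  # (0.0, 0.0)
print(rotational_flow(1.0, 0.0, (0.0, 0.0, 1.0)))  # (0.0, -1.0)
```

    This depth-independence is precisely why gyro data alone can serve as ground truth only while the sensor rotates without translating.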

  16. [Truth and truthfulness in philosophy].

    Science.gov (United States)

    Siep, L

    2000-01-01

    Truth and truthfulness are traditional subjects of philosophy. The paper discusses different concepts of truth (epistemological, ontological, normative) and different theories about the acquisition of truth (truth as correspondence, coherence, or consensus; pragmatic theories of truth; and truth as immediate subjective evidence). The last part deals with the moral question of truthfulness or sincerity and its conflict with benevolence when informing patients about a fatal illness.

  17. System Identification and Automatic Mass Balancing of Ground-Based Three-Axis Spacecraft Simulator

    Science.gov (United States)

    2006-08-01

    Jae-Jun Kim and Brij N. Agrawal. System Identification and Automatic Mass Balancing of Ground-Based Three-Axis Spacecraft Simulator.

  18. Science results from a Mars drilling simulation (Río Tinto, Spain) and ground truth for remote science observations.

    Science.gov (United States)

    Bonaccorsi, Rosalba; Stoker, Carol R

    2008-10-01

    Science results from a field-simulated lander payload and post-mission laboratory investigations provided "ground truth" to interpret remote science observations made as part of the 2005 Mars Astrobiology Research and Technology Experiment (MARTE) drilling mission simulation. The experiment was successful in detecting evidence for life, habitability, and preservation potential of organics in a relevant astrobiological analogue of Mars. SCIENCE RESULTS: Borehole 7 was drilled near the Río Tinto headwaters at Peña de Hierro (Spain) in the upper oxidized remnant of an acid rock drainage system. Analysis of 29 cores (215 cm of core was recovered from 606 cm penetrated depth) revealed a matrix of goethite- (42-94%) and hematite-rich (47-87%) rocks with pockets of phyllosilicates (47-74%) and fine- to coarse-grained loose material. Post-mission X-ray diffraction (XRD) analysis confirmed the range of hematite:goethite mixtures that were visually recognizable (approximately 1:1, approximately 1:2, and approximately 1:3 mixtures displayed a yellowish-red color whereas 3:1 mixtures displayed a dark reddish-brown color). Organic carbon was poorly preserved in hematite/goethite-rich materials (C(org) <0.05 wt %) beneath the biologically active organic-rich soil horizon (C(org) approximately 3-11 wt %) in contrast to the phyllosilicate-rich zones (C(org) approximately 0.23 wt %). GROUND TRUTH VS. REMOTE SCIENCE ANALYSIS: Laboratory-based analytical results were compared to the analyses obtained by a Remote Science Team (RST) using a blind protocol. Ferric iron phases, lithostratigraphy, and inferred geologic history were correctly identified by the RST with the exception of phyllosilicate-rich materials that were misinterpreted as weathered igneous rock. Adenosine 5'-triphosphate (ATP) luminometry, a tool available to the RST, revealed ATP amounts above background noise, i.e., 278-876 Relative Luminosity Units (RLUs) in only 6 cores, whereas organic carbon was detected in all cores. Our manned vs. remote observations based on automated acquisitions during the project provide insights for the preparation of future astrobiology-driven Mars missions.

  19. Seasonal variations of infrasonic arrivals from long-term ground truth observations in Nevada and implication for event location

    Science.gov (United States)

    Negraru, Petru; Golden, Paul

    2017-04-01

    Long-term ground truth observations were collected at two infrasound arrays in Nevada to investigate how seasonal atmospheric variations affect the detection, traveltime and signal characteristics (azimuth, trace velocity, frequency content and amplitudes) of infrasonic arrivals at regional distances. The arrays were located in different azimuthal directions from a munition disposal facility in Nevada. FNIAR, located 154 km north of the source has a high detection rate throughout the year. Over 90 per cent of the detonations have traveltimes indicative of stratospheric arrivals, while tropospheric waveguides are observed from only 27 per cent of the detonations. The second array, DNIAR, located 293 km southeast of the source exhibits strong seasonal variations with high stratospheric detection rates in winter and the virtual absence of stratospheric arrivals in summer. Tropospheric waveguides and thermospheric arrivals are also observed for DNIAR. Modeling through the Naval Research Laboratory Ground to Space atmospheric sound speeds leads to mixed results: FNIAR arrivals are usually not predicted to be present at all (either stratospheric or tropospheric), while DNIAR arrivals are usually correctly predicted, but summer arrivals show a consistent traveltime bias. In the end, we show the possible improvement in location using empirically calibrated traveltime and azimuth observations. Using the Bayesian Infrasound Source Localization we show that we can decrease the area enclosed by the 90 per cent credibility contours by a factor of 2.5.

  20. Seasonal variations of infrasonic arrivals from long term ground truth observations in Nevada and implication for event location

    Science.gov (United States)

    Negraru, Petru; Golden, Paul

    2017-01-01

    Long-term ground truth observations were collected at two infrasound arrays in Nevada to investigate how seasonal atmospheric variations affect the detection, travel time and signal characteristics (azimuth, trace velocity, frequency content and amplitudes) of infrasonic arrivals at regional distances. The arrays were located in different azimuthal directions from a munition disposal facility in Nevada. FNIAR, located 154 km north of the source has a high detection rate throughout the year. Over 90% of the detonations have travel times indicative of stratospheric arrivals, while tropospheric waveguides are observed from only 27% of the detonations. The second array, DNIAR, located 293 km southeast of the source exhibits strong seasonal variations with high stratospheric detection rates in winter and the virtual absence of stratospheric arrivals in summer. Tropospheric waveguides and thermospheric arrivals are also observed for DNIAR. Modelling through the Naval Research Laboratory Ground to Space (G2S) atmospheric sound speeds leads to mixed results: FNIAR arrivals are usually not predicted to be present at all (either stratospheric or tropospheric), while DNIAR arrivals are usually correctly predicted, but summer arrivals show a consistent travel time bias. In the end we show the possible improvement in location using empirically calibrated travel time and azimuth observations. Using the Bayesian Infrasound Source Localization we show that we can decrease the area enclosed by the 90% credibility contours by a factor of 2.5.

  1. Truth Troubles

    Science.gov (United States)

    Tullis Owen, Jillian A.; McRae, Chris; Adams, Tony E.; Vitale, Alisha

    2009-01-01

    "truth" is an issue of public discussion, research, and everyday performance. Processes of navigating truth, however, are obscure and often unknown. In this project, the authors highlight truth(s) of written life texts. They conceive of truth as "a" rather than "the" "rhetorical device" to use for evaluating personal research and believe that…

  3. Towards ground-truthing of spaceborne estimates of above-ground biomass and leaf area index in tropical rain forests

    OpenAIRE

    Köhler, P.; Huth, A.

    2010-01-01

    The canopy height of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or lidar. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established this would offer the possibility for a global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degr...

  4. Geographic information system for fusion and analysis of high-resolution remote sensing and ground truth data

    Science.gov (United States)

    Freeman, Anthony; Way, Jo Bea; Dubois, Pascale; Leberl, Franz

    1992-01-01

    We seek to combine high-resolution remotely sensed data with models and ground truth measurements, in the context of a Geographical Information System, integrated with specialized image processing software. We will use this integrated system to analyze the data from two Case Studies, one at a boreal forest site, the other at a tropical forest site. We will assess the information content of the different components of the data, determine the optimum data combinations to study biogeophysical changes in the forest, assess the best way to visualize the results, and validate the models for the forest response to different radar wavelengths/polarizations. During the 1990s, unprecedented amounts of high-resolution images from space of the Earth's surface will become available to the applications scientist from the LANDSAT/TM series, European and Japanese ERS-1 satellites, RADARSAT and SIR-C missions. When the Earth Observation Systems (EOS) program is operational, the amount of data available for a particular site can only increase. The interdisciplinary scientist, seeking to use data from various sensors to study his site of interest, may be faced with massive difficulties in manipulating such large data sets, assessing their information content, determining the optimum combinations of data to study a particular parameter, visualizing his results and validating his model of the surface. The techniques to deal with these problems are also needed to support the analysis of data from NASA's current program of Multi-sensor Airborne Campaigns, which will also generate large volumes of data. In the Case Studies outlined in this proposal, we will have somewhat unique data sets. For the Bonanza Creek Experimental Forest (Case I) calibrated DC-8 SAR data and extensive ground truth measurements are already at our disposal. The data set shows documented evidence of temporal change. 
The Belize Forest Experiment (Case II) will produce calibrated DC-8 SAR and AVIRIS data, together with

  6. Science Results from a Mars Drilling Simulation (Río Tinto, Spain) and Ground Truth for Remote Science Observations

    Science.gov (United States)

    Bonaccorsi, Rosalba; Stoker, Carol R.

    2008-10-01

    Science results from a field-simulated lander payload and post-mission laboratory investigations provided "ground truth" to interpret remote science observations made as part of the 2005 Mars Astrobiology Research and Technology Experiment (MARTE) drilling mission simulation. The experiment was successful in detecting evidence for life, habitability, and preservation potential of organics in a relevant astrobiological analogue of Mars. Science results. Borehole 7 was drilled near the Río Tinto headwaters at Peña de Hierro (Spain) in the upper oxidized remnant of an acid rock drainage system. Analysis of 29 cores (215 cm of core was recovered from 606 cm penetrated depth) revealed a matrix of goethite- (42-94%) and hematite-rich (47-87%) rocks with pockets of phyllosilicates (47-74%) and fine- to coarse-grained loose material. Post-mission X-ray diffraction (XRD) analysis confirmed the range of hematite:goethite mixtures that were visually recognizable (˜1:1, ˜1:2, and ˜1:3 mixtures displayed a yellowish-red color whereas 3:1 mixtures displayed a dark reddish-brown color). Organic carbon was poorly preserved in hematite/goethite-rich materials (Corg <0.05 wt %) beneath the biologically active organic-rich soil horizon (Corg ˜3-11 wt %) in contrast to the phyllosilicate-rich zones (Corg ˜0.23 wt %). Ground truth vs. remote science analysis. Laboratory-based analytical results were compared to the analyses obtained by a Remote Science Team (RST) using a blind protocol. Ferric iron phases, lithostratigraphy, and inferred geologic history were correctly identified by the RST with the exception of phyllosilicate-rich materials that were misinterpreted as weathered igneous rock. Adenosine 5‧-triphosphate (ATP) luminometry, a tool available to the RST, revealed ATP amounts above background noise, i.e., 278-876 Relative Luminosity Units (RLUs) in only 6 cores, whereas organic carbon was detected in all cores. Our manned vs. remote observations based on automated acquisitions during the project provide insights for the preparation of future astrobiology-driven Mars missions.

  7. Ground Truthing Orbital Clay Mineral Observations with the APXS Onboard Mars Exploration Rover Opportunity

    Science.gov (United States)

    Schroeder, C.; Gellert, R.; VanBommel, S.; Clark, B. C.; Ming, D. W.; Mittlefehldt, D. S.; Yen, A. S.

    2016-01-01

    NASA's Mars Exploration Rover Opportunity has been exploring approximately 22 km diameter Endeavour crater since 2011. Its rim segments predate the Hesperian-age Burns formation and expose Noachian-age material, which is associated with orbital Fe3+-Mg-rich clay mineral observations [1,2]. Moving to an orders of magnitude smaller instrumental field of view on the ground, the clay minerals were challenging to pinpoint on the basis of geochemical data because they appear to be the result of near-isochemical weathering of the local bedrock [3,4]. However, the APXS revealed a more complex mineral story as fracture fills and so-called red zones appear to contain more Al-rich clay minerals [5,6], which had not been observed from orbit. These observations are important to constrain clay mineral formation processes. More detail will be added as Opportunity is heading into her 10th extended mission, during which she will investigate Noachian bedrock that predates Endeavour crater, study sedimentary rocks inside Endeavour crater, and explore a fluid-carved gully. ESA's ExoMars rover will land on Noachian-age Oxia Planum where abundant Fe3+-Mg-rich clay minerals have been observed from orbit, but the story will undoubtedly become more complex once seen from the ground.

  8. Operation of an array of field-change detectors to provide ground truth for FORTE data

    Energy Technology Data Exchange (ETDEWEB)

    Massey, R.S.; Eack, K.B.; Eberle, M.H.; Shao, X.M.; Smith, D.A. [Los Alamos National Lab., NM (United States). Space and Atmospheric Sciences Group; Wiens, K.C. [New Mexico Inst. of Tech., Socorro, NM (United States)

    1999-06-01

    The authors have deployed an array of fast electric-field-change sensors around the state of New Mexico to help identify the lightning processes responsible for the VHF RF signals detected by the FORTE satellite's wide-band transient radio emission receivers. The array provides them with locations and electric-field waveforms for events within New Mexico and into surrounding states, and operates continuously. They are particularly interested in events for which there are coincident FORTE observations. For these events, they can correct both the array and FORTE waveforms for time of flight, and can plot the two waveforms on a common time axis. Most of the coincident events are from cloud-to-ground discharges, but the most powerful are from a little-studied class of events variously called narrow bipolar events and compact intra-cloud discharges. They have therefore focused their attention on these events whether or not FORTE was in position to observe them.

  9. Ground-Truthing Seismic Refraction Tomography for Sinkhole Detection in Florida

    Science.gov (United States)

    Hiltunen, D. R.; Hudyma, N.; Quigley, T. P.; Samakur, C.

    2007-12-01

    In order to provide effective return of storm water runoff to the subsurface aquifer, the Florida Department of Transportation (FDOT) constructs detention basins adjacent to its transportation facilities. These basins serve as a collection point for runoff within a local drainage area, and the overburden soil above the aquifer provides a natural filter for contaminants in the surface runoff water. However, the geologic setting for many of these basins in Florida is karst, limestone bedrock at shallow depth, and the concentration of water flow in these basins leads to frequent development of sinkholes. These sinkholes are an environmental hazard, as they provide a direct, open conduit for contaminant-laden runoff water to return to the aquifer rather than percolate through the overburden soil. Consequently, FDOT is keenly interested in all aspects of sinkholes, including factors leading to formation, methods of early detection, and effective methods for rapid repair. Recently, FDOT has engaged in a research effort to evaluate the capabilities of a wide range of geophysical investigation tools with regard to detection of sinkhole-prone areas within sites being considered for construction of detention ponds. The geophysical techniques evaluated have included ground penetrating radar (GPR), multi-electrode electrical resistivity (MER), seismic MASW, and seismic refraction tomography. In addition to geophysical testing at the research sites, extensive traditional geotechnical site characterization has been conducted, including boring and sampling of soil and rock, standard penetration tests (SPT), and cone penetration tests (CPT). The proposed paper will evaluate the capabilities of seismic refraction tomography. Comparisons between refraction tomograms and borehole logs, SPT soundings, and CPT soundings suggest that the refraction method can map the laterally-variable top of bedrock surface typical of karst terrane. During a recent ground proving exercise at the

  10. A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures.

    Science.gov (United States)

    DeCost, Brian L; Holm, Elizabeth A

    2016-12-01

This data article presents a data set comprising 2048 synthetic scanning electron microscope (SEM) images of powder materials and descriptions of the corresponding 3D structures that they represent. These images were created using open source rendering software, and the generating scripts are included with the data set. Eight particle size distributions are represented with 256 independent images from each. The particle size distributions are relatively similar to each other, so that the dataset offers a useful benchmark to assess the fidelity of image analysis techniques. The characteristics of the PSDs and the resulting images are described and analyzed in more detail in the research article "Characterizing powder materials using keypoint-based computer vision methods" (B.L. DeCost, E.A. Holm, 2016) [1]. These data are freely available in a Mendeley Data archive "A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures" (B.L. DeCost, E.A. Holm, 2016) located at http://dx.doi.org/10.17632/tj4syyj9mr.1 [2] for any academic, educational, or research purposes.

  11. On the construction of a ground truth framework for evaluating voxel-based diffusion tensor MRI analysis methods.

    Science.gov (United States)

    Van Hecke, Wim; Sijbers, Jan; De Backer, Steve; Poot, Dirk; Parizel, Paul M; Leemans, Alexander

    2009-07-01

    Although many studies are starting to use voxel-based analysis (VBA) methods to compare diffusion tensor images between healthy and diseased subjects, it has been demonstrated that VBA results depend heavily on parameter settings and implementation strategies, such as the applied coregistration technique, smoothing kernel width, statistical analysis, etc. In order to investigate the effect of different parameter settings and implementations on the accuracy and precision of the VBA results quantitatively, ground truth knowledge regarding the underlying microstructural alterations is required. To address the lack of such a gold standard, simulated diffusion tensor data sets are developed, which can model an array of anomalies in the diffusion properties of a predefined location. These data sets can be employed to evaluate the numerous parameters that characterize the pipeline of a VBA algorithm and to compare the accuracy, precision, and reproducibility of different post-processing approaches quantitatively. We are convinced that the use of these simulated data sets can improve the understanding of how different diffusion tensor image post-processing techniques affect the outcome of VBA. In turn, this may possibly lead to a more standardized and reliable evaluation of diffusion tensor data sets of large study groups with a wide range of white matter altering pathologies. The simulated DTI data sets will be made available online (http://www.dti.ua.ac.be).

  12. Ground Truth for Diffusion MRI in Cancer: A Model-Based Investigation of a Novel Tissue-Mimetic Material.

    Science.gov (United States)

    McHugh, Damien J; Zhou, Fenglei; Cristinacce, Penny L Hubbard; Naish, Josephine H; Parker, Geoffrey J M

    2015-01-01

    This work presents preliminary results on the development, characterisation, and use of a novel physical phantom designed as a simple mimic of tumour cellular structure, for diffusion-weighted magnetic resonance imaging (DW-MRI) applications. The phantom consists of a collection of roughly spherical, micron-sized core-shell polymer 'cells', providing a system whose ground truth microstructural properties can be determined and compared with those obtained from modelling the DW-MRI signal. A two-compartment analytic model combining restricted diffusion inside a sphere with hindered extracellular diffusion was initially investigated through Monte Carlo diffusion simulations, allowing a comparison between analytic and simulated signals. The model was then fitted to DW-MRI data acquired from the phantom over a range of gradient strengths and diffusion times, yielding estimates of 'cell' size, intracellular volume fraction and the free diffusion coefficient. An initial assessment of the accuracy and precision of these estimates is provided, using independent scanning electron microscope measurements and bootstrap-style simulations. Such phantoms may be useful for testing microstructural models relevant to the characterisation of tumour tissue.

  13. A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures

    Directory of Open Access Journals (Sweden)

    Brian L. DeCost

    2016-12-01

Full Text Available This data article presents a data set comprising 2048 synthetic scanning electron microscope (SEM) images of powder materials and descriptions of the corresponding 3D structures that they represent. These images were created using open source rendering software, and the generating scripts are included with the data set. Eight particle size distributions are represented with 256 independent images from each. The particle size distributions are relatively similar to each other, so that the dataset offers a useful benchmark to assess the fidelity of image analysis techniques. The characteristics of the PSDs and the resulting images are described and analyzed in more detail in the research article “Characterizing powder materials using keypoint-based computer vision methods” (B.L. DeCost, E.A. Holm, 2016) [1]. These data are freely available in a Mendeley Data archive “A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures” (B.L. DeCost, E.A. Holm, 2016) located at http://dx.doi.org/10.17632/tj4syyj9mr.1 [2] for any academic, educational, or research purposes.

  14. Improvements on GPS Location Cluster Analysis for the Prediction of Large Carnivore Feeding Activities: Ground-Truth Detection Probability and Inclusion of Activity Sensor Measures.

    Directory of Open Access Journals (Sweden)

    Kevin A Blecha

Full Text Available Animal space use studies using GPS collar technology are increasingly incorporating behavior-based analysis of spatio-temporal data in order to expand inferences of resource use. GPS location cluster analysis is one such technique applied to large carnivores to identify the timing and location of feeding events. For logistical and financial reasons, researchers often implement predictive models for identifying these events. We present two separate improvements for predictive models that future practitioners can implement. Thus far, feeding prediction models have incorporated a small range of covariates, usually limited to spatio-temporal characteristics of the GPS data. Using GPS-collared cougars (Puma concolor), we include activity sensor data as an additional covariate to increase prediction performance of feeding presence/absence. Integral to the predictive modeling of feeding events is a ground-truthing component, in which GPS location clusters are visited by human observers to confirm the presence or absence of feeding remains. Failing to account for sources of ground-truthing false-absences can bias the number of predicted feeding events to be low. Thus we account for some ground-truthing error sources directly in the model with covariates and when applying model predictions. Accounting for these errors resulted in a 10% increase in the number of clusters predicted to be feeding events. Using a double-observer design, we show that the ground-truthing false-absence rate is relatively low (4%) using a search delay of 2-60 days. Overall, we provide two separate improvements to the GPS cluster analysis techniques that can be expanded upon and implemented in future studies interested in identifying feeding behaviors of large carnivores.
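As a rough, hypothetical sketch of the false-absence correction idea: if field observers miss feeding remains at some fraction of true feeding sites, the raw count of clusters classified as feeding events can be inflated accordingly. The 4% rate mirrors the abstract; the raw count and the simple inverse-rate correction below are invented for illustration.

```python
# Sketch: adjust a predicted feeding-event count for ground-truthing
# false absences. All numbers are illustrative, not from the study.

def adjust_for_false_absences(n_predicted, false_absence_rate):
    """Inflate the raw count of clusters predicted as feeding events
    to compensate for feeding sites missed by observers."""
    if not 0 <= false_absence_rate < 1:
        raise ValueError("rate must be in [0, 1)")
    return n_predicted / (1 - false_absence_rate)

raw = 250                                        # invented raw count
adjusted = adjust_for_false_absences(raw, 0.04)  # 4% false-absence rate
print(round(adjusted, 1))                        # -> 260.4
```

In the study itself the correction is built into the model via covariates rather than applied as a single scalar; this sketch only conveys why ignoring false absences biases the event count low.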

  15. Land Use and Land Cover, Existing land use derived from orthoimagery. Ground-truthing from discussion with local plan commission members., Published in 2000, 1:12000 (1in=1000ft) scale, Portage County Government.

    Data.gov (United States)

NSGIC Local Govt | GIS Inventory — Land Use and Land Cover dataset current as of 2000. Existing land use derived from orthoimagery. Ground-truthing from discussion with local plan commission members.

  16. Study on shift schedule saving energy of automatic transmission of ground vehicles

    Institute of Scientific and Technical Information of China (English)

    龚捷; 赵丁选; 陈鹰; 陈宁

    2004-01-01

To improve ground vehicle efficiency, an energy-saving shift schedule was proposed for the ground vehicle automatic transmission by studying the function of the torque converter and transmission in the vehicular drivetrain. The shift schedule keeps the torque converter working in the high-efficiency range under all working conditions, except in the low-efficiency range on the left when the transmission operates in the lowest gear, and in the low-efficiency range on the right when it operates in the highest gear. The key factors of shift quality were analysed. A bench test of an automatic transmission adopting this shift schedule was carried out on an automatic transmission test-bed. The experimental results showed that the shift schedule was correct and that the shift quality was controllable.
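As a hedged illustration of the scheduling idea only (not the paper's actual schedule), a controller can pick the gear that keeps the torque converter's turbine/pump speed ratio inside its high-efficiency band; the gear ratios, band limits, and the kinematic relation below are all invented for the sketch.

```python
# Hypothetical sketch: choose the gear that keeps the torque converter's
# speed ratio inside an efficient band. All numbers are illustrative.

GEARS = {1: 3.5, 2: 2.1, 3: 1.4, 4: 1.0}   # gear -> transmission ratio
EFFICIENT_BAND = (0.65, 0.9)               # converter speed-ratio band

def converter_speed_ratio(engine_rpm, output_rpm, gear):
    """Turbine/pump speed ratio, assuming turbine rpm = output rpm * ratio."""
    return (output_rpm * GEARS[gear]) / engine_rpm

def select_gear(engine_rpm, output_rpm):
    lo, hi = EFFICIENT_BAND
    for gear in sorted(GEARS, reverse=True):       # prefer higher gears
        if lo <= converter_speed_ratio(engine_rpm, output_rpm, gear) <= hi:
            return gear
    return min(GEARS)   # fall back to lowest gear (the low-range exception)

print(select_gear(2000, 1500))   # -> 4 (ratio 0.75 is already efficient)
print(select_gear(2000, 700))    # -> 2 (lower gears raise the ratio)
```

The fallback branch loosely corresponds to the abstract's observation that the low-efficiency region cannot be avoided at the extreme gears.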

  18. Validation of ENVISAT/SCIAMACHY columnar methane by solar FTIR spectrometry at the Ground-Truthing Station Zugspitze

    Directory of Open Access Journals (Sweden)

    R. Sussmann

    2005-04-01

Full Text Available Methane total-vertical column retrievals from ground-based solar FTIR measurements at the Permanent Ground-Truthing Station Zugspitze (47.42° N, 10.98° E, 2964 m a.s.l., Germany) are used to validate column averaged methane retrieved from ENVISAT/SCIAMACHY spectra by WFM-DOAS (WFMD) versions 0.4 and 0.41 for 153 days in 2003. Smoothing errors are estimated to be below 0.10% for FTIR and 0.14% for SCIAMACHY-WFMD retrievals and can be neglected for the assessment of observed bias and day-to-day-scatter. In order to minimize the altitude-difference effect, dry-air column averaged mixing ratios (XCH4) have been utilized. From the FTIR time series of XCH4 an atmospheric day-to-day variability of 1% was found, and a sinusoidal annual cycle with a ≈1.6% amplitude. To obtain the WFMD bias, a polynomial fitted to the FTIR series was used as a reference. The result is WFMD v0.4/FTIR=1.008±0.019 and WFMD v0.41/FTIR=1.058±0.008. WFMD v0.41 was significantly improved by a time-dependent bias correction. It can still not capture the natural day-to-day variability, i.e., the standard deviation calculated from the daily-mean values is 2.4% using averages within a 2000-km radius, and 2.7% for a 1000-km radius. These numbers are dominated by a residual time-dependent bias in the order of 3%/month. The latter can be reduced, e.g., from 2.4% to 1.6% as shown by an empirical time-dependent bias correction. Standard deviations of the daily means, calculated from the individual measurements of each day, are excluding time-dependent biases, thus showing the potential precision of WFMD daily means, i.e., 0.3% for a 2000-km selection radius, and 0.6% for a 1000-km selection radius. Therefore, the natural variability could be captured under the prerequisite of further advanced time-dependent bias corrections, or the use of other channels, where the icing issue is less prominent.
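The bias-estimation step described above can be sketched with synthetic numbers: fit the ground-based reference series with a smooth model (here a trend plus an annual sinusoid), then report the mean and scatter of the satellite/reference ratio. The 1.058 factor mirrors the reported WFMD v0.41/FTIR bias; everything else below is invented.

```python
import numpy as np

# Hedged sketch with synthetic data: estimate satellite-vs-FTIR bias for
# XCH4 after fitting the FTIR series with a trend + annual sinusoid.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 153)                  # fraction of the year
ftir = 1.78 * (1 + 0.008 * np.sin(2 * np.pi * t)) \
       + rng.normal(0, 0.002, t.size)           # reference series [ppm-ish]
sat = 1.058 * ftir + rng.normal(0, 0.01, t.size)  # satellite, 5.8% bias

# Least-squares fit of the reference: a + b*t + c*sin(2πt) + d*cos(2πt)
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, ftir, rcond=None)
ftir_fit = A @ coef

ratio = sat / ftir_fit
print(f"bias = {ratio.mean():.3f} +/- {ratio.std(ddof=1):.3f}")
```

The recovered mean ratio lands near the injected 1.058, illustrating how a fitted reference curve separates bias from day-to-day scatter.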

  19. Presentation video retrieval using automatically recovered slide and spoken text

    Science.gov (United States)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
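The core finding, that OCR'd slide text and ASR transcripts carry different content with different error profiles, can be illustrated with a toy example; the query and texts below are invented, and a real system would use a proper retrieval model (e.g. TF-IDF) rather than raw term overlap.

```python
# Toy illustration (invented data): score a lecture for a query by term
# overlap, once against OCR'd slide text and once against an ASR transcript.

def overlap(query, text):
    """Number of distinct query terms that appear in the text."""
    return len(set(query.lower().split()) & set(text.lower().split()))

query = "gradient descent convergence"
slide_text = "gradient descent convergence proof"            # clean OCR
spoken_text = "so um we show that the the method converges"  # ASR output

print(overlap(query, slide_text))    # -> 3: all query terms on the slide
print(overlap(query, spoken_text))   # -> 0: spoken wording differs
```

The mismatch ("converges" vs. "convergence", fillers, disfluencies) is one reason slide text tends to give higher-precision retrieval than spoken text.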

  20. 4. Truth(s)

    OpenAIRE

    2016-01-01

    Collective truth: Bacon That the collaborative interrogation of the natural world, promoted by the Royal Society in its early days, could be more productive than individual endeavour was brought home to the learned world by Francis Bacon. His depiction of the fictional Solomon’s House in the New Atlantis (1627) was to serve as a prototype of organised scientific activities for the satisfaction of human needs. Fig. 10 Title page of New Atlantis in the second edition of Francis Bacon’s Sylva s...

  1. GROUND TRUTH, MAGNITUDE CALIBRATION AND REGIONAL PHASE PROPAGATION AND DETECTION IN THE MIDDLE EAST AND HORN OF AFRICA

    Energy Technology Data Exchange (ETDEWEB)

    Nyblade, A; Adams, A; Brazier, R; Park, Y; Rodgers, A

    2006-07-10

    In this project, we are exploiting unique and open source seismic data sets to improve seismic monitoring across the Middle East (including the Iranian Plateau, Zagros Mountains, Arabian Peninsula, Turkish Plateau, Gulf of Aqaba, Dead Sea Rift) and the Horn of Africa (including the northern part of the East African Rift, Afar Depression, southern Red Sea and Gulf of Aden). The data sets are being used to perform three related tasks. (1) We are determining moment tensors, moment magnitudes and source depths for regional events in the magnitude 3.0 to 6.0 range. (2) These events are being used to characterize high-frequency (0.5-16 Hz) regional phase attenuation and detection thresholds, especially from events in Iran recorded at stations across the Arabian Peninsula. (3) We are collecting location ground truth at GT5 (local) and GT20 (regional) levels for seismic events with M > 2.5, including source geometry information and source depths. In the first phase of this project, seismograms from earthquakes in the Zagros Mountains recorded at regional distances have been inverted for moment tensors, and source depths for the earthquakes have been determined via waveform matching. Early studies of the distribution of seismicity in the Zagros region found evidence for earthquakes in the upper mantle. But subsequent relocations of teleseismic earthquakes suggest that source depths are generally much shallower, lying mainly within the upper crust. Nine events with magnitudes between 5 and 6 have been studied so far. Source depths for six of the events are within the upper crust, and three are located within the lower crust. The uncertainty in the source depths of the lower crustal events allows for the possibility that some of them may have even nucleated within the upper mantle. Eight events have thrust mechanisms and one has a strike-slip mechanism. We also report estimates of three-dimensional P- and S-wave velocity structure of the upper mantle beneath the Arabian

  2. Ground Truth, Magnitude Calibration and Regional Phase Propagation and Detection in the Middle East and Horn of Africa

    Energy Technology Data Exchange (ETDEWEB)

    Nyblade, A; Brazier, R; Adams, A; Park, Y; Rodgers, A; Al-Amri, A

    2007-07-08

    In this project, we are exploiting several seismic data sets to improve U.S. operational capabilities to monitor for low yield nuclear tests across the Middle East (including the Iranian Plateau, Zagros Mountains, Arabian Peninsula, Turkish Plateau, Gulf of Aqaba, Dead Sea Rift) and the Horn of Africa (including the northern part of the East African Rift, Afar Depression, southern Red Sea and Gulf of Aden). The data sets are being used to perform three related tasks. (1) We are determining moment tensors, moment magnitudes and source depths for regional events in the magnitude 3.0 to 6.0 range. (2) These events are being used to characterize high-frequency (0.5-16 Hz) regional phase attenuation and detection thresholds, especially from events in Iran recorded at stations across the Arabian Peninsula. (3) We are collecting location ground truth at GT5 (local) and GT20 (regional) levels for seismic events with M > 2.5, including source geometry information and source depths. Towards meeting these objectives, seismograms from earthquakes in the Zagros Mountains recorded at regional distances have been inverted for moment tensors, which have then been used to create synthetic seismograms to determine the source depths of the earthquakes via waveform matching. The source depths have been confirmed by modeling teleseismic depth phases recorded on GSN and IMS stations. Early studies of the distribution of seismicity in the Zagros region found evidence for earthquakes in the upper mantle. But subsequent relocations of teleseismic earthquakes suggest that source depths are generally much shallower, lying mainly within the upper crust. All of the regional events studied so far nucleated within the upper crust, and most of the events have thrust mechanisms. The source mechanisms for these events are being used to characterize high-frequency (0.5-16 Hz) regional phase attenuation and detection thresholds for broadband seismic stations in the Arabian Peninsula, including IMS

  3. Estimation of snowpack matching ground-truth data and MODIS satellite-based observations by using regression kriging

    Science.gov (United States)

    Juan Collados-Lara, Antonio; Pardo-Iguzquiza, Eulogio; Pulido-Velazquez, David

    2016-04-01

The estimation of the Snow Water Equivalent (SWE) is essential for an appropriate assessment of the available water resources in Alpine catchments. The hydrologic regime in these areas is dominated by the storage of water in the snowpack, which is discharged to rivers throughout the melt season. An accurate estimation of the resources will be necessary for an appropriate analysis of system operation alternatives using basin-scale management models. In order to obtain an appropriate estimation of the SWE we need to know the spatial distribution of the snowpack and the snow density within the Snow Cover Area (SCA). Data for these snow variables can be extracted from in-situ point measurements and air-borne/space-borne remote sensing observations. Different interpolation and simulation techniques have been employed for the estimation of the cited variables. In this paper we propose to estimate the snowpack from a reduced number of ground-truth data (1 or 2 campaigns per year with 23 observation points from 2000-2014) and MODIS satellite-based observations in the Sierra Nevada Mountains (Southern Spain). Regression-based methodologies have been used to study the snowpack distribution using different kinds of explicative variables: geographic, topographic and climatic. 40 explicative variables were considered: the longitude, latitude, altitude, slope, eastness, northness, radiation, maximum upwind slope and some mathematical transformations of each of them [ln(v), v^-1, v^2, v^0.5]. Eight different regression model structures were tested (combining 1, 2, 3 or 4 explicative variables): Y=B0+B1Xi (1); Y=B0+B1XiXj (2); Y=B0+B1Xi+B2Xj (3); Y=B0+B1Xi+B2XjXl (4); Y=B0+B1XiXk+B2XjXl (5); Y=B0+B1Xi+B2Xj+B3Xl (6); Y=B0+B1Xi+B2Xj+B3XlXk (7); Y=B0+B1Xi+B2Xj+B3Xl+B4Xk (8), where Y is the snow depth, (Xi, Xj, Xl, Xk) are the prediction variables (any of the 40 variables), and (B0, B1, B2, B3, B4) are the coefficients to be estimated.
The ground data are employed to calibrate the multiple regressions. In
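One of the regression structures listed above, e.g. model (3), Y = B0 + B1·Xi + B2·Xj, can be calibrated by ordinary least squares. The sketch below uses synthetic snow-depth data and illustrative variable choices (altitude and radiation); it is not the paper's calibration.

```python
import numpy as np

# Sketch: fit regression structure (3), Y = B0 + B1*Xi + B2*Xj, by OLS.
# Synthetic data; variable names and coefficients are illustrative only.

rng = np.random.default_rng(1)
n = 23                                     # observation points per campaign
altitude = rng.uniform(2000, 3400, n)      # Xi [m]
radiation = rng.uniform(100, 300, n)       # Xj [arbitrary units]
snow_depth = 0.002 * altitude - 0.004 * radiation + rng.normal(0, 0.1, n)

A = np.column_stack([np.ones(n), altitude, radiation])   # design matrix
coef, *_ = np.linalg.lstsq(A, snow_depth, rcond=None)
b0, b1, b2 = coef

pred = A @ coef
r2 = 1 - np.sum((snow_depth - pred) ** 2) \
       / np.sum((snow_depth - snow_depth.mean()) ** 2)
print(f"B0={b0:.3f} B1={b1:.5f} B2={b2:.5f} R^2={r2:.2f}")
```

The other seven structures differ only in which columns (including products of variables) enter the design matrix, so the same least-squares call covers all of them.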

  4. Automatic Checkout System for Ground Electronics of a Weapon System (Short Communication

    Directory of Open Access Journals (Sweden)

    V. Ashok Kumar

    1997-04-01

Full Text Available An automatic checkout system (ACOS) designed and developed for a surface-to-air missile system is described. The system has a built-in self-check and has been extensively used for checking faults in the subsystems of ground electronics. It has resulted in saving a lot of effort in quickly diagnosing and rectifying faults. The salient features of the ACOS have been described and the scope for further work in this area has been outlined.

  5. Towards ground-truthing of spaceborne estimates of above-ground life biomass and leaf area index in tropical rain forests

    Science.gov (United States)

    Köhler, P.; Huth, A.

    2010-08-01

The canopy height h of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or LIDAR. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground life biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI), and identify how correlation and uncertainty vary for two different spatial scales. The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree-logging conditions for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. In both undisturbed and disturbed forests AGB can be expressed as a power-law function of canopy height h (AGB = a · h^b) with an r^2 ~ 60% if data are analysed at a spatial resolution of 20 m × 20 m (0.04 ha, also called plot size). The correlation coefficient of the regression becomes significantly better in the disturbed forest sites (r^2 = 91%) if data are analysed hectare-wide. There seems to exist no functional dependency between LAI and canopy height, but there is also a linear correlation (r^2 ~ 60%) between AGB and the area fraction of gaps in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a comparison of the simulations with permanent sampling plot (PSP) data from the same region and with the
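A power law such as AGB = a · h^b is commonly fitted as a linear regression in log-log space, since ln(AGB) = ln(a) + b·ln(h). The sketch below recovers invented coefficients from synthetic height/biomass pairs; it is an illustration of the fitting technique, not the study's calibration.

```python
import numpy as np

# Sketch: recover AGB = a * h**b from noisy simulated data by linear
# regression in log-log space. Coefficients and noise level are invented.

rng = np.random.default_rng(42)
h = rng.uniform(10, 50, 200)                       # canopy height [m]
agb = 2.5 * h ** 1.7 * rng.lognormal(0, 0.25, 200)  # noisy biomass

b, log_a = np.polyfit(np.log(h), np.log(agb), 1)   # slope = exponent
a = np.exp(log_a)
print(f"AGB ~ {a:.2f} * h^{b:.2f}")                # close to 2.5 and 1.7
```

Multiplicative (lognormal) noise is the natural error model here, because it becomes additive after the log transform.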

  6. Towards ground-truthing of spaceborne estimates of above-ground life biomass and leaf area index in tropical rain forests

    Directory of Open Access Journals (Sweden)

    P. Köhler

    2010-08-01

Full Text Available The canopy height h of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or LIDAR. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height and thus to enable ground-truthing of potential future satellite observations. We here analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground life biomass (AGB) (and thus carbon content of vegetation) and leaf area index (LAI), and identify how correlation and uncertainty vary for two different spatial scales. The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree-logging conditions for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. In both undisturbed and disturbed forests AGB can be expressed as a power-law function of canopy height h (AGB = a · h^b) with an r^2 ~ 60% if data are analysed at a spatial resolution of 20 m × 20 m (0.04 ha, also called plot size). The correlation coefficient of the regression becomes significantly better in the disturbed forest sites (r^2 = 91%) if data are analysed hectare-wide. There seems to exist no functional dependency between LAI and canopy height, but there is also a linear correlation (r^2 ~ 60%) between AGB and the area fraction of gaps in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a

  7. Curiosity's traverse through the upper Murray formation (Gale crater): ground truth for orbital detections of Martian clay minerals

    Science.gov (United States)

    Dehouck, Erwin; Carter, John; Gasnault, Olivier; Pinet, Patrick; Daydou, Yves; Gondet, Brigitte; Mangold, Nicolas; Johnson, Jeffrey; Arvidson, Raymond; Maurice, Sylvestre; Wiens, Roger

    2017-04-01

    Orbital observations from visible/near-infrared (VNIR) spectrometers have shown that hydrated clay minerals are widespread on the surface of Mars (e.g., Carter et al., JGR, 2013), but implications in terms of past environmental conditions are debated. In this context, in situ missions can play a crucial role by providing "ground truth" and detailed geological setting for orbital signatures. Since its landing in 2012, the Mars Science Laboratory rover Curiosity has found evidence for clay minerals in several sedimentary formations within Gale crater. The first clays were encountered at Yellowknife Bay, where results from the CheMin X-ray diffractometer (XRD) showed the presence of 20 wt% tri-octahedral, Fe/Mg-bearing smectites (Vaniman et al., Science, 2014). However, due to dust cover, this location lacks any signature of clay minerals in orbital VNIR observations. Smaller amounts of clay minerals were found later in the rover's traverse, but again at locations with no specific signature from orbit. More recently, Curiosity reached the upper Murray formation, a sedimentary layer consisting primarily of mudstones and belonging to the basal part of Aeolis Mons (or Mt Sharp), the central mound of Gale crater. There, for the first time, orbital signatures of clay minerals can be compared to laterally-equivalent samples that were analyzed by Curiosity's payload. Orbital VNIR spectra suggest the prevalence of di-octahedral, Al/Fe-bearing smectites, clearly distinct from the tri-octahedral, Fe/Mg-bearing species of Yellowknife Bay (Carter et al., LPSC, 2016). Preliminary results from XRD and EGA analyses performed by the CheMin and SAM instruments at Marimba, Quela and Sebina drill sites are broadly consistent with such interpretation. However, and perhaps unsurprisingly, in situ data show more complexity than orbital observations. In particular, in situ data suggest the possible presence of an illitic component as well as the possible co-existence of both di

  8. Ground Truth Location of Earthquakes by Use of Ambient Seismic Noise From a Sparse Seismic Network: A Case Study in Western Australia

    Science.gov (United States)

    Zeng, Xiangfang; Xie, Jun; Ni, Sidao

    2015-06-01

The estimated Green's function (EGF) extracted from the ambient seismic noise cross-correlation function (NCF) enables valuable calibration of surface wave propagation along the path connecting seismic stations. Such calibration is adopted in a new method for ground truth location of earthquakes, achieved through location relative to a seismic station. The surface wave group travel times were obtained from the NCFs between a station near the earthquake and remote stations. The differential travel times from the NCFs and the surface wave of the earthquake were used in a relative location procedure. When this method was applied to earthquake location with only six seismic stations in western Australia, the location of the Mw 4.1 Kalannie (September 21, 2005) earthquake was found to be accurate to within 2 km compared with the InSAR-derived ground truth location, for which good azimuth coverage of seismic stations is preferable. Synthetic tests suggest that the group travel time is only slightly affected by focal mechanism and focal depth, so unknown earthquake source parameters did not introduce substantial bias into the earthquake location obtained with the group travel time method.
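The basic principle, that the cross-correlation of two noisy records sharing a propagating wavefield peaks at the inter-station travel time, can be demonstrated on synthetic data. The sketch below uses white noise and a pure time shift; real NCF processing additionally involves filtering, whitening, and long-term stacking.

```python
import numpy as np

# Toy sketch: a shared wavefield recorded at two stations with a 2.5 s
# delay is buried in independent noise; the cross-correlation peak
# recovers the delay, mimicking travel-time extraction from an NCF.

rng = np.random.default_rng(7)
fs = 100            # sampling rate [Hz]
n = 20000           # 200 s of data
delay = 250         # true inter-station delay in samples (2.5 s)

wave = rng.normal(size=n + delay)                  # ambient wavefield
sta_a = wave[delay:] + 0.5 * rng.normal(size=n)    # wavefield arrives first
sta_b = wave[:n] + 0.5 * rng.normal(size=n)        # same field, delayed

ncf = np.correlate(sta_b, sta_a, mode="full")      # noise correlation fn
lag = np.argmax(ncf) - (n - 1)                     # samples of delay
print(lag / fs)                                    # -> 2.5
```

With `mode="full"`, zero lag sits at index `n - 1`, so subtracting it converts the peak index into a signed delay.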

  9. A multivariate analytical method to characterize sediment attributes from high-frequency acoustic backscatter and ground-truthing data (Jade Bay, German North Sea coast)

    Science.gov (United States)

    Biondo, Manuela; Bartholomä, Alexander

    2017-04-01

One of the burning issues on the topic of acoustic seabed classification is the lack of solid, repeatable, statistical procedures that can support the verification of acoustic variability in relation to seabed properties. Acoustic sediment classification schemes often lead to biased and subjective interpretation, as they ultimately aim at an oversimplified categorization of the seabed based on conventionally defined sediment types. However, grain size variability alone cannot account for acoustic diversity, which is ultimately affected by multiple physical processes, scale of heterogeneity, instrument settings, data quality, image processing and segmentation performances. Understanding and assessing the weight of all of these factors on backscatter is a difficult task, due to the spatially limited and fragmentary knowledge of the seabed from direct observations (e.g. grab samples, cores, videos). In particular, large-scale mapping requires an enormous availability of ground-truthing data that is often obtained from heterogeneous and multidisciplinary sources, resulting in a further chance of misclassification. Independently from all of these limitations, acoustic segments still contain signals for seabed changes that, if appropriate procedures are established, can be translated into meaningful knowledge. In this study we design a simple, repeatable method, based on multivariate procedures, with the aim of classifying a 100 km2, high-frequency (450 kHz) sidescan sonar mosaic acquired in the year 2012 in the shallow upper-mesotidal inlet of the Jade Bay (German North Sea coast). The tool used for the automated classification of the backscatter mosaic is the QTC SWATHVIEW™ software. The ground-truthing database included grab sample data from multiple sources (2009-2011). The method was designed to extrapolate quantitative descriptors for acoustic backscatter and model their spatial changes in relation to grain size distribution and morphology. The
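As a hedged stand-in for the multivariate segmentation workflow (the study itself uses the QTC SWATHVIEW software, whose internals are not described here), one can cluster per-patch backscatter descriptors, e.g. mean intensity and a texture measure, into acoustic classes. Everything in the sketch is synthetic and invented.

```python
import numpy as np

# Hedged sketch (invented descriptors): group synthetic backscatter
# statistics into two acoustic classes with a minimal k-means.

rng = np.random.default_rng(3)
class_a = rng.normal([0.2, 0.8], 0.05, (50, 2))   # e.g. muddy patches
class_b = rng.normal([0.7, 0.3], 0.05, (50, 2))   # e.g. sandy patches
X = np.vstack([class_a, class_b])

centers = X[[0, -1]].astype(float)                 # deterministic init
for _ in range(10):                                # Lloyd iterations
    labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(np.bincount(labels))   # -> [50 50]: the two classes separate cleanly
```

The resulting acoustic classes would then be compared against grab-sample grain sizes, which is the ground-truthing step the abstract emphasizes.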

  10. Antirealist Truth

    NARCIS (Netherlands)

    Romeijn, Jan-Willem; Douven, Igor; Horsten, Leon

    2008-01-01

    Antirealists have hitherto offered at best sketches of a theory of truth. This paper presents an antirealist theory of truth in some formal detail. It is shown that the theory is able to deal satisfactorily with some problems that are standardly taken to beset antirealism.

  11. Design of Wireless Automatic Synchronization for the Low-Frequency Coded Ground Penetrating Radar

    Directory of Open Access Journals (Sweden)

    Zhenghuan Xia

    2015-01-01

    Full Text Available Low-frequency coded ground penetrating radar (GPR) with a pair of wire dipole antennas has advantages for deep detection. Owing to the large distance between the two antennas, the synchronization design is a major challenge in implementing the GPR system. This paper proposes a simple and stable wireless automatic synchronization method based on our developed GPR system, which does not need any synchronization chips or modules and reduces the cost of the hardware system. The transmitter emits the synchronization preamble and pseudorandom binary sequence (PRBS) at an appropriate time interval, while the receiver automatically estimates the synchronization time and receives the signal returned from underground targets. All of the processing is performed in a single FPGA. The performance of the proposed synchronization method is validated experimentally.

  12. Up-Scaling Field Observations to Ground Truth Seismic Interpretations and Test Dynamic Models of Deep Water Rifted Margins: What are the Challenges?

    Science.gov (United States)

    Manatschal, G.; Nirrengarten, M.; Epin, M. E.

    2015-12-01

    Recent advances in the study of rifted margins have resulted from the development of new high-resolution seismic imaging methods and dynamic modelling, which make it possible to image the crustal-scale structure of rifted margins and to experiment with the conditions under which they formed. However, the parameter space used, as well as the seismic interpretations and model results, need to be ground-truthed against direct observations and data. In the case of deep-water rifted margins, the problem is that drill-hole data are expensive, rare and available from only a handful of examples worldwide. In contrast, remnants preserving kilometre-scale outcrops of former deep-water rifted margins have been described from the Alps and the Pyrenees in Western Europe. These large-scale outcrops provide direct access to mantle and crustal rocks and the associated sedimentary sequences and magmatic additions. The combination of world-class outcrops with classical, field-based mapping and analytical methods can provide the missing data necessary to calibrate and test dynamic models, as well as to ground-truth seismic interpretations. In my presentation I will use observations and data from key outcrops of the most distal fossil Alpine Tethys margins exposed in SE Switzerland, with the aim of describing the deformation processes and conditions during final rifting and of testing rift modes (semi-ductile flow vs. brittle poly-phase faulting). I will focus in particular on the way strain is distributed and the bulk rheology evolves during hyper-extension and mantle exhumation, and compare the observations with model results and seismic interpretations. Up- and down-scaling observations/data and bridging multiple spatial and temporal scales is key to understanding the large-scale extensional processes that are at the origin of hyper-extended and exhumed mantle domains. The major challenge is to understand how the lessons learned from the well-documented examples in the Alps and Pyrenees can be used

  13. Derivation from the Landsat 7 NDVI and ground truth validation of LAI and interception storage capacity for wetland ecosystems in Biebrza Valley, Poland

    Science.gov (United States)

    Suliga, Joanna; Chormański, Jarosław; Szporak-Wasilewska, Sylwia; Kleniewska, Małgorzata; Berezowski, Tomasz; van Griensven, Ann; Verbeiren, Boud

    2015-10-01

    Wetlands are very valuable areas because they provide a wide range of ecosystem services; modeling of wetland areas is therefore highly relevant. However, the most widely used hydrological models were developed in the 1990s and are usually not adjusted to simulate wetland conditions. In the case of wetlands, including interception storage in a model's calculations is even more challenging, because literature data hardly exist. This study computes interception storage capacity based on a Landsat 7 image and ground-truthing measurements conducted in the Biebrza Valley, Poland. The method was based on collecting and weighing dry, wet and fully saturated samples of sedges. During the experiments, measurements of fresh/dry biomass and leaf area index (LAI) were performed. The research was repeated three times during the same season (May, June and July 2013) to observe the temporal variability of the parameters. The ground-truthing measurements were used to validate the estimates of parameters derived from images acquired in a period close to that of the measurement campaigns. The major advantage of remote sensing is the ability to obtain an area-covering, spatially and temporally distributed estimate of the interception storage capacity. Results from this study proved that the interception capacity of wetland vegetation changes considerably during the vegetation season (temporal variability) and reaches its maximum value when plants are fully developed. Different areas, depending on the plant species present, are characterized by different values of interception capacity (spatial variability). This research frames within the INTREV and HiWET projects, funded respectively by the National Science Centre (NCN) in Poland and BELSPO STEREO III.

  14. Feature Extraction and Automatic Material Classification of Underground Objects from Ground Penetrating Radar Data

    Directory of Open Access Journals (Sweden)

    Qingqing Lu

    2014-01-01

    Full Text Available Ground penetrating radar (GPR) is a powerful tool for detecting objects buried underground. However, interpretation of the acquired signals remains a challenging task, since an experienced user is required to manage the entire operation. Particularly difficult is the classification of the material type of underground objects in noisy environments. This paper proposes a new feature extraction method. First, the discrete wavelet transform (DWT) is applied to the A-scan data and the approximation coefficients are extracted. Then, the fractional Fourier transform (FRFT) is used to transform the approximation coefficients into the fractional domain, where the features are extracted. The features are supplied to support vector machine (SVM) classifiers to automatically identify the material of underground objects. Experimental results show that the proposed feature-based SVM system achieves better classification accuracy in noisy environments than SVM systems based on statistical and frequency-domain features, and that the classification accuracy of the proposed features depends little on the choice of SVM model.
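The transform-then-classify pipeline described in this abstract (wavelet approximation coefficients, a further spectral transform, then an SVM) can be sketched as follows. This is an illustrative reconstruction on synthetic A-scan traces, not the authors' code: the single-level Haar DWT is implemented directly, and an ordinary FFT magnitude stands in for the fractional Fourier transform, which has no standard NumPy/SciPy implementation.

```python
import numpy as np
from sklearn.svm import SVC

def haar_approximation(trace):
    """Single-level Haar DWT approximation coefficients (low-pass half)."""
    pairs = trace.reshape(-1, 2)
    return pairs.sum(axis=1) / np.sqrt(2)

def extract_features(trace):
    """DWT approximation followed by a spectral magnitude transform.
    The FFT magnitude here is a stand-in for the paper's FRFT step."""
    approx = haar_approximation(trace)
    return np.abs(np.fft.rfft(approx))

rng = np.random.default_rng(0)
n, length = 60, 128
t = np.arange(length)
# Two hypothetical material classes, modelled as reflections with
# different dominant frequencies plus noise.
X, y = [], []
for label, freq in ((0, 0.05), (1, 0.12)):
    for _ in range(n):
        trace = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(length)
        X.append(extract_features(trace))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])   # train on even-indexed traces
accuracy = clf.score(X[1::2], y[1::2])        # test on odd-indexed traces
print(accuracy)
```

On well-separated synthetic classes like these, the spectral features make the two materials trivially separable; real GPR data would of course be far noisier.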

  15. A study of National Lightning Detection Network responses to natural lightning based on ground truth data acquired at LOG with emphasis on cloud discharge activity

    Science.gov (United States)

    Zhu, Y.; Rakov, V. A.; Tran, M. D.; Nag, A.

    2016-12-01

    The U.S. National Lightning Detection Network (NLDN) detection efficiency (DE) and classification accuracy (CA) for cloud discharge (IC) activity (identified here by a sequence of non-return-stroke-type electric field pulses not accompanied by channels to ground) were evaluated using optical and electric field data acquired at the LOG (Lightning Observatory in Gainesville), Florida. Our ground truth "IC events" include 26 "isolated IC events" (complete IC flashes), 58 "IC events before first return stroke," and 69 "IC events after first return stroke." For the total of 153 IC events, 33% were detected by the NLDN, and the classification accuracy was 86%. For complete IC flashes, the detection efficiency and classification accuracy were 73% and 95%, respectively, and the average number of NLDN-reported cloud pulses was 2.9 per detected event. For 24 preliminary breakdown pulse trains in CG flashes, the detection efficiency and classification accuracy were 46% and 82%, respectively. We have additionally estimated the DE and CA for return strokes in CG flashes. Irrespective of stroke order and polarity, the DE was 92% (339/367), and the CA was also 92% (312/339). The DEs for negative first and subsequent strokes were 98% and 90%, respectively.

  16. Validation of the Carotid Intima-Media Thickness Variability: Can Manual Segmentations Be Trusted as Ground Truth?

    Science.gov (United States)

    Meiburger, Kristen M; Molinari, Filippo; Wong, Justin; Aguilar, Luis; Gallo, Diego; Steinman, David A; Morbiducci, Umberto

    2016-07-01

    The common carotid artery intima-media thickness (IMT) is widely accepted and used as an indicator of atherosclerosis. Recent studies, however, have found that the irregularity of the IMT along the carotid artery wall has a stronger correlation with atherosclerosis than the IMT itself. We set out to validate IMT variability (IMTV), a parameter defined to assess IMT irregularities along the wall. In particular, we analyzed whether or not manual segmentations of the lumen-intima and media-adventitia interfaces can be considered reliable in the calculation of the IMTV parameter. To do this, we used a total of 60 simulated ultrasound images with a priori known IMT and IMTV values. The images, simulated using the Fast And Mechanistic Ultrasound Simulation software, presented five different morphologies, four nominal IMT values and three different levels of variability along the carotid artery wall (no variability, small variability and large variability). Three experts traced the lumen-intima (LI) and media-adventitia (MA) profiles, and two automated algorithms were employed to obtain the LI and MA profiles. One expert also re-traced the LI and MA profiles to test intra-reader variability. The average IMTV measurements of the profiles used to simulate the longitudinal B-mode images were 0.002 ± 0.002, 0.149 ± 0.035 and 0.286 ± 0.068 mm for the cases of no variability, small variability and large variability, respectively. The IMTV measurements of one of the automated algorithms were statistically similar to these values (p > 0.05, Wilcoxon signed rank) for small and large variability, but not for the case of no variability (p < 0.05), suggesting that manual segmentations cannot be considered a reliable ground truth. On the other hand, our automated algorithm was found to be more reliable, indicating how automated techniques could therefore foster analysis of carotid artery intima-media thickness irregularity.

  17. Using the UFL-8 UV fluorescent LIDAR to collect ground truth data for calibrating MODIS based CDOM, chlorophyll and suspended sediment measurements

    Science.gov (United States)

    Zlinszky, A.; Pelevin, V.; Goncharenko, I.; Soloviev, D.; Molnár, G.

    2009-04-01

    Satellite remote sensing of water quality parameters is becoming a routine method in oceanological applications around the world. One of the main difficulties of calibrating satellite images to map water quality parameters is the large number and high spatial coverage of ground truth data needed. The UFL-8 fluorescent LIDAR, developed by the Shirshov Oceanological Institute of the Russian Academy of Sciences, measures CDOM, chlorophyll and suspended sediment near-surface concentrations optically in situ from a travelling boat, and so can acquire a large number of widespread measurements very quickly. The registration of the measured values is connected to a GPS, so all measurements are geo-tagged and can be used for interpolating maps of the measured parameters. Since this instrument also has to be calibrated, some water samples have to be collected, but the optical measurements usually show very strong correlation with the water sample data. This approach was tested on Lake Balaton, Hungary, in September 2008. Lake Balaton is characterized by its large area (597 km2), elongated shape and relatively shallow water depth (avg 3.2 m). The lake has a strong trophic gradient from the SW to the NE, the main tributary river carries large amounts of CDOM, and suspended sediment concentrations can be very high because the lake is shallow and the sediment is fine grained. We measured in diverse weather conditions, in an enclosed bay, a narrow strait and a large area of open water. 28 water samples were collected during the LIDAR measurements, and the CDOM, chlorophyll and suspended sediment concentrations were measured in the laboratory using classic hydrological methods. These results were used to calibrate the LIDAR measurements, with R2 values between 0.90 and 0.95. The relative values measured by the LIDAR were converted to absolute values using this regression, and the point-by-point results were interpolated into a raster with a cell size equal to the spatial resolution of

  18. Choroidal thickness maps from spectral domain and swept source optical coherence tomography: algorithmic versus ground truth annotation.

    Science.gov (United States)

    Philip, Ana-Maria; Gerendas, Bianca S; Zhang, Li; Faatz, Henrik; Podkowinski, Dominika; Bogunovic, Hrvoje; Abramoff, Michael D; Hagmann, Michael; Leitner, Roland; Simader, Christian; Sonka, Milan; Waldstein, Sebastian M; Schmidt-Erfurth, Ursula

    2016-10-01

    The purpose of the study was to create a standardised protocol for choroidal thickness measurements and to determine whether choroidal thickness measurements made on images obtained by spectral domain optical coherence tomography (SD-OCT) and swept source (SS-) OCT from patients with healthy retina are interchangeable when performed manually or with an automatic algorithm. For each volumetric scan, 36 grid-cell measurements of choroidal thickness were obtained, measured for SD-OCT and SS-OCT with two methods on 18 eyes of healthy volunteers. Manual segmentation by experienced retinal graders from the Vienna Reading Center and automated segmentation on >6300 images of the choroid from both devices were statistically compared. Model-based comparison between SD-OCT and SS-OCT showed a systematic difference in choroidal thickness of 16.26±0.725 μm (p<0.05) for manual segmentation and a difference of -0.68±0.513 μm (p=0.1833) for automated segmentation. The correlation coefficients for SD-OCT and SS-OCT measures within eyes were 0.975 for manual segmentation and 0.955 for automatic segmentation. The choroidal thickness measurements of SD-OCT and SS-OCT indicate that these two devices are interchangeable, with a trend towards slightly thicker choroidal measurements on SD-OCT that is of limited clinical relevance. Use of an automated algorithm to segment choroidal thickness was validated in healthy volunteers.

  19. Small UAV Automatic Ground Collision Avoidance System Design Considerations and Flight Test Results

    Science.gov (United States)

    Sorokowski, Paul; Skoog, Mark; Burrows, Scott; Thomas, SaraKatie

    2015-01-01

    The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center Small Unmanned Aerial Vehicle (SUAV) Automatic Ground Collision Avoidance System (Auto GCAS) project demonstrated several important collision avoidance technologies. First, the SUAV Auto GCAS design included capabilities to take advantage of terrain avoidance maneuvers flying turns to either side as well as straight over terrain. Second, the design also included innovative digital elevation model (DEM) scanning methods. The combination of multi-trajectory options and new scanning methods demonstrated the ability to reduce the nuisance potential of the SUAV while maintaining robust terrain avoidance. Third, the Auto GCAS algorithms were hosted on the processor inside a smartphone, providing a lightweight hardware configuration for use in either the ground control station or on board the test aircraft. Finally, compression of DEM data for the entire Earth and successful hosting of that data on the smartphone was demonstrated. The SUAV Auto GCAS project demonstrated that together these methods and technologies have the potential to dramatically reduce the number of controlled flight into terrain mishaps across a wide range of aviation platforms with similar capabilities including UAVs, general aviation aircraft, helicopters, and model aircraft.

  20. Semi-Automatic Selection of Ground Control Points for High Resolution Remote Sensing Data in Urban Areas

    Directory of Open Access Journals (Sweden)

    Gulbe Linda

    2016-12-01

    Full Text Available The geometrical accuracy of remote sensing data is often ensured by geometrical transforms based on ground control points (GCPs). Manual selection of GCPs is a time-consuming process that calls for some degree of automation. The aim of this study is therefore to present and evaluate a methodology for easier, semi-automatic selection of ground control points in urban areas. A custom line-scanning algorithm was implemented and applied to the data in order to extract potential GCPs for an image analyst. The proposed method was tested for classical orthorectification and for a special object-polygon transform. The results are convincing and show that, in the test case, the semi-automatic methodology is able to correct the locations of 70 % (thermal data) to 80 % (orthophoto images) of buildings. Geometrical transforms for subimages of approximately 3 hectares, with approximately 12 automatically found GCPs, resulted in an RMSE of approximately 1 m with a standard deviation of 1.2 m.
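Once GCPs are selected, the geometrical transform and its RMSE can be obtained by least squares. The sketch below fits a 2-D affine transform to a handful of invented pixel/world coordinate pairs; it is a generic illustration of the fitting step, not the authors' line-scanning method, and all coordinate values are hypothetical.

```python
import numpy as np

# Hypothetical GCPs: pixel coordinates and matching map coordinates (metres).
pixel = np.array([[10, 12], [200, 15], [25, 180], [210, 190], [110, 95]], float)
world = np.array([[509.2, 810.7], [679.7, 813.7], [522.6, 962.3],
                  [689.2, 970.8], [598.9, 885.6]])

# Fit a 2-D affine transform  world ~ [x, y, 1] @ coeffs  by least squares.
design = np.hstack([pixel, np.ones((len(pixel), 1))])
coeffs, *_ = np.linalg.lstsq(design, world, rcond=None)

# RMSE of the fitted transform over the GCPs themselves.
predicted = design @ coeffs
residuals = np.linalg.norm(predicted - world, axis=1)
rmse = np.sqrt(np.mean(residuals ** 2))
print(round(rmse, 3))
```

In practice the RMSE would be evaluated on independent check points rather than on the GCPs used for fitting.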

  1. Evaluation of gravimetric ground truth soil moisture data collected for the agricultural soil moisture experiment, 1978 Colby, Kansas, aircraft mission

    Science.gov (United States)

    Arya, L. M.; Phinney, D. E. (Principal Investigator)

    1980-01-01

    Soil moisture data acquired to support the development of algorithms for estimating surface soil moisture from remotely sensed backscattering of microwaves from ground surfaces are presented. Aspects of field uniformity and variability of gravimetric soil moisture measurements are discussed. Moisture distribution patterns are illustrated by frequency distributions and contour plots. Standard deviations and coefficients of variation relative to degree of wetness and agronomic features of the fields are examined. The influence of sampling depth on observed moisture content and variability is indicated. For the various sets of measurements, soil moisture values that appear as outliers are flagged. The distribution and legal descriptions of the test fields are included, along with examinations of soil types, agronomic features, and the sampling plan. Bulk density data for the experimental fields are appended, should analyses involving volumetric moisture content be of interest to users of the data in this report.

  2. Gebiss: an ImageJ plugin for the specification of ground truth and the performance evaluation of 3d segmentation algorithms

    Directory of Open Access Journals (Sweden)

    Yee Kwo

    2011-06-01

    Full Text Available Abstract Background Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissue, cells or subcellular compartments. Depending on the degree of user interaction, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches. Scenarios include the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods. Results We have developed Gebiss, an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation. Conclusions We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss/.

  3. Comparing different methods for assessing ground truth of rover data analysis for the 2005 season of the Life in the Atacama Project

    Science.gov (United States)

    Thomas, G. W.; Peate, I. Ukstins; Nakamoto, J.; Pudenz, E.; Glasgow, J.; Bretthauer, J.; Cabrol, N.; Wettergreen, D.; Grin, E.; Coppin, P.; Dohm, J. M.; Piatek, J. L.; Warren-Rhodes, K.; Hock, A. N.; Weinstein, S.; Fisher, G.; Diaz, G. Chong; Cockell, C.; Marinangeli, L.; Minkley, N.; Moersch, J.; Ori, G. G.; Smith, T.; Stubb, K.; Wagner, M.; Waggoner, A. S.

    2007-12-01

    The scientific success of a remote exploration rover mission depends on the right combination of technology, teamwork and scientific insight. In order to quantitatively evaluate the success of a rover field trial, it is necessary to assess the accuracy of scientific interpretations made during the field test. This work compares three structured approaches to assessing the ground truth of scientific findings from a science team conducting a remote investigation of a locale using an autonomous rover. For the first approach, independent assessment, the daily science summaries were analyzed and reduced to a series of 1082 factual statements, which were treated as hypotheses. An independent scientist traveled to the field area to assess these hypotheses. For the second approach, guided self-study, the mission scientists themselves traveled to the field area and evaluated their own scientific interpretations. The third approach, discrepancy investigation, searched for the root causes of differences between the scientific interpretations made in the control room and those made in the field. The independent investigation provided sensitive, quantitative data, but suffered from the lack of context and continuity developed in the mission control room. The guided evaluation benefited from the context of the mission, but lacked clarity and consistency. The discrepancy investigation provided insight into the root causes behind the discrepancies, but was expensive and time consuming. The independent investigation method yielded particularly compelling results, but each method offers advantages and a comprehensive rover field trial assessment should include a combination of all three.

  4. LIF LiDAR high resolution ground truth data, suitable to validate medium-resolution bands of MODIS/Terra radiometer in case of inner waterbody ecological monitoring

    Science.gov (United States)

    Pelevin, Vadim; Zavialov, Peter; Zlinszky, Andras; Khimchenko, Elizaveta; Toth, Viktor; Kremenetskiy, Vyacheslav

    2017-04-01

    The report is based on field measurements on Lake Balaton (Hungary) in September 2008, obtained with the Light Induced Fluorescence (LIF) portable LiDAR UFL-8. The instrument was tested in natural lake waters and validated by conventional contact measurements. We had the opportunity to compare our results with MODIS/Terra spectroradiometer satellite images received at the satellite monitoring station of the Eötvös Loránd University (Budapest, Hungary), in an attempt at LiDAR calibration of the satellite's medium-resolution band data. Water quality parameters were surveyed with the UFL-8 in a time interval very close to the satellite overpass. High-resolution maps of the spatial distributions of chlorophyll-a, chromophoric dissolved organic matter and total suspended sediments were obtained. Our results show that the resolution provided by laboratory measurements on a few water samples does not resemble actual conditions in the lake, and that it is more efficient to measure these parameters less accurately but with better spatial coverage using the LiDAR. The UFL instrument has great potential for collecting ground truth data for satellite remote sensing of these parameters. Its measurement accuracy is comparable to classic water sample measurements, the measurement speed is high, and large areas can be surveyed in a time interval very close to the satellite overpass.

  5. Semi-automatic handling of meteorological ground measurements using WeatherProg: prospects and practical implications

    Science.gov (United States)

    Langella, Giuliano; Basile, Angelo; Bonfante, Antonello; De Mascellis, Roberto; Manna, Piero; Terribile, Fabio

    2016-04-01

    WeatherProg is a computer program for the semi-automatic handling of data measured at ground stations within a climatic network. The program performs a set of tasks ranging from gathering raw point-based sensor measurements to the production of digital climatic maps. Originally the program was developed as the baseline asynchronous engine for weather records management within the SOILCONSWEB Project (LIFE08 ENV/IT/000408), in which daily and hourly data were used to run water balance models in the soil-plant-atmosphere continuum or pest simulation models. WeatherProg can be configured to automatically perform the following main operations: 1) data retrieval; 2) data decoding and ingestion into a database (e.g. SQL-based); 3) data checking to recognize missing and anomalous values (using a set of differently combined checks, including logical, climatological, spatial, temporal and persistence checks); 4) infilling of data flagged as missing or anomalous (deterministic or statistical methods); 5) spatial interpolation based on alternative/comparative methods such as inverse distance weighting, iterative regression kriging, and weighted least squares regression (based on physiography), using an approach similar to PRISM; and 6) data ingestion into a geodatabase (e.g. PostgreSQL+PostGIS or rasdaman). There is an increasing demand for digital climatic maps both in research and development (most scientific modelling approaches require digital climate maps, leaving a gap with respect to gauged measurements) and in practical applications (e.g. the need to improve the management of weather records, which in turn improves the support provided to farmers). The demand is particularly burdensome considering the requirement to handle climatic data at the daily (e.g. in soil hydrological modelling) or even hourly time step (e.g. risk modelling in phytopathology). The key advantage of WeatherProg is the ability to perform all the required operations and
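Of the interpolation options listed in step 5, inverse distance weighting is the simplest: each grid point is estimated as a weighted average of station values, with weights proportional to 1/d^p. A minimal sketch (not WeatherProg's implementation; station positions and temperatures are made up):

```python
import numpy as np

def idw(stations_xy, values, grid_xy, power=2.0):
    """Inverse distance weighting: weight each station by 1/d**power."""
    d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)          # avoid division by zero at a station
    w = 1.0 / d ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Four hypothetical stations at the corners of a 10 km x 10 km square.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
temps = np.array([12.0, 14.0, 13.0, 15.0])   # hypothetical daily means, deg C

grid = np.array([[5.0, 5.0], [0.0, 0.0]])
est = idw(stations, temps, grid)
print(est)   # centre -> mean of all stations; a station point -> its own value
```

At the centre all distances are equal, so the estimate is the plain mean (13.5); at a station location the clamped near-zero distance makes that station's value dominate (12.0).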

  6. Towards ground-truthing of spaceborne estimates of above-ground life biomass and leaf area index in tropical rain forests

    OpenAIRE

    Köhler, P.; Huth, A.

    2010-01-01

    The canopy height h of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or LIDAR. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established this would offer the possibility for a global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model...

  7. Estimating Daily Maximum and Minimum Land Air Surface Temperature Using MODIS Land Surface Temperature Data and Ground Truth Data in Northern Vietnam

    Directory of Open Access Journals (Sweden)

    Phan Thanh Noi

    2016-12-01

    Full Text Available This study aims to evaluate quantitatively the land surface temperature (LST) derived from the MODIS (Moderate Resolution Imaging Spectroradiometer) MOD11A1 and MYD11A1 Collection 5 products for daily land air surface temperature (Ta) estimation over a mountainous region in northern Vietnam. The main objective is to estimate maximum and minimum Ta (Ta-max and Ta-min) using both TERRA and AQUA MODIS LST products (daytime and nighttime) and auxiliary data, solving the discontinuity problem of ground measurements. No previous study of Vietnam has integrated both TERRA and AQUA LST, daytime and nighttime, for Ta estimation (using four MODIS LST datasets). In addition, to find out which variables are the most effective in describing the differences between LST and Ta, we tested several popular methods, such as the Pearson correlation coefficient, stepwise selection, the Bayesian information criterion (BIC), adjusted R-squared and principal component analysis (PCA), on 14 variables (the four LST products, NDVI, elevation, latitude, longitude, day length in hours, Julian day and four view zenith angle variables), and then applied nine models for Ta-max estimation and nine models for Ta-min estimation. The results showed that the differences between MODIS LST and ground truth temperature derived from 15 climate stations depend on time and regional topography. The best results for Ta-max and Ta-min estimation were achieved when we combined both daytime and nighttime LST from TERRA and AQUA with data from the topographic analysis.
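The BIC-based variable screening mentioned above amounts to fitting candidate regressions of Ta on different predictor sets and preferring the lowest BIC. A toy illustration (synthetic data stands in for MODIS LST and station Ta; all coefficients are invented, and this is not the study's model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
lst_day = rng.uniform(15, 40, n)         # hypothetical MODIS daytime LST, deg C
elevation = rng.uniform(100, 1500, n)    # metres
# Synthetic "ground truth": Ta-max depends on LST and (weakly) on elevation.
ta_max = 0.8 * lst_day - 0.004 * elevation + 2.0 + rng.normal(0, 1.0, n)

def ols_bic(X, y):
    """Fit OLS and return the Bayesian information criterion (Gaussian errors)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    k = X1.shape[1]
    return len(y) * np.log(rss / len(y)) + k * np.log(len(y))

bic_lst_only = ols_bic(lst_day[:, None], ta_max)
bic_lst_elev = ols_bic(np.column_stack([lst_day, elevation]), ta_max)
print(bic_lst_only > bic_lst_elev)   # the elevation term lowers BIC here
```

Stepwise selection repeats this comparison across many candidate predictor sets, keeping the terms that reduce BIC.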

  8. Cosmological ``Truths''

    Science.gov (United States)

    Bothun, Greg

    2011-10-01

    Ever since Aristotle placed us, with certainty, in the Center of the Cosmos, cosmological models have more or less operated from a position of known truths. As early as 1963, for instance, it was ``known'' that the Universe had to be 15-17 billion years old due to the suspected ages of globular clusters. For many years, attempts to determine the expansion age of the Universe (the inverse of the Hubble constant) were made against this preconceived and biased notion. Not surprisingly, when more precise observations indicated a Hubble expansion age of 11-13 billion years, stellar models suddenly changed to produce a new age for globular cluster stars, consistent with 11-13 billion years. Then in 1980, to solve a variety of standard big bang problems, inflation was introduced in a fairly ad hoc manner. Inflation makes the simple prediction that the net curvature of spacetime is zero (i.e. spacetime is flat). The consequence of introducing inflation is the necessary existence of a dark matter dominated Universe, since the known baryonic material could comprise no more than 1% of the energy density necessary to make spacetime flat. As a result of this new cosmological ``truth'', a significant worldwide effort was launched to detect the dark matter (which obviously also has particle physics implications). To date, no such cosmological component has been detected. Moreover, all available dynamical inferences of the mass density of the Universe showed it to be about 20% of that required for closure. This again was inconsistent with the truth that the real density of the Universe was the closure density (e.g. Omega = 1), that the observations were biased, and that 99% of the mass density had to be in the form of dark matter. That is, we know the Universe is two-component -- baryons and dark matter. Another prevailing cosmological truth during this time was that all the baryonic matter was known to be in galaxies that populated our galaxy catalogs.
Subsequent

  9. Neighborhood socioeconomic deprivation and minority composition are associated with better potential spatial access to the ground-truthed food environment in a large rural area.

    Science.gov (United States)

    Sharkey, Joseph R; Horel, Scott

    2008-03-01

    Little is known about spatial inequalities and potential access to the food environment in rural areas. In this study, we assessed the food environment in a 6-county rural region of Texas (11,567 km2) through ground-truthed methods that included direct observation and on-site Global Positioning System technology, in order to examine the relationship between neighborhood inequalities (e.g., socioeconomic deprivation and minority composition) and network distance from all 101 rural neighborhoods to the nearest food store (FS). Neighborhood deprivation was determined from socioeconomic characteristics using 2000 census block group (CBG) data. Network distances were calculated from the population-weighted center of each CBG to the nearest supermarket, grocery, convenience, and discount store. Multiple regression models examined associations among deprivation, minority composition, population density, and network distance to the nearest FS. The median distance to the nearest supermarket was 14.9 km one way (range 0.12 to 54.0 km). The distance decreased with increasing deprivation, minority composition, and population density. The most deprived neighborhoods with the greatest minority composition had better potential spatial access to the nearest FS. For >20% of all rural residents, their neighborhoods were at least 17.7 km from the nearest supermarket or full-line grocery, or 7.6 km from the nearest convenience store. This makes food shopping a challenge, especially in rural areas that lack public transportation and where many have no vehicular access. Knowledge of potential access to the food environment is essential for combining environmental approaches and health interventions so that families, especially those in rural areas, can make healthier food choices.
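The network distance used here is a shortest-path distance over the road graph, not a straight-line distance. A minimal sketch with an invented toy road network (not the study's GIS workflow) shows the computation from a neighborhood center to its nearest store:

```python
import heapq

def dijkstra(graph, source):
    """Shortest network distances (km) from source to every reachable node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                        # stale heap entry
        for neighbour, length in graph[node]:
            nd = d + length
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Hypothetical road network: node -> [(neighbour, segment length in km)]
roads = {
    "cbg_center": [("junction", 3.0)],
    "junction": [("cbg_center", 3.0), ("supermarket", 12.0),
                 ("convenience", 4.5)],
    "supermarket": [("junction", 12.0)],
    "convenience": [("junction", 4.5)],
}

dist = dijkstra(roads, "cbg_center")
nearest = min(["supermarket", "convenience"], key=dist.get)
print(nearest, dist[nearest])   # convenience 7.5
```

In the study, the same idea is applied from each population-weighted CBG center to every store type, typically via GIS network-analysis tooling rather than hand-rolled code.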

  10. Addressing the social dimensions of citizen observatories: The Ground Truth 2.0 socio-technical approach for sustainable implementation of citizen observatories

    Science.gov (United States)

    Wehn, Uta; Joshi, Somya; Pfeiffer, Ellen; Anema, Kim; Gharesifard, Mohammad; Momani, Abeer

    2017-04-01

Owing to ICT-enabled citizen observatories, citizens can take on new roles in environmental monitoring, decision making and co-operative planning, and environmental stewardship. Yet implementing advanced citizen observatories for data collection, knowledge exchange and interactions to support policy objectives is neither always easy nor successful, given the required commitment, trust, and data reliability concerns. Many efforts face problems with uptake and sustained engagement by citizens, limited scalability, unclear long-term sustainability and limited actual impact on governance processes. Similarly, to sustain the engagement of decision makers in citizen observatories, mechanisms are required from the start of the initiative in order to have them invest in and, hence, commit to and own the entire process. In order to implement sustainable citizen observatories, these social dimensions therefore need to be soundly managed. We provide empirical evidence of how the social dimensions of citizen observatories are being addressed in the Ground Truth 2.0 project, drawing on a range of relevant social science approaches. This project combines the social dimensions of citizen observatories with enabling technologies - via a socio-technical approach - so that their customisation and deployment is tailored to the envisaged societal and economic impacts of the observatories. The project consists of the demonstration and validation of six scaled-up citizen observatories in real operational conditions, both in the EU and in Africa, with a specific focus on flora and fauna as well as water availability and water quality for land and natural resources management. The demonstration cases (4 EU and 2 African) cover the full 'spectrum' of citizen-sensed data usage and citizen engagement, and therefore allow testing and validation of the socio-technical concept for citizen observatories under a range of conditions.

  11. Comparative analyses of different variants of standard ground for automatic control systems of technical processes of oil and gas production

    Science.gov (United States)

    Gromakov, E. I.; Gazizov, A. T.; Lukin, V. P.; Chimrov, A. V.

    2017-01-01

The paper analyses the efficiency (interference resistance) of standard TT, TN, and IT networks in the control links of automatic control systems (ACS) for technical processes (TP) of oil and gas production. Electromagnetic compatibility (EMC) is the standard term used to describe interference in grounding circuits. Improved EMC of ACS TP can significantly reduce the risks and costs of equipment malfunctions that could have serious consequences. It is shown that an IT network is the best type of grounding for protecting ACS TP under real operating conditions: it reduces interference to the levels stated in the standards of oil and gas companies.

  12. Discovering the truth beyond the truth.

    Science.gov (United States)

    Becker, Gerhild; Jors, Karin; Block, Susan

    2015-03-01

    The question "What is truth?" is one of the oldest questions in philosophy. Truth within the field of medicine has gained relevance because of its fundamental relationship to the principle of patient autonomy. To fully participate in their medical care, patients must be told the truth-even in the most difficult of situations. Palliative care emphasizes patient autonomy and a patient-centered approach, and it is precisely among patients with chronic, life-threatening, or terminal illnesses that truth plays a particularly crucial role. For these patients, finding out the truth about their disease forces them to confront existential fears. As physicians, we must understand that truth, similar to the complexity of pain, is multidimensional. In this article, we discuss the truth from three linguistic perspectives: the Latin veritas, the Greek aletheia, and the Hebrew emeth. Veritas conveys an understanding of truth focused on facts and reality. Aletheia reveals truth as a process, and emeth shows that truth is experienced in truthful encounters with others. In everyday clinical practice, truth is typically equated with the facts. However, this limited understanding of the truth does not account for the uniqueness of each patient. Although two patients may receive the same diagnosis (or facts), each will be affected by this truth in a very individual way. To help patients apprehend the truth, physicians are called to engage in a delicate back-and-forth of multiple difficult conversations in which each patient is accepted as a unique individual. Copyright © 2015 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  13. Automatic classification of scar tissue in late gadolinium enhancement cardiac MRI for the assessment of left-atrial wall injury after radiofrequency ablation.

    Science.gov (United States)

    Perry, Daniel; Morris, Alan; Burgon, Nathan; McGann, Christopher; Macleod, Robert; Cates, Joshua

    2012-02-23

Radiofrequency ablation is a promising procedure for treating atrial fibrillation (AF) that relies on accurate lesion delivery in the left atrial (LA) wall for success. Late Gadolinium Enhancement MRI (LGE MRI) at three months post-ablation has proven effective for noninvasive assessment of the location and extent of scar formation, which are important factors for predicting patient outcome and planning redo ablation procedures. We have developed an algorithm for automatic classification in LGE MRI of scar tissue in the LA wall and have evaluated its accuracy and consistency compared to manual scar classifications by expert observers. Our approach clusters voxels based on normalized intensity and was chosen through a systematic comparison of the performance of multivariate clustering on many combinations of image texture. Algorithm performance was determined by overlap with ground truth, using multiple overlap measures, and by the accuracy of the estimation of the total amount of scar in the LA. Ground truth was determined using the STAPLE algorithm, which produces a probabilistic estimate of the true scar classification from multiple expert manual segmentations. Evaluation of the ground truth data set was based on both inter- and intra-observer agreement, with variation among expert classifiers indicating the difficulty of scar classification for a given dataset. Our proposed automatic scar classification algorithm performs well for both scar localization and estimation of scar volume: for ground truth datasets considered easy, variability from the ground truth was low; for those considered difficult, variability from ground truth was on par with the variability across experts.
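The overlap-based evaluation described in this abstract can be illustrated with a minimal sketch. A plain Dice coefficient stands in for the paper's multiple overlap measures, and the function names and toy masks are illustrative assumptions, not the authors' implementation:

```python
def dice_overlap(auto_mask, truth_mask):
    """Dice coefficient between two binary masks (flat sequences of 0/1)."""
    inter = sum(a and t for a, t in zip(auto_mask, truth_mask))
    total = sum(auto_mask) + sum(truth_mask)
    return 2.0 * inter / total if total else 1.0

def scar_volume_error(auto_mask, truth_mask):
    """Absolute difference in scar fraction, a proxy for the error in the
    estimated total amount of scar."""
    n = len(auto_mask)
    return abs(sum(auto_mask) / n - sum(truth_mask) / n)

# Toy example: automatic classification vs. a STAPLE-style consensus mask.
auto  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 0, 1, 1]
print(dice_overlap(auto, truth))       # 2*3/(4+4) = 0.75
print(scar_volume_error(auto, truth))  # |4/8 - 4/8| = 0.0
```

Note that the two criteria are complementary: the masks above disagree on two voxels (Dice 0.75) yet estimate exactly the same total scar volume.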

  14. Application of ground-truth for classification and quantification of bird movements on migratory bird habitat initiative sites in southwest Louisiana: final report

    Science.gov (United States)

    Barrow, Wylie C.; Baldwin, Michael J.; Randall, Lori A.; Pitre, John; Dudley, Kyle J.

    2013-01-01

    This project was initiated to assess migrating and wintering bird use of lands enrolled in the Natural Resources Conservation Service’s (NRCS) Migratory Bird Habitat Initiative (MBHI). The MBHI program was developed in response to the Deepwater Horizon oil spill in 2010, with the goal of improving/creating habitat for waterbirds affected by the spill. In collaboration with the University of Delaware (UDEL), we used weather surveillance radar data (Sieges 2014), portable marine radar data, thermal infrared images, and visual observations to assess bird use of MBHI easements. Migrating and wintering birds routinely make synchronous flights near dusk (e.g., departure during migration, feeding flights during winter). Weather radars readily detect birds at the onset of these flights and have proven to be useful remote sensing tools for assessing bird-habitat relations during migration and determining the response of wintering waterfowl to wetland restoration (e.g., Wetlands Reserve Program lands). However, ground-truthing is required to identify radar echoes to species or species group. We designed a field study to ground-truth a larger-scale, weather radar assessment of bird use of MBHI sites in southwest Louisiana. We examined seasonal bird use of MBHI fields in fall, winter, and spring of 2011-2012. To assess diurnal use, we conducted total area surveys of MBHI sites in the afternoon, collecting data on bird species composition, abundance, behavior, and habitat use. In the evenings, we quantified bird activity at the MBHI easements and described flight behavior (i.e., birds landing in, departing from, circling, or flying over the MBHI tract). Our field sampling captured the onset of evening flights and spanned the period of collection of the weather radar data analyzed. Pre- and post-dusk surveys were conducted using a portable radar system and a thermal infrared camera. Landbirds, shorebirds, and wading birds were commonly found on MBHI fields during diurnal

  15. Archaeogeophysical data acquisition and analysis at Tel Burna, Israel: a valuable opportunity for ongoing ground-truth investigation and collaboration (Invited)

    Science.gov (United States)

    Pincus, J. A.

    2013-12-01

    , acquired in a zigzag east-west direction, proceeding south. The area extended from the present excavation border to the north and east. The following paper will discuss the method of data acquisition, post-processing, and analysis of the results. The final conclusions of the survey show a continuation of several key walls to the east, a valuable sub-surface tracing of the limestone bedrock, and the limit to which the archaeological material is present spatially in Area B to the north. These results play a major role in determining where to focus excavation efforts in the 2014 excavation season. This unique collaboration with the archaeological team and ongoing opportunity for archaeological ground-truthing will be documented and published as the site develops. As there is a limited presence of such data within the corpus of published archaeogeophysical research, we look forward to further investigations at the site in the coming years.

  16. Verifiably Truthful Mechanisms

    DEFF Research Database (Denmark)

    Branzei, Simina; Procaccia, Ariel D.

    2015-01-01

It is typically expected that if a mechanism is truthful, then the agents would, indeed, truthfully report their private information. But why would an agent believe that the mechanism is truthful? We wish to design truthful mechanisms whose truthfulness can be verified efficiently (in the computational sense). Our approach involves three steps: (i) specifying the structure of mechanisms, (ii) constructing a verification algorithm, and (iii) measuring the quality of verifiably truthful mechanisms. We demonstrate this approach using a case study: approximate mechanism design without money.

  17. Management of natural resources through automatic cartographic inventory

    Science.gov (United States)

    Rey, P. A.; Gourinard, Y.; Cambou, F. (Principal Investigator)

    1974-01-01

The author has identified the following significant results. Significant correspondence codes relating ERTS imagery to ground truth from vegetation and geology maps were established. The use of color equidensity and color composite methods for selecting zones of equal densitometric value on ERTS imagery was perfected. The primary interest of the temporal color composite is stressed. A chain of transfer operations from ERTS imagery to the automatic mapping of natural resources was developed.

  18. Truth and Dare

    DEFF Research Database (Denmark)

    Schmidt, Cecilie Ullerup; Liebmann, Andreas

An artistic documentation of the workshop Truth and Dare at The Danish National School of Performing Arts - Continuing Education.

  20. To Tell the Truth.

    Science.gov (United States)

    Arent, Ruth P.

    1991-01-01

    Discusses what teachers should do when older elementary students lie. Guidelines for handling the situation are presented along with suggestions for making children feel good about telling the truth. Three activities for encouraging truthfulness in the classroom are suggested. (SM)

  1. Truth telling in medicine: the Confucian view.

    Science.gov (United States)

    Fan, Ruiping; Li, Benfu

    2004-04-01

    Truth-telling to competent patients is widely affirmed as a cardinal moral and biomedical obligation in contemporary Western medical practice. In contrast, Chinese medical ethics remains committed to hiding the truth as well as to lying when necessary to achieve the family's view of the best interests of the patient. This essay intends to provide an account of the framing commitments that would both justify physician deception and have it function in a way authentically grounded in the familist moral concerns of Confucianism. It reflects on the moral conditions and possibilities for sustaining a Confucian understanding of truth-telling and consent in mainland China.

  2. Linguistic Truth Values Lattice Implication Algebras

    Institute of Scientific and Technical Information of China (English)

    PAN Xiao-dong; XU Yang

    2006-01-01

In order to study uncertainty reasoning and automatic reasoning with linguistic terms, in this paper the set of basic linguistic truth values and the set of modifiers are defined according to common sense, and partial orderings are defined on them. Based on these, a lattice implication algebra model L18 of linguistic terms is built; furthermore, some of its basic properties are discussed.

  3. Rheticus Displacement: an Automatic Geo-Information Service Platform for Ground Instabilities Detection and Monitoring

    Science.gov (United States)

    Chiaradia, M. T.; Samarelli, S.; Agrimano, L.; Lorusso, A. P.; Nutricato, R.; Nitti, D. O.; Morea, A.; Tijani, K.

    2016-12-01

Rheticus® is an innovative cloud-based data and services hub able to deliver Earth Observation added-value products through automatic complex processes with minimal interaction from human operators. This target is achieved by means of programmable components working as different software layers in a modern enterprise system that relies on a SOA (service-oriented architecture) model. Due to its architecture, in which every functionality is well defined and encapsulated in a standalone component, Rheticus is potentially highly scalable and distributable, allowing different configurations depending on user needs. Rheticus offers a portfolio of services, ranging from the detection and monitoring of geohazards and infrastructural instabilities, to marine water quality monitoring, wildfire detection, and land cover monitoring. In this work, we outline the overall cloud-based platform and focus on the "Rheticus Displacement" service, aimed at providing accurate information to monitor movements occurring across landslide features or structural instabilities that could affect buildings or infrastructures. Using Sentinel-1 (S1) open data images and Multi-Temporal SAR Interferometry techniques (i.e., SPINUA), the service is complementary to traditional survey methods, providing a long-term solution to slope instability monitoring. Rheticus automatically browses and accesses (on a weekly basis) the products of the rolling archive of the ESA S1 Scientific Data Hub; S1 data are then handled by a mature processing chain, which is responsible for producing displacement maps immediately usable to measure, with sub-centimetric precision, the movements of coherent points. Examples are provided concerning the automatic displacement map generation process, as well as the integration of point and distributed scatterers, the integration of multi-sensor displacement maps (e.g., Sentinel-1 IW and COSMO-SkyMed HIMAGE), the combination of displacement rate maps acquired along both ascending

  4. Feature Extraction and Automatic Material Classification of Underground Objects from Ground Penetrating Radar Data

    OpenAIRE

    Qingqing Lu; Jiexin Pu; Zhonghua Liu

    2014-01-01

Ground penetrating radar (GPR) is a powerful tool for detecting objects buried underground. However, the interpretation of the acquired signals remains a challenging task, since an experienced user is required to manage the entire operation. Particularly difficult is the classification of the material type of underground objects in noisy environments. This paper proposes a new feature extraction method. First, the discrete wavelet transform (DWT) transforms A-scan data and the approximation coefficient...
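The DWT feature-extraction step can be sketched with a single-level Haar transform on a toy A-scan trace. This is a hedged illustration of the general technique, not the paper's implementation; the trace values and function name are assumptions:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficient lists; len(signal) must be even."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

# A toy A-scan trace: the approximation coefficients give a compact,
# smoothed feature vector that a downstream classifier can use.
a_scan = [1.0, 1.0, 2.0, 4.0, 3.0, 1.0, 0.0, 0.0]
approx, detail = haar_dwt(a_scan)
print(approx)  # ≈ [1.414, 4.243, 2.828, 0.0]
```

The transform halves the signal length at each level, so repeating it on the approximation coefficients yields progressively coarser (and more noise-robust) features.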

5. Objective Performance Evaluation of Video Segmentation Algorithms with Ground-Truth

    Institute of Scientific and Technical Information of China (English)

    杨高波; 张兆扬

    2004-01-01

While the development of particular video segmentation algorithms has attracted considerable research interest, relatively little effort has been devoted to providing a methodology for evaluating their performance. In this paper, we propose a methodology to objectively evaluate video segmentation algorithms with ground-truth, based on computing the deviation of the segmentation results from the reference segmentation. Four different metrics, based respectively on pixel classification, edges, relative foreground area, and relative position, are combined to address spatial accuracy. Temporal coherency is evaluated by utilizing the difference in spatial accuracy between successive frames. The experimental results show the feasibility of our approach. Moreover, it is computationally more efficient than previous methods. It can be applied to provide an offline ranking among different segmentation algorithms and to optimally set the parameters for a given algorithm.
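The two evaluation ideas above (spatial accuracy against a reference, temporal coherency as frame-to-frame change in that accuracy) can be sketched minimally. Pixel-match accuracy stands in for the paper's four combined metrics; the frame data and function names are illustrative assumptions:

```python
def spatial_accuracy(result, reference):
    """Fraction of pixels whose label matches the reference segmentation."""
    matches = sum(r == g for r, g in zip(result, reference))
    return matches / len(reference)

def temporal_coherency(accuracies):
    """Mean absolute change in spatial accuracy between successive frames;
    lower values indicate more temporally stable segmentation."""
    return sum(abs(a - b) for a, b in zip(accuracies, accuracies[1:])) / (len(accuracies) - 1)

# Three toy frames (flattened masks) from a segmentation algorithm vs. ground truth.
frames_result    = [[1, 1, 0, 0], [1, 1, 1, 0], [1, 0, 0, 0]]
frames_reference = [[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0]]
accs = [spatial_accuracy(r, g) for r, g in zip(frames_result, frames_reference)]
print(accs)                      # [1.0, 0.75, 0.75]
print(temporal_coherency(accs))  # (0.25 + 0.0) / 2 = 0.125
```

Averaging the per-frame accuracies would rank algorithms spatially, while the coherency score penalizes algorithms whose quality fluctuates between frames even if the average is the same.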

  6. The end of truth?

    Directory of Open Access Journals (Sweden)

    C. W. du Toit

    1997-01-01

As we approach the end of the century, many ideas, systems, and certainties previously taken for granted seem to be questioned, altered and rejected. One of these is the notion of truth, which pervades the very fibre of Western thinking. Rejecting the relevant critique as simply a postmodern fad, this article proceeds to give attention to the questions regarding the end of religious, scientific, and metaphysical truth. Truth and power are dealt with, as well as the narrative nature of truth. The article concludes with reference to the functioning of truth in the South African context as manifested in the Truth and Reconciliation Commission.

  7. The Truth of Wikipedia

    Directory of Open Access Journals (Sweden)

    Nathaniel Tkacz

    2012-05-01

What does it mean to assert that Wikipedia has a relation to truth? That there is, despite regular claims to the contrary, an entire apparatus of truth in Wikipedia? In this article, I show that Wikipedia has in fact two distinct relations to truth: one which is well known and forms the basis of existing popular and scholarly commentaries, and another which refers to equally well-known aspects of Wikipedia but has not been understood in terms of truth. I demonstrate Wikipedia's dual relation to truth through a close analysis of the Neutral Point of View core content policy (one of the project's 'Five Pillars'). I conclude by indicating what is at stake in the assertion that Wikipedia has a regime of truth and what bearing this has on existing commentaries.

  8. The truth about Romanticism

    OpenAIRE

    Milnes, Tim

    2015-01-01

    How have our conceptions of truth been shaped by romantic literature? This question lies at the heart of this examination of the concept of truth both in romantic writing and in modern criticism. The romantic idea of truth has long been depicted as aesthetic, imaginative, and ideal. Tim Milnes challenges this picture, demonstrating a pragmatic strain in the writing of Keats, Shelley and Coleridge in particular, that bears a close resemblance to the theories of modern pragmatist thinkers such ...

  9. Significance Testing Without Truth

    Science.gov (United States)

    2012-07-27

ICES Report 12-34, "Significance testing without truth," by William Perkins, Mark Tygert, and Rachel Ward; The Institute for Computational Engineering and Sciences, The University of Texas at Austin, August 2012.

  10. Automatic classification of pathological gait patterns using ground reaction forces and machine learning algorithms.

    Science.gov (United States)

    Alaqtash, Murad; Sarkodie-Gyan, Thompson; Yu, Huiying; Fuentes, Olac; Brower, Richard; Abdelgawad, Amr

    2011-01-01

An automated gait classification method is developed in this study, which can be applied to analyze and classify pathological gait patterns using 3D ground reaction force (GRF) data. The study involved the discrimination of gait patterns of healthy, cerebral palsy (CP), and multiple sclerosis subjects. The acquired 3D GRF data were categorized into three groups. Two different algorithms were used to extract the gait features: the GRF parameters and the discrete wavelet transform (DWT), respectively. The nearest neighbor classifier (NNC) and artificial neural networks (ANN) were also investigated for the classification of gait features in this study. Furthermore, different feature sets were formed using combinations of the 3D GRF components (mediolateral, anteroposterior, and vertical), and their various impacts on the acquired results were evaluated. The best leave-one-out (LOO) classification accuracy achieved was 85%. The results showed some improvement through the application of a feature selection algorithm based on the M-shaped value of the vertical force and the statistical test ANOVA of the mediolateral and anteroposterior forces. The optimal feature set of six features enhanced the accuracy to 95%. This work can provide an automated gait classification tool that may be useful to the clinician in the diagnosis and identification of pathological gait impairments.
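The leave-one-out evaluation of a nearest-neighbor classifier described above can be sketched in a few lines. The two-dimensional toy feature vectors and labels are illustrative assumptions, not the study's GRF data:

```python
import math

def nearest_neighbor_loo(features, labels):
    """Leave-one-out accuracy of a 1-nearest-neighbor classifier:
    each sample is classified by its closest neighbor among the rest."""
    correct = 0
    for i, x in enumerate(features):
        best, best_d = None, float("inf")
        for j, y in enumerate(features):
            if i == j:
                continue  # leave the test sample out of the training set
            d = math.dist(x, y)  # Euclidean distance between feature vectors
            if d < best_d:
                best, best_d = labels[j], d
        correct += best == labels[i]
    return correct / len(features)

# Toy GRF-style feature vectors; labels: 0 = healthy, 1 = pathological.
feats  = [(1.0, 0.9), (1.1, 1.0), (0.4, 0.2), (0.5, 0.3)]
labels = [0, 0, 1, 1]
print(nearest_neighbor_loo(feats, labels))  # 1.0
```

Because every sample serves once as the test case, LOO uses the data maximally, which matters for the small subject counts typical of clinical gait studies.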

  11. "Telling" Truths: Wounded Truths and the Activity of Truth Telling

    Science.gov (United States)

    Harwood, Valerie

    2004-01-01

    It appears that being young and queer seems to be all about woundedness: it means experiencing suffering, including the risk of suicide, increased drug use, homelessness and violence. Yet how are these wounded truths told, and further, why is it that people in education seem to tell them "unproblematically"? This paper considers these questions by…

  12. Semi-automatic template matching based extraction of hyperbolic signatures in ground-penetrating radar images

    Science.gov (United States)

    Sagnard, Florence; Tarel, Jean-Philippe

    2015-04-01

In civil engineering applications, ground-penetrating radar (GPR) is one of the main non-destructive techniques, based on the refraction and reflection of electromagnetic waves, used to probe the underground and, in particular, to detect damage (cracks, delaminations, texture changes…) and buried objects (utilities, rebars…). A UWB ground-coupled radar operating in the frequency band [0.46;4] GHz and made of bowtie slot antennas has been used because, compared to an air-launched radar, it increases the energy transfer of electromagnetic radiation into the sub-surface and the penetration depth. This paper proposes an original adaptation of the generic template matching algorithm to GPR images to recognize, localize, and characterize with parameters a specific pattern associated with a hyperbola signature in the two main polarizations. The processing of a radargram (B-scan) is based on four main steps. The first step consists of pre-processing and scaling. The second step uses template matching to isolate and localize individual hyperbola signatures in an environment containing unwanted reflections, noise, and overlapping signatures. The algorithm requires generating and collecting a set of reference hyperbola templates, made of a small reflection pattern in the vicinity of the apex, in order to further analyze multiple time signals of embedded targets in an image. The standard Euclidean distance between the shifted template and a local zone in the radargram yields a map of distances. A user-defined threshold selects a reduced number of zones with a high similarity measure. In the third step, each zone is analyzed to detect minimum or maximum discrete amplitudes belonging to the first arrival times of a hyperbola signature. In the fourth step, the extracted discrete data (i,j) are fitted by a parametric hyperbola model based on the straight-ray-path hypothesis, using a constrained least squares criterion with parameter ranges, that are the position, the
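The template-matching step (a Euclidean distance map between a small apex template and every window of the radargram, followed by a user-defined threshold) can be sketched on a toy 2-D array. The image, template, and threshold below are illustrative assumptions:

```python
import math

def distance_map(image, template):
    """Euclidean distance between the template and every same-sized window
    of the image. Low values mark candidate hyperbola-apex locations."""
    th, tw = len(template), len(template[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - th + 1):
        row = []
        for j in range(w - tw + 1):
            d2 = sum((image[i + a][j + b] - template[a][b]) ** 2
                     for a in range(th) for b in range(tw))
            row.append(math.sqrt(d2))
        out.append(row)
    return out

# Toy radargram containing one apex-like pattern that matches the template exactly.
image = [
    [0, 0, 0, 0],
    [0, 9, 5, 0],
    [0, 5, 1, 0],
]
template = [[9, 5],
            [5, 1]]
dmap = distance_map(image, template)
threshold = 1e-9  # user-defined similarity threshold
candidates = [(i, j) for i, r in enumerate(dmap) for j, v in enumerate(r) if v < threshold]
print(candidates)  # [(1, 1)]
```

In practice the threshold is set well above zero so that noisy, imperfect matches still survive to the later amplitude-picking and hyperbola-fitting steps.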

  13. Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar

    Science.gov (United States)

    Maas, Christian; Schmalzl, Jörg

    2013-08-01

Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes, and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape that depends on the depth and material of the object and on the surrounding material. To obtain these parameters, the shape of the hyperbola has to be fitted. In recent years, several methods have been developed to automate this task during post-processing. In this paper we present another approach for the automated localization of reflection hyperbolas in GPR data, which solves a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open-source library "OpenCV". This algorithm was initially developed for face recognition but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and velocity of the hyperbolas, we apply a simple Hough transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input for the computationally expensive Hough transform, the detection system can also be implemented on normal field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems are needed as input for the learning algorithm.
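The Hough-transform step for hyperbolas can be sketched with a voting scheme over the model t(x)² = t0² + (2·(x − x0)/v)², where (x0, t0) is the apex and v the propagation velocity. The candidate grids, binning, and synthetic data below are illustrative assumptions, not the paper's implementation:

```python
import math
from collections import Counter

def hough_hyperbola(points, x0_candidates, v_candidates):
    """Simplified Hough transform for GPR reflection hyperbolas
    t(x)^2 = t0^2 + (2*(x - x0)/v)^2.
    Each detected point votes for consistent (x0, v, t0) triples; the
    best-supported bin gives the apex position and velocity."""
    votes = Counter()
    for x, t in points:
        for x0 in x0_candidates:
            for v in v_candidates:
                r = t ** 2 - (2.0 * (x - x0) / v) ** 2
                if r >= 0:
                    t0 = round(math.sqrt(r), 2)  # discretize t0 into bins
                    votes[(x0, v, t0)] += 1
    return votes.most_common(1)[0]

# Synthetic hyperbola with apex x0 = 5, t0 = 10, and velocity v = 2.
true_x0, true_t0, true_v = 5.0, 10.0, 2.0
pts = [(x, math.sqrt(true_t0**2 + (2 * (x - true_x0) / true_v)**2))
       for x in range(11)]
best, n_votes = hough_hyperbola(pts, x0_candidates=[4.0, 5.0, 6.0],
                                v_candidates=[1.0, 2.0, 3.0])
print(best, n_votes)  # (5.0, 2.0, 10.0) 11
```

Restricting the parameter grids to the small regions flagged by the detector is exactly what keeps this brute-force accumulator affordable on field hardware.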

  14. Frege on Truths, Truth and the True

    Directory of Open Access Journals (Sweden)

    Wolfgang Künne

    2008-08-01

The founder of modern logic and grandfather of analytic philosophy was 70 years old when he published his paper 'Der Gedanke' (The Thought) in 1918. This essay contains some of Gottlob Frege's deepest and most provocative reflections on the concept of truth, and it will play a prominent role in my lectures. The plan for my lectures is as follows. What is it that is (primarily) true or false? 'Thoughts', is Frege's answer. In §1, I shall explain and defend this answer. In §2, I shall briefly consider his enthymematic argument for the conclusion that the word 'true' resists any attempt at defining it. In §3, I shall discuss his thesis that the thought that things are thus and so is identical with the thought that it is true that things are thus and so. The reasons we are offered for this thesis will be found wanting. In §4, I shall comment extensively on Frege's claim that, in a non-formal language like the one I am currently trying to speak, we can say whatever we want to say without ever using the word 'true' or any of its synonyms. I will reject the propositional-redundancy claim, endorse the assertive-redundancy claim and deny the connection Frege ascribes to them. In his classic 1892 paper 'Über Sinn und Bedeutung' (On Sense and Signification), Frege argues that truth-values are objects. In §5, I shall scrutinize his argument. In §6, I will show that in Frege's ideography (Begriffsschrift), truth, far from being redundant, is omnipresent. The final §7 is again on truth-bearers, this time as a topic in the theory of intentionality and in metaphysics. In the course of discussing Frege's views on the objecthood and objectivity of thoughts and the timelessness of truth(s), I will plead for a somewhat mitigated Platonism.

  15. Withholding truth from patients.

    LENUS (Irish Health Repository)

    O'Sullivan, Elizabeth

    2012-01-31

    The issue of whether patients should always be told the truth regarding their diagnosis and prognosis has afforded much debate in healthcare literature. This article examines telling the truth from an ethical perspective. It puts forward arguments for and against being honest with patients, using a clinical example to illustrate each point.

  16. Truthful Monadic Abstractions

    DEFF Research Database (Denmark)

    Brock-Nannestad, Taus; Schürmann, Carsten

    2012-01-01

    indefinitely, finding neither a proof nor a disproof of a given subgoal. In this paper we characterize a family of truth-preserving abstractions from intuitionistic first-order logic to the monadic fragment of classical first-order logic. Because they are truthful, these abstractions can be used to disprove...

  17. Rashes: The Itchy Truth

    Science.gov (United States)

  18. Heart Truth for Latinas

    Science.gov (United States)

    ... That’s a man’s disease.” But here’s The Heart Truth : Heart disease is the #1 killer of Latinas ... TAKING ACTION Now that you know The Heart Truth , what should you do? First, find out your ...

  19. Truth and Methods.

    Science.gov (United States)

    Dasenbrock, Reed Way

    1995-01-01

    Examines literary theory's displacing of "method" in the New Historicist criticism. Argues that Stephen Greenblatt and Lee Paterson imply that no objective historical truth is possible and as a result do not give methodology its due weight in their criticism. Questions the theory of "truth" advanced in this vein of literary…

  20. Teaching and Truthfulness

    Science.gov (United States)

    Cooper, David E.

    2008-01-01

    Some tendencies in modern education--the stress on "performativity", for instance, and "celebration of difference"--threaten the value traditionally placed on truthful teaching. In this paper, truthfulness is mainly understood, following Bernard Williams, as a disposition to "Accuracy" and "Sincerity"--hence as a virtue. It is to be distinguished…

  1. Automatic segmentation of ground-glass opacities in lung CT images by using Markov random field-based algorithms.

    Science.gov (United States)

    Zhu, Yanjie; Tan, Yongqing; Hua, Yanqing; Zhang, Guozhen; Zhang, Jianguo

    2012-06-01

Chest radiologists rely on the segmentation and quantificational analysis of ground-glass opacities (GGO) to perform imaging diagnoses that evaluate the disease severity or recovery stages of diffuse parenchymal lung diseases. However, it is computationally difficult to segment and analyze patterns of GGO compared with other lung diseases, since GGO usually do not have clear boundaries. In this paper, we present a new approach which automatically segments GGO in lung computed tomography (CT) images using algorithms derived from Markov random field theory. Further, we systematically evaluate the performance of the algorithms in segmenting GGO in lung CT images under different situations. CT image studies from 41 patients with diffuse lung diseases were enrolled in this research. The local distributions were modeled with both simple and adaptive (AMAP) models of maximum a posteriori (MAP). For best segmentation, we used the simulated annealing algorithm with a Gibbs sampler to solve the combinatorial optimization problem of MAP estimators, and we applied a knowledge-guided strategy to reduce false positive regions. We achieved AMAP-based GGO segmentation results of 86.94%, 94.33%, and 94.06% in average sensitivity, specificity, and accuracy, respectively, and we evaluated the performance using radiologists' subjective evaluation and quantificational analysis and diagnosis. We also compared the results of AMAP-based GGO segmentation with those of support vector machine-based methods, and we discuss the reliability and other issues of AMAP-based GGO segmentation. Our research results demonstrate the acceptability and usefulness of AMAP-based GGO segmentation for assisting radiologists in detecting GGO in high-resolution CT diagnostic procedures.
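The MAP-with-simulated-annealing machinery described in this record can be illustrated with a minimal sketch: a binary Ising-style MRF prior with a Gaussian intensity likelihood, optimized by a Gibbs sampler under a cooling schedule. All parameter values and the simple two-class Gaussian model are illustrative assumptions, not the authors' AMAP implementation or knowledge-guided strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sa_segment(img, mu, sigma, beta=1.5, n_sweeps=30, t0=4.0, cool=0.9):
    """Binary MAP segmentation of a 2-D image under an Ising-style MRF prior,
    optimized by simulated annealing with a Gibbs sampler.

    img   : 2-D float array of intensities
    mu    : (mu_bg, mu_fg) class means
    sigma : (s_bg, s_fg) class standard deviations
    beta  : smoothness weight of the pairwise prior
    """
    h, w = img.shape
    # Initialize labels from the likelihood alone (nearest class mean).
    labels = (np.abs(img - mu[1]) < np.abs(img - mu[0])).astype(int)
    # Negative log-likelihood of each class at each pixel (Gaussian model).
    nll = np.stack([
        np.log(sigma[k]) + (img - mu[k]) ** 2 / (2 * sigma[k] ** 2)
        for k in (0, 1)
    ])
    t = t0
    for _ in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                # Labels of the 4-connected neighbours inside the image.
                nb = [labels[x, y]
                      for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= x < h and 0 <= y < w]
                # Posterior energy = data term + disagreement penalty.
                energy = np.array([
                    nll[k, i, j] + beta * sum(n != k for n in nb)
                    for k in (0, 1)
                ])
                # Gibbs step: sample from the tempered conditional distribution.
                p = np.exp(-(energy - energy.min()) / t)
                p /= p.sum()
                labels[i, j] = rng.choice(2, p=p)
        t *= cool  # annealing schedule
    return labels
```

At high temperature the sampler explores freely; as the temperature decays, the conditional distributions sharpen and the labeling settles toward a near-MAP configuration.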

2. Evaluation of the Performance Characteristics of CGLSS II and U.S. NLDN Using Ground-Truth Data from Launch Complex 39B, Kennedy Space Center, Florida

    Science.gov (United States)

    Mata, C. T.; Mata, A. G.; Rakov, V. A.; Nag, A.; Saul, J.

    2012-01-01

A new comprehensive lightning instrumentation system has been designed for Launch Complex 39B (LC39B) at the Kennedy Space Center, Florida. This new instrumentation system includes seven synchronized high-speed video cameras; current sensors installed on the nine downconductors of the new lightning protection system (LPS) for LC39B; four dH/dt, 3-axis measurement stations; and five dE/dt stations composed of two antennas each. The LPS received 8 direct lightning strikes (a total of 19 strokes) from March 31 through December 31, 2011. The measured peak currents and locations are compared to those reported by the Cloud-to-Ground Lightning Surveillance System (CGLSS II) and the National Lightning Detection Network (NLDN). Results of the comparison are presented and analyzed in this paper.

  3. The Concept of Truth Regime

    Directory of Open Access Journals (Sweden)

    Lorna Weir

    2008-07-01

    Full Text Available “Truth regime” is a much used but little theorized concept, with the Foucauldian literature presupposing that truth in modernity is uniformly scientific/quasi-scientific and enhances power. I argue that the forms of truth characteristic of our present are wider than Foucault recognized, their relations to power more various, and their historicity more complex. The truth regime of advanced modernity is characterized by multiple, irreducible truth formulae that co-exist and sometimes vie for dominance. A truth formula stabilizes a network of elements: a relation between representation and presentation (words and things, truth and non-truth, and the place of the subject in discourse. Our contemporary truth regime comprises radically heterogeneous truthful knowledges – science, governance, religion/politics, and common culture – that have distinct histories and relations to power.

  4. ENUMERATION OF REGULAR TRUTH FUNCTIONS

    Science.gov (United States)

IN A PREVIOUS WORK (Lockheed Missiles and Space Company, 6-90-61-26, Jan 1961), the classification problem of the linearly separable truth functions was reduced to the enumeration of a special kind of linearly separable truth functions called canonical truth functions. A canonical truth function F of n variables has an important property: if x ∈ F and y ≤ x in the canonical partial order of Qn, then y ∈ F. Any truth function F of n

  5. A novel approach for automatic snow depth estimation using UAV-taken images without ground control points

    Science.gov (United States)

    Mizinski, Bartlomiej; Niedzielski, Tomasz

    2017-04-01

Recent developments in snow depth reconstruction based on remote sensing techniques include the use of photographs of snow-covered terrain taken by unmanned aerial vehicles (UAVs). There are several approaches that utilize visible-light photos (RGB) or near-infrared images (NIR). The majority of the methods in question are based on reconstructing the digital surface model (DSM) of the snow-covered area with the use of the Structure-from-Motion (SfM) algorithm and stereo-vision software. Having reconstructed the above-mentioned DSM, it is straightforward to calculate the snow depth map, which may be produced as the difference between the DSM of snow-covered terrain and the snow-free DSM, known as the reference surface. In order to use the aforementioned procedure, the high spatial accuracy of the two DSMs must be ensured. Traditionally, this is done using ground control points (GCPs), either artificial or natural terrain features that are visible on aerial images, the coordinates of which are measured in the field with a Global Navigation Satellite System (GNSS) receiver by qualified personnel. The field measurements may be time-consuming (GCPs must be well distributed in the study area, so the field experts have to travel over long distances) and dangerous (the field experts may be exposed to avalanche risk or cold). Thus, there is a need for methods that enable automatic snow depth map production without the use of GCPs. One such attempt is presented in this paper, which describes a novel method based on real-time processing of snow-covered and snow-free dense point clouds produced by SfM. A two-stage georeferencing procedure is proposed. The initial (low accuracy) stage assigns true geographic, and subsequently projected, coordinates to the two dense point clouds, while the initially registered dense point clouds are matched using the iterative closest point (ICP) algorithm in the final (high accuracy) stage.
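Once the two dense point clouds are co-registered and rasterized to a common grid, the snow depth map itself reduces to a cell-wise raster difference. A minimal sketch of that final step follows; the array names, the clamping of negative values, and the nodata handling are illustrative assumptions, and the registration pipeline (the heart of the method) is not shown.

```python
import numpy as np

def snow_depth_map(dsm_snow, dsm_bare, nodata=np.nan, min_depth=0.0):
    """Snow depth as the cell-wise difference between a snow-covered DSM
    and a snow-free reference DSM sampled on the same grid.

    Both rasters must already share one coordinate system; in the record
    above this alignment comes from two-stage georeferencing (coarse
    geographic registration, then ICP refinement of the point clouds).
    """
    depth = dsm_snow - dsm_bare
    # Small negative values are registration noise; clamp them to zero.
    depth = np.where(depth < min_depth, 0.0, depth)
    # Propagate gaps present in either surface model.
    mask = np.isnan(dsm_snow) | np.isnan(dsm_bare)
    return np.where(mask, nodata, depth)
```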

  6. Moball-Buoy Network: A Near-Real-Time Ground-Truth Distributed Monitoring System to Map Ice, Weather, Chemical Species, and Radiations, in the Arctic

    Science.gov (United States)

    Davoodi, F.; Shahabi, C.; Burdick, J.; Rais-Zadeh, M.; Menemenlis, D.

    2014-12-01

The work was funded by the Cryospheric Sciences Program at NASA HQ. Recent observations of the Arctic have shown that sea ice has diminished drastically, consequently impacting the environment in the Arctic and beyond. Certain factors such as atmospheric anomalies, wind forces, temperature increase, and change in the distribution of cold and warm waters contribute to the sea ice reduction. However, current measurement capabilities lack the accuracy, temporal sampling, and spatial coverage required to effectively quantify each contributing factor and to identify other missing factors. Addressing the need for new measurement capabilities for the new Arctic regime, we propose a game-changing in-situ Arctic-wide distributed mobile monitoring system called the Moball-buoy Network. The Moball-buoy Network consists of a number of wind-propelled, self-powered inflatable spheres referred to as Moball-buoys. They use a novel mechanical control and energy-harvesting system to exploit the abundance of wind in the Arctic for controlled mobility and energy harvesting. They are equipped with an array of low-power, low-mass sensors and micro devices able to measure a wide range of environmental factors such as ice conditions, chemical species, wind vector patterns, cloud coverage, air temperature and pressure, electromagnetic fields, surface and subsurface water conditions, short- and long-wave radiation, bathymetry, and anthropogenic factors such as pollution. The stop-and-go motion capability enabled by their novel mechanics, and the heads-up cooperative control strategy at the core of the proposed distributed system, allow the sensor network to be reconfigured dynamically according to the priority of the parameters to be monitored. The large number of Moball-buoys with their ground-based, sea-based, satellite, and peer-to-peer communication capabilities would constitute a wireless mesh network that provides an interface for a global

  7. Impact melt-bearing breccias of the Mistastin Lake impact structure: A unique planetary analogue for ground-truthing proximal ejecta emplacement

    Science.gov (United States)

    Mader, M. M.; Osinski, G. R.

    2013-12-01

in matrix content, melt-fragment concentration, and contact relationships with adjacent impactites. Initial findings suggest differing origins for impact melt-bearing breccias from a single impact event. Three examples are highlighted: 1) Impact melt-bearing breccias on an inner terrace formed in boundary zones where hot impact melt flowed over cooler, ballistically emplaced polymict impact breccias. 2) Locally, a dyke of impact melt-bearing breccia suggests that this unit originated as a hot lithic flow that moved laterally along the ground and then intruded as a fracture fill into target rocks. 3) A m-scale lens of melt-bearing breccia within the middle of a thick (80 m) impact melt rock unit situated on an inner terrace suggests that this lens may have originated from the crater floor and been incorporated into the melt pond during emplacement (i.e. movement of the melt from the crater floor to the terrace shelf). In summary, the Mistastin Lake impact structure displays a multiple layered ejecta sequence that is consistent with, and requires, a multi-stage ejecta emplacement model as proposed by [1]. References: [1] Osinski et al. (2011) EPSL 310:167-181. [2] Melosh (1989) Oxford Univ., 245 pp. [3] French B. M. (1998) LPI Contribution 954, 120 pp. [4] Mader et al. (2011) 42nd LPSC, No. 1608. [5] Mader et al. (2013) 43rd LPSC, No. 2517.

  8. Validating the truth of propositions: behavioral and ERP indicators of truth evaluation processes.

    Science.gov (United States)

    Wiswede, Daniel; Koranyi, Nicolas; Müller, Florian; Langner, Oliver; Rothermund, Klaus

    2013-08-01

We investigated processes of truth validation during reading. Participants responded to 'true' and 'false' probes after reading simple true or false sentences. Compatible sentence/probe combinations (true/'true', false/'false') facilitated responding compared with incompatible combinations (true/'false', false/'true'), indicating truth validation. Evidence for truth validation was obtained after inducing an evaluative mindset but not after inducing a non-evaluative mindset, using additional intermixed tasks requiring true/false decisions or sentence comparisons, respectively. Event-related potentials revealed an increased late negativity (500-1000 ms after onset of the last word of sentences) for false compared with true sentences. Paralleling the behavioral results, this electroencephalographic marker was observed only in the evaluative mindset condition. Further, mere semantic mismatches between the subject and object of sentences led to an elevated N400 in both mindset conditions. Taken together, our findings suggest that truth validation is a conditionally automatic process that depends on the current task demands and resulting mindset, whereas the processing of word meaning and semantic relations between words proceeds in an unconditionally automatic fashion.

  9. Truth, body and religion

    Directory of Open Access Journals (Sweden)

    Jarl-Thure Eriksson

    2011-01-01

    Full Text Available This paper is based on the words of welcome to the symposium on Religion and the Body on 16 June 2010. In a religious context ‘truth’ is like a mantra, a certain imperative to believe in sacred things. The concept of truth and falseness arises, when we as humans compare reality, as we experience it through our senses, with the representation we have in our memory, a comparison of new information with stored information. If we look for the truth, we have to search in the human mind. There we will also find religion.

  10. Goedel, truth and proof

    Energy Technology Data Exchange (ETDEWEB)

    Peregrin, Jaroslav [Academy of Sciences and Charles University, Prague (Czech Republic)

    2007-11-15

    The usual way of interpreting Goedel's (1931) incompleteness result is as showing that there is a gap between truth and provability, i.e. that we can never prove everything that is true. Moreover, this result is supposed to show that there are unprovable truths which we can know to be true. This, so the story goes, shows that we are more than machines that are restricted to acting as proof systems. Hence our minds are 'not mechanical'.

  11. Music, Emotions, and Truth

    Science.gov (United States)

    Packalen, Elina

    2008-01-01

    In this article Elina Packalen considers the notion of truth in connection with music. Her starting-point is the question of how music can be expressive of emotions; therefore she first summarizes some recent philosophical ideas of this issue. These ideas naturally raise the question of whether describing music in emotive terms has an epistemic…

  12. Truth, science, and psychology

    NARCIS (Netherlands)

    Haig, B.D.; Borsboom, D.

    2012-01-01

    According to the correspondence theory of truth, a proposition is true if and only if the world is as the proposition says it is. This theory has been both promoted and rejected by philosophers and scientists down through time. In this paper, we adopt the correspondence theory as a plausible theory

  14. Ground Truth in Building Human Security

    Science.gov (United States)

    2012-11-01

2011, equipped with a laptop computer and a digital camera, I accompanied a team implementing the USAID-funded Title Registration and Microfinance ...security or collateral for a microfinance loan, typically $100 to $500 USD for six months, to buy a sewing machine, basket-weaving material, or other...equipment and supplies for an in-home business. The Title Registration and Microfinance Project addresses the dual challenges of conducting land

  15. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study

    Energy Technology Data Exchange (ETDEWEB)

    Deeley, M A; Cmelak, A J; Malcolm, A W; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Ding, G X [Department of Radiation Oncology, Vanderbilt University, Nashville, TN (United States); Chen, A; Datteri, R; Noble, J H; Dawant, B M [Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN (United States); Donnelly, E F [Department of Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN (United States); Yei, F; Koyama, T, E-mail: matthew.deeley@uvm.edu [Department of Biostatistics, Vanderbilt University, Nashville, TN (United States)

    2011-07-21

The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8-0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4-0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (-4.3, +5.4) mm for the automatic system to (-3.9, +7.5) mm for the experts considered as a group. In a ranking of true positive rates at a 2 mm threshold from the simulated ground truth over all structures, the automatic system ranked second of the nine raters. This work underscores the need for large-scale studies utilizing statistically robust numbers of patients and experts in evaluating the quality of automatic algorithms.
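The Dice similarity coefficient used throughout this evaluation is straightforward to compute. The sketch below pairs it with a simple majority-vote consensus over raters, which is only a crude stand-in for the EM-based simultaneous truth and performance level estimation (STAPLE) algorithm the study actually used; the function names are illustrative.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote(masks):
    """Consensus of several raters' binary masks: a voxel is foreground
    when more than half of the raters mark it (crude stand-in for STAPLE,
    which instead weights raters by estimated sensitivity/specificity)."""
    stack = np.asarray(masks, bool)
    return stack.sum(axis=0) * 2 > len(masks)
```

With such a consensus as the simulated ground truth, each rater (and the automatic system) can then be scored against it with `dice`.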

  16. Truthfulness and relevance

    OpenAIRE

    Wilson, Deirdre; Sperber, Dan

    2002-01-01

    This paper questions the widespread view that verbal communication is governed by a maxim, norm or convention of truthfulness which applies at the level of what is literally meant, or what is said. Pragmatic frameworks based on this view must explain the frequent occurrence and acceptability of loose and figurative uses of language. We argue against existing explanations of these phenomena and provide an alternative account, based on the assumption that verbal communication is governed not by...

  17. WHEN IS TRUTH RELEVANT?

    Science.gov (United States)

    Allison, Elizabeth; Fonagy, Peter

    2016-04-01

    The authors argue that the experience of knowing and having the truth about oneself known in the context of therapy is not an end in itself; rather, it is important because the trust engendered by this experience (epistemic trust or trust in new knowledge) opens one up to learning about one's social world and finding better ways to live in it. The authors consider the consequences of a lack of epistemic trust in terms of psychopathology.

  18. Necessary truth and proof

    Directory of Open Access Journals (Sweden)

    Stephen Read

    2010-06-01

Full Text Available What makes necessary truths true? I argue that all truth supervenes on how things are, and that necessary truths are no exception. What makes them true are proofs. But if so, the notion of proof needs to be generalized to include verification-transcendent proofs, proofs whose correctness exceeds our ability to verify it. It is incumbent on me, therefore, to show that arguments, such as Dummett's, that verification-truth is not compatible with the theory of meaning, are mistaken. The answer is that what we can conceive and construct far outstrips our actual abilities. I conclude by proposing a proof-theoretic account of modality, rejecting a claim of Armstrong's that modality can reside in non-modal truthmakers.

  19. The truth about the truth: a meta-analytic review of the truth effect.

    Science.gov (United States)

    Dechêne, Alice; Stahl, Christoph; Hansen, Jochim; Wänke, Michaela

    2010-05-01

    Repetition has been shown to increase subjective truth ratings of trivia statements. This truth effect can be measured in two ways: (a) as the increase in subjective truth from the first to the second encounter (within-items criterion) and (b) as the difference in truth ratings between repeated and other new statements (between-items criterion). Qualitative differences are assumed between the processes underlying both criteria. A meta-analysis of the truth effect was conducted that compared the two criteria. In all, 51 studies of the repetition-induced truth effect were included in the analysis. Results indicate that the between-items effect is larger than the within-items effect. Moderator analyses reveal that several moderators affect both effects differentially. This lends support to the notion that different psychological comparison processes may underlie the two effects. The results are discussed within the processing fluency account of the truth effect.
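The two criteria contrasted in the meta-analysis can be written down directly. A minimal sketch with hypothetical truth ratings follows; the function name and data are illustrative, not drawn from the 51 studies.

```python
import numpy as np

def truth_effect(first_ratings, repeat_ratings, new_ratings):
    """Two ways of scoring the repetition-induced truth effect on
    subjective truth ratings.

    within-items : same statements, second encounter minus first
    between-items: repeated statements versus fresh new statements
    """
    first = np.asarray(first_ratings, float)
    repeat = np.asarray(repeat_ratings, float)
    new = np.asarray(new_ratings, float)
    within = (repeat - first).mean()          # within-items criterion
    between = repeat.mean() - new.mean()      # between-items criterion
    return within, between
```

The meta-analytic finding that the between-items effect is larger than the within-items effect corresponds, in these terms, to `between > within` on average across studies.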

  20. The Unity of Truth and the Plurality of Truths

    Directory of Open Access Journals (Sweden)

    Susan Haack

    2005-12-01

Full Text Available There is one truth, but many truths: i.e., one unambiguous, non-relative truth-concept, but many and various propositions that are true. One truth-concept: to say that a proposition is true is to say (not that anyone, or everyone, believes it, but that things are as it says); but many truths: particular empirical claims, scientific theories, historical propositions, mathematical theorems, logical principles, textual interpretations, statements about what a person wants or believes or intends, about grammatical and legal rules, etc., etc. But, as Frank Ramsey once said, “There is no platitude so obvious that eminent philosophers have not denied it”; and as soon as you ask why anyone would deny that there is one truth-concept, or that there are many true propositions, it becomes apparent that my initial, simple formula disguises many complexities.

  1. Lying relies on the truth.

    Science.gov (United States)

    Debey, Evelyne; De Houwer, Jan; Verschuere, Bruno

    2014-09-01

    Cognitive models of deception focus on the conflict-inducing nature of the truth activation during lying. Here we tested the counterintuitive hypothesis that the truth can also serve a functional role in the act of lying. More specifically, we examined whether the construction of a lie can involve a two-step process, where the first step entails activating the truth, based upon which a lie response can be formulated in a second step. To investigate this hypothesis, we tried to capture the covert truth activation in a reaction-time based deception paradigm. Together with each question, we presented either the truth or lie response as distractors. If lying depends on the covert activation of the truth, deceptive responses would thus be facilitated by truth distractors relative to lie distractors. Our results indeed revealed such a "covert congruency" effect, both in errors and reaction times (Experiment 1). Moreover, stimulating participants to use the distractor information by increasing the proportion of truth distractor trials enlarged the "covert congruency" effects, and as such confirmed that the effects operate at a covert response level (Experiment 2). Our findings lend support to the idea that lying relies on a first step of truth telling, and call for a shift in theoretical thinking that highlights both the functional and interfering properties of the truth activation in the lying process. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Compensation of Cable Voltage Drops and Automatic Identification of Cable Parameters in 400 Hz Ground Power Units

    DEFF Research Database (Denmark)

    Borup, Uffe; Nielsen, Bo Vork; Blaabjerg, Frede

    2004-01-01

    In this paper a new cable voltage drop compensation scheme for ground power units (GPU) is presented. The scheme is able to predict and compensate the voltage drop in an output cable by measuring the current quantities at the source. The prediction is based on an advanced cable model that includes...
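The underlying idea of such compensation can be sketched for a single phase with a lumped series R-L cable model at 400 Hz: predict the phasor drop across the cable impedance from source-side measurements, then raise the source setpoint accordingly. The function name and parameter values are illustrative assumptions; the paper's advanced cable model includes effects (e.g. shunt capacitance, per-phase coupling) omitted here.

```python
import math

def predicted_load_voltage(v_source, i_source, r_ohm, l_henry, f_hz=400.0):
    """Predict the voltage phasor at the far end of a cable from
    source-side phasor measurements:

        V_load = V_source - I * Z_cable,  Z_cable = R + j*2*pi*f*L

    A lumped series R-L model only; real GPU cable models are richer.
    """
    z_cable = complex(r_ohm, 2 * math.pi * f_hz * l_henry)
    return v_source - i_source * z_cable
```

A compensator would then command `v_source + (v_source - predicted_load_voltage(...))` (in phasor terms) so that the aircraft connector, not the source terminals, sees the nominal 115 V.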

  3. Fiction Is Truth, and Sometimes Truth Is Fiction

    Science.gov (United States)

    Hess, Carol Lakey

    2008-01-01

    The art of fiction tells truth because it is the truth of life that goes into making good fiction: love, hate, fear, courage, delight, sorrow, betrayal, loyalty, confusion, choice, circumstance, luck, injustice. These essential qualities, says the author, are also the qualities of sound theology, with a sense of time and place; and raising…

  4. On Truth and Emancipation

    Directory of Open Access Journals (Sweden)

    Andreas Hjort Bundgaard

    2012-09-01

Full Text Available This article has two main currents. First, it argues that an affinity or similarity can be identified between the philosophy of Gianni Vattimo (the so-called “Weak Thinking”) and the “Discourse Theory” of Ernesto Laclau and Chantal Mouffe. The two theorizations are engaged with related problems, but have conceptualized them differently; they share central insights, but understand them with different vocabularies. The article furthermore illuminates what this affinity consists in, and it discusses the differences and similarities between the two theoretical positions. The second current of the article takes the ‘postmodern’ philosophical problems of anti-foundationalism and nihilism as its point of departure. It raises the questions of: (1) how it is possible at the same time to take the critique of universality and objectivity seriously and still believe in the value of ethics and science; and (2) how we are to understand emancipation if there is no necessary relationship between truth and freedom. The article investigates the status, meaning and interconnection of the categories of truth, knowledge, ethics, politics and emancipation in the light of the absence of metaphysical first principles. The article concludes that: (A) faith can constitute a “weak foundation” of knowledge and ethics; and (B) nihilism can be combined with the political and ethical ambitions of universal human emancipation and radical democracy.

  5. Truth-Telling, Ritual Culture, and Latino College Graduates in the Anthropocene

    Science.gov (United States)

    Gildersleeve, Ryan Evely

    2017-01-01

    This article seeks to trace the cartography of truth-telling through a posthuamanist predicament of ritual culture in higher education and critical inquiry. Ritual culture in higher education such as graduation ceremony produces and reflects the realities of becoming subjects. These spaces are proliferating grounds for truth telling and practical…

  6. Lying relies on the truth

    NARCIS (Netherlands)

    Debey, E.; De Houwer, J.; Verschuere, B.

    2014-01-01

    Cognitive models of deception focus on the conflict-inducing nature of the truth activation during lying. Here we tested the counterintuitive hypothesis that the truth can also serve a functional role in the act of lying. More specifically, we examined whether the construction of a lie can involve a

  8. TWO TRUTHS IN SUNDANESE SCRIPT CARIOS TAMIM

    Directory of Open Access Journals (Sweden)

    Rohim

    2014-10-01

Full Text Available This article aims to describe the Sundanese manuscript of wawacan Carios Tamim by editing its text and analyzing its structure. Besides, this article also explores how, by applying the actant theory and functional model promoted by Greimas, the events are sequentially related through the characters. The analysis shows that the formal structure of Carios Tamim contains 14 pupuh in 390 couplets and 2644 lines, in three elements of story making: the manggala (preliminary), the content of the story, and the ending. Meanwhile, the analysis of its narrative structure shows that Carios Tamim has a plot, characters, characterization, and setting so closely related that they build up the theme of the story, i.e. raising a truth out of two truths. The truth revealed in Carios Tamim is strongly concerned with the munakahat (marriage). On the ground of Greimas’ actant theory and functional model, the character Tamim Ibnu Habib Ad-Dāri and his wife as subjects successfully attain the object, owing to the fact that the events they experience are closely related to one another in a causal relationship.

  9. THE QUEST FOR TRUTH AS THE FOUNDATION OF PSYCHOANALYTIC PRACTICE: A TRADITIONAL FREUDIAN-KLEINIAN PERSPECTIVE.

    Science.gov (United States)

    Blass, Rachel B

    2016-04-01

    In responding to the question of whether truth in psychoanalysis is relevant today, the author presents what she refers to as a traditional Freudian-Kleinian perspective. According to this perspective, truth is not only relevant, but rather the quest for it is the alpha and omega of psychoanalytic practice. The author reviews Freud's approach to truth and then discusses Klein's essential contribution to its understanding, grounding, and enrichment, highlighting Klein's thinking about phantasy and the life and death instincts. Finally, the author contends with the opposing view that the quest for truth is no longer relevant to contemporary analytic practice.

  10. Twardowski On Truth

    Directory of Open Access Journals (Sweden)

    Peter Simons

    2009-10-01

    Full Text Available Of those students of Franz Brentano who went on to become professional philosophers, Kazimierz Twardowski (1866-1938 is much less well-known than his older contemporaries Edmund Husserl and Alexius Meinong. Yet in terms of the importance of his contribution to the history of philosophy, he ranks among Brentano’s students behind at most those two, possibly only behind Husserl. The chief contribution of Twardowski to global philosophy came indirectly, through the influence of his theory of truth on his students, and they on their students, and so on. The most important of these grandstudents is one whom Twardowski presumably knew but never taught, and whose adopted name is obtained by deleting four letters from his own: Tarski.

  11. Picturing truth and reconciliation

    Directory of Open Access Journals (Sweden)

    Liebmann Marian

    2004-01-01

    Full Text Available This is an account of the workshop run at the conference Truth and reconciliation in the former Yugoslavia: Where are we now and where to go? Even in such a short workshop, it seemed that it was possible to share difficult areas of the past, and move on to looking at hopes for the future. The use of art materials seemed to facilitate the expression of aspects difficult to put into words. There seems to be potential for extending this method. Yet it is not without its dangers, if used with a vulnerable group of people, or in an insecure situation, or in an insensitive way. It could open doors which are difficult to shut again. The ability of art to bring up memories and emotions is both its strength and its risk.

  12. Davidson, Dualism, and Truth

    Directory of Open Access Journals (Sweden)

    Nathaniel Goldberg

    2012-12-01

    Full Text Available Happy accidents happen even in philosophy. Sometimes our arguments yield insights despite missing their target, though when they do others can often spot it more easily. Consider the work of Donald Davidson. Few did more to explore connections among mind, language, and world. Now that we have critical distance from his views, however, we can see that Davidson’s accomplishments are not quite what they seem. First, while Davidson attacked the dualism of conceptual scheme and empirical content, he in fact illustrated a way to hold it. Second, while Davidson used the principle of charity to argue against the dualism, his argument in effect treats the principle as constitutive of a conceptual scheme. And third, while Davidson asserted that he cannot define what truth ultimately is—and while I do not disagree—his work nonetheless allows us to say more about truth than Davidson himself does. I aim to establish these three claims. Doing so enriches our understanding of issues central to the history of philosophy concerning how, if at all, to divvy up the mental or linguistic contribution, and the worldly contribution, to knowledge. As we see below, Davidson was right in taking his work to be one stage of a dialectic begun by Immanuel Kant. He was just wrong about what that stage is. Reconsidering Davidson’s views also moves the current debate forward, as they reveal a previously unrecognized yet intuitive notion of truth—even if Davidson himself remained largely unaware of it. We begin however with scheme/content dualism and Davidson’s argument against it.

  13. An existential theory of truth

    African Journals Online (AJOL)

    Western Oregon State College, USA ... existentialist writers - namely, Kierkegaard, Heidegger, ... truth in his writing is not representative of other existentialists. .... if we chose, opt not to participate in it - but that it is so fundamental to our being.

  14. The Truth and Bias Model of Judgment

    Science.gov (United States)

    West, Tessa V.; Kenny, David A.

    2011-01-01

    We present a new model for the general study of how the truth and biases affect human judgment. In the truth and bias model, judgments about the world are pulled by 2 primary forces, the truth force and the bias force, and these 2 forces are interrelated. The truth and bias model differentiates force and value, where the force is the strength of…
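    The two-force structure described above can be read, schematically, as a linear pull on each judgment. The sketch below (an illustration of that reading, not West and Kenny's actual specification; all names are assumed) recovers the two force weights from data by ordinary least squares:

```python
def fit_truth_bias(judgments, truths, biases):
    """Estimate the 'truth force' and 'bias force' as ordinary
    least-squares weights on centered variables:
        judgment ~ t * truth + b * bias
    A schematic reading of the truth and bias model, not the
    authors' full specification."""
    n = len(judgments)

    def center(xs):
        m = sum(xs) / n
        return [x - m for x in xs]

    j, t, b = center(judgments), center(truths), center(biases)
    # Normal equations for two centered predictors.
    stt = sum(x * x for x in t)
    sbb = sum(x * x for x in b)
    stb = sum(x * y for x, y in zip(t, b))
    sjt = sum(x * y for x, y in zip(j, t))
    sjb = sum(x * y for x, y in zip(j, b))
    det = stt * sbb - stb * stb  # assumes the predictors are not collinear
    return (sjt * sbb - sjb * stb) / det, (sjb * stt - sjt * stb) / det
```

    For noiseless data generated with weights 0.8 (truth) and 0.3 (bias), the estimates recover those values.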

  16. Visual truths of citizen reportage

    DEFF Research Database (Denmark)

    Allan, Stuart; Peters, Chris

    2015-01-01

    In striving to better understand issues associated with citizen contributions to newsmaking in crisis situations, this article identifies and elaborates four specific research problematics – bearing witness, technologies of truth-telling, mediating visualities and affectivities of othering. Rather than centring analysis on how crisis events highlight change, it discerns the basis for a critical tracing of the material configurations and contingencies shaping journalistic imperatives towards generating visually truthful reportage. In seeking to move debates about how best to enliven digital...

  17. Correspondence Truth and Quantum Mechanics

    CERN Document Server

    Karakostas, Vassilios

    2015-01-01

    The logic of a physical theory reflects the structure of the propositions referring to the behaviour of a physical system in the domain of the relevant theory. It is argued in relation to classical mechanics that the propositional structure of the theory allows truth-value assignment in conformity with the traditional conception of a correspondence theory of truth. Every proposition in classical mechanics is assigned a definite truth value, either 'true' or 'false', describing what is actually the case at a certain moment of time. Truth-value assignment in quantum mechanics, however, differs; it is known, by means of a variety of 'no go' theorems, that it is not possible to assign definite truth values to all propositions pertaining to a quantum system without generating a Kochen-Specker contradiction. In this respect, the Bub-Clifton 'uniqueness theorem' is utilized for arguing that truth-value definiteness is consistently restored with respect to a determinate sublattice of propositions defined by the state...

  18. Truth therapy/lie therapy.

    Science.gov (United States)

    Langs, R

    In this paper an attempt is made to conceptualize a basic dimension of various psychotherapeutic treatment modalities, especially psychoanalysis and psychoanalytically oriented psychotherapy. The central variable under consideration is the extent to which each endeavors to approach the truth within both patient and therapist as it exists dynamically in terms of their spiraling unconscious communicative interaction. That treatment modality which takes into account every possible dimension of such truths is termed truth therapy. Treatment modalities that make no attempt to arrive at these truths or that deliberately or inadvertently falsify their nature are termed lie or barrier therapies. Extensive consideration is given to truth therapy and the truth system on which it is based. The basis for the need for lie therapies is explored, and lie systems, which may arise from either patient or therapist, or both, are identified. A classification of common types of lie patients and lie therapists (and their main techniques) is offered. The implications of this delineation for our understanding of the dynamic therapies are discussed, and a number of new clinical issues arising from this perspective are addressed.

  19. Truthful Outcomes from Non-Truthful Position Auctions

    OpenAIRE

    Dütting, Paul; Fischer, Felix; Parkes, David C.

    2016-01-01

    We exhibit a property of the VCG mechanism that can help explain the surprising rarity with which it is used even in settings with unit demand: a relative lack of robustness to inaccuracies in the choice of its parameters. For a standard position auction environment in which the auctioneer may not know the precise relative values of the positions, we show that under both complete and incomplete information a non-truthful mechanism supports the truthful outcome of the VCG mechanism...

  20. PSYCHOANALYSIS AND THE PROBLEM OF TRUTH.

    Science.gov (United States)

    Levine, Howard B

    2016-04-01

    After briefly reviewing Freud's search for "the truth" in psychoanalytic treatments, the author discusses Bion's views on truth and its prominence in his thinking. The author then addresses various definitions of truth, drawing particularly on recent comments by Ogden (2015). Considerations of the relationship between truth and philosophy, and of that between truth and the arts, follow; the author then returns to a focus on psychoanalytic truth as emergent. Our view of the latter has been strongly influenced, he notes, by changing views of therapeutic action and the goals of psychoanalysis.

  1. Medical science, culture, and truth.

    Science.gov (United States)

    Gillett, Grant

    2006-12-19

    There is a fairly closed circle between culture, language, meaning, and truth such that the world of a given culture is a world understood in terms of the meanings produced in that culture. Medicine is, in fact, a subculture of a powerful type and has its own language and understanding of the range of illnesses that affect human beings. So how does medicine get at the truth of people and their ills in such a way as to escape its own limited constructions? There is a way out of the closed circle implicit in the idea of a praxis and the engagement with reality that is central to it and the further possibility introduced by Jacques Lacan that signification is never comprehensive in relation to the subject's encounter with the real. I will explore both of these so as to develop a conception of truth that is apt for the knowledge that arises in the clinic.

  2. Truthful approximations to range voting

    DEFF Research Database (Denmark)

    Filos-Ratsika, Aris; Miltersen, Peter Bro

    We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare maximization in this setting. With m being the number of alternatives, we exhibit a randomized truthful-in-expectation ordinal mechanism implementing an outcome whose expected social welfare is at least an Omega(m^{-3/4}) fraction of the social welfare of the socially optimal alternative. On the other hand, we show that for sufficiently many agents and any truthful-in-expectation ordinal mechanism, there is a valuation profile where the mechanism achieves at most an O(m^{-2/3}) fraction of the optimal social welfare in expectation. We get tighter bounds for the natural special case of m = 3...
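    To make the contrast concrete, here is a minimal illustration of a non-truthful cardinal mechanism (range voting) next to the simplest truthful-in-expectation ordinal baseline, random dictatorship. The baseline is for illustration only; it is not the Omega(m^{-3/4}) mechanism of the paper, and all names are assumptions:

```python
import random

def range_voting(profiles):
    """Pick the alternative with the highest total reported score.
    Not truthful: an agent can exaggerate scores to swing the outcome."""
    totals = {}
    for scores in profiles:  # scores: dict mapping alternative -> value in [0, 1]
        for alt, v in scores.items():
            totals[alt] = totals.get(alt, 0.0) + v
    return max(totals, key=totals.get)

def random_dictatorship(profiles, rng=None):
    """Truthful-in-expectation ordinal baseline: a uniformly random
    agent's top-ranked alternative wins, so no agent gains by lying."""
    rng = rng or random.Random()
    scores = rng.choice(profiles)
    return max(scores, key=scores.get)
```

    With profiles [{'a': 1.0, 'b': 0.0}, {'a': 0.4, 'b': 0.6}, {'a': 0.4, 'b': 0.6}], range voting elects 'a' on total score even though two of the three agents rank 'b' first, which is exactly the kind of cardinal information an ordinal mechanism cannot use.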

  3. Automatic Diabetic Macular Edema Detection in Fundus Images Using Publicly Available Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Garg, Seema [University of North Carolina; Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision threatening complication of diabetic retinopathy. In a large scale screening environment DME can be assessed by detecting exudates (a type of bright lesions) in fundus images. In this work, we introduce a new methodology for diagnosis of DME using a novel set of features based on colour, wavelet decomposition and automatic lesion segmentation. These features are employed to train a classifier able to automatically diagnose DME. We present a new publicly available dataset with ground-truth data containing 169 patients from various ethnic groups and levels of DME. This and two other publicly available datasets are employed to evaluate our algorithm. We are able to achieve diagnosis performance comparable to retina experts on MESSIDOR (an independently labelled dataset with 1200 images) with cross-dataset testing. Our algorithm is robust to segmentation uncertainties, does not need ground truth at the lesion level, and is very fast, generating a diagnosis in an average of 4.4 seconds per image on a 2.6 GHz platform with an unoptimised Matlab implementation.

  4. Truth and Truthfulness in the Sociology of Educational Knowledge

    Science.gov (United States)

    Young, Michael; Muller, Johan

    2007-01-01

    The aim of this article is to reflect on and explore questions of truth and objectivity in the sociology of educational knowledge. It begins by reviewing the problems raised by the social constructivist approaches to knowledge associated with the "new sociology of education" of the 1970s. It suggests that they have significant parallels…

  5. Automatic defect detection in video archives: application to Montreux Jazz Festival digital archives

    Science.gov (United States)

    Hanhart, Philippe; Rerabek, Martin; Ivanov, Ivan; Dufaux, Alain; Jones, Caryl; Delidais, Alexandre; Ebrahimi, Touradj

    2013-09-01

    Archival of audio-visual databases has become an important discipline in multimedia. Various defects are typically present in such archives. Among those, one can mention recording-related defects such as interference between audio and video signals, optical artifacts, recording and play-out artifacts such as horizontal lines and dropouts, as well as those due to digitization such as diagonal lines. An automatic or semi-automatic detection to identify such defects is useful, especially for large databases. In this paper, we propose two automatic algorithms for detection of horizontal and diagonal lines, as well as dropouts, which are among the most typical artifacts encountered. We then evaluate the performance of these algorithms by making use of ground truth scores obtained from human subjects.
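    A row-profile heuristic illustrates how a horizontal-line artifact might be flagged automatically. This is a hedged sketch of the general idea, not the algorithm evaluated in the paper: a line artifact shows up as a row whose mean intensity deviates strongly from the frame's row-mean distribution.

```python
def detect_horizontal_lines(frame, z_thresh=3.0):
    """frame: 2D list of grayscale values (each inner list is a pixel row).
    Returns indices of rows whose mean intensity deviates from the
    frame-wide distribution of row means by more than z_thresh
    standard deviations."""
    row_means = [sum(row) / len(row) for row in frame]
    mu = sum(row_means) / len(row_means)
    var = sum((m - mu) ** 2 for m in row_means) / len(row_means)
    sd = var ** 0.5 or 1.0  # guard against a perfectly uniform frame
    return [i for i, m in enumerate(row_means) if abs(m - mu) / sd > z_thresh]
```

    On a uniform dark frame with one bright row inserted, only that row's index is returned; a real detector would additionally check that the anomaly persists across only one or two rows and is absent in neighbouring frames.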

  6. Heart Health: The Heart Truth Campaign 2009

    Science.gov (United States)

    ... Past Issues / Winter 2009 Table of ... of the celebrities supporting this year's The Heart Truth campaign. Both R&B singer Ashanti (center) and ...

  7. The Bain of Two Truths

    DEFF Research Database (Denmark)

    Hendricks, Vincent Fella

    2010-01-01

    A common view among methodologists is that truth and convergence are related in such a way that scientific theories in their historical order of appearance contribute to the convergence to an ultimate ideal theory. It is not a fact that science develops accordingly but rather a hypothetical thoug...

  8. Theatre of/or Truth

    NARCIS (Netherlands)

    Bleeker, M.A.

    2007-01-01

    The famous scene near the end of The Truman Show (Peter Weir, 1998), when Truman’s boat hits the wall of the television studio that has been his life’s scenery, is a moment of truth. Fans throughout the world hold their breath, glued to their television sets. Will Truman finally discover that his li

  10. Establishing truthfulness, consistency and transferability.

    Science.gov (United States)

    Kirkman, Christine

    2008-01-01

    This article by Christine Kirkman shows how a Convergent Truthfulness Evaluation was used to establish the veracity of accounts given by women who had partnered psychopathic men. The study investigated the nature of the relationships and the manner in which the characteristics of the men were manifested.

  11. The (SOF) Truth about ARSOF Logistics Transformation

    Science.gov (United States)

    2009-05-01

    A monograph by MAJ Jason M. Alvis, U.S. Army School of Advanced Military Studies. ... human dimension is through the Special Operations Forces (SOF) Truths. The SOF Truths can be applied to the tenets of the Army’s approved change...

  12. MAXIMAL POINTS OF A REGULAR TRUTH FUNCTION

    Science.gov (United States)

    Every canonical linearly separable truth function is a regular function, but not every regular truth function is linearly separable. The most promising method of determining which of the regular truth functions are linearly separable requires finding their maximal and minimal points. In this report a quick, systematic method of finding the maximal points of any regular truth function in terms of its arithmetic invariants is developed. (Author)

  13. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
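    The box-whisker (IQR) rule behind the first method above can be stated in a few lines. The sketch below applies it to a list of per-subject feature values; it is an illustration of the thresholding rule only, not the paper's feature design:

```python
def quartiles(values):
    """Return (q1, q3) using the simple median-split method."""
    s = sorted(values)
    n = len(s)
    half = n // 2
    lower = s[:half]
    upper = s[half + 1:] if n % 2 else s[half:]

    def median(xs):
        m = len(xs)
        mid = m // 2
        return xs[mid] if m % 2 else (xs[mid - 1] + xs[mid]) / 2

    return median(lower), median(upper)

def box_whisker_outliers(values, k=1.5):
    """Flag values outside the whiskers [q1 - k*IQR, q3 + k*IQR],
    the usual box-whisker outlier criterion."""
    q1, q3 = quartiles(values)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]
```

    For example, among overlap-style scores [0.91, 0.89, 0.90, 0.92, 0.88, 0.45], the rule flags 0.45 as a candidate segmentation failure for manual review.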

  14. "#Factsmustfall"?--Education in a Post-Truth, Post-Truthful World

    Science.gov (United States)

    Horsthemke, Kai

    2017-01-01

    Taking its inspiration from the name of the recent "#FeesMustFall" movement on South African university campuses, this paper takes stock of the apparent disrepute into which truth, facts and also rationality have fallen in recent times. In the post-truth world, the blurring of borders between truth and deception, truthfulness and…

  15. Reversing the Truth Effect: Learning the Interpretation of Processing Fluency in Judgments of Truth

    Science.gov (United States)

    Unkelbach, Christian

    2007-01-01

    Repeated statements receive higher truth ratings than new statements. Given that repetition leads to greater experienced processing fluency, the author proposes that fluency is used in truth judgments according to its ecological validity. Thus, the truth effect occurs because people learn that fluency and truth tend to be positively correlated.…

  16. 75 FR 31673 - Truth in Savings

    Science.gov (United States)

    2010-06-04

    ... CFR Part 230 Truth in Savings AGENCY: Board of Governors of the Federal Reserve System. ACTION: Final... implements the Truth in Savings Act, and the official staff commentary to the regulation. The final rule... adopted a final rule amending Regulation DD, which implements the Truth in Savings Act, and the official...

  17. 75 FR 81836 - Truth in Lending

    Science.gov (United States)

    2010-12-29

    ... No. R-1366] Truth in Lending AGENCY: Board of Governors of the Federal Reserve System. ACTION... amending Regulation Z, which implements the Truth in Lending Act (TILA). This interim rule revises the... Regulation Z Congress enacted the Truth in Lending Act (TILA) based on findings that economic stability would...

  18. 75 FR 67457 - Truth in Lending

    Science.gov (United States)

    2010-11-02

    ... System 12 CFR Part 226 Truth in Lending; Proposed Rule #0;#0;Federal Register / Vol. 75 , No. 211...-AD55 Truth in Lending AGENCY: Board of Governors of the Federal Reserve System. ACTION: Proposed rule... (2009). The Credit Card Act primarily amended the Truth in Lending Act (TILA) and established a number...

  19. 75 FR 46837 - Truth in Lending

    Science.gov (United States)

    2010-08-04

    ... / Wednesday, August 4, 2010 / Rules and Regulations#0;#0; ] FEDERAL RESERVE SYSTEM 12 CFR Part 226 Truth in... requirements of Regulation Z (Truth in Lending). The Board is required to adjust annually the dollar amount.... Background The Truth in Lending Act (TILA; 15 U.S.C. 1601-1666j) requires creditors to disclose credit terms...

  20. 76 FR 35722 - Truth in Lending

    Science.gov (United States)

    2011-06-20

    ...-1424] Truth in Lending AGENCY: Board of Governors of the Federal Reserve System. ACTION: Final rule... interprets the requirements of Regulation Z, which implements the Truth in Lending Act (TILA). Effective July... Reform and Consumer Protection Act of 2010 (Dodd-Frank Act) increases the threshold in the Truth in...

  1. 75 FR 44093 - Truth in Lending

    Science.gov (United States)

    2010-07-28

    ... CFR Part 226 Truth in Lending June 29, 2010. AGENCY: Board of Governors of the Federal Reserve System..., which implements the Truth in Lending Act, and the staff commentary to the regulation in order to... June 29, 2010, (75 FR 37526) ] (FR Doc. 2010-14717) amending Regulation Z, which implements the Truth...

  2. 76 FR 3487 - Truth in Savings

    Science.gov (United States)

    2011-01-20

    ... ADMINISTRATION 12 CFR Part 707 RIN 3133-AD72 Truth in Savings AGENCY: National Credit Union Administration (NCUA). ACTION: Final rule. SUMMARY: On July 22, 2009, NCUA published a final rule amending NCUA's Truth in... through automated systems. This final rule amends NCUA's Truth in Savings rule and official staff...

  3. 75 FR 80675 - Truth in Lending

    Science.gov (United States)

    2010-12-23

    ... CFR Part 226 Truth in Lending AGENCY: Board of Governors of the Federal Reserve System. ACTION... implements the Truth in Lending Act, in order to implement provisions of the Dodd-Frank Wall Street Reform... October 28, 2010 (75 FR 66554) (Docket No. R- 1394), amending Regulation Z (Truth in Lending) to implement...

  4. 76 FR 35723 - Truth in Lending

    Science.gov (United States)

    2011-06-20

    ...-1422] Truth in Lending AGENCY: Board of Governors of the Federal Reserve System. ACTION: Final rule... interprets the requirements of Regulation Z (Truth in Lending). The Board is required to adjust annually the... INFORMATION: I. Background The Truth in Lending Act (TILA; 15 U.S.C. 1601-1666j) requires creditors to...

  5. 75 FR 37525 - Truth in Lending

    Science.gov (United States)

    2010-06-29

    ... System 12 CFR Part 226 Truth in Lending; Final Rule #0;#0;Federal Register / Vol. 75 , No. 124 / Tuesday, June 29, 2010 / Rules and Regulations#0;#0; ] FEDERAL RESERVE SYSTEM 12 CFR Part 226 Truth in Lending... amending Regulation Z, which implements the Truth in Lending Act, and the staff commentary to the...

  6. 76 FR 11597 - Truth in Lending

    Science.gov (United States)

    2011-03-02

    ... March 2, 2011 Part III Federal Reserve System 12 CFR Part 226 Truth in Lending; Proposed Rule #0;#0... SYSTEM 12 CFR Part 226 RIN No. 7100-AD 65 Truth in Lending AGENCY: Board of Governors of the Federal... public comment a proposed rule that would amend Regulation Z (Truth in Lending) to implement certain...

  7. Normativity and deflationary theories of truth

    Directory of Open Access Journals (Sweden)

    Bruno Mölder

    2008-12-01

    Full Text Available It has been argued that deflationary theories of truth stumble over the normativity of truth. This paper maintains that the normativity objection does not pose problems to at least one version of deflationism, minimalism. The rest of the paper discusses truth-related norms, showing that either they do not hold or they are not troublesome for deflationism.

  8. 75 FR 12333 - Truth in Lending

    Science.gov (United States)

    2010-03-15

    ... System 12 CFR Part 226 Truth in Lending; Proposed Rule #0;#0;Federal Register / Vol. 75 , No. 49 / Monday, March 15, 2010 / Proposed Rules#0;#0; ] FEDERAL RESERVE SYSTEM 12 CFR Part 226 Truth in Lending AGENCY.... SUMMARY: The Board proposes to amend Regulation Z, which implements the Truth in Lending Act, and...

  9. 75 FR 9126 - Truth in Savings

    Science.gov (United States)

    2010-03-01

    ... CFR Part 230 Truth in Savings AGENCY: Board of Governors of the Federal Reserve System. ACTION... amending Regulation DD, which implements the Truth in Savings Act, and the official staff commentary to the..., which implements the Truth in Savings Act, and the official staff commentary to the regulation....

  10. 75 FR 78636 - Truth in Lending

    Science.gov (United States)

    2010-12-16

    ... CFR Part 226 RIN 7100-AD59 Truth in Lending AGENCY: Board of Governors of the Federal Reserve System... Street Reform and Consumer Protection Act (Dodd-Frank Act) amends the Truth in Lending Act (TILA) by... 1100E amends Section 104(3) of the Truth in Lending Act (TILA) by establishing a new threshold...

  11. 75 FR 66553 - Truth in Lending

    Science.gov (United States)

    2010-10-28

    ... System 12 CFR Part 226 Truth in Lending; Interim Final Rule #0;#0;Federal Register / Vol. 75 , No. 208... Regulation Z; Docket No. R-1394 RIN AD-7100-56 Truth in Lending AGENCY: Board of Governors of the Federal... for public comment an interim final rule amending Regulation Z (Truth in Lending). The interim...

  12. 75 FR 7657 - Truth in Lending

    Science.gov (United States)

    2010-02-22

    ... System 12 CFR Parts 226 and 227 Truth in Lending; Unfair or Deceptive Acts or Practices; Final Rules #0...; ] FEDERAL RESERVE SYSTEM 12 CFR Part 226 Truth in Lending AGENCY: Board of Governors of the Federal Reserve System. ACTION: Final rule. SUMMARY: The Board is amending Regulation Z, which implements the Truth...

  13. Truth and the Capability of Learning

    Science.gov (United States)

    Hinchliffe, Geoffrey

    2007-01-01

    This paper examines learning as a capability, taking as its starting point the work of Amartya Sen and Martha Nussbaum. The paper is concerned to highlight the relation between learning and truth, and it does so by examining the idea of a genealogy of truth and also Donald Davidson's coherence theory. Thus the notion of truth is understood to be…

  14. Telling Lies: The Irrepressible Truth?

    Science.gov (United States)

    Williams, Emma J.; Bott, Lewis A.; Patrick, John; Lewis, Michael B.

    2013-01-01

    Telling a lie takes longer than telling the truth but precisely why remains uncertain. We investigated two processes suggested to increase response times, namely the decision to lie and the construction of a lie response. In Experiments 1 and 2, participants were directed or chose whether to lie or tell the truth. A colored square was presented and participants had to name either the true color of the square or lie about it by claiming it was a different color. In both experiments we found that there was a greater difference between lying and telling the truth when participants were directed to lie compared to when they chose to lie. In Experiments 3 and 4, we compared response times when participants had only one possible lie option to a choice of two or three possible options. There was a greater lying latency effect when questions involved more than one possible lie response. Experiment 5 examined response choice mechanisms through the manipulation of lie plausibility. Overall, results demonstrate several distinct mechanisms that contribute to additional processing requirements when individuals tell a lie. PMID:23573277

  15. Truth and reconciliation in Serbia

    Directory of Open Access Journals (Sweden)

    Nikolić-Ristanović Vesna Ž.

    2004-01-01

    Full Text Available The paper provides a general review of the current situation concerning truth and reconciliation in Serbia. The existing attitude toward the past in Serbia is examined through an analysis of relations toward it and through analyses of bottom-up and top-down initiatives. In this respect, the paper’s focus is on the following: the media, nongovernmental organizations, the individual citizen, state organs (primarily the authorities and the criminal justice system), and the international community. The citizens’ opinions that were brought out in the panel discussions organized by the Victimology Society of Serbia, within the project From remembering the past towards a positive future, and that refer to the need for a process of truth and reconciliation and the obstacles and difficulties related to that, are pointed out as well. Particular attention is devoted to the obstacles and difficulties related to the absence of a clear position on the part of the authorities, the counter-productive decisions of the international community, and the still negative role of the media, which fail to deal broadly with the issues of truth and reconciliation.

  16. Truth Axes and the Transformation of Self.

    Science.gov (United States)

    Yadlin-Gadot, Shlomit

    2017-04-01

    Freud adhered to the idea that psychoanalysis is a science and that truth is one. The transition from a realistic epistemology to an epistemology of subjective idealism in psychoanalytic thought was accompanied by the splintering of the "one" realistic truth into a multiplicity of truths: realistic-correspondent, ideal, subjective-existential, intersubjective, coherent, and pragmatic truths. The present paper, employing the concept of "truth axes," explores these truths as they relate to basic human needs, self- states, and the structuring of subjectivity. Truth axes are posited as organizing principles of the psyche aimed at achieving stable images of reality across critical dimensions of the subject's life. Personality and experience render some axes dominant, while others remain foreclosed and dissociated. In this construal, the psychoanalytic process concerns the detailing, depicting, and understanding of the various truth axes. The psychological definition of truth illuminates its relation to clinical objectives and methodologies and emphasizes the ethical dimension involved in prioritization of truths. These ideas are illustrated by clinical vignettes.

  17. The Generation of Automatic Mapping for Buildings, Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    William Barragán Zaque

    2015-06-01

    Full Text Available The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá using high-resolution digital vertical aerial photographs and point clouds obtained using LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processes. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The results had an effectiveness of 97.2%, validated through ground-truthing.
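    A drastically simplified sketch of height-based building extraction from a point cloud follows. It is illustrative only (the paper's workflow also uses image segmentation and radiometric processing, and would fit a ground surface rather than assume a single ground elevation); all names and thresholds here are assumptions:

```python
def extract_building_cells(points, ground_z, min_height=2.5, cell=1.0):
    """points: iterable of (x, y, z) LiDAR returns, in metres.
    Keeps returns more than min_height above the ground elevation and
    bins them into a square grid; the occupied cells give a coarse
    raster approximation of building footprints, which a later step
    would vectorize."""
    cells = set()
    for x, y, z in points:
        if z - ground_z >= min_height:
            cells.add((int(x // cell), int(y // cell)))
    return sorted(cells)
```

    Low returns (ground, grass) fall below the height threshold and are discarded, so only cells containing elevated structure survive.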

  18. The Truth Hurts?: FDA Regulation of Truthful Speech

    OpenAIRE

    Collins, Elisebeth

    2000-01-01

    One of the more frightening pictures in our society is one of a speaker being silenced, simply by virtue of who the speaker is; yet, the FDA engages in that type of censorship on a regular basis. The FDA censors manufacturer dissemination of truthful information concerning unapproved ("off-label") uses of prescription drugs, and does so at the expense of the First Amendment. Nor does the FDA further its mission of protecting the health and safety of the American public through its censorship....

  19. Image simulation for automatic license plate recognition

    Science.gov (United States)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
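
    A minimal sketch of the second step (modeling capture distortions) is given below. The distortion model, its parameter values, and the toy "plate" are illustrative assumptions, not the authors' pipeline; in practice the parameters would be estimated from measurements of real plate images.

```python
import numpy as np

# Illustrative distortion model: contrast change (gamma), defocus blur
# (3x3 box filter), and additive sensor noise. All parameter values are
# placeholders standing in for estimates measured from real plate images.
def distort(plate, gamma=1.4, noise_sigma=0.03, rng=None):
    rng = rng or np.random.default_rng(0)
    img = plate.astype(float) ** gamma                   # contrast change
    padded = np.pad(img, 1, mode="edge")                 # 3x3 box blur
    h, w = img.shape
    blurred = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    noisy = blurred + rng.normal(0.0, noise_sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = np.zeros((20, 60))
clean[6:14, 10:50] = 1.0          # toy stand-in for a rendered plate image
sim = distort(clean)              # distorted synthetic training image
```

    A training set would be produced by sweeping such parameters over the ranges observed in real captures.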

  20. AUTOMATIC RETINAL VESSEL DETECTION AND TORTUOSITY MEASUREMENT

    Directory of Open Access Journals (Sweden)

    Temitope Mapayi

    2016-07-01

    Full Text Available As retinopathies continue to be major causes of visual loss and blindness worldwide, early detection and management of these diseases will help achieve significant reduction of blindness cases. However, an efficient automatic retinal vessel segmentation approach remains a challenge. Since efficient vessel network detection is a very important step needed in ophthalmology for reliable retinal vessel characterization, this paper presents a study on the combination of difference images and K-means clustering for the segmentation of retinal vessels. Stationary points in the vessel center-lines are used to model the detection of twists in the vessel segments. The combination of the arc-chord ratio with stationary points is used to compute a tortuosity index. Experimental results show that the proposed K-means clustering combined with difference images achieved a robust segmentation of retinal vessels. A maximum average accuracy of 0.9556 and a maximum average sensitivity of 0.7581 were achieved on the DRIVE database, while a maximum average accuracy of 0.9509 and a maximum average sensitivity of 0.7666 were achieved on the STARE database. When compared with previously proposed techniques on the DRIVE and STARE databases, the proposed technique yields higher mean sensitivity and mean accuracy rates in the same range of very good specificity. In a related development, a non-normalized tortuosity index that combines a distance metric and the vessel twist frequency, also proposed in this paper, achieved a strong correlation of 0.80 with the expert ground truth.
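
    The arc-chord component of the tortuosity measurement can be sketched as follows. This is a simplification: the proposed index also incorporates the twist frequency derived from stationary points of the centre-line, which is omitted here.

```python
import numpy as np

# Arc-chord ratio: polyline length of a sampled vessel centre-line
# divided by the straight-line distance between its endpoints.
def arc_chord_ratio(points):
    seg = np.diff(points, axis=0)
    arc = np.sqrt((seg ** 2).sum(axis=1)).sum()     # arc (polyline) length
    chord = np.linalg.norm(points[-1] - points[0])  # chord length
    return arc / chord

t = np.linspace(0, 2 * np.pi, 200)
straight = np.column_stack([t, np.zeros_like(t)])   # straight vessel
wavy = np.column_stack([t, 0.5 * np.sin(3 * t)])    # vessel with twists

print(arc_chord_ratio(straight))  # ~1.0: no tortuosity
print(arc_chord_ratio(wavy))      # > 1: grows with twist amplitude
```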

  1. An Earthquake Ground Motion Database System with Automatic Record Selection Methods

    Institute of Scientific and Technical Information of China (English)

    徐亚军; 王朝坤; 魏冬梅; 施炜; 潘鹏

    2011-01-01

    With many earthquakes occurring in recent years, the seismic performance of building structures has become increasingly important, and so has the question of how to select earthquake ground motion records for testing buildings. Although earthquake ground motion database systems have been established in Europe and the United States for research on the seismic performance of building structures, these systems neither cover the characteristics of Chinese earthquake ground motions nor offer record selection methods that meet the requirements of Chinese engineering design, let alone automatic record selection. It is therefore urgent to develop a domestic earthquake ground motion database system together with scientific and reasonable selection methods. This paper presents an earthquake ground motion database system that collects a large number of representative and authoritative earthquake ground motion records and supports two kinds of record selection methods: conditional ground motion selection and severest ground motion selection. Extensive experiments show that both the efficiency and the effectiveness of these record selection methods meet users' requirements.
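
    A miniature sketch of what "conditional" record selection could look like: filter candidate records by magnitude and distance windows, then rank the survivors by spectral misfit against a target design spectrum. The records, spectra, and criteria below are invented for illustration; the system's actual selection criteria are not specified in the abstract.

```python
import numpy as np

# Each record: (name, magnitude, epicentral distance in km, response
# spectrum sampled at a few periods). Values are invented for illustration.
records = [
    ("A", 6.5, 20.0, np.array([0.9, 0.7, 0.5, 0.3])),
    ("B", 7.1, 35.0, np.array([1.2, 0.9, 0.6, 0.4])),
    ("C", 5.2, 80.0, np.array([0.4, 0.3, 0.2, 0.1])),
]
target = np.array([1.0, 0.8, 0.55, 0.35])   # target design spectrum

def select(records, target, mag=(6.0, 7.5), dist=(0.0, 50.0)):
    # keep records inside the magnitude/distance windows...
    pool = [r for r in records
            if mag[0] <= r[1] <= mag[1] and dist[0] <= r[2] <= dist[1]]
    # ...then rank by mean-squared spectral misfit to the target
    return sorted(pool, key=lambda r: np.mean((r[3] - target) ** 2))

ranked = select(records, target)
print([r[0] for r in ranked])   # record "C" falls outside both windows
```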

  2. Can religious priming induce truthful preference revelation?

    OpenAIRE

    Stachtiaris, Spiros; Drichoutis, Andreas; Nayga, Rodolfo; Klonaris, Stathis

    2011-01-01

    We examine whether religious priming can induce more truthful preference revelation in valuation research. Using induced value second price Vickrey auctions in both hypothetical and non-hypothetical contexts, our results suggest that religious priming can indeed induce more truthful bidding and eliminate hypothetical bias in hypothetical contexts. In non-hypothetical contexts where there are real economic incentives, religious priming induces similar truthful bidding as the absence of religio...

  3. Truthfulness in science teachers’ corporeal performances

    DEFF Research Database (Denmark)

    Daugbjerg, Peer

    2014-01-01

    Relations between science teachers’ corporeal performances and their statements regarding these actions are discussed. Dispositions of truthfulness are applied to grasp the personal nuances in these relations. Three teachers’ corporeal performances and truthfulness are analysed. Diana shows effort......, sincerity and trustworthiness in dealing with classroom management. Jane shows effort, fidelity and honesty in developing outdoor teaching. Simon shows transparency, objectivity and sincerity in his support of colleagues. By addressing the relations in the vocabulary of truthfulness the teachers...

  4. Broad-spectrum monitoring strategies for predicting occult precipitation contribution to water balance in a coastal watershed in California: Ground-truthing, areal monitoring and isotopic analysis of fog in the San Francisco Bay region

    Science.gov (United States)

    Koohafkan, M.; Thompson, S. E.; Leonardson, R.; Dufour, A.

    2013-12-01

    We showcase a fog monitoring study designed to quantitatively estimate the contribution of summer fog events to the water balance of a coastal watershed managed by the San Francisco Public Utilities Commission. Two decades of research now clearly show that fog and occult precipitation can be major contributors to the water balance of watersheds worldwide. Monitoring, understanding and predicting occult precipitation is therefore as hydrologically compelling as forecasting precipitation or evaporation, particularly in the face of climate variability. We combine ground-based monitoring and collection strategies with remote sensing technologies, time-lapse imagery, and isotope analysis to trace the 'signature' of fog in physical and ecological processes. Spatial coverage and duration of fog events in the watershed are monitored using time-lapse cameras and leaf wetness sensors strategically positioned to provide estimates of the fog bank extent and cloud base elevation, and this fine-scale data is used to estimate transpiration suppression by fog and is examined in the context of regional climate through the use of satellite imagery. Soil moisture sensors, throughfall collectors and advective fog collectors deployed throughout the watershed provide quantitative estimates of fog drip contribution to soil moisture and plants. Fog incidence records and streamflow monitoring provide daily estimates of fog contribution to streamflow. Isotope analysis of soil water, fog drip, stream water and vegetation samples is used to probe for evidence of direct root and leaf uptake of fog drip by plants. Using this diversity of fog monitoring methods, we develop an empirical framework for the inclusion of fog processes in water balance models.

  5. The Inconvenient Truth. Part 2

    Energy Technology Data Exchange (ETDEWEB)

    Athanasiou, T.

    2007-01-15

    Essay-type publication on what should happen next after Al Gore's presentations on the Inconvenient Truth about the impacts of climate change. The essay states in the first lines: 'We've seen the movie, so we know the first part - we're in trouble deep. And it's time, past time, for at least some of us to go beyond warning to planning, to start talking seriously about a global crash program to stabilize the climate.'

  6. The Experience of Truth in Jazz Improvisation

    DEFF Research Database (Denmark)

    Olsen, Jens Skou

    2015-01-01

    This is a book on truth, experience, and the interrelations between these two fundamental philosophical notions. The questions of truth and experience have their roots at the very heart of philosophy, both historically and thematically. This book gives an insight into how philosophers working...... of a mutually inspired collaboration between the Philosophy Departments at the University of Aarhus and the University of Turin, that is: between Danish and Italian philosophers, who met up in Aarhus during the graduate conference “The Experience of Truth – The Truth of Experience: Between Phenomenology...

  7. Truth-Bonding and Other Truth-Revealing Mechanisms for Courts

    OpenAIRE

    Cooter, Robert D.; Emons, Winand

    2000-01-01

    In trials witnesses often gain by slanting their testimony. The law tries to elicit the truth from witnesses by cross-examination under threat of criminal prosecution for perjury. As a truth-revealing mechanism, perjury law is crude and ineffective. We develop the mathematical form of a perfect truth-revealing mechanism, which exactly offsets the gain from slanted testimony by the risk of a possible sanction. Implementing an effective truth-revealing mechanism requires a witness to certify...

  8. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences produced by a finite automaton. Although they are not random, they may look random. They are complicated in the sense of not being ultimately periodic, and they may look complicated in the sense that it may not be easy to name the rule by which the sequence is generated; nevertheless, such a rule exists. The concept of automatic sequences has special applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.
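
    A standard concrete example (not drawn from this text): the Thue-Morse sequence is 2-automatic, produced by a two-state automaton that reads the binary digits of n and outputs the final state.

```python
# Thue-Morse sequence: t(n) = parity of the number of 1-bits in the binary
# expansion of n. The automaton has two states (the running parity) and
# consumes one binary digit of n per transition.
def thue_morse(n):
    state = 0
    while n:
        state ^= n & 1   # flip state on each binary digit equal to 1
        n >>= 1
    return state

print([thue_morse(n) for n in range(8)])  # [0, 1, 1, 0, 1, 0, 0, 1]
```

    The resulting sequence 0110 1001 1001 0110 ... is not ultimately periodic, yet the generating rule is this small automaton.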

  9. Climate science, truth, and democracy.

    Science.gov (United States)

    Keller, Evelyn Fox

    2017-08-01

    This essay was written almost ten years ago when the urgency of America's failure as a nation to respond to the threats of climate change first came to preoccupy me. Although the essay was never published in full, I circulated it informally in an attempt to provoke a more public engagement among my colleagues in the history, philosophy, and sociology of science. In particular, it was written in almost direct response to Philip Kitcher's own book, Science, Truth and Democracy (2001), in an attempt to clarify what was special about Climate Science in its relation to truth and democracy. Kitcher's response was immensely encouraging, and it led to an extended dialogue that resulted, first, in a course we co-taught at Columbia University, and later, to the book The Seasons Alter: How to Save Our Planet in Six Acts (W. W. Norton) published this spring. The book was finished just after the Paris Climate Accord, and it reflects the relative optimism of that moment. Unfortunately events since have begun to evoke, once again, the darker mood of this essay. I am grateful to Greg Radick for suggesting its publication. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Automatic polar ice thickness estimation from SAR imagery

    Science.gov (United States)

    Rahnemoonfar, Maryam; Yari, Masoud; Fox, Geoffrey C.

    2016-05-01

    Global warming has caused serious damage to our environment in recent years. Accelerated loss of ice from Greenland and Antarctica has been observed in recent decades. The melting of polar ice sheets and mountain glaciers has a considerable influence on sea level rise and altering ocean currents, potentially leading to the flooding of the coastal regions and putting millions of people around the world at risk. Synthetic aperture radar (SAR) systems are able to provide relevant information about subsurface structure of polar ice sheets. Manual layer identification is prohibitively tedious and expensive and is not practical for regular, longterm ice-sheet monitoring. Automatic layer finding in noisy radar images is quite challenging due to huge amount of noise, limited resolution and variations in ice layers and bedrock. Here we propose an approach which automatically detects ice surface and bedrock boundaries using distance regularized level set evolution. In this approach the complex topology of ice and bedrock boundary layers can be detected simultaneously by evolving an initial curve in radar imagery. Using a distance regularized term, the regularity of the level set function is intrinsically maintained that solves the reinitialization issues arising from conventional level set approaches. The results are evaluated on a large dataset of airborne radar imagery collected during IceBridge mission over Antarctica and Greenland and show promising results in respect to hand-labeled ground truth.
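
    For contrast, the naive alternative that such curve-evolution methods improve upon can be sketched: pick the strongest returns in each range column independently. The synthetic echogram and all names below are invented for illustration; independent column-wise picking is exactly what breaks down under the heavy noise the paper addresses.

```python
import numpy as np

# Synthetic echogram: Gaussian noise plus a bright "surface" return at
# row 30 and a weaker "bedrock" return at row 70 in every column.
rng = np.random.default_rng(1)
depth, cols = 100, 40
echogram = rng.normal(0.0, 0.2, (depth, cols))
echogram[30, :] += 3.0
echogram[70, :] += 2.0

def pick_layers(echo):
    """For each column, return (surface, bedrock) as the two strongest returns."""
    picks = []
    for col in echo.T:
        idx = np.argsort(col)[-2:]   # indices of the two largest samples
        picks.append(sorted(idx))    # shallower boundary first
    return np.array(picks)           # shape (num_columns, 2)

layers = pick_layers(echogram)       # columns agree only because noise is mild
```

    A level-set formulation instead evolves a single curve over the whole image, so neighbouring columns constrain each other.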

  11. Automatic histology registration in application to x-ray modalities

    Science.gov (United States)

    Chicherova, Natalia; Hieber, Simone E.; Schulz, Georg; Khimchenko, Anna; Bikis, Christos; Cattin, Philippe C.; Müller, Bert

    2016-10-01

    Registration of microscope images to Computed Tomography (CT) 3D volumes is a challenging task because it requires not only multi-modal similarity measure but also 2D-3D or slice-to-volume correspondence. This type of registration is usually done manually which is very time-consuming and prone to errors. Recently we have developed the first automatic approach to localize histological sections in μCT data of a jaw bone. The median distance between the automatically found slices and the ground truth was below 35 μm. Here we explore the limitations of the method by applying it to three tomography datasets acquired with grating interferometry, laboratory-based μCT and single-distance phase retrieval. Moreover, we compare the performance of three feature detectors in the proposed framework, i.e. Speeded Up Robust Features (SURF), Scale Invariant Feature Transform (SIFT) and Affine SIFT (ASIFT). Our results show that all the feature detectors performed significantly better on the grating interferometry dataset than on other modalities. The median accuracy for the vertical position was 0.06 mm. Across the feature detector types the smallest error was achieved by the SURF-based feature detector (0.29 mm). Furthermore, the SURF-based method was computationally the most efficient. Thus, we recommend to use the SURF feature detector for the proposed framework.
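
    A toy stand-in for the slice-to-volume search (the actual method matches SURF/SIFT features across modalities, which is far more robust): score every slice of a volume against the 2-D section by normalised cross-correlation and return the best match.

```python
import numpy as np

def zscore(a):
    # standardise an image so the product below is a correlation
    return (a - a.mean()) / a.std()

def locate_slice(volume, section):
    """Return the index of the volume slice most correlated with the section."""
    scores = [float(np.mean(zscore(s) * zscore(section))) for s in volume]
    return int(np.argmax(scores))

rng = np.random.default_rng(2)
volume = rng.normal(size=(30, 16, 16))                  # synthetic CT stack
section = volume[11] + rng.normal(0.0, 0.1, (16, 16))   # noisy "histology cut"
print(locate_slice(volume, section))  # 11
```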

  12. Hiding an Inconvenient Truth : Lies and Vagueness

    NARCIS (Netherlands)

    Serra Garcia, M.; van Damme, E.E.C.; Potters, J.J.M.

    2010-01-01

    When truth conflicts with efficiency, can verbal communication destroy efficiency? Or are lies or vagueness used to hide inconvenient truths? We consider a sequential 2-player public good game in which the leader has private information about the value of the public good. This value can be low, high,

  13. THE ’TRUTH’ ABOUT FALSE CONFESSIONS

    Science.gov (United States)

    showed that subjects come to believe that their false statements are true when emitted in the presence of a discriminative truth stimulus. In an...attempted replication, the present study sought evidence to support an alternative explanation of this finding, based upon decreased vigilance induced by the truth stimulus.

  14. 75 FR 7925 - Truth in Lending

    Science.gov (United States)

    2010-02-22

    ... From the Federal Register Online via the Government Publishing Office FEDERAL RESERVE SYSTEM 12 CFR Part 226 Truth in Lending AGENCY: Board of Governors of the Federal Reserve System. ACTION: Final... implements the Truth in Lending Act (TILA), and the official staff commentary. The rule followed a...

  15. Ethics and Truth in Archival Research

    Science.gov (United States)

    Tesar, Marek

    2015-01-01

    The complexities of the ethics and truth in archival research are often unrecognised or invisible in educational research. This paper complicates the process of collecting data in the archives, as it problematises notions of ethics and truth in the archives. The archival research took place in the former Czechoslovakia and its turbulent political…

  16. 76 FR 31221 - Truth in Lending; Correction

    Science.gov (United States)

    2011-05-31

    ... From the Federal Register Online via the Government Publishing Office FEDERAL RESERVE SYSTEM 12 CFR Part 226 RIN 7100-AD55 Truth in Lending; Correction AGENCY: Board of Governors of the Federal... April 25, 2011. The final rule amends Regulation Z, which implements the Truth in Lending Act, in order...

  17. Students' Conceptions of Knowledge, Information, and Truth

    Science.gov (United States)

    Alexander, Patricia A.; Winters, Fielding I.; Loughlin, Sandra M.; Grossnickle, Emily M.

    2012-01-01

    In this study, everyday conceptions of knowledge, information, and truth were investigated as 161 US undergraduates completed three online tasks that investigated understandings of these foundational constructs. For the first task, respondents graphically represented the interrelations of knowledge, information, and truth; the second task required…

  18. Does the Truth Matter in Science?

    Science.gov (United States)

    Lipton, Peter

    2005-01-01

    Is science in the truth business, discovering ever more about an independent and largely unobservable world? Karl Popper and Thomas Kuhn, two of the most important figures in science studies in the 20th century, gave accounts of science that are in some tension with the truth view. Their central claims about science are considered here, along with…

  1. Heart Truth for African American Women

    Science.gov (United States)

    THE HEART TRUTH® FOR AFRICAN AMERICAN WOMEN: AN ACTION PLAN. When you hear the term “heart disease,” what’s your first reaction? ... “That’s a man’s disease.” But here’s The Heart Truth®: Heart disease is the #1 killer of women ...

  2. Does the Truth Matter in Science?

    Science.gov (United States)

    Lipton, Peter

    2005-01-01

    Is science in the truth business, discovering ever more about an independent and largely unobservable world? Karl Popper and Thomas Kuhn, two of the most important figures in science studies in the 20th century, gave accounts of science that are in some tension with the truth view. Their central claims about science are considered here, along with…

  3. The Philosophical Problem of Truth in Librarianship

    Science.gov (United States)

    Labaree, Robert V.; Scimeca, Ross

    2008-01-01

    The authors develop a framework for addressing the question of truth in librarianship and in doing so attempt to move considerations of truth closer to the core of philosophical debates within the profession. After establishing ways in which philosophy contributes to social scientific inquiry in library science, the authors examine concepts of…

  4. Fluency and positivity as possible causes of the truth effect.

    Science.gov (United States)

    Unkelbach, Christian; Bayer, Myriam; Alves, Hans; Koch, Alex; Stahl, Christoph

    2011-09-01

    Statements' rated truth increases when people encounter them repeatedly. Processing fluency is a central variable to explain this truth effect. However, people experience processing fluency positively, and these positive experiences might cause the truth effect. Three studies investigated positivity and fluency influences on the truth effect. Study 1 found correlations between elicited positive feelings and rated truth. Study 2 replicated the repetition-based truth effect, but positivity did not influence the effect. Study 3 conveyed positive and negative correlations between positivity and truth in a learning phase. We again replicated the truth effect, but positivity only influenced judgments for easy statements in the learning phase. Thus, across three studies, we found positivity effects on rated truth, but not on the repetition-based truth effect. We conclude that positivity does not explain the standard truth effect, but the role of positive experiences for truth judgments deserves further investigation. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. A SECOND OPINION ON RELATIVE TRUTH

    Directory of Open Access Journals (Sweden)

    RAMIRO CASO

    2015-01-01

    Full Text Available In 'An undermining diagnosis of relativism about truth', Horwich claims that the notion of relative truth is either explanatorily sterile or explanatorily superfluous. In the present paper, I argue that Horwich's explanatory demands set the bar unwarrantedly high: given the philosophical import of the theorems of a truth-theoretic semantic theory, Horwich's proposed explananda, what he calls acceptance facts, are too indirect for us to expect a complete explanation of them in terms of the deliverances of a theory of meaning based on the notion of relative truth. And, to the extent that there might be such an explanation in certain cases, there is no reason to expect relative truth to play an essential, ineliminable role, nor to endorse the claim that it should play such a role in order to be a theoretically useful notion.

  6. ON LANGUAGE AND TRUTH IN PSYCHOANALYSIS.

    Science.gov (United States)

    Ogden, Thomas H

    2016-04-01

    The author's focus in this paper is on the role that language plays in bringing to life the truth of the patient's lived experience in the analytic session. He discusses particular forms of discourse that enable the patient to experience with the analyst the truth that the patient had previously been unable to experience, much less put into words, on his own. The three forms of discourse that the author explores-direct discourse, tangential discourse, and discourse of non sequiturs-do not simply serve as ways of communicating the truth; they are integral aspects of the truth of what is happening at any given moment of a session. The truth that is experienced and expressed in the analytic discourse lies at least as much in the breaks (the disjunctions) in that discourse as in its manifest narrative.

  7. Prospective Analysis and Establishing Substantive Truth in Review of Merger Decisions in Court

    NARCIS (Netherlands)

    Gerbrandy, Anna

    2014-01-01

    In judicial review of decisions of administrative authorities courts generally aim towards grounding a judgment on substantively true facts. Such a substantive truth is usually understood as meaning ’that which happened’. But how can true facts be established if the facts have not yet occurred and w

  8. On the Discovery of Evolving Truth.

    Science.gov (United States)

    Li, Yaliang; Li, Qi; Gao, Jing; Su, Lu; Zhao, Bo; Fan, Wei; Han, Jiawei

    2015-08-01

    In the era of big data, information regarding the same objects can be collected from increasingly more sources. Unfortunately, there usually exist conflicts among the information coming from different sources. To tackle this challenge, truth discovery, i.e., to integrate multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. In many real world applications, however, the information may come sequentially, and as a consequence, the truth of objects as well as the reliability of sources may be dynamically evolving. Existing truth discovery methods, unfortunately, cannot handle such scenarios. To address this problem, we investigate the temporal relations among both object truths and source reliability, and propose an incremental truth discovery framework that can dynamically update object truths and source weights upon the arrival of new data. Theoretical analysis is provided to show that the proposed method is guaranteed to converge at a fast rate. The experiments on three real world applications and a set of synthetic data demonstrate the advantages of the proposed method over state-of-the-art truth discovery methods.
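
    The alternating estimation at the heart of truth discovery can be sketched in a few lines. This is a generic batch illustration of the idea, not the incremental algorithm proposed in the paper; the sources, objects, and smoothing are invented for the example.

```python
# claims[source][object] = the binary value that source asserts for the object
claims = {
    "s1": {"o1": 1, "o2": 0, "o3": 1},
    "s2": {"o1": 1, "o2": 1, "o3": 1},
    "s3": {"o1": 0, "o2": 0, "o3": 0},
}

def truth_discovery(claims, iters=10):
    weights = {s: 1.0 for s in claims}   # start trusting every source equally
    truths = {}
    objects = {o for c in claims.values() for o in c}
    for _ in range(iters):
        for o in objects:                # (1) reliability-weighted vote
            vote = sum(w if claims[s].get(o) == 1 else -w
                       for s, w in weights.items() if o in claims[s])
            truths[o] = 1 if vote > 0 else 0
        for s in claims:                 # (2) reliability = smoothed agreement
            agree = sum(claims[s][o] == truths[o] for o in claims[s])
            weights[s] = (agree + 1) / (len(claims[s]) + 2)
    return truths, weights

truths, weights = truth_discovery(claims)
print({o: truths[o] for o in sorted(truths)})  # {'o1': 1, 'o2': 0, 'o3': 1}
```

    The incremental setting described in the abstract would update `truths` and `weights` as new claims arrive instead of re-running the whole loop.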

  9. [Medicine and truth: between science and narrative].

    Science.gov (United States)

    Materia, Enrico; Baglio, Giovanni

    2009-01-01

    To which idea of truth may medicine refer? Evidence-based medicine (EBM) is rooted in the scientific truth. To explain the meaning and to trace the evolution of scientific truth, this article outlines the history of the Scientific Revolution and of the parable of Modernity, up to the arrival of pragmatism and hermeneutics. Here, the concept of truth becomes somehow discomfiting and the momentum leans towards the integration of different points of view. The fuzzy set theory for the definition of disease, as well as the shift from disease to syndrome (which has operational relevance for geriatrics), seems to refer to a more complex perspective on knowledge, albeit one that is less defined as compared to the nosology in use. Supporters of narrative medicine seek the truth in the interpretation of the patients' stories, and take advantage of the medical humanities to find the truth in words, feelings and contact with the patients. Hence, it is possible to mention the parresia, which is the frank communication espoused by stoicism and epicureanism, a technical and ethical quality which allows one to care in the proper way, a true discourse for one's own moral stance. Meanwhile, EBM and narrative medicine are converging towards a point at which medicine is considered a practical knowledge. It is the perspective of complexity that as a zeitgeist explains these multiple instances and proposes multiplicity and uncertainty as key referents for the truth and the practice of medicine.

  10. Unveiling the truth: warnings reduce the repetition-based truth effect.

    Science.gov (United States)

    Nadarevic, Lena; Aßfalg, André

    2017-07-01

    Typically, people are more likely to consider a previously seen or heard statement as true compared to a novel statement. This repetition-based "truth effect" is thought to rely on fluency-truth attributions as the underlying cognitive mechanism. In two experiments, we tested the nature of the fluency-attribution mechanism by means of warning instructions, which informed participants about the truth effect and asked them to prevent it. In Experiment 1, we instructed warned participants to consider whether a statement had already been presented in the experiment to avoid the truth effect. However, warnings did not significantly reduce the truth effect. In Experiment 2, we introduced control questions and reminders to ensure that participants understood the warning instruction. This time, warning reduced, but did not eliminate the truth effect. Assuming that the truth effect relies on fluency-truth attributions, this finding suggests that warned participants could control their attributions but did not disregard fluency altogether when making truth judgments. Further, we found no evidence that participants overdiscount the influence of fluency on their truth judgments.

  11. LRO Diviner Soil Composition Measurements - Lunar Sample Ground Truth

    Science.gov (United States)

    Allen, Carlton C.; Greenhagen, Benjamin T.; Paige, David A.

    2010-01-01

    The Diviner Lunar Radiometer Experiment on the Lunar Reconnaissance Orbiter [1,2] includes three thermal infrared channels spanning the wavelength ranges 7.55-8.05 microns, 8.10-8.40 microns, and 8.38-8.68 microns. These "8 micron" bands were specifically selected to measure the "Christiansen feature". The wavelength location of this feature, referred to herein as CF, is particularly sensitive to silicate minerals including plagioclase, pyroxene, and olivine, the major crystalline components of lunar rocks and soil. The general trend is that lower CF values are correlated with higher silica content and higher CF values are correlated with lower silica content. In a companion abstract, Greenhagen et al. [3] discuss the details of lunar mineral identification using Diviner data.
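
    One way a CF estimate can be formed from just three narrow bands (a sketch; the actual Diviner calibration and CF retrieval are more involved): fit a parabola through the three band-centre emissivities and take the wavelength of its vertex. The band centres and emissivity values below are approximate and hypothetical.

```python
import numpy as np

def cf_wavelength(band_centers, emissivities):
    # Exact parabola through three points; vertex = emissivity maximum.
    a, b, _ = np.polyfit(band_centers, emissivities, 2)
    return -b / (2.0 * a)

centers = np.array([7.80, 8.25, 8.53])   # approximate channel centres, microns
emiss = np.array([0.96, 0.99, 0.97])     # hypothetical band emissivities
print(round(cf_wavelength(centers, emiss), 2))  # ~8.2 microns
```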

  12. Community detection in networks: Structural communities versus ground truth

    Science.gov (United States)

    Hric, Darko; Darst, Richard K.; Fortunato, Santo

    2014-12-01

    Algorithms to find communities in networks rely just on structural information and search for cohesive subsets of nodes. On the other hand, most scholars implicitly or explicitly assume that structural communities represent groups of nodes with similar (nontopological) properties or functions. This hypothesis could not be verified, so far, because of the lack of network datasets with information on the classification of the nodes. We show that traditional community detection methods fail to find the metadata groups in many large networks. Our results show that there is a marked separation between structural communities and metadata groups, in line with recent findings. That means that either our current modeling of community structure has to be substantially modified, or that metadata groups may not be recoverable from topology alone.

  13. Global Ground Truth Data Set with Waveform and Arrival Data

    Science.gov (United States)

    2007-07-30

    [Record text is extraction residue from the report documentation page and reference list; no abstract is available. Recoverable fragments name the responsible person, Robert J. Raistrick, a reference to the Proyecto GASPI (Trabajos de Geología, Univ. de Oviedo, 24, 91-106, 2004), and ground-truth event 10116, Terceira, Azores Islands (Kennett, B. L. N.).]

  14. Improved Multiple Event Location Methods for Ground Truth Collection

    Science.gov (United States)

    2016-10-04

    likewise Gaussian with zero mean and variance matrix given by Γ_pri = A C Aᵀ (24). The Rodi-Myers method specifies the model variance indirectly in terms of... respect to e_ij of the logarithm of the pick-error p.d.f. in (15). In the Gaussian case (p = 2) the weights are simply inverse variances: w_ij = σ_ij⁻²... data weights are set to reciprocal pick-error variances for any p, as they are in the Gaussian case. That is, the weights are set as w_ij = (1/σ_ij²)(p/2)^(2/p)
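
The inverse-variance weighting described in the excerpt can be sketched for the Gaussian case (p = 2), where each pick is weighted by w_ij = 1/σ_ij² in a weighted least-squares estimate (illustrative values only, not data from the report):

```python
import numpy as np

picks = np.array([1.0, 3.0])   # hypothetical arrival-time picks (s)
sigma = np.array([1.0, 2.0])   # pick-error standard deviations (s)
w = 1.0 / sigma**2             # Gaussian inverse-variance weights

# Weighted least-squares estimate of a single origin-time-like parameter:
estimate = np.sum(w * picks) / np.sum(w)
print(estimate)  # -> 1.4: the precise pick (sigma=1) dominates the noisy one
```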

  15. ALP FOPEN Site Description and Ground Truth Summary

    Science.gov (United States)

    1990-02-01

    ...xylem layer is also influencing the dielectric constant measurement; 3. The inside of the phloem layer. The phloem layer can often be clearly separated... composition of the bark layer always varies between tree species. [Figure 83: Schematic Diagram of Tree Trunk (ERIM); labels: Active Xylem, Xylem or Heartwood]... is comprised of inactive phloem cells. The next layer, which is usually only several millimeters thick, is the phloem layer. The cells in this layer

  16. Ground Truth calibration for the JEM-EUSO Mission

    CERN Document Server

    Adams, J H; Csorna, S E; Sarazin, F; Wiencke, L R

    2013-01-01

    The Extreme Universe Space Observatory is an experiment to investigate the highest energy cosmic rays by recording the extensive air showers they create in the atmosphere. This will be done by recording video clips of the development of these showers using a large high-speed video camera to be located on the Japanese Experiment Module of the International Space Station. The video clips will be used to determine the energies and arrival directions of these cosmic rays. The accuracy of these measurements depends on measuring the intrinsic luminosity and the direction of each shower accurately. This paper describes how the accuracy of these measurements will be tested and improved during the mission using a global light system consisting of calibrated flash lamps and lasers located deep in the atmosphere.

  17. Automatic Image-Based Plant Disease Severity Estimation Using Deep Learning.

    Science.gov (United States)

    Wang, Guan; Sun, Yu; Wang, Jianxin

    2017-01-01

    Automatic and accurate estimation of disease severity is essential for food security, disease management, and yield loss prediction. Deep learning, the latest breakthrough in computer vision, is promising for fine-grained disease severity classification, as the method avoids the labor-intensive feature engineering and threshold-based segmentation. Using the apple black rot images in the PlantVillage dataset, which are further annotated by botanists with four severity stages as ground truth, a series of deep convolutional neural networks are trained to diagnose the severity of the disease. The performances of shallow networks trained from scratch and deep models fine-tuned by transfer learning are evaluated systemically in this paper. The best model is the deep VGG16 model trained with transfer learning, which yields an overall accuracy of 90.4% on the hold-out test set. The proposed deep learning model may have great potential in disease control for modern agriculture.

  18. Developing a Satellite Based Automatic System for Crop Monitoring: Kenya's Great Rift Valley, A Case Study

    Science.gov (United States)

    Lucciani, Roberto; Laneve, Giovanni; Jahjah, Munzer; Mito, Collins

    2016-08-01

    The crop growth stage represents essential information for the management of agricultural areas. In this study we investigate the feasibility of a tool based on remotely sensed satellite (Landsat 8) imagery that automatically classifies crop fields, and we examine how effectively the classification process can be supported by resolution enhancement based on pan-sharpening techniques and by phenological information extraction, which is used to create decision rules for assigning a semantic class to an object. Moreover, we investigate the opportunity to extract vegetation health status information from remotely sensed assessment of the equivalent water thickness (EWT). Our case study is Kenya's Great Rift Valley, where a ground truth campaign was conducted during August 2015 to collect GPS measurements of crop fields, leaf area index (LAI) readings, and chlorophyll samples.

  19. Accurate and robust fully-automatic QCA: method and numerical validation.

    Science.gov (United States)

    Hernández-Vela, Antonio; Gatta, Carlo; Escalera, Sergio; Igual, Laura; Martin-Yuste, Victoria; Radeva, Petia

    2011-01-01

    Quantitative Coronary Angiography (QCA) is a methodology used to evaluate arterial diseases and, in particular, the degree of stenosis. In this paper we propose AQCA, a fully automatic method for vessel segmentation based on graph cut theory. Vesselness, geodesic paths and a new multi-scale edgeness map are used to compute a globally optimal artery segmentation. We evaluate the method's performance in a rigorous numerical way on two datasets. The method can detect an artery with precision 92.9 +/- 5% and sensitivity 94.2 +/- 6%. The average absolute distance error between the detected and ground truth centerlines is 1.13 +/- 0.11 pixels (about 0.27 +/- 0.025 mm), and the absolute relative error in vessel caliber estimation is 2.93% with almost no bias. Moreover, the method can discriminate between arteries and catheter with an accuracy of 96.4%.

  20. Truth-value transmittal fuzzy reasoning interpolator

    Institute of Scientific and Technical Information of China (English)

    YAN Jianping; LEUNG Yee

    2005-01-01

    In this paper, we first associate the fuzzy reasoning algorithm with the interpolation algorithm and discuss the limitations of the defuzzification methods commonly used in fuzzy reasoning algorithms. Second, we give a new fuzzy reasoning algorithm for the single-input case, called the truth-value transmittal method, and discuss its properties. Finally, we analyze the rationale for adopting the truth-value transmittal method as the defuzzification method of the full implication triple I method, and show that although the CRI and triple I fuzzy reasoning methods differ in their fuzzy output sets, they ultimately coincide under the truth-value transmittal defuzzification method.

  1. Degrees of Truthfulness in Accepted Scientific Claims

    National Research Council Canada - National Science Library

    Ahmed Hassan Mabrouk

    2008-01-01

    ... Sciences adopt different methodologies in deriving claims and establishing theories. As a result, two accepted claims or theories belonging to two different sciences may not necessarily carry the same degree of truthfulness...

  2. 7 CFR 1940.401 - Truth in lending.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 13 2010-01-01 2009-01-01 true Truth in lending. 1940.401 Section 1940.401...) PROGRAM REGULATIONS (CONTINUED) GENERAL Truth in Lending-Real Estate Settlement Procedures § 1940.401 Truth in lending. (a) General. This section provides instructions for compliance with the Truth in...

  3. 75 FR 58489 - Regulation Z; Truth in Lending

    Science.gov (United States)

    2010-09-24

    ... CFR Part 226 Regulation Z; Truth in Lending AGENCY: Board of Governors of the Federal Reserve System... Regulation Z (Truth in Lending). The final rule implements Section 131(g) of the Truth in Lending Act (TILA.... SUPPLEMENTARY INFORMATION: I. Background The Truth in Lending Act (TILA), 15 U.S.C. 1601 et seq., seeks to...

  4. ON THE CLASSIFICATION OF LINEARLY SEPARABLE TRUTH FUNCTIONS

    Science.gov (United States)

    A solution to the classification problem of the linearly separable truth functions of n variables by reducing it to the enumeration of a special kind...of truth functions called canonical truth functions is given. The classification problem is formulated and the canonical truth functions are defined. The key lemma is proved, and the reduction is described. (Author)
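
A truth function of n variables is linearly separable when some threshold unit realizes it. For small n this can be checked by brute force (an illustrative sketch, not the paper's canonical-function construction; integer weights in a small range suffice for n = 2):

```python
from itertools import product

def linearly_separable(truth_fn, n=2, bound=3):
    """Does some threshold unit (w.x + b > 0) realize the truth function?"""
    inputs = list(product((0, 1), repeat=n))
    for wb in product(range(-bound, bound + 1), repeat=n + 1):
        *w, b = wb
        if all((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == truth_fn(x)
               for x in inputs):
            return True
    return False

AND = lambda x: x[0] and x[1]
XOR = lambda x: x[0] != x[1]
print(linearly_separable(AND), linearly_separable(XOR))  # True False
```

AND is realized by, e.g., w = (1, 1), b = -1, while no weight choice at all separates XOR, the classic non-linearly-separable truth function.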

  5. An algorithm for automatic parameter adjustment for brain extraction in BrainSuite

    Science.gov (United States)

    Rajagopal, Gautham; Joshi, Anand A.; Leahy, Richard M.

    2017-02-01

    Brain Extraction (classification of brain and non-brain tissue) of MRI brain images is a crucial pre-processing step necessary for imaging-based anatomical studies of the human brain. Several automated methods and software tools are available for performing this task, but differences in MR image parameters (pulse sequence, resolution) and instrument- and subject-dependent noise and artefacts affect the performance of these automated methods. We describe and evaluate a method that automatically adapts the default parameters of the Brain Surface Extraction (BSE) algorithm to optimize a cost function chosen to reflect accurate brain extraction. BSE uses a combination of anisotropic filtering, Marr-Hildreth edge detection, and binary morphology for brain extraction. Our algorithm automatically adapts four parameters associated with these steps to maximize the brain surface area to volume ratio. We evaluate the method on a total of 109 brain volumes with ground truth brain masks generated by an expert user. A quantitative evaluation of the performance of the proposed algorithm showed an improvement in the mean (s.d.) Dice coefficient from 0.8969 (0.0376) for default parameters to 0.9509 (0.0504) for the optimized case. These results indicate that automatic parameter optimization can result in significant improvements in definition of the brain mask.
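
The Dice coefficient used to score extracted masks against the expert ground truth can be computed as follows (a generic sketch with toy masks, not the BrainSuite implementation):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy masks standing in for a ground-truth brain mask and an extraction result:
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True      # 16 voxels
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:4] = True  # 8 voxels, all inside gt
print(dice(gt, pred))  # 2*8 / (16+8) = 0.666...
```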

  6. Experimental assessment of an automatic breast density classification algorithm based on principal component analysis applied to histogram data

    Science.gov (United States)

    Angulo, Antonio; Ferrer, Jose; Pinto, Joseph; Lavarello, Roberto; Guerrero, Jorge; Castaneda, Benjamín.

    2015-01-01

    Breast parenchymal density is considered a strong indicator of cancer risk. However, measures of breast density are often qualitative and require the subjective judgment of radiologists. This work proposes a supervised algorithm to automatically assign a BI-RADS breast density score to a digital mammogram. The algorithm applies principal component analysis to the histograms of a training dataset of digital mammograms to create four different spaces, one for each BI-RADS category. Scoring is achieved by projecting the histogram of the image to be classified onto the four spaces and assigning it to the closest class. In order to validate the algorithm, a training set of 86 images and a separate testing database of 964 images were built. All mammograms were acquired in the craniocaudal view from female patients without any visible pathology. Eight experienced radiologists categorized the mammograms according to a BI-RADS score, and the mode of their evaluations was considered as ground truth. Results show better agreement between the algorithm and ground truth for the training set (kappa=0.74) than for the test set (kappa=0.44), which suggests the method may be used for BI-RADS classification but better training is required.
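
The per-class PCA scheme described above can be sketched with synthetic histograms (the bump centers and class count here are made up for illustration; only the classify-by-closest-subspace idea follows the abstract):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
bins = np.arange(32)

def make_hist(center, n):
    # Synthetic stand-ins for mammogram gray-level histograms:
    # one Gaussian bump per class, plus a little noise.
    h = np.exp(-0.5 * ((bins - center) / 2.0) ** 2)[None, :]
    h = np.repeat(h, n, axis=0) + 0.01 * rng.random((n, 32))
    return h / h.sum(axis=1, keepdims=True)

# One PCA space per (hypothetical) density category:
train = {c: make_hist(center, 10) for c, center in enumerate((4, 12, 20, 28))}
spaces = {c: PCA(n_components=2).fit(h) for c, h in train.items()}

def classify(hist):
    # Project onto each class space; assign the class whose subspace
    # reconstructs the histogram with the smallest error.
    err = {c: np.linalg.norm(hist - p.inverse_transform(p.transform(hist[None]))[0])
           for c, p in spaces.items()}
    return min(err, key=err.get)

sample = make_hist(20, 1)[0]   # an unseen class-2-like histogram
print(classify(sample))        # -> 2
```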

  7. Automatic apparatus and data transmission for field response tests of the ground; Automatisation et teletransmission des donnees pour les tests de reponse du terrain

    Energy Technology Data Exchange (ETDEWEB)

    Laloui, L.; Steinmann, G.

    2004-07-01

    This is the report on the third part of a development started in 1998 at the Swiss Federal Institute of Technology Lausanne (EPFL) in Switzerland. Energy piles are increasingly used as heat exchangers and heat storage devices, as are geothermal probes. Their design and sizing are subject to some uncertainty because the planner has to estimate the thermal and mechanical properties of the ground surrounding the piles or probes. The aim of the project was to develop an apparatus for field measurements of the thermal and mechanical properties of an energy pile or a geothermal probe (thermal response tests). In the reported third phase of the project, the portable apparatus was equipped with a data transmission device using the Internet. Real-time data acquisition and supervision are now implemented and data processing has been improved. Another goal of the project was to obtain official accreditation of such response tests according to the European standard EN 45000. First operating experience from a test in Lyon, France is reported.
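
Thermal response tests are commonly evaluated with the infinite line-source approximation (a generic sketch with made-up numbers, not necessarily the exact evaluation method used by the EPFL apparatus): at late times the mean fluid temperature grows linearly in ln(t) with slope Q / (4πkH), so the ground thermal conductivity k follows from a straight-line fit.

```python
import numpy as np

Q = 5000.0     # injected heat rate, W            (hypothetical values)
H = 100.0      # borehole / pile length, m
k_true = 2.0   # ground thermal conductivity, W/(m K)

t = np.linspace(10 * 3600, 72 * 3600, 50)            # 10 h .. 72 h, in s
T = 12.0 + Q / (4 * np.pi * k_true * H) * np.log(t)  # synthetic response data

slope = np.polyfit(np.log(t), T, 1)[0]   # fit T against ln(t)
k_est = Q / (4 * np.pi * H * slope)      # invert slope = Q / (4*pi*k*H)
print(round(k_est, 3))  # -> 2.0
```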

  8. Design of an automatic balancing system for the ground water table of a lysimeter

    Institute of Scientific and Technical Information of China (English)

    Guo, Xiangping; Lu, Hongfei; Chen, Sheng

    2014-01-01

    In order to keep the water table of a lysimeter synchronized in real time with that of the surrounding farmland, and to measure the amount of groundwater recharge and deep percolation, a design for an automatic water table balancing system for a lysimeter is put forward. The system is based on the principle of communicating vessels and exploits the difference in electrical conductivity between water and mercury. It consists of three parts: (1) water level monitoring equipment, comprising groundwater observation wells in the lysimeter and the farmland and their connecting pipes; (2) water level sensing and control equipment, composed of a mercury-filled U-shaped tube and control circuits, which compares the water levels of the farmland and the lysimeter and opens or closes the supply and drainage solenoid valves through electromagnetic relays; and (3) water supply and drainage equipment with measurement devices, consisting of the supply and drainage units, their solenoid valves, and the metering devices. The system is simple in construction, inexpensive, and easy to maintain and manage. Initial tests indicate that the system has good control accuracy and is appropriate for application in areas where the water table varies
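
The valve-switching behaviour of such a balancing system amounts to on/off control with a deadband. A hypothetical simulation (illustrative only, not the actual mercury U-tube circuit): open the supply valve when the lysimeter level falls below the farmland level by more than a deadband, open the drain valve when it rises above it, and close both near the target.

```python
def step(level, target, supply_on, drain_on, deadband=0.5, rate=0.2, loss=0.05):
    # Hysteresis logic: relays latch the valves until the level crosses back.
    if level < target - deadband:
        supply_on, drain_on = True, False
    elif level > target + deadband:
        supply_on, drain_on = False, True
    elif abs(level - target) < 0.1:
        supply_on = drain_on = False
    level += (rate if supply_on else 0.0) - (rate if drain_on else 0.0) - loss
    return level, supply_on, drain_on

level, supply_on, drain_on = 10.0, False, False   # cm, relative datum
target = 12.0                                     # farmland water table
history = []
for _ in range(100):
    level, supply_on, drain_on = step(level, target, supply_on, drain_on)
    history.append(level)

print(min(history[20:]), max(history[20:]))  # settles near the target band
```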

  9. Automatic and hierarchical segmentation of the human skeleton in CT images

    Science.gov (United States)

    Fu, Yabo; Liu, Shi; Li, H. Harold; Yang, Deshan

    2017-04-01

    Accurate segmentation of each bone of the human skeleton is useful in many medical disciplines. The results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulty due to the high image contrast between the bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to the many limitations of the CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all the major individual bones of the human skeleton above the upper legs in CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. The degrees of freedom of femora and humeri are modeled to support patients in different body and limb postures. The segmentation results are evaluated using the Dice coefficient and point-to-surface error (PSE) against manual segmentation results as the ground-truth. The results suggest that the reported method can automatically segment and label the human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for the mandible, 0.62 mm for cervical vertebrae, 0.92 mm for thoracic

  10. Automatic and hierarchical segmentation of the human skeleton in CT images.

    Science.gov (United States)

    Fu, Yabo; Liu, Shi; Li, Hui Harold; Yang, Deshan

    2017-02-14

    Accurate segmentation of each bone in human skeleton is useful in many medical disciplines. Results of bone segmentation could facilitate bone disease diagnosis and post-treatment assessment, and support planning and image guidance for many treatment modalities including surgery and radiation therapy. As a medium level medical image processing task, accurate bone segmentation can facilitate automatic internal organ segmentation by providing stable structural reference for inter- or intra-patient registration and internal organ localization. Even though bones in CT images can be visually observed with minimal difficulties due to high image contrast between bony structures and surrounding soft tissues, automatic and precise segmentation of individual bones is still challenging due to many limitations in the CT images. The common limitations include low signal-to-noise ratio, insufficient spatial resolution, and indistinguishable image intensity between spongy bones and soft tissues. In this study, a novel and automatic method is proposed to segment all major individual bones of human skeleton above the upper legs in the CT images based on an articulated skeleton atlas. The reported method is capable of automatically segmenting 62 major bones, including 24 vertebrae and 24 ribs, by traversing a hierarchical anatomical tree and by using both rigid and deformable image registration. Degrees of freedom of femora and humeri are modeled to support patients in different body and limb postures. Segmentation results are evaluated using Dice coefficient and point-to-surface error (PSE) against manual segmentation results as ground truth. The results suggest that the reported method can automatically segment and label human skeleton into detailed individual bones with high accuracy. The overall average Dice coefficient is 0.90. The average PSEs are 0.41 mm for mandible, 0.62 mm for cervical vertebrae, 0.92 mm for thoracic vertebrae, and 1.45 mm for pelvis bones.
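
The point-to-surface error (PSE) used above is commonly computed as the mean distance from each predicted surface point to its nearest ground-truth surface point. A sketch with surfaces represented as sampled point sets (not the paper's implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_surface_error(pred_pts, gt_pts):
    """Mean nearest-neighbor distance from predicted to ground-truth points."""
    dists, _ = cKDTree(gt_pts).query(pred_pts)
    return dists.mean()

gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
pred = gt + np.array([0.0, 0.3, 0.0])     # every point offset by 0.3 mm
print(point_to_surface_error(pred, gt))   # mean offset, 0.3
```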

  11. Telling the truth to patients with cancer: what is the truth?

    Science.gov (United States)

    Surbone, Antonella

    2006-11-01

    Attitudes and practices of truth-telling to people with cancer have shifted substantially in the past few years. However, cultural and individual differences persist, and some difficulties common to all medical specialties are magnified in oncology. In this Personal View, I review and analyse data for attitudes and practices of truth-telling worldwide. I also assess ethical justifications, with special reference to interpersonal aspects of patients' autonomy and the dynamic nature of truth in the clinical context. Examples are provided to show how this ethical perspective can help oncologists to frame the discourse on truth-telling and to find solutions to the dilemmas of whether, when, and how to tell the truth to their patients in clinical practice. Finally, I identify future targets for research.

  12. Automatic Evaluation of Photovoltaic Power Stations from High-Density RGB-T 3D Point Clouds

    Directory of Open Access Journals (Sweden)

    Luis López-Fernández

    2017-06-01

    Full Text Available A low-cost unmanned aerial platform (UAV equipped with RGB (Red, Green, Blue and thermographic sensors is used for the acquisition of all the data needed for the automatic detection and evaluation of thermal pathologies on photovoltaic (PV surfaces and geometric defects in the mounting on photovoltaic power stations. RGB imagery is used for the generation of a georeferenced 3D point cloud through digital image preprocessing, photogrammetric and computer vision algorithms. The point cloud is complemented with temperature values measured by the thermographic sensor and with intensity values derived from the RGB data in order to obtain a multidimensional product (5D: 3D geometry plus temperature and intensity on the visible spectrum. A segmentation workflow based on the proper integration of several state-of-the-art geomatic and mathematic techniques is applied to the 5D product for the detection and sizing of thermal pathologies and geometric defects in the mounting of the PV panels. It consists of a three-step segmentation procedure, involving first the geometric information, then the radiometric (RGB information, and last the thermal data. No configuration of parameters is required. Thus, the methodology presented contributes to the automation of the inspection of PV farms by maximizing the exploitation of the data acquired in the different spectral bands (visible and thermal infrared. Results of the proposed workflow were compared with a ground truth generated according to currently established protocols and complemented with a topographic survey. The proposed methodology was able to detect all pathologies established by the ground truth without adding any false positives. Discrepancies in the measurement of damaged surfaces relative to the established ground truth, which can reach 5% of the total panel surface for visual inspection by an expert operator, decrease to under 2% with the proposed methodology. The geometric evaluation

  13. Automatic learning-based beam angle selection for thoracic IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Amit, Guy; Marshall, Andrea [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Techna Institute, University Health Network, Toronto, Ontario M5G 1P5 (Canada); Levinshtein, Alex [Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3G4 (Canada); Hope, Andrew J.; Lindsay, Patricia [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Pekar, Vladimir [Philips Healthcare, Markham, Ontario L6C 2S3 (Canada)

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
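
The core learning step, regressing a per-angle beam score from anatomical features and selecting the top-scoring angles, can be sketched with synthetic stand-in data (the features and scoring rule below are invented; only the random-forest-scoring idea follows the abstract):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

angles = np.arange(0, 360, 10)                     # 36 candidate beam angles
features = rng.random((angles.size, 3))            # fake anatomical features
true_score = features @ np.array([0.6, 0.3, 0.1])  # hidden scoring rule

# Learn the feature -> score mapping from "approved plan" examples:
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features, true_score)

pred = model.predict(features)
top5 = angles[np.argsort(pred)[::-1][:5]]          # 5 best angles by learned score
print(sorted(top5))
```

In the paper, an optimization scheme then adjusts this selection for interbeam dependencies; the sketch stops at independent per-angle scoring.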

  14. Truth from language and truth from fit: the impact of linguistic concreteness and level of construal on subjective truth.

    Science.gov (United States)

    Hansen, Jochim; Wänke, Michaela

    2010-11-01

    In four experiments, the impact of concreteness of language on judgments of truth was examined. In Experiments 1 and 2, it was found that statements of the very same content were judged as more probably true when they were written in concrete language than when they were written in abstract language. Findings of Experiment 2 also showed that this linguistic concreteness effect on judgments of truth could most likely be attributed to greater perceived vividness of concrete compared to abstract statements. Two further experiments demonstrated an additional fit effect: The truth advantage of concrete statements occurred especially when participants were primed with a concrete (vs. abstract) mind-set (Experiment 3) or when the statements were presented in a spatially proximal (vs. distant) location (Experiment 4). Implications for communication strategies are discussed.

  15. Ground State Spin Logic

    CERN Document Server

    Whitfield, J D; Biamonte, J D

    2012-01-01

    Designing and optimizing cost functions and energy landscapes is a problem encountered in many fields of science and engineering. These landscapes and cost functions can be embedded and annealed in experimentally controllable spin Hamiltonians. Using an approach based on group theory and symmetries, we examine the embedding of Boolean logic gates into the ground state subspace of such spin systems. We describe parameterized families of diagonal Hamiltonians and symmetry operations which preserve the ground state subspace encoding the truth tables of Boolean formulas. The ground state embeddings of adder circuits are used to illustrate how gates are combined and simplified using symmetry. Our work is relevant for experimental demonstrations of ground state embeddings found in both classical optimization as well as adiabatic quantum optimization.
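
The embedding of a Boolean gate into a ground-state subspace can be illustrated with a standard construction (not taken verbatim from the paper): a diagonal 3-variable Hamiltonian whose zero-energy states are exactly the rows of the AND truth table y = x1 AND x2.

```python
from itertools import product

def H(x1, x2, y):
    # Diagonal penalty Hamiltonian: 0 on consistent AND assignments,
    # strictly positive on every inconsistent one.
    return x1 * x2 - 2 * (x1 + x2) * y + 3 * y

ground_states = {s for s in product((0, 1), repeat=3) if H(*s) == 0}
and_table = {(x1, x2, x1 & x2) for x1, x2 in product((0, 1), repeat=2)}
print(ground_states == and_table)  # -> True
```

Annealing such a Hamiltonian to its ground state therefore "computes" the gate, which is the sense in which the abstract speaks of embedding truth tables.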

  16. Formalism and the notion of truth

    Science.gov (United States)

    Spencer, Joseph M.

    The most widely acknowledged conceptions of truth take some kind of relation to be at truth's core. This dissertation attempts to establish that an adequate conception of this relation begins with an investigation of the entanglement of the formal and the material as set forth in the model theoretical development of set theoretical mathematics. Truth concerns first and most crucially a certain commerce across the border between the formal and the material, between the ideal and the real. The entanglement of the formal and the material must be thought in itself, apart from or prior to any assimilation into philosophical schemas committed to larger metaphysical claims. This is accomplished in model theory. The twentieth century witnessed two attempts at bringing model theoretical mathematics to bear on accounting philosophically for the concept of truth: that of Alfred Tarski, and that of Alain Badiou. In order to investigate the relevance of model theory to the task of working out a philosophical conception of truth, this dissertation investigates, through comparative work, these two thinkers. It is necessary to see where their projects converge in important ways, as well as where their projects diverge in equally important ways. What brings their work into close proximity is their shared conviction that truth must be thought in light of model theory. Nonetheless, the two do not agree about exactly how model theory sheds light on truth. Comparative study thus reveals both a shared site for thinking and a struggle over the significance of that site. Agreement between Tarski and Badiou concerns the excess of the purely formal over itself, marked by the generation of an undecidable statement within formal systems of a certain level of complexity. 
Both thinkers determine that this formal excess touches on the material, and both further determine that the consequent entanglement of the formal and the material provides the basic frame for any philosophical consideration

  17. Can Partisan Voting Lead to Truth?

    CERN Document Server

    Masuda, Naoki

    2010-01-01

    We study an extension of the voter model in which each agent is endowed with an innate preference for one of two states that we term as "truth" or "falsehood". Due to interactions with neighbors, an agent that innately prefers truth can be persuaded to adopt a false opinion (and thus be discordant with its innate preference) or the agent can possess an internally concordant "true" opinion. Parallel states exist for agents that inherently prefer falsehood. We determine the conditions under which a population of such agents can ultimately reach a consensus for the truth, a consensus for falsehood, or reach an impasse where an agent tends to adopt the opinion that is in internal concordance with its innate preference so that consensus is never achieved.

  18. STS, symmetry and post-truth.

    Science.gov (United States)

    Lynch, Michael

    2017-08-01

    This essay takes up a series of questions about the connection between 'symmetry' in Science and Technology Studies (STS) and 'post-truth' in contemporary politics. A recent editorial in this journal by Sergio Sismondo argues that current discussions of 'post-truth' have little to do with conceptions of 'symmetry' or with concerns about 'epistemic democracy' in STS, while others, such as Steve Fuller and Harry Collins, insist that there are such connections. The present essay discusses a series of questions about the meaning of 'post-truth' and 'symmetry', and the connections of those concepts to each other and to 'epistemic democracy'. The essay ends with a series of other questions about STS and contemporary politics, and an invitation to further discussions.

  19. Logical Form and Truth-Conditions

    Directory of Open Access Journals (Sweden)

    Andrea Iacona

    2013-09-01

    Full Text Available This paper outlines a truth-conditional view of logical form, that is, a view according to which logical form is essentially a matter of truth-conditions. The main motivation for the view is a fact that seems crucial to logic. As §1 suggests, fundamental logical relations such as entailment or contradiction can formally be explained only if truth-conditions are formally represented. §2 spells out the view. §3 dwells on its affinity with a conception of logical form that has been defended in the past. §§4-6 draw attention to its impact on three major issues that concern, respectively, the extension of the domain of formal explanation, the semantics of tensed discourse, and the analysis of quantification.

  20. [Which truth for patients and their family].

    Science.gov (United States)

    Bréchot, J-M

    2007-10-01

    Must the truth always be told to a cancer patient and/or his relatives? Taking a personal experience as the basis for discussion, the author examines this question in the context of Western cultural norms where death is taboo. The legal obligations to inform patients and the representation of cancer are discussed. Two key situations are considered: the delivery of a diagnosis of cancer and the announcement of a bad prognosis. What does it really mean "to tell the truth"? A best strategy for giving information to relatives is developed. The author's conclusion is that it seems more important to establish a "true" relationship with the cancer patient and his relatives than telling or not telling the whole truth.

  1. Automatic Exudate Detection from Non-dilated Diabetic Retinopathy Retinal Images Using Fuzzy C-means Clustering.

    Science.gov (United States)

    Sopharak, Akara; Uyyanonvara, Bunyarit; Barman, Sarah

    2009-01-01

    Exudates are the primary sign of diabetic retinopathy. Early detection can potentially reduce the risk of blindness. An automatic method is proposed to detect exudates from low-contrast digital images of retinopathy patients with non-dilated pupils using Fuzzy C-Means (FCM) clustering. Contrast enhancement preprocessing is applied before four features, namely intensity, standard deviation of intensity, hue and number of edge pixels, are extracted as input parameters for coarse segmentation with the FCM clustering method. The first result is then fine-tuned with morphological techniques. The detection results are validated by comparison with expert ophthalmologists' hand-drawn ground truths. Sensitivity, specificity, positive predictive value (PPV), positive likelihood ratio (PLR) and accuracy are used to evaluate overall performance. The proposed method detects exudates successfully with sensitivity, specificity, PPV, PLR and accuracy of 87.28%, 99.24%, 42.77%, 224.26 and 99.11%, respectively.
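
    All five reported measures derive from pixel-level confusion counts against the hand-drawn ground truth; a minimal sketch (the counts below are made up for illustration, not the paper's data):

```python
def exudate_metrics(tp, fp, tn, fn):
    """Standard detection metrics from confusion counts.

    PLR (positive likelihood ratio) = sensitivity / (1 - specificity).
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                  # positive predictive value
    plr = sensitivity / (1 - specificity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, plr, accuracy

# hypothetical counts, for illustration only
sens, spec, ppv, plr, acc = exudate_metrics(tp=80, fp=10, tn=90, fn=20)
# sens = 0.8, spec = 0.9, plr ≈ 8.0, acc = 0.85
```

    Note how the very large PLR reported in the abstract (224.26) follows directly from the near-perfect specificity (99.24%): PLR grows as 1 - specificity approaches zero.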

  2. An Algorithm for Automatic Road Asphalt Edge Delineation from Mobile Laser Scanner Data Using the Line Clouds Concept

    Directory of Open Access Journals (Sweden)

    Carlos Cabo

    2016-09-01

    Full Text Available Accurate road asphalt extent delineation is needed for road and street planning, road maintenance, and road safety assessment. In this article, a new approach for automatic roadside delineation is developed based on the line clouds concept. The method relies on line cloud grouping from point cloud laser data. Using geometric criteria, the initial 3D LiDAR point data is structured in lines covering the road surface. These lines are then grouped according to a set of quasi-planar restriction rules. Road asphalt edge limits are extracted from the end points of lines belonging to these groups. Finally a two-stage smoothing procedure is applied to correct for edge occlusions and other anomalies. The method was tested on a 2.1 km stretch of road, and the results were checked using a RTK-GNSS measured dataset as ground truth. Correctness and completeness were 99% and 97%, respectively.
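
    The reported correctness and completeness are precision and recall of the extracted edge points against the RTK-GNSS ground truth; a minimal sketch, assuming 2D points and a hypothetical distance tolerance in metres:

```python
def edge_quality(detected, truth, tol=0.2):
    """Correctness = share of detected edge points near some ground-truth
    point; completeness = share of ground-truth points near some detected
    point.  `tol` is a hypothetical matching tolerance in metres."""
    def near(p, pts):
        return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol ** 2
                   for q in pts)
    correctness = sum(near(p, truth) for p in detected) / len(detected)
    completeness = sum(near(q, detected) for q in truth) / len(truth)
    return correctness, completeness

corr, comp = edge_quality([(0.0, 0.0), (5.0, 5.0)], [(0.0, 0.1), (1.0, 1.0)])
# one of two detected points matches, one of two truth points is found
```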

  3. Truthful Feedback for Sanctioning Reputation Mechanisms

    CERN Document Server

    Witkowski, Jens

    2012-01-01

    For product rating environments, similar to that of Amazon Reviews, it has been shown that truthful elicitation of feedback is possible through mechanisms that pay buyers for their reports contingent on the reports of other buyers. We study whether similar mechanisms can be designed for reputation mechanisms at online auction sites, where the buyers' experiences are partially determined by a strategic seller. We show that this is impossible in the basic setting. However, introducing a small prior belief that the seller is a cooperative commitment player leads to a payment scheme with a truthful perfect Bayesian equilibrium.

  4. Truth, laws and the progress of science

    Directory of Open Access Journals (Sweden)

    Mauro Dorato

    2011-06-01

    Full Text Available In this paper I analyze the difficult question of the truth of mature scientific theories by tackling the problem of the truth of laws. After introducing the main philosophical positions in the field of scientific realism, I discuss and then counter the two main arguments against realism, namely the pessimistic meta-induction and the abstract and idealized character of scientific laws. I conclude by defending the view that well-confirmed physical theories are true only relatively to certain values of the variables that appear in the laws.

  5. CART IV: improving automatic camouflage assessment with assistance methods

    Science.gov (United States)

    Müller, Thomas; Müller, Markus

    2010-04-01

    In order to facilitate systematic, computer-aided improvement of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was developed for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007, SPIE 2008 and SPIE 2009 [1], [2], [3]). It comprises semi-automatic marking of target objects (ground truth generation), including their propagation over the image sequence, and evaluation via user-defined feature extractors. The conspicuity of camouflaged objects due to their movement can be assessed with a purpose-built processing method named the MTI snail track algorithm. This paper presents the enhancements of the past year and addresses procedures to assist the camouflage assessment of moving objects in image data with strong noise or image artefacts. This extends the evaluation methods to a significantly broader application range. For example, some noisy infrared image data can be evaluated for the first time by applying the presented methods, which fathom the correlations between camouflage assessment, MTI (moving target indication) and dedicated noise filtering.

  6. A Flexible Semi-Automatic Approach for Glioblastoma multiforme Segmentation

    CERN Document Server

    Egger, Jan; Kuhnt, Daniela; Kappus, Christoph; Carl, Barbara; Freisleben, Bernd; Nimsky, Christopher

    2011-01-01

    Gliomas are the most common primary brain tumors, evolving from the cerebral supportive cells. For clinical follow-up, the evaluation of the preoperative tumor volume is essential. Volumetric assessment of tumor volume with manual segmentation of its outlines is a time-consuming process that can be overcome with the help of segmentation methods. In this paper, a flexible semi-automatic approach for grade IV glioma segmentation is presented. The approach uses a novel segmentation scheme for spherical objects that creates a directed 3D graph. Thereafter, the minimal cost closed set on the graph is computed via a polynomial-time s-t cut, creating an optimal segmentation of the tumor. The user can improve the results by specifying an arbitrary number of additional seed points to support the algorithm with grey value information and geometrical constraints. The presented method is tested on 12 magnetic resonance imaging datasets. The ground truth tumor boundaries are manually extracted by neurosurgeons. The...

  7. Automatic medical X-ray image classification using annotation.

    Science.gov (United States)

    Zare, Mohammad Reza; Mueen, Ahmed; Seng, Woo Chaw

    2014-02-01

    The demand for automatic classification of medical X-ray images is rising faster than ever. In this paper, an approach is presented to achieve a high accuracy rate for classes of medical databases with a high degree of intraclass variability and interclass similarity. The classification framework was constructed via annotation using three techniques: annotation by binary classification, annotation by probabilistic latent semantic analysis, and annotation using top similar images. Next, the final annotation was constructed by applying ranking similarity to the annotated keywords produced by each technique. The final annotation keywords were then divided into three levels according to the body region, the specific bone structure in the body region, and the imaging direction. Different weights were given to each level of keywords; these were then used to calculate the weightage for each category of medical images based on its ground truth annotation. The weightage computed from the generated annotation of a query image is compared with the weightage of each category of medical images, and the query image is assigned to the category with the closest weightage. The average accuracy rate reported is 87.5%.
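
    The final assignment step, as described, reduces to a nearest-weightage lookup. A toy sketch under that reading (the scalar weightages and category names are invented for illustration, not taken from the paper):

```python
def classify_by_weightage(query_weight, category_weights):
    """Assign a query image to the category whose precomputed weightage
    is numerically closest to the query's weightage.  The scalar
    representation here is an illustrative simplification of the
    abstract's description, not the authors' exact formulation."""
    return min(category_weights,
               key=lambda c: abs(category_weights[c] - query_weight))

cat = classify_by_weightage(
    2.7, {"skull-frontal": 3.0, "hand-lateral": 2.6, "chest-AP": 1.1})
# cat == "hand-lateral" (|2.6 - 2.7| = 0.1 is the smallest gap)
```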

  8. A framework for automatic information quality ranking of diabetes websites.

    Science.gov (United States)

    Belen Sağlam, Rahime; Taskaya Temizel, Tugba

    2015-01-01

    Objective: When searching for particular medical information on the internet, the challenge lies in distinguishing websites that are relevant to the topic and contain accurate information. In this article, we propose a framework that automatically identifies and ranks diabetes websites according to their relevance and information quality based on website content. Design: The proposed framework ranks diabetes websites according to their content quality, relevance and evidence-based medicine. The framework combines information retrieval techniques with a lexical resource based on SentiWordNet, making it possible to work with biased and untrusted websites while, at the same time, ensuring content relevance. Measurement: The evaluation measurements used were Pearson correlation, true positives, false positives and accuracy. We tested the framework with a benchmark data set consisting of 55 websites with varying degrees of information quality problems. Results: The proposed framework gives good results that are comparable with the non-automated information quality measuring approaches in the literature. The correlation between the results of the proposed automated framework and the ground truth is 0.68 on average (p < 0.001), which is higher than that of the other automated methods proposed in the literature (average r = 0.33).
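
    The headline evaluation statistic is a Pearson correlation between the automated quality scores and the ground truth; for reference, a plain implementation:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score
    lists (undefined when either series is constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson([1, 2, 3, 4], [2, 4, 6, 8])   # perfectly linear -> r = 1.0
```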

  9. A Bayesian Approach to Discovering Truth from Conflicting Sources for Data Integration

    CERN Document Server

    Zhao, Bo; Gemmell, Jim; Han, Jiawei

    2012-01-01

    In practical data integration systems, it is common for the data sources being integrated to provide conflicting information about the same entity. Consequently, a major challenge for data integration is to derive the most complete and accurate integrated records from diverse and sometimes conflicting sources. We term this challenge the truth finding problem. We observe that some sources are generally more reliable than others, and therefore a good model of source quality is the key to solving the truth finding problem. In this work, we propose a probabilistic graphical model that can automatically infer true records and source quality without any supervision. In contrast to previous methods, our principled approach leverages a generative process of two types of errors (false positive and false negative) by modeling two different aspects of source quality. In so doing, ours is also the first approach designed to merge multi-valued attribute types. Our method is scalable, due to an efficient sampling-based inf...
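
    The unsupervised interplay between source quality and record truth can be illustrated with a much simpler fixed-point scheme. This toy is in the spirit of truth finding generally, not the authors' Bayesian graphical model (which separately models false-positive and false-negative error rates):

```python
def truth_finder(claims, iters=10):
    """Simplified iterative truth discovery for one entity attribute:
    source reliability and claim confidence reinforce each other.

    `claims` maps each source name to the value it asserts.
    """
    sources = list(claims)
    trust = {s: 0.5 for s in sources}              # initial reliability
    for _ in range(iters):
        # confidence of a candidate value = summed trust of its supporters
        conf = {}
        for s, v in claims.items():
            conf[v] = conf.get(v, 0.0) + trust[s]
        total = sum(conf.values())
        conf = {v: c / total for v, c in conf.items()}
        # a source's trust becomes the confidence of the value it claims
        trust = {s: conf[claims[s]] for s in sources}
    return max(conf, key=conf.get), trust

best, trust = truth_finder({"A": "1984", "B": "1984", "C": "1985"})
# best == "1984": the majority value wins and keeps reinforcing its sources
```

    Unlike this sketch, the paper's model can also favor a minority value when the dissenting source has earned high reliability elsewhere.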

  10. Truth-Telling by Wrong-Doers? The Construction of Avowal in Canada’s Truth and Reconciliation Commission

    Directory of Open Access Journals (Sweden)

    Jason

    2015-07-01

    Full Text Available The truth commission has emerged in the last thirty years as a distinct juridical form that views the production of truth as necessary, and in some cases sufficient, for achieving justice. In his history of truth-telling in juridical forms, Michel Foucault conducts a genealogy of avowal (or confession in western judicial practice; critical to his definition of avowal is that the truth-teller and wrong-doer must be the same subject. In my analysis, I consider avowal in light of a relatively recent judicial innovation: the truth commission, with Canada’s Indian Residential Schools Truth and Reconciliation Commission (TRC as a particular case. The TRC’s emphasis on the testimony of victims rather than perpetrators means that truth-telling and wrong-doing are decoupled in this juridical form, suggesting that avowal is not a function of truth commissions according to Foucault’s criteria. Does this mean that truth commissions are not involved in truth production, or perhaps that they are not a juridical form in the lineage of those examined by Foucault? The truth commission is a juridical form that Foucault was unable to address because it developed only after his death, and it is possible that it challenges his core understanding of avowal; however, the truth commission also appears to be consistent with trends that he predicted about the role of truth-telling in the modern judicial system.

  11. Automatic Reading

    Institute of Scientific and Technical Information of China (English)

    胡迪

    2007-01-01

    Reading is the key to school success and, like any skill, it takes practice. A child learns to walk by practising until he no longer has to think about how to put one foot in front of the other. The great athlete practises until he can play quickly, accurately and without thinking. Educators call it automaticity.

  12. Introducing “Seeking Truth in Love”

    Institute of Scientific and Technical Information of China (English)

    Theresa; Carino

    2006-01-01

    An excellent guide and commentary on Bishop KH Ting's writings by scholars from Asia, Europe and North America, the bilingual volume "Seeking Truth in Love" is now available at bookstores in China. Compiled and edited by Wang Peng, Dean of Studies at Nanjing Union Theological Seminary, the 393-page book was published in February 2006 by the Reli-

  13. Truth and (self) censorship in military memoirs

    NARCIS (Netherlands)

    Kleinreesink, E.; Soeters, J.M.M.L.

    2016-01-01

    It can be difficult for researchers from outside the military to gain access to the field. However, there is a rich source on the military that is readily available for every researcher: military memoirs. This source does provide some methodological challenges with regard to truth and (self)

  14. Empirical progress and nomic truth approximation revisited

    NARCIS (Netherlands)

    Kuipers, Theodorus

    2014-01-01

    In my From Instrumentalism to Constructive Realism (2000) I have shown how an instrumentalist account of empirical progress can be related to nomic truth approximation. However, it was assumed that a strong notion of nomic theories was needed for that analysis. In this paper it is shown, in terms of

  15. Five Half-Truths about Classroom Management

    Science.gov (United States)

    Englehart, Joshua M.

    2012-01-01

    Teachers' classroom management practices are rooted in assumptions based on their experiences and perceptions. At times, these assumptions are only partially informed, and serve to limit action and perceived responsibility. In this article, five common "half-truths" that guide classroom management are discussed. For each, the basic premise is…

  16. Freedom of Expression, Diversity, and Truth

    DEFF Research Database (Denmark)

    Kappel, Klemens; Hallsson, Bjørn Gunnar; Møller, Emil Frederik Lundbjerg

    2016-01-01

    be thought to have epistemically valuable outcomes. We relate these results to the moral justification of free speech. Finally, we characterise a collective action problem concerning the compliance with truth-conducive norms of deliberation, and suggest what may solve this problem....

  17. Beauty, a road to the truth

    NARCIS (Netherlands)

    Kuipers, T.A.F.

    2002-01-01

    In this article I give a naturalistic-cum-formal analysis of the relation between beauty, empirical success, and truth. The analysis is based on the one hand on a hypothetical variant of the so-called 'mere-exposure effect' which has been more or less established in experimental psychology regarding

  18. Are the Moral Fixed Points Conceptual Truths?

    NARCIS (Netherlands)

    Evers, Daan; Streumer, Bart

    2016-01-01

    Terence Cuneo and Russ Shafer-Landau have recently proposed a new version of moral non-naturalism, according to which there are non-natural moral concepts and truths but no non-natural moral facts. This view entails that moral error theorists are conceptually deficient. We explain why moral error th

  19. Five Half-Truths about Classroom Management

    Science.gov (United States)

    Englehart, Joshua M.

    2012-01-01

    Teachers' classroom management practices are rooted in assumptions based on their experiences and perceptions. At times, these assumptions are only partially informed, and serve to limit action and perceived responsibility. In this article, five common "half-truths" that guide classroom management are discussed. For each, the basic premise is…

  20. Peirce's Truth-functional Analysis and the Origin of Truth Tables

    CERN Document Server

    Anellis, Irving H

    2011-01-01

    We explore the technical details and historical evolution of Charles Peirce's articulation of a truth table in 1893, against the background of his investigation into the truth-functional analysis of propositions involving implication. In 1997, John Shosky discovered truth table matrices on the verso of a page of the typed transcript of Bertrand Russell's 1912 lecture on "The Philosophy of Logical Atomism". The matrix for negation is Russell's, alongside which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885 includes an example ...

  1. 47 CFR 64.2401 - Truth-in-Billing Requirements.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Truth-in-Billing Requirements. 64.2401 Section... (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Truth-in-Billing Requirements for Common Carriers § 64.2401 Truth-in-Billing Requirements. (a) Bill organization. Telephone bills shall be clearly...

  2. 77 FR 69738 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2012-11-21

    ... PROTECTION 12 CFR Part 1026 Truth in Lending (Regulation Z) AGENCY: Bureau of Consumer Financial Protection...) is publishing a final rule amending the official interpretations for Regulation Z (Truth in Lending.... Background The Truth in Lending Act (TILA; 15 U.S.C. 1601-1666j) requires creditors to disclose credit terms...

  3. 77 FR 21875 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2012-04-12

    ...; ] BUREAU OF CONSUMER FINANCIAL PROTECTION 12 CFR Part 1026 RIN 3170-AA21 Truth in Lending (Regulation Z... implements the Truth In Lending Act, and the official interpretation to the regulation, which interprets the... Card Act) was signed into law on May 22, 2009.\\1\\ The Credit Card Act primarily amended the Truth in...

  4. 77 FR 69736 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2012-11-21

    ... CFR Part 226 RIN 7100-AD94 BUREAU OF CONSUMER FINANCIAL PROTECTION 12 CFR Part 1026 Truth in Lending... the agencies' regulations that implement the Truth in Lending Act (TILA). Effective July 21, 2011, the... threshold in the Truth in Lending Act (TILA) for exempt consumer credit transactions \\1\\ from $25,000 to $50...

  5. 75 FR 58505 - Regulation Z; Truth in Lending

    Science.gov (United States)

    2010-09-24

    ... 226 RIN No. AD 7100-AD54 Regulation Z; Truth in Lending AGENCY: Board of Governors of the Federal... comment a proposed rule to amend Regulation Z, which implements the Truth in Lending Act (TILA). The.... SUPPLEMENTARY INFORMATION: I. Background A. TILA and Regulation Z Congress enacted the Truth in Lending Act...

  6. 78 FR 25818 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2013-05-03

    ... PROTECTION 12 CFR Part 1026 RIN 3170-AA28 Truth in Lending (Regulation Z) AGENCY: Bureau of Consumer... Financial Protection (Bureau) issues this final rule to amend Regulation Z, which implements the Truth in...) was enacted in 2009 as an amendment to the Truth in Lending Act (TILA) to address concerns that...

  7. 28 CFR 91.4 - Truth in Sentencing Incentive Grants.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Truth in Sentencing Incentive Grants. 91... FACILITIES General § 91.4 Truth in Sentencing Incentive Grants. (a) Half of the total amount of funds... available for Truth in Sentencing Incentive Grants. (b) Eligibility. To be eligible to receive such a grant...

  8. 78 FR 76033 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2013-12-16

    ... PROTECTION 12 CFR Part 1026 Truth in Lending (Regulation Z) AGENCY: Bureau of Consumer Financial Protection... Regulation Z, which implements the Truth in Lending Act (TILA). The Bureau is required to calculate annually... Accountability Responsibility and Disclosure Act of 2009 (CARD Act), which amended the Truth in Lending Act (TILA...

  9. 78 FR 18795 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2013-03-28

    ... Part 1026 RIN 3170-AA21 Truth in Lending (Regulation Z) AGENCY: Bureau of Consumer Financial Protection... (Bureau) is amending Regulation Z, which implements the Truth in Lending Act, and the Official.... Public Law 111-24, 123 Stat. 1734 (2009). The Credit Card Act primarily amended the Truth in Lending Act...

  10. 76 FR 79275 - Truth in Savings (Regulation DD)

    Science.gov (United States)

    2011-12-21

    ... December 21, 2011 Part II Bureau of Consumer Financial Protection 12 CFR Part 1030 Truth in Savings... Truth in Savings (Regulation DD) AGENCY: Bureau of Consumer Financial Protection. ACTION: Interim final... Reserve System's (Board's) rulemaking authority for the Truth in Savings Act (TISA) to the Bureau, the...

  11. 76 FR 27389 - Regulation Z; Truth in Lending

    Science.gov (United States)

    2011-05-11

    ... May 11, 2011 Part II Federal Reserve System 12 CFR Part 226 Regulation Z; Truth in Lending; Proposed...; ] FEDERAL RESERVE SYSTEM 12 CFR Part 226 RIN 7100-AD75 Regulation Z; Truth in Lending AGENCY: Board of... Board is publishing for public comment a proposed rule amending Regulation Z (Truth in Lending) to...

  12. 76 FR 43111 - Regulation Z; Truth in Lending

    Science.gov (United States)

    2011-07-20

    ...; Truth in Lending AGENCY: Board of Governors of the Federal Reserve System (Board). ACTION: Final rule... commentary to Regulation Z, which implements the Truth in Lending Act (TILA). The commentary applies and.... SUPPLEMENTARY INFORMATION: I. Background Congress enacted the Truth in Lending Act (TILA; 15 U.S.C. 1601 et...

  13. 77 FR 66748 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2012-11-07

    ...; ] BUREAU OF CONSUMER FINANCIAL PROTECTION 12 CFR Part 1026 RIN 3170-AA28 Truth in Lending (Regulation Z... implements the Truth in Lending Act (TILA), and the official interpretation to the regulation, which... Card Act primarily amended the Truth in Lending Act (TILA) and instituted new substantive...

  14. 32 CFR 776.49 - Truthfulness in statements to others.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Truthfulness in statements to others. 776.49... ADVOCATE GENERAL Rules of Professional Conduct § 776.49 Truthfulness in statements to others. (a) Truthfulness in statements to others. In the course of representing a client a covered attorney shall not...

  15. Fundamental consideration of truth: In general and in criminal law

    Directory of Open Access Journals (Sweden)

    Drakić Dragiša S.

    2016-01-01

    Full Text Available In this paper the author deals with the eternal issue of truth - in general and in criminal law. In the first part of the paper the author discusses the significance of truth as such, as well as its role and place in contemporary society. He continues by elaborating the idea of truth as it appeared in ancient Greek philosophy and poetic tradition, given that Ancient Greece represents the cradle of the idea of truth. This part of the paper ends with an analysis of Aristotle's understanding of truth which is, according to the author, the beginning of a new era in the modern understanding of the truth phenomenon. In the following part of the paper the author analyses the opinions of outstanding 20th-century philosophers who occupied themselves with so-called factual truth, which, in his opinion, includes the truth reached in criminal law. This type of truth is analysed further. The author concludes that the maximum to be reached in criminal procedure is conviction, i.e. the belief that we are in possession of the truth. As for reaching 'objective' truth, in the sense of undoubted certainty that a criminal offence has been committed, together with all the details of its commission, the author believes that this is the 'eternal' but never accomplished ideal. Finally, the author proposes the modality of truth which is most likely to be accomplished within criminal law and procedure.

  16. A ground automatic testing system for the BGO calorimeter of dark matter particle explorer satellite

    Institute of Scientific and Technical Information of China (English)

    马思源; 封常青; 沈仲弢; 王奇; 刘树彬; 安琪

    2015-01-01

    Background: The Bismuth Germanate (BGO) calorimeter is the key sub-detector of the Dark Matter Particle Explorer (DAMPE) satellite. Its production involves extensive testing, including performance tests of 40 front-end electronics (FEE) boards for the engineering development model and the flight model, LED source calibration of 1600 photomultiplier tubes (PMT), and months-long ground environment simulation tests of various kinds. Purpose: This study aims to safeguard the production schedule by developing a ground test system that meets the demand for automated data acquisition and command/parameter configuration during these experiments and tests. Methods: The testing requirements were first analyzed in detail, then a hardware and software framework was designed according to the available resources. Finally, the software was implemented and optimised using LabWindows/CVI and virtual instrument technology to fulfil all test functions. Results: The user interface (UI) and function modules of the software meet the requirements of data acquisition and automatic control for the BGO calorimeter of the DAMPE satellite. Conclusion: The testing system has been in practical use for more than 18 months; it has reduced the experimenters' workload and improved test efficiency.

  17. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    Science.gov (United States)

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-03-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma, encompassing 64 MR acquisitions ranging from preoperative up to 12-month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83-0.96) were found between BraTumIA and the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.

  18. Fully automatic detection of deep white matter T1 hypointense lesions in multiple sclerosis

    Science.gov (United States)

    Spies, Lothar; Tewes, Anja; Suppa, Per; Opfer, Roland; Buchert, Ralph; Winkler, Gerhard; Raji, Alaleh

    2013-12-01

    A novel method is presented for fully automatic detection of candidate white matter (WM) T1 hypointense lesions in three-dimensional high-resolution T1-weighted magnetic resonance (MR) images. By definition, T1 hypointense lesions have similar intensity as gray matter (GM) and thus appear darker than surrounding normal WM in T1-weighted images. The novel method uses a standard classification algorithm to partition T1-weighted images into GM, WM and cerebrospinal fluid (CSF). As a consequence, T1 hypointense lesions are assigned an increased GM probability by the standard classification algorithm. The GM component image of a patient is then tested voxel-by-voxel against GM component images of a normative database of healthy individuals. Clusters (≥0.1 ml) of significantly increased GM density within a predefined mask of deep WM are defined as lesions. The performance of the algorithm was assessed on voxel level by a simulation study. A maximum dice similarity coefficient of 60% was found for a typical T1 lesion pattern with contrasts ranging from WM to cortical GM, indicating substantial agreement between ground truth and automatic detection. Retrospective application to 10 patients with multiple sclerosis demonstrated that 93 out of 96 T1 hypointense lesions were detected. On average 3.6 false positive T1 hypointense lesions per patient were found. The novel method is promising to support the detection of hypointense lesions in T1-weighted images which warrants further evaluation in larger patient samples.
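
    The dice similarity coefficient used in the simulation study measures voxel overlap between the detected and ground-truth lesion masks; a minimal sketch on flattened binary masks:

```python
def dice(a, b):
    """Dice similarity coefficient between two equal-length binary masks
    (1.0 = perfect overlap, 0.0 = disjoint)."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

score = dice([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
# intersection = 2, |a| = 3, |b| = 3  ->  score = 4/6 ≈ 0.667
```

    A maximum dice of 60%, as reported, thus means the detected lesion volume shares roughly 60% weighted overlap with the simulated ground truth at the best contrast setting.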

  19. Automatic orientation and 3D modelling from markerless rock art imagery

    Science.gov (United States)

    Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.

    2013-02-01

    This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and the generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple images. The Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) are assessed on speed, number of features, matched features, and precision in image and object space depending on the adopted hierarchical matching scheme. The influence of additionally applying Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms, aiming at minimum interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated, combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after FBM are assessed both in image and object space for the 3D modelling of a complex rock art shelter.
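
    Area Based Matching scores candidate correspondences by normalised cross-correlation (NCC); a plain sketch of the score on two equally sized, flattened image patches:

```python
def ncc(patch_a, patch_b):
    """Normalised cross-correlation between two equal-length flattened
    image patches.  Values lie in [-1, 1]; 1 means the patches differ
    only by brightness offset and contrast scaling."""
    n = len(patch_a)
    ma = sum(patch_a) / n
    mb = sum(patch_b) / n
    num = sum((a - ma) * (b - mb) for a, b in zip(patch_a, patch_b))
    den = (sum((a - ma) ** 2 for a in patch_a) *
           sum((b - mb) ** 2 for b in patch_b)) ** 0.5
    return num / den

r = ncc([1, 2, 3, 4], [2, 4, 6, 8])   # linearly related patches -> r = 1.0
```

    This brightness/contrast invariance is what makes NCC a useful refinement after feature-based matching under varying illumination.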

  20. Automatic detection and counting of cattle in UAV imagery based on machine vision technology (Conference Presentation)

    Science.gov (United States)

    Rahnemoonfar, Maryam; Foster, Jamie; Starek, Michael J.

    2017-05-01

    Beef production is the main agricultural industry in Texas, where livestock are managed on pasture and rangeland that are typically vast and not easily accessible by vehicles. The current method for identifying and counting livestock is visual observation, which is very time consuming and costly. On large tracts of land, counting may require manned aircraft, which are noisy, disturb the animals, and may thereby introduce a source of error into counts. Such manual approaches are expensive, slow and labor intensive. In this paper we study the combination of a small unmanned aerial vehicle (sUAV) and machine vision technology as a valuable alternative to manual animal surveying. A fixed-wing UAV fitted with GPS and a digital RGB camera for photogrammetry was flown at the Welder Wildlife Foundation in Sinton, TX. Over 600 acres were covered in four UAV flights, and the individual photographs were used to develop orthomosaic imagery. To detect animals in the UAV imagery, a fully automatic technique was developed based on the spatial and spectral characteristics of objects. This automatic technique can even detect small animals that are partially occluded by bushes. Experimental results in comparison to ground truth show the effectiveness of our algorithm.
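    The abstract does not disclose the detection algorithm beyond "spatial and spectral characteristics of objects". As a hedged illustration of the counting step only, connected-component labelling of a binary detection mask with a minimum-size filter (all names and thresholds hypothetical) could look like:

```python
import numpy as np
from scipy import ndimage

def count_animals(mask, min_pixels=4):
    """Count connected blobs in a binary detection mask, discarding
    components smaller than min_pixels (illustrative noise filter)."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int((sizes >= min_pixels).sum())
```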

  1. Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Thariat, Juliette, E-mail: jthariat@hotmail.com [Department of Radiation Oncology/Institut de biologie et developpement du cancer (IBDC) centre national de la recherche scientifique (CNRS) unite mixte de recherche UMR 6543, Cancer Center Antoine-Lacassagne, University of Nice Sophia-Antipolis, Nice Cedex (France); Ramus, Liliane [DOSIsoft, Cachan (France); INRIA (Institut National de Recherche en Informatique et en Automatique)-Asclepios Research Project, Sophia-Antipolis (France); Maingon, Philippe [Department of Radiation Oncology, Centre Georges-Francois Leclerc, Dijon Cedex (France); Odin, Guillaume [Department of Head-and-Neck Surgery, Centre Hospitalier Universitaire-Institut Universitaire de la Face et du Cou, Nice Cedex (France); Gregoire, Vincent [Department of Radiation Oncology, St.-Luc University Hospital, Brussels (Belgium); Darcourt, Vincent [Department of Radiation Oncology-Dentistry, Cancer Center Antoine-Lacassagne, University of Nice Sophia-Antipolis, Nice Cedex (France); Guevara, Nicolas [Department of Head-and-Neck Surgery, Centre Hospitalier Universitaire-Institut Universitaire de la Face et du Cou, Nice Cedex (France); Orlanducci, Marie-Helene [Department of Odontology, CHU, Nice (France); Marcie, Serge [Department of Radiation Oncology/Institut de biologie et developpement du cancer (IBDC) centre national de la recherche scientifique (CNRS) unite mixte de recherche UMR 6543, Cancer Center Antoine-Lacassagne, University of Nice Sophia-Antipolis, Nice Cedex (France); Poissonnet, Gilles [Department of Head-and-Neck Surgery, Cancer Center Antoine-Lacassagne-Institut Universitaire de la Face et du Cou, Nice Cedex (France); Marcy, Pierre-Yves [Department of Radiology, Cancer Center Antoine-Lacassagne, University of Nice Sophia-Antipolis, Nice Cedex (France); and others

    2012-04-01

    Purpose: To propose an automatic atlas-based segmentation framework for the dental structures, called Dentalmaps, and to assess its accuracy and relevance for guiding dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query image using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in only 30% of cases with the visual estimation method without any contour, which is the routine-practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contours. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.
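    The multi-atlas step described above warps each atlas segmentation onto the query and then fuses the warped label maps into a consensus. A minimal sketch of the simplest fusion rule, a per-voxel majority vote (the paper's actual fusion strategy may be weighted or otherwise more elaborate), is:

```python
import numpy as np

def fuse_labels(warped_segs):
    """Per-voxel majority-vote fusion of atlas segmentations that have
    already been warped onto the query. Illustrative sketch only."""
    stack = np.stack(warped_segs)      # shape: (n_atlases, *volume_shape)

    def vote(col):
        # Most frequent label among the atlases at one voxel
        vals, counts = np.unique(col, return_counts=True)
        return vals[np.argmax(counts)]

    return np.apply_along_axis(vote, 0, stack)
```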

  2. A superpixel-based framework for automatic tumor segmentation on breast DCE-MRI

    Science.gov (United States)

    Yu, Ning; Wu, Jia; Weinstein, Susan P.; Gaonkar, Bilwaj; Keller, Brad M.; Ashraf, Ahmed B.; Jiang, YunQing; Davatzikos, Christos; Conant, Emily F.; Kontos, Despina

    2015-03-01

    Accurate and efficient automated tumor segmentation in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is highly desirable for computer-aided tumor diagnosis. We propose a novel automatic segmentation framework which incorporates mean-shift smoothing, superpixel-wise classification, pixel-wise graph-cuts partitioning, and morphological refinement. A set of 15 breast DCE-MR images, obtained from the American College of Radiology Imaging Network (ACRIN) 6657 I-SPY trial, were manually segmented to generate tumor masks (as ground truth) and breast masks (as regions of interest). Four state-of-the-art segmentation approaches based on diverse models were also utilized for comparison. Based on five standard evaluation metrics for segmentation, the proposed framework consistently outperformed all other approaches. The performance of the proposed framework was: 1) 0.83 for Dice similarity coefficient, 2) 0.96 for pixel-wise accuracy, 3) 0.72 for VOC score, 4) 0.79 mm for mean absolute difference, and 5) 11.71 mm for maximum Hausdorff distance, which surpassed the second best method (i.e., adaptive geodesic transformation), a semi-automatic algorithm depending on precise initialization. Our results suggest promising potential applications of our segmentation framework in assisting analysis of breast carcinomas.
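    Two of the five evaluation metrics quoted above, the Dice similarity coefficient and pixel-wise accuracy, have standard definitions that fit in a few lines. These are illustrative helper functions, not code from the paper:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks: 2|A∩B|/(|A|+|B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def pixel_accuracy(pred, gt):
    """Fraction of pixels on which the two binary masks agree."""
    return float((pred.astype(bool) == gt.astype(bool)).mean())
```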

  3. A fully automatic framework for cell segmentation on non-confocal adaptive optics images

    Science.gov (United States)

    Liu, Jianfei; Dubra, Alfredo; Tam, Johnny

    2016-03-01

    By the time most retinal diseases are diagnosed, macroscopic irreversible cellular loss has already occurred. Earlier detection of subtle structural changes at the single-photoreceptor level is now possible using the adaptive optics scanning light ophthalmoscope (AOSLO). This work aims to develop a fully automatic segmentation framework to extract cell boundaries from non-confocal split-detection AOSLO images of the cone photoreceptor mosaic in the living human eye. Significant challenges include anisotropy, heterogeneous cell regions arising from shading effects, and low contrast between cells and background. To overcome these challenges, we propose the use of: 1) multi-scale Hessian response to detect heterogeneous cell regions, 2) convex hulls to create boundary templates, and 3) circularly constrained geodesic active contours to refine cell boundaries. We acquired images from three healthy subjects at eccentric retinal regions and manually contoured cells to generate ground truth for evaluating segmentation accuracy. Dice coefficient, relative absolute area difference, and average contour distance were 82±2%, 11±6%, and 2.0±0.2 pixels (mean±SD), respectively. We find that strong shading effects from vessels are a main factor causing cell over-segmentation and false segmentation of non-cell regions. Our segmentation algorithm can automatically and accurately segment photoreceptor cells in non-confocal AOSLO images, which is the first step in longitudinal tracking of cellular changes in the individual eye over the time course of disease progression.
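    As a hedged sketch of the first step listed above, a multi-scale Hessian response can be computed from Gaussian-derivative filters. The determinant-based, scale-normalised measure below is one common choice and may differ from the paper's exact formulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_response(img, sigmas=(1.0, 2.0, 4.0)):
    """Maximum-over-scale, scale-normalised Hessian determinant response.
    Illustrative reading of a 'multi-scale Hessian response'; parameters
    and the exact measure are assumptions."""
    img = img.astype(float)
    best = np.full(img.shape, -np.inf)
    for s in sigmas:
        # Second-order Gaussian derivatives (Hessian entries) at scale s
        hxx = gaussian_filter(img, s, order=(0, 2))
        hyy = gaussian_filter(img, s, order=(2, 0))
        hxy = gaussian_filter(img, s, order=(1, 1))
        det = (s ** 2) ** 2 * (hxx * hyy - hxy ** 2)  # gamma-normalised
        best = np.maximum(best, det)
    return best
```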

  4. Event-related potentials reveal task-dependence and inter-individual differences in negation processing during silent listening and explicit truth-value evaluation.

    Science.gov (United States)

    Herbert, C; Kissler, J

    2014-09-26

    In sentences such as dogs cannot fly/bark, evaluation of the truth-value of the sentence is assumed to appear after the negation has been integrated into the sentence structure. Moreover negation processing and truth-value processing are considered effortful processes, whereas processing of the semantic relatedness of the words within sentences is thought to occur automatically. In the present study, modulation of event-related brain potentials (N400 and late positive potential, LPP) was investigated during an implicit task (silent listening) and active truth-value evaluation to test these theoretical assumptions and determine if truth-value evaluation will be modulated by the way participants processed the negated information implicitly prior to truth-value verification. Participants first listened to negated sentences and then evaluated these sentences for their truth-value in an active evaluation task. During passive listening, the LPP was generally more pronounced for targets in false negative (FN) than true negative (TN) sentences, indicating enhanced attention allocation to semantically-related but false targets. N400 modulation by truth-value (FN>TN) was observed in 11 out of 24 participants. However, during active evaluation, processing of semantically-unrelated but true targets (TN) elicited larger N400 and LPP amplitudes as well as a pronounced frontal negativity. This pattern was particularly prominent in those 11 individuals, whose N400 modulation during silent listening indicated that they were more sensitive to violations of the truth-value than to semantic priming effects. The results provide evidence for implicit truth-value processing during silent listening of negated sentences and for task dependence related to inter-individual differences in implicit negation processing. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Truth, Reason, and Faith in Modern Civilisation: The violence of truth and the truth of violence in modern 'secular' Western civilisation

    Directory of Open Access Journals (Sweden)

    Johann-Albrecht Meylahn

    2012-02-01

    Full Text Available What is truth? What is reason? What is faith? These questions have been hotly debated and were a cause of violence prior to the rise of the modern and so-called secular state. The rise of the modern 'secular' state was founded on the distinction between reason and faith, thus bringing to an end the religious violence that was inspired by their respective truths. The concept of truth will be questioned, thus questioning the 'truth' that reason and faith can be neatly separated from each other and, consequently, that the secular and the religious can be separated into neat categories. There is an inherent violence (political, religious and linguistic) in the Truth(s), be it the truths of either religion or secular reason, namely the originary linguistic violence of truth. This article will ask the question: How can one speak of truth, reason and faith in a modern civilisation and seek ways beyond the violence of truths towards an interdisciplinary open dialogue of a democracy still to come?

  6. An inconvenient truth; Une verite qui derange

    Energy Technology Data Exchange (ETDEWEB)

    Al, Gore

    2007-01-15

    Our climate crisis may at times appear to be happening slowly, but in fact it is happening very quickly-and has become a true planetary emergency. The Chinese expression for crisis consists of two characters. The first is a symbol for danger; the second is a symbol for opportunity. In order to face down the danger that is stalking us and move through it, we first have to recognize that we are facing a crisis. So why is it that our leaders seem not to hear such clarion warnings? Are they resisting the truth because they know that the moment they acknowledge it, they will face a moral imperative to act? Is it simply more convenient to ignore the warnings? Perhaps, but inconvenient truths do not go away just because they are not seen. Indeed, when they are responded to, their significance does not diminish; it grows. (author)

  7. Telling the truth and medical ethics.

    Science.gov (United States)

    Gillon, R

    1985-11-30

    Gillon discusses the conflicting moral implications of the principles of respect for autonomy and of beneficence and non-maleficence when telling patients the truth about their illnesses and treatments. The case for nondisclosure is usually based on three major arguments: that doctors' Hippocratic obligations to benefit and not harm their patients take precedence over not deceiving them; that the range of possible conditions and prognoses makes it difficult for physicians to know the full truth or for patients to comprehend it; and that patients do not wish to be told dire news. Gillon rejects each of these arguments, contending that avoiding deceit is a basic moral norm that can be defended from utilitarian as well as deontological points of view. With regard to the argument concerning patient attitudes, he recommends that pilot studies be done, asking patients what their preferences are at the time they register with a doctor or hospital.

  8. Truth and beauty in contemporary urban photography

    Directory of Open Access Journals (Sweden)

    Daniele Colistra

    2014-05-01

    Full Text Available Does the city still need photography? Or does it show itself more effectively through other forms of communication? The question brings us back almost two hundred years, to the time of the spread of the first daguerreotypes, when the query was: Does the city still need painting? The question raises several other issues - truth and beauty, analogue and digital, truth and photo editing - that this essay examines by comparing some images. We are convinced that “the more we can speak of a picture, the more unlikely it is to speak of photography” (R. Barthes). The essay describes the work of some artists/photographers who have addressed the issue of urban photography - works in which the figurative and visionary component is based on the interaction of traditional shooting techniques and processes of digital post-production.

  9. Virtue and truth in clinical science.

    Science.gov (United States)

    Gillett, G

    1995-06-01

    Since the time of Hippocrates, medical science sought to develop a practice based on "knowledge rather than opinion". However, in the light of recent alternative approaches to healing and a philosophy of science that, through thinkers like Kuhn, Rorty, and Foucault, is critical of claims to objective truth, we must reappraise the way in which medical interventions can be based on proven pathophysiological knowledge rather than opinion. Developing insights in Foucault, Lacan, and Wittgenstein, this essay argues for a recovery of the Aristotelian idea of a techne, where there is a dynamic interplay between praxis and conceptualization. The result is a post-Kuhnian epistemology for medical science that recognizes the evaluative dimension of knowledge, but that also looks to a Platonic conception of the good as the ultimate constraint on human thought, thus avoiding the radically self-contained accounts of truth found in some post-modern thinkers.

  10. Truth, virtue and beauty: midwifery and philosophy.

    Science.gov (United States)

    Parker, J M; Gibbs, M

    1998-09-01

    In this paper, we outline three moments in the history of Western philosophy--Classical Greek, Modernity, Postmodernity--and the ways in which issues of truth, virtue and beauty have been understood within these philosophical formations. In particular, we investigate the ways in which notions of truth, virtue and beauty influenced the orthodoxy of birthing practices at these different moments. Finally, we examine current, critical reflections on the role of the intellectual in postmodern society and use these reflections as a heuristic for understanding the role of the contemporary midwife. We suggest that midwifery must reconcile two divergent demands. The first is to mobilise the positive, instrumental benefits of Western medical science to improve mortality and morbidity outcomes. The second is to remain sensitive to the cultural and social meanings attached to traditional birthing practices and to understand the roles these play in the well-being of mother and child.

  11. Truthful Unsplittable Flow for Large Capacity Networks

    CERN Document Server

    Azar, Yossi; Gutner, Shai

    2008-01-01

    In this paper, we focus our attention on the large capacities unsplittable flow problem in a game theoretic setting. In this setting, there are selfish agents, which control some of the requests characteristics, and may be dishonest about them. It is worth noting that in game theoretic settings many standard techniques, such as randomized rounding, violate certain monotonicity properties, which are imperative for truthfulness, and therefore cannot be employed. In light of this state of affairs, we design a monotone deterministic algorithm, which is based on a primal-dual machinery, which attains an approximation ratio of $\\frac{e}{e-1}$, up to a disparity of $\\epsilon$ away. This implies an improvement on the current best truthful mechanism, as well as an improvement on the current best combinatorial algorithm for the problem under consideration. Surprisingly, we demonstrate that any algorithm in the family of reasonable iterative path minimizing algorithms, cannot yield a better approximation ratio. Conseque...

  12. Do citation systems represent theories of truth?

    Directory of Open Access Journals (Sweden)

    Betsy Van der Veer Martens

    2001-01-01

    Full Text Available This article suggests that the citation can be viewed not only as a "concept symbol" but also as a "boundary object". The scientific, legal, and patent citation systems in America are examined at the micro, meso, and macro levels in order to understand how they function as commodified theories of truth in contemporary knowledge representation. This approach also offers a meta-theoretical overview of existing citation research efforts in science, law, and technology that may be of interdisciplinary interest.

  13. Finding a single point of truth

    Energy Technology Data Exchange (ETDEWEB)

    Sokolov, S.; Thijssen, H. [Autodesk Inc, Toronto, ON (Canada); Laslo, D.; Martin, J. [Autodesk Inc., San Rafael, CA (United States)

    2010-07-01

    Electric utilities collect large volumes of data at every level of their business, including SCADA, Smart Metering and Smart Grid initiatives, LIDAR and other 3D imagery surveys. Different types of database systems are used to store the information, rendering data flow within the utility business process extremely complicated. The industry trend has been to endure redundancy of data input and maintenance of multiple copies of the same data across different solution data sets. Efforts have been made to improve the situation with point to point interfaces, but with the tools and solutions available today, a single point of truth can be achieved. Consolidated and validated data can be published into a data warehouse at the right point in the process, making the information available to all other enterprise systems and solutions. This paper explained how the single point of truth spatial data warehouse and process automation services can be configured to streamline the flow of data within the utility business process using the initiate-plan-execute-close (IPEC) utility workflow model. The paper first discussed geospatial challenges faced by utilities and then presented the approach and technology aspects. It was concluded that adoption of systems and solutions that can function with and be controlled by the IPEC workflow can provide significant improvement for utility operations, particularly if those systems are coupled with the spatial data warehouse that reflects a single point of truth. 6 refs., 3 figs.

  14. Culture, Truth, and Science After Lacan.

    Science.gov (United States)

    Gillett, Grant

    2015-12-01

    Truth and knowledge are conceptually related and there is a way of construing both that implies that they cannot be solely derived from a description that restricts itself to a set of scientific facts. In the first section of this essay, I analyse truth as a relation between a praxis, ways of knowing, and the world. In the second section, I invoke the third thing-the objective reality on which we triangulate as knowing subjects for the purpose of complex scientific endeavours like medical science and clinical care. Such praxes develop robust methods of "keeping in touch" with disease and illness (like biomarkers). An analysis drawing on philosophical semantics motivates the needed (anti-scientistic) account of meaning and truth (and therefore knowledge) and underpins the following argument: (i) the formulation and dissemination of knowledge rests on language; (ii) language is selective in what it represents in any given situation; (iii) the praxes of a given (sub)culture are based on this selectivity; but (iv) human health and illness involve whole human beings in a human life-world; therefore, (v) medical knowledge should reflectively transcend, where required, biomedical science towards a more inclusive view. Parts three and four argue that a post-structuralist (Lacanian) account of the human subject can avoid both scientism and idealism or unconstrained relativism.

  15. Ground Truth: The Implications of Joint Interdependence for Air and Ground Operations

    Science.gov (United States)

    2006-03-01

    Quarterly, no. 37, Spring 2005, 40-45. Clancy, Tom, and General Charles Horner (Ret.), Every Man a Tiger, New York, N.Y.: G. P. Putnam's Sons...Force Magazine Online, vol. 87, no. 6, June 2004, pp. 1-11 26 Christopher R. Paparone, COL, USA, and James A. Crupi, Ph.D., “What is Joint

  16. Study on Ground Automatic Identification Technology for Intelligent Vehicle Based on Vision Sensor

    Institute of Scientific and Technical Information of China (English)

    崔根群; 余建明; 赵娴; 赵丛琳

    2011-01-01

    The ground automatic identification technology for intelligent vehicles takes the Leobot-Edu autonomous vehicle as a test platform and uses a DH-HV2003UC-T vision sensor to collect image information from five common road surfaces (cobbled road, concrete road, dirt road, grass road, tile road). The MATLAB image-processing module is then used to perform coding compression, restoration and reconstruction, smoothing, sharpening, enhancement, feature extraction and other related processing, after which the MATLAB BP neural network module carries out pattern recognition. Analysis of the recognition results shows that the network training target error is 20% and that the road recognition rate of the system reaches the intended requirement, so the approach can be applied broadly to intelligent vehicles, mobile robots and related fields.
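    As a rough illustration of the feature-extraction stage that precedes the BP-network classifier (the work itself uses MATLAB; this Python sketch with hypothetical feature choices is not the authors' code):

```python
import numpy as np

def texture_features(gray):
    """Crude per-image features for road-surface classification
    (illustrative choices): intensity mean, intensity standard
    deviation, and a gradient-energy texture measure."""
    g = gray.astype(float)
    gy, gx = np.gradient(g)
    grad_energy = np.mean(gx ** 2 + gy ** 2)
    return np.array([g.mean(), g.std(), grad_energy])
```

Feature vectors like this one would then be fed to a small feed-forward (BP) network trained on labelled examples of each surface type.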

  17. The use of the truth and deception in dementia care amongst general hospital staff.

    Science.gov (United States)

    Turner, Alex; Eccles, Fiona; Keady, John; Simpson, Jane; Elvish, Ruth

    2017-08-01

    Deceptive practice has been shown to be endemic in long-term care settings. However, little is known about the use of deception in dementia care within general hospitals and staff attitudes towards this practice. This study aimed to develop understanding of the experiences of general hospital staff and explore their decision-making processes when choosing whether to tell the truth or deceive a patient with dementia. This qualitative study drew upon a constructivist grounded theory approach to analyse data gathered from semi-structured interviews with a range of hospital staff. A model, grounded in participant experiences, was developed to describe their decision-making processes. Participants identified particular triggers that set in motion the need for a response. Various mediating factors influenced how staff chose to respond to these triggers. Overall, hospital staff were reluctant to either tell the truth or to lie to patients. Instead, 'distracting' or 'passing the buck' to another member of staff were preferred strategies. The issue of how truth and deception are defined was identified. The study adds to the growing research regarding the use of lies in dementia care by considering the decision-making processes for staff in general hospitals. Various factors influence how staff choose to respond to patients with dementia and whether deception is used. Similarities and differences with long-term dementia care settings are discussed. Clinical and research implications include: opening up the topic for further debate, implementing staff training about communication and evaluating the impact of these processes.

  18. Long-term ionospheric anomaly monitoring for ground based augmentation systems

    Science.gov (United States)

    Jung, Sungwook; Lee, Jiyun

    2012-08-01

    Extreme ionospheric anomalies can pose a potential integrity threat to ground-based augmentation of the Global Positioning System (GPS), and thus the development of ionospheric anomaly threat models for each region of operation is essential for system design and operation. This paper presents a methodology for automated long-term ionospheric anomaly monitoring, which will be used to build an ionospheric anomaly threat model, evaluate its validity over the life cycle of the system, continuously monitor ionospheric anomalies, and update the threat model if necessary. This procedure automatically processes GPS data collected from external networks and estimates ionospheric gradients at regular intervals. If ionospheric gradients large enough to be potentially hazardous to users are identified, manual data examination is triggered. This paper also develops a simplified truth processing method to create precise ionospheric delay estimates in near real-time, which is the key to automating the ionospheric monitoring procedure. The performance of the method is examined using data from the 20 November 2003 and 9 November 2004 ionospheric storms. These results demonstrate the effectiveness of simplified truth processing within long-term ionosphere monitoring. From the case studies, the automated procedure successfully identified extreme ionospheric anomalies, including the two worst ionospheric gradients observed and validated previously based on manual analysis. The automation of data processing enables us to analyze ionospheric data continuously going forward and to more accurately categorize ionospheric behavior under both nominal and anomalous conditions.
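    The screening logic described above — estimating spatial ionospheric gradients and triggering manual review when they are large enough to be potentially hazardous — reduces to a simple computation per station pair. The sketch below is illustrative; the function names and the 300 mm/km screening threshold are assumptions, not values from the paper:

```python
def ionospheric_gradient(delay_a_m, delay_b_m, baseline_km):
    """Spatial ionospheric gradient between two stations, in mm/km,
    from slant delay estimates (metres) and baseline length (km)."""
    return abs(delay_a_m - delay_b_m) * 1000.0 / baseline_km

def screen_pairs(pairs, threshold_mm_per_km=300.0):
    """Return (delay_a, delay_b, baseline) tuples whose gradient exceeds
    a screening threshold; flagged pairs would go to manual review."""
    return [p for p in pairs if ionospheric_gradient(*p) > threshold_mm_per_km]
```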

  19. MATHEMATICAL-Universe-Hypothesis(MUH) BECOME SCENARIO(MUS)!!! (NOT YET A THEORY) VIA 10-DIGITS[ 0 --> 9] SEPHIROT CREATION AUTOMATICALLY from DIGITS AVERAGED-PROBABILITY Newcomb-Benford LOG-Law; UTTER-SIMPLICITY!!!: It's a Jack-in-the-Box Universe: Accidental?/Purposeful?; EMET/TRUTH!!!

    Science.gov (United States)

    Siegel, Edward Carl-Ludwig

    2015-04-01

    Siegel(2012) 10-DIGITS[0 --> 9] AVERAGE PROBABILITY LOG-Law SCALE-INVARIANCE UTTER-SIMPLICITY: Kabbala SEPHIROT SCENARIO AUTOMATICALLY CREATES a UNIVERSE: (1) a big-bang[bosons(BEQS) created from Newcomb[Am.J.Math.4(1),39(1881;THE discovery of the QUANTUM!!!)-Poincare[Calcul des Probabilites,313(12)]-Weyl[Goett.Nach.(14);Math.Ann.77,313(16)] DIGITS AVERAGE STATISTICS LOG-Law[ = log(1 +1/d) = log([d +1]/d)] algebraic-inversion, (2)[initial (at first space-time point created) c = ∞ elongating to timelike-pencil spreading into finite-c light-cone] hidden-dark-energy (HDE)[forming at every-spacetime-point], (3) inflation[logarithm algebraic-inversion-to exponential], (4) hidden[in Siegel(87) ``COMPLEX quantum-statistics in (Nottale-Linde)FRACTAL-dimensions'' expansion around unit-circle/roots-of-unity]-dark-matter(HDM), (4)null massless bosons(E) --> Mellin-(light-speed squared)-transform/Englert-Higgs ``mechanism'' -->(timelike) massive fermions(m), (5) cosmic-microwave-background (CMB)[power-spectrum] Zipf-law HYPERBOLICITY, (6) supersymmetry(SUSY) [projective-geometry conic-sections/conics merging in R/ C projective-plane point at ∞]. UTTER-SIMPLICITY!!!
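    The digit-average log-law invoked in this and the following abstracts is the Newcomb-Benford first-digit distribution, P(d) = log10(1 + 1/d). In its standard form it is defined for leading digits d = 1..9 (not 0..9), and those probabilities sum exactly to 1:

```python
import math

# Newcomb-Benford first-digit probabilities: P(d) = log10(1 + 1/d), d = 1..9
probs = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
```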

  20. SEPHIROT: Scenario for Universe-Creation AUTOMATICALLY from Digits On-Average Euler-Bernoulli-Kummer-Riemann-Newcomb-Poincare-Weyl-Benford-Kac-Raimi-Hill-Antonoff-Siegel ``Digit-Physics'' Logarithm-Law: ``It's a Jack-in-the-Box Universe'': EMET/TRUTH!!!

    Science.gov (United States)

    Siegel, Edward Carl-Ludwig; Young, Frederic; Wignall, Janis

    2013-04-01

    SEPHIROT: Siegel[http://fqxi.org/community/forum/topic/1553]: Ten-[0->9]-Digits; Average Log-Law SCALE-Invariance; Utter-Simplicity: ``Complexity'' (vs. ``Complicatedness''); Zipf-law/Hyperbolicity/Inevitability SCENARIO AUTOMATICALLY CREATES & EVOLVES a UNIVERSE: inflation, a big-bang, bosons(E) -> Mellin-(c^2)-transform -> fermions(m), hidden-dark-energy(HDE), hidden-dark-matter(HDM), cosmic-microwave-background(CMB), supersymmetry(SUSY), PURPOSELY NO: theories, models, mechanisms, processes, parameters, assumptions, WHATSOEVER: It's a ``Jack-in-the-Box'' Universe!!!: ONLY VIA: Newcomb [Am.J.Math.4(1),39(1881)] QUANTUM-discovery!!!-Benford-Siegel-Antonoff[AMS.Joint-Mtg.(02)-Abs.#973-60-124!!!] inversion to ONLY BEQS with d=0 BEC: ``Digit-Physics''!; Log fixed-point invariance(s): [base=units=SCALE] of digits classic (not classical!) average [CAUSING] log statistical-correlations = log(1+1/d), with physics-crucial d=0 BEC singularity/pole, permits SEPHIROT!!!: ``digits are quanta are bosons because bosons are and always were digits!!!'': Digits = Bosons with d=0 BEC(!!!) & expansion to Zipf-law Hyperbolicity INEVITABILITY CMB!

  1. SEPHIROT: SCENARIO for CREATION AUTOMATICALLY from DIGITS AVERAGED-PROBABILITY Newcomb-Benford Log-Law: Inflation, BosonS, a Maxwell-Boltzmann Big-Bang Fireball, FermionS, HDE, HDM, CMB; UTTER-SIMPLICITY PURPOSELY SANS ANYthing!!!: It's a Jack-in-the-Box Univers: A Consciousness? EMET/TRUTH!!!

    Science.gov (United States)

    Siegel, Edward Carl-Ludwig; Marvin Antonoff/Albert Overhauser(RIP)/Frederic Young/Adolph Smith/Irwin Wunderman(RIP/Janis Wignall Team

    2013-03-01

    Siegel[http://fqxi.org/community/forum/topic/1553]: TEN-DIGITS[0,...,9] PROBABILITY AVERAGE LOG-Law SCALE-INVARIANCE; Utter-Simplicity: ``Complexity'' (Versus ``Complicatedness'') Zipf-law/Hyperbolicity/Inevitability (Archimedes), vs. Pareto-law, SCENARIO AUTOMATICALLY CREATES a UNIVERSE: inflation, a big-bang, bosons(E) -> Mellin-(c^2)-transform -> fermions(m), hidden-dark-energy (HDE), hidden-dark-matter (HDM), cosmic-microwave-background (CMB), supersymmetry (SUSY), PURPOSELY SANS ANY: theories, models, mechanisms, processes, parameters, assumptions, ... WHATSOEVER: It's a ``Jack-in-the-Box'' Universe!!! ONLY VIA: Bose-{Euler[(1732)] sum = product over-reals R-Riemann[Monats. Akad., (1859)] sum = product over-complexes-Bernoulli-Kummer}-Newcomb[Am. J. Math. 4(1), 39(1881) THE discovery of the QUANTUM!!!]-{Planck(1901)-Einstein(1905)-Sommerfeld}-Poincare[Calcul des Probabilités, 313 (1912)]-Weyl[Goett. Nach.(1914); Math. Ann. 77, 313(1916)]-{Bose(1924)-Einstein(1925)}-VS.

  2. Scalable and Detail-Preserving Ground Surface Reconstruction from Large 3D Point Clouds Acquired by Mobile Mapping Systems

    Science.gov (United States)

    Craciun, D.; Serna Morales, A.; Deschaud, J.-E.; Marcotegui, B.; Goulette, F.

    2014-08-01

    Mobile mapping systems equipped with active 3D sensors can acquire the environment at high sampling rates and high vehicle velocities. While providing an effective solution for environment sensing over large-scale distances, such acquisition yields only a discrete representation of the geometry. Thus, a continuous map of the underlying surface must be built. Mobile acquisition introduces several constraints for state-of-the-art surface reconstruction algorithms. Smoothing becomes a difficult task when sharp depth features must be recovered while avoiding mesh shrinkage. In addition, interpolation-based techniques are not suitable for noisy datasets acquired by Mobile Laser Scanning (MLS) systems. Furthermore, scalability is a major concern for enabling real-time rendering over large distances while preserving geometric details. This paper presents a fully automatic ground surface reconstruction framework capable of dealing with the aforementioned constraints. The proposed method exploits the quasi-flat geometry of the ground through a morphological segmentation algorithm. A planar Delaunay triangulation is then applied in order to reconstruct the ground surface. A smoothing procedure eliminates high-frequency peaks while preserving geometric details, providing a regular ground surface. Finally, a decimation step is applied in order to cope with scalability constraints over large distances. Experimental results on real data acquired in large urban environments are presented, and a performance evaluation with respect to ground truth measurements demonstrates the effectiveness of our method.
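
The pipeline described above (segment the quasi-flat ground, triangulate in the plane, smooth out high-frequency peaks) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `peak_threshold` value and the median-of-neighbors smoothing rule are invented for the example.

```python
import numpy as np
from scipy.spatial import Delaunay

def reconstruct_ground(points, peak_threshold=0.15):
    """Triangulate quasi-flat ground points in the XY plane and
    suppress high-frequency height peaks (illustrative parameters)."""
    tri = Delaunay(points[:, :2])          # planar triangulation of (x, y)
    z = points[:, 2].copy()

    # Build vertex adjacency from the triangle list.
    neighbors = [set() for _ in range(len(points))]
    for a, b, c in tri.simplices:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))

    # Replace heights that deviate strongly from the local median.
    for i, nbrs in enumerate(neighbors):
        if nbrs:
            local = np.median(z[list(nbrs)])
            if abs(z[i] - local) > peak_threshold:
                z[i] = local

    smoothed = points.copy()
    smoothed[:, 2] = z
    return smoothed, tri.simplices
```

A decimation step (e.g. edge-collapse simplification of the resulting mesh) would follow in the full pipeline; it is omitted here for brevity.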

  3. The vital role of transcendental truth in science.

    Science.gov (United States)

    Charlton, Bruce G

    2009-04-01

    I have come to believe that science depends for its long-term success on an explicit and pervasive pursuit of the ideal of transcendental truth. 'Transcendental' implies that a value is ideal and ultimate - it is aimed at but can only imperfectly be known, achieved or measured. So, transcendental truth is located outside of science; beyond scientific methods, processes and peer consensus. Although the ultimate scientific authority of a transcendental value of truth was a view held almost universally by the greatest scientists throughout recorded history, modern science has all but banished references to truth from professional scientific discourse - these being regarded as wishful, mystical and embarrassing at best, and hypocritical or manipulative at worst. With truth excluded, the highest remaining evaluation mechanism is 'professional consensus' or peer review - beyond which there is no higher court of appeal. Yet in Human Accomplishment, Murray argues that cultures which foster great achievement need transcendental values (truth, beauty and virtue) to be a live presence in the culture; such that great artists and thinkers compete to come closer to the ideal. So a scientific system including truth as a live presence apparently performs better than a system which excludes truth. Transcendental truth therefore seems to be real in the pragmatic sense that it makes a difference. To restore the primacy of truth to science a necessary step would be to ensure that only truth-seekers were recruited to the key scientific positions, and to exclude from leadership those who are untruthful or exhibit insufficient devotion to the pursuit of truth. In sum, to remain anchored in its proper role, science should through 'truth talk' frequently be referencing normal professional practice to transcendental truth values. Ultimately, science should be conducted at every level, from top to bottom, on the basis of what Bronowski termed the 'habit of truth'.
Such a situation currently

  4. The laterality effect: myth or truth?

    Science.gov (United States)

    Cohen Kadosh, Roi

    2008-03-01

    Tzelgov and colleagues [Tzelgov, J., Meyer, J., and Henik, A. (1992). Automatic and intentional processing of numerical information. Journal of Experimental Psychology: Learning, Memory and Cognition, 18, 166-179.] offered the existence of the laterality effect as a post-hoc explanation for their results. According to this effect, numbers are classified automatically as small/large versus a standard point under autonomous processing of numerical information. However, the genuineness of the laterality effect was never examined, or was confounded with the numerical distance effect. In the current study, I controlled the numerical distance effect and observed that the laterality effect does exist, and affects the automatic processing of numerical information. The current results suggest that the laterality effect should be taken into account when using paradigms that require automatic numerical processing such as Stroop-like or priming tasks.

  5. Preferences for truthfulness: Heterogeneity among and within individuals

    OpenAIRE

    Gibson Brandon, Rajna Nicole; Tanner, Carmen; Alexander F. WAGNER

    2013-01-01

    We conduct an experiment assessing the extent to which people trade off the economic costs of truthfulness against the intrinsic costs of lying. The results allow us to reject a type-based model. People's preferences for truthfulness do not identify them as only either "economic types" (who care only about consequences) or "ethical types" (who care only about process). Instead, we find that preferences for truthfulness are heterogeneous among individuals. Moreover, when examining possible sou...

  6. Preferences for truthfulness: Heterogeneity among and within individuals

    OpenAIRE

    Rajna GIBSON; Tanner, Carmen; Alexander F. WAGNER

    2012-01-01

    We conduct an experiment assessing the extent to which people trade off the economic costs of truthfulness against the intrinsic costs of lying. The results allow us to reject a type-based model. People's preferences for truthfulness do not identify them as only either "economic types" (who care only about consequences) or "ethical types" (who care only about process). Instead, we find that preferences for truthfulness are heterogeneous among individuals. Moreover, when examining possible sou...

  7. Automatic design of digital synthetic gene circuits.

    Directory of Open Access Journals (Sweden)

    Mario A Marchisio

    2011-02-01

    De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input-output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions.

  8. Automatic design of digital synthetic gene circuits.

    Science.gov (United States)

    Marchisio, Mario A; Stelling, Jörg

    2011-02-01

    De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input-output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions.
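
The core step, turning a truth table into a gate scheme, can be illustrated with a plain sum-of-products construction. This is a generic logic-synthesis sketch, not the authors' promoter/ribosome-binding-site-based tool: each output-1 row becomes an AND minterm, and the minterms are joined by an OR.

```python
from itertools import product

def sop_from_truth_table(n_inputs, outputs):
    """Derive a sum-of-products scheme (AND minterms joined by OR)
    from a truth table given as one output bit per input row."""
    minterms = []
    for row, out in zip(product([0, 1], repeat=n_inputs), outputs):
        if out:
            minterms.append(row)  # keep rows where the output is 1
    return minterms

def evaluate(minterms, inputs):
    """Evaluate the OR of AND minterms on a concrete input tuple."""
    return int(any(all(bit == want for bit, want in zip(inputs, term))
                   for term in minterms))

# XOR truth table for two inputs, rows ordered 00, 01, 10, 11.
xor = sop_from_truth_table(2, [0, 1, 1, 0])
```

A design tool such as the one described would then map each AND minterm onto a biological Boolean gate and rank the resulting schemes; here the minterm list itself stands in for that scheme.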

  9. Automatic reconstruction of neural morphologies with multi-scale graph-based tracking

    Directory of Open Access Journals (Sweden)

    Anna eChoromanska

    2012-06-01

    Neurons have complicated axonal and dendritic morphologies and their peculiar structures probably reflect functional differences and thus have been traditionally used to classify neurons into different classes. Because of this, reconstruction of neural morphologies is an important step towards understanding the structure of the brain circuits. Manual reconstructions of 3D neural structure from image stacks obtained using confocal or bright-field microscopy are time-consuming and partly subjective, and, also given the large number and variety of neuronal cell types, it appears essential to develop automatic or semi-automatic reconstruction algorithms. Nevertheless, despite the fast development of new techniques in data acquisition and image processing, automatic reconstructions still remain a challenge. In this paper we present a novel and fast method for tracking neural morphologies in 3D space with simultaneous detection of branching processes. The method exploits some existing procedures and adds to them the machine vision technique of multiscaling. Specifically, the algorithm starts from a seed point and tracks the structure using a ball of a variable radius. In each step the algorithm moves the ball center to the new point on the ball’s surface with the shortest Dijkstra path. It detects the presence of the branching point by examining the spatial spread of points on the surface of the ball. The algorithm scales the ball size until branches are well separated and then continues tracking each branch. We evaluate the performance of our algorithm on synthetic data stacks obtained by manual reconstructions of neural cells, corrupted with different levels of noise. Additionally, we report results on real data sets. Our proposed algorithm is able to reconstruct 3D neural morphology that is highly similar to the ground truth and simultaneously achieves 90% average precision and 81% average recall in branching region detection.
The introduction of
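
The tracking step above selects, among candidate points on the ball's surface, the one with the shortest Dijkstra path from the current center. A minimal sketch of that selection, over a hypothetical weighted adjacency graph (the graph construction from image data is not shown), is:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` over a weighted graph
    given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def next_center(graph, center, surface_points):
    """Pick the surface point with the shortest Dijkstra path,
    mirroring one tracking step of the described algorithm."""
    dist = dijkstra(graph, center)
    return min(surface_points, key=lambda p: dist.get(p, float("inf")))
```

Branch detection would additionally inspect the spatial spread of the near-minimal surface points; that step is omitted here.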

  10. N3DFix: an Algorithm for Automatic Removal of Swelling Artifacts in Neuronal Reconstructions.

    Science.gov (United States)

    Conde-Sousa, Eduardo; Szücs, Peter; Peng, Hanchuan; Aguiar, Paulo

    2017-01-01

    It is well established that not only electrophysiology but also morphology plays an important role in shaping the functional properties of neurons. In order to properly quantify morphological features it is first necessary to translate observational histological data into 3-dimensional geometric reconstructions of the neuronal structures. This reconstruction process, independently of being manual or (semi-)automatic, requires several preparation steps (e.g. histological processing) before data acquisition using specialized software. Unfortunately these processing steps likely produce artifacts which are then carried to the reconstruction, such as tissue shrinkage and formation of swellings. If not accounted for and corrected, these artifacts can change significantly the results from morphometric analysis and computer simulations. Here we present N3DFix, an open-source software which uses a correction algorithm to automatically find and fix swelling artifacts in neuronal reconstructions. N3DFix works as a post-processing tool and therefore can be used in either manual or (semi-)automatic reconstructions. The algorithm's internal parameters have been defined using a "ground truth" dataset produced by a neuroanatomist, involving two complementary manual reconstruction procedures: in the first, neuronal topology was faithfully reconstructed, including all swelling artifacts; in the second procedure a meticulous correction of the artifacts was manually performed directly during neuronal tracing. The internal parameters of N3DFix were set to minimize the differences between manual amendments and the algorithm's corrections. It is shown that the performance of N3DFix is comparable to careful manual correction of the swelling artifacts. To promote easy access and wide adoption, N3DFix is available in NEURON, Vaa3D and Py3DN.
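
A toy version of swelling correction, flagging radii along a traced branch that jump well above a local median baseline, might look like the following. The threshold and window are invented for illustration; N3DFix's actual internal parameters were fitted against the neuroanatomist's ground truth dataset described above.

```python
import statistics

def remove_swellings(radii, max_ratio=2.0, window=3):
    """Clamp radii that exceed `max_ratio` times the median of their
    neighbors within `window` positions along the branch."""
    fixed = list(radii)
    for i in range(len(radii)):
        lo, hi = max(0, i - window), min(len(radii), i + window + 1)
        neighbors = [radii[j] for j in range(lo, hi) if j != i]
        baseline = statistics.median(neighbors)
        if fixed[i] > max_ratio * baseline:
            fixed[i] = baseline  # treat the bump as a swelling artifact
    return fixed
```

In a real reconstruction the correction would operate on the 3D point-and-radius tree rather than a flat list, but the detect-and-clamp idea is the same.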

  11. Freedom of Expression, Diversity, and Truth

    DEFF Research Database (Denmark)

    Kappel, Klemens; Hallsson, Bjørn Gunnar; Møller, Emil Frederik Lundbjerg

    2016-01-01

    The aim of this chapter is to examine how diversity benefits deliberation, information exchange and other socio-epistemic practices associated with free speech. We separate five distinct dimensions of diversity, and discuss a variety of distinct mechanisms by which various forms of diversity may...... be thought to have epistemically valuable outcomes. We relate these results to the moral justification of free speech. Finally, we characterise a collective action problem concerning the compliance with truth-conducive norms of deliberation, and suggest what may solve this problem....

  12. Logic and truth: Some logics without theorems

    Directory of Open Access Journals (Sweden)

    Jayanta Sen

    2008-08-01

    Two types of logical consequence are compared: one, with respect to matrix and designated elements and the other with respect to ordering in a suitable algebraic structure. Particular emphasis is laid on algebraic structures in which there is no top-element relative to the ordering. The significance of this special condition is discussed. Sequent calculi for a number of such structures are developed. As a consequence it is re-established that the notion of truth as such, not to speak of tautologies, is inessential in order to define validity of an argument.

  13. The truth on journalism: relations between its practice and discourse

    Directory of Open Access Journals (Sweden)

    Daiane Bertasso Ribeiro

    2011-04-01

    This article proposes a theoretical approach to the relations that journalism establishes with the concept of truth. The notion of truth in Foucault leads the debate. This reflection centers on how journalism builds discursive strategies that produce effects of truth on its reports. The journalistic discourse presents itself as truthful, although its constructive discourse of the world is a result of rules, practices and values. The debate of “truth” allows us to comprehend the complexity and particularities of journalism as a discursive practice that is reflected in the social knowledge of reality.

  14. 78 FR 78299 - Proposed Establishment of Class E Airspace; Truth or Consequences, NM

    Science.gov (United States)

    2013-12-26

    ... Federal Aviation Administration 14 CFR Part 71 Proposed Establishment of Class E Airspace; Truth or... Truth or Consequences VHF Omni-Directional Radio Range Tactical Air Navigation Aid (VORTAC), Truth or... feet above the surface at the Truth or Consequences VORTAC navigation aid, Truth or Consequences, NM...

  15. Settling No Conflict in the Public Place: Truth in Education, and in Rancierean Scholarship

    Science.gov (United States)

    Bingham, Charles

    2010-01-01

    This essay offers an educational understanding of truth deriving from the work of Jacques Ranciere. Unlike other educational accounts--the traditional, progressive, and critical accounts--of truth that take education as a way of approaching pre-existing truths (or lack of pre-existing truths), this essay establishes an account of truth that is…

  16. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI.

    Science.gov (United States)

    Avendi, M R; Kheradvar, Arash; Jafarkhani, Hamid

    2016-05-01

    Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground truth data. Convolutional networks are employed to automatically detect the LV chamber in the MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.
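
Among the validation metrics reported, the Dice metric is straightforward to compute from binary segmentation masks; a minimal version (assuming both masks have the same shape) is:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0
```

A predicted mask compared against the ground truth contour rasterized to the same grid would be scored this way; average perpendicular distance and conformity require the contour geometry and are not shown.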

  17. Truthfulness in transplantation: non-heart-beating organ donation.

    Science.gov (United States)

    Potts, Michael

    2007-08-24

    The current practice of organ transplantation has been criticized on several fronts. The philosophical and scientific foundations for brain death criteria have been crumbling. In addition, donation after cardiac death, or non-heart-beating organ donation (NHBD), has been attacked on grounds that it mistreats the dying patient and uses that patient only as a means to an end for someone else's benefit. Verheijde, Rady, and McGregor attack the deception involved in NHBD, arguing that the donors are not dead and that potential donors and their families should be told that is the case. Thus, they propose abandoning the dead donor rule and allowing NHBD with strict rules concerning adequate informed consent. Such honesty about NHBD should be welcomed. However, NHBD violates a fundamental end of medicine, nonmaleficence, "do no harm." Physicians should not be harming or killing patients, even if it is for the benefit of others. Thus, although Verheijde and his colleagues should be congratulated for calling for truthfulness about NHBD, they do not go far enough and call for an elimination of such an unethical procedure from the practice of medicine.

  18. Truthfulness in transplantation: non-heart-beating organ donation

    Directory of Open Access Journals (Sweden)

    Potts Michael

    2007-08-01

    The current practice of organ transplantation has been criticized on several fronts. The philosophical and scientific foundations for brain death criteria have been crumbling. In addition, donation after cardiac death, or non-heart-beating organ donation (NHBD), has been attacked on grounds that it mistreats the dying patient and uses that patient only as a means to an end for someone else's benefit. Verheijde, Rady, and McGregor attack the deception involved in NHBD, arguing that the donors are not dead and that potential donors and their families should be told that is the case. Thus, they propose abandoning the dead donor rule and allowing NHBD with strict rules concerning adequate informed consent. Such honesty about NHBD should be welcomed. However, NHBD violates a fundamental end of medicine, nonmaleficence, "do no harm." Physicians should not be harming or killing patients, even if it is for the benefit of others. Thus, although Verheijde and his colleagues should be congratulated for calling for truthfulness about NHBD, they do not go far enough and call for an elimination of such an unethical procedure from the practice of medicine.

  19. Validation of experts versus atlas-based and automatic registration methods for subthalamic nucleus targeting on MRI

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Castro, F.J.; Cuisenaire, O.; Thiran, J.P. [Ecole Polytechnique Federale de Lausanne (EPFL) (Switzerland). Signal Processing Inst.; Pollo, C. [Ecole Polytechnique Federale de Lausanne (EPFL) (Switzerland). Signal Processing Inst.; Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne (Switzerland). Dept. of Neurosurgery; Villemure, J.G. [Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne (Switzerland). Dept. of Neurosurgery

    2006-03-15

    Objectives: In functional stereotactic neurosurgery, one of the cornerstones upon which success and operating time depend is accurate targeting. The subthalamic nucleus (STN) is the usual target involved when applying deep brain stimulation for Parkinson's disease (PD). Unfortunately, the STN is usually not clearly visible in common medical imaging modalities, which justifies the use of atlas-based segmentation techniques to infer the STN location. Materials and methods: Eight bilaterally implanted PD patients were included in this study. A three-dimensional T1-weighted sequence and inversion recovery T2-weighted coronal slices were acquired pre-operatively. We propose a methodology for the construction of a ground truth of the STN location and a scheme that allows both to perform a comparison between different non-rigid registration algorithms and to evaluate their usability to locate the STN automatically. Results: The intra-expert variability in identifying the STN location is 1.06±0.61 mm while the best non-rigid registration method gives an error of 1.80±0.62 mm. On the other hand, statistical tests show that an affine registration with only 12 degrees of freedom is not enough for this application. Conclusions: Using our validation-evaluation scheme, we demonstrate that automatic STN localization is possible and accurate with non-rigid registration algorithms. (orig.)
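
The reported accuracies are Euclidean distances between expert-defined and automatically registered target coordinates, summarized as mean ± standard deviation. That summary can be sketched as follows (the coordinate values in the test are hypothetical, not patient data):

```python
import math
import statistics

def localization_errors(expert_targets, auto_targets):
    """Per-case Euclidean errors (mm) between expert and automatic
    target coordinates, plus their mean and sample stdev."""
    errors = [math.dist(e, a) for e, a in zip(expert_targets, auto_targets)]
    return errors, statistics.mean(errors), statistics.stdev(errors)
```

With per-patient 3D coordinates from both raters, this yields figures in the same form as the 1.80±0.62 mm reported above.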

  20. Some Arguments for the Operational Reading of Truth Expressions

    Directory of Open Access Journals (Sweden)

    Jakub Gomułka

    2013-12-01

    The main question of our article is: What is the logical form of statements containing expressions such as “… is true” and “it is true that …”? We claim that these expressions are generally not used in order to assign a certain property to sentences. We indicate that a predicative interpretation of these expressions was rejected by Frege and adherents to the prosentential conception of truth. We treat these expressions as operators. The main advantage of our operational reading is the fact that it adequately represents how the words, “true” and “truth,” function in everyday speech. Our approach confirms the intuition that so-called T-equivalences are not contingent truths, and explains why they seem to be—in some sense—necessary sentences. Moreover, our operational reading of truth expressions dissolves problems arising from the belief that there is some specific property—truth. The fact that we reject that truth is a certain property does not mean that we deny that the concept of truth plays a very important role in our language, and hence in our life. We indicate that the concept of truth is inseparable from the concept of sentence and vice versa—it is impossible to explicate one of these concepts without appeal to the other.

  1. 12 CFR 741.217 - Truth in savings.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Truth in savings. 741.217 Section 741.217 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS REQUIREMENTS FOR... Also Apply to Federally Insured State-Chartered Credit Unions § 741.217 Truth in savings. Any credit...

  2. How Does Telling the Truth Help Educational Action Research?

    Science.gov (United States)

    Blair, Erik

    2010-01-01

    A number of key constructs underpin educational action research. This paper focuses on the concept of "truth" and by doing so hopes to highlight some debate in this area. In reflecting upon what "truth" might mean to those involved in action research, I shall critically evaluate Thorndike's "Law of Effect" and Bruner's "Three Forms of…

  4. Truthfulness in science teachers’ bodily and verbal actions

    DEFF Research Database (Denmark)

    Daugbjerg, Peer

    2013-01-01

    A dramaturgical approach to teacher’s personal bodily and verbal actions is applied through the vocabulary of truthfulness. Bodily and verbal actions have been investigated among Danish primary and lower secondary school science teachers based on their narratives and observations of their classroom...... be beneficial to address the truthfulness of science teachers’ narratives and actions....

  5. Truthfulness as a Standard for Speech in Ancient India.

    Science.gov (United States)

    Kirkwood, William G.

    1989-01-01

    Shows why truthfulness, because of its link to spirituality, was the foremost standard for speech in ancient India, and how its practice was defined, emphasizing the consequences of truthfulness and deceit for speakers themselves. Considers possible contributions to current rhetorical and ethical studies. (SR)

  6. 5 CFR 1650.4 - Certification of truthfulness.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Certification of truthfulness. 1650.4 Section 1650.4 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD METHODS OF WITHDRAWING FUNDS FROM THE THRIFT SAVINGS PLAN General § 1650.4 Certification of truthfulness. By signing a TSP...

  7. On authenticity: the question of truth in construction and autobiography.

    Science.gov (United States)

    Collins, Sara

    2011-12-01

    Freud was occupied with the question of truth and its verification throughout his work. He looked to archaeology for an evidence model to support his ideas on reconstruction. He also referred to literature regarding truth in reconstruction, where he saw shifts between historical fact and invention, and detected such swings in his own case histories. In his late work Freud pondered over the impossibility of truth in reconstruction by juxtaposing truth with 'probability'. Developments on the role of fantasy and myth in reconstruction and contemporary debates over objectivity have increasingly highlighted the question of 'truth' in psychoanalysis. I will argue that 'authenticity' is a helpful concept in furthering the discussion over truth in reconstruction. Authenticity denotes that which is genuine, trustworthy and emotionally accurate in a reconstruction, as observed within the immediacy of the analyst/patient interaction. As authenticity signifies genuineness in a contemporary context its origins are verifiable through the analyst's own observations of the analytic process itself. Therefore, authenticity is about the likelihood and approximation of historical truth rather than its certainty. In that respect it links with Freud's musings over 'probability'. Developments on writing 'truths' in autobiography mirror those in reconstruction, and lend corroborative support from another source.

  8. X-tra as a toolbox for truth maintenance

    OpenAIRE

    Charpillet, François; Marquis, Pierre; Haton, Jean-Paul

    1991-01-01

    X-TRA is a development tool for knowledge-based systems. It integrates a toolbox for truth maintenance based on both TMS and ATMS techniques. In this paper we present facilities provided in X-TRA for the truth maintenance task.

  9. Counseling without Truth: Toward a Neopragmatic Foundation for Counseling Practice

    Science.gov (United States)

    Hansen, James T.

    2007-01-01

    The author presents an overview of contemporary developments in philosophy regarding the status of truth and discusses the implications of these ideas for the practice of counseling. Counseling without truth is presented as a desirable option when a neopragmatic frame of reference is adopted.

  10. Institutions of Variable Truth Values:An Approach in the Ordered Style

    Institute of Scientific and Technical Information of China (English)

    应明生

    1995-01-01

    The concept of institution of variable truth values is introduced and some main results about institutions are generalized. In particular, some properties of institutions of variable truth values preserved by change of truth values are established.

  11. The socio-rhetorical force of 'truth talk' and lies: The case of 1 John

    African Journals Online (AJOL)

    Test

    2010-08-05

    Aug 5, 2010 ... participates in the research ... But even savvy viewers who realize that their favorite reality shows are ... imaginative truth, virtual truth, essential truth, messaged ..... resocialised reality, the linguistic habits of the group take on.

  12. The Semantics of "Truth": A Counter-argument to Some Postmodern Theories.

    Science.gov (United States)

    Lawson, Kenneth H.

    2000-01-01

    Postmodernists' denials of the existence of objective truth are made without supporting evidence and fail to account for semantic structures and linguistic usage. Analysis of semantic structure reveals truth embedded in language. Truth and socially constructed meaning are interrelated. (SK)

  13. A choice-semantical approach to theoretical truth.

    Science.gov (United States)

    Andreas, Holger; Schiemer, Georg

    2016-08-01

    A central topic in the logic of science concerns the proper semantic analysis of theoretical sentences, that is, sentences containing theoretical terms. In this paper, we present a novel choice-semantical account of theoretical truth based on the epsilon-term definition of theoretical terms. Specifically, we develop two ways of specifying the truth conditions of theoretical statements in a choice functional semantics, each giving rise to a corresponding logic of such statements. In order to investigate the inferential strength of these logical systems, we provide a translation of each truth definition into a modal definition of theoretical truth. Based on this, we show that the stronger notion of choice-semantical truth captures more adequately our informal semantic understanding of scientific statements. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Academic Training: Telling the truth with statistics

    CERN Multimedia

    Françoise Benz

    2005-01-01

    2004-2005 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 21, 22, 23, 24 & 25 February from 11.00 to 12.00 hrs - Main Auditorium, bldg. 500 Telling the truth with statistics by G. D'Agostini / INFN, Roma, Italy The issue of evaluating and expressing the uncertainty in measurements, as well as that of testing hypotheses, is reviewed, with particular emphasis on the frontier cases typical of particle physics experiments. Fundamental aspects of probability will be addressed and the applications, solely based on probability theory, will cover several topics of practical interest, including counting experiments, upper/lower bounds, systematic errors, fits and comparison of hypotheses. ENSEIGNEMENT ACADEMIQUE ACADEMIC TRAINING Françoise Benz 73127 academic.training@cern.ch

  15. 76 FR 22947 - Truth in Lending

    Science.gov (United States)

    2011-04-25

    ... automatic payments or preferred rates otherwise offered as relationship rewards. Unlike employee preferred... of Congress, credit card issuers and their employees, consumer groups and individual consumers, trade... view--create an incentive for creditors to develop new products designed to circumvent the Credit Card...

  16. A referential theory of the repetition-induced truth effect.

    Science.gov (United States)

    Unkelbach, Christian; Rom, Sarah C

    2017-03-01

    People are more likely to judge repeated statements as true compared to new statements, a phenomenon known as the illusory truth effect. The currently dominant explanation is an increase in processing fluency caused by prior presentation. We present a new theory to explain this effect. We assume that people judge truth based on coherent references for statements in memory. Due to prior presentation, repeated statements have more coherently linked references; thus, a repetition-induced truth effect follows. Five experiments test this theory. Experiment 1-3 show that both the amount and the coherence of references for a repeated statement influence judged truth. Experiment 4 shows that people also judge new statements more likely "true" when they share references with previously presented statements. Experiment 5 realizes theoretically predicted conditions under which repetition should not influence judged truth. Based on these data, we discuss how the theory relates to other explanations of repetition-induced truth and how it may integrate other truth-related phenomena and belief biases. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Digital technologies as truth-bearers in health care.

    Science.gov (United States)

    Bartlett, Ruth; Balmer, Andrew; Brannelly, Petula

    2017-01-01

    In this paper, we explore the idea of digital technologies as truth-bearers in health care and argue that devices like SenseCam, which facilitate reflection and memory recall, have a potentially vital role in healthcare situations when questions of veracity are at stake (e.g., when best interest decisions are being made). We discuss the role of digital technologies as truth-bearers in the context of nursing people with dementia, as this is one area of health care in which the topic of truth-telling has been hotly debated. People with dementia have been excluded from research studies and decisions that affect their lives because they are not regarded as truth-bearers-that is, as being capable of giving truthful accounts of their experiences. Also, considerable research has focused on the ethics of lying to and deceiving people with dementia. Given their increasing prominence in healthcare settings, there has been surprisingly little discussion of what role digital technologies might play in relation to these questions of truth and deception. Drawing on theories from science and technology studies (STS), we explore their possible future role in some of the truth-making processes of health care. In particular, we discuss the potential value of constraints on use of SenseCam to support the accounts of people with dementia as part of their care. © 2016 John Wiley & Sons Ltd.

  18. Towards an automatic tool for resolution evaluation of mammographic images

    Energy Technology Data Exchange (ETDEWEB)

    De Oliveira, J. E. E. [FUMEC, Av. Alfonso Pena 3880, CEP 30130-009 Belo Horizonte - MG (Brazil); Nogueira, M. S., E-mail: juliae@fumec.br [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Pte. Antonio Carlos 6627, 31270-901, Belo Horizonte - MG (Brazil)

    2014-08-15

    Quality of Mammographies from the Public and Private Services of the State. With an essentially educational character, an evaluation of image quality is held monthly, using a breast phantom, for each mammographic unit. In view of this, this work proposes to develop a protocol for the automatic evaluation of the image quality of mammograms, so that radiological protection and image quality requirements are met in the early detection of breast cancer. Specifically, image resolution is addressed and evaluated as part of the image quality evaluation program. Results show that, for the fourth resolution and using 28 phantom images with the ground truth established, the computer analysis of resolution is promising and may be used as a tool for the assessment of image quality. (Author)

  19. Analysis of scientific truth status in controlled rehabilitation trials.

    Science.gov (United States)

    Kerry, Roger; Madouasse, Aurélien; Arthur, Antony; Mumford, Stephen D

    2013-08-01

    Systematic reviews, meta-analyses and clinical guidelines (reviews) are intended to inform clinical practice, and in this sense can be thought of as scientific truthmakers. High-quality controlled trials should align to this truth, and method quality markers should predict truth status. We sought to determine in what way controlled trial quality relates to scientific truth, and to determine predictive utility of trial quality and bibliographic markers. A sample of reviews in rehabilitation medicine was examined. Two scientific truth dimensions were established based on review outcomes. Quality and bibliographic markers were extracted from associated trials for use in a regression analysis of their predictive utility for trial truth status. Probability analysis was undertaken to examine judgments of future trial truth status. Of the 93 trials included in contemporaneous reviews, overall, n = 45 (48%) were true. Randomization was found more in true trials than false trials in one truth dimension (P = 0.03). Intention-to-treat analysis was close to significant in one truth dimension (P = 0.058), being more commonly used in false trials. There were no other significant differences in quality or bibliographic variables between true and false trials. Regression analysis revealed no significant predictors of trial truth status. Probability analysis reported that the reasonable chance of future trials being true was between 2 and 5%, based on a uniform prior. The findings are at odds with what is considered gold-standard research methods, but in line with previous reports. Further work should focus on scientific dynamics within healthcare research and evidence-based practice constructs. © 2012 John Wiley & Sons Ltd.

  20. Attack on the USS Liberty: A Stab at the Truth

    Science.gov (United States)

    2009-04-10

    USAWC Strategy Research Project: Attack on the USS Liberty: A Stab at the Truth, by Commander Mark A. Stroh, United States Navy.

  1. Automatic tracking of arbitrarily shaped implanted markers in kilovoltage projection images: A feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Regmi, Rajesh; Lovelock, D. Michael; Hunt, Margie; Zhang, Pengpeng; Pham, Hai; Xiong, Jianping; Yorke, Ellen D.; Mageras, Gig S., E-mail: magerasg@mskcc.org [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States); Goodman, Karyn A.; Rimner, Andreas [Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States); Mostafavi, Hassan [Ginzton Technology Center, Varian Medical Systems, Palo Alto, California 94304 (United States)

    2014-07-15

    Purpose: Certain types of commonly used fiducial markers take on irregular shapes upon implantation in soft tissue. This poses a challenge for methods that assume a predefined marker shape when automatically tracking such markers in kilovoltage (kV) radiographs. The authors have developed a method of automatically tracking regularly and irregularly shaped markers using kV projection images and assessed its potential for detecting intrafractional target motion during rotational treatment. Methods: Template-based matching used normalized cross-correlation with simplex minimization. Templates were created from computed tomography (CT) images for phantom studies and from end-expiration breath-hold planning CT for patient studies. The kV images were processed with a Sobel filter to enhance marker visibility. To correct for changes in intermarker relative positions between simulation and treatment, which can introduce errors in automatic matching, marker offsets in three dimensions were manually determined from an approximately orthogonal pair of kV images. Two studies in an anthropomorphic phantom were carried out, one using a gold cylindrical marker representing a regular shape, the other using a Visicoil marker representing an irregular shape. Automatic matching of templates to cone-beam CT (CBCT) projection images was compared to known marker positions in the phantom. In patient data, automatic matching was compared to manual matching as an approximate ground truth. A positional discrepancy between automatic and manual matching of less than 2 mm was taken as the criterion for successful tracking. Tracking success rates were examined in kV projection images from 22 CBCT scans of four pancreas, six gastroesophageal junction, and one lung cancer patients. Each patient had at least one irregularly shaped radiopaque marker implanted in or near the tumor. In addition, automatic tracking was tested in intrafraction kV images of three lung cancer patients with irregularly shaped
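
    The core of the tracking method in the record above is template matching by normalized cross-correlation (NCC). Below is a minimal, illustrative sketch of exhaustive NCC matching on a small 2D image; the function names are hypothetical, and the Sobel enhancement, simplex minimization, and 3D offset correction described in the abstract are omitted.

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized 2D patches."""
    h, w = len(template), len(template[0])
    n = h * w
    pm = sum(patch[i][j] for i in range(h) for j in range(w)) / n
    tm = sum(template[i][j] for i in range(h) for j in range(w)) / n
    num = den_p = den_t = 0.0
    for i in range(h):
        for j in range(w):
            dp, dt = patch[i][j] - pm, template[i][j] - tm
            num += dp * dt
            den_p += dp * dp
            den_t += dt * dt
    den = math.sqrt(den_p * den_t)
    return num / den if den > 0 else 0.0

def match_template(image, template):
    """Return (row, col) of the top-left corner with the highest NCC score."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best, best_pos = -2.0, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            patch = [row[c:c + w] for row in image[r:r + h]]
            score = ncc(patch, template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

    In practice the search would be restricted to a window around the expected marker position, as in the study's use of a search area centered at a prior estimate.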

  2. AUTOMATIC EXTRACTION OF ROAD SURFACE AND CURBSTONE EDGES FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    A. Miraliakbari

    2015-05-01

    Full Text Available We present a procedure for the automatic extraction of the road surface from geo-referenced mobile laser scanning data. The basic assumption of the procedure is that the road surface is smooth and bounded by curbstones. Two variants of jump detection are investigated for detecting curbstone edges, one based on height differences, the other on histograms of the height data. Region growing algorithms are proposed which work on the irregular laser point cloud. Two- and four-neighbourhood growing strategies use the two height criteria for examining the neighbourhood. Both height criteria rely on an assumption about the minimum height of a low curbstone. Road boundaries with lower or no jumps will not stop the region growing process; in contrast, objects on the road can terminate it. Therefore further processing, such as bridging gaps between detected road boundary points and removing wrongly detected curbstone edges, is necessary. Road boundaries are finally approximated by splines. Experiments are carried out on a ca. 2 km network of small streets located in the neighbourhood of the University of Applied Sciences in Stuttgart. For accuracy assessment of the extracted road surfaces, ground truth measurements are digitized manually from the laser scanner data. For completeness and correctness of the region growing result, values between 92% and 95% are achieved.
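
    The neighbourhood-growing strategy with a minimum-curbstone-height stopping criterion can be illustrated on a toy height grid. This is a simplified sketch on a regular grid (the paper works on an irregular point cloud); the 0.07 m threshold and all names are assumptions for illustration only.

```python
from collections import deque

MIN_CURB_HEIGHT = 0.07  # assumed minimum curbstone jump, in metres

def grow_road(heights, seed):
    """4-neighbourhood region growing over a height grid.

    Growth stops wherever the height difference to a neighbour exceeds
    the assumed minimum curbstone height, so curbs bound the region.
    """
    rows, cols = len(heights), len(heights[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(heights[nr][nc] - heights[r][c]) < MIN_CURB_HEIGHT:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region
```

    Lower-than-threshold jumps do not stop the growth, which is exactly why the paper needs the post-processing steps (gap bridging, removal of false curb edges).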

  3. DESIGN AND DEVELOP A COMPUTER AIDED DESIGN FOR AUTOMATIC EXUDATES DETECTION FOR DIABETIC RETINOPATHY SCREENING

    Directory of Open Access Journals (Sweden)

    C. A. SATHIYAMOORTHY

    2016-04-01

    Full Text Available Diabetic Retinopathy is a severe and widespread eye disease which can lead to blindness. One of the main symptoms preceding vision loss is exudates, and blindness could be prevented by applying an early screening process. In existing systems, a Fuzzy C-Means clustering technique is used for detecting the exudates for analysis. The main objective of this paper is to improve the efficiency of exudate detection in diabetic retinopathy images. To do this, a Three-Stage (TS) approach is introduced for detecting and extracting the exudates automatically from retinal images for diabetic retinopathy screening. TS operates on the image at three levels: pre-processing the image, enhancing the image, and accurately detecting the exudates. After successful detection, the detected exudates are classified using the GLCM method to determine the accuracy. The TS approach is implemented in MATLAB, and its performance is evaluated by comparing the results with those of the existing approach and with hand-drawn ground-truth images from an expert ophthalmologist.
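
    The GLCM-based classification step rests on the gray-level co-occurrence matrix: counts of gray-level pairs at a fixed pixel offset, from which texture statistics are derived. A minimal sketch of a normalised GLCM and one such statistic (contrast) might look as follows; the names and the offset choice are illustrative, not taken from the paper.

```python
def glcm(image, levels, offset=(0, 1)):
    """Normalised gray-level co-occurrence matrix for one pixel offset."""
    dr, dc = offset
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for r in range(len(image)):
        for c in range(len(image[0])):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(image) and 0 <= nc < len(image[0]):
                counts[image[r][c]][image[nr][nc]] += 1
                total += 1
    return [[v / total for v in row] for row in counts]

def contrast(p):
    """GLCM contrast: sum over (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))
```

    Statistics such as contrast, homogeneity, and energy computed this way form the feature vector used by a downstream classifier.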

  4. Non-parametric iterative model constraint graph min-cut for automatic kidney segmentation.

    Science.gov (United States)

    Freiman, M; Kronman, A; Esses, S J; Joskowicz, L; Sosna, J

    2010-01-01

    We present a new non-parametric model constraint graph min-cut algorithm for automatic kidney segmentation in CT images. The segmentation is formulated as a maximum a-posteriori estimation of a model-driven Markov random field. A non-parametric hybrid shape and intensity model is treated as a latent variable in the energy functional. The latent model and labeling map that minimize the energy functional are then simultaneously computed with an expectation maximization approach. The main advantages of our method are that it does not assume a fixed parametric prior model, which is sensitive to inter-patient variability and registration errors, and that it combines both the model and the image information into a unified graph min-cut based segmentation framework. We evaluated our method on 20 kidneys from 10 CT datasets with and without contrast agent for which ground-truth segmentations were generated by averaging three manual segmentations. Our method yields an average volumetric overlap error of 10.95%, and average symmetric surface distance of 0.79 mm. These results indicate that our method is accurate and robust for kidney segmentation.

  5. IceMap250—Automatic 250 m Sea Ice Extent Mapping Using MODIS Data

    Directory of Open Access Journals (Sweden)

    Charles Gignac

    2017-01-01

    Full Text Available The sea ice cover in the North evolves at a rapid rate. To adequately monitor this evolution, tools with high temporal and spatial resolution are needed. This paper presents IceMap250, an automatic sea ice extent mapping algorithm using MODIS reflective/emissive bands. Hybrid cloud-masking using both the MOD35 mask and a visibility mask, combined with downscaling of Bands 3–7 to 250 m, is used to delineate sea ice extent with a decision tree approach. IceMap250 was tested on scenes from the freeze-up, stable cover, and melt seasons in the Hudson Bay complex, in Northeastern Canada. IceMap250's first product is a daily composite sea ice presence map at 250 m. Validation based on comparisons with photo-interpreted ground truth shows the ability of the algorithm to achieve high classification accuracy, with kappa values systematically over 90%. IceMap250's second product is a weekly clear-sky map that synthesizes 7 days of daily composite maps. This map, produced using a majority filter, makes the sea ice presence map even more accurate by filtering out the effects of isolated classification errors. The synthesis maps show spatial consistency through time when compared to passive microwave and national ice service maps.
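
    The weekly synthesis via a per-pixel majority filter is straightforward to sketch. A toy version, assuming the daily maps are given as 2D label grids (all names are illustrative):

```python
from collections import Counter

def weekly_synthesis(daily_maps):
    """Per-pixel majority vote over a week of daily classification maps.

    daily_maps: list of 2D grids with labels such as 'ice', 'water', 'cloud'.
    Isolated per-day classification errors are voted out by the majority.
    """
    rows, cols = len(daily_maps[0]), len(daily_maps[0][0])
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            votes = Counter(m[r][c] for m in daily_maps)
            row.append(votes.most_common(1)[0][0])
        out.append(row)
    return out
```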

  6. Realism without truth: a review of Giere's science without laws and scientific perspectivism.

    Science.gov (United States)

    Hackenberg, Timothy D

    2009-05-01

    An increasingly popular view among philosophers of science is that of science as action-as the collective activity of scientists working in socially-coordinated communities. Scientists are seen not as dispassionate pursuers of Truth, but as active participants in a social enterprise, and science is viewed on a continuum with other human activities. When taken to an extreme, the science-as-social-process view can be taken to imply that science is no different from any other human activity, and therefore can make no privileged claims about its knowledge of the world. Such extreme views are normally contrasted with equally extreme views of classical science, as uncovering Universal Truth. In Science Without Laws and Scientific Perspectivism, Giere outlines an approach to understanding science that finds a middle ground between these extremes. He acknowledges that science occurs in a social and historical context, and that scientific models are constructions designed and created to serve human ends. At the same time, however, scientific models correspond to parts of the world in ways that can legitimately be termed objective. Giere's position, perspectival realism, shares important common ground with Skinner's writings on science, some of which are explored in this review. Perhaps most fundamentally, Giere shares with Skinner the view that science itself is amenable to scientific inquiry: scientific principles can and should be brought to bear on the process of science. The two approaches offer different but complementary perspectives on the nature of science, both of which are needed in a comprehensive understanding of science.

  7. Science, dullness and truth: a rejoinder.

    Science.gov (United States)

    Volchan, Sérgio B

    2010-03-01

    In a recent series of polemical editorials in this journal, a scathing and much needed criticism is made of many aspects of current scientific mores, detecting some worrying dysfunctions which threaten the integrity of the whole scientific enterprise. Although the tone is a bit hyperbolic, many important issues are addressed, such as honesty in research, the centrality of truth in science, the role of creativity, just to cite a few. Though agreeing with the overall diagnosis, the discussion still suffers from a lack of a clear and systemic view of science, from which a more precise analysis could be carried out. The presentation is also predicated on a too strong adherence to some popular notions of scientific progress and a somewhat romantic notion of genius. In this paper we address these shortcomings with the aim of contributing to a better understanding of this timely discussion. Though conceding that major structural, historical and cultural shifts might have caused irreversible changes on the way science now evolves, we make some suggestions to counter this trend. These include, among others, the need for an honest and careful dealing with the media and public, to prize and abide by the ethos of science and its underlying values, to cultivate an exact philosophy and to insist that disinterested curiosity and the desire to understand the world are the vital motivations of science. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  8. The Myth, the Truth, the NASA IRB

    Science.gov (United States)

    Covington, M. D.; Flores, M. P.; Neutzler, V. P.; Schlegel, T. T.; Platts, S. H.; Lioyd, C. W.

    2017-01-01

    The purpose of the NASA Institutional Review Board (IRB) is to review research activities involving human subjects to ensure that ethical standards for the care and protection of human subjects have been met and research activities are in compliance with all pertinent federal, state and local regulations as well as NASA policies. NASA IRB's primary role is the protection of human subjects in research studies. Protection of human subjects is the shared responsibility of NASA, the IRB, and the scientific investigators. Science investigators who plan to conduct NASA-funded human research involving NASA investigators, facilities, or funds must submit and coordinate their research studies for review and approval by the NASA IRB prior to initiation. The IRB has the authority to approve, require changes in, or disapprove research involving human subjects. Better knowledge of the NASA IRB policies, procedures and guidelines should help facilitate research protocol applications and approvals. In this presentation, the myths and truths of NASA IRB policies and procedures will be discussed. We will focus on the policies that guide a protocol through the NASA IRB and the procedures that principal investigators must take to obtain required IRB approvals for their research studies. In addition, tips to help ensure a more efficient IRB review will be provided. By understanding the requirements and processes, investigators will be able to more efficiently prepare their protocols and obtain the required NASA IRB approval in a timely manner.

  9. Truth, Representation and Interpretation: The Popper Case

    Directory of Open Access Journals (Sweden)

    Gerard Stan

    2009-06-01

    Full Text Available The aim of this study is to determine several points of reference regarding the way in which Karl Popper built up his philosophical discourse. I locate two specific ways in which Popper interpreted and used ideas belonging to other philosophers. Thus I distinguish in Popper between a projective hermeneutics (where the author uses a thesis that forms a part of his own philosophy in order to reconstruct and understand the ideas of another philosopher) and an ideological hermeneutics (where he uses a statement expressing an interest of the community whereof he is a member in order to interpret and reconstruct the text of another philosopher). In so doing I also highlight the considerable asymmetry between a representationalist hermeneutics and a projective and, respectively, an ideological one. Whereas in the first case the interpreter wishes to unveil a truth about the philosophical text, in the other two he is rather expressing a desire to talk about himself, his own beliefs and convictions, or about the beliefs of his community of reference.

  10. Characterizing Truthful Multi-Armed Bandit Mechanisms

    CERN Document Server

    Babaioff, Moshe; Slivkins, Aleksandrs

    2008-01-01

    We consider a multi-round auction setting motivated by pay-per-click auctions for Internet advertising. In each round the auctioneer selects an advertiser and shows her ad, which is then either clicked or not. An advertiser derives value from clicks; the value of a click is her private information. Initially, neither the auctioneer nor the advertisers have any information about the likelihood of clicks on the advertisements. The auctioneer's goal is to design a (dominant strategies) truthful mechanism that (approximately) maximizes the social welfare. If the advertisers bid their true private values, our problem is equivalent to the "multi-armed bandit problem", and thus can be viewed as a strategic version of the latter. In particular, for both problems the quality of an algorithm can be characterized by "regret", the difference in social welfare between the algorithm and the benchmark which always selects the same "best" advertisement. We investigate how the design of multi-armed bandit algorithms is affect...
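
    The benchmark notion of regret, the welfare gap to always selecting the best advertisement, can be illustrated with the classic non-strategic multi-armed bandit setting the paper builds on. Below is a small UCB1 sketch on Bernoulli "click" arms; UCB1 is a standard bandit algorithm used here purely for illustration, not the truthful mechanism proposed in the paper, and all names are hypothetical.

```python
import math
import random

def ucb1(click_probs, rounds, rng):
    """UCB1 on Bernoulli arms; returns (total reward, per-arm play counts)."""
    n_arms = len(click_probs)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total = 0.0
    for t in range(rounds):
        if t < n_arms:
            arm = t  # play each arm once first
        else:
            # pick the arm maximising empirical mean + exploration bonus
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < click_probs[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts

rng = random.Random(0)
probs = [0.2, 0.8, 0.5]          # unknown click probabilities
reward, counts = ucb1(probs, 2000, rng)
regret = 2000 * max(probs) - reward  # gap to always showing the best ad
```

    In the strategic version studied in the paper, the advertisers' bids replace the known values, and the mechanism must additionally be dominant-strategy truthful.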

  11. "Angels & Demons" - Distinguishing truth from fiction

    CERN Multimedia

    2005-01-01

    Dan Brown's best-selling novel "Angels & Demons" was published in French on 2 March. A web page on CERN's public site is dedicated to separating truth from fiction in this novel. After the extraordinary success of Dan Brown's "Da Vinci Code", one of his earlier novels "Angels & Demons", published in 2000, has now become a best seller and has generated a flood of questions about CERN. This detective story is about a secret society, the Illuminati, who wish to destroy the Vatican with an antimatter bomb stolen from - wait for it - CERN! Inevitably, CERN has been bombarded with calls about the technologies described in the novel that are supposed to be under development in the Laboratory. The Press Office has always explained that, even if the novel appears to be very informative, it is in fact a mixture of fact and fiction. For instance, according to the novel CERN is supposed to own a plane that can cover the distance between Massachusetts in the United States and Switzerland in just over an hour! ...

  12. Vital truths about managing your costs.

    Science.gov (United States)

    Ames, B C; Hlavacek, J D

    1990-01-01

    Four truths apply to every business situation: 1. It is essential to be a lower-cost supplier. 2. To stay competitive, inflation-adjusted costs of producing and supplying products and services must trend downward. 3. The true cost and profit pictures for each product/market segment must always be known, and traditional accounting practices must not obscure them. 4. A business must concentrate as much on cash flow and balance-sheet strength as it does on profits. In order to ascertain exactly what your costs are, you must carefully isolate and assign the various costs to specific products, accounts, or markets. Managers often do this badly, working on the basis of "average" costs. This ignores important differences among products and the fact that different products, different markets, and different customers incur different overhead costs. Most manufacturing companies' most important expense categories are R&D, sales, and general and administrative costs, but surprisingly, these generally don't get the attention they should. Neither do two crucial ratios: gross margin and the percentage of assets employed per dollar of sales. Gross margins should usually not be less than 40%, and for most manufacturing companies, assets should not exceed 60% of annual sales. Deviating from these ratios in the wrong direction will undermine profit targets. Once your costs are known and clearly assigned to product lines, markets, and key customers, they should be widely shared in the organization so that everyone will feel committed to cost management and know when deviations occur.

  13. Heart Truth for Women: If You Have Heart Disease

    Science.gov (United States)

    ... of physical activity • Unhealthy diet • Diabetes and prediabetes • Metabolic syndrome Other conditions and factors also may contribute to ... The Heart Truth is a way of informing women about what they can do to prevent heart ...

  14. News, truth and crime: the Westray disaster and its aftermath

    Energy Technology Data Exchange (ETDEWEB)

    McMullan, J.L. [Saint Mary's University, Halifax, NS (Canada). Department of Sociology and Criminology

    2005-07-01

    A study of the way the media portrayed the Westray Mine disaster and its aftermath over the period 1992 to 2002 is presented. The chapter titles are: power, discourse, and the production of news as truth; the explosion and its aftermath; studying the press and Westray; the press and the presentation of Westray's truth; and the politics of truth and the invisibility of corporate crime. News articles reporting the accident and its outcome were sampled, coded, and evaluated by content analysis. It is concluded that the various media represented alternative truths, but did not label the corporation as criminal. This was missing from the media's reporting of the disaster.

  15. The source of the truth bias: Heuristic processing?

    Science.gov (United States)

    Street, Chris N H; Masip, Jaume

    2015-06-01

    People believe others are telling the truth more often than they actually are; this is called the truth bias. Surprisingly, when a speaker is judged at multiple points across their statement the truth bias declines. Previous claims argue this is evidence of a shift from (biased) heuristic processing to (reasoned) analytical processing. In four experiments we contrast the heuristic-analytic model (HAM) with alternative accounts. In Experiment 1, the decrease in truth responding was not the result of speakers appearing more deceptive, but was instead attributable to the rater's processing style. Yet contrary to HAMs, across three experiments we found the decline in bias was not related to the amount of processing time available (Experiments 1-3) or the communication channel (Experiment 2). In Experiment 4 we found support for a new account: that the bias reflects whether raters perceive the statement to be internally consistent. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  16. Traceable radiometry underpinning terrestrial- and helio-studies (TRUTHS)

    NARCIS (Netherlands)

    Fox, N.; Aiken, J.; Barnett, J.J.; Briottet, X.; Carvell, R.; Frohlich, C.; Groom, S.B.; Hagolle, O.; Haigh, J.D.; Kieffer, H.H.; Lean, J.; Pollock, D.B.; Quinn, T.; Sandford, M.C.W.; Schaepman, M.E.; Shine, K.P.; Schmutz, W.K.; Teillet, P.M.; Thome, K.J.; Verstraete, M.M.; Zalewski, E.

    2003-01-01

    The Traceable Radiometry Underpinning Terrestrial- and Helio- Studies (TRUTHS) mission offers a novel approach to the provision of key scientific data with unprecedented radiometric accuracy for Earth Observation (EO) and solar studies, which will also establish well-calibrated reference

  17. Women's Heart Disease: Join the Heart Truth Community

    Science.gov (United States)

    ... this page please turn JavaScript on. Feature: Women's Heart Disease Join The Heart Truth Community Past Issues / Winter ... introduced as the national symbol for women and heart disease awareness in 2002 by the NHLBI. The Red ...

  18. Belief Elicitation : A Horse Race among Truth Serums

    NARCIS (Netherlands)

    Trautmann, S.T.; van de Kuilen, G.

    2011-01-01

    In survey studies, probabilistic expectations about uncertain events are typically elicited by asking respondents for their introspective beliefs. If more complex procedures are feasible, beliefs can be elicited by incentive compatible revealed preference mechanisms (“truth serums”). Various

  19. Automatic segmentation of kidneys from non-contrast CT images using efficient belief propagation

    Science.gov (United States)

    Liu, Jianfei; Linguraru, Marius George; Wang, Shijun; Summers, Ronald M.

    2013-03-01

    CT colonography (CTC) can increase the chance of detecting high-risk lesions not only within the colon but anywhere in the abdomen with a low cost. Extracolonic findings such as calculi and masses are frequently found in the kidneys on CTC. Accurate kidney segmentation is an important step to detect extracolonic findings in the kidneys. However, noncontrast CTC images make the task of kidney segmentation substantially challenging because the intensity values of kidney parenchyma are similar to those of adjacent structures. In this paper, we present a fully automatic kidney segmentation algorithm to support extracolonic diagnosis from CTC data. It is built upon three major contributions: 1) localize kidney search regions by exploiting the segmented liver and spleen as well as body symmetry; 2) construct a probabilistic shape prior handling the issue of kidney touching other organs; 3) employ efficient belief propagation on the shape prior to extract the kidneys. We evaluated the accuracy of our algorithm on five non-contrast CTC datasets with manual kidney segmentation as the ground-truth. The Dice volume overlaps were 88%/89%, the root-mean-squared errors were 3.4 mm/2.8 mm, and the average surface distances were 2.1 mm/1.9 mm for the left/right kidney respectively. We also validated the robustness on 27 additional CTC cases, and 23 datasets were successfully segmented. In four problematic cases, the segmentation of the left kidney failed due to problems with the spleen segmentation. The results demonstrated that the proposed algorithm could automatically and accurately segment kidneys from CTC images, given the prior correct segmentation of the liver and spleen.
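
    The Dice volume overlap used for evaluation above is simple to state: twice the intersection size over the sum of the two mask sizes. A toy sketch over voxel coordinate sets (the function name is illustrative):

```python
def dice(seg, truth):
    """Dice volume overlap between two binary masks given as voxel sets.

    Returns 1.0 for identical masks, 0.0 for disjoint ones.
    """
    inter = len(seg & truth)
    return 2.0 * inter / (len(seg) + len(truth)) if (seg or truth) else 1.0
```

    The paper's other metrics (root-mean-squared error and average surface distance) additionally require extracting and comparing the mask surfaces.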

  20. Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections

    Science.gov (United States)

    Bertholet, J.; Wan, H.; Toftegaard, J.; Schmidt, M. L.; Chotard, F.; Parikh, P. J.; Poulsen, P. R.

    2017-02-01

    Radio-opaque fiducial markers of different shapes are often implanted in or near abdominal or thoracic tumors to act as surrogates for the tumor position during radiotherapy. They can be used for real-time treatment adaptation, but this requires a robust, automatic segmentation method able to handle arbitrarily shaped markers in a rotational imaging geometry such as cone-beam computed tomography (CBCT) projection images and intra-treatment images. In this study, we propose a fully automatic dynamic programming (DP) assisted template-based (TB) segmentation method. Based on an initial DP segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated. The mean 2D segmentation error of DP was reduced from 4.1 pixels to 3.0 pixels by DPTB, while the fraction of wrong segmentations was reduced from 17.4% to 6.8%. DPTB allowed rejection of uncertain segmentations as deemed by a low normalized cross-correlation coefficient and contrast-to-noise ratio. For a rejection rate of 9.97%, the sensitivity in detecting wrong segmentations was 67% and the specificity was 94%. The accepted segmentations had a mean segmentation error of 1.8 pixels and 2.5% wrong segmentations.
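
    The sensitivity and specificity quoted for the rejection rule follow the usual confusion-matrix definitions, with a wrong segmentation (2D error above 5 pixels) as the positive class and rejection as the test outcome. A small illustrative sketch (names are hypothetical):

```python
def rejection_stats(errors_px, rejected, wrong_threshold=5.0):
    """Sensitivity/specificity of a rejection rule for spotting wrong
    segmentations, i.e. those with 2D error above the pixel threshold."""
    tp = fn = tn = fp = 0
    for err, rej in zip(errors_px, rejected):
        wrong = err > wrong_threshold
        if wrong and rej:
            tp += 1        # wrong segmentation correctly rejected
        elif wrong and not rej:
            fn += 1        # wrong segmentation missed
        elif not wrong and rej:
            fp += 1        # good segmentation needlessly rejected
        else:
            tn += 1        # good segmentation correctly kept
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity
```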

  1. Theatre as truth practice : Arthur Miller’s The Crucible - a play waiting for the occasion

    NARCIS (Netherlands)

    Aziz, Aamir

    2014-01-01

    In my reading of, and dealing with, Arthur Miller’s play The Crucible, I found that it is concerned with truth, or different forms of truth. As the plural suggests, these are not absolute or objective truths, let alone universal ones, nor is there one ‘deep’ truth in the play, in a basic hermeneutical ...

  2. Measurement Theory in Deutsch's Algorithm Based on the Truth Values

    Science.gov (United States)

    Nagata, Koji; Nakamura, Tadao

    2016-08-01

    We propose a new measurement theory for handling qubits, based on the truth values, i.e., the truth T (1) for true and the falsity F (0) for false. The results of measurement are either 0 or 1. To implement Deutsch's algorithm, we need both observability and controllability of a quantum state; the new measurement theory satisfies both. In particular, we systematically describe our assertion with mathematical analysis using raw data from a thought experiment.

  3. Descartes's Method and the Role of Eternal Truths

    OpenAIRE

    Bruce, Zachary MacKay

    2014-01-01

    I contend that Descartes's infamous commitment to God's free creation of the eternal truths plays an integral role in Descartes's philosophical program. Descartes's primary philosophical goal is to establish a method capable of yielding firm and lasting scientific knowledge. It isn't widely agreed that Descartes has been successful: first, his response to the threat of skepticism appears circular; and second, his account of God's free creation of eternal truths (e.g., mathematical and logical...

  4. Public relations and journalism: truth, trust, transparency and integrity

    OpenAIRE

    Davies, Frank

    2008-01-01

    Truth, trust, integrity and reputation are key concepts for understanding the relationship between journalists and public relations practitioners. This paper: first, considers the current debate on the inter-relationship between journalism and public relations; second, distinguishes varieties of public relations and journalism; third, analyses the Editorial Intelligence controversy; fourth, deconstructs aspects of "truth" and "trust" in the context of that debate; fifth, considers why the ...

  5. Truth Space Method for Caching Database Queries

    Directory of Open Access Journals (Sweden)

    S. V. Mosin

    2015-01-01

    Full Text Available We propose a new method of client-side data caching for relational databases with a central server and distant clients. Data are loaded into the client cache based on queries executed on the server. Every query has a corresponding DB table – the result of the query execution. These queries have a special form called a "universal relational query", based on three fundamental relational algebra operations: selection, projection and natural join. Such a form is the closest one to natural language, and the majority of database search queries can be expressed in this way. Besides, this form allows us to analyze query correctness by checking the lossless join property. A subsequent query may be executed in a client’s local cache if we can determine that the query result is entirely contained in the cache. For this we compare the truth spaces of the logical restrictions in a new user’s query and in the results of the queries already executed in the cache. Such a comparison can be performed analytically, without additional database queries. The method may also be used to identify data lacking in the cache and execute the query on the server only for those data; here, too, the analytical approach is used, which distinguishes our paper from existing technologies. We propose four theorems for testing the required conditions. The conditions of the first and third theorems allow us to establish the existence of the required data in the cache; the second and fourth theorems state the conditions for executing queries with the cache only. The problem of cache data actualization is not discussed in this paper; however, it can be solved by cataloging queries on the server and serving them by triggers in background mode. The article is published in the author’s wording.
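    The containment test at the heart of the method — deciding whether a new query's truth space lies entirely inside that of a cached query — can be sketched for conjunctions of closed intervals (a simplified stand-in for the paper's universal relational queries):

```python
def contained(new_q, cached_q):
    """True if the truth space of `new_q` lies inside `cached_q`'s truth space.
    Predicates are conjunctions of closed intervals: {attr: (low, high)}."""
    for attr, (cl, ch) in cached_q.items():
        if attr not in new_q:
            return False          # new query admits values the cache never stored
        nl, nh = new_q[attr]
        if nl < cl or nh > ch:
            return False          # new interval leaks outside the cached one
    return True

cached = {"age": (18, 65), "salary": (0, 100000)}
print(contained({"age": (30, 40), "salary": (50000, 60000)}, cached))  # → True
print(contained({"age": (30, 70), "salary": (50000, 60000)}, cached))  # → False
```

When `contained` returns True, the query can be answered from the local cache without a round trip to the server; otherwise only the missing portion need be fetched.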

  6. Semi-automatic segmentation and modeling of the cervical spinal cord for volume quantification in multiple sclerosis patients from magnetic resonance images

    Science.gov (United States)

    Sonkova, Pavlina; Evangelou, Iordanis E.; Gallo, Antonio; Cantor, Fredric K.; Ohayon, Joan; McFarland, Henry F.; Bagnato, Francesca

    2008-03-01

    Spinal cord (SC) tissue loss is known to occur in some patients with multiple sclerosis (MS), resulting in SC atrophy. Currently, no measurement tools exist to determine the magnitude of SC atrophy from Magnetic Resonance Images (MRI). We have developed and implemented a novel semi-automatic method for quantifying the cervical SC volume (CSCV) from MRI, based on level sets. The image dataset consisted of SC MRI exams obtained at 1.5 Tesla from 12 MS patients (10 relapsing-remitting and 2 secondary progressive) and 12 age- and gender-matched healthy volunteers (HVs). 3D high-resolution image data were acquired using an IR-FSPGR sequence in the sagittal plane. The mid-sagittal slice (MSS) was automatically located based on the entropy calculated for each of the consecutive sagittal slices. The image data were then pre-processed by 3D anisotropic diffusion filtering for noise reduction and edge enhancement before segmentation with a level set formulation which did not require re-initialization. The developed method was tested against manual segmentation (considered ground truth), and intra-observer and inter-observer variability were evaluated.
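    The entropy-driven location of the mid-sagittal slice can be sketched as follows; the bin count and the maximal-entropy selection criterion are illustrative assumptions, not necessarily the paper's exact formulation:

```python
from math import log2

def slice_entropy(slice_pixels, bins=32, max_val=256):
    """Shannon entropy of the intensity histogram of one sagittal slice."""
    hist = [0] * bins
    for v in slice_pixels:
        hist[min(v * bins // max_val, bins - 1)] += 1
    n = len(slice_pixels)
    return -sum((c / n) * log2(c / n) for c in hist if c)

def mid_sagittal_slice(volume):
    """Pick the slice whose histogram entropy is maximal (a stand-in criterion)."""
    return max(range(len(volume)), key=lambda i: slice_entropy(volume[i]))

# A uniform slice has zero entropy; a structured slice scores higher.
vol = [[0] * 100, [0] * 50 + [128] * 25 + [255] * 25]
print(mid_sagittal_slice(vol))   # → 1
```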

  7. Truth and reconciliation: Confronting the past in Death and the Maiden (Ariel Dorfman) and Playland (Athol Fugard)

    Directory of Open Access Journals (Sweden)

    C. Maree

    1995-05-01

    Full Text Available Both plays deal with the devastating effects of the sociopolitical on the individual and point to the ways that factuality enters fiction, either to defictionalize it or refictionalize it. The characters in each play confront the past by seeking the truth, either to tell it or have it told to them. In Fugard's play, written in the middle of a transition period, the confession is complete and this resolution places the play in the generally utopian world of protest theatre. Dorfman's play, written after the redemocratization of Chile, is grounded in uncertainties, half-truths and deceit. The confession is incomplete and thus there is no resolution or final harmony, placing this play within the operative dilemmas of the theatre of crisis.

  8. 16 CFR 14.16 - Interpretation of Truth-in-Lending Orders consistent with amendments to the Truth-in-Lending Act...

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Interpretation of Truth-in-Lending Orders consistent with amendments to the Truth-in-Lending Act and Regulation Z. 14.16 Section 14.16 Commercial... INTERPRETATIONS, GENERAL POLICY STATEMENTS, AND ENFORCEMENT POLICY STATEMENTS § 14.16 Interpretation of Truth-in...

  9. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Full Text Available Policies or institutions (built into an economic system that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces money supply as incomes and spending rise. Similarly, in recessionary times, payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  10. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. (comp.)

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equation, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
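    The chain-rule propagation described above can be illustrated with a minimal forward-mode sketch using dual numbers (a generic example, not tied to any particular work in the bibliography):

```python
class Dual:
    """Forward-mode AD: carry a value and its derivative through the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Seed the input with dot = 1 and read the derivative off the output."""
    return f(Dual(x, 1.0)).dot

# d/dx (x^3 + 2x) at x = 3 is 3x^2 + 2 = 29
print(derivative(lambda x: x * x * x + 2 * x, 3.0))   # → 29.0
```

Note that the result is exact to machine precision — neither a symbolic expression nor a finite-difference approximation, which is the distinction the abstract draws.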

  11. Truth and truthfulness in the sociology of educational knowledge (Verdade e veracidade na sociologia do conhecimento educacional)

    Directory of Open Access Journals (Sweden)

    Michael Young

    2007-06-01

    Full Text Available The aim of this paper is to reflect on and explore questions of truth and objectivity in the sociology of educational knowledge. It begins by reviewing the problems raised by the social constructivist approaches to knowledge associated with the "new sociology of education" of the 1970s. It suggests that they have significant parallels with the pragmatist ideas of James and Dewey that Durkheim analysed so perceptively in his lectures on pragmatism. The paper then considers Basil Bernstein's development of Durkheim's ideas, arguing that despite his highly original conceptual advances Bernstein remains trapped in the belief that the natural sciences are the only model for objective knowledge. It then discusses Ernst Cassirer's idea of symbolic forms as a more adequate basis for the sociology of knowledge. In conclusion, it argues that an approach to knowledge in educational studies grounded in Cassirer's idea of symbolic objectivity can confront the tension between the concept of truth and the commitment to "being truthful" that was not resolved, and perhaps not even confronted, by the new sociology of education of the 1970s.

  12. Finding "truth" across different data sources.

    Science.gov (United States)

    Rein, Alison; Simpson, Lisa A

    2017-01-01

    The proliferation of new technology platforms and tools is dramatically advancing our ability to capture, integrate and use clinical and other health related data for research and care. Another critical and increasingly common source of data comes directly from patients - often in the form of Patient Reported Outcomes (PRO). As more providers and payers recognize that patient experiences reflect a critical dimension of the value proposition, these data are informing broader strategies to achieve performance improvement and accountability in health systems. Combined with other traditional (e.g., claims) and more recent (e.g., Electronic Health Record) data assets, PROs can help to examine experiences and outcomes that convey a more complete picture of both individual and population health. One of the areas of research where this is most evident is cancer survivorship, including long-term adverse effects, as the population of survivors is increasing given advances in detection and treatment. Key questions remain as to how and under what conditions these new data resources can be used for research, and which are the best "sources of truth" for specific types of information. A recent IJHPR validation study by Hamood et al. reflects important progress in this regard, and establishes the necessary groundwork for a larger planned study. There are some important limitations worth noting, such as a small sample size (which does not support adequate subgroup analysis); a relatively narrow focus on women with only early stage or regionally advanced breast cancer; and a limited focus on outcomes that are primarily clinical and relatively severe in nature (e.g., cardiovascular disease). Finally, as use of EHRs becomes ubiquitous, as patient perspectives and outcome measures are considered, and as more types of data are systematically collected via electronic systems, further comparison and validation of non-clinical data elements captured via such tools will become

  13. Degrees of Truthfulness in Accepted Scientific Claims.

    Directory of Open Access Journals (Sweden)

    Ahmed Hassan Mabrouk

    2008-12-01

    Full Text Available Abstract: Sciences adopt different methodologies in deriving claims and establishing theories. As a result, two accepted claims or theories belonging to two different sciences may not necessarily carry the same degree of truthfulness. Examining the different methodologies of deriving claims in the sciences of ʿaqīdah (Islamic Creed), fiqh (Islamic Jurisprudence), and physics, the study shows that ʿaqīdah provides a holistic understanding of the universe. Physics falls short of interpreting physical phenomena unless these phenomena are looked at through the ʿaqīdah holistic view. Left to itself, error may creep into laws of physics due to the methodology of conducting the physical experiments, misinterpreting the experimental results, or accepting invalid assumptions. As for fiqh, it is found that apart from apparent errors, fiqh views cannot be falsified. It is, therefore, useful to consider ʿaqīdah as a master science which would permit all other sciences to live in harmony.

  14. The challenge of truth telling across cultures: a case study.

    Science.gov (United States)

    Zahedi, Farzaneh

    2011-01-01

    Accompanied by various opinions across cultures, truth telling is a major debate in bioethics. Many studies have focused on attitudes toward truth disclosure. We intend to review several relevant research studies, and discuss the issue through a clinical case consultation. It seems that while "the right to know" is emphasized in bioethics, in some cultural contexts, health professionals fear communicating bad news. The patients may not receive information directly, because it is believed that the truth may make the patient feel hopeless and unable to cope with the problem. Nevertheless, some believe that sharing information may strengthen a trusting relationship between patients and medical professionals. Extensive efforts are in process in some societies to make the patient's right to know the truth a natural part of medical practice. However, in some cases, the principle of respect for patient autonomy requires us to accept a patient's refusal to know the truth, with the provision that he assigns someone to receive information and make medical decisions on his behalf. In conclusion, it is suggested that healthcare professionals should not act on a single presumption in all cases; they should explore what the real interest of the patient is, in order to respect individual autonomy.

  15. The Medawar Lecture 2004 the truth about science.

    Science.gov (United States)

    Lipton, Peter

    2005-06-29

    The attitudes of scientists towards the philosophy of science are mixed and include considerable indifference and some hostility. This may be due in part to unrealistic expectations and to misunderstanding. Philosophy is unlikely directly to improve scientific practices, but scientists may find the attempt to explain how science works and what it achieves of considerable interest nevertheless. The present state of the philosophy of science is illustrated by recent work on the 'truth hypothesis', according to which science is generating increasingly accurate representations of a mind-independent and largely unobservable world. According to Karl Popper, although truth is the aim of science, it is impossible to justify the truth hypothesis. According to Thomas Kuhn, the truth hypothesis is false, because scientists can only describe a world that is partially constituted by their own theories and hence not mind-independent. The failure of past scientific theories has been used to argue against the truth hypothesis; the success of the best current theories has been used to argue for it. Neither argument is sound.

  16. Grounded theory.

    Science.gov (United States)

    Harris, Tina

    2015-04-29

    Grounded theory is a popular research approach in health care and the social sciences. This article provides a description of grounded theory methodology and its key components, using examples from published studies to demonstrate practical application. It aims to demystify grounded theory for novice nurse researchers, by explaining what it is, when to use it, why they would want to use it and how to use it. It should enable nurse researchers to decide if grounded theory is an appropriate approach for their research, and to determine the quality of any grounded theory research they read.

  17. Mediation and Automatization.

    Science.gov (United States)

    Hutchins, Edwin

    This paper discusses the relationship between the mediation of task performance by some structure that is not inherent in the task domain itself and the phenomenon of automatization, in which skilled performance becomes effortless or phenomenologically "automatic" after extensive practice. The use of a common simple explicit mediating…

  18. Automatic Differentiation Package

    Energy Technology Data Exchange (ETDEWEB)

    2007-03-01

    Sacado is an automatic differentiation package for C++ codes using operator overloading and C++ templating. Sacado provides forward, reverse, and Taylor polynomial automatic differentiation classes and utilities for incorporating these classes into C++ codes. Users can compute derivatives of computations arising in engineering and scientific applications, including nonlinear equation solving, time integration, sensitivity analysis, stability analysis, optimization and uncertainty quantification.
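    Sacado itself is a C++ package; the reverse-mode idea behind its operator-overloading classes can be illustrated with a minimal language-neutral sketch (naive recursive back-propagation, adequate for small expression graphs, not Sacado's API):

```python
class Var:
    """Reverse-mode AD: record each operation's parents and local partials,
    then back-propagate adjoints from the output."""
    def __init__(self, val, parents=()):
        self.val, self.parents, self.grad = val, parents, 0.0
    def __add__(self, o):
        return Var(self.val + o.val, [(self, 1.0), (o, 1.0)])
    def __mul__(self, o):
        return Var(self.val * o.val, [(self, o.val), (o, self.val)])
    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Var(2.0), Var(5.0)
z = x * y + x * x          # dz/dx = y + 2x = 9, dz/dy = x = 2
z.backward()
print(x.grad, y.grad)      # → 9.0 2.0
```

One reverse sweep yields the gradient with respect to all inputs at once, which is why reverse mode suits the many-inputs/few-outputs functions common in sensitivity analysis and optimization.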

  19. Digital automatic gain control

    Science.gov (United States)

    Uzdy, Z.

    1980-01-01

    Performance analysis, used to evaluate the fitness of several circuits for digital automatic gain control (AGC), indicates that a digital integrator employing a coherent amplitude detector (CAD) is the device best suited for the application. The circuit reduces gain error to half that of conventional analog AGC while making it possible to automatically modify the response of the receiver to match incoming signal conditions.
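    The integrator-based loop described above can be sketched as follows; the loop constant and the plain rectifying envelope detector are illustrative assumptions, not the article's CAD design:

```python
def agc(samples, target=1.0, gain=1.0, mu=0.05):
    """Digital AGC: a discrete integrator accumulates the amplitude error
    so the output settles at the target level."""
    out = []
    for s in samples:
        y = gain * s
        out.append(y)
        gain += mu * (target - abs(y))   # integrator update
    return out, gain

# A weak constant-amplitude 0.2 input is driven toward unit output amplitude.
out, final_gain = agc([0.2] * 1000)
print(round(abs(out[-1]), 2))   # → 1.0
```

The gain converges geometrically to the fixed point target/|input|; the step size `mu` trades settling speed against response to noise.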

  20. Focusing Automatic Code Inspections

    NARCIS (Netherlands)

    Boogerd, C.J.

    2010-01-01

    Automatic Code Inspection tools help developers in early detection of defects in software. A well-known drawback of many automatic inspection approaches is that they yield too many warnings and require a clearer focus. In this thesis, we provide such focus by proposing two methods to prioritize

  1. Health effects of the Chernobyl accident: fears, rumours and the truth.

    Science.gov (United States)

    Rahu, Mati

    2003-02-01

    The impact of the world's worst nuclear disaster at Chernobyl in 1986 is reviewed within a framework of a triad of fear, rumour and truth. The scope of the accident, Soviet secrecy about it, and the lack of general awareness of, or disregard for, the effects of radiation created a fertile ground for persistent fears and rumours attributing any health problem to Chernobyl. Scientifically correct answers to health issues have been the means to combat disinformation, and to replace interconnected fears, misconceptions and rumours. To date, according to the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) 2000 Report, based on a review of epidemiological and radiobiological studies, the main radiation-related effect of the Chernobyl accident is an increased risk of childhood thyroid cancer. In addition, the accident has had serious non-radiation-related psychological consequences on the residents of the contaminated territories, resettled populations and clean-up workers. Researchers in search of the truth through epidemiological reasoning are facing serious challenges which are reviewed within this article.

  2. Automatic detection of coronaries ostia in computed tomography angiography volume data

    Directory of Open Access Journals (Sweden)

    Noha A. Seada

    2016-11-01

    Conclusions: Thus the proposed algorithm gives accurate results in comparison with the ground truth, which proves the efficiency of the proposed algorithm and its applicability to be extended as a seed for heart coronaries segmentation. [Int J Res Med Sci 2016;4(11):4747-4752]

  3. Truth and falsehood in Judith: A Greimassian contribution

    Directory of Open Access Journals (Sweden)

    Risimati Synod Hobyane

    2016-04-01

    Full Text Available Narratives are never meant to be neutral in their rhetorical intent. They have power not only to reveal realities and prevailing worldviews but also to create new realities and new worldviews by refuting illusions and falsehood, and affirming the truth. The Judith narrative is a good example for the exploration of this claim. This article contributes by employing the thematic level of analysis, the veridictory square in particular, of the Greimassian approach to narratives, to map out the possible illusions and affirm the truth within second temple Judaism. The study of the veridictory square, as informed by the level of analysis mentioned above, seems to persuade the reader by first extracting the truth from illusion and thereafter exposing and shaming falsehood in Judith. Subsequently, the article asserts that Judith is not neutral in its intent but was designed to deal with illusive ideas that might have been impacting the wellbeing of second temple Judaism.

  4. Love the Truth in the Franciscan School (XIIIth century)

    Directory of Open Access Journals (Sweden)

    Manuel Lázaro Pulido

    2013-11-01

    Full Text Available Love of the truth is a fundamental question in the Franciscan School. It has its origin in the Franciscans' practical need to transmit the evangelical message to all men. The universality of the message inspires the concept of wisdom as a basis for love of the truth. The truth appears as an occasion of reference to God; the significatio is never subordinated to the res. The article sets out the fundamental milestones of this construction from the origins of the Franciscan School to the end of the 13th century with Gonsalvus of Spain, indicating the common points and the internal discussions of a School according to Anthony of Lisbon/Padua, Alexander of Hales, Odo Rigaldus, William of Melitona, Robert Grosseteste, Roger Bacon, Bonaventure, Matthew of Aquasparta, Peter John Olivi and Gonsalvus of Spain.

  5. Automatic metastatic brain tumor segmentation for stereotactic radiosurgery applications

    Science.gov (United States)

    Liu, Yan; Stojadinovic, Strahinja; Hrycushko, Brian; Wardak, Zabi; Lu, Weiguo; Yan, Yulong; Jiang, Steve B.; Timmerman, Robert; Abdulrahman, Ramzi; Nedzi, Lucien; Gu, Xuejun

    2016-12-01

    .8  ±  12.6 mm, an MSSD of 1.5  ±  3.2 mm, and an SDSSD of 1.8  ±  3.4 mm when comparing to the physician drawn ground truth. The result indicated that the developed automatic segmentation strategy yielded accurate brain tumor delineation and presented as a useful clinical tool for SRS applications.

  6. Trauma and truth: representations of madness in Chinese literature.

    Science.gov (United States)

    Linder, Birgit

    2011-12-01

    With only a few exceptions, the literary theme of madness has long been a domain of Western cultural studies. Much of Western writing represents madness as an inquiry into the deepest recesses of the mind, while the comparatively scarce Chinese tradition is generally defined by madness as a voice of social truth. This paper looks at five works of twentieth-century Chinese fiction that draw on socio-somatic aspects of madness to reflect upon social truths, suggesting that the inner voice of subjectivity is perhaps not the only true voice of the self.

  7. Fibroadenoma in the male breast: Truth or Myth?

    Science.gov (United States)

    Agarwal, Puneet; Kohli, Gaurav

    2016-01-01

    Truth or myth is seldom encountered in the practice of surgery, especially in cases of breast diseases. Yet, even after thousands of years of treating breast disease by surgeons/healers, fibroadenoma in the male breast seems to be a myth, due to the absence of fibro-glandular tissue. We wish to break this myth by our own experience as well as other studies by others all over the world, and unveil the truth that fibroadenoma in the male breast is a definitive entity and has a prevalence among the vast spectrum of breast diseases.

  8. Automatic Exudate Detection from Non-dilated Diabetic Retinopathy Retinal Images Using Fuzzy C-means Clustering

    Directory of Open Access Journals (Sweden)

    Akara Sopharak

    2009-03-01

    Full Text Available Exudates are the primary sign of Diabetic Retinopathy. Early detection can potentially reduce the risk of blindness. An automatic method to detect exudates from low-contrast digital images of retinopathy patients with non-dilated pupils using Fuzzy C-Means (FCM) clustering is proposed. Contrast enhancement preprocessing is applied before four features, namely intensity, standard deviation of intensity, hue and number of edge pixels, are extracted as input parameters to coarse segmentation using the FCM clustering method. The first result is then fine-tuned with morphological techniques. The detection results are validated by comparing with expert ophthalmologists’ hand-drawn ground truths. Sensitivity, specificity, positive predictive value (PPV), positive likelihood ratio (PLR) and accuracy are used to evaluate overall performance. It is found that the proposed method detects exudates successfully with sensitivity, specificity, PPV, PLR and accuracy of 87.28%, 99.24%, 42.77%, 224.26 and 99.11%, respectively.
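    The coarse FCM segmentation step can be sketched on 1-D intensity values (a generic fuzzy c-means loop with deterministic initialization; the paper clusters four features, not raw intensity alone):

```python
def fcm(data, c=2, m=2.0, iters=50):
    """Fuzzy C-Means: alternate membership and centroid updates,
    minimizing sum_ik u_ik^m * (x_i - v_k)^2."""
    centers = [min(data), max(data)] if c == 2 else list(data[:c])
    n = len(data)
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - v) or 1e-12 for v in centers]
            u.append([1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for k in range(c)])
        # centroid update: mean weighted by u^m
        centers = [sum(u[i][k] ** m * data[i] for i in range(n)) /
                   sum(u[i][k] ** m for i in range(n)) for k in range(c)]
    return sorted(centers)

# Background pixels near 10, bright exudate-like pixels near 200.
print([round(v) for v in fcm([8, 9, 10, 11, 12, 198, 199, 200, 201, 202])])   # → [10, 200]
```

Unlike hard k-means, each pixel keeps a graded membership in every cluster, which is what makes the subsequent fine-tuning by morphological operations meaningful.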

  9. A robust, high-throughput method for computing maize ear, cob, and kernel attributes automatically from images.

    Science.gov (United States)

    Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P

    2017-01-01

    Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository.
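    The first algorithm's idea — recovering the average per-kernel spacing from a periodic intensity profile along the cob axis by Fourier analysis — can be sketched with a naive DFT (a toy 1-D illustration, not the authors' sliding-window pipeline):

```python
from math import cos, sin, pi, hypot

def dominant_period(signal):
    """Estimate the repeat length (pixels per kernel) of a periodic
    intensity profile from the largest DFT magnitude peak."""
    n = len(signal)
    mean = sum(signal) / n
    xs = [v - mean for v in signal]          # drop the DC component
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(x * cos(2 * pi * k * i / n) for i, x in enumerate(xs))
        im = sum(x * sin(2 * pi * k * i / n) for i, x in enumerate(xs))
        mag = hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return n / best_k                         # period in pixels

# Synthetic profile: one smooth intensity peak every 8 pixels over 64 pixels.
profile = [1 + cos(2 * pi * i / 8) for i in range(64)]
print(dominant_period(profile))   # → 8.0
```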

  10. Automatically Identifying Fusion Events between GLUT4 Storage Vesicles and the Plasma Membrane in TIRF Microscopy Image Sequences

    Directory of Open Access Journals (Sweden)

    Jian Wu

    2015-01-01

    Full Text Available Quantitative analysis of the dynamic behavior of membrane-bound secretory vesicles has proven to be important in biological research. This paper proposes a novel approach to automatically identify the elusive fusion events between VAMP2-pHluorin labeled GLUT4 storage vesicles (GSVs) and the plasma membrane. The initiation of fusion events is detected by modified forward subtraction of consecutive frames in the TIRFM image sequence. Spatially connected pixels in difference images brighter than a specified adaptive threshold are grouped into a distinct fusion spot. The vesicles are located at the intensity-weighted centroid of their fusion spots. To reveal the true in vivo nature of a fusion event, 2D Gaussian fitting of the fusion spot is used to derive the intensity-weighted centroid and the spot size during the fusion process. The fusion event and its termination can be determined according to the change of spot size. The method is evaluated on real experimental data with ground truth annotated by expert cell biologists. The evaluation results show that it can achieve relatively high accuracy, comparing favorably to manual analysis yet at a small fraction of the time.
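    The detection chain — forward frame subtraction, an adaptive threshold, grouping of connected pixels, and intensity-weighted centroids — can be sketched as follows; the mean + k·std threshold is an assumed stand-in for the paper's adaptive threshold:

```python
def fusion_spots(prev, curr, k=3.0):
    """Detect sudden brightenings between consecutive frames and return the
    intensity-weighted centroid of each connected fusion spot."""
    h, w = len(curr), len(curr[0])
    diff = [[curr[y][x] - prev[y][x] for x in range(w)] for y in range(h)]
    flat = [v for row in diff for v in row]
    mean = sum(flat) / len(flat)
    std = (sum((v - mean) ** 2 for v in flat) / len(flat)) ** 0.5
    thresh = mean + k * std                    # adaptive threshold
    mask = {(y, x) for y in range(h) for x in range(w) if diff[y][x] > thresh}
    spots = []
    while mask:                                # flood-fill connected components
        stack, comp = [mask.pop()], []
        while stack:
            y, x = stack.pop()
            comp.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if (y + dy, x + dx) in mask:
                    mask.remove((y + dy, x + dx))
                    stack.append((y + dy, x + dx))
        total = sum(diff[y][x] for y, x in comp)
        cy = sum(y * diff[y][x] for y, x in comp) / total
        cx = sum(x * diff[y][x] for y, x in comp) / total
        spots.append((cy, cx))
    return spots

# Toy frames: a 2x2 patch brightens by 100 between consecutive frames.
prev = [[0.0] * 10 for _ in range(10)]
curr = [[0.0] * 10 for _ in range(10)]
for y in (4, 5):
    for x in (6, 7):
        curr[y][x] = 100.0
print(fusion_spots(prev, curr))   # → [(4.5, 6.5)]
```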

  11. A SEMI-AUTOMATIC RULE SET BUILDING METHOD FOR URBAN LAND COVER CLASSIFICATION BASED ON MACHINE LEARNING AND HUMAN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    H. Y. Gu

    2017-09-01

    Full Text Available A classification rule set, comprising features and decision rules, is important for land cover classification. The selection of features and decisions is usually based on an iterative trial-and-error approach, as often utilized in GEOBIA; however, this is time-consuming and has poor versatility. This study puts forward a rule-set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets effectively, overcoming the iterative trial-and-error approach; human knowledge is used to address a shortcoming of existing machine-learning methods, namely insufficient use of prior knowledge, and to improve the versatility of rule sets. A two-step workflow is introduced: first, an initial rule set is built based on Random Forest and CART decision trees; second, the initial rule set is analyzed and validated based on human knowledge, using statistical confidence intervals to determine its thresholds. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine rule sets for land cover classification semi-automatically, and that there are static features for the different land cover classes.
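    The machine-learning half of the workflow can be illustrated with a single CART-style split: scan candidate thresholds on one feature and keep the one with the lowest weighted Gini impurity (the feature name and values below are hypothetical):

```python
def best_threshold(values, labels):
    """One CART-style split on a single feature: try midpoints between sorted
    values and return the threshold minimizing weighted Gini impurity."""
    def gini(subset):
        n = len(subset)
        if n == 0:
            return 0.0
        p = sum(subset) / n                  # fraction of positive labels
        return 2 * p * (1 - p)
    pairs = sorted(zip(values, labels))
    best = (None, 1.0)
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v <= t]
        right = [l for v, l in pairs if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if score < best[1]:
            best = (t, score)
    return best[0]

# NDVI-like feature separating vegetation (1) from built-up (0) pixels.
ndvi   = [0.1, 0.15, 0.2, 0.6, 0.7, 0.8]
labels = [0,   0,    0,   1,   1,   1]
print(best_threshold(ndvi, labels))   # midpoint between the two classes
```

In the described workflow, such a learned threshold would then be checked against a statistical confidence interval derived from human knowledge before entering the final rule set.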

  12. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-08

    On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with a satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of
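    The two evaluation metrics used in this study, the Dice coefficient and the centroid-to-centroid TRE, are standard and can be computed directly from binary masks. A minimal sketch with toy ROIs (the masks below are hypothetical, not the study's data):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def target_registration_error(a, b):
    """Distance between the centroids of two binary ROIs."""
    ca = np.argwhere(a).mean(axis=0)
    cb = np.argwhere(b).mean(axis=0)
    return float(np.linalg.norm(ca - cb))

# Toy ROIs: the "automatic" mask is the manual one shifted down one pixel
manual = np.zeros((20, 20), dtype=bool)
manual[5:10, 5:10] = True
auto = np.zeros((20, 20), dtype=bool)
auto[6:11, 5:10] = True

dice = dice_coefficient(manual, auto)           # 0.8 for this toy case
tre = target_registration_error(manual, auto)   # 1.0 pixel for this toy case
```

    Dice measures volumetric overlap (1.0 is perfect), while TRE captures localization error even when two masks of equal size overlap well, which is why the study reports both.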

  13. Automatic segmentation of meningioma from non-contrasted brain MRI integrating fuzzy clustering and region growing

    Directory of Open Access Journals (Sweden)

    Liao Chun-Chih

    2011-08-01

    Full Text Available Abstract Background In recent years, magnetic resonance imaging (MRI) has become important in brain tumor diagnosis. Using this modality, physicians can locate specific pathologies by analyzing differences in tissue character presented in different types of MR images. This paper uses an algorithm integrating fuzzy c-means (FCM) and region growing techniques for automated tumor image segmentation from patients with meningioma. Only non-contrasted T1- and T2-weighted MR images are included in the analysis. The study's aims are to correctly locate tumors in the images, and to detect those situated in the midline position of the brain. Methods The study used non-contrasted T1- and T2-weighted MR images from 29 patients with meningioma. After FCM clustering, 32 groups of images from each patient group were put through the region-growing procedure for pixel aggregation. Later, using knowledge-based information, the system selected tumor-containing images from these groups and merged them into one tumor image. An alternative semi-supervised method was added at this stage for comparison with the automatic method. Finally, the tumor image was optimized by a morphology operator. Results from automatic segmentation were compared to the "ground truth" (GT) on a pixel level. Overall data were then evaluated using a quantified system. Results The quantified parameters, including the "percent match" (PM) and "correlation ratio" (CR), suggested a high match between GT and the present study's system, as well as a fair level of correspondence. The results were compatible with those from other related studies. The system successfully detected all of the tumors situated at the midline of the brain. Six cases failed in the automatic group. One also failed in the semi-supervised alternative. The remaining five cases presented noticeable edema inside the brain. In the 23 successful cases, the PM and CR values in the two groups were highly related. 
Conclusions Results indicated
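    The region-growing step that follows FCM clustering in this pipeline, aggregating pixels connected to a seed whose intensity lies within a tolerance, can be sketched as below. The 4-connectivity, the tolerance rule, and the tiny test image are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Aggregate pixels 4-connected to `seed` whose intensity is within
    `tol` of the seed intensity (illustrative sketch of region growing)."""
    h, w = image.shape
    seed_val = image[seed]
    visited = np.zeros((h, w), dtype=bool)
    visited[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not visited[ny, nx]
                    and abs(image[ny, nx] - seed_val) <= tol):
                visited[ny, nx] = True
                queue.append((ny, nx))
    return visited

# Toy image: a low-intensity region in the top-left corner, bright elsewhere
img = np.array([[1.0, 1.0, 9.0],
                [1.0, 2.0, 9.0],
                [9.0, 9.0, 9.0]])
mask = region_grow(img, (0, 0), tol=1.5)
```

    In the paper's pipeline, the seeds and intensity ranges would come from the FCM cluster assignments rather than being chosen by hand.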

  14. Grounded cognition.

    Science.gov (United States)

    Barsalou, Lawrence W

    2008-01-01

    Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.

  15. Bibliographic Entity Automatic Recognition and Disambiguation

    CERN Document Server

    AUTHOR|(SzGeCERN)766022

    This master thesis reports on an applied machine learning research internship done at the digital library of the European Organization for Nuclear Research (CERN). The way an author's name may vary in its representation across scientific publications creates ambiguity when it comes to uniquely identifying an author; in the database of any scientific digital library, the same full name variation can be used by more than one author. This may occur even between authors from the same research affiliation. In this work, we built a machine learning based author name disambiguation solution. The approach consists in learning a distance function from ground-truth data, blocking publications of broadly similar author names, and clustering the publications using a semi-supervised strategy within each of the blocks. The main contributions of this work are twofold: first, improving the distance model by taking into account the (estimated) ethnicity of the author's full name. Indeed, names from different ethnicities, for e...
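    The blocking step described above, grouping publications of broadly similar author names before any pairwise comparison, can be sketched as follows. The blocking key used here (last name plus first initial) is a common, simple choice and is an assumption for illustration; the thesis' actual strategy may differ.

```python
from collections import defaultdict

def block_signatures(signatures):
    """Group publication signatures into blocks of broadly similar author
    names, keyed by (last name, first initial). Illustrative stand-in for
    the thesis' blocking strategy."""
    blocks = defaultdict(list)
    for sig in signatures:
        last, first = sig["name"].split(", ")
        blocks[(last.lower(), first[0].lower())].append(sig)
    return dict(blocks)

# Hypothetical signatures: three "Smith, J*" variants plus one other author
sigs = [
    {"name": "Smith, John", "pub": "p1"},
    {"name": "Smith, J.",   "pub": "p2"},
    {"name": "Smith, Jane", "pub": "p3"},
    {"name": "Doe, Jane",   "pub": "p4"},
]
blocks = block_signatures(sigs)
```

    Blocking keeps the subsequent semi-supervised clustering tractable: the learned distance function only needs to compare publications inside each block, never across the whole database.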

  16. Issues Related to Spirituality and the Search for Truth in Sectarian Institutions of Higher Education

    Science.gov (United States)

    Poe, Harry Lee

    2005-01-01

    The foundational spiritual truths of sectarian institutions, often criticized by the secular academy, may provide a basis for a vigorous search for truth in the transition from modern to postmodern society.

  17. The Metaphysical Assumptions of the Conception of Truth in Martin Smiglecki’s Logic

    Directory of Open Access Journals (Sweden)

    Tomasz Pawlikowski

    2015-06-01

    Full Text Available The central element of the concept of truth in Smiglecki's Logica (1618) is his approach to formulating definitions. Where the establishing of the truth is concerned, he always points to conformity (conformitas) in respect of whether the intellectual recognition of a thing or things is in accordance with its intellectual equivalent, or the principles behind the latter, where these are understood as designating the corresponding idea inherent in the intellect of God. This is a form of the classical definition of truth, similar to that used by St. Thomas Aquinas, with a wide scope of applicability: to the field of existence (transcendental truth), to cognition and language (logical truth), and even to moral beliefs (moral rightness). Smiglecki distinguishes three types of truth: truth assigned to being, truth assigned to cognition, and truth assigned to moral convictions. Of these, the first is identified with transcendental truth, while the second is attributed not only to propositions and sentences, but also to concepts. The truth of concepts results from compliance with things by way of representation, while the truth of propositions and sentences issues from a compliance with things involving the implementation of some form of expression or other. Logical truth pertains to propositions rather than concepts. The kind of moral truth he writes about is what we would now be more likely to call "truthfulness". With the exception of moral truth, which he defined as compliance of a statement with someone's internal thoughts, Smiglecki considers every kind of truth to be a conditioned state of the object of knowledge. He says (a) that the ultimate object of reference of human cognitive functioning is a real being, absolutely true by virtue of compliance with its internal principles and their idea as present in the intellect of God, and (b) that the compatibility of human cognition with a real being is the ultimate

  18. An eigenvalue approach for the automatic scaling of unknowns in model-based reconstructions: Application to real-time phase-contrast flow MRI.

    Science.gov (United States)

    Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens

    2017-09-28

    The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios. Copyright © 2017 John Wiley & Sons, Ltd.
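    The core numerical device described in this abstract, expressing the scaling as the maximal eigenvalue of a self-adjoint positive-definite matrix and solving for it by power iteration, can be sketched as below. This is a generic power-iteration sketch of that idea, not the authors' reconstruction code; the example matrix is arbitrary.

```python
import numpy as np

def max_eigenvalue(A, n_iter=100):
    """Estimate the maximal eigenvalue of a self-adjoint positive-definite
    matrix by power iteration (sketch of the scaling computation)."""
    x = np.ones(A.shape[0])
    for _ in range(n_iter):
        y = A @ x
        x = y / np.linalg.norm(y)  # renormalize to avoid overflow
    return x @ (A @ x)  # Rayleigh quotient of the converged vector

# Small symmetric positive-definite example; eigenvalues are (7 ± sqrt(5)) / 2
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam = max_eigenvalue(A)
```

    In the reconstruction setting, `A` would be the (large, implicitly defined) scaling matrix, and each power-iteration step only needs matrix-vector products, which is what makes the approach practical for model-based MRI.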

  19. A Gentzen Calculus for Nothing but the Truth

    NARCIS (Netherlands)

    S. Wintein (Stefan); R. Muskens (Reinhard)

    2016-01-01

    textabstractIn their paper Nothing but the Truth Andreas Pietz and Umberto Rivieccio present Exactly True Logic (ETL), an interesting variation upon the four-valued logic for first-degree entailment FDE that was given by Belnap and Dunn in the 1970s. Pietz & Rivieccio provide this logic with a

  20. Structural elements of the right to the truth

    Directory of Open Access Journals (Sweden)

    Luis Andrés Fajardo Arturo

    2012-06-01

    Full Text Available This article refers specifically to the current state of the discussion on the basic legal structure of the right to truth, its size and degree of legality and reports initial results on the evaluation of the models adopted for peace in Colombia, in light of the international standards and practices.

  1. Steiner versus Wittgenstein: Remarks on Differing Views of Mathematical Truth

    Directory of Open Access Journals (Sweden)

    Charles SAYWARD

    2010-01-01

    Full Text Available Mark Steiner criticizes some remarks Wittgenstein makes about Gödel. Steiner takes Wittgenstein to be disputing a mathematical result. The paper argues that Wittgenstein does no such thing. The contrast between the realist and the demonstrativist concerning mathematical truth is examined. Wittgenstein is held to side with neither camp. Rather, his point is that a realist argument is inconclusive.

  2. Accessing Imagined Communities and Reinscribing Regimes of Truth

    Science.gov (United States)

    Carroll, Sherrie; Motha, Suhanthie; Price, Jeremy N.

    2008-01-01

    In this article, we explore the complex and nebulous terrain between two theoretical concepts, imagined communities (Norton, 2000, 2001), that is, individuals' imagined affiliations with certain groups, and regimes of truth (Foucault, 1980), dominant images inscribed and reinscribed into individual consciousness until they become normative. Using…

  3. Heart Health: Learn the Truth About Your Heart

    Science.gov (United States)

    ... Cover Story: Heart Health, Learn the Truth About Your Heart. Past Issues / Winter 2009. February is American Heart Month. Now is the time to make sure ...

  4. Views from the field Truth seeking and gender: The Liberian ...

    African Journals Online (AJOL)

    to gender influence truth-seeking in a post-conflict situation? Following Liberia's ... mandated to investigate the causes, nature, patterns and impact of human rights violations, as well as identify .... based violence and particularly to the experiences of children and women during armed conflicts in Liberia… This articulation of ...

  5. 75 FR 58469 - Regulation Z; Truth in Lending

    Science.gov (United States)

    2010-09-24

    ... From the Federal Register Online via the Government Publishing Office. Part II, Federal Reserve System, 12 CFR Part 226, Regulation Z; Truth in Lending; Proposed Rules, Interim Rule, Final Rules. ...-shopping behaviors, including first-time mortgage shoppers, prime and subprime borrowers, and consumers...

  6. Introduction: myth, truth, and narrative in Herodotus' Histories

    NARCIS (Netherlands)

    Baragwanath, E.; de Bakker, M.; Baragwanath, E.; de Bakker, M.

    2012-01-01

    This introductory chapter focuses on myth and its multiple relationships with the concepts of truth and narrative, both within Herodotus' Histories and between the work and its context. First, it discusses the problematic reception in modern history of the material deemed mythical in Herodotus'

  7. Restorative Justice and the South African Truth and Reconciliation Process

    DEFF Research Database (Denmark)

    Gade, Christian B.N.

    2013-01-01

    It has frequently been argued that the post-apartheid Truth and Reconciliation Commission (TRC) was committed to restorative justice (RJ), and that RJ has deep historical roots in African indigenous cultures by virtue of its congruence both with ubuntu and with African indigenous justice systems...

  8. Truth Control of Duplicate Measurements under Uncertainty Conditions

    Directory of Open Access Journals (Sweden)

    V. A. Anischenko

    2010-01-01

    Full Text Available The paper considers a problem pertaining to truth control of duplicate measurements of technological variables under conditions of scarce data on the characteristics of the measuring facilities and controlled variables. The proposed control method improves the probability of detecting and identifying untrue duplicate measurements.

  9. 78 FR 70194 - Truth in Lending (Regulation Z)

    Science.gov (United States)

    2013-11-25

    ... inflation by the annual percentage increase in the Consumer Price Index for Urban Wage Earners and Clerical... credit transactions be adjusted annually by any annual percentage increase in the Consumer Price Index... CFR Part 226 BUREAU OF CONSUMER FINANCIAL PROTECTION 12 CFR Part 1026 Truth in Lending (Regulation Z...

  10. In defense of correspondence truth : A reply to Markus

    NARCIS (Netherlands)

    Borsboom, D.; Haig, B.D.

    2013-01-01

    In his response to our article, Keith Markus argues that our recommendation that psychologists adopt correspondence truth is not supported by sound argument. In our rejoinder, we show that Markus's critique only has force against a minor part of our article. Additionally, we show that he does not ac

  11. Ernst von Glasersfeld's Radical Constructivism and Truth as Disclosure

    Science.gov (United States)

    Joldersma, Clarence W.

    2011-01-01

    In this essay Clarence Joldersma explores radical constructivism through the work of its most well-known advocate, Ernst von Glasersfeld, who combines a sophisticated philosophical discussion of knowledge and truth with educational practices. Joldersma uses Joseph Rouse's work in philosophy of science to criticize the antirealism inherent in…

  12. What is the Point? Ethics, Truth and the Tractatus

    DEFF Research Database (Denmark)

    Christensen, Anne-Marie Søndergaard

    2007-01-01

    discourse is shaped by both subjective and objective concerns. Moving on, I unfold the subjective side of ethics by drawing on Stanley Cavell's notion of the point of an utterance, while the objective side will be presented via Diamond's writing on the importance of truth in ethics. My goal is to argue...

  13. The Hard Truth: Problems and Issues in Urban School Reform

    Science.gov (United States)

    Yisrael, Sean

    2012-01-01

    "The Hard Truth" is a book written for principals and school administrators who want to implement effective change. The topics of the book candidly discuss the problems, people, and issues that get in the way of true school reform, and what building level principals can personally do to attain the best possible outcomes.

  14. Truth, Transparency and Trust: Treasured Values in Higher Education

    Science.gov (United States)

    Gross, Karen

    2015-01-01

    The words "truth," "transparency," and "trust" recently have taken on renewed importance in higher education. The reporting and handling of sexual assaults, athletic cheating scandals, Muslim student deaths, the intrusion into the admissions process by college/university presidents forcing acceptance of new students…

  15. On the Analogy between Cognitive Representation and Truth

    Directory of Open Access Journals (Sweden)

    Mauricio SUÁREZ

    2010-01-01

    Full Text Available We defend the claim that the inferential conception of representation is generally deflationary; and we show that deflationism is consistent with pluralism regarding the notion of cognitive representation. The nature and extent of this pluralism is explored by means of an analogy with the minimalist conception of truth.

  16. Green gold : on variations of truth in plantation forestry

    NARCIS (Netherlands)

    Romeijn, P.

    1999-01-01

    The "variations of truth in plantation forestry" is a study on the Teakwood investment program. Teakwood offered the general public in The Netherlands the opportunity to directly invest in a teak plantation in Costa Rica. The program was pioneered in 1989 and truly gained momentum when it

  17. Truth approximation by concretization in capital structure theory

    NARCIS (Netherlands)

    Kuipers, Theo A.F.; Cools, Kees; Hamminga, Bert

    1994-01-01

    This paper supplies a structuralist reconstruction of the Modigliani-Miller theory and shows that the economic literature following their results reports on research with an implicit strategy to come "closer-to-the-truth" in the modern technical sense in philosophy of science.
