WorldWideScience

Sample records for corrected satellite-based quantitative

  1. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

    With advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high-resolution National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km/30 min resolution are aggregated to daily totals to match in-situ observations for the period 2003-2010. The study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for applying bias correction, and to test the effectiveness of the correction. Bias correction factors are calculated for moving windows (MW) and for sequential windows (SW) of 3, 5, 7, 9, …, 31 days, with the aim of assessing the error distribution between the in-situ observations and the CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. The accuracy of cumulative rainfall depth is assessed by the root mean squared error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map; the reliability of the interpolation is assessed by cross-validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to produce bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). The results confirm the existence of bias in the CMORPH rainfall and show that the 7-day SW approach performs best for bias correction of CMORPH rainfall, demonstrating the efficiency of the approach.
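As a sketch of the window-based correction described above (not the authors' code), the multiplicative bias factor for each sequential window can be computed as the ratio of accumulated gauge rainfall to accumulated CMORPH rainfall and applied to the daily estimates. Function names and the toy data are illustrative:

```python
import numpy as np

def sequential_bias_factors(gauge, cmorph, window=7):
    """Multiplicative bias factors over non-overlapping (sequential) windows.

    gauge, cmorph : 1-D arrays of daily rainfall (mm) at one station.
    Returns an array the same length as the inputs, holding the
    gauge/CMORPH ratio of each window, repeated across its days.
    """
    factors = np.ones_like(cmorph, dtype=float)
    for start in range(0, len(cmorph), window):
        g = gauge[start:start + window].sum()
        c = cmorph[start:start + window].sum()
        if c > 0:                       # avoid division by zero on dry windows
            factors[start:start + window] = g / c
    return factors

# Toy example: CMORPH underestimates the gauge rainfall by half.
gauge  = np.array([2.0, 0.0, 4.0, 1.0, 0.0, 3.0, 2.0] * 2)
cmorph = gauge / 2.0
bf = sequential_bias_factors(gauge, cmorph, window=7)
corrected = cmorph * bf                 # bias-corrected estimates
```

In the study the per-station factors are then interpolated into a bias-factor map; here a single station suffices to show the arithmetic.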

  4. Absorbing Aerosols Above Cloud: Detection, Quantitative Retrieval, and Radiative Forcing from Satellite-based Passive Sensors

    Science.gov (United States)

    Jethva, H.; Torres, O.; Remer, L. A.; Bhartia, P. K.

    2012-12-01

    Light-absorbing particles such as carbonaceous aerosols generated by biomass burning and windblown dust can exert a net warming effect on climate, the strength of which depends on the absorption capacity of the particles and the brightness of the underlying reflecting background. When advected over low-level bright clouds, these aerosols absorb the cloud-reflected radiation from the ultraviolet (UV) to the shortwave IR (SWIR) and make the cloud scene darker, a phenomenon commonly known as "cloud darkening". The darkening is visible by eye in satellite images, and quantitatively in the spectral reflectance measured by spaceborne sensors over regions where absorbing carbonaceous and dust aerosols overlie low-level cloud decks. Theoretical radiative transfer simulations support the observational evidence and further reveal that the strength of the cloud darkening and its spectral signature (the color ratio between measurements at two wavelengths) are a joint function of aerosol and cloud optical thickness (AOT and COT), which measure the total light extinction caused by aerosols and cloud, respectively. Here, we developed a retrieval technique, named the "color ratio method", that uses satellite measurements at two channels, one at a shorter wavelength in the visible and one at a longer wavelength in the shortwave IR, for the simultaneous retrieval of AOT and COT. The technique requires assumptions on the aerosol single-scattering albedo and the aerosol-cloud vertical separation, which are supplied by Aerosol Robotic Network (AERONET) and spaceborne CALIOP lidar measurements. The retrieval has been tested using near-UV and visible reflectance observations from the Ozone Monitoring Instrument (OMI) and the Moderate Resolution Imaging Spectroradiometer (MODIS) for distinct above-cloud smoke and dust aerosol events observed seasonally over the southeast and tropical Atlantic Ocean.
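The two-channel lookup-table inversion at the heart of such a color ratio method can be sketched as follows. The forward model here is a deliberately crude stand-in for real radiative-transfer simulations, and all numbers are illustrative:

```python
import numpy as np

# Toy forward model standing in for full radiative-transfer simulations:
# SWIR reflectance grows with COT (absorbing aerosol is nearly transparent
# there), while the visible channel is darkened by absorbing AOT above cloud.
def forward(aot, cot):
    swir = cot / (cot + 5.0)                 # cloud brightening with COT
    vis = swir * np.exp(-0.4 * aot)          # aerosol absorption darkens VIS
    return vis, swir

aot_grid = np.linspace(0.0, 2.0, 81)
cot_grid = np.linspace(1.0, 40.0, 157)
A, C = np.meshgrid(aot_grid, cot_grid, indexing="ij")
VIS, SWIR = forward(A, C)                    # precomputed lookup table

def retrieve(vis_obs, swir_obs):
    """Nearest-node LUT inversion of the two-channel measurement."""
    cost = (VIS - vis_obs) ** 2 + (SWIR - swir_obs) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return aot_grid[i], cot_grid[j]

vis, swir = forward(0.8, 12.0)               # synthetic "observation"
aot, cot = retrieve(vis, swir)
```

The simultaneous retrieval works because the two channels respond differently to the two unknowns, so the (VIS, SWIR) pair maps back to a unique (AOT, COT) node.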

  5. Case study of atmospheric correction on CCD data of HJ-1 satellite based on 6S model

    International Nuclear Information System (INIS)

    Xue, Xiaojuan; Meng, Qingyan; Xie, Yong; Sun, Zhangli; Wang, Chang; Zhao, Hang

    2014-01-01

    In this study, the atmospheric radiative transfer model 6S was used to simulate the radiative transfer process along the surface-atmosphere-sensor path. An algorithm based on a look-up table (LUT) built from the 6S model was used to correct HJ-1 CCD images pixel by pixel. The effect of atmospheric correction on the HJ-1 CCD data was then analyzed in terms of spectral curves and evaluated against reflectance measured during the HJ-1B satellite overpass; finally, the normalized difference vegetation index (NDVI) before and after atmospheric correction was compared. The results showed that: (1) atmospheric correction of HJ-1 CCD data can reduce the "increase" (brightening) effect of the atmosphere; (2) apparent reflectance values are higher than the 6S-corrected surface reflectance in bands 1-3 but lower in the near-infrared band, and the corrected surface reflectance values agree well with the measured reflectance; (3) the NDVI increases significantly after atmospheric correction, indicating that the correction can highlight vegetation information.
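A 6S-based correction of this kind is often applied per band through the three coefficients (xa, xb, xc) that a 6S run (or a LUT of 6S runs) reports for a given geometry and atmosphere. The sketch below assumes that convention; the coefficient and radiance values are made up for illustration:

```python
# Per-band 6S correction: y = xa*L - xb;  rho_surf = y / (1 + xc*y),
# where L is the at-sensor radiance and (xa, xb, xc) come from a 6S run
# for the acquisition geometry and atmosphere.
def correct_pixel(L, xa, xb, xc):
    y = xa * L - xb
    return y / (1.0 + xc * y)

xa, xb, xc = 0.0033, 0.1248, 0.1045   # illustrative red-band coefficients
L = 120.0                             # at-sensor radiance, W m-2 sr-1 um-1
rho = correct_pixel(L, xa, xb, xc)    # surface reflectance, dimensionless
```

Applying this per pixel with coefficients interpolated from a LUT over aerosol optical depth and geometry is the pixel-by-pixel scheme the abstract describes.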

  6. Sensitivity of Satellite-Based Skin Temperature to Different Surface Emissivity and NWP Reanalysis Sources Demonstrated Using a Single-Channel, Viewing-Angle-Corrected Retrieval Algorithm

    Science.gov (United States)

    Scarino, B. R.; Minnis, P.; Yost, C. R.; Chee, T.; Palikonda, R.

    2015-12-01

    …station, and NOAA ESRL high-resolution Optimum Interpolation SST (OISST). A precise understanding of the influence these auxiliary inputs have on final satellite-based Ts retrievals may help guide refinement in ɛs characterization and NWP development, e.g., future versions of the Goddard Earth Observing System Data Assimilation System.

  7. Correcting satellite-based precipitation products through SMOS soil moisture data assimilation in two land-surface models of different complexity: API and SURFEX

    Science.gov (United States)

    Real-time rainfall accumulation estimates at the global scale are useful for many applications. However, the real-time versions of satellite-based rainfall products are known to contain errors relative to rainfall observed in situ. Recent studies have demonstrated how information about rainfall …

  8. A quantitative comparison of corrective and perfective maintenance

    Science.gov (United States)

    Henry, Joel; Cain, James

    1994-01-01

    This paper presents a quantitative comparison of corrective and perfective software maintenance activities. The comparison utilizes basic data collected throughout the maintenance process. The data collected are extensive and allow the impact of both types of maintenance to be quantitatively evaluated and compared. Basic statistical techniques test relationships between and among process and product data. The results show interesting similarities and important differences in both process and product characteristics.

  9. Absorption correction factor in X-ray fluorescent quantitative analysis

    International Nuclear Information System (INIS)

    Pimjun, S.

    1994-01-01

    An experiment on the absorption correction factor in X-ray fluorescence quantitative analysis was carried out. Standard samples were prepared from mixtures of Fe₂O₃ and tapioca flour at Fe₂O₃ concentrations ranging from 5% to 25%. The unknown samples were kaolin containing 3.5% to 50% Fe₂O₃; the kaolin samples were diluted with tapioca flour to reduce the absorption of FeKα and make them easier to prepare. Pressed samples of 0.150 /cm² and 2.76 cm in diameter were used in the experiment. The absorption correction factor is related to the total mass attenuation coefficient (χ), which varies with sample composition. In a known sample, χ can be calculated conveniently from the formula; in an unknown sample, χ can be determined by the emission-transmission method. It was found that the relationship between the corrected FeKα intensity and the Fe₂O₃ content of these samples was linear. This result indicates that the correction factor can be used to improve the accuracy of the measured X-ray intensity, and it is therefore essential in the quantitative analysis of the elements in a sample by the X-ray fluorescence technique.
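The mixture-rule calculation behind such a correction can be sketched as follows; the attenuation coefficients, mass thickness, and count values are illustrative, not taken from the paper:

```python
import math

def mixture_chi(weights, chis):
    """Total mass attenuation coefficient of a mixture (cm^2/g),
    weighted by the mass fractions of its components."""
    return sum(w * x for w, x in zip(weights, chis))

def absorption_factor(chi, mass_per_area):
    """Self-absorption factor for a uniform layer of mass thickness
    m (g/cm^2): A = (1 - exp(-chi*m)) / (chi*m);  I_corrected = I / A."""
    t = chi * mass_per_area
    return (1.0 - math.exp(-t)) / t

# Illustrative numbers (assumed): 20% Fe2O3 in tapioca flour, with
# assumed mass attenuation coefficients at the FeKa energy.
chi = mixture_chi([0.2, 0.8], [70.0, 8.0])   # cm^2/g
A = absorption_factor(chi, 0.150)            # assumed 0.150 g/cm^2 pellet
I_measured = 5000.0
I_corrected = I_measured / A                 # absorption-corrected intensity
```

The corrected intensity is what the paper finds to vary linearly with Fe₂O₃ content.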

  10. Field nonuniformity correction for quantitative analysis of digitized mammograms

    International Nuclear Information System (INIS)

    Pawluczyk, Olga; Yaffe, Martin J.

    2001-01-01

    Several factors, including the heel effect, variation in distance from the x-ray source to points in the image, and path obliquity, contribute to the signal nonuniformity of mammograms. To best use digitized mammograms for quantitative image analysis, these field nonuniformities must be corrected. An empirically based correction method, which uses a bowl-shaped calibration phantom, has been developed. Because of the annular spherical shape of the phantom, its attenuation is constant over the entire image. The remaining nonuniformities are due only to the heel and inverse-square effects and the variable path through the beam filter, compression plate and image receptor. In logarithmic space, a normalized image of the phantom can be added to mammograms to correct for these effects; an analytical correction for path obliquity in the breast can then be applied to the images. The correction reduces the errors associated with field nonuniformity from 14% to 2% for a 4 cm block of material corresponding to a combination of 50% fibroglandular and 50% fatty breast tissue. A repeatability study showed that, in regions as far as 20 cm from the chest wall, variations due to imaging conditions and phantom alignment contribute <2% of the overall corrected signal.
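The log-space idea can be illustrated with a synthetic example: a multiplicative nonuniformity field becomes additive in log space, so subtracting a normalized log-image of a constant-attenuation phantom removes it (up to a constant scale). The field and images below are synthetic, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 64x64 example: a smooth multiplicative field (heel effect plus
# inverse-square falloff) modulates the true mammogram signal.
yy, xx = np.mgrid[0:64, 0:64]
field = 1.0 - 0.003 * yy - 0.002 * xx          # slowly varying nonuniformity
truth = 0.5 + 0.1 * rng.random((64, 64))       # "true" transmission image
mammogram = truth * field
phantom = 0.7 * field                          # constant-attenuation phantom

# In log space the multiplicative field is additive, so subtracting the
# zero-mean phantom log-image removes it (up to a constant offset).
log_corr = np.log(mammogram) - (np.log(phantom) - np.log(phantom).mean())
corrected = np.exp(log_corr)                   # truth times a constant
```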

  11. Magnetic Resonance-based Motion Correction for Quantitative PET in Simultaneous PET-MR Imaging.

    Science.gov (United States)

    Rakvongthai, Yothin; El Fakhri, Georges

    2017-07-01

    Motion degrades image quality and quantitation of PET images, and is an obstacle to quantitative PET imaging. Simultaneous PET-MR offers a tool that can be used for correcting the motion in PET images by using anatomic information from MR imaging acquired concurrently. Motion correction can be performed by transforming a set of reconstructed PET images into the same frame or by incorporating the transformation into the system model and reconstructing the motion-corrected image. Several phantom and patient studies have validated that MR-based motion correction strategies have great promise for quantitative PET imaging in simultaneous PET-MR.

  12. 14 CFR 141.91 - Satellite bases.

    Science.gov (United States)

    2010-01-01

    14 Aeronautics and Space; Pilot Schools, Operating Rules, § 141.91 Satellite bases. The holder of a … assistant chief instructor is designated for each satellite base, and that assistant chief instructor is …

  13. Multi-spectral band selection for satellite-based systems

    International Nuclear Information System (INIS)

    Clodius, W.B.; Weber, P.G.; Borel, C.C.; Smith, B.W.

    1998-01-01

    The design of satellite-based multispectral imaging systems requires consideration of a number of tradeoffs between cost and performance. The authors have recently been involved in the design and evaluation of a satellite-based multispectral sensor operating from the visible through the long-wavelength IR. The criteria that led to some of the proposed designs and the modeling used to evaluate and fine-tune the designs are both discussed. These criteria emphasized the use of bands for surface temperature retrieval and the correction of atmospheric effects. The impact of cost estimate changes on the final design is also discussed.

  14. Stability of Gradient Field Corrections for Quantitative Diffusion MRI

    OpenAIRE

    Rogers, Baxter P.; Blaber, Justin; Welch, E. Brian; Ding, Zhaohua; Anderson, Adam W.; Landman, Bennett A.

    2017-01-01

    In magnetic resonance diffusion imaging, gradient nonlinearity causes significant bias in the estimation of quantitative diffusion parameters such as diffusivity, anisotropy, and diffusion direction in areas away from the magnet isocenter. This bias can be substantially reduced if the scanner- and coil-specific gradient field nonlinearities are known. Using a set of field map calibration scans on a large (29 cm diameter) phantom combined with a solid harmonic approximation of the gradient field…

  15. Improving quantitative dosimetry in (177)Lu-DOTATATE SPECT by energy window-based scatter corrections

    DEFF Research Database (Denmark)

    de Nijs, Robin; Lagerburg, Vera; Klausen, Thomas L

    2014-01-01

    …and the activity, which depends on the collimator type, the utilized energy windows, and the applied scatter correction techniques. In this study, energy window subtraction-based scatter correction methods are compared experimentally and quantitatively. MATERIALS AND METHODS: (177)Lu SPECT images of a phantom … technique, the measured ratio was close to the real ratio, and the differences between spheres were small. CONCLUSION: For quantitative (177)Lu imaging, MEGP collimators are advised. Both energy peaks can be utilized when the ESSE correction technique is applied. The difference between the calculated …
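One widely used member of the energy-window-subtraction family (not necessarily the variant favored in this study) is the triple-energy-window (TEW) estimate, in which two narrow windows flanking the photopeak span a trapezoidal estimate of the scatter counts under it. All counts and window widths below are illustrative:

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main):
    """Triple-energy-window scatter estimate inside the photopeak window:
    the trapezoid spanned by the two narrow flanking windows."""
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

# Illustrative counts (assumed), e.g. around the 208 keV peak of Lu-177.
c_main, c_low, c_up = 10000.0, 1200.0, 300.0   # counts in main/flank windows
w_main, w_low, w_up = 41.6, 8.0, 8.0           # window widths in keV
scatter = tew_scatter(c_low, c_up, w_low, w_up, w_main)
primary = max(c_main - scatter, 0.0)           # scatter-corrected counts
```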

  16. Satellite-based laser windsounder

    International Nuclear Information System (INIS)

    Schultz, J.F.; Czuchlewski, S.J.; Quick, C.R.

    1997-01-01

    This is the final report of a one-year Laboratory-Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The project's primary objective is to determine the technical feasibility of using satellite-based laser wind sensing systems for detailed study of winds, aerosols, and particulates around and downstream of suspected proliferation facilities. Extensive interactions with the relevant operational organization resulted in enthusiastic support and useful guidance with respect to measurement requirements and priorities. Four candidate wind sensing techniques were evaluated, and the incoherent Doppler technique was selected. A small-satellite concept design study was completed to identify the technical issues inherent in a proof-of-concept small satellite mission. Use of a Mach-Zehnder interferometer instead of a Fabry-Perot would significantly simplify the optical train and could reduce weight, and possibly power, requirements with no loss of performance. A breadboard Mach-Zehnder interferometer-based system has been built to verify these predictions. Detailed plans were made for resolving other issues through construction and testing of a ground-based lidar system in collaboration with the University of Wisconsin, and through numerical lidar wind data assimilation studies.

  17. Satellite based wind resource assessment over the South China Sea

    DEFF Research Database (Denmark)

    Badger, Merete; Astrup, Poul; Hasager, Charlotte Bay

    2014-01-01

    …variations are clearly visible across the domain; for instance, sheltering effects caused by the land masses. The satellite-based wind resource maps have two shortcomings. One is the lack of information at the higher vertical levels where wind turbines operate. The other is the limited number of overlapping … years of WRF data – specifically the parameters heat flux, air temperature, and friction velocity – are used to calculate a long-term correction for atmospheric stability effects. The stability correction is applied to the satellite-based wind resource maps together with a vertical wind profile … from satellite synthetic aperture radar (SAR) data are particularly suitable for offshore wind energy applications because they offer a spatial resolution up to 500 m and include coastal seas. In this presentation, satellite wind maps are used in combination with mast observations and numerical …

  18. A quantitative approach to diagnosis and correction of organizational and programmatic issues

    International Nuclear Information System (INIS)

    Chiu, C.; Johnson, K.

    1997-01-01

    An integrated approach to diagnosis and correction of critical organizational and programmatic (O and P) issues is summarized, together with the quantitative special evaluations that are used to confirm the O and P issues identified by the periodic common cause analysis and integrated safety assessments.

  19. Improvement of quantitation in SPECT: Attenuation and scatter correction using non-uniform attenuation data

    International Nuclear Information System (INIS)

    Mukai, T.; Torizuka, K.; Douglass, K.H.; Wagner, H.N.

    1985-01-01

    Quantitative assessment of tracer distribution with single photon emission computed tomography (SPECT) is difficult because of attenuation and scattering of gamma rays within the object. A method that takes the source geometry into account was developed, and the effects of attenuation and scatter on SPECT quantitation were studied using phantoms with non-uniform attenuation. The distribution of attenuation coefficients (μ) within the source was obtained by transmission CT. Attenuation correction was performed by an iterative reprojection technique; scatter correction was done by convolving the attenuation-corrected image with an appropriate filter derived from line-source studies. The filter characteristics depended on μ and the SPECT measurement at each pixel. The SPECT images obtained by this method showed more reasonable results than images reconstructed by other methods. The scatter correction could fully compensate for a 28% scatter component from a long line source, and for a 61% component from a thick, extended source. Consideration of source geometry was necessary for effective correction. The present method is expected to be valuable for the quantitative assessment of regional tracer activity.
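A convolution-based scatter correction of the general kind described can be sketched as iterative convolution-subtraction: scatter is modeled as a blurred, scaled copy of the primary image and peeled off. The Gaussian kernel and the scatter fraction k below are illustrative stand-ins for a measured line-source filter:

```python
import numpy as np

def gaussian_kernel(size=11, sigma=2.0):
    """Normalized 2-D Gaussian kernel (a stand-in for a measured filter)."""
    ax = np.arange(size) - size // 2
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    k2 = np.outer(k, k)
    return k2 / k2.sum()

def convolve2d(img, kernel):
    """Naive 'same' 2-D convolution (zero-padded; no SciPy dependency)."""
    pad = kernel.shape[0] // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (p[i:i + kernel.shape[0],
                           j:j + kernel.shape[1]] * kernel).sum()
    return out

def convolution_subtraction(img, kernel, k=0.3, iterations=2):
    """Estimate scatter as k * (primary conv kernel) and subtract it,
    refining the primary estimate at each iteration."""
    primary = img.copy()
    for _ in range(iterations):
        primary = np.clip(img - k * convolve2d(primary, kernel), 0.0, None)
    return primary

# Toy point source: the correction removes the scatter pedestal around it.
img = np.zeros((32, 32))
img[16, 16] = 100.0
corrected = convolution_subtraction(img, gaussian_kernel(11, 2.0))
```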

  20. Quantitative studies with the gamma-camera: correction for spatial and energy distortion

    International Nuclear Information System (INIS)

    Soussaline, F.; Todd-Pokropek, A.E.; Raynaud, C.

    1977-01-01

    The gamma camera sensitivity distribution is an important source of error in quantitative studies. In addition, spatial distortion produces apparent variations in count density, which degrade quantitative studies. The flood-field image reflects both effects and is influenced by pile-up of the tail distribution, so it is essential to measure each of these parameters separately. They were investigated using a point source displaced by a special scanning table with two X-Y stepping motors of 10 μm precision. The spatial distributions of sensitivity, spatial distortion and photopeak position in the field of view were measured and compared for different camera set-ups and PM gains. For well-tuned cameras the sensitivity is fairly constant, while the variations appearing in the flood-field image are primarily due to spatial distortion; the former is more dependent on the energy window setting than the latter. This indicates why conventional flood-field uniformity correction must not be applied. A correction technique to improve the results of quantitative studies has been tested, using an energy window continuously matched at every point within the field. A method for correcting spatial distortion is also proposed in which, after an adequately sampled measurement of this error, a transformation is applied to calculate the true position of events. Knowledge of the magnitude of these parameters is essential in the routine use and design of detector systems.

  1. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    Science.gov (United States)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity nonuniformity, shading, or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on segmentation results. Segmentation and shading correction are coupled: we iteratively correct the shading based on the segmentation result and refine the segmentation by re-segmenting the shading-corrected image. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading; we exploit this characteristic to correct the shading in each iteration. Shading is modeled mathematically as a multiplicative component plus an additive noise component. The additive component is removed by denoising, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images, and it performs well both on visual inspection and in numerical evaluation. The proposed method should be useful for further quantitative analysis, especially for comparing protein expression values.
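A much simpler retrospective variant of the multiplicative model (a blur-based field estimate rather than the paper's segmentation-coupled minimization) illustrates the idea: the smooth shading field is estimated from the image itself, normalized to unit mean, and divided out. All images here are synthetic:

```python
import numpy as np

def smooth(img, radius=12, passes=3):
    """Heavy separable smoothing via repeated box filtering (edge-padded)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blur = lambda m: np.convolve(np.pad(m, radius, mode="edge"),
                                 kernel, "valid")
    out = img.astype(float)
    for _ in range(passes):
        out = np.apply_along_axis(blur, 0, out)
        out = np.apply_along_axis(blur, 1, out)
    return out

rng = np.random.default_rng(1)
# Two-class "cells vs background" scene under a multiplicative shading ramp.
truth = np.where(rng.random((96, 96)) < 0.1, 1.0, 0.2)
yy, xx = np.mgrid[0:96, 0:96]
bias = 1.0 + 0.5 * (xx / 95.0)                 # shading field
observed = truth * bias

field = smooth(observed)                       # smooth-field estimate
field /= field.mean()                          # normalize to unit mean
corrected = observed / field                   # shading-corrected image
```

The blur-based estimate removes most of the ramp; the paper's intra-class-variance minimization plays the same role with a sharper model of the two intensity classes.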

  2. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.

    Science.gov (United States)

    Hofmann, Matthias; Pichler, Bernd; Schölkopf, Bernhard; Beyer, Thomas

    2009-03-01

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT, however, attenuation correction is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources, and therefore alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and, more recently, also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out the challenges of using MR-AC in clinical routine. MR-AC is work in progress, with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing to make clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.
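The segmentation-based flavor of MR-AC can be sketched as a lookup from MR-derived tissue classes to 511 keV linear attenuation coefficients, from which attenuation-correction factors along lines of response follow. The class list and μ values below are commonly cited defaults treated here as assumptions, not a clinical table:

```python
import numpy as np

# Assumed 511 keV linear attenuation coefficients (cm^-1) per tissue class.
MU_511 = {
    "air": 0.0,
    "lung": 0.018,
    "fat": 0.086,
    "soft_tissue": 0.096,
    "bone": 0.13,
}
CLASSES = ["air", "lung", "fat", "soft_tissue", "bone"]

def mu_map(labels, table=MU_511):
    """Map an integer label image (indices into CLASSES) to a mu-map."""
    lut = np.array([table[c] for c in CLASSES])
    return lut[labels]

labels = np.array([[0, 3, 3],          # toy 2x3 MR-segmented slice
                   [0, 3, 4]])
mu = mu_map(labels)
# Attenuation-correction factor along each row, assuming 0.4 cm voxels:
acf = np.exp(mu.sum(axis=1) * 0.4)
```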

  5. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  6. Leo satellite-based telecommunication network concepts

    Science.gov (United States)

    Aiken, John G.; Swan, Peter A.; Leopold, Ray J.

    1991-01-01

    Design considerations are discussed for Low Earth Orbit (LEO) satellite based telecommunications networks. The satellites are assumed to be connected to each other via intersatellite links. They are connected to the end user either directly or through gateways to other networks. Frequency reuse, circuit switching, packet switching, call handoff, and routing for these systems are discussed by analogy with terrestrial cellular (mobile radio) telecommunication systems.

  7. Assessment of satellite-based precipitation estimates over Paraguay

    Science.gov (United States)

    Oreggioni Weiberlen, Fiorella; Báez Benítez, Julián

    2018-04-01

    Satellite-based precipitation estimates represent a potential alternative source of input data in a plethora of meteorological and hydrological applications, especially in regions characterized by a low density of rain gauge stations. Paraguay provides a good example of a case where the use of satellite-based precipitation could be advantageous. This study aims to evaluate version 7 of the Tropical Rainfall Measuring Mission Multi-Satellite Precipitation Analysis (TMPA V7; 3B42 V7) and version 1.0 of the purely satellite-based product of the Climate Prediction Center Morphing Technique (CMORPH RAW) through their comparison with daily in situ precipitation measurements from 1998 to 2012 over Paraguay. The statistical assessment is conducted with several commonly used indexes. Specifically, to evaluate the accuracy of daily precipitation amounts, mean error (ME), root mean square error (RMSE), BIAS, and coefficient of determination (R2) are used, and to analyze the capability to correctly detect different precipitation intensities, false alarm ratio (FAR), frequency bias index (FBI), and probability of detection (POD) are applied to various rainfall rates (0, 0.1, 0.5, 1, 2, 5, 10, 20, 40, 60, and 80 mm/day). Results indicate that TMPA V7 performs better than CMORPH RAW over Paraguay. TMPA V7 has higher accuracy in the estimation of daily rainfall volumes and greater precision in the detection of wet days (> 0 mm/day). However, both satellite products show a lower ability to appropriately detect high-intensity precipitation events.
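    The accuracy and detection indexes named above are standard verification scores. As a rough illustration (the daily values below are synthetic, not data from the study), they can be computed from paired gauge/satellite series like this:

    ```python
    import math

    def continuous_scores(obs, est):
        """Mean error and RMSE between paired daily gauge (obs) and satellite (est) values."""
        n = len(obs)
        me = sum(e - o for o, e in zip(obs, est)) / n
        rmse = math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / n)
        return me, rmse

    def categorical_scores(obs, est, thr):
        """POD, FAR and FBI from a 2x2 contingency table at rain-rate threshold thr (mm/day)."""
        hits = sum(1 for o, e in zip(obs, est) if o >= thr and e >= thr)
        misses = sum(1 for o, e in zip(obs, est) if o >= thr and e < thr)
        false_alarms = sum(1 for o, e in zip(obs, est) if o < thr and e >= thr)
        pod = hits / (hits + misses)                  # probability of detection
        far = false_alarms / (hits + false_alarms)    # false alarm ratio
        fbi = (hits + false_alarms) / (hits + misses) # frequency bias index
        return pod, far, fbi
    ```

    Repeating `categorical_scores` over the rainfall rates listed in the abstract (0, 0.1, 0.5, … mm/day) reproduces the threshold-by-threshold evaluation described there.
    
    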

  8. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    Science.gov (United States)

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.

  9. Quantitative analysis by X-ray fluorescence using first principles for matrix correction

    International Nuclear Information System (INIS)

    Hulett, L.D.; Dunn, H.W.; Tarter, J.G.

    1978-01-01

    The quantitative interpretation of X-ray fluorescence (XRF) data is often difficult because of matrix effects. The intensity of fluorescence measured for a given element depends not only on the element's concentration, but also on the mass absorption coefficients of the sample for the excitation and fluorescence radiation. Also, there are interelement effects in which high-energy fluorescence from heavier elements is absorbed by lighter elements with a resulting enhancement of their fluorescence. Recent theoretical treatments of this problem have shown that X-ray fluorescence data can be corrected for these matrix effects by calculations based on first principles. Fundamental constants, available in atomic physics data tables, are the only parameters needed; it is not necessary to make empirical calibrations. The application of this correction procedure to alloys and alumina-supported catalysts is described. A description is given of a low-background spectrometer which uses monochromatic Ag Kα radiation for excitation. Matrix corrections by first principles can be easily applied to data from instruments of this type because fluorescence excitation cross-sections and mass absorption coefficients can be accurately defined for monochromatic radiation. (author)
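    As a rough illustration of the first-principles idea for monochromatic excitation, the sketch below solves the absorption-only case by fixed-point iteration. The two-element system and all mass attenuation coefficients are hypothetical placeholders (not tabulated values), and the interelement enhancement mentioned in the abstract is deliberately ignored here:

    ```python
    # Absorption-only fundamental-parameters correction for monochromatic
    # excitation. All mass attenuation coefficients (cm^2/g) are illustrative
    # placeholders; secondary (enhancement) fluorescence is ignored.
    MU = {
        "Fe": {"E0": 70.0, "Fe": 120.0, "Ni": 60.0},   # mu of pure Fe at each energy
        "Ni": {"E0": 50.0, "Fe": 300.0, "Ni": 90.0},   # mu of pure Ni at each energy
    }

    def sample_mu(conc, energy):
        """Sample mass attenuation coefficient: concentration-weighted sum."""
        return sum(c * MU[el][energy] for el, c in conc.items())

    def absorption_factor(conc, element):
        """A_i = mu_s(E0) + mu_s(line_i), assuming equal incidence/take-off angles."""
        return sample_mu(conc, "E0") + sample_mu(conc, element)

    def fp_correct(intensity, n_iter=100):
        """Relative line intensities satisfy I_i ~ c_i / A_i(c) for a thick sample,
        so update c_i ~ I_i * A_i(c) and renormalize until self-consistent."""
        total = sum(intensity.values())
        conc = {el: i / total for el, i in intensity.items()}  # initial guess
        for _ in range(n_iter):
            raw = {el: intensity[el] * absorption_factor(conc, el) for el in conc}
            s = sum(raw.values())
            conc = {el: v / s for el, v in raw.items()}
        return conc
    ```

    With real tabulated coefficients, the same loop recovers concentrations without empirical calibration, which is the point the abstract makes.
    
    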

  10. Quantitative study of FORC diagrams in thermally corrected Stoner–Wohlfarth nanoparticle systems

    International Nuclear Information System (INIS)

    De Biasi, E.; Curiale, J.; Zysler, R.D.

    2016-01-01

    The use of FORC diagrams is becoming increasingly popular among researchers devoted to magnetism and magnetic materials. However, a thorough interpretation of this kind of diagram, in order to extract quantitative information, requires an appropriate model of the studied system. For that reason most FORC studies are limited to qualitative analysis. In magnetic systems, thermal fluctuations 'blur' the signatures of the anisotropy, volume and particle-interaction distributions, so thermal effects in nanoparticle systems conspire against a proper interpretation and analysis of these diagrams. Motivated by this fact, we have quantitatively studied the degree of accuracy of the information extracted from FORC diagrams for the special case of single-domain, thermally corrected Stoner–Wohlfarth (easy axes along the external field orientation) nanoparticle systems. In this work, the starting point is an analytical model that describes the behavior of a magnetic nanoparticle system as a function of field, anisotropy, temperature and measurement time. In order to study the quantitative degree of accuracy of our model, we built FORC diagrams for different archetypical cases of magnetic nanoparticles. Our results show that, from the quantitative information obtained from the diagrams, under the hypotheses of the proposed model, it is possible to recover the features of the original system with accuracy above 95%. This accuracy improves at low temperatures, and it is also possible to access the anisotropy distribution directly from the FORC coercive-field profile. Indeed, our simulations predict that the volume distribution plays a secondary role, with its mean value and deviation being the only important parameters. Therefore it is possible to obtain an accurate result for the inversion and interaction fields regardless of the details of the volume distribution. - Highlights: • Quantifies the degree of accuracy of the information obtained using FORC diagrams.

  11. Ascertainment correction for Markov chain Monte Carlo segregation and linkage analysis of a quantitative trait.

    Science.gov (United States)

    Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E

    2007-09-01

    Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, bias was also found in the estimation of the highest genotypic mean when the ascertainment threshold was higher than or close to the true value of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.

  12. Satellite-Based Sunshine Duration for Europe

    Directory of Open Access Journals (Sweden)

    Bodo Ahrens

    2013-06-01

    In this study, two different methods were applied to derive daily and monthly sunshine duration based on high-resolution satellite products provided by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Satellite Application Facility on Climate Monitoring, using data from the Meteosat Second Generation (MSG) SEVIRI (Spinning Enhanced Visible and Infrared Imager) instrument. The satellite products were either hourly cloud type or hourly surface incoming direct radiation. The satellite sunshine duration estimates were not found to be significantly different when using the native 15-minute temporal resolution of SEVIRI. The satellite-based sunshine duration products give additional spatial information over the European continent compared with equivalent in situ-based products. An evaluation of the satellite sunshine duration by product intercomparison and against station measurements was carried out to determine their accuracy. The satellite data were found to be within ±1 h/day compared to high-quality Baseline Surface Radiation Network or surface synoptic observation (SYNOP) station measurements. The satellite-based products differ more over the oceans than over land, mainly because of the treatment of fractional clouds in the cloud-type-based sunshine duration product. This paper presents the methods used to derive the satellite sunshine duration products and the performance of the different retrievals. The main benefits and disadvantages compared to station-based products are also discussed.
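    For the radiation-based retrieval, the conventional WMO definition of sunshine (direct irradiance of at least 120 W/m²) can be applied per time step. A minimal sketch with hypothetical irradiance values, not taken from the study:

    ```python
    WMO_THRESHOLD = 120.0  # W/m^2: WMO direct-irradiance threshold defining "sunshine"

    def sunshine_duration(dni_series, step_hours=1.0):
        """Sunshine duration in hours from a series of direct irradiance values
        sampled every step_hours (e.g. hourly satellite-derived radiation)."""
        return step_hours * sum(1 for dni in dni_series if dni >= WMO_THRESHOLD)
    ```

    With the native 15-minute SEVIRI resolution mentioned in the abstract, `step_hours` would be 0.25 instead of 1.0.
    
    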

  13. Correction for FDG PET dose extravasations: Monte Carlo validation and quantitative evaluation of patient studies

    Energy Technology Data Exchange (ETDEWEB)

    Silva-Rodríguez, Jesús, E-mail: jesus.silva.rodriguez@sergas.es; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Santiago de Compostela, Galicia (Spain); Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain); Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor [Servicio de Radiofísica y Protección Radiológica, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Cortés, Julia; Garrido, Miguel [Servicio de Medicina Nuclear, Complexo Hospitalario Universitario de Santiago de Compostela, 15706, Galicia, Spain and Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain); Pombar, Miguel [Servicio de Radiofísica y Protección Radiológica, Complexo Hospitalario Universitario de Santiago de Compostela, 15706, Galicia (Spain); Ruibal, Álvaro [Servicio de Medicina Nuclear, Complexo Hospitalario Universidade de Santiago de Compostela (USC), 15782, Galicia (Spain); Grupo de Imaxe Molecular, Instituto de Investigación Sanitarias (IDIS), Santiago de Compostela, 15706, Galicia (Spain); Fundación Tejerina, 28003, Madrid (Spain)

    2014-05-15

    Purpose: Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) measurements for this effect in clinical routine. Methods: One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI-based method. In addition, the 50 patients with the highest extravasated doses were also evaluated using a threshold-based method. Results: Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate for the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%), with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify the fraction of patients that might be corrected for the paravenous injection effect. Conclusions: The authors propose the use of a manual ROI-based method for estimating the effectively administered FDG dose and then correcting SUV quantification in those patients fulfilling the proposed criterion.

  14. Fine-tuning satellite-based rainfall estimates

    Science.gov (United States)

    Harsa, Hastuadi; Buono, Agus; Hidayat, Rahmat; Achyar, Jaumil; Noviati, Sri; Kurniawan, Roni; Praja, Alfan S.

    2018-05-01

    Rainfall datasets are available from various sources, including satellite estimates and ground observation. Ground observation locations are sparsely scattered. The use of satellite estimates is therefore advantageous, because they can provide data in places where ground observations are not present. In general, however, satellite estimates contain bias, since they are the product of algorithms that transform sensor responses into rainfall values. Another cause may be the limited number of ground observations used by the algorithms as the reference in determining rainfall values. This paper describes the application of a bias correction method that modifies the satellite-based dataset by adding a number of ground observation locations that had not been used before by the algorithm. The bias correction was performed by applying a Quantile Mapping procedure between ground observation data and satellite estimates. Since Quantile Mapping requires the mean and standard deviation of both the reference data and the data being corrected, an Inverse Distance Weighting scheme was first applied to the mean and standard deviation of the observation data in order to provide a spatial composition of these originally scattered values. It was therefore possible to provide a reference data point at the same location as each satellite estimate. The results show that the new dataset represents the rainfall values recorded by the ground observations statistically better than the previous dataset.
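    The two steps described above can be sketched as follows. The Gaussian form of the mapping is an assumption inferred from the abstract's reliance on means and standard deviations, and the station coordinates and values are hypothetical:

    ```python
    def idw(x, y, stations, power=2.0):
        """Inverse Distance Weighting: interpolate station values v at (px, py)
        to the target point (x, y). stations is a list of (px, py, v) tuples."""
        num = den = 0.0
        for px, py, v in stations:
            d2 = (x - px) ** 2 + (y - py) ** 2
            if d2 == 0.0:
                return v  # target coincides with a station
            w = d2 ** (-power / 2.0)
            num += w * v
            den += w
        return num / den

    def qm_correct(sat_value, mu_obs, sd_obs, mu_sat, sd_sat):
        """Quantile mapping under a normality assumption: map the satellite value
        onto the observed distribution by matching standardized anomalies."""
        return mu_obs + (sat_value - mu_sat) * sd_obs / sd_sat
    ```

    In the workflow of the abstract, `idw` would first be run on the station means and standard deviations to obtain `mu_obs` and `sd_obs` at each satellite pixel, after which `qm_correct` adjusts that pixel's estimate.
    
    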

  15. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    Science.gov (United States)

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate lower-order polynomial interferences, a new quantitative calibration algorithm, "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirements of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for moisture prediction is found to be 0.53% w/w (range 7-19%). The sugar content is predicted with an RMSECV of 2.04% w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    Science.gov (United States)

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET.

  17. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection

    OpenAIRE

    Kwan, Johnny S. H.; Kung, Annie W. C.; Sham, Pak C.

    2011-01-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias. © The Author(s) 2011.

  18. Using satellite-based measurements to explore ...

    Science.gov (United States)

    New particle formation (NPF) can potentially alter regional climate by increasing aerosol particle (hereafter particle) number concentrations and ultimately cloud condensation nuclei. The large scales on which NPF is manifest indicate potential to use satellite-based (inherently spatially averaged) measurements of atmospheric conditions to diagnose the occurrence of NPF and NPF characteristics. We demonstrate the potential for using satellite measurements of insolation (UV), trace gas concentrations (sulfur dioxide (SO2), nitrogen dioxide (NO2), ammonia (NH3), formaldehyde (HCHO), ozone (O3)), aerosol optical properties (aerosol optical depth (AOD), Ångström exponent (AE)), and a proxy of biogenic volatile organic compound emissions (leaf area index (LAI), temperature (T)) as predictors for NPF characteristics: formation rates, growth rates, survival probabilities, and ultrafine particle (UFP) concentrations at five locations across North America. NPF at all sites is most frequent in spring, exhibits a one-day autocorrelation, and is associated with low condensational sink (AOD×AE) and HCHO concentrations, and high UV. However, there are important site-to-site variations in NPF frequency and characteristics, and in which of the predictor variables (particularly gas concentrations) significantly contribute to the explanatory power of regression models built to predict those characteristics. This finding may provide a partial explanation for the reported spatia

  19. Correction

    CERN Multimedia

    2002-01-01

    Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.

  20. Correction

    Directory of Open Access Journals (Sweden)

    2012-01-01

    Regarding Gorelik, G., & Shackelford, T. K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.

  1. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  2. Correction

    Directory of Open Access Journals (Sweden)

    2014-01-01

    Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph) and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large, and thus in the evolutionary-theory-supportive direction, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].

  3. Effect of attenuation by the cranium on quantitative SPECT measurements of cerebral blood flow and a correction method

    International Nuclear Information System (INIS)

    Iwase, Mikio; Kurono, Kenji; Iida, Akihiko.

    1998-01-01

    Attenuation correction for cerebral blood flow SPECT image reconstruction is usually performed by considering the head as a whole to be equivalent to water, and the effects of differences in attenuation between subjects produced by the cranium have not been taken into account. We determined the differences in attenuation between subjects and assessed a method of correcting quantitative cerebral blood flow values. Attenuation by the head on the right and left sides was measured before intravenous injection of 123I-IMP, and water-equivalent diameters of both sides (Ta) were calculated from the measurements obtained. After acquiring SPECT images, attenuation correction was conducted according to the method of Sorenson, and images were reconstructed. The diameters of the right and left sides in the same positions as Ta (Tt) were calculated from the contours determined by threshold values. Using Ts given by 2Ts = Ta − Tt, the correction factor λ = exp(μ₁Ts) was calculated and applied as a multiplicative factor when rCBF was determined. The results revealed significant differences between Tt and Ta. Although no gender differences were observed in Tt, they were seen in both Ta and Ts. Thus, interindividual differences in attenuation by the cranium were found to have an influence that cannot be ignored, and inter-subject correction is needed to obtain accurate quantitative values. (author)
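    With the definitions above, the per-subject correction reduces to a single exponential factor. A minimal sketch, where the attenuation coefficient value is an illustrative placeholder and not taken from the study:

    ```python
    import math

    MU1 = 0.15  # cm^-1: illustrative linear attenuation coefficient, NOT the study's value

    def cranium_correction(ta_cm, tt_cm, mu=MU1):
        """Correction factor lambda = exp(mu * Ts), with Ts = (Ta - Tt) / 2:
        half the difference between the water-equivalent diameter from the
        transmission measurement (Ta) and the contour-derived diameter (Tt)."""
        ts = (ta_cm - tt_cm) / 2.0
        return math.exp(mu * ts)

    # The corrected flow is then: rCBF_corrected = rCBF_uncorrected * cranium_correction(Ta, Tt)
    ```

    When Ta equals Tt (no excess attenuation beyond the water-equivalent contour), the factor is exactly 1 and the standard Sorenson-corrected value is left unchanged.
    
    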

  4. Matrix effect and correction by standard addition in quantitative liquid chromatographic-mass spectrometric analysis of diarrhetic shellfish poisoning toxins.

    Science.gov (United States)

    Ito, Shinya; Tsukada, Katsuo

    2002-01-11

    An evaluation of the feasibility of liquid chromatography-mass spectrometry (LC-MS) with atmospheric pressure ionization was made for the quantitation of four diarrhetic shellfish poisoning toxins, okadaic acid, dinophysistoxin-1, pectenotoxin-6 and yessotoxin, in scallops. When LC-MS was applied to the analysis of scallop extracts, large signal suppressions were observed due to coeluting substances from the column. To compensate for these matrix signal suppressions, the standard addition method was applied: first the sample was analyzed, and then the sample spiked with calibration standards was analyzed. Although this method requires two LC-MS runs per analysis, effective correction of quantitative errors was achieved.
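    The standard addition principle described above can be sketched numerically: fit the signal against the added standard concentration and read the unknown concentration from the magnitude of the x-intercept, which cancels a multiplicative matrix suppression. The numbers below are synthetic, not data from the study:

    ```python
    def fit_line(x, y):
        """Ordinary least-squares slope and intercept."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        return slope, my - slope * mx

    def standard_addition(added_conc, signal):
        """Unknown concentration from a standard-addition series: the fitted
        line crosses zero signal at -c_unknown, so return intercept / slope."""
        slope, intercept = fit_line(added_conc, signal)
        return intercept / slope
    ```

    Because a uniform signal suppression scales both slope and intercept by the same factor, the ratio (and hence the recovered concentration) is unaffected, which is why the method corrects the matrix effect.
    
    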

  5. Improving the quantitative accuracy of optical-emission computed tomography by incorporating an attenuation correction: application to HIF1 imaging

    Science.gov (United States)

    Kim, E.; Bowsher, J.; Thomas, A. S.; Sakhalkar, H.; Dewhirst, M.; Oldham, M.

    2008-10-01

    Optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT) are new techniques for imaging the 3D structure and function (including gene expression) of whole unsectioned tissue samples. This work presents a method of improving the quantitative accuracy of optical-ECT by correcting for the 'self'-attenuation of photons emitted within the sample. The correction is analogous to a method commonly applied in single-photon-emission computed tomography reconstruction. The performance of the correction method was investigated by application to a transparent cylindrical gelatin phantom, containing a known distribution of attenuation (a central ink-doped gelatine core) and a known distribution of fluorescing fibres. Attenuation corrected and uncorrected optical-ECT images were reconstructed on the phantom to enable an evaluation of the effectiveness of the correction. Significant attenuation artefacts were observed in the uncorrected images where the central fibre appeared ~24% less intense due to greater attenuation from the surrounding ink-doped gelatin. This artefact was almost completely removed in the attenuation-corrected image, where the central fibre was within ~4% of the others. The successful phantom test enabled application of attenuation correction to optical-ECT images of an unsectioned human breast xenograft tumour grown subcutaneously on the hind leg of a nude mouse. This tumour cell line had been genetically labelled (pre-implantation) with fluorescent reporter genes such that all viable tumour cells expressed constitutive red fluorescent protein and hypoxia-inducible factor 1 transcription-produced green fluorescent protein. In addition to the fluorescent reporter labelling of gene expression, the tumour microvasculature was labelled by a light-absorbing vasculature contrast agent delivered in vivo by tail-vein injection. 
Optical-CT transmission images yielded high-resolution 3D images of the absorbing contrast agent, and

  6. Improving the quantitative accuracy of optical-emission computed tomography by incorporating an attenuation correction: application to HIF1 imaging

    International Nuclear Information System (INIS)

    Kim, E; Bowsher, J; Thomas, A S; Sakhalkar, H; Dewhirst, M; Oldham, M

    2008-01-01

    Optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT) are new techniques for imaging the 3D structure and function (including gene expression) of whole unsectioned tissue samples. This work presents a method of improving the quantitative accuracy of optical-ECT by correcting for the 'self'-attenuation of photons emitted within the sample. The correction is analogous to a method commonly applied in single-photon-emission computed tomography reconstruction. The performance of the correction method was investigated by application to a transparent cylindrical gelatin phantom, containing a known distribution of attenuation (a central ink-doped gelatine core) and a known distribution of fluorescing fibres. Attenuation corrected and uncorrected optical-ECT images were reconstructed on the phantom to enable an evaluation of the effectiveness of the correction. Significant attenuation artefacts were observed in the uncorrected images where the central fibre appeared ∼24% less intense due to greater attenuation from the surrounding ink-doped gelatin. This artefact was almost completely removed in the attenuation-corrected image, where the central fibre was within ∼4% of the others. The successful phantom test enabled application of attenuation correction to optical-ECT images of an unsectioned human breast xenograft tumour grown subcutaneously on the hind leg of a nude mouse. This tumour cell line had been genetically labelled (pre-implantation) with fluorescent reporter genes such that all viable tumour cells expressed constitutive red fluorescent protein and hypoxia-inducible factor 1 transcription-produced green fluorescent protein. In addition to the fluorescent reporter labelling of gene expression, the tumour microvasculature was labelled by a light-absorbing vasculature contrast agent delivered in vivo by tail-vein injection. Optical-CT transmission images yielded high-resolution 3D images of the absorbing contrast agent

  7. Satellite Based Cropland Carbon Monitoring System

    Science.gov (United States)

    Bandaru, V.; Jones, C. D.; Sedano, F.; Sahajpal, R.; Jin, H.; Skakun, S.; Pnvr, K.; Kommareddy, A.; Reddy, A.; Hurtt, G. C.; Izaurralde, R. C.

    2017-12-01

    Agricultural croplands act as both sources and sinks of atmospheric carbon dioxide (CO2): they absorb CO2 through photosynthesis, release CO2 through autotrophic and heterotrophic respiration, and sequester CO2 in vegetation and soils. Part of the carbon captured in vegetation can be transported and utilized elsewhere through food, fiber, and energy production. Likewise, a portion of soil carbon can be exported elsewhere by wind, water, and tillage erosion. It is therefore important to quantify how land use and land management practices affect the net carbon balance of croplands. To monitor the impacts of agricultural activities on the carbon balance, and to develop management strategies that make croplands behave as net carbon sinks, it is of paramount importance to develop consistent, high resolution cropland carbon flux estimates. Croplands are typically characterized by fine scale heterogeneity; accurate carbon flux estimates must therefore account for the contribution of each crop type and its spatial distribution. As part of a NASA CMS funded project, a satellite based Cropland Carbon Monitoring System (CCMS) was developed to estimate spatially resolved, crop specific carbon fluxes over large regions. This modeling framework uses a remote sensing version of the Environmental Policy Integrated Climate (EPIC) model and satellite derived crop parameters (e.g. leaf area index (LAI)) to determine vertical and lateral carbon fluxes. The crop type LAI product was developed based on the inversion of the PRO-SAIL radiative transfer model and downscaled MODIS reflectance. Crop emergence and harvesting dates were estimated based on MODIS NDVI and crop growing degree days. To evaluate the performance of the CCMS framework, it was implemented over the croplands of Nebraska to estimate carbon fluxes for the major crops (corn, soybean, winter wheat, grain sorghum, alfalfa) grown in 2015. Key findings of the CCMS framework will be presented.
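The growing-degree-day component mentioned above can be sketched as a simple accumulation rule. This is an illustrative stand-in only: the CCMS framework combines GDD with MODIS NDVI, and the base/cap temperatures and GDD requirement below are hypothetical corn-like values, not the paper's parameters:

```python
def estimate_emergence_day(tmax, tmin, t_base=10.0, t_cap=30.0, gdd_req=100.0):
    """Index of the first day on which accumulated growing degree days
    (GDD) reach gdd_req, or None if never reached.

    Daily GDD uses the standard capped/clamped average-temperature form:
    GDD = max(0, (min(tmax, cap) + clamp(tmin, base, cap)) / 2 - base).
    Temperatures in degrees Celsius.
    """
    total = 0.0
    for day, (hi, lo) in enumerate(zip(tmax, tmin)):
        hi = min(hi, t_cap)
        lo = min(max(lo, t_base), t_cap)
        total += max(0.0, (hi + lo) / 2.0 - t_base)
        if total >= gdd_req:
            return day
    return None
```

With daily highs of 25 degC and lows of 15 degC, each day contributes 10 GDD, so a 100 GDD requirement is met on day index 9.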

  8. QIN DAWG Validation of Gradient Nonlinearity Bias Correction Workflow for Quantitative Diffusion-Weighted Imaging in Multicenter Trials.

    Science.gov (United States)

    Malyarenko, Dariya I; Wilmes, Lisa J; Arlinghaus, Lori R; Jacobs, Michael A; Huang, Wei; Helmer, Karl G; Taouli, Bachir; Yankeelov, Thomas E; Newitt, David; Chenevert, Thomas L

    2016-12-01

    Previous research has shown that system-dependent gradient nonlinearity (GNL) introduces a significant spatial bias (nonuniformity) in apparent diffusion coefficient (ADC) maps. Here, the feasibility of centralized retrospective system-specific correction of GNL bias for quantitative diffusion-weighted imaging (DWI) in multisite clinical trials is demonstrated across diverse scanners independent of the scanned object. Using corrector maps generated from system characterization by ice-water phantom measurement completed in the previous project phase, GNL bias correction was performed for test ADC measurements from an independent DWI phantom (room temperature agar) at two offset locations in the bore. The precomputed three-dimensional GNL correctors were retrospectively applied to test DWI scans by the central analysis site. The correction was blinded to reference DWI of the agar phantom at magnet isocenter where the GNL bias is negligible. The performance was evaluated from changes in ADC region of interest histogram statistics before and after correction with respect to the unbiased reference ADC values provided by sites. Both absolute error and nonuniformity of the ADC map induced by GNL (median, 12%; range, -35% to +10%) were substantially reduced by correction (7-fold in median and 3-fold in range). The residual ADC nonuniformity errors were attributed to measurement noise and other non-GNL sources. Correction of systematic GNL bias resulted in a 2-fold decrease in technical variability across scanners (down to site temperature range). The described validation of GNL bias correction marks progress toward implementation of this technology in multicenter trials that utilize quantitative DWI.
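The retrospective correction described above amounts to a voxel-wise division of the biased ADC map by a precomputed system-specific corrector. A minimal sketch, assuming the corrector map holds the ratio of actual to nominal diffusion weighting resampled to the DWI grid (names and the array interface are illustrative, not the project's software):

```python
import numpy as np

def correct_adc_gnl(adc_map, corrector_map):
    """Remove gradient-nonlinearity bias from an ADC map, voxel by voxel.

    corrector_map holds the system-specific ratio c(r) of actual to
    nominal b-value at each voxel, derived from the gradient design.
    Since an ADC computed with the nominal b-value is scaled by the same
    factor c(r), dividing restores the unbiased value.
    """
    return np.asarray(adc_map) / np.asarray(corrector_map)
```

In the study the analogous correctors were applied centrally and blinded to the isocenter reference, where c(r) is essentially 1 and GNL bias is negligible.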

  9. A novel baseline correction method using convex optimization framework in laser-induced breakdown spectroscopy quantitative analysis

    Science.gov (United States)

    Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun

    2017-12-01

    For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of data preprocessing. Baseline drift, a widespread phenomenon generated by fluctuations in laser energy, inhomogeneity of sample surfaces and background noise, has aroused the interest of many researchers. Most prevalent algorithms need to preset some key parameters, such as a suitable spline function and the fitting order, and thus lack adaptability. Based on the characteristics of LIBS signals, namely the sparsity of spectral peaks and the low-pass filtered nature of the baseline, a novel baseline correction and spectral data denoising method is studied in this paper. The technique uses a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is employed to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process to ensure convergence. To validate the proposed method, the concentrations of chromium (Cr), manganese (Mn) and nickel (Ni) in 23 certified high alloy steel samples are assessed using quantitative models based on Partial Least Squares (PLS) and Support Vector Machine (SVM). Requiring no prior knowledge of sample composition and no mathematical hypothesis, the proposed method achieves better accuracy in quantitative analysis than other methods, fully reflecting its adaptive ability.
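The ingredients named above, sparse peaks, a smooth low-pass baseline, and an asymmetric penalty, also appear in the well-known asymmetric least-squares baseline estimator. This is not the paper's exact convex program, but a common stand-in that illustrates the same idea:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asymmetric_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least-squares baseline estimate (Eilers-style).

    Minimizes a weighted fit term plus lam * ||second difference||^2.
    Points above the current baseline (candidate peaks) get the small
    weight p, so the baseline hugs the valleys under sparse peaks.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    # Second-difference operator for the smoothness penalty.
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    w = np.ones(n)
    z = y.copy()
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve((W + lam * (D.T @ D)).tocsc(), w * y)
        # Asymmetric reweighting: penalize points above the baseline lightly.
        w = np.where(y > z, p, 1 - p)
    return z
```

Subtracting the returned baseline from `y` yields the drift-corrected spectrum that would then feed the PLS/SVM calibration models.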

  10. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    International Nuclear Information System (INIS)

    Burdet, Pierre; Saghi, Z.; Filippin, A.N.; Borrás, A.; Midgley, P.A.

    2016-01-01

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. By using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.

  11. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    Energy Technology Data Exchange (ETDEWEB)

    Burdet, Pierre, E-mail: pierre.burdet@a3.epfl.ch [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Saghi, Z. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Filippin, A.N.; Borrás, A. [Nanotechnology on Surfaces Laboratory, Materials Science Institute of Seville (ICMS), CSIC-University of Seville, C/ Americo Vespucio 49, 41092 Seville (Spain); Midgley, P.A. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom)

    2016-01-15

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. By using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.

  12. A Movable Phantom Design for Quantitative Evaluation of Motion Correction Studies on High Resolution PET Scanners

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter; Svarer, C.; Sibomana, M.

    2010-01-01

    Head movements during brain imaging using high resolution positron emission tomography (PET) impair the image quality which, along with the improvement of the spatial resolution of PET scanners in general, raises the importance of motion correction. Here, we present a new design for an automatic, movable phantom. Images were reconstructed with a three-dimensional ordered-subset expectation maximization algorithm with modeling of the point spread function (3DOSEM-PSF), and they were corrected for motions based on external tracking information using the Polaris Vicra real-time stereo motion-tracking system. The new automatic, movable phantom has a robust design and is a potential quality...

  13. Quantitative evaluation of SIMS spectra including spectrum interpretation and Saha-Eggert correction

    International Nuclear Information System (INIS)

    Steiger, W.; Ruedenauer, F.G.

    1978-01-01

    A spectrum identification program is described, using a computer algorithm which solely relies on the natural isotopic abundances for identification of elemental, molecular and cluster ions. The thermodynamic approach to the quantitative interpretation of SIMS spectra, through the use of the Saha-Eggert equation, is discussed, and a computer program is outlined. (U.K.)

  14. Multiple testing corrections in quantitative proteomics: A useful but blunt tool.

    Science.gov (United States)

    Pascovici, Dana; Handler, David C L; Wu, Jemma X; Haynes, Paul A

    2016-09-01

    Multiple testing corrections are a useful tool for restricting the false discovery rate (FDR), but can be blunt in the context of low power, as we demonstrate by a series of simple simulations. Unfortunately, low power is common in proteomics experiments, driven by proteomics-specific issues such as small effects due to ratio compression, and few replicates due to high reagent cost, limited instrument time and other issues; in such situations, most multiple testing correction methods, if used with conventional thresholds, will fail to detect any true positives even when many exist. In this low power, medium scale situation, other methods such as effect size considerations or peptide-level calculations may be a more effective option, even if they do not offer the same theoretical guarantee of a low FDR. Thus, we aim to highlight in this article that proteomics presents some specific challenges to standard multiple testing correction methods, which should be employed as a useful tool but not regarded as a required rubber stamp. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
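The standard correction discussed above is the Benjamini-Hochberg step-up procedure; the sketch below shows the rule itself, from which the low-power behavior follows directly: with few replicates, p-values of true effects are rarely small enough to clear the per-rank thresholds alpha*k/m, so nothing is rejected.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean rejection mask under the Benjamini-Hochberg step-up
    false discovery rate procedure."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Per-rank thresholds alpha * k / m for k = 1..m.
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    # Largest rank k whose sorted p-value passes its threshold.
    k = (np.nonzero(below)[0].max() + 1) if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask
```

For example, p-values (0.01, 0.02, 0.03, 0.5, 0.6) at alpha = 0.05 yield exactly three rejections; shifting the small p-values up only modestly (as low power does) quickly drives the count to zero.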

  15. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
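Once the PDF table has been pre-computed by Monte-Carlo simulation as described, the per-pixel estimate reduces to a table lookup and argmax. A minimal sketch under assumed conventions (the three-axis table layout, a flat prior, and the bin-index interface are all illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def map_retardation(pdf_table, retardation_grid, i_measured, i_snr):
    """MAP estimate of true local retardation from a Monte-Carlo table.

    pdf_table[i_true, i_measured, i_snr] is assumed to hold the
    pre-computed probability of observing the (retardation, SNR) bin
    pair given each candidate true retardation. With a flat prior, the
    MAP estimate is the likelihood-maximizing grid value.
    """
    likelihood = pdf_table[:, i_measured, i_snr]
    return retardation_grid[int(np.argmax(likelihood))]
```

The noise-bias removal comes entirely from the table: at low SNR the likelihood peaks at a true retardation below the (biased) measured value.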

  16. 3-D visualization and quantitation of microvessels in transparent human colorectal carcinoma [corrected].

    Directory of Open Access Journals (Sweden)

    Yuan-An Liu

    Full Text Available Microscopic analysis of tumor vasculature plays an important role in understanding the progression and malignancy of colorectal carcinoma. However, due to the geometry of blood vessels and their connections, standard microtome-based histology is limited in providing the spatial information of the vascular network as a 3-dimensional (3-D) continuum. To facilitate 3-D tissue analysis, we prepared transparent human colorectal biopsies by optical clearing for in-depth confocal microscopy with CD34 immunohistochemistry. Full-depth colons were obtained from colectomies performed for colorectal carcinoma. Specimens were prepared away from (control) and at the tumor site. Taking advantage of the transparent specimens, we acquired anatomic information up to 200 μm in depth for qualitative and quantitative analyses of the vasculature. Examples are given to illustrate: (1) the association between the tumor microstructure and vasculature in space, including the perivascular cuffs of tumor outgrowth, and (2) the difference between the 2-D and 3-D quantitation of microvessels. We also demonstrate that the optically cleared mucosa can be retrieved after 3-D microscopy to perform the standard microtome-based histology (H&E staining and immunohistochemistry) for systematic integration of the two tissue imaging methods. Overall, we established a new tumor histological approach to integrate 3-D imaging, illustration, and quantitation of human colonic microvessels in normal and cancerous specimens. This approach has significant promise to work with the standard histology to better characterize the tumor microenvironment in colorectal carcinoma.

  17. Disturbed Intracardiac Flow Organization After Atrioventricular Septal Defect Correction as Assessed With 4D Flow Magnetic Resonance Imaging and Quantitative Particle Tracing

    NARCIS (Netherlands)

    Calkoen, Emmeline E.; de Koning, Patrick J. H.; Blom, Nico A.; Kroft, Lucia J. M.; de Roos, Albert; Wolterbeek, Ron; Roest, Arno A. W.; Westenberg, Jos J. M.

    2015-01-01

    Objectives: Four-dimensional (3 spatial directions and time) velocity-encoded flow magnetic resonance imaging with quantitative particle tracing analysis allows assessment of left ventricular (LV) blood flow organization. Corrected atrioventricular septal defect (AVSD) patients have an abnormal left

  18. The Asian Correction Can Be Quantitatively Forecasted Using a Statistical Model of Fusion-Fission Processes.

    Science.gov (United States)

    Teh, Boon Kin; Cheong, Siew Ann

    2016-01-01

    The Global Financial Crisis of 2007-2008 wiped out US$37 trillion across global financial markets, a value equivalent to the combined GDPs of the United States and the European Union in 2014. The defining moment of this crisis was the failure of Lehman Brothers, which precipitated the October 2008 crash and the Asian Correction (March 2009). Had the Federal Reserve seen these crashes coming, they might have bailed out Lehman Brothers, and prevented the crashes altogether. In this paper, we show that some of these market crashes (like the Asian Correction) can be predicted, if we assume that a large number of adaptive traders employ competing trading strategies. As the number of adherents of some strategies grows, others decline in the constantly changing strategy space. When a strategy group grows into a giant component, trader actions become increasingly correlated, and this is reflected in the stock price. The fragmentation of this giant component leads to a market crash. In this paper, we also derive the mean-field market crash forecast equation based on a model of fusions and fissions in the trading strategy space. By fitting the continuous returns of 20 stocks traded on the Singapore Exchange to the market crash forecast equation, we obtain crash predictions ranging from end October 2008 to mid-February 2009, with early warning four to six months prior to the crashes.

  19. The Asian Correction Can Be Quantitatively Forecasted Using a Statistical Model of Fusion-Fission Processes.

    Directory of Open Access Journals (Sweden)

    Boon Kin Teh

    Full Text Available The Global Financial Crisis of 2007-2008 wiped out US$37 trillion across global financial markets, a value equivalent to the combined GDPs of the United States and the European Union in 2014. The defining moment of this crisis was the failure of Lehman Brothers, which precipitated the October 2008 crash and the Asian Correction (March 2009). Had the Federal Reserve seen these crashes coming, they might have bailed out Lehman Brothers, and prevented the crashes altogether. In this paper, we show that some of these market crashes (like the Asian Correction) can be predicted, if we assume that a large number of adaptive traders employ competing trading strategies. As the number of adherents of some strategies grows, others decline in the constantly changing strategy space. When a strategy group grows into a giant component, trader actions become increasingly correlated, and this is reflected in the stock price. The fragmentation of this giant component leads to a market crash. In this paper, we also derive the mean-field market crash forecast equation based on a model of fusions and fissions in the trading strategy space. By fitting the continuous returns of 20 stocks traded on the Singapore Exchange to the market crash forecast equation, we obtain crash predictions ranging from end October 2008 to mid-February 2009, with early warning four to six months prior to the crashes.

  20. Determination of avermectins by the internal standard recovery correction - high performance liquid chromatography - quantitative Nuclear Magnetic Resonance method.

    Science.gov (United States)

    Zhang, Wei; Huang, Ting; Li, Hongmei; Dai, Xinhua; Quan, Can; He, Yajuan

    2017-09-01

    Quantitative Nuclear Magnetic Resonance (qNMR) is widely used to determine the purity of organic compounds. For compounds of lower purity, especially those with molecular weight above 500, qNMR risks purity errors because impurity peaks are likely to be incompletely separated from the peak of the major component. In this study, an offline ISRC-HPLC-qNMR (internal standard recovery correction - high performance liquid chromatography - qNMR) method was developed to overcome this problem. It is accurate because it excludes the influence of impurities; it is low-cost because it uses common mobile phases; and it extends the applicable scope of qNMR. In this method, a mixed solution of the sample and an internal standard is separated by HPLC with common mobile phases, and only the eluents of the analyte and the internal standard are collected in the same tube. After evaporation and re-dissolution, the fraction is determined by qNMR. A recovery correction factor is determined by comparison of the solutions before and after these procedures. After correction, the mass fraction of the analyte was constant, accurate and precise, even when sample loss varied during these procedures or HPLC resolution was poor. Avermectin B1a, with a purity of ~93% and a molecular weight of 873, was analyzed. Moreover, the homologues of avermectin B1a were determined based on identification and quantitative analysis by tandem mass spectrometry and HPLC, and the results were consistent with those of the traditional mass balance method. The results showed that the method can be widely applied to organic compounds, and could further promote qNMR as a primary method in international metrological systems. Copyright © 2017 Elsevier B.V. All rights reserved.
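The arithmetic behind the method above can be sketched with the standard qNMR purity equation extended by a recovery factor. All symbol names, and the convention that the factor is the analyte/standard response ratio after versus before the HPLC collection step, are illustrative assumptions; the paper defines its own correction factor:

```python
def isrc_qnmr_mass_fraction(sig_ratio, recovery_factor, n_std, n_an,
                            mw_an, mw_std, m_std, m_sample, purity_std):
    """qNMR mass fraction with an ISRC-style recovery correction.

    sig_ratio: analyte/internal-standard integral ratio measured in the
    collected, evaporated and re-dissolved HPLC fraction.
    recovery_factor: the same ratio after vs before the HPLC step for a
    control solution, so dividing cancels differential sample loss.
    n_*: protons per quantified signal; mw_*: molar masses;
    m_*: weighed masses; purity_std: internal standard purity.
    """
    return (sig_ratio / recovery_factor) * (n_std / n_an) \
        * (mw_an / mw_std) * (m_std / m_sample) * purity_std
```

The key point of the abstract is visible in the formula: because `recovery_factor` is measured for each run, the result stays constant even when `sig_ratio` is depressed by variable sample loss.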

  1. Correction of Gradient Nonlinearity Bias in Quantitative Diffusion Parameters of Renal Tissue with Intra Voxel Incoherent Motion.

    Science.gov (United States)

    Malyarenko, Dariya I; Pang, Yuxi; Senegas, Julien; Ivancevic, Marko K; Ross, Brian D; Chenevert, Thomas L

    2015-12-01

    Spatially non-uniform diffusion weighting bias due to gradient nonlinearity (GNL) causes substantial errors in apparent diffusion coefficient (ADC) maps for anatomical regions imaged distant from the magnet isocenter. Our previously described approach allowed effective removal of spatial ADC bias from three orthogonal DWI measurements for mono-exponential media of arbitrary anisotropy. The present work evaluates correction feasibility and performance for quantitative diffusion parameters of the two-component IVIM model for well-perfused and nearly isotropic renal tissue. Sagittal kidney DWI scans of a volunteer were performed on a clinical 3T MRI scanner near isocenter and offset superiorly. Spatially non-uniform diffusion weighting due to GNL resulted in both a shift and broadening of perfusion-suppressed ADC histograms for off-center DWI relative to unbiased measurements close to isocenter. Direction-average DW-bias correctors were computed based on the known gradient design provided by the vendor. The computed bias maps were empirically confirmed by coronal DWI measurements of an isotropic gel-flood phantom. Both phantom and renal tissue ADC bias for off-center measurements was effectively removed by applying pre-computed 3D correction maps. Comparable ADC accuracy was achieved for corrections of both b-maps and DWI intensities in the presence of IVIM perfusion. No significant bias impact was observed for the IVIM perfusion fraction.

  2. An algorithm to correct saturated mass spectrometry ion abundances for enhanced quantitation and mass accuracy in omic studies

    Energy Technology Data Exchange (ETDEWEB)

    Bilbao, Aivett; Gibbons, Bryson C.; Slysz, Gordon W.; Crowell, Kevin L.; Monroe, Matthew E.; Ibrahim, Yehia M.; Smith, Richard D.; Payne, Samuel H.; Baker, Erin S.

    2018-04-01

    The mass accuracy and peak intensity of ions detected by mass spectrometry (MS) measurements are essential to facilitate compound identification and quantitation. However, high concentration species can easily cause problems if their ion intensities reach beyond the limits of the detection system, leading to distorted and non-ideal detector response (e.g. saturation), and largely precluding the calculation of accurate m/z and intensity values. Here we present an open source computational method to correct peaks above a defined intensity (saturated) threshold determined by the MS instrumentation such as the analog-to-digital converters or time-to-digital converters used in conjunction with time-of-flight MS. In this method, the isotopic envelope for each observed ion above the saturation threshold is compared to its expected theoretical isotopic distribution. The most intense isotopic peak for which saturation does not occur is then utilized to re-calculate the precursor m/z and correct the intensity, resulting in both higher mass accuracy and greater dynamic range. The benefits of this approach were evaluated with proteomic and lipidomic datasets of varying complexities. After correcting the high concentration species, reduced mass errors and enhanced dynamic range were observed for both simple and complex omic samples. Specifically, the mass error dropped by more than 50% in most cases with highly saturated species and dynamic range increased by 1-2 orders of magnitude for peptides in a blood serum sample.
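The intensity-rescaling half of the algorithm described above can be sketched in a few lines: find the most intense isotopic peak that stays below the saturation threshold and use the theoretical envelope shape to restore the clipped peaks. This simplified version omits the m/z recalculation and envelope matching steps; names are illustrative, not the open-source tool's API:

```python
def correct_saturated_intensity(observed, theoretical, saturation):
    """Rescale a saturated isotopic envelope using its theoretical shape.

    observed / theoretical: parallel sequences over the isotopic peaks
    of one species (theoretical values need only be correct as ratios).
    The most intense observed peak below the saturation threshold
    anchors the rescaling of the whole envelope.
    """
    candidates = [i for i, o in enumerate(observed) if o < saturation]
    if not candidates:
        raise ValueError("all isotopic peaks saturated")
    anchor = max(candidates, key=lambda i: observed[i])
    scale = observed[anchor] / theoretical[anchor]
    return [t * scale for t in theoretical]
```

For a species whose monoisotopic peak clips at the detector limit, the corrected intensity can greatly exceed the threshold, which is where the reported 1-2 orders of magnitude of extra dynamic range comes from.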

  3. Validation of an Innovative Satellite-Based UV Dosimeter

    Science.gov (United States)

    Morelli, Marco; Masini, Andrea; Simeone, Emilio; Khazova, Marina

    2016-08-01

    We present an innovative satellite-based UV (ultraviolet) radiation dosimeter with a mobile app interface that has been validated by exploiting both ground-based measurements and an in-vivo assessment of the erythemal effects on volunteers having a controlled exposure to solar radiation. Both validations showed that the satellite-based UV dosimeter has the accuracy and reliability needed for health-related applications. The app with this satellite-based UV dosimeter also includes related functionalities such as the provision of a safe sun exposure time updated in real time and an end-of-exposure visual/sound alert. This app will be launched on the global market by siHealth Ltd in May 2016 under the name "HappySun", available for both Android and iOS devices (more info on http://www.happysun.co.uk). Extensive R&D activities are ongoing for further improvement of the satellite-based UV dosimeter's accuracy.
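A "safe sun exposure time" of the kind mentioned above can be estimated from the erythemal dose rate implied by the UV index. This is a generic textbook calculation, not siHealth's proprietary algorithm; the default minimal erythemal dose is a typical skin-type-II value and is an assumption:

```python
def safe_exposure_minutes(uv_index, med_j_per_m2=250.0):
    """Rough safe sun-exposure time from the UV index.

    By definition, UVI = erythemal irradiance (W/m^2) * 40, so the dose
    rate is uv_index / 40 W/m^2. med_j_per_m2 is the minimal erythemal
    dose for the skin type (250 J/m^2 is typical of skin type II).
    """
    dose_rate = uv_index / 40.0  # erythemally weighted W/m^2
    return (med_j_per_m2 / dose_rate) / 60.0
```

At UVI 8, for instance, the dose rate is 0.2 W/m^2 and the estimate is about 21 minutes; a real-time service would recompute this continuously as cloud cover and solar elevation change.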

  4. Quantitation of regional cerebral blood flow corrected for partial volume effect using O-15 water and PET

    DEFF Research Database (Denmark)

    Iida, H.; Law, I.; Pakkenberg, B.

    2000-01-01

    Limited spatial resolution of positron emission tomography (PET) can cause significant underestimation in the observed regional radioactivity concentration (the so-called partial volume effect, or PVE), resulting in systematic errors in estimating quantitative physiologic parameters. The authors have formulated four mathematical models that describe the dynamic behavior of a freely diffusible tracer (H2(15)O) in a region of interest (ROI), incorporating estimates of regional tissue flow that are independent of PVE. The current study was intended to evaluate the feasibility of these models and to establish a methodology to accurately quantify regional cerebral blood flow (CBF) corrected for PVE in cortical gray matter regions. Five monkeys were studied with PET after IV H2(15)O two times (n = 3) or three times (n = 2) in a row. Two ROIs were drawn on structural magnetic resonance imaging (MRI) scans and projected

  5. Quantitative MR thermometry based on phase-drift correction PRF shift method at 0.35 T.

    Science.gov (United States)

    Chen, Yuping; Ge, Mengke; Ali, Rizwan; Jiang, Hejun; Huang, Xiaoyan; Qiu, Bensheng

    2018-04-10

    Noninvasive magnetic resonance thermometry (MRT) at low field using the proton resonance frequency shift (PRFS) is a promising technique for monitoring ablation temperature, since open-configuration low-field MR scanners are more suitable for interventional procedures than closed systems. In this study, phase-drift correction PRFS with first-order polynomial fitting was proposed to investigate the feasibility and accuracy of quantitative MR thermography during hyperthermia procedures in a 0.35 T open MR scanner. Unheated phantom and ex vivo porcine liver experiments were performed to evaluate the optimal polynomial order for phase-drift correction PRFS. The temperature estimation approach was tested in brain temperature experiments on three healthy volunteers at room temperature, and in ex vivo porcine liver microwave ablation experiments. The output power of the microwave generator was set at 40 W for 330 s. In the unheated experiments, the temperature root mean square error (RMSE) in the inner region of interest was calculated to assess the best-fitting polynomial order. For the ablation experiments, the relative temperature difference profile measured by phase-drift correction PRFS was compared with the temperature changes recorded by a fiber optic temperature probe around the microwave ablation antenna within the target thermal region. Phase-drift correction PRFS with first-order polynomial fitting achieved the smallest temperature RMSE in the unheated phantom, ex vivo porcine liver and in vivo human brain experiments. In the ex vivo porcine liver microwave ablation procedure, the temperature error between MRT and the fiber optic probe was less than 2 °C for all but six temperature points. Overall, the RMSE of all temperature points was 1.49 °C. Both in vivo and ex vivo experiments showed that MR thermometry based on phase-drift correction PRFS with first-order polynomial fitting can be applied to monitor temperature changes during
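The two steps described above, fitting a first-order (planar) polynomial to the phase drift over an unheated region and converting the residual phase to temperature via the PRF relation delta_T = delta_phi / (gamma * alpha * B0 * TE), can be sketched as follows. The interface and mask convention are illustrative; alpha = -0.01 ppm/degC is the usual PRF coefficient:

```python
import numpy as np

def prfs_temperature(phase_map, baseline_phase, te, b0,
                     alpha=-0.01e-6, gamma=267.513e6, drift_mask=None):
    """PRFS temperature change with first-order phase-drift correction.

    A plane a + b*x + c*y is least-squares fitted to the phase difference
    over an unheated region (drift_mask True) and subtracted, then the
    corrected phase is scaled by 1 / (gamma * alpha * B0 * TE).
    Units: gamma in rad/s/T, alpha dimensionless per degC, TE in s, B0 in T.
    """
    dphi = phase_map - baseline_phase
    ny, nx = dphi.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    if drift_mask is None:
        drift_mask = np.ones_like(dphi, dtype=bool)
    A = np.column_stack([np.ones(drift_mask.sum()),
                         xx[drift_mask], yy[drift_mask]])
    coef, *_ = np.linalg.lstsq(A, dphi[drift_mask], rcond=None)
    drift = coef[0] + coef[1] * xx + coef[2] * yy
    return (dphi - drift) / (gamma * alpha * b0 * te)
```

Masking out the heated zone when fitting the plane is essential: otherwise the hot spot's own phase shift leaks into the drift estimate and biases the map.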

  6. A near real-time satellite-based global drought climate data record

    International Nuclear Information System (INIS)

    AghaKouchak, Amir; Nakhjiri, Navid

    2012-01-01

    Reliable drought monitoring requires long-term and continuous precipitation data. High resolution satellite measurements provide valuable precipitation information on a quasi-global scale. However, their short lengths of record limit their application to drought monitoring. In addition to this limitation, long-term low resolution satellite-based gauge-adjusted data sets, such as that of the Global Precipitation Climatology Project (GPCP), are not available in near real-time form for timely drought monitoring. This study bridges the gap between low resolution long-term satellite gauge-adjusted data and the emerging high resolution satellite precipitation data sets to create a long-term climate data record of droughts. To accomplish this, a Bayesian correction algorithm is used to combine GPCP data with real-time satellite precipitation data sets for drought monitoring and analysis. The results showed that the combined data sets after the Bayesian correction were a significant improvement over the uncorrected data. Furthermore, several recent major droughts, such as the 2011 Texas, 2010 Amazon and 2010 Horn of Africa droughts, were detected in the combined real-time and long-term satellite observations. This highlights the potential application of satellite precipitation data for regional to global drought monitoring. The final product is a real-time, data-driven, satellite-based standardized precipitation index that can be used for drought monitoring, especially over remote and/or ungauged regions. (letter)
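The final product mentioned above is a standardized precipitation index (SPI). A sketch of the conventional SPI computation for a single aggregation window: fit a gamma distribution to the precipitation record and map cumulative probabilities through the inverse standard normal (this is the index itself, not the paper's Bayesian merging step; the zero-rain handling shown is the common mixed-distribution convention):

```python
import numpy as np
from scipy import stats

def standardized_precipitation_index(precip):
    """SPI values for a precipitation record (one aggregation window).

    Fits a gamma distribution to the nonzero values, builds a mixed CDF
    with the empirical probability of zero rain, and transforms to
    standard-normal quantiles. Negative SPI indicates drier-than-normal
    conditions.
    """
    precip = np.asarray(precip, dtype=float)
    q = np.mean(precip == 0)  # probability of a zero-rain period
    nonzero = precip[precip > 0]
    shape, _, scale = stats.gamma.fit(nonzero, floc=0)
    cdf = np.where(precip > 0,
                   q + (1 - q) * stats.gamma.cdf(precip, shape, 0, scale),
                   q / 2.0)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))
```

In the drought record described above, the gamma parameters would be calibrated on the long-term bias-corrected series, so that real-time values can be standardized against a consistent climatology.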

  7. Study of Six Energy-Window Settings for Scatter Correction in Quantitative 111In Imaging: Comparative analysis Using SIMIND

    International Nuclear Information System (INIS)

    Gomez Facenda, A.; Castillo Lopez, J. P.; Torres Aroche, L. A.; Coca Perez, M. A.

    2013-01-01

    Activity quantification in nuclear medicine imaging is highly desirable, particularly for dosimetry and biodistribution studies of radiopharmaceuticals. Quantitative 111In imaging is increasingly important with the current interest in therapy using 90Y-radiolabeled compounds. Photons scattered in the patient are one of the major problems in quantification, leading to degradation of image quality. The aim of this work was to assess the configuration of energy windows and the best weight factor for scatter correction in 111In images. All images were obtained using the Monte Carlo simulation code SIMIND, configured to emulate the Nucline SPIRIT DH-V gamma camera. Simulations were validated by the agreement between experimental and simulated line-spread functions (LSF) of 99mTc. Sensitivity, scatter-to-total ratio, contrast and spatial resolution were examined for scatter-compensated images obtained from six different multi-window scatter corrections. Based on these results, the best energy-window setting was two 20% windows centered at 171 and 245 keV, together with a 10% scatter window located between the photopeaks at 209 keV. (Author)
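
    A window-based scatter correction of the kind compared in this record scales the counts in a scatter window by the window-width ratio (times an empirical weight factor) and subtracts them from each photopeak. The widths below follow the reported best setting (20% windows at 171 and 245 keV, a 10% window at 209 keV); the default weight k = 1 is a placeholder, since the study's optimized weight factor is not given here.

```python
def window_scatter_corrected(c_peak, c_scatter, w_peak_kev, w_scatter_kev, k=1.0):
    """Subtract a scatter estimate formed by scaling the scatter-window
    counts by the window-width ratio times an empirical weight k."""
    return max(c_peak - k * c_scatter * (w_peak_kev / w_scatter_kev), 0.0)

# Best setting from the study: 20% windows at the 171 and 245 keV
# photopeaks of 111In, sharing one 10% scatter window at 209 keV.
W171, W245, W209 = 0.20 * 171.0, 0.20 * 245.0, 0.10 * 209.0

def corrected_total(c171, c245, c209, k=1.0):
    """Total scatter-corrected counts over both 111In photopeaks."""
    return (window_scatter_corrected(c171, c209, W171, W209, k)
            + window_scatter_corrected(c245, c209, W245, W209, k))
```

    In practice k is tuned (e.g., against Monte Carlo truth, as done with SIMIND here) so that the subtracted estimate matches the true scatter fraction.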

  8. Quantitative micro-Raman analysis of volcanic glasses: influence and correction of matrix effects

    Science.gov (United States)

    Di Muro, Andrea

    2014-05-01

    Micro-Raman spectroscopy, though a very promising micro-analytical technique, is still not routinely used to quantify volatile elements dissolved in glasses. Following an original idea of Galeener and Mikkelsen (1981) for the quantification of hydroxyl (OH) in silica glass, several quantitative procedures have recently been proposed for the analysis of water, sulphur and carbon in natural glasses (obsidians, pumices, melt inclusions). The quantification of a single analyte requires calibrating the correlation between the intensity I (height or area) of the related Raman band, normalized or not to a reference band (RB), and the analyte concentration. For the analysis of alumino-silicate glasses, the RB corresponds to one of the two main envelopes (LF and HF) related to the vibration of the glass network. Calibrations are linear, provided the increase in the analyte concentration does not dramatically affect the RB intensity. Much attention has been paid to identifying the most appropriate spectral treatment (spectrum reduction, baseline subtraction, etc.) to achieve accurate measurement of band intensities. Here I show that the accuracy of Raman procedures for volatile quantification critically depends on the capability to predict and take into account the influence of multiple matrix effects, which are often correlated with the average polymerization degree of the glass network. A general model has been developed to predict matrix effects affecting micro-Raman analysis of natural glasses. The specific and critical influence of iron redox state and pressure are discussed. The approach has been extensively validated for the study of melt inclusions and matrices spanning a broad range of compositions and dissolved volatile contents.
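
    The calibration described above, a linear relation between concentration and the analyte band intensity normalized to a reference band, can be sketched as follows. The straight-line baseline and the band limits are simplifying assumptions standing in for the spectral treatments the record discusses; the calibration constant k must come from standards of known concentration.

```python
import numpy as np

def band_area(x, y, lo, hi):
    """Integrated band intensity between lo and hi (cm^-1) after removing
    a straight baseline drawn between the band edges."""
    sel = (x >= lo) & (x <= hi)
    xs, ys = x[sel], y[sel]
    base = np.interp(xs, [xs[0], xs[-1]], [ys[0], ys[-1]])
    net = ys - base
    return float(np.sum(0.5 * (net[:-1] + net[1:]) * np.diff(xs)))  # trapezoid rule

def analyte_concentration(x, y, analyte_band, ref_band, k):
    """Linear Raman calibration C = k * I_analyte / I_RB; valid while the
    analyte does not strongly perturb the reference-band intensity."""
    return k * band_area(x, y, *analyte_band) / band_area(x, y, *ref_band)
```

    Matrix effects of the kind the record analyzes show up here as a k that drifts with glass composition, which is exactly why a single linear calibration cannot be transferred between glasses without the predictive correction the author proposes.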

  9. Global trends in satellite-based emergency mapping

    Science.gov (United States)

    Voigt, Stefan; Giulio-Tonolo, Fabio; Lyons, Josh; Kučera, Jan; Jones, Brenda; Schneiderhan, Tobias; Platzeck, Gabriel; Kaku, Kazuya; Hazarika, Manzul Kumar; Czaran, Lorant; Li, Suju; Pedersen, Wendi; James, Godstime Kadiri; Proy, Catherine; Muthike, Denis Macharia; Bequignon, Jerome; Guha-Sapir, Debarati

    2016-01-01

    Over the past 15 years, scientists and disaster responders have increasingly used satellite-based Earth observations for global rapid assessment of disaster situations. We review global trends in satellite rapid response and emergency mapping from 2000 to 2014, analyzing more than 1000 incidents in which satellite monitoring was used for assessing major disaster situations. We provide a synthesis of spatial patterns and temporal trends in global satellite emergency mapping efforts and show that satellite-based emergency mapping is most intensively deployed in Asia and Europe and follows well the geographic, physical, and temporal distributions of global natural disasters. We present an outlook on the future use of Earth observation technology for disaster response and mitigation by putting past and current developments into context and perspective.

  10. Trellis-coded CPM for satellite-based mobile communications

    Science.gov (United States)

    Abrishamkar, Farrokh; Biglieri, Ezio

    1988-01-01

    Digital transmission for satellite-based land mobile communications is discussed. To satisfy the power and bandwidth limitations imposed on such systems, a combination of trellis coding and continuous-phase modulated signals is considered. Some schemes based on this idea are presented, and their performance is analyzed by computer simulation. The results obtained show that a scheme based on directional detection and Viterbi decoding appears promising for practical applications.

  11. Use of scatter correction in quantitative I-123 MIBG scintigraphy for differentiating patients with Parkinsonism: Results from Phantom experiment and clinical study

    International Nuclear Information System (INIS)

    Bai, J.; Hashimoto, J.; Suzuki, T.; Nakahara, T.; Kubo, A.; Ohira, M.; Takao, M.; Ogawa, K.

    2007-01-01

    The aims of this study were to elucidate the feasibility of scatter correction for improving the quantitative accuracy of the heart-to-mediastinum (H/M) ratio in I-123 MIBG imaging, and to clarify whether the H/M ratio calculated from the scatter-corrected image improves the accuracy of differentiating patients with Parkinsonism from those with other neurological disorders. The H/M ratio was calculated using the counts from planar images processed with and without scatter correction in a phantom and in patients. The triple energy window (TEW) method was used for scatter correction. Fifty-five patients were enrolled in the clinical study. Receiver operating characteristic (ROC) curve analysis was used to evaluate diagnostic performance. The H/M ratio was found to increase after scatter correction in the phantom simulating normal cardiac uptake, while no changes were observed in the phantom simulating no uptake. Scatter correction stabilized the H/M ratio by eliminating the influence of scatter photons originating from the liver, especially in the condition of no cardiac uptake. Similarly, scatter correction increased the H/M ratio in conditions other than Parkinson's disease but not in Parkinson's disease itself, widening the differences in the H/M ratios between the two groups. The overall power of the test did not show any significant improvement after scatter correction in differentiating patients with Parkinsonism. Based on the results of this study, it was concluded that scatter correction improves the quantitative accuracy of the H/M ratio in MIBG imaging but does not offer significant incremental diagnostic value over conventional imaging (without scatter correction). Nevertheless, the scatter correction technique deserves consideration in order to make the test more robust and obtain stable H/M ratios. (author)
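
    The TEW method used here estimates the scatter under the photopeak window as the trapezoid spanned by the count densities of two narrow flanking sub-windows. A minimal sketch, with the I-123 window widths below (20% main window around 159 keV, 7 keV sub-windows) being typical assumed values rather than the study's acquisition parameters:

```python
def tew_corrected(c_main, c_lower, c_upper, w_main_kev, w_sub_kev):
    """Triple-energy-window correction: scatter under the photopeak is
    approximated by a trapezoid through the two sub-window count densities."""
    scatter = (c_lower / w_sub_kev + c_upper / w_sub_kev) * w_main_kev / 2.0
    return max(c_main - scatter, 0.0)

def hm_ratio(heart_counts, medi_counts, w_main_kev=31.8, w_sub_kev=7.0):
    """H/M ratio from TEW-corrected counts; each argument is a
    (main, lower, upper) triple of per-pixel window counts."""
    h = tew_corrected(*heart_counts, w_main_kev, w_sub_kev)
    m = tew_corrected(*medi_counts, w_main_kev, w_sub_kev)
    return h / m
```

    Because the mediastinum typically holds proportionally more scatter than the heart, the corrected H/M ratio comes out higher than the uncorrected one, consistent with the phantom result reported above.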

  12. Full correction of scattering effects by using the radiative transfer theory for improved quantitative analysis of absorbing species in suspensions.

    Science.gov (United States)

    Steponavičius, Raimundas; Thennadil, Suresh N

    2013-05-01

    Sample-to-sample photon path length variations that arise due to multiple scattering can be removed by decoupling absorption and scattering effects by using the radiative transfer theory, with a suitable set of measurements. For samples where particles both scatter and absorb light, the extracted bulk absorption spectrum is not completely free from nonlinear particle effects, since it is related to the absorption cross-section of particles that changes nonlinearly with particle size and shape. For the quantitative analysis of absorbing-only (i.e., nonscattering) species present in a matrix that contains a particulate species that absorbs and scatters light, a method to eliminate particle effects completely is proposed here, which utilizes the particle size information contained in the bulk scattering coefficient extracted by using the Mie theory to carry out an additional correction step to remove particle effects from bulk absorption spectra. This should result in spectra that are equivalent to spectra collected with only the liquid species in the mixture. Such an approach has the potential to significantly reduce the number of calibration samples as well as improve calibration performance. The proposed method was tested with both simulated and experimental data from a four-component model system.

  13. From extended integrity monitoring to the safety evaluation of satellite-based localisation system

    International Nuclear Information System (INIS)

    Legrand, Cyril; Beugin, Julie; Marais, Juliette; Conrard, Blaise; El-Koursi, El-Miloudi; Berbineau, Marion

    2016-01-01

    Global Navigation Satellite Systems (GNSS) such as GPS, already used in aeronautics for safety-related applications, can play a major role in railway safety by allowing a train to locate itself safely. However, before this positioning solution can be implemented in any embedded system, its performance must be evaluated according to railway standards. The evaluation of GNSS performance is not based on the same attribute classes as RAMS evaluation. Faced with these difficulties, we propose to express integrity, a performance attribute of satellite-based localisation. This attribute comes from aeronautical standards and applies to a GNSS hybridised with an inertial system. To achieve this objective, the integrity attribute must be extended to this kind of system, and algorithms initially devoted to GNSS integrity monitoring alone must be adapted. Thereafter, the formalisation of this integrity attribute allows us to analyse safety quantitatively through the probabilities of integrity risk and wrong-side failure. In this paper, after an introductory discussion of the use of localisation systems in a railway safety context together with integrity issues, a particular integrity monitoring scheme is proposed and described. The detection events of this algorithm allow us to draw conclusions about the safety level of the satellite-based localisation system.

  14. Quantitative risk assessment: is more complex always better? Simple is not stupid and complex is not always more correct.

    Science.gov (United States)

    Zwietering, Marcel H

    2009-08-31

    In quantitative risk assessments a large variety of complexities can be found, from simple and deterministic to very extensive and stochastic. This publication advocates that both simple and complex approaches have their value and should be done in parallel. The simple analysis gives much insight and can help to detect main factors and potential errors in the complex analysis. Extensive analysis with increased complexity suggests better precision but might not increase the accuracy, due to the uncertainty in the additional parameters. However, complex analysis supplies more confidence in certain phenomena and might also increase insight. This is shown with two examples. The first is the effectiveness of sampling plans for powdered infant formula, for factories operating at various levels of contamination. The results of a simple determination, an analysis including within-batch variability, and an analysis including both within-batch and between-batch variability are compared. The last approach has the advantage that, apart from determining the probability of rejecting a batch, it can also determine the reduction of the health risk in the population following a certain sampling plan; it is more complex, but it also brings additional information. However, the conclusions still contain large uncertainty, due to the difficulty of obtaining realistic values of the within-batch and between-batch variability. The second example is dose-response relations, comparing the exponential model (one parameter), the beta-Poisson model (two parameters) and the Weibull-gamma model (three parameters). The conclusion is not that simple is best, but that simple is not stupid and provides valuable information. Complex, on the other hand, is not by definition more correct, but does have its merits.
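
    The three dose-response models compared in the second example have standard closed forms, which makes the one-two-three parameter progression concrete: the beta-Poisson model is the Weibull-gamma model with its shape parameter fixed at 1. The parameter values used anywhere with these functions are illustrative, not from the paper.

```python
import math

def p_exponential(dose, r):
    """One-parameter exponential dose-response model: P = 1 - exp(-r*d)."""
    return 1.0 - math.exp(-r * dose)

def p_beta_poisson(dose, alpha, beta):
    """Two-parameter (approximate) beta-Poisson model:
    P = 1 - (1 + d/beta)^(-alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def p_weibull_gamma(dose, alpha, beta, b):
    """Three-parameter Weibull-gamma model:
    P = 1 - (1 + d^b/beta)^(-alpha); b = 1 recovers beta-Poisson."""
    return 1.0 - (1.0 + dose ** b / beta) ** (-alpha)
```

    The nesting illustrates the paper's point: the extra parameter buys flexibility at low doses, but only improves accuracy if the data can actually constrain it.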

  15. Impact of CT attenuation correction method on quantitative respiratory-correlated (4D) PET/CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nyflot, Matthew J., E-mail: nyflot@uw.edu [Department of Radiation Oncology, University of Washington, Seattle, Washington 98195-6043 (United States); Lee, Tzu-Cheng [Department of Bioengineering, University of Washington, Seattle, Washington 98195-6043 (United States); Alessio, Adam M.; Kinahan, Paul E. [Department of Radiology, University of Washington, Seattle, Washington 98195-6043 (United States); Wollenweber, Scott D.; Stearns, Charles W. [GE Healthcare, Waukesha, Wisconsin 53188 (United States); Bowen, Stephen R. [Department of Radiation Oncology, University of Washington, Seattle, Washington 98195-6043 and Department of Radiology, University of Washington, Seattle, Washington 98195-6043 (United States)

    2015-01-15

    Purpose: Respiratory-correlated (4D) PET/CT is used to mitigate errors from respiratory motion; however, the optimal CT attenuation correction (CTAC) method for 4D PET/CT is unknown. The authors performed a phantom study to evaluate the quantitative performance of CTAC methods for 4D PET/CT in a ground-truth setting. Methods: A programmable respiratory motion phantom with a custom movable insert designed to emulate a lung lesion and lung tissue was used for this study. The insert was driven by one of five waveforms: two sinusoidal waveforms or three patient-specific respiratory waveforms. 3D PET and 4D PET images of the phantom under motion were acquired and reconstructed with six CTAC methods: helical breath-hold (3DHEL), helical free-breathing (3DMOT), 4D phase-averaged (4DAVG), 4D maximum intensity projection (4DMIP), 4D phase-matched (4DMATCH), and 4D end-exhale (4DEXH) CTAC. Recovery of SUV{sub max}, SUV{sub mean}, SUV{sub peak}, and segmented tumor volume was evaluated as RC{sub max}, RC{sub mean}, RC{sub peak}, and RC{sub vol}, representing percent difference relative to the static ground-truth case. Paired Wilcoxon tests and Kruskal–Wallis ANOVA were used to test for significant differences. Results: For 4D PET imaging, the maximum intensity projection CTAC produced significantly more accurate recovery coefficients than all other CTAC methods (p < 0.0001 over all metrics). Over all motion waveforms, ratios of 4DMIP CTAC recovery were 0.2 ± 5.4, −1.8 ± 6.5, −3.2 ± 5.0, and 3.0 ± 5.9 for RC{sub max}, RC{sub peak}, RC{sub mean}, and RC{sub vol}. In comparison, recovery coefficients for phase-matched CTAC were −8.4 ± 5.3, −10.5 ± 6.2, −7.6 ± 5.0, and −13.0 ± 7.7 for RC{sub max}, RC{sub peak}, RC{sub mean}, and RC{sub vol}. When testing differences between phases over all CTAC methods and waveforms, end-exhale phases were significantly more accurate (p = 0.005). However, these differences were driven by
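
    The recovery coefficients here are percent differences of each SUV metric relative to the static ground-truth case. A minimal sketch of that bookkeeping, using a simple fixed-fraction-of-maximum threshold as a hypothetical stand-in for the study's segmentation (the voxel size and threshold are assumptions):

```python
import numpy as np

def suv_stats(img, voxel_ml=0.064, thresh_frac=0.5):
    """SUVmax, SUVmean and segmented volume from a fixed-fraction-of-max
    threshold (a simplified stand-in for the study's segmentation)."""
    mask = img >= thresh_frac * img.max()
    return {"max": float(img.max()),
            "mean": float(img[mask].mean()),
            "vol": float(mask.sum() * voxel_ml)}

def recovery_coefficients(moving, static):
    """Percent difference of each metric relative to the static ground truth."""
    return {k: 100.0 * (moving[k] - static[k]) / static[k] for k in static}
```

    A negative RC then means the moving acquisition under-recovers the metric relative to the static truth, matching the sign convention of the values quoted above.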

  16. [Surveying a zoological facility through satellite-based geodesy].

    Science.gov (United States)

    Böer, M; Thien, W; Tölke, D

    2000-06-01

    In the course of a thesis submitted for a diploma degree at the Fachhochschule Oldenburg, the Serengeti Safaripark was surveyed in autumn and winter 1996/97, laying the planning foundations for the application for licences from the controlling authorities. Taking into consideration the special way animals are kept in the Serengeti Safaripark (game ranching, spacious walk-through facilities), the intention was to employ satellite-based geodesy. This technology relies on special antennas receiving signals from 24 satellites that circle the globe. These data are gathered and examined; the examination yields the exact position of the antenna in a coordinate system, which allows the point to be depicted on a map. This procedure was used both stationary (from a strictly defined point) and in motion (in a moving car). Conventional procedures were used where satellite-based geodesy reached its limits. Finally, a detailed map of the Serengeti Safaripark was created, showing the position and size of stables and enclosures as well as wooded and water areas and the sectors of the leisure park. Furthermore, the surveyed enclosure areas, together with an existing animal database, were incorporated into an information system with which the stock of animals can be managed on a per-enclosure basis.

  17. Satellite-based detection of volcanic sulphur dioxide from recent eruptions in Central and South America

    Directory of Open Access Journals (Sweden)

    D. Loyola

    2008-01-01

    Volcanic eruptions can emit large amounts of rock fragments and fine particles (ash) into the atmosphere, as well as several gases, including sulphur dioxide (SO2). These ejecta and emissions are a major natural hazard, not only to the local population, but also to the infrastructure in the vicinity of volcanoes and to aviation. Here, we describe a methodology to retrieve quantitative information about volcanic SO2 plumes from satellite-borne measurements in the UV/visible spectral range. The combination of a satellite-based SO2 detection scheme and a state-of-the-art 3D trajectory model enables us to confirm the volcanic origin of trace gas signals and to estimate the plume height and the effective emission height. This is demonstrated by case studies for four selected volcanic eruptions in South and Central America, using the GOME, SCIAMACHY and GOME-2 instruments.

  18. Programmable Ultra-Lightweight System Adaptable Radio Satellite Base Station

    Science.gov (United States)

    Varnavas, Kosta; Sims, Herb

    2015-01-01

    With the explosion of the CubeSat, small sat, and nanosat markets, the need for a robust, highly capable, yet affordable satellite base station, capable of telemetry capture and relay, is significant. The Programmable Ultra-Lightweight System Adaptable Radio (PULSAR) is NASA Marshall Space Flight Center's (MSFC's) software-defined digital radio, developed with previous Technology Investment Programs and Technology Transfer Office resources. The current PULSAR will have achieved a Technology Readiness Level-6 by the end of FY 2014. The extensibility of the PULSAR will allow it to be adapted to perform the tasks of a mobile base station capable of commanding, receiving, and processing satellite, rover, or planetary probe data streams with an appropriate antenna.

  19. Detecting weather radar clutter using satellite-based nowcasting products

    DEFF Research Database (Denmark)

    Jensen, Thomas B.S.; Gill, Rashpal S.; Overgaard, Søren

    2006-01-01

    This contribution presents initial results from experiments with detection of weather radar clutter by information fusion with satellite-based nowcasting products. Previous studies using information fusion of weather radar data and first-generation Meteosat imagery have shown promising results … for the detection and removal of clutter. Naturally, the improved spatio-temporal resolution of the Meteosat Second Generation sensors, coupled with the increased number of spectral bands, is expected to yield even better detection accuracies. Weather radar data from three C-band Doppler weather radars … Application Facility' of EUMETSAT and is based on multispectral images from the SEVIRI sensor of the Meteosat-8 platform. Of special interest is the 'Precipitating Clouds' product, which uses the spectral information coupled with surface temperatures from numerical weather predictions to assign probabilities …

  20. Impact of subject head motion on quantitative brain 15O PET and its correction by image-based registration algorithm

    International Nuclear Information System (INIS)

    Matsubara, Keisuke; Ibaraki, Masanobu; Nakamura, Kazuhiro; Yamaguchi, Hiroshi; Umetsu, Atsushi; Kinoshita, Fumiko; Kinoshita, Toshibumi

    2013-01-01

    Subject head motion during sequential 15 O positron emission tomography (PET) scans can result in artifacts in cerebral blood flow (CBF) and oxygen metabolism maps. However, to our knowledge, there are no systematic studies examining this issue. Herein, we investigated the effect of head motion on the quantification of CBF and oxygen metabolism, and propose an image-based motion correction method dedicated to 15 O PET studies that corrects for transmission-emission mismatch and inter-scan mismatch of emission scans. We analyzed 15 O PET data for patients with major arterial steno-occlusive disease (n=130) to determine the occurrence frequency of head motion during 15 O PET examination. Image-based motion correction without and with realignment between transmission and emission scans, termed the simple and 2-step methods, respectively, was applied to the cases that showed severe inter-scan motion. Severe inter-scan motion (>3 mm translation or >5° rotation) was observed in 27 of 520 adjacent scan pairs (5.2%). In these cases, unrealistic values of oxygen extraction fraction (OEF) or cerebrovascular reactivity (CVR) were observed without motion correction; motion correction eliminated these artifacts. The volume-of-interest (VOI) analysis demonstrated that motion correction changed the OEF in the middle cerebral artery territory by 17.3% at maximum. The inter-scan motion also affected cerebral blood volume (CBV), cerebral metabolic rate of oxygen (CMRO 2 ) and CBF, all of which were improved by the motion correction. A difference in VOI values between the simple and 2-step methods was also observed. These data suggest that image-based motion correction is useful for accurate measurement of CBF and oxygen metabolism by 15 O PET. (author)
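
    The core of any image-based inter-scan realignment is estimating the spatial offset between two scans of the same subject. As an illustrative fragment only (the study uses a full rigid-body registration with rotations, not this), an integer-voxel translation can be recovered by FFT cross-correlation:

```python
import numpy as np

def translation_offset(ref, moving):
    """Integer-voxel inter-scan shift by FFT cross-correlation; returns the
    shift to pass to np.roll to realign `moving` with `ref`. A minimal
    stand-in for full rigid (translation + rotation) registration."""
    xc = np.fft.ifftn(np.fft.fftn(ref) * np.conj(np.fft.fftn(moving))).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # Map wrapped peak indices into signed shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, xc.shape))
```

    Applied between adjacent emission scans (and between transmission and emission data, as in the 2-step method above), the estimated offsets flag the >3 mm cases and supply the realignment to correct them.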

  1. SAMIRA - SAtellite based Monitoring Initiative for Regional Air quality

    Science.gov (United States)

    Schneider, Philipp; Stebel, Kerstin; Ajtai, Nicolae; Diamandi, Andrei; Horalek, Jan; Nicolae, Doina; Stachlewska, Iwona; Zehner, Claus

    2016-04-01

    Here, we present a new ESA-funded project entitled Satellite based Monitoring Initiative for Regional Air quality (SAMIRA), which aims at improving regional and local air quality monitoring through synergetic use of data from present and upcoming satellites, traditionally used in situ air quality monitoring networks and output from chemical transport models. Through collaborative efforts in four countries, namely Romania, Poland, the Czech Republic and Norway, all with existing air quality problems, SAMIRA intends to support the involved institutions and associated users in their national monitoring and reporting mandates as well as to generate novel research in this area. Despite considerable improvements in the past decades, Europe is still far from achieving levels of air quality that do not pose unacceptable hazards to humans and the environment. Main concerns in Europe are exceedances of particulate matter (PM), ground-level ozone, benzo(a)pyrene (BaP) and nitrogen dioxide (NO2). While overall sulfur dioxide (SO2) emissions have decreased in recent years, regional concentrations can still be high in some areas. The objectives of SAMIRA are to improve algorithms for the retrieval of hourly aerosol optical depth (AOD) maps from SEVIRI, and to develop robust methods for deriving column- and near-surface PM maps for the study area by combining satellite AOD with information from regional models. The benefit to existing monitoring networks (in situ, models, satellite) by combining these datasets using data fusion methods will be tested for satellite-based NO2, SO2, and PM/AOD. Furthermore, SAMIRA will test and apply techniques for downscaling air quality-related EO products to a spatial resolution that is more in line with what is generally required for studying urban and regional scale air quality. This will be demonstrated for a set of study sites that include the capitals of the four countries and the highly polluted areas along the border of Poland and the

  2. Evaluating the hydrological consistency of satellite based water cycle components

    KAUST Repository

    Lopez Valencia, Oliver Miguel

    2016-06-15

    Advances in multi-satellite based observations of the earth system have provided the capacity to retrieve information across a wide-range of land surface hydrological components and provided an opportunity to characterize terrestrial processes from a completely new perspective. Given the spatial advantage that space-based observations offer, several regional-to-global scale products have been developed, offering insights into the multi-scale behaviour and variability of hydrological states and fluxes. However, one of the key challenges in the use of satellite-based products is characterizing the degree to which they provide realistic and representative estimates of the underlying retrieval: that is, how accurate are the hydrological components derived from satellite observations? The challenge is intrinsically linked to issues of scale, since the availability of high-quality in-situ data is limited, and even where it does exist, is generally not commensurate to the resolution of the satellite observation. Basin-scale studies have shown considerable variability in achieving water budget closure with any degree of accuracy using satellite estimates of the water cycle. In order to assess the suitability of this type of approach for evaluating hydrological observations, it makes sense to first test it over environments with restricted hydrological inputs, before applying it to more hydrological complex basins. Here we explore the concept of hydrological consistency, i.e. the physical considerations that the water budget impose on the hydrologic fluxes and states to be temporally and spatially linked, to evaluate the reproduction of a set of large-scale evaporation (E) products by using a combination of satellite rainfall (P) and Gravity Recovery and Climate Experiment (GRACE) observations of storage change, focusing on arid and semi-arid environments, where the hydrological flows can be more realistically described. Our results indicate no persistent hydrological
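
    The hydrological consistency check described above boils down to the water-budget closure P − E − ΔS (− Q) per time step, with residuals near zero indicating that the satellite products are physically compatible. A minimal sketch; the 10 mm tolerance is an arbitrary illustrative threshold, not a value from the study.

```python
import numpy as np

def budget_residual(p, e, ds, q=0.0):
    """Water-budget residual P - E - dS - Q per time step (e.g. mm/month);
    hydrologically consistent products keep it near zero."""
    return (np.asarray(p, float) - np.asarray(e, float)
            - np.asarray(ds, float) - np.asarray(q, float))

def is_consistent(p, e, ds, q=0.0, tol_mm=10.0):
    """Flag time steps whose closure error is within a chosen tolerance."""
    return np.abs(budget_residual(p, e, ds, q)) <= tol_mm
```

    In arid basins, runoff Q is often negligible, which is why the record argues such environments are the natural first test bed: the residual then directly measures the mutual consistency of P (satellite rainfall), ΔS (GRACE) and the evaporation product under evaluation.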

  3. Satellite-based emission constraint for nitrogen oxides: Capability and uncertainty

    Science.gov (United States)

    Lin, J.; McElroy, M. B.; Boersma, F.; Nielsen, C.; Zhao, Y.; Lei, Y.; Liu, Y.; Zhang, Q.; Liu, Z.; Liu, H.; Mao, J.; Zhuang, G.; Roozendael, M.; Martin, R.; Wang, P.; Spurr, R. J.; Sneep, M.; Stammes, P.; Clemer, K.; Irie, H.

    2013-12-01

    Vertical column densities (VCDs) of tropospheric nitrogen dioxide (NO2) retrieved from satellite remote sensing have been employed widely to constrain emissions of nitrogen oxides (NOx). A major strength of satellite-based emission constraint is analysis of emission trends and variability, while a crucial limitation is errors both in satellite NO2 data and in model simulations relating NOx emissions to NO2 columns. Through a series of studies, we have explored these aspects over China. We separate anthropogenic from natural sources of NOx by exploiting their different seasonality. We infer trends of NOx emissions in recent years and effects of a variety of socioeconomic events at different spatiotemporal scales including the general economic growth, global financial crisis, Chinese New Year, and Beijing Olympics. We further investigate the impact of growing NOx emissions on particulate matter (PM) pollution in China. As part of recent developments, we identify and correct errors in both satellite NO2 retrieval and model simulation that ultimately affect NOx emission constraint. We improve the treatments of aerosol optical effects, clouds and surface reflectance in the NO2 retrieval process, using as reference ground-based MAX-DOAS measurements to evaluate the improved retrieval results. We analyze the sensitivity of simulated NO2 to errors in the model representation of major meteorological and chemical processes with a subsequent correction of model bias. Future studies will implement these improvements to re-constrain NOx emissions.

  4. The attitude inversion method of geostationary satellites based on unscented particle filter

    Science.gov (United States)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain, since they appear as non-resolved images on ground-based observation equipment used in space object surveillance. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to address the strongly non-linear character of inverting satellite attitude from photometric data, and combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the UKF to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves the applicability of the attitude inversion method compared with the UKF alone and mitigates the particle degradation and dilution of the PF-based attitude inversion method. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method effectively solves the problem of particle degradation and depletion in the PF-based attitude inversion method, as well as the unsuitability of the UKF for strongly non-linear attitude inversion. The inversion accuracy is clearly superior to that of UKF and PF; moreover, in the case of large attitude errors the method can invert the attitude with few particles and high precision.

  5. SU-G-IeP1-06: Estimating Relative Tissue Density From Quantitative MR Images: A Novel Perspective for MRI-Only Heterogeneity Corrected Dose Calculation

    International Nuclear Information System (INIS)

    Soliman, A; Hashemi, M; Safigholi, H; Tchistiakova, E; Song, W

    2016-01-01

    Purpose: To explore the feasibility of extracting relative density from quantitative MRI measurements, and to estimate a correlation between the extracted measures and CT Hounsfield units. Methods: MRI has the ability to separate water and fat signals, producing two separate images for each component. By performing appropriate corrections on the separated images, a quantitative measurement of water and fat mass density can be estimated. This work aims to test this hypothesis at 1.5 T. Peanut oil was used as the fat representative and agar as the water representative. Gadolinium chloride III and sodium chloride were added to the agar solution to adjust the relaxation times and the medium conductivity, respectively. Peanut oil was added to the agar solution at different percentages: 0%, 3%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100%. The phantom was scanned on a 1.5 T GE Optima 450W with the body coil using a multi-gradient-echo sequence. Water/fat separation was performed while correcting for main field (B0) inhomogeneity and T_2* relaxation time. B1+ inhomogeneities were ignored. The phantom was subsequently scanned on a Philips Brilliance CT Big Bore. The MR-corrected fat signal from all vials was normalized to the 100% fat signal. CT Hounsfield values were then compared to those obtained from the normalized MR-corrected fat values as well as to the phantom preparation for validation. Results: Good agreement was found between CT HU and the MR-extracted fat values (R² = 0.98). CT HU also showed excellent agreement with the prepared fat fractions (R² = 0.99). Vials with 70%, 80% and 90% fat showed inhomogeneous distributions; however, their results were included for completeness. Conclusion: Quantitative MRI water/fat imaging can potentially be used to extract relative tissue density. Further in vivo validation is required.
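
    The two quantities compared in the Results, a normalized fat value per vial and its linear agreement (R²) with CT Hounsfield units, are straightforward to compute from the separated images. A sketch under the usual fat-fraction definition F/(W+F); the numbers used in any example are illustrative, not the study's data.

```python
import numpy as np

def fat_fraction(water_img, fat_img):
    """Fat signal fraction F / (W + F) from separated water/fat images."""
    w = np.asarray(water_img, float)
    f = np.asarray(fat_img, float)
    return f / (w + f)

def r_squared(x, y):
    """Coefficient of determination of the least-squares line y = a*x + b,
    as used to compare CT HU against MR-derived fat values."""
    a, b = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (a * x + b)) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot
```

    Given vial-mean fat fractions on one axis and vial-mean HU on the other, `r_squared` reproduces the kind of agreement statistic (R² = 0.98, 0.99) reported above.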

  6. SU-G-IeP1-06: Estimating Relative Tissue Density From Quantitative MR Images: A Novel Perspective for MRI-Only Heterogeneity Corrected Dose Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Soliman, A; Hashemi, M; Safigholi, H [Sunnybrook Research Institute, Toronto, ON (Canada); Sunnybrook Health Sciences Centre, Toronto, ON (Canada); Tchistiakova, E [Sunnybrook Health Sciences Centre, Toronto, ON (Canada); University of Toronto, Toronto, ON (Canada); Song, W [Sunnybrook Research Institute, Toronto, ON (Canada); Sunnybrook Health Sciences Centre, Toronto, ON (Canada); University of Toronto, Toronto, ON (Canada)

    2016-06-15

    Purpose: To explore the feasibility of extracting the relative density from quantitative MRI measurements, as well as to estimate a correlation between the extracted measures and CT Hounsfield units. Methods: MRI has the ability to separate water and fat signals, producing two separate images for each component. By performing appropriate corrections on the separated images, quantitative measurements of water and fat mass density can be estimated. This work aims to test this hypothesis at 1.5T. Peanut oil was used as fat-representative, while agar served as water-representative. Gadolinium Chloride III and Sodium Chloride were added to the agar solution to adjust the relaxation times and the medium conductivity, respectively. Peanut oil was added to the agar solution at different percentages: 0%, 3%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100%. The phantom was scanned on a 1.5T GE Optima 450W with the body coil using a multigradient echo sequence. Water/fat separation was performed while correcting for main field (B0) inhomogeneity and T₂* relaxation time. B1+ inhomogeneities were ignored. The phantom was subsequently scanned on a Philips Brilliance CT Big Bore. MR-corrected fat signals from all vials were normalized to the 100% fat signal. CT Hounsfield values were then compared to those obtained from the normalized MR-corrected fat values, as well as to the prepared phantom fractions for validation. Results: Good agreement was found between CT HU and the MR-extracted fat values (R² = 0.98). CT HU also showed excellent agreement with the prepared fat fractions (R² = 0.99). Vials with 70%, 80%, and 90% fat percentages showed inhomogeneous distributions; however, their results were included for completeness. Conclusion: Quantitative MRI water/fat imaging can potentially be used to extract the relative tissue density. Further in-vivo validation is required.

  7. Quantitative X-ray mapping, scatter diagrams and the generation of correction maps to obtain more information about your material

    Science.gov (United States)

    Wuhrer, R.; Moran, K.

    2014-03-01

    Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems has become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has been reduced considerably, with the ability to map minor and trace elements very accurately thanks to the larger detector area and higher count rate detectors. Live X-ray imaging can now be performed with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps. This includes elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical back scatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow users to predict and verify where problems are likely to occur in their images, and are especially helpful for examining possible interface artefacts. The post-processing techniques to improve the quantitation of X-ray map data and the development of post-processing techniques for improved characterisation are covered in this paper.
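
Two of the map products listed above (an elemental-ratio map and a simple scatter-diagram-based chemical phase map) can be sketched on toy 2×2 "maps"; the elements, counts and gating thresholds are invented for illustration:

```python
# Toy sketch: an elemental-ratio map and a phase map derived from where pixels
# fall in the Fe-vs-Ni scatter (relationship) diagram. Counts are invented.
import numpy as np

fe = np.array([[80, 20], [60, 90]])   # Fe counts per pixel
ni = np.array([[10, 70], [0, 80]])    # Ni counts per pixel

# Elemental ratio map, guarding against division by zero.
ratio = np.divide(fe, ni, out=np.zeros(fe.shape), where=ni > 0)

# Phase map: partition the Fe-Ni scatter plane into phases with simple gates.
phase = np.zeros(fe.shape, dtype=int)
phase[(fe > 50) & (ni <= 50)] = 1   # Fe-rich phase
phase[(ni > 50) & (fe <= 50)] = 2   # Ni-rich phase
phase[(fe > 50) & (ni > 50)] = 3    # mixed phase
```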

  8. Quantitative X-ray mapping, scatter diagrams and the generation of correction maps to obtain more information about your material

    International Nuclear Information System (INIS)

    Wuhrer, R; Moran, K

    2014-01-01

    Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems has become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has been reduced considerably, with the ability to map minor and trace elements very accurately thanks to the larger detector area and higher count rate detectors. Live X-ray imaging can now be performed with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps. This includes elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical back scatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow users to predict and verify where problems are likely to occur in their images, and are especially helpful for examining possible interface artefacts. The post-processing techniques to improve the quantitation of X-ray map data and the development of post-processing techniques for improved characterisation are covered in this paper.

  9. Satellite-based Drought Reporting on the Navajo Nation

    Science.gov (United States)

    McCullum, A. J. K.; Schmidt, C.; Ly, V.; Green, R.; McClellan, C.

    2017-12-01

    The Navajo Nation (NN), a federally recognized tribe whose boundaries lie within Arizona, New Mexico, and Utah, is the largest reservation in the US, with a land area of over 70,000 square kilometers, and faces challenges related to water management during long-term and widespread drought episodes. The Navajo Nation Department of Water Resources (NNDWR) reports on drought and climatic conditions through the use of regional Standardized Precipitation Index (SPI) values and a network of in-situ rainfall, streamflow, and climate data. However, these data sources lack the spatial detail and consistent measurements needed to provide a coherent understanding of the drought regime within the Nation's regional boundaries. This project, as part of NASA's Western Water Applications Office (WWAO), improves upon the recently developed Drought Severity Assessment Tool (DSAT) to ingest satellite-based precipitation data and generate SPI values for specific administrative boundaries within the reservation. The tool aims to: (1) generate SPI values and summary statistics for regions of interest on various timescales, (2) visualize SPI values within a web-map application, and (3) produce maps and comparative statistical outputs in the format required for annual drought reporting. The co-development of the DSAT with NN partners is integral to increasing the sustained use of Earth observations for water management applications. This tool will provide data to support the NN in allocating drought contingency dollars to the regions most adversely impacted by declines in water availability.
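
The SPI computation at the heart of the DSAT can be illustrated with a deliberately simplified, stdlib-only sketch: rank-based (empirical) cumulative probabilities mapped through the inverse normal CDF. Operational SPI instead fits a gamma distribution to the precipitation record; the eight totals below are invented.

```python
# Empirical SPI-like transform: rank-based probabilities -> standard normal.
from statistics import NormalDist

precip = [12.0, 30.5, 8.2, 45.1, 22.3, 5.0, 60.7, 18.9]  # mm, 3-month totals

n = len(precip)
order = sorted(precip)
# Weibull plotting position: P(X <= x) ~ rank / (n + 1), rank starting at 1.
spi = [NormalDist().inv_cdf((order.index(v) + 1) / (n + 1)) for v in precip]
```

Negative values flag drier-than-median periods; the driest total maps to the most negative index.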

  10. Development and validation of satellite based estimates of surface visibility

    Science.gov (United States)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2015-10-01

    A satellite based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5% for classifying Clear (V ≥ 30 km), Moderate (10 km ≤ V < 30 km), and Low (V < 10 km) visibility conditions. The retrievals can complement ground-based monitoring by the United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network, and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.
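
A toy version of the regression-plus-classification scheme above: synthetic predictors (standing in for AOD, fog/low-cloud probability, and an NWP variable) are fit to visibility with ordinary least squares, and predictions are binned into Clear/Moderate/Low categories. The Low cutoff (V < 10 km) and all data are illustrative assumptions.

```python
# Multiple linear regression to "visibility", then categorical verification.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(100, 3))             # scaled predictors
true_w = np.array([-25.0, -15.0, -5.0])
vis = 40.0 + X @ true_w + rng.normal(0.0, 1.0, 100)  # "observed" visibility, km

A = np.column_stack([np.ones(len(X)), X])            # add intercept column
coef, *_ = np.linalg.lstsq(A, vis, rcond=None)
pred = A @ coef

def category(v):
    return "Clear" if v >= 30.0 else ("Moderate" if v >= 10.0 else "Low")

success = np.mean([category(p) == category(t) for p, t in zip(pred, vis)])
```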

  11. Dissemination of satellite-based river discharge and flood data

    Science.gov (United States)

    Kettner, A. J.; Brakenridge, G. R.; van Praag, E.; de Groeve, T.; Slayback, D. A.; Cohen, S.

    2014-12-01

    In collaboration with NASA Goddard Space Flight Center and the European Commission Joint Research Centre, the Dartmouth Flood Observatory (DFO) daily measures and distributes: 1) river discharges, and 2) near real-time flood extents with global coverage. Satellite-based passive microwave sensors and hydrological modeling are utilized to establish 'remote-sensing based discharge stations', and the observed time series cover 1998 to the present. The advantages over in-situ gauged discharges are: a) easy access to locations that are remote or isolated for political reasons, b) relatively low maintenance costs to maintain a continuous observational record, and c) the capability to obtain measurements during floods, hazardous conditions that often impair or destroy in-situ stations. Two MODIS instruments aboard the NASA Terra and Aqua satellites provide global flood extent coverage at a spatial resolution of 250 m. Cloud cover hampers flood extent detection; therefore we ingest six images (the Terra and Aqua images of each day, for three days), in combination with a cloud shadow filter, to provide daily global flood extent updates. The Flood Observatory has always made it a high priority to visualize and share its data and products through its website. Recent collaborative efforts with, e.g., GeoSUR have enhanced accessibility of DFO data. A web map service has been implemented to automatically disseminate geo-referenced flood extent products into client-side GIS software. For example, for Latin America and the Caribbean region, the GeoSUR portal now displays current flood extent maps, which can be integrated and visualized with other relevant geographical data. Furthermore, the flood state of satellite-observed river discharge sites is displayed through the portal as well. 
Additional efforts include implementing Open Geospatial Consortium (OGC) standards to incorporate Water Markup Language (WaterML) data exchange mechanisms to further facilitate the distribution of the satellite
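
The six-image compositing step can be sketched as follows, assuming a simple per-pixel classification (1 = water, 0 = dry, -1 = cloud); the tiny four-pixel "images" are invented, and the real processing chain also applies the cloud-shadow filter mentioned above:

```python
# Combine six MODIS looks (Terra + Aqua over three days) so a pixel is flagged
# as flooded if water was detected in any cloud-free look.
import numpy as np

looks = np.array([
    [ 1, -1,  0, -1],
    [-1,  1,  0, -1],
    [-1, -1,  0,  1],
    [ 1, -1, -1, -1],
    [-1,  1,  0, -1],
    [-1, -1,  0, -1],
])

clear = looks >= 0                           # cloud-free observations
flood = ((looks == 1) & clear).any(axis=0)   # water seen in any clear look
never_seen = ~clear.any(axis=0)              # pixels cloudy in all six looks
```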

  12. Mixed model phase evolution for correction of magnetic field inhomogeneity effects in 3D quantitative gradient echo-based MRI

    DEFF Research Database (Denmark)

    Fatnassi, Chemseddine; Boucenna, Rachid; Zaidi, Habib

    2017-01-01

    PURPOSE: In 3D gradient echo magnetic resonance imaging (MRI), strong field gradients B0macro are visually observed at air/tissue interfaces. At low spatial resolution in particular, the respective field gradients lead to an apparent increase in intravoxel dephasing, and subsequently, to signal...... loss or inaccurate R2* estimates. If the strong field gradients are measured, their influence can be removed by postprocessing. METHODS: Conventional corrections usually assume a linear phase evolution with time. For high macroscopic gradient inhomogeneities near the edge of the brain...

  13. Improved Satellite-based Photosynthetically Active Radiation (PAR) for Air Quality Studies

    Science.gov (United States)

    Pour Biazar, A.; McNider, R. T.; Cohan, D. S.; White, A.; Zhang, R.; Dornblaser, B.; Doty, K.; Wu, Y.; Estes, M. J.

    2015-12-01

    One of the challenges in understanding the air quality over forested regions has been the uncertainties in estimating the biogenic hydrocarbon emissions. Biogenic volatile organic compounds, BVOCs, play a critical role in atmospheric chemistry, particularly in ozone and particulate matter (PM) formation. In the southeastern United States, BVOCs (mostly isoprene) are the dominant summertime source of reactive hydrocarbons. Despite significant efforts in improving BVOC estimates, the errors in emission inventories remain a concern. Since BVOC emissions are particularly sensitive to the available photosynthetically active radiation (PAR), model errors in PAR result in large errors in emission estimates. Thus, utilization of satellite observations to estimate PAR can help in reducing emission uncertainties. Satellite-based PAR estimates rely on the technique used to derive insolation from satellite visible brightness measurements. In this study we evaluate several insolation products against surface pyranometer observations and offer a bias correction to generate a more accurate PAR product. The improved PAR product is then used in biogenic emission estimates. The improved biogenic emission estimates are compared to the emission inventories over Texas and used in air quality simulations over the period of August-September 2013 (NASA's Discover-AQ field campaign). A series of sensitivity simulations will be performed and evaluated against Discover-AQ observations to test the impact of satellite-derived PAR on air quality simulations.
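
A minimal sketch of the bias-correction idea: scale the satellite insolation by the mean pyranometer-to-satellite ratio, then take PAR as a fixed fraction of the corrected insolation. Both the numbers and the 0.48 PAR fraction are illustrative assumptions, not values from the study.

```python
# Ratio-based bias correction of satellite insolation against pyranometers.
sat_insolation = [620.0, 540.0, 710.0, 480.0]   # W/m^2, satellite estimates
pyranometer   = [580.0, 505.0, 660.0, 450.0]    # W/m^2, surface observations

bias = sum(p / s for p, s in zip(pyranometer, sat_insolation)) / len(sat_insolation)
corrected = [bias * s for s in sat_insolation]  # bias-corrected insolation
par = [0.48 * c for c in corrected]             # assumed PAR fraction
```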

  14. Statistical correction of the Winner's Curse explains replication variability in quantitative trait genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Cameron Palmer

    2017-07-01

    Genome-wide association studies (GWAS) have identified hundreds of SNPs responsible for variation in human quantitative traits. However, genome-wide-significant associations often fail to replicate across independent cohorts, in apparent inconsistency with their strong effects in discovery cohorts. This limited success of replication raises pervasive questions about the utility of the GWAS field. We identify all 332 studies of quantitative traits from the NHGRI-EBI GWAS Database with attempted replication. We find that the majority of studies provide insufficient data to evaluate replication rates. The remaining papers replicate significantly worse than expected (p < 10⁻¹⁴), even when adjusting for regression-to-the-mean of effect size between discovery and replication cohorts, termed the Winner's Curse (p < 10⁻¹⁶). We show this is due in part to misreporting replication cohort size as a maximum number, rather than a per-locus one. In 39 studies accurately reporting per-locus cohort size for attempted replication of 707 loci in samples with similar ancestry, the replication rate matched expectation (predicted 458, observed 457, p = 0.94). In contrast, ancestry differences between replication and discovery (13 studies, 385 loci) cause the most highly-powered decile of loci to replicate worse than expected, due to differences in linkage disequilibrium.
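
The expectation test described above can be sketched as a power calculation: given each locus's (Winner's-Curse-shrunken) standardized effect and its per-locus replication sample size, the expected number of replications is the sum of per-locus power. Effects and sample sizes below are invented.

```python
# Expected replications = sum of per-locus two-sided z-test power.
from statistics import NormalDist

nd = NormalDist()
alpha = 0.05
z_crit = nd.inv_cdf(1.0 - alpha / 2.0)

loci = [  # (standardized per-sample effect, replication sample size)
    (0.08, 2000), (0.05, 5000), (0.03, 8000), (0.02, 3000),
]

def power(effect, n):
    # Power of a two-sided z-test with non-centrality effect * sqrt(n).
    ncp = effect * n ** 0.5
    return (1.0 - nd.cdf(z_crit - ncp)) + nd.cdf(-z_crit - ncp)

expected = sum(power(e, n) for e, n in loci)
```

Comparing `expected` against the observed replication count is the spirit of the "predicted 458, observed 457" check quoted in the abstract.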

  15. Operational Satellite-based Surface Oil Analyses (Invited)

    Science.gov (United States)

    Streett, D.; Warren, C.

    2010-12-01

    During the Deepwater Horizon spill, NOAA imagery analysts in the Satellite Analysis Branch (SAB) issued more than 300 near-real-time satellite-based oil spill analyses. These analyses were used by the oil spill response community for planning, issuing surface oil trajectories and tasking assets (e.g., oil containment booms, skimmers, overflights). SAB analysts used both Synthetic Aperture Radar (SAR) and high resolution visible/near-IR multispectral satellite imagery as well as a variety of ancillary datasets. Satellite imagery used included ENVISAT ASAR (ESA), TerraSAR-X (DLR), Cosmo-Skymed (ASI), ALOS (JAXA), Radarsat (MDA), ENVISAT MERIS (ESA), SPOT (SPOT Image Corp.), Aster (NASA), MODIS (NASA), and AVHRR (NOAA). Ancillary datasets included ocean current information, wind information, location of natural oil seeps and a variety of in situ oil observations. The analyses were available as jpegs, pdfs, shapefiles and KML files through Google, and also on a variety of websites including Geoplatform and ERMA. The complete archive, from the very first analysis issued just 5 hours after the rig sank through the final analysis issued in August, is still publicly available on the NOAA/NESDIS website: http://www.ssd.noaa.gov/PS/MPS/deepwater.html. SAB personnel also served as the Deepwater Horizon International Disaster Charter Project Manager (at the official request of the USGS). The Project Manager's primary responsibility was to acquire and oversee the processing and dissemination of satellite data generously donated by numerous private companies and nations in support of the oil spill response, including some of the imagery described above. SAB has begun to address a number of goals that will improve our routine oil spill response as well as help assure that we are ready for the next spill of national significance. We hope to (1) secure a steady, abundant and timely stream of suitable satellite imagery even in the absence of large-scale emergencies such as

  16. Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration

    Science.gov (United States)

    Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.

    2017-04-01

    Groundwater movement is influenced by several factors and processes in the hydrological cycle, of which recharge is of high relevance. Since the amount of aquifer-extractable water directly relates to the recharge amount, estimation of recharge is a prerequisite of groundwater resources management. Recharge is highly affected by water loss mechanisms, the major one being actual evapotranspiration (ETa). It is, therefore, essential to have a detailed assessment of the ETa impact on groundwater recharge. The objective of this study was to evaluate how recharge was affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that these SEBAL estimates were not reliable. As such, the results could not serve for calibrating root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and a portion of the central parts. Estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region, which depicts the groundwater flow direction. The mean seasonal water balance in the CAPSIM part was evaluated to represent recharge estimation in the first layer. The highest recharge estimated flux was for autumn
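
The water-balance reasoning behind the recharge estimate can be reduced to a single-cell sketch: recharge is what remains of precipitation after actual evapotranspiration, runoff and storage change. All fluxes (mm per season) are invented.

```python
# One-cell seasonal water balance: recharge as the residual flux.
precip, eta, runoff, storage_change = 210.0, 140.0, 25.0, 10.0
recharge = precip - eta - runoff - storage_change   # mm per season
print(recharge)  # 35.0
```

An overestimated ETa directly depresses the recharge residual, which is why the reliability of the SEBAL maps matters so much here.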

  17. New perspectives for satellite-based archaeological research in the ancient territory of Hierapolis (Turkey)

    Directory of Open Access Journals (Sweden)

    R. Lasaponara

    2008-11-01

    This paper deals with the use of satellite QuickBird images to find traces of past human activity in the ancient territory of Hierapolis (Turkey). This is one of the most important archaeological sites in Turkey, and in 1988 it was inscribed on the UNESCO World Heritage list. Although over the years the archaeological site of Hierapolis has been excavated, restored and well documented, up to now the territory around the ancient urban area is still largely unknown. The current research project, still in progress, aims to survey the area neighbouring Hierapolis, believed to have been under the control of the city for a long time and, therefore, expected to be very rich in archaeological evidence. In order to investigate a large area around ancient Hierapolis and discover potential archaeological remains, QuickBird images were adopted.

    Results from satellite-based analysis allowed us to find several unknown rural settlements dating back to early Imperial Roman and the Byzantine age. Two significant test sites were focused on in this paper in order to characterize the different spectral responses observed for different types of archaeological features (shadow and soil marks. Principal Component Analysis and spectral indices were computed to enhance archaeological marks and make identification easier. The capability of the QuickBird data set (panchromatic, multispectral channel, PCA and spectral indices in searching for archaeological marks was assessed in a quantitative way by using a specific indicator.

  18. Quantitative Evaluation of Segmentation- and Atlas-Based Attenuation Correction for PET/MR on Pediatric Patients.

    Science.gov (United States)

    Bezrukov, Ilja; Schmidt, Holger; Gatidis, Sergios; Mantlik, Frédéric; Schäfer, Jürgen F; Schwenzer, Nina; Pichler, Bernd J

    2015-07-01

    Pediatric imaging is regarded as a key application for combined PET/MR imaging systems. Because existing MR-based attenuation-correction methods were not designed specifically for pediatric patients, we assessed the impact of 2 potentially influential factors: inter- and intrapatient variability of attenuation coefficients and anatomic variability. Furthermore, we evaluated the quantification accuracy of 3 methods for MR-based attenuation correction without (SEGbase) and with bone prediction using an adult and a pediatric atlas (SEGwBONEad and SEGwBONEpe, respectively) on PET data of pediatric patients. The variability of attenuation coefficients between and within pediatric (5-17 y, n = 17) and adult (27-66 y, n = 16) patient collectives was assessed on volumes of interest (VOIs) in CT datasets for different tissue types. Anatomic variability was assessed on SEGwBONEad/pe attenuation maps by computing mean differences to CT-based attenuation maps for regions of bone tissue, lungs, and soft tissue. PET quantification was evaluated on VOIs with physiologic uptake and on 80% isocontour VOIs with elevated uptake in the thorax and abdomen/pelvis. Inter- and intrapatient variability of the bias was assessed for each VOI group and method. Statistically significant differences in mean VOI Hounsfield unit values and linear attenuation coefficients between adult and pediatric collectives were found in the lungs and femur. The prediction of attenuation maps using the pediatric atlas showed a reduced error in bone tissue and better delineation of bone structure. Evaluation of PET quantification accuracy showed statistically significant mean errors in mean standardized uptake values of -14% ± 5% and -23% ± 6% in bone marrow and femur-adjacent VOIs with physiologic uptake for SEGbase, which could be reduced to 0% ± 4% and -1% ± 5% using SEGwBONEpe attenuation maps. Bias in soft-tissue VOIs was less than 5% for all methods. Lung VOIs showed high SDs in the range of 15% for

  19. Potassium-based algorithm allows correction for the hematocrit bias in quantitative analysis of caffeine and its major metabolite in dried blood spots.

    Science.gov (United States)

    De Kesel, Pieter M M; Capiau, Sara; Stove, Veronique V; Lambert, Willy E; Stove, Christophe P

    2014-10-01

    Although dried blood spot (DBS) sampling is increasingly receiving interest as a potential alternative to traditional blood sampling, the impact of hematocrit (Hct) on DBS results is limiting its final breakthrough in routine bioanalysis. To predict the Hct of a given DBS, potassium (K⁺) proved to be a reliable marker. The aim of this study was to evaluate whether application of an algorithm, based upon predicted Hct or K⁺ concentrations as such, allowed correction for the Hct bias. Using validated LC-MS/MS methods, caffeine, chosen as a model compound, was determined in whole blood and corresponding DBS samples with a broad Hct range (0.18-0.47). A reference subset (n = 50) was used to generate an algorithm based on K⁺ concentrations in DBS. Application of the developed algorithm on an independent test set (n = 50) alleviated the assay bias, especially at lower Hct values. Before correction, differences between DBS and whole blood concentrations ranged from -29.1 to 21.1%. The mean difference, as obtained by Bland-Altman comparison, was -6.6% (95% confidence interval (CI), -9.7 to -3.4%). After application of the algorithm, differences between corrected and whole blood concentrations lay between -19.9 and 13.9% with a mean difference of -2.1% (95% CI, -4.5 to 0.3%). The same algorithm was applied to a separate compound, paraxanthine, which was determined in 103 samples (Hct range, 0.17-0.47), yielding similar results. In conclusion, a K⁺-based algorithm allows correction for the Hct bias in the quantitative analysis of caffeine and its metabolite paraxanthine.
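
A hypothetical sketch of the two-step procedure: predict the Hct of a dried blood spot from its K⁺ concentration via a linear calibration, then rescale the measured analyte concentration with an Hct-dependent factor. The calibration constants a, b, c and the reference Hct are invented placeholders, not the paper's fitted values.

```python
# K+-based Hct prediction followed by an Hct-dependent concentration rescale.
def predict_hct(k_mmol_per_l, a=0.009, b=0.05):
    """Linear K+ -> Hct calibration (placeholder coefficients)."""
    return a * k_mmol_per_l + b

def correct_concentration(measured, hct, hct_ref=0.35, c=0.8):
    """Scale by the predicted Hct's deviation from the reference Hct."""
    return measured / (1.0 + c * (hct_ref - hct))

hct = predict_hct(25.0)                        # a low-Hct spot
corrected = correct_concentration(10.0, hct)   # down-corrects the readout
```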

  20. Application of bias correction methods to improve U3Si2 sample preparation for quantitative analysis by WDXRF

    International Nuclear Information System (INIS)

    Scapin, Marcos A.; Guilhen, Sabine N.; Azevedo, Luciana C. de; Cotrim, Marycel E.B.; Pires, Maria Ap. F.

    2017-01-01

    The determination of silicon (Si), total uranium (U) and impurities in uranium silicide (U₃Si₂) samples by the wavelength dispersive X-ray fluorescence technique (WDXRF) has already been validated and is currently implemented at IPEN's X-Ray Fluorescence Laboratory (IPEN-CNEN/SP) in São Paulo, Brazil. Sample preparation requires the use of approximately 3 g of H₃BO₃ as sample holder and 1.8 g of U₃Si₂. However, because boron is a neutron absorber, this procedure precludes the U₃Si₂ sample's recovery, which, in time, considering routine analysis, may account for significant unusable uranium waste. An estimated average of 15 samples per month is expected to be analyzed by WDXRF, resulting in approx. 320 g of U₃Si₂ that would not return to the nuclear fuel cycle. This not only results in production losses, but creates another problem: radioactive waste management. The purpose of this paper is to present the mathematical models that may be applied for the correction of systematic errors when the H₃BO₃ sample holder is substituted by cellulose acetate [C₆H₇O₂(OH)₃₋ₘ(OOCCH₃)ₘ], m = 0-3, thus enabling the U₃Si₂ sample's recovery. The results demonstrate that the adopted mathematical model is statistically satisfactory, allowing the optimization of the procedure. (author)

  1. Highlights of satellite-based forest change recognition and tracking using the ForWarn System

    Science.gov (United States)

    Steven P. Norman; William W. Hargrove; Joseph P. Spruce; William M. Christie; Sean W. Schroeder

    2013-01-01

    For a higher resolution version of this file, please use the following link: www.geobabble.org. Satellite-based remote sensing can assist forest managers with their need to recognize disturbances and track recovery. Despite the long...

  2. Goddard Satellite-Based Surface Turbulent Fluxes Climatology, Yearly Grid V3

    Data.gov (United States)

    National Aeronautics and Space Administration — These data are the Goddard Satellite-based Surface Turbulent Fluxes Version-3 Dataset recently produced through a MEaSUREs funded project led by Dr. Chung-Lin Shie...

  3. Goddard Satellite-Based Surface Turbulent Fluxes Climatology, Seasonal Grid V3

    Data.gov (United States)

    National Aeronautics and Space Administration — These data are the Goddard Satellite-based Surface Turbulent Fluxes Version-3 Dataset recently produced through a MEaSUREs funded project led by Dr. Chung-Lin Shie...

  4. Accurate quantitative CF-LIBS analysis of both major and minor elements in alloys via iterative correction of plasma temperature and spectral intensity

    Science.gov (United States)

    Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA

    2018-03-01

    The chemical composition of alloys directly determines their mechanical behaviors and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgy quality control and material classification processes. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity by using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.

  5. Quantitation of regional cerebral blood flow corrected for partial volume effect using O-15 water and PET: I. Theory, error analysis, and stereologic comparison

    DEFF Research Database (Denmark)

    Lida, H; Law, I; Pakkenberg, B

    2000-01-01

    Limited spatial resolution of positron emission tomography (PET) can cause significant underestimation in the observed regional radioactivity concentration (the so-called partial volume effect, or PVE), resulting in systematic errors in estimating quantitative physiologic parameters. The authors have...... formulated four mathematical models that describe the dynamic behavior of a freely diffusible tracer (H₂¹⁵O) in a region of interest (ROI) incorporating estimates of regional tissue flow that are independent of PVE. The current study was intended to evaluate the feasibility of these models and to establish...... a methodology to accurately quantify regional cerebral blood flow (CBF) corrected for PVE in cortical gray matter regions. Five monkeys were studied with PET after IV H₂¹⁵O two times (n = 3) or three times (n = 2) in a row. Two ROIs were drawn on structural magnetic resonance imaging (MRI) scans and projected...

  6. Quantitative chemical exchange saturation transfer (qCEST) MRI--RF spillover effect-corrected omega plot for simultaneous determination of labile proton fraction ratio and exchange rate.

    Science.gov (United States)

    Sun, Phillip Zhe; Wang, Yu; Dai, ZhuoZhi; Xiao, Gang; Wu, Renhua

    2014-01-01

    Chemical exchange saturation transfer (CEST) MRI is sensitive to dilute proteins and peptides as well as microenvironmental properties. However, the complexity of the CEST MRI effect, which varies with the labile proton content, exchange rate and experimental conditions, underscores the need for developing quantitative CEST (qCEST) analysis. Towards this goal, it has been shown that the omega plot is capable of quantifying paramagnetic CEST MRI. However, the use of the omega plot is somewhat limited for diamagnetic CEST (DIACEST) MRI because it is more susceptible to direct radio frequency (RF) saturation (spillover) owing to the relatively small chemical shift. Recently, it has been found that, for dilute DIACEST agents that undergo slow to intermediate chemical exchange, the spillover effect varies little with the labile proton ratio and exchange rate. Therefore, we postulated that the omega plot analysis can be improved if the RF spillover effect could be estimated and taken into account. Specifically, simulation showed that both the labile proton ratio and exchange rate derived using the spillover effect-corrected omega plot were in good agreement with simulated values. In addition, the modified omega plot was confirmed experimentally, and we showed that the derived labile proton ratio increased linearly with creatine concentration, supporting the use of the spillover effect-corrected omega plot for quantitative analysis of DIACEST MRI. Copyright © 2014 John Wiley & Sons, Ltd.
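
For the slow-exchange regime, the basic (spillover-free) omega plot rests on 1/CESTR being linear in 1/ω₁²; the fitted intercept and slope then give back the labile proton fraction ratio and exchange rate. The sketch below simulates ideal data from assumed parameters and recovers them by a linear fit; it deliberately omits the RF spillover correction that is the paper's actual contribution.

```python
# Omega-plot recovery of labile proton ratio (fr) and exchange rate (ksw).
import numpy as np

fr, ksw, r1w = 0.002, 200.0, 1.0 / 1.5          # assumed ratio, rate (1/s), R1
omega1 = np.array([200.0, 300.0, 450.0, 700.0, 1000.0])  # B1 levels, rad/s

# Idealized CEST effect for a dilute slow-exchange agent (no spillover term).
cestr = (fr * ksw / r1w) * omega1**2 / (omega1**2 + ksw**2)

# 1/CESTR = intercept + slope / omega1^2, with intercept = R1w/(fr*ksw)
# and slope = R1w*ksw/fr, so the fit inverts to ksw and fr.
slope, intercept = np.polyfit(1.0 / omega1**2, 1.0 / cestr, 1)
ksw_fit = (slope / intercept) ** 0.5
fr_fit = r1w / (intercept * ksw_fit)
```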

  7. Regional geology mapping using satellite-based remote sensing approach in Northern Victoria Land, Antarctica

    Science.gov (United States)

    Pour, Amin Beiranvand; Park, Yongcheol; Park, Tae-Yoon S.; Hong, Jong Kuk; Hashim, Mazlan; Woo, Jusun; Ayoobi, Iman

    2018-06-01

    Satellite remote sensing imagery is especially useful for geological investigations in Antarctica because of its remoteness and extreme environmental conditions that constrain direct geological survey. The highest percentage of exposed rocks and soils in Antarctica occurs in Northern Victoria Land (NVL). Exposed rocks in NVL were part of the paleo-Pacific margin of East Gondwana during the Paleozoic. This investigation provides a satellite-based remote sensing approach for regional geological mapping in NVL, Antarctica. Landsat-8 and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) datasets were used to extract lithological-structural and mineralogical information. Several spectral-band ratio indices were developed using Landsat-8 and ASTER bands and proposed for Antarctic environments to map spectral signatures of snow/ice, iron oxide/hydroxide minerals, Al-OH-bearing and Fe, Mg-OH and CO3 mineral zones, and quartz-rich felsic and mafic-to-ultramafic lithological units. The spectral-band ratio indices were applied to Level 1 terrain-corrected (L1T) products of Landsat-8 and ASTER datasets covering NVL. The surface distribution of the mineral assemblages was mapped using the spectral-band ratio indices and verified by geological expeditions and laboratory analysis. Resultant image maps derived from the spectral-band ratio indices developed in this study are fairly accurate and correspond well with existing geological maps of NVL. The spectral-band ratio indices developed in this study are especially useful for geological investigations in inaccessible locations and poorly exposed lithological units in Antarctic environments.

  8. Quantitative Evaluation of Atlas-based Attenuation Correction for Brain PET in an Integrated Time-of-Flight PET/MR Imaging System.

    Science.gov (United States)

    Yang, Jaewon; Jian, Yiqiang; Jenkins, Nathaniel; Behr, Spencer C; Hope, Thomas A; Larson, Peder E Z; Vigneron, Daniel; Seo, Youngho

    2017-07-01

    Purpose To assess the patient-dependent accuracy of atlas-based attenuation correction (ATAC) for brain positron emission tomography (PET) in an integrated time-of-flight (TOF) PET/magnetic resonance (MR) imaging system. Materials and Methods Thirty recruited patients provided informed consent in this institutional review board-approved study. All patients underwent whole-body fluorodeoxyglucose PET/computed tomography (CT) followed by TOF PET/MR imaging. With use of TOF PET data, PET images were reconstructed with four different attenuation correction (AC) methods: PET with patient CT-based AC (CTAC), PET with ATAC (air and bone from an atlas), PET with ATAC patientBone (air and tissue from the atlas with patient bone), and PET with ATAC boneless (air and tissue from the atlas without bone). For quantitative evaluation, PET mean activity concentration values were measured in 14 1-mL volumes of interest (VOIs) distributed throughout the brain and statistical significance was tested with a paired t test. Results The mean overall difference (±standard deviation) of PET with ATAC compared with PET with CTAC was -0.69 kBq/mL ± 0.60 (-4.0% ± 3.2). The accuracy of PET with ATAC boneless (-9.4% ± 3.7) was significantly worse than that of PET with ATAC (-4.0% ± 3.2), whereas that of PET with ATAC patientBone (-1.5% ± 1.5) was significantly improved over that of PET with ATAC (-4.0% ± 3.2). Conclusion PET/MR imaging achieves similar quantification accuracy to that from CTAC by means of atlas-based bone compensation. However, patient-specific anatomic differences from the atlas cause bone attenuation differences and misclassified sinuses, which result in patient-dependent performance variation of ATAC. © RSNA, 2017 Online supplemental material is available for this article.
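
    The VOI-based evaluation follows a standard pattern that can be sketched as below. The array contents are made up for illustration, and SciPy's paired t-test stands in for the one used in the study.

```python
import numpy as np
from scipy.stats import ttest_rel

def voi_percent_difference(ac_test, ac_ref):
    """Per-VOI percent difference of activity concentration vs. the CTAC reference."""
    ac_test, ac_ref = np.asarray(ac_test), np.asarray(ac_ref)
    return 100.0 * (ac_test - ac_ref) / ac_ref

# illustrative mean activity concentrations (kBq/mL) in 14 VOIs
ctac = np.array([17.0, 15.2, 16.1, 18.3, 14.9, 16.8, 15.5,
                 17.4, 16.0, 15.8, 18.0, 14.5, 16.3, 17.1])
atac = ctac * 0.96                         # e.g. a systematic -4% underestimation
pct = voi_percent_difference(atac, ctac)   # percent error per VOI
t_stat, p_value = ttest_rel(atac, ctac)    # paired t-test across the 14 VOIs
```
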

  9. Quantitative Analysis of First-Pass Contrast-Enhanced Myocardial Perfusion Multidetector CT Using a Patlak Plot Method and Extraction Fraction Correction During Adenosine Stress

    Science.gov (United States)

    Ichihara, Takashi; George, Richard T.; Silva, Caterina; Lima, Joao A. C.; Lardo, Albert C.

    2011-02-01

    The purpose of this study was to develop a quantitative method for myocardial blood flow (MBF) measurement that can be used to derive accurate myocardial perfusion measurements from dynamic multidetector computed tomography (MDCT) images by using a compartment model for calculating the first-order transfer constant (K1) with correction for the capillary transit extraction fraction (E). Six canine models of left anterior descending (LAD) artery stenosis were prepared and underwent first-pass contrast-enhanced MDCT perfusion imaging during adenosine infusion (0.14-0.21 mg/kg/min). K1, which is the first-order transfer constant from left ventricular (LV) blood to myocardium, was measured using the Patlak plot method applied to time-attenuation curve data of the LV blood pool and myocardium. The results were compared against microsphere MBF measurements, and the extraction fraction of contrast agent was calculated. K1 is related to the regional MBF as K1 = EF, where E = (1 - exp(-PS/F)), PS is the permeability-surface area product and F is myocardial flow. Based on the above relationship, a look-up table from K1 to MBF can be generated and Patlak plot-derived K1 values can be converted to the calculated MBF. The calculated MBF and microsphere MBF showed a strong linear association. The extraction fraction in dogs as a function of flow (F) was E = (1 - exp(-(0.2532F + 0.7871)/F)). Regional MBF can be measured accurately using the Patlak plot method based on a compartment model and look-up table with extraction fraction correction from K1 to MBF.
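
    The K1-to-MBF conversion described above can be sketched in Python. This is a minimal illustration: the extraction-fraction constants are those reported for dogs in the abstract, while the function names and the flow grid are ours.

```python
import numpy as np

A, B = 0.2532, 0.7871  # dog-specific extraction-fraction constants from this study

def extraction_fraction(f):
    """E(F) = 1 - exp(-(0.2532*F + 0.7871)/F), with F in mL/min/g."""
    return 1.0 - np.exp(-(A * f + B) / f)

def k1_from_flow(f):
    """Renkin-Crone relation K1 = E * F used to build the look-up table."""
    return extraction_fraction(f) * f

def mbf_from_k1(k1, f_grid=np.linspace(0.1, 6.0, 2000)):
    """Invert K1 -> MBF by table look-up; K1 is monotonic in F over this grid."""
    return np.interp(k1, k1_from_flow(f_grid), f_grid)
```

    Because E falls as flow rises, K1 increasingly compresses high flows; the look-up table undoes exactly that compression.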

  10. Implementing earth observation and advanced satellite based atmospheric sounders for water resource and climate modelling

    DEFF Research Database (Denmark)

    Boegh, E.; Dellwik, Ebba; Hahmann, Andrea N.

    2010-01-01

    This paper discusses preliminary remote sensing (MODIS) based hydrological modelling results for the Danish island Sjælland (7330 km2) in relation to project objectives and methodologies of a new research project “Implementing Earth observation and advanced satellite based atmospheric sounders....... For this purpose, a) internal catchment processes will be studied using a Distributed Temperature Sensing (DTS) system, b) Earth observations will be used to upscale from field to regional scales, and c) at the largest scale, satellite based atmospheric sounders and meso-scale climate modelling will be used...

  11. Spatial and temporal interpolation of satellite-based aerosol optical depth measurements over North America using B-splines

    Science.gov (United States)

    Pfister, Nicolas; O'Neill, Norman T.; Aube, Martin; Nguyen, Minh-Nghia; Bechamp-Laganiere, Xavier; Besnier, Albert; Corriveau, Louis; Gasse, Geremie; Levert, Etienne; Plante, Danick

    2005-08-01

    Satellite-based measurements of aerosol optical depth (AOD) over land are obtained from an inversion procedure applied to dense dark vegetation pixels of remotely sensed images. The limited number of pixels over which the inversion procedure can be applied leaves many areas with little or no AOD data. Moreover, satellite coverage by sensors such as MODIS yields only daily images of a given region, with four sequential overpasses required to straddle mid-latitude North America. Ground based AOD data from AERONET sun photometers are available on a more continuous basis but only at approximately fifty locations throughout North America. The object of this work is to produce a complete and coherent mapping of AOD over North America with a spatial resolution of 0.1 degree and a frequency of three hours by interpolating MODIS satellite-based data together with available AERONET ground based measurements. Before being interpolated, the MODIS AOD data extracted from different passes are synchronized to the mapping time using analyzed wind fields from the Global Multiscale Model (Meteorological Service of Canada). This approach amounts to a trajectory type of simplified atmospheric dynamics correction method. The spatial interpolation is performed using a weighted least squares method applied to bicubic B-spline functions defined on a rectangular grid. The least squares method enables one to weight the data according to the measurement errors, while the B-splines properties of local support and C2 continuity offer a good approximation of AOD behaviour viewed as a function of time and space.
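
    A one-dimensional analogue of this weighted least-squares B-spline fit can be sketched with SciPy, whose `make_lsq_spline` performs exactly a weighted least-squares fit on a cubic B-spline basis. The knot layout, function names and inverse-variance weighting are illustrative; the actual mapping in the paper is two-dimensional in space plus time.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def wls_spline_fit(x, aod, sigma, n_interior=8):
    """Weighted least-squares cubic B-spline fit of AOD along one coordinate.
    Weights are inverse variances, so noisier retrievals count for less."""
    x = np.asarray(x)
    w = 1.0 / np.asarray(sigma) ** 2
    interior = np.linspace(x.min(), x.max(), n_interior + 2)[1:-1]
    t = np.r_[[x.min()] * 4, interior, [x.max()] * 4]  # clamped cubic knot vector
    return make_lsq_spline(x, aod, t, k=3, w=w)
```
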

  12. Satellite-Based actual evapotranspiration over drying semiarid terrain in West-Africa

    NARCIS (Netherlands)

    Schuttemeyer, D.; Schillings, Ch.; Moene, A.F.; Bruin, de H.A.R.

    2007-01-01

    A simple satellite-based algorithm for estimating actual evaporation based on Makkink's equation is applied to a seasonal cycle in 2002 at three test sites in Ghana, West Africa: at a location in the humid tropical southern region and two in the drier northern region. The required input for the

  13. Assessing satellite-based start-of-season trends in the US High Plains

    International Nuclear Information System (INIS)

    Lin, X; Sassenrath, G F; Hubbard, K G; Mahmood, R

    2014-01-01

    To adequately assess the effects of global warming it is necessary to address trends and impacts at the local level. This study examines phenological changes in the start-of-season (SOS) derived from satellite observations from 1982–2008 in the US High Plains region. The surface climate-based SOS was also evaluated. The averaged profiles of SOS from 37° to 49°N latitude by satellite- and climate-based methods were in reasonable agreement, especially for areas where croplands were masked out and an additional frost date threshold was adopted. The statistically significant trends of satellite-based SOS show a later spring arrival ranging from 0.1 to 4.9 days decade−1 over nine Level III ecoregions. We found the croplands generally exhibited larger trends (later arrival) than the non-croplands. The area-averaged satellite-based SOS for non-croplands (i.e. mostly grasslands) showed no significant trends. We examined the trends of temperatures, precipitation, and standardized precipitation index (SPI), as well as the strength of correlation between the satellite-based SOS and these climatic drivers. Our results indicate that satellite-based SOS trends are spatially and primarily related to annual maximum normalized difference vegetation index (NDVI, mostly in summertime) and/or annual minimum NDVI (mostly in wintertime) and these trends showed the best correlation with six-month SPI over the period 1982–2008 in the US High Plains region. (letter)
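
    A trend of the kind reported (days per decade, via ordinary least squares on annual SOS dates) can be computed as follows; the helper name and the synthetic series in the usage note are illustrative.

```python
import numpy as np

def sos_trend_days_per_decade(years, sos_doy):
    """Least-squares linear trend of start-of-season (day of year),
    expressed in days per decade as in the study."""
    slope_per_year = np.polyfit(years, sos_doy, 1)[0]
    return 10.0 * slope_per_year
```

    For example, a series that shifts 0.3 days later each year over 1982–2008 yields a trend of 3.0 days decade−1.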

  14. Satellite-based empirical models linking river plume dynamics with hypoxic area and volume

    Science.gov (United States)

    Satellite-based empirical models explaining hypoxic area and volume variation were developed for the seasonally hypoxic (O2 < 2 mg L−1) northern Gulf of Mexico adjacent to the Mississippi River. Annual variations in midsummer hypoxic area and ...

  15. Relative equilibrium plot improves graphical analysis and allows bias correction of standardized uptake value ratio in quantitative 11C-PiB PET studies.

    Science.gov (United States)

    Zhou, Yun; Sojkova, Jitka; Resnick, Susan M; Wong, Dean F

    2012-04-01

    Both the standardized uptake value ratio (SUVR) and the Logan plot result in biased distribution volume ratios (DVRs) in ligand-receptor dynamic PET studies. The objective of this study was to use a recently developed relative equilibrium-based graphical (RE) plot method to improve and simplify the 2 commonly used methods for quantification of (11)C-Pittsburgh compound B ((11)C-PiB) PET. The overestimation of DVR in SUVR was analyzed theoretically using the Logan and the RE plots. A bias-corrected SUVR (bcSUVR) was derived from the RE plot. Seventy-eight (11)C-PiB dynamic PET scans (66 from controls and 12 from participants with mild cognitive impairment [MCI] from the Baltimore Longitudinal Study of Aging) were acquired over 90 min. Regions of interest (ROIs) were defined on coregistered MR images. Both the ROI and the pixelwise time-activity curves were used to evaluate the estimates of DVR. DVRs obtained using the Logan plot applied to ROI time-activity curves were used as a reference for comparison of DVR estimates. Results from the theoretic analysis were confirmed by human studies. ROI estimates from the RE plot and the bcSUVR were nearly identical to those from the Logan plot with ROI time-activity curves. In contrast, ROI estimates from DVR images in frontal, temporal, parietal, and cingulate regions and the striatum were underestimated by the Logan plot (controls, 4%-12%; MCI, 9%-16%) and overestimated by the SUVR (controls, 8%-16%; MCI, 16%-24%). This bias was higher in the MCI group than in controls but was not observed with the RE plot or the bcSUVR. The bcSUVR results in lower bias and higher consistency of DVR estimates than does the SUVR. The RE plot and the bcSUVR are practical quantitative approaches that improve the analysis of (11)C-PiB studies.

  16. Beat-to-beat respiratory motion correction with near 100% efficiency: a quantitative assessment using high-resolution coronary artery imaging☆

    Science.gov (United States)

    Scott, Andrew D.; Keegan, Jennifer; Firmin, David N.

    2011-01-01

    This study quantitatively assesses the effectiveness of retrospective beat-to-beat respiratory motion correction (B2B-RMC) at near 100% efficiency using high-resolution coronary artery imaging. Three-dimensional (3D) spiral images were obtained in a coronary respiratory motion phantom with B2B-RMC and navigator gating. In vivo, targeted 3D coronary imaging was performed in 10 healthy subjects using B2B-RMC spiral and navigator gated balanced steady-state free-precession (nav-bSSFP) techniques. Vessel diameter and sharpness in proximal and mid arteries were used as a measure of respiratory motion compensation effectiveness and compared between techniques. Phantom acquisitions with B2B-RMC were sharper than those acquired with navigator gating (B2B-RMC vs. navigator gating: 1.01±0.02 mm−1 vs. 0.86±0.08 mm−1). In vivo, B2B-RMC respiratory efficiency was significantly and substantially higher (99.7%±0.5%) than that of nav-bSSFP (44.0%±8.9%). Vessel sharpness was comparable between techniques (B2B-RMC vs. nav-bSSFP, proximal: 1.00±0.14 mm−1 vs. 1.08±0.11 mm−1, mid: 1.01±0.11 mm−1 vs. 1.05±0.12 mm−1; both P=not significant [ns]). Mid vessel diameters were not significantly different (2.85±0.39 mm vs. 2.80±0.35 mm, P=ns), but proximal B2B-RMC diameters were slightly higher (2.85±0.38 mm vs. 2.70±0.34 mm). Respiratory efficiency with B2B-RMC is less variable and significantly higher than with navigator gating. Phantom and in vivo vessel sharpness and diameter values suggest that respiratory motion compensation is equally effective. PMID:21292418

  17. Determination of correction coefficients for quantitative analysis by mass spectrometry. Application to uranium impurities analysis; Recherche des coefficients de correction permettant l'analyse quantitative par spectrometrie de masse. Application a l'analyse d'impuretes dans l'uranium

    Energy Technology Data Exchange (ETDEWEB)

    Billon, J P [Commissariat a l' Energie Atomique, Bruyeres-le-Chatel (France). Centre d' Etudes

    1970-07-01

    Some of the basic principles of spark source mass spectrometry are recalled, and it is shown that, with suitable precautions, the method can be used for quantitative analysis. Assuming a time-constant relation between the analysed solid sample and the ion beam it produces, experimental relative sensitivity factors were determined for impurities in a uranium matrix. As the first results agree fairly well with a simple theory of ionization yield in the spark source, the direct use of theoretically derived relative sensitivity factors in uranium matrices was investigated. (author)

  18. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    Science.gov (United States)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models and their large number of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on the hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize the independence from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method; Support Vector Regression (SVR) method; Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. In an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases.
Our analysis

  19. Relative equilibrium plot improves graphical analysis and allows bias correction of SUVR in quantitative [11C]PiB PET studies

    Science.gov (United States)

    Zhou, Yun; Sojkova, Jitka; Resnick, Susan M.; Wong, Dean F.

    2012-01-01

    Both the standardized uptake value ratio (SUVR) and the Logan plot result in biased distribution volume ratios (DVR) in ligand-receptor dynamic PET studies. The objective of this study is to use a recently developed relative equilibrium-based graphical plot (RE plot) method to improve and simplify the two commonly used methods for quantification of [11C]PiB PET. Methods The overestimation of DVR in SUVR was analyzed theoretically using the Logan and the RE plots. A bias-corrected SUVR (bcSUVR) was derived from the RE plot. Seventy-eight [11C]PiB dynamic PET scans (66 from controls and 12 from mildly cognitively impaired participants (MCI) from the Baltimore Longitudinal Study of Aging (BLSA)) were acquired over 90 minutes. Regions of interest (ROIs) were defined on coregistered MRIs. Both the ROI and pixelwise time activity curves (TACs) were used to evaluate the estimates of DVR. DVRs obtained using the Logan plot applied to ROI TACs were used as a reference for comparison of DVR estimates. Results Results from the theoretical analysis were confirmed by human studies. ROI estimates from the RE plot and the bcSUVR were nearly identical to those from the Logan plot with ROI TACs. In contrast, ROI estimates from DVR images in frontal, temporal, parietal, cingulate regions, and the striatum were underestimated by the Logan plot (controls 4 – 12%; MCI 9 – 16%) and overestimated by the SUVR (controls 8 – 16%; MCI 16 – 24%). This bias was higher in the MCI group than in controls but was not observed with the RE plot or the bcSUVR. Conclusion The RE plot improves pixel-wise quantification of [11C]PiB dynamic PET compared to the conventional Logan plot. The bcSUVR results in lower bias and higher consistency of DVR estimates compared to SUVR. The RE plot and the bcSUVR are practical quantitative approaches that improve the analysis of [11C]PiB studies. PMID:22414634

  20. Beat-to-beat respiratory motion correction with near 100% efficiency: a quantitative assessment using high-resolution coronary artery imaging.

    Science.gov (United States)

    Scott, Andrew D; Keegan, Jennifer; Firmin, David N

    2011-05-01

    This study quantitatively assesses the effectiveness of retrospective beat-to-beat respiratory motion correction (B2B-RMC) at near 100% efficiency using high-resolution coronary artery imaging. Three-dimensional (3D) spiral images were obtained in a coronary respiratory motion phantom with B2B-RMC and navigator gating. In vivo, targeted 3D coronary imaging was performed in 10 healthy subjects using B2B-RMC spiral and navigator gated balanced steady-state free-precession (nav-bSSFP) techniques. Vessel diameter and sharpness in proximal and mid arteries were used as a measure of respiratory motion compensation effectiveness and compared between techniques. Phantom acquisitions with B2B-RMC were sharper than those acquired with navigator gating (B2B-RMC vs. navigator gating: 1.01±0.02 mm(-1) vs. 0.86±0.08 mm(-1)). In vivo, B2B-RMC respiratory efficiency was significantly and substantially higher (99.7%±0.5%) than that of nav-bSSFP (44.0%±8.9%). Vessel sharpness was comparable between techniques (B2B-RMC vs. nav-bSSFP, proximal: 1.00±0.14 mm(-1) vs. 1.08±0.11 mm(-1), mid: 1.01±0.11 mm(-1) vs. 1.05±0.12 mm(-1); both P=not significant [ns]). Mid vessel diameters were not significantly different (2.85±0.39 mm vs. 2.80±0.35 mm, P=ns), but proximal B2B-RMC diameters were slightly higher (2.85±0.38 mm vs. 2.70±0.34 mm). Respiratory efficiency with B2B-RMC is less variable and significantly higher than with navigator gating. Phantom and in vivo vessel sharpness and diameter values suggest that respiratory motion compensation is equally effective. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. B1 mapping for bias-correction in quantitative T1 imaging of the brain at 3T using standard pulse sequences.

    Science.gov (United States)

    Boudreau, Mathieu; Tardif, Christine L; Stikov, Nikola; Sled, John G; Lee, Wayne; Pike, G Bruce

    2017-12-01

    B1 mapping is important for many quantitative imaging protocols, particularly those that include whole-brain T1 mapping using the variable flip angle (VFA) technique. However, B1 mapping sequences are not typically available on many magnetic resonance imaging (MRI) scanners. The aim of this work was to demonstrate that B1 mapping implemented using standard scanner product pulse sequences can produce B1 (and VFA T1) maps comparable in quality and acquisition time to advanced techniques. Six healthy subjects were scanned at 3.0T. An interleaved multislice spin-echo echo planar imaging double-angle (EPI-DA) B1 mapping protocol, using a standard product pulse sequence, was compared to two alternative methods (actual flip angle imaging, AFI, and Bloch-Siegert shift, BS). Single-slice spin-echo DA B1 maps were used as a reference for comparison (Ref. DA). VFA flip angles were scaled using each B1 map prior to fitting T1; the nominal flip angle case was also compared. The pooled-subject voxelwise correlations (ρ) for B1 maps (BS/AFI/EPI-DA) relative to the reference B1 scan (Ref. DA) were ρ = 0.92/0.95/0.98. VFA T1 correlations using these maps were ρ = 0.86/0.88/0.96, much better than without B1 correction (ρ = 0.53). The relative error for each B1 map (BS/AFI/EPI-DA/Nominal) had 95th percentiles of 5/4/3/13%. Our findings show that B1 mapping implemented using product pulse sequences can provide excellent quality B1 (and VFA T1) maps, comparable to other custom techniques. This fast whole-brain measurement (∼2 min) can serve as an excellent alternative for researchers without access to advanced B1 pulse sequences. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:1673-1682. © 2017 International Society for Magnetic Resonance in Medicine.
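
    The double-angle/VFA pipeline the study relies on can be sketched as a simulation. The TR, T1 and B1 values are assumed, and the double-angle step assumes fully relaxed (long-TR) acquisitions; the linearized SPGR fit is the standard VFA formulation.

```python
import numpy as np

TR, T1_TRUE, B1_TRUE = 0.015, 1.2, 0.9  # s, s, relative transmit field (assumed)

def spgr_signal(alpha, t1, tr=TR, m0=1.0):
    """Spoiled gradient-echo signal model used by the VFA method."""
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(alpha) * (1.0 - e1) / (1.0 - e1 * np.cos(alpha))

# double-angle B1 estimate from two fully relaxed acquisitions at alpha and 2*alpha
a_nom = np.deg2rad(60.0)
s1, s2 = np.sin(B1_TRUE * a_nom), np.sin(B1_TRUE * 2.0 * a_nom)
b1_est = np.arccos(s2 / (2.0 * s1)) / a_nom   # actual/nominal flip-angle ratio

# VFA T1 fit with B1-corrected flip angles (linearized SPGR: S/sin = E1*S/tan + c)
nominal = np.deg2rad(np.array([3.0, 20.0]))
signal = spgr_signal(nominal * B1_TRUE, T1_TRUE)  # what the scanner measures
flips = nominal * b1_est                          # B1-corrected flip angles
y, x = signal / np.sin(flips), signal / np.tan(flips)
e1 = (y[1] - y[0]) / (x[1] - x[0])
t1_est = -TR / np.log(e1)
```

    Replacing `flips` with the uncorrected `nominal` angles biases the fitted T1, which is precisely the error the B1 maps are meant to remove.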

  2. Simulation of large-scale soil water systems using groundwater data and satellite based soil moisture

    Science.gov (United States)

    Kreye, Phillip; Meon, Günter

    2016-04-01

    Complex concepts for the physically correct depiction of dominant processes in the hydrosphere are increasingly at the forefront of hydrological modelling. Many scientific issues in hydrological modelling demand additional system variables beyond simulated runoff, such as groundwater recharge or soil moisture conditions. Models that include soil water simulations are either very simplified or require a high number of parameters. Against this backdrop there is a heightened demand for observations to be used to calibrate the model. A reasonable integration of groundwater data or remote sensing data in calibration procedures, as well as the identifiability of physically plausible sets of parameters, is subject to research in the field of hydrology. Since these data are often combined with conceptual models, the available interfaces are not suitable for such demands. Furthermore, the application of automated optimisation procedures is generally associated with conceptual models, whose (fast) computing times allow many iterations of the optimisation in an acceptable time frame. One of the main aims of this study is to reduce the discrepancy between scientific and practical applications in the field of hydrological modelling. Therefore, the soil model DYVESOM (DYnamic VEgetation SOil Model) was developed as one of the primary components of the hydrological modelling system PANTA RHEI. DYVESOM's structure provides the required interfaces for calibration against runoff, satellite-based soil moisture and groundwater levels. The model considers spatially and temporally differentiated feedback of vegetation development on the soil system. In addition, small-scale heterogeneities of soil properties (subgrid variability) are parameterized by varying van Genuchten parameters according to distribution functions. Different sets of parameters are operated simultaneously while interacting with each other.
The developed soil model is innovative regarding concept
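
    The van Genuchten parameterization mentioned above has a standard retention-curve form; the sketch below uses that standard form with illustrative loam-like parameter values, since the record does not give the subgrid distribution functions themselves.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content theta(h) for pressure head h >= 0,
    using the van Genuchten retention curve with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# illustrative loam-like parameters: residual/saturated content, alpha (1/cm), n
h = np.linspace(0.0, 500.0, 6)  # pressure head (cm)
theta = van_genuchten_theta(h, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56)
```

    Subgrid variability of the kind described would then amount to drawing `alpha` and `n` per subgrid element from chosen distributions.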

  3. Education and Public Outreach for the PICASSO-CENA Satellite-Based Research Mission: K-12 Students Use Sun Photometers to Assist Scientists in Validating Atmospheric Data

    Science.gov (United States)

    Robinson, D. Q.

    2001-05-01

    Hampton University, a historically black university, is leading the Education and Public Outreach (EPO) portion of the PICASSO-CENA satellite-based research mission. Currently scheduled for launch in 2004, PICASSO-CENA will use LIDAR (LIght Detection and Ranging) to study earth's atmosphere. The PICASSO-CENA Outreach program works with scientists, teachers, and students to better understand the effects of clouds and aerosols on earth's atmosphere. This program actively involves students nationwide in NASA research by having them obtain sun photometer measurements from their schools and homes for comparison with data collected by the PICASSO-CENA mission. Students collect data from their classroom ground observations and report the data via the Internet. Scientists will use the data from the PICASSO-CENA research and the student ground-truthing observations to improve predictions about climate change. The two-band passive remote sensing sun photometer is designed for student use as a stand-alone instrument to study atmospheric turbidity or in conjunction with satellite data to provide ground-truthing. The instrument will collect measurements of column optical depth from the ground level. These measurements will not only give the students an appreciation for atmospheric turbidity, but will also provide quantitative correlative information to the PICASSO-CENA mission on ground-level optical depth. Student data obtained in this manner will be sufficiently accurate for scientists to use as ground truthing. Thus, students will have the opportunity to be involved with a NASA satellite-based research mission.
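
    The column optical depth retrieval behind such sun photometers follows the Beer-Lambert law. A minimal sketch: the calibration constant V0 (the instrument's extraterrestrial signal) and the airmass value are assumed for illustration.

```python
import numpy as np

def total_optical_depth(v, v0, airmass):
    """Beer-Lambert: V = V0 * exp(-m * tau)  =>  tau = ln(V0/V) / m."""
    return np.log(v0 / v) / airmass

# illustrative reading: calibration signal 2.0 V, measured 1.2 V at airmass 1.5
tau = total_optical_depth(1.2, 2.0, 1.5)
```

    In practice V0 is obtained by Langley calibration, and per-channel Rayleigh and ozone contributions are subtracted from tau to isolate the aerosol component.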

  4. Concept for a Satellite-Based Advanced Air Traffic Management System : Volume 4. Operational Description and Qualitative Assessment.

    Science.gov (United States)

    1974-02-01

    The volume presents a description of how the Satellite-Based Advanced Air Traffic Management System (SAATMS) operates and a qualitative assessment of the system. The operational description includes the services, functions, and tasks performed by the...

  5. Using satellite-based measurements to explore spatiotemporal scales and variability of drivers of new particle formation

    Science.gov (United States)

    New particle formation (NPF) can potentially alter regional climate by increasing aerosol particle (hereafter particle) number concentrations and ultimately cloud condensation nuclei. The large scales on which NPF is manifest indicate potential to use satellite-based (inherently ...

  6. Improving satellite-based post-fire evapotranspiration estimates in semi-arid regions

    Science.gov (United States)

    Poon, P.; Kinoshita, A. M.

    2017-12-01

    Climate change and anthropogenic factors contribute to the increased frequency, duration, and size of wildfires, which can alter ecosystem and hydrological processes. The loss of vegetation canopy and ground cover reduces interception and alters evapotranspiration (ET) dynamics in riparian areas, which can impact rainfall-runoff partitioning. Previous research evaluated the spatial and temporal trends of ET based on burn severity and observed an annual decrease of 120 mm on average for three years after fire. Building upon these results, this research focuses on the Coyote Fire in San Diego, California (USA), which burned a total of 76 km2 in 2003, to calibrate and improve satellite-based ET estimates in semi-arid regions affected by wildfire. The current work utilizes satellite-based products and techniques such as the Google Earth Engine Application Programming Interface (API). Various ET models (e.g. the Operational Simplified Surface Energy Balance model, SSEBop) are compared to the latent heat flux from two AmeriFlux eddy covariance towers, Sky Oaks Young (US-SO3) and Old Stand (US-SO2), from 2000-2015. The Old Stand tower has a low burn severity and the Young Stand tower has a moderate to high burn severity. Both towers are used to validate spatial ET estimates. Furthermore, variables and indices, such as the Enhanced Vegetation Index (EVI), Normalized Difference Moisture Index (NDMI), and the Normalized Burn Ratio (NBR), are utilized to evaluate satellite-based ET through a multivariate statistical analysis at both sites. This point-scale study will help improve ET estimates in spatially diverse regions. Results from this research will contribute to the development of a post-wildfire ET model for semi-arid regions. Accurate estimates of post-fire ET will provide a better representation of vegetation and hydrologic recovery, which can be used to improve hydrologic models and predictions.
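
    The spectral indices named above have standard reflectance-based definitions, sketched below. Band names are Landsat-style and illustrative; the study's actual band choices are not given in the record.

```python
def nbr(nir, swir2):
    """Normalized Burn Ratio: (NIR - SWIR2) / (NIR + SWIR2)."""
    return (nir - swir2) / (nir + swir2)

def ndmi(nir, swir1):
    """Normalized Difference Moisture Index: (NIR - SWIR1) / (NIR + SWIR1)."""
    return (nir - swir1) / (nir + swir1)

def evi(nir, red, blue):
    """Enhanced Vegetation Index with the usual MODIS/Landsat coefficients."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
```

    All three take surface reflectances in [0, 1]; burned or moisture-stressed canopies push NBR and NDMI toward lower values.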

  7. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...... distances, and an Expectation-Maximization algorithm. Methods tended to perform better on contemporary datasets; bias correction did not significantly improve method performance. Mesial sections were most difficult for all methods. Although AD image sets were most difficult to strip, HWA and BSE were more...

  8. Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Sengupta, Manajit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gotseff, Peter [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.

  9. Quantitative atom column position analysis at the incommensurate interfaces of a (PbS)1.14NbS2 misfit layered compound with aberration-corrected HRTEM

    International Nuclear Information System (INIS)

    Garbrecht, M.; Spiecker, E.; Tillmann, K.; Jaeger, W.

    2011-01-01

    Aberration-corrected HRTEM is applied to explore the potential of NCSI contrast imaging to quantitatively analyse the complex atomic structure of misfit layered compounds and their incommensurate interfaces. Using the (PbS) 1.14 NbS 2 misfit layered compound as a model system it is shown that atom column position analyses at the incommensurate interfaces can be performed with precisions reaching a statistical accuracy of ±6 pm. The procedure adopted for these studies compares experimental images taken from compound regions free of defects and interface modulations with a structure model derived from XRD experiments and with multi-slice image simulations for the corresponding NCSI contrast conditions used. The high precision achievable in such experiments is confirmed by a detailed quantitative analysis of the atom column positions at the incommensurate interfaces, proving a tetragonal distortion of the monochalcogenide sublattice. -- Research Highlights: → Quantitative aberration-corrected HRTEM analysis of atomic column positions in (PbS) 1.14 NbS 2 misfit layered compound reveals tetragonal distortion of the PbS subsystem. → Detailed comparison of multi-slice simulations with the experimental NCSI contrast condition imaging results lead to a high precision (better than 10 pm) for determining the positions of atoms. → Precision in gaining information of local structure at atomic scale is demonstrated, which may not be accessible by means of X-ray and neutron diffraction analysis.

  10. Bias in the Cq value observed with hydrolysis probe based quantitative PCR can be corrected with the estimated PCR efficiency value

    NARCIS (Netherlands)

    Tuomi, Jari Michael; Voorbraak, Frans; Jones, Douglas L.; Ruijter, Jan M.

    2010-01-01

    For real-time monitoring of PCR amplification of DNA, quantitative PCR (qPCR) assays use various fluorescent reporters. DNA binding molecules and hybridization reporters (primers and probes) only fluoresce when bound to DNA and result in the non-cumulative increase in observed fluorescence.
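    The efficiency-based correction referred to in this record's title follows from the exponential model of PCR; a minimal sketch under the usual assumption that fluorescence crosses a fixed threshold after Cq cycles (function and parameter names are illustrative):

```python
def starting_quantity(cq, efficiency, threshold=1.0):
    # Exponential amplification model: after Cq cycles the fluorescence
    # (proportional to product) reaches the threshold, so
    #   threshold = N0 * efficiency**Cq  =>  N0 = threshold / efficiency**Cq
    # where efficiency is the per-cycle fold increase (1 < efficiency <= 2).
    return threshold / efficiency ** cq
```

    Assuming the textbook efficiency of 2 when the assay's estimated efficiency is, say, 1.9 understates the starting quantity; re-evaluating the same Cq with the estimated efficiency value removes this bias.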

  11. Correction for Partial Volume Effect Is a Must, Not a Luxury, to Fully Exploit the Potential of Quantitative PET Imaging in Clinical Oncology

    DEFF Research Database (Denmark)

    Alavi, Abass; Werner, Thomas J; Høilund-Carlsen, Poul Flemming

    2018-01-01

    The partial volume effect (PVE) is considered as one of the major degrading factors impacting image quality and hampering the accuracy of quantitative PET imaging in clinical oncology. This effect is the consequence of the limited spatial resolution of whole-body PET scanners, which results in bl...

  12. Ecological change, sliding baselines and the importance of historical data: lessons from Combining [corrected] observational and quantitative data on a temperate reef over 70 years.

    Directory of Open Access Journals (Sweden)

    Giulia Gatti

    Understanding the effects of environmental change on ecosystems requires the identification of baselines that may act as reference conditions. However, the continuous change of these references challenges our ability to define the true natural status of ecosystems. The so-called sliding baseline syndrome can be overcome through the analysis of quantitative time series, which are, however, extremely rare. Here we show how combining historical quantitative data with descriptive 'naturalistic' information arranged in a chronological chain allows highlighting long-term trends and can be used to inform present conservation schemes. We analysed the long-term change of a coralligenous reef, a marine habitat endemic to the Mediterranean Sea. The coralligenous assemblages of Mesco Reef (Ligurian Sea, NW Mediterranean) have been studied, although discontinuously, since 1937, thus making available both detailed descriptive information and scanty quantitative data: while the former was useful to understand the natural history of the ecosystem, the analysis of the latter was of paramount importance to provide a formal measure of change over time. Epibenthic assemblages remained comparatively stable until the 1990s, when species replacement, invasion by alien algae, and biotic homogenisation occurred within a few years, leading to a new and completely different ecosystem state. The shift experienced by the coralligenous assemblages of Mesco Reef was probably induced by a combination of seawater warming and local human pressures, the latter mainly resulting in increased water turbidity; in turn, cumulative stress may have favoured the establishment of alien species. This study showed that the combined analysis of quantitative and descriptive historical data represents precious knowledge for understanding ecosystem trends over time and helps identify baselines for ecological management.

  13. Qualitative and quantitative evaluation of rigid and deformable motion correction algorithms using dual-energy CT images in view of application to CT perfusion measurements in abdominal organs affected by breathing motion.

    Science.gov (United States)

    Skornitzke, S; Fritz, F; Klauss, M; Pahn, G; Hansen, J; Hirsch, J; Grenacher, L; Kauczor, H-U; Stiller, W

    2015-02-01

    To compare six different scenarios for correcting for breathing motion in abdominal dual-energy CT (DECT) perfusion measurements. Rigid [RRComm(80 kVp)] and non-rigid [NRComm(80 kVp)] registration of commercially available CT perfusion software, custom non-rigid registration [NRCustom(80 kVp), demons algorithm] and a control group [CG(80 kVp)] without motion correction were evaluated using 80 kVp images. Additionally, NRCustom was applied to dual-energy (DE)-blended [NRCustom(DE)] and virtual non-contrast [NRCustom(VNC)] images, yielding six evaluated scenarios. After motion correction, perfusion maps were calculated using a combined maximum slope/Patlak model. For qualitative evaluation, three blinded radiologists independently rated motion correction quality and resulting perfusion maps on a four-point scale (4 = best, 1 = worst). For quantitative evaluation, relative changes in metric values, R(2) and residuals of perfusion model fits were calculated. For motion-corrected images, mean ratings differed significantly [NRCustom(80 kVp) and NRCustom(DE), 3.3; NRComm(80 kVp), 3.1; NRCustom(VNC), 2.9; RRComm(80 kVp), 2.7; CG(80 kVp), 2.7; all p < 0.05]. Relative changes in metric values included [NRCustom(VNC), 22.8%; RRComm(80 kVp), 0.6%; CG(80 kVp), 0%]. Regarding perfusion maps, NRCustom(80 kVp) and NRCustom(DE) were rated highest [NRCustom(80 kVp), 3.1; NRCustom(DE), 3.0; NRComm(80 kVp), 2.8; NRCustom(VNC), 2.6; CG(80 kVp), 2.5; RRComm(80 kVp), 2.4] and had significantly higher R(2) and lower residuals. Correlation between qualitative and quantitative evaluation was low to moderate. Non-rigid motion correction improves spatial alignment of the target region and fit of CT perfusion models. Using DE-blended and DE-VNC images for deformable registration offers no significant improvement. Non-rigid algorithms improve the quality of abdominal CT perfusion measurements but do not benefit from DECT post-processing.

  14. Preliminary study for differential diagnosis of intracranial tumors using in vivo quantitative proton MR spectroscopy with correction for T2 relaxation time

    International Nuclear Information System (INIS)

    Isobe, Tomonori; Yamamoto, Tetsuya; Akutsu, Hiroyoshi; Shiigai, Masanari; Shibata, Yasushi; Takada, Kenta; Masumoto, Tomohiko; Anno, Izumi; Matsumura, Akira

    2015-01-01

    Introduction: The intent of this study was to differentiate intracranial tumors using the metabolite concentrations obtained by quantification with correction for T2 relaxation time, and to analyze whether the spectrum peak was generated by the existence of metabolites in proton magnetic resonance spectroscopy (MRS). Methods: All proton MRS studies were performed on a clinical 1.5T MR system. Seven normal volunteers and 57 patients (gliomas, metastases, meningiomas, acoustic neuromas, and pituitary adenomas) underwent single-voxel proton MRS with different echo times (TE: 68, 136, 272 ms) for T2 correction of signal derived from metabolites and tissue water. With tissue water employed as an internal reference, the concentrations of metabolites (i.e. N-acetylaspartate (NAA), total creatine (t-Cr) and choline-containing compounds (Cho)) were calculated. Moreover, proton MRS data from previously published representative reports were critically reviewed and compared with our data. Results: Extramedullary tumors were characterized by absence of NAA compared with intramedullary tumors. High-grade glioma differed from low-grade glioma by lower t-Cr concentrations. Metastasis differed from cystic glioblastoma by higher Cho concentrations, lower t-Cr concentrations, an absence of NAA, and a prominent lipids peak. Based on these results and review of previous reports, we suggest a clinical pathway for the differentiation of intracranial tumors. Conclusion: Quantifying metabolite concentrations with correction for T2 relaxation time, together with analyzing whether a spectrum peak is generated by the presence of metabolites, is useful for the diagnosis of intracranial tumors.
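    The T2 correction described in the Methods can be sketched as a log-linear fit of the mono-exponential decay S(TE) = S0·exp(−TE/T2) across the three echo times, with S0 then serving as the T2-corrected signal (a simplified sketch, not the authors' implementation):

```python
import math

def fit_t2(echo_times_ms, signals):
    # Log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2);
    # returns the T2-corrected amplitude S0 and the T2 estimate (ms).
    n = len(echo_times_ms)
    logs = [math.log(s) for s in signals]
    mean_t = sum(echo_times_ms) / n
    mean_y = sum(logs) / n
    slope = (sum((t - mean_t) * (y - mean_y)
                 for t, y in zip(echo_times_ms, logs))
             / sum((t - mean_t) ** 2 for t in echo_times_ms))
    s0 = math.exp(mean_y - slope * mean_t)  # extrapolation to TE = 0
    t2 = -1.0 / slope
    return s0, t2
```

    Applying the same fit to the tissue-water signal gives the internal reference against which metabolite concentrations are scaled.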

  15. Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling

    Science.gov (United States)

    Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.

    2015-01-01

    Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are only based on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, being the combination of local topographic rise and canopy height above the lake surface, is calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. Uncertainty of GDEM2-derived hs was compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; respective influences of land cover type and buffer width on hs were examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model was discussed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless
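    The sheltering-height metric hs reduces to averaging the height of the terrain-plus-canopy surface above lake level over the shoreline buffer; a minimal sketch (GDEM2, as a surface model, already includes canopy; clamping of negative differences is an assumption for illustration):

```python
def mean_sheltering_height(buffer_elevations_m, lake_surface_m):
    # hs: mean height of the terrain-plus-canopy surface above lake level
    # within the shoreline buffer. Negative differences are clamped to zero
    # here, on the assumption that a cell below lake level cannot shelter.
    heights = [max(z - lake_surface_m, 0.0) for z in buffer_elevations_m]
    return sum(heights) / len(heights)

# hypothetical 30 m DEM cells in a 100 m buffer around a lake at 300 m a.s.l.
hs = mean_sheltering_height([305.0, 310.0, 302.0], 300.0)
```

    Comparing hs computed from GDEM2 against lidar- or ICESat-derived values over the same buffers is then a straightforward paired-difference analysis.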

  16. A novel baseline-correction method for standard addition based derivative spectra and its application to quantitative analysis of benzo(a)pyrene in vegetable oil samples.

    Science.gov (United States)

    Li, Na; Li, Xiu-Ying; Zou, Zhe-Xiang; Lin, Li-Rong; Li, Yao-Qun

    2011-07-07

    In the present work, a baseline-correction method based on peak-to-derivative baseline measurement was proposed for the elimination of complex matrix interference that was mainly caused by unknown components and/or background in the analysis of derivative spectra. This novel method was applicable particularly when the matrix interfering components showed a broad spectral band, which was common in practical analysis. The derivative baseline was established by connecting two crossing points of the spectral curves obtained with a standard addition method (SAM). The applicability and reliability of the proposed method was demonstrated through both theoretical simulation and practical application. Firstly, Gaussian bands were used to simulate 'interfering' and 'analyte' bands to investigate the effect of different parameters of interfering band on the derivative baseline. This simulation analysis verified that the accuracy of the proposed method was remarkably better than other conventional methods such as peak-to-zero, tangent, and peak-to-peak measurements. Then the above proposed baseline-correction method was applied to the determination of benzo(a)pyrene (BaP) in vegetable oil samples by second-derivative synchronous fluorescence spectroscopy. The satisfactory results were obtained by using this new method to analyze a certified reference material (coconut oil, BCR(®)-458) with a relative error of -3.2% from the certified BaP concentration. Potentially, the proposed method can be applied to various types of derivative spectra in different fields such as UV-visible absorption spectroscopy, fluorescence spectroscopy and infrared spectroscopy.
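    The derivative baseline described above is the straight line through the two crossing points of the standard-addition spectral curves; the analytical signal is then measured from the derivative peak down to that line. A minimal sketch with hypothetical coordinates:

```python
def peak_to_derivative_baseline(x1, y1, x2, y2, x_peak, y_peak):
    # Derivative baseline: the straight line through the two crossing points
    # (x1, y1) and (x2, y2) of the standard-addition spectra. The corrected
    # signal is the derivative-peak height measured from this line, which
    # cancels broad-band matrix interference under the baseline.
    slope = (y2 - y1) / (x2 - x1)
    baseline_at_peak = y1 + slope * (x_peak - x1)
    return y_peak - baseline_at_peak

# hypothetical crossing points at 400 and 420 nm, peak at 410 nm
signal = peak_to_derivative_baseline(400.0, 0.2, 420.0, 0.4, 410.0, 1.0)
```

    The corrected signals for the successive standard additions are then regressed against added concentration, and the analyte concentration follows from the SAM extrapolation.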

  17. Ground-and satellite-based evidence of the biophysical mechanisms behind the greening Sahel

    DEFF Research Database (Denmark)

    Brandt, Martin Stefan; Mbow, Cheikh; Diouf, Abdoul A.

    2015-01-01

    After a dry period with prolonged droughts in the 1970s and 1980s, recent scientific outcome suggests that the decades of abnormally dry conditions in the Sahel have been reversed by positive anomalies in rainfall. Various remote sensing studies observed a positive trend in vegetation greenness...... over the last decades which is known as the re-greening of the Sahel. However, little investment has been made in including long-term ground-based data collections to evaluate and better understand the biophysical mechanisms behind these findings. Thus, deductions on a possible increment in biomass...... remain speculative. Our aim is to bridge these gaps and give specifics on the biophysical background factors of the re-greening Sahel. Therefore, a trend analysis was applied on long time series (1987-2013) of satellite-based vegetation and rainfall data, as well as on ground-observations of leaf biomass...
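    The trend analysis mentioned in this record reduces, in its simplest form, to a least-squares slope per time step applied to each vegetation or rainfall series (a sketch of the generic approach, not the authors' exact method):

```python
def linear_trend(series):
    # Ordinary least-squares slope per time step (e.g., per year for an
    # annual NDVI, rainfall, or leaf-biomass series); a positive slope
    # over 1987-2013 would indicate "re-greening".
    n = len(series)
    mean_x = (n - 1) / 2.0
    mean_y = sum(series) / n
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(series))
    den = sum((i - mean_x) ** 2 for i in range(n))
    return num / den
```

    Running the same estimator on the satellite series and on the ground observations allows the two trends to be compared directly, which is the evaluation this record argues for.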

  18. Engineering satellite-based navigation and timing global navigation satellite systems, signals, and receivers

    CERN Document Server

    Betz, J

    2016-01-01

    This book describes the design and performance analysis of satnav systems, signals, and receivers. It also provides succinct descriptions and comparisons of all the world’s satnav systems. Its comprehensive and logical structure addresses all satnav signals and systems in operation and being developed. Engineering Satellite-Based Navigation and Timing: Global Navigation Satellite Systems, Signals, and Receivers provides the technical foundation for designing and analyzing satnav signals, systems, and receivers. Its contents and structure address all satnav systems and signals: legacy, modernized, and new. It combines qualitative information with detailed techniques and analyses, providing a comprehensive set of insights and engineering tools for this complex multidisciplinary field. Part I describes system and signal engineering including orbital mechanics and constellation design, signal design principles and underlying considerations, link budgets, quantifying receiver performance in interference, and e...

  19. Trellis coding with Continuous Phase Modulation (CPM) for satellite-based land-mobile communications

    Science.gov (United States)

    1989-01-01

    This volume of the final report summarizes the results of our studies on the satellite-based mobile communications project. It includes: a detailed analysis, design, and simulations of trellis coded, full/partial response CPM signals with/without interleaving over various Rician fading channels; analysis and simulation of computational cutoff rates for coherent, noncoherent, and differential detection of CPM signals; optimization of the complete transmission system; analysis and simulation of power spectrum of the CPM signals; design and development of a class of Doppler frequency shift estimators; design and development of a symbol timing recovery circuit; and breadboard implementation of the transmission system. Studies prove the suitability of the CPM system for mobile communications.

  20. Network design consideration of a satellite-based mobile communications system

    Science.gov (United States)

    Yan, T.-Y.

    1986-01-01

    Technical considerations for the Mobile Satellite Experiment (MSAT-X), the ground segment testbed for the low-cost spectral efficient satellite-based mobile communications technologies being developed for the 1990's, are discussed. The Network Management Center contains a flexible resource sharing algorithm, the Demand Assigned Multiple Access scheme, which partitions the satellite transponder bandwidth among voice, data, and request channels. Satellite use of multiple UHF beams permits frequency reuse. The backhaul communications and the Telemetry, Tracking and Control traffic are provided through a single full-coverage SHF beam. Mobile Terminals communicate with the satellite using UHF. All communications including SHF-SHF between Base Stations and/or Gateways, are routed through the satellite. Because MSAT-X is an experimental network, higher level network protocols (which are service-specific) will be developed only to test the operation of the lowest three levels, the physical, data link, and network layers.

  1. Impact of point spread function correction in standardized uptake value quantitation for positron emission tomography images. A study based on phantom experiments and clinical images

    International Nuclear Information System (INIS)

    Nakamura, Akihiro; Tanizaki, Yasuo; Takeuchi, Miho

    2014-01-01

    While point spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; and central single-peak overestimation up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV. (author)
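    The edge (Gibbs) artifacts described in this record are the classical overshoot that appears when a sharp boundary is represented in a band-limited basis. A minimal one-dimensional illustration using a truncated Fourier series of a square wave (not a PET reconstruction, but the same mechanism, and the reason radioactivity near sphere edges is overestimated):

```python
import math

def square_wave_partial_sum(x, n_terms):
    # Truncated Fourier series of a unit-amplitude square wave; truncating
    # the series band-limits the step, the same mechanism by which PSF-based
    # reconstruction produces edge (Gibbs) artifacts at sharp activity edges.
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

def max_overshoot(n_terms, samples=2000):
    # Peak of the partial sum on (0, pi); the true plateau is 1.0, so any
    # excess above 1.0 is pure truncation overshoot (roughly 9% of the
    # jump height, independent of n_terms).
    xs = (math.pi * i / samples for i in range(1, samples))
    return max(square_wave_partial_sum(x, n_terms) for x in xs)
```

    The overshoot does not shrink as more terms are added; it only narrows, which mirrors the report's finding that edge artifacts persist and can inflate apparent uptake in small, high-contrast spheres.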

  2. [Impact of point spread function correction in standardized uptake value quantitation for positron emission tomography images: a study based on phantom experiments and clinical images].

    Science.gov (United States)

    Nakamura, Akihiro; Tanizaki, Yasuo; Takeuchi, Miho; Ito, Shigeru; Sano, Yoshitaka; Sato, Mayumi; Kanno, Toshihiko; Okada, Hiroyuki; Torizuka, Tatsuo; Nishizawa, Sadahiko

    2014-06-01

    While point spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; and central single-peak overestimation up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV.

  3. Advancing land surface model development with satellite-based Earth observations

    Science.gov (United States)

    Orth, Rene; Dutra, Emanuel; Trigo, Isabel F.; Balsamo, Gianpaolo

    2017-04-01

    The land surface forms an essential part of the climate system. It interacts with the atmosphere through the exchange of water and energy and hence influences weather and climate, as well as their predictability. Correspondingly, the land surface model (LSM) is an essential part of any weather forecasting system. LSMs rely on partly poorly constrained parameters, due to sparse land surface observations. With the use of newly available land surface temperature observations, we show in this study that novel satellite-derived datasets help to improve LSM configuration, and hence can contribute to improved weather predictability. We use the Hydrology Tiled ECMWF Scheme of Surface Exchanges over Land (HTESSEL) and validate it comprehensively against an array of Earth observation reference datasets, including the new land surface temperature product. This reveals satisfactory model performance in terms of hydrology, but poor performance in terms of land surface temperature. This is due to inconsistencies of process representations in the model as identified from an analysis of perturbed parameter simulations. We show that HTESSEL can be more robustly calibrated with multiple instead of single reference datasets as this mitigates the impact of the structural inconsistencies. Finally, performing coupled global weather forecasts we find that a more robust calibration of HTESSEL also contributes to improved weather forecast skills. In summary, new satellite-based Earth observations are shown to enhance the multi-dataset calibration of LSMs, thereby improving the representation of insufficiently captured processes, advancing weather predictability and understanding of climate system feedbacks. Orth, R., E. Dutra, I. F. Trigo, and G. Balsamo (2016): Advancing land surface model development with satellite-based Earth observations. Hydrol. Earth Syst. Sci. Discuss., doi:10.5194/hess-2016-628

  4. An intercomparison and validation of satellite-based surface radiative energy flux estimates over the Arctic

    Science.gov (United States)

    Riihelä, Aku; Key, Jeffrey R.; Meirink, Jan Fokke; Kuipers Munneke, Peter; Palo, Timo; Karlsson, Karl-Göran

    2017-05-01

    Accurate determination of radiative energy fluxes over the Arctic is of crucial importance for understanding atmosphere-surface interactions, melt and refreezing cycles of the snow and ice cover, and the role of the Arctic in the global energy budget. Satellite-based estimates can provide comprehensive spatiotemporal coverage, but the accuracy and comparability of the existing data sets must be ascertained to facilitate their use. Here we compare radiative flux estimates from Clouds and the Earth's Radiant Energy System (CERES) Synoptic 1-degree (SYN1deg)/Energy Balanced and Filled, Global Energy and Water Cycle Experiment (GEWEX) surface energy budget, and our own experimental FluxNet / Satellite Application Facility on Climate Monitoring cLoud, Albedo and RAdiation (CLARA) data against in situ observations over Arctic sea ice and the Greenland Ice Sheet during summer of 2007. In general, CERES SYN1deg flux estimates agree best with in situ measurements, although with two particular limitations: (1) over sea ice the upwelling shortwave flux in CERES SYN1deg appears to be underestimated because of an underestimated surface albedo and (2) the CERES SYN1deg upwelling longwave flux over sea ice saturates during midsummer. The Advanced Very High Resolution Radiometer-based GEWEX and FluxNet-CLARA flux estimates generally show a larger range in retrieval errors relative to CERES, with contrasting tendencies relative to each other. The largest source of retrieval error in the FluxNet-CLARA downwelling shortwave flux is shown to be an overestimated cloud optical thickness. The results illustrate that satellite-based flux estimates over the Arctic are not yet homogeneous and that further efforts are necessary to investigate the differences in the surface and cloud properties which lead to disagreements in flux retrievals.

  5. Comparison of four machine learning algorithms for their applicability in satellite-based optical rainfall retrievals

    Science.gov (United States)

    Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas

    2016-03-01

    Machine learning (ML) algorithms have successfully been demonstrated to be valuable tools in satellite-based rainfall retrievals which show the practicability of using ML algorithms when faced with high dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others we conclude that further research in providing suitable predictors for rainfall is of greater necessity than an optimization through the choice of the ML algorithm.
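    The probability of detection and bias figures quoted above come from the standard 2x2 contingency table of predicted versus observed rain occurrence; a minimal sketch (the counts below are hypothetical):

```python
def verification_scores(hits, misses, false_alarms):
    # Standard categorical scores from the 2x2 rain/no-rain contingency
    # table: probability of detection (POD) and frequency bias. Bias > 1
    # means rainfall area is overestimated, as reported in the study.
    pod = hits / (hits + misses)
    bias = (hits + false_alarms) / (hits + misses)
    return pod, bias

# hypothetical counts for one ML classifier over the validation scenes
pod, bias = verification_scores(hits=85, misses=15, false_alarms=80)
```

    Computing the same pair of scores for RF, NNET, AVNNET and SVM puts the four classifiers on a common footing for the rainfall-area comparison.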

  6. Current trends in satellite based emergency mapping - the need for harmonisation

    Science.gov (United States)

    Voigt, Stefan

    2013-04-01

    During the past years, the availability and use of satellite image data to support disaster management and humanitarian relief organisations has largely increased. Automation and data processing techniques are improving greatly, and the capacity for accessing and processing satellite imagery is getting better globally. More and more global activities, via the internet and through global organisations like the United Nations or the International Charter Space and Major Disasters, engage in the topic, while at the same time more and more national or local centres take up rapid mapping operations and activities. In order to make even more effective use of this very positive increase in capacity, for the sake of operational provision of analysis results, for fast validation of satellite-derived damage assessments, for better cooperation in the joint inter-agency generation of rapid mapping products, and for general scientific use, rapid mapping results in general need to be better harmonized, if not standardized. In this presentation, experiences from several years of rapid mapping gained by the DLR Center for Satellite Based Crisis Information (ZKI) within the context of national activities, the International Charter Space and Major Disasters, GMES/Copernicus etc. are reported. Furthermore, an overview is given of how automation, quality assurance and optimization can be achieved through standard operating procedures within a rapid mapping workflow. Building on this long-term rapid mapping experience, and on the DLR initiative to set in place an "International Working Group on Satellite Based Emergency Mapping", current trends in rapid mapping are discussed and thoughts on how the sharing of rapid mapping information can be optimized by harmonizing analysis results and data structures are presented.
Such an harmonization of analysis procedures, nomenclatures and representations of data as well as meta data are the basis to better cooperate within

  7. Improved Lower Mekong River Basin Hydrological Decision Making Using NASA Satellite-based Earth Observation Systems

    Science.gov (United States)

    Bolten, J. D.; Mohammed, I. N.; Srinivasan, R.; Lakshmi, V.

    2017-12-01

    The objective of this work is to better understand the hydrological cycle of the Lower Mekong River Basin (LMRB) and to assess the value added by remote sensing data on the spatial variability of soil moisture over the Mekong Basin. We present the development and assessment of a Soil and Water Assessment Tool (SWAT) model for the LMRB (drainage area of 495,000 km2). The coupled model framework presented is part of SERVIR, a joint capacity-building venture between NASA and the U.S. Agency for International Development, providing state-of-the-art, satellite-based earth monitoring, imaging and mapping data, geospatial information, predictive models, and science applications to improve environmental decision-making among multiple developing nations. The LMRB SWAT model integrates satellite-based daily gridded precipitation, air temperature, digital elevation model, soil texture, and land cover and land use data to drive SWAT simulations over the Lower Mekong River Basin. The LMRB SWAT model driven by remote sensing climate data was calibrated and verified with observed runoff data at the watershed outlet as well as at multiple sites along the main river course. A second LMRB SWAT model, driven by in-situ climate observations, was also calibrated and verified against streamflow data. Simulated soil moisture estimates from the two models were then examined and compared to downscaled Soil Moisture Active Passive (SMAP) 36 km radiometer products. Results from this work present a framework for improving SWAT performance by utilizing downscaled SMAP soil moisture products for model calibration and validation. Index Terms: 1622: Earth system modeling; 1631: Land/atmosphere interactions; 1800: Hydrology; 1836: Hydrological cycles and budgets; 1840: Hydrometeorology; 1855: Remote sensing; 1866: Soil moisture; 6334: Regional Planning
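The abstract does not name the goodness-of-fit statistic used when calibrating against observed runoff; the Nash-Sutcliffe efficiency (NSE) is the statistic conventionally reported for SWAT streamflow calibration, so a minimal sketch with hypothetical discharge values might be:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean
    the model is no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Hypothetical daily discharge (m3/s) at a gauging station
obs = np.array([120.0, 150.0, 200.0, 180.0, 160.0])
sim = np.array([110.0, 155.0, 190.0, 185.0, 150.0])
print(round(nse(obs, sim), 3))  # -> 0.905
```

A value above roughly 0.5 is commonly taken as satisfactory for streamflow simulations, with values near 1 indicating close agreement at the gauges.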

  8. Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location

    DEFF Research Database (Denmark)

    Fennema-Notestine, Christine; Ozyurt, I Burak; Clark, Camellia P

    2006-01-01

    Performance of automated methods to isolate brain from nonbrain tissues in magnetic resonance (MR) structural images may be influenced by MR signal inhomogeneities, type of MR image set, regional anatomy, and age and diagnosis of subjects studied. The present study compared the performance of four methods: Brain Extraction Tool (BET; Smith [2002]: Hum Brain Mapp 17:143-155); 3dIntracranial (Ward [1999] Milwaukee: Biophysics Research Institute, Medical College of Wisconsin; in AFNI); a Hybrid Watershed algorithm (HWA, Segonne et al. [2004] Neuroimage 22:1060-1075; in FreeSurfer); and Brain Surface Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41-54; Shattuck et al. [2001] Neuroimage 13:856-876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed...

  9. Towards quantitative analysis of core-shell catalyst nano-particles by aberration corrected high angle annular dark field STEM and EDX

    International Nuclear Information System (INIS)

    Haibo, E; Nellist, P D; Lozano-Perez, S; Ozkaya, D

    2010-01-01

    Core-shell structured heterogeneous catalyst nano-particles offer the promise of more efficient precious metal usage as well as novel functionalities, but are as yet poorly characterised due to large compositional variations over short ranges. The high-angle annular dark-field (HAADF) detector in a scanning transmission electron microscope (STEM) is frequently used for high-resolution imaging because of its Z-contrast and incoherent imaging process, but generally little attention is paid to quantification. Energy dispersive X-ray (EDX) analysis provides information on thickness and chemical composition and, used in conjunction with HAADF-STEM, aids interpretation of imaged nano-particles. We present important calibrations and initial data for truly quantitative high-resolution analysis.

  10. Application of bias correction methods to improve U{sub 3}Si{sub 2} sample preparation for quantitative analysis by WDXRF

    Energy Technology Data Exchange (ETDEWEB)

    Scapin, Marcos A.; Guilhen, Sabine N.; Azevedo, Luciana C. de; Cotrim, Marycel E.B.; Pires, Maria Ap. F., E-mail: mascapin@ipen.br, E-mail: snguilhen@ipen.br, E-mail: lvsantana@ipen.br, E-mail: mecotrim@ipen.br, E-mail: mapires@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The determination of silicon (Si), total uranium (U) and impurities in uranium-silicide (U{sub 3}Si{sub 2}) samples by the wavelength dispersive X-ray fluorescence technique (WDXRF) has already been validated and is currently implemented at IPEN's X-Ray Fluorescence Laboratory (IPEN-CNEN/SP) in São Paulo, Brazil. Sample preparation requires approximately 3 g of H{sub 3}BO{sub 3} as sample holder and 1.8 g of U{sub 3}Si{sub 2}. However, because boron is a neutron absorber, this procedure precludes recovery of the U{sub 3}Si{sub 2} sample, which, over routine analysis, accumulates into a significant amount of unusable uranium waste. An estimated average of 15 samples per month is expected to be analyzed by WDXRF, resulting in approx. 320 g of U{sub 3}Si{sub 2} that would not return to the nuclear fuel cycle. This not only results in production losses but also creates another problem: radioactive waste management. The purpose of this paper is to present the mathematical models that may be applied to correct systematic errors when the H{sub 3}BO{sub 3} sample holder is substituted by cellulose acetate {[C_6H_7O_2(OH)_3_-_m(OOCCH_3)m], m = 0∼3}, thus enabling recovery of the U{sub 3}Si{sub 2} sample. The results demonstrate that the adopted mathematical model is statistically satisfactory, allowing the optimization of the procedure. (author)

  11. Efficient all solid-state UV source for satellite-based lidar applications.

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Darrell Jewell; Smith, Arlee Virgil

    2003-07-01

    A satellite-based UV-DIAL measurement system would allow continuous global monitoring of ozone concentration in the upper atmosphere. However such systems remain difficult to implement because aerosol-scattering return signals for satellite-based lidars are very weak. A suitable system must produce high-energy UV pulses at multiple wavelengths with very high efficiency. For example, a nanosecond system operating at 10 Hz must generate approximately 1 J per pulse at 308-320 nm. An efficient space-qualified wavelength-agile system based on a single UV source that can meet this requirement is probably not available using current laser technology. As an alternative, we're pursuing a multi-source approach employing all-solid-state modules that individually generate 300-320 nm light with pulse energies in the range of 50-200 mJ, with transform-limited bandwidths and good beam quality. Pulses from the individual sources can be incoherently summed to obtain the required single-pulse energy. These sources use sum-frequency mixing of the 532 nm second harmonic of an Nd:YAG pump laser with 731-803 nm light derived from a recently-developed, state-of-the-art, nanosecond optical parametric oscillator. Two source configurations are under development, one using extra-cavity sum-frequency mixing, and the other intra-cavity sum-frequency mixing. In either configuration, we hope to obtain sum-frequency mixing efficiency approaching 60% by carefully matching the spatial and temporal properties of the laser and OPO pulses. This ideal balance of green and near-IR photons requires an injection-seeded Nd:YAG pump-laser with very high beam quality, and an OPO exhibiting unusually high conversion efficiency and exceptional signal beam quality. The OPO employs a singly-resonant high-Fresnel-number image-rotating self-injection-seeded nonplanar-ring cavity that achieves pump depletion > 65% and produces signal beams with M{sup 2} {approx} 3 at pulse energies exceeding 50 mJ. Pump beam

  12. The Satellite based Monitoring Initiative for Regional Air quality (SAMIRA): Project summary and first results

    Science.gov (United States)

    Schneider, Philipp; Stebel, Kerstin; Ajtai, Nicolae; Diamandi, Andrei; Horalek, Jan; Nemuc, Anca; Stachlewska, Iwona; Zehner, Claus

    2017-04-01

    We present a summary and first results of a new ESA-funded project entitled Satellite based Monitoring Initiative for Regional Air quality (SAMIRA), which aims at improving regional and local air quality monitoring through the synergistic use of data from present and upcoming satellite instruments, traditional in situ air quality monitoring networks, and output from chemical transport models. Through collaborative efforts in four countries, namely Romania, Poland, the Czech Republic and Norway, all with existing air quality problems, SAMIRA intends to support the involved institutions and associated users in their national monitoring and reporting mandates as well as to generate novel research in this area. The primary goal of SAMIRA is to demonstrate the usefulness of existing and future satellite products of air quality for improving monitoring and mapping of air pollution at the regional scale. A total of six core activities are being carried out to achieve this goal. Firstly, the project is developing and optimizing algorithms for the retrieval of hourly aerosol optical depth (AOD) maps from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) onboard Meteosat Second Generation. As a second activity, SAMIRA aims to derive particulate matter (PM2.5) estimates from AOD data by developing robust algorithms for AOD-to-PM conversion with support from model and lidar data. In a third activity, we evaluate the added value of satellite products of atmospheric composition for operational European-scale air quality mapping using geostatistics and auxiliary datasets. The additional benefit of satellite-based monitoring over existing monitoring techniques (in situ, models) is tested by combining these datasets using geostatistical methods and demonstrated for nitrogen dioxide (NO2), sulphur dioxide (SO2), and aerosol optical depth/particulate matter. As a fourth activity, the project is developing novel algorithms for downscaling coarse

  13. Multitemporal Monitoring of the Air Quality in Bulgaria by Satellite Based Instruments

    Science.gov (United States)

    Nikolov, Hristo; Borisova, Denitsa

    2015-04-01

    Nowadays the effect on climate, population and environment of air pollutant concentrations above permitted levels at local and regional scales is indisputable. The main sources of gaseous releases are anthropogenic emissions from the economic and domestic activities of the inhabitants and, to a lesser extent, natural origins. Complementary to pollutant emissions, local weather parameters such as temperature, precipitation, wind speed, clouds, atmospheric water vapor, and wind direction control the chemical reactions in the atmosphere. An intrinsic property of air pollution is its transboundary nature, which is why air quality (AQ) does not affect the population of a single country only. This is why the exchange of information concerning AQ at the EU level is subject to well-established legislation, with which one of the EU flagship initiatives for standardization in data exchange, INSPIRE, has to comply. Although a good reporting mechanism with regard to AQ is already established between EU member states, national networks suffer from a serious disadvantage: they do not form a regular grid, which is a prerequisite for verification of pollutant transport modeling. Alternative sources of information on AQ are satellite observations (e.g., the OMI and TOMS instruments), providing daily data for some of the major contributors to air pollution such as O3, NOx and SO2. These data form regular grids and are processed on the day of acquisition, so they can be used to verify the outputs generated by numerical modeling of AQ and pollution transfer. In this research we present results of multitemporal monitoring of several regional "hot spots" responsible for greenhouse gas emissions in Bulgaria, with emphasis on satellite-based instruments. A further output of this study is a method for validating AQ forecasts and providing feedback to the service that prepares

  14. Impacts of Satellite-Based Snow Albedo Assimilation on Offline and Coupled Land Surface Model Simulations.

    Directory of Open Access Journals (Sweden)

    Tao Wang

    Full Text Available Seasonal snow cover in the Northern Hemisphere is the largest component of the terrestrial cryosphere and plays a major role in the climate system through strong positive feedbacks related to albedo. The snow-albedo feedback is invoked as an important cause of the polar amplification of ongoing and projected climate change, and its parameterization across models is an important source of uncertainty in climate simulations. Here, instead of developing a physical snow albedo scheme, we use a direct insertion approach to assimilate satellite-based surface albedo during the snow season (hereafter snow albedo assimilation) into the land surface model ORCHIDEE (ORganizing Carbon and Hydrology In Dynamic EcosystEms) and assess the influence of such assimilation on offline and coupled simulations. Our results show that snow albedo assimilation in both ORCHIDEE and ORCHIDEE-LMDZ (a general circulation model of the Laboratoire de Météorologie Dynamique) improves the simulation accuracy of mean seasonal (October through May) snow water equivalent over the region north of 40°N. The sensitivity of snow water equivalent to snow albedo assimilation is more pronounced in the coupled simulation than in the offline simulation, since the feedback of albedo on air temperature is allowed in ORCHIDEE-LMDZ. We also show that simulations of air temperature at 2 m in ORCHIDEE-LMDZ are significantly improved by snow albedo assimilation during the spring, in particular over eastern Siberia. This is because high amounts of shortwave radiation during the spring maximize the snow-albedo feedback, which is further supported by the finding that the spatial sensitivity of temperature change to albedo change is much larger during the spring than during the autumn and winter. In addition, the radiative forcing at the top of the atmosphere induced by snow albedo assimilation during the spring is estimated to be -2.50 W m-2, the
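The direct insertion approach described above can be sketched in a few lines: where a valid satellite albedo retrieval exists over snow, the model state is simply overwritten. The grids, masks and treatment of cloudy (missing) retrievals below are illustrative assumptions, not ORCHIDEE code:

```python
import numpy as np

def direct_insertion(model_albedo, sat_albedo, snow_mask):
    """Direct insertion: wherever a valid satellite retrieval exists
    over snow, overwrite the model albedo state; keep the model value
    elsewhere (e.g., under cloud, where the retrieval is NaN)."""
    valid = snow_mask & ~np.isnan(sat_albedo)
    return np.where(valid, sat_albedo, model_albedo)

# Toy 2x3 grid; NaN marks a cloudy (missing) satellite retrieval
model = np.array([[0.60, 0.55, 0.70], [0.30, 0.25, 0.20]])
sat   = np.array([[0.80, np.nan, 0.75], [0.40, 0.35, np.nan]])
snow  = np.array([[True, True, True], [False, False, False]])
print(direct_insertion(model, sat, snow))
```

Unlike statistical assimilation schemes (e.g., a Kalman filter), direct insertion gives the observation full weight wherever it exists, which is the simplicity the study trades for.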

  15. Satellite-based Estimates of Ambient Air Pollution and Global Variations in Childhood Asthma Prevalence

    Science.gov (United States)

    Anderson, H. Ross; Butland, Barbara K.; Donkelaar, Aaron Matthew Van; Brauer, Michael; Strachan, David P.; Clayton, Tadd; van Dingenen, Rita; Amann, Marcus; Brunekreef, Bert; Cohen, Aaron; hide

    2012-01-01

    Background: The effect of ambient air pollution on global variations and trends in asthma prevalence is unclear. Objectives: Our goal was to investigate community-level associations between asthma prevalence data from the International Study of Asthma and Allergies in Childhood (ISAAC) and satellite-based estimates of particulate matter with aerodynamic diameter < 2.5 μm (PM2.5) and nitrogen dioxide (NO2), and modeled estimates of ozone. Methods: We assigned satellite-based estimates of PM2.5 and NO2 at a spatial resolution of 0.1° × 0.1° and modeled estimates of ozone at a resolution of 1° × 1° to 183 ISAAC centers. We used center-level prevalence of severe asthma as the outcome and multilevel models to adjust for gross national income (GNI) and center- and country-level sex, climate, and population density. We examined associations (adjusting for GNI) between air pollution and asthma prevalence over time in centers with data from ISAAC Phase One (mid-1990s) and Phase Three (2001-2003). Results: For the 13- to 14-year age group (128 centers in 28 countries), the estimated average within-country change in center-level asthma prevalence per 100 children per 10% increase in center-level PM2.5 and NO2 was -0.043 [95% confidence interval (CI): -0.139, 0.053] and 0.017 (95% CI: -0.030, 0.064), respectively. For ozone, the estimated change in prevalence per part per billion by volume was -0.116 (95% CI: -0.234, 0.001). Equivalent results for the 6- to 7-year age group (83 centers in 20 countries), though slightly different, were not significantly positive. For the 13- to 14-year age group, the change in center-level asthma prevalence over time per 100 children per 10% increase in PM2.5 from Phase One to Phase Three was -0.139 (95% CI: -0.347, 0.068). The corresponding association with ozone (per ppbV) was -0.171 (95% CI: -0.275, -0.067).
Conclusion: In contrast to reports from within-community studies of individuals exposed to traffic pollution, we did not find

  16. A Satellite-Based Model for Simulating Ecosystem Respiration in the Tibetan and Inner Mongolian Grasslands

    Directory of Open Access Journals (Sweden)

    Rong Ge

    2018-01-01

    Full Text Available It is important to accurately evaluate ecosystem respiration (RE) in the alpine grasslands of the Tibetan Plateau and the temperate grasslands of the Inner Mongolian Plateau, as it serves as a sensitivity indicator of regional and global carbon cycles. Here, we combined flux measurements taken between 2003 and 2013 from 16 grassland sites across northern China and the corresponding MODIS land surface temperature (LST), enhanced vegetation index (EVI), and land surface water index (LSWI) to build a satellite-based model to estimate RE at a regional scale. First, the dependencies of both spatial and temporal variations of RE on these biotic and climatic factors were examined explicitly. We found that plant productivity and moisture, but not temperature, best explain the spatial pattern of RE in northern China's grasslands, while temperature plays a major role in regulating the temporal variability of RE in the alpine grasslands, and moisture is equally as important as temperature in the temperate grasslands. However, the moisture effect on RE and an explicit representation of the spatial variation process are lacking in most existing satellite-based RE models. On this basis, we developed a model by comprehensively considering moisture, temperature, and productivity effects on both temporal and spatial processes of RE, and then evaluated the model's performance. Our results showed that the model explained the observed RE well in both the alpine (R2 = 0.79, RMSE = 0.77 g C m−2 day−1) and temperate grasslands (R2 = 0.75, RMSE = 0.60 g C m−2 day−1). The inclusion of the LSWI as the water-limiting factor substantially improved the model performance in arid and semi-arid ecosystems, and the spatialized basal respiration rate as an indicator for spatial variation largely determined the regional pattern of RE. Finally, the model accurately reproduced the seasonal and inter-annual variations and spatial variability of RE, and it avoided
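The model structure described, a spatialized basal rate modulated by productivity, temperature, and moisture scalars, can be illustrated with a multiplicative sketch. The functional forms and constants below are placeholders for illustration, not the paper's fitted equations:

```python
def re_sketch(lst_c, evi, lswi, r0=1.0, q10=2.0, t_ref=15.0):
    """Illustrative ecosystem respiration (g C m-2 day-1): a basal rate
    r0 scaled by productivity (EVI), a Q10-type temperature response on
    land surface temperature, and an LSWI-based water scalar."""
    f_temp = q10 ** ((lst_c - t_ref) / 10.0)  # temperature response
    g_water = (1.0 + lswi) / 2.0              # maps LSWI [-1, 1] -> [0, 1]
    return r0 * evi * f_temp * g_water

# At the reference temperature the temperature scalar equals 1
print(round(re_sketch(lst_c=15.0, evi=0.5, lswi=0.2), 3))  # -> 0.3
```

In this form the moisture scalar g_water is what the abstract credits with improving performance in arid and semi-arid ecosystems, while spatial variation would enter through a gridded r0.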

  17. Satellite based radar interferometry to estimate large-scale soil water depletion from clay shrinkage: possibilities and limitations

    NARCIS (Netherlands)

    Brake, te B.; Hanssen, R.F.; Ploeg, van der M.J.; Rooij, de G.H.

    2013-01-01

    Satellite-based radar interferometry is a technique capable of measuring small surface elevation changes at large scales and with a high resolution. In vadose zone hydrology, it has been recognized for a long time that surface elevation changes due to swell and shrinkage of clayey soils can serve as

  18. Providing satellite-based early warnings of fires to reduce fire flashovers on South Africa’s transmission lines

    CSIR Research Space (South Africa)

    Frost, PE

    2007-07-01

    Full Text Available The Advanced Fire Information System (AFIS) is the first near real time operational satellite-based fire monitoring system of its kind in Africa. The main aim of AFIS is to provide information regarding the prediction, detection and assessment...

  19. A method based on Monte Carlo simulations and voxelized anatomical atlases to evaluate and correct uncertainties on radiotracer accumulation quantitation in beta microprobe studies in the rat brain

    Science.gov (United States)

    Pain, F.; Dhenain, M.; Gurden, H.; Routier, A. L.; Lefebvre, F.; Mastrippolito, R.; Lanièce, P.

    2008-10-01

    The β-microprobe is a simple and versatile technique complementary to small-animal positron emission tomography (PET). It relies on local measurements of the concentration of positron-labeled molecules. So far, it has been successfully used in anesthetized rats for pharmacokinetics experiments and for the study of brain energy metabolism. However, the ability of the technique to provide accurate quantitative measurements using 18F, 11C and 15O tracers is likely to suffer from the contribution of the 511 keV gamma-ray background to the signal and from the contribution of positrons from brain loci surrounding the locus of interest. The aim of the present paper is to provide a method of evaluating several parameters that are expected to affect the quantification of recordings performed in vivo with this methodology. We have developed realistic voxelized phantoms of the rat whole body and brain, and used them as input geometries for Monte Carlo simulations of previous β-microprobe reports. In the context of realistic experiments (binding of 11C-raclopride to D2 dopaminergic receptors in the striatum; local glucose metabolic rate measurement with 18F-FDG; and H215O blood flow measurements in the somatosensory cortex), we have calculated the detection efficiencies and the corresponding contribution of 511 keV gammas from accumulation in peripheral organs. We confirmed that the 511 keV gamma background does not impair quantification. To evaluate the contribution of positrons from adjacent structures, we have developed β-Assistant, a program based on a rat brain voxelized atlas and matrices of local detection efficiencies calculated by Monte Carlo simulations for several probe geometries. This program was used to calculate the 'apparent sensitivity' of the probe for each brain structure included in the detection volume. For a given localization of a probe within the brain, this allows us to quantify the different sources of beta signal. Finally, since stereotaxic accuracy is

  20. Developing Information Services and Tools to Access and Evaluate Data Quality in Global Satellite-based Precipitation Products

    Science.gov (United States)

    Liu, Z.; Shie, C. L.; Meyer, D. J.

    2017-12-01

    Global satellite-based precipitation products have been widely used in research and applications around the world. Compared to ground-based observations, satellite-based measurements provide precipitation data on a global scale, especially over remote continents and oceans. Over the years, satellite-based precipitation products have evolved from single-sensor, single-algorithm datasets to multi-sensor, multi-algorithm ones. As a result, many satellite-based precipitation products have been enhanced, for example in spatial and temporal coverage. With the inclusion of ground-based measurements, biases of satellite-based precipitation products have been significantly reduced. However, data quality issues still exist and can be caused by many factors, such as observations, satellite platform anomalies, algorithms, production, calibration, validation, and data services. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) is home to NASA's global precipitation product archives, including the Tropical Rainfall Measuring Mission (TRMM) and the Global Precipitation Measurement (GPM) mission, as well as other global and regional precipitation products. Precipitation is one of the most downloaded and accessed parameters in the GES DISC data archive. Meanwhile, users want to easily locate and obtain data quality information at regional and global scales to better understand how precipitation products perform and how reliable they are. As data service providers, it is necessary to provide easy access to data quality information; however, such information is normally not available, and when it is, it is scattered and difficult to locate. In this presentation, we will present challenges and activities at the GES DISC to address precipitation data quality issues.

  1. Advancing land surface model development with satellite-based Earth observations

    Science.gov (United States)

    Orth, Rene; Dutra, Emanuel; Trigo, Isabel F.; Balsamo, Gianpaolo

    2017-05-01

    The land surface forms an essential part of the climate system. It interacts with the atmosphere through the exchange of water and energy and hence influences weather and climate, as well as their predictability. Correspondingly, the land surface model (LSM) is an essential part of any weather forecasting system. LSMs rely on partly poorly constrained parameters, due to sparse land surface observations. With the use of newly available land surface temperature observations, we show in this study that novel satellite-derived datasets help improve LSM configuration, and hence can contribute to improved weather predictability. We use the Hydrology Tiled ECMWF Scheme of Surface Exchanges over Land (HTESSEL) and validate it comprehensively against an array of Earth observation reference datasets, including the new land surface temperature product. This reveals satisfactory model performance in terms of hydrology but poor performance in terms of land surface temperature. This is due to inconsistencies of process representations in the model, as identified from an analysis of perturbed parameter simulations. We show that HTESSEL can be more robustly calibrated with multiple instead of single reference datasets, as this mitigates the impact of the structural inconsistencies. Finally, performing coupled global weather forecasts, we find that a more robust calibration of HTESSEL also contributes to improved weather forecast skill. In summary, new satellite-based Earth observations are shown to enhance the multi-dataset calibration of LSMs, thereby improving the representation of insufficiently captured processes, advancing weather predictability, and improving understanding of climate-system feedbacks.
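The multi-dataset calibration idea, scoring one simulation against several Earth-observation references at once so no single dataset dominates, can be sketched as a combined cost function. The normalization by each reference's standard deviation is an illustrative assumption, not necessarily the study's choice:

```python
import numpy as np

def multi_dataset_cost(sim, refs):
    """Average of RMSEs against each reference dataset, each normalized
    by that reference's standard deviation so datasets with different
    units and variances carry comparable weight."""
    costs = []
    for ref in refs:
        rmse = np.sqrt(np.mean((sim - ref) ** 2))
        costs.append(rmse / np.std(ref))
    return float(np.mean(costs))

sim  = np.array([1.0, 2.0, 3.0, 4.0])
ref1 = np.array([1.0, 2.0, 3.0, 4.0])  # perfect match
ref2 = np.array([2.0, 3.0, 4.0, 5.0])  # constant offset of 1
print(round(multi_dataset_cost(sim, [ref1, ref2]), 3))  # -> 0.447
```

A parameter set that minimizes such a combined cost is less prone to overfitting one reference at the expense of the others, which is the robustness argument made in the abstract.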

  2. Fundamentals of Inertial Navigation, Satellite-based Positioning and their Integration

    CERN Document Server

    Noureldin, Aboelmagd; Georgy, Jacques

    2013-01-01

    Fundamentals of Inertial Navigation, Satellite-based Positioning and their Integration is an introduction to the field of integrated navigation systems. It serves as an excellent reference for working engineers as well as a textbook for beginners and students new to the area. The book is easy to read and understand with minimal background knowledge. The authors explain the derivations in great detail, and the intermediate steps are thoroughly covered so that a beginner can easily follow the material. The book shows a step-by-step implementation of navigation algorithms, provides all the necessary details, and includes detailed illustrations for easy comprehension. It also presents real field experiments and in-vehicle road test results with professional discussion and analysis. This work is unique in discussing the different INS/GPS integration schemes in an easy-to-understand and straightforward way. These schemes include loosely versus tightly coupled, open loop versus closed loop, and many more.

  3. A Satellite-Based Sunshine Duration Climate Data Record for Europe and Africa

    Directory of Open Access Journals (Sweden)

    Steffen Kothe

    2017-05-01

    Full Text Available Besides 2 m temperature and precipitation, sunshine duration is one of the most important and commonly used parameters in climatology, with measured time series that in places exceed 100 years in length. EUMETSAT's Satellite Application Facility on Climate Monitoring (CM SAF) presents a climate data record for daily and monthly sunshine duration (SDU) for Europe and Africa. The basis for the advanced retrieval is a highly resolved satellite product of direct solar radiation derived from measurements by the Meteosat satellites 2 to 10. The data record covers the period 1983 to 2015 with a spatial resolution of 0.05° × 0.05°. The comparison against ground-based data shows high agreement but also some regional differences. Sunshine duration is overestimated by the satellite-based data in many regions compared to surface data. In West and Central Africa, low clouds seem to be the reason for a stronger overestimation of sunshine duration (up to 20% for monthly sums). For most stations, the overestimation is low, with a bias below 7.5 h for monthly sums and below 0.4 h for daily sums. A high correlation of 0.91 for daily SDU and 0.96 for monthly SDU also confirms the high agreement with station data. As SDU is based on a stable and homogeneous climate data record more than 30 years in length, it is highly suitable for climate applications such as trend estimates.
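The bias and correlation figures quoted above can be computed per station as in this sketch; the monthly sums below are made-up values for illustration:

```python
import numpy as np

# Hypothetical monthly sunshine-duration sums (hours): station vs satellite
station   = np.array([150.0, 180.0, 210.0, 240.0, 200.0, 170.0])
satellite = np.array([155.0, 186.0, 215.0, 248.0, 204.0, 178.0])

bias = np.mean(satellite - station)        # positive: satellite overestimates
r = np.corrcoef(station, satellite)[0, 1]  # Pearson correlation

print(round(bias, 2), round(r, 3))
```

Here the satellite overestimates by 6 h per month, within the "bias below 7.5 h for monthly sums" envelope reported for most stations, while the correlation stays close to 1.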

  4. Does Urban Form Affect Urban NO2? Satellite-Based Evidence for More than 1200 Cities.

    Science.gov (United States)

    Bechle, Matthew J; Millet, Dylan B; Marshall, Julian D

    2017-11-07

    Modifying urban form may be a strategy to mitigate urban air pollution. For example, evidence suggests that urban form can affect motor vehicle usage, a major contributor to urban air pollution. We use satellite-based measurements of urban form and nitrogen dioxide (NO2) to explore relationships between urban form and air pollution for a global dataset of 1274 cities. Three of the urban form metrics studied (contiguity, circularity, and vegetation) have a statistically significant relationship with urban NO2; their combined effect could be substantial. As an illustration, if the findings presented here are causal, our models suggest that if Christchurch, New Zealand (a city at the 75th percentile for all three urban-form metrics, with a network of buses, trams, and bicycle facilities) were transformed to match the urban form of Indio - Cathedral City, California, United States (a city at the 25th percentile for those same metrics, exhibiting sprawl-like suburban development), Christchurch's NO2 concentrations would be ∼60% higher than their current level. We also find that the combined effect of urban form on NO2 is larger for small cities (β × IQR = -0.46), which account for a large share of the urban population and are where much of the future urban growth is expected to occur. This work highlights the need for future study of how changes in urban form and related land use and transportation policies impact urban air pollution, especially for small cities.

  5. An Exploitation of Satellite-based Observation for Health Information: The UFOS Project

    Energy Technology Data Exchange (ETDEWEB)

    Mangin, A.; Morel, M.; Fanton d' Andon, O

    2000-07-01

    Short-, medium- and long-term trends of UV intensity levels are of crucial importance both for assessing effective biological impacts on the human population and for implementing adequate preventive behaviours. Better information on a large spatial scale and increased public awareness of short-term variations in UV values will help support health agencies' goals of educating the public on UV risks. The Ultraviolet Forecast Operational Service Project (UFOS), financed in part by the European Commission/DG Information Society (TEN-TELECOM programme), aims to exploit satellite-based observations and to supply a set of UV products directly useful to health care. The short-term objective is to demonstrate the technical and economic feasibility and the benefits that could be brought by such a system. UFOS is carried out by ACRI, with the support of an Advisory Group chaired by WHO and involving representation from the sectors of Health (WHO, INTERSUN collaborating centres, ZAMBON), Environment (WMO, IASB), and Telecommunications (EURECOM, IMET). (author)

  6. Satellite Based Downward Long Wave Radiation by Various Models in Northeast Asia

    Directory of Open Access Journals (Sweden)

    Chanyang Sur

    2014-01-01

    Satellite-based estimation of downward long-wave radiation (Rld) under clear-sky conditions in Northeast Asia was conducted using five well-known physical models (Brunt 1932; Idso and Jackson 1969; Brutsaert 1975; Satterlund 1979; Prata 1996) together with a newly proposed global Rld model (Abramowitz et al. 2012). Data from two flux towers in South Korea were used to validate downward long-wave radiation. Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric profile products were used to develop the Rld models. The overall root mean square error (RMSE) of MODIS Rld with respect to the two ecosystem-type flux towers was determined to be ≈20 W m-2. Based on the statistical analyses, MODIS Rld estimates with the Brutsaert (1975) and Abramowitz et al. (2012) models were the most applicable for evaluating Rld under clear-sky conditions in Northeast Asia. The Abramowitz Rld maps with MODIS Ta and ea showed reasonable seasonal patterns, well aligned with other biophysical variables reported in previous studies. The MODIS Rld map developed in this study will be very useful for identifying spatial patterns that are not detectable from ground-based Rld measurement sites.
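The clear-sky parameterizations named above all express Rld as an effective atmospheric emissivity times σTa^4; only the emissivity formula differs between models. As a minimal illustration (not the MODIS-driven implementation used in the study), the Brutsaert (1975) model can be sketched as follows; the input values are purely illustrative:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4


def brutsaert_rld(t_air_k: float, e_a_hpa: float) -> float:
    """Clear-sky downward long-wave radiation (W m^-2).

    Brutsaert (1975): effective emissivity = 1.24 * (e_a / T_a)^(1/7),
    with vapour pressure e_a in hPa and air temperature T_a in K.
    """
    emissivity = 1.24 * (e_a_hpa / t_air_k) ** (1.0 / 7.0)
    return emissivity * SIGMA * t_air_k ** 4


# Illustrative mid-latitude values: 288 K air temperature, 10 hPa vapour pressure
rld = brutsaert_rld(288.0, 10.0)
```

The Abramowitz et al. (2012) model instead regresses Rld directly on Ta and ea; the same two MODIS-derived inputs therefore drive both of the best-performing models.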

  7. Satellite based hydroclimatic understanding of evolution of Dengue and Zika virus

    Science.gov (United States)

    Khan, R.; Jutla, A.; Colwell, R. R.

    2017-12-01

    Vector-borne diseases are prevalent in tropical and subtropical regions, especially in Africa, South America, and Asia. Vector eradication is perhaps not possible, since pathogens adapt to the local environment. In the absence of appropriate vaccines for Dengue and Zika virus, the burden of these two infections continues to increase in several geographical locations. Aedes spp. are among the major vectors for Dengue and Zika viruses. Etiologies of Dengue and Zika are still evolving, but a key question remains: how can one species of mosquito transmit two different infections? We argue that a set of conducive environmental conditions, modulated by regional climatic and weather processes, may lead to abundance of a specific virus. Using satellite-based rainfall (TRMM/GPM), land surface temperature (MODIS) and dew point temperature (AIRS/MERRA), we have identified thresholds that can provide an estimate of the risk of abundance of Dengue or Zika virus at least a few weeks in advance. We will discuss a framework coupling satellite-derived hydroclimatic and societal processes to predict environmental niches favourable to Dengue or Zika risk in human populations on a global scale.
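A threshold scheme of the kind described can be sketched as a simple decision rule. The variable names and cutoff values below are hypothetical placeholders for illustration only; the abstract does not report the thresholds actually derived from the TRMM/GPM, MODIS and AIRS/MERRA records:

```python
def outbreak_risk(rain_mm_wk, lst_c, dewpoint_c,
                  rain_thresh=50.0, lst_range=(20.0, 32.0), dew_thresh=18.0):
    """Flag weeks whose hydroclimatic conditions favour Aedes abundance.

    All threshold values here are illustrative placeholders, not the
    thresholds derived in the study.
    """
    warm = lst_range[0] <= lst_c <= lst_range[1]   # land surface temperature window
    wet = rain_mm_wk >= rain_thresh                # sufficient weekly rainfall
    humid = dewpoint_c >= dew_thresh               # sufficient near-surface moisture
    return "elevated" if (warm and wet and humid) else "baseline"


# A warm, wet, humid week crosses all three thresholds
risk = outbreak_risk(rain_mm_wk=80.0, lst_c=28.0, dewpoint_c=22.0)
```

In the framework described, such a rule would be evaluated per grid cell a few weeks ahead, using the satellite products as inputs.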

  8. An Exploitation of Satellite-based Observation for Health Information: The UFOS Project

    International Nuclear Information System (INIS)

    Mangin, A.; Morel, M.; Fanton d'Andon, O.

    2000-01-01

    Short, medium and long-term trends of UV intensity levels are of crucial importance both for assessing effective biological impacts on the human population and for implementing adequate preventive behaviours. Better information on a large spatial scale and increased public awareness of the short-term variations in UV values will help to support health agencies' goals of educating the public on UV risks. The Ultraviolet Forecast Operational Service Project (UFOS), financed in part by the European Commission/DG Information Society (TEN-TELECOM programme), aims to exploit satellite-based observations and to supply a set of UV products directly useful to health care. The short-term objective is to demonstrate the technical and economic feasibility and the benefits that could be brought by such a system. UFOS is carried out by ACRI, with the support of an Advisory Group chaired by WHO and involving representation from the sectors of Health (WHO, INTERSUN collaborating centres, ZAMBON), Environment (WMO, IASB), and Telecommunications (EURECOM, IMET). (author)

  9. Satellite-based ET estimation using Landsat 8 images and SEBAL model

    Directory of Open Access Journals (Sweden)

    Bruno Bonemberger da Silva

    Estimation of evapotranspiration is a key factor in achieving sustainable water management in irrigated agriculture because it represents the water use of crops. Satellite-based estimates provide advantages over direct methods such as lysimeters, especially when the objective is to calculate evapotranspiration at a regional scale. The present study aimed to estimate actual evapotranspiration (ET) at a regional scale using Landsat 8 OLI/TIRS images and complementary data collected from a weather station. The SEBAL model was applied in South-West Paraná, a region composed of irrigated and dry agricultural areas, native vegetation and urban areas. Five Landsat 8 images (path 223, row 78; DOY 336/2013, 19/2014, 35/2014, 131/2014 and 195/2014) were used, from which ET at daily scale was estimated as a residual of the surface energy balance to produce ET maps. The steps to obtain ET using SEBAL include radiometric calibration and calculation of reflectance, surface albedo, vegetation indices (NDVI, SAVI and LAI) and emissivity. These parameters were obtained from the reflective bands of the orbital sensor, with surface temperature estimated from the thermal band. The ET values estimated in agricultural areas, native vegetation and urban areas using the SEBAL algorithm were compatible with those reported in the literature, and errors between the ET estimates from the SEBAL model and the Penman-Monteith FAO 56 equation were less than or equal to 1.00 mm day-1.
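The core of SEBAL is that the latent heat flux is the residual of the surface energy balance, LE = Rn - G - H, which is then scaled to daily ET in mm. A minimal sketch with invented flux values (the real model derives net radiation Rn, soil heat flux G and sensible heat flux H per pixel from the calibrated bands):

```python
def latent_heat_flux(rn: float, g: float, h: float) -> float:
    """Latent heat flux (W m^-2) as the surface energy-balance residual:
    LE = Rn - G - H."""
    return rn - g - h


def daily_et_mm(le_daily_wm2: float) -> float:
    """Convert a daily-mean latent heat flux to evapotranspiration (mm/day).

    1 W m^-2 sustained over a day = 0.0864 MJ m^-2; dividing by the
    latent heat of vaporization (~2.45 MJ kg^-1) gives mm of water.
    """
    return le_daily_wm2 * 0.0864 / 2.45


le = latent_heat_flux(rn=500.0, g=50.0, h=200.0)  # -> 250.0 W m^-2
et = daily_et_mm(150.0)                           # for a plausible daily-mean LE
```

In practice SEBAL extrapolates the instantaneous residual to the daily scale via the evaporative fraction; the conversion above only shows the units involved.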

  10. Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology

    Science.gov (United States)

    Jin, Z.; Azzari, G.; Lobell, D. B.

    2016-12-01

    Field-scale mapping of crop yields with satellite data often relies on the use of crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare to prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves SCYM performance in capturing the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM while significantly reducing its uncertainty.

  11. Satellite-based detection of global urban heat-island temperature influence

    Science.gov (United States)

    Gallo, K.P.; Adegoke, Jimmy O.; Owen, T.W.; Elvidge, C.D.

    2002-01-01

    This study utilizes a satellite-based methodology to assess the urban heat-island influence during warm season months for over 4400 stations included in the Global Historical Climatology Network of climate stations. The methodology includes local and regional satellite retrievals of an indicator of the presence of green photosynthetically active vegetation at and around the stations. The difference in local and regional samples of the normalized difference vegetation index (NDVI) is used to estimate differences in mean air temperature. Stations classified as urban averaged 0.90 °C (N. Hemisphere) and 0.92 °C (S. Hemisphere) warmer than the surrounding environment on the basis of the NDVI-derived temperature estimates. Additionally, stations classified as rural averaged 0.19 °C (N. Hemisphere) and 0.16 °C (S. Hemisphere) warmer than the surrounding environment. The NDVI-derived temperature estimates were found to be in reasonable agreement with temperature differences observed between climate stations. The results suggest that satellite-derived data sets can be used to estimate the urban heat-island temperature influence on a global basis and that a more detailed analysis of rural stations and their surrounding environment may be necessary to assure that temperature trends derived from assumed rural environments are not influenced by changes in land use/land cover. Copyright 2002 by the American Geophysical Union.
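The indicator behind this methodology is the difference between NDVI sampled locally around a station and NDVI sampled over the surrounding region: a more negative contrast means less vegetation at the station than in its surroundings, and hence a warmer local environment. A toy version of that contrast metric (the sample values are invented, and the NDVI-to-temperature conversion used in the study is not reproduced here):

```python
import numpy as np


def ndvi_contrast(local_ndvi, regional_ndvi) -> float:
    """Local-minus-regional NDVI difference: a proxy for the urban
    heat-island air-temperature contrast (more negative -> sparser
    vegetation locally -> warmer station surroundings)."""
    return float(np.mean(local_ndvi) - np.mean(regional_ndvi))


# Sparse vegetation near an urban station vs. a greener surrounding region
delta = ndvi_contrast([0.15, 0.20, 0.18], [0.45, 0.50, 0.40])
```

The study then maps such NDVI differences to mean air-temperature differences via an empirical relationship fitted against station observations.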

  12. Development and validation of satellite-based estimates of surface visibility

    Science.gov (United States)

    Brunner, J.; Pierce, R. B.; Lenzen, A.

    2016-02-01

    A satellite-based surface visibility retrieval has been developed using Moderate Resolution Imaging Spectroradiometer (MODIS) measurements as a proxy for Advanced Baseline Imager (ABI) data from the next generation of Geostationary Operational Environmental Satellites (GOES-R). The retrieval uses a multiple linear regression approach to relate satellite aerosol optical depth, fog/low cloud probability and thickness retrievals, and meteorological variables from numerical weather prediction forecasts to National Weather Service Automated Surface Observing System (ASOS) surface visibility measurements. Validation using independent ASOS measurements shows that the GOES-R ABI surface visibility retrieval (V) has an overall success rate of 64.5 % for classifying clear (V ≥ 30 km) and moderate (10 km ≤ V < 30 km) visibility conditions. Retrievals were also evaluated against the United States Environmental Protection Agency (EPA) and National Park Service (NPS) Interagency Monitoring of Protected Visual Environments (IMPROVE) network and provide useful information to the regional planning offices responsible for developing mitigation strategies required under the EPA's Regional Haze Rule, particularly during regional haze events associated with smoke from wildfires.
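The statistical core of the retrieval, a multiple linear regression of ASOS visibility on satellite and forecast predictors, can be sketched with ordinary least squares. The predictor columns and values below are invented stand-ins for the real AOD, fog/low-cloud, and meteorological inputs:

```python
import numpy as np

# Toy predictor matrix: columns = aerosol optical depth, fog/low-cloud
# probability, boundary-layer relative humidity (all values invented)
X = np.array([[0.10, 0.05, 40.0],
              [0.30, 0.20, 60.0],
              [0.60, 0.70, 85.0],
              [0.90, 0.90, 95.0]])
y = np.array([45.0, 25.0, 8.0, 2.0])  # matched ASOS visibility, km

# Multiple linear regression: least-squares fit with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef
```

A trained regression of this form can then be applied pixel-wise to satellite retrievals, with the continuous visibility estimate binned into the clear/moderate categories afterwards.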

  13. Satellite-Based Assessment of Rainfall-Triggered Landslide Hazard for Situational Awareness

    Science.gov (United States)

    Kirschbaum, Dalia; Stanley, Thomas

    2018-03-01

    Determining the time, location, and severity of natural disaster impacts is fundamental to formulating mitigation strategies, appropriate and timely responses, and robust recovery plans. A Landslide Hazard Assessment for Situational Awareness (LHASA) model was developed to indicate potential landslide activity in near real-time. LHASA combines satellite-based precipitation estimates with a landslide susceptibility map derived from information on slope, geology, road networks, fault zones, and forest loss. Precipitation data from the Global Precipitation Measurement (GPM) mission are used to identify rainfall conditions from the past 7 days. When rainfall is considered to be extreme and susceptibility values are moderate to very high, a "nowcast" is issued to indicate the times and places where landslides are more probable. When LHASA nowcasts were evaluated with a Global Landslide Catalog, the probability of detection (POD) ranged from 8% to 60%, depending on the evaluation period, precipitation product used, and the size of the spatial and temporal window considered around each landslide point. Applications of the LHASA system are also discussed, including how LHASA is used to estimate long-term trends in potential landslide activity at a nearly global scale and how it can be used as a tool to support disaster risk assessment. LHASA is intended to provide situational awareness of landslide hazards in near real-time, providing a flexible, open-source framework that can be adapted to other spatial and temporal scales based on data availability.
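The LHASA decision logic described above, extreme recent rainfall combined with moderate-to-very-high susceptibility, can be written as a schematic rule. The 95th-percentile cutoff below is an assumption for illustration, not necessarily the operational threshold used with the GPM antecedent-rainfall index:

```python
def lhasa_nowcast(rain_percentile: float, susceptibility: str) -> bool:
    """Issue a landslide 'nowcast' when recent rainfall is extreme AND
    susceptibility is moderate to very high (schematic decision rule;
    the 95th-percentile cutoff is an illustrative assumption)."""
    extreme_rain = rain_percentile >= 95
    susceptible = susceptibility in {"moderate", "high", "very high"}
    return extreme_rain and susceptible


nowcast = lhasa_nowcast(rain_percentile=99, susceptibility="high")
```

Both conditions must hold, which is why evaluation against a landslide catalog stresses the spatial and temporal window used to match nowcasts to reported events.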

  14. Using satellite-based rainfall estimates for streamflow modelling: Bagmati Basin

    Science.gov (United States)

    Shrestha, M.S.; Artan, Guleid A.; Bajracharya, S.R.; Sharma, R. R.

    2008-01-01

    In this study, we have described a hydrologic modelling system that uses satellite-based rainfall estimates and weather forecast data for the Bagmati River Basin of Nepal. The hydrologic model described is the US Geological Survey (USGS) Geospatial Stream Flow Model (GeoSFM). The GeoSFM is a spatially semidistributed, physically based hydrologic model. We have used the GeoSFM to estimate the streamflow of the Bagmati Basin at Pandhera Dovan hydrometric station. To determine the hydrologic connectivity, we have used the USGS Hydro1k DEM dataset. The model was forced by daily estimates of rainfall and evapotranspiration derived from weather model data. The rainfall estimates used for the modelling are those produced by the National Oceanic and Atmospheric Administration Climate Prediction Centre and observed at ground rain gauge stations. The model parameters were estimated from globally available soil and land cover datasets – the Digital Soil Map of the World by FAO and the USGS Global Land Cover dataset. The model predicted the daily streamflow at Pandhera Dovan gauging station. The comparison of the simulated and observed flows at Pandhera Dovan showed that the GeoSFM model performed well in simulating the flows of the Bagmati Basin.
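Comparisons of simulated and observed flows like the one described are commonly summarized with the Nash-Sutcliffe efficiency; the abstract does not state which skill metric was used, so this is a generic sketch with invented daily flows:

```python
import numpy as np


def nash_sutcliffe(sim, obs) -> float:
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the mean of the observations."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)


# Illustrative daily flows (m^3/s) at a gauging station
obs = [10.0, 14.0, 30.0, 22.0, 12.0]
sim = [11.0, 13.0, 27.0, 24.0, 11.0]
nse = nash_sutcliffe(sim, obs)
```

Values approaching 1 would support the abstract's conclusion that the model "performed well" in simulating the basin's flows.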

  15. Satellite-based trends of solar radiation and cloud parameters in Europe

    Science.gov (United States)

    Pfeifroth, Uwe; Bojanowski, Jedrzej S.; Clerbaux, Nicolas; Manara, Veronica; Sanchez-Lorenzo, Arturo; Trentmann, Jörg; Walawender, Jakub P.; Hollmann, Rainer

    2018-04-01

    Solar radiation is the main driver of the Earth's climate. Measuring solar radiation and analysing its interaction with clouds are essential for understanding the climate system. The EUMETSAT Satellite Application Facility on Climate Monitoring (CM SAF) generates satellite-based, high-quality climate data records, with a focus on the energy balance and water cycle. Here, several of these data records are analyzed in a common framework to assess the consistency in trends and spatio-temporal variability of surface solar radiation, top-of-atmosphere reflected solar radiation and cloud fraction. This multi-parameter analysis focuses on Europe and covers the time period from 1992 to 2015. A high correlation between these three variables has been found over Europe. An overall consistency of the climate data records reveals an increase of surface solar radiation and a decrease in top-of-atmosphere reflected radiation. In addition, those trends are confirmed by negative trends in cloud cover. This consistency documents the high quality and stability of the CM SAF climate data records, which are mostly derived independently from each other. The results of this study indicate that one of the main reasons for the positive trend in surface solar radiation since the 1990s is a decrease in cloud coverage, even if an aerosol contribution cannot be completely ruled out.

  16. Impact of ubiquitous inhibitors on the GUS gene reporter system: evidence from the model plants Arabidopsis, tobacco and rice and correction methods for quantitative assays of transgenic and endogenous GUS

    Directory of Open Access Journals (Sweden)

    Gerola Paolo D

    2009-12-01

    Background: The β-glucuronidase (GUS) gene reporter system is one of the most effective and widely employed techniques in the study of gene regulation in plant molecular biology. Improved protocols for GUS assays have rendered the original method described by Jefferson amenable to various requirements and conditions, but the serious limitation caused by inhibitors of the enzyme activity in plant tissues has thus far been underestimated. Results: We report that inhibitors of GUS activity are ubiquitous in organ tissues of Arabidopsis, tobacco and rice, and significantly bias quantitative assessment of GUS activity in plant transformation experiments. Combined with previous literature reports on non-model species, our findings suggest that inhibitors may be common components of plant cells, with variable affinity towards the E. coli enzyme. The reduced inhibitory capacity towards the plant endogenous GUS discredits the hypothesis of a regulatory role of these compounds in plant cells, and their effect on the bacterial enzyme is better interpreted as a side effect of their interaction with GUS during the assay. This is likely to have a bearing also on histochemical analyses, leading to inaccurate evaluations of GUS expression. Conclusions: In order to achieve reliable results, inhibitor activity should be routinely tested during quantitative GUS assays. Two separate methods to correct the measured activity of the transgenic and endogenous GUS are presented.
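One generic way to correct for inhibitors, in the spirit of the paper's recommendation to test inhibitor activity routinely, is a spike-recovery correction: spike a known amount of purified enzyme into the extract and scale the measured activity by the fraction recovered. This sketch is illustrative only and is not necessarily either of the two specific correction methods the paper presents:

```python
def inhibition_corrected_activity(measured: float,
                                  spike_measured: float,
                                  spike_expected: float) -> float:
    """Correct a GUS activity measurement for inhibitors in the extract.

    A known amount of purified enzyme is spiked into the extract; the
    ratio of recovered to expected spike activity estimates how strongly
    the assay is inhibited. Generic spike-recovery sketch, not the
    paper's specific procedure.
    """
    recovery = spike_measured / spike_expected  # 1.0 means no inhibition
    return measured / recovery


# An extract that recovers only 60% of a spiked standard: scale up accordingly
corrected = inhibition_corrected_activity(measured=120.0,
                                          spike_measured=60.0,
                                          spike_expected=100.0)
```

Without such a correction, the 40% loss in this example would be wrongly attributed to lower promoter activity rather than to inhibition of the assay.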

  17. Air traffic management system design using satellite based geo-positioning and communications assets

    Science.gov (United States)

    Horkin, Phil

    1995-01-01

    The current FAA and ICAO FANS vision of Air Traffic Management will transition the functions of Communications, Navigation, and Surveillance to satellite-based assets in the 21st century. Fundamental to widespread acceptance of this vision is a geo-positioning system that can provide worldwide access with best-case differential GPS performance, but without the associated problems. A robust communications capability linking aircraft and towers to meet voice and data requirements is also essential. The current GPS constellation does not provide continuous global coverage with a sufficient number of satellites to meet the precision landing requirements set by the world community. Periodic loss of the minimum number of satellites in view creates an integrity problem, which prevents GPS from becoming the primary system for navigation. Furthermore, there is reluctance on the part of many countries to depend on assets like GPS and GLONASS, which are controlled by military communities. This paper addresses these concerns and provides a system solving the key issues associated with navigation, automatic dependent surveillance, and flexible communications. It contains an independent GPS-like navigation system with 27 satellites providing global coverage with a minimum of six in view at all times. Robust communications is provided by a network of TDMA/FDMA communications payloads contained on these satellites. This network can support simultaneous communications for up to 30,000 links, nearly enough to simultaneously support three times the current global fleet of jumbo air passenger aircraft. All of the required hardware is directly traceable to existing designs.

  18. Categorizing natural disaster damage assessment using satellite-based geospatial techniques

    Science.gov (United States)

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.

    2008-01-01

    Remote sensing of a natural disaster's damage offers an exciting backup and/or alternative to traditional means of on-site damage assessment. Although necessary for complete assessment of damage areas, ground-based damage surveys conducted in the aftermath of natural hazard passage can be complicated by on-site difficulties (e.g., interaction with various authorities and emergency services) and hazards (e.g., downed power lines, gas lines, etc.), the need for rapid mobilization (particularly for remote locations), and the increasing cost of rapid physical transportation of manpower and equipment. Satellite image analysis, because of its global ubiquity, its capacity for repeated independent analysis, and, as we demonstrate here, its ability to verify on-site damage assessment, provides an interesting new perspective and investigative aid to researchers. Using one of the strongest tornado events in US history, the 3 May 1999 Oklahoma City Tornado, as a case example, we digitized the tornado damage path and co-registered it using pre- and post-event Landsat Thematic Mapper image data to perform a damage assessment. We employed several geospatial approaches, specifically the Getis index, Geary's C, and two lacunarity approaches, to categorize damage characteristics according to the original Fujita tornado damage scale (F-scale). Our results indicate strong relationships between spatial indices computed within a local window and tornado F-scale damage categories identified through the ground survey. Consequently, linear regression models, even incorporating just a single band, appear effective in identifying F-scale damage categories from satellite imagery. This study demonstrates that satellite-based geospatial techniques can effectively add spatial perspectives to natural disaster damage assessment, in particular, for this case study, tornado damage.
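Of the spatial indices mentioned, Geary's C is straightforward to compute within a local window: it contrasts squared differences between neighbouring pixel values with the overall variance (C < 1 indicates positive spatial autocorrelation). A minimal sketch on a toy four-pixel transect with nearest-neighbour (rook) weights; the window extraction from imagery is omitted:

```python
import numpy as np


def gearys_c(values, weights) -> float:
    """Geary's C for a 1-D array of values and a symmetric spatial
    weights matrix: C = (n-1) * sum_ij w_ij (x_i - x_j)^2
                        / (2 * W * sum_i (x_i - mean)^2)."""
    x = np.asarray(values, float)
    w = np.asarray(weights, float)
    n = len(x)
    num = (n - 1) * np.sum(w * (x[:, None] - x[None, :]) ** 2)
    den = 2.0 * w.sum() * np.sum((x - x.mean()) ** 2)
    return float(num / den)


# Four pixels along a transect with nearest-neighbour adjacency
x = [1.0, 2.0, 3.0, 4.0]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])  # symmetric rook weights
c = gearys_c(x, w)  # -> 0.3, i.e. strong positive autocorrelation
```

Computed in a moving window over damage imagery, such indices give the per-window statistics that the study relates to ground-surveyed F-scale categories.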

  19. Ground- and satellite-based evidence of the biophysical mechanisms behind the greening Sahel.

    Science.gov (United States)

    Brandt, Martin; Mbow, Cheikh; Diouf, Abdoul A; Verger, Aleixandre; Samimi, Cyrus; Fensholt, Rasmus

    2015-04-01

    After a dry period with prolonged droughts in the 1970s and 1980s, recent scientific findings suggest that the decades of abnormally dry conditions in the Sahel have been reversed by positive anomalies in rainfall. Various remote sensing studies have observed a positive trend in vegetation greenness over the last decades, known as the re-greening of the Sahel. However, little investment has been made in long-term ground-based data collections to evaluate and better understand the biophysical mechanisms behind these findings. Thus, deductions on a possible increment in biomass remain speculative. Our aim is to bridge these gaps and give specifics on the biophysical background factors of the re-greening Sahel. A trend analysis was therefore applied to long time series (1987-2013) of satellite-based vegetation and rainfall data, as well as to ground observations of leaf biomass of woody species, herb biomass, and woody species abundance in different ecosystems located in the Sahel zone of Senegal. We found that the positive trend observed in satellite vegetation time series (+36%) is caused by an increment in in situ measured biomass (+34%), which is highly controlled by precipitation (+40%). Whereas herb biomass shows large inter-annual fluctuations rather than a clear trend, leaf biomass of woody species has doubled within 27 years (+103%). This increase in woody biomass was not, however, reflected in biodiversity: 11 of 16 woody species declined in abundance over the period. We conclude that the observed greening in the Senegalese Sahel is primarily related to an increasing tree cover that caused satellite-derived vegetation indices to increase with the rainfall reversal. © 2014 John Wiley & Sons Ltd.

  20. Advances in the Validation of Satellite-Based Maps of Volcanic Sulfur Dioxide Plumes

    Science.gov (United States)

    Realmuto, V. J.; Berk, A.; Acharya, P. K.; Kennett, R.

    2013-12-01

    The monitoring of volcanic gas emissions with gas cameras, spectrometer arrays, tethersondes, and UAVs presents new opportunities for the validation of satellite-based retrievals of gas concentrations. Gas cameras and spectrometer arrays provide instantaneous observations of the gas burden, or concentration along an optical path, over broad sections of a plume, similar to the observations acquired by nadir-viewing satellites. Tethersondes and UAVs provide us with direct measurements of the vertical profiles of gas concentrations within plumes. This presentation will focus on our current efforts to validate ASTER-based maps of sulfur dioxide plumes at Turrialba and Kilauea Volcanoes (located in Costa Rica and Hawaii, respectively). These volcanoes, which are the subjects of comprehensive monitoring programs, are challenging targets for thermal infrared (TIR) remote sensing due to the warm and humid atmospheric conditions. The high spatial resolution of ASTER in the TIR (90 meters) allows us to map the plumes back to their source vents, but also requires us to pay close attention to the temperature and emissivity of the surfaces beneath the plumes. Our knowledge of the surface and atmospheric conditions is never perfect, and we employ interactive mapping techniques that allow us to evaluate the impact of these uncertainties on our estimates of plume composition. To accomplish this interactive mapping we have developed the Plume Tracker tool kit, which integrates retrieval procedures, visualization tools, and a customized version of the MODTRAN radiative transfer (RT) model under a single graphical user interface (GUI). We are in the process of porting the RT calculations to graphics processing units (GPUs) with the goal of achieving a 100-fold increase in the speed of computation relative to conventional CPU-based processing. We will report on our progress with this evolution of Plume Tracker. Portions of this research were conducted at the Jet Propulsion Laboratory

  1. Long-term change analysis of satellite-based evapotranspiration over Indian vegetated surface

    Science.gov (United States)

    Gupta, Shweta; Bhattacharya, Bimal K.; Krishna, Akhouri P.

    2016-05-01

    In the present study, trends in satellite-based annual evapotranspiration (ET) and the natural forcing factors responsible for them were analyzed. Thirty years (1981-2010) of ET data at 0.08° grid resolution, generated over the Indian region from optical-thermal observations from NOAA PAL and MODIS AQUA satellites, were used. Long-term data on gridded (0.5° x 0.5°) annual rainfall (RF), annual mean surface soil moisture (SSM) from the ERS scatterometer at 25 km resolution, and annual mean incoming shortwave radiation from MERRA-2D reanalysis were also analyzed. Mann-Kendall tests were performed on the time series for trend analysis. Mean annual ET loss from the Indian agro-ecosystem was found to be almost double (1100 cubic km) that of the Indian forest ecosystem (550 cubic km). Rainfed vegetation systems such as forest, rainfed cropland and grassland showed declining ET trends of -4.8, -0.6 and -0.4 cubic km yr-1, respectively, over the 30 years. Irrigated cropland initially showed an ET decline up to 1995 of -0.8 cubic km yr-1, possibly due to solar dimming, followed by increasing ET of 0.9 cubic km yr-1 after 1995. A cross-over point between the forest ET decline and the ET increase in irrigated cropland was detected in 2008. During 2001-2010, four agriculturally important groups of Indian states (eastern, central, western and southern) showed significantly increasing ET trends, with S-scores of 15-25 and Z-scores of 1.09-2.9. The increasing ET in the western and southern states was found to be coupled with increases in annual rainfall and SSM, whereas in the eastern and central states a significant increase in ET was observed without a significant trend in rainfall. The study recommends investigating the influence of anthropogenic factors, such as expansion of the irrigated area, increased groundwater pumping for irrigation, and changes in cropping pattern and cultivars, on the increasing ET.
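The Mann-Kendall S and Z scores quoted above come from the standard rank-based trend test, which (ignoring the tie correction, for brevity) can be sketched as:

```python
import math


def mann_kendall(series):
    """Mann-Kendall trend test: returns the S score and the normal
    approximation Z (no tie correction, for illustration).

    S sums the signs of all pairwise differences x_j - x_i (j > i);
    Z standardizes S by its variance under the no-trend null.
    """
    x = list(series)
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z


# A strictly increasing toy series attains the maximum S = n(n-1)/2
s, z = mann_kendall([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
```

Positive S with |Z| above the chosen critical value (e.g. 1.96 at the 5% level) indicates a significant upward trend, which is how the S-scores of 15-25 and Z-scores of 1.09-2.9 reported above are read.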

  2. Satellite-based emergency mapping using optical imagery: experience and reflections from the 2015 Nepal earthquakes

    Science.gov (United States)

    Williams, Jack G.; Rosser, Nick J.; Kincey, Mark E.; Benjamin, Jessica; Oven, Katie J.; Densmore, Alexander L.; Milledge, David G.; Robinson, Tom R.; Jordan, Colm A.; Dijkstra, Tom A.

    2018-01-01

    Landslides triggered by large earthquakes in mountainous regions contribute significantly to overall earthquake losses and pose a major secondary hazard that can persist for months or years. While scientific investigations of coseismic landsliding are increasingly common, there is no protocol for rapid (hours-to-days) humanitarian-facing landslide assessment and no published recognition of what is possible and what is useful to compile immediately after the event. Drawing on the 2015 Mw 7.8 Gorkha earthquake in Nepal, we consider how quickly a landslide assessment based upon manual satellite-based emergency mapping (SEM) can be realistically achieved and review the decisions taken by analysts to ascertain the timeliness and type of useful information that can be generated. We find that, at present, many forms of landslide assessment are too slow to generate relative to the speed of a humanitarian response, despite increasingly rapid access to high-quality imagery. Importantly, the value of information on landslides evolves rapidly as a disaster response develops, so identifying the purpose, timescales, and end users of a post-earthquake landslide assessment is essential to inform the approach taken. It is clear that discussions are needed on the form and timing of landslide assessments, and how best to present and share this information, before rather than after an earthquake strikes. In this paper, we share the lessons learned from the Gorkha earthquake, with the aim of informing the approach taken by scientists to understand the evolving landslide hazard in future events and the expectations of the humanitarian community involved in disaster response.

  3. Quantitative lymphography

    International Nuclear Information System (INIS)

    Mostbeck, A.; Lofferer, O.; Kahn, P.; Partsch, H.; Koehn, H.; Bialonczyk, Ch.; Koenig, B.

    1984-01-01

    Labelled colloids and macromolecules are removed lymphatically. The uptake of tracer in the regional lymph nodes is a parameter of lymphatic flow. Due to great variations in patient shape (obesity, cachexia) and accompanying variations in counting efficiency, quantitative measurements with reasonable accuracy have not been reported to date. A new approach to regional absorption correction is based on the combination of transmission and emission scans for each patient. The transmission scan is used for calculation of an absorption correction matrix. Accurate superposition of the correction matrix and the emission scan is achieved by computing the centers of gravity of point sources and, in the case of aligning opposite views, by cross-correlation of binary images. In phantom studies the recovery was high (98.3%) and the coefficient of variation of repeated measurements below 1%. In patient studies a standardized stress is a prerequisite for reliable and comparable results. Discrimination between normals (14.3 ± 4.2D%) and patients with lymphedema (2.05 ± 2.5D%) was highly significant using praefascial lymphography and sc injection. Clearance curve analysis of the activities at the injection site, however, gave no reliable data for this purpose. In normals, the uptake in lymph nodes after im injection is one order of magnitude lower than the uptake after sc injection. The discrimination between normals and patients with postthrombotic syndrome was significant. Lymphography after ic injection was in the normal range in 2/3 of the patients with lymphedema and is therefore of no diagnostic value. The difference in uptake after ic and sc injection, demonstrated for the first time by our quantitative method, provides new insights into the pathophysiology of lymphedema and needs further investigation. (Author)
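The absorption-correction idea, dividing the emission scan by an attenuation map derived from the transmission scan, can be sketched pixel-wise. This first-order version, which assumes a blank (no-patient) reference scan, is an illustration of the principle, not the authors' exact matrix construction or view-alignment procedure:

```python
import numpy as np


def absorption_corrected(emission, transmission, blank):
    """Pixel-wise absorption correction.

    The transmission scan divided by a blank (no-patient) scan gives the
    attenuation factor along each path; dividing the emission image by
    it compensates for counts absorbed in the patient. First-order
    sketch of the transmission/emission combination described.
    """
    attenuation = np.asarray(transmission, float) / np.asarray(blank, float)
    return np.asarray(emission, float) / attenuation


# A pixel where half the transmission beam is absorbed: counts are doubled
corrected = absorption_corrected([100.0], [50.0], [100.0])
```

Accurate superposition of the two scans is the critical step, hence the centre-of-gravity and cross-correlation alignment described in the abstract.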

  4. Publisher Correction

    DEFF Research Database (Denmark)

    Turcot, Valérie; Lu, Yingchang; Highland, Heather M

    2018-01-01

    In the published version of this paper, the name of author Emanuele Di Angelantonio was misspelled. This error has now been corrected in the HTML and PDF versions of the article.

  5. Author Correction

    DEFF Research Database (Denmark)

    Grundle, D S; Löscher, C R; Krahmann, G

    2018-01-01

    A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.

  6. Advanced Oil Spill Detection Algorithms For Satellite Based Maritime Environment Monitoring

    Science.gov (United States)

    Radius, Andrea; Azevedo, Rui; Sapage, Tania; Carmo, Paulo

    2013-12-01

    During the last years, the increasing occurrence of pollution and the alarming deterioration of the environmental health of the sea have led to the need for global monitoring capabilities, namely for marine environment management in terms of oil spill detection and indication of the suspected polluter. The sensitivity of Synthetic Aperture Radar (SAR) to different phenomena on the sea, especially for oil spill and vessel detection, makes it a key instrument for global pollution monitoring. The SAR performance in maritime pollution monitoring is being operationally explored by a set of service providers on behalf of the European Maritime Safety Agency (EMSA), which launched in 2007 the CleanSeaNet (CSN) project, a pan-European satellite-based oil monitoring service. EDISOFT, a service provider for CSN from the beginning, is continuously investing in R&D activities that will ultimately lead to better algorithms and better performance in oil spill detection from SAR imagery. This strategy is being pursued through EDISOFT's participation in the FP7 EC Sea-U project and in the Automatic Oil Spill Detection (AOSD) ESA project. The Sea-U project aims to improve the current state of oil spill detection algorithms through the maximization of informative content obtained with data fusion, the exploitation of different types of data/sensors, and the development of advanced image processing, segmentation and classification techniques. The AOSD project is closely related to the operational segment, because it is focused on the automation of the oil spill detection processing chain, integrating auxiliary data, like wind information, together with image and geometry analysis techniques. 
The synergy between these different objectives (R&D versus operational) allowed EDISOFT to develop oil spill detection software, that combines the operational automatic aspect, obtained through dedicated integration of the processing chain in the existing open source NEST

  7. Regionalization Study of Satellite based Hydrological Model (SHM) in Hydrologically Homogeneous River Basins of India

    Science.gov (United States)

    Kumari, Babita; Paul, Pranesh Kumar; Singh, Rajendra; Mishra, Ashok; Gupta, Praveen Kumar; Singh, Raghvendra P.

    2017-04-01

    A new semi-distributed conceptual hydrological model, namely the Satellite based Hydrological Model (SHM), has been developed under the 'PRACRITI-2' program of the Space Application Centre (SAC), Ahmedabad for sustainable water resources management of India by using data from Indian Remote Sensing satellites. Entire India is divided into 5 km x 5 km grid cells, and properties at the center of a cell are assumed to represent the property of the cell. SHM contains five modules, namely surface water, forest, snow, groundwater and routing. Two empirical equations (SCS-CN and Hargreaves) and the water balance method are used in the surface water module; the forest module is based on calculations of water balance and subsurface dynamics. The 2-D Boussinesq equation is used for groundwater modelling and is solved with an implicit finite-difference scheme. The routing module follows a distributed routing approach which requires flow path and network, with travel time estimation as the key point. The aim of this study is to evaluate the performance of SHM using a regionalization technique, which also checks the usefulness of the model under data-scarce conditions or for ungauged basins. However, homogeneity analysis is a prerequisite to regionalization. A similarity index (Φ) and hierarchical agglomerative cluster analysis are adopted to test the homogeneity in terms of physical attributes of three basins, namely Brahmani (39,033 km²), Baitarani (10,982 km²) and Kangsabati (9,660 km²), with respect to the Subarnarekha (29,196 km²) basin. The results of both homogeneity analyses show that the Brahmani basin is the most homogeneous with respect to the Subarnarekha river basin in terms of physical characteristics (land use land cover classes, soil type and elevation). The calibration and validation of the model parameters of the Brahmani basin is in progress; these parameters are to be transferred into the SHM set-up of the Subarnarekha basin and the results compared with those of the calibrated and validated
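The SCS-CN equation used in the surface water module is a standard empirical relation; a minimal sketch (not SHM's actual code) with the conventional initial-abstraction ratio of 0.2:

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Direct runoff Q (mm) from storm rainfall P (mm) using the standard
    SCS Curve Number relation (metric form):
        S  = 25400/CN - 254              (potential retention, mm)
        Ia = lam * S                     (initial abstraction)
        Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = lam * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: 50 mm of rainfall on a grid cell with curve number 80.
q = scs_cn_runoff(50.0, 80.0)   # about 13.8 mm of direct runoff
```
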

  8. Utility and Value of Satellite-Based Frost Forecasting for Kenya's Tea Farming Sector

    Science.gov (United States)

    Morrison, I.

    2016-12-01

    Frost damage regularly inflicts millions of dollars of crop losses in the tea-growing highlands of western Kenya, a problem that the USAID/NASA Regional Visualization and Monitoring System (SERVIR) program is working to mitigate through a frost monitoring and forecasting product that uses satellite-based temperature and soil moisture data to generate up to three days of advance warning before frost events. This paper presents the findings of a value of information (VOI) study assessing the value of this product based on Kenyan tea farmers' experiences with frost and frost-damage mitigation. Value was calculated based on historic trends of frost frequency, severity, and extent; likelihood of warning receipt and response; and subsequent frost-related crop-loss aversion. Quantification of these factors was derived through inferential analysis of survey data from 400 tea-farming households across the tea-growing regions of Kericho and Nandi, supplemented with key informant interviews with decision-makers at large estate tea plantations, historical frost incident and crop-loss data from estate tea plantations and agricultural insurance companies, and publicly available demographic and economic data. At this time, the product provides a forecasting window of up to three days, and no other frost-prediction methods are used by the large or small-scale farmers of Kenya's tea sector. This represents a significant opportunity for preemptive loss-reduction via Earth observation data. However, the tea-growing community has only two realistic options for frost-damage mitigation: preemptive harvest of available tea leaves to minimize losses, or skiving (light pruning) to facilitate fast recovery from frost damage. Both options are labor-intensive and require a minimum of three days of warning to be viable. 
As a result, the frost forecasting system has a very narrow margin of usefulness, making its value highly dependent on rapid access to the warning messages and flexible access

  9. Towards a Near Real-Time Satellite-Based Flux Monitoring System for the MENA Region

    Science.gov (United States)

    Ershadi, A.; Houborg, R.; McCabe, M. F.; Anderson, M. C.; Hain, C.

    2013-12-01

    Satellite remote sensing has the potential to offer spatially and temporally distributed information on land surface characteristics, which may be used as inputs and constraints for estimating land surface fluxes of carbon, water and energy. Enhanced satellite-based monitoring systems for aiding local water resource assessments and agricultural management activities are particularly needed for the Middle East and North Africa (MENA) region. The MENA region is an area characterized by limited fresh water resources, an often inefficient use of these, and relatively poor in-situ monitoring as a result of sparse meteorological observations. To address these issues, an integrated modeling approach for near real-time monitoring of land surface states and fluxes at fine spatio-temporal scales over the MENA region is presented. This approach is based on synergistic application of multiple sensors and wavebands in the visible to shortwave infrared and thermal infrared (TIR) domain. The multi-scale flux mapping and monitoring system uses the Atmosphere-Land Exchange Inverse (ALEXI) model and associated flux disaggregation scheme (DisALEXI), and the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) in conjunction with model reanalysis data and multi-sensor remotely sensed data from polar orbiting (e.g. Landsat and MODerate resolution Imaging Spectroradiometer (MODIS)) and geostationary (MSG; Meteosat Second Generation) satellite platforms to facilitate time-continuous (i.e. daily) estimates of field-scale water, energy and carbon fluxes. Within this modeling system, TIR satellite data provide information about the sub-surface moisture status and plant stress, obviating the need for precipitation input and a detailed soil surface characterization (i.e. for prognostic modeling of soil transport processes). 
The STARFM fusion methodology blends aspects of high frequency (spatially coarse) and spatially fine resolution sensors and is applied directly to flux output

  10. Towards a more objective evaluation of modelled land-carbon trends using atmospheric CO2 and satellite-based vegetation activity observations

    Directory of Open Access Journals (Sweden)

    D. Dalmonech

    2013-06-01

    Terrestrial ecosystem models used for Earth system modelling show a significant divergence in future patterns of ecosystem processes, in particular the net land–atmosphere carbon exchanges, despite a seemingly common behaviour for the contemporary period. An in-depth evaluation of these models is hence of high importance to better understand the reasons for this disagreement. Here, we develop an extension for existing benchmarking systems by making use of the complementary information contained in the observational records of atmospheric CO2 and remotely sensed vegetation activity to provide a novel set of diagnostics of ecosystem responses to climate variability in the last 30 yr at different temporal and spatial scales. The selection of observational characteristics (traits) specifically considers the robustness of information, given that the uncertainty of both data and evaluation methodology is largely unknown or difficult to quantify. Based on these considerations, we introduce a baseline benchmark – a minimum test that any model has to pass – to provide a more objective, quantitative evaluation framework. The benchmarking strategy can be used for any land surface model, either driven by observed meteorology or coupled to a climate model. We apply this framework to evaluate the offline version of the MPI Earth System Model's land surface scheme JSBACH. We demonstrate that the complementary use of atmospheric CO2 and satellite-based vegetation activity data allows pinpointing of specific model deficiencies that would not be possible by the sole use of atmospheric CO2 observations.

  11. Quantitative chemical exchange saturation transfer (qCEST) MRI - omega plot analysis of RF-spillover-corrected inverse CEST ratio asymmetry for simultaneous determination of labile proton ratio and exchange rate.

    Science.gov (United States)

    Wu, Renhua; Xiao, Gang; Zhou, Iris Yuwen; Ran, Chongzhao; Sun, Phillip Zhe

    2015-03-01

    Chemical exchange saturation transfer (CEST) MRI is sensitive to labile proton concentration and exchange rate, thus allowing measurement of dilute CEST agents and microenvironmental properties. However, CEST measurement depends not only on the CEST agent properties but also on the experimental conditions. Quantitative CEST (qCEST) analysis has been proposed to address the limitation of the commonly used simplistic CEST-weighted calculation. Recent research has shown that the concomitant direct RF saturation (spillover) effect can be corrected using an inverse CEST ratio calculation. We postulated that a simplified qCEST analysis is feasible with omega plot analysis of the inverse CEST asymmetry calculation. Specifically, simulations showed that the numerically derived labile proton ratio and exchange rate were in good agreement with input values. In addition, the qCEST analysis was confirmed experimentally in a phantom with concurrent variation in CEST agent concentration and pH. Also, we demonstrated that the derived labile proton ratio increased linearly with creatine concentration, and that the omega plot analysis can simultaneously determine labile proton ratio and exchange rate in a relatively complex in vitro CEST system. Copyright © 2015 John Wiley & Sons, Ltd.
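The omega plot idea behind the qCEST analysis can be illustrated with synthetic data: for a single exchanging pool, the inverse CEST ratio is approximately linear in 1/ω1², so a straight-line fit recovers the exchange rate ksw and labile proton ratio fr from the slope and intercept. The single-pool expression below is the commonly used approximation, with toy parameter values, not the authors' full spillover-corrected formulation:

```python
import numpy as np

# Single-pool approximation (toy values): the inverse CEST ratio
# 1/CESTR = R1w/(fr*ksw) * (1 + ksw**2/w1**2) is linear in 1/w1**2.
R1w = 1.0          # water longitudinal relaxation rate (1/s), assumed
ksw_true = 200.0   # labile proton exchange rate (1/s), ground truth
fr_true = 1.0e-3   # labile proton ratio, ground truth

b1_uT = np.array([0.5, 0.75, 1.0, 1.5, 2.0])   # RF amplitudes (uT)
w1 = 2 * np.pi * 42.58 * b1_uT                 # rad/s (gamma/2pi = 42.58 Hz/uT)
inv_cestr = R1w / (fr_true * ksw_true) * (1 + ksw_true**2 / w1**2)

# Omega plot: a linear fit of 1/CESTR against 1/w1^2 yields both unknowns.
slope, intercept = np.polyfit(1.0 / w1**2, inv_cestr, 1)
ksw_fit = np.sqrt(slope / intercept)        # recovers ksw
fr_fit = R1w / np.sqrt(slope * intercept)   # recovers fr
```
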

  12. Advances In Global Aerosol Modeling Applications Through Assimilation of Satellite-Based Lidar Measurements

    Science.gov (United States)

    Campbell, James; Hyer, Edward; Zhang, Jianglong; Reid, Jeffrey; Westphal, Douglas; Xian, Peng; Vaughan, Mark

    2010-05-01

    Modeling the instantaneous three-dimensional aerosol field and its downwind transport represents an endeavor with many practical benefits foreseeable to air quality, aviation, military and science agencies. The recent proliferation of multi-spectral active and passive satellite-based instruments measuring aerosol physical properties has served as an opportunity to develop and refine the techniques necessary to make such numerical modeling applications possible. Spurred by high-resolution global mapping of aerosol source regions, and combined with novel multivariate data assimilation techniques designed to consider these new data streams, operational forecasts of visibility and aerosol optical depths are now available in near real-time. Active satellite-based aerosol profiling, accomplished using lidar instruments, represents a critical element for accurate analysis and transport modeling. Aerosol source functions, alone, can be limited in representing the macrophysical structure of injection scenarios within a model. Two-dimensional variational (2D-VAR; x, y) assimilation of aerosol optical depth from passive satellite observations significantly improves the analysis of the initial state. However, this procedure cannot fully compensate for any potential vertical redistribution of mass required at the innovation step. The cost of an inaccurate vertical analysis of aerosol structure is corresponding errors downwind, since trajectory paths within successive forecast runs will likely diverge with height. In this paper, the application of a newly-designed system for 3D-VAR (x, y, z) assimilation of vertical aerosol extinction profiles derived from elastic-scattering lidar measurements is described [Campbell et al., 2009]. Performance is evaluated for use with the U. S. Navy Aerosol Analysis and Prediction System (NAAPS) by assimilating NASA/CNES satellite-borne Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) 0.532 μm measurements [Winker et al., 2009

  13. Evaluating the hydrological consistency of evaporation products using satellite-based gravity and rainfall data

    Science.gov (United States)

    López, Oliver; Houborg, Rasmus; McCabe, Matthew Francis

    2017-01-01

    Advances in space-based observations have provided the capacity to develop regional- to global-scale estimates of evaporation, offering insights into this key component of the hydrological cycle. However, the evaluation of large-scale evaporation retrievals is not a straightforward task. While a number of studies have intercompared a range of these evaporation products by examining the variance amongst them, or by comparison of pixel-scale retrievals against ground-based observations, there is a need to explore more appropriate techniques to comprehensively evaluate remote-sensing-based estimates. One possible approach is to establish the level of product agreement between related hydrological components: for instance, how well do evaporation patterns and response match with precipitation or water storage changes? To assess the suitability of this consistency-based approach for evaluating evaporation products, we focused our investigation on four globally distributed basins in arid and semi-arid environments, comprising the Colorado River basin, Niger River basin, Aral Sea basin, and Lake Eyre basin. In an effort to assess retrieval quality, three satellite-based global evaporation products based on different methodologies and input data, including CSIRO-PML, the MODIS Global Evapotranspiration product (MOD16), and Global Land Evaporation: the Amsterdam Methodology (GLEAM), were evaluated against rainfall data from the Global Precipitation Climatology Project (GPCP) along with Gravity Recovery and Climate Experiment (GRACE) water storage anomalies. To ensure a fair comparison, we evaluated consistency using a degree correlation approach after transforming both evaporation and precipitation data into spherical harmonics. Overall we found no persistent hydrological consistency in these dryland environments. Indeed, the degree correlation showed oscillating values between periods of low and high water storage changes, with a phase difference of about 2-3 months
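The degree-correlation diagnostic described above can be sketched for two fields already expanded in spherical harmonics. The coefficient layout and toy data below are assumptions; the per-degree correlation itself is the standard r_l = Σ_m a_lm b_lm / sqrt(Σ_m a_lm² · Σ_m b_lm²):

```python
import numpy as np

def degree_correlation(a_lm, b_lm):
    """Per-degree correlation of two spherical-harmonic coefficient sets.

    Coefficients are stored in arrays of shape (L+1, 2L+1), with order m
    at column index m + L, so row l uses columns L-l .. L+l.
    r_l = sum_m a_lm*b_lm / sqrt(sum_m a_lm^2 * sum_m b_lm^2)
    """
    L = a_lm.shape[0] - 1
    r = np.zeros(L + 1)
    for l in range(L + 1):
        m = slice(L - l, L + l + 1)
        num = np.sum(a_lm[l, m] * b_lm[l, m])
        den = np.sqrt(np.sum(a_lm[l, m] ** 2) * np.sum(b_lm[l, m] ** 2))
        r[l] = num / den if den > 0 else 0.0
    return r

# Toy coefficients for degrees 0..2: a field correlates perfectly with
# itself and anti-correlates with its negative, at every degree.
rng = np.random.default_rng(0)
coeffs = rng.standard_normal((3, 5))
r_same = degree_correlation(coeffs, coeffs)
r_anti = degree_correlation(coeffs, -coeffs)
```
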

  14. Japanese Global Precipitation Measurement (GPM) mission status and application of satellite-based global rainfall map

    Science.gov (United States)

    Kachi, Misako; Shimizu, Shuji; Kubota, Takuji; Yoshida, Naofumi; Oki, Riko; Kojima, Masahiro; Iguchi, Toshio; Nakamura, Kenji

    2010-05-01

    Collaboration with GCOM-W is not only limited to its participation in the GPM constellation but also includes coordination in the areas of algorithm development and validation in Japan. Generation of a high-temporal-resolution, highly accurate global rainfall map is one of the targets of the GPM mission. As a prototype for the GPM era, JAXA has developed and operated the Global Precipitation Map algorithm in near-real-time since October 2008, with hourly, 0.1-degree resolution binary data and images available at http://sharaku.eorc.jaxa.jp/GSMaP/ four hours after observation. The algorithms are based on outcomes from the Global Satellite Mapping for Precipitation (GSMaP) project, which was sponsored by the Japan Science and Technology Agency (JST) under the Core Research for Evolutional Science and Technology (CREST) framework between 2002 and 2007 (Okamoto et al., 2005; Aonashi et al., 2009; Ushio et al., 2009). The target of the GSMaP project is to produce global rainfall maps that are highly accurate and of high temporal and spatial resolution through the development of rain rate retrieval algorithms based on reliable precipitation physical models, using several microwave radiometer data sets together with comprehensive use of precipitation radar and geostationary infrared imager data. Near-real-time GSMaP data are distributed via the internet and utilized by end users. Data utilization covers broad areas worldwide: science research (model validation, data assimilation, typhoon studies, etc.), weather forecasting and services, flood warning and rain analysis over river basins, oceanographic condition forecasts, agriculture, and education. Toward the GPM era, operational applications should be further emphasized as well as science applications. 
JAXA continues collaboration with hydrological communities to utilize satellite-based precipitation data as inputs to future flood prediction and warning system, as well as with meteorological agencies to proceed further data utilization in numerical weather prediction

  15. Goddard Satellite-Based Surface Turbulent Fluxes, 0.25x0.25 deg, Daily Grid, V3, (GSSTF_F14) V3

    Data.gov (United States)

    National Aeronautics and Space Administration — These data are part of the Goddard Satellite-based Surface Turbulent Fluxes Version 3 (GSSTF3) Dataset recently produced through a MEaSURES funded project led by Dr....

  16. Advancing satellite-based solar power forecasting through integration of infrared channels for automatic detection of coastal marine inversion layer

    Energy Technology Data Exchange (ETDEWEB)

    Kostylev, Vladimir; Kostylev, Andrey; Carter, Chris; Mahoney, Chad; Pavlovski, Alexandre; Daye, Tony [Green Power Labs Inc., Dartmouth, NS (Canada); Cormier, Dallas Eugene; Fotland, Lena [San Diego Gas and Electric Co., San Diego, CA (United States)

    2012-07-01

    The marine atmospheric boundary layer is a layer of cool, moist maritime air, with a thickness of a few thousand feet, immediately below a temperature inversion. In coastal areas, as moist air rises from the ocean surface it becomes trapped and is often compressed into fog, above which a layer of stratus clouds often forms. This phenomenon is a common challenge for satellite-based solar radiation monitoring and forecasting. Hour-ahead satellite-based solar radiation forecasts commonly use visible spectrum satellite images, from which it is difficult to automatically differentiate low stratus clouds and fog from high altitude clouds. This presents a challenge for cloud motion tracking and cloud cover forecasting. The San Diego Gas and Electric® (SDG&E®) Marine Layer Project was undertaken to obtain information for integration with PV forecasts, and to develop a detailed understanding of the long-term benefits of forecasting Marine Layer (ML) events and their effects on PV production. In order to establish climatological ML patterns and the spatial extent and distribution of the marine layer, we analyzed an archive of visible and IR spectrum satellite images (GOES WEST) for a period of eleven years (2000-2010). Historical boundaries of marine layer impact were established based on the cross-classification of visible spectrum (VIS) and infrared (IR) images. This approach is successfully used by us and elsewhere for evaluating cloud albedo in common satellite-based techniques for solar radiation monitoring and forecasting. The approach allows differentiation of cloud cover and helps distinguish low-lying fog, which is the main consequence of marine layer formation. ML occurrence probability and maximum extent inland were established for each hour and day of the analyzed period, and seasonal patterns were described. The SDG&E service area is the region most affected by ML events, with the highest extent and probability of ML occurrence. Influence of ML was the
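The VIS/IR cross-classification can be sketched as a pair of threshold masks: cloudy pixels are bright in the visible channel, and among those, warm cloud tops (infrared brightness temperature close to the surface value) are flagged as low stratus or fog while cold tops are flagged as high cloud. The thresholds and toy arrays below are illustrative, not the operational values:

```python
import numpy as np

# Hypothetical channels: visible reflectance (0-1) and infrared
# brightness temperature (K) for a tiny 2x2 scene.
vis = np.array([[0.60, 0.60],
                [0.55, 0.10]])
ir = np.array([[285.0, 250.0],
               [284.0, 288.0]])

# Illustrative thresholds: bright pixels in VIS are cloudy; among those,
# warm tops are low stratus/fog, cold tops are high cloud.
cloudy = vis > 0.30
low_cloud = cloudy & (ir > 275.0)     # marine-layer candidates
high_cloud = cloudy & (ir <= 275.0)
```
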

  17. Satellite-based Calibration of Heat Flux at the Ocean Surface

    Science.gov (United States)

    Barron, C. N.; Dastugue, J. M.; May, J. C.; Rowley, C. D.; Smith, S. R.; Spence, P. L.; Gremes-Cordero, S.

    2016-02-01

    Model forecasts of upper ocean heat content and variability on diurnal to daily scales are highly dependent on estimates of heat flux through the air-sea interface. Satellite remote sensing is applied not only to inform the initial ocean state but also to mitigate errors in surface heat flux and model representations affecting the distribution of heat in the upper ocean. Traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle. Subsequent evolution depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. The COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates) endeavors to correct ocean forecast bias through a responsive error partition among surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using Navy operational global or regional atmospheric forcing. COFFEE addresses satellite calibration of surface fluxes to estimate surface error covariances and links these to the ocean interior. Experiment cases combine different levels of flux calibration with different assimilation alternatives. The cases may use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR variant expanded to include a weak-constraint treatment of the surface flux errors. The covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While the California Current examples are shown, the approach is equally applicable to other regions. These approaches within a 3DVAR application are anticipated to be useful for global and larger

  18. Online Tools for Uncovering Data Quality (DQ) Issues in Satellite-Based Global Precipitation Products

    Science.gov (United States)

    Liu, Zhong; Heo, Gil

    2015-01-01

    Data quality (DQ) has many attributes or facets (errors, biases, systematic differences, uncertainties, benchmarks, false trends, false alarm ratio, etc.). The sources of DQ issues can be complicated (measurements, environmental conditions, surface types, algorithms, etc.) and difficult to identify, especially for multi-sensor and multi-satellite products with bias correction (TMPA, IMERG, etc.). Open questions include how to obtain DQ information quickly and easily, especially quantified information over a region of interest, beyond existing parameters (random error), the literature, or do-it-yourself analysis, and how to apply that knowledge in research and applications. Here, we focus on online systems for the integration of products and parameters, visualization and analysis, as well as investigation and extraction of DQ information.

  19. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    Science.gov (United States)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al, 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. 
These solutions typically have longer times between revisits, limiting their ability to resolve

  20. Air-sea fluxes and satellite-based estimation of water masses formation

    Science.gov (United States)

    Sabia, Roberto; Klockmann, Marlene; Fernandez-Prieto, Diego; Donlon, Craig

    2015-04-01

    Recent work linking satellite-based measurements of sea surface salinity (SSS) and sea surface temperature (SST) with traditional physical oceanography has demonstrated the capability of routinely generating satellite-derived surface T-S diagrams [1] and analyzing the distribution and dynamics of SSS and its relative surface density with respect to in-situ measurements. Even more recently [2,3], this framework has been extended by exploiting these T-S diagrams as a diagnostic tool to derive water mass formation rates and areas. A water mass describes a water body with physical properties distinct from the surrounding water, formed at the ocean surface under specific conditions which determine its temperature and salinity. The SST and SSS (and thus also density) at the ocean surface are largely determined by fluxes of heat and freshwater. The surface density flux is a function of the latter two and describes the change of the density of seawater at the surface. Obtaining observations of water mass formation is of great interest, since they serve as indirect observations of the thermohaline circulation. The SSS data which have become available through the SMOS [4] and Aquarius [5] satellite missions will also provide the possibility of studying the effect of temporally varying SSS fields on water mass formation. In the present study, the formation of water masses as a function of SST and SSS is derived from the surface density flux by integrating the latter over a specific area and time period in bins of SST and SSS and then taking the derivative of the total density flux with respect to density. This study presents a test case using SMOS SSS, OSTIA SST, as well as Argo ISAS SST and SSS for comparison, heat fluxes from the NOCS Surface Flux Data Set v2.0, OAFlux evaporation and CMORPH precipitation. The study area, initially the North Atlantic, is extended over two additional ocean basins, and the study period covers the 2011-2012 timeframe. 
Yearly, seasonal
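The binning-and-derivative procedure for water mass formation can be sketched numerically. All fields below are synthetic, and the linear equation of state is a stand-in for the full seawater EOS used in practice:

```python
import numpy as np

# Synthetic surface fields: SST (degC), SSS (psu), surface density flux
# (kg m^-2 s^-1) and grid-cell area (m^2) for 1000 hypothetical cells.
rng = np.random.default_rng(1)
sst = rng.uniform(10.0, 25.0, 1000)
sss = rng.uniform(34.0, 37.0, 1000)
flux = rng.normal(0.0, 1e-6, 1000)
area = np.full(1000, 1.2e10)

# Step 1: integrate the area-weighted density flux in bins of SST and SSS.
t_edges = np.arange(10.0, 26.0, 1.0)
s_edges = np.arange(34.0, 37.2, 0.2)
binned, _, _ = np.histogram2d(sst, sss, bins=[t_edges, s_edges],
                              weights=flux * area)

# Step 2: map each (SST, SSS) bin to a density class. A linear equation
# of state is used here for brevity; real studies use the full seawater EOS.
t_mid = 0.5 * (t_edges[:-1] + t_edges[1:])
s_mid = 0.5 * (s_edges[:-1] + s_edges[1:])
rho = 1027.0 - 0.2 * (t_mid[:, None] - 15.0) + 0.78 * (s_mid[None, :] - 35.0)

# Step 3: accumulate the flux per density class and differentiate with
# respect to density to obtain the formation rate (kg/s per kg/m^3).
rho_edges = np.arange(rho.min(), rho.max() + 0.25, 0.25)
total_flux, _ = np.histogram(rho.ravel(), bins=rho_edges,
                             weights=binned.ravel())
formation = total_flux / np.diff(rho_edges)
```
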

  1. Publisher Correction

    DEFF Research Database (Denmark)

    Stokholm, Jakob; Blaser, Martin J.; Thorsen, Jonathan

    2018-01-01

    The originally published version of this Article contained an incorrect version of Figure 3 that was introduced following peer review and inadvertently not corrected during the production process. Both versions contain the same set of abundance data, but the incorrect version has the children...

  2. Publisher Correction

    DEFF Research Database (Denmark)

    Flachsbart, Friederike; Dose, Janina; Gentschew, Liljana

    2018-01-01

    The original version of this Article contained an error in the spelling of the author Robert Häsler, which was incorrectly given as Robert Häesler. This has now been corrected in both the PDF and HTML versions of the Article....

  3. Correction to

    DEFF Research Database (Denmark)

    Roehle, Robert; Wieske, Viktoria; Schuetz, Georg M

    2018-01-01

    The original version of this article, published on 19 March 2018, unfortunately contained a mistake. The following correction has therefore been made in the original: The names of the authors Philipp A. Kaufmann, Ronny Ralf Buechel and Bernhard A. Herzog were presented incorrectly....

  4. Land Data Assimilation of Satellite-Based Soil Moisture Products Using the Land Information System Over the NLDAS Domain

    Science.gov (United States)

    Mocko, David M.; Kumar, S. V.; Peters-Lidard, C. D.; Tian, Y.

    2011-01-01

    This presentation will include results from data assimilation simulations using the NASA-developed Land Information System (LIS). Using the ensemble Kalman filter in LIS, two satellite-based soil moisture products from the AMSR-E instrument were assimilated, one a NASA-based product and the other from the Land Parameter Retrieval Model (LPRM). The domain and land-surface forcing data for these simulations were from the North American Land Data Assimilation System Phase-2, over the period 2002-2008. The Noah land-surface model, version 3.2, was used during the simulations. Changes to estimates of land surface states, such as soil moisture, as well as changes to simulated runoff/streamflow will be presented. Comparisons over the NLDAS domain will also be made to two global reference evapotranspiration (ET) products, one an interpolated product based on FLUXNET tower data and the other a satellite-based algorithm from the MODIS instrument. Results of an improvement metric show that assimilating the LPRM product improved simulated ET estimates while the NASA-based soil moisture product did not.
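The ensemble Kalman filter update used for assimilation in LIS can be illustrated in a reduced form: a scalar soil-moisture state observed directly (identity observation operator), with perturbed observations. Values are hypothetical; the operational filter works on full model state vectors:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, rng):
    """One EnKF analysis step for a scalar state observed directly (H = 1),
    using the perturbed-observations form of the filter."""
    p = np.var(ensemble, ddof=1)              # forecast error variance
    k = p / (p + obs_err_var)                 # Kalman gain
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_err_var), ensemble.size)
    return ensemble + k * (obs_pert - ensemble)

rng = np.random.default_rng(42)
prior = rng.normal(0.30, 0.05, 100)   # hypothetical soil moisture (m3/m3)
post = enkf_update(prior, 0.20, 0.02 ** 2, rng)
# The analysis ensemble is pulled toward the observation and its spread shrinks.
```
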

  5. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    Science.gov (United States)

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in areas not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross-validation, CV). The results show that our model achieves a CV R2 of 0.81, reflecting higher accuracy than GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.

  6. Corrective Jaw Surgery

    Medline Plus

    Full Text Available Orthognathic surgery is performed to correct the misalignment of jaws ...

  7. Attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Hosoba, Minoru

    1986-01-01

    Attenuation correction is required for the reconstruction of a quantitative SPECT image. A new method for detecting body contours, which are important for the correction of tissue attenuation, is presented. The effect of body contours, detected by the newly developed method, on the reconstructed images was evaluated using various techniques for attenuation correction. The count rates in the specified region of interest in the phantom image obtained by the Radial Post Correction (RPC) method, the Weighted Back Projection (WBP) method, and Chang's method were strongly affected by the accuracy of the contours, as compared to those by Sorenson's method. To evaluate the effect of non-uniform attenuators on cardiac SPECT, computer simulation experiments were performed using two types of models, the uniform attenuator model (UAM) and the non-uniform attenuator model (NUAM). The RPC method showed the lowest relative percent error (%ERROR) in the UAM (11 %). However, a 20 to 30 percent increase in %ERROR was observed for the NUAM reconstructed with the RPC, WBP, and Chang's methods. Introducing an average attenuation coefficient (0.12/cm for Tc-99m and 0.14/cm for Tl-201) in the RPC method decreased %ERROR to the levels for the UAM. Finally, a comparison between images obtained by 180 deg and 360 deg scans and reconstructed with the RPC method showed that the distortion of the contour of the simulated ventricles in the 180 deg scan was 15 % higher than that in the 360 deg scan. (Namekawa, K.)
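
    For context, the first-order Chang approach divides each reconstructed pixel by its attenuation factor averaged over all projection angles. A minimal sketch for a point inside a uniform circular attenuator follows; the geometry and radius are illustrative, not the paper's implementation, though the 0.12/cm coefficient for Tc-99m is the value quoted in the abstract.

```python
import numpy as np

def chang_correction_factor(x, y, radius, mu, n_angles=64):
    """First-order Chang factor for a point inside a uniform circular
    attenuator: mean over angles of exp(-mu * path length to the boundary)."""
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    dx, dy = np.cos(angles), np.sin(angles)
    b = x * dx + y * dy
    # distance from the interior point to the circle along each direction
    d = -b + np.sqrt(b**2 + (radius**2 - x**2 - y**2))
    return np.mean(np.exp(-mu * d))

# Tc-99m, water-equivalent tissue, 10 cm radius phantom
c_center = chang_correction_factor(0.0, 0.0, 10.0, 0.12)
corrected_gain = 1.0 / c_center   # multiply reconstructed counts by 1/C
```

    At the center every path length equals the radius, so the factor is exp(-1.2) and counts are boosted by about 3.3; points near the edge need less correction.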

  8. A satellite-based climatology (1989-2012) of lake surface water temperature from AVHRR 1-km for Central European water bodies

    Science.gov (United States)

    Riffler, Michael; Wunderle, Stefan

    2013-04-01

    The temperature of lakes is an important parameter for lake ecosystems, influencing the speed of physico-chemical reactions, the concentration of dissolved gases (e.g. oxygen), and vertical mixing. Even small temperature changes might have irreversible effects on the lacustrine system due to the high specific heat capacity of water. These effects could alter the quality of lake water depending on parameters like lake size and volume. Numerous studies mention lake water temperature as an indicator of climate change, and in the Global Climate Observing System (GCOS) requirements it is listed as an essential climate variable. In contrast to in situ observations, satellite imagery offers the possibility to derive spatial patterns of lake surface water temperature (LSWT) and their variability. Moreover, although long in situ time series are available for some European lakes, the temperatures of many lakes are measured not at all or only irregularly, making these observations insufficient for climate monitoring. However, only a few satellite sensors offer the possibility to analyze time series which cover more than 20 years. The Advanced Very High Resolution Radiometer (AVHRR) is among these and has been flown on the National Oceanic and Atmospheric Administration (NOAA) Polar Operational Environmental Satellites (POES) and on the Meteorological Operational Satellites (MetOp) of the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) as a heritage instrument for almost 35 years. It will be carried on for at least ten more years, finally offering a unique opportunity for satellite-based climate studies. Herein we present the results from a study initiated by the Swiss GCOS office to generate a satellite-based LSWT climatology for the pre-alpine water bodies in Switzerland. It relies on the extensive AVHRR 1-km data record (1985-2012) of the Remote Sensing Research Group at the University of Bern (RSGB) and has been derived from the AVHRR/2

  9. Electroweak corrections

    International Nuclear Information System (INIS)

    Beenakker, W.J.P.

    1989-01-01

    The prospect of high-accuracy measurements investigating the weak interactions, which are expected to take place at the electron-positron storage ring LEP at CERN and the linear collider SLC at SLAC, offers the possibility to study the weak quantum effects as well. In order to distinguish whether the measured weak quantum effects lie within the margins set by the standard model or bear traces of new physics, one has to go beyond the lowest order and include electroweak radiative corrections (EWRC) in theoretical calculations. These higher-order corrections can also offer the possibility of getting information about two particles present in the Glashow-Salam-Weinberg (GSW) model but not discovered up to now: the top quark and the Higgs boson. In ch. 2 the GSW standard model of electroweak interactions is described. In ch. 3 special techniques are described for the determination of integrals which are responsible for numerical instabilities caused by large canceling terms encountered in the calculation of EWRC effects, together with methods necessary to handle the extensive algebra typical of EWRC. In ch. 4 various aspects related to EWRC effects are discussed, in particular the dependence on the unknown model parameters, the masses of the top quark and the Higgs boson. The processes discussed are the production of heavy fermions in electron-positron annihilation and the fermionic decay of the Z gauge boson. (H.W.). 106 refs.; 30 figs.; 6 tabs.; schemes

  10. A novel cross-satellite based assessment of the spatio-temporal development of a cyanobacterial harmful algal bloom

    Science.gov (United States)

    Page, Benjamin P.; Kumar, Abhishek; Mishra, Deepak R.

    2018-04-01

    As cyanobacterial harmful algal blooms (CyanoHABs) become more common in recreational lakes and water supply reservoirs, demand for rapid detection and temporal monitoring will be imminent for effective management. The goal of this study was to demonstrate a novel and potentially operational cross-satellite protocol for synoptic monitoring of rapidly evolving and increasingly common CyanoHABs in inland waters. The analysis involved a novel way to cross-calibrate a chlorophyll-a (Chl-a) detection model for the Landsat-8 OLI sensor from the relationship between the normalized difference chlorophyll index and the floating algal index derived from Sentinel-2A on a coinciding overpass date during the summer CyanoHAB bloom in Utah Lake. This aided the construction of a time-series phenology of the Utah Lake CyanoHAB event. Spatio-temporal cyanobacterial density maps from both the Sentinel-2A and Landsat-8 sensors revealed that the bloom started in the first week of July 2016 (July 3rd, mean cell count: 9163 cells/mL), reached its peak in mid-July (July 15th, mean cell count: 108176 cells/mL), and subsided in August (August 24th, mean cell count: 9145 cells/mL). Analysis of physical and meteorological factors suggested a complex interaction between landscape processes (high surface runoff), climatic conditions (high temperature, high rainfall followed by negligible rainfall, stable wind), and water quality (low water level, high Chl-a) which created a supportive environment for triggering these blooms in Utah Lake. This cross-satellite monitoring method can be a great tool for regular monitoring and will reduce the cost of monitoring and predicting CyanoHABs in large lakes.
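
    The index and cross-calibration steps named above can be sketched as follows. The NDCI formulation (Sentinel-2 red-edge vs red bands) is standard; the reflectance values and the linear OLI relationship are synthetic, invented purely to illustrate how a coincident-overpass fit might look.

```python
import numpy as np

def ndci(red_edge, red):
    """Normalized Difference Chlorophyll Index: (B5 - B4) / (B5 + B4),
    with Sentinel-2 B5 (~705 nm, red edge) and B4 (~665 nm, red)."""
    return (red_edge - red) / (red_edge + red)

# synthetic coincident-overpass pairs: an OLI-derived index vs Sentinel-2 NDCI
ndci_vals = ndci(np.array([0.040, 0.055, 0.080]),
                 np.array([0.030, 0.035, 0.040]))
oli_index = 2.0 * ndci_vals + 0.01        # pretend OLI response (invented)
slope, intercept = np.polyfit(oli_index, ndci_vals, 1)
```

    The fitted slope and intercept would then let the OLI sensor report Chl-a on the same scale as the Sentinel-2A model, extending the time series across both satellites.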

  11. Validation and Application of the Modified Satellite-Based Priestley-Taylor Algorithm for Mapping Terrestrial Evapotranspiration

    Directory of Open Access Journals (Sweden)

    Yunjun Yao

    2014-01-01

    Full Text Available Satellite-based vegetation indices (VIs and Apparent Thermal Inertia (ATI derived from temperature change provide valuable information for estimating evapotranspiration (LE and detecting the onset and severity of drought. The modified satellite-based Priestley-Taylor (MS-PT algorithm that we developed earlier, coupling both VI and ATI, is validated based on observed data from 40 flux towers distributed across the world on all continents. The validation results illustrate that the daily LE can be estimated with the Root Mean Square Error (RMSE varying from 10.7 W/m2 to 87.6 W/m2, and with the square of correlation coefficient (R2 from 0.41 to 0.89 (p < 0.01. Compared with the Priestley-Taylor-based LE (PT-JPL algorithm, the MS-PT algorithm improves the LE estimates at most flux tower sites. Importantly, the MS-PT algorithm is also satisfactory in reproducing the inter-annual variability at flux tower sites with at least five years of data. The R2 between measured and predicted annual LE anomalies is 0.42 (p = 0.02. The MS-PT algorithm is then applied to detect the variations of long-term terrestrial LE over Three-North Shelter Forest Region of China and to monitor global land surface drought. The MS-PT algorithm described here demonstrates the ability to map regional terrestrial LE and identify global soil moisture stress, without requiring precipitation information.
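
    The Priestley-Taylor core on which MS-PT builds can be sketched as follows. This is the classic formulation with a fixed alpha of 1.26; the MS-PT modification, which drives the coefficient with vegetation indices and apparent thermal inertia, is not reproduced here, and the input values are invented.

```python
import math

def priestley_taylor_le(rn, g, t_air_c, alpha=1.26):
    """Classic Priestley-Taylor latent heat flux LE (W/m2).

    rn: net radiation, g: ground heat flux (both W/m2); t_air_c: air temp (degC).
    LE = alpha * Delta / (Delta + gamma) * (Rn - G).
    """
    # slope of the saturation vapour pressure curve (kPa/degC), FAO-56 form
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2
    gamma = 0.066  # psychrometric constant (kPa/degC), near sea level
    return alpha * delta / (delta + gamma) * (rn - g)

le = priestley_taylor_le(rn=400.0, g=40.0, t_air_c=25.0)
```

    At 25 degC roughly three quarters of the available energy (Rn - G) goes to latent heat, giving LE on the order of 340 W/m2 for the values above.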

  12. Costs and benefits of satellite-based tools for irrigation management

    Directory of Open Access Journals (Sweden)

    Francesco Vuolo

    2015-07-01

    Full Text Available This paper presents the results of a collaborative work with farmers and a cost-benefit analysis of geospatial technologies applied to irrigation water management in the semi-arid agricultural area of Lower Austria. We use Earth observation (EO) data to estimate crop evapotranspiration (ET) and webGIS technologies to deliver maps and irrigation advice to farmers. The study reports the technical and qualitative evaluation performed during a demonstration phase in 2013 and provides an outlook to future developments. The calculation of the benefits is based on a comparison of the irrigation volumes estimated from satellite vs. the irrigation supplied by the farmers. In most cases, the amount of water supplied was equal to the maximum amount of water required by crops. At the same time, high variability was observed for the different irrigation units and crop types. Our data clearly indicate that economic benefits could be achieved by reducing irrigation volumes, especially for water-intensive crops. Regarding the qualitative evaluation, most of the farmers expressed a very positive interest in the provided information. In particular, information related to crop ET was appreciated as this helps to make better informed decisions on irrigation. The majority of farmers (54% also expressed a general willingness to pay, either directly or via cost sharing, for such a service. Based on different cost scenarios, we calculated the cost of the service. Considering 20,000 ha of regularly irrigated land, the advisory service would cost between 2.5 and 4.3 €/ha per year depending on the type of satellite data used. For comparison, irrigation costs range between 400 and 1000 €/ha per year for a typical irrigation volume of 2,000 cubic meters per ha. With correct irrigation application, more than 10% of the water and energy could be saved in water-intensive crops, which is equivalent to an economic benefit of 40-100 €/ha per year.
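
    The economics quoted above can be checked with simple arithmetic, using only the figures stated in the abstract:

```python
# Figures quoted in the abstract (EUR per hectare per year).
service_cost = (2.5, 4.3)            # satellite advisory service, by data type
irrigation_cost = (400.0, 1000.0)    # typical cost for ~2,000 m3/ha irrigation
water_saving = 0.10                  # >10% saving in water-intensive crops

# implied benefit range and the least favourable benefit/cost pairing
benefit = tuple(c * water_saving for c in irrigation_cost)
worst_case_ratio = benefit[0] / service_cost[1]
```

    Even pairing the smallest benefit with the dearest service, the benefit exceeds the service cost by roughly a factor of nine, consistent with the 40-100 €/ha per year figure in the text.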

  13. Satellite-based studies of maize yield spatial variations and their causes in China

    Science.gov (United States)

    Zhao, Y.

    2013-12-01

    Maize production in China has been expanding significantly in the past two decades, but yield has become relatively stagnant in the past few years and needs to be improved to meet increasing demand. Multiple studies found that the gap between potential and actual maize yield is as large as 40% to 60% of yield potential. Although a few major causes of the yield gap have been qualitatively identified with surveys, there has not been a spatial analysis aimed at quantifying the relative importance of specific biophysical and socio-economic causes, information which would be useful for targeting interventions. This study analyzes the causes of yield variation at field and village level in Quzhou county of the North China Plain (NCP). We combine remote sensing and crop modeling to estimate yields in 2009-2012, and identify fields that are consistently high or low yielding. To establish the relationship between yield and potential factors, we gather data on those factors through a household survey. We select targeted survey fields such that not only both extremes of the yield distribution but also all soil texture categories in the county are covered. Our survey assesses management and biophysical factors as well as social factors such as farmers' access to agronomic knowledge, which is approximated by distance to the closest demonstration plot or 'Science and technology backyard'. Our survey covers 10 townships, 53 villages and 180 fields. Three to ten farmers are surveyed depending on the amount of variation present among sub-pixels of each field. From the survey results, we extract the amount of variation within as well as between villages and soil types. The higher the within-village or within-field variation, the greater the importance of management factors; factors such as soil type and access to knowledge are represented more by between-village variation.
    Through regression and analysis of variance, we gain a more quantitative and thorough understanding of the causes of yield variation at

  14. Validation and in vivo assessment of an innovative satellite-based solar UV dosimeter for a mobile app dedicated to skin health.

    Science.gov (United States)

    Morelli, M; Masini, A; Simeone, E; Khazova, M

    2016-08-31

    We present an innovative satellite-based solar UV (ultraviolet) radiation dosimeter with a mobile app interface that has been validated by exploiting both ground-based measurements and an in vivo assessment of the erythemal effects on volunteers having controlled exposure to solar radiation. The app with this satellite-based UV dosimeter also includes other related functionalities, such as the provision of safe sun exposure time updated in real time and an end-of-exposure visual/sound alert. Both validations showed that the system has the accuracy and reliability needed for health-related applications. This app will be launched on the market by siHealth Ltd in May 2016 under the name of "HappySun" and is available for both Android and iOS devices (more info on ). Extensive R&D activities are on-going for the further improvement of the satellite-based UV dosimeter's accuracy.

  15. Quantitative analysis of receptor imaging

    International Nuclear Information System (INIS)

    Fu Zhanli; Wang Rongfu

    2004-01-01

    Model-based methods for quantitative analysis of receptor imaging, including kinetic, graphical and equilibrium methods, are introduced in detail. Some technical problems facing quantitative analysis of receptor imaging, such as the correction for in vivo metabolism of the tracer, the radioactivity contribution from blood volume within the ROI, and the estimation of the nondisplaceable ligand concentration, are also reviewed briefly

  16. Relation between Ocean SST Dipoles and Downwind Continental Croplands Assessed for Early Management Using Satellite-based Photosynthesis Models

    Science.gov (United States)

    Kaneko, Daijiro

    2015-04-01

    Crop-monitoring systems that express crop status in units of carbon-dioxide sequestration, addressing environmental issues related to climate adaptation under global warming, have been improved using satellite-based photosynthesis models and meteorological conditions. Early management of crop status is desirable for grain production, stockbreeding, and bio-energy, provided that seasonal climate forecasting is sufficiently accurate. Incorrect seasonal forecasts of crop production can damage global social activities if the conditions assumed are not met. One cause of poor forecasting relates to the atmospheric dynamics at the Earth's surface, which reflect the energy budget of the land surface, the oceans, and the atmosphere. Recognizing the relation between SST anomalies (e.g. ENSO, the Atlantic Niño, the Indian Ocean Dipole, and the Ningaloo Niño) and crop production, expressed precisely by photosynthesis or the rate of sequestered carbon, is necessary to elucidate the mechanisms behind poor production. Solar radiation, surface air temperature, and water stress all directly affect photosynthesis in grain crops. All affect stomatal opening, which is related to the water balance, defined by the ratio of Penman potential evaporation to actual transpiration. Present and reanalysis data give overestimated values of stomatal opening because they extend wet models developed in forests to the semi-arid regions commonly associated with wheat, maize, and soybean. This study applies a complementary model based on energy conservation for semi-arid zones instead of the conventional Penman-Monteith method. Partitioning of the integrated Net PSN enables precise estimation of crop yields by modifying the semi-closed stomatal opening, and predicts production more accurately using the cropland distribution already classified using satellite data. Seasonal crop forecasting should include near-real-time monitoring using satellite-based process crop models to avoid

  17. Satellite-based evidence of wavelength-dependent aerosol absorption in biomass burning smoke inferred from Ozone Monitoring Instrument

    Directory of Open Access Journals (Sweden)

    H. Jethva

    2011-10-01

    Full Text Available We provide satellite-based evidence of the spectral dependence of absorption in biomass burning aerosols over South America using near-UV measurements made by the Ozone Monitoring Instrument (OMI during 2005–2007. In the current near-UV OMI aerosol algorithm (OMAERUV, it is implicitly assumed that the only absorbing component in carbonaceous aerosols is black carbon whose imaginary component of the refractive index is wavelength independent. With this assumption, OMI-derived aerosol optical depth (AOD is found to be significantly over-estimated compared to that of AERONET at several sites during intense biomass burning events (August-September. Other well-known sources of error affecting the near-UV method of aerosol retrieval do not explain the large observed AOD discrepancies between the satellite and the ground-based observations. A number of studies have revealed strong spectral dependence in carbonaceous aerosol absorption in the near-UV region suggesting the presence of organic carbon in biomass burning generated aerosols. A sensitivity analysis examining the importance of accounting for the presence of wavelength-dependent aerosol absorption in carbonaceous particles in satellite-based remote sensing was carried out in this work. The results convincingly show that the inclusion of spectrally-dependent aerosol absorption in the radiative transfer calculations leads to a more accurate characterization of the atmospheric load of carbonaceous aerosols. The use of a new set of aerosol models assuming wavelength-dependent aerosol absorption in the near-UV region (spectral dependence λ^−2.5 to λ^−3.0, i.e., Absorption Angstrom Exponent 2.5 to 3.0 improved the OMAERUV retrieval results by significantly reducing the AOD bias observed when gray aerosols were assumed. In addition, the new retrieval of single-scattering albedo is in better agreement with those of AERONET within the uncertainties (ΔSSA = ±0.03. The new colored carbonaceous aerosol model was also found to
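
    The wavelength dependence discussed here follows the standard power law tau_abs(λ) ∝ λ^(−AAE). A small sketch comparing gray black carbon (AAE ≈ 1) with a mid-range smoke model (AAE 2.75, within the 2.5-3.0 interval above); the wavelengths are chosen only for illustration.

```python
def abs_ratio(lam, lam_ref, aae):
    """Absorption optical depth ratio for tau_abs(lam) ∝ lam**(-AAE)."""
    return (lam / lam_ref) ** (-aae)

# absorption at a near-UV wavelength (354 nm) relative to 500 nm
gray_bc = abs_ratio(354.0, 500.0, 1.0)    # wavelength-independent imaginary index
smoke = abs_ratio(354.0, 500.0, 2.75)     # mid-range of the AAE 2.5-3.0 models
```

    The smoke model absorbs roughly 1.8 times more strongly than the gray assumption in the near UV, which is why assuming gray aerosols leads to the overestimated AOD reported above.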

  18. Global Monitoring RSEM System for Crop Production by Incorporating Satellite-based Photosynthesis Rates and Anomaly Data of Sea Surface Temperature

    Science.gov (United States)

    Kaneko, D.; Sakuma, H.

    2014-12-01

    The first author has been developing the RSEM crop-monitoring system, which uses satellite-based assessment of photosynthesis and incorporates meteorological conditions. Crop production comprises several stages and multiple mechanisms based on leaf photosynthesis, the surface energy balance, and the maturing of grains after fixation of CO2, along with water exchange through soil-vegetation-atmosphere transfer. Grain production in major producing countries appears to be perturbed randomly, both regionally and globally. The weather experienced by crops reflects turbulent convective and advective flows in the atmosphere and surface boundary layer, and it has been difficult to simulate and forecast weather correctly over terms long enough to span crop harvesting. However, severely poor harvests associated with continental-scale events must originate from a consistent mechanism of abnormal energy flow in the atmosphere, over both land and oceans. It should be remembered that the oceans store more than 100 times the energy of the atmosphere, and that ocean currents represent gigantic energy flows which strongly affect climate. Anomalies of Sea Surface Temperature (SST), globally known as El Niño, the Indian Ocean Dipole, the Atlantic Niño, etc., affect the seasonal climate on a continental scale. The authors aim to combine monitoring and seasonal forecasting, considering such mechanisms of land-ocean-biosphere transfer. The present system produces assessments for all continents, specifically monitoring the agricultural fields of the main crops. Historical regions of poor and good harvests are compared with distributions of SST anomalies, which are provided by NASA GSFC. These comparisons suggest that the worst harvest, in 1993, and the best, in 1994, relate to the offshore distribution of low-temperature anomalies and large gradients in ocean surface temperature. However, high-temperature anomalies supported good harvests because of sufficient solar radiation for photosynthesis, and poor harvests because

  19. A Comparison of Two Above-Ground Biomass Estimation Techniques Integrating Satellite-Based Remotely Sensed Data and Ground Data for Tropical and Semiarid Forests in Puerto Rico

    Science.gov (United States)

    Two above-ground forest biomass estimation techniques were evaluated for the United States Territory of Puerto Rico using predictor variables acquired from satellite based remotely sensed data and ground data from the U.S. Department of Agriculture Forest Inventory Analysis (FIA)...

  20. Quantitative film radiography

    International Nuclear Information System (INIS)

    Devine, G.; Dobie, D.; Fugina, J.; Hernandez, J.; Logan, C.; Mohr, P.; Moss, R.; Schumacher, B.; Updike, E.; Weirup, D.

    1991-01-01

    We have developed a system of quantitative radiography in order to produce quantitative images displaying homogeneity of parts. The materials that we characterize are synthetic composites and may contain important subtle density variations not discernible by examining a raw film x-radiograph. In order to quantitatively interpret film radiographs, it is necessary to digitize, interpret, and display the images. Our integrated system of quantitative radiography displays accurate, high-resolution pseudo-color images in units of density. We characterize approximately 10,000 parts per year in hundreds of different configurations and compositions with this system. This report discusses: the method; film processor monitoring and control; verifying film and processor performance; and correction of scatter effects
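
    Quantitative interpretation of digitized film typically starts from optical density, the standard logarithmic measure of film darkening. A minimal sketch of that conversion follows; producing the pseudo-color density maps described above additionally requires a measured film calibration curve, which is not reproduced here.

```python
import math

def optical_density(i_transmitted, i_incident):
    """Film optical density: OD = -log10(transmittance)."""
    return -math.log10(i_transmitted / i_incident)

# a film region passing 1% of the densitometer light has OD 2.0
od = optical_density(1.0, 100.0)
```

    Darker film transmits less light and therefore has higher OD; a calibration curve then maps OD to the material density displayed in the images.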

  1. Cross-validation Methodology between Ground and GPM Satellite-based Radar Rainfall Product over Dallas-Fort Worth (DFW) Metroplex

    Science.gov (United States)

    Chen, H.; Chandrasekar, V.; Biswas, S.

    2015-12-01

    Over the past two decades, a large number of rainfall products have been developed based on satellite, radar, and/or rain gauge observations. However, producing optimal rainfall estimation for a given region is still challenging due to the space-time variability of rainfall at many scales and the spatial and temporal sampling differences of different rainfall instruments. In order to produce high-resolution rainfall products for urban flash flood applications and improve the weather sensing capability in urban environments, the Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), in collaboration with the National Weather Service (NWS) and the North Central Texas Council of Governments (NCTCOG), has developed an urban radar remote sensing network in the DFW Metroplex. DFW is the largest inland metropolitan area in the U.S. and experiences a wide range of natural weather hazards such as flash floods and hailstorms. The DFW urban remote sensing network, centered on the deployment of eight dual-polarization X-band radars and an NWS WSR-88DP radar, is expected to provide impacts-based warnings and forecasts for the benefit of public safety and the economy. High-resolution quantitative precipitation estimation (QPE) is one of the major goals of the development of this urban test bed. In addition to ground radar-based rainfall estimation, satellite-based rainfall products for this area are also of interest for this study. A typical example is the rainfall rate product produced by the Dual-frequency Precipitation Radar (DPR) onboard the Global Precipitation Measurement (GPM) Core Observatory satellite. Therefore, cross-comparison between ground- and space-based rainfall estimation is critical to building an optimal regional rainfall system, which can take advantage of the sampling differences of different sensors. This paper presents the real-time high-resolution QPE system developed for the DFW urban radar network, which is based upon the combination of S-band WSR-88DP and X

  2. Demonstrating the Value of Near Real-time Satellite-based Earth Observations in a Research and Education Framework

    Science.gov (United States)

    Chiu, L.; Hao, X.; Kinter, J. L.; Stearn, G.; Aliani, M.

    2017-12-01

    The launch of the GOES-16 series provides an opportunity to advance near real-time applications in natural hazard detection, monitoring and warning. This study demonstrates the capability and value of receiving real-time satellite-based Earth observations over fast terrestrial networks and processing high-resolution remote sensing data in a university environment. The demonstration system includes four components: 1) near real-time data receiving and processing; 2) data analysis and visualization; 3) event detection and monitoring; and 4) information dissemination. Various tools are developed and integrated to receive and process GRB data in near real-time, produce images and value-added data products, and detect and monitor extreme weather events such as hurricanes, fires, flooding, fog, and lightning. A web-based application system is developed to disseminate near real-time satellite images and data products. The images are generated in a GIS-compatible format (GeoTIFF) to enable convenient use and integration in various GIS platforms. This study enhances the capacity for undergraduate and graduate education in Earth system and climate sciences, and related applications, to understand the basic principles and technology of real-time applications with remote sensing measurements. It also provides an integrated platform for near real-time monitoring of extreme weather events, which is helpful for various user communities.

  3. A comparision between satellite based and drone based remote sensing technology to achieve sustainable development: a review

    Directory of Open Access Journals (Sweden)

    Babankumar Bansod

    2017-12-01

    Full Text Available Precision agriculture is a way to manage crop resources like water, fertilizers, soil and seeds in order to increase production, quality and profit, and to reduce waste, so that the existing system becomes eco-friendly. The main target of precision agriculture is to match resources and practices to the crop and climate in order to improve the effectiveness of farming practice. Global Positioning System, Geographic Information System, remote sensing technologies and various sensors are used in precision farming for identifying variability in the field and using different methods to deal with it. Satellite-based remote sensing is used to study the variability in crop and ground but suffers from various disadvantages such as restricted availability, high price, long revisit intervals, and poor resolution due to the great sensing height. The Unmanned Aerial Vehicle (UAV) is an alternative option for application in precision farming. The UAV overcomes the drawbacks of the ground-based system, i.e. inaccessibility of muddy and very dense regions. Hovering at a height of 500-1000 meters is good enough to offer various advantages in image acquisition, such as high spatial and temporal resolution, full flexibility, and low cost. Recent studies of the application of UAVs in precision farming cover advanced UAV design, enhancement of georeferencing and image mosaicking, and the analysis and extraction of the information required for supplying a true end product to farmers. This paper also discusses the various UAV platforms used in farming applications, their technical constraints, privacy rights, reliability and safety.

  4. The Effectiveness of Using Limited Gauge Measurements for Bias Adjustment of Satellite-Based Precipitation Estimation over Saudi Arabia

    Science.gov (United States)

    Alharbi, Raied; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan

    2018-01-01

    Precipitation is a key input variable for hydrological and climate studies. Rain gauges are capable of providing reliable precipitation measurements at the point scale. However, the uncertainty of rain measurements increases when the rain gauge network is sparse. Satellite-based precipitation estimates are an alternative source of precipitation measurements, but they are influenced by systematic bias. In this study, a method for removing the bias from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) over a region with a sparse rain gauge network is investigated. The method consists of monthly empirical quantile mapping, climate classification, and inverse-distance weighting. Daily PERSIANN-CCS is selected to test the capability of the method for removing the bias over Saudi Arabia during the period 2010 to 2016. The first six years (2010-2015) are calibration years and 2016 is used for validation. The results show that the yearly correlation coefficient was enhanced by 12% and the yearly mean bias was reduced by 93% during the validation year. The root mean square error was reduced by 73% during the validation year. The correlation coefficient, the mean bias, and the root mean square error show that the proposed method removes the bias in PERSIANN-CCS effectively and that the method can be applied to other regions where the rain gauge network is sparse.
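
    The empirical quantile mapping step can be sketched as below. This is a minimal sketch only: the per-month application, the climate classification, and the spatial interpolation of the correction are omitted, and the synthetic 40% satellite underestimate is invented for illustration.

```python
import numpy as np

def quantile_map(sat_cal, gauge_cal, sat_new):
    """Map each new satellite value to the gauge value at the same
    empirical quantile, estimated from a calibration period."""
    sat_sorted = np.sort(sat_cal)
    # empirical quantile of each new value within the satellite sample
    q = np.searchsorted(sat_sorted, sat_new, side="right") / sat_sorted.size
    return np.quantile(np.sort(gauge_cal), np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(1)
gauge = rng.gamma(2.0, 3.0, 500)   # synthetic daily gauge rainfall (mm)
sat = 0.6 * gauge                  # satellite underestimates by 40%
corrected = quantile_map(sat, gauge, sat)
```

    After mapping, the mean bias relative to the gauges essentially vanishes, which is the effect the monthly correction factors aim for; matching the full distribution rather than only the mean is what distinguishes quantile mapping from a simple multiplicative bias factor.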

  5. Strategic system development toward biofuel, desertification, and crop production monitoring in continental scales using satellite-based photosynthesis models

    Science.gov (United States)

    Kaneko, Daijiro

    2013-10-01

    The author regards fundamental root functions as underpinning photosynthetic activity by vegetation and as affecting environmental issues, grain production, and desertification. This paper describes the present development of monitoring and near real-time forecasting for environmental projects and crop production, approaching established operational monitoring step by step. The author has been developing a thematic monitoring structure (named the RSEM system) that builds on satellite-based photosynthesis models over several continents for operational support in the environmental fields mentioned above. Validation rests not on FLUXNET but on carbon partitioning validation (CPV). The models demand continuing parameterization. The entire system has been built using reanalysis meteorological data, but model accuracy remains insufficient except for paddy rice. The author aims to complete a system that incorporates global environmental forcings. Regarding crop production applications, industrialization in developing countries achieved through direct investment by economically developed nations raises incomes, resulting in increased food demand. Last year, China began to import rice, as it had earlier for maize, wheat, and soybeans. Important agro-potential countries are making efforts to cultivate new crop lands in South America, Africa, and Eastern Europe. Trends toward less food sustainability and stability are continuing, exacerbated by rapid social and climate changes. Operational monitoring of carbon sequestration by herbaceous and woody plants converges with efforts in bio-energy, crop production monitoring, and socio-environmental projects such as CDM A/R, combating desertification, and biodiversity.

  6. Comparison of Different Machine Learning Approaches for Monthly Satellite-Based Soil Moisture Downscaling over Northeast China

    Directory of Open Access Journals (Sweden)

    Yangxiaoyue Liu

    2017-12-01

    Full Text Available Although numerous satellite-based soil moisture (SM) products can provide spatiotemporally continuous worldwide datasets, they can hardly be employed in characterizing fine-grained regional land surface processes, owing to their coarse spatial resolution. In this study, we proposed a machine-learning-based method to enhance SM spatial accuracy and improve the availability of SM data. Four machine learning algorithms, including classification and regression trees (CART), K-nearest neighbors (KNN), Bayesian (BAYE), and random forests (RF), were implemented to downscale the monthly European Space Agency Climate Change Initiative (ESA CCI) SM product from 25-km to 1-km spatial resolution. During the regression, the land surface temperature (including daytime temperature, nighttime temperature, and diurnal fluctuation temperature), normalized difference vegetation index, surface reflectances (red, blue, NIR and MIR bands), and a digital elevation model were taken as explanatory variables to produce fine spatial resolution SM. We chose Northeast China as the study area and acquired corresponding SM data from 2003 to 2012 for unfrozen seasons. The reconstructed SM datasets were validated against in-situ measurements. The results showed that the RF-downscaled results matched both ESA CCI SM and in-situ measurements best and responded positively to precipitation variation. Additionally, RF was the least sensitive to its parameters, indicating its robustness. CART and KNN ranked second: CART correlated more closely with the validation data, whereas KNN showed better precision. BAYE ranked last, with significantly abnormal regression values.
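Of the four learners compared, K-nearest neighbors is the simplest to sketch. The snippet below illustrates the downscaling idea on synthetic data: fit SM against coarse-scale predictors (LST, NDVI, reflectance, DEM) and then predict on a fine-resolution predictor stack. The variable names and the linear SM relation are assumptions for illustration, not the ESA CCI data or the authors' code.

```python
import numpy as np

def knn_downscale(X_train, y_train, X_fine, k=5):
    """K-nearest-neighbors regression, one of the four learners compared
    in the study (illustrative numpy sketch)."""
    # Euclidean distances from each fine-scale pixel to all training pixels
    d = np.linalg.norm(X_fine[:, None, :] - X_train[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]         # indices of the k nearest
    return y_train[nn].mean(axis=1)           # average their SM values

rng = np.random.default_rng(1)
X_train = rng.random((200, 4))                # [LST, NDVI, red, DEM], normalized
y_train = 0.4 * X_train[:, 1] - 0.2 * X_train[:, 0]   # synthetic SM signal
X_fine = rng.random((10, 4))                  # fine-resolution predictor stack
sm_fine = knn_downscale(X_train, y_train, X_fine)
print(sm_fine.shape)                          # one SM value per fine pixel
```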

  7. A New Temperature-Vegetation Triangle Algorithm with Variable Edges (TAVE) for Satellite-Based Actual Evapotranspiration Estimation

    Directory of Open Access Journals (Sweden)

    Hua Zhang

    2016-09-01

    Full Text Available The estimation of spatially-variable actual evapotranspiration (AET) is a critical challenge to regional water resources management. We propose a new remote sensing method, the Triangle Algorithm with Variable Edges (TAVE), to generate daily AET estimates based on satellite-derived land surface temperature and the vegetation index NDVI. TAVE captures heterogeneity in AET across elevation zones and permits variability in determining local values of the wet and dry end-member classes (known as edges). Compared to traditional triangle methods, TAVE introduces three unique features: (i) discretization of the domain as overlapping elevation zones; (ii) a variable wet edge that is a function of elevation zone; and (iii) variable values of a combined-effect parameter (accounting for aerodynamic and surface resistance, vapor pressure gradient, and soil moisture availability) along both wet and dry edges. With these features, TAVE effectively addresses the combined influence of terrain and water stress on AET estimates in semi-arid environments. We demonstrate the effectiveness of this method in one of the driest countries in the world, Jordan, and compare it to a traditional triangle method (TA) and a global AET product (MOD16) over different land use types. In irrigated agricultural lands, TAVE matched the results of the single crop coefficient model (−3%), in contrast to substantial overestimation by TA (+234%) and underestimation by MOD16 (−50%). In forested (non-irrigated), water consuming regions, TA and MOD16 produced AET average deviations 15.5 times and −3.5 times those based on TAVE. As TAVE has a simple structure and low data requirements, it provides an efficient means to satisfy the increasing need for evapotranspiration estimation in data-scarce semi-arid regions. This study constitutes a much needed step towards the satellite-based quantification of agricultural water consumption in Jordan.
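The common core of triangle methods, which TAVE extends with elevation-dependent edges, is interpolating each pixel's evaporative fraction between the dry and wet edge temperatures. A minimal sketch, with assumed edge temperatures (not the paper's zone-specific edges):

```python
import numpy as np

def evaporative_fraction(t_surf, t_dry, t_wet):
    """Triangle-method evaporative fraction (sketch): a pixel at the dry
    edge gets 0, a pixel at the wet edge gets 1, linear in between."""
    return np.clip((t_dry - t_surf) / (t_dry - t_wet), 0.0, 1.0)

t_surf = np.array([300.0, 315.0, 325.0])   # pixel surface temperatures [K]
ef = evaporative_fraction(t_surf, t_dry=325.0, t_wet=295.0)
print(np.round(ef, 2))                     # cooler pixels evaporate more
```

In TAVE the edge temperatures (and the combined-effect parameter) vary with elevation zone rather than being fixed scene-wide constants as here.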

  8. Hydrological real-time modelling in the Zambezi river basin using satellite-based soil moisture and rainfall data

    Directory of Open Access Journals (Sweden)

    P. Meier

    2011-03-01

    Full Text Available Reliable real-time forecasts of discharge can provide valuable information for the management of a river basin system. For the management of ecological releases, even discharge forecasts with moderate accuracy can be beneficial. Sequential data assimilation using the Ensemble Kalman Filter provides a tool that is both efficient and robust for a real-time modelling framework. One key parameter in a hydrological system is the soil moisture, which can now be characterized by satellite-based measurements. A forecasting framework for the prediction of discharge is developed and applied to three different sub-basins of the Zambezi River Basin. The model is based solely on remote sensing data providing soil moisture and rainfall estimates. The soil moisture product used is based on the back-scattering intensity of a radar signal measured by a radar scatterometer. These soil moisture data correlate well with the measured discharge of the corresponding watershed if the data are shifted by a time lag that depends on the size and the dominant runoff process of the catchment. This time lag is the basis for the applicability of the soil moisture data to hydrological forecasts. The conceptual model developed is based on two storage compartments. The processes modelled include evaporation losses, infiltration and percolation. The application of this model in a real-time modelling framework yields good results in watersheds where soil storage is an important factor. The lead time of the forecast depends on the size and the retention capacity of the watershed. For the largest watershed a forecast over 40 days can be provided; however, forecast quality increases significantly with decreasing lead time. In a watershed with little soil storage and a quick response to rainfall events, the performance is relatively poor and the lead time is only about 10 days.
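The Ensemble Kalman Filter analysis step used in such a framework can be sketched as follows. The two-compartment state and the linear observation operator below are assumptions for illustration, not the paper's model:

```python
import numpy as np

def enkf_update(ens, y_obs, r, h):
    """One analysis step of the Ensemble Kalman Filter (sketch).
    ens: (m, n) ensemble of model states (two storage compartments here),
    y_obs: observed discharge, r: observation error variance,
    h: (n,) linear observation operator mapping state to discharge."""
    m = ens.shape[0]
    hx = ens @ h                                  # each member's discharge
    A = ens - ens.mean(axis=0)                    # state anomalies
    Hd = hx - hx.mean()                           # observation-space anomalies
    P_xy = A.T @ Hd / (m - 1)                     # state-obs cross-covariance
    P_yy = Hd @ Hd / (m - 1) + r                  # innovation variance
    K = P_xy / P_yy                               # Kalman gain, shape (n,)
    # Perturbed observations keep the analysis ensemble spread consistent
    y_pert = y_obs + np.sqrt(r) * np.random.default_rng(2).standard_normal(m)
    return ens + np.outer(y_pert - hx, K)

ens = np.random.default_rng(0).normal([10.0, 5.0], 1.0, size=(50, 2))
h = np.array([0.3, 0.1])                          # assumed outflow coefficients
ens_a = enkf_update(ens, y_obs=4.0, r=0.04, h=h)
print(ens_a.shape)
```

After the update, the ensemble-mean discharge is pulled from its prior value toward the observation, weighted by the relative uncertainties.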

  9. AIM satellite-based research bridges the unique scientific aspects of the mission to informal education programs globally

    Science.gov (United States)

    Robinson, D.; Maggi, B.

    2003-04-01

    The Education and Public Outreach (EPO) component of the satellite-based research mission "Aeronomy of Ice In the Mesosphere" (AIM) will bridge the unique scientific aspects of the mission to informal education organizations. The informal education materials developed by the EPO will utilize AIM data and educate the public about the environmental implications associated with the data. This will assist with creating a scientifically literate workforce and in developing a citizenry capable of making educated decisions related to environmental policies and laws. The objective of the AIM mission is to understand the mechanisms that cause Polar Mesospheric Clouds (PMCs) to form, how their presence affects the atmosphere, and how change in the atmosphere affects them. PMCs are sometimes known as Noctilucent Clouds (NLCs) because of their visibility during the night from appropriate locations. The phenomenon of PMCs is an observable indicator of global change, a concern to all citizens. Recent sightings of these clouds over populated regions have compelled AIM educators to expand informal education opportunities to communities worldwide. Collaborations with informal organizations include: Museums/Science Centers; NASA Sun-Earth Connection Forum; Alaska Native Ways of Knowing Project; Amateur Noctilucent Cloud Observers Organization; National Parks Education Programs; After School Science Clubs; Public Broadcasting Associations; and National Public Radio. The Native Ways of Knowing Project is an excellent example of informal collaboration with the AIM EPO. This Alaska-based project will assist native peoples of the state with photographing NLCs for the EPO website. It will also aid the EPO in developing materials for informal organizations that incorporate traditional native knowledge and science related to the sky. Another AIM collaboration that will offer citizens lasting informal education opportunities is the one established with the United States National Parks

  10. Satellite-Based Evaluation of the Post-Fire Recovery Process from the Worst Forest Fire Case in South Korea

    Directory of Open Access Journals (Sweden)

    Jae-Hyun Ryu

    2018-06-01

    different, using only one satellite-based indicator is not sufficient to understand the post-fire recovery process; NBR, NDVI, and GPP can be combined. Further studies will require additional approaches using various types of indices.

  11. Ground and satellite-based remote sensing of mineral dust using AERI spectra and MODIS thermal infrared window brightness temperatures

    Science.gov (United States)

    Hansell, Richard Allen, Jr.

    The radiative effects of dust aerosol on our climate system have yet to be fully understood and remain a topic of contemporary research. To investigate these effects, detection/retrieval methods for dust events over major dust outbreak and transport areas have been developed using satellite and ground-based approaches. To this end, both the shortwave and longwave surface radiative forcing of dust aerosol were investigated. The ground-based remote sensing approach uses the Atmospheric Emitted Radiance Interferometer (AERI) brightness temperature spectra to detect mineral dust events and to retrieve their properties. Taking advantage of the high spectral resolution of the AERI instrument, absorptive differences in prescribed thermal IR window sub-band channels were exploited to differentiate dust from cirrus clouds. AERI data collected during the UAE2 at Al-Ain, UAE, were employed for dust retrieval. Assuming a specified dust composition model a priori and using the light scattering programs of T-matrix and the finite difference time domain methods for oblate spheroids and hexagonal plates, respectively, dust optical depths have been retrieved and compared to those inferred from a collocated and coincident AERONET sun-photometer dataset. The retrieved optical depths were then used to determine the dust longwave surface forcing during the UAE2. Likewise, dust shortwave surface forcing is investigated employing a differential technique from previous field studies. The satellite-based approach uses MODIS thermal infrared brightness temperature window data for the simultaneous detection/separation of mineral dust and cirrus clouds. Based on the spectral variability of dust emissivity at the 3.75, 8.6, 11 and 12 μm wavelengths, the D*-parameter, BTD-slope and BTD3-11 tests are combined to identify dust and cirrus. MODIS data for the three dust-laden scenes have been analyzed to demonstrate the effectiveness of this detection/separation method. Detected daytime dust and cloud

  12. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... Cleft Lip/Palate and Craniofacial Surgery A cleft lip may require one or more ... find out more. Corrective Jaw Surgery Orthognathic surgery is performed to correct the misalignment ...

  13. Estimation of snowpack matching ground-truth data and MODIS satellite-based observations by using regression kriging

    Science.gov (United States)

    Juan Collados-Lara, Antonio; Pardo-Iguzquiza, Eulogio; Pulido-Velazquez, David

    2016-04-01

    The estimation of Snow Water Equivalent (SWE) is essential for an appropriate assessment of the available water resources in Alpine catchments. The hydrologic regime in these areas is dominated by the storage of water in the snowpack, which is discharged to rivers throughout the melt season. An accurate estimation of the resources is necessary for an appropriate analysis of system operation alternatives using basin-scale management models. In order to obtain an appropriate estimation of the SWE we need to know the spatial distribution of the snowpack and of snow density within the Snow Cover Area (SCA). Data for these snow variables can be extracted from in-situ point measurements and air-borne/space-borne remote sensing observations. Different interpolation and simulation techniques have been employed for the estimation of the cited variables. In this paper we propose to estimate the snowpack from a reduced number of ground-truth data (1 or 2 campaigns per year with 23 observation points from 2000-2014) and MODIS satellite-based observations in the Sierra Nevada Mountains (Southern Spain). Regression-based methodologies have been used to study snowpack distribution using different kinds of explicative variables: geographic, topographic and climatic. 40 explicative variables were considered: longitude, latitude, altitude, slope, eastness, northness, radiation, maximum upwind slope and some mathematical transformations of each of them [Ln(v), (v)^-1, (v)^2, (v)^0.5]. Eight different regression model structures have been tested (combining 1, 2, 3 or 4 explicative variables): Y=B0+B1Xi (1); Y=B0+B1XiXj (2); Y=B0+B1Xi+B2Xj (3); Y=B0+B1Xi+B2XjXl (4); Y=B0+B1XiXk+B2XjXl (5); Y=B0+B1Xi+B2Xj+B3Xl (6); Y=B0+B1Xi+B2Xj+B3XlXk (7); Y=B0+B1Xi+B2Xj+B3Xl+B4Xk (8), where Y is the snow depth, (Xi, Xj, Xl, Xk) are the prediction variables (any of the 40 variables) and (B0, B1, B2, B3, B4) are the coefficients to be estimated.
The ground data are employed to calibrate the multiple regressions. In
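The calibration of these candidate structures amounts to an exhaustive least-squares search over variable combinations, keeping the structure that best fits the ground data. A minimal sketch for structure (3), Y = B0 + B1*Xi + B2*Xj; the four synthetic explicative variables and the assumed snow-depth relation stand in for the study's 40 variables and real observations:

```python
import itertools
import numpy as np

def fit_ols(X, y):
    """Least-squares fit with intercept; returns (coefficients, R^2)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

# Synthetic stand-ins for the explicative variables (hypothetical names)
rng = np.random.default_rng(3)
names = ["altitude", "radiation", "northness", "slope"]
V = rng.random((100, 4))
snow_depth = 2.0 * V[:, 0] - 1.0 * V[:, 1] + 0.1 * rng.standard_normal(100)

# Try structure (3) for every variable pair, keep the highest R^2
best = max(
    ((i, j) + fit_ols(V[:, [i, j]], snow_depth)
     for i, j in itertools.combinations(range(4), 2)),
    key=lambda t: t[3],
)
print(names[best[0]], names[best[1]], round(float(best[3]), 2))
```

The same loop extends to the other seven structures by swapping in product terms (XiXj) or longer combinations as the regressor columns.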

  14. Quantitative research.

    Science.gov (United States)

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  15. Quantitative habitability.

    Science.gov (United States)

    Shock, Everett L; Holland, Melanie E

    2007-12-01

    A framework is proposed for a quantitative approach to studying habitability. Considerations of environmental supply and organismal demand of energy lead to the conclusions that power units are most appropriate and that the units for habitability become watts per organism. Extreme and plush environments are revealed to be on a habitability continuum, and extreme environments can be quantified as those where power supply only barely exceeds demand. Strategies for laboratory and field experiments are outlined that would quantify power supplies, power demands, and habitability. An example involving a comparison of various metabolisms pursued by halophiles is shown to be well on the way to a quantitative habitability analysis.

  16. Time-resolved quantitative phosphoproteomics

    DEFF Research Database (Denmark)

    Verano-Braga, Thiago; Schwämmle, Veit; Sylvester, Marc

    2012-01-01

    proteins involved in the Ang-(1-7) signaling, we performed a mass spectrometry-based time-resolved quantitative phosphoproteome study of human aortic endothelial cells (HAEC) treated with Ang-(1-7). We identified 1288 unique phosphosites on 699 different proteins with 99% certainty of correct peptide...

  17. Quantitative Finance

    Science.gov (United States)

    James, Jessica

    2017-01-01

    Quantitative finance is a field that has risen to prominence over the last few decades. It encompasses the complex models and calculations that value financial contracts, particularly those which reference events in the future, and apply probabilities to these events. While adding greatly to the flexibility of the market available to corporations and investors, it has also been blamed for worsening the impact of financial crises. But what exactly does quantitative finance encompass, and where did these ideas and models originate? We show that the mathematics behind finance and behind games of chance have tracked each other closely over the centuries and that many well-known physicists and mathematicians have contributed to the field.

  18. NWS Corrections to Observations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Form B-14 is the National Weather Service form entitled 'Notice of Corrections to Weather Records.' The forms are used to make corrections to observations on forms...

  19. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... more surgeries depending on the extent of the repair needed. Click here to find out more. Corrective ...

  20. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... Jaw Surgery Download the ebook for further information Corrective jaw, or orthognathic surgery is performed by ... your treatment. Correction of Common Dentofacial Deformities The information provided here is not intended as a substitute ...

  1. A Satellite-Based Surface Radiation Climatology Derived by Combining Climate Data Records and Near-Real-Time Data

    Directory of Open Access Journals (Sweden)

    Bodo Ahrens

    2013-09-01

    Full Text Available This study presents a method for adjusting long-term climate data records (CDRs) for integrated use with near-real-time data, using the example of surface incoming solar irradiance (SIS). Recently, a 23-year (1983–2005) continuous SIS CDR has been generated based on the visible channel (0.45–1 μm) of the MVIRI radiometers onboard the geostationary Meteosat First Generation platform. The CDR is available from the EUMETSAT Satellite Application Facility on Climate Monitoring (CM SAF). Here, it is assessed whether a homogeneous extension of the SIS CDR to the present is possible with operationally generated surface radiation data provided by CM SAF using the SEVIRI and GERB instruments onboard the Meteosat Second Generation satellites. Three extended CM SAF SIS CDR versions, consisting of MVIRI-derived SIS (1983–2005) and three different SIS products derived from the SEVIRI and GERB instruments onboard the MSG satellites (2006 onwards), were tested. A procedure to detect shift inhomogeneities in the extended data record (1983–present) was applied that combines the Standard Normal Homogeneity Test (SNHT) and a penalized maximal T-test with visual inspection. Shift detection was done by comparing the SIS time series with the ground station mean, in accordance with statistical significance. Several stations of the Baseline Surface Radiation Network (BSRN) and about 50 stations of the Global Energy Balance Archive (GEBA) over Europe were used as the ground-based reference. The analysis indicates several breaks in the data record between 1987 and 1994, probably due to artefacts in the raw data and instrument failures. After 2005 the MVIRI radiometer was replaced by the narrow-band SEVIRI and broadband GERB radiometers and a new retrieval algorithm was applied. This induces significant challenges for the homogenisation across the satellite generations. Homogenisation is performed by applying a mean-shift correction depending on the shift size of
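The SNHT used for shift detection reduces to a simple statistic: standardize the series and, for every candidate break point k, weigh the squared means before and after the split; the break is where the statistic peaks. A minimal sketch on synthetic data with an artificial mean shift (as between satellite eras):

```python
import numpy as np

def snht(x):
    """Standard Normal Homogeneity Test statistic series (sketch).
    Returns T(k) for each candidate break point; the break is argmax T."""
    n = len(x)
    z = (x - x.mean()) / x.std()
    T = np.empty(n - 1)
    for k in range(1, n):
        z1, z2 = z[:k].mean(), z[k:].mean()
        T[k - 1] = k * z1**2 + (n - k) * z2**2
    return T

# Series with an artificial mean shift after sample 60
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])
T = snht(x)
k_break = int(np.argmax(T)) + 1       # detected break position
print(k_break)
```

Once the break position and size are known, the mean-shift correction described in the abstract amounts to subtracting the estimated shift from the segment on one side of the break.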

  2. Monturaqui meteorite impact crater, Chile: A field test of the utility of satellite-based mapping of ejecta at small craters

    Science.gov (United States)

    Rathbun, K.; Ukstins, I.; Drop, S.

    2017-12-01

    Monturaqui Crater is a small (~350 m diameter), simple meteorite impact crater located in the Atacama Desert of northern Chile that was emplaced in Ordovician granite overlain by discontinuous Pliocene ignimbrite. Ejecta deposits are granite and ignimbrite, with lesser amounts of dark impact melt and rare tektites and iron shale. The impact restructured existing drainage systems in the area that have subsequently eroded through the ejecta. Satellite-based mapping and modeling, including a synthesis of photographic satellite imagery and ASTER thermal infrared imagery in ArcGIS, were used to construct a basic geological interpretation of the site with special emphasis on understanding ejecta distribution patterns. This was combined with field-based mapping to construct a high-resolution geologic map of the crater and its ejecta blanket and field check the satellite-based geologic interpretation. The satellite- and modeling-based interpretation suggests a well-preserved crater with an intact, heterogeneous ejecta blanket that has been subjected to moderate erosion. In contrast, field mapping shows that the crater has a heavily-eroded rim and ejecta blanket, and the ejecta is more heterogeneous than previously thought. In addition, the erosion rate at Monturaqui is much higher than erosion rates reported elsewhere in the Atacama Desert. The bulk compositions of the target rocks at Monturaqui are similar and the ejecta deposits are highly heterogeneous, so distinguishing between them with remote sensing is less effective than with direct field observations. In particular, the resolution of available imagery for the site is too low to resolve critical details that are readily apparent in the field on the scale of 10s of cm, and which significantly alter the geologic interpretation. The limiting factors for effective remote interpretation at Monturaqui are its target composition and crater size relative to the resolution of the remote sensing methods employed. This

  3. Long-term analysis of aerosol optical depth over Northeast Asia using a satellite-based measurement: MI Yonsei Aerosol Retrieval Algorithm (YAER)

    Science.gov (United States)

    Kim, Mijin; Kim, Jhoon; Yoon, Jongmin; Chung, Chu-Yong; Chung, Sung-Rae

    2017-04-01

    In 2010, the Korean geostationary earth orbit (GEO) satellite, the Communication, Ocean, and Meteorological Satellite (COMS), was launched, carrying the Meteorological Imager (MI). The MI measures atmospheric conditions over Northeast Asia (NEA) using a single visible channel centered at 0.675 μm and four IR channels at 3.75, 6.75, 10.8 and 12.0 μm. The visible measurement can also be utilized for the retrieval of aerosol optical properties (AOPs). Since GEO satellite measurements have an advantage for continuous monitoring of AOPs, we can analyze the spatiotemporal variation of aerosol using MI observations over NEA. We therefore developed an algorithm to retrieve aerosol optical depth (AOD) from the MI visible observations, named the MI Yonsei Aerosol Retrieval Algorithm (YAER). In this study, we investigated the accuracy of MI YAER AOD by comparison with long-term AERONET sun-photometer products. The results showed that MI AODs were significantly overestimated relative to AERONET values over bright surfaces in low-AOD cases. Because the MI visible channel is centered in the red spectral range, the contribution of the aerosol signal to the measured reflectance is relatively low compared with the surface contribution, so the AOD error in low-AOD cases over bright surfaces is a fundamental limitation of the algorithm. Meanwhile, the assumption of a background aerosol optical depth (BAOD) could also introduce retrieval uncertainty. To estimate the surface reflectance considering the polluted air conditions over NEA, we estimated the BAOD pixel by pixel from the MODIS dark target (DT) aerosol products. Satellite-based AOD retrieval, however, depends strongly on the accuracy of the surface reflectance estimation, especially in low-AOD cases, and thus the BAOD could inherit the uncertainty in the surface reflectance estimation of the satellite-based retrieval. Therefore, we re-estimated the BAOD using the ground-based sun-photometer measurement, and

  4. Satellite-Based Thermophysical Analysis of Volcaniclastic Deposits: A Terrestrial Analog for Mantled Lava Flows on Mars

    Directory of Open Access Journals (Sweden)

    Mark A. Price

    2016-02-01

    Full Text Available Orbital thermal infrared (TIR) remote sensing is an important tool for characterizing geologic surfaces on Earth and Mars. However, deposition of material from volcanic or eolian activity results in bedrock surfaces becoming significantly mantled over time, hindering the accuracy of TIR compositional analysis. Moreover, the interplay between particle size, albedo, composition and surface roughness adds complexity to these interpretations. Apparent Thermal Inertia (ATI) is a measure of the resistance to temperature change and has been used to determine parameters such as grain/block size, density/mantling, and the presence of subsurface soil moisture/ice. Our objective is to document the quantitative relationship between ATI derived from orbital visible/near infrared (VNIR) and thermal infrared (TIR) data and tephra fall mantling of the Mono Craters and Domes (MCD) in California, which were chosen as an analog for partially mantled flows observed at Arsia Mons volcano on Mars. The ATI data were created from two images collected ~12 h apart by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument. The results were validated with a quantitative framework developed using fieldwork conducted at 13 pre-chosen sites. These sites ranged in grain size from ash-sized particles to meter-scale blocks and were all rhyolitic in composition. Block size and mantling were directly correlated with ATI. Areas with ATI under 2.3 × 10−2 were well-mantled with average grain size below 4 cm, whereas values greater than 3.0 × 10−2 corresponded to mantle-free surfaces. Correlation was less accurate where checkerboard-style mixing between mantled and non-mantled surfaces occurred below the pixel scale, as well as in locations with strong shadowing. However, the results validate that the approach is viable for a large majority of mantled surfaces on Earth and Mars. This is relevant for determining the volcanic history of Mars, for
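ATI is commonly approximated from a day/night image pair as (1 − albedo)/(T_day − T_night): blocky, high-inertia surfaces change temperature little between the two overpasses, while fine ash swings strongly. A minimal sketch with assumed pixel values (the study's exact processing chain may differ):

```python
import numpy as np

def apparent_thermal_inertia(albedo, t_day, t_night):
    """First-order ATI from a day/night temperature pair (sketch)."""
    return (1.0 - albedo) / (t_day - t_night)

albedo = np.array([0.15, 0.30])        # VNIR-derived broadband albedo
t_day = np.array([305.0, 310.0])       # daytime surface temperature [K]
t_night = np.array([280.0, 270.0])     # nighttime surface temperature [K]
ati = apparent_thermal_inertia(albedo, t_day, t_night)
print(np.round(ati, 3))                # blocks (high ATI) vs. fine ash (low ATI)
```

With the thresholds reported in the abstract, the first pixel (ATI above 3.0 × 10−2) would classify as mantle-free and the second (below 2.3 × 10−2) as well-mantled.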

  5. Quantitative radiography

    International Nuclear Information System (INIS)

    Brase, J.M.; Martz, H.E.; Waltjen, K.E.; Hurd, R.L.; Wieting, M.G.

    1986-01-01

    Radiographic techniques have been used in nondestructive evaluation primarily to develop qualitative information (i.e., defect detection). This project applies and extends the techniques developed in medical x-ray imaging, particularly computed tomography (CT), to develop quantitative information (both spatial dimensions and material quantities) on the three-dimensional (3D) structure of solids. Accomplishments in FY 86 include (1) improvements in experimental equipment - an improved microfocus system that will give 20-μm resolution and has potential for increased imaging speed, and (2) development of a simple new technique for displaying 3D images so as to clearly show the structure of the object. Image reconstruction and data analysis for a series of synchrotron CT experiments conducted by LLNL's Chemistry Department has begun

  6. A Dynamic Enhancement With Background Reduction Algorithm: Overview and Application to Satellite-Based Dust Storm Detection

    Science.gov (United States)

    Miller, Steven D.; Bankert, Richard L.; Solbrig, Jeremy E.; Forsythe, John M.; Noh, Yoo-Jeong; Grasso, Lewis D.

    2017-12-01

    This paper describes a Dynamic Enhancement Background Reduction Algorithm (DEBRA) applicable to multispectral satellite imaging radiometers. DEBRA uses ancillary information about the clear-sky background to reduce false detections of atmospheric parameters in complex scenes. Applied here to the detection of lofted dust, DEBRA enlists a surface emissivity database coupled with a climatological database of surface temperature to approximate the clear-sky equivalent signal for selected infrared-based multispectral dust detection tests. This background allows for suppression of false alarms caused by land surface features while retaining some ability to detect dust above those problematic surfaces. The algorithm is applicable to both day and nighttime observations and enables weighted combinations of dust detection tests. The results are provided quantitatively, as a detection confidence factor [0, 1], but are also readily visualized as enhanced imagery. Utilizing the DEBRA confidence factor as a scaling factor in false color red/green/blue imagery enables depiction of the targeted parameter in the context of the local meteorology and topography. In this way, the method holds utility to both automated clients and human analysts alike. Examples of DEBRA performance from notable dust storms and comparisons against other detection methods and independent observations are presented.
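The use of the confidence factor as a scaling factor in false-color imagery can be sketched as a per-pixel blend between the background scene and a highlight color. The dust color and toy scene below are assumptions for illustration, not the paper's enhancement:

```python
import numpy as np

def debra_enhance(rgb, confidence, dust_color=(1.0, 1.0, 0.0)):
    """Blend a highlight color into RGB imagery weighted by a [0, 1]
    detection confidence factor (sketch of the DEBRA visualization idea)."""
    f = confidence[..., None]                  # (H, W, 1) broadcast weight
    return (1.0 - f) * rgb + f * np.asarray(dust_color)

rgb = np.full((2, 2, 3), 0.4)                  # flat gray background scene
conf = np.array([[0.0, 0.5], [1.0, 0.2]])      # per-pixel dust confidence
out = debra_enhance(rgb, conf)
print(out[1, 0])                               # fully confident pixel
```

A zero-confidence pixel keeps the background scene unchanged, so the enhanced product stays interpretable in its meteorological and topographic context, as the abstract describes.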

  7. Corrections to primordial nucleosynthesis

    International Nuclear Information System (INIS)

    Dicus, D.A.; Kolb, E.W.; Gleeson, A.M.; Sudarshan, E.C.G.; Teplitz, V.L.; Turner, M.S.

    1982-01-01

    The changes in primordial nucleosynthesis resulting from small corrections to rates for weak processes that connect neutrons and protons are discussed. The weak rates are corrected by improved treatment of Coulomb and radiative corrections, and by inclusion of plasma effects. The calculations lead to a systematic decrease in the predicted 4He abundance of about ΔY = 0.0025. The relative changes in other primordial abundances are also 1 to 2%

  8. Comparing cropland net primary production estimates from inventory, a satellite-based model, and a process-based model in the Midwest of the United States

    Science.gov (United States)

    Li, Zhengpeng; Liu, Shuguang; Tan, Zhengxi; Bliss, Norman B.; Young, Claudia J.; West, Tristram O.; Ogle, Stephen M.

    2014-01-01

Accurately quantifying the spatial and temporal variability of net primary production (NPP) for croplands is essential to understand regional cropland carbon dynamics. We compared three NPP estimates for croplands in the Midwestern United States: inventory-based estimates using crop yield data from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS); estimates from the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) NPP product; and estimates from the General Ensemble biogeochemical Modeling System (GEMS) process-based model. The three methods estimated mean NPP in the range of 469–687 g C m−2 yr−1 and total NPP in the range of 318–490 Tg C yr−1 for croplands in the Midwest in 2007 and 2008. The NPP estimates from crop yield data and the GEMS model showed the mean NPP for croplands was over 650 g C m−2 yr−1, while the MODIS NPP product estimated the mean NPP was less than 500 g C m−2 yr−1. MODIS NPP also showed very different spatial variability of the cropland NPP from the other two methods. We found these differences were mainly caused by the difference in the land cover data and the crop-specific information used in the methods. Our study demonstrated that the detailed mapping of the temporal and spatial change of crop species is critical for estimating the spatial and temporal variability of cropland NPP. We suggest that high-resolution land cover data with species-specific crop information should be used in satellite-based and process-based models to improve carbon estimates for croplands.

  9. SAT-MAP-CLIMATE project results [SATellite-based bio-geophysical parameter MAPping and aggregation modelling for CLIMATE models]

    Energy Technology Data Exchange (ETDEWEB)

    Bay Hasager, C.; Woetmann Nielsen, N.; Soegaard, H.; Boegh, E.; Hesselbjerg Christensen, J.; Jensen, N.O.; Schultz Rasmussen, M.; Astrup, P.; Dellwik, E.

    2002-08-01

    Earth Observation (EO) data from imaging satellites are analysed with respect to albedo, land and sea surface temperatures, land cover types and vegetation parameters such as the Normalized Difference Vegetation Index (NDVI) and the leaf area index (LAI). The observed parameters are used in the DMI-HIRLAM-D05 weather prediction model in order to improve the forecasting. The effect of introducing actual sea surface temperatures from NOAA AVHRR, compared to climatological mean values, shows a more pronounced land-sea breeze effect which is also observable in field observations. The albedo maps from NOAA AVHRR are rather similar to the climatological mean values, so for the HIRLAM model this is insignificant, yet most likely of some importance in the HIRHAM regional climate model. Land cover type maps are assigned local roughness values determined from meteorological field observations. Only maps with a spatial resolution around 25 m can adequately map the roughness variations of the typical patch size distribution in Denmark. A roughness map covering Denmark is aggregated (i.e. area-averaged non-linearly) by a microscale aggregation model that takes into account the non-linear turbulent response to each roughness step change between patches in an arbitrary pattern. The effective roughnesses are calculated on a 15 km by 15 km grid for the HIRLAM model. The effect of hedgerows is included as an added roughness effect as a function of hedge density mapped from a digital vector map. Introducing the new effective roughness maps into the HIRLAM model appears to remedy the seasonal wind speed bias over land and sea in spring. A new parameterisation of the effective roughness for scalar surface fluxes is developed and tested on synthetic data. Furthermore, a method for estimating evapotranspiration from albedo, surface temperatures and NDVI is successfully compared with field observations.
The HIRLAM predictions of water vapour at 12 GMT are used for atmospheric correction of

  10. Hydrological modeling of the Peruvian–Ecuadorian Amazon Basin using GPM-IMERG satellite-based precipitation dataset

    Directory of Open Access Journals (Sweden)

    R. Zubieta

    2017-07-01

    Full Text Available In the last two decades, rainfall estimates provided by the Tropical Rainfall Measurement Mission (TRMM) have proven applicable in hydrological studies. The Global Precipitation Measurement (GPM) mission, which provides the new generation of rainfall estimates, is now considered a global successor to TRMM. The usefulness of GPM data in hydrological applications, however, has not yet been evaluated over the Andean and Amazonian regions. This study uses GPM data provided by the Integrated Multi-satellite Retrievals (IMERG; product/final run) as input to a distributed hydrological model for the Amazon Basin of Peru and Ecuador for a 16-month period (from March 2014 to June 2015) when all datasets are available. TRMM products (TMPA V7 and TMPA RT datasets) and a gridded precipitation dataset processed from observed rainfall are used for comparison. The results indicate that precipitation data derived from GPM-IMERG correspond more closely to TMPA V7 than to TMPA RT, but both GPM-IMERG and TMPA V7 tend to overestimate precipitation compared to observed rainfall (by 11.1 and 15.7 %, respectively). In general, GPM-IMERG, TMPA V7 and TMPA RT correlate with observed rainfall, with a similar number of rain events correctly detected (∼20 %). Statistical analysis of modeled streamflows indicates that GPM-IMERG is as useful as TMPA V7 or TMPA RT in southern regions (Ucayali Basin). GPM-IMERG, TMPA V7 and TMPA RT do not properly simulate streamflows in northern regions (Marañón and Napo basins), probably because of the lack of adequate rainfall estimates in northern Peru and the Ecuadorian Amazon.

  11. Quantitative Thermochronology

    Science.gov (United States)

    Braun, Jean; van der Beek, Peter; Batt, Geoffrey

    2006-05-01

    Thermochronology, the study of the thermal history of rocks, enables us to quantify the nature and timing of tectonic processes. Quantitative Thermochronology is a robust review of isotopic ages, and presents a range of numerical modeling techniques that allow the physical implications of isotopic age data to be explored. The authors provide analytical, semi-analytical, and numerical solutions to the heat transfer equation in a range of tectonic settings and under varying boundary conditions. They then illustrate their modeling approach with a large number of case studies. The benefits of different thermochronological techniques are also described. Computer programs on an accompanying website at www.cambridge.org/9780521830577 are introduced throughout the text and provide a means of solving the heat transport equation in the deforming Earth to predict the ages of rocks and compare them directly to geological and geochronological data. Several short tutorials, with hints and solutions, are also included. Key features: numerous case studies that help geologists to interpret age data and relate them to Earth processes; essential background material to aid in understanding and using thermochronological data; a thorough treatise on numerical modeling of heat transport in the Earth's crust; and a supporting website hosting relevant computer programs and colour slides of figures from the book for use in teaching.
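The heat transfer equation solved throughout such analyses is commonly written as an advection-diffusion balance for a crustal column exhuming at velocity v. This is a standard textbook form, stated here as background rather than quoted from the book:

```latex
\rho c \left( \frac{\partial T}{\partial t} + v \frac{\partial T}{\partial z} \right)
  = \frac{\partial}{\partial z}\!\left( k \frac{\partial T}{\partial z} \right) + \rho H
```

where $T$ is temperature, $\rho$ density, $c$ specific heat capacity, $k$ thermal conductivity, $v$ the vertical rock advection (exhumation) velocity, and $H$ the radiogenic heat production per unit mass.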

  12. Publisher Correction: Predicting unpredictability

    Science.gov (United States)

    Davis, Steven J.

    2018-06-01

    In this News & Views article originally published, the wrong graph was used for panel b of Fig. 1, and the numbers on the y axes of panels a and c were incorrect; the original and corrected Fig. 1 is shown below. This has now been corrected in all versions of the News & Views.

  13. The quantitative Morse theorem

    OpenAIRE

    Loi, Ta Le; Phien, Phan

    2013-01-01

    In this paper, we give a proof of the quantitative Morse theorem stated by Y. Yomdin. The proof is based on the quantitative Sard theorem, the quantitative inverse function theorem and the quantitative Morse lemma.

  14. Quantitative reactive modeling and verification.

    Science.gov (United States)

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  15. Correction factor for hair analysis by PIXE

    International Nuclear Information System (INIS)

    Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.

    1980-01-01

    The application of the Particle Induced X-ray Emission (PIXE) technique to analyse quantitatively the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of the energy loss of the incident particle with penetration depth, and X-ray self-absorption when a particular geometrical distribution of elements in hair is assumed for calculational purposes. The correction factor has been applied to the analysis of hair contents Zn, Cu and Ca as a function of the energy of the incident particle. (orig.)

  16. Correction factor for hair analysis by PIXE

    International Nuclear Information System (INIS)

    Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.

    1979-06-01

    The application of the Particle Induced X-ray Emission (PIXE) technique to analyse quantitatively the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of energy loss of the incident particle with penetration depth, and X-ray self-absorption when a particular geometrical distribution of elements in hair is assumed for calculational purposes. The correction factor has been applied to the analysis of hair contents Zn, Cu and Ca as a function of the energy of the incident particle. (Author)

  17. Evaluation of Satellite-Based Precipitation Products from IMERG V04A and V03D, CMORPH and TMPA with Gauged Rainfall in Three Climatologic Zones in China

    Directory of Open Access Journals (Sweden)

    Guanghua Wei

    2017-12-01

    Full Text Available A critical evaluation of a newly released precipitation data set is very important for both the end users and the data developers. Meanwhile, the evaluation may provide a benchmark for the product's continued development and future improvement. To these ends, four precipitation estimates, IMERG (the Integrated Multi-satellitE Retrievals for the Global Precipitation Measurement) V04A, IMERG V03D, CMORPH (the Climate Prediction Center Morphing technique-CRT) and TRMM (the Tropical Rainfall Measuring Mission) 3B42, are systematically evaluated against gauge precipitation estimates at multiple spatiotemporal scales from 1 June 2014 to 30 November 2015 over three topographically and climatically different watersheds in China. Statistical methods are utilized to quantify the performance of the four satellite-based precipitation estimates. The results show that: (1) over the Tibetan Plateau cold region, among all products, IMERG V04A underestimates precipitation with the largest RB (−46.98%) during the study period, and similar results are seen at the seasonal scale. However, IMERG V03D demonstrates the best performance according to RB (7.46%), RMSE (0.44 mm/day) and RRMSE (28.37%). Except in summer, TRMM 3B42 performs better than CMORPH according to RMSEs, RRMSEs and Rs; (2) within the semi-humid Huaihe River Basin, IMERG V04A has a slight advantage over the other three satellite-based precipitation products, with the lowest RMSE (0.32 mm/day) during the evaluation period, followed in order by IMERG V03D, TRMM 3B42 and CMORPH; (3) over the arid/semi-arid Weihe River Basin, in comparison with the other three products, TRMM 3B42 demonstrates the best performance, with the lowest RMSE (0.1 mm/day) and RRMSE (8.44%) and the highest R (0.92) during the study period. Meanwhile, IMERG V03D performs better than IMERG V04A according to all the statistical indicators; (4) in winter, IMERG V04A and IMERG V03D tend to underestimate the total precipitation
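The verification statistics used here (RB, RMSE, RRMSE, R) follow standard definitions in precipitation evaluation studies. A minimal sketch under that assumption, for paired daily satellite and gauge series:

```python
import math

def evaluate(sat, gauge):
    """Standard verification statistics for satellite vs. gauge rainfall.
    RB: relative bias (%), RMSE: root mean square error,
    RRMSE: RMSE relative to the gauge mean (%), R: Pearson correlation."""
    n = len(sat)
    mean_g = sum(gauge) / n
    mean_s = sum(sat) / n
    rb = 100.0 * sum(s - g for s, g in zip(sat, gauge)) / sum(gauge)
    rmse = math.sqrt(sum((s - g) ** 2 for s, g in zip(sat, gauge)) / n)
    rrmse = 100.0 * rmse / mean_g
    cov = sum((s - mean_s) * (g - mean_g) for s, g in zip(sat, gauge))
    var_s = sum((s - mean_s) ** 2 for s in sat)
    var_g = sum((g - mean_g) ** 2 for g in gauge)
    r = cov / math.sqrt(var_s * var_g)
    return {"RB": rb, "RMSE": rmse, "RRMSE": rrmse, "R": r}
```

A product that doubles every gauge value, for example, yields RB = 100 % yet R = 1, which is why bias and correlation are reported side by side.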

  18. Correction of Neonatal Hypovolemia

    Directory of Open Access Journals (Sweden)

    V. V. Moskalev

    2007-01-01

    Full Text Available Objective: to evaluate the efficiency of hydroxyethyl starch solution (6% refortane, Berlin-Chemie) versus fresh frozen plasma used to correct neonatal hypovolemia. Materials and methods: in 12 neonatal infants with hypocoagulation, hypovolemia was corrected with fresh frozen plasma (10 ml/kg body weight). In 13 neonates, it was corrected with 6% refortane infusion in a dose of 10 ml/kg. Doppler echocardiography was used to study central hemodynamic parameters, and Doppler study was employed to examine regional blood flow in the anterior cerebral and renal arteries. Results: infusion of 6% refortane and fresh frozen plasma at a rate of 10 ml/hour during an hour was found to normalize the parameters of central hemodynamics and regional blood flow. Conclusion: comparative analysis of the findings suggests that 6% refortane is the drug of choice in correcting neonatal hypovolemia. Fresh frozen plasma should be infused in hemostatic disorders.

  19. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... surgery. It is important to understand that your treatment, which will probably include orthodontics before and after ... to realistically estimate the time required for your treatment. Correction of Common Dentofacial Deformities The information provided ...

  20. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... misalignment of jaws and teeth. Surgery can improve chewing, speaking and breathing. While the patient's appearance may ... indicate the need for corrective jaw surgery: Difficulty chewing, or biting food Difficulty swallowing Chronic jaw or ...

  1. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... It can also invite bacteria that lead to gum disease. Click here to find out more. Who We ... It can also invite bacteria that lead to gum disease. Click here to find out more. Corrective Jaw ...

  2. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... is performed by an oral and maxillofacial surgeon (OMS) to correct a wide range of minor and ... when sleeping, including snoring) Your dentist, orthodontist and OMS will work together to determine whether you are ...

  3. Detecting robust signals of interannual variability of gross primary productivity in Asia from multiple terrestrial carbon cycle models and long-term satellite-based vegetation data

    Science.gov (United States)

    Ichii, K.; Kondo, M.; Ueyama, M.; Kato, T.; Ito, A.; Sasai, T.; Sato, H.; Kobayashi, H.; Saigusa, N.

    2014-12-01

    Long-term records of satellite-based terrestrial vegetation are important for evaluating terrestrial carbon cycle models. In this study, we demonstrate how multiple satellite observations can be used for evaluating past changes in gross primary productivity (GPP) and detecting robust anomalies in the terrestrial carbon cycle in Asia through our model-data synthesis analysis, Asia-MIP. We focused on two different temporal coverages: long-term (30 years; 1982-2011) and decadal (10 years; 2001-2011; data intensive period) scales. We used a NOAA/AVHRR NDVI record for the long-term analysis, multiple satellite data and products (e.g. Terra-MODIS, SPOT-VEGETATION) as historical satellite data, and multiple terrestrial carbon cycle models (e.g. BEAMS, Biome-BGC, ORCHIDEE, SEIB-DGVM, and VISIT). As a result of the long-term (30-year) trend analysis, satellite-based time-series data showed that approximately 40% of the area has experienced a significant increase in the NDVI, while only a few areas have experienced a significant decreasing trend over the last 30 years. The increases in the NDVI were dominant in the sub-continental regions of Siberia, East Asia, and India. Simulations using the terrestrial biosphere models also showed significant increases in GPP, similar to the results for the NDVI, in boreal and temperate regions. A modeled sensitivity analysis showed that the increases in GPP are explained by increased temperature and precipitation in Siberia. Precipitation, solar radiation, CO2 fertilization and land cover changes are important factors in the tropical regions. However, the relative contributions of each factor to GPP changes differ among the models. Year-to-year variations of terrestrial GPP were overall consistently captured by the satellite data and terrestrial carbon cycle models if the anomalies are large (e.g. 2003 summer GPP anomalies in East Asia and 2002 spring GPP anomalies in mid to high latitudes). The underlying mechanisms can be consistently
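Significance of an NDVI trend over a multi-decade series is typically established with a non-parametric trend test. The abstract does not state which test was used; a minimal sketch of the widely used Mann-Kendall test, ignoring tie corrections, is:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction).
    Returns (S, Z): S > 0 suggests an increasing trend; |Z| > 1.96
    indicates significance at the 5% level (two-sided)."""
    n = len(x)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = x[j] - x[i]
            s += (diff > 0) - (diff < 0)   # sign of each pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

Applied per pixel to a 30-year annual NDVI series, pixels with |Z| > 1.96 would count toward the "significant increase/decrease" fractions reported above.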

  4. ICT: isotope correction toolbox.

    Science.gov (United States)

    Jungreuthmayer, Christian; Neubauer, Stefan; Mairinger, Teresa; Zanghellini, Jürgen; Hann, Stephan

    2016-01-01

    Isotope tracer experiments are an invaluable technique to analyze and study the metabolism of biological systems. However, isotope labeling experiments are often affected by naturally abundant isotopes, especially in cases where mass spectrometric methods make use of derivatization. The correction of these additive interferences, in particular for complex isotopic systems, is numerically challenging and still an emerging field of research. When positional information is generated via collision-induced dissociation, even more complex calculations for isotopic interference correction are necessary. So far, no freely available tools can handle tandem mass spectrometry data. We present isotope correction toolbox (ICT), a program that corrects tandem mass isotopomer data from tandem mass spectrometry experiments. ICT is written in the multi-platform programming language Perl and can therefore be used on all commonly available computer platforms. Source code and documentation can be freely obtained under the Artistic License or the GNU General Public License from https://github.com/jungreuc/isotope_correction_toolbox/ (contact: christian.jungreuthmayer@boku.ac.at, juergen.zanghellini@boku.ac.at). Supplementary data are available at Bioinformatics online.
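ICT itself is written in Perl and targets the harder tandem-MS case. The core idea of natural-abundance correction can be sketched for the much simpler non-tandem case of a single element with one heavy isotope: the measured mass-isotopomer distribution is the product of a lower-triangular correction matrix and the true labeling vector, so the truth is recovered by forward substitution. This Python sketch is illustrative only and is not the toolbox's algorithm:

```python
from math import comb

def correction_matrix(n, p):
    """M[i][j]: probability that a molecule with j labeled atoms is
    measured at mass shift i, given natural abundance p of the heavy
    isotope at each of the remaining n - j positions (binomial model)."""
    M = [[0.0] * (n + 1) for _ in range(n + 1)]
    for j in range(n + 1):
        for i in range(j, n + 1):
            k = i - j  # heavy atoms contributed by natural abundance
            M[i][j] = comb(n - j, k) * p**k * (1 - p)**(n - j - k)
    return M

def correct(measured, n, p):
    """Solve M x = measured by forward substitution (M is lower triangular)."""
    M = correction_matrix(n, p)
    x = [0.0] * (n + 1)
    for i in range(n + 1):
        x[i] = (measured[i] - sum(M[i][j] * x[j] for j in range(i))) / M[i][i]
    return x
```

For example, a fully unlabeled two-carbon fragment with p = 0.1 would be measured as roughly [0.81, 0.18, 0.01]; correction recovers [1, 0, 0].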

  5. A method to correct coordinate distortion in EBSD maps

    International Nuclear Information System (INIS)

    Zhang, Y.B.; Elbrønd, A.; Lin, F.X.

    2014-01-01

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in the resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, the thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method are discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is the most efficient at correcting different local distortions in electron backscatter diffraction maps. - Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors of less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data is available after this correction

  6. Geological Corrections in Gravimetry

    Science.gov (United States)

    Mikuška, J.; Marušiak, I.

    2015-12-01

    Applying corrections for the known geology to gravity data can be traced back to the first quarter of the 20th century. Later on, mostly in areas with sedimentary cover, at local and regional scales, the correction known as gravity stripping has been in use since the mid-1960s, provided that there was enough geological information. Stripping at regional to global scales became possible after the release of the CRUST 2.0 and CRUST 1.0 models in the years 2000 and 2013, respectively. The latter model in particular provides quite a new view of the relevant geometries, of the topographic and crustal densities, and of the crust/mantle density contrast. Thus, the isostatic corrections, which have often been used in the past, can now be replaced by procedures working with independent information interpreted primarily from seismic studies. We have developed software for performing geological corrections in the space domain, based on a priori geometry and density grids, which can be of either rectangular or spherical/ellipsoidal types with cells in the shapes of rectangles, tesseroids or triangles. It enables us to calculate the required gravitational effects not only in the form of surface maps or profiles but, for instance, also along vertical lines, which can shed some additional light on the nature of the geological correction. The software can work at a variety of scales and considers the input information out to an optional distance from the calculation point, up to the antipodes. Our main objective is to treat the geological correction as an alternative to accounting for the topography with varying densities, since the bottoms of the topographic masses, namely the geoid or ellipsoid, generally do not represent geological boundaries. We would also like to call attention to the possible distortions of the corrected gravity anomalies. This work was supported by the Slovak Research and Development Agency under the contract APVV-0827-12.
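The simplest building block of such a stripping correction is the gravitational effect of a horizontal layer of known density contrast, classically approximated by the infinite (Bouguer) slab formula. This is the textbook expression, shown as orientation, not the authors' prism/tesseroid software:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_effect_mgal(density_contrast_kgm3, thickness_m):
    """Gravitational effect of an infinite horizontal (Bouguer) slab,
    g = 2*pi*G*drho*h, converted from m/s^2 to mGal (1 m/s^2 = 1e5 mGal)."""
    return 2.0 * math.pi * G * density_contrast_kgm3 * thickness_m * 1e5
```

Stripping a 1 km sedimentary layer that is 300 kg/m³ lighter than basement removes roughly 12.6 mGal from the anomaly; the prism and tesseroid formulations used in the software described above refine this by honouring finite cell geometry.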

  7. Evaluating Coral Health in La Parguera, Puerto Rico, and Southeastern Florida: Comparison of Satellite-Based Sea Surface Temperature to In Situ Observations

    Science.gov (United States)

    Gomez, A. M.; McDonald, K. C.; Shein, K. A.; Devries, S. L.; Armstrong, R.; Carlo, M.

    2017-12-01

    The third global coral bleaching event, which began in mid-2014, is a major environmental stressor that has been causing significant documented damage to coral reefs in all tropical ocean basins. This worldwide phenomenon is the longest and largest coral bleaching event on record and now finally appears to be ending. During this event, some coral colonies proved to be more resilient to increased ocean temperatures while others bleached severely. This research investigates the spatial and temporal variability of bleaching stress on coral reefs in La Parguera, Puerto Rico, and Southeastern Florida to help further understand the role of temperature and light in coral bleaching. We examine the microclimate within two coral reef systems, using in situ collections of temperature and light data from data loggers deployed throughout Cayo Enrique and Cayo Mario in La Parguera, and Lauderdale-By-The-Sea in Florida. The in situ measurements are compared to NOAA Coral Reef Watch's 5-km sea surface temperature data as well as to the associated Light Stress Damage Product. Research outcomes include statistical analyses of in situ measurements with satellite datasets supporting enhanced interpretation of satellite-based SST and light products, and ecological niche modeling to assess where corals could potentially survive under future climate conditions. Additional understanding of the microclimate encompassing coral reefs and improved satellite SST and light data will ultimately help coral reef ecosystem managers and policy makers in prioritizing resources toward the monitoring and protection of coral reef ecosystems.

  8. Toward a Satellite-Based System of Sugarcane Yield Estimation and Forecasting in Smallholder Farming Conditions: A Case Study on Reunion Island

    Directory of Open Access Journals (Sweden)

    Julien Morel

    2014-07-01

    Full Text Available Estimating sugarcane biomass is difficult to achieve when working with highly variable spatial distributions of growing conditions, as on Reunion Island. We used a dataset of in-farm fields with contrasting climatic conditions and farming practices to compare three methods of yield estimation based on remote sensing: (1) an empirical relationship method with a growing-season-integrated Normalized Difference Vegetation Index (NDVI), (2) the Kumar-Monteith efficiency model, and (3) a forced-coupling method with a sugarcane crop model (MOSICAS) and satellite-derived fraction of absorbed photosynthetically active radiation. These models were compared with the crop model alone and discussed to provide recommendations for a satellite-based system for the estimation of yield at the field scale. Results showed that the linear empirical model produced the best results (RMSE = 10.4 t∙ha−1). Because this method is also the simplest to set up and requires less input data, it appears to be the most suitable for performing operational estimations and forecasts of sugarcane yield at the field scale. The main limitation is the acquisition of a minimum of five satellite images. The upcoming open-access Sentinel-2 Earth observation system should overcome this limitation because it will provide 10-m resolution satellite images with a 5-day frequency.
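The linear empirical method regresses field yield against season-integrated NDVI. A minimal ordinary-least-squares sketch of that calibration step (the coefficients and data below are illustrative assumptions, not values from the paper):

```python
def fit_line(x, y):
    """Ordinary least squares fit y = a*x + b, closed form."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Hypothetical calibration pairs: (integrated NDVI, yield in t/ha)
ndvi_sum = [95.0, 110.0, 130.0, 150.0]
yield_tha = [60.0, 72.0, 88.0, 105.0]
a, b = fit_line(ndvi_sum, yield_tha)
```

Once `a` and `b` are fitted on reference fields, the yield of any other field follows from its own integrated NDVI, which is why at least a handful of cloud-free images per season are needed to build the integral.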

  9. Role of physical forcings and nutrient availability on the control of satellite-based chlorophyll a concentration in the coastal upwelling area of the Sicilian Channel

    Directory of Open Access Journals (Sweden)

    Bernardo Patti

    2010-08-01

    Full Text Available The northern sector of the Sicilian Channel is an area of favourable upwelling winds, which ought to support primary production. However, the values for primary production are low when compared with other Mediterranean areas and very low compared with the most biologically productive regions of the world’s oceans: California, the Canary Islands, Humboldt and Benguela. The aim of this study was to identify the main factors that limit phytoplankton biomass in the Sicilian Channel and modulate its monthly changes. We compared satellite-based estimates of chlorophyll a concentration in the Strait of Sicily with those observed in the four Eastern Boundary Upwelling Systems mentioned above and in other Mediterranean wind-induced coastal upwelling systems (the Alboran Sea, the Gulf of Lions and the Aegean Sea. Our results show that this low level of chlorophyll is mainly due to the low nutrient level in surface and sub-surface waters, independently of wind-induced upwelling intensity. Further, monthly changes in chlorophyll are mainly driven by the mixing of water column and wind-induced and/or circulation-related upwelling processes. Finally, primary production limitation due to the enhanced stratification processes resulting from the general warming trend of Mediterranean waters is not active over most of the coastal upwelling area off the southern Sicilian coast.

  10. A change detection strategy for monitoring vegetative and land-use cover types using remotely-sensed, satellite-based data

    International Nuclear Information System (INIS)

    Hallum, C.

    1993-01-01

    Changes to the environment are of critical concern in the world today; consequently, monitoring such changes and assessing their impacts are tasks demanding considerably higher priority. The ecological impacts of the natural global cycles of gases and particulates in the earth's atmosphere are highly influenced by the extent of changes to vegetative canopy characteristics, which dictates the need for a capability to detect and assess the magnitude of such changes. The primary emphasis of this paper is on the determination of the size and configuration of the sampling unit that maximizes the probability of its intersection with a 'change' area. Assessment of the significance of the 'change' in a given locality is also addressed; it relies on a statistical approach that compares the number of elemental units exceeding a reflectance threshold with the corresponding number at a previous point in time. Consideration is also given to a technical framework that supports quantifying the magnitude of the 'change' over large areas (i.e., the estimated area changing from forest to agricultural land-use). The latter entails a multistage approach which utilizes satellite-based and other related data sources

  11. Tree Canopy Light Interception Estimates in Almond and Walnut Orchards Using Ground, Low-Flying Aircraft, and Satellite-Based Methods to Improve Irrigation Scheduling Programs

    Science.gov (United States)

    Rosecrance, Richard C.; Johnson, Lee; Soderstrom, Dominic

    2016-01-01

    Canopy light interception is a main driver of water use and crop yield in almond and walnut production. Fractional green canopy cover (Fc) is a good indicator of light interception and can be estimated remotely from satellite using normalized difference vegetation index (NDVI) data. Satellite-based Fc estimates could be used to inform crop evapotranspiration models, and hence support improvements in irrigation evaluation and management capabilities. Satellite estimates of Fc in almond and walnut orchards, however, need to be verified before incorporating them into irrigation scheduling or other crop water management programs. In this study, Landsat-based NDVI and Fc from NASA's Satellite Irrigation Management Support (SIMS) were compared with four estimates of canopy cover: 1. light bar measurement, 2. in-situ and image-based dimensional tree-crown analyses, 3. high-resolution NDVI data from low flying aircraft, and 4. orchard photos obtained via Google Earth and processed by an ImageJ thresholding routine. Correlations between the various estimates are discussed.
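Fc is commonly derived from NDVI by linear scaling between bare-soil and full-canopy reference values. A minimal sketch of that conversion; the reference NDVI values are illustrative assumptions, not the coefficients actually used by SIMS:

```python
def fractional_cover(ndvi, ndvi_soil=0.15, ndvi_full=0.90):
    """Linear scaling of NDVI to fractional green canopy cover Fc,
    clamped to [0, 1]. ndvi_soil and ndvi_full are assumed reference
    values for bare soil and a fully closed canopy."""
    fc = (ndvi - ndvi_soil) / (ndvi_full - ndvi_soil)
    return max(0.0, min(1.0, fc))
```

Verification against light-bar or tree-crown measurements, as done in this study, is what justifies (or adjusts) the choice of the two reference endpoints for a given orchard system.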

  12. Robust Active Label Correction

    DEFF Research Database (Denmark)

    Kremer, Jan; Sha, Fei; Igel, Christian

    2018-01-01

    Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To minimize …, we adopt the active learning strategy of maximizing the expected model change. To select labels for correction, we consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing …

  13. Generalised Batho correction factor

    International Nuclear Information System (INIS)

    Siddon, R.L.

    1984-01-01

    There are various approximate algorithms available to calculate the radiation dose in the presence of a heterogeneous medium. The Webb and Fox product over layers formulation of the generalised Batho correction factor requires determination of the number of layers and the layer densities for each ray path. It has been shown that the Webb and Fox expression is inefficient for the heterogeneous medium which is expressed as regions of inhomogeneity rather than layers. The inefficiency of the layer formulation is identified as the repeated problem of determining for each ray path which inhomogeneity region corresponds to a particular layer. It has been shown that the formulation of the Batho correction factor as a product over inhomogeneity regions avoids that topological problem entirely. The formulation in terms of a product over regions simplifies the computer code and reduces the time required to calculate the Batho correction factor for the general heterogeneous medium. (U.K.)

  14. THE SECONDARY EXTINCTION CORRECTION

    Energy Technology Data Exchange (ETDEWEB)

    Zachariasen, W. H.

    1963-03-15

    It is shown that Darwin's formula for the secondary extinction correction, which has been universally accepted and extensively used, contains an appreciable error in the x-ray diffraction case. The correct formula is derived. As a first-order correction for secondary extinction, Darwin showed that one should use an effective absorption coefficient mu + gQ, where an unpolarized incident beam is presumed. The new derivation shows that the effective absorption coefficient is mu + 2gQ(1 + cos^4(2 theta))/(1 + cos^2(2 theta))^2, which gives mu + gQ at theta = 0 deg and theta = 90 deg, but mu + 2gQ at theta = 45 deg. Darwin's theory remains valid when applied to neutron diffraction. (auth)
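The angular factor in the corrected coefficient, mu + 2gQ(1 + cos^4(2 theta))/(1 + cos^2(2 theta))^2, is easy to check numerically. The sketch below (symbols mu, g, Q as in the abstract; numeric values arbitrary) is illustrative only, not code from the paper:

```python
import math

def effective_absorption(mu, g, Q, theta_deg):
    """Zachariasen's corrected effective absorption coefficient for
    secondary extinction (x-ray case, unpolarized beam):
        mu_eff = mu + 2*g*Q*(1 + cos^4(2*theta)) / (1 + cos^2(2*theta))**2
    """
    c2 = math.cos(math.radians(2.0 * theta_deg))
    return mu + 2.0 * g * Q * (1.0 + c2 ** 4) / (1.0 + c2 ** 2) ** 2

def darwin_effective_absorption(mu, g, Q):
    """Darwin's original first-order form, mu + g*Q (angle-independent)."""
    return mu + g * Q
```

At theta = 0 deg and 90 deg the corrected expression reduces to Darwin's mu + gQ, but at theta = 45 deg it gives mu + 2gQ, i.e. twice the extinction term.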

  15. Satellite-Based Precipitation Datasets

    Science.gov (United States)

    Munchak, S. J.; Huffman, G. J.

    2017-12-01

    Of the possible sources of precipitation data, those based on satellites provide the greatest spatial coverage. There is a wide selection of datasets, algorithms, and versions from which to choose, which can be confusing to non-specialists wishing to use the data. The International Precipitation Working Group (IPWG) maintains tables of the major publicly available, long-term, quasi-global precipitation data sets (http://www.isac.cnr.it/ ipwg/data/datasets.html), and this talk briefly reviews the various categories. As examples, NASA provides two sets of quasi-global precipitation data sets: the older Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) and current Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) mission (IMERG). Both provide near-real-time and post-real-time products that are uniformly gridded in space and time. The TMPA products are 3-hourly 0.25°x0.25° on the latitude band 50°N-S for about 16 years, while the IMERG products are half-hourly 0.1°x0.1° on 60°N-S for over 3 years (with plans to go to 16+ years in Spring 2018). In addition to the precipitation estimates, each data set provides fields of other variables, such as the satellite sensor providing estimates and estimated random error. The discussion concludes with advice about determining suitability for use, the necessity of being clear about product names and versions, and the need for continued support for satellite- and surface-based observation.
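A practical detail when using gridded products like the half-hourly IMERG fields described above is converting precipitation rates to accumulations. The sketch below uses a synthetic grid (dimensions and gamma-distributed rates are hypothetical, not IMERG data) to show the unit handling:

```python
import numpy as np

# Hypothetical day of half-hourly precipitation-rate fields (mm/h),
# shaped (48, nlat, nlon) like a tile of an IMERG-style 0.1-degree grid.
rng = np.random.default_rng(0)
rates = rng.gamma(shape=0.2, scale=5.0, size=(48, 18, 36))  # mm/h, synthetic

# Each half-hourly rate applies for 0.5 h, so the daily accumulation
# (mm/day) is the time integral of the rate:
daily_mm = (rates * 0.5).sum(axis=0)

# Equivalently, the mean rate (mm/h) times 24 h:
daily_mm_alt = rates.mean(axis=0) * 24.0
```

Mixing up rates (mm/h) and accumulations (mm per interval) is a common source of factor-of-two or factor-of-48 errors when comparing products with different time steps.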

  16. Bryant J. correction formula

    International Nuclear Information System (INIS)

    Tejera R, A.; Cortes P, A.; Becerril V, A.

    1990-03-01

    For the practical application of the method proposed by J. Bryant, the authors carried out a series of small corrections, related to the background, the dead time of the detectors and channels, the resolving time of the coincidences, the accidental coincidences, the decay scheme, and the gamma efficiency of the beta detector and the beta efficiency of the gamma detector. The calculation of the correction formula is presented in the development of the present report, with 25 combinations of the probability of the first existing state of one disintegration and of the second state of the following disintegration being presented. (Author)
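The corrections above build on the standard 4-pi beta-gamma coincidence relation. The sketch below shows only the textbook idealization (N0 = Nb*Ng/Nc with an accidental-coincidence subtraction, plus a non-paralyzable dead-time correction), not Bryant's full 25-combination formula:

```python
def deadtime_correct(measured_rate, tau):
    """Non-paralyzable dead-time correction: true rate n = m / (1 - m*tau),
    with measured rate m (counts/s) and dead time tau (s)."""
    return measured_rate / (1.0 - measured_rate * tau)

def coincidence_activity(n_beta, n_gamma, n_coinc, accidental=0.0):
    """Idealized 4-pi beta-gamma coincidence estimate of source activity,
    N0 = N_beta * N_gamma / N_coinc, after subtracting accidentals."""
    return n_beta * n_gamma / (n_coinc - accidental)
```

For a hypothetical source of 1000 Bq seen with beta efficiency 0.8 and gamma efficiency 0.3, the three channel rates are 800, 300 and 240 counts/s, and the ratio recovers the activity independently of the efficiencies, which is the appeal of the method.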

  17. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, considering a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit (...). An advantage of the model correction factor method is that, in its simpler form not using gradient information on the original limit state function, or using this information only once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations of the reliability. Methods (...)

  18. MRI intensity inhomogeneity correction by combining intensity and spatial information

    International Nuclear Information System (INIS)

    Vovk, Uros; Pernus, Franjo; Likar, Bostjan

    2004-01-01

    We propose a novel fully automated method for retrospective correction of intensity inhomogeneity, which is an undesired phenomenon in many automatic image analysis tasks, especially if quantitative analysis is the final goal. Besides most commonly used intensity features, additional spatial image features are incorporated to improve inhomogeneity correction and to make it more dynamic, so that local intensity variations can be corrected more efficiently. The proposed method is a four-step iterative procedure in which a non-parametric inhomogeneity correction is conducted. First, the probability distribution of image intensities and corresponding second derivatives is obtained. Second, intensity correction forces, condensing the probability distribution along the intensity feature, are computed for each voxel. Third, the inhomogeneity correction field is estimated by regularization of all voxel forces, and fourth, the corresponding partial inhomogeneity correction is performed. The degree of inhomogeneity correction dynamics is determined by the size of the regularization kernel. The method was qualitatively and quantitatively evaluated on simulated and real MR brain images. The obtained results show that the proposed method does not corrupt inhomogeneity-free images and successfully corrects intensity inhomogeneity artefacts even if these are more dynamic.
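The underlying image model in this kind of retrospective correction is multiplicative: observed = true x smooth bias field. The sketch below is not the authors' four-step force-based procedure; it only illustrates the multiplicative model and the role of a regularization (smoothing) kernel, by estimating the field in the log domain with a crude box filter and dividing it out:

```python
import numpy as np

def _smooth1d(a, kernel):
    # Normalized convolution so edge pixels are averaged over the valid window only.
    num = np.convolve(a, kernel, mode="same")
    den = np.convolve(np.ones_like(a), kernel, mode="same")
    return num / den

def smooth2d(img, k=15):
    """Separable box smoothing; stands in for the regularization kernel."""
    kernel = np.ones(k)
    out = np.apply_along_axis(_smooth1d, 0, img, kernel)
    return np.apply_along_axis(_smooth1d, 1, out, kernel)

def correct_inhomogeneity(img, k=15):
    """Estimate a smooth multiplicative field in the log domain, divide it out,
    and restore the original global intensity scale."""
    log_img = np.log(img)
    bias = smooth2d(log_img, k)
    corrected = np.exp(log_img - bias)
    return corrected * img.mean() / corrected.mean()
```

On a synthetic image with a constant true intensity under a smooth exponential gradient, this reduces the coefficient of variation substantially; a larger kernel k corresponds to correcting only slowly varying inhomogeneity, which mirrors the paper's remark that kernel size controls the correction dynamics.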

  19. Text Induced Spelling Correction

    NARCIS (Netherlands)

    Reynaert, M.W.C.

    2004-01-01

    We present TISC, a language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from a very large corpus of raw text, without supervision, and contains word

  20. Ballistic deficit correction

    International Nuclear Information System (INIS)

    Duchene, G.; Moszynski, M.; Curien, D.

    1991-01-01

    The EUROGAM data-acquisition has to handle a large number of events/s. Typical in-beam experiments using heavy-ion fusion reactions assume the production of about 50 000 compound nuclei per second deexciting via particle and γ-ray emissions. The very powerful γ-ray detection of EUROGAM is expected to produce high-fold event rates as large as 10^4 events/s. Such high count rates introduce, in a common dead time mode, large dead times for the whole system associated with the processing of the pulse, its digitization and its readout (from the preamplifier pulse up to the readout of the information). In order to minimize the dead time the shaping time constant τ, usually about 3 μs for large-volume Ge detectors, has to be reduced. Smaller shaping times, however, will adversely affect the energy resolution due to ballistic deficit. One possible solution is to operate the linear amplifier, with a somewhat smaller shaping time constant (in the present case we choose τ = 1.5 μs), in combination with a ballistic deficit compensator. The ballistic deficit can be corrected in different ways using a Gated Integrator, a hardware correction or even a software correction. In this paper we present a comparative study of the software and hardware corrections as well as gated integration.

  1. Correctness of concurrent processes

    NARCIS (Netherlands)

    E.R. Olderog (Ernst-Rüdiger)

    1989-01-01

    textabstractA new notion of correctness for concurrent processes is introduced and investigated. It is a relationship P sat S between process terms P built up from operators of CCS [Mi 80], CSP [Ho 85] and COSY [LTS 79] and logical formulas S specifying sets of finite communication sequences as in

  2. Error Correcting Codes

    Indian Academy of Sciences (India)

    information and coding theory. A large-scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by training, ...
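The single-error-correcting code Hamming arrived at is the classic (7,4) code: 4 data bits, 3 parity bits, and a syndrome that points directly at the flipped bit. The sketch below is the standard textbook construction (not taken from this article):

```python
import numpy as np

# Generator G = [I4 | P] and parity-check H = [P^T | I3] over GF(2)
# for the (7,4) Hamming code; every nonzero 3-bit syndrome is a column of H.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits4):
    """Map 4 data bits to a 7-bit codeword."""
    return (np.array(bits4) @ G) % 2

def correct(word7):
    """Detect and correct up to one flipped bit using the syndrome."""
    syndrome = (H @ word7) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position.
        err = int(np.argmax((H.T == syndrome).all(axis=1)))
        word7 = word7.copy()
        word7[err] ^= 1
    return word7
```

A zero syndrome means "accept the word as-is"; any nonzero syndrome identifies exactly one bit to flip, which is precisely the detect-and-correct behavior described above.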

  3. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron emitting source. The long-lived generator nuclide 68 Ge/ 68 Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: Transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, transmission scan after administration of activity to the patient. By using a double masking technique simultaneous emission and transmission scans become feasible. (orig.)
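The measured attenuation correction factors described above are simply the ratio of blank-scan to transmission-scan counts along each line of response. A minimal sketch with synthetic numbers (path lengths and count levels are hypothetical; mu is the approximate linear attenuation coefficient of water at 511 keV):

```python
import numpy as np

mu = 0.096                                     # 1/cm, water at 511 keV (approx.)
path_cm = np.array([16.0, 24.0, 36.0, 48.0])   # water-equivalent path per LOR

blank = np.full_like(path_cm, 1.0e6)            # blank scan counts (no patient)
transmission = blank * np.exp(-mu * path_cm)    # transmission scan counts

acf = blank / transmission                      # attenuation correction factors
emission = np.array([500.0, 400.0, 300.0, 200.0])
corrected = emission * acf                      # attenuation-corrected emission data
```

With these illustrative path lengths, the factors span roughly 4.6 to 100, consistent with the abstract's range of 4-5 for brain studies up to 50-100 for whole-body measurements; in practice the measured ratio also folds in scatter, randoms and counting statistics, which is what the masking techniques address.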

  4. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... their surgery, orthognathic surgery is performed to correct functional problems. Jaw surgery can have a dramatic effect on many aspects of life. Following are some of the conditions that may ... front, or side; facial injury; birth defects; receding lower jaw; and ...

  5. Error Correcting Codes

    Indian Academy of Sciences (India)

    successful consumer products of all time - the Compact Disc (CD) digital audio .... We can make ... only 2t additional parity check symbols are required to be able to correct t .... display information (containing music-related data and a table.

  6. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  7. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  8. 10. Correctness of Programs

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 3; Issue 4. Algorithms - Correctness of Programs. R K Shyamasundar. Series Article Volume 3 ... Author Affiliations. R K Shyamasundar1. Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.

  9. On the ability of RegCM4 to simulate surface solar radiation patterns over Europe: An assessment using satellite-based observations

    Science.gov (United States)

    Alexandri, Georgia; Georgoulias, Aristeidis K.; Zanis, Prodromos; Tsikerdekis, Athanasios; Katragkou, Eleni; Kourtidis, Konstantinos; Meleti, Charikleia

    2015-04-01

    We assess the ability of RegCM4 to simulate the surface solar radiation (SSR) patterns over the European domain. For the needs of this work, a decadal (1999-2009) simulation was implemented at a horizontal resolution of 50 km, using the first year as spin-up. The model is driven by emissions from CMIP5, while ERA-Interim data were used as lateral boundary conditions. The RegCM4 SSR fields were validated against satellite-based SSR observations from Meteosat First Generation (MFG) and Meteosat Second Generation (MSG) sensors (CM SAF SIS product). The RegCM4 simulations slightly overestimate SSR compared to CM SAF over Europe, with a bias of +1.54% for MFG (2000-2005) and +3.34% for MSG (2006-2009). SSR from RegCM4 is much closer to SSR from CM SAF over land (bias of -1.59% for MFG and +0.66% for MSG) than over ocean (bias of +7.20% for MFG and +8.07% for MSG). To understand the reasons for this bias, we carried out a detailed assessment of the various parameters that define SSR levels (cloud fractional cover - CFC, cloud optical thickness - COT, cloud droplet effective radius - Re, aerosol optical thickness - AOD, asymmetry factor - ASY, single scattering albedo - SSA, water vapor - WV and surface albedo - ALB). We validated the simulated CFC, COT and Re from RegCM4 against satellite-based observations from MSG and found that RegCM4 significantly underestimates CFC and Re, and overestimates COT over Europe. The aerosol-related parameters from RegCM4 were compared with values from the aerosol climatology taken into account within the CM SAF SSR estimates. AOD is significantly underestimated in our simulations, which leads to a positive SSR bias. The RegCM4 WV and ALB were compared with WV values from ERA-Interim and ALB climatological observations from CERES, which are also taken into account within the CM SAF SSR estimates. Finally, with the use of a radiative transfer model (SBDART) we quantify the relative contribution of each of ...

  10. An assessment of commonly employed satellite-based remote sensors for mapping mangrove species in Mexico using an NDVI-based classification scheme.

    Science.gov (United States)

    Valderrama-Landeros, L; Flores-de-Santiago, F; Kovacs, J M; Flores-Verdugo, F

    2017-12-14

    Optimizing the classification accuracy of a mangrove forest map is of utmost importance for conservation practitioners. Mangrove forest mapping using satellite-based remote sensing techniques is by far the most common method of classification currently used, given the logistical difficulties of field endeavors in these forested wetlands. However, there is now an abundance of options from which to choose with regard to satellite sensors, which has led to substantially different estimations of mangrove forest location and extent, with particular concern for degraded systems. The objective of this study was to assess the accuracy of mangrove forest classification using different remotely sensed data sources (i.e., Landsat-8, SPOT-5, Sentinel-2, and WorldView-2) for a system located along the Pacific coast of Mexico. Specifically, we examined a stressed semiarid mangrove forest which offers a variety of conditions such as dead areas, degraded stands, healthy mangroves, and very dense mangrove island formations. The results indicated that Landsat-8 (30 m per pixel) had the lowest overall accuracy at 64% and that WorldView-2 (1.6 m per pixel) had the highest at 93%. Moreover, the SPOT-5 and Sentinel-2 classifications (10 m per pixel) were very similar, with accuracies of 75% and 78%, respectively. In comparison to WorldView-2, the other sensors overestimated the extent of Laguncularia racemosa and underestimated the extent of Rhizophora mangle. When considering such sensors, higher spatial resolution can be particularly important in mapping the small mangrove islands that often occur in degraded mangrove systems.
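An NDVI-based classification scheme of the kind named in the title reduces, in its simplest form, to computing NDVI from red and near-infrared reflectance and binning it. The sketch below is illustrative only; the class labels and thresholds are hypothetical, not the paper's calibrated scheme:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, (NIR - Red)/(NIR + Red)."""
    return (nir - red) / (nir + red)

def classify(ndvi_img, dead=0.2, degraded=0.4):
    """Toy threshold scheme (thresholds hypothetical):
    0 = dead/bare, 1 = degraded stand, 2 = healthy mangrove."""
    classes = np.zeros(ndvi_img.shape, dtype=int)
    classes[ndvi_img >= dead] = 1
    classes[ndvi_img >= degraded] = 2
    return classes
```

Sensor spatial resolution enters through the pixels fed to `ndvi`: a 30 m Landsat-8 pixel averages together island and water signal that a 1.6 m WorldView-2 pixel keeps separate, which is one way the accuracy differences above can arise.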

  11. Satellite Based Education and Training in Remote Sensing and Geo-Information AN E-Learning Approach to Meet the Growing Demands in India

    Science.gov (United States)

    Raju, P. L. N.; Gupta, P. K.

    2012-07-01

    One of the prime activities of the Indian Space Research Organisation's (ISRO) Space Program is providing satellite communication services, viz., television broadcasting, mobile communication, cyclone disaster warning and rescue operations etc., so as to improve economic conditions, disseminate technical/scientific knowledge to improve agricultural production, and provide education for the rural people of India. ISRO, along with the National Aeronautics and Space Administration (NASA), conducted an experimental satellite communication project, the Satellite Instructional Television Experiment (SITE), using NASA's Advanced Telecommunication Satellite (ATS 6), with the objective of educating the poor people of India via satellite broadcasting in 1975 and 1976, covering more than 2600 villages in six states and territories of India. Over the years India built communication satellites indigenously to meet its communication requirements. This further led to the launch of an exclusive ISRO satellite for educational purposes, EDUSAT, in 2004, through which rich audio-video content is transmitted/received, recreating virtual classes through interactivity. The Indian Institute of Remote Sensing (IIRS), established in 1966, is a premier institute in South East Asia in disseminating Remote Sensing (RS) and Geographical Information System (GIS) education, mainly focusing on contact-based programs, but it expanded this scope with satellite-based Distance Learning Programs for universities, utilizing the dedicated communication satellite EDUSAT, in 2007. IIRS successfully conducted eight Distance Learning Programs in the last five years, training more than 6000 students, mainly at postgraduate level, from more than 60 universities/institutions spread across India. IIRS obtained feedback and improved the programs on a continuous basis. The scope of the IIRS outreach program was expanded to provide tailor-made training to user departments in any of the applications of Remote Sensing and Geoinformation, capacity ...

  12. The AMSR2 Satellite-based Microwave Snow Algorithm (SMSA) to estimate regional to global snow depth and snow water equivalent

    Science.gov (United States)

    Kelly, R. E. J.; Saberi, N.; Li, Q.

    2017-12-01

    With moderate- to high-spatial-resolution observation approaches yet to be fully scoped and developed, the long-term satellite passive microwave record remains an important tool for cryosphere-climate diagnostics. A new satellite microwave remote sensing approach is described for estimating snow depth (SD) and snow water equivalent (SWE). The algorithm, called the Satellite-based Microwave Snow Algorithm (SMSA), uses Advanced Microwave Scanning Radiometer - 2 (AMSR2) observations aboard the Global Change Observation Mission - Water mission launched by the Japan Aerospace Exploration Agency in 2012. The approach is unique since it leverages observed brightness temperatures (Tb) with static ancillary data to parameterize a physically-based retrieval without requiring parameter constraints from in situ snow depth observations or historical snow depth climatology. After screening snow from non-snow surface targets (water bodies [including freeze/thaw state], rainfall, high-altitude plateau regions [e.g. the Tibetan plateau]), moderate and shallow snow depths are estimated by minimizing the difference between Dense Media Radiative Transfer model estimates (Tsang et al., 2000; Picard et al., 2011) and AMSR2 Tb observations to retrieve SWE and SD. Parameterization of the model combines a parsimonious snow grain size and density approach originally developed by Kelly et al. (2003). Evaluation of the SMSA performance is achieved using in situ snow depth data from a variety of standard and experimental data sources. Results presented from winter seasons 2012-13 to 2016-17 illustrate the improved performance of the new approach in comparison with the baseline AMSR2 algorithm estimates, and approach the performance of the model assimilation-based approach of GlobSnow. Given the variation in estimation power of SWE by different land surface/climate models and selected satellite-derived passive microwave approaches, SMSA provides SWE estimates that are independent of real or near-real ...

  13. Anatomically Correct Surface Recovery

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Nielsen, Jannik Boll; Larsen, Rasmus

    2015-01-01

    ... using the learned statistics. A quantitative evaluation is performed on a data set of 10 laser scans of ear canal impressions with minimal noise and artificial holes. We also present a qualitative evaluation on authentic partial scans from an actual direct in-ear scanner prototype. Compared to a state...

  14. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest ways to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for the correction of refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has opened new opportunities for remodelling the cornea. The laser energy can be delivered on the stromal surface as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  15. PS Booster Orbit Correction

    CERN Document Server

    Chanel, M; Rumolo, G; Tomás, R; CERN. Geneva. AB Department

    2008-01-01

    At the end of the 2007 run, orbit measurements were carried out in the 4 rings of the PS Booster (PSB) for different working points and beam energies. The aim of these measurements was to provide the necessary input data for a PSB realignment campaign during the 2007/2008 shutdown. Currently, only very few corrector magnets can be operated reliably in the PSB; therefore the orbit correction has to be achieved by displacing (horizontally and vertically) and/or tilting some of the defocusing quadrupoles (QDs). In this report we first describe the orbit measurements, followed by a detailed explanation of the orbit correction strategy. Results and conclusions are presented in the last section.

  16. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  17. Land-use regression with long-term satellite-based greenness index and culture-specific sources to model PM2.5 spatial-temporal variability.

    Science.gov (United States)

    Wu, Chih-Da; Chen, Yu-Cheng; Pan, Wen-Chi; Zeng, Yu-Ting; Chen, Mu-Jean; Guo, Yue Leon; Lung, Shih-Chun Candice

    2017-05-01

    This study utilized a long-term satellite-based vegetation index, and considered culture-specific emission sources (temples and Chinese restaurants), with Land-use Regression (LUR) modelling to estimate the spatial-temporal variability of PM2.5 using data from Taipei metropolis, which exhibits typical Asian city characteristics. Annual average PM2.5 concentrations from 2006 to 2012 at 17 air quality monitoring stations established by the Environmental Protection Administration of Taiwan were used for model development. PM2.5 measurements from 2013 were used for external data verification. Monthly Normalized Difference Vegetation Index (NDVI) images coupled with buffer analysis were used to assess the spatial-temporal variations of greenness surrounding the monitoring sites. The distributions of temples and Chinese restaurants were included to represent the emission contributions from incense and joss money burning, and gas cooking, respectively. Spearman correlation coefficients and stepwise regression were used for LUR model development, and 10-fold cross-validation and external data verification were applied to verify the model reliability. The results showed a strongly negative correlation (r: -0.71 to -0.77) between NDVI and PM2.5, while temples (r: 0.52 to 0.66) and Chinese restaurants (r: 0.31 to 0.44) were positively correlated with PM2.5 concentrations. With an adjusted model R^2 of 0.89, a cross-validated adjusted R^2 of 0.90, and an externally validated R^2 of 0.83, the high explanatory power of the resultant model was confirmed. Moreover, the averaged NDVI within a 1750 m circular buffer (p < 0.01), the number of Chinese restaurants within a 1750 m buffer (p < 0.01), and the number of temples within a 750 m buffer (p = 0.06) were selected as important predictors during the stepwise selection procedure. According to the partial R^2, NDVI explained 66% of the PM2.5 variation and was the dominant variable in the developed model. We suggest future studies ...
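The core of an LUR model of this kind is an ordinary multiple linear regression of concentrations on buffer-aggregated predictors. The sketch below fits such a regression on entirely synthetic data whose signs mimic the paper's findings (negative NDVI effect, positive source effects); predictor names, sample size, and coefficients are all fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 17 * 7  # hypothetical: 17 stations x 7 annual averages

# Hypothetical buffer-aggregated predictors:
ndvi = rng.uniform(0.1, 0.8, n)          # mean NDVI within a buffer
restaurants = rng.poisson(20, n)         # Chinese restaurants within a buffer
temples = rng.poisson(5, n)              # temples within a buffer

# Synthetic "truth": PM2.5 falls with greenness, rises with the two sources.
pm25 = 40.0 - 25.0 * ndvi + 0.3 * restaurants + 0.8 * temples + rng.normal(0, 1, n)

# Ordinary least squares fit (intercept + 3 predictors).
X = np.column_stack([np.ones(n), ndvi, restaurants, temples])
coef, *_ = np.linalg.lstsq(X, pm25, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((pm25 - pred) ** 2) / np.sum((pm25 - pm25.mean()) ** 2)
```

A real LUR workflow adds what the abstract describes around this core: Spearman screening of candidate buffers, stepwise predictor selection, 10-fold cross-validation, and verification against held-out measurements.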

  18. SATELLITE BASED EDUCATION AND TRAINING IN REMOTE SENSING AND GEO-INFORMATION: AN E-LEARNING APPROACH TO MEET THE GROWING DEMANDS IN INDIA

    Directory of Open Access Journals (Sweden)

    P. L. N. Raju

    2012-07-01

    Full Text Available One of the prime activities of the Indian Space Research Organisation's (ISRO) Space Program is providing satellite communication services, viz., television broadcasting, mobile communication, cyclone disaster warning and rescue operations etc., so as to improve economic conditions, disseminate technical/scientific knowledge to improve agricultural production, and provide education for the rural people of India. ISRO, along with the National Aeronautics and Space Administration (NASA), conducted an experimental satellite communication project, the Satellite Instructional Television Experiment (SITE), using NASA's Advanced Telecommunication Satellite (ATS 6), with the objective of educating the poor people of India via satellite broadcasting in 1975 and 1976, covering more than 2600 villages in six states and territories of India. Over the years India built communication satellites indigenously to meet its communication requirements. This further led to the launch of an exclusive ISRO satellite for educational purposes, EDUSAT, in 2004, through which rich audio-video content is transmitted/received, recreating virtual classes through interactivity. The Indian Institute of Remote Sensing (IIRS), established in 1966, is a premier institute in South East Asia in disseminating Remote Sensing (RS) and Geographical Information System (GIS) education, mainly focusing on contact-based programs, but it expanded this scope with satellite-based Distance Learning Programs for universities, utilizing the dedicated communication satellite EDUSAT, in 2007. IIRS successfully conducted eight Distance Learning Programs in the last five years, training more than 6000 students, mainly at postgraduate level, from more than 60 universities/institutions spread across India. IIRS obtained feedback and improved the programs on a continuous basis. The scope of the IIRS outreach program was expanded to provide tailor-made training to user departments in any of the applications of Remote Sensing and ...

  19. The Feasibility of Tropospheric and Total Ozone Determination Using a Fabry-perot Interferometer as a Satellite-based Nadir-viewing Atmospheric Sensor. Ph.D. Thesis

    Science.gov (United States)

    Larar, Allen Maurice

    1993-01-01

    Monitoring of the global distribution of tropospheric ozone (O3) is desirable for enhanced scientific understanding as well as to potentially lessen the ill-health impacts associated with exposure to elevated concentrations in the lower atmosphere. Such a capability can be achieved using a satellite-based device making high spectral resolution measurements with high signal-to-noise ratios; this would enable observation in the pressure-broadened wings of strong O3 lines while minimizing the impact of undesirable signal contributions associated with, for example, the terrestrial surface, interfering species, and clouds. The Fabry-Perot Interferometer (FPI) provides the high spectral resolution and high throughput capabilities that are essential for this measurement task. Through proper selection of channel spectral regions, an FPI optimized for tropospheric O3 measurements can simultaneously observe a stratospheric component and thus the total O3 column abundance. Decreasing stratospheric O3 concentrations may lead to an increase in biologically harmful solar ultraviolet radiation reaching the earth's surface, which is detrimental to health. In this research, a conceptual instrument design to achieve the desired measurement has been formulated. This involves a double-etalon fixed-gap series-configuration FPI along with an ultra-narrow bandpass filter to achieve single-order operation with an overall spectral resolution of approximately 0.068 cm^-1. A spectral region about 1 cm^-1 wide, centered at 1054.73 cm^-1 within the strong 9.6 micron ozone infrared band, is sampled with 24 spectral channels. Other design characteristics include operation from a nadir-viewing satellite configuration utilizing a 9-inch-diameter telescope and achieving horizontal spatial resolution with a 50 km nadir footprint. A retrieval technique has been implemented and is demonstrated for a tropical atmosphere possessing enhanced tropospheric ozone amounts. An error analysis ...

  20. Satellite Based Live and Interactive Distance Learning Program in the Field of Geoinformatics - a Perspective of Indian Institute of Remote Sensing, India

    Science.gov (United States)

    Raju, P. L. N.; Gupta, P. K.; Roy, P. S.

    2011-09-01

    Geoinformatics is a highly specialized discipline that deals with Remote Sensing, Geographical Information System (GIS), Global Positioning System (GPS) and field surveys for the assessment, quantification, development and management of resources, planning and infrastructure development, utility services etc. The Indian Institute of Remote Sensing (IIRS), a premier institute and one of its kind, has played a key role in capacity building in this specialized area since its inception in 1966. Realizing the large demand, IIRS started an outreach program in the basics of Remote Sensing, GIS and GPS for universities and institutions. EDUSAT (Educational Satellite) is the communication satellite built and launched by ISRO in 2004 exclusively to serve the educational sector and to meet the demand for an interactive satellite-based distance education system for the country. IIRS has used EDUSAT (shifted recently to INSAT-4CR due to termination of services from EDUSAT) for its distance learning program to impart basic training in Remote Sensing, GIS and GPS, catering to universities spread across India. The EDUSAT-based training follows an e-learning approach but has the advantage of live interaction sessions between teacher and students when the lecture is delivered via EDUSAT satellite communication. Because of its good-quality reception, the interactions are not constrained by the bandwidth problems of the Internet. The National Natural Resource Management System, Department of Space, Government of India, under its Standing Committee in Training and Technology, funded this unique program to conduct basic training in Geoinformatics. IIRS has conducted a 6-week basic training course on "Remote Sensing, GIS and GPS" regularly since 2007. The course duration is spread over a period of 3 months beginning with the start of the academic year (1st semester), i.e., July to December every year, for university students. IIRS has utilized the EDUSAT satellite for conducting four six-week ...

  1. Motion correction in thoracic positron emission tomography

    CERN Document Server

    Gigengack, Fabian; Dawood, Mohammad; Schäfers, Klaus P

    2015-01-01

    Respiratory and cardiac motion leads to image degradation in Positron Emission Tomography (PET), which impairs quantification. In this book, the authors present approaches to motion estimation and motion correction in thoracic PET. The approaches for motion estimation are based on dual gating and mass-preserving image registration (VAMPIRE) and mass-preserving optical flow (MPOF). With mass-preservation, image intensity modulations caused by highly non-rigid cardiac motion are accounted for. Within the image registration framework different data terms, different variants of regularization and parametric and non-parametric motion models are examined. Within the optical flow framework, different data terms and further non-quadratic penalization are also discussed. The approaches for motion correction particularly focus on pipelines in dual gated PET. A quantitative evaluation of the proposed approaches is performed on software phantom data with accompanied ground-truth motion information. Further, clinical appl...

  2. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program is presented using linear algebraic techniques to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple pass orbits
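
    The linear-algebraic approach the report describes can be illustrated with a toy sketch (hypothetical response matrix and numbers, not the program presented in the report): given an orbit response matrix mapping corrector strengths to beam-position-monitor (BPM) readings, a least-squares corrector setting follows from the SVD pseudo-inverse, and truncating small singular values is one quantitative handle on the performance-versus-economy balance.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical orbit response matrix: BPM reading per unit corrector kick.
    n_bpms, n_correctors = 8, 3
    R = rng.normal(size=(n_bpms, n_correctors))

    # A measured closed-orbit distortion at the BPMs.
    orbit = rng.normal(size=n_bpms)

    # Least-squares corrector strengths via the SVD pseudo-inverse. Raising
    # rcond truncates small singular values: fewer effective correctors
    # (economy) at the cost of a larger residual orbit (performance).
    kicks = -np.linalg.pinv(R, rcond=1e-10) @ orbit
    residual = orbit + R @ kicks

    # The residual is orthogonal to the range of R, so no corrector setting
    # can reduce it further.
    print(np.linalg.norm(residual), np.linalg.norm(orbit))
    ```

    The same machinery generalizes directly to the quantities mentioned in the abstract: appending rows for angle, path length, or multi-pass orbits to R constrains them simultaneously.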

  3. Brain Image Motion Correction

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Benjaminsen, Claus; Larsen, Rasmus

    2015-01-01

    The application of motion tracking is wide, including: industrial production lines, motion interaction in gaming, computer-aided surgery and motion correction in medical brain imaging. Several devices for motion tracking exist using a variety of different methodologies. In order to use such devices...... offset and tracking noise in medical brain imaging. The data are generated from a phantom mounted on a rotary stage and have been collected using a Siemens High Resolution Research Tomograph for positron emission tomography. During acquisition the phantom was tracked with our latest tracking prototype...

  4. Scatter and attenuation correction in SPECT

    International Nuclear Information System (INIS)

    Ljungberg, Michael

    2004-01-01

    The absorbed dose is related to the activity uptake in the organ and its temporal distribution. The count rate measured with scintillation cameras is related to activity through the system sensitivity (cps/MBq). By accounting for physical processes and imaging limitations we can measure the activity at different time points. Correction for physical factors, such as attenuation and scatter, is required for accurate quantitation. Both planar and SPECT imaging can be used to estimate activities for radiopharmaceutical dosimetry. Planar methods have been the most widely used, but they are 2D techniques. With accurate modelling of the imaging process in iterative reconstruction, SPECT methods will prove to be more accurate
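
    A minimal sketch of planar quantitation with attenuation correction (illustrative numbers; the conjugate-view geometric-mean method in outline, not a validated implementation): the geometric mean of anterior and posterior count rates is independent of source depth, so correcting it with the total-thickness attenuation factor and dividing by the system sensitivity recovers the activity.

    ```python
    import math

    sensitivity = 200.0     # system sensitivity, cps/MBq (assumed)
    mu = 0.12               # effective attenuation coefficient, 1/cm (assumed)
    thickness = 20.0        # patient thickness along the view, cm
    depth = 7.0             # source depth from the anterior surface, cm
    activity_true = 50.0    # MBq

    # Anterior and posterior count rates for a point source at that depth.
    I_ant = sensitivity * activity_true * math.exp(-mu * depth)
    I_post = sensitivity * activity_true * math.exp(-mu * (thickness - depth))

    # Geometric mean cancels the depth dependence:
    # sqrt(I_ant * I_post) = sensitivity * A * exp(-mu * thickness / 2)
    activity = (math.sqrt(I_ant * I_post)
                * math.exp(mu * thickness / 2) / sensitivity)
    print(activity)  # recovers 50.0 MBq regardless of depth
    ```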

  5. Decay correction methods in dynamic PET studies

    International Nuclear Information System (INIS)

    Chen, K.; Reiman, E.; Lawson, M.

    1995-01-01

    In order to reconstruct positron emission tomography (PET) images in quantitative dynamic studies, the data must be corrected for radioactive decay. One of the two commonly used methods ignores physiological processes including blood flow that occur at the same time as radioactive decay; the other makes incorrect use of time-accumulated PET counts. In simulated dynamic PET studies using ¹¹C-acetate and ¹⁸F-fluorodeoxyglucose (FDG), these methods are shown to result in biased estimates of the time-activity curve (TAC) and model parameters. New methods described in this article provide significantly improved parameter estimates in dynamic PET studies
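
    The bias can be illustrated with a toy calculation (illustrative numbers only, not the authors' new method): counts accumulated over a frame should be decay-corrected with the frame-averaged decay factor; using the decay factor at a single time point, such as the frame midpoint, biases the recovered activity.

    ```python
    import math

    half_life = 20.4 * 60.0              # 11C half-life, seconds
    lam = math.log(2.0) / half_life      # decay constant
    A0 = 100.0                           # true, decay-free activity (a.u.)

    t1, t2 = 0.0, 1200.0                 # a 20-minute frame
    # Counts collected in the frame from a physiologically constant activity.
    counts = A0 * (math.exp(-lam * t1) - math.exp(-lam * t2)) / lam

    # Exact correction: divide the mean count rate by the frame-averaged
    # decay factor.
    avg_decay = (math.exp(-lam * t1) - math.exp(-lam * t2)) / (lam * (t2 - t1))
    exact = counts / (t2 - t1) / avg_decay

    # Naive correction: use the decay factor at the frame midpoint instead.
    mid_decay = math.exp(-lam * 0.5 * (t1 + t2))
    naive = counts / (t2 - t1) / mid_decay

    print(exact, naive)  # exact recovers A0; naive overestimates it slightly
    ```

    The discrepancy grows with the frame length relative to the half-life, which is why short-lived tracers and long late frames are the worst case.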

  6. Software correction of scatter coincidence in positron CT

    International Nuclear Information System (INIS)

    Endo, M.; Iinuma, T.A.

    1984-01-01

    This paper describes a software correction of scatter coincidence in positron CT which is based on an estimation of scatter projections from true projections by an integral transform. Kernels for the integral transform are projected distributions of scatter coincidences for a line source at different positions in a water phantom and are calculated by Klein-Nishina's formula. True projections of any composite object can be determined from measured projections by iterative applications of the integral transform. The correction method was tested in computer simulations and phantom experiments with Positologica. The results showed that effects of scatter coincidence are not negligible in the quantitation of images, but the correction reduces them significantly. (orig.)
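
    The iterative scheme can be sketched as follows (illustrative exponential kernel, not the Klein-Nishina-derived one): if the measured projection is the true projection plus a scatter term modeled as a broad convolution of the true projection, the true projection is recovered by the fixed-point iteration true ← measured − scatter(true).

    ```python
    import numpy as np

    def scatter(proj, frac=0.2, width=5.0):
        """Model scatter as convolution with a broad, low-amplitude kernel."""
        x = np.arange(proj.size, dtype=float)
        kernel = np.exp(-np.abs(x - x.mean()) / width)
        kernel *= frac / kernel.sum()        # kernel integrates to frac < 1
        return np.convolve(proj, kernel, mode="same")

    true = np.zeros(64)
    true[28:36] = 1.0                        # a simple slab "object"
    measured = true + scatter(true)          # simulated measured projection

    # Fixed-point iteration: converges because the scatter operator has
    # norm < 1 (frac = 0.2).
    estimate = measured.copy()
    for _ in range(20):
        estimate = measured - scatter(estimate)

    print(np.max(np.abs(estimate - true)))   # residual shrinks geometrically
    ```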

  7. A versatile atomic number correction for electron-probe microanalysis

    International Nuclear Information System (INIS)

    Love, G.; Cox, M.G.; Scott, V.D.

    1978-01-01

    A new atomic number correction is proposed for quantitative electron-probe microanalysis. Analytical expressions for the stopping power S and back-scatter R factors are derived which take into account atomic number of the target, incident electron energy and overvoltage; the latter expression is established using Monte Carlo calculations. The correct procedures for evaluating S and R for multi-element specimens are described. The new method, which overcomes some limitations inherent in earlier atomic number corrections, may readily be used where specimens are inclined to the electron beam. (author)

  8. Quantitative investment analysis

    CERN Document Server

    DeFusco, Richard

    2007-01-01

    In the "Second Edition" of "Quantitative Investment Analysis," financial experts Richard DeFusco, Dennis McLeavey, Jerald Pinto, and David Runkle outline the tools and techniques needed to understand and apply quantitative methods to today's investment process.

  9. Rigour in quantitative research.

    Science.gov (United States)

    Claydon, Leica Sarah

    2015-07-22

    This article, which forms part of the research series, addresses scientific rigour in quantitative research. It explores the basis and use of quantitative research and the nature of scientific rigour. It examines how the reader may determine whether quantitative research results are accurate, the questions that should be asked to determine accuracy and the checklists that may be used in this process. Quantitative research has advantages in nursing, since it can provide numerical data to help answer questions encountered in everyday practice.

  10. Quantitation: clinical applications

    International Nuclear Information System (INIS)

    Britton, K.E.

    1982-01-01

    Single photon emission tomography may be used quantitatively if its limitations are recognized and quantitation is made in relation to some reference area on the image. Relative quantitation is discussed in outline in relation to the liver, brain and pituitary, thyroid, adrenals, and heart. (U.K.)

  11. RCRA corrective action and closure

    International Nuclear Information System (INIS)

    1995-02-01

    This information brief explains how RCRA corrective action and closure processes affect one another. It examines the similarities and differences between corrective action and closure, regulators' interests in RCRA facilities undergoing closure, and how the need to perform corrective action affects the closure of DOE's permitted facilities and interim status facilities

  12. Rethinking political correctness.

    Science.gov (United States)

    Ely, Robin J; Meyerson, Debra E; Davidson, Martin N

    2006-09-01

    Legal and cultural changes over the past 40 years ushered unprecedented numbers of women and people of color into companies' professional ranks. Laws now protect these traditionally underrepresented groups from blatant forms of discrimination in hiring and promotion. Meanwhile, political correctness has reset the standards for civility and respect in people's day-to-day interactions. Despite this obvious progress, the authors' research has shown that political correctness is a double-edged sword. While it has helped many employees feel unlimited by their race, gender, or religion, the PC rule book can hinder people's ability to develop effective relationships across race, gender, and religious lines. Companies need to equip workers with skills--not rules--for building these relationships. The authors offer the following five principles for healthy resolution of the tensions that commonly arise over difference: Pause to short-circuit the emotion and reflect; connect with others, affirming the importance of relationships; question yourself to identify blind spots and discover what makes you defensive; get genuine support that helps you gain a broader perspective; and shift your mind-set from one that says, "You need to change," to one that asks, "What can I change?" When people treat their cultural differences--and related conflicts and tensions--as opportunities to gain a more accurate view of themselves, one another, and the situation, trust builds and relationships become stronger. Leaders should put aside the PC rule book and instead model and encourage risk taking in the service of building the organization's relational capacity. The benefits will reverberate through every dimension of the company's work.

  13. Quantitative whole body scintigraphy - a simplified approach

    International Nuclear Information System (INIS)

    Marienhagen, J.; Maenner, P.; Bock, E.; Schoenberger, J.; Eilles, C.

    1996-01-01

    In this paper we present investigations of a simplified method of quantitative whole-body scintigraphy using a dual-head LFOV gamma camera and a calibration algorithm that requires no additional attenuation or scatter correction. Validation of this approach on an anthropomorphic phantom as well as in patient studies showed a high accuracy in the quantification of whole-body activity (102.8% and 97.72%, respectively); by contrast, organ activities were recovered with errors of up to 12%. The described method can easily be performed using commercially available software packages and is recommendable especially for quantitative whole-body scintigraphy in a clinical setting. (orig.)

  14. Correction to toporek (2014).

    Science.gov (United States)

    2015-01-01

    Reports an error in "Pedagogy of the privileged: Review of Deconstructing Privilege: Teaching and Learning as Allies in the Classroom" by Rebecca L. Toporek (Cultural Diversity and Ethnic Minority Psychology, 2014[Oct], Vol 20[4], 621-622). This article was originally published online incorrectly as a Brief Report. The article authored by Rebecca L. Toporek has been published correctly as a Book Review in the October 2014 print publication (Vol. 20, No. 4, pp. 621-622. http://dx.doi.org/10.1037/a0036529). (The following abstract of the original article appeared in record 2014-42484-006.) Reviews the book, Deconstructing Privilege: Teaching and Learning as Allies in the Classroom edited by Kim A. Case (2013). The purpose of this book is to provide a collection of resources for those teaching about privilege directly, much of this volume may be useful for expanding the context within which educators teach all aspects of psychology. Understanding the history and systems of psychology, clinical practice, research methods, assessment, and all the core areas of psychology could be enhanced by consideration of the structural framework through which psychology has developed and is maintained. The book presents a useful guide for educators, and in particular, those who teach about systems of oppression and privilege directly. For psychologists, this guide provides scholarship and concrete strategies for facilitating students' awareness of multiple dimensions of privilege across content areas. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  15. Radiation protection: A correction

    International Nuclear Information System (INIS)

    1972-01-01

    An error in translation inadvertently distorted the sense of a paragraph in the article entitled 'Ecological Aspects of Radiation Protection', by Dr. P. Recht, which appeared in the Bulletin, Volume 14, No. 2 earlier this year. In the English text the error appears on Page 28, second paragraph, which reads, as published: 'An instance familiar to radiation protection specialists, which has since come to be regarded as a classic illustration of this approach, is the accidental release at the Windscale nuclear centre in the north of England.' In the French original of this text no reference was made, or intended, to the accidental release which took place in 1957; the reference was to the study of the critical population group exposed to routine releases from the centre, as the footnote made clear. A more correct translation of the relevant sentence reads: 'A classic example of this approach, well-known to radiation protection specialists, is that of releases from the Windscale nuclear centre, in the north of England.' A second error appeared in the footnote already referred to. In all languages, the critical population group studied in respect of the Windscale releases is named as that of Cornwall; the reference should be, of course, to that part of the population of Wales who eat laver bread. (author)

  16. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  17. Cross plane scattering correction

    International Nuclear Information System (INIS)

    Shao, L.; Karp, J.S.

    1990-01-01

    Most previous scattering correction techniques for PET are based on assumptions made for a single transaxial plane and are independent of axial variations. These techniques will incorrectly estimate the scattering fraction for volumetric PET imaging systems since they do not take the cross-plane scattering into account. In this paper, the authors propose a new point source scattering deconvolution method (2-D). The cross-plane scattering is incorporated into the algorithm by modeling a scattering point source function. In the model, the scattering dependence on both the axial and transaxial directions is reflected in the exponential fitting parameters, and these parameters are directly estimated from a limited number of measured point response functions. The authors' results comparing the standard in-plane point source deconvolution to the authors' cross-plane source deconvolution show that for a small source, the former technique overestimates the scatter fraction in the plane of the source and underestimates the scatter fraction in adjacent planes. In addition, the authors also propose a simple approximation technique for deconvolution

  18. Unconventional scaling of the anomalous Hall effect accompanying electron localization correction in the dirty regime

    KAUST Repository

    Lu, Y. M.; Cai, J. W.; Guo, Zaibing; Zhang, Xixiang

    2013-01-01

    Pt films. The relationship between electron transport and temperature reveals a quantitatively insignificant Coulomb interaction in these films, while the temperature dependent anomalous Hall conductivity experiences quantum correction from electron

  19. Quantitative analysis chemistry

    International Nuclear Information System (INIS)

    Ko, Wansuk; Lee, Choongyoung; Jun, Kwangsik; Hwang, Taeksung

    1995-02-01

    This book is about quantitative analytical chemistry. It is divided into ten chapters, which deal with the basic concepts of analytical chemistry and SI units, chemical equilibrium, basic preparations for quantitative analysis, an introduction to volumetric analysis, acid-base titration in outline with example experiments, chelate titration, oxidation-reduction titration (introduction, titration curves, and diazotization titration), precipitation titration, electrometric titration and quantitative analysis.

  20. Quantitative Algebraic Reasoning

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Panangaden, Prakash; Plotkin, Gordon

    2016-01-01

    We develop a quantitative analogue of equational reasoning which we call quantitative algebra. We define an equality relation indexed by rationals: a =ε b, which we think of as saying that "a is approximately equal to b up to an error of ε". We have four interesting examples where we have a quantitative...... equational theory whose free algebras correspond to well-known structures. In each case we have finitary and continuous versions. The four cases are: Hausdorff metrics from quantitative semilattices; p-Wasserstein metrics (hence also the Kantorovich metric) from barycentric algebras and also from pointed...

  1. Quantitative autoradiography of neurochemicals

    International Nuclear Information System (INIS)

    Rainbow, T.C.; Biegon, A.; Bleisch, W.V.

    1982-01-01

    Several new methods have been developed that apply quantitative autoradiography to neurochemistry. These methods are derived from the 2-deoxyglucose (2DG) technique of Sokoloff (1), which uses quantitative autoradiography to measure the rate of glucose utilization in brain structures. The new methods allow the measurement of the rate of cerebral protein synthesis and the levels of particular neurotransmitter receptors by quantitative autoradiography. As with the 2DG method, the new techniques can measure molecular levels in micron-sized brain structures, and can be used in conjunction with computerized systems of image processing. It is possible that many neurochemical measurements could be made by computerized analysis of quantitative autoradiograms

  2. Food systems in correctional settings

    DEFF Research Database (Denmark)

    Smoyer, Amy; Kjær Minke, Linda

    Food is a central component of life in correctional institutions and plays a critical role in the physical and mental health of incarcerated people and the construction of prisoners' identities and relationships. An understanding of the role of food in correctional settings and the effective management of food systems may improve outcomes for incarcerated people and help correctional administrators to maximize their health and safety. This report summarizes existing research on food systems in correctional settings and provides examples of food programmes in prison and remand facilities, including a case study of food-related innovation in the Danish correctional system. It offers specific conclusions for policy-makers, administrators of correctional institutions and prison-food-service professionals, and makes proposals for future research.

  3. Corrective justice and contract law

    Directory of Open Access Journals (Sweden)

    Martín Hevia

    2010-06-01

    Full Text Available This article suggests that the central aspects of contract law in various jurisdictions can be explained within the idea of corrective justice. The article is divided into three parts. The first part distinguishes between corrective justice and distributive justice. The second part describes contract law. The third part focuses on actions for breach of contract and within that context reflects upon the idea of corrective justice.


  5. Digital radiography: a quantitative approach

    International Nuclear Information System (INIS)

    Retraint, F.

    2004-01-01

    'Full-text:' In a radiograph, the value of each pixel is related to the material thickness crossed by the X-rays. Using this relationship, an object can be characterized by parameters such as depth, surface and volume. Assuming a locally linear detector response and using a radiograph of a reference object, a quantitative thickness map of the object can be obtained by applying offset and gain corrections. However, for an acquisition system composed of a cooled CCD camera optically coupled to a scintillator screen, the radiographic image formation process generates several biases which prevent obtaining this quantitative information: non-uniformity of the X-ray source, beam hardening, Compton scattering, the scintillator screen response, and the optical system response. In the first section, we propose a complete model of the radiographic image formation process taking these biases into account. In the second section, we present an inversion scheme of this model for a single-material object, which enables the thickness map of the object crossed by the X-rays to be obtained. (author)

  6. Retrieval of High-Resolution Atmospheric Particulate Matter Concentrations from Satellite-Based Aerosol Optical Thickness over the Pearl River Delta Area, China

    Directory of Open Access Journals (Sweden)

    Lili Li

    2015-06-01

    Full Text Available Satellite remote sensing offers an effective approach to estimate indicators of air quality on a large scale. It is critically significant for air quality monitoring in areas experiencing rapid urbanization and consequently severe air pollution, like the Pearl River Delta (PRD in China. This paper starts with examining ground observations of particulate matter (PM and the relationship between PM10 (particles smaller than 10 μm and aerosol optical thickness (AOT by analyzing observations at the sampling sites in the PRD. A linear regression (R2 = 0.51 is carried out using MODIS-derived 500 m-resolution AOT and PM10 concentrations from monitoring stations. Data on atmospheric boundary layer (ABL height and relative humidity are used to make vertical and humidity corrections to the AOT. Results after correction show a higher correlation (R2 = 0.55 between the extinction coefficient and PM10. However, the coarse spatial resolution of the meteorological data affects the smoothness of the retrieved maps, which suggests that high-resolution, accurate meteorological data are critical to increasing the retrieval accuracy of PM. Finally, the model provides spatial distribution maps of instantaneous and yearly average PM10 over the PRD. Observed PM10 is shown to correlate better with yearly mean AOT than with instantaneous values.
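
    The retrieval chain above can be sketched end to end (synthetic numbers and assumed correction forms, not the paper's fitted model): AOT is converted to a near-surface extinction coefficient with the ABL height, adjusted for hygroscopic growth with relative humidity, and regressed linearly against PM10.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    aot = rng.uniform(0.1, 1.0, n)        # satellite AOT (dimensionless)
    abl = rng.uniform(500.0, 2000.0, n)   # boundary-layer height, m
    rh = rng.uniform(30.0, 80.0, n)       # relative humidity, %

    # Vertical correction: assume aerosol is well mixed within the ABL.
    ext = aot / abl                       # mean extinction coefficient, 1/m
    # Humidity correction: a simple assumed hygroscopic growth factor.
    ext_dry = ext * (1.0 - rh / 100.0)

    # Synthetic "ground truth" PM10 driven by the dry extinction plus noise.
    pm10 = 1.0e5 * ext_dry + rng.normal(0.0, 5.0, n)

    # Ordinary least squares of PM10 on corrected extinction.
    A = np.column_stack([ext_dry, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, pm10, rcond=None)
    pred = A @ coef
    r2 = 1.0 - np.sum((pm10 - pred) ** 2) / np.sum((pm10 - pm10.mean()) ** 2)
    print(round(r2, 2))
    ```

    With the corrections applied before the regression, the fit improves for the same reason reported above: the corrected extinction is a better proxy for the near-surface dry particle load than raw column AOT.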

  7. Unpacking Corrections in Mobile Instruction

    DEFF Research Database (Denmark)

    Levin, Lena; Cromdal, Jakob; Broth, Mathias

    2017-01-01

    that the practice of unpacking the local particulars of corrections (i) provides for the instructional character of the interaction, and (ii) is highly sensitive to the relevant physical and mobile contingencies. These findings contribute to the existing literature on the interactional organisation of correction...

  8. Atmospheric correction of satellite data

    Science.gov (United States)

    Shmirko, Konstantin; Bobrikov, Alexey; Pavlov, Andrey

    2015-11-01

    The atmosphere accounts for more than 90% of all radiation measured by a satellite. Because of this, atmospheric correction plays an important role in separating the water-leaving radiance from the signal and in evaluating the concentrations of various water pigments (chlorophyll-a, DOM, CDOM, etc.). The elimination of the atmosphere's intrinsic radiance from the remote sensing signal is referred to as atmospheric correction.

  9. Stress Management in Correctional Recreation.

    Science.gov (United States)

    Card, Jaclyn A.

    Current economic conditions have created additional sources of stress in the correctional setting. Often, recreation professionals employed in these settings also add to inmate stress. One of the major factors limiting stress management in correctional settings is a lack of understanding of the value, importance, and perceived freedom of leisure.…

  10. NLO QCD Corrections to Drell-Yan in TeV-scale Gravity Models

    International Nuclear Information System (INIS)

    Mathews, Prakash; Ravindran, V.

    2006-01-01

    In TeV scale gravity models, we present the NLO-QCD corrections for the double differential cross sections in the scattering angle for dilepton production at hadron colliders. The quantitative impact of QCD corrections for extra dimension searches at LHC and Tevatron are investigated for both ADD and RS models through K-factors. We also show how the inclusion of QCD corrections to NLO stabilises the cross section with respect to renormalisation and factorisation scale variations

  11. Quantitative analysis by computer controlled X-ray fluorescence spectrometer

    International Nuclear Information System (INIS)

    Balasubramanian, T.V.; Angelo, P.C.

    1981-01-01

    X-ray fluorescence spectroscopy has become a widely accepted method in the metallurgical field for the analysis of both minor and major elements. As encountered in many other analytical techniques, the problem of matrix effects, generally known as interelemental effects, must be dealt with effectively in order to make the analysis accurate. There are several methods by which the effects of the matrix on the analyte are minimised or corrected for, and mathematical correction is one of them. In this method the characteristic secondary X-ray intensities are measured from standard samples, and correction coefficients for interelemental effects, if any, are evaluated by mathematical calculation. This paper describes attempts to evaluate the correction coefficients for interelemental effects by multiple linear regression programmes using a computer for the quantitative analysis of stainless steel and a nickel-base cast alloy. The quantitative results obtained using this method for a standard stainless steel sample are compared with the certified values. (author)
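
    The regression approach can be sketched on synthetic standards (hypothetical matrix coefficients, not the paper's alloy data): each element's concentration is predicted from all measured intensities, so the fitted coefficients absorb the interelemental effects.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_standards, n_elements = 30, 3

    # Known compositions of the calibration standards (mass fractions).
    conc = rng.uniform(0.0, 1.0, (n_standards, n_elements))

    # Synthetic measured intensities: proportional to concentration, perturbed
    # linearly by the other elements (the "matrix effect"), plus noise.
    M = np.array([[1.00, -0.15, 0.05],
                  [0.10, 1.00, -0.08],
                  [-0.05, 0.12, 1.00]])
    intensity = conc @ M.T + rng.normal(0.0, 0.005, (n_standards, n_elements))

    # Fit correction coefficients by multiple linear regression: predict each
    # concentration from all measured intensities (one row per standard).
    coef, *_ = np.linalg.lstsq(intensity, conc, rcond=None)

    # Apply the fitted correction to a "new" specimen of known composition.
    truth = np.array([0.2, 0.5, 0.3])
    measured = truth @ M.T
    estimate = measured @ coef
    print(np.round(estimate, 3))  # close to the true composition
    ```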

  12. Correcting sample drift using Fourier harmonics.

    Science.gov (United States)

    Bárcena-González, G; Guerrero-Lebrero, M P; Guerrero, E; Reyes, D F; Braza, V; Yañez, A; Nuñez-Moraleda, B; González, D; Galindo, P L

    2018-07-01

    During image acquisition of crystalline materials by high-resolution scanning transmission electron microscopy, sample drift can lead to distortions and shears that hinder quantitative analysis and characterization. In order to measure and correct this effect, several authors have proposed methodologies making use of series of images. In this work, we introduce a methodology to determine the drift angle via Fourier analysis of a single image, based on measurements of the angles of the second Fourier harmonics in different quadrants. Two different approaches, both independent of the angle of acquisition of the image, are evaluated. In addition, our results demonstrate that the determination of the drift angle is more accurate when measurements from non-consecutive quadrants are used and the angle of acquisition is an odd multiple of 45°. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  14. Partial Volume Effects correction in emission tomography

    International Nuclear Information System (INIS)

    Le Pogam, Adrien

    2010-01-01

    Partial Volume Effects (PVE) designates the blur commonly found in nuclear medicine images and this PhD work is dedicated to their correction with the objectives of qualitative and quantitative improvement of such images. PVE arise from the limited spatial resolution of functional imaging with either Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT). They can be defined as a signal loss in tissues of size similar to the Full Width at Half Maximum (FWHM) of the PSF of the imaging device. In addition, PVE induce activity cross contamination between adjacent structures with different tracer uptakes. This can lead to under or over estimation of the real activity of such analyzed regions. Various methodologies currently exist to compensate or even correct for PVE and they may be classified depending on their place in the processing chain: either before, during or after the image reconstruction process, as well as their dependency on co-registered anatomical images with higher spatial resolution, for instance Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). The voxel-based and post-reconstruction approach was chosen for this work to avoid regions of interest definition and dependency on proprietary reconstruction developed by each manufacturer, in order to improve the PVE correction. Two different contributions were carried out in this work: the first one is based on a multi-resolution methodology in the wavelet domain using the higher resolution details of a co-registered anatomical image associated to the functional dataset to correct. The second one is the improvement of iterative deconvolution based methodologies by using tools such as directional wavelets and curvelets extensions. These various developed approaches were applied and validated using synthetic, simulated and clinical images, for instance with neurology and oncology applications in mind. 
Finally, as currently available PET/CT scanners incorporate more
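The iterative deconvolution methodologies mentioned in this abstract typically build on the Richardson-Lucy scheme. Below is a minimal 1-D sketch assuming a known, shift-invariant PSF and no regularization; the function names and the pure-Python convolution are illustrative, not taken from the thesis, which adds wavelet/curvelet tools on top of this kind of iteration.

```python
def convolve_same(signal, kernel):
    """1-D convolution returning an output the same length as signal."""
    n, k = len(signal), len(kernel)
    half = k // 2
    out = []
    for i in range(n):
        out.append(sum(signal[i + half - j] * kernel[j]
                       for j in range(k) if 0 <= i + half - j < n))
    return out

def richardson_lucy(blurred, psf, n_iter=20):
    """Minimal Richardson-Lucy deconvolution (illustrative sketch).

    Iteratively refines an estimate of the true activity given an image
    blurred by the scanner PSF. Practical PVE-correction pipelines add
    regularization (e.g. wavelet-domain priors) to control noise.
    """
    psf_mirror = psf[::-1]
    mean = sum(blurred) / len(blurred)
    estimate = [mean] * len(blurred)          # flat, positive start
    for _ in range(n_iter):
        reblurred = convolve_same(estimate, psf)
        ratio = [b / max(r, 1e-12) for b, r in zip(blurred, reblurred)]
        corr = convolve_same(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, corr)]
    return estimate
```

With a normalized PSF the iteration conserves total counts while sharpening structures whose size is comparable to the PSF width, which is exactly the signal lost to PVE.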

  15. How to simplify transmission-based scatter correction for clinical application

    International Nuclear Information System (INIS)

    Baccarne, V.; Hutton, B.F.

    1998-01-01

    Full text: The performance of ordered subsets (OS) EM reconstruction including attenuation, scatter and spatial resolution correction is evaluated using cardiac Monte Carlo data. We demonstrate how simplifications in the scatter model allow SPECT data to be corrected for scatter, in terms of both quantitation and image quality, in a reasonable time. Initial reconstruction of the 20% window is performed including attenuation correction (broad-beam μ values) to estimate the activity quantitatively (accuracy 3%), but not spatially. A rough reconstruction with 2 iterations (subset size: 8) is sufficient for subsequent scatter correction. An estimate of the primary photons is obtained by projecting the previous distribution including attenuation (narrow-beam μ values). An estimate of the scatter is obtained by convolving the primary estimates with a depth-dependent scatter kernel and scaling the result by a factor calculated from the attenuation map. The correction can be accelerated by convolving several adjacent planes with the same kernel and using an average scaling factor. Simulating the effects of the collimator during the scatter correction was demonstrated to be unnecessary. Final reconstruction is performed using 6 iterations of OSEM, including attenuation (narrow-beam μ values) and spatial resolution correction. Scatter correction is implemented by incorporating the estimated scatter as a constant offset in the forward projection step. The total correction + reconstruction (64 projections, 40x128 pixels) takes 38 minutes on a Sun Sparc 20. Quantitatively, the accuracy is 7% in a reconstructed slice. The SNR inside the whole myocardium (defined from the original object) is 2.1 and 2.3 in the corrected and primary slices, respectively. The scatter correction preserves the myocardium-to-ventricle contrast (primary: 0.79, corrected: 0.82). These simplifications allow acceleration of the correction without affecting the quality of the result.
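The scatter-estimation step described here belongs to the convolution-subtraction family. A hedged 1-D sketch follows: the abstract's depth-dependent kernel and attenuation-map scaling factor are replaced, for illustration only, by a single kernel and a global scatter fraction.

```python
def convolve_same(signal, kernel):
    """1-D convolution returning an output the same length as signal."""
    n, k = len(signal), len(kernel)
    half = k // 2
    return [sum(signal[i + half - j] * kernel[j]
                for j in range(k) if 0 <= i + half - j < n)
            for i in range(n)]

def convolution_subtraction(measured, kernel, scatter_fraction, n_iter=3):
    """Illustrative convolution-subtraction scatter correction.

    Alternately (1) estimates scatter by blurring the current primary
    estimate and scaling it, then (2) subtracts that scatter from the
    measured projection. A fixed kernel and global scatter_fraction
    stand in for the depth-dependent kernel and attenuation-derived
    scale factor used in the paper.
    """
    primary = list(measured)
    scatter = [0.0] * len(measured)
    for _ in range(n_iter):
        scatter = [scatter_fraction * s
                   for s in convolve_same(primary, kernel)]
        primary = [max(m - s, 0.0) for m, s in zip(measured, scatter)]
    return primary, scatter
```

In the paper's pipeline the resulting scatter estimate is not subtracted but carried as a constant offset in the OSEM forward projection, which behaves better with Poisson noise.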

  16. Mitigating Satellite-Based Fire Sampling Limitations in Deriving Biomass Burning Emission Rates: Application to WRF-Chem Model Over the Northern sub-Saharan African Region

    Science.gov (United States)

    Wang, Jun; Yue, Yun; Wang, Yi; Ichoku, Charles; Ellison, Luke; Zeng, Jing

    2018-01-01

    Largely used in several independent estimates of fire emissions, fire products based on the MODIS sensors aboard the Terra and Aqua polar-orbiting satellites have a number of inherent limitations, including (a) an inability to detect fires below clouds, (b) a significant decrease of detection sensitivity at the edge of scan, where pixel sizes are much larger than at nadir, and (c) gaps between adjacent swaths in tropical regions. To remedy these limitations, an empirical method is developed here and applied to correct fire emission estimates based on MODIS pixel-level fire radiative power measurements and emission coefficients from the Fire Energetics and Emissions Research (FEER) biomass burning emission inventory. The analysis was performed for January 2010 over the northern sub-Saharan African region. Simulations from the WRF-Chem model using the original and adjusted emissions are compared with the aerosol optical depth (AOD) products from MODIS and AERONET as well as aerosol vertical profiles from CALIOP data. The comparison confirmed a 30-50% improvement in model simulation performance (in terms of correlation, bias, and spatial pattern of AOD with respect to observations) with the adjusted emissions, which not only increase the original emission amount by a factor of two but also yield spatially continuous estimates of instantaneous fire emissions at daily time scales. Such improvement cannot be achieved by simply scaling the original emissions across the study domain. Even with this improvement, a factor-of-two underestimation still exists in the modeled AOD, which is within the current global fire emissions uncertainty envelope.

  17. Automatic computation of radiative corrections

    International Nuclear Information System (INIS)

    Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Nakazawa, N.; Kaneko, T.

    1997-01-01

    Automated systems are reviewed, focusing on their general structure and the requirements specific to the calculation of radiative corrections. A detailed description of the system and its performance is presented, taking GRACE as a concrete example. (author)

  18. Publisher Correction: On our bookshelf

    Science.gov (United States)

    Karouzos, Marios

    2018-03-01

    In the version of this Books and Arts originally published, the book title Spectroscopy for Amateur Astronomy was incorrect; it should have read Spectroscopy for Amateur Astronomers. This has now been corrected.

  19. Self-correcting quantum computers

    International Nuclear Information System (INIS)

    Bombin, H; Chhajlany, R W; Horodecki, M; Martin-Delgado, M A

    2013-01-01

    Is the notion of a quantum computer (QC) resilient to thermal noise unphysical? We address this question from a constructive perspective and show that local quantum Hamiltonian models provide self-correcting QCs. To this end, we first give a sufficient condition on the connectedness of excitations for a stabilizer code model to be a self-correcting quantum memory. We then study the two main examples of topological stabilizer codes in arbitrary dimensions and establish their self-correcting capabilities. Also, we address the transversality properties of topological color codes, showing that six-dimensional color codes provide a self-correcting model that allows the transversal and local implementation of a universal set of operations in seven spatial dimensions. Finally, we give a procedure for initializing such quantum memories at finite temperature. (paper)

  20. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). Diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of the AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct the AUC for measurement error, most of which require the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct the AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
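The AUC being corrected here is commonly estimated nonparametrically as the Mann-Whitney statistic: the probability that a randomly chosen case's biomarker value exceeds a randomly chosen control's, with ties counted as one half. A minimal sketch of that base estimator (the article's correction method itself is not detailed in the abstract, so it is not reproduced here):

```python
def auc_mann_whitney(cases, controls):
    """Nonparametric AUC estimate (Mann-Whitney form).

    Counts, over all case/control pairs, how often the case value
    exceeds the control value; ties contribute 1/2. Equals 1.0 for
    perfect separation and 0.5 for a non-informative biomarker.
    """
    wins = 0.0
    for x in cases:
        for y in controls:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(cases) * len(controls))
```

Classical measurement error adds noise to both groups and pushes this statistic toward 0.5, which is the attenuation the proposed method corrects for.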

  1. Libertarian Anarchism Is Apodictically Correct

    OpenAIRE

    Redford, James

    2011-01-01

    James Redford, "Libertarian Anarchism Is Apodictically Correct", Social Science Research Network (SSRN), Dec. 15, 2011, 9 pp., doi:10.2139/ssrn.1972733. ABSTRACT: It is shown that libertarian anarchism (i.e., consistent liberalism) is unavoidably true.

  2. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product-type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  3. Spelling Correction in User Interfaces.

    Science.gov (United States)

    1982-12-20

    ...conventional typescript-oriented command language, where most commands consist of a verb followed by a sequence of arguments. Most user terminals are... and explanations, not part of the typescripts. ... 2. Design Issues: We were prompted to look for a new correction... The remaining 73% led us to wonder what other mechanisms might permit further corrections while retaining the typescript-style interface. Most of the other...

  4. Developments in quantitative electron probe microanalysis

    International Nuclear Information System (INIS)

    Tixier, R.

    1977-01-01

    The range of validity of the correction formulae used in the analysis of massive specimens is studied. The method is original: we show that it is possible to use an invariance property of corrected intensity ratios for standards. This invariance property provides a test of the self-consistency of the theory. The theoretical and experimental conditions required for quantitative electron probe microanalysis of thin transmission electron microscope specimens are examined. The correction formulae for atomic number, absorption and fluorescence effects are calculated. Several examples of experimental results are given, relating to the quantitative analysis of intermetallic precipitates and carbides in steels. Advances in applications of electron probe instruments related to the use of computers and the present development of fully automated instruments are reviewed. The statistics necessary for measurements of X-ray count data are studied. Estimation procedures and tests are developed. These methods are used to perform a statistical check of electron probe microanalysis measurements and to reject rogue values. An estimator of the confidence interval of the apparent concentration is derived. Formulae were also obtained to optimize the counting time in order to obtain the best precision in a minimum amount of time [fr]
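The counting statistics underlying such X-ray measurements are Poisson: the standard deviation of N accumulated counts is √N, so for large N a normal-approximation confidence interval is N ± z·√N. A minimal sketch of that textbook result (illustrative only; the paper develops estimators and outlier-rejection tests well beyond it):

```python
import math

def count_confidence_interval(n_counts, z=1.96):
    """Normal-approximation confidence interval for a Poisson count.

    For N accumulated X-ray counts the standard deviation is sqrt(N),
    giving an approximate 95% interval N +/- 1.96*sqrt(N) when N is
    large. Relative precision therefore improves as 1/sqrt(N), which
    is why counting-time optimization matters.
    """
    half = z * math.sqrt(n_counts)
    return n_counts - half, n_counts + half
```

For example, 10,000 counts give a relative 95% half-width of about 2%, while 100 counts give about 20%.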

  5. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation now constitute a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)

  6. Quantitative assessment of breast density from mammograms

    International Nuclear Information System (INIS)

    Jamal, N.; Ng, K.H.

    2004-01-01

    Full text: It is known that breast density is increasingly used as a risk factor for breast cancer. This study was undertaken to develop and validate a semi-automated computer technique for the quantitative assessment of breast density from digitised mammograms. The technique was developed using MATLAB (Version 6.1) based GUI applications. This semi-automated image analysis tool consists of gradient correction, segmentation of the breast region from the background, segmentation of the fibroglandular and adipose regions within the breast area, and calculation of breast density. The density is defined as the percentage of the fibroglandular tissue area divided by the total breast area in the mammogram. The technique was clinically validated with 122 normal mammograms; these were subjectively evaluated and classified according to the five parenchyma patterns of Tabar's scheme (Class I-V) by a consultant radiologist. There was a statistically significant correlation between the computer technique and the subjective classification (r² = 0.84, p < 0.05). 71.3% of the subjective classifications were reproduced correctly by the computer technique. We have developed a computer technique for the quantitative assessment of breast density and validated its accuracy for computerized classification based on Tabar's scheme. This quantitative tool is useful for the evaluation of large datasets of mammograms to predict breast cancer risk based on density. Furthermore, it has the potential to provide an early marker for success or failure in chemoprevention studies such as hormonal replacement therapy. Copyright (2004) Australasian College of Physical Scientists and Engineers in Medicine
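The density measure defined in this abstract reduces to a ratio of segmented pixel counts. A hedged sketch with simple intensity thresholds standing in for the study's gradient correction and semi-automated segmentation (the thresholds and function name are illustrative):

```python
def breast_density(image, breast_thr, dense_thr):
    """Percent breast density: fibroglandular area / total breast area.

    image: 2-D list of pixel intensities. Pixels above breast_thr are
    taken as the breast region; of those, pixels above dense_thr are
    counted as fibroglandular tissue. Fixed thresholds are a stand-in
    for the segmentation steps described in the abstract.
    """
    breast = dense = 0
    for row in image:
        for v in row:
            if v > breast_thr:
                breast += 1
                if v > dense_thr:
                    dense += 1
    return 100.0 * dense / breast if breast else 0.0
```

The returned percentage is the quantity correlated against the radiologist's Tabar-pattern classification in the validation study.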

  7. Height drift correction in non-raster atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Travis R. [Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095 (United States); Ziegler, Dominik [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Brune, Christoph [Institute for Computational and Applied Mathematics, University of Münster (Germany); Chen, Alex [Statistical and Applied Mathematical Sciences Institute, Research Triangle Park, NC 27709 (United States); Farnham, Rodrigo; Huynh, Nen; Chang, Jen-Mei [Department of Mathematics and Statistics, California State University Long Beach, Long Beach, CA 90840 (United States); Bertozzi, Andrea L., E-mail: bertozzi@math.ucla.edu [Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095 (United States); Ashby, Paul D., E-mail: pdashby@lbl.gov [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2014-02-01

    We propose a novel method to detect and correct drift in non-raster scanning probe microscopy. In conventional raster scanning drift is usually corrected by subtracting a fitted polynomial from each scan line, but sample tilt or large topographic features can result in severe artifacts. Our method uses self-intersecting scan paths to distinguish drift from topographic features. Observing the height differences when passing the same position at different times enables the reconstruction of a continuous function of drift. We show that a small number of self-intersections is adequate for automatic and reliable drift correction. Additionally, we introduce a fitness function which provides a quantitative measure of drift correctability for any arbitrary scan shape. - Highlights: • We propose a novel height drift correction method for non-raster SPM. • Self-intersecting scans enable the distinction of drift from topographic features. • Unlike conventional techniques our method is unsupervised and tilt-invariant. • We introduce a fitness measure to quantify correctability for general scan paths.
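The core idea above is that a height difference measured when the scan revisits the same position is pure drift, since the topography has not changed. A minimal sketch for the simplest case of a constant drift rate fitted by least squares (the paper reconstructs a general continuous drift function; this reduced form and the function name are illustrative):

```python
def linear_drift_rate(crossings):
    """Estimate a constant height-drift rate from self-intersections.

    crossings: list of (t1, t2, dh) tuples, where dh is the height
    difference observed when the scan path passes the same (x, y)
    position at times t1 and t2. For pure drift dh = drift(t2) -
    drift(t1); assuming linear drift dh ~ r * (t2 - t1), the rate r
    follows from least squares over all crossings.
    """
    num = sum(dh * (t2 - t1) for t1, t2, dh in crossings)
    den = sum((t2 - t1) ** 2 for t1, t2, _ in crossings)
    return num / den
```

Subtracting r·t from each height sample then removes the drift without fitting polynomials per scan line, which is what makes the method tilt-invariant.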

  8. Comparative evaluation of scatter correction techniques in 3D positron emission tomography

    CERN Document Server

    Zaidi, H

    2000-01-01

    Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: five scatter correction methods are compared in this paper, where applicable: the dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2), and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements...

  9. Quantitative secondary electron detection

    Science.gov (United States)

    Agrawal, Jyoti; Joy, David C.; Nayak, Subuhadarshi

    2018-05-08

    Quantitative secondary electron detection (QSED) using an array of solid-state-device (SSD) based electron counters enables critical dimension metrology measurements in materials such as semiconductors, nanomaterials, and biological samples (FIG. 3). Methods and devices effect quantitative detection of secondary electrons with an array comprising a number of solid state detectors. The array senses the secondary electrons with a plurality of solid state detectors, counting them with a time-to-digital converter circuit in counter mode.

  10. [Methods of quantitative proteomics].

    Science.gov (United States)

    Kopylov, A T; Zgoda, V G

    2007-01-01

    In modern science, proteomic analysis is inseparable from the other fields of systems biology. With its huge resources, quantitative proteomics handles colossal amounts of information on the molecular mechanisms of life. Advances in proteomics help researchers to solve complex problems of cell signaling, posttranslational modification, structural and functional homology of proteins, molecular diagnostics, etc. More than 40 different methods have been developed in proteomics for the quantitative analysis of proteins. Although each method is unique and has certain advantages and disadvantages, all of them use various isotope labels (tags). In this review we consider the most popular and effective methods, employing both chemical modification of proteins and metabolic and enzymatic isotope labeling.

  11. Surgical correction of postoperative astigmatism

    Directory of Open Access Journals (Sweden)

    Lindstrom Richard

    1990-01-01

    Full Text Available The photokeratoscope has increased the understanding of the aspheric nature of the cornea and provided a better understanding of normal corneal topography. This has significantly affected the development of newer and more predictable models of surgical astigmatic correction. Relaxing incisions flatten the steeper meridian by an amount equivalent to the steepening of the flatter meridian; the net change in spherical equivalent is therefore negligible. Poor predictability is the major limitation of relaxing incisions. Wedge resection can correct large degrees of postkeratoplasty astigmatism: resection of 0.10 mm of tissue results in approximately 2 diopters of astigmatic correction. Prolonged postoperative rehabilitation and induced irregular astigmatism are limitations of the procedure. Transverse incisions flatten the steeper meridian by an amount equivalent to the steepening of the flatter meridian. Semiradial incisions produce twice the amount of flattening in the meridian of the incision compared to the meridian 90 degrees away. The combination of transverse incisions with semiradial incisions constitutes the trapezoidal astigmatic keratotomy. This procedure may correct from 5.5 to 11.0 diopters, depending upon the age of the patient. The surgical keratometer is helpful in assessing a proper endpoint during surgical correction of astigmatism.

  12. Fully 3D refraction correction dosimetry system

    International Nuclear Information System (INIS)

    Manjappa, Rakesh; Makki, S Sharath; Kanhirodan, Rajan; Kumar, Rajesh; Vasu, Ram Mohan

    2016-01-01

    medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned in dry air using a wide-angle lens that collects refracted light. The images reconstructed using cone beam geometry are seen to deteriorate in some planes because those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners, as it is not possible to identify refracted rays in sinogram space. (paper)

  13. Fully 3D refraction correction dosimetry system.

    Science.gov (United States)

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned in dry air using a wide-angle lens that collects refracted light. The images reconstructed using cone beam geometry are seen to deteriorate in some planes because those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners, as it is not possible to identify refracted rays in sinogram space.

  14. Corrections.

    Science.gov (United States)

    1994-05-27

    In "Women in Science: Some Books of the Year" (11 March, p. 1458) the name of the senior editor of the second edition of The History of Women and Science, Health, and Technology should have been given as Phyllis Holman Weisbard, and the name of the editor of the first edition should have been given as Susan Searing. Also, the statement that the author of A Matter of Choices: Memoirs of a Female Physicist, Fay Ajzenberg-Selove, is now retired was incorrect.

  15. Corrections.

    Science.gov (United States)

    2016-02-01

    In the October In Our Unit article by Cooper et al, “Against All Odds: Preventing Pressure Ulcers in High-Risk Cardiac Surgery Patients” (Crit Care Nurse. 2015;35[5]:76–82), there was an error in the reference citation on page 82. At the top of that page, reference 18 cited on the second line should be reference 23, which also should be added to the References list: 23. AHRQ website. Prevention and treatment program integrates actionable reports into practice, significantly reducing pressure ulcers in nursing home residents. November 2008. https://innovations.ahrq.gov/profiles/prevention-and-treatment-program-integrates-actionable-reports-practice-significantly. Accessed November 18, 2015

  16. Correction.

    Science.gov (United States)

    2015-06-01

    Gillon R. Defending the four principles approach as a good basis for good medical practice and therefore for good medical ethics. J Med Ethics 2015;41:111–6. The author misrepresented Beauchamp and Childress when he wrote: ‘My own view (unlike Beauchamp and Childress who explicitly state that they make no such claim (p. 421)1, is that all moral agents whether or not they are doctors or otherwise involved in healthcare have these prima facie moral obligations; but in the context of answering the question ‘what is it to do good medical ethics?’ my claim is limited to the ethical obligations of doctors’. The author intended and should have written the following: ‘My own view, unlike Beauchamp and Childress who explicitly state that they make no such claim (p. 421)1, is that these four prima facie principles can provide a basic moral framework not only for medical ethics but for ethics in general’.

  17. Correction.

    Science.gov (United States)

    2015-03-01

    In the January 2015 issue of Cyberpsychology, Behavior, and Social Networking (vol. 18, no. 1, pp. 3–7), the article "Individual Differences in Cyber Security Behaviors: An Examination of Who Is Sharing Passwords." by Prof. Monica Whitty et al., has an error in wording in the abstract. The sentence in question was originally printed as: Contrary to our hypotheses, we found older people and individuals who score high on self-monitoring were more likely to share passwords. It should read: Contrary to our hypotheses, we found younger people and individuals who score high on self-monitoring were more likely to share passwords. The authors wish to apologize for the error.

  18. Correction

    CERN Multimedia

    2007-01-01

    From left to right: Luis, Carmen, Mario, Christian and José listening to speeches by theorists Alvaro De Rújula and Luis Alvarez-Gaumé (right) at their farewell gathering on 15 May. We unfortunately cut out a part of the "Word of thanks" from the team retiring from Restaurant No. 1. The complete message is published below: Dear friends, You are the true "nucleus" of CERN. Every member of this extraordinary human mosaic will always remain in our affections and in our thoughts. We have all been very touched by your spontaneous generosity. Arrivederci, Mario. Au revoir, Christian. Hasta siempre, Carmen, José and Luis. PS: Lots of love to the theory team and to the hidden organisers. So long!

  19. Correction

    Science.gov (United States)

    2014-01-01

    In the meeting report "Strategies to observe and understand processes and drivers in the biogeosphere," published in the 14 January 2014 issue of Eos (95(2), 16, doi:10.1002/2014EO020004), an incorrect affiliation was listed for one coauthor. Michael Young is with the University of Texas at Austin.

  20. Data-driven motion correction in brain SPECT

    International Nuclear Information System (INIS)

    Kyme, A.Z.; Hutton, B.F.; Hatton, R.L.; Skerrett, D.W.

    2002-01-01

    Patient motion can cause image artifacts in SPECT despite restraining measures. Data-driven detection and correction of motion can be achieved by comparison of acquired data with the forward-projections. By optimising the orientation of the reconstruction, parameters can be obtained for each misaligned projection and applied to update this volume using a 3D reconstruction algorithm. Digital and physical phantom validation was performed to investigate this approach. Noisy projection data simulating at least one fully 3D patient head movement during acquisition were constructed by projecting the digital Huffman brain phantom at various orientations. Motion correction was applied to the reconstructed studies. The importance of including attenuation effects in the estimation of motion and the need for implementing an iterated correction were assessed in the process. Correction success was assessed visually for artifact reduction, and quantitatively using a mean square difference (MSD) measure. Physical Huffman phantom studies with deliberate movements introduced during the acquisition were also acquired and motion corrected. Effective artifact reduction in the simulated corrupt studies was achieved by motion correction. Typically the MSD ratio between the corrected and reference studies compared to the corrupted and reference studies was > 2. Motion correction could be achieved without inclusion of attenuation effects in the motion estimation stage, providing simpler implementation and greater efficiency. Moreover the additional improvement with multiple iterations of the approach was small. Improvement was also observed in the physical phantom data, though the technique appeared limited here by an object symmetry. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc
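The quantitative success measure used above is a mean square difference (MSD) against a reference study, with improvement reported as a ratio between the corrupted-study MSD and the corrected-study MSD. A minimal sketch of that metric on flattened image arrays (function names are illustrative):

```python
def mean_square_difference(a, b):
    """Voxel-wise mean square difference between two images
    (given as flat sequences of equal length)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def msd_ratio(corrupted, corrected, reference):
    """MSD of the corrupted study over MSD of the corrected study,
    both against the reference. Under the convention assumed here,
    values > 1 indicate the correction moved the image closer to the
    reference; the paper reports ratios typically > 2."""
    return (mean_square_difference(corrupted, reference)
            / mean_square_difference(corrected, reference))
```

Applied to the simulated Huffman-phantom studies, this is the figure of merit behind the reported "> 2" improvement.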

  1. Quantitative Decision Support Requires Quantitative User Guidance

    Science.gov (United States)

    Smith, L. A.

    2009-12-01

    Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today's models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to "future-proof" their customers from climate change. How is a US- or European-based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here, or when a commercial supplier is vastly overselling the insights of today's climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of the communication of the uses of climate model output, targeting users and policy makers as well as other specialist adaptation scientists, are discussed. First, a brief scientific evaluation of the length and time scales at which climate model output is likely to become uninformative is provided, including a note on the applicability of the latest Bayesian methodology to the output of current state-of-the-art general circulation models. Second, a critical evaluation is given of the language often employed in communicating climate model output: language which accurately states that models are "better", have "improved" and now "include" and "simulate" relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative and misleading, both for the current climate and as a function of the state of each climate simulation. And third, a general approach for evaluating the relevance of quantitative climate model output

  2. Benthic Habitat Mapping Using Multispectral High-Resolution Imagery: Evaluation of Shallow Water Atmospheric Correction Techniques

    Directory of Open Access Journals (Sweden)

    Francisco Eugenio

    2017-11-01

    Full Text Available Remote multispectral data can provide valuable information for monitoring coastal water ecosystems. Specifically, high-resolution satellite-based imaging systems such as WorldView-2 (WV-2) can generate information at the spatial scales needed to implement conservation actions for protected littoral zones. However, the coastal water-leaving radiance arriving at the space-based sensor is often small compared to the reflected radiance. In this work, complex approaches that use an accurate radiative transfer code to correct atmospheric effects, such as FLAASH, ATCOR and 6S, have been implemented for high-resolution imagery and assessed in real scenarios using field spectroradiometer data. In this context, all three approaches achieved excellent results, with a slightly superior performance observed for the 6S model-based algorithm. Finally, for the mapping of benthic habitats in shallow-water marine protected environments, a relevant application of the proposed atmospheric correction combined with an automatic deglinting procedure is presented. This approach is based on the integration of a linear mixing model of benthic classes within the radiative transfer model of the water. The complete methodology has been applied to selected ecosystems in the Canary Islands (Spain), but the obtained results allow the robust mapping of the spatial distribution and density of seagrass in coastal waters and the analysis of multitemporal variations related to human activity and climate change in littoral zones.

  3. Quantitative analysis of thallium-201 myocardial scintigraphy

    International Nuclear Information System (INIS)

    Kanemoto, Nariaki; Hoer, G.; Johost, S.; Maul, F.-D.; Standke, R.

    1981-01-01

    A method for the quantitative analysis of thallium-201 myocardial scintigraphy using a computer-assisted technique is described. The calculated indices are the washout factor, the vitality index and the redistribution factor. The washout factor is the ratio of the counts at a certain time after exercise to the counts immediately after exercise. This value is necessary for evaluating redistribution to the ischemic areas in serial images, in order to correct for the Tl-201 washout from the myocardium under the assumption that the washout is constant over the whole myocardium. The vitality index is the ratio between the Tl-201 uptake in the region of interest and the maximum uptake. The redistribution factor is the ratio of the redistribution in the region of interest in serial images after exercise to that immediately after exercise. Four examples of exercise Tl-201 myocardial scintigrams and the quantitative analyses before and after percutaneous transluminal coronary angioplasty are presented. (author)
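    The three indices defined above are simple count ratios. A minimal sketch follows; the function names and inputs are ours, not from the paper, and the exact placement of the washout correction in the redistribution factor is an assumption:

```python
# Illustrative implementations of the three Tl-201 indices described above.
# All inputs are scalar count values from regions of interest (ROIs).

def washout_factor(counts_delayed, counts_initial):
    """Ratio of counts at a later time point to counts immediately after exercise."""
    return counts_delayed / counts_initial

def vitality_index(roi_uptake, max_uptake):
    """Tl-201 uptake in a ROI relative to the maximum uptake in the myocardium."""
    return roi_uptake / max_uptake

def redistribution_factor(roi_delayed, roi_initial, washout):
    """Delayed-to-initial ROI ratio, corrected for the assumed uniform washout
    (the correction step is our reading of the abstract, not a quoted formula)."""
    return (roi_delayed / roi_initial) / washout
```

A ratio of 1.0 after washout correction would indicate redistribution matching the global washout, i.e. no net defect reversal.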

  4. Quantitative sputter profiling at surfaces and interfaces

    International Nuclear Information System (INIS)

    Kirschner, J.; Etzkorn, H.W.

    1981-01-01

    The key problem in quantitative sputter profiling, that of a sliding depth scale, has been solved by combined Auger/X-ray microanalysis. By means of this technique, and for the model system Ge/Si (amorphous), the following questions are treated quantitatively: the shape of the sputter profiles when sputtering through an interface and the origin of their asymmetry; the precise location of the interface plane on the depth profile; broadening effects due to the limited depth of information and their correction; the origin and amount of bombardment-induced broadening for different primary ions and energies; the depth dependence of the broadening; and basic limits to depth resolution. Comparisons are made with recent theoretical calculations based on recoil mixing in the collision cascade, and very good agreement is found

  5. Universality of quantum gravity corrections.

    Science.gov (United States)

    Das, Saurya; Vagenas, Elias C

    2008-11-28

    We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.
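    The GUP referred to above is commonly quoted in the following quadratic form (a standard expression in the literature, not taken verbatim from this paper), where β₀ is dimensionless and M_Pl is the Planck mass:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left[\,1 + \beta\,(\Delta p)^2 + \cdots\right],
\qquad \beta = \frac{\beta_0}{(M_{\mathrm{Pl}}\,c)^2}
```

Since every correction then scales with β, precision measurements of low-energy observables such as the Lamb shift translate directly into upper bounds on β₀.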

  6. String-Corrected Black Holes

    Energy Technology Data Exchange (ETDEWEB)

    Hubeny, V.

    2005-01-12

    We investigate the geometry of four dimensional black hole solutions in the presence of stringy higher curvature corrections to the low energy effective action. For certain supersymmetric two charge black holes these corrections drastically alter the causal structure of the solution, converting seemingly pathological null singularities into timelike singularities hidden behind a finite area horizon. We establish, analytically and numerically, that the string-corrected two-charge black hole metric has the same Penrose diagram as the extremal four-charge black hole. The higher derivative terms lead to another dramatic effect--the gravitational force exerted by a black hole on an inertial observer is no longer purely attractive. The magnitude of this effect is related to the size of the compactification manifold.

  7. Quantitative determination of minor and trace elements in rocks and soils by spark source mass spectrometry

    International Nuclear Information System (INIS)

    Ure, A.M.; Bacon, J.R.

    1978-01-01

    Experimental details are given of the quantitative determination of minor and trace elements in rocks and soils by spark source mass spectrometry. The effects of interfering species, and corrections that can be applied, are discussed. (U.K.)

  8. SELF CORRECTION WORKS BETTER THAN TEACHER CORRECTION IN EFL SETTING

    Directory of Open Access Journals (Sweden)

    Azizollah Dabaghi

    2012-11-01

    Full Text Available Learning a foreign language takes place step by step, and mistakes are to be expected at all stages of learning. EFL learners are usually afraid of making mistakes, which prevents them from being receptive and responsive. Overcoming the fear of mistakes depends on the way mistakes are rectified. It is believed that autonomy and learner-centredness suggest that in some settings learners' self-correction of mistakes might be more beneficial for language learning than teacher correction. This assumption has been the subject of debate for some time. Some researchers believe that correction, whether by the teacher or by the learners themselves, is effective in showing learners how their current interlanguage differs from the target (Long & Robinson, 1998). Others suggest that corrections, whether direct or through recasts, are ambiguous and may be perceived by the learner as confirmation of meaning rather than feedback on form (Lyster, 1998a). This study investigates the effects of correction on Iranian intermediate EFL learners' writing composition at Payam Noor University. For this purpose, 90 students majoring in English at Isfahan Payam Noor University were invited to participate in the experiment. They all received a sample TOEFL test, and a total of 60 participants whose scores were within one standard deviation of the mean were divided into two equal groups, experimental and control. The experimental group received correction during the experiment while the control group remained intact and the ordinary processes of teaching went on. Each group received twelve sessions of two-hour classes every week in an advanced writing course in which selected activities from Modern English (II were used. After the treatment both groups received an immediate test as a post-test, and the experimental group took a second post-test as a delayed recall test with the same design as the
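    The participant-selection step described above, retaining only scores within one standard deviation of the mean, can be sketched as follows (function name and data are illustrative, not from the study):

```python
from statistics import mean, stdev

def within_one_sd(scores):
    """Return the scores lying within one sample standard deviation of the mean,
    mirroring the participant-selection step described in the abstract."""
    m, s = mean(scores), stdev(scores)
    return [x for x in scores if m - s <= x <= m + s]
```

For example, with scores [50, 60, 70, 80, 90] the mean is 70 and the sample standard deviation is about 15.8, so only 60, 70 and 80 are retained.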

  9. Extending Quantitative Easing

    DEFF Research Database (Denmark)

    Hallett, Andrew Hughes; Fiedler, Salomon; Kooths, Stefan

    The notes in this compilation address the pros and cons associated with the extension of the ECB's quantitative easing programme of asset purchases. The notes were requested by the Committee on Economic and Monetary Affairs as an input for the February 2017 session of the Monetary Dialogue....

  10. Quantitative Moessbauer analysis

    International Nuclear Information System (INIS)

    Collins, R.L.

    1978-01-01

    The quantitative analysis of Moessbauer data, as in the measurement of the Fe³⁺/Fe²⁺ concentration, has not been possible because of the different mean square velocities ⟨x²⟩ of Moessbauer nuclei at chemically different sites. A method is now described which, based on Moessbauer data at several temperatures, permits the comparison of absorption areas at ⟨x²⟩ = 0. (Auth.)
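    One way to realize the temperature extrapolation described above is to fit ln(area) against temperature and read off the intercept. This is only an illustrative sketch of the idea, not the author's exact procedure; in particular, using T = 0 as a stand-in for ⟨x²⟩ = 0 neglects zero-point motion:

```python
import math

def extrapolate_log_area(temps, areas):
    """Ordinary least-squares fit of ln(area) versus temperature, returning
    exp(intercept) -- a crude proxy for the absorption area at vanishing
    mean-square displacement. Illustrative only, not the paper's method."""
    n = len(temps)
    ys = [math.log(a) for a in areas]
    xbar = sum(temps) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(temps, ys))
             / sum((x - xbar) ** 2 for x in temps))
    intercept = ybar - slope * xbar
    return math.exp(intercept)
```

Comparing the extrapolated areas of two sites then removes the site-dependent recoil-free-fraction bias that the abstract identifies.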

  11. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  12. Self-correcting Multigrid Solver

    International Nuclear Information System (INIS)

    Lewandowski, Jerome L.V.

    2004-01-01

    A new multigrid algorithm based on the method of self-correction for the solution of elliptic problems is described. The method exploits information contained in the residual to dynamically modify the source term (right-hand side) of the elliptic problem. It is shown that the self-correcting solver is more efficient at damping the short wavelength modes of the algebraic error than its standard equivalent. When used in conjunction with a multigrid method, the resulting solver displays an improved convergence rate with no additional computational work

  13. Brane cosmology with curvature corrections

    International Nuclear Information System (INIS)

    Kofinas, Georgios; Maartens, Roy; Papantonopoulos, Eleftherios

    2003-01-01

    We study the cosmology of the Randall-Sundrum brane-world where the Einstein-Hilbert action is modified by curvature correction terms: a four-dimensional scalar curvature from induced gravity on the brane, and a five-dimensional Gauss-Bonnet curvature term. The combined effect of these curvature corrections to the action removes the infinite-density big bang singularity, although the curvature can still diverge for some parameter values. A radiation brane undergoes accelerated expansion near the minimal scale factor, for a range of parameters. This acceleration is driven by the geometric effects, without an inflaton field or negative pressures. At late times, conventional cosmology is recovered. (author)

  14. Pixels and patterns: A satellite-based investigation of changes to urban features in the Sanya Region, Hainan Special Economic Zone, China

    Science.gov (United States)

    Millward, Andrew Allan

    Throughout most of China, and particularly in the coastal areas of its south, ecological resources and traditional culture are viewed by many to be negatively impacted by accelerating urbanization. As a result, achieving an appropriate balance between development and environmental protection has become a significant problem facing policy-makers in these urbanizing areas. The establishment of a Special Economic Zone in the Chinese Province of Hainan has made its coastal areas attractive locations for business and commerce. Development activities that support a burgeoning tourism industry, but which are damaging the environment, are now prominent components of the landscape in the Sanya Region of Hainan. In this study, patterns of urban growth in the Sanya Region of Hainan Province are investigated. Specifically, using several forms of satellite imagery, statistical tools and ancillary data, urban morphology and changes to the extent and spatial arrangement of urban features are researched and documented. A twelve-year chronology of data was collected, consisting of four dates of satellite imagery (1987, 1991, 1997, 1999) acquired by three different satellite sensors (SPOT 2 HRV, Landsat 5 TM, Landsat 7 ETM+). A method of assessing inter-temporal variance in unchanged features is developed as a surrogate for traditional evaluations of change detection that require spatially accurate and time-specific data. Results reveal that selective PCA using visible bands with the exclusion of an ocean mask yields the most interpretable components representative of landscape urbanization in the Sanya Region. The geostatistical approach of variography is employed to measure spatial dependence and to test for the presence of directional change in urban morphology across a time series of satellite images. Interpreted time-series geostatistics identify and quantify landscape structure, and changes to structure, and provide a valuable quantitative description of landscape change
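    The variography mentioned above rests on the empirical semivariogram, which measures how dissimilar pixel values become as their separation grows. A minimal one-dimensional version (names are ours; real applications work on 2-D imagery and directional lags):

```python
def semivariogram(values, lag):
    """Empirical semivariance gamma(h) for a 1-D transect of pixel values:
    the mean of 0.5 * (z(i) - z(i+h))**2 over all pairs separated by `lag`."""
    pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
    return sum(0.5 * (a - b) ** 2 for a, b in pairs) / len(pairs)
```

Plotting gamma(h) against h for successive image dates reveals changes in spatial dependence, which is how the study quantifies evolving urban structure.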

  15. Critical Quantitative Inquiry in Context

    Science.gov (United States)

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…

  16. A Generalized Correction for Attenuation.

    Science.gov (United States)

    Petersen, Anne C.; Bock, R. Darrell

    Use of the usual bivariate correction for attenuation with more than two variables presents two statistical problems. This pairwise method may produce a covariance matrix that is not positive semi-definite, and the bivariate procedure does not consider the possible influence of correlated errors among the variables. The method described…
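    For reference, the usual bivariate correction for attenuation that the abstract criticizes divides the observed correlation by the geometric mean of the two reliabilities:

```python
import math

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Classical bivariate disattenuation:
    r_true = r_observed / sqrt(reliability_x * reliability_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)
```

Applied pairwise across many variables, each entry of the correlation matrix is rescaled independently, which is exactly why the resulting matrix need not remain positive semi-definite.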

  17. Entropic corrections to Newton's law

    International Nuclear Information System (INIS)

    Setare, M R; Momeni, D; Myrzakulov, R

    2012-01-01

    In this short paper, we calculate separately the generalized uncertainty principle (GUP) and self-gravitational corrections to Newton's gravitational formula. We show that for a complete description of the GUP and self-gravity effects, both the temperature and entropy must be modified. (paper)

  18. Correction of unrealizable service choreographies

    NARCIS (Netherlands)

    Mancioppi, M.

    2015-01-01

    This thesis is devoted to the detection and correction of design flaws affecting service choreographies. Service choreographies are models that specify how software services are composed in a decentralized, message-driven fashion. In particular, this work focuses on flaws that compromise the

  19. Multilingual text induced spelling correction

    NARCIS (Netherlands)

    Reynaert, M.W.C.

    2004-01-01

    We present TISC, a multilingual, language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from raw text corpora, without supervision, and contains word unigrams

  20. The correct "ball bearings" data.

    Science.gov (United States)

    Caroni, C

    2002-12-01

    The famous data on fatigue failure times of ball bearings have been quoted incorrectly from Lieblein and Zelen's original paper. The correct data include censored values, as well as non-fatigue failures that must be handled appropriately. They could be described by a mixture of Weibull distributions, corresponding to different modes of failure.
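    Handling censored values appropriately, as the abstract requires, means that censored observations contribute the survival function rather than the density to the likelihood. A sketch for a single Weibull component (a mixture model would sum weighted components; names and parametrization are ours):

```python
import math

def weibull_loglik(shape, scale, failures, censored):
    """Log-likelihood for Weibull lifetime data with right-censoring.
    Observed failure times contribute the log density; right-censored
    times contribute the log survival function -(t/scale)**shape."""
    ll = 0.0
    for t in failures:
        ll += (math.log(shape / scale)
               + (shape - 1.0) * math.log(t / scale)
               - (t / scale) ** shape)
    for t in censored:
        ll += -(t / scale) ** shape
    return ll
```

With shape = 1 this reduces to the exponential case, a convenient sanity check when maximizing over the two parameters.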

  1. Interaction and self-correction

    DEFF Research Database (Denmark)

    Satne, Glenda Lucila

    2014-01-01

    and acquisition. I then criticize two models that have been dominant in thinking about conceptual competence, the interpretationist and the causalist models. Both fail to meet NC, by failing to account for the abilities involved in conceptual self-correction. I then offer an alternative account of self...

  2. CORRECTIVE ACTION IN CAR MANUFACTURING

    Directory of Open Access Journals (Sweden)

    H. Rohne

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: In this paper the important issues involved in successfully implementing corrective action systems in quality management are discussed. The work is based on experience in implementing and operating such a system in an automotive manufacturing enterprise in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem solving methodology is essential to resolve the quality related problems identified by the system. In the following paragraphs the general corrective action process is discussed and the elements of a corrective action system are identified, followed by a more detailed discussion of each element. Finally specific results from the application are discussed.

    AFRIKAANSE OPSOMMING (translated): Important considerations in the successful implementation of corrective action systems in quality management are discussed in this article. The work is based on experience in implementing and operating such a system at a motor manufacturer in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is needed to address the quality-related problems identified by the system. In the following paragraphs the general corrective action process is discussed and the elements of the corrective action system are identified. Each element is then discussed in more detail. Finally, specific results of the application are briefly covered.

  3. Rank error-correcting pairs

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto; Pellikaan, Ruud

    2017-01-01

    Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise ...

  4. Analysis of dynamical corrections to baryon magnetic moments

    International Nuclear Information System (INIS)

    Ha, Phuoc; Durand, Loyal

    2003-01-01

    We present and analyze QCD corrections to the baryon magnetic moments in terms of the one-, two-, and three-body operators which appear in the effective field theory developed in our recent papers. The main corrections are extended Thomas-type corrections associated with the confining interactions in the baryon. We investigate the contributions of low-lying angular excitations to the baryon magnetic moments quantitatively and show that they are completely negligible. When the QCD corrections are combined with the nonquark model contributions of the meson loops, we obtain a model which describes the baryon magnetic moments within a mean deviation of 0.04 μN. The nontrivial interplay of the two types of corrections to the quark-model magnetic moments is analyzed in detail, and explains why the quark model is so successful. In the course of these calculations, we parametrize the general spin structure of the j = (1/2)⁺ baryon wave functions in a form which clearly displays the symmetry properties and the internal angular momentum content of the wave functions, and allows us to use spin-trace methods to calculate the many spin matrix elements which appear in the expressions for the baryon magnetic moments. This representation may be useful elsewhere

  5. Corrective measures evaluation report for technical area-v groundwater.

    Energy Technology Data Exchange (ETDEWEB)

    Witt, Johnathan L (North Wind, Inc., Idaho Falls, ID); Orr, Brennon R. (North Wind, Inc., Idaho Falls, ID); Dettmers, Dana L. (North Wind, Inc., Idaho Falls, ID); Hall, Kevin A. (North Wind, Inc., Idaho Falls, ID); Howard, Hope (North Wind, Inc., Idaho Falls, ID)

    2005-07-01

    This Corrective Measures Evaluation Report was prepared as directed by the Compliance Order on Consent issued by the New Mexico Environment Department to document the process of selecting the preferred remedial alternative for contaminated groundwater at Technical Area V. Supporting information includes background information about the site conditions and potential receptors and an overview of work performed during the Corrective Measures Evaluation. Evaluation of remedial alternatives included identification and description of four remedial alternatives, an overview of the evaluation criteria and approach, qualitative and quantitative evaluation of remedial alternatives, and selection of the preferred remedial alternative. As a result of the Corrective Measures Evaluation, it was determined that monitored natural attenuation of all contaminants of concern (trichloroethene, tetrachloroethene, and nitrate) was the preferred remedial alternative for implementation as the corrective measure to remediate contaminated groundwater at Technical Area V of Sandia National Laboratories/New Mexico. Finally, design criteria to meet cleanup goals and objectives and the corrective measures implementation schedule for the preferred remedial alternative are presented.

  6. Applied quantitative finance

    CERN Document Server

    Chen, Cathy; Overbeck, Ludger

    2017-01-01

    This volume provides practical solutions and introduces recent theoretical developments in risk management, pricing of credit derivatives, quantification of volatility and copula modeling. This third edition is devoted to modern risk analysis based on quantitative methods and textual analytics to meet the current challenges in banking and finance. It includes 14 new contributions and presents a comprehensive, state-of-the-art treatment of cutting-edge methods and topics, such as collateralized debt obligations, the high-frequency analysis of market liquidity, and realized volatility. The book is divided into three parts: Part 1 revisits important market risk issues, while Part 2 introduces novel concepts in credit risk and its management along with updated quantitative methods. The third part discusses the dynamics of risk management and includes risk analysis of energy markets and for cryptocurrencies. Digital assets, such as blockchain-based currencies, have become popular but are theoretically challenging...

  7. Quantitative skeletal scintiscanning

    International Nuclear Information System (INIS)

    Haushofer, R.

    1982-01-01

    330 patients were examined by skeletal scintiscanning with ⁹⁹ᵐTc pyrophosphate and ⁹⁹ᵐTc methylene diphosphonate in the years between 1977 and 1979. Course control examinations were carried out in 12 patients. The collective of patients presented with primary skeletal tumours, metastases, inflammatory and degenerative skeletal diseases. Bone scintiscanning combined with the ''region of interest'' technique was found to be an objective and reproducible technique for quantitative measurement of skeletal radioactivity concentrations. The validity of nuclear skeletal examinations can thus be enhanced as far as diagnosis, course control, and differential diagnosis are concerned. Quantitative skeletal scintiscanning by means of the ''region of interest'' technique has opened up a new era in skeletal diagnosis by nuclear methods. (orig./MG) [de

  8. Quantitative FDG in depression

    Energy Technology Data Exchange (ETDEWEB)

    Chua, P.; O'Keefe, G.J.; Egan, G.F.; Berlangieri, S.U.; Tochon-Danguy, H.J.; Mckay, W.J.; Morris, P.L.P.; Burrows, G.D. [Austin Hospital, Melbourne, VIC (Australia). Dept of Psychiatry and Centre for PET

    1998-03-01

    Full text: Studies of regional cerebral glucose metabolism (rCMRGlu) using positron emission tomography (PET) in patients with affective disorders have consistently demonstrated reduced metabolism in the frontal regions. Different quantitative and semi-quantitative rCMRGlu region-of-interest (ROI) comparisons, e.g. absolute metabolic rates, ratios of dorsolateral prefrontal cortex (DLPFC) to ipsilateral hemisphere cortex, have been reported. These studies suffered from the use of a standard brain atlas to define ROIs, whereas in this case study the individual's magnetic resonance imaging (MRI) scan was registered with the PET scan to enable accurate neuroanatomical ROI definition for the subject. The patient is a 36-year-old female with a six-week history of major depression (HAM-D = 34, MMSE = 28). A quantitative FDG PET study and an MRI scan were performed. Six MRI-guided ROIs (DLPFC, PFC, whole hemisphere) were defined. The average rCMRGlu in the DLPFC (left = 28.8 ± 5.8 μmol/100 g/min; right = 25.6 ± 7.0 μmol/100 g/min) was slightly reduced compared to the ipsilateral hemispheric rate (left = 30.4 ± 6.8 μmol/100 g/min; right = 29.5 ± 7.2 μmol/100 g/min). The ratios of DLPFC to ipsilateral hemispheric rate were close to unity (left = 0.95 ± 0.29; right = 0.87 ± 0.32). The right-to-left DLPFC ratio did not show any significant asymmetry (0.91 ± 0.30). These results do not correlate with earlier published results reporting decreased left DLPFC rates compared to right DLPFC, although our results will need to be replicated with a group of depressed patients. Registration of PET and MRI studies is necessary in ROI-based quantitative FDG PET studies to allow for the normal anatomical variation among individuals, and thus is essential for accurate comparison of rCMRGlu between individuals.

  9. Quantitative FDG in depression

    International Nuclear Information System (INIS)

    Chua, P.; O'Keefe, G.J.; Egan, G.F.; Berlangieri, S.U.; Tochon-Danguy, H.J.; Mckay, W.J.; Morris, P.L.P.; Burrows, G.D.

    1998-01-01

    Full text: Studies of regional cerebral glucose metabolism (rCMRGlu) using positron emission tomography (PET) in patients with affective disorders have consistently demonstrated reduced metabolism in the frontal regions. Different quantitative and semi-quantitative rCMRGlu region-of-interest (ROI) comparisons, e.g. absolute metabolic rates, ratios of dorsolateral prefrontal cortex (DLPFC) to ipsilateral hemisphere cortex, have been reported. These studies suffered from the use of a standard brain atlas to define ROIs, whereas in this case study the individual's magnetic resonance imaging (MRI) scan was registered with the PET scan to enable accurate neuroanatomical ROI definition for the subject. The patient is a 36-year-old female with a six-week history of major depression (HAM-D = 34, MMSE = 28). A quantitative FDG PET study and an MRI scan were performed. Six MRI-guided ROIs (DLPFC, PFC, whole hemisphere) were defined. The average rCMRGlu in the DLPFC (left = 28.8 ± 5.8 μmol/100 g/min; right = 25.6 ± 7.0 μmol/100 g/min) was slightly reduced compared to the ipsilateral hemispheric rate (left = 30.4 ± 6.8 μmol/100 g/min; right = 29.5 ± 7.2 μmol/100 g/min). The ratios of DLPFC to ipsilateral hemispheric rate were close to unity (left = 0.95 ± 0.29; right = 0.87 ± 0.32). The right-to-left DLPFC ratio did not show any significant asymmetry (0.91 ± 0.30). These results do not correlate with earlier published results reporting decreased left DLPFC rates compared to right DLPFC, although our results will need to be replicated with a group of depressed patients. Registration of PET and MRI studies is necessary in ROI-based quantitative FDG PET studies to allow for the normal anatomical variation among individuals, and thus is essential for accurate comparison of rCMRGlu between individuals

  10. Quantitative traits and diversification.

    Science.gov (United States)

    FitzJohn, Richard G

    2010-12-01

    Quantitative traits have long been hypothesized to affect speciation and extinction rates. For example, smaller body size or increased specialization may be associated with increased rates of diversification. Here, I present a phylogenetic likelihood-based method (quantitative state speciation and extinction [QuaSSE]) that can be used to test such hypotheses using extant character distributions. This approach assumes that diversification follows a birth-death process where speciation and extinction rates may vary with one or more traits that evolve under a diffusion model. Speciation and extinction rates may be arbitrary functions of the character state, allowing much flexibility in testing models of trait-dependent diversification. I test the approach using simulated phylogenies and show that a known relationship between speciation and a quantitative character could be recovered in up to 80% of the cases on large trees (500 species). Consistent with other approaches, detecting shifts in diversification due to differences in extinction rates was harder than when due to differences in speciation rates. Finally, I demonstrate the application of QuaSSE to investigate the correlation between body size and diversification in primates, concluding that clade-specific differences in diversification may be more important than size-dependent diversification in shaping the patterns of diversity within this group.

  11. Tracer kinetic modelling of receptor data with mathematical metabolite correction

    International Nuclear Information System (INIS)

    Burger, C.; Buck, A.

    1996-01-01

    Quantitation of metabolic processes with dynamic positron emission tomography (PET) and tracer kinetic modelling relies on the time course of authentic ligand in plasma, i.e. the input curve. The determination of the latter often requires the measurement of labelled metabolites, a laborious procedure. In this study we examined the possibility of mathematical metabolite correction, which might obviate the need for actual metabolite measurements. Mathematical metabolite correction was implemented by estimating the input curve together with kinetic tissue parameters. The general feasibility of the approach was evaluated in a Monte Carlo simulation using a two-tissue compartment model. The method was then applied to a series of five human carbon-11 iomazenil PET studies. The measured cerebral tissue time-activity curves were fitted with a single-tissue compartment model. For mathematical metabolite correction the input curve following the peak was approximated by a sum of three decaying exponentials, the amplitudes and characteristic half-times of which were then estimated by the fitting routine. In the simulation study the parameters used to generate synthetic tissue time-activity curves (K1-k4) were refitted with reasonable identifiability when using mathematical metabolite correction. Absolute quantitation of distribution volumes was found to be possible provided that the metabolite and the kinetic models are adequate. If the kinetic model is oversimplified, the linearity of the correlation between true and estimated distribution volumes is still maintained, although the linear regression becomes dependent on the input curve. These simulation results were confirmed when applying mathematical metabolite correction to the ¹¹C-iomazenil studies. Estimates of the distribution volume calculated with a measured input curve were linearly related to the estimates calculated using mathematical metabolite correction, with correlation coefficients >0.990. (orig./MG)
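    The post-peak input-curve model described above, a sum of three decaying exponentials parametrized by amplitudes and half-times, can be sketched directly. The parameter values here are illustrative placeholders, not fitted values from the study:

```python
import math

def input_curve(t, amplitudes, half_times):
    """Post-peak plasma input curve modelled as a sum of decaying
    exponentials, each specified by an amplitude and a half-time.
    With three (amplitude, half-time) pairs this is the model form
    described in the abstract."""
    return sum(a * math.exp(-math.log(2.0) * t / h)
               for a, h in zip(amplitudes, half_times))
```

In the fitting routine these six parameters would be estimated jointly with the kinetic tissue parameters, which is what makes the metabolite measurement dispensable.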

  12. Synchronous atmospheric radiation correction of GF-2 satellite multispectral image

    Science.gov (United States)

    Bian, Fuqiang; Fan, Dongdong; Zhang, Yan; Wang, Dandan

    2018-02-01

    GF-2 remote sensing products have been widely used in many fields for their high-quality information, which provides technical support for macro-level decisions. Atmospheric correction is a necessary part of data preprocessing for quantitative high-resolution remote sensing: it eliminates the signal interference along the radiation path caused by atmospheric scattering and absorption, reducing the apparent reflectance to the real reflectance of the surface targets. Addressing the lack, in current research, of atmospheric data that are synchronous with and spatially matched to the surface observation image, this research uses MODIS Level 1B synchronous data to characterize the synchronous atmospheric conditions, implements the aerosol retrieval and atmospheric correction process in software, and generates a lookup table for the remote sensing image based on the 6S radiative transfer model (Second Simulation of a Satellite Signal in the Solar Spectrum) to correct the atmospheric effects in multispectral imagery from the GF-2 satellite PMS-1 payload. Based on the correction results, this paper analyzes the pixel histograms of the reflectance in the four spectral bands of PMS-1 and evaluates the correction results for the different bands. A comparison experiment was then conducted on the same GF-2 image using QUAC. For the different targets, the average NDVI was computed from the two sets of results and compared, and the degree of influence of adopting synchronous atmospheric data was discussed. The study shows that results based on synchronous atmospheric parameters significantly improve the quantitative application of GF-2 remote sensing data.
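    The NDVI comparison described above uses the standard normalized difference index. A minimal per-pixel version on surface reflectances (band assignment for PMS-1 is our assumption; NDVI conventionally uses the near-infrared and red bands):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel:
    NDVI = (NIR - Red) / (NIR + Red), computed on surface reflectances."""
    return (nir - red) / (nir + red)
```

Because NDVI is a ratio of corrected reflectances, systematic bias left by an inadequate atmospheric correction shifts it, which is why the study uses average NDVI per target class to compare the synchronous-data correction against QUAC.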

  13. Correcting ligands, metabolites, and pathways

    Directory of Open Access Journals (Sweden)

    Vriend Gert

    2006-11-01

    Full Text Available Abstract Background A wide range of research areas in bioinformatics, molecular biology and medicinal chemistry require precise chemical structure information about molecules and reactions, e.g. drug design, ligand docking, metabolic network reconstruction, and systems biology. Most available databases, however, treat chemical structures more as illustrations than as a data field in its own right. Lack of chemical accuracy impedes progress in the areas mentioned above. We present a database of metabolites called BioMeta that augments the existing pathway databases by explicitly assessing the validity, correctness, and completeness of chemical structure and reaction information. Description The main bulk of the data in BioMeta were obtained from the KEGG Ligand database. We developed a tool for chemical structure validation which assesses the chemical validity and stereochemical completeness of a molecule description. The validation tool was used to examine the compounds in BioMeta, showing that a relatively small number of compounds had an incorrect constitution (connectivity only, not considering stereochemistry) and that a considerable number (about one third) had incomplete or even incorrect stereochemistry. We made a large effort to correct the errors and to complete the structural descriptions. A total of 1468 structures were corrected and/or completed. We also established the reaction balance of the reactions in BioMeta and corrected 55% of the unbalanced (stoichiometrically incorrect) reactions in an automatic procedure. The BioMeta database was implemented in PostgreSQL and provided with a web-based interface. Conclusion We demonstrate that the validation of metabolite structures and reactions is a feasible and worthwhile undertaking, and that the validation results can be used to trigger corrections and improvements to BioMeta, our metabolite database. BioMeta provides some tools for rational drug design, reaction searches, and
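The reaction-balance check described above can be sketched by counting atoms on each side of a reaction. This toy parser handles only plain formulas (no parentheses, hydrates or charges), unlike the BioMeta tool:

```python
import re
from collections import Counter

def atom_counts(formula):
    """Count atoms in a simple molecular formula such as 'C6H12O6'."""
    counts = Counter()
    for elem, num in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        counts[elem] += int(num) if num else 1
    return counts

def is_balanced(substrates, products):
    """A reaction is stoichiometrically balanced when every element
    occurs equally often on both sides."""
    left, right = Counter(), Counter()
    for f in substrates:
        left += atom_counts(f)
    for f in products:
        right += atom_counts(f)
    return left == right

# Glucose + ATP -> glucose-6-phosphate + ADP (neutral formulas)
print(is_balanced(['C6H12O6', 'C10H16N5O13P3'],
                  ['C6H13O9P', 'C10H15N5O10P2']))   # True
```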

  14. Quantitative image fusion in infrared radiometry

    Science.gov (United States)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed at achieving quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performance of the different techniques is evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
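The idea of estimating a photoquantity from multi-exposure data by per-pixel weighted least squares can be sketched as below; the linear model x_i = q·t_i and the triangular weight function are simplifying assumptions, not the paper's exact scheme:

```python
def fuse_pixel(values, times, full_scale=4095.0):
    """Per-pixel weighted least squares estimate of the photoquantity q
    from the linear model x_i = q * t_i (bias already subtracted).
    The hat-shaped weight de-emphasizes readings near the dark and
    saturation ends of the sensor range."""
    num = den = 0.0
    for x, t in zip(values, times):
        w = max(0.0, 1.0 - abs(2.0 * x / full_scale - 1.0))
        num += w * t * x
        den += w * t * t
    return num / den if den else 0.0

# Three exposures of the same scene point; counts are roughly
# proportional to exposure time (illustrative 12-bit numbers)
q = fuse_pixel(values=[400.0, 2000.0, 3900.0], times=[0.01, 0.05, 0.1])
```

The mid-exposure reading dominates the estimate because it sits near mid-scale, while the nearly saturated long exposure contributes little.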

  15. The method of quantitative X-ray microanalysis of fine inclusions in copper

    International Nuclear Information System (INIS)

    Morawiec, H.; Kubica, L.; Piszczek, J.

    1978-01-01

    The method of correction for the matrix effect in quantitative X-ray microanalysis is presented. The application of the method is discussed using the example of quantitative analysis of fine inclusions of Cu2S and Cu2O in copper. (author)

  16. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization.

    Science.gov (United States)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-09-01

    In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Biases in the lower cerebellum and pons in HRRT brain images have been reported. The two main sources of the problem with MAP-TR are poor bone/soft tissue segmentation below the brain and overestimation of bone mass in the skull. We developed the new transmission processing with total variation (TXTV) method, which introduces scatter correction in the μ-map reconstruction and total variation filtering in the transmission processing. Comparing MAP-TR and the new TXTV with gold-standard CT-based attenuation correction, we found that TXTV has less bias than MAP-TR. We also compared images acquired at the HRRT scanner using TXTV to GE Advance scanner images and found high quantitative correspondence. TXTV has been used to reconstruct more than 4000 HRRT scans at seven different sites with no reports of biases. TXTV-based reconstruction is recommended for human brain scans on the HRRT.
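As a toy illustration of the total variation filtering ingredient, a 1-D TV denoiser via gradient descent on an epsilon-smoothed TV penalty is sketched below; the HRRT pipeline operates on transmission sinograms/images with its own solver, so this is only a conceptual stand-in:

```python
import math
import random

def tv(x):
    """Total variation of a 1-D signal."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def tv_denoise(y, lam=0.3, step=0.1, iters=300, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(dx^2 + eps),
    a smoothed TV-regularized data-fidelity objective."""
    x = list(y)
    for _ in range(iters):
        g = [xi - yi for xi, yi in zip(x, y)]       # fidelity gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            s = d / math.sqrt(d * d + eps)          # d/dd sqrt(d^2 + eps)
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

random.seed(0)
# Noisy step signal: TV filtering suppresses noise but preserves the edge
noisy = [(1.0 if 20 <= i < 40 else 0.0) + random.gauss(0.0, 0.1)
         for i in range(60)]
den = tv_denoise(noisy)
```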

  17. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2-compliant codec uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  18. Personalized recommendation with corrected similarity

    International Nuclear Information System (INIS)

    Zhu, Xuzhen; Tian, Hui; Cai, Shimin

    2014-01-01

    Personalized recommendation has attracted a surge of interdisciplinary research. In particular, similarity-based methods have achieved great success in real recommendation systems. However, the computed similarities are often overestimated or underestimated, largely because of the defective strategy of unidirectional similarity estimation. In this paper, we address this drawback by leveraging the mutual correction of forward and backward similarity estimations, and propose a new personalized recommendation index, i.e., corrected similarity based inference (CSI). Extensive experiments on four benchmark datasets show a clear improvement of CSI over mainstream baselines, and a detailed analysis is presented to unveil and understand the origin of the difference between CSI and the mainstream indices. (paper)
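One plausible sketch of mutually correcting forward and backward similarity estimates is given below; the paper's exact CSI formula is not reproduced here, and the geometric-mean combination is an illustrative assumption:

```python
import math

def forward_sim(a, b):
    """Unidirectional similarity: overlap of user a's item set with b's,
    normalized by a's set size only (the source of the bias)."""
    return len(a & b) / len(a) if a else 0.0

def corrected_sim(a, b):
    """Mutual correction: combine the forward and backward estimates
    (geometric mean here; the actual CSI combination may differ)."""
    return math.sqrt(forward_sim(a, b) * forward_sim(b, a))

alice = {'i1', 'i2', 'i3', 'i4'}
bob = {'i3', 'i4'}
print(forward_sim(alice, bob))    # 0.5  : forward view
print(forward_sim(bob, alice))    # 1.0  : backward view overestimates
print(corrected_sim(alice, bob))  # ~0.707 : mutually corrected
```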

  19. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2-compliant codec uses data hiding to transmit error correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  20. Corrective action program reengineering project

    International Nuclear Information System (INIS)

    Vernick, H.R.

    1996-01-01

    A series of similar refueling floor events that occurred during the early 1990s prompted Susquehanna steam electric station (SSES) management to launch a broad-based review of how the Nuclear Department conducts business. This was accomplished through the formation of several improvement initiative teams. Clearly, one of the key areas that benefited from this management initiative was the corrective action program. The corrective action improvement team was charged with taking a comprehensive look at how the Nuclear Department identified and resolved problems. The 10-member team included management and bargaining unit personnel as well as an external management consultant. This paper provides a summary of this self-assessment initiative, including a discussion of the issues identified, opportunities for improvement, and subsequent completed or planned actions

  1. Corrected body surface potential mapping.

    Science.gov (United States)

    Krenzke, Gerhard; Kindt, Carsten; Hetzer, Roland

    2007-02-01

    In the method for body surface potential mapping described here, the influence of thorax shape on measured ECG values is corrected. The distances of the ECG electrodes from the electrical heart midpoint are determined using a special device for ECG recording. These distances are used to correct the ECG values as if they had been measured on the surface of a sphere with a radius of 10 cm with its midpoint localized at the electrical heart midpoint. The equipotential lines of the electrical heart field are represented on the virtual surface of such a sphere. It is demonstrated that the character of a dipole field is better represented if the influence of the thorax shape is reduced. The site of the virtual reference electrode is also important for the dipole character of the representation of the electrical heart field.

  2. Interaction and Self-Correction

    Directory of Open Access Journals (Sweden)

    Glenda Lucila Satne

    2014-07-01

    Full Text Available In this paper I address the question of how to account for the normative dimension involved in conceptual competence within a naturalistic framework. First, I present what I call the Naturalist Challenge (NC), referring to both the phylogenetic and ontogenetic dimensions of conceptual possession and acquisition. I then criticize two models that have been dominant in thinking about conceptual competence: the interpretationist and the causalist models. Both fail to meet the NC, by failing to account for the abilities involved in conceptual self-correction. I then offer an alternative account of self-correction, which I develop with the help of the interactionist theory of mutual understanding arising from recent developments in Phenomenology and Developmental Psychology.

  3. EPS Young Physicist Prize - CORRECTION

    CERN Multimedia

    2009-01-01

    The original text for the article 'Prizes aplenty in Krakow' in Bulletin 30-31 assigned the award of the EPS HEPP Young Physicist Prize to Maurizio Pierini. In fact he shared the prize with Niki Saoulidou of Fermilab, who was rewarded for her contribution to neutrino physics, as the article now correctly indicates. We apologise for not having named Niki Saoulidou in the original article.

  4. Publisher Correction: Eternal blood vessels

    Science.gov (United States)

    Hindson, Jordan

    2018-05-01

    This article was originally published with an incorrect reference for the original article. The reference has been amended. Please see the correct reference below. Qiu, Y. et al. Microvasculature-on-a-chip for the long-term study of endothelial barrier dysfunction and microvascular obstruction in disease. Nat. Biomed. Eng. https://doi.org/10.1038/s41551-018-0224-z (2018)

  5. A new correction method for determination on carbohydrates in lignocellulosic biomass.

    Science.gov (United States)

    Li, Hong-Qiang; Xu, Jian

    2013-06-01

    The accurate determination of the key components in lignocellulosic biomass is a prerequisite for pretreatment and bioconversion. Currently, the widely used 72% H2SO4 two-step hydrolysis quantitative saccharification (QS) procedure uses loss coefficients derived from monosaccharide standards to correct for monosaccharide loss in the secondary hydrolysis (SH) step of QS, which may result in excessive correction. By studying the quantitative relationships between glucose and xylose losses and HMF and furfural production under the respective hydrolysis conditions, a simple correction for the monosaccharide loss in both the primary hydrolysis (PH) and SH steps was established, using HMF and furfural as calibrators. The method was applied to component determination in corn stover, Miscanthus and cotton stalk (raw and pretreated materials) and compared with the NREL method. It is shown that this method avoids excessive correction for samples with high carbohydrate contents. Copyright © 2013 Elsevier Ltd. All rights reserved.
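The stoichiometric back-calculation underlying such a calibration can be sketched as follows: one mole of HMF derives from one mole of hexose and one mole of furfural from one mole of pentose (each losing three waters), so degraded sugar can be added back via molar-mass ratios. The calibration actually fitted in the study may include further empirical factors; the concentrations below are illustrative:

```python
# Molar masses (g/mol)
M_GLUCOSE, M_HMF = 180.16, 126.11       # hexose -> HMF loses 3 H2O
M_XYLOSE, M_FURFURAL = 150.13, 96.08    # pentose -> furfural loses 3 H2O

def corrected_glucose(glucose, hmf):
    """Add back glucose degraded to HMF during acid hydrolysis (g/L)."""
    return glucose + hmf * M_GLUCOSE / M_HMF

def corrected_xylose(xylose, furfural):
    """Add back xylose degraded to furfural during acid hydrolysis (g/L)."""
    return xylose + furfural * M_XYLOSE / M_FURFURAL

# Illustrative HPLC readings (g/L)
g = corrected_glucose(glucose=12.0, hmf=0.30)    # ~12.43 g/L
x = corrected_xylose(xylose=5.0, furfural=0.20)  # ~5.31 g/L
```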

  6. Including gauge corrections to thermal leptogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Huetig, Janine

    2013-05-17

. Furthermore, we have computed the Majorana neutrino production rate itself in chapter 6 to test our numerical procedure. In this context we have calculated the tree-level as well as the gauge-corrected result for the Majorana neutrino production rate. Finally, in chapter 7, we have implemented the Majorana neutrino ladder rung diagram into our setup for leptogenesis: as a first step, we have collected all gauge-corrected diagrams up to three-loop order for the asymmetry-causing two-loop diagrams. However, the results of chapter 5 showed that it is not sufficient to include diagrams only up to the three-loop level. Owing to the necessity of resumming all n-loop diagrams, we have constructed a cylindrical diagram that fulfils this condition. This diagram is the link between the Majorana neutrino ladder rung diagram calculated before on the one hand and the lepton asymmetry on the other. We have therefore been able to derive a complete expression for the integrated lepton number matrix including all leading-order corrections. The numerical analysis of this lepton number matrix requires great computational effort, since for the resulting eight-dimensional integral two ordinary differential equations have to be solved at each point the routine evaluates. Thus the result remains as yet inaccessible. Research perspectives: summarising, this thesis provides the basis for a systematic inclusion of gauge interactions in thermal leptogenesis scenarios. As a next step, one should evaluate the expression for the integrated lepton number numerically to obtain a value that can be compared with earlier results, such as the solutions of the Boltzmann equations and the Kadanoff-Baym ansatz with the implemented Standard Model widths. This numerical result would be the first quantitative number that contains leading-order corrections due to all interactions of the Majorana neutrino with the Standard Model particles.
Further corrections by means of including washout effects

  7. Including gauge corrections to thermal leptogenesis

    International Nuclear Information System (INIS)

    Huetig, Janine

    2013-01-01

. Furthermore, we have computed the Majorana neutrino production rate itself in chapter 6 to test our numerical procedure. In this context we have calculated the tree-level as well as the gauge-corrected result for the Majorana neutrino production rate. Finally, in chapter 7, we have implemented the Majorana neutrino ladder rung diagram into our setup for leptogenesis: as a first step, we have collected all gauge-corrected diagrams up to three-loop order for the asymmetry-causing two-loop diagrams. However, the results of chapter 5 showed that it is not sufficient to include diagrams only up to the three-loop level. Owing to the necessity of resumming all n-loop diagrams, we have constructed a cylindrical diagram that fulfils this condition. This diagram is the link between the Majorana neutrino ladder rung diagram calculated before on the one hand and the lepton asymmetry on the other. We have therefore been able to derive a complete expression for the integrated lepton number matrix including all leading-order corrections. The numerical analysis of this lepton number matrix requires great computational effort, since for the resulting eight-dimensional integral two ordinary differential equations have to be solved at each point the routine evaluates. Thus the result remains as yet inaccessible. Research perspectives: summarising, this thesis provides the basis for a systematic inclusion of gauge interactions in thermal leptogenesis scenarios. As a next step, one should evaluate the expression for the integrated lepton number numerically to obtain a value that can be compared with earlier results, such as the solutions of the Boltzmann equations and the Kadanoff-Baym ansatz with the implemented Standard Model widths. This numerical result would be the first quantitative number that contains leading-order corrections due to all interactions of the Majorana neutrino with the Standard Model particles.
Further corrections by means of including washout effects

  8. An overview of correctional psychiatry.

    Science.gov (United States)

    Metzner, Jeffrey; Dvoskin, Joel

    2006-09-01

    Supermax facilities may be an unfortunate and unpleasant necessity in modern corrections. Because of the serious dangers posed by prison gangs, they are unlikely to disappear completely from the correctional landscape any time soon. But such units should be carefully reserved for those inmates who pose the most serious danger to the prison environment. Further, the constitutional duty to provide medical and mental health care does not end at the supermax door. There is a great deal of common ground between the opponents of such environments and those who view them as a necessity. No one should want these expensive beds to be used for people who could be more therapeutically and safely managed in mental health treatment environments. No one should want people with serious mental illnesses to be punished for their symptoms. Finally, no one wants these units to make people more, instead of less, dangerous. It is in everyone's interests to learn as much as possible about the potential of these units for good and for harm. Corrections is a profession, and professions base their practices on data. If we are to avoid the most egregious and harmful effects of supermax confinement, we need to understand them far better than we currently do. Though there is a role for advocacy from those supporting or opposed to such environments, there is also a need for objective, scientifically rigorous study of these units and the people who live there.

  9. Invitation to a forum: architecting operational `next generation' earth monitoring satellites based on best modeling, existing sensor capabilities, with constellation efficiencies to secure trusted datasets for the next 20 years

    Science.gov (United States)

    Helmuth, Douglas B.; Bell, Raymond M.; Grant, David A.; Lentz, Christopher A.

    2012-09-01

    Architecting the operational Next Generation of earth monitoring satellites based on matured climate modeling, reuse of existing sensor and satellite capabilities, attention to affordability, and evolutionary improvements integrated with constellation efficiencies becomes our collective goal for an open architectural design forum. Understanding the earth's climate and collecting the requisite signatures over the next 30 years is a mandate shared by many of the world's governments. But there remains a daunting challenge in bridging scientific missions to 'operational' systems that truly support the demands of decision makers, scientific investigators and global users' requirements for trusted data. In this paper we suggest an architectural structure that takes advantage of current earth modeling examples, including cross-model verification and a first-order set of critical climate parameters and metrics, which in turn are matched with existing space-borne collection capabilities and sensors. The tools used and the frameworks offered are designed to allow collaborative overlays by other stakeholders nominating different critical parameters and their own threaded connections to existing international collection experience. These aggregate design suggestions will be held up to group review and prioritized as potential constellation solutions, including incremental and spiral developments, with cost benefits and organizational opportunities. This Part IV effort is focused on being an inclusive 'Next Gen Constellation' design discussion and is the natural extension of earlier papers.

  10. Total binding energy of heavy positive ions including density treatment of Darwin and Breit corrections

    International Nuclear Information System (INIS)

    Hill, S.H.; Grout, P.J.; March, N.H.

    1987-01-01

    Previous work on the relativistic Thomas-Fermi treatment of total energies of neutral atoms is first generalised to heavy positive ions. To facilitate quantitative contact with the numerical predictions of Dirac-Fock theory, Darwin and Breit corrections are expressed in terms of electron density, and computed using input again from relativistic Thomas-Fermi theory. These corrections significantly improve the agreement between the two seemingly very different theories. (author)

  11. Quantitative ion implantation

    International Nuclear Information System (INIS)

    Gries, W.H.

    1976-06-01

    This is a report of the study of the implantation of heavy ions at medium keV-energies into electrically conducting mono-elemental solids, at ion doses too small to cause significant loss of the implanted ions by resputtering. The study has been undertaken to investigate the possibility of accurate portioning of matter in submicrogram quantities, with some specific applications in mind. The problem is extensively investigated both on a theoretical level and in practice. A mathematical model is developed for calculating the loss of implanted ions by resputtering as a function of the implanted ion dose and the sputtering yield. Numerical data are produced therefrom which permit a good order-of-magnitude estimate of the loss for any ion/solid combination in which the ions are heavier than the solid atoms, and for any ion energy from 10 to 300 keV. The implanted ion dose is measured by integration of the ion beam current, and equipment and techniques are described which make possible the accurate integration of an ion current in an electromagnetic isotope separator. The methods are applied to two sample cases, one being a stable isotope, the other a radioisotope. In both cases independent methods are used to show that the implantation is indeed quantitative, as predicted. At the same time the sample cases are used to demonstrate two possible applications for quantitative ion implantation, viz. firstly for the manufacture of calibration standards for instrumental micromethods of elemental trace analysis in metals, and secondly for the determination of the half-lives of long-lived radioisotopes by a specific activity method. It is concluded that the present study has advanced quantitative ion implantation to the state where it can be successfully applied to the solution of problems in other fields

  12. Quantitative cardiac computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Thelen, M.; Dueber, C.; Wolff, P.; Erbel, R.; Hoffmann, T.

    1985-06-01

    The scope and limitations of quantitative cardiac CT have been evaluated in a series of experimental and clinical studies. The left ventricular muscle mass was estimated by computed tomography in 19 dogs (using volumetric methods, measurements in two axes and planes and reference volume). There was good correlation with anatomical findings. The enddiastolic volume of the left ventricle was estimated in 22 patients with cardiomyopathies; using angiography as a reference, CT led to systematic under-estimation. It is also shown that ECG-triggered magnetic resonance tomography results in improved visualisation and may be expected to improve measurements of cardiac morphology.

  13. F# for quantitative finance

    CERN Document Server

    Astborg, Johan

    2013-01-01

    To develop your confidence in F#, this tutorial will first introduce you to simpler tasks such as curve fitting. You will then advance to more complex tasks such as implementing algorithms for trading semi-automation in a practical scenario-based format. If you are a data analyst or a practitioner in quantitative finance, economics, or mathematics and wish to learn how to use F# as a functional programming language, this book is for you. You should have a basic conceptual understanding of financial concepts and models. Elementary knowledge of the .NET framework would also be helpful.

  14. Quantitative performance monitoring

    International Nuclear Information System (INIS)

    Heller, A.S.

    1987-01-01

    In the recently published update of NUREG/CR 3883, it was shown that Japanese plants of size and design similar to those in the US have significantly fewer trips in a given year of operation. One way to reduce such imbalance is the efficient use of available plant data. Since plant data are recorded and monitored continuously for management feedback and timely resolution of problems, this data should be actively used to increase the efficiency of operations and, ultimately, for a reduction of plant trips in power plants. A great deal of information is lost, however, if the analytical tools available for the data evaluation are misapplied or not adopted at all. This paper deals with a program developed to use quantitative techniques to monitor personnel performance in an operating power plant. Visual comparisons of ongoing performance with predetermined quantitative performance goals are made. A continuous feedback is provided to management for early detection of adverse trends and timely resolution of problems. Ultimately, costs are reduced through effective resource management and timely decision making

  15. Quantitative clinical radiobiology

    International Nuclear Information System (INIS)

    Bentzen, S.M.

    1993-01-01

    Based on a series of recent papers, a status is given of our current ability to quantify the radiobiology of human tumors and normal tissues. Progress has been made in the methods of analysis. This includes the introduction of 'direct' (maximum likelihood) analysis, incorporation of latent time in the analyses, and statistical approaches to allow for the many factors of importance in predicting tumor-control probability or normal-tissue complications. Quantitative clinical radiobiology of normal tissues is reviewed with emphasis on fractionation sensitivity, repair kinetics, regeneration, latency, and the steepness of dose-response curves. In addition, combined-modality treatment, functional endpoints, and the search for a correlation between the occurrence of different endpoints in the same individual are discussed. For tumors, quantitative analyses of fractionation sensitivity, repair kinetics, reoxygenation, and regeneration are reviewed. Other factors influencing local control are tumor volume, histopathologic differentiation and hemoglobin concentration. Also, the steepness of the dose-response curve for tumors is discussed. Radiobiological strategies for improving radiotherapy are discussed with emphasis on non-standard fractionation and individualization of treatment schedules. (orig.)

  16. Atmospheric correction of APEX hyperspectral data

    Directory of Open Access Journals (Sweden)

    Sterckx Sindy

    2016-03-01

    Full Text Available Atmospheric correction plays a crucial role among the processing steps applied to remotely sensed hyperspectral data. Atmospheric correction comprises a group of procedures needed to remove atmospheric effects from observed spectra, i.e. the transformation from at-sensor radiances to at-surface radiances or reflectances. In this paper we present the different steps in the atmospheric correction process for APEX hyperspectral data as applied by the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO, Mol, Belgium). The MODerate resolution atmospheric TRANsmission program (MODTRAN) is used to determine the source of radiation and for applying the actual atmospheric correction. As part of the overall correction process, supporting algorithms are provided in order to derive MODTRAN configuration parameters and to account for specific effects, e.g. correction for adjacency effects, haze and shadow correction, and topographic BRDF correction. The methods and theory underlying these corrections and an example of an application are presented.

  17. Adaptive aberration correction using a triode hyperbolic electron mirror

    International Nuclear Information System (INIS)

    Fitzgerald, J.P.S.; Word, R.C.; Koenenkamp, R.

    2011-01-01

    A converging electron mirror can be used to compensate spherical and chromatic aberrations in an electron microscope. This paper presents an analytical solution for a novel triode (three-electrode) hyperbolic mirror as an improvement on the well-known diode (two-electrode) hyperbolic mirror for aberration correction. A weakness of the diode mirror is a lack of flexibility in changing the chromatic and spherical aberration coefficients independently without changes in the mirror geometry. In order to remove this limitation, a third electrode can be added. We calculate the optical properties of the resulting triode mirror analytically on the basis of a simple model field distribution. We present the optical properties (the object/image distance z0 and the coefficients of spherical and chromatic aberration, Cs and Cc) of both mirror types from an analysis of electron trajectories in the mirror field. From this analysis, we demonstrate that while the properties of both designs are similar, the additional parameters in the triode mirror improve the range of aberration that can be corrected. The triode mirror is also able to provide a dynamic adjustment range of chromatic aberration for fixed spherical aberration and focal length, or any permutation of these three parameters. While the dynamic range depends on the values of aberration correction needed, a nominal 10% tuning range is possible for most configurations, accompanied by less than 1% change in the other two properties. -- Highlights: → Electrostatic aberration correction for chromatic and spherical aberration in electron optics. → Simultaneous correction of spherical and chromatic aberrations over a wide, adjustable range. → Analytic and quantitative description of correction parameters.

  18. Misalignment corrections in optical interconnects

    Science.gov (United States)

    Song, Deqiang

    Optical interconnects are considered a promising solution for long-distance and high-bitrate data transmission, outperforming electrical interconnects in terms of loss and dispersion. Due to the bandwidth and distance advantage of optical interconnects, longer links have been implemented with optics. Recent studies show that optical interconnects have clear advantages even at very short distances, i.e. intra-system interconnects. The biggest challenge for such optical interconnects is the alignment tolerance: many free-space optical components require very precise assembly and installation, which increases the overall cost. This thesis studied the misalignment tolerance and possible alignment correction solutions for optical interconnects at the backplane or board level. First, the alignment tolerance for free-space couplers was simulated; the results indicated that the most critical alignments occur between the VCSEL, waveguide and microlens arrays. An in-situ microlens array fabrication method was designed and experimentally demonstrated, with no observable misalignment with the waveguide array. At the receiver side, conical lens arrays were proposed to replace simple microlens arrays for a larger angular alignment tolerance. Multilayer simulation models in CodeV were built to optimize the refractive index and shape profiles of the conical lens arrays. Conical lenses fabricated by micro injection molding and fiber etching were characterized. An active component, a VCSOA, was used to correct misalignment in optical connectors between the board and backplane. The alignment correction capability was characterized for both DC and AC (1 GHz) optical signals. The speed and bandwidth of the VCSOA were measured and compared with a VCSEL of the same structure. Based on the optical inverter being studied in our lab, an all-optical flip-flop was demonstrated using a pair of VCSOAs. This memory cell with random access ability can store one bit of optical signal with set or

  19. Satellite-based monitoring of cotton evapotranspiration

    Science.gov (United States)

    Dalezios, Nicolas; Dercas, Nicholas; Tarquis, Ana Maria

    2016-04-01

    Water for agricultural use represents the largest share of all water uses. Vulnerability in agriculture is influenced, among other factors, by extended periods of water shortage in regions exposed to drought. Advanced technological approaches and methodologies, including remote sensing, are increasingly employed to assess irrigation water requirements. In this paper, remote sensing techniques are integrated for the estimation and monitoring of crop evapotranspiration (ETc). The study area is Thessaly, central Greece, a drought-prone agricultural region. Cotton fields in a small agricultural sub-catchment in Thessaly are used as an experimental site. Daily meteorological data and weekly field data were recorded throughout seven growing seasons (2004-2010) for the computation of reference evapotranspiration (ETo), crop coefficient (Kc) and cotton crop ETc from conventional data. Satellite data (Landsat TM) for the corresponding period are processed to estimate the cotton crop coefficient Kc and cotton crop ETc and to delineate their spatiotemporal variability. The methodology is applied to monitoring Kc and ETc during the growing season in the selected sub-catchment. Several error statistics show very good agreement with ground-truth observations.
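    The ETc computation described above follows the standard single-crop-coefficient approach (ETc = Kc × ETo, as in FAO-56). A minimal sketch with illustrative values, not the study's data:

```python
def crop_et(kc, eto_mm_day):
    """Crop evapotranspiration ETc (mm/day): ETc = Kc * ETo (FAO-56 single Kc)."""
    return kc * eto_mm_day

kc_mid = 1.15   # illustrative mid-season Kc for cotton (within the FAO-56 tabulated range)
eto = 6.0       # illustrative reference evapotranspiration, mm/day
print(round(crop_et(kc_mid, eto), 2))  # -> 6.9
```

    In the paper, Kc is instead derived per pixel from Landsat TM imagery, which is what gives ETc its spatiotemporal variability.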

  20. Satellite-based assessment of grassland yields

    Science.gov (United States)

    Grant, K.; Siegmund, R.; Wagner, M.; Hartmann, S.

    2015-04-01

    Cutting date and frequency are important parameters determining grassland yields, in addition to the effects of weather, soil conditions, plant composition and fertilisation. Because accurate, area-wide data on grassland yields are currently not available, cutting frequency can be used to estimate them. In this project, a method is developed to detect cutting dates via surface changes in radar images. Combining this method with a grassland yield model will yield more reliable, region-wide estimates of grassland yields. For the test phase of the monitoring project, a study area southeast of Munich, Germany, was chosen for its high density of managed grassland. Grassland cuts are determined with robust amplitude change detection techniques that evaluate radar amplitude or backscatter statistics before and after the cutting event. COSMO-SkyMed and Sentinel-1A data were analysed. All detected cuts were verified against in-situ measurements recorded in a GIS database. Although the SAR systems had different acquisition geometries, the numbers of detected grassland cuts were quite similar: of 154 tested grassland plots, covering in total 436 ha, 116 and 111 cuts were detected using COSMO-SkyMed and Sentinel-1A radar data, respectively. Further improvement of the radar data processing, as well as additional analyses with larger sample numbers and wider land-surface coverage, will follow to optimise the method and to validate and generalise the results of this feasibility study. Automation of the method will then allow an area-wide, cost-efficient cutting-date detection service that improves grassland yield models.
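    The change-detection idea can be reduced to a toy threshold test on per-date mean backscatter. The threshold and series below are invented for illustration; the project's actual method evaluates robust amplitude statistics before and after the event, and real cuts may shift backscatter in either direction:

```python
def detect_cuts(sigma0_db, threshold_db=2.0):
    """Indices of acquisitions whose mean backscatter drops sharply vs. the previous date."""
    return [i for i in range(1, len(sigma0_db))
            if sigma0_db[i - 1] - sigma0_db[i] > threshold_db]

# Mean backscatter (dB) of one plot over six acquisition dates (invented values)
series = [-9.5, -9.8, -13.1, -12.9, -9.7, -13.4]
print(detect_cuts(series))  # -> [2, 5]
```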

  1. Satellite based Ocean Forecasting, the SOFT project

    Science.gov (United States)

    Stemmann, L.; Tintoré, J.; Moneris, S.

    2003-04-01

    Knowledge of future oceanic conditions would have an enormous impact on marine-related human activities. For this reason, a number of international efforts are under way to obtain reliable and manageable ocean forecasting systems. Among the possible techniques for estimating the near-future state of the ocean, an ocean forecasting system based on satellite imagery is being developed through the Satellite-based Ocean ForecasTing (SOFT) project. SOFT, established by the European Commission, develops a forecasting system for the space-time variability of the ocean from satellite data using artificial intelligence techniques. This system will be merged with numerical simulation approaches, via assimilation techniques, into a hybrid SOFT-numerical forecasting system of improved performance. The project will provide efficient forecasting of sea-surface temperature structures, currents, dynamic height, and the biological activity associated with chlorophyll fields. All these quantities could give valuable information for the planning and management of human activities in marine environments, such as navigation, fisheries, pollution control, and coastal management. A detailed identification of present and new needs, and of the potential end-users of such an operational tool, is being performed. The project will study solutions adapted to these specific needs.

  2. A satellite-based global landslide model

    Directory of Open Access Journals (Sweden)

    A. Farahmand

    2013-05-01

    Landslides are devastating phenomena that cause huge damage around the world. This paper presents a quasi-global landslide model derived using satellite precipitation data, land-use/land-cover maps, and 250 m topography information. The model is based on Support Vector Machines (SVM), a machine learning algorithm. The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) landslide inventory is used as observation and reference data. In all, 70% of the data are used for model development and training, and 30% for validation and verification. Results from 100 random subsamples of the available landslide observations show that the model predicts historical landslides reliably: the average error over 100 iterations of landslide prediction is approximately 7%, with approximately 2% false landslide events.
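    A minimal sketch of the 70/30 train/validate protocol with a linear SVM trained by hinge-loss sub-gradient descent. The synthetic 2D features stand in for the paper's precipitation, land-cover and topography predictors; the actual model, kernel and data all differ:

```python
import random

def train_svm(data, labels, lam=0.01, epochs=200, lr=0.1):
    """Linear SVM via sub-gradient descent on hinge loss + L2 penalty."""
    w, b = [0.0] * len(data[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):              # y in {-1, +1}
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:                          # inside margin: hinge gradient step
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:                                   # outside margin: only regularize
                w = [wi * (1 - lr * lam) for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

random.seed(0)
# Synthetic "landslide" (+1) vs "no landslide" (-1) samples in 2D feature space
pos = [[random.gauss(2, 0.5), random.gauss(2, 0.5)] for _ in range(50)]
neg = [[random.gauss(-2, 0.5), random.gauss(-2, 0.5)] for _ in range(50)]
X, y = pos + neg, [1] * 50 + [-1] * 50
idx = list(range(100))
random.shuffle(idx)
split = int(0.7 * len(idx))                         # 70% train / 30% validate
train, test = idx[:split], idx[split:]
w, b = train_svm([X[i] for i in train], [y[i] for i in train])
errors = sum(predict(w, b, X[i]) != y[i] for i in test)
print(f"validation error: {100 * errors / len(test):.1f}%")
```

    In practice one would use a mature SVM implementation; the point here is only the split-train-validate loop the abstract describes.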

  3. Satellite-based Tropical Cyclone Monitoring Capabilities

    Science.gov (United States)

    Hawkins, J.; Richardson, K.; Surratt, M.; Yang, S.; Lee, T. F.; Sampson, C. R.; Solbrig, J.; Kuciauskas, A. P.; Miller, S. D.; Kent, J.

    2012-12-01

    Satellite remote sensing capabilities to monitor tropical cyclone (TC) location, structure, and intensity have evolved by utilizing a combination of operational and research and development (R&D) sensors. The microwave imagers from the operational Defense Meteorological Satellite Program [Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS)] form the "base" for structure observations, owing to their ability to view through upper-level clouds, their modest-size swaths, and their ability to capture most storm structure features. The NASA TRMM microwave imager and precipitation radar continue their 15+ year missions serving the TC warning and research communities. The cessation of NASA's QuikSCAT satellite after more than a decade of service is sorely missed, but India's OceanSat-2 scatterometer now provides crucial ocean surface wind vectors in addition to the Navy's WindSat ocean surface wind vector retrievals. Another Advanced Scatterometer (ASCAT), onboard EUMETSAT's MetOp-2 satellite, is slated for launch soon. Passive microwave imagery received a much-needed boost with the launch of the French/Indian Megha-Tropiques imager in September 2011, greatly supplementing the very successful NASA TRMM pathfinder with a larger swath and more frequent temporal sampling. While initial data issues have delayed utilization, current reports indicate these data will be available in 2013. Future NASA Global Precipitation Measurement (GPM) mission sensors, starting in 2014, will provide enhanced capabilities. The inclusion of new microwave sounder data from the NPP ATMS (October 2011) will also assist in mapping TC convective structures. The National Polar-orbiting Partnership (NPP) program's VIIRS sensor includes a day/night band (DNB) with the capability to view TC cloud structure at night when sufficient lunar illumination exists. Examples highlighting this new capability will be discussed in concert with additional data fusion efforts.

  4. Digital, Satellite-Based Aeronautical Communication

    Science.gov (United States)

    Davarian, F.

    1989-01-01

    A satellite system relays communication between aircraft and ground stations. The system offers better coverage than direct air-to-ground communication, costs less, and makes new communication services possible. It carries both voice and data. Because much of the data exchanged between aircraft and ground contains safety-related information, a low probability of bit error is essential.

  5. Bias correction for magnetic resonance images via joint entropy regularization.

    Science.gov (United States)

    Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang

    2014-01-01

    Due to imperfections of the radio-frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth, biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms are proposed, namely IsoTVBiasC and AniTVBiasC. The two methods were applied to simulated images under various noise levels and degrees of bias field corruption, and were also tested on real MR data. The results show that both methods effectively remove the bias field and perform comparably to state-of-the-art methods.
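    This is not the paper's joint-entropy algorithm, but a classic homomorphic sketch of what multiplicative bias correction means: the observed image is modeled as I = B·S with B smooth, so a heavily smoothed log-image approximates log B and can be divided out. All names and data below are illustrative:

```python
import math

def box_smooth(vals, radius):
    """Simple moving-average low-pass filter (stand-in for a proper smoother)."""
    out = []
    for i in range(len(vals)):
        lo, hi = max(0, i - radius), min(len(vals), i + radius + 1)
        out.append(sum(vals[lo:hi]) / (hi - lo))
    return out

def correct_bias_1d(signal, radius=10):
    """Homomorphic correction: divide out the smooth multiplicative component."""
    log_img = [math.log(v) for v in signal]
    log_bias = box_smooth(log_img, radius)
    mean_lb = sum(log_bias) / len(log_bias)     # re-centre to preserve overall scale
    return [math.exp(li - bi + mean_lb) for li, bi in zip(log_img, log_bias)]

# Piecewise-constant "tissue" signal corrupted by a slow multiplicative ramp
true_sig = [100.0] * 20 + [200.0] * 20
bias = [1.0 + 0.01 * i for i in range(40)]
observed = [s * b for s, b in zip(true_sig, bias)]
corrected = correct_bias_1d(observed)
```

    Plain low-pass filtering blurs genuine tissue edges into the bias estimate, which is exactly why practical methods, including the TV-regularized ones above, use edge-aware priors instead.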

  6. Correcting slightly less simple movements

    Directory of Open Access Journals (Sweden)

    M.P. Aivar

    2005-01-01

    Many studies have analysed how goal-directed movements are corrected in response to changes in the properties of the target. However, those studies used only simple movements to single targets, so little is known about movement corrections in more complex situations. Evidence from studies that require movements to several targets in sequence suggests that whole sequences of movements are planned together. Planning related segments of a movement together makes it possible to optimise the whole sequence, but it means that some parts are planned quite long in advance, so it is likely that they will have to be modified. In the present study we examined how people respond to changes that occur while they are moving to the first target of a sequence. Subjects moved a stylus across a digitising tablet, from a specified starting point to two targets in succession. The first target was always at the same position but could have one of two sizes. The second target could be in one of two different positions, and its size was different in each case. On some trials the first target changed size, and on others the second target changed size and position, as soon as the subject started to move. When the size of the first target changed, subjects slowed down the first segment of their movements; even the peak velocity, which occurred only about 150 ms after the change in size, was lower. Besides this fast response to the change itself, the dwell time at the first target was also affected: its duration increased after the change. Changing the size and position of the second target did not influence the first segment of the movement, but also increased the dwell time. The dwell time was much longer for a small second target, irrespective of its initial size. If subjects knew in advance which target could change, they moved faster than if they did not know which could change. Taken together, these

  7. Correction of gene expression data

    DEFF Research Database (Denmark)

    Darbani Shirvanehdeh, Behrooz; Stewart, C. Neal, Jr.; Noeparvar, Shahin

    2014-01-01

    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies....... For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce...

  8. Correct Linearization of Einstein's Equations

    Directory of Open Access Journals (Sweden)

    Rabounski D.

    2006-06-01

    Einstein's equations can ordinarily be reduced to a wave form (linearly dependent on the second derivatives of the space metric) only in the absence of gravitation, space rotation and Christoffel symbols. As shown here, the origin of the problem is the use of the general covariant theory of measurement. Here the wave form of Einstein's equations is obtained in terms of Zelmanov's chronometric invariants (physically observable projections on the observer's time line and spatial section). The resulting equations depend solely on the second derivatives, even in the presence of gravitation, space rotation and Christoffel symbols. The correct linearization proves that the Einstein equations are completely compatible with weak waves of the metric.

  9. Neutron borehole logging correction technique

    International Nuclear Information System (INIS)

    Goldman, L.H.

    1978-01-01

    In accordance with an illustrative embodiment of the present invention, a method and apparatus are disclosed for logging earth formations traversed by a borehole, in which an earth formation is irradiated with neutrons and the gamma radiation thereby produced in the formation and in the borehole is detected. A sleeve or shield that captures neutrons from the borehole and produces gamma radiation characteristic of that capture is provided to indicate the contribution of borehole capture events to the total detected gamma radiation. It is then possible to correct the total detected gamma radiation, and any earth formation parameters determined therefrom, for those borehole effects.

  10. A promising hybrid approach to SPECT attenuation correction

    International Nuclear Information System (INIS)

    Lewis, N.H.; Faber, T.L.; Corbett, J.R.; Stokely, E.M.

    1984-01-01

    Most methods for attenuation compensation in SPECT either rely on the assumption of uniform attenuation or use slow iteration to achieve accuracy. However, hybrid methods that combine iteration with simple multiplicative correction can accommodate nonuniform attenuation, and such methods converge faster than other iterative techniques. The authors evaluated two such methods, which differ in their use of a damping factor to control convergence. Both uniform and nonuniform attenuation were modeled, using simulated and phantom data for a rotating gamma camera. For simulations done with 360° data and the correct attenuation map, activity levels were reconstructed to within 5% of the correct values after one iteration. Using 180° data, reconstructed levels in regions representing lesion and background were within 5% of the correct values in three iterations; however, further iterations were needed to eliminate the characteristic streak artifacts. The damping factor had little effect on 360° reconstruction, but was needed for convergence with 180° data. For both cold- and hot-lesion models, image contrast was better with the hybrid methods than with the simpler geometric-mean corrector. Results from the hybrid methods were comparable to those obtained using the conjugate-gradient iterative method, but required 50-100% less reconstruction time. The relative speed of the hybrid methods, and their accuracy in reconstructing photon activity in the presence of nonuniform attenuation, make them promising tools for quantitative SPECT reconstruction.
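    The "simple multiplicative correction" family mentioned above can be illustrated with a Chang-style first-order correction under uniform attenuation: each point is rescaled by the reciprocal of its attenuation factor averaged over projection angles. A toy sketch for a uniform disk, with illustrative values:

```python
import math

def chord_to_boundary(x, y, theta, R):
    """Distance from (x, y) inside a disk of radius R to its boundary along theta."""
    dx, dy = math.cos(theta), math.sin(theta)
    # solve |(x, y) + t*(dx, dy)|^2 = R^2 for the positive root t
    b = x * dx + y * dy
    c = x * x + y * y - R * R
    return -b + math.sqrt(b * b - c)

def chang_factor(x, y, mu, R, n_angles=360):
    """Multiplicative correction: reciprocal of the angle-averaged attenuation."""
    atten = sum(math.exp(-mu * chord_to_boundary(x, y, 2 * math.pi * k / n_angles, R))
                for k in range(n_angles))
    return n_angles / atten

mu, R = 0.15, 10.0                # ~water-like attenuation (cm^-1), 10 cm disk
center = chang_factor(0.0, 0.0, mu, R)
print(round(center, 3))           # -> 4.482, i.e. exp(mu * R) at the center
```

    The hybrid methods in the abstract go further: they iterate, comparing reprojections of the corrected image with the measured projections, which is what handles nonuniform attenuation.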

  11. Evaluation of cast creep occurring during simulated clubfoot correction.

    Science.gov (United States)

    Cohen, Tamara L; Altiok, Haluk; Wang, Mei; McGrady, Linda M; Krzak, Joseph; Graf, Adam; Tarima, Sergey; Smith, Peter A; Harris, Gerald F

    2013-08-01

    The Ponseti method is a widely accepted and highly successful conservative treatment of pediatric clubfoot involving weekly manipulations and cast applications. Qualitative assessments have indicated the potential success of the technique with cast materials other than standard plaster of Paris. However, guidelines for clubfoot correction based on the mechanical response of these materials have yet to be investigated. The current study sought to characterize and compare the ability of three standard cast materials to maintain the Ponseti-corrected foot position by evaluating cast creep response. A dynamic cast testing device, built to model clubfoot correction, was wrapped in plaster of Paris, semi-rigid fiberglass, and rigid fiberglass. Three-dimensional motion responses to two joint stiffnesses were recorded. Rotational creep displacement and linearity of the limb-cast composite were analyzed. Minimal change in position over time was found for all materials. Among cast materials, the rotational creep displacement differed significantly (p < 0.05): the greatest creep displacement occurred in the plaster of Paris (2.0°), followed by the semi-rigid fiberglass (1.0°) and the rigid fiberglass (0.4°). Torque magnitude did not affect the creep displacement response. Analysis of normalized rotation showed quasi-linear viscoelastic behavior. This study provided a mechanical evaluation of cast material performance as used for clubfoot correction. Creep displacement was found to depend on cast material but to be insensitive to torque. This information may provide a quantitative and mechanical basis for future innovations in clubfoot care.

  12. [A new method of calibration and positioning in quantitative analysis of multicomponents by single marker].

    Science.gov (United States)

    He, Bing; Yang, Shi-Yan; Zhang, Yan

    2012-12-01

    This paper aims to establish a new method of calibration and positioning for quantitative analysis of multicomponents by single marker (QAMS), using Shuanghuanglian oral liquid as the research object. Relative correction factors were established, with chlorogenic acid as the reference, for the 11 other active components (neochlorogenic acid, cryptochlorogenic acid, caffeic acid, forsythoside A, scutellarin, isochlorogenic acid B, isochlorogenic acid A, isochlorogenic acid C, baicalin, phillyrin and wogonoside) in Shuanghuanglian oral liquid by three correction methods (multipoint correction, slope correction and quantitative factor correction); chromatographic peaks were positioned by linear regression. Only one standard was used to determine the content of 12 components in Shuanghuanglian oral liquid, instead of the many reference substances otherwise needed for quality control. The results showed that, within the linear ranges, no significant differences were found between the quantitative results for the 12 active constituents in 3 batches of Shuanghuanglian oral liquid determined by the 3 correction methods and by the external standard method (ESM) or standard curve method (SCM). The method is simpler and quicker than literature methods, with accurate, reliable results and good reproducibility, and positioning chromatographic peaks by linear regression proved more accurate than the relative retention times reported in the literature. Slope correction and quantitative factor correction are feasible and accurate for controlling the quality of traditional Chinese medicine.
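    The single-marker arithmetic can be sketched as follows: a relative correction factor (RCF) is learned once from calibration standards, after which only the reference compound's response in each run is needed. Peak areas and concentrations below are invented for illustration:

```python
def rcf(area_ref, conc_ref, area_i, conc_i):
    """Relative correction factor: f_i = (A_ref / C_ref) / (A_i / C_i)."""
    return (area_ref / conc_ref) / (area_i / conc_i)

def quantify(area_i, f_i, area_ref_sample, conc_ref_sample):
    """Content of component i using only the reference compound's response."""
    k_ref = area_ref_sample / conc_ref_sample   # in-run response of the reference
    return f_i * area_i / k_ref

# Calibration run: chlorogenic acid (reference) and a second component,
# both at known concentrations (all numbers invented)
f_2 = rcf(area_ref=1500.0, conc_ref=10.0, area_i=900.0, conc_i=10.0)

# Sample run: only the reference needs an external standard
content = quantify(area_i=450.0, f_i=f_2, area_ref_sample=750.0, conc_ref_sample=5.0)
print(round(content, 6))  # -> 5.0
```

    The paper's slope and quantitative-factor corrections are refinements of how f_i is fitted over the linear range, not a different principle.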

  13. Quantitative immunoassays for diagnosis and carrier detection in cystic fibrosis

    International Nuclear Information System (INIS)

    Bullock, S.; Hayward, C.; Manson, J.; Brock, D.J.H.; Raeburn, J.A.

    1982-01-01

    Quantitative immunoprecipitation and immunoradiometric assays have been developed for a protein present in the serum of cystic fibrosis homozygotes, and to a lesser extent in the serum of heterozygotes. When tested on a panel of sera from 14 cystic fibrosis patients, 29 heterozygotes and 23 controls, the immunoprecipitation assay allowed correct assignments to be made on 94% of occasions with one batch of antiserum and 95% with another. With the same panel of sera, the immunoradiometric assay allowed 94% correct assignments. It is suggested that such accuracy is the maximum that can be expected in the present state of knowledge of cystic fibrosis. (author)

  14. Progress towards in vitro quantitative imaging of human femur using compound quantitative ultrasonic tomography

    International Nuclear Information System (INIS)

    Lasaygues, Philippe; Ouedraogo, Edgard; Lefebvre, Jean-Pierre; Gindre, Marcel; Talmant, Marilyne; Laugier, Pascal

    2005-01-01

    The objective of this study is to perform cross-sectional ultrasonic quantitative tomography of the diaphysis of long bones. Ultrasonic propagation in bone is affected by the severe mismatch between the acoustic properties of this biological solid and those of the surrounding soft medium, namely the soft tissues in vivo or water in vitro. Bone imaging is therefore a nonlinear inverse-scattering problem. In this paper, we show that in vitro quantitative images of sound velocity in a human femur cross section can be reconstructed by combining ultrasonic reflection tomography (URT), which provides images of the macroscopic structure of the bone, and ultrasonic transmission tomography (UTT), which provides quantitative images of the sound velocity. For the shape, we developed an image-processing tool to extract the external and internal boundaries and to measure cortical thickness. For velocity mapping, we used a wavelet analysis tool adapted to ultrasound, which allowed us to detect precisely the time of flight of the transmitted signals. A brief review of the ultrasonic tomography we developed, using wave-path correction algorithms and compensation procedures, is presented. Also shown are the first results of our analyses on models and specimens of long bone using our new iterative quantitative protocol.

  15. Correction of rotational distortion for catheter-based en face OCT and OCT angiography

    Science.gov (United States)

    Ahsen, Osman O.; Lee, Hsiang-Chieh; Giacomelli, Michael G.; Wang, Zhao; Liang, Kaicheng; Tsai, Tsung-Han; Potsaid, Benjamin; Mashimo, Hiroshi; Fujimoto, James G.

    2015-01-01

    We demonstrate a computationally efficient method for correcting the nonuniform rotational distortion (NURD) in catheter-based imaging systems to improve endoscopic en face optical coherence tomography (OCT) and OCT angiography. The method performs nonrigid registration using fiducial markers on the catheter to correct rotational speed variations. Algorithm performance is investigated with an ultrahigh-speed endoscopic OCT system and micromotor catheter. Scan nonuniformity is quantitatively characterized, and artifacts from rotational speed variations are significantly reduced. Furthermore, we present endoscopic en face OCT and OCT angiography images of human gastrointestinal tract in vivo to demonstrate the image quality improvement using the correction algorithm. PMID:25361133

  16. Quantitative EDXS: Influence of geometry on a four detector system

    International Nuclear Information System (INIS)

    Kraxner, Johanna; Schäfer, Margit; Röschel, Otto; Kothleitner, Gerald; Haberfehlner, Georg; Paller, Manuel; Grogger, Werner

    2017-01-01

    The influence of geometry on quantitative energy-dispersive X-ray spectrometry (EDXS) analysis is determined for a ChemiSTEM system (Super-X) in combination with a low-background double-tilt specimen holder. For the first time, a combination of experimental measurements and simulations is used to determine the positions of the individual detectors of a Super-X system. These positions allow us to calculate the detectors' solid angles and to estimate the amount of detector shadowing and its influence on quantitative EDXS analysis, including absorption correction using the ζ-factor method. Shadowing by both the brass portions and the beryllium specimen carrier of the holder severely affects the quantification of low to medium atomic number elements. A multi-detector system is discussed in terms of the practical consequences of the described effects, and a quantitative evaluation of a fayalite sample is demonstrated. Corrections and suggestions for minimizing systematic errors are discussed to improve quantitative methods for a multi-detector system. - Highlights: • Geometrical issues for EDXS quantification on a Super-X system. • Realistic model of a specimen holder using X-ray computed tomography. • Determination of the exact detector positions of a Super-X system. • Influence of detector shadowing and the Be specimen carrier on quantitative EDXS.

  17. Boomerang pattern correction of gynecomastia.

    Science.gov (United States)

    Hurwitz, Dennis J

    2015-02-01

    After excess skin and fat are removed, a body-lift suture advances skin and suspends ptotic breasts, the mons pubis, and buttocks. For women, the lift includes sculpturing adiposity. While some excess fat may need removal, muscular men should receive a deliberate effort to achieve generalized tight skin closure to reveal superficial muscular bulk. For skin to be tightly bound to muscle, the excess needs to be removed both horizontally and vertically. To aesthetically accomplish that goal, a series of oblique elliptical excisions have been designed. Twenty-four consecutive patients received boomerang pattern correction of gynecomastia. In the last 12 patients, a J torsoplasty extension replaced the transverse upper body lift. Indirect undermining and the opposing force of a simultaneous abdominoplasty obliterate the inframammary fold. To complete effacement of the entire torso in 11 patients, an abdominoplasty was extended by oblique excisions over bulging flanks. Satisfactory improvement was observed in all 24 boomerang cases. A disgruntled patient was displeased with distorted nipples after revision surgery. Scar maturation in the chest is lengthy, with scars taking years to flatten and fade. Complications were limited and no major revisions were needed. In selected patients, comprehensive body contouring surgery consists of a boomerang correction of gynecomastia. J torsoplasty with an abdominoplasty and oblique excisions of the flanks has proven to be a practical means to achieve aesthetic goals. Gender-specific body lift surgery that goes far beyond the treatment of gynecomastia best serves the muscular male patient after massive weight loss. Therapeutic, IV.

  18. Automatic quantitative metallography

    International Nuclear Information System (INIS)

    Barcelos, E.J.B.V.; Ambrozio Filho, F.; Cunha, R.C.

    1976-01-01

    The quantitative determination of metallographic parameters is analysed through a description of the Micro-Videomat automatic image analysis system, applied to the volumetric percentage of perlite in nodular cast irons, porosity and average grain size in high-density sintered UO2 pellets, and grain size of ferritic steel. The techniques adopted are described and the results obtained are compared with those of the direct counting processes: systematic point counting (grid) to measure volume, and the intersection method, using a circumference of known radius, for the average grain size. The technique adopted for nodular cast iron resulted from the small difference in optical reflectivity between graphite and perlite. The porosity evaluation of sintered UO2 pellets is also analysed.

  19. Quantitative assessment of 201TlCl myocardial SPECT

    International Nuclear Information System (INIS)

    Uehara, Toshiisa

    1987-01-01

    Clinical evaluation of the quantitative analysis of Tl-201 myocardial tomography by SPECT (Single Photon Emission Computed Tomography) was performed in comparison with visual evaluation. The method of quantitative analysis has been already reported in our previous paper. In this study, the program of re-standardization in the case of lateral myocardial infarction was added. This program was useful mainly for the evaluation of lesions in the left circumflex coronary artery. Regarding the degree of diagnostic accuracy of myocardial infarction in general, quantitative evaluation of myocardial SPECT images was highest followed by visual evaluation of myocardial SPECT images, and visual evaluation of myocardial planar images. However, in the case of anterior myocardial infarction, visual evaluation of myocardial SPECT images has almost the same detectability as quantitative evaluation of myocardial SPECT images. In the case of infero-posterior myocardial infarction, quantitative evaluation was superior to visual evaluation. As for specificity, quantitative evaluation of SPECT images was slightly inferior to visual evaluation of SPECT images. An infarction map was made by quantitative analysis and this enabled us to determine the infarction site, extent and degree according to easily recognizable patterns. As a result, the responsible coronary artery lesion could be inferred correctly and the calculated infarction score could be correlated with the residual left ventricular function after myocardial infarction. (author)

  20. Automatic Power Factor Correction Using Capacitive Bank

    OpenAIRE

    Mr. Anant Kumar Tiwari; Mrs. Durga Sharma

    2014-01-01

    The power factor correction of electrical loads is a problem common to all industrial companies. Earlier, power factor correction was done by adjusting the capacitive bank manually [1]. An automated power factor corrector (APFC) using a capacitive load bank can provide this correction automatically: the proposed project measures the power factor of the load using a microcontroller. The design of this auto-adjustable power factor correction is ...
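    The sizing arithmetic behind such a corrector is standard: the reactive power to switch in is Qc = P(tan φ1 − tan φ2), where φ1 and φ2 are the present and target power-factor angles. A sketch with illustrative numbers, not taken from the paper:

```python
import math

def required_kvar(p_kw, pf_now, pf_target):
    """Reactive power (kvar) the bank must supply: Qc = P * (tan(phi1) - tan(phi2))."""
    phi1, phi2 = math.acos(pf_now), math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

def capacitance_uF(q_var, v_rms, freq_hz=50.0):
    """Single-phase capacitance (uF) delivering q_var at v_rms and freq_hz."""
    return q_var / (2 * math.pi * freq_hz * v_rms ** 2) * 1e6

q = required_kvar(100.0, 0.70, 0.95)          # 100 kW load, correct 0.70 -> 0.95
print(round(q, 1))                             # -> 69.2 (kvar)
print(round(capacitance_uF(q * 1e3, 400.0)))   # bank size at 400 V, 50 Hz
```

    An APFC firmware repeats this calculation from measured P and power factor and switches capacitor steps in or out accordingly.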

  1. Practical estimate of gradient nonlinearity for implementation of apparent diffusion coefficient bias correction.

    Science.gov (United States)

    Malkyarenko, Dariya I; Chenevert, Thomas L

    2014-12-01

    To describe an efficient procedure to empirically characterize gradient nonlinearity and correct the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for the individual gradient coils along the superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by the system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. The empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% to 2% at clinically relevant offsets, both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction, in brain and in ice-water. Direction-averaged trace image correction was adequate only for isotropic media. Empirical scalar adjustment of an independent gradient nonlinearity model adequately described DWI bias for a clinical scanner. The observed efficiency of the implemented ADC bias correction agreed quantitatively with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
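    How a corrected b-value propagates into ADC can be sketched from the two-point relation ADC = ln(S0/Sb)/b: gradient nonlinearity scales the true gradient amplitude, hence b, by (1+ε)², and evaluating the fit with the corrected b removes the bias. The numbers below are illustrative, not the paper's measurements:

```python
import math

def adc(s0, sb, b):
    """Two-point ADC estimate from the b=0 and b=b signals."""
    return math.log(s0 / sb) / b

true_adc = 1.1e-3                  # mm^2/s, in the range of an ice-water phantom
eps = 0.095                        # hypothetical 9.5% gradient error at this offset
b_nom, s0 = 1000.0, 1.0
b_actual = b_nom * (1 + eps) ** 2  # b scales with the square of gradient amplitude
sb = s0 * math.exp(-b_actual * true_adc)

biased = adc(s0, sb, b_nom)        # nominal b: overestimates ADC
corrected = adc(s0, sb, b_actual)  # corrected b: recovers the true value
print(round(biased / true_adc, 3)) # -> 1.199, i.e. ~20% bias, as in the abstract
```

    Equivalently, the DWI intensities can be rescaled instead of b, which is the abstract's other reported route to the same result.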

  2. 9 CFR 416.15 - Corrective Actions.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Corrective Actions. 416.15 Section 416... SANITATION § 416.15 Corrective Actions. (a) Each official establishment shall take appropriate corrective... the procedures specified therein, or the implementation or maintenance of the Sanitation SOP's, may...

  3. Working toward Literacy in Correctional Education ESL

    Science.gov (United States)

    Gardner, Susanne

    2014-01-01

    Correctional Education English as a Second Language (ESL) literacy programs vary from state to state, region to region. Some states enroll their correctional ESL students in adult basic education (ABE) classes; other states have separate classes and programs. At the Maryland Correctional Institution in Jessup, the ESL class is a self-contained…

  4. 78 FR 59798 - Small Business Subcontracting: Correction

    Science.gov (United States)

    2013-09-30

    ... SMALL BUSINESS ADMINISTRATION 13 CFR Part 125 RIN 3245-AG22 Small Business Subcontracting: Correction AGENCY: U.S. Small Business Administration. ACTION: Correcting amendments. SUMMARY: This document... business subcontracting to implement provisions of the Small Business Jobs Act of 2010. This correction...

  5. Correction magnet power supplies for APS machine

    International Nuclear Information System (INIS)

    Kang, Y.G.

    1991-04-01

    A number of correction magnets are required for the advanced photon source (APS) machine to correct the beam. There are five kinds of correction magnets for the storage ring, two for the injector synchrotron, and two for the positron accumulator ring (PAR). Table I shows a summary of the correction magnet power supplies for the APS machine. For the storage ring, the displacement of the quadrupole magnets due to the low frequency vibration below 25 Hz has the most significant effect on the stability of the positron closed orbit. The primary external source of the low frequency vibration is the ground motion of approximately 20 μm amplitude, with frequency components concentrated below 10 Hz. These low frequency vibrations can be corrected by using the correction magnets, whose field strengths are controlled individually through the feedback loop comprising the beam position monitoring system. The correction field required could be either positive or negative. Thus for all the correction magnets, bipolar power supplies (BPSs) are required to produce both polarities of correction fields. Three different types of BPS are used for all the correction magnets. Type I BPSs cover all the correction magnets for the storage ring, except for the trim dipoles. The maximum output current of the Type I BPS is 140 Adc. A Type II BPS powers a trim dipole, and its maximum output current is 60 Adc. The injector synchrotron and PAR correction magnets are powered from Type III BPSs, whose maximum output current is 25 Adc.

  6. Forward induction reasoning and correct beliefs

    NARCIS (Netherlands)

    Perea y Monsuwé, Andrés

    2017-01-01

    All equilibrium concepts implicitly make a correct beliefs assumption, stating that a player believes that his opponents are correct about his first-order beliefs. In this paper we show that in many dynamic games of interest, this correct beliefs assumption may be incompatible with a very basic form

  7. A quantum correction to chaos

    Energy Technology Data Exchange (ETDEWEB)

    Fitzpatrick, A. Liam [Department of Physics, Boston University, 590 Commonwealth Avenue, Boston, MA 02215 (United States)]; Kaplan, Jared [Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles St, Baltimore, MD 21218 (United States)]

    2016-05-12

    We use results on Virasoro conformal blocks to study chaotic dynamics in CFT_2 at large central charge c. The Lyapunov exponent λ_L, which is a diagnostic for the early onset of chaos, receives 1/c corrections that may be interpreted as λ_L = (2π/β)(1 + 12/c). However, out of time order correlators receive other equally important 1/c suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on λ_L that emerges at large c, focusing on CFT_2 and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.
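
    Written out, the quoted correction, and the large-c chaos bound of Maldacena, Shenker and Stanford that the abstract compares it against, read:

```latex
\lambda_L \;=\; \frac{2\pi}{\beta}\left(1 + \frac{12}{c}\right) + \mathcal{O}\!\left(c^{-2}\right),
\qquad
\lambda_L \;\le\; \frac{2\pi}{\beta} \quad \text{as } c \to \infty .
```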

  8. Radiative corrections in bumblebee electrodynamics

    Directory of Open Access Journals (Sweden)

    R.V. Maluf

    2015-10-01

    We investigate some quantum features of bumblebee electrodynamics in flat spacetimes. The bumblebee field is a vector field that leads to spontaneous Lorentz symmetry breaking. For a smooth quadratic potential, the massless excitation (Nambu–Goldstone boson) can be identified as the photon, transverse to the vacuum expectation value of the bumblebee field. Besides, there is a massive excitation associated with the longitudinal mode, whose presence leads to instability in the spectrum of the theory. By using the principal-value prescription, we show that no one-loop radiative corrections to the mass term are generated. Moreover, the bumblebee self-energy is not transverse, showing that the propagation of the longitudinal mode cannot be excluded from the effective theory.

  9. A quantum correction to chaos

    International Nuclear Information System (INIS)

    Fitzpatrick, A. Liam; Kaplan, Jared

    2016-01-01

    We use results on Virasoro conformal blocks to study chaotic dynamics in CFT_2 at large central charge c. The Lyapunov exponent λ_L, which is a diagnostic for the early onset of chaos, receives 1/c corrections that may be interpreted as λ_L=((2π)/β)(1+(12/c)). However, out of time order correlators receive other equally important 1/c suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on λ_L that emerges at large c, focusing on CFT_2 and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.

  10. Electromagnetic corrections to baryon masses

    International Nuclear Information System (INIS)

    Durand, Loyal; Ha, Phuoc

    2005-01-01

    We analyze the electromagnetic contributions to the octet and decuplet baryon masses using the heavy-baryon approximation in chiral effective field theory and methods we developed in earlier analyses of the baryon masses and magnetic moments. Our methods connect simply to Morpurgo's general parametrization of the electromagnetic contributions and to semirelativistic quark models. Our calculations are carried out including the one-loop mesonic corrections to the basic electromagnetic interactions, so to two loops overall. We find that to this order in the chiral loop expansion there are no three-body contributions. The Coleman-Glashow relation and other sum rules derived in quark models with only two-body terms therefore continue to hold, and violations involve at least three-loop processes and can be expected to be quite small. We present the complete formal results and some estimates of the matrix elements here. Numerical calculations will be presented separately

  11. [Surgical correction of cleft palate].

    Science.gov (United States)

    Kimura, F T; Pavia Noble, A; Soriano Padilla, F; Soto Miranda, A; Medellín Rodríguez, A

    1990-04-01

    This study presents a statistical review of corrective surgery for cleft palate, based on cases treated at the maxillo-facial surgery units of the Pediatrics Hospital of the Centro Médico Nacional and at Centro Médico La Raza of the National Institute of Social Security of Mexico, over a five-year period. Interdisciplinary management as performed at the Cleft-Palate Clinic, in an integrated approach involving specialists in maxillo-facial surgery, maxillar orthopedics, genetics, social work and mental hygiene, seeking to reestablish the stomatological and psychological functions of children afflicted by cleft palate, is amply described. The frequency and classification of the various techniques practiced in that service are described, as well as surgical statistics for 188 patients, which include a total of 256 palate surgeries performed from March 1984 to March 1989, applying three different techniques and proposing a combination of them in a single operation, in order to avoid complementary surgery.

  12. Quantitative analysis of X-band weather radar attenuation correction accuracy

    NARCIS (Netherlands)

    Berne, A.D.; Uijlenhoet, R.

    2006-01-01

    At short wavelengths, especially C-, X-, and K-band, weather radar signals are attenuated by the precipitation along their paths. This constitutes a major source of error for radar rainfall estimation, in particular for intense precipitation. A recently developed stochastic simulator of range

  13. Distortion of depth perception in virtual environments using stereoscopic displays: quantitative assessment and corrective measures

    Science.gov (United States)

    Kleiber, Michael; Winkelholz, Carsten

    2008-02-01

    The aim of the presented research was to quantify the distortion of depth perception when using stereoscopic displays. The visualization parameters of the virtual reality system used, such as perspective, haploscopic separation and width of stereoscopic separation, were varied. The experiment was designed to measure distortion in depth perception according to allocentric frames of reference. The results of the experiments indicate that some of the parameters have antithetic effects, which makes it possible to compensate for the distortion of depth perception over a range of depths. In contrast to earlier research, which reported underestimation of perceived depth, we found that depth was overestimated when using true projection parameters according to the position of the eyes of the user and the display geometry.

  14. Effects of attenuation map accuracy on attenuation-corrected micro-SPECT images

    NARCIS (Netherlands)

    Wu, C.; Gratama van Andel, H.A.; Laverman, P.; Boerman, O.C.; Beekman, F.J.

    2013-01-01

    Background In single-photon emission computed tomography (SPECT), attenuation of photon flux in tissue affects quantitative accuracy of reconstructed images. Attenuation maps derived from X-ray computed tomography (CT) can be employed for attenuation correction. The attenuation coefficients as well

  15. Geometrical conditions at the quantitative neutronographic texture analysis

    International Nuclear Information System (INIS)

    Tobisch, J.; Kleinstueck, K.

    1975-10-01

    The beam geometry for measuring quantitative pole figures with a neutronographic texture diffractometer is explained for transmission and reflection arrangements of spherical samples and sheets. For given dimensions of the counter aperture, the maximum possible cross sections of the incident beam are calculated as a function of sample dimensions and the Bragg angle θ. Methods for the calculation of absorption factors and volume corrections are given. Under special conditions, the transmission case offers advantages for sample motion in the +α direction. (author)

  16. Quantitative DNA Fiber Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Chun-Mei; Wang, Mei; Greulich-Bode, Karin M.; Weier, Jingly F.; Weier, Heinz-Ulli G.

    2008-01-28

    Several hybridization-based methods used to delineate single copy or repeated DNA sequences in larger genomic intervals take advantage of the increased resolution and sensitivity of free chromatin, i.e., chromatin released from interphase cell nuclei. Quantitative DNA fiber mapping (QDFM) differs from the majority of these methods in that it applies FISH to purified, clonal DNA molecules which have been bound with at least one end to a solid substrate. The DNA molecules are then stretched by the action of a receding meniscus at the water-air interface, resulting in DNA molecules stretched homogeneously to about 2.3 kb/µm. When non-isotopically, multicolor-labeled probes are hybridized to these stretched DNA fibers, their respective binding sites are visualized in the fluorescence microscope, and their relative distance can be measured and converted into kilobase pairs (kb). The QDFM technique has found useful applications ranging from the detection and delineation of deletions or overlap between linked clones to the construction of high-resolution physical maps to studies of stalled DNA replication and transcription.
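
    The distance-to-kilobase conversion the abstract describes is a single multiplication by the quoted stretching factor (the helper name is invented):

```python
# QDFM converts measured inter-probe distances to kilobase pairs using the
# ~2.3 kb/um homogeneous stretching factor quoted in the abstract.
KB_PER_MICRON = 2.3

def distance_to_kb(microns):
    """Convert a measured fiber distance (in microns) to kilobase pairs."""
    return microns * KB_PER_MICRON

# A 10 um separation between probe signals corresponds to about 23 kb.
span_kb = distance_to_kb(10.0)
```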

  17. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.
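
    None of the advanced codes named above fits in a few lines, but the core mechanism they all refine, computing a syndrome to locate and flip an erroneous bit, can be shown with a classic Hamming(7,4) code (an illustrative sketch, not taken from the book):

```python
import numpy as np

# Classic Hamming(7,4): 4 data bits + 3 parity bits, corrects any single-bit
# error. G is the systematic generator matrix, H the parity-check matrix.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits):
    return (np.array(bits) @ G) % 2

def decode(word):
    word = np.array(word).copy()
    syndrome = (H @ word) % 2
    if syndrome.any():  # non-zero syndrome: a single-bit error occurred
        idx = np.where((H.T == syndrome).all(axis=1))[0][0]  # matching column of H
        word[idx] ^= 1  # flip the erroneous bit back
    return word[:4]     # systematic code: data bits come first

msg = [1, 0, 1, 1]
cw = encode(msg)        # 7-bit codeword
cw[2] ^= 1              # channel flips one bit
recovered = decode(cw)  # the original 4 data bits
```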

  18. Simplified correction of g-value measurements

    DEFF Research Database (Denmark)

    Duer, Karsten

    1998-01-01

    been carried out using a detailed physical model based on ISO9050 and prEN410 but using polarized data for non-normal incidence. This model is only valid for plane, clear glazings and therefor not suited for corrections of measurements performed on complex glazings. To investigate a more general...... correction procedure the results from the measurements on the Interpane DGU have been corrected using the principle outlined in (Rosenfeld, 1996). This correction procedure is more general as corrections can be carried out without a correct physical model of the investigated glazing. On the other hand...... the way this “general” correction procedure is used is not always in accordance to the physical conditions....

  19. Quantitative sexing (Q-Sexing) and relative quantitative sexing (RQ ...

    African Journals Online (AJOL)

    samer

    Key words: Polymerase chain reaction (PCR), quantitative real time polymerase chain reaction (qPCR), quantitative sexing, Siberian tiger. INTRODUCTION. Animal molecular sexing .... 43:3-12. Ellegren H (1996). First gene on the avian W chromosome (CHD) provides a tag for universal sexing of non-ratite birds. Proc.

  20. Pulse compressor with aberration correction

    Energy Technology Data Exchange (ETDEWEB)

    Mankos, Marian [Electron Optica, Inc., Palo Alto, CA (United States)

    2015-11-30

    In this SBIR project, Electron Optica, Inc. (EOI) is developing an electron mirror-based pulse compressor attachment to new and retrofitted dynamic transmission electron microscopes (DTEMs) and ultrafast electron diffraction (UED) cameras for improving the temporal resolution of these instruments from the characteristic range of a few picoseconds to a few nanoseconds and beyond, into the sub-100 femtosecond range. The improvement will enable electron microscopes and diffraction cameras to better resolve the dynamics of reactions in the areas of solid state physics, chemistry, and biology. EOI’s pulse compressor technology utilizes the combination of electron mirror optics and a magnetic beam separator to compress the electron pulse. The design exploits the symmetry inherent in reversing the electron trajectory in the mirror in order to compress the temporally broadened beam. This system also simultaneously corrects the chromatic and spherical aberration of the objective lens for improved spatial resolution. This correction will be found valuable as the source size is reduced with laser-triggered point source emitters. With such emitters, it might be possible to significantly reduce the illuminated area and carry out ultrafast diffraction experiments from small regions of the sample, e.g. from individual grains or nanoparticles. During phase I, EOI drafted a set of candidate pulse compressor architectures and evaluated the trade-offs between temporal resolution and electron bunch size to achieve the optimum design for two particular applications with market potential: increasing the temporal and spatial resolution of UEDs, and increasing the temporal and spatial resolution of DTEMs. Specialized software packages that have been developed by MEBS, Ltd. were used to calculate the electron optical properties of the key pulse compressor components: namely, the magnetic prism, the electron mirror, and the electron lenses. In the final step, these results were folded

  1. qF-SSOP: real-time optical property corrected fluorescence imaging

    Science.gov (United States)

    Valdes, Pablo A.; Angelo, Joseph P.; Choi, Hak Soo; Gioux, Sylvain

    2017-01-01

    Fluorescence imaging is well suited to provide image guidance during resections in oncologic and vascular surgery. However, the distorting effects of tissue optical properties on the emitted fluorescence are poorly compensated for on even the most advanced fluorescence image guidance systems, leading to subjective and inaccurate estimates of tissue fluorophore concentrations. Here we present a novel fluorescence imaging technique that performs real-time (i.e., video rate) optical property corrected fluorescence imaging. We perform full field of view simultaneous imaging of tissue optical properties using Single Snapshot of Optical Properties (SSOP) and fluorescence detection. The estimated optical properties are used to correct the emitted fluorescence with a quantitative fluorescence model to provide quantitative fluorescence-Single Snapshot of Optical Properties (qF-SSOP) images with less than 5% error. The technique is rigorous, fast, and quantitative, enabling ease of integration into the surgical workflow with the potential to improve molecular guidance intraoperatively. PMID:28856038

  2. Deterministic quantitative risk assessment development

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)

    2009-07-01

    Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model based methodology. This approach has been found to be very flexible and provide useful results for identifying high risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)
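
    The calibration step described above, anchoring semi-quantitative likelihood scores to a peer-system failure rate, might look like the following sketch. The scores, the exponential score-to-likelihood mapping, and the system rate are all invented for illustration:

```python
import numpy as np

# Semi-quantitative likelihood scores for five pipeline segments (invented).
scores = np.array([2.0, 3.5, 5.0, 7.5, 9.0])

# Assumed mapping: relative likelihood grows exponentially with the score.
weights = np.exp(scores / 2.0)

# Anchor: system-wide failure rate from peer pipeline data (invented number).
system_rate = 0.02  # expected failures per year across the whole system

# Distribute the anchored rate over segments in proportion to likelihood,
# turning index scores into quantitative expected failure frequencies.
segment_rates = system_rate * weights / weights.sum()
```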

  3. Rulison Site corrective action report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    Project Rulison was a joint US Atomic Energy Commission (AEC) and Austral Oil Company (Austral) experiment, conducted under the AEC's Plowshare Program, to evaluate the feasibility of using a nuclear device to stimulate natural gas production in low-permeability gas-producing geologic formations. The experiment was conducted on September 10, 1969, and consisted of detonating a 40-kiloton nuclear device at a depth of 2,568 m below ground surface (BGS). This Corrective Action Report describes the cleanup of petroleum hydrocarbon- and heavy-metal-contaminated sediments from an old drilling effluent pond and characterization of the mud pits used during drilling of the R-EX well at the Rulison Site. The Rulison Site is located approximately 65 kilometers (40 miles) northeast of Grand Junction, Colorado. The effluent pond was used for the storage of drilling mud during drilling of the emplacement hole for the 1969 gas stimulation test conducted by the AEC. This report also describes the activities performed to determine whether contamination is present in mud pits used during the drilling of well R-EX, the gas production well drilled at the site to evaluate the effectiveness of the detonation in stimulating gas production. The investigation activities described in this report were conducted during the autumn of 1995, concurrent with the cleanup of the drilling effluent pond. This report describes the activities performed during the soil investigation and provides the analytical results for the samples collected during that investigation.

  4. Rulison Site corrective action report

    International Nuclear Information System (INIS)

    1996-09-01

    Project Rulison was a joint US Atomic Energy Commission (AEC) and Austral Oil Company (Austral) experiment, conducted under the AEC's Plowshare Program, to evaluate the feasibility of using a nuclear device to stimulate natural gas production in low-permeability gas-producing geologic formations. The experiment was conducted on September 10, 1969, and consisted of detonating a 40-kiloton nuclear device at a depth of 2,568 m below ground surface (BGS). This Corrective Action Report describes the cleanup of petroleum hydrocarbon- and heavy-metal-contaminated sediments from an old drilling effluent pond and characterization of the mud pits used during drilling of the R-EX well at the Rulison Site. The Rulison Site is located approximately 65 kilometers (40 miles) northeast of Grand Junction, Colorado. The effluent pond was used for the storage of drilling mud during drilling of the emplacement hole for the 1969 gas stimulation test conducted by the AEC. This report also describes the activities performed to determine whether contamination is present in mud pits used during the drilling of well R-EX, the gas production well drilled at the site to evaluate the effectiveness of the detonation in stimulating gas production. The investigation activities described in this report were conducted during the autumn of 1995, concurrent with the cleanup of the drilling effluent pond. This report describes the activities performed during the soil investigation and provides the analytical results for the samples collected during that investigation

  5. Metrics with vanishing quantum corrections

    International Nuclear Information System (INIS)

    Coley, A A; Hervik, S; Gibbons, G W; Pope, C N

    2008-01-01

    We investigate solutions of the classical Einstein or supergravity equations that solve any set of quantum corrected Einstein equations in which the Einstein tensor plus a multiple of the metric is equated to a symmetric conserved tensor T_μν(g_αβ, ∂_τ g_αβ, ∂_τ∂_σ g_αβ, ...) constructed from sums of terms involving contractions of the metric and powers of arbitrary covariant derivatives of the curvature tensor. A classical solution, such as an Einstein metric, is called universal if, when evaluated on that Einstein metric, T_μν is a multiple of the metric. A Ricci flat classical solution is called strongly universal if, when evaluated on that Ricci flat metric, T_μν vanishes. It is well known that pp-waves in four spacetime dimensions are strongly universal. We focus attention on a natural generalization; Einstein metrics with holonomy Sim(n - 2) in which all scalar invariants are zero or constant. In four dimensions we demonstrate that the generalized Ghanam-Thompson metric is weakly universal and that the Goldberg-Kerr metric is strongly universal; indeed, we show that universality extends to all four-dimensional Sim(2) Einstein metrics. We also discuss generalizations to higher dimensions

  6. Fundamental quantitative security in quantum key generation

    International Nuclear Information System (INIS)

    Yuen, Horace P.

    2010-01-01

    We analyze the fundamental security significance of the quantitative criteria on the final generated key K in quantum key generation including the quantum criterion d, the attacker's mutual information on K, and the statistical distance between her distribution on K and the uniform distribution. For operational significance a criterion has to produce a guarantee on the attacker's probability of correctly estimating some portions of K from her measurement, in particular her maximum probability of identifying the whole K. We distinguish between the raw security of K when the attacker just gets at K before it is used in a cryptographic context and its composition security when the attacker may gain further information during its actual use to help get at K. We compare both of these securities of K to those obtainable from conventional key expansion with a symmetric key cipher. It is pointed out that a common belief in the superior security of a quantum generated K is based on an incorrect interpretation of d which cannot be true, and the security significance of d is uncertain. Generally, the quantum key distribution key K has no composition security guarantee and its raw security guarantee from concrete protocols is worse than that of conventional ciphers. Furthermore, for both raw and composition security there is an exponential catch-up problem that would make it difficult to quantitatively improve the security of K in a realistic protocol. Some possible ways to deal with the situation are suggested.
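
    The statistical-distance criterion the abstract refers to is the total-variation distance between the attacker's distribution on K and the uniform one. A toy computation on a 3-bit key space (the attacker's distribution is invented):

```python
# Total-variation (statistical) distance between two distributions on K.
def statistical_distance(p, q):
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

n = 8                                  # 3-bit key space
uniform = [1.0 / n] * n
attacker = [0.3] + [0.7 / 7] * 7       # attacker favours one key value (invented)

d = statistical_distance(attacker, uniform)
p_guess = max(attacker)                # her best single-guess success probability

# p_guess (0.3) far exceeds the uniform 1/8 even though d stays moderate,
# illustrating why the operational meaning of d needs care.
```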

  7. A multiresolution image based approach for correction of partial volume effects in emission tomography

    International Nuclear Information System (INIS)

    Boussion, N; Hatt, M; Lamare, F; Bizais, Y; Turzo, A; Rest, C Cheze-Le; Visvikis, D

    2006-01-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVE can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROIs), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. Potential advantages of PVE-corrected images include the ability to accurately delineate functional volumes as well as improving tumour-to-background ratio, resulting in an associated improvement in the analysis of response to therapy studies and diagnostic examinations, respectively. The objective of our study was therefore to develop a methodology for PVE correction not only to enable the accurate recuperation of activity concentrations, but also to generate PVE-corrected images. In the multiresolution analysis that we define here, details of a high-resolution image H (MRI or CT) are extracted, transformed and integrated in a low-resolution image L (PET or SPECT). A discrete wavelet transform of both H and L images is performed by using the 'a trous' algorithm, which allows the spatial frequencies (details, edges, textures) to be obtained easily at a level of resolution common to H and L. A model is then inferred to build the lacking details of L from the high-frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered as a reference but limited to ROI analyses. 
Visual improvement and quantitative correction were also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using a FDG brain PET and corresponding T1-weighted MRI in
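
    A drastically simplified, single-level version of the detail-transfer idea can be sketched as follows. Here one Gaussian band-pass level and a global weight stand in for the paper's 'a trous' wavelet decomposition and its fitted inter-modality model (both substitutions are assumptions for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_transfer(low_res, high_res, alpha=1.0, sigma=2.0):
    """Inject high-frequency anatomical detail into a blurred emission image.

    Single-level stand-in for the multiresolution PVE correction described
    in the abstract; alpha and sigma are illustrative tuning parameters.
    """
    detail = high_res - gaussian_filter(high_res, sigma)  # edges/textures of H
    return low_res + alpha * detail                       # sharpened L

rng = np.random.default_rng(0)
H = rng.random((64, 64))       # stand-in anatomical (MRI/CT) image
L = gaussian_filter(H, 3.0)    # simulated PVE-blurred emission image
L_corr = detail_transfer(L, H) # emission image with restored high frequencies
```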

  8. Quantitative X-ray microanalysis of biological specimens

    International Nuclear Information System (INIS)

    Roomans, G.M.

    1988-01-01

    Quantitative X-ray microanalysis of biological specimens requires an approach that is somewhat different from that used in the materials sciences. The first step is deconvolution and background subtraction on the obtained spectrum. The further treatment depends on the type of specimen: thin, thick, or semithick. For thin sections, the continuum method of quantitation is most often used, but it should be combined with an accurate correction for extraneous background. However, alternative methods to determine local mass should also be considered. In the analysis of biological bulk specimens, the ZAF-correction method appears to be less useful, primarily because of the uneven surface of biological specimens. The peak-to-local-background model may be a more adequate method for thick specimens that are not mounted on a thick substrate. Quantitative X-ray microanalysis of biological specimens generally requires the use of standards that preferably should resemble the specimen in chemical and physical properties. Special problems in biological microanalysis include low count rates, specimen instability and mass loss, extraneous contributions to the spectrum, and preparative artifacts affecting quantitation. A relatively recent development in X-ray microanalysis of biological specimens is the quantitative determination of local water content

  9. Issues in Quantitative Analysis of Ultraviolet Imager (UVI) Data: Airglow

    Science.gov (United States)

    Germany, G. A.; Richards, P. G.; Spann, J. F.; Brittnacher, M. J.; Parks, G. K.

    1999-01-01

    The GGS Ultraviolet Imager (UVI) has proven to be especially valuable in correlative substorm, auroral morphology, and extended statistical studies of the auroral regions. Such studies are based on knowledge of the location, spatial, and temporal behavior of auroral emissions. More quantitative studies, based on absolute radiometric intensities from UVI images, require a more intimate knowledge of the instrument behavior and data processing requirements and are inherently more difficult than studies based on relative knowledge of the oval location. In this study, UVI airglow observations are analyzed and compared with model predictions to illustrate issues that arise in quantitative analysis of UVI images. These issues include instrument calibration, long term changes in sensitivity, and imager flat field response as well as proper background correction. Airglow emissions are chosen for this study because of their relatively straightforward modeling requirements and because of their implications for thermospheric compositional studies. The analysis issues discussed here, however, are identical to those faced in quantitative auroral studies.

  10. Improved Correction of Misclassification Bias With Bootstrap Imputation.

    Science.gov (United States)

    van Walraven, Carl

    2018-07-01

    Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias using only code sensitivity and specificity, but it may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. The prevalence of severe renal failure in 86 patient strata and its association with 43 covariates were determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
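
The prevalence arm of QBA can be sketched with the standard Rogan-Gladen-style estimator; the function below illustrates how a correction needing only code sensitivity and specificity works, and why it may return invalid results. It is an illustration, not the study's actual code:

```python
def qba_corrected_prevalence(observed_prev, sensitivity, specificity):
    """Correct an observed (code-based) prevalence for misclassification
    given the code's sensitivity and specificity.
    Returns None when the corrected value falls outside [0, 1] --
    the 'invalid result' failure mode discussed in the abstract."""
    denom = sensitivity + specificity - 1.0
    if denom == 0.0:
        return None  # codes carry no information
    corrected = (observed_prev + specificity - 1.0) / denom
    if corrected < 0.0 or corrected > 1.0:
        return None  # invalid: observed prevalence inconsistent with sens/spec
    return corrected
```

With the abstract's values (true prevalence 7.4%, sensitivity 71.3%, specificity 96.2%), the implied observed prevalence round-trips back to 7.4%; an observed prevalence below 1 - specificity, however, yields a negative (invalid) estimate.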

  11. Quantitative Nuclear Medicine. Chapter 17

    Energy Technology Data Exchange (ETDEWEB)

    Ouyang, J.; El Fakhri, G. [Massachusetts General Hospital and Harvard Medical School, Boston (United States)

    2014-12-15

    Planar imaging is still used in clinical practice although tomographic imaging (single photon emission computed tomography (SPECT) and positron emission tomography (PET)) is becoming more established. In this chapter, quantitative methods for both imaging techniques are presented. Planar imaging is limited to single photon. For both SPECT and PET, the focus is on the quantitative methods that can be applied to reconstructed images.

  12. Mastering R for quantitative finance

    CERN Document Server

    Berlinger, Edina; Badics, Milán; Banai, Ádám; Daróczi, Gergely; Dömötör, Barbara; Gabler, Gergely; Havran, Dániel; Juhász, Péter; Margitai, István; Márkus, Balázs; Medvegyev, Péter; Molnár, Julia; Szucs, Balázs Árpád; Tuza, Ágnes; Vadász, Tamás; Váradi, Kata; Vidovics-Dancs, Ágnes

    2015-01-01

    This book is intended for those who want to learn how to use R's capabilities to build models in quantitative finance at a more advanced level. To keep pace with the chapters, you should be at an intermediate level in quantitative finance and have a reasonable working knowledge of R.

  13. Improved correction for the tissue fraction effect in lung PET/CT imaging

    Science.gov (United States)

    Holman, Beverley F.; Cuplov, Vesna; Millner, Lynn; Hutton, Brian F.; Maher, Toby M.; Groves, Ashley M.; Thielemans, Kris

    2015-09-01

    Recently, there has been an increased interest in imaging different pulmonary disorders using PET techniques. Previous work has shown, for static PET/CT, that air content in the lung influences reconstructed image values and that it is vital to correct for this ‘tissue fraction effect’ (TFE). In this paper, we extend this work to include the blood component and also investigate the TFE in dynamic imaging. CT imaging and PET kinetic modelling are used to determine fractional air and blood voxel volumes in six patients with idiopathic pulmonary fibrosis. These values are used to illustrate best and worst case scenarios when interpreting images without correcting for the TFE. In addition, the fractional volumes were used to determine correction factors for the SUV and the kinetic parameters. These were then applied to the patient images. The kinetic parameters K1 and Ki along with the static parameter SUV were all found to be affected by the TFE with both air and blood providing a significant contribution to the errors. Without corrections, errors range from 34-80% in the best case and 29-96% in the worst case. In the patient data, without correcting for the TFE, regions of high density (fibrosis) appeared to have a higher uptake than lower density (normal appearing tissue), however this was reversed after air and blood correction. The proposed correction methods are vital for quantitative and relative accuracy. Without these corrections, images may be misinterpreted.
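
A minimal sketch of an air-and-blood tissue-fraction correction of this general form (illustrative; the paper's actual correction factors, especially for the kinetic parameters, are more involved):

```python
def tissue_fraction_corrected_suv(suv_measured, air_fraction, blood_fraction):
    """Divide the measured voxel SUV by the tissue fraction of the voxel,
    where air_fraction and blood_fraction are the CT- and kinetic-model-
    derived fractional volumes (each in [0, 1), summing to < 1)."""
    tissue_fraction = 1.0 - air_fraction - blood_fraction
    if tissue_fraction <= 0.0:
        raise ValueError("voxel contains no tissue component")
    return suv_measured / tissue_fraction
```

For a lung voxel that is 70% air and 10% blood, the measured SUV is diluted five-fold, which is the scale of effect that can reverse the apparent uptake ranking of fibrotic versus normal-appearing tissue.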

  14. Improved correction for the tissue fraction effect in lung PET/CT imaging

    International Nuclear Information System (INIS)

    Holman, Beverley F; Cuplov, Vesna; Millner, Lynn; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris; Maher, Toby M

    2015-01-01

    Recently, there has been an increased interest in imaging different pulmonary disorders using PET techniques. Previous work has shown, for static PET/CT, that air content in the lung influences reconstructed image values and that it is vital to correct for this ‘tissue fraction effect’ (TFE). In this paper, we extend this work to include the blood component and also investigate the TFE in dynamic imaging. CT imaging and PET kinetic modelling are used to determine fractional air and blood voxel volumes in six patients with idiopathic pulmonary fibrosis. These values are used to illustrate best and worst case scenarios when interpreting images without correcting for the TFE. In addition, the fractional volumes were used to determine correction factors for the SUV and the kinetic parameters. These were then applied to the patient images. The kinetic parameters K1 and Ki along with the static parameter SUV were all found to be affected by the TFE with both air and blood providing a significant contribution to the errors. Without corrections, errors range from 34–80% in the best case and 29–96% in the worst case. In the patient data, without correcting for the TFE, regions of high density (fibrosis) appeared to have a higher uptake than lower density (normal appearing tissue), however this was reversed after air and blood correction. The proposed correction methods are vital for quantitative and relative accuracy. Without these corrections, images may be misinterpreted. (paper)

  15. Evaluation and parameterization of ATCOR3 topographic correction method for forest cover mapping in mountain areas

    Science.gov (United States)

    Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.

    2012-08-01

    A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of empirically defined model parameters. This study develops an iterative parameter-tuning method that improves the performance of topographic corrections for forest cover change detection in mountainous terrain, based on a systematic evaluation of the correction. The evaluation relied on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very weakly illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations depend on the site, sun elevation and azimuth, and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
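
One common semi-empirical formulation in this family is the C-correction, where a band-specific parameter c is exactly the kind of empirically defined quantity an iterative tuning scheme adjusts; the sketch below is illustrative, not ATCOR3's exact model:

```python
import numpy as np

def c_correction(reflectance, cos_i, sun_zenith_deg, c):
    """Semi-empirical C-correction of terrain illumination:
    rho_corr = rho * (cos(theta_z) + c) / (cos(i) + c),
    where cos_i is the local illumination-angle cosine computed from a DEM
    and the sun position, theta_z is the sun zenith angle, and c is the
    empirically tuned band parameter."""
    cos_z = np.cos(np.radians(sun_zenith_deg))
    return reflectance * (cos_z + c) / (np.asarray(cos_i, dtype=float) + c)
```

Pixels on shaded slopes (small cos_i) are brightened, sunlit slopes are dimmed, and a flat pixel (cos_i = cos_z) is left unchanged; tuning c then trades off slope matching against outlier creation in very weakly illuminated areas.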

  16. Quantitative Imaging with a Mobile Phone Microscope

    Science.gov (United States)

    Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072
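
A generic flat-field correction, of the kind used to remove vignetting and fixed-pattern gain before quantitative measurements, can be sketched as follows (illustrative; not the authors' exact pipeline):

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Flat-field correction for quantitative imaging: 'dark' is a dark
    frame (sensor offset), 'flat' an image of a uniform bright field.
    Dividing by the per-pixel relative gain removes vignetting and
    pixel-to-pixel sensitivity variation."""
    signal = np.asarray(raw, dtype=float) - dark
    flat_signal = np.asarray(flat, dtype=float) - dark
    gain = flat_signal / flat_signal.mean()   # per-pixel relative sensitivity
    return signal / np.clip(gain, 1e-6, None)
```

A uniformly illuminated scene imaged through a vignetting optic comes out flat again after correction, which is the prerequisite for comparing brightness between positions in the field of view.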

  17. Quantitative imaging with a mobile phone microscope.

    Directory of Open Access Journals (Sweden)

    Arunan Skandarajah

    Full Text Available Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone-based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications.

  18. Quantitative Analysis of Renogram

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Keun Chul [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    1969-03-15

    value are useful for the differentiation of various renal diseases, however, qualitative analysis of the renogram with one or two parameters is not accurate. 3) In bilateral non-functioning kidney groups, a positive correlation between anemia and nitrogen retention was observed, although the quantitative assessment of the degree of non-functioning was impossible.

  19. Quantitative Analysis of Renogram

    International Nuclear Information System (INIS)

    Choi, Keun Chul

    1969-01-01

    are useful for the differentiation of various renal diseases, however, qualitative analysis of the renogram with one or two parameters is not accurate. 3) In bilateral non-functioning kidney groups, a positive correlation between anemia and nitrogen retention was observed, although the quantitative assessment of the degree of non-functioning was impossible.

  20. A CT-based method for fully quantitative Tl SPECT

    International Nuclear Information System (INIS)

    Willowson, Kathy; Bailey, Dale; Baldock, Clive

    2009-01-01

    Full text: Objectives: To develop and validate a method for quantitative 201Tl SPECT data based on corrections derived from X-ray CT data, and to apply the method in the clinic for quantitative determination of recurrence of brain tumours. Method: A previously developed method for achieving quantitative SPECT with 99mTc based on corrections derived from X-ray CT data was extended to apply to 201Tl. Experimental validation was performed on a cylindrical phantom by comparing known injected activity and measured concentration to quantitative calculations. Further evaluation was performed on an RSI Striatal Brain Phantom containing three 'lesions' with activity-to-background ratios of 1:1, 1.5:1 and 2:1. The method was subsequently applied to a series of scans from patients with suspected recurrence of brain tumours (principally glioma) to determine an SUV-like measure (Standardised Uptake Value). Results: The total activity and concentration in the phantom were calculated to within 3% and 1% of the true values, respectively. The calculated values for the concentration of activity in the background and corresponding lesions of the brain phantom (in increasing ratios) were found to be within 2%, 10%, 1% and 2%, respectively, of the true concentrations. Patient studies showed that an initial SUV greater than 1.5 corresponded to a 56% mortality rate in the first 12 months, as opposed to a 14% mortality rate for those with an SUV less than 1.5. Conclusion: The quantitative technique produces accurate results for the radionuclide 201Tl. Initial investigation in clinical brain SPECT suggests correlation between quantitative uptake and survival.
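
The "SUV-like measure" can be sketched as the conventional body-weight-normalised uptake value (assumed definition; the paper's exact normalisation may differ):

```python
def suv(concentration_bq_per_ml, injected_activity_mbq, body_weight_kg):
    """Standardised Uptake Value: measured tissue activity concentration
    divided by injected activity per gram of body weight (assuming tissue
    density of about 1 g/ml, so 1 ml corresponds to 1 g)."""
    dose_per_gram_bq = injected_activity_mbq * 1e6 / (body_weight_kg * 1000.0)
    return concentration_bq_per_ml / dose_per_gram_bq
```

For instance, a lesion at 3000 Bq/ml in a 50 kg patient injected with 100 MBq has an SUV of 1.5, the threshold the study associates with markedly higher 12-month mortality.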

  1. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    International Nuclear Information System (INIS)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib

    2008-01-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium with high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in
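
The piecewise calibration from CT numbers to linear attenuation coefficients at 511 keV is commonly implemented as a bilinear curve; the breakpoint and coefficients below are generic approximate values, not this paper's calibration:

```python
def hu_to_mu_511(hu):
    """Illustrative bilinear (piecewise) conversion from CT number (HU) to
    linear attenuation coefficient (cm^-1) at 511 keV. Breakpoint at 0 HU;
    water and dense-bone coefficients are generic approximate values."""
    mu_water = 0.096   # water at 511 keV, cm^-1 (approx.)
    mu_bone = 0.170    # dense bone at ~1000 HU, cm^-1 (approx.)
    if hu <= 0:
        # air (-1000 HU) to water (0 HU): scale linearly down to zero
        return max(0.0, mu_water * (1.0 + hu / 1000.0))
    # above water, use a shallower bone-like slope: attenuation at 511 keV
    # is much less sensitive to calcium than at CT energies -- which is why
    # contrast medium misclassified as bone overestimates the attenuation map
    return mu_water + hu * (mu_bone - mu_water) / 1000.0
```

This two-segment shape is also why the SCC step matters: substituting the effective bone-equivalent CT number for contrast-enhanced pixels keeps them on the correct part of the curve.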

  2. Remote Sensing of Tropical Ecosystems: Atmospheric Correction and Cloud Masking Matter

    Science.gov (United States)

    Hilker, Thomas; Lyapustin, Alexei I.; Tucker, Compton J.; Sellers, Piers J.; Hall, Forrest G.; Wang, Yujie

    2012-01-01

    Tropical rainforests are significant contributors to the global cycles of energy, water and carbon. As a result, monitoring of the vegetation status over regions such as Amazonia has been a long standing interest of Earth scientists trying to determine the effect of climate change and anthropogenic disturbance on the tropical ecosystems and its feedback on the Earth's climate. Satellite-based remote sensing is the only practical approach for observing the vegetation dynamics of regions like the Amazon over useful spatial and temporal scales, but recent years have seen much controversy over satellite-derived vegetation states in Amazônia, with studies predicting opposite feedbacks depending on data processing technique and interpretation. Recent results suggest that some of this uncertainty could stem from a lack of quality in atmospheric correction and cloud screening. In this paper, we assess these uncertainties by comparing the current standard surface reflectance products (MYD09, MYD09GA) and derived composites (MYD09A1, MCD43A4 and MYD13A2 - Vegetation Index) from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Aqua satellite to results obtained from the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm. MAIAC uses a new cloud screening technique, and novel aerosol retrieval and atmospheric correction procedures which are based on time-series and spatial analyses. Our results show considerable improvements of MAIAC processed surface reflectance compared to MYD09/MYD13 with noise levels reduced by a factor of up to 10. Uncertainties in the current MODIS surface reflectance product were mainly due to residual cloud and aerosol contamination which affected the Normalized Difference Vegetation Index (NDVI): During the wet season, with cloud cover ranging between 90 percent and 99 percent, conventionally processed NDVI was significantly depressed due to undetected clouds. A smaller reduction in NDVI due to increased
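
The NDVI depression caused by undetected cloud follows directly from the index definition; the reflectance values below are assumed purely for illustration:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from surface reflectances."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

# Illustrative (assumed) reflectances: residual cloud raises the red band
# far more than it raises NIR over dense forest, so NDVI drops.
clear = ndvi(0.05, 0.45)    # dense forest pixel, clear sky
cloudy = ndvi(0.30, 0.50)   # same pixel with undetected cloud
```

This is the mechanism behind the wet-season NDVI depression reported for the conventionally processed product: the vegetation has not changed, only the residual atmospheric contamination has.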

  3. High order corrections to the renormalon

    International Nuclear Information System (INIS)

    Faleev, S.V.

    1997-01-01

    High order corrections to the renormalon are considered. Each new type of insertion into the renormalon chain of graphs generates a correction to the asymptotics of perturbation theory of order ~1. However, this series of corrections to the asymptotics is not itself asymptotic (i.e. the mth correction does not grow like m!). The summation of these corrections for the UV renormalon may change the asymptotics by a factor N^δ. For the traditional IR renormalon the mth correction diverges like (-2)^m. However, this divergence has no infrared origin and may be removed by a proper redefinition of the IR renormalon. On the other hand, for IR renormalons in hadronic event shapes one should naturally expect these multiloop contributions to decrease like (-2)^-m. Some problems expected upon reaching the best accuracy of perturbative QCD are also discussed. (orig.)

  4. The satellite-based remote sensing of particulate matter (PM) in support to urban air quality: PM variability and hot spots within the Cordoba city (Argentina) as revealed by the high-resolution MAIAC-algorithm retrievals applied to a ten-years dataset (2

    Science.gov (United States)

    Della Ceca, Lara Sofia; Carreras, Hebe A.; Lyapustin, Alexei I.; Barnaba, Francesca

    2016-04-01

    Particulate matter (PM) is one of the pollutants most harmful to public health and the environment [1]. In developed countries, specific air-quality legislation establishes limit values for PM metrics (e.g., PM10, PM2.5) to protect citizens' health (e.g., European Commission Directive 2008/50, US Clean Air Act). Extensive PM measuring networks therefore exist in these countries to comply with the legislation. In less developed countries, air quality monitoring networks are still lacking, and satellite-based datasets could represent a valid alternative to fill observational gaps. The main PM (or aerosol) parameter retrieved from satellite is the 'aerosol optical depth' (AOD), an optical parameter quantifying the aerosol load in the whole atmospheric column. Datasets from the MODIS sensors on board the NASA Terra and Aqua spacecraft are among the longest records of AOD from space. However, although extremely useful in regional and global studies, the standard 10 km-resolution MODIS AOD product is not suitable for use at the urban scale. Recently, a new algorithm called Multi-Angle Implementation of Atmospheric Correction (MAIAC) was developed for MODIS, providing AOD at 1 km resolution [2]. In this work, the MAIAC AOD retrievals over the decade 2003-2013 were employed to investigate the spatiotemporal variation of atmospheric aerosols over the Argentinean city of Cordoba and its surroundings, an area where only a very scarce dataset of in situ PM data is available. The MAIAC retrievals over the city were first validated against a 'ground truth' AOD dataset from the Cordoba sunphotometer operating within the global AERONET network [3]. This validation showed the good performance of the MAIAC algorithm in the area. The satellite MAIAC AOD dataset was therefore employed to investigate the 10-year trend as well as seasonal and monthly patterns of particulate matter in the Cordoba city.
The first showed a marked increase of AOD over time, particularly evident in

  5. Correction magnet power supplies for APS machine

    International Nuclear Information System (INIS)

    Kang, Y.G.

    1991-01-01

    The Advanced Photon Source machine requires a number of correction magnets; five kinds for the storage ring, two for the injector synchrotron, and two for the positron accumulator ring. Three types of bipolar power supply will be used for all the correction magnets. This paper describes the design aspects and considerations for correction magnet power supplies for the APS machine. 3 refs., 3 figs., 1 tab

  6. Quantum corrections to Schwarzschild black hole

    Energy Technology Data Exchange (ETDEWEB)

    Calmet, Xavier; El-Menoufi, Basem Kamal [University of Sussex, Department of Physics and Astronomy, Brighton (United Kingdom)

    2017-04-15

    Using effective field theory techniques, we compute quantum corrections to spherically symmetric solutions of Einstein's gravity and focus in particular on the Schwarzschild black hole. Quantum modifications are covariantly encoded in a non-local effective action. We work to quadratic order in curvatures simultaneously taking local and non-local corrections into account. Looking for solutions perturbatively close to that of classical general relativity, we find that an eternal Schwarzschild black hole remains a solution and receives no quantum corrections up to this order in the curvature expansion. In contrast, the field of a massive star receives corrections which are fully determined by the effective field theory. (orig.)

  7. Towards Compensation Correctness in Interactive Systems

    Science.gov (United States)

    Vaz, Cátia; Ferreira, Carla

    One fundamental idea of service-oriented computing is that applications should be developed by composing already available services. Due to the long-running nature of service interactions, a main challenge in service composition is ensuring correctness of failure recovery. In this paper, we use a process calculus suitable for modelling long-running transactions with a recovery mechanism based on compensations. Within this setting, we discuss and formally state correctness criteria for compositions of compensable processes, assuming that each process is correct with respect to failure recovery. Under our theory, we formally interpret self-healing compositions, which can detect and recover from failures, as correct compositions of compensable processes.

  8. Class action litigation in correctional psychiatry.

    Science.gov (United States)

    Metzner, Jeffrey L

    2002-01-01

    Class action litigation has been instrumental in jail and prison reform during the past two decades. Correctional mental health systems have significantly benefited from such litigation. Forensic psychiatrists have been crucial in the litigation process and the subsequent evolution of correctional mental health care systems. This article summarizes information concerning basic demographics of correctional populations and costs of correctional health care and provides a brief history of such litigation. The role of psychiatric experts, with particular reference to standards of care, is described. Specifically discussed are issues relevant to suicide prevention, the prevalence of mentally ill inmates in supermax prisons, and discharge planning.

  9. Correctional Practitioners on Reentry: A Missed Perspective

    Directory of Open Access Journals (Sweden)

    Elaine Gunnison

    2015-06-01

    Full Text Available Much of the literature on reentry of formerly incarcerated individuals revolves around discussions of failures they incur during reintegration or the identification of needs and challenges that they have during reentry from the perspective of community corrections officers. The present research fills a gap in the reentry literature by examining the needs and challenges of formerly incarcerated individuals and what makes for reentry success from the perspective of correctional practitioners (i.e., wardens and non-wardens. The views of correctional practitioners are important to understand the level of organizational commitment to reentry and the ways in which social distance between correctional professionals and their clients may impact reentry success. This research reports on the results from an email survey distributed to a national sample of correctional officials listed in the American Correctional Association, 2012 Directory. Specifically, correctional officials were asked to report on needs and challenges facing formerly incarcerated individuals, define success, identify factors related to successful reentry, recount success stories, and report what could be done to assist them in successful outcomes. Housing and employment were raised by wardens and corrections officials as important needs for successful reentry. Corrections officials adopted organizational and systems perspectives in their responses and had differing opinions about social distance. Policy implications are presented.

  10. Evaluation of scatter correction using a single isotope for simultaneous emission and transmission data

    International Nuclear Information System (INIS)

    Yang, J.; Kuikka, J.T.; Vanninen, E.; Laensimies, E.; Kauppinen, T.; Patomaeki, L.

    1999-01-01

    Photon scatter is one of the most important factors degrading the quantitative accuracy of SPECT images, and many scatter correction methods have been proposed, including a single-isotope method previously proposed by us. Aim: We evaluate this scatter correction method, which improves image quality by acquiring emission and transmission data simultaneously in a single-isotope scan. Method: To evaluate the proposed scatter correction method, a contrast and linearity phantom was studied. Four female patients with fibromyalgia (FM) syndrome and four with chronic back pain (BP) were imaged. Grey-to-cerebellum (G/C) and grey-to-white matter (G/W) ratios were determined by one skilled operator for 12 regions of interest (ROIs) in each subject. Results: The linearity of activity response was improved after the scatter correction (r=0.999). The y-intercept value of the regression line was 0.036 (p

  11. Mixing quantitative with qualitative methods

    DEFF Research Database (Denmark)

    Morrison, Ann; Viller, Stephen; Heck, Tamara

    2017-01-01

    with or are considering, researching, or working with both quantitative and qualitative evaluation methods (in academia or industry), join us in this workshop. In particular, we look at adding quantitative to qualitative methods to build a whole picture of user experience. We see a need to discuss both quantitative...... and qualitative research because there is often a perceived lack of understanding of the rigor involved in each. The workshop will result in a White Paper on the latest developments in this field, within Australia and comparative with international work. We anticipate sharing submissions and workshop outcomes...

  12. Understanding quantitative research: part 1.

    Science.gov (United States)

    Hoe, Juanita; Hoare, Zoë

    This article, which is the first in a two-part series, provides an introduction to understanding quantitative research, basic statistics and terminology used in research articles. Critical appraisal of research articles is essential to ensure that nurses remain up to date with evidence-based practice to provide consistent and high-quality nursing care. This article focuses on developing critical appraisal skills and understanding the use and implications of different quantitative approaches to research. Part two of this article will focus on explaining common statistical terms and the presentation of statistical data in quantitative research.

  13. Quantitative EPR A Practitioners Guide

    CERN Document Server

    Eaton, Gareth R; Barr, David P; Weber, Ralph T

    2010-01-01

    This is the first comprehensive yet practical guide for people who perform quantitative EPR measurements. No existing book provides this level of practical guidance to ensure the successful use of EPR. There is a growing need in both industrial and academic research to provide meaningful and accurate quantitative EPR results. This text discusses the various sample, instrument and software related aspects required for EPR quantitation. Specific topics include: choosing a reference standard, resonator considerations (Q, B1, Bm), power saturation characteristics, sample positioning, and finally, putting all the factors together to obtain an accurate spin concentration of a sample.

  14. Quantitative interpretation of autoradiogram imaging

    International Nuclear Information System (INIS)

    Golenishchev, I.A.; Isaev, I.G.; Lyubakov, V.N.; Majorov, A.N.; Nosukhin, L.S.; Yampol'skij, V.V.

    1983-01-01

    Autoradiograms were evaluated using a television analyser. At constant optical density, the fluctuations of the video-signal amplitude vary monotonically. An equation is given for correcting the video-signal amplitude values. Because of boundary effects, amplitude compensation using this equation can be achieved over only about 70% of the area; these effects may be removed by introducing a matrix of corrections. (E.S.)
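    The correction-matrix idea in the abstract above can be sketched as a per-pixel multiplicative compensation. This is a minimal illustration, not the authors' actual procedure: the function name and the amplitude and factor values are hypothetical, with boundary pixels given larger factors to restore attenuated amplitudes.

    ```python
    def apply_correction(image, factors):
        """Multiply each video-signal amplitude by its local correction factor."""
        return [[a * f for a, f in zip(img_row, fac_row)]
                for img_row, fac_row in zip(image, factors)]

    # Illustrative 2x2 amplitude image; the lower-right pixel sits at a boundary
    # and is attenuated most strongly.
    image = [[100.0, 95.0],
             [ 90.0, 80.0]]

    # Matrix of corrections: boundary pixels receive larger factors.
    factors = [[1.00, 1.05],
               [1.10, 1.25]]

    corrected = apply_correction(image, factors)
    ```

    After correction all four pixels are restored to roughly the same amplitude, which is the goal when the underlying optical density is uniform.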

  15. MR Fingerprinting for Rapid Quantitative Abdominal Imaging.

    Science.gov (United States)

    Chen, Yong; Jiang, Yun; Pahwa, Shivani; Ma, Dan; Lu, Lan; Twieg, Michael D; Wright, Katherine L; Seiberlich, Nicole; Griswold, Mark A; Gulani, Vikas

    2016-04-01

    To develop a magnetic resonance (MR) "fingerprinting" technique for quantitative abdominal imaging. This HIPAA-compliant study had institutional review board approval, and informed consent was obtained from all subjects. To achieve accurate quantification in the presence of marked B0 and B1 field inhomogeneities, the MR fingerprinting framework was extended by using a two-dimensional fast imaging with steady-state free precession, or FISP, acquisition and a Bloch-Siegert B1 mapping method. The accuracy of the proposed technique was validated by using agarose phantoms. Quantitative measurements were performed in eight asymptomatic subjects and in six patients with 20 focal liver lesions. A two-tailed Student t test was used to compare the T1 and T2 results in metastatic adenocarcinoma with those in surrounding liver parenchyma and healthy subjects. Phantom experiments showed good agreement with standard methods in T1 and T2 after B1 correction. In vivo studies demonstrated that quantitative T1, T2, and B1 maps can be acquired within a breath hold of approximately 19 seconds. T1 and T2 measurements were compatible with those in the literature. Representative values included the following: liver, 745 msec ± 65 (standard deviation) and 31 msec ± 6; renal medulla, 1702 msec ± 205 and 60 msec ± 21; renal cortex, 1314 msec ± 77 and 47 msec ± 10; spleen, 1232 msec ± 92 and 60 msec ± 19; skeletal muscle, 1100 msec ± 59 and 44 msec ± 9; and fat, 253 msec ± 42 and 77 msec ± 16, respectively. T1 and T2 in metastatic adenocarcinoma were 1673 msec ± 331 and 43 msec ± 13, respectively, significantly different from surrounding liver parenchyma relaxation times of 840 msec ± 113 and 28 msec ± 3 (P < .0001 and P < .01) and those in hepatic parenchyma in healthy volunteers (745 msec ± 65 and 31 msec ± 6, P < .0001 and P = .021, respectively). A rapid technique for quantitative abdominal imaging was developed that allows simultaneous quantification of multiple tissue
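    The lesion-versus-parenchyma comparison described above can be sketched with a two-sample t statistic. This is a toy example using Welch's variant (the study reports a two-tailed Student t test) and synthetic T1 values loosely patterned on the reported ranges, not patient data; the function name and sample values are illustrative.

    ```python
    import math
    from statistics import mean, variance

    def welch_t(a, b):
        """Welch's t statistic for two independent samples (unequal variances)."""
        va, vb = variance(a) / len(a), variance(b) / len(b)
        return (mean(a) - mean(b)) / math.sqrt(va + vb)

    # Synthetic T1 values (msec), roughly matching the reported means of
    # 1673 msec (metastatic adenocarcinoma) and 840 msec (liver parenchyma).
    lesion_t1     = [1620, 1700, 1750, 1640, 1680, 1710]
    parenchyma_t1 = [820, 860, 840, 830, 855, 845]

    t = welch_t(lesion_t1, parenchyma_t1)
    ```

    With group means this far apart relative to their spread, the t statistic is large and the two-tailed p-value correspondingly small, consistent with the P < .0001 reported in the abstract.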

  16. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Boehlecke, Robert

    2004-01-01

    The six bunkers included in CAU 204 were primarily used to monitor atmospheric testing or store munitions. The 'Corrective Action Investigation Plan (CAIP) for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada' (NNSA/NV, 2002a) provides information relating to the history, planning, and scope of the investigation; therefore, it will not be repeated in this CADD. This CADD identifies potential corrective action alternatives and provides a rationale for the selection of a recommended corrective action alternative for each CAS within CAU 204. The evaluation of corrective action alternatives is based on process knowledge and the results of investigative activities conducted in accordance with the CAIP (NNSA/NV, 2002a) that was approved prior to the start of the Corrective Action Investigation (CAI). Record of Technical Change (ROTC) No. 1 to the CAIP (approval pending) documents changes to the preliminary action levels (PALs) agreed to by the Nevada Division of Environmental Protection (NDEP) and DOE, National Nuclear Security Administration Nevada Site Office (NNSA/NSO). This ROTC specifically discusses the radiological PALs and their application to the findings of the CAU 204 corrective action investigation. The scope of this CADD consists of the following: (1) Develop corrective action objectives; (2) Identify corrective action alternative screening criteria; (3) Develop corrective action alternatives; (4) Perform detailed and comparative evaluations of corrective action alternatives in relation to corrective action objectives and screening criteria; and (5) Recommend and justify a preferred corrective action alternative for each CAS within CAU 204

  17. 7 CFR 1730.25 - Corrective action.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Corrective action. 1730.25 Section 1730.25... AGRICULTURE ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Operations and Maintenance Requirements § 1730.25 Corrective action. (a) For any items on the RUS Form 300 rated unsatisfactory (i.e., 0 or 1) by the borrower...

  18. Fluorescence correction in electron probe microanalysis

    International Nuclear Information System (INIS)

    Castellano, Gustavo; Riveros, J.A.

    1987-01-01

    In this work, several expressions for characteristic fluorescence corrections are computed, for a compilation of experimental determinations on standard samples. Since this correction remains small, the performance of the different models is nearly the same; this fact suggests the use of the simplest available expression. (Author) [es]

  19. 9 CFR 417.3 - Corrective actions.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Corrective actions. 417.3 Section 417.3 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE... ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.3 Corrective actions. (a) The written HACCP plan...

  20. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step
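    The figure of merit discussed above can be made concrete with a classical analogue: the probability that an n-bit repetition code, decoded by majority vote, survives a bit-flip channel. This is a minimal sketch of the kind of quantity such an optimization would improve, not the authors' iterative algorithm or a quantum channel-fidelity computation; the function name is hypothetical.

    ```python
    import math

    def majority_success(p, n=3):
        """Probability that majority-vote decoding of an n-bit repetition code
        recovers the input through a bit-flip channel with flip probability p."""
        assert n % 2 == 1, "majority vote needs an odd code length"
        # Decoding succeeds when at most (n - 1) / 2 bits flip.
        return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1))

    # With p = 0.1, the 3-bit code recovers with probability 0.972,
    # versus 0.9 for an unencoded bit.
    p3 = majority_success(0.1, 3)
    ```

    An iterative code-optimization scheme would search over encoding and decoding operations to push such a success figure as high as the channel allows, without requiring that every error syndrome be corrected exactly.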