Multiangle Implementation of Atmospheric Correction (MAIAC): 2. Aerosol Algorithm
Lyapustin, A.; Wang, Y.; Laszlo, I.; Kahn, R.; Korkin, S.; Remer, L.; Levy, R.; Reid, J. S.
2011-01-01
An aerosol component of a new multiangle implementation of atmospheric correction (MAIAC) algorithm is presented. MAIAC is a generic algorithm developed for the Moderate Resolution Imaging Spectroradiometer (MODIS), which performs aerosol retrievals and atmospheric correction over both dark vegetated surfaces and bright deserts based on time series analysis and image-based processing. The MAIAC look-up tables explicitly include surface bidirectional reflectance. The aerosol algorithm derives the spectral regression coefficient (SRC) relating surface bidirectional reflectance in the blue (0.47 micron) and shortwave infrared (2.1 micron) bands; this quantity is prescribed in the MODIS operational Dark Target algorithm based on a parameterized formula. The MAIAC aerosol products include aerosol optical thickness and a fine-mode fraction at a resolution of 1 km. This high resolution, required in many applications such as air quality, brings new information about aerosol sources and, potentially, their strength. AERONET validation shows that the MAIAC and MOD04 algorithms have similar accuracy over dark and vegetated surfaces and that MAIAC generally improves accuracy over brighter surfaces due to the SRC retrieval and explicit bidirectional reflectance factor characterization, as demonstrated for several U.S. West Coast AERONET sites. Due to its generic nature and developed angular correction, MAIAC performs aerosol retrievals over bright deserts, as demonstrated for the Solar Village Aerosol Robotic Network (AERONET) site in Saudi Arabia.
Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects
Gordon, Howard R.; Castano, Diego J.
1987-01-01
Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.
Directory of Open Access Journals (Sweden)
Jesús A. Prieto-Amparan
2018-02-01
A key step in the processing of satellite imagery is the radiometric correction of images to account for reflectance that water vapor, atmospheric dust, and other atmospheric constituents add to the images, causing imprecision in variables of interest estimated at the earth's surface level. That issue is important when performing spatiotemporal analyses to determine ecosystems' productivity. In this study, three correction methods were applied to satellite images for the period 2010–2014: Atmospheric Correction for Flat Terrain 2 (ATCOR2), Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes (FLAASH), and Dark Object Subtraction 1 (DOS1). The images included 12 sub-scenes from the Landsat Thematic Mapper (TM) and the Operational Land Imager (OLI) sensors. The images corresponded to three Permanent Monitoring Sites (PMS) of grasslands, 'Teseachi', 'Eden', and 'El Sitio', located in the state of Chihuahua, Mexico. After the corrections were applied to the images, they were evaluated in terms of their precision for biomass estimation. For that purpose, biomass production was measured during the study period at the three PMS to calibrate production models developed with simple and multiple linear regression (SLR and MLR) techniques. When the estimations were made with MLR, DOS1 obtained an R2 of 0.97 (p < 0.05) for 2012 and values greater than 0.70 (p < 0.05) during 2013–2014. The rest of the algorithms did not show significant results, and DOS1, the simplest algorithm, proved to be the best biomass estimator. Thus, in multitemporal analyses of grassland based on spectral information, it is not necessary to apply complex correction procedures. The maps of biomass production, elaborated from images corrected with DOS1, can be used as a reference point for assessing grassland condition, as well as for determining grazing capacity and thus the potential animal production in such ecosystems.
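The DOS1 method favored above rests on a single assumption: the darkest pixel in a band should be nearly non-reflective, so whatever signal it carries is attributed to atmospheric path radiance and subtracted from every pixel. A minimal illustrative sketch of that idea (the function names and DN values below are hypothetical, not taken from the study):

```python
def darkest_object(dn_band):
    """DOS1 estimates the per-band haze (path radiance) signal from the
    darkest pixel in the band."""
    return min(dn_band)

def dos1_correct(dn_band, haze_dn):
    """Subtract the dark-object DN from every pixel, clamping at zero so
    corrected values stay physical."""
    return [max(dn - haze_dn, 0) for dn in dn_band]

band = [12, 57, 103, 40, 12, 88]      # hypothetical raw DN values
haze = darkest_object(band)           # assumed path-radiance contribution
corrected = dos1_correct(band, haze)
```

Its simplicity — no radiative transfer model, no atmospheric inputs — is exactly why the study's finding that it matched more complex codes for biomass work is practically useful.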
Atmospheric correction of APEX hyperspectral data
Directory of Open Access Journals (Sweden)
Sterckx Sindy
2016-03-01
Atmospheric correction plays a crucial role among the processing steps applied to remotely sensed hyperspectral data. It comprises the group of procedures needed to remove atmospheric effects from observed spectra, i.e., the transformation from at-sensor radiances to at-surface radiances or reflectances. In this paper we present the different steps in the atmospheric correction process for APEX hyperspectral data as applied by the Central Data Processing Center (CDPC) at the Flemish Institute for Technological Research (VITO, Mol, Belgium). The MODerate resolution atmospheric TRANsmission program (MODTRAN) is used to determine the source of radiation and to apply the actual atmospheric correction. As part of the overall correction process, supporting algorithms are provided to derive MODTRAN configuration parameters and to account for specific effects, e.g. correction for adjacency effects, haze and shadow correction, and topographic BRDF correction. The methods and theory underlying these corrections and an example application are presented.
Directory of Open Access Journals (Sweden)
Pablito M. López-Serrano
2016-04-01
Solar radiation is affected by absorption and emission phenomena during its downward trajectory from the Sun to the Earth's surface and during the upward trajectory detected by satellite sensors. This leads to distortion of the ground radiometric properties (reflectance) recorded by satellite images, used in this study to estimate aboveground forest biomass (AGB). Atmospherically corrected remote sensing data can be used to estimate AGB on a global scale and with moderate effort. The objective of this study was to evaluate four atmospheric correction algorithms (for surface reflectance), ATCOR2 (Atmospheric Correction for Flat Terrain), COST (Cosine of the Sun Zenith Angle), FLAASH (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes), and 6S (Second Simulation of Satellite Signal in the Solar Spectrum), and one radiometric correction algorithm (for reflectance at the sensor), ToA (Apparent Reflectance at the Top of Atmosphere), for estimating AGB in temperate forest in the northeast of the state of Durango, Mexico. The AGB was estimated from Landsat 5 TM imagery and ancillary information from a digital elevation model (DEM) using the non-parametric multivariate adaptive regression splines (MARS) technique. Field reference data for model training were collected by systematic sampling of 99 permanent forest growth and soil research sites (SPIFyS) established during the winter of 2011. The following predictor variables were identified in the MARS model: Band 7, Band 5, slope (β), Wetness Index (WI), NDVI, and MSAVI2. After cross-validation, 6S was found to be the optimal model for estimating AGB (R2 = 0.71 and RMSE = 33.5 Mg·ha−1; 37.61% of the average stand biomass). We conclude that atmospheric and radiometric correction of satellite images can be used along with non-parametric techniques to estimate AGB with acceptable accuracy.
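The ToA (apparent top-of-atmosphere reflectance) baseline evaluated above is purely radiometric: DN values are converted to at-sensor radiance with the band's calibration gain and offset, then normalized by the incoming solar irradiance. A hedged sketch of that standard conversion (the gain, offset, and ESUN values below are placeholders, not actual Landsat 5 TM calibration constants):

```python
import math

def dn_to_radiance(dn, gain, offset):
    """Sensor calibration: raw digital number -> at-sensor spectral radiance."""
    return gain * dn + offset

def toa_reflectance(radiance, esun, d_au, sun_zenith_deg):
    """Apparent TOA reflectance: radiance normalized by the band's
    exo-atmospheric solar irradiance (ESUN), the Earth-Sun distance in AU,
    and the cosine of the solar zenith angle."""
    return (math.pi * radiance * d_au ** 2) / (
        esun * math.cos(math.radians(sun_zenith_deg)))

L = dn_to_radiance(100, 0.1, 1.0)  # placeholder calibration values
rho = toa_reflectance(L, esun=1000.0, d_au=1.0, sun_zenith_deg=30.0)
```

Unlike 6S or FLAASH, this conversion removes no atmospheric signal at all, which is why it serves as the uncorrected reference in comparisons like the one above.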
Atmospheric correction over coastal waters using multilayer neural networks
Fan, Y.; Li, W.; Charles, G.; Jamet, C.; Zibordi, G.; Schroeder, T.; Stamnes, K. H.
2017-12-01
Standard atmospheric correction (AC) algorithms work well in open ocean areas where the water inherent optical properties (IOPs) are correlated with pigmented particles. However, the IOPs of turbid coastal waters may vary independently with pigmented particles, suspended inorganic particles, and colored dissolved organic matter (CDOM). In turbid coastal waters, standard AC algorithms often exhibit large inaccuracies that may lead to negative water-leaving radiances (Lw) or remote sensing reflectances (Rrs). We introduce a new atmospheric correction algorithm for coastal waters based on a multilayer neural network (MLNN) machine learning method. We use a coupled atmosphere-ocean radiative transfer model to simulate the Rayleigh-corrected radiance (Lrc) at the top of the atmosphere (TOA) and the Rrs just above the surface simultaneously, and train an MLNN to derive the aerosol optical depth (AOD) and Rrs directly from the TOA Lrc. The SeaDAS NIR algorithm, the SeaDAS NIR/SWIR algorithm, and the MODIS version of the Case 2 regional water - CoastColour (C2RCC) algorithm are included in the comparison with AERONET-OC measurements. The results show that the MLNN algorithm significantly improves the retrieval of normalized Lw in the blue bands (412 nm and 443 nm) and yields minor improvements in the green and red bands. These results indicate that the MLNN algorithm is suitable for application in turbid coastal waters. Application of the MLNN algorithm to MODIS Aqua images in several coastal areas also shows that it is robust and resilient to contamination due to sunglint or adjacency effects of land and cloud edges. The MLNN algorithm is very fast once the neural network has been properly trained and is therefore suitable for operational use. A significant advantage of the MLNN algorithm is that it does not need SWIR bands, which implies a significant cost reduction for dedicated OC missions. A recent effort has been made to extend the MLNN AC algorithm to extreme atmospheric conditions.
Coastal Zone Color Scanner atmospheric correction - Influence of El Chichon
Gordon, Howard R.; Castano, Diego J.
1988-01-01
The addition of an El Chichon-like aerosol layer in the stratosphere is shown to have very little effect on the basic CZCS atmospheric correction algorithm. The additional stratospheric aerosol is found to increase the total radiance exiting the atmosphere, thereby increasing the probability that the sensor will saturate. It is suggested that in the absence of saturation the correction algorithm should perform as well as in the absence of the stratospheric layer.
Applicability of Current Atmospheric Correction Techniques in the Red Sea
Tiwari, Surya Prakash; Ouhssain, Mustapha; Jones, Burton
2016-10-26
Much of the Red Sea is considered a typical oligotrophic sea with very low chlorophyll-a concentrations, and few existing studies describe the variability of phytoplankton biomass there. This study evaluates the chlorophyll-a values computed with different chlorophyll algorithms (e.g., Chl_OCI, Chl_Carder, Chl_GSM, and Chl_GIOP) using radiances derived from two different atmospheric correction algorithms (the NASA standard algorithm and that of Singh and Shanmugam (2014)). The resulting satellite-derived chlorophyll-a concentrations are compared with in situ chlorophyll values measured using High-Performance Liquid Chromatography (HPLC). Statistical analyses are used to assess the performance of the algorithms against the in situ measurements obtained in the Red Sea, to evaluate the approach to atmospheric correction and algorithm parameterization.
International Nuclear Information System (INIS)
Ji Zhilong; Ma Yuanwei; Wang Dezhong
2014-01-01
Background: In atmospheric diffusion models of radioactive nuclides, the empirical dispersion coefficients were deduced under particular experimental conditions, whose difference from nuclear accident conditions is a source of deviation. A better estimate of a radioactive nuclide's actual dispersion process can be obtained by correcting the dispersion coefficients with observation data, and the Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the influence of the fitness function on the correction procedure and on the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of four fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes observation error into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: The results show that, to improve a dispersion model's forecast ability using GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
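The conclusion above, that observations should be weighted in the fitness function according to their errors, corresponds to a chi-square-style fitness. A hypothetical sketch of such a function (the names and numbers are illustrative, not from the paper):

```python
def weighted_fitness(predicted, observed, sigmas):
    """Negative chi-square fitness for a GA (higher is fitter): each squared
    residual is divided by the observation's error sigma, so noisy
    measurements pull the dispersion-coefficient correction less."""
    return -sum(((p - o) / s) ** 2
                for p, o, s in zip(predicted, observed, sigmas))
```

A GA would then evolve candidate dispersion coefficients, score each candidate's model predictions against the observations with this function, and keep the fittest; doubling an observation's sigma quarters its contribution to the score.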
Solving for the Surface: An Automated Approach to THEMIS Atmospheric Correction
Ryan, A. J.; Salvatore, M. R.; Smith, R.; Edwards, C. S.; Christensen, P. R.
2013-12-01
Here we present the initial results of an automated atmospheric correction algorithm for the Thermal Emission Imaging System (THEMIS) instrument, whereby high spectral resolution Thermal Emission Spectrometer (TES) data are queried to generate numerous atmospheric opacity values for each THEMIS infrared image. While the pioneering methods of Bandfield et al. [2004] also used TES spectra to atmospherically correct THEMIS data, the algorithm presented here is a significant improvement because of the reduced dependency on user-defined inputs for individual images. Additionally, this technique is particularly useful for correcting THEMIS images that have captured a range of atmospheric conditions and/or surface elevations, issues that have been difficult to correct for using previous techniques. Thermal infrared observations of the Martian surface can be used to determine the spatial distribution and relative abundance of many common rock-forming minerals. This information is essential to understanding the planet's geologic and climatic history. However, the Martian atmosphere also has absorptions in the thermal infrared which complicate the interpretation of infrared measurements obtained from orbit. TES has sufficient spectral resolution (143 bands at 10 cm-1 sampling) to linearly unmix and remove atmospheric spectral end-members from the acquired spectra. THEMIS has the benefit of higher spatial resolution (~100 m/pixel vs. 3x5 km/TES-pixel) but has lower spectral resolution (8 surface sensitive spectral bands). As such, it is not possible to isolate the surface component by unmixing the atmospheric contribution from the THEMIS spectra, as is done with TES. Bandfield et al. [2004] developed a technique using atmospherically corrected TES spectra as tie-points for constant radiance offset correction and surface emissivity retrieval. This technique is the primary method used to correct THEMIS but is highly susceptible to inconsistent results if great care in the
Atmospheric Correction Inter-Comparison Exercise
Directory of Open Access Journals (Sweden)
Georgia Doxani
2018-02-01
The Atmospheric Correction Inter-comparison eXercise (ACIX) is an international initiative that aims to analyse the Surface Reflectance (SR) products of various state-of-the-art atmospheric correction (AC) processors. Aerosol Optical Thickness (AOT) and Water Vapour (WV) are also examined in ACIX as additional outputs of AC processing. In this paper, the general ACIX framework is discussed; special mention is made of the motivation for initiating the experiment, the inter-comparison protocol, and the principal results. ACIX is free and open, and every developer was welcome to participate. Eventually, 12 participants applied their approaches to various Landsat-8 and Sentinel-2 image datasets acquired over sites around the world. The current results diverge depending on the sensors, products, and sites, indicating the processors' strengths and weaknesses. Indeed, this first implementation of processor inter-comparison proved a good lesson for the developers, revealing the advantages and limitations of their approaches. Various algorithm improvements are expected, if not already implemented, and the enhanced performances are yet to be assessed in future ACIX experiments.
Wang, Menghua; Shi, Wei; Jiang, Lide
2012-01-16
A regional near-infrared (NIR) ocean normalized water-leaving radiance (nL(w)(λ)) model is proposed for atmospheric correction for ocean color data processing in the western Pacific region, including the Bohai Sea, Yellow Sea, and East China Sea. Our motivation for this work is to derive ocean color products in the highly turbid western Pacific region using the Geostationary Ocean Color Imager (GOCI) onboard the South Korean Communication, Ocean, and Meteorological Satellite (COMS). GOCI has eight spectral bands from 412 to 865 nm but does not have the shortwave infrared (SWIR) bands that are needed for satellite ocean color remote sensing in turbid ocean regions. Based on a regional empirical relationship between the NIR nL(w)(λ) and the diffuse attenuation coefficient at 490 nm (K(d)(490)), derived from long-term measurements with the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, an iterative scheme with the NIR-based atmospheric correction algorithm has been developed. Results from MODIS-Aqua measurements show that ocean color products in the region derived from the newly proposed NIR-corrected atmospheric correction algorithm match well with those from the SWIR atmospheric correction algorithm. Thus, the proposed new atmospheric correction method provides an alternative for ocean color data processing for GOCI (and other ocean color satellite sensors without SWIR bands) in the turbid ocean regions of the Bohai Sea, Yellow Sea, and East China Sea, although the SWIR-based atmospheric correction approach is still much preferred. The proposed atmospheric correction methodology can also be applied to other turbid coastal regions.
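The iterative scheme described above, where the NIR water-leaving radiance is predicted from a regional Kd(490) relationship and fed back into the correction, has the structure of a fixed-point loop. Everything below (function names, the toy linear stand-ins for the empirical relationships) is a hypothetical illustration of that structure, not the actual GOCI/MODIS implementation:

```python
def iterate_nir_nlw(lt_nir, estimate_kd490, nir_model, n_iter=5):
    """Start by assuming zero NIR water-leaving radiance, then alternately
    (1) estimate Kd(490) from the current water signal and
    (2) predict the NIR nLw from the regional empirical model,
    repeating until the estimate settles."""
    nlw_nir = 0.0
    for _ in range(n_iter):
        kd490 = estimate_kd490(lt_nir - nlw_nir)
        nlw_nir = nir_model(kd490)
    return nlw_nir

# Toy linear stand-ins for the two empirical relationships:
kd_toy = lambda water_signal: 0.1 * water_signal
nir_toy = lambda kd490: 2.0 * kd490
```

With these toy models the loop converges in a handful of iterations; the real scheme would substitute the regional MODIS-derived relationships in both steps.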
Atmospheric correction of satellite data
Shmirko, Konstantin; Bobrikov, Alexey; Pavlov, Andrey
2015-11-01
The atmosphere accounts for more than 90% of all radiation measured by a satellite sensor. Because of this, atmospheric correction plays an important role in separating the water-leaving radiance from the signal and in evaluating the concentrations of various water pigments (chlorophyll-a, DOM, CDOM, etc.). The elimination of the atmosphere's intrinsic radiance from the remote sensing signal is referred to as atmospheric correction.
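In the usual single-band decomposition behind this separation, the TOA radiance is the sum of the Rayleigh and aerosol path radiances plus the water-leaving radiance attenuated by the diffuse transmittance; atmospheric correction inverts that sum. A schematic sketch (the input values are illustrative only):

```python
def water_leaving_radiance(l_toa, l_rayleigh, l_aerosol, t_diffuse):
    """Invert L_toa = L_r + L_a + t * L_w for the water-leaving
    radiance L_w: subtract the atmospheric path terms, then divide by
    the diffuse transmittance t."""
    return (l_toa - l_rayleigh - l_aerosol) / t_diffuse
```

The >90% figure in the abstract is visible here: most of `l_toa` is path radiance, so small errors in the Rayleigh or aerosol terms translate into large relative errors in the recovered water signal.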
Case study of atmospheric correction on CCD data of HJ-1 satellite based on 6S model
International Nuclear Information System (INIS)
Xue, Xiaojuan; Meng, Qingyan; Xie, Yong; Sun, Zhangli; Wang, Chang; Zhao, Hang
2014-01-01
In this study, the atmospheric radiative transfer model 6S was used to simulate the radiative transfer process along the surface-atmosphere-sensor path. An algorithm based on a look-up table (LUT) built with the 6S model was used to correct HJ-1 CCD images pixel by pixel. The effect of atmospheric correction on the CCD data of the HJ-1 satellite was then analyzed in terms of the spectral curves and evaluated against the measured reflectance acquired during the HJ-1B satellite overpass; finally, the normalized difference vegetation index (NDVI) before and after atmospheric correction was compared. The results showed: (1) atmospheric correction of HJ-1 CCD data can reduce the "increase" effect of the atmosphere; (2) apparent reflectance values are higher than the 6S-corrected surface reflectance in bands 1-3 but lower in the near-infrared band, and the corrected surface reflectance values agree well with the measured reflectance values; (3) the NDVI increases significantly after atmospheric correction, which indicates that atmospheric correction can highlight the vegetation information.
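The NDVI behavior reported in (3) follows directly from the band effects in (2): atmospheric scattering inflates the red reflectance while absorption depresses the NIR, so removing both raises the index. A minimal sketch with hypothetical reflectance values (not taken from the study):

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

# Illustrative values: correction lowers red and raises NIR reflectance.
ndvi_apparent  = ndvi(red=0.12, nir=0.40)   # from apparent (TOA) reflectance
ndvi_corrected = ndvi(red=0.08, nir=0.45)   # from corrected surface reflectance
```

With these numbers the corrected NDVI (~0.70) exceeds the apparent NDVI (~0.54), mirroring the increase the study observed after 6S correction.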
Energy Technology Data Exchange (ETDEWEB)
Borel, C.C.; Villeneuve, P.V.; Clodius, W.B.; Szymanski, J.J.; Davis, A.B.
1999-04-04
Deriving information about the Earth's surface requires atmospheric correction of the measured top-of-the-atmosphere radiances. One possible path is to use atmospheric radiative transfer codes to predict how the radiance leaving the ground is affected by scattering and attenuation. In practice the atmosphere is usually not well known, and it is therefore necessary to use more practical methods. The authors describe how to find dark surfaces, estimate the atmospheric optical depth, estimate path radiance, and identify thick clouds using thresholds on reflectance, NDVI, and columnar water vapor. The authors also describe a simple method to correct a visible channel contaminated by thin cirrus clouds.
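The screening step described, finding dark surfaces and flagging thick clouds with thresholds on reflectance and NDVI, amounts to a simple per-pixel decision rule. A sketch of that idea; the threshold values below are invented placeholders, not the authors' numbers:

```python
def classify_pixel(reflectance, ndvi_value,
                   dark_max=0.05, cloud_min=0.40, veg_ndvi=0.30):
    """Threshold screening: dark surfaces are very dim; thick clouds are
    bright yet show low NDVI (unlike bright, high-NDVI vegetation)."""
    if reflectance < dark_max:
        return "dark"
    if reflectance > cloud_min and ndvi_value < veg_ndvi:
        return "cloud"
    return "other"
```

Dark pixels found this way anchor the path-radiance and optical-depth estimates, while cloud pixels are excluded from the correction entirely; a full implementation would add the columnar-water-vapor test mentioned in the abstract.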
Atmospheric scattering corrections to solar radiometry
International Nuclear Information System (INIS)
Box, M.A.; Deepak, A.
1979-01-01
Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected to take account of the scattered radiation. In this paper we discuss the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distributions from such measurements. For a radiometer with a small field of view (small half-cone angle) and relatively clear skies (optical depths <0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity. It is assumed here that the main contributions to the diffuse radiation within the detector's view cone are due to single scattering by molecules and aerosols and multiple scattering by molecules alone, aerosol multiple-scattering contributions being treated as negligibly small. The theory and the numerical results discussed in this paper will be helpful not only in making corrections to measured optical depth data but also in designing improved solar radiometers.
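Bouguer's law underlying the measurement is V = V0 exp(-τ m), where V0 is the extraterrestrial calibration signal, τ the total optical depth, and m the relative airmass along the slant path. A sketch of the optical-depth retrieval before any diffuse-light correction is applied (variable names are illustrative):

```python
import math

def optical_depth(v0, v, airmass):
    """Invert Bouguer/Beer-Lambert transmission, V = V0 * exp(-tau * m),
    for the total optical depth tau along the vertical column."""
    return math.log(v0 / v) / airmass
```

The ~1% diffuse contribution discussed in the abstract enters as a small positive bias in `v`, so the uncorrected `optical_depth` slightly underestimates τ; the paper's correction factors quantify and remove that bias.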
Directory of Open Access Journals (Sweden)
Nisha Rani
2017-07-01
Hyperspectral images have wide applications in the fields of geology, mineral exploration, agriculture, forestry, and environmental studies due to their narrow bandwidth and numerous channels. However, these images commonly suffer from atmospheric effects, limiting their use. In such a situation, atmospheric correction becomes a necessary prerequisite for any further processing and for accurate interpretation of the spectra of different surface materials/objects. In the present study, two advanced atmospheric correction approaches, QUAC and FLAASH, were applied to hyperspectral remote sensing imagery. The spectra of vegetation, man-made structures, and different minerals from the Gadag area of Karnataka were extracted from the raw image and from the QUAC- and FLAASH-corrected images. These spectra were compared among themselves and with the existing USGS and JHU spectral libraries. FLAASH is a rigorous atmospheric correction algorithm that requires various input parameters, but it is able to compensate for the effects of atmospheric absorption. The absorption features in a spectrum play an important role in identifying composition; the presence of unwanted absorption features can therefore lead to wrong interpretation and identification of mineral composition. FLAASH also has the advantage of spectral polishing, which provides smooth spectral curves that help in accurate identification of mineral composition. This study therefore recommends FLAASH over QUAC for atmospheric correction and for correct interpretation and identification of the composition of any object or mineral.
Atmospheric Error Correction of the Laser Beam Ranging
Directory of Open Access Journals (Sweden)
J. Saydi
2014-01-01
Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. The atmospheric correction was calculated for 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, using monthly mean meteorological data received from meteorological stations in those three cities. The correction was calculated for 11, 100, and 200 km laser beam propagation paths at 30°, 60°, and 90° elevation angles. The results showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength, and the laser ranging error decreased as the laser emission angle increased. The atmospheric corrections computed with the Marini-Murray and Mendes-Pavlis models were also compared for the 0.532 micron wavelength.
Werdell, P. Jeremy; Franz, Bryan A.; Bailey, Sean W.
2010-01-01
The NASA Moderate Resolution Imaging Spectroradiometer onboard the Aqua platform (MODIS-Aqua) provides a viable data stream for operational water quality monitoring of Chesapeake Bay. Marine geophysical products from MODIS-Aqua depend on the efficacy of the atmospheric correction process, which can be problematic in coastal environments. The operational atmospheric correction algorithm for MODIS-Aqua requires an assumption of negligible near-infrared water-leaving radiance, nL(sub w)(NIR). This assumption progressively degrades with increasing turbidity and, as such, methods exist to account for non-negligible nL(sub w)(NIR) within the atmospheric correction process or to use alternate radiometric bands where the assumption is satisfied, such as those positioned within the shortwave infrared (SWIR) region of the spectrum. We evaluated a decade-long time-series of nL(sub w)(lambda) from MODIS-Aqua in Chesapeake Bay derived using NIR and SWIR bands for atmospheric correction. Low signal-to-noise ratios (SNR) for the SWIR bands of MODIS-Aqua added noise errors to the derived radiances, which produced broad, flat frequency distributions of nL(sub w)(lambda) relative to those produced using the NIR bands. The SWIR approach produced an increased number of negative nL(sub w)(lambda) and decreased sample size relative to the NIR approach. Revised vicarious calibration and regional tuning of the scheme to switch between the NIR and SWIR approaches may improve retrievals in Chesapeake Bay; however, poor SNR values for the MODIS-Aqua SWIR bands remain the primary deficiency of the SWIR-based atmospheric correction approach.
Directory of Open Access Journals (Sweden)
Javier Marcello
2016-09-01
The precise mapping of vegetation covers in semi-arid areas is a complex task, as this type of environment consists of sparse vegetation mainly composed of small shrubs. The launch of high resolution satellites, with additional spectral bands and the ability to alter the viewing angle, offers a useful technology for this objective. In this context, atmospheric correction is a fundamental step in the pre-processing of such remote sensing imagery and, consequently, different algorithms have been developed for this purpose over the years. They are commonly categorized as image-based methods or as more advanced physical models based on radiative transfer theory. Despite the relevance of this topic, few comparative studies covering several methods have been carried out using high resolution data or applied specifically to vegetation covers. In this work, the performance of five representative atmospheric correction algorithms (DOS, QUAC, FLAASH, ATCOR and 6S) has been assessed, using high resolution Worldview-2 imagery and field spectroradiometer data collected simultaneously, with the goal of identifying the most appropriate techniques. The study also included a detailed analysis of the influence of parameterization on the final results of the correction, the aerosol model and its optical thickness being important parameters to adjust properly. The effects of the corrections were studied at vegetation and soil sites belonging to different protected semi-arid ecosystems (high mountain and coastal areas). In summary, the superior performance of model-based algorithms, 6S in particular, has been demonstrated, achieving reflectance estimations very close to the in-situ measurements (RMSE of between 2% and 3%). Finally, an example of the importance of atmospheric correction for vegetation estimation in these natural areas is presented, allowing the robust mapping of species and the analysis of multitemporal variations.
On the Atmospheric Correction of Antarctic Airborne Hyperspectral Data
Directory of Open Access Journals (Sweden)
Martin Black
2014-05-01
The first airborne hyperspectral campaign in the Antarctic Peninsula region was carried out by the British Antarctic Survey and partners in February 2011. This paper presents an insight into the applicability of currently available radiative transfer modelling and atmospheric correction techniques for processing airborne hyperspectral data in this unique coastal Antarctic environment. Results from the Atmospheric and Topographic Correction version 4 (ATCOR-4) package reveal absolute reflectance values somewhat in line with laboratory measured spectra, with Root Mean Square Error (RMSE) values of 5% in the visible near infrared (0.4–1 µm) and 8% in the shortwave infrared (1–2.5 µm). Residual noise remains present due to the absorption by atmospheric gases and aerosols, but certain parts of the spectrum match laboratory measured features very well. This study demonstrates that commercially available packages for carrying out atmospheric correction are capable of correcting airborne hyperspectral data in the challenging environment present in Antarctica. However, it is anticipated that future results from atmospheric correction could be improved by measuring in situ atmospheric data to generate atmospheric profiles and aerosol models, or with the use of multiple ground targets for calibration and validation.
Nguyen, Hieu Cong; Jung, Jaehoon; Lee, Jungbin; Choi, Sung-Uk; Hong, Suk-Young; Heo, Joon
2015-07-31
The reflectance of the Earth's surface is significantly influenced by atmospheric conditions such as water vapor content and aerosols. In particular, the absorption and scattering effects become stronger when the target features are non-bright objects, such as aqueous or vegetated areas. For any remote-sensing approach, atmospheric correction is thus required to minimize those effects and to convert digital number (DN) values to surface reflectance. The main aim of this study was to test the three most popular atmospheric correction models, namely (1) Dark Object Subtraction (DOS); (2) Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH); and (3) the Second Simulation of the Satellite Signal in the Solar Spectrum (6S), and to compare them with Top-of-Atmosphere (TOA) reflectance. By using the k-Nearest Neighbor (kNN) algorithm, a series of experiments was conducted for above-ground forest biomass (AGB) estimation in the Gongju and Sejong regions of South Korea, in order to check the effectiveness of atmospheric correction methods for Landsat ETM+. Overall, in the forest biomass estimation, the 6S model showed the best RMSEs, followed by FLAASH, DOS, and TOA. In addition, a significant improvement in RMSE with 6S was found for images where the study site had higher total water vapor and temperature levels. Moreover, we also tested the sensitivity of the atmospheric correction methods for each of the Landsat ETM+ bands. The results confirmed that 6S dominates the other methods, especially in the infrared wavelengths covering the pivotal bands for forest applications. Finally, we suggest that the 6S model, integrating water vapor and aerosol optical depth derived from MODIS products, is better suited for AGB estimation based on optical remote-sensing data, especially when using satellite images acquired in the summer during full canopy development.
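The kNN estimator used in the experiments above predicts a plot's biomass as the average of the k spectrally nearest training plots, which is why the quality of the corrected reflectances feeds directly into the RMSE comparison. A self-contained sketch (the reflectance and AGB values are hypothetical):

```python
def knn_predict(train_x, train_y, query, k=3):
    """k-nearest-neighbour regression: rank training plots by squared
    Euclidean distance in spectral space, then average the AGB of the
    k closest plots."""
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_x, train_y)
    )
    nearest = [y for _, y in ranked[:k]]
    return sum(nearest) / len(nearest)

plots = [(0.10, 0.30), (0.20, 0.30), (0.10, 0.40), (0.90, 0.90)]  # band reflectances
agb   = [110.0, 120.0, 130.0, 300.0]                              # Mg/ha, invented
```

Because the distance is computed in reflectance space, any residual atmospheric bias shifts which training plots count as "nearest", which is the mechanism behind 6S outperforming the simpler corrections here.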
A locally adaptive algorithm for shadow correction in color images
Karnaukhov, Victor; Kober, Vitaly
2017-09-01
The paper deals with correction of color images distorted by spatially nonuniform illumination. A serious distortion occurs in real conditions when a part of a scene containing 3D objects close to a directed light source is illuminated much more brightly than the rest of the scene. A locally adaptive algorithm for correction of shadow regions in color images is proposed. The algorithm consists of segmentation of shadow areas with rank-order statistics, followed by correction of nonuniform illumination using an approach based on human visual perception. The performance of the proposed algorithm is compared to that of common algorithms for correction of color images containing shadow regions.
Remote Sensing of Tropical Ecosystems: Atmospheric Correction and Cloud Masking Matter
Hilker, Thomas; Lyapustin, Alexei I.; Tucker, Compton J.; Sellers, Piers J.; Hall, Forrest G.; Wang, Yujie
2012-01-01
Tropical rainforests are significant contributors to the global cycles of energy, water and carbon. As a result, monitoring of vegetation status over regions such as Amazonia has been a long-standing interest of Earth scientists trying to determine the effect of climate change and anthropogenic disturbance on tropical ecosystems and their feedback on the Earth's climate. Satellite-based remote sensing is the only practical approach for observing the vegetation dynamics of regions like the Amazon over useful spatial and temporal scales, but recent years have seen much controversy over satellite-derived vegetation states in Amazonia, with studies predicting opposite feedbacks depending on data processing technique and interpretation. Recent results suggest that some of this uncertainty could stem from a lack of quality in atmospheric correction and cloud screening. In this paper, we assess these uncertainties by comparing the current standard surface reflectance products (MYD09, MYD09GA) and derived composites (MYD09A1, MCD43A4 and the MYD13A2 vegetation index) from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Aqua satellite to results obtained from the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm. MAIAC uses a new cloud screening technique, and novel aerosol retrieval and atmospheric correction procedures which are based on time-series and spatial analyses. Our results show considerable improvements of MAIAC-processed surface reflectance compared to MYD09/MYD13, with noise levels reduced by a factor of up to 10. Uncertainties in the current MODIS surface reflectance product were mainly due to residual cloud and aerosol contamination which affected the Normalized Difference Vegetation Index (NDVI): during the wet season, with cloud cover ranging between 90 percent and 99 percent, conventionally processed NDVI was significantly depressed due to undetected clouds. A smaller reduction in NDVI due to increased
Bourdet, Alice; Frouin, Robert J.
2014-11-01
The classic atmospheric correction algorithm, routinely applied to second-generation ocean-color sensors such as SeaWiFS, MODIS, and MERIS, consists of (i) estimating the aerosol reflectance in the red and near infrared (NIR), where the ocean is considered black (i.e., totally absorbing), and (ii) extrapolating the estimated aerosol reflectance to shorter wavelengths. The marine reflectance is then retrieved by subtraction. Variants and improvements have been made over the years to deal with non-null reflectance in the red and near infrared, a general situation in estuaries and the coastal zone, but the solutions proposed so far still suffer some limitations, due to uncertainties in marine reflectance modeling in the near infrared or the difficulty of extrapolating the aerosol signal to the blue when using observations in the shortwave infrared (SWIR), a spectral range far from the ocean-color wavelengths. To estimate the marine signal (i.e., the product of marine reflectance and atmospheric transmittance) in the near infrared, the proposed approach is to decompose the aerosol reflectance in the near infrared to shortwave infrared into principal components (PCs). Since aerosol scattering is spectrally smooth, a few components are generally sufficient to represent the perturbing signal, i.e., the aerosol reflectance in the near infrared can be determined from measurements in the shortwave infrared, where the ocean is black. This gives access to the marine signal in the near infrared, which can then be used in the classic atmospheric correction algorithm. The methodology is evaluated theoretically from simulations of the top-of-atmosphere reflectance for a wide range of geophysical conditions and angular geometries, and applied to actual MODIS imagery acquired over the Gulf of Mexico. The number of discarded pixels is reduced by over 80% using the PC modeling to determine the marine signal in the near infrared prior to applying the classic atmospheric correction algorithm.
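The principal-component idea can be illustrated with synthetic smooth "aerosol spectra": fit components on simulated NIR+SWIR spectra, then recover the NIR part of a new spectrum from its SWIR bands alone. The band set, the spectra, and the one-component truncation are assumptions of this sketch, not the authors' implementation:

```python
import numpy as np

# Illustrative "simulated" aerosol reflectance spectra on 5 bands:
# 3 NIR bands followed by 2 SWIR bands. Smooth spectra -> few components.
rng = np.random.default_rng(0)
base = np.array([0.030, 0.025, 0.020, 0.012, 0.008])   # mean aerosol spectrum
slope = np.array([1.0, 0.8, 0.6, 0.35, 0.2])           # one spectral "mode"
train = base + rng.uniform(-0.5, 0.5, (200, 1)) * 0.01 * slope

mean = train.mean(axis=0)
# Principal components of the joint NIR+SWIR aerosol signal
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = Vt[:1]                                   # one component suffices here

# Observation: SWIR bands only (last two); the ocean is assumed black there.
truth = base + 0.3 * 0.01 * slope
obs_swir = truth[3:]

# Least-squares fit of the PC amplitude from the SWIR bands,
# then reconstruct the aerosol reflectance in the NIR bands.
coef, *_ = np.linalg.lstsq(pcs[:, 3:].T, obs_swir - mean[3:], rcond=None)
nir_est = mean[:3] + coef @ pcs[:, :3]
```

Because the synthetic spectra are exactly rank-one, the NIR reconstruction here is exact; with realistic radiative transfer simulations a few components would be retained instead of one.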
Specificity of Atmospheric Correction of Satellite Data on Ocean Color in the Far East
Aleksanin, A. I.; Kachur, V. A.
2017-12-01
Calculation errors in ocean-brightness coefficients in the Far East region are analyzed for two atmospheric correction algorithms (NIR and MUMM). Daylight measurements in different water types show that the main error component is systematic and has a simple dependence on the magnitudes of the coefficients. The causes of this error behavior are considered. The most probable explanation for the large errors in ocean-color parameters in the Far East is a high concentration of light-absorbing continental aerosol. A comparison between satellite and in situ measurements at AERONET stations in the United States and South Korea has been made. It is shown that the errors in these two regions differ by up to 10 times for similar water turbidity and comparably high precision of the aerosol optical-depth computation when the NIR correction of the atmospheric effect is used.
High-speed atmospheric correction for spectral image processing
Perkins, Timothy; Adler-Golden, Steven; Cappelaere, Patrice; Mandl, Daniel
2012-06-01
Land and ocean data product generation from visible-through-shortwave-infrared multispectral and hyperspectral imagery requires atmospheric correction or compensation, that is, the removal of atmospheric absorption and scattering effects that contaminate the measured spectra. We have recently developed a prototype software system for automated, low-latency, high-accuracy atmospheric correction based on a C++-language version of the Spectral Sciences, Inc. FLAASH™ code. In this system, pre-calculated look-up tables replace on-the-fly MODTRAN® radiative transfer calculations, while the portable C++ code enables parallel processing on multicore/multiprocessor computer systems. The initial software has been installed on the Sensor Web at NASA Goddard Space Flight Center, where it is currently atmospherically correcting new data from the EO-1 Hyperion and ALI sensors. Computation time is around 10 s per data cube per processor. Further development will be conducted to implement the new atmospheric correction software on board the upcoming HyspIRI mission's Intelligent Payload Module, where it would generate data products in near-real time for Direct Broadcast to the ground. The rapid turn-around of data products made possible by this software would benefit a broad range of applications in areas of emergency response, environmental monitoring and national defense.
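The look-up-table approach that replaces on-the-fly radiative transfer can be sketched as a simple interpolation-and-invert step. The LUT values, the band solar irradiance, and the single-parameter (AOD) indexing below are placeholders, not FLAASH internals:

```python
import numpy as np

# Toy look-up table (values are illustrative, not MODTRAN output):
# for each aerosol optical depth, (path radiance, total transmittance,
# spherical albedo) at one band.
aod_grid = np.array([0.05, 0.20, 0.40])
lut = {"Lp": np.array([4.0, 9.0, 15.0]),      # path radiance, W m-2 sr-1 um-1
       "t":  np.array([0.90, 0.80, 0.68]),    # two-way transmittance
       "s":  np.array([0.08, 0.12, 0.17])}    # spherical albedo
E = 500.0                                     # band solar irradiance (assumed)

def correct(L_toa, aod):
    """Invert TOA radiance to surface reflectance via LUT interpolation."""
    Lp = np.interp(aod, aod_grid, lut["Lp"])
    t = np.interp(aod, aod_grid, lut["t"])
    s = np.interp(aod, aod_grid, lut["s"])
    A = np.pi * (L_toa - Lp) / (t * E)
    return A / (1.0 + s * A)          # spherical-albedo (multiple bounce) term

rho = correct(L_toa=40.0, aod=0.20)
```

The point of the LUT is that the three interpolations above cost microseconds, whereas a fresh radiative transfer run per pixel would dominate the processing time.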
Atmospheric monitoring in MAGIC and data corrections
Directory of Open Access Journals (Sweden)
Fruck Christian
2015-01-01
Full Text Available A method for analyzing returns of a custom-made "micro"-LIDAR system, operated alongside the two MAGIC telescopes, is presented. This method allows for calculating the transmission through the atmospheric boundary layer as well as thin cloud layers. This is achieved by applying exponential fits to regions of the back-scattering signal that are dominated by Rayleigh scattering. Making this real-time transmission information available in the MAGIC data stream makes it possible to apply atmospheric corrections later in the analysis. Such corrections allow the effective observation time of MAGIC to be extended by including data taken under adverse atmospheric conditions. In the future they will help reduce the systematic uncertainties of energy and flux.
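A minimal version of the exponential-fit idea: fit the logarithm of the range-corrected return in two Rayleigh-dominated regions, below and above a cloud, and read the layer's two-way transmission off the intercept offset between the fits. All numbers below are synthetic, and the molecular profile is idealized as a single exponential:

```python
import numpy as np

# Synthetic range-corrected lidar return ln(P r^2): linear decay in the
# Rayleigh-dominated regions, with a cloud between 3 and 4 km that
# attenuates the signal above it. Illustrative numbers only.
alpha_mol = 0.012                      # molecular extinction, km^-1 (assumed)
T2_cloud = 0.6                         # two-way cloud transmission to recover
r_lo = np.linspace(1.0, 3.0, 40)       # below-cloud Rayleigh region, km
r_hi = np.linspace(4.0, 8.0, 60)       # above-cloud Rayleigh region, km
ln_lo = 10.0 - 2 * alpha_mol * r_lo
ln_hi = 10.0 - 2 * alpha_mol * r_hi + np.log(T2_cloud)

# Fit each Rayleigh region with a line; the intercept offset between the
# two fits is ln(T^2) of the intervening cloud layer.
b_lo = np.polyfit(r_lo, ln_lo, 1)[1]
b_hi = np.polyfit(r_hi, ln_hi, 1)[1]
T2_est = np.exp(b_hi - b_lo)
```

On real returns the fit windows must be chosen where aerosol and cloud contributions are negligible, which is the role of the Rayleigh-region selection described in the paper.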
Directory of Open Access Journals (Sweden)
Lauri Markelin
2016-12-01
Full Text Available Atmospheric correction of remotely sensed imagery of inland water bodies is essential to interpret water-leaving radiance signals and for the accurate retrieval of water quality variables. Atmospheric correction is particularly challenging over inhomogeneous water bodies surrounded by comparatively bright land surface. We present results of AisaFENIX airborne hyperspectral imagery collected over a small inland water body under changing cloud cover, presenting challenging but common conditions for atmospheric correction. This is the first evaluation of the performance of the FENIX sensor over water bodies. ATCOR4, which is not specifically designed for atmospheric correction over water and does not make any assumptions on water type, was used to obtain atmospherically corrected reflectance values, which were compared to in situ water-leaving reflectance collected at six stations. Three different atmospheric correction strategies in ATCOR4 were tested. The strategy using fully image-derived and spatially varying atmospheric parameters produced a reflectance accuracy of ±0.002, i.e., a difference of less than 15% compared to the in situ reference reflectance. Amplitude and shape of the remotely sensed reflectance spectra were in general accordance with the in situ data. The spectral angle was better than 4.1° for the best cases, in the spectral range of 450–750 nm. The retrieval of chlorophyll-a (Chl-a) concentration using a popular semi-analytical band ratio algorithm for turbid inland waters gave an accuracy of ~16% or 4.4 mg/m3 compared to retrieval of Chl-a from reflectance measured in situ. Using fixed ATCOR4 processing parameters for whole images improved Chl-a retrieval from a difference of ~6 mg/m3 relative to the reference to approximately 2 mg/m3. We conclude that the AisaFENIX sensor, in combination with ATCOR4 in image-driven parametrization, can be successfully used for inland water quality observations. This implies that the need for in situ
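Band-ratio Chl-a retrievals of the general kind referred to here take the corrected reflectance in a red and a red-edge/NIR band and map their ratio to concentration. The linear form and the coefficients below are illustrative placeholders, not the calibration used in the study:

```python
# A minimal two-band red/NIR-edge ratio retrieval of the kind commonly used
# for turbid inland waters. The coefficients a, b are hypothetical
# placeholders chosen only to make the example run.
def chl_band_ratio(r_708, r_665, a=61.3, b=-37.7):
    """Chl-a (mg/m^3) from the 708/665 nm reflectance ratio (linear model)."""
    return a * (r_708 / r_665) + b

# Example: atmospherically corrected reflectances at the two bands
chl = chl_band_ratio(r_708=0.030, r_665=0.024)
```

Because the retrieval acts on a ratio of two nearby bands, residual atmospheric-correction errors that are spectrally flat partly cancel, which is one reason such algorithms tolerate imperfect correction over turbid waters.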
Geometry Correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network
Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao
2018-03-01
To address the shortcomings of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the algorithm and solving steps of AGA-RBF are presented in order to realize geometry correction for UAV remote sensing. The correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network separately with the AGA and LMS algorithms. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, fast execution and strong generalization ability.
The Bouguer Correction Algorithm for Gravity with Limited Range
Directory of Open Access Journals (Sweden)
MA Jian
2017-01-01
Full Text Available The Bouguer correction is an important item in gravity reduction, but the traditional Bouguer correction, whether the plane or the spherical variant, suffers from approximation error caused by far-zone virtual terrain. The error grows as the calculation point gets higher. Therefore, gravity reduction using the Bouguer correction with limited range, consistent with the scope of the topographic correction, is researched in this paper. A simplified formula to calculate the Bouguer correction with limited range is then proposed. The algorithm, which is novel and has some theoretical value, is consistent with the equation derived from the strict integral algorithm for topographic correction. An interpolation experiment shows that gravity reduction based on the Bouguer correction with limited range is superior to the unlimited-range correction when the calculation point is higher than 1000 m.
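The effect of limiting the correction range can be illustrated by comparing the classic infinite-slab formula with the attraction of a finite flat cylinder, a simple stand-in for a limited-range Bouguer correction. Both formulas are standard; the cylinder radius is an assumed value, not the paper's:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
rho = 2670.0           # standard crustal density, kg/m^3

def bouguer_infinite(h):
    """Classic infinite-slab Bouguer correction, 2*pi*G*rho*h (m/s^2)."""
    return 2 * math.pi * G * rho * h

def bouguer_cylinder(h, R):
    """On-axis attraction of a flat cylinder of thickness h and radius R
    (m/s^2): a simple limited-range Bouguer model. Reduces to the
    infinite slab as R -> infinity."""
    return 2 * math.pi * G * rho * (h + R - math.sqrt(R * R + h * h))

h = 1000.0                                   # station height, m
g_inf = bouguer_infinite(h)
g_lim = bouguer_cylinder(h, R=166_700.0)     # ~ topographic-correction range
```

At 1000 m the finite-range value is about 0.3% smaller than the infinite slab, consistent with the paper's point that the discrepancy matters most for high calculation points.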
Assessing atmospheric bias correction for dynamical consistency using potential vorticity
International Nuclear Information System (INIS)
Rocheta, Eytan; Sharma, Ashish; Evans, Jason P
2014-01-01
Correcting biases in atmospheric variables prior to impact studies or dynamical downscaling can lead to new biases, as dynamical consistency between the ‘corrected’ fields is not maintained. Use of these bias-corrected fields for subsequent impact studies and dynamical downscaling provides input conditions that do not appropriately represent intervariable relationships in atmospheric fields. Here we investigate the consequences of this lack of dynamical consistency in bias correction using a measure of model consistency, the potential vorticity (PV). This paper presents an assessment of the biases present in PV using two alternative correction techniques: an approach where bias correction is performed individually on each atmospheric variable, thereby ignoring the physical relationships that exist between the multiple variables being corrected, and a second approach where bias correction is performed directly on the PV field, thereby keeping the system dynamically coherent throughout the correction process. We show that bias correcting variables independently results in increased errors above the tropopause in the mean and standard deviation of the PV field, which are improved when using the alternative proposed. Furthermore, patterns of spatial variability are improved over nearly all vertical levels when applying the alternative approach. Results point to the need for a dynamically consistent atmospheric bias correction technique which results in fields that can be used as dynamically consistent lateral boundaries in follow-up downscaling applications. (letter)
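PV is a useful consistency diagnostic precisely because it combines several corrected variables at once. A simplified isobaric form, -g (f + ζ) ∂θ/∂p, can be sketched as follows; the profile values are invented and the expression omits the horizontal terms of the full Ertel PV:

```python
import numpy as np

g = 9.81               # gravitational acceleration, m s^-2
f = 1.0e-4             # mid-latitude Coriolis parameter, s^-1

def potential_vorticity(zeta, theta, p):
    """Simplified isobaric PV: -g (f + zeta) dtheta/dp, returned in PVU
    (1 PVU = 1e-6 K m^2 kg^-1 s^-1). Horizontal Ertel terms omitted."""
    dtheta_dp = np.gradient(theta, p)        # K per Pa, nonuniform-safe
    return -g * (f + zeta) * dtheta_dp * 1e6

# Invented column: stability increases sharply toward the stratosphere
p = np.array([500e2, 300e2, 100e2])          # pressure levels, Pa
theta = np.array([320.0, 340.0, 420.0])      # potential temperature, K
zeta = np.array([2e-5, 1e-5, 0.0])           # relative vorticity, s^-1
pv = potential_vorticity(zeta, theta, p)
```

Because ζ comes from the wind fields and ∂θ/∂p from the temperature field, independently bias-correcting those fields perturbs this product, which is the mechanism behind the above-tropopause PV errors the paper reports.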
Directory of Open Access Journals (Sweden)
Stéfani Novoa
2017-01-01
Full Text Available The accurate measurement of suspended particulate matter (SPM) concentrations in coastal waters is of crucial importance for ecosystem studies, sediment transport monitoring, and assessment of anthropogenic impacts in the coastal ocean. Ocean color remote sensing is an efficient tool to monitor SPM spatio-temporal variability in coastal waters. However, near-shore satellite images are complex to correct for atmospheric effects due to the proximity of land and to the high level of reflectance caused by high SPM concentrations in the visible and near-infrared spectral regions. The water reflectance signal (ρw) tends to saturate at short visible wavelengths when the SPM concentration increases. Using a comprehensive dataset of high-resolution satellite imagery and in situ SPM and water reflectance data, this study presents (i) an assessment of existing atmospheric correction (AC) algorithms developed for turbid coastal waters; and (ii) a switching method that automatically selects the most sensitive SPM vs. ρw relationship, to avoid saturation effects when computing the SPM concentration. The approach is applied to satellite data acquired by three medium-high spatial resolution sensors (Landsat-8/Operational Land Imager, National Polar-Orbiting Partnership/Visible Infrared Imaging Radiometer Suite and Aqua/Moderate Resolution Imaging Spectrometer) to map the SPM concentration in some of the most turbid areas of the European coastal ocean, namely the Gironde and Loire estuaries as well as Bourgneuf Bay on the French Atlantic coast. For all three sensors, AC methods based on the use of short-wave infrared (SWIR) spectral bands were tested, and the consistency of the retrieved water reflectance was examined along transects from low- to high-turbidity waters. For OLI data, we also compared a SWIR-based AC (ACOLITE) with a method based on multi-temporal analyses of atmospheric constituents (MACCS). For the selected scenes, the ACOLITE-MACCS difference was
Pagnutti, Mary; Holekamp, Kara; Stewart, Randy; Vaughan, Ronald D.
2006-01-01
This Rapid Prototyping Capability study explores the potential to use atmospheric profiles derived from GPS (Global Positioning System) radio occultation measurements and by AIRS (Atmospheric Infrared Sounder) onboard the Aqua satellite to improve surface temperature retrieval from remotely sensed thermal imagery. This study demonstrates an example of a cross-cutting decision support technology whereby NASA data or models are shown to improve a wide number of observation systems or models. The ability to use one data source to improve others will be critical to the GEOSS (Global Earth Observation System of Systems), where a large number of potentially useful systems will require auxiliary datasets as input for decision support. Atmospheric correction of thermal imagery decouples the TOA radiance and separates surface emission from atmospheric emission and absorption. Surface temperature can then be estimated from the surface emission with knowledge of its emissivity. Traditionally, radiosonde sounders, or atmospheric models based on radiosonde sounders such as the NOAA (National Oceanic & Atmospheric Administration) ARL (Air Resources Laboratory) READY (Real-time Environmental Application and Display sYstem), provide the atmospheric profiles required to perform atmospheric correction. Unfortunately, these types of data are too spatially sparse and taken too infrequently. The advent of high-accuracy, global-coverage atmospheric data from GPS radio occultation and AIRS may provide a new avenue for filling data input gaps. In this study, AIRS and GPS radio occultation derived atmospheric profiles from the German Aerospace Center CHAMP (CHAllenging Minisatellite Payload), the Argentinean Commission on Space Activities SAC-C (Satellite de Aplicaciones Cientificas-C), and the pair of NASA GRACE (Gravity Recovery and Climate Experiment) satellites are used as input data in atmospheric radiative transport modeling based on the MODTRAN (MODerate resolution atmospheric
Synchronous atmospheric radiation correction of GF-2 satellite multispectral image
Bian, Fuqiang; Fan, Dongdong; Zhang, Yan; Wang, Dandan
2018-02-01
GF-2 remote sensing products have been widely used in many fields for their high-quality information, which provides technical support for macroeconomic decisions. Atmospheric correction is a necessary part of the data preprocessing of quantitative high-resolution remote sensing: it eliminates the signal interference in the radiation path caused by atmospheric scattering and absorption, and converts apparent reflectance into the real reflectance of the surface targets. To address the lack, in current research, of atmospheric data synchronized and spatially matched with the surface observation image, this study uses MODIS Level 1B synchronous data to simulate the synchronized atmospheric condition, implements the aerosol retrieval and atmospheric correction process in software, and generates a lookup table for the remote sensing image based on the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer model to correct the atmospheric effect in multispectral images from the GF-2 satellite PMS-1 payload. Based on the correction results, this paper analyzes the pixel histograms of the reflectance spectra of the four spectral bands of PMS-1 and evaluates the correction results for the different spectral bands. A comparison experiment on the same GF-2 image was then conducted based on QUAC. The average NDVI was computed for different targets and compared between the two results, to assess the degree of influence of adopting synchronous atmospheric data. The study shows that using synchronous atmospheric parameters significantly improves the quantitative application of GF-2 remote sensing data.
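6S commonly expresses the per-band atmospheric correction through three coefficients (xa, xb, xc) that are read from a lookup table and applied to each pixel's radiance. The inversion step can be sketched as below; the coefficient values are invented for illustration, not output from a real 6S run:

```python
# 6S per-band correction: with measured radiance L and coefficients
# (xa, xb, xc) from the lookup table,
#   y = xa*L - xb ;  rho_surface = y / (1 + xc*y)
# where xc accounts for the spherical-albedo (multiple scattering) term.
def sixs_correct(L, xa, xb, xc):
    y = xa * L - xb
    return y / (1.0 + xc * y)

# Illustrative values for one band of one pixel
rho = sixs_correct(L=80.0, xa=0.0032, xb=0.10, xc=0.15)
```

Building the per-band lookup table of (xa, xb, xc) over the grid of atmospheric conditions is the expensive step; applying it, as above, is a constant-time operation per pixel.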
Rivalland, Vincent; Tardy, Benjamin; Huc, Mireille; Hagolle, Olivier; Marcq, Sébastien; Boulet, Gilles
2016-04-01
Land surface temperature (LST) is a critical variable for studying the energy and water budgets at the Earth's surface, and is a key component of many aspects of climate research and services. The Landsat program, jointly carried out by NASA and USGS, has been providing thermal infrared data for 40 years, but no associated LST product has yet been routinely provided to the community. To derive LST values, radiances measured at sensor level need to be corrected for the atmospheric absorption, the atmospheric emission and the surface emissivity effect. Until now, existing LST products have been generated with multi-channel methods such as the Temperature/Emissivity Separation (TES) adapted to ASTER data or the generalized split-window algorithm adapted to MODIS multispectral data. Those approaches are ill-suited to Landsat's single thermal channel. The atmospheric correction methodology usually used for Landsat data requires detailed information about the state of the atmosphere. This information may be obtained from radio-sounding or model atmospheric reanalysis and is supplied to a radiative transfer model in order to estimate atmospheric parameters for a given coordinate. In this work, we present a new automatic tool dedicated to Landsat thermal data correction which improves the common atmospheric correction methodology by introducing the spatial dimension into the process. The Python tool developed during this study, named LANDARTs for LANDsat Automatic Retrieval of surface Temperature, is fully automatic and provides atmospheric corrections for a whole Landsat tile. Vertical atmospheric conditions are downloaded from the ERA-Interim dataset of the ECMWF, which provides them at 0.125 degree resolution, at a global scale and with a 6-hour time step. The atmospheric correction parameters are estimated on the atmospheric grid using the commercial software MODTRAN, then interpolated to 30 m resolution. We detail the processing steps
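The single-channel correction chain, removing the atmospheric terms from the sensor radiance and then inverting the Planck function, can be sketched as below. The effective wavelength and atmospheric parameters are assumed round-trip test values, not LANDARTs output:

```python
import math

C1 = 1.19104e8         # first radiation constant, W um^4 m^-2 sr^-1
C2 = 1.43877e4         # second radiation constant, um K
LAM = 10.9             # band-effective wavelength, um (assumed, TIRS-like)

def planck(T):
    """Band-effective Planck radiance at temperature T (K)."""
    return C1 / (LAM**5 * (math.exp(C2 / (LAM * T)) - 1.0))

def lst(L_sensor, tau, L_up, L_down, emis):
    """Single-channel atmospheric correction, then inverse Planck:
    L_sensor = tau*emis*B(Ts) + L_up + tau*(1-emis)*L_down."""
    B = (L_sensor - L_up - tau * (1.0 - emis) * L_down) / (tau * emis)
    return C2 / (LAM * math.log(C1 / (LAM**5 * B) + 1.0))

# Round-trip check with illustrative atmospheric parameters
tau, L_up, L_down, emis, Ts_true = 0.85, 1.2, 2.0, 0.97, 300.0
L_sensor = tau * emis * planck(Ts_true) + L_up + tau * (1 - emis) * L_down
Ts = lst(L_sensor, tau, L_up, L_down, emis)
```

The tool's contribution is spatial: tau, L_up and L_down are estimated on the reanalysis grid and interpolated across the tile instead of being taken as constants for the whole scene.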
[An automatic color correction algorithm for digital human body sections].
Zhuge, Bin; Zhou, He-qin; Tang, Lei; Lang, Wen-hui; Feng, Huan-qing
2005-06-01
To find a new approach to improve the uniformity of color parameters for image data of serial sections of the human body, an automatic color correction algorithm in the RGB color space, based on a standard CMYK color chart, was proposed. The gray part of the color chart was automatically segmented from every original image, and fifteen gray values were obtained. The transformation function between the measured gray values and the standard gray values of the color chart, and the corresponding lookup table, were obtained. In RGB color space, the colors of images were corrected according to the lookup table. The color of the original Chinese Digital Human Girl No. 1 (CDH-G1) database was corrected using the algorithm in Matlab 6.5, taking 13.475 s to process one picture on a personal computer. Using the algorithm, the color of the original database is corrected automatically and quickly, and the uniformity of color parameters for the corrected dataset is improved.
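The gray-chart lookup-table idea can be sketched as follows, assuming the gray patches have already been segmented and measured (the segmentation step is not reproduced here, and the patch values are invented):

```python
import numpy as np

# Measured gray-patch values from a scanned section vs. the chart's
# reference values (8-bit). The paper uses fifteen patches; five suffice
# for the sketch. Values are illustrative.
measured = np.array([12, 60, 110, 170, 230], dtype=float)
reference = np.array([0, 64, 128, 192, 255], dtype=float)

# Build a 256-entry lookup table by piecewise-linear interpolation of the
# measured -> reference transfer function, then apply it per channel.
lut = np.interp(np.arange(256), measured, reference).round().astype(np.uint8)

def correct(img):
    """Apply the gray-balance LUT to a uint8 image of shape (H, W, 3)."""
    return lut[img]

patch = np.full((2, 2, 3), 110, dtype=np.uint8)   # a mid-gray test patch
corrected = correct(patch)
```

Applying one LUT to all three channels corrects the shared tonal drift between sections; a per-channel LUT built the same way would additionally neutralize color casts.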
Muller, Dagmar; Krasemann, Hajo; Brewin, Robert J. W.; Deschamps, Pierre-Yves; Doerffer, Roland; Fomferra, Norman; Franz, Bryan A.; Grant, Mike G.; Groom, Steve B.; Melin, Frederic;
2015-01-01
The Ocean Colour Climate Change Initiative intends to provide a long-term time series of ocean colour data and investigate the detectable climate impact. A reliable and stable atmospheric correction procedure is the basis for ocean colour products of the necessary high quality. In order to guarantee an objective selection from a set of four atmospheric correction processors, the common validation strategy of comparisons between in situ and satellite-derived water-leaving reflectance spectra is extended by a ranking system. In principle, the statistical parameters, such as root mean square error and bias, and measures of goodness of fit are transformed into relative scores, which evaluate the relative quality of the algorithms under study. The sensitivity of these scores to the selected database has been assessed by a bootstrapping exercise, which allows identification of the uncertainty in the scoring results. Although the presented methodology is intended to be used in an algorithm selection process, this paper focuses on the scope of the methodology rather than the properties of the individual processors.
Genetic algorithm for chromaticity correction in diffraction limited storage rings
Directory of Open Access Journals (Sweden)
M. P. Ehrlichman
2016-04-01
Full Text Available A multiobjective genetic algorithm is developed for optimizing nonlinearities in diffraction limited storage rings. This algorithm determines sextupole and octupole strengths for chromaticity correction that deliver optimized dynamic aperture and beam lifetime. The algorithm makes use of dominance constraints to breed desirable properties into the early generations. The momentum aperture is optimized indirectly by constraining the chromatic tune footprint and optimizing the off-energy dynamic aperture. The result is an effective and computationally efficient technique for correcting chromaticity in a storage ring while maintaining optimal dynamic aperture and beam lifetime.
Atmospheric Phase Delay Correction of D-InSAR Based on Sentinel-1A
Directory of Open Access Journals (Sweden)
X. Li
2018-04-01
Full Text Available In this paper, we use the Generic Atmospheric Correction Online Service for InSAR (GACOS) tropospheric delay maps to correct the atmospheric phase delay in differential interferometric synthetic aperture radar (D-InSAR) monitoring, improving the accuracy of subsidence monitoring with D-InSAR technology. Atmospheric phase delay is one of the most important errors limiting the monitoring accuracy of InSAR, as it can mask the true phase in subsidence monitoring. To address this problem, this paper uses Sentinel-1A images and the tropospheric delay maps obtained from GACOS to monitor the subsidence of the Yellow River Delta in Shandong Province. The conventional D-InSAR processing was performed using the GAMMA software, and MATLAB code was used to correct the atmospheric delay in the D-InSAR results. The results before and after the atmospheric phase delay correction were verified and analyzed in the main subsidence area. The experimental results show that atmospheric phase influences the deformation results to a certain extent. After the correction, the measurement error of vertical deformation is reduced by about 18 mm, which shows that removing atmospheric effects can improve the accuracy of D-InSAR monitoring.
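The correction step itself reduces to subtracting the differential tropospheric delay between the two acquisitions, mapped into the radar line of sight, from the interferometric phase. A sketch with assumed wavelength, incidence angle and delay values (not the paper's processing chain):

```python
import numpy as np

WAVELENGTH = 0.0556    # Sentinel-1 C-band wavelength, m

def correct_atmosphere(phase, ztd_master, ztd_slave, incidence_deg):
    """Subtract a GACOS-style differential zenith tropospheric delay
    (meters), mapped to the line of sight, from interferometric phase
    (radians, two-way convention phi = 4*pi/lambda * range)."""
    los = (ztd_master - ztd_slave) / np.cos(np.radians(incidence_deg))
    return phase - 4.0 * np.pi / WAVELENGTH * los

# Illustrative: 10 mm differential zenith delay at 39 deg incidence
phase = np.array([2.0, 2.5])
corrected = correct_atmosphere(phase, 2.510, 2.500, 39.0)
```

Even a 10 mm differential delay maps to nearly 3 rad of phase at C-band, which is why uncorrected tropospheric signal can dominate centimeter-level subsidence.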
Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo
2015-08-01
In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are often divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. As non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so SBNUC algorithms have become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects, artifacts and over-correction; moreover, due to complicated calculation processes and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing defects. The hardware implementation of the algorithm, based only on an FPGA, has two advantages: (1) low resource consumption; and (2) small hardware delay, less than 20 lines. It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it can reduce both stripe non-uniformity and ripple non-uniformity.
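One simple realization of the temporal high-pass idea, with a per-pixel IIR low-pass standing in for the paper's grayscale-mapping machinery (which is not reproduced here): the low-pass tracks the slowly varying fixed-pattern offset, and subtracting it leaves the scene content.

```python
import numpy as np

def thp_nuc(frames, alpha=0.05):
    """Temporal high-pass NUC sketch: a per-pixel IIR low-pass estimates
    the fixed-pattern offset, which is removed from every frame while
    the scene's mean level is preserved."""
    lp = frames[0].astype(float)
    out = []
    for f in frames:
        lp = (1 - alpha) * lp + alpha * f    # per-pixel temporal low-pass
        out.append(f - lp + lp.mean())       # high-pass + restore mean level
    return np.array(out)

# Static synthetic scene with additive fixed-pattern non-uniformity
rng = np.random.default_rng(1)
offset = rng.normal(0, 5, (4, 4))            # fixed-pattern offsets per pixel
frames = [100.0 + offset for _ in range(300)]
corrected = thp_nuc(frames)
```

A static scene makes the correction exact in this toy case; on real sequences, scene motion is needed to separate scene detail from fixed pattern, and the ghosting/over-correction risk the paper discusses arises precisely when the scene holds still.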
Li, Yihe; Li, Bofeng; Gao, Yang
2015-01-01
With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is, however, unrealistic, since the estimated atmospheric corrections obtained from the network data are random, and the interpolated corrections furthermore diverge from the true corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and to analyzing their effects on PPP AR efficiency. The random errors of the interpolated corrections are treated as two components: one from the random errors of the estimated corrections at the reference stations, the other arising from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of the interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of the interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400
International Nuclear Information System (INIS)
Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib
2008-01-01
Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis, as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, segmentation of high-CT-number objects using combined region- and boundary-based segmentation, and second, classification of the objects into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the regions classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled, followed by Gaussian smoothing to match the resolution of the PET images. A piecewise calibration curve is then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of the generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions, depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in
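The final conversion from CT numbers to 511 keV attenuation coefficients is typically a piecewise-linear ("bilinear") calibration: one slope for air/water mixtures below 0 HU and a shallower slope for water/bone mixtures above. The slopes below are illustrative of that shape, not the paper's calibration curve:

```python
def mu_511(hu):
    """Piecewise-linear CT number (HU) -> linear attenuation coefficient
    at 511 keV (cm^-1). Slope values are illustrative placeholders for
    the usual bilinear CTAC calibration."""
    if hu <= 0:                        # air/water mixtures
        return 0.096 * (1.0 + hu / 1000.0)
    return 0.096 + 5.1e-5 * hu         # water/bone mixtures, shallower slope

mu_air = mu_511(-1000)
mu_water = mu_511(0)
mu_bone = mu_511(1000)
```

The break at 0 HU is what makes contrast misclassification costly: a contrast-filled voxel placed on the bone branch receives too large a μ at 511 keV, which is why the SCC step substitutes effective bone CT numbers before this conversion.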
Energy Technology Data Exchange (ETDEWEB)
Patrick Matthews
2012-10-01
CAU 104 comprises the following corrective action sites (CASs): • 07-23-03, Atmospheric Test Site T-7C • 07-23-04, Atmospheric Test Site T7-1 • 07-23-05, Atmospheric Test Site • 07-23-06, Atmospheric Test Site T7-5a • 07-23-07, Atmospheric Test Site - Dog (T-S) • 07-23-08, Atmospheric Test Site - Baker (T-S) • 07-23-09, Atmospheric Test Site - Charlie (T-S) • 07-23-10, Atmospheric Test Site - Dixie • 07-23-11, Atmospheric Test Site - Dixie • 07-23-12, Atmospheric Test Site - Charlie (Bus) • 07-23-13, Atmospheric Test Site - Baker (Buster) • 07-23-14, Atmospheric Test Site - Ruth • 07-23-15, Atmospheric Test Site T7-4 • 07-23-16, Atmospheric Test Site B7-b • 07-23-17, Atmospheric Test Site - Climax These 15 CASs include releases from 30 atmospheric tests conducted in the approximately 1 square mile of CAU 104. Because releases associated with the CASs included in this CAU overlap and are not separate and distinguishable, these CASs are addressed jointly at the CAU level. The purpose of this CADD/CAP is to evaluate potential corrective action alternatives (CAAs), provide the rationale for the selection of recommended CAAs, and provide the plan for implementation of the recommended CAA for CAU 104. Corrective action investigation (CAI) activities were performed from October 4, 2011, through May 3, 2012, as set forth in the CAU 104 Corrective Action Investigation Plan.
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L_r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L_r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
International Nuclear Information System (INIS)
Shangguan Ming-Jia; Xia Hai-Yun; Dou Xian-Kang; Wang Chong; Qiu Jia-Wei; Zhang Yun-Peng; Shu Zhi-Feng; Xue Xiang-Hui
2015-01-01
A correction considering the effects of atmospheric temperature, pressure, and Mie contamination must be performed for wind retrieval from a Rayleigh Doppler lidar (RDL), since the so-called Rayleigh response is directly related to the convolution of the optical transmission of the frequency discriminator with the Rayleigh–Brillouin spectrum of the molecular backscattering. Thus, real-time, on-site profiles of atmospheric pressure, temperature, and aerosols should be provided as inputs to the wind retrieval. First, temperature profiles below and above 35 km are retrieved from a high spectral resolution lidar (HSRL) and a Rayleigh integration lidar (RIL), respectively, both incorporated into the RDL. Second, when radiosonde data are not available, the pressure profile is taken from the European Centre for Medium-Range Weather Forecasts (ECMWF) analysis. Third, the Klett–Fernald algorithms are adopted to estimate the Mie and Rayleigh components of the atmospheric backscattering. The backscattering ratio is then determined by a nonlinear fit of the transmission of the atmospheric backscattering through the Fabry–Perot interferometer (FPI) to a proposed model. In validation experiments, wind profiles from the lidar show good agreement with radiosonde data over the overlapping altitudes. Finally, a continuous wind observation demonstrates the stability of the correction scheme.
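The Klett–Fernald step mentioned above can be illustrated with a minimal backward (far-end) Fernald inversion for the aerosol backscatter profile. All numbers below (profile shape, lidar ratios, reference altitude) are invented for illustration, not values from the paper; the check simply feeds the inversion a synthetic aerosol-free signal and verifies that it retrieves essentially zero aerosol backscatter.

```python
import numpy as np

def fernald_backward(r, X, beta_m, S_a, S_m, beta_a_ref=0.0):
    """Backward Fernald inversion: aerosol backscatter profile from the
    range-corrected lidar signal X, given the molecular backscatter profile
    beta_m, an assumed aerosol lidar ratio S_a, the molecular lidar ratio S_m,
    and an assumed aerosol backscatter at the far-end reference bin."""
    beta_t = np.zeros_like(X)
    beta_t[-1] = beta_a_ref + beta_m[-1]          # total backscatter at reference
    for i in range(len(r) - 2, -1, -1):
        dr = r[i + 1] - r[i]
        # trapezoid of 2*(S_a - S_m)*integral(beta_m) over [r_i, r_{i+1}]
        E = np.exp((S_a - S_m) * (beta_m[i] + beta_m[i + 1]) * dr)
        num = X[i] * E
        den = X[i + 1] / beta_t[i + 1] + S_a * (X[i + 1] + num) * dr
        beta_t[i] = num / den
    return beta_t - beta_m                        # aerosol part only

# hypothetical aerosol-free atmosphere: the retrieval should return ~zero aerosol
r = np.linspace(0.1, 10.0, 500)                   # range, km
S_m = 8.0 * np.pi / 3.0                           # molecular lidar ratio, sr
beta_m = 1.5e-3 * np.exp(-r / 8.0)                # toy molecular profile, km^-1 sr^-1
tau = np.concatenate([[0.0], np.cumsum(0.5 * (beta_m[1:] + beta_m[:-1]) * np.diff(r))])
X = beta_m * np.exp(-2.0 * S_m * tau)             # synthetic range-corrected signal
beta_a = fernald_backward(r, X, beta_m, S_a=50.0, S_m=S_m)
```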
Energy Technology Data Exchange (ETDEWEB)
Matthews, Patrick
2013-09-01
This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 105: Area 2 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada. CAU 105 comprises the following five corrective action sites (CASs): • 02-23-04, Atmospheric Test Site - Whitney (closure in place) • 02-23-05, Atmospheric Test Site T-2A (closure in place) • 02-23-06, Atmospheric Test Site T-2B (clean closure) • 02-23-08, Atmospheric Test Site T-2 (closure in place) • 02-23-09, Atmospheric Test Site - Turk (closure in place) The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 105 based on the implementation of the corrective actions. Corrective action investigation (CAI) activities were performed from October 22, 2012, through May 23, 2013, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites; and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices.
Energy Technology Data Exchange (ETDEWEB)
Patrick Matthews
2009-05-01
This Corrective Action Decision Document/Closure Report has been prepared for Corrective Action Unit (CAU) 370, T-4 Atmospheric Test Site, located in Area 4 at the Nevada Test Site, Nevada, in accordance with the Federal Facility Agreement and Consent Order (FFACO). Corrective Action Unit 370 is comprised of Corrective Action Site (CAS) 04-23-01, Atmospheric Test Site T-4. The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 370 due to the implementation of the corrective action of closure in place with administrative controls. To achieve this, corrective action investigation (CAI) activities were performed from June 25, 2008, through April 2, 2009, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 370: T-4 Atmospheric Test Site and Record of Technical Change No. 1.
Skin Temperature Analysis and Bias Correction in a Coupled Land-Atmosphere Data Assimilation System
Bosilovich, Michael G.; Radakovich, Jon D.; daSilva, Arlindo; Todling, Ricardo; Verter, Frances
2006-01-01
In an initial investigation, remotely sensed surface temperature is assimilated into a coupled atmosphere/land global data assimilation system, with explicit accounting for biases in the model state. In this scheme, an incremental bias correction term is introduced in the model's surface energy budget. In its simplest form, the algorithm estimates and corrects a constant time-mean bias for each gridpoint; additional benefits are attained with a refined version of the algorithm that allows for a correction of the mean diurnal cycle. The method is validated against the assimilated observations, as well as independent near-surface air temperature observations. In many regions, not accounting for the diurnal cycle of bias degraded the diurnal amplitude of the background model air temperature. Energy fluxes collected through the Coordinated Enhanced Observing Period (CEOP) are used to more closely inspect the surface energy budget. In general, sensible heat flux is improved with the surface temperature assimilation, and two stations show a reduction of bias by as much as 30 W m-2. At the Rondonia station in Amazonia, the Bowen ratio changes direction, an improvement related to the temperature assimilation. However, at many stations the monthly latent heat flux bias is slightly increased. These results show the impact of univariate assimilation of surface temperature observations on the surface energy budget, and suggest the need for multivariate land data assimilation. The results also show the need for independent validation data, especially flux stations in varied climate regimes.
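The incremental time-mean bias correction described above can be caricatured with a scalar example. The gain, bias, and observation values below are invented for illustration (not from the paper): the scheme keeps a running bias estimate that each innovation nudges, so the bias-corrected background converges to the unbiased observations.

```python
# toy incremental bias estimator: forecast = truth + bias (bias unknown to the scheme)
truth, model_bias = 290.0, 2.0     # skin temperature (K) and a constant forecast bias
gamma = 0.2                        # bias-update gain (assumed value)
b_est = 0.0                        # running estimate of the time-mean bias

for _ in range(60):
    forecast = truth + model_bias              # biased background
    obs = truth                                # perfect observation, for clarity
    innovation = obs - (forecast - b_est)      # misfit of the bias-corrected background
    b_est -= gamma * innovation                # incremental update of the bias estimate
```

With a perfect observation the innovation reduces to `b_est - model_bias`, so the update is a contraction and `b_est` converges geometrically to the true bias.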
An improved non-uniformity correction algorithm and its GPU parallel implementation
Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui
2018-05-01
The performance of the SLP-THP (spatial low-pass and temporal high-pass) non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which always leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint is proposed, together with a new way to estimate the spatial low-frequency component. First, the details and contours of the input image are obtained by minimizing, respectively, the local Gaussian curvature and the mean curvature of the image surface. Then, the guided filter is utilized to combine these two parts to obtain the estimate of the spatial low-frequency component. Finally, this SLP component is fed into the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicate that the proposed algorithm reduces the non-uniformity without losing detail. A GPU-based parallel implementation that runs 150 times faster than the CPU version is also presented, showing that the proposed algorithm has great potential for real-time application.
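The guided-filter combination step can be sketched with a generic single-channel guided filter (after He et al.); the window radius and regularization `eps` are arbitrary choices here, and the curvature-based decomposition that feeds it is not reproduced.

```python
import numpy as np

def box_filter(img, r):
    """Windowed mean with radius r via an integral image (edge-normalized)."""
    h, w = img.shape
    cum = np.cumsum(np.cumsum(np.pad(img, ((1, 0), (1, 0))), axis=0), axis=1)
    y0 = np.clip(np.arange(h) - r, 0, h); y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(w) - r, 0, w); x1 = np.clip(np.arange(w) + r + 1, 0, w)
    s = (cum[np.ix_(y1, x1)] - cum[np.ix_(y0, x1)]
         - cum[np.ix_(y1, x0)] + cum[np.ix_(y0, x0)])
    return s / ((y1 - y0)[:, None] * (x1 - x0)[None, :])

def guided_filter(I, p, r, eps):
    """Single-channel guided filter: edge-preserving smoothing of p guided by I,
    the kind of operator used above to fuse two components into one SLP estimate."""
    mean_I, mean_p = box_filter(I, r), box_filter(p, r)
    cov_Ip = box_filter(I * p, r) - mean_I * mean_p
    var_I = box_filter(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)          # local linear coefficients
    b = mean_p - a * mean_I
    return box_filter(a, r) * I + box_filter(b, r)
```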
Pathak, P.; Guyon, O.; Jovanovic, N.; Lozi, J.; Martinache, F.; Minowa, Y.; Kudo, T.; Kotani, T.; Takami, H.
2018-02-01
Adaptive optics (AO) systems delivering high levels of wavefront correction are now common at observatories. One of the main limitations to image quality after wavefront correction comes from atmospheric refraction. An atmospheric dispersion compensator (ADC) is employed to correct for atmospheric refraction. The correction is applied based on a look-up table consisting of dispersion values as a function of telescope elevation angle. This look-up-table-based correction of atmospheric dispersion results in imperfect compensation, leaving residual dispersion in the point spread function (PSF), and is insufficient when sub-milliarcsecond precision is required. The presence of residual dispersion can limit the achievable contrast of high-performance coronagraphs or compromise high-precision astrometric measurements. In this paper, we present the first on-sky closed-loop correction of atmospheric dispersion directly using science-path images. The concept behind the measurement of dispersion utilizes the chromatic scaling of focal-plane speckles. An adaptive speckle grid generated with a deformable mirror (DM) that has a sufficiently large number of actuators is used to accurately measure the residual dispersion and subsequently correct it by driving the ADC. We demonstrate, with the Subaru Coronagraphic Extreme AO (SCExAO) system, on-sky closed-loop correction of residual dispersion for instruments that require sub-milliarcsecond correction.
Specificity of Atmosphere Correction of Satellite Ocean Color Data in Far-Eastern Region
Trusenkova, O.; Kachur, V.; Aleksanin, A. I.
2016-02-01
An error analysis of satellite remote sensing reflectance (Rrs) from MODIS/Aqua colour data was carried out for two atmospheric correction algorithms (NIR and MUMM) in the Far-Eastern region. Several sets of unique, collocated in situ and satellite measurements were analysed; each set contains ASD spectroradiometer measurements for each satellite pass. The measurement locations were selected so that the chlorophyll-a concentration has high variability. Analysis of an arbitrary set demonstrated that the main error component is systematic and has a simple dependence on the Rrs values. The reasons for this error behaviour are considered. The most probable explanation of the large errors in ocean colour parameters in the Far-Eastern region is the possible presence of high concentrations of continental aerosol. A comparison of satellite and in situ measurements at AERONET stations in the USA and South Korea showed that, for NIR correction of the atmospheric influence, the error values in these two regions differ by up to a factor of 10 for almost the same water turbidity and relatively good accuracy of the computed aerosol optical thickness. The study was supported by Russian Scientific Foundation grant No. 14-50-00034, by Russian Foundation of Basic Research grant No. 15-35-21032-mol-a-ved, and by the Program of Basic Research "Far East" of the Far Eastern Branch of the Russian Academy of Sciences.
International Nuclear Information System (INIS)
Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.
1993-01-01
With transmission-corrected gamma-ray nondestructive assay instruments such as the Segmented Gamma Scanner (SGS) and the Tomographic Gamma Scanner (TGS) currently under development at Los Alamos National Laboratory, the amount of gamma-ray-emitting material can be underestimated for samples in which the emitting material consists of particles or lumps of highly attenuating material. This problem is encountered in the assay of uranium- and plutonium-bearing samples. To correct for this source of bias, we have developed a least-squares algorithm that uses transmission-corrected assay results for several emitted energies and a weighting function to account for statistical uncertainties in the assay results. The variation of effective lump size in the fitted model is parameterized, which allows the correction to be performed for a wide range of lump-size distributions. It may be possible to use the reduced chi-squared value obtained in the fit to identify samples in which assay assumptions have been violated. We found that the algorithm significantly reduced bias in simulated assays and improved SGS assay results for plutonium-bearing samples. Further testing will be conducted with the TGS, which is expected to be less susceptible than the SGS to this systematic source of bias.
Distortion correction algorithm for UAV remote sensing image based on CUDA
International Nuclear Information System (INIS)
Wenhao, Zhang; Yingcheng, Li; Delong, Li; Changsheng, Teng; Jin, Liu
2014-01-01
In China, natural disasters are characterized by wide distribution, severe destruction, and high impact range, and they cause significant property damage and casualties every year. Following a disaster, timely and accurate acquisition of geospatial information provides an important basis for disaster assessment, emergency relief, and reconstruction. In recent years, Unmanned Aerial Vehicle (UAV) remote sensing systems have played an important role in major natural disasters, with UAVs becoming an important means of obtaining disaster information. A UAV is typically equipped with a non-metric digital camera whose lens distortion results in large geometric deformation of the acquired images, affecting the accuracy of subsequent processing. The slow speed of the traditional CPU-based distortion correction algorithm cannot meet the requirements of disaster emergencies. Therefore, we propose a Compute Unified Device Architecture (CUDA)-based image distortion correction algorithm for UAV remote sensing, which takes advantage of the powerful parallel processing capability of the GPU and greatly improves the efficiency of distortion correction. Our experiments show that, compared with the traditional CPU algorithm and excluding image loading and saving times, the maximum speedup of the proposed algorithm reaches 58 times that of the traditional algorithm. Data processing time can thus be reduced by one to two hours, considerably improving disaster emergency response capability.
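The per-pixel mapping that makes this problem such a good GPU fit can be sketched on the CPU. The paper does not state its calibration model, so a Brown–Conrady radial model with assumed coefficients is used here purely for illustration; each output pixel is computed independently, which is exactly what a CUDA kernel would parallelize.

```python
import numpy as np

def undistort(img, k1, k2, cx, cy, f):
    """Correct radial (Brown-Conrady) lens distortion by inverse mapping with
    nearest-neighbour sampling: for each undistorted output pixel, look up the
    corresponding location in the distorted source image."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - cx) / f                        # normalized image coordinates
    y = (ys - cy) / f
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    xd = np.clip(np.round(x * scale * f + cx).astype(int), 0, w - 1)
    yd = np.clip(np.round(y * scale * f + cy).astype(int), 0, h - 1)
    return img[yd, xd]                       # sample the distorted image
```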
A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint
Energy Technology Data Exchange (ETDEWEB)
Dall'Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Simonetto, Andrea [Universite catholique de Louvain]
2017-07-25
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function without requiring the computation of its inverse. Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithm.
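For the unconstrained case, the prediction-correction template can be sketched on a toy time-varying quadratic. The problem, step sizes, and the availability of the time derivative are all assumptions for illustration; the point is that the prediction step, which anticipates how the optimizer drifts, sharply reduces the steady-state tracking error relative to correction-only gradient steps.

```python
import numpy as np

# track the minimizer of f(x; t) = 0.5*(x - a(t))**2, whose optimum is x*(t) = a(t)
a  = np.sin            # time-varying target
da = np.cos            # its time derivative (assumed known, used by the prediction)
h, alpha, T = 0.1, 0.5, 400   # sampling period, correction step size, horizon

def run(predict):
    x, errs = 0.0, []
    for k in range(T):
        t = k * h
        if predict:
            # prediction: x+ = x - H^{-1} * grad_tx(f) * h, with Hessian H = 1
            # and grad_tx(f) = -a'(t), giving x+ = x + a'(t) * h
            x = x + da(t) * h
        # correction: one gradient step on f(.; t + h)
        x = x - alpha * (x - a(t + h))
        errs.append(abs(x - a(t + h)))
    return max(errs[T // 2:])   # steady-state tracking error

err_pc, err_c = run(True), run(False)
```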
Quantum mean-field decoding algorithm for error-correcting codes
International Nuclear Information System (INIS)
Inoue, Jun-ichi; Saika, Yohei; Okada, Masato
2009-01-01
We numerically examine a quantum version of the TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we examine its usefulness in retrieving the original bit sequence (message) of finite length. The decoding dynamics is derived explicitly, and we evaluate the average-case performance through the bit-error rate (BER).
International Nuclear Information System (INIS)
Murase, Kenya; Itoh, Hisao; Mogami, Hiroshi; Ishine, Masashiro; Kawamura, Masashi; Iio, Atsushi; Hamamoto, Ken
1987-01-01
A computer-based simulation method was developed to assess the relative effectiveness and availability of various attenuation compensation algorithms in single photon emission computed tomography (SPECT). The effects of the nonuniformity of the attenuation coefficient distribution in the body, of errors in determining the body contour, and of statistical noise on reconstruction accuracy, as well as the computation time of the algorithms, were studied. The algorithms were classified into three groups: precorrection, postcorrection, and iterative correction methods. Furthermore, a hybrid method was devised by combining several methods. This study will be useful for understanding the characteristics, limitations, and strengths of the algorithms and for finding a practical correction method for photon attenuation in SPECT.
Atmospheric turbulence and sensor system effects on biometric algorithm performance
Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy
2015-05-01
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However, the limiting conditions of such systems have yet to be fully studied for long-range applications and degraded imaging environments. Biometric technologies used for long-range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion, and intensity fluctuations that can severely degrade the image quality of electro-optic and thermal imaging systems and, in the case of biometric technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence-degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation of biometric sensor systems.
Fully multidimensional flux-corrected transport algorithms for fluids
International Nuclear Information System (INIS)
Zalesak, S.T.
1979-01-01
The theory of flux-corrected transport (FCT) developed by Boris and Book is placed in a simple, generalized format, and a new algorithm for implementing the critical flux-limiting stage in multidimensions without resort to time splitting is presented. The new flux-limiting algorithm allows the use of FCT techniques in multidimensional fluid problems for which time splitting would produce unacceptable numerical results, such as those involving incompressible or nearly incompressible flow fields. The 'clipping' problem associated with the original one-dimensional flux limiter is also eliminated or alleviated. Test results and applications to a two-dimensional fluid plasma problem are presented.
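The flux-limiting stage can be illustrated in one dimension for linear advection: a minimal sketch using upwind as the low-order scheme, Lax–Wendroff as the high-order one, and the Zalesak limiter on the antidiffusive fluxes in between. The grid, CFL number, and test profile are arbitrary choices; the sketch is conservative and keeps the solution within its local bounds, which is the essence of FCT.

```python
import numpy as np

def fct_advect_step(u, c, dx, dt):
    """One step of 1-D flux-corrected transport (Zalesak limiter) for
    u_t + c u_x = 0 on a periodic grid, assuming c > 0 and c*dt/dx <= 1."""
    lam = dt / dx
    up1 = np.roll(u, -1)                                   # u[i+1]
    f_lo = c * u                                           # upwind flux at i+1/2
    f_hi = 0.5 * c * (u + up1) - 0.5 * c**2 * lam * (up1 - u)  # Lax-Wendroff flux
    a = f_hi - f_lo                                        # antidiffusive flux
    utd = u - lam * (f_lo - np.roll(f_lo, 1))              # low-order transported solution
    # local bounds from both u and utd over {i-1, i, i+1}
    umax = np.maximum.reduce([np.roll(u, 1), u, up1, np.roll(utd, 1), utd, np.roll(utd, -1)])
    umin = np.minimum.reduce([np.roll(u, 1), u, up1, np.roll(utd, 1), utd, np.roll(utd, -1)])
    a_in  = np.maximum(np.roll(a, 1), 0) - np.minimum(a, 0)   # P+ : antidiffusive flux into cell i
    a_out = np.maximum(a, 0) - np.minimum(np.roll(a, 1), 0)   # P- : flux out of cell i
    eps = 1e-30
    rp = np.minimum(1.0, (umax - utd) / (lam * a_in + eps))   # allowed fraction, inflow
    rm = np.minimum(1.0, (utd - umin) / (lam * a_out + eps))  # allowed fraction, outflow
    # limiter at interface i+1/2: receiver's R+ vs donor's R-
    clim = np.where(a >= 0, np.minimum(np.roll(rp, -1), rm),
                    np.minimum(rp, np.roll(rm, -1)))
    return utd - lam * (clim * a - np.roll(clim * a, 1))

u0 = np.zeros(100); u0[10:30] = 1.0   # square wave on a periodic domain
u = u0.copy()
for _ in range(200):                  # one full period at CFL = 0.5
    u = fct_advect_step(u, c=1.0, dx=1.0, dt=0.5)
```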
Atmosphere Refraction Effects in Object Locating for Optical Satellite Remote Sensing Images
Directory of Open Access Journals (Sweden)
YAN Ming
2015-09-01
The collinear rigorous geometric model contains an atmospheric refraction geometric error in off-nadir observation. To estimate and correct this error, the ISO standard atmosphere model and the Owens atmospheric refractive index algorithm are applied to calculate the refractive index of the atmosphere at different latitudes and altitudes. A weighted-mean algorithm is used to reduce the eight-layer ISO standard atmosphere to a simple two-layer (troposphere and stratosphere) spherical atmosphere, and a line-of-sight (LOS) vector tracing algorithm is used to estimate the atmospheric refraction geometric error at different off-nadir observation angles. The results show that atmospheric refraction introduces a geometric displacement of about 2.5 m at a 30-degree off-nadir angle and about 9 m at 45 degrees. Therefore, during the geolocation processing of agile-platform and extra-wide, high-spatial-resolution imagery, the influence of atmospheric refraction needs to be taken into account and the refraction geometric error corrected, to enhance geolocation precision without GCPs.
TPC cross-talk correction: CERN-Dubna-Milano algorithm and results
De Min, A; Guskov, A; Krasnoperov, A; Nefedov, Y; Zhemchugov, A
2003-01-01
The CDM (CERN-Dubna-Milano) algorithm for TPC Xtalk correction is presented and discussed in detail. It is a data-driven, model-independent approach to the problem of Xtalk correction. It accounts for arbitrary amplitudes and pulse shapes of signals, and corrects (almost) all generations of Xtalk, with a view to handling (almost) correctly even complex multi-track events. Results on preamp amplification and preamp linearity from the analysis of test-charge injection data of all six TPC sectors are presented. The minimal expected error on the measurement of signal charges in the TPC is discussed. Results are given on the application of the CDM Xtalk correction to test-charge events and krypton events.
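The abstract does not spell out the CDM algorithm itself, but the underlying idea of a data-driven linear cross-talk correction can be illustrated generically: if test-charge injection characterizes a coupling matrix between channels, correction amounts to solving that linear system. The matrix structure and values below are invented for illustration.

```python
import numpy as np

def build_xtalk_matrix(n, coupling):
    """Unit diagonal plus nearest-neighbour coupling fractions, of the kind
    that could be measured channel-by-channel with test-charge injection."""
    M = np.eye(n)
    idx = np.arange(n - 1)
    M[idx, idx + 1] = coupling    # fraction of channel i+1's signal seen on channel i
    M[idx + 1, idx] = coupling    # and vice versa
    return M

def correct_xtalk(observed, M):
    """Undo linear cross-talk by solving M @ true = observed."""
    return np.linalg.solve(M, observed)

M = build_xtalk_matrix(6, 0.05)
true = np.array([0.0, 10.0, 0.0, 5.0, 0.0, 0.0])   # toy pad charges
corrected = correct_xtalk(M @ true, M)
```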
Precision Photometric Extinction Corrections from Direct Atmospheric Measurements
McGraw, John T.; Zimmer, P.; Linford, J.; Simon, T.; Measurement Astrophysics Research Group
2009-01-01
For decades, astronomical extinction corrections have been accomplished using nightly mean extinction coefficients derived from Langley plots measured with the same telescope used for photometry. Because this technique results in lost time on program fields, observers only grudgingly made sporadic extinction measurements. Occasionally extinction corrections are not measured nightly but are made using tabulated mean monthly or even quarterly extinction coefficients. Any observer of the sky knows that Earth's atmosphere is an ever-changing fluid in which are embedded extinction sources ranging from Rayleigh (molecular) scattering, to aerosol, smoke, and dust scattering and absorption, to "just plain cloudy". Our eyes also tell us that the type, direction, and degree of extinction change on time scales of minutes or less - typically shorter than many astronomical observations. Thus, we should expect that atmospheric extinction can change significantly during a single observation. Mean extinction coefficients might be well-defined nightly means, but those means have high variance because they do not accurately record the wavelength-, time-, and angle-dependent extinction actually affecting each observation. Our research group is implementing lidar measurements made in the direction of observation with one-minute cadence, from which the absolute monochromatic extinction can be measured. Simultaneous spectrophotometry of nearby bright standard stars allows derivation and MODTRAN modeling of atmospheric transmission as a function of wavelength for the atmosphere through which an observation is made. Application of this technique is demonstrated. Accurate real-time extinction measurements are an enabling factor for sub-1% photometry. This research is supported by NSF Grant 0421087 and AFRL Grant #FA9451-04-2-0355.
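The traditional Langley technique the abstract contrasts with is a straight-line fit of log-signal against airmass; a minimal version follows, with synthetic numbers (not data from this group) standing in for a night's measurements.

```python
import numpy as np

# Langley plot: ln S(m) = ln S0 - k*m, so the slope gives the extinction coefficient k
m = np.linspace(1.0, 3.0, 25)           # airmass values over a night (assumed)
S0, k_true = 1000.0, 0.15               # extraterrestrial signal and extinction (assumed)
S = S0 * np.exp(-k_true * m)            # noiseless synthetic measurements

slope, intercept = np.polyfit(m, np.log(S), 1)
k_fit, S0_fit = -slope, np.exp(intercept)
```

On real data the scatter about this line, driven by the minute-scale extinction variability described above, is exactly what the lidar-based monochromatic measurement is designed to resolve.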
International Nuclear Information System (INIS)
De Agostini, A.; Moretti, R.; Belletti, S.; Maira, G.; Magri, G.C.; Bestagno, M.
1992-01-01
The correction of organ movements in sequential radionuclide renography was performed using an iterative algorithm that, by means of a set of rectangular regions of interest (ROIs), does not require any anatomical marker or manual elaboration of frames. The realignment programme proposed here is largely independent of the spatial and temporal distribution of activity and analyses the rotational movement in a simplified but reliable way. The position of the object inside a frame is evaluated by choosing the best ROI in a set of ROIs shifted 1 pixel around the central one. Statistical tests have to be fulfilled by the algorithm in order to activate the realignment procedure. Validation of the algorithm was done for different acquisition set-ups and organ movements. Results, summarized in Table 1, show that in about 90% of the simulated experiments the algorithm is able to correct the movements of the object with a maximum error less than or equal to 1 pixel. The usefulness of the realignment programme was demonstrated with sequential radionuclide renography as a typical clinical application. The algorithm-corrected curves of a 1-year-old patient were completely different from those obtained without a motion correction procedure. The algorithm may also be applicable to other types of scintigraphic examinations, besides functional imaging in which the realignment of frames of the dynamic sequence is an intrinsic demand.
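A toy version of the shifted-ROI search can make the idea concrete: each frame is compared against a reference inside a rectangular ROI under the nine 1-pixel shifts, and the best shift is accumulated iteratively. The statistical activation tests and the simplified rotation handling of the actual algorithm are omitted, and the similarity measure (squared difference) is an assumption.

```python
import numpy as np

def best_shift(frame, reference, roi):
    """Return the 1-pixel shift (dy, dx) that best aligns `frame` to
    `reference` inside the rectangular ROI (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    best, best_err = (0, 0), np.inf
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            err = np.sum((shifted[y0:y1, x0:x1] - reference[y0:y1, x0:x1]) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def realign(frames, roi, max_steps=5):
    """Align each frame to the first by accumulating best 1-pixel shifts."""
    ref = frames[0]
    out = [ref]
    for f in frames[1:]:
        cur = f
        for _ in range(max_steps):
            dy, dx = best_shift(cur, ref, roi)
            if (dy, dx) == (0, 0):
                break
            cur = np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
        out.append(cur)
    return out

ref = np.zeros((32, 32)); ref[10:14, 10:14] = 1.0     # synthetic organ "blob"
moved = np.roll(np.roll(ref, 2, axis=0), -1, axis=1)  # frame shifted by (2, -1)
aligned = realign([ref, moved], roi=(5, 25, 5, 25))
```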
A DSP-based neural network non-uniformity correction algorithm for IRFPA
Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu
2009-07-01
An effective neural network non-uniformity correction (NUC) algorithm based on a DSP is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm, and describe the design of a DSP-based NUC development platform for IRFPA. The hardware platform has low power consumption, with a 32-bit fixed-point DSP (TMS320DM643) as the core processor. The dependability and extensibility of the software are improved by the DSP/BIOS real-time operating system and Reference Framework 5. To achieve real-time performance, the calibration-parameter update is set at a lower task priority than video input and output in DSP/BIOS; in this way, updating the calibration parameters does not affect the video streams. The work flow of the system and the strategy for real-time operation are described. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high-quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite-sample properties of the proposed extension when there are missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that implement and/or automate the EM algorithm, and make the EM algorithm accessible to a wider and more general audience.
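The base EM iteration being partitioned can be recalled with a textbook example: a two-component, unit-variance Gaussian mixture, alternating an E-step (responsibilities) with an M-step (weighted re-estimation). This is the standard algorithm, not the authors' extension; the data and initialization are synthetic.

```python
import numpy as np

def em_mixture(x, n_iter=200):
    """EM for a two-component Gaussian mixture with unit variances:
    estimates the component means and the mixing weight."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude but effective initialization
    w = 0.5
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        p0 = (1 - w) * np.exp(-0.5 * (x - mu[0]) ** 2)
        p1 = w * np.exp(-0.5 * (x - mu[1]) ** 2)
        r = p1 / (p0 + p1)
        # M-step: re-estimate parameters from the expected memberships
        mu[0] = np.sum((1 - r) * x) / np.sum(1 - r)
        mu[1] = np.sum(r * x) / np.sum(r)
        w = r.mean()
    return mu, w

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
mu, w = em_mixture(x)
```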
Directory of Open Access Journals (Sweden)
Francisco Eugenio
2017-11-01
Remote multispectral data can provide valuable information for monitoring coastal water ecosystems. Specifically, high-resolution satellite-based imaging systems, such as WorldView-2 (WV-2), can generate information at the spatial scales needed to implement conservation actions for protected littoral zones. However, the coastal water-leaving radiance arriving at the space-based sensor is often small compared to the reflected radiance. In this work, approaches that use an accurate radiative transfer code to correct atmospheric effects, such as FLAASH, ATCOR, and 6S, have been implemented for high-resolution imagery. They have been assessed in real scenarios using field spectroradiometer data. In this context, the three approaches achieved excellent results, with a slightly superior performance observed for the 6S model-based algorithm. Finally, for the mapping of benthic habitats in shallow-water marine protected environments, a relevant application of the proposed atmospheric correction combined with an automatic deglinting procedure is presented. This approach is based on the integration of a linear mixing model of benthic classes within the radiative transfer model of the water. The complete methodology has been applied to selected ecosystems in the Canary Islands (Spain), and the obtained results allow the robust mapping of the spatial distribution and density of seagrass in coastal waters and the analysis of multitemporal variations related to human activity and climate change in littoral zones.
The generation algorithm of arbitrary polygon animation based on dynamic correction
Directory of Open Access Journals (Sweden)
Hou Ya Wei
2016-01-01
Based on a key-frame polygon sequence, this paper proposes a method that uses dynamic correction to generate continuous animation. First, we use a quadratic Bezier curve to interpolate the corresponding side vectors of consecutive frames of the polygon sequence, realizing continuity of the animation sequence. Then, according to the characteristics of the Bezier curve, we dynamically adjust the interpolation parameters to smooth the changes. Meanwhile, we use the Lagrange multiplier method to correct each intermediate polygon so that it closes. Finally, we provide the concrete algorithm flow and present numerical experiment results. The experiments show that the algorithm achieves excellent results.
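Two of the steps above are simple enough to sketch directly. Interpolated side vectors generally no longer sum to zero, and under a least-squares reading of the closure step (an assumption here, since the paper's exact formulation is not given), the Lagrange-multiplier correction of minimal total adjustment is simply to subtract the mean residual from every side.

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def close_polygon_sides(sides):
    """Minimal-norm closure correction: side vectors of a closed polygon must
    sum to zero. Minimizing sum ||v_i' - v_i||^2 subject to sum v_i' = 0
    (via a Lagrange multiplier) gives v_i' = v_i - mean(v)."""
    sides = np.asarray(sides, dtype=float)
    return sides - sides.mean(axis=0)

sides = np.array([[1.0, 0.2], [0.1, 1.0], [-0.8, -0.1], [-0.1, -0.9]])  # non-closing
closed = close_polygon_sides(sides)
mid = quadratic_bezier(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                       np.array([2.0, 0.0]), 0.5)
```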
Two-dimensional spatial distortion correction algorithm for scintillation gamma cameras
International Nuclear Information System (INIS)
Chaney, R.; Gray, E.; Jih, F.; King, S.E.; Lim, C.B.
1985-01-01
Spatial distortion in an Anger gamma camera originates fundamentally from the discrete nature of scintillation light sampling with an array of PMTs. Historically, digital distortion correction started with a method based on distortion measurement using a 1-D slit pattern and subsequent on-line bilinear approximation with 64 x 64 look-up tables for X and Y. However, the X, Y distortions are inherently two-dimensional in nature, and thus the validity of this 1-D calibration method becomes questionable as distortion amplitudes increase with the effort to obtain better spatial and energy resolution. The authors have developed a new, accurate 2-D correction algorithm. The method involves the steps of data collection from a 2-D orthogonal hole pattern, 2-D distortion vector measurement, 2-D Lagrangian polynomial interpolation, and transformation to the X, Y ADC frame. The impact of the numerical precision used in the correction and the accuracy of the bilinear approximation with varying look-up table size have been carefully examined through computer simulation, using a measured single-PMT light response function together with Anger positioning logic. The accuracy of different-order Lagrangian polynomial interpolations for correction table expansion from hole centroids was also investigated. The detailed algorithm and computer simulations are presented along with camera test results.
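The final on-line correction step described above, applying a look-up table of distortion vectors with bilinear interpolation, can be sketched as follows. The 8 x 8 table size and the field-of-view parameter are illustrative assumptions (the abstract describes 64 x 64 tables); the table contents would come from the 2-D hole-pattern calibration.

```python
def correct_event(x, y, dx_table, dy_table, table_size=8, fov=256.0):
    """Correct a detected event position using 2-D distortion tables.

    dx_table / dy_table: table_size x table_size lists of correction
    values sampled on a regular grid over a fov x fov field of view.
    The correction at (x, y) is obtained by bilinear interpolation
    between the four surrounding table nodes.
    """
    step = fov / (table_size - 1)
    gx = min(x / step, table_size - 1.001)
    gy = min(y / step, table_size - 1.001)
    i, j = int(gx), int(gy)
    fx, fy = gx - i, gy - j

    def bilerp(t):
        # weights of the four surrounding nodes sum to one
        return (t[j][i] * (1 - fx) * (1 - fy) + t[j][i + 1] * fx * (1 - fy)
                + t[j + 1][i] * (1 - fx) * fy + t[j + 1][i + 1] * fx * fy)

    return x + bilerp(dx_table), y + bilerp(dy_table)
```

A constant table shifts every event by the same vector, which is a quick sanity check that the interpolation weights are normalized.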
Directory of Open Access Journals (Sweden)
Chinsu Lin
2015-05-01
Changes in Land Use and Land Cover (LULC) affect the atmospheric, climatic, and biological spheres of the Earth. An accurate LULC map offers detailed information for resource management and for intergovernmental cooperation addressing global warming and biodiversity reduction. This paper examined the effects of pansharpening and atmospheric correction on LULC classification. Object-Based Support Vector Machine (OB-SVM) and Pixel-Based Maximum Likelihood Classifier (PB-MLC) were applied for LULC classification. Results showed that atmospheric correction is not necessary for LULC classification if classification is conducted on the original multispectral image. Pansharpening, however, plays a much more important role in classification accuracy than atmospheric correction: it increased classification accuracy by 12% on average compared to results without pansharpening. PB-MLC and OB-SVM achieved comparable classification rates, with LULC classification accuracies of 82% and 89%, respectively. A combination of atmospheric correction, pansharpening, and OB-SVM can offer promising LULC maps from WorldView-2 multispectral and panchromatic images.
UNIFICATION AND APPLICATIONS OF MODERN OCEANIC/ATMOSPHERIC DATA ASSIMILATION ALGORITHMS
Institute of Scientific and Technical Information of China (English)
QIAO Fang-li; ZHANG Shao-qing; YUAN Ye-li
2004-01-01
The key mathematics and applications of various modern atmospheric/oceanic data assimilation methods, including Optimal Interpolation (OI), the 4-dimensional variational approach (4D-Var), and filters, were systematically reviewed and classified. Based on the data assimilation philosophy, i.e., using model dynamics to extract the observational information, the common character of the problem, such as the probabilistic nature of the evolution of the atmospheric/oceanic system and noisy, irregularly spaced observations, and the advantages and disadvantages of these data assimilation algorithms were discussed. In the filtering framework, all modern data assimilation algorithms were unified: OI/3D-Var is a stationary filter, 4D-Var is a linear (Kalman) filter, and an ensemble of Kalman filters is able to construct a nonlinear filter. Nonlinear filters such as the Ensemble Kalman Filter (EnKF), Ensemble Adjustment Kalman Filter (EAKF), and Ensemble Transform Kalman Filter (ETKF) can, to some extent, account for the non-Gaussian information of the prior distribution from the model. The flow-dependent covariance estimated by an ensemble filter may be introduced to OI and 4D-Var to improve these traditional algorithms. In practice, the performance of algorithms may depend on the specific numerical model, and the choice of algorithm may depend on the specific problem. However, the unification of algorithms allows us to establish a unified test system to evaluate them, which provides more insights into data assimilation philosophies and helps improve data assimilation techniques.
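The filtering framework that unifies these methods can be illustrated with a scalar Kalman filter cycle. This is a didactic sketch only: all operators are scalars, an assumption far simpler than any operational assimilation system, but the forecast/analysis structure and the gain that weights model against observation are the same.

```python
def kalman_step(x, P, z, H=1.0, R=1.0, Q=0.0, M=1.0):
    """One forecast/analysis cycle of a scalar Kalman filter.

    x, P : prior state estimate and its error variance
    z    : observation with error variance R
    M, Q : linear model operator and model-error variance
    H    : observation operator
    In the filtering view of the abstract, OI/3D-Var corresponds to
    freezing the gain K in time, while an ensemble filter estimates
    the forecast variance Pf from an ensemble instead of propagating
    it analytically.
    """
    # forecast step: propagate the state and its error variance
    xf = M * x
    Pf = M * P * M + Q
    # analysis step: K is the Kalman gain weighting model vs observation
    K = Pf * H / (H * Pf * H + R)
    xa = xf + K * (z - H * xf)
    Pa = (1.0 - K * H) * Pf
    return xa, Pa
```

With equal prior and observation variances the analysis falls halfway between forecast and observation, and the analysis variance is halved.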
Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media
Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.
2017-09-01
It is well known that passive image correction of turbulence distortions often involves geometry-dependent deconvolution algorithms. Active imaging techniques using adaptive optics correction, on the other hand, should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach can obtain accurate and highly detailed images through turbulent media, and the processing algorithm requires far fewer iteration steps than conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.
International Nuclear Information System (INIS)
Bur'yan, V.I.; Kozlova, L.V.; Kuzhil', A.S.; Shikalov, V.F.
2005-01-01
The development of algorithms for correcting the inertia of self-powered neutron detectors (SPNDs) is motivated by the need to increase the response speed of in-core instrumentation systems (ICIS). Faster ICIS response will make it possible to monitor fast transient processes in the core in real time and, in the longer term, to use the signals of rhodium SPNDs for emergency protection functions based on local parameters. In this paper it is proposed to use a mathematical model of neutron flux measurement by SPNDs, in integral form, to construct the correction algorithms. This approach is, in this case, the most convenient for constructing recurrent flux estimation algorithms. Results are presented comparing neutron flux and reactivity estimates from ionization chamber readings with SPND signals corrected by the proposed algorithms.
A whole-space three-dimensional magnetotelluric inversion algorithm with static shift correction
Zhang, K.
2016-12-01
Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The correction method is based on 3D theory and real data: static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is an automatic computer processing technique that adds no extra cost and avoids additional field work and indoor processing, with good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion runs on an ordinary PC with high efficiency and accuracy. MT data from surface, seabed, and underground stations can all be used in the inversion algorithm. A verification and application example of the 3D inversion algorithm is shown in Figure 1. As the comparison in Figure 1 shows, the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the data type (impedance, tipper, or impedance and tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for terrain inversion, which makes it useful for studying the continental shelf with continuous exploration across land, marine, and underground settings. The three-dimensional electrical model of the ore zone reflects the basic information of stratum, rock, and structure. Although it cannot indicate the ore body position directly, important clues are provided for prospecting work by the delineation of the diorite pluton uplift range. The test results show that the high quality of
Ground-Based Correction of Remote-Sensing Spectral Imagery
Adler-Golden, Steven M.; Rochford, Peter; Matthew, Michael; Berk, Alexander
2007-01-01
Software has been developed for an improved method of correcting for atmospheric optical effects (primarily the effects of aerosols and water vapor) in spectral images of the surface of the Earth acquired by airborne and spaceborne remote-sensing instruments. In this method, the variables needed for the corrections are extracted from the readings of a radiometer located on the ground in the vicinity of the scene of interest. The software includes algorithms that analyze measurement data acquired from a shadow-band radiometer. These algorithms are based on a prior radiation transport software model, called MODTRAN, that has been developed through several versions up to what are now known as MODTRAN4 and MODTRAN5. These components have been integrated with a user-friendly Interactive Data Language (IDL) front end and an advanced version of MODTRAN4. Software tools for handling general data formats, performing a Langley-type calibration, and generating an output file of retrieved atmospheric parameters for use in another atmospheric-correction computer program known as FLAASH have also been incorporated into the present software. Concomitantly, a version of FLAASH has been developed that utilizes the retrieved atmospheric parameters to process spectral image data.
Atmospheric correction at AERONET locations: A new science and validation data set
Wang, Y.; Lyapustin, A.I.; Privette, J.L.; Morisette, J.T.; Holben, B.
2009-01-01
This paper describes an Aerosol Robotic Network (AERONET)-based Surface Reflectance Validation Network (ASRVN) and its data set of spectral surface bidirectional reflectance and albedo based on Moderate Resolution Imaging Spectroradiometer (MODIS) TERRA and AQUA data. The ASRVN is an operational data collection and processing system. It receives 50 × 50 km² subsets of MODIS level 1B (L1B) data from the MODIS adaptive processing system along with AERONET aerosol and water-vapor information. It then performs an atmospheric correction (AC) for about 100 AERONET sites based on accurate radiative-transfer theory with complex quality control of the input data. The ASRVN processing software consists of an L1B data gridding algorithm, a new cloud-mask (CM) algorithm based on time-series analysis, and an AC algorithm using ancillary AERONET aerosol and water-vapor data. The AC is achieved by fitting the MODIS top-of-atmosphere measurements, accumulated over a 16-day interval, with theoretical reflectance parameterized in terms of the coefficients of the Li Sparse-Ross Thick (LSRT) model of the bidirectional reflectance factor (BRF). The ASRVN takes several steps to ensure high quality of results: 1) filtering of opaque clouds by the CM algorithm; 2) an aerosol filter to remove residual semitransparent and subpixel clouds, as well as cases with high inhomogeneity of aerosols in the processing area; 3) a requirement that the new solution be consistent with previously retrieved BRF and albedo; 4) rapid adjustment of the 16-day retrieval to surface changes using the last day of measurements; and 5) a seasonal backup spectral BRF database to increase data coverage. The ASRVN provides gapless or near-gapless coverage for the processing area; the gaps caused by clouds are filled most naturally with the latest solution for a given pixel. The ASRVN products include three parameters of the LSRT model (kL, kG, and kV), surface albedo
Actuator Disc Model Using a Modified Rhie-Chow/SIMPLE Pressure Correction Algorithm
DEFF Research Database (Denmark)
Rethore, Pierre-Elouan; Sørensen, Niels
2008-01-01
An actuator disc model for the flow solver EllipSys (2D & 3D) is proposed. It is based on a correction of the Rhie-Chow algorithm for using discrete body forces in a collocated-variable finite volume CFD code. It is compared with three cases for which an analytical solution is known.
Directory of Open Access Journals (Sweden)
Vitor Souza Martins
2017-03-01
Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (RW). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE, and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on board the European Space Agency's Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on a linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with correlation coefficients (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%-96%) and blue (84%-92%) bands. The atmospheric correction results for the visible bands illustrate the limitation of the methods over dark lakes (RW < 1%) and a better match of the RW shape with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, RW was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in RW (RMSE < 0.006). Finally, an extensive validation of the methods is required for
Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang
2008-03-01
An unwrapping and correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, with the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which is much closer to human vision. The algorithm is modeled in VHDL and implemented in an FPGA. The experimental results show that the proposed unwrapping and distortion correction algorithm has low computational complexity and that the architecture for dynamic panoramic image processing has low hardware cost and power consumption. The proposed algorithm is valid.
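The geometric core of the unwrapping, mapping each rectangular output pixel back to a polar position in the annular image and resampling with bilinear interpolation, can be sketched in software. This is an illustrative sketch with assumed parameters: the paper computes the rotation with CORDIC in FPGA hardware, and `math.cos`/`math.sin` play that role here.

```python
import math

def unwrap_annular(img, cx, cy, r_in, r_out, out_w, out_h):
    """Unwrap an annular panoramic image to a rectangular one.

    img: 2-D row-major list of grayscale values.  Each output column
    maps to an azimuth angle and each output row to a radius between
    r_in and r_out; the source position is sampled with bilinear
    interpolation between the four surrounding input pixels.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for v in range(out_h):
        r = r_in + (r_out - r_in) * v / (out_h - 1)
        for u in range(out_w):
            theta = 2.0 * math.pi * u / out_w
            x = cx + r * math.cos(theta)
            y = cy + r * math.sin(theta)
            i, j = int(x), int(y)
            if 0 <= i < w - 1 and 0 <= j < h - 1:
                fx, fy = x - i, y - j
                out[v][u] = (img[j][i] * (1 - fx) * (1 - fy)
                             + img[j][i + 1] * fx * (1 - fy)
                             + img[j + 1][i] * (1 - fx) * fy
                             + img[j + 1][i + 1] * fx * fy)
    return out
```

A uniform input image must unwrap to a uniform output, which checks the interpolation weights.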
GEO-LEO reflectance band inter-comparison with BRDF and atmospheric scattering corrections
Chang, Tiejun; Xiong, Xiaoxiong Jack; Keller, Graziela; Wu, Xiangqian
2017-09-01
Inter-comparison of the reflective solar bands between instruments onboard a geostationary satellite and a low-Earth-orbit satellite is very helpful for assessing their calibration consistency. GOES-R was launched on November 19, 2016, and Himawari-8 was launched on October 7, 2014. Unlike previous GOES instruments, the Advanced Baseline Imager on GOES-16 (GOES-R became GOES-16 on November 29, when it reached orbit) and the Advanced Himawari Imager (AHI) on Himawari-8 have onboard calibrators for the reflective solar bands. Assessment of calibration is important for enhancing their product quality. MODIS and VIIRS, with their stringent calibration requirements and excellent on-orbit calibration performance, provide good references. Simultaneous nadir overpass (SNO) and ray-matching are widely used inter-comparison methods for reflective solar bands. In this work, the inter-comparisons are performed over a pseudo-invariant target. Using a stable and uniform calibration site provides comparison at an appropriate reflectance level, accurate adjustment for band spectral coverage differences, reduced impact from pixel mismatching, and consistency of BRDF and atmospheric correction. The site used in this work is a desert site in Australia (latitude 29.0°S, longitude 139.8°E). Because of differences in solar and view angles, two corrections are applied to make the measurements comparable. The first is an atmospheric scattering correction: satellite sensors measure top-of-atmosphere reflectance, and the scattering, especially Rayleigh scattering, must be removed so that the ground reflectance can be derived. Secondly, the angle differences magnify the BRDF effect, so the ground reflectance must be corrected to obtain comparable measurements. The atmospheric correction is performed using a vector version of the Second Simulation of a Satellite Signal in the Solar Spectrum model, and BRDF correction is performed using a semi
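The Rayleigh part of the scattering correction mentioned above can be illustrated at first order. This sketch uses the single-scattering approximation with the Hansen-Travis fit for Rayleigh optical depth; it is a didactic simplification, not the vector 6S computation the study performs, which also accounts for multiple scattering and polarization.

```python
import math

def rayleigh_reflectance(lam_um, sza, vza, raa):
    """First-order Rayleigh path reflectance at top of atmosphere.

    lam_um : wavelength in micrometres
    sza, vza, raa : solar zenith, view zenith, relative azimuth (deg)
    Uses the Hansen & Travis (1974) fit for Rayleigh optical depth and
    the single-scattering formula rho = tau * P(Theta) / (4 mu0 mu).
    """
    l2 = lam_um ** -2
    tau = 0.008569 * lam_um ** -4 * (1.0 + 0.0113 * l2 + 0.00013 * l2 * l2)
    mu0 = math.cos(math.radians(sza))
    mu = math.cos(math.radians(vza))
    cos_theta = (-mu0 * mu
                 + math.sqrt((1 - mu0 ** 2) * (1 - mu ** 2))
                 * math.cos(math.radians(raa)))
    phase = 0.75 * (1.0 + cos_theta ** 2)      # Rayleigh phase function
    return tau * phase / (4.0 * mu0 * mu)

def remove_rayleigh(toa_refl, lam_um, sza, vza, raa):
    """Subtract the Rayleigh path reflectance from a TOA reflectance."""
    return toa_refl - rayleigh_reflectance(lam_um, sza, vza, raa)
```

At 0.47 µm the Rayleigh contribution is several percent of reflectance, which is why it dominates the correction for the blue bands.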
International Nuclear Information System (INIS)
Yang, P; Hu, S J; Chen, S Q; Yang, W; Xu, B; Jiang, W H
2006-01-01
In order to improve laser beam quality, a real-number-encoded genetic algorithm based on adaptive optics technology is presented. The algorithm is applied to control a 19-channel deformable mirror to correct phase aberration in a laser beam. When a traditional adaptive optics system is used to correct laser beam wavefront phase aberration, a precondition is measuring the phase aberration information in the beam. Using genetic algorithms, however, there is no need to know the phase aberration beforehand; the only required parameter is the light intensity behind a pinhole on the focal plane, which is used as the fitness function for the genetic algorithm. Simulation results show that the optimal shape of the 19-channel deformable mirror for correcting the phase aberration can be determined. The peak light intensity was improved by a factor of 21, and the encircled-energy Strehl ratio increased from 0.02 to 0.34 as the phase aberration was corrected with this technique.
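The key point of the abstract, that the GA needs only a scalar fitness (the pinhole intensity) and never the wavefront itself, can be sketched with a generic real-coded GA. Everything here is an illustrative assumption: the selection/crossover/mutation operators, the population sizes, and the stand-in fitness used in testing are not taken from the paper, and the 19-actuator mirror is represented simply as a `dim`-dimensional control vector.

```python
import random

def ga_optimize(fitness, dim, pop=30, gens=80, lo=-1.0, hi=1.0, seed=2):
    """Real-coded GA: tournament selection, blend crossover, Gaussian
    mutation, with two-individual elitism.

    fitness plays the role of the measured pinhole intensity on the
    focal plane; the GA maximizes it using only this scalar feedback.
    """
    rng = random.Random(seed)
    popu = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popu, key=fitness, reverse=True)
        popu = scored[:2]                          # keep the two best
        while len(popu) < pop:
            # size-3 tournament selection of two parents
            a = max(rng.sample(scored, 3), key=fitness)
            b = max(rng.sample(scored, 3), key=fitness)
            # blend crossover: child lies between the parents
            child = [x + rng.random() * (y - x) for x, y in zip(a, b)]
            if rng.random() < 0.2:                 # Gaussian mutation
                k = rng.randrange(dim)
                child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1)))
            popu.append(child)
    return max(popu, key=fitness)
```

With a toy quadratic "intensity" peaking at a known actuator setting, the GA recovers the optimum from intensity feedback alone.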
Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao
2016-06-01
An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local optimum problem encountered by traditional particle swarm optimization when estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional algorithm. The inertia weight is adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and does not fall into a local optimum. A Legendre polynomial is used to fit the bias field, the polynomial parameters are optimized globally, and finally the bias field is estimated and corrected. Compared to the improved entropy minimization algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy was 10% higher than that obtained with the improved entropy minimization algorithm. This algorithm can be applied to the correction of MR image bias fields.
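The adaptive-inertia mechanism can be sketched with a standard PSO loop. The convergence indicator used here (the mean absolute spread of particle fitness) is an assumption for illustration; the paper defines its own premature-convergence indicator, and the Legendre-polynomial bias-field parameterization is omitted in favor of a generic objective.

```python
import random

def pso_minimize(f, dim, n=20, iters=60, seed=1):
    """Particle swarm optimization with an adaptive inertia weight.

    A clustered swarm (small fitness spread, the premature-convergence
    symptom) gets a large inertia weight to restore exploration, while
    a widely spread swarm gets a small weight to favour exploitation.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        vals = [f(p) for p in pos]
        mean = sum(vals) / n
        spread = sum(abs(v - mean) for v in vals) / n
        w = 0.9 - 0.5 * spread / (1.0 + spread)    # adaptive inertia
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

On a simple quadratic bowl the swarm settles near the global minimum within a few dozen iterations.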
Intercomparison of attenuation correction algorithms for single-polarized X-band radars
Lengfeld, K.; Berenguer, M.; Sempere Torres, D.
2018-03-01
Attenuation due to liquid water is one of the largest uncertainties in radar observations. The effects of attenuation are generally inversely proportional to the wavelength, i.e. observations from X-band radars are more affected by attenuation than those from C- or S-band systems. On the other hand, X-band radars can measure precipitation fields at higher temporal and spatial resolution and are more mobile and easier to install due to their smaller antennas. A first algorithm for attenuation correction in single-polarized systems was proposed by Hitschfeld and Bordan (1954) (HB), but it becomes unstable in the presence of small errors (e.g. in the radar calibration) and strong attenuation. Therefore, methods have been developed that restrict the attenuation correction to keep the algorithm stable, using e.g. surface echoes (for space-borne radars) or mountain returns (for ground radars) as a final value (FV), or adjusting the radar constant (C) or the coefficient α. In the absence of mountain returns, measurements from C- or S-band radars can be used to constrain the correction. All these methods are based on the statistical relation between reflectivity and specific attenuation. Another way to correct for attenuation in X-band radar observations is to use additional information from less-attenuated radar systems, e.g. the ratio between X-band and C- or S-band radar measurements. Lengfeld et al. (2016) proposed such a method based on isotonic regression of the ratio between X- and C-band radar observations along the radar beam. This study presents a comparison of the original HB algorithm, three algorithms based on the statistical relation between reflectivity and specific attenuation, and two methods incorporating additional information from C-band radar measurements. Their performance in two precipitation events (one mainly convective, the other stratiform) shows that a restriction of the HB algorithm is necessary to avoid instabilities. A comparison with vertically pointing
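The gate-by-gate form of the Hitschfeld-Bordan scheme, together with the kind of restriction the abstract says is needed for stability, can be sketched as follows. The power-law coefficients and the cap value are illustrative X-band assumptions, not the ones used in the study.

```python
def hb_correct(z_measured_dbz, dr_km, a=3.0e-4, b=0.78, pia_cap=10.0):
    """Gate-by-gate Hitschfeld-Bordan attenuation correction.

    z_measured_dbz : measured reflectivity profile along the beam (dBZ)
    dr_km          : gate spacing in km
    a, b           : coefficients of the specific-attenuation power law
                     A = a * Z**b (A in dB/km, Z in mm^6 m^-3);
                     illustrative X-band values.
    pia_cap        : cap on the two-way path-integrated attenuation
                     (dB) -- the restriction that keeps the otherwise
                     unstable HB scheme from running away.
    """
    pia = 0.0                  # two-way path-integrated attenuation, dB
    corrected = []
    for z_dbz in z_measured_dbz:
        z_corr = z_dbz + min(pia, pia_cap)    # add attenuation so far
        corrected.append(z_corr)
        z_lin = 10.0 ** (z_corr / 10.0)       # dBZ -> mm^6 m^-3
        pia += 2.0 * a * z_lin ** b * dr_km   # accumulate two-way loss
    return corrected
```

For a uniform 40 dBZ profile the correction grows with range, roughly 0.8 dB per km of two-way attenuation with these coefficients, until the cap is reached.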
Pile-up correction by Genetic Algorithm and Artificial Neural Network
Kafaee, M.; Saramad, S.
2009-08-01
Pile-up distortion is a common problem in high-counting-rate radiation spectroscopy in many fields, including industrial, nuclear, and medical applications. Pulse pile-up can be reduced using hardware-based pile-up rejection; however, the phenomenon may not be eliminated completely by this approach, and the spectrum distortion caused by pile-up rejection can increase as well. In addition, inaccurate correction or rejection of pile-up artifacts in applications such as energy-dispersive X-ray (EDX) spectrometry can lead to lost counts, poor quantitative results, and even false element identification. Therefore, it is highly desirable to use software-based models to predict and correct any recognized pile-up signals in data acquisition systems. The present paper describes two new intelligent approaches to pile-up correction: the Genetic Algorithm (GA) and Artificial Neural Networks (ANNs). The validation and testing results of these new methods have been compared and show excellent agreement with data measured with a 60Co source and a NaI detector. Monte Carlo simulation of these new intelligent algorithms also shows their advantages over hardware-based pulse pile-up rejection methods.
Computational algorithms for simulations in atmospheric optics.
Konyaev, P A; Lukin, V P
2016-04-20
A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using the Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at turbo frequencies up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 processors at 1.5 GHz.
Quantum algorithms and quantum maps - implementation and error correction
International Nuclear Information System (INIS)
Alber, G.; Shepelyansky, D.
2005-01-01
We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings in a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows this fidelity decay to an exponential one, linear in time in the exponent. One of its advantages is that it does not require redundancy, so all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes, which can be corrected by quantum jump codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest-level encoding in concatenated error-correcting architectures.
Fourier domain preconditioned conjugate gradient algorithm for atmospheric tomography.
Yang, Qiang; Vogel, Curtis R; Ellerbroek, Brent L
2006-07-20
By 'atmospheric tomography' we mean the estimation of a layered atmospheric turbulence profile from measurements of the pupil-plane phase (or phase gradients) corresponding to several different guide star directions. We introduce what we believe to be a new Fourier domain preconditioned conjugate gradient (FD-PCG) algorithm for atmospheric tomography, and we compare its performance against an existing multigrid preconditioned conjugate gradient (MG-PCG) approach. Numerical results indicate that on conventional serial computers, FD-PCG is as accurate and robust as MG-PCG, but it is from one to two orders of magnitude faster for atmospheric tomography on 30 m class telescopes. Simulations are carried out for both natural guide stars and for a combination of finite-altitude laser guide stars and natural guide stars to resolve tip-tilt uncertainty.
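The preconditioned conjugate gradient structure underlying both FD-PCG and MG-PCG can be sketched generically. This is a minimal textbook PCG, not the paper's Fourier-domain implementation: the operator and preconditioner are passed as callables, so a Fourier-domain preconditioner applied via FFTs would slot into `apply_Minv` unchanged; the diagonal (Jacobi) preconditioner in the test is a stand-in assumption.

```python
def pcg(apply_A, b, apply_Minv, iters=50, tol=1e-10):
    """Preconditioned conjugate gradients for A x = b, with A
    symmetric positive definite.

    apply_A / apply_Minv : callables mapping a list to a list,
    representing the system operator and the (approximate) inverse
    preconditioner.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]                              # residual b - A*0
    z = apply_Minv(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = apply_Minv(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

On a 2 x 2 SPD system the iteration terminates at the exact solution within two steps, as CG theory guarantees.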
Directory of Open Access Journals (Sweden)
Xiaole Shen
2015-09-01
The uneven illumination caused by thin clouds reduces the quality of remote sensing images and adversely affects image interpretation. To remove the effect of thin clouds, an uneven illumination correction can be applied. In this paper, an effective uneven illumination correction algorithm is proposed to remove the effect of thin clouds and restore the ground information of optical remote sensing images. The imaging model of remote sensing images covered by thin clouds is analyzed: due to transmission attenuation, reflection, and scattering, thin cloud cover usually increases region brightness and reduces the saturation and contrast of the image. Accordingly, a wavelet-domain enhancement is performed on the image in Hue-Saturation-Value (HSV) color space. We use images with thin clouds over the Wuhan area captured by the QuickBird and ZiYuan-3 (ZY-3) satellites for experiments. Three traditional uneven illumination correction algorithms, i.e., the multi-scale Retinex (MSR) algorithm, the homomorphic filtering (HF)-based algorithm, and the wavelet transform-based MASK (WT-MASK) algorithm, are used for comparison. Five indicators, i.e., mean value, standard deviation, information entropy, average gradient, and hue deviation index (HDI), are used to analyze the effect of the algorithms. The experimental results show that the proposed algorithm can effectively eliminate the influence of thin clouds and restore the real color of ground objects under thin clouds.
A New High-Precision Correction Method of Temperature Distribution in Model Stellar Atmospheres
Directory of Open Access Journals (Sweden)
Sapar A.
2013-06-01
The main features of the temperature correction methods suggested and used in the modeling of plane-parallel stellar atmospheres are discussed, and the main features of a new method are described. A derivation is presented of the formulae for the version of the Unsöld-Lucy method used by us in the SMART (Stellar Model Atmospheres and Radiative Transport) software for modeling stellar atmospheres. The method corrects the model temperature distribution by minimizing the differences of the flux from its accepted constant value and by requiring the absence of a flux gradient, meaning that local source and sink terms of radiation must be equal. The final relative flux constancy obtainable by this method with the SMART code turned out to have a precision of the order of 0.5%. Some of the rapidly converging iteration steps can be useful before starting the high-precision model correction. Corrections of both the flux value and its gradient, as in the Unsöld-Lucy method, are unavoidably needed to obtain high-precision flux constancy. A new temperature correction method to obtain high-precision flux constancy for plane-parallel LTE model stellar atmospheres is proposed and studied. The non-linear optimization is carried out by least squares, in which the Levenberg-Marquardt correction method and thereafter an additional correction by a Broyden iteration loop are applied. Small finite differences of temperature (δT/T = 10⁻³) are used in the computations. A single Jacobian step appears to be mostly sufficient to obtain flux constancy of the order of 10⁻² %. Dual numbers and their generalization, the dual complex numbers (duplex numbers), make it possible to obtain the derivatives automatically in the nilpotent part of the dual numbers. A version of the SMART software is being refactored to dual and duplex numbers, which makes it possible to get rid of finite differences as an additional source of lowering precision of the
Energy Technology Data Exchange (ETDEWEB)
Patrick Matthews
2011-08-01
CAU 104 comprises the 15 CASs listed below: (1) 07-23-03, Atmospheric Test Site T-7C; (2) 07-23-04, Atmospheric Test Site T7-1; (3) 07-23-05, Atmospheric Test Site; (4) 07-23-06, Atmospheric Test Site T7-5a; (5) 07-23-07, Atmospheric Test Site - Dog (T-S); (6) 07-23-08, Atmospheric Test Site - Baker (T-S); (7) 07-23-09, Atmospheric Test Site - Charlie (T-S); (8) 07-23-10, Atmospheric Test Site - Dixie; (9) 07-23-11, Atmospheric Test Site - Dixie; (10) 07-23-12, Atmospheric Test Site - Charlie (Bus); (11) 07-23-13, Atmospheric Test Site - Baker (Buster); (12) 07-23-14, Atmospheric Test Site - Ruth; (13) 07-23-15, Atmospheric Test Site T7-4; (14) 07-23-16, Atmospheric Test Site B7-b; (15) 07-23-17, Atmospheric Test Site - Climax These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable CAAs that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on April 28, 2011, by representatives of the Nevada Division of Environmental Protection and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 104. The releases at CAU 104 consist of surface-deposited radionuclides from 30 atmospheric nuclear tests. The presence and nature of contamination at CAU 104 will be evaluated based on information collected from a field investigation. Radiological contamination will be evaluated based on a comparison
Flux-corrected transport principles, algorithms, and applications
Kuzmin, Dmitri; Turek, Stefan
2005-01-01
Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...
Effects of Atmospheric Refraction on an Airborne Weather Radar Detection and Correction Method
Directory of Open Access Journals (Sweden)
Lei Wang
2015-01-01
This study investigates the effect of atmospheric refraction, affected by temperature, atmospheric pressure, and humidity, on airborne weather radar beam paths. Using three types of typical atmospheric background sounding data, we established a simulation model for an actual transmission path and a fitted correction path of an airborne weather radar beam during airplane take-offs and landings based on initial flight parameters and X-band airborne phased-array weather radar parameters. Errors in an ideal electromagnetic beam propagation path are much greater than those of a fitted path when atmospheric refraction is not considered. The rates of change in the atmospheric refraction index differ with weather conditions and the radar detection angles differ during airplane take-off and landing. Therefore, the airborne radar detection path must be revised in real time according to the specific sounding data and flight parameters. However, an error analysis indicates that a direct linear-fitting method produces significant errors in a negatively refractive atmosphere; a piecewise-fitting method can be adopted to revise the paths according to the actual atmospheric structure. This study provides researchers and practitioners in the aeronautics and astronautics field with updated information regarding the effect of atmospheric refraction on airborne weather radar detection and correction methods.
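As a sketch of how a refractivity gradient enters such beam-path calculations, the effective-Earth-radius factor can be evaluated per atmospheric layer; the layer gradients below are illustrative values, not the paper's sounding data:

```python
# Effective-Earth-radius factor from a refractivity gradient, evaluated
# piecewise so each atmospheric layer can use its own sounding-derived slope.
EARTH_RADIUS_KM = 6371.0

def k_factor(dN_dh):
    """dN_dh: refractivity gradient in N-units per km, where N = (n - 1) * 1e6."""
    return 1.0 / (1.0 + EARTH_RADIUS_KM * dN_dh * 1e-6)

# Piecewise profile: (layer top in km, dN/dh in that layer) -- illustrative values
layers = [(1.0, -39.0),   # standard refraction near the surface
          (2.0, -20.0),   # sub-refractive layer
          (3.0, -60.0)]   # stronger bending aloft

k_per_layer = [k_factor(gradient) for _, gradient in layers]
```

For the standard near-surface gradient of about -39 N-units/km this yields the familiar k ≈ 4/3 effective Earth radius; a piecewise treatment lets each layer bend the beam according to its own gradient.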
Flux-corrected transport principles, algorithms, and applications
Löhner, Rainald; Turek, Stefan
2012-01-01
Many modern high-resolution schemes for Computational Fluid Dynamics trace their origins to the Flux-Corrected Transport (FCT) paradigm. FCT maintains monotonicity using a nonoscillatory low-order scheme to determine the bounds for a constrained high-order approximation. This book begins with historical notes by J.P. Boris and D.L. Book who invented FCT in the early 1970s. The chapters that follow describe the design of fully multidimensional FCT algorithms for structured and unstructured grids, limiting for systems of conservation laws, and the use of FCT as an implicit subgrid scale model. The second edition presents 200 pages of additional material. The main highlights of the three new chapters include: FCT-constrained interpolation for Arbitrary Lagrangian-Eulerian methods, an optimization-based approach to flux correction, and FCT simulations of high-speed flows on overset grids. Addressing students and researchers, as well as CFD practitioners, the book is focused on computational aspects and contains m...
Relative Radiometric Normalization and Atmospheric Correction of a SPOT 5 Time Series
Directory of Open Access Journals (Sweden)
Matthieu Rumeau
2008-04-01
Multi-temporal images acquired at high spatial and temporal resolution are an important tool for detecting change and analyzing trends, especially in agricultural applications. However, to ensure reliable use of this kind of data, a rigorous radiometric normalization step is required. Normalization can be addressed by performing an atmospheric correction of each image in the time series. The main problem is the difficulty of obtaining an atmospheric characterization at a given acquisition date. In this paper, we investigate whether relative radiometric normalization can substitute for atmospheric correction. We develop an automatic method for relative radiometric normalization based on calculating linear regressions between unnormalized and reference images. Regressions are obtained using the reflectances of automatically selected invariant targets. We compare this method with an atmospheric correction method that uses the 6S model. The performances of both methods are compared using 18 images of a SPOT 5 time series acquired over Reunion Island. Results obtained for a set of manually selected invariant targets show excellent agreement between the two methods in all spectral bands: values of the coefficient of determination (r²) exceed 0.960, and bias magnitude values are less than 2.65. There is also a strong correlation between normalized NDVI values of sugarcane fields (r² = 0.959). Despite a relative error of 12.66% between values, very comparable NDVI patterns are observed.
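The regression step of relative normalization can be sketched as follows; the invariant-target reflectances here are synthetic, and the gain/offset values are arbitrary illustrations, not results from the study:

```python
import numpy as np

def relative_normalization(subject, reference):
    """Fit a gain/offset mapping subject-image reflectances onto the
    reference image over invariant targets (ordinary least squares)."""
    gain, offset = np.polyfit(subject, reference, 1)
    return gain, offset

# Synthetic invariant-target reflectances: a reference scene and a subject
# scene observed under different illumination/atmosphere (gain 0.9, offset 0.02)
rng = np.random.default_rng(0)
reference = rng.uniform(0.05, 0.4, 50)
subject = (reference - 0.02) / 0.9

gain, offset = relative_normalization(subject, reference)
normalized = gain * subject + offset  # subject image mapped onto the reference scale
```

Applying the fitted gain and offset to the whole subject image brings it onto the radiometric scale of the reference image without an explicit atmospheric characterization.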
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline
International Nuclear Information System (INIS)
Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong
2015-01-01
Raman spectroscopy is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can be easily corrupted by a fluorescent background; in this paper we therefore present a baseline correction algorithm to suppress that background. In this algorithm, the background of the Raman signal is suppressed by fitting a curve called a baseline using a cyclic approximation method. Instead of the traditional polynomial fitting, we use the B-spline as the fitting algorithm due to its advantages of low order and smoothness, which can avoid under-fitting and over-fitting effectively. In addition, we also present an automatic adaptive knot generation method to replace traditional uniform knots. This algorithm can obtain the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method.
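The cyclic approximation idea can be illustrated with a minimal stand-in that uses iterative low-order polynomial fitting in place of the paper's adaptive-knot B-splines (the spectrum and background below are simulated):

```python
import numpy as np

def iterative_baseline(y, x=None, order=3, n_iter=50):
    """Estimate a slowly varying baseline by cyclic approximation:
    fit a low-order curve, then clip the signal down to the fit so
    peaks are progressively excluded from the next fit."""
    if x is None:
        x = np.arange(len(y), dtype=float)
    work = np.asarray(y, dtype=float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(x, work, order)
        base = np.polyval(coeffs, x)
        work = np.minimum(work, base)  # suppress peaks above the current fit
    return base

# Simulated Raman-like spectrum: two narrow peaks on a curved fluorescent background
x = np.linspace(0.0, 1.0, 500)
background = 2.0 + 1.5 * x + 0.8 * x ** 2
peaks = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.7 * np.exp(-((x - 0.7) / 0.01) ** 2)
y = background + peaks
corrected = y - iterative_baseline(y, x)
```

Each pass fits a smooth curve and clips the spectrum down to it, so the Raman peaks drop out of the fit while the fluorescent background is retained in the baseline estimate.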
Accounting for Chromatic Atmospheric Effects on Barycentric Corrections
Energy Technology Data Exchange (ETDEWEB)
Blackman, Ryan T.; Szymkowiak, Andrew E.; Fischer, Debra A.; Jurgenson, Colby A., E-mail: ryan.blackman@yale.edu [Department of Astronomy, Yale University, 52 Hillhouse Avenue, New Haven, CT 06511 (United States)
2017-03-01
Atmospheric effects on stellar radial velocity measurements for exoplanet discovery and characterization have not yet been fully investigated for extreme precision levels. We carry out calculations to determine the wavelength dependence of barycentric corrections across optical wavelengths, due to the ubiquitous variations in air mass during observations. We demonstrate that radial velocity errors of at least several cm s⁻¹ can be incurred if the wavelength dependence is not included in the photon-weighted barycentric corrections. A minimum of four wavelength channels across optical spectra (380–680 nm) are required to account for this effect at the 10 cm s⁻¹ level, with polynomial fits of the barycentric corrections applied to cover all wavelengths. Additional channels may be required in poor observing conditions or to avoid strong telluric absorption features. Furthermore, consistent flux sampling on the order of seconds throughout the observation is necessary to ensure that accurate photon weights are obtained. Finally, we describe how a multiple-channel exposure meter will be implemented in the EXtreme PREcision Spectrograph (EXPRES).
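A minimal sketch of the channel-then-fit approach: given photon-weighted barycentric corrections for a handful of wavelength channels (the numbers below are invented for illustration), a polynomial fit supplies a correction at every wavelength:

```python
import numpy as np

# Hypothetical photon-weighted barycentric corrections (m/s) for four
# wavelength channels; the values are invented for illustration.
channels_nm = np.array([410.0, 490.0, 570.0, 650.0])
bc_ms = np.array([21500.012, 21500.020, 21500.025, 21500.028])

# Fit a cubic in a scaled wavelength variable (for numerical conditioning)
# so every pixel of the spectrum gets its own correction.
mean_bc = bc_ms.mean()
center, scale = channels_nm.mean(), 100.0
coeffs = np.polyfit((channels_nm - center) / scale, bc_ms - mean_bc, 3)

def barycentric_correction(wavelength_nm):
    """Evaluate the fitted chromatic barycentric correction (m/s)."""
    return mean_bc + np.polyval(coeffs, (wavelength_nm - center) / scale)
```

With four channels the cubic interpolates the channel values exactly; the fit then covers all intermediate wavelengths, as the paper's polynomial-fit strategy requires.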
Qian, Fang; Wu, Yihui; Hao, Peng
2017-11-01
Baseline correction is a very important part of pre-processing. A baseline in the spectrum signal can induce uneven amplitude shifts across different wavenumbers and lead to poor results, so these shifts should be compensated before further analysis. Many algorithms are used to remove the baseline; however, fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. This algorithm finds feature points through continuous wavelet transformation and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms, using a simulated spectrum signal, a visible spectrum signal and a Raman spectrum signal. The results show that AWFPSI gives better accuracy and is easier to use.
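A simplified stand-in for the approach, using local minima as feature points in place of the continuous wavelet transform, with the baseline recovered by segment interpolation between them:

```python
import numpy as np

def segment_interp_baseline(y, window=25):
    """Estimate a baseline from feature points plus piecewise interpolation.
    Here the feature points are simply local minima over a sliding window,
    standing in for the wavelet-detected points of the paper."""
    n = len(y)
    anchors = [0]
    for i in range(window, n - window, window):
        seg = y[i - window:i + window]
        anchors.append(i - window + int(np.argmin(seg)))
    anchors.append(n - 1)
    anchors = sorted(set(anchors))
    # Linear segment interpolation through the feature points
    return np.interp(np.arange(n), anchors, y[np.array(anchors)])
```

Subtracting the interpolated baseline leaves the peaks while removing the slowly varying shift, with no user-supplied parameters beyond the window size.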
Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census
Li, C.; Guo, P.; Liu, X.
2017-09-01
Some attributes of hydrologic feature data in the national geographic census are not clear; the current solution to this problem is manual filling, which is inefficient and liable to mistakes. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of the structural characteristics and topological relations, we put forward three basic principles of correction: network proximity, structural robustness and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic feature attributes. Finally, practical data are used to validate the method. The results show that our method is highly reasonable and efficient.
Energy Technology Data Exchange (ETDEWEB)
Patrick Matthews
2012-09-01
Corrective Action Unit (CAU) 105 is located in Area 2 of the Nevada National Security Site, which is approximately 65 miles northwest of Las Vegas, Nevada. CAU 105 is a geographical grouping of sites where there has been a suspected release of contamination associated with atmospheric nuclear testing. This document describes the planned investigation of CAU 105, which comprises the following corrective action sites (CASs): • 02-23-04, Atmospheric Test Site - Whitney • 02-23-05, Atmospheric Test Site T-2A • 02-23-06, Atmospheric Test Site T-2B • 02-23-08, Atmospheric Test Site T-2 • 02-23-09, Atmospheric Test Site - Turk These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable CAAs that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on April 30, 2012, by representatives of the Nevada Division of Environmental Protection and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 105. The site investigation process will also be conducted in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices to be applied to this activity. The potential contamination sources associated with all CAU 105 CASs are from atmospheric nuclear testing activities. The presence and nature of contamination at CAU
Baek, Jieun; Choi, Yosoon
2017-04-01
Most algorithms for least-cost path analysis calculate the slope gradient between the source cell and the adjacent cells to reflect the weight of terrain slope in the calculation of travel costs. However, these algorithms cannot analyze the least-cost path between two cells when obstacle cells with very high or low terrain elevation exist between the source cell and the target cell. This study presents a new algorithm for least-cost path analysis that corrects digital elevation models of natural landscapes to find possible paths satisfying a constraint of maximum or minimum slope gradient. The new algorithm calculates the slope gradient between the center cell and non-adjacent cells using the concept of extended move-sets. If the algorithm finds possible paths between the center cell and non-adjacent cells that satisfy the slope constraint, the terrain elevation of obstacle cells lying between the two cells is corrected in the digital elevation model. After calculating the cumulative travel costs to the destination, weighted by the difference between the original and corrected elevations, the algorithm analyzes the least-cost path. The results of applying the proposed algorithm to synthetic and real-world data sets show that the new algorithm can provide more accurate least-cost paths than conventional algorithms implemented in commercial GIS software such as ArcGIS.
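The slope-constraint part of such an analysis can be sketched with a plain grid Dijkstra in which moves violating a maximum slope gradient are forbidden (this omits the paper's extended move-sets and DEM correction):

```python
import heapq

def least_cost_path(elev, start, goal, max_slope=1.0):
    """Dijkstra over a grid; a move is allowed only if the slope gradient
    (elevation change per unit horizontal distance) stays within max_slope."""
    rows, cols = len(elev), len(elev[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            horiz = (dr * dr + dc * dc) ** 0.5
            slope = abs(elev[nr][nc] - elev[r][c]) / horiz
            if slope > max_slope:
                continue  # cell acts as an obstacle under the slope constraint
            nd = d + horiz
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                heapq.heappush(pq, (nd, (nr, nc)))
    return None  # goal unreachable under the slope constraint

# A ridge of high cells down the middle blocks the direct route,
# forcing the path through the gap at the bottom of the ridge
elev = [[0, 0, 9, 0, 0],
        [0, 0, 9, 0, 0],
        [0, 0, 0, 0, 0]]
cost = least_cost_path(elev, (0, 0), (0, 4))
```

The steep cells are never stepped onto, so the cheapest route detours through the gap; the paper's contribution is to additionally correct such obstacle elevations and penalize the correction in the cumulative cost.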
Multirobot FastSLAM Algorithm Based on Landmark Consistency Correction
Directory of Open Access Journals (Sweden)
Shi-Ming Chen
2014-01-01
Considering the influence of uncertain map information on the multirobot SLAM problem, a multirobot FastSLAM algorithm based on landmark consistency correction is proposed. First, an electromagnetism-like mechanism is introduced into the resampling procedure of single-robot FastSLAM: each sampling particle is treated as a charged electron, and the attraction-repulsion mechanism of an electromagnetic field is used to simulate the interactive force between particles and improve their distribution. Second, when multiple robots observe the same landmarks, every robot is regarded as one node and a Kalman-Consensus Filter is used to update landmark information, which further improves the accuracy of localization and mapping. Finally, simulation results show that the algorithm is suitable and effective.
Directory of Open Access Journals (Sweden)
Zhiguo Huang
2017-11-01
Infrared (IR) radiometry is an important method for characterizing the IR signature of targets such as aircraft or rockets. However, the received signal from a target can be attenuated by a combination of atmospheric molecular absorption and aerosol scattering, so atmospheric correction is a requisite step for obtaining the real radiance of a target. Conventionally, the atmospheric transmittance and the air-path radiance are calculated by atmospheric radiative transfer software. In this paper, an improved IR radiometric method based on constant reference correction of atmospheric attenuation is proposed. The basic principle and procedure of this method are introduced, and a linear model of high-speed calibration that accounts for the integration time is employed and confirmed, making the method applicable in various complex conditions. To eliminate stochastic errors, radiometric experiments were conducted for multiple integration times. Finally, several experiments were performed on a mid-wave IR system with a Φ600 mm aperture. The radiometry results indicate that the radiation inversion precision of the novel method is 4.78–4.89%, while the precision of the conventional method is 10.86–13.81%.
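A linear calibration model with integration time can be sketched as follows; the model form counts = G·t·L + D·t + b and all numbers are assumptions for illustration, not the paper's calibration:

```python
import numpy as np

# Hypothetical calibration: detector counts vs source radiance at several
# integration times, assuming the linear model counts = G*t*L + D*t + b.
G, D, b = 150.0, 30.0, 100.0           # illustrative gain, dark rate, bias
t_int = np.array([0.5, 1.0, 2.0])      # integration times (ms)
L_cal = np.array([5.0, 10.0, 20.0])    # calibration radiances (arbitrary units)

# Build the design matrix from all (t, L) calibration pairs and solve
rows, rhs = [], []
for t in t_int:
    for L in L_cal:
        rows.append([t * L, t, 1.0])
        rhs.append(G * t * L + D * t + b)
A = np.array(rows)
g_fit, d_fit, b_fit = np.linalg.lstsq(A, np.array(rhs), rcond=None)[0]

def radiance(counts, t):
    """Invert the fitted linear model to recover target radiance."""
    return (counts - d_fit * t - b_fit) / (g_fit * t)
```

Because the model is linear in the integration time, one fit covers all integration times, which is what lets the calibration be reused across the multiple-integration-time measurements described above.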
Directory of Open Access Journals (Sweden)
Ronald Scheirer
2018-04-01
Atmospheric interaction distorts the surface signal received by a space-borne instrument. Images derived from visible channels often appear too bright and with reduced contrast. This hampers the use of RGB imagery otherwise useful in ocean color applications and in forecasting or operational disaster monitoring, for example of forest fires. In order to correct for the dominant source of atmospheric noise, a simple, fast and flexible algorithm has been developed. The algorithm is implemented in Python and freely available in PySpectral, part of the PyTroll family of open source packages, allowing easy access to powerful real-time image-processing tools. Pre-calculated look-up tables of top-of-atmosphere reflectance are derived by off-line calculations with RTM DISORT as part of the LibRadtran package. The approach is independent of platform and sensor and can be applied to any band in the visible spectral range. Because standard atmospheric profiles and standard aerosol loads are used, only the background disturbance is reduced, so signals from excess aerosols become more discernible. Examples of uncorrected and corrected satellite images demonstrate that this flexible real-time algorithm is a useful tool for atmospheric correction.
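The look-up-table correction step can be sketched generically (this is not the PySpectral API; the table values are invented, standing in for off-line radiative transfer output):

```python
import numpy as np

# Toy look-up table of atmospheric path reflectance vs solar zenith angle
# for one band and one viewing geometry; the values are illustrative,
# not output of a radiative transfer model.
sza_grid = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
path_refl = np.array([0.06, 0.065, 0.08, 0.11, 0.18])

def correct(toa_reflectance, sza_deg):
    """Subtract the interpolated atmospheric path contribution from the
    top-of-atmosphere reflectance."""
    path = np.interp(sza_deg, sza_grid, path_refl)
    return toa_reflectance - path
```

A real implementation would interpolate over several geometry angles and bands; the point is that the per-pixel correction reduces to a cheap table lookup, which is what makes the algorithm fast enough for real-time use.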
Muller, Dagmar; Krasemann, Hajo; Brewin, Robert J. W.; Brockmann, Carsten; Deschamps, Pierre-Yves; Fomferra, Norman; Franz, Bryan A.; Grant, Mike G.; Groom, Steve B.; Melin, Frederic;
2015-01-01
The established procedure to assess the quality of atmospheric correction processors and their underlying algorithms is the comparison of satellite data products with related in-situ measurements. Although this approach addresses the accuracy of derived geophysical properties in a straightforward fashion, it is limited in its ability to catch systematic sensor- and processor-dependent behaviour of satellite products along the scan line, which might impair the usefulness of the data in spatial analyses. The Ocean Colour Climate Change Initiative (OC-CCI) aims to create an ocean colour dataset on a global scale to meet the demands of the ecosystem modelling community, and the need for products with increasing spatial and temporal resolution that also show as little systematic and random error as possible is growing. Due to cloud cover, even temporal means can be influenced by along-scanline artefacts if the observations are not balanced and effects cannot cancel out mutually. These artefacts can arise from a multitude of sources which are not easily separated, if at all. Among the sources are sensor-specific calibration issues, which should lead to similar responses in all processors, as well as processor-specific features corresponding to the individual choices made in the algorithms. A set of methods is proposed and applied to MERIS data over two regions of interest in the North Atlantic and the South Pacific Gyre. The normalised water-leaving reflectance products of four atmospheric correction processors, which have also been evaluated in match-up analysis, are analysed in order to find and interpret systematic effects across track. These results are summed up with a semi-objective ranking and are used as a complement to the match-up analysis in the decision for the best Atmospheric Correction (AC) processor. Although the need for discussion remains concerning the absolutes by which to judge an AC processor, this example demonstrates
The Algorithm Theoretical Basis Document for Tidal Corrections
Fricker, Helen A.; Ridgway, Jeff R.; Minster, Jean-Bernard; Yi, Donghui; Bentley, Charles R.
2012-01-01
This Algorithm Theoretical Basis Document deals with the tidal corrections that need to be applied to range measurements made by the Geoscience Laser Altimeter System (GLAS). These corrections result from the action of ocean tides and Earth tides, which lead to deviations from an equilibrium surface. Since the effect of tides depends on the time of measurement, it is necessary to remove the instantaneous tide components when processing altimeter data, so that all measurements are made to the equilibrium surface. The three main tide components to consider are the ocean tide, the solid-earth tide and the ocean loading tide. There are also long-period ocean tides and the pole tide. The approximate magnitudes of these components are illustrated in Table 1, together with estimates of their uncertainties (i.e., the residual error after correction). All of these components are important for GLAS measurements over the ice sheets, since centimeter-level accuracy for surface elevation change detection is required. The effect of each tidal component is to be removed by approximating its magnitude using tidal prediction models. Conversely, assimilation of GLAS measurements into tidal models will help to improve them, especially at high latitudes.
Iterative atmospheric correction scheme and the polarization color of alpine snow
Ottaviani, Matteo; Cairns, Brian; Ferrare, Rich; Rogers, Raymond
2012-07-01
Characterization of the Earth's surface is crucial to remote sensing, both to map geomorphological features and because subtracting this signal is essential during retrievals of the atmospheric constituents located between the surface and the sensor. Current operational algorithms model the surface total reflectance through a weighted linear combination of a few geometry-dependent kernels, each devised to describe a particular scattering mechanism. The information content of these measurements is overwhelmed by that of instruments with polarization capabilities: proposed models in this case are based on the Fresnel reflectance of an isotropic distribution of facets. Because of its remarkable lack of spectral contrast, the polarized reflectance of land surfaces in the shortwave infrared spectral region, where atmospheric scattering is minimal, can be used to model the surface also at shorter wavelengths, where aerosol retrievals are attempted based on well-established scattering theories. In radiative transfer simulations, straightforward separation of the surface and atmospheric contributions is not possible without approximations because of the coupling introduced by multiple reflections. Within a general inversion framework, the problem can be eliminated by linearizing the radiative transfer calculation, and making the Jacobian (i.e., the derivative expressing the sensitivity of the reflectance with respect to model parameters) available at output. We present a general methodology based on a Gauss-Newton iterative search, which automates this procedure and eliminates de facto the need for an ad hoc atmospheric correction. In this case study we analyze the color variations in the polarized reflectance measured by the NASA Goddard Institute for Space Studies Research Scanning Polarimeter during a survey of late-season snowfields in the High Sierra. This thus-far unique dataset presents challenges linked to the rugged topography associated with the alpine environment and
Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping
2011-04-01
In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
Research on correction algorithm of laser positioning system based on four quadrant detector
Gao, Qingsong; Meng, Xiangyong; Qian, Weixian; Cai, Guixia
2018-02-01
This paper first introduces the basic principle of the four-quadrant detector, and a laser positioning experiment system is built based on it. In practical applications, a four-quadrant laser positioning system is subject not only to interference from background light and detector dark-current noise, but also to random noise, limited system stability, and spot equivalent error, so system calibration and correction are very important. This paper analyzes the various factors in the system positioning error and then proposes an algorithm for correcting it; the results of simulation and experiment show that the modified algorithm can reduce the effect of system error on positioning and improve the positioning accuracy.
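The standard position estimate for a four-quadrant detector, which such a correction algorithm would refine, can be sketched as:

```python
def spot_position(a, b, c, d, k=1.0):
    """Estimate laser-spot displacement from the quadrant signals a..d
    (quadrants I..IV: upper right, upper left, lower left, lower right).
    k is a scale factor fixed by calibration; noise and system-error
    corrections as in the paper would be applied on top of this."""
    total = a + b + c + d
    x = k * ((a + d) - (b + c)) / total  # right minus left
    y = k * ((a + b) - (c + d)) / total  # top minus bottom
    return x, y
```

A centered spot gives (0, 0); light shifted toward the right-hand quadrants drives x positive. The normalization by the total signal is what makes the estimate insensitive to overall laser power, while background light and dark current bias the four signals and motivate the correction step.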
Energy Technology Data Exchange (ETDEWEB)
Patrick Matthews
2012-08-01
CAU 570 comprises the following six corrective action sites (CASs): • 02-23-07, Atmospheric Test Site - Tesla • 09-23-10, Atmospheric Test Site T-9 • 09-23-11, Atmospheric Test Site S-9G • 09-23-14, Atmospheric Test Site - Rushmore • 09-23-15, Eagle Contamination Area • 09-99-01, Atmospheric Test Site B-9A These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each CAS. The results of the field investigation will support a defensible evaluation of viable CAAs that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed on April 30, 2012, by representatives of the Nevada Division of Environmental Protection and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office. The DQO process was used to identify and define the type, amount, and quality of data needed to develop and evaluate appropriate corrective actions for CAU 570. The site investigation process will also be conducted in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices to be applied to this activity. The presence and nature of contamination at CAU 570 will be evaluated based on information collected from a field investigation. Radiological contamination will be evaluated based on a comparison of the total effective dose at sample locations to the dose-based final action level. The total effective dose will be calculated as the total of separate estimates of internal and external dose. Results from the analysis of soil samples will be used to calculate internal radiological
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system, the difficulty of which lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD method avoids direct detection of the co-phase error. This paper analyzes the influence of piston and tilt errors on image quality in a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error-control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves with increasing gain coefficient and disturbance amplitude, but the stability of the algorithm is reduced. An adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
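A minimal SPGD iteration on a toy metric (the metric and parameters are illustrative; a real system would measure an image-quality metric instead):

```python
import random

def spgd_minimize(metric, u, gain=0.5, delta=0.1, n_iter=500, seed=1):
    """Stochastic parallel gradient descent: perturb all control channels
    in parallel with random bipolar steps, measure the metric change, and
    update against the estimated gradient."""
    rng = random.Random(seed)
    u = list(u)
    for _ in range(n_iter):
        pert = [delta if rng.random() < 0.5 else -delta for _ in u]
        j_plus = metric([ui + pi for ui, pi in zip(u, pert)])
        j_minus = metric([ui - pi for ui, pi in zip(u, pert)])
        dj = j_plus - j_minus
        # larger gain or disturbance amplitude speeds convergence
        # but risks instability, as discussed in the abstract
        u = [ui - gain * dj * pi for ui, pi in zip(u, pert)]
    return u

# Toy "co-phase error" metric: squared distance of piston/tilt controls
# from illustrative ideal values
target = [0.7, -0.3, 0.2]
metric = lambda u: sum((ui - ti) ** 2 for ui, ti in zip(u, target))
solution = spgd_minimize(metric, [0.0, 0.0, 0.0])
```

No gradient of the metric is ever computed: only two metric evaluations per iteration are needed, which is why the method avoids direct detection of the co-phase error.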
Zidane, Shems
This study is based on data acquired with an airborne multi-altitude sensor in July 2004 during a non-standard atmospheric event in the region of Saint-Jean-sur-Richelieu, Quebec. By non-standard atmospheric event we mean an aerosol atmosphere that does not obey the typical monotonic, scale-height variation employed in virtually all atmospheric correction codes. The surfaces imaged during this field campaign included a diverse variety of targets: agricultural land, water bodies, urban areas and forests. The multi-altitude approach employed in this campaign allowed us to better understand the altitude-dependent influence of the atmosphere over the array of ground targets and thus to better characterize the perturbation induced by a non-standard (smoke) plume. The transformation of the apparent radiance at three different altitudes into apparent reflectance, and the insertion of the plume optics into an atmospheric correction model, permitted an atmospheric correction of the apparent reflectance at the two higher altitudes. The results were consistent with the validation reflectances derived from the lowest-altitude radiances, effectively confirming the accuracy of our non-standard atmospheric correction approach. This test was particularly relevant at the highest altitude of 3.17 km: the apparent reflectances at this altitude were above most of the plume and therefore represented a good test of our ability to adequately correct for the influence of the perturbation. Standard atmospheric disturbances are taken into account in most atmospheric correction models, but these are based on monotonically decreasing aerosol variations with increasing altitude. When the atmospheric radiation is affected by a plume or a local, non-standard pollution event, one must adapt the existing models to the radiative transfer constraints of the local perturbation and to the reality of the measurable parameters available for ingestion into the model. The
International Nuclear Information System (INIS)
Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng
2011-01-01
Targeting the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on a graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations were conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, the 3D-density correction improves on the conventional FSPB algorithm, and for most cases the improvement is significant. Regarding efficiency, because of the appropriate arrangement of memory access and the use of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, the new algorithm, though slightly sacrificing computational efficiency (∼5-15% lower), has significantly improved dose calculation accuracy, making it more suitable for online IMRT replanning.
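The density-correction idea, reduced to a single ray, amounts to accumulating water-equivalent (radiological) depth; the voxel densities and step size below are illustrative, not the paper's data:

```python
import numpy as np

def radiological_depth(rel_density, step_cm):
    """Water-equivalent depth along one ray: cumulative relative (electron)
    density times the geometric step. A single-ray sketch of the density
    scaling used inside pencil-beam dose engines."""
    return np.cumsum(rel_density) * step_cm

# 2 cm of water, 3 cm of lung-like tissue (rel. density 0.25), 2 cm of water,
# sampled in 0.5 cm voxels.
ray = np.array([1.0] * 4 + [0.25] * 6 + [1.0] * 4)
d_eff = radiological_depth(ray, 0.5)   # effective depth at each voxel exit
```

The low-density lung segment shortens the effective depth relative to the geometric depth, which is what shifts the dose kernel in a heterogeneity-corrected calculation.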
Remer, L. A.; Boss, E.; Ahmad, Z.; Cairns, B.; Chowdhary, J.; Coddington, O.; Davis, A. B.; Dierssen, H. M.; Diner, D. J.; Franz, B. A.; Frouin, R.; Gao, B. C.; Garay, M. J.; Heidinger, A.; Ibrahim, A.; Kalashnikova, O. V.; Knobelspiesse, K. D.; Levy, R. C.; Omar, A. H.; Meyer, K.; Platnick, S. E.; Seidel, F. C.; van Diedenhoven, B.; Werdell, J.; Xu, F.; Zhai, P.; Zhang, Z.
2017-12-01
NASA's Science Team for the Plankton, Aerosol, Clouds, ocean Ecosystem (PACE) mission is concluding three years of study exploring the science potential of expanded spectral, angular and polarization capability for space-based retrievals of water leaving radiance, aerosols and clouds. The work anticipates future development of retrievals to be applied to the PACE Ocean Color Instrument (OCI) and/or possibly a PACE Multi-Angle Polarimeter (MAP). In this presentation we will report on the Science Team's accomplishments associated with the atmosphere (significant efforts are also directed by the ST towards the ocean). Included in the presentation will be sensitivity studies that explore new OCI capabilities for aerosol and cloud layer height, aerosol absorption characterization, cloud property retrievals, and how we intend to move from heritage atmospheric correction algorithms to make use of and adjust to OCI's hyperspectral and UV wavelengths. We will then address how capabilities will improve with the PACE MAP, how these capabilities from both OCI and MAP correspond to specific societal benefits from the PACE mission, and what is still needed to close the gaps in our understanding before the PACE mission can realize its full potential.
Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm
Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.
2003-10-01
We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing of the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of a model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm's performance. The system allows us to measure intra-ocular distances.
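A minimal sketch of the two ingredients of such a refraction correction: Snell's law for the refracted ray direction, and division by the group index for the axial position. The 20° incidence angle, corneal index 1.376 and 0.5 mm optical path are illustrative values, not the paper's data:

```python
import numpy as np

def snell_refract(theta_i, n1, n2):
    """Refraction angle from Snell's law: n1*sin(theta_i) = n2*sin(theta_t)."""
    return np.arcsin(np.clip(n1 * np.sin(theta_i) / n2, -1.0, 1.0))

def geometric_depth(optical_path, n_group):
    """OCT measures optical path length; dividing by the group index gives
    the geometric depth used to reposition an interface."""
    return optical_path / n_group

theta_t = snell_refract(np.radians(20.0), 1.000, 1.376)  # air -> cornea
z = geometric_depth(0.50, 1.376)                         # 0.50 mm OPL in cornea
```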
Goyens, C; Jamet, C; Ruddick, K G
2013-09-09
The present study provides an extensive overview of red and near infra-red (NIR) spectral relationships found in the literature and used to constrain red or NIR-modeling schemes in current atmospheric correction (AC) algorithms, with the aim of improving water-leaving reflectance retrievals, ρw(λ), in turbid waters. However, most of these spectral relationships have been developed with restricted datasets and, consequently, may not be globally valid, hence the need for an accurate validation exercise. The spectral relationships are validated here with turbid in situ data for ρw(λ). Functions estimating ρw(λ) in the red were valid only for moderately turbid waters within the turbidity ranges represented in the in situ dataset. In the NIR region of the spectrum, the constant NIR reflectance ratio suggested by Ruddick et al. (2006) (Limnol. Oceanogr. 51, 1167-1179) was valid for moderately to very turbid waters, but not for extremely turbid waters (ρw(λNIR) > 10⁻²). The results of this study suggest using the red bounding equations and the polynomial NIR function to constrain red or NIR-modeling schemes in AC processes, with the aim of improving ρw(λ) retrievals where current AC algorithms fail.
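A constant-ratio scheme of the kind attributed to Ruddick et al. above can be sketched as follows; the ratio value 1.9 is a placeholder, not the published coefficient:

```python
def rho_w_nir_long(rho_w_nir_short, ratio=1.9):
    """Propagate water-leaving reflectance from the shorter to the longer
    NIR band assuming a constant reflectance ratio between the two bands,
    as done when constraining NIR-modeling AC schemes. The default ratio
    is an illustrative placeholder, not the published value."""
    return rho_w_nir_short / ratio

rho_long = rho_w_nir_long(3.8e-3)   # e.g. rho_w(765 nm) -> rho_w(865 nm)
```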
International Nuclear Information System (INIS)
Albino, Lucas D.; Santos, Gabriela R.; Ribeiro, Victor A.B.; Rodrigues, Laura N.; Weltman, Eduardo; Braga, Henrique F.
2013-01-01
The accuracy of the dose calculated by a treatment planning system is directly related to the chosen algorithm. Nowadays, several dose calculation algorithms are commercially available; they differ in calculation time and accuracy, especially when individual tissue densities are taken into account. The aim of this study was to compare two different calculation algorithms from iPlan®, BrainLAB, in the treatment of pituitary gland tumors with intensity-modulated radiation therapy (IMRT). These tumors are located in a region of tissues with variable electron density. The deviations from the plan with no heterogeneity correction were evaluated. For initial validation of the data inserted into the planning system, an IMRT plan was simulated in an anthropomorphic phantom and the dose distribution was measured with a radiochromic film. Gamma analysis was performed on the film, comparing it with dose distributions calculated with the X-ray Voxel Monte Carlo (XVMC) algorithm and the pencil beam convolution (PBC) algorithm. Next, 33 patient plans, initially calculated with the PBC algorithm, were recalculated with the XVMC algorithm. The dose-volume histograms of the treatment volumes and organs at risk were compared. No relevant differences were found in dose-volume histograms between XVMC and PBC. However, differences were obtained when comparing each plan with the plan without heterogeneity correction. (author)
Institute of Scientific and Technical Information of China (English)
Xu Hanqiu
2006-01-01
In order to evaluate radiometric normalization techniques, two image normalization algorithms for absolute radiometric correction of Landsat imagery were quantitatively compared in this paper: the Illumination Correction Model proposed by Markham and Irish, and the Illumination and Atmospheric Correction Model developed by the Remote Sensing and GIS Laboratory of Utah State University. Relative noise, correlation coefficient and slope value, derived from pseudo-invariant features identified in the multitemporal images, were used as the criteria for the evaluation and comparison. Differences between the normalized multitemporal images were significantly reduced when the seasons of the multitemporal images were different; however, there was no significant difference between the normalized and unnormalized images under similar seasonal conditions. Furthermore, the correction results of the two algorithms were similar when the images were relatively clear with a uniform atmospheric condition. Therefore, the radiometric normalization procedures should be carried out if the multitemporal images have a significant seasonal difference.
Sakkas, Georgios; Sakellariou, Nikolaos
2018-05-01
Strong motion recordings are the key input in many earthquake engineering applications and are also fundamental for seismic design. The present study focuses on the automated correction of accelerograms, both analog and digital. The main feature of the proposed algorithm is the automatic selection of the cut-off frequencies based on a minimum spectral value in a predefined frequency bandwidth, instead of the typical signal-to-noise approach. The algorithm follows the basic steps of the correction procedure (instrument correction, baseline correction and appropriate filtering). Besides the corrected time histories, Peak Ground Acceleration, Peak Ground Velocity and Peak Ground Displacement values are computed, along with the corrected Fourier spectra and the response spectra. The algorithm is written in the Matlab environment, is fast, and can be used for batch processing or in real-time applications. In addition, options are provided to apply a signal-to-noise ratio criterion and to perform causal or acausal filtering. The algorithm has been tested on six significant earthquakes (Kozani-Grevena 1995, Aigio 1995, Athens 1999, Lefkada 2003 and Kefalonia 2014) of the Greek territory with analog and digital accelerograms.
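The minimum-spectral-value criterion for the cut-off frequency might be sketched as below; the search band and the synthetic record are assumptions, not the paper's values:

```python
import numpy as np

def auto_cutoff(acc, dt, band=(0.05, 0.5)):
    """Choose the high-pass cut-off as the frequency of minimum Fourier
    amplitude within a predefined low-frequency band, in the spirit of the
    minimum-spectral-value criterion described above; band limits are
    illustrative assumptions."""
    freqs = np.fft.rfftfreq(len(acc), dt)
    amp = np.abs(np.fft.rfft(acc))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(freqs[mask][np.argmin(amp[mask])])

# Synthetic record: 1 Hz motion plus a slow baseline drift, 100 Hz sampling.
t = np.arange(0.0, 40.0, 0.01)
acc = np.sin(2 * np.pi * 1.0 * t) + 0.01 * t
fc = auto_cutoff(acc, 0.01)   # cut-off to feed the high-pass filter stage
```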
International Nuclear Information System (INIS)
Zibordi, G.; Maracci, G.
1993-01-01
Monitoring the reflectance of polar icecaps is relevant to climate studies. Climate changes produce variations in the morphology of ice and snow covers, which are detectable as surface reflectance change. Surface reflectance can be retrieved from remotely sensed data; however, absolute values independent of atmospheric turbidity and surface altitude can only be obtained after removing the masking effects of the atmosphere. An atmospheric correction model, accounting for surface and sensor altitudes above sea level, is described and validated with data collected over Antarctic surfaces using a Barnes Modular Multispectral Radiometer with bands overlapping those of the Landsat Thematic Mapper. The model is also applied in a sensitivity analysis to investigate the error induced in reflectance obtained from satellite data by indeterminacy in the optical parameters of atmospheric constituents. Results show that indeterminacy in the atmospheric water vapor optical thickness is the main source of inaccuracy in the retrieval of surface reflectance from data remotely sensed over Antarctic regions.
Brajard, J.; Moulin, C.; Thiria, S.
2008-10-01
This paper presents a comparison of atmospheric correction accuracy between the standard Sea-viewing Wide Field-of-view Sensor (SeaWiFS) algorithm and the NeuroVaria algorithm for the ocean off the Indian coast in March 1999. NeuroVaria is a general method developed to retrieve aerosol optical properties and water-leaving reflectances for all types of aerosols, including absorbing ones. It was applied to SeaWiFS images of March 1999, during an episode of transport of absorbing aerosols from pollutant sources in India. Water-leaving reflectances and aerosol optical thickness estimated by the two methods were extracted along a transect across the aerosol plume for three days. The comparison showed that NeuroVaria allows the retrieval of oceanic properties in the presence of absorbing aerosols with better spatial and temporal stability than the standard SeaWiFS algorithm. NeuroVaria was then applied to the available SeaWiFS images over a two-week period. The NeuroVaria algorithm retrieves ocean products for a larger number of pixels than the standard one and eliminates most of the discontinuities and artifacts associated with the standard algorithm in the presence of absorbing aerosols.
Energy Technology Data Exchange (ETDEWEB)
Matthews, Patrick
2014-01-01
The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 105 based on the implementation of the corrective actions. Corrective action investigation (CAI) activities were performed from October 22, 2012, through May 23, 2013, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites; and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices.
International Nuclear Information System (INIS)
Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.
2015-01-01
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.
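A bare-bones thin-plate spline fit of the kind used for the warp correction can be sketched as follows (dense solve, no smoothing term, synthetic control points; the production algorithm adds regularization and robustness):

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2-D thin-plate spline mapping control points src -> dst and
    return a callable warp. Uses the kernel U(r) = r^2 log r, written as
    0.5 * r^2 * log(r^2), plus an affine part."""
    def U(r2):
        out = np.zeros_like(r2)
        nz = r2 > 0
        out[nz] = 0.5 * r2[nz] * np.log(r2[nz])
        return out

    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = U(d2), P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    coef = np.linalg.solve(A, b)          # kernel weights + affine terms

    def warp(pts):
        r2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return U(r2) @ coef[:n] + np.hstack([np.ones((len(pts), 1)), pts]) @ coef[n:]

    return warp

rng = np.random.default_rng(1)
src = rng.random((8, 2))                        # ideal comb positions
dst = src + 0.05 * rng.standard_normal((8, 2))  # distorted (observed) positions
warp = tps_fit(src, dst)
```

The fitted warp interpolates the control points exactly and extends smoothly between them, which is why TPS suits the smooth nonlinear distortions of a streak camera.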
Howard, J. E.
2014-12-01
This study focuses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground-truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000-4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind, as developed by Mutschlecner and Whitaker (1999), and uses the average wind speed between 45-55 km altitude in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested here with smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller-scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.
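A wind normalization of the Mutschlecner-Whitaker form is a one-line empirical correction; the coefficient k = 0.018 below is illustrative, not the published fit value:

```python
def wind_normalized_amplitude(a_obs, v_strat_mps, k=0.018):
    """Empirical stratospheric-wind normalization of the form
    A_norm = A_obs * 10**(-k * v), where v is the 45-55 km average wind
    component along the propagation direction (m/s). The coefficient k is
    an illustrative placeholder for the empirically fitted value."""
    return a_obs * 10.0 ** (-k * v_strat_mps)

downwind = wind_normalized_amplitude(1.0, 20.0)   # tailwind boosts raw amplitude
upwind = wind_normalized_amplitude(1.0, -20.0)    # headwind suppresses it
```

Downwind observations are scaled down and upwind observations scaled up, so amplitudes from different seasons become comparable.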
Xiao, Zhongxiu
2018-04-01
A method of measuring and correcting the tilt of anti-vibration wind turbines based on a screening algorithm is proposed in this paper. First, we design a device built around the ADXL203 acceleration sensor; the inclination is measured by installing it on the tower of the wind turbine as well as in the nacelle. Next, a Kalman filter is used to filter the signal effectively by establishing a state-space model for the signal and noise, and MATLAB is used for simulation. Considering the impact of tower and nacelle vibration on the collected data, the original data and the filtered data are classified and stored by the screening algorithm, and the filtered data are then filtered again to make the output data more accurate. Finally, installation errors are eliminated algorithmically to achieve the tilt correction. A device based on this method has the advantages of high precision, low cost and vibration resistance, and it has a wide range of applications.
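A scalar Kalman filter of the kind described, for a near-constant tilt observed through vibration noise, can be sketched as follows (the noise variances q and r are illustrative):

```python
def kalman_tilt(measurements, q=1e-5, r=1e-2):
    """Scalar Kalman filter for a near-constant tilt angle observed through
    accelerometer noise, using a random-walk state model; q and r are
    illustrative process/measurement noise variances."""
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements:
        p += q                    # predict: state variance grows slightly
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # correct with the innovation
        p *= (1.0 - k)            # updated state variance
        estimates.append(x)
    return estimates

# Alternating +/-0.1 deg "vibration" around a true tilt of 2.0 deg.
zs = [2.0 + 0.1 * (-1) ** i for i in range(400)]
est = kalman_tilt(zs)
```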
Natraj, V.; Thompson, D. R.; Mathur, A. K.; Babu, K. N.; Kindel, B. C.; Massie, S. T.; Green, R. O.; Bhattacharya, B. K.
2017-12-01
Remote Visible / ShortWave InfraRed (VSWIR) spectroscopy, typified by the Next-Generation Airborne Visible/Infrared Imaging Spectrometer (AVIRIS-NG), is a powerful tool to map the composition, health, and biodiversity of Earth's terrestrial and aquatic ecosystems. These studies must first estimate surface reflectance, removing the atmospheric effects of absorption and scattering by water vapor and aerosols. Since atmospheric state varies spatiotemporally, and is insufficiently constrained by climatological models, it is important to estimate it directly from the VSWIR data. However, water vapor and aerosol estimation is a significant ongoing challenge for existing atmospheric correction models. Conventional VSWIR atmospheric correction methods evolved from multi-band approaches and do not fully utilize the rich spectroscopic data available. We use spectrally resolved (line-by-line) radiative transfer calculations, coupled with optimal estimation theory, to demonstrate improved accuracy of surface retrievals. These spectroscopic techniques are already pervasive in atmospheric remote sounding disciplines but have not yet been applied to imaging spectroscopy. Our analysis employs a variety of scenes from the recent AVIRIS-NG India campaign, which spans various climes, elevation changes, a wide range of biomes and diverse aerosol scenarios. A key aspect of our approach is joint estimation of surface and aerosol parameters, which allows assessment of aerosol distortion effects using spectral shapes across the entire measured interval from 380-2500 nm. We expect that this method would outperform band ratio approaches, and enable evaluation of subtle aerosol parameters where in situ reference data is not available, or for extreme aerosol loadings, as is observed in the India scenarios. The results are validated using existing in-situ reference spectra, reflectance measurements from assigned partners in India, and objective spectral quality metrics for scenes without any
Lipton, A.; Moncet, J. L.; Payne, V.; Lynch, R.; Polonsky, I. N.
2017-12-01
We will present recent results from an algorithm for producing climate-quality atmospheric profiling earth system data records (ESDRs) for application to data from hyperspectral sounding instruments, including the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on Suomi-NPP, along with their companion microwave sounders, AMSU and ATMS, respectively. The ESDR algorithm uses an optimal estimation approach and the implementation has a flexible, modular software structure to support experimentation and collaboration. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. Developments to be presented include the impact of a radiance-based pre-classification method for the atmospheric background. In addition to improving retrieval performance, pre-classification has the potential to reduce the sensitivity of the retrievals to the climatological data from which the background estimate and its error covariance are derived. We will also discuss evaluation of a method for mitigating the effect of clouds on the radiances, and enhancements of the radiative transfer forward model.
An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP
Moncet, J. L.
2015-12-01
We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration and has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and with sharp vertical structure near the surface. In addition, we will show impacts of alternative treatments of regularization of the inversion. While OE algorithms typically implement regularization by using background estimates from
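The optimal-estimation core these ESDR abstracts refer to reduces, in the linear case, to the standard update below; all matrices are toy values, and the operational algorithm wraps this in iteration, a radiative transfer forward model and the regularization choices discussed above:

```python
import numpy as np

def oe_linear_retrieval(y, K, x_a, S_a, S_e):
    """One linear optimal-estimation step (Rodgers form):
    x_hat = x_a + (K^T S_e^-1 K + S_a^-1)^-1 K^T S_e^-1 (y - K x_a),
    where x_a is the background state, S_a its covariance, K the Jacobian
    and S_e the measurement-error covariance."""
    Se_inv = np.linalg.inv(S_e)
    hess = K.T @ Se_inv @ K + np.linalg.inv(S_a)
    return x_a + np.linalg.solve(hess, K.T @ Se_inv @ (y - K @ x_a))

K = np.array([[1.0, 0.5], [0.2, 1.0]])   # toy Jacobian
x_true = np.array([1.0, -0.5])
x_a = np.zeros(2)                        # background (prior) state
S_a = np.eye(2) * 10.0                   # loose prior covariance
S_e = np.eye(2) * 1e-4                   # small measurement noise
x_hat = oe_linear_retrieval(K @ x_true, K, x_a, S_a, S_e)
```

With a loose prior and precise measurements the retrieval is pulled almost entirely to the measurement-consistent state; tightening S_a is exactly the background-driven regularization the abstract discusses.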
Goldengorin, Boris; Vink, Marius de
1999-01-01
The Data-Correcting Algorithm (DCA) corrects the data of a hard problem instance in such a way that we obtain an instance of a well solvable special case. For a given prescribed accuracy of the solution, the DCA uses a branch and bound scheme to make sure that the solution of the corrected instance
International Nuclear Information System (INIS)
Serpolla, A.; Bonafoni, S.; Basili, P.; Biondi, R.; Arino, O.
2009-01-01
This paper presents the validation results of ENVISAT MERIS and TERRA MODIS retrieval algorithms for atmospheric Water Vapour Content (WVC) estimation in clear-sky conditions over land. The MERIS algorithm exploits the radiance ratio of the absorbing channel at 900 nm to the almost absorption-free reference at 890 nm, while the MODIS one is based on the ratio of measurements centred near 0.905, 0.936 and 0.940 μm to the atmospheric window reflectances at 0.865 and 1.24 μm. The first test was performed in the Mediterranean area using WVC provided by both ECMWF and AERONET. As a second step, the performance of the algorithms was tested using WVC computed from radiosonde observations (RAOBs) in northeast Australia. The comparisons against reference WVC values showed an overestimation of WVC by MODIS (root mean square error percentage greater than 20%) and an acceptable performance of the MERIS algorithms (root mean square error percentage around 10%).
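Near-IR ratio techniques of this family typically model the two-channel transmittance ratio as T = exp(α − β√W); inverting that form is a one-liner. The constants α and β below are illustrative, not the operational fits:

```python
import math

def wvc_from_transmittance_ratio(t_ratio, alpha=0.02, beta=0.651):
    """Invert a two-channel transmittance ratio of the assumed form
    T = exp(alpha - beta * sqrt(W)) for water vapour content W (cm).
    alpha and beta are illustrative fit constants of the kind used by
    near-IR band-ratio water vapour algorithms."""
    return ((alpha - math.log(t_ratio)) / beta) ** 2

w_true = 2.0                                       # cm of precipitable water
t = math.exp(0.02 - 0.651 * math.sqrt(w_true))     # simulated channel ratio
w_back = wvc_from_transmittance_ratio(t)           # round-trip recovery
```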
Nearshore Water Quality Estimation Using Atmospherically Corrected AVIRIS Data
Directory of Open Access Journals (Sweden)
Sima Bagheri
2011-02-01
The objective of this research is to characterize the surface spectral reflectance of nearshore waters using the atmospheric correction code Tafkaa for retrieval of marine water constituent concentrations from hyperspectral data. The study area is the nearshore waters of New York/New Jersey, considered a valued ecological, economic and recreational resource within the New York metropolitan area. Comparison of the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) measured radiance and in situ reflectance measurements shows the effect of the solar source and atmosphere on the total upwelling spectral radiance measured by AVIRIS. The radiative transfer code Tafkaa was applied to remove the effects of the atmosphere and to generate accurate reflectance R(0−) from the AVIRIS radiance for retrieving water quality parameters (i.e., total chlorophyll). Chlorophyll estimation as an index of phytoplankton abundance was optimized using the AVIRIS band ratio at 675 nm and 702 nm, resulting in a coefficient of determination of R² = 0.98. Use of the radiative transfer code in conjunction with a bio-optical model is the main tool for making ocean color remote sensing an operational tool for monitoring the key nearshore ecological communities of phytoplankton important in global change studies.
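A band-ratio chlorophyll index on the 675/702 nm pair named above might look like the sketch below; the linear coefficients are hypothetical placeholders, not the fitted regression values from the study:

```python
def chl_from_band_ratio(r675, r702, a=25.0, b=-20.0):
    """Linear chlorophyll index on the 702/675 nm reflectance ratio (the
    red absorption / red-edge band pair identified in the study). The
    coefficients a and b are hypothetical placeholders, not the values
    behind the reported R^2 = 0.98 fit."""
    return a * (r702 / r675) + b

low = chl_from_band_ratio(0.020, 0.021)
high = chl_from_band_ratio(0.020, 0.030)   # stronger red-edge -> more chlorophyll
```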
Radar Rainfall Bias Correction based on Deep Learning Approach
Song, Yang; Han, Dawei; Rico-Ramirez, Miguel A.
2017-04-01
Radar rainfall measurement errors can be attributed in large part to various sources, including intricate synoptic regimes. Temperature, humidity and wind are typically acknowledged as critical meteorological factors in inducing precipitation discrepancies aloft and on the ground. Conventional practices mainly use radar-gauge or geostatistical techniques with direct weighted-interpolation algorithms as bias correction schemes, and rarely consider the atmospheric effects. This study aims to comprehensively quantify the impacts of those meteorological elements on radar-gauge rainfall bias correction based on a deep learning approach. The approach employs deep convolutional neural networks to automatically extract three-dimensional meteorological features for target recognition based on high range resolution profiles. The complex nonlinear relationships between input and target variables can be implicitly detected by such a scheme, which is validated on the test dataset. The proposed bias correction scheme is expected to be a promising improvement in systematically minimizing the synthesized atmospheric effects on rainfall discrepancies between radar and rain gauges, which can be useful in many meteorological and hydrological applications (e.g., real-time flood forecasting), especially for regions with complex atmospheric conditions.
Pantazis, Alexandros; Papayannis, Alexandros; Georgoussis, Georgios
2018-04-01
In this paper we present the development of novel algorithms and techniques implemented within the Laser Remote Sensing Laboratory (LRSL) of the National Technical University of Athens (NTUA), in collaboration with Raymetrics S.A., for incorporation into a 3-Dimensional (3D) lidar. The lidar transmits at 355 nm in the eye-safe region, and the measurements are then transposed to the visual range at 550 nm, according to the World Meteorological Organization (WMO) and International Civil Aviation Organization (ICAO) rules for daytime visibility. These algorithms provide horizontal, slant and vertical visibility for airport tower controllers and meteorologists, as well as from the pilot's point of view. Further algorithms are provided for the detection of atmospheric layering in any given direction and vertical angle, along with detection of the Planetary Boundary Layer Height (PBLH).
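Transposing a retrieved extinction coefficient into a WMO-style visibility uses the Koschmieder relation; a minimal sketch, where the extinction value is an assumed haze case rather than a retrieved one:

```python
import math

def meteorological_optical_range(sigma_ext, contrast_threshold=0.05):
    """Koschmieder relation MOR = -ln(threshold) / sigma_ext, using the WMO
    5% contrast threshold; sigma_ext is the extinction coefficient (1/m)
    at 550 nm, e.g. converted from the lidar retrieval."""
    return -math.log(contrast_threshold) / sigma_ext

vis_m = meteorological_optical_range(3.0e-4)   # assumed hazy-day extinction
```

With the 5% threshold the familiar rule of thumb MOR ≈ 3/σ follows, since −ln 0.05 ≈ 3.0.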
International Nuclear Information System (INIS)
Sun, Y.; Hou, Y.; Yan, Y.
2004-01-01
With the extensive application of industrial computed tomography in the field of non-destructive testing, how to improve the quality of the reconstructed image is receiving more and more attention. It is well known that in existing cone-beam filtered backprojection reconstruction algorithms the cone angle is controlled within a narrow range. The reason for this limitation is the incompleteness of projection data as the cone angle increases, which limits the size of the tested workpiece. Considering the characteristics of the X-ray cone angle, an improved cone-beam filtered backprojection reconstruction algorithm taking angular correction into account is proposed in this paper. The aim of our algorithm is to correct the cone-angle effect resulting from the incompleteness of projection data in the conventional algorithm. The basis of the correction is the angular relationship among the X-ray source, the tested workpiece and the detector. Thus the cone angle is not strictly limited, and the algorithm may be used to inspect larger workpieces. Furthermore, an adaptive wavelet filter is used for multiresolution analysis, which can modify the wavelet decomposition series adaptively according to the resolution demanded of the local reconstructed area. Therefore the computation and the time of reconstruction can be reduced, and the quality of the reconstructed image can also be improved. (author)
Research of beam hardening correction method for CL system based on SART algorithm
International Nuclear Information System (INIS)
Cao Daquan; Wang Yaxiao; Que Jiemin; Sun Cuili; Wei Cunfeng; Wei Long
2014-01-01
Computed laminography (CL) is a non-destructive testing technique for large objects, especially planar objects. Beam hardening artifacts are widely observed in CL systems and significantly reduce image quality. This study proposes a novel simultaneous algebraic reconstruction technique (SART) based beam hardening correction (BHC) method for the CL system, the SART-BHC algorithm for short. The SART-BHC algorithm takes the polychromatic attenuation process into account in formulating the iterative reconstruction update. A novel projection matrix calculation method, different from the conventional cone-beam or fan-beam geometry, was also studied for the CL system. The proposed method was evaluated with simulation data and experimental data, generated using the Monte Carlo simulation toolkit Geant4 and a bench-top CL system, respectively. All projection data were reconstructed with the SART-BHC algorithm and the standard filtered back projection (FBP) algorithm. The reconstructed images show that beam hardening artifacts are greatly reduced with the SART-BHC algorithm compared to the FBP algorithm. The SART-BHC algorithm does not need any prior knowledge about the object or the X-ray spectrum, and it can also mitigate interlayer aliasing. (authors)
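The algebraic core of SART can be sketched in a few lines: each iteration backprojects ray-sum residuals normalised by row sums, then normalises per pixel by column sums. The sketch below is plain SART on a toy dense system, without the polychromatic (beam-hardening) term or the CL-specific projection matrix that the paper adds:

```python
def sart_step(A, y, x, lam=1.0):
    """One SART iteration for the linear system y ~ A x (dense lists)."""
    m, n = len(A), len(x)
    row_sums = [sum(A[i]) for i in range(m)]
    col_sums = [sum(A[i][j] for i in range(m)) for j in range(n)]
    corr = [0.0] * n
    for i in range(m):
        # residual of ray i, backprojected with row-sum normalisation
        r = y[i] - sum(A[i][k] * x[k] for k in range(n))
        if row_sums[i]:
            for j in range(n):
                corr[j] += A[i][j] * r / row_sums[i]
    return [x[j] + lam * corr[j] / col_sums[j] if col_sums[j] else x[j]
            for j in range(n)]

# toy 2x2 system with true solution [1, 2]
A = [[1.0, 0.0], [1.0, 1.0]]
y = [1.0, 3.0]
x = [0.0, 0.0]
for _ in range(200):
    x = sart_step(A, y, x)
print([round(v, 3) for v in x])   # converges to [1.0, 2.0]
```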
International Nuclear Information System (INIS)
Narabayashi, Masaru; Mizowaki, Takashi; Matsuo, Yukinori; Nakamura, Mitsuhiro; Takayama, Kenji; Norihisa, Yoshiki; Sakanaka, Katsuyuki; Hiraoka, Masahiro
2012-01-01
Heterogeneity correction algorithms can have a large impact on the dose distributions of stereotactic body radiation therapy (SBRT) for lung tumors. Treatment plans of 20 patients who underwent SBRT for lung tumors with a prescribed dose of 48 Gy in four fractions at the isocenter were reviewed retrospectively and recalculated with different heterogeneity correction algorithms: the pencil beam convolution algorithm with a Batho power-law correction (BPL) in Eclipse, the radiological path length algorithm (RPL), and the X-ray Voxel Monte Carlo algorithm (XVMC) in iPlan. The doses at the periphery (minimum dose and D95) of the planning target volume (PTV) were compared among the three heterogeneity correction algorithms using the same monitor units, and the monitor units were compared between two methods of dose prescription: an isocenter dose prescription (IC prescription) and a dose-volume-based prescription (D95 prescription). Mean values of the dose at the periphery of the PTV were significantly lower with XVMC than with BPL using the same monitor units (P<0.001). In addition, under IC prescription using BPL, RPL and XVMC, the ratios of mean monitor units were 1, 0.959 and 0.986, respectively. Under D95 prescription, they were 1, 0.937 and 1.088, respectively. These observations indicate that applying XVMC under D95 prescription increases the actually delivered dose by 8.8% on average compared with applying BPL. The appropriateness of switching heterogeneity correction algorithms and dose prescription methods should be carefully validated from a clinical viewpoint. (author)
Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)
2001-01-01
The computational complexity of algorithms for Four-Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular, accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables, so the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient (in terms of wall clock time) and scalable parallel implementations of the algorithms.
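The "teraword" claim is simple arithmetic: with roughly 10^6 state variables, the full forecast error covariance matrix has roughly 10^12 entries. A quick check (the 8-byte word size is an assumption for illustration):

```python
n = 10 ** 6                        # gridded state variables (order of magnitude)
entries = n * n                    # full n x n error covariance matrix
terabytes = entries * 8 / 10 ** 12 # at 8 bytes per 64-bit word
print(entries, terabytes)          # 10^12 entries, ~8 TB
```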
Improved ocean-color remote sensing in the Arctic using the POLYMER algorithm
Frouin, Robert; Deschamps, Pierre-Yves; Ramon, Didier; Steinmetz, François
2012-10-01
Atmospheric correction of ocean-color imagery in the Arctic poses some specific challenges that the standard atmospheric correction algorithm does not address, namely low solar elevation, high cloud frequency, multi-layered polar clouds, presence of ice in the field of view, and adjacency effects from highly reflecting surfaces covered by snow and ice and from clouds. These challenges may be addressed using a flexible atmospheric correction algorithm referred to as POLYMER (Steinmetz et al., 2011). This algorithm does not use a specific aerosol model, but fits the atmospheric reflectance by a polynomial with a non-spectral term that accounts for any non-spectral scattering (clouds, coarse aerosol mode) or reflection (glitter, whitecaps, small ice surfaces within the instrument field of view), a spectral term varying with wavelength to the power -1 (fine aerosol mode), and a spectral term varying with wavelength to the power -4 (molecular scattering, adjacency effects from clouds and white surfaces). Tests are performed on selected MERIS imagery acquired over Arctic seas. The derived ocean properties, i.e., marine reflectance and chlorophyll concentration, are compared with those obtained with the standard MEGS algorithm. The POLYMER estimates are more realistic in regions affected by the ice environment, e.g., chlorophyll concentration is higher near the ice edge, and spatial coverage is substantially increased. Good retrievals are obtained in the presence of thin clouds, with ocean-color features exhibiting spatial continuity from clear to cloudy regions. The POLYMER estimates of marine reflectance agree better with in situ measurements than the MEGS estimates. Biases are 0.001 or less in magnitude, except at 412 and 443 nm, where they reach 0.005 and 0.002, respectively, and the root-mean-squared difference decreases from 0.006 at 412 nm to less than 0.001 at 620 and 665 nm. A first application to MODIS imagery is presented, revealing that the POLYMER algorithm is
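The three-term spectral model described above can be illustrated with a small least-squares fit of rho(lambda) = c0 + c1*lambda^-1 + c2*lambda^-4. To keep the normal equations well conditioned, this sketch works in the dimensionless variable x = 550 nm / lambda; the band set and coefficient values are hypothetical, and the operational POLYMER algorithm is considerably more elaborate than this:

```python
def fit_polymer_like(wavelengths_nm, rho):
    """Least-squares fit of rho = c0 + c1*x + c2*x**4 with x = 550/lam,
    mirroring the non-spectral, lambda**-1 and lambda**-4 terms."""
    rows = [[1.0, 550.0 / lam, (550.0 / lam) ** 4] for lam in wavelengths_nm]
    # normal equations (A^T A) c = A^T rho
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * v for r, v in zip(rows, rho)) for i in range(3)]
    for i in range(3):                       # Gaussian elimination, partial pivot
        p = max(range(i, 3), key=lambda k: abs(ata[k][i]))
        ata[i], ata[p] = ata[p], ata[i]
        atb[i], atb[p] = atb[p], atb[i]
        for k in range(i + 1, 3):
            f = ata[k][i] / ata[i][i]
            for j in range(i, 3):
                ata[k][j] -= f * ata[i][j]
            atb[k] -= f * atb[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                      # back substitution
        c[i] = (atb[i] - sum(ata[i][j] * c[j] for j in range(i + 1, 3))) / ata[i][i]
    return c

lams = [412.0, 443.0, 490.0, 560.0, 665.0]   # MERIS-like bands
c_true = [0.01, 0.005, 0.02]                 # hypothetical coefficients
rho = [c_true[0] + c_true[1] * (550.0 / l) + c_true[2] * (550.0 / l) ** 4
       for l in lams]
c_fit = fit_polymer_like(lams, rho)
print([round(v, 6) for v in c_fit])          # recovers c_true on clean data
```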
DEFF Research Database (Denmark)
Allan, Mathew G; Hamilton, David P.; Trolle, Dennis
2016-01-01
Atmospheric correction of Landsat 7 thermal data was carried out for the purpose of retrieval of lake skin water temperature in Rotorua lakes, and Lake Taupo, North Island, New Zealand. The effect of the atmosphere was modelled using four sources of atmospheric profile data as input to the MODera...
Atmospheric Correction Inter-comparison Exercise (ACIX)
Vermote, E.; Doxani, G.; Gascon, F.; Roger, J. C.; Skakun, S.
2017-12-01
The free and open data access policy for Landsat-8 (L-8) and Sentinel-2 (S-2) satellite imagery has encouraged the development of atmospheric correction (AC) approaches for generating Bottom-of-Atmosphere (BOA) products. Several entities have started to generate (or plan to generate in the short term) BOA reflectance products at global scale for the L-8 and S-2 missions. To this end, the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA) have initiated an exercise on the inter-comparison of the available AC processors. The results of the exercise are expected to point out the strengths and weaknesses, as well as the commonalities and discrepancies, of the various AC processors, in order to suggest and define ways for their further improvement. In particular, 11 atmospheric processors from five different countries participate in ACIX with the aim of inter-comparing their performance when applied to L-8 and S-2 data. All the processors should be operational without requiring parametrization when applied to different areas. A protocol describing in detail the inter-comparison metrics and the test dataset based on the AERONET sites was agreed unanimously during the 1st ACIX workshop in June 2016. In particular, a basic and an advanced run of each processor were requested in the frame of ACIX, with the aim of drawing robust and reliable conclusions on the processors' performance. The protocol also describes the comparison metrics for the aerosol optical thickness and water vapour products of the processors against the corresponding AERONET measurements. Moreover, concerning the surface reflectances, the inter-comparison among the processors is defined, as well as the comparison with the MODIS surface reflectance and with a reference surface reflectance product. Such a reference product will be obtained using the AERONET characterization of the aerosol (size distribution and refractive indices) and an accurate radiative transfer code. The inter
Bias correction of daily satellite precipitation data using genetic algorithm
Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.
2018-05-01
Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process is aimed at reducing the bias of CHIRP. However, biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results were evaluated for different seasons and different elevation levels. The experimental results reveal that the scheme robustly reduces the bias in variance (around 100% reduction) and leads to reductions of the first- and second-quantile biases. However, the bias in the third quantile is only reduced during dry months. Across elevation levels, the performance of the bias correction differs significantly only for the skewness indicator.
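The two ingredients named above can be sketched together: a nonlinear power transformation y = a*x**b applied to the satellite series, with (a, b) tuned by a toy genetic algorithm so that the transformed mean and standard deviation match the observations. This is illustrative only; the paper's actual GA operators, parameterisation and data are not specified here, and all values below are hypothetical:

```python
import random

def power_transform(x, a, b):
    """Nonlinear power transformation of a precipitation series."""
    return [a * v ** b for v in x]

def moment_cost(sim, obs_mean, obs_std, a, b):
    """Squared mismatch of mean and std after transformation."""
    y = power_transform(sim, a, b)
    m = sum(y) / len(y)
    s = (sum((v - m) ** 2 for v in y) / len(y)) ** 0.5
    return (m - obs_mean) ** 2 + (s - obs_std) ** 2

def tiny_ga(sim, obs_mean, obs_std, gens=60, pop=40, seed=1):
    """Toy GA over (a, b): elitist selection, midpoint crossover,
    Gaussian mutation."""
    rng = random.Random(seed)
    popu = [(rng.uniform(0.1, 3.0), rng.uniform(0.3, 2.0)) for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda ab: moment_cost(sim, obs_mean, obs_std, *ab))
        elite = popu[: pop // 4]
        children = []
        while len(elite) + len(children) < pop:
            a1, b1 = rng.choice(elite)
            a2, b2 = rng.choice(elite)
            a = max(0.5 * (a1 + a2) + rng.gauss(0.0, 0.05), 1e-3)
            b = max(0.5 * (b1 + b2) + rng.gauss(0.0, 0.05), 1e-3)
            children.append((a, b))
        popu = elite + children
    return min(popu, key=lambda ab: moment_cost(sim, obs_mean, obs_std, *ab))

sim = [1.0, 2.0, 3.0, 4.0, 5.0]     # hypothetical satellite series
a, b = tiny_ga(sim, obs_mean=6.0, obs_std=2.8284)
print(round(moment_cost(sim, 6.0, 2.8284, a, b), 3))
```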
Sosnovik, David E; Dai, Guangping; Nahrendorf, Matthias; Rosen, Bruce R; Seethamraju, Ravi
2007-08-01
To evaluate the use of a transmit-receive surface (TRS) coil and a cardiac-tailored intensity-correction algorithm for cardiac MRI in mice at 9.4 Tesla (9.4T). Fast low-angle shot (FLASH) cines, with and without delays alternating with nutations for tailored excitation (DANTE) tagging, were acquired in 13 mice. An intensity-correction algorithm was developed to compensate for the sensitivity profile of the surface coil, and was tailored to account for the unique distribution of noise and flow artifacts in cardiac MR images. Image quality was extremely high and allowed fine structures such as trabeculations, valve cusps, and coronary arteries to be clearly visualized. The tag lines created with the surface coil were also sharp and clearly visible. Application of the intensity-correction algorithm improved signal intensity, tissue contrast, and image quality even further. Importantly, the cardiac-tailored properties of the correction algorithm prevented noise and flow artifacts from being significantly amplified. The feasibility and value of cardiac MRI in mice with a TRS coil has been demonstrated. In addition, a cardiac-tailored intensity-correction algorithm has been developed and shown to improve image quality even further. The use of these techniques could produce significant potential benefits over a broad range of scanners, coil configurations, and field strengths. (c) 2007 Wiley-Liss, Inc.
A New Adaptive Gamma Correction Based Algorithm Using DWT-SVD for Non-Contrast CT Image Enhancement.
Kallel, Fathi; Ben Hamida, Ahmed
2017-12-01
The performance of medical image processing techniques, in particular for CT scans, is usually affected by the poor contrast quality introduced by some medical imaging devices. This suggests the use of contrast enhancement methods to adjust the intensity distribution of dark images. In this paper, an advanced, adaptive and simple algorithm for dark medical image enhancement is proposed. The approach is principally based on adaptive gamma correction using the discrete wavelet transform with singular value decomposition (DWT-SVD). In a first step, the technique decomposes the input medical image into four frequency sub-bands using the DWT and then estimates the singular-value matrix of the low-low (LL) sub-band image. In a second step, an enhanced LL component is generated using an adequate correction factor and inverse singular value decomposition (SVD). In a third step, for additional improvement of the LL component, the LL sub-band image obtained from the SVD enhancement stage is classified into two main classes (low contrast and moderate contrast) based on its statistical information and then processed using an adaptive dynamic gamma correction function. In fact, an adaptive gamma correction factor is calculated for each image according to its class. Finally, the obtained LL sub-band image undergoes the inverse DWT together with the unprocessed low-high (LH), high-low (HL), and high-high (HH) sub-bands to generate the enhanced image. Different types of non-contrast CT medical images are considered for performance evaluation of the proposed contrast enhancement algorithm based on adaptive gamma correction using DWT-SVD (DWT-SVD-AGC). Results show that our proposed algorithm performs better than other state-of-the-art techniques.
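The pipeline's skeleton (decompose, enhance only the LL band, invert) can be sketched with a one-level Haar DWT and a fixed gamma curve. This is a minimal sketch: the SVD stage, the class-based adaptive gamma selection, and the fixed gamma value are all omitted or assumed here:

```python
def haar2d(img):
    """One-level 2D Haar DWT of an even-sized image (values in [0, 255])."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b, c, d = img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0
            LH[i // 2][j // 2] = (a - b + c - d) / 4.0
            HL[i // 2][j // 2] = (a + b - c - d) / 4.0
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h2, w2 = len(LL), len(LL[0])
    img = [[0.0] * (2 * w2) for _ in range(2 * h2)]
    for i in range(h2):
        for j in range(w2):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2 * i][2 * j] = ll + lh + hl + hh
            img[2 * i][2 * j + 1] = ll - lh + hl - hh
            img[2 * i + 1][2 * j] = ll + lh - hl - hh
            img[2 * i + 1][2 * j + 1] = ll - lh - hl + hh
    return img

def gamma_on_ll(img, gamma=0.6):
    """Brighten only the approximation (LL) band with a gamma curve,
    keep the detail bands, then invert the DWT."""
    LL, LH, HL, HH = haar2d(img)
    LLg = [[255.0 * (v / 255.0) ** gamma for v in row] for row in LL]
    return ihaar2d(LLg, LH, HL, HH)

dark = [[40, 42, 41, 43],
        [39, 41, 40, 42],
        [38, 40, 39, 41],
        [37, 39, 38, 40]]
out = gamma_on_ll(dark)
print(out[0][0] > dark[0][0])   # the dark image is brightened
```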
International Nuclear Information System (INIS)
Kuttig, Jan; Steiding, Christian; Hupfer, Martin; Karolczak, Marek; Kolditz, Daniel
2015-01-01
In this study we compared various defect pixel correction methods for reducing artifact appearance within projection images used for computed tomography (CT) reconstructions. Defect pixel correction algorithms were examined with respect to their artifact behaviour within planar projection images as well as in volumetric CT reconstructions. We investigated four algorithms: nearest neighbour, linear and adaptive linear interpolation, and a frequency-selective spectral-domain approach. To characterise the quality of each algorithm in planar image data, we inserted line defects of varying widths and orientations into images. The structure preservation of each algorithm was analysed by corrupting and correcting the image of a slit phantom pattern and by evaluating its line spread function (LSF). The noise preservation was assessed by interpolating corrupted flat images and estimating the noise power spectrum (NPS) of the interpolated region. For the volumetric investigations, we examined the structure and noise preservation within a structured aluminium foam, a mid-contrast cone-beam phantom and a homogeneous polyurethane (PUR) cylinder. Of the correction methods tested, the frequency-selective algorithm showed the best structure and noise preservation for planar data. For volumetric data it still showed the best noise preservation, whereas its structure preservation was outperformed by the linear interpolation. The frequency-selective spectral-domain approach is recommended for correcting line defects in planar image data, but its abilities within high-contrast volumes are restricted. In that case, the application of a simple linear interpolation might be the better choice to correct line defects within projection images used for CT. (paper)
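The simplest of the compared schemes, linear interpolation across a defective detector column, can be sketched directly (the image values and defect positions are hypothetical):

```python
def correct_defect_columns(img, bad_cols):
    """Replace defective columns by linear interpolation between the
    nearest good columns on either side (nearest-neighbour fallback
    at the image borders)."""
    w = len(img[0])
    good = [j for j in range(w) if j not in bad_cols]
    out = [row[:] for row in img]
    for row in out:
        for j in sorted(bad_cols):
            lo = max((g for g in good if g < j), default=None)
            hi = min((g for g in good if g > j), default=None)
            if lo is None:
                row[j] = row[hi]
            elif hi is None:
                row[j] = row[lo]
            else:
                t = (j - lo) / (hi - lo)     # distance weight
                row[j] = (1 - t) * row[lo] + t * row[hi]
    return out

img = [[10, 999, 30], [20, 999, 40]]   # column 1 is defective
print(correct_defect_columns(img, {1}))  # -> [[10, 20.0, 30], [20, 30.0, 40]]
```

The adaptive and frequency-selective variants compared in the study differ in how they estimate the missing values, not in this basic replacement structure.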
Filli, Lukas; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio; Finkenstädt, Tim; Andreisek, Gustav; Guggenberger, Roman
2014-12-01
The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. Flat detector computed tomography (FDCT) is a helpful imaging tool for scaphoid fixation. The correction algorithm significantly reduces artefacts in FDCT induced by scaphoid fixation screws. This may facilitate intra
Energy Technology Data Exchange (ETDEWEB)
Iwai, P; Lins, L Nadler [AC Camargo Cancer Center, Sao Paulo (Brazil)
2016-06-15
Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD) or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to those cardiac implanted electronic devices (CIED) as calculated by different algorithms, nor studies comparing doses computed with and without heterogeneity correction. The aim of this study was to evaluate the influence of the Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA) and Acuros XB (AXB) algorithms, as well as heterogeneity correction, on the risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were higher than those calculated by PBC (by 29% and 4%, respectively). The maximum difference between doses calculated by the algorithms was about 1 Gy, whether heterogeneity correction was used or not. Values of maximum dose calculated with heterogeneity correction showed that the dose at the CIED was equal or higher in 84% of the cases with PBC, 77% with AAA and 67% with AXB than the dose obtained without heterogeneity correction. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict the dose at the CIED, with heterogeneity correction.
Directory of Open Access Journals (Sweden)
Tara Blakey
2016-10-01
This study evaluated the ability to improve Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) chl-a retrieval from optically shallow coastal waters by applying algorithms specific to the pixels' benthic class. The form of the Ocean Color (OC) algorithm was assumed for this study. The operational atmospheric correction producing Level 2 SeaWiFS data was retained, since the focus of this study was on establishing the benefit of the alternative specification of the bio-optical algorithm. Benthic class was determined through satellite image-based classification methods. The accuracy of the chl-a algorithms evaluated was determined through comparison with coincident in situ measurements of chl-a. The regionally tuned models that were allowed to vary by benthic class produced more accurate estimates of chl-a than the single, unified regionally tuned model. Mean absolute percent difference was approximately 70% for the regionally tuned, benthic class-specific algorithms. Evaluation of the residuals indicated the potential for further improvement to chl-a estimation through finer characterization of benthic environments. Atmospheric correction procedures specialized to coastal environments were recognized as areas for future improvement, as these procedures would improve both classification and algorithm tuning.
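The "OC form" referred to above is a polynomial in the log10 blue-green band ratio, and class-specific tuning amounts to swapping the coefficient set by benthic class. A hedged sketch: the coefficient values and class names below are hypothetical placeholders, not the study's tuned values:

```python
import math

# Hypothetical per-benthic-class coefficient sets for an OC-style
# 4th-order polynomial (a0..a4) in the log10 band ratio.
COEFFS = {
    "seagrass": [0.35, -3.0, 3.0, -1.4, -0.5],
    "sand":     [0.30, -2.8, 2.5, -1.2, -0.4],
}

def oc_chl(rrs_blue, rrs_green, benthic_class):
    """OC-form chl-a (mg m^-3): chl = 10**(a0 + a1*r + ... + a4*r**4),
    with r = log10(Rrs_blue / Rrs_green), coefficients chosen per class."""
    r = math.log10(rrs_blue / rrs_green)
    a = COEFFS[benthic_class]
    return 10.0 ** sum(ai * r ** i for i, ai in enumerate(a))

print(round(oc_chl(0.008, 0.004, "seagrass"), 3))
```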
The data correction algorithms in the ⁶⁰Co train inspection system
Yuan Ya Ding; Liu Xi Ming; Miao Ji Cheng
2002-01-01
Because of the physical characteristics of the ⁶⁰Co train inspection system and the use of a high-speed data collection system based on current integration, the original images are distorted to a certain degree. The authors investigate the causes of this distortion and accordingly present the corresponding data correction algorithms.
Corrections for hydrostatic atmospheric models: radii and effective temperatures of Wolf Rayet stars
International Nuclear Information System (INIS)
Loore, C. de; Hellings, P.; Lamers, H.J.G.L.M.
1982-01-01
With the assumption of plane-parallel hydrostatic atmospheres, generally used for the computation of evolutionary models, the radii of WR stars are seriously underestimated. The true atmospheres may be very extended, owing to the effect of the stellar wind. Instead of these hydrostatic atmospheres the authors consider dynamical atmospheres, adopting a velocity law. The equation of the optical depth is integrated outwards using the equation of continuity. The ''hydrostatic'' radii must be multiplied by a factor of 2 to 8, and the effective temperatures by a factor of 0.8 to 0.35, when Wolf-Rayet wind characteristics and WR mass loss rates are considered. With these corrections the effective temperatures of the theoretical models, which are helium-burning Roche lobe overflow remnants, range between 30,000 K and 50,000 K. Effective temperatures calculated under the hydrostatic hypothesis can be as high as 150,000 K for helium-burning RLOF remnants with WR mass loss rates. (Auth.)
Kalashnikova, O. V.; Garay, M. J.; Xu, F.; Seidel, F. C.; Diner, D. J.
2015-12-01
Satellite remote sensing of ocean color is a critical tool for assessing the productivity of marine ecosystems and monitoring changes resulting from climatic or environmental influences. Yet water-leaving radiance comprises less than 10% of the signal measured from space, making correction for absorption and scattering by the intervening atmosphere imperative. Traditional ocean color retrieval algorithms utilize a standard set of aerosol models and the assumption of negligible water-leaving radiance in the near-infrared. Modern improvements have been developed to handle absorbing aerosols such as urban particulates in coastal areas and transported desert dust over the open ocean, where ocean fertilization can impact biological productivity at the base of the marine food chain. Even so, imperfect knowledge of the absorbing aerosol optical properties or their height distribution results in well-documented sources of error. In the UV, the problems of UV-enhanced absorption and nonsphericity of certain aerosol types are amplified due to the increased Rayleigh and aerosol optical depth, especially at off-nadir view angles. Multi-angle spectro-polarimetric measurements have been advocated as an additional tool to better understand and retrieve the aerosol properties needed for atmospheric correction of ocean color retrievals. The central concern of the work described here is the assessment of the effects of absorbing aerosol properties on water-leaving radiance measurement uncertainty when the UV-enhanced absorption of carbonaceous particles is neglected and dust nonsphericity is not accounted for. In addition, we evaluate the polarimetric sensitivity to absorbing aerosol properties in light of measurement uncertainties achievable with the next generation of multi-angle polarimetric imaging instruments, and demonstrate advantages and disadvantages of wavelength selection in the UV/VNIR range. The phase matrices for the spherical smoke particles were calculated using a standard
Vector Green's function algorithm for radiative transfer in plane-parallel atmosphere
International Nuclear Information System (INIS)
Qin Yi; Box, Michael A.
2006-01-01
Green's functions are a widely used approach for boundary value problems. In problems related to radiative transfer, Green's functions have been found useful in land, ocean and atmosphere remote sensing. They are also a key element in higher-order perturbation theory. This paper presents an explicit expression for the Green's function, in terms of the source and radiation field variables, for a plane-parallel atmosphere with either vacuum boundaries or a reflecting (BRDF) surface. The full polarization state is considered, but the algorithm has been developed in such a way that it can easily be reduced to solve scalar radiative transfer problems, which makes it possible to implement a single set of code for computing both the scalar and the vector Green's function.
TUnfold, an algorithm for correcting migration effects in high energy physics
Energy Technology Data Exchange (ETDEWEB)
Schmitt, Stefan
2012-07-15
TUnfold is a tool for correcting migration and background effects in high energy physics for multi-dimensional distributions. It is based on a least square fit with Tikhonov regularisation and an optional area constraint. For determining the strength of the regularisation parameter, the L-curve method and scans of global correlation coefficients are implemented. The algorithm supports background subtraction and error propagation of statistical and systematic uncertainties, in particular those originating from limited knowledge of the response matrix. The program is interfaced to the ROOT analysis framework.
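The regularised least-squares problem described here, minimise ||Ax - y||^2 + tau*||Lx||^2, has a closed-form solution via the normal equations (A^T A + tau L^T L) x = A^T y. A minimal two-bin sketch with L = identity (TUnfold itself also supports curvature regularisation, an area constraint, background subtraction and full error propagation, none of which are modelled here):

```python
def tikhonov_unfold(A, y, tau):
    """Tikhonov-regularised unfolding for a 2x2 response matrix:
    solve (A^T A + tau*I) x = A^T y in closed form."""
    ata = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    aty = [sum(A[k][i] * y[k] for k in range(2)) for i in range(2)]
    m = [[ata[0][0] + tau, ata[0][1]],
         [ata[1][0], ata[1][1] + tau]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(m[1][1] * aty[0] - m[0][1] * aty[1]) / det,
            (-m[1][0] * aty[0] + m[0][0] * aty[1]) / det]

# response matrix with 20% bin-to-bin migration; true spectrum [100, 50]
A = [[0.8, 0.2], [0.2, 0.8]]
y = [A[0][0] * 100 + A[0][1] * 50, A[1][0] * 100 + A[1][1] * 50]  # [90, 60]
print([round(v, 2) for v in tikhonov_unfold(A, y, tau=0.0)])  # [100.0, 50.0]
```

With tau = 0 the unfolding exactly inverts the migration; increasing tau trades bias for stability, which matters once y carries statistical noise.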
Direct cone-beam cardiac reconstruction algorithm with cardiac banding artifact correction
International Nuclear Information System (INIS)
Taguchi, Katsuyuki; Chiang, Beshan S.; Hein, Ilmar A.
2006-01-01
Multislice helical computed tomography (CT) is a promising noninvasive technique for coronary artery imaging. Various factors can cause inconsistencies in cardiac CT data, which can result in degraded image quality. These inconsistencies may be the result of the patient physiology (e.g., heart rate variations), the nature of the data (e.g., cone-angle), or the reconstruction algorithm itself. An algorithm which provides the best temporal resolution for each slice, for example, often provides suboptimal image quality for the entire volume since the cardiac temporal resolution (TRc) changes from slice to slice. Such variations in TRc can generate strong banding artifacts in multi-planar reconstruction images or three-dimensional images. Discontinuous heart walls and coronary arteries may compromise the accuracy of the diagnosis. A β-blocker is often used to reduce and stabilize patients' heart rate but cannot eliminate the variation. In order to obtain robust and optimal image quality, a software solution that increases the temporal resolution and decreases the effect of heart rate is highly desirable. This paper proposes an ECG-correlated direct cone-beam reconstruction algorithm (TCOT-EGR) with cardiac banding artifact correction (CBC) and disconnected projections redundancy compensation technique (DIRECT). First the theory and analytical model of the cardiac temporal resolution is outlined. Next, the performance of the proposed algorithms is evaluated by using computer simulations as well as patient data. It will be shown that the proposed algorithms enhance the robustness of the image quality against inconsistencies by guaranteeing smooth transition of heart cycles used in reconstruction
Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing
International Nuclear Information System (INIS)
King, Stephen F.; Zhang, Jue; Zhou, Shun
2016-01-01
The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix is successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ₂₃ = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.
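For context, the seesaw structure underlying these predictions can be written compactly. In a two-right-handed-neutrino setup such as the LS model, the light neutrino mass matrix follows from the standard seesaw formula (the specific LS textures of the cited paper are not reproduced here):

```latex
m_\nu \simeq -\, m_D \, M_R^{-1} \, m_D^{T},
\qquad
M_R = \mathrm{diag}\left(M_{\mathrm{atm}},\, M_{\mathrm{sol}}\right),
```

with $m_D$ a $3\times 2$ Dirac mass matrix containing one texture zero and two independent mass parameters, so that $m_\nu$ has rank two and the lightest neutrino is massless, consistent with the strong normal mass hierarchy mentioned above.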
Li, Z. W.; Xu, Wenbin; Feng, G. C.; Hu, J.; Wang, C. C.; Ding, X. L.; Zhu, J. J.
2012-01-01
The propagation delay when radar signals travel from the troposphere has been one of the major limitations for the applications of high precision repeat-pass Interferometric Synthetic Aperture Radar (InSAR). In this paper, we first present an elevation-dependent atmospheric correction model for Advanced Synthetic Aperture Radar (ASAR—the instrument aboard the ENVISAT satellite) interferograms with Medium Resolution Imaging Spectrometer (MERIS) integrated water vapour (IWV) data. Then, using four ASAR interferometric pairs over Southern California as examples, we conduct the atmospheric correction experiments with cloud-free MERIS IWV data. The results show that after the correction the rms differences between InSAR and GPS have reduced by 69.6 per cent, 29 per cent, 31.8 per cent and 23.3 per cent, respectively for the four selected interferograms, with an average improvement of 38.4 per cent. Most importantly, after the correction, six distinct deformation areas have been identified, that is, Long Beach–Santa Ana Basin, Pomona–Ontario, San Bernardino and Elsinore basin, with the deformation velocities along the radar line-of-sight (LOS) direction ranging from −20 mm yr−1 to −30 mm yr−1 and on average around −25 mm yr−1, and Santa Fe Springs and Wilmington, with a slightly low deformation rate of about −10 mm yr−1 along LOS. Finally, through the method of stacking, we generate a mean deformation velocity map of Los Angeles over a period of 5 yr. The deformation is quite consistent with the historical deformation of the area. Thus, using the cloud-free MERIS IWV data correcting synchronized ASAR interferograms can significantly reduce the atmospheric effects in the interferograms and further better capture the ground deformation and other geophysical signals.
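The elevation-dependent part of the tropospheric delay can be illustrated with a minimal sketch (all numbers synthetic; the paper's actual model calibrates against MERIS integrated water vapour, which is not reproduced here): fit the interferometric delay as a function of pixel elevation over areas assumed free of deformation, then subtract the fitted trend.

```python
import numpy as np

# Hypothetical samples: unwrapped phase delay (cm) vs. pixel elevation (m)
# for pixels assumed free of deformation. A linear elevation trend is fitted
# and removed; the real model additionally uses MERIS water vapour data.
rng = np.random.default_rng(0)
elevation = rng.uniform(0.0, 1500.0, 500)
delay = 4.0 - 0.002 * elevation + rng.normal(0.0, 0.1, 500)  # synthetic

slope, intercept = np.polyfit(elevation, delay, 1)

def atmospheric_trend(h):
    """Elevation-correlated component of the delay, to be subtracted."""
    return slope * h + intercept

corrected = delay - atmospheric_trend(elevation)
```

After subtraction, the residual scatter reflects the noise floor rather than the elevation-correlated atmosphere, which is the spirit of the rms reductions reported above.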
Energy Technology Data Exchange (ETDEWEB)
Matthews, Patrick [Navarro-Intera, LLC (N-I), Las Vegas, NV (United States)
2013-11-01
This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 570: Area 9 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada. This complies with the requirements of the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the State of Nevada; U.S. Department of Energy (DOE), Environmental Management; U.S. Department of Defense; and DOE, Legacy Management. The purpose of the CADD/CR is to provide justification and documentation supporting the recommendation that no further corrective action is needed.
Evaluation of the global orbit correction algorithm for the APS real-time orbit feedback system
International Nuclear Information System (INIS)
Carwardine, J.; Evans, K. Jr.
1997-01-01
The APS real-time orbit feedback system uses 38 correctors per plane and has available up to 320 rf beam position monitors. Orbit correction is implemented using multiple digital signal processors. Singular value decomposition is used to generate a correction matrix from a linear response matrix model of the storage ring lattice. This paper evaluates the performance of the APS system in terms of its ability to correct localized and distributed sources of orbit motion. The impact of regulator gain and bandwidth, choice of beam position monitors, and corrector dynamics are discussed. The weighted least-squares algorithm is reviewed in the context of local feedback.
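The SVD-based construction described above can be sketched as follows. This is a generic illustration, not the APS implementation; only the matrix sizes follow the numbers quoted in the abstract, and the response matrix here is random rather than a lattice model.

```python
import numpy as np

def correction_matrix(R, n_modes):
    """Truncated-SVD pseudo-inverse of a corrector-to-BPM response matrix R
    (shape: n_bpm x n_corr). Discarding small singular values regularizes
    the inversion against poorly observable orbit modes."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:n_modes] = 1.0 / s[:n_modes]
    return Vt.T @ np.diag(s_inv) @ U.T  # shape: n_corr x n_bpm

# Toy example: least-squares corrector kicks that reduce an observed orbit error
rng = np.random.default_rng(0)
R = rng.normal(size=(320, 38))     # 320 BPMs, 38 correctors per plane
orbit_error = rng.normal(size=320)
M = correction_matrix(R, n_modes=38)
kicks = -M @ orbit_error           # weighted least-squares correction
residual = orbit_error + R @ kicks
```

Reducing `n_modes` trades correction strength for robustness, which is the usual knob when the response matrix is ill-conditioned.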
National Research Council Canada - National Science Library
Gruninger, John; Fox, Marsha; Lee, Jamine; Ratkowski, Anthony J; Hoke, Michael L
2006-01-01
The atmospheric correction of thermal infrared (TIR) imagery involves the combined tasks of separation of atmospheric transmittance, downwelling flux and upwelling radiance from the surface material spectral emissivity and temperature...
International Nuclear Information System (INIS)
Bose, Supratik; Shukla, Himanshu; Maltz, Jonathan
2010-01-01
Purpose: In current image guided pretreatment patient position adjustment methods, image registration is used to determine alignment parameters. Since most positioning hardware lacks the full six degrees of freedom (DOF), accuracy is compromised. The authors show that such compromises are often unnecessary when one models the planned treatment beams as part of the adjustment calculation process. The authors present a flexible algorithm for determining optimal realizable adjustments for both step-and-shoot and arc delivery methods. Methods: The beam shape model is based on the polygonal intersection of each beam segment with the plane in pretreatment image volume that passes through machine isocenter perpendicular to the central axis of the beam. Under a virtual six-DOF correction, ideal positions of these polygon vertices are computed. The proposed method determines the couch, gantry, and collimator adjustments that minimize the total mismatch of all vertices over all segments with respect to their ideal positions. Using this geometric error metric as a function of the number of available DOF, the user may select the most desirable correction regime. Results: For a simulated treatment plan consisting of three equally weighted coplanar fixed beams, the authors achieve a 7% residual geometric error (with respect to the ideal correction, considered 0% error) by applying gantry rotation as well as translation and isocentric rotation of the couch. For a clinical head-and-neck intensity modulated radiotherapy plan with seven beams and five segments per beam, the corresponding error is 6%. Correction involving only couch translation (typical clinical practice) leads to a much larger 18% mismatch. Clinically significant consequences of more accurate adjustment are apparent in the dose volume histograms of target and critical structures. Conclusions: The algorithm achieves improvements in delivery accuracy using standard delivery hardware without significantly increasing
Energy Technology Data Exchange (ETDEWEB)
Kim, Ye-Seul; Park, Hye-Suk; Kim, Hee-Joung [Yonsei University, Wonju (Korea, Republic of); Choi, Young-Wook; Choi, Jae-Gu [Korea Electrotechnology Research Institute, Ansan (Korea, Republic of)
2014-12-15
Digital breast tomosynthesis (DBT) is a technique that was developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, the x-ray scatter reduction technique remains a significant challenge due to projection geometry and radiation dose limitations. The most common approach to scatter reduction is a beam-stop-array (BSA) algorithm; however, this method raises concerns regarding the additional exposure involved in acquiring the scatter distribution. The compressed breast is roughly symmetric, and the scatter profiles from projections acquired at axially opposite angles are similar to mirror images. The purpose of this study was to apply the BSA algorithm using only two beam-stop-array scans, estimating the scatter distribution with minimal additional exposure. The results of the scatter correction with angular interpolation were comparable to those of the scatter correction with all scatter distributions at each angle. The exposure increase was less than 13%. This study demonstrated the influence of the scatter correction obtained by using the BSA algorithm with minimum exposure, which indicates its potential for practical applications.
International Nuclear Information System (INIS)
Cao, Ye; Tang, Xiao-Bin; Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng; Chen, Da
2015-01-01
The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulation spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr₃) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by taking the ratio of the net peak areas between the two detectors, using the detection spectrum of the HPGe detector as the accuracy reference for the LaBr₃ spectrum. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R² = 0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • An airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.
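The linear relation reported above (coefficient vs. energy, R² = 0.9765) is an ordinary least-squares fit; a minimal sketch with hypothetical (energy, coefficient) pairs standing in for the double-detector peak-area ratios:

```python
import numpy as np

# Hypothetical data; the real values come from HPGe/LaBr3 net peak areas.
energy = np.array([100.0, 300.0, 662.0, 1173.0, 1332.0])  # keV, illustrative
coeff = np.array([1.10, 1.32, 1.71, 2.25, 2.40])          # correction coeffs

slope, intercept = np.polyfit(energy, coeff, 1)  # linear fit
pred = slope * energy + intercept

# Coefficient of determination, the R² figure quoted in such fits
ss_res = np.sum((coeff - pred) ** 2)
ss_tot = np.sum((coeff - coeff.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```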
Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng
2018-03-05
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors.
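The alternating-minimization idea can be shown on a drastically simplified scalar analogue (my own toy model, not the paper's matrix formulation): measurements y_i ≈ c_i · a_i · q, where a_i is the transport model's predicted concentration per unit release, q is the release rate, and per-measurement coefficients c_i absorb model bias, regularized toward 1 to break the scale ambiguity.

```python
import numpy as np

def estimate_release(y, a, lam=1.0, iters=50):
    """Alternately solve for the scalar release rate q (fixing c) and the
    bias coefficients c (fixing q, ridge-regularized toward 1)."""
    c = np.ones_like(y)
    q = 1.0
    for _ in range(iters):
        q = np.sum(c * a * y) / np.sum((c * a) ** 2)      # least squares in q
        c = (y * a * q + lam) / ((a * q) ** 2 + lam)      # closed form in c_i
    return q, c

# Synthetic check: true release rate 5.0 with mild multiplicative model bias
rng = np.random.default_rng(2)
a = rng.uniform(0.5, 2.0, 30)          # model-predicted unit concentrations
c_true = rng.normal(1.0, 0.1, 30)      # hidden per-measurement biases
y = c_true * a * 5.0                   # observed concentrations
q_hat, c_hat = estimate_release(y, a, lam=10.0)
```

Each half-step solves a convex subproblem exactly, so the joint objective is non-increasing across iterations, which is the usual convergence argument for alternating minimization.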
Energy Technology Data Exchange (ETDEWEB)
Filli, Lukas; Finkenstaedt, Tim; Andreisek, Gustav; Guggenberger, Roman [University Hospital of Zurich, Department of Diagnostic and Interventional Radiology, Zurich (Switzerland); Marcon, Magda [University Hospital of Zurich, Department of Diagnostic and Interventional Radiology, Zurich (Switzerland); University of Udine, Institute of Diagnostic Radiology, Department of Medical and Biological Sciences, Udine (Italy); Scholz, Bernhard [Imaging and Therapy Division, Siemens AG, Healthcare Sector, Forchheim (Germany); Calcagni, Maurizio [University Hospital of Zurich, Division of Plastic Surgery and Hand Surgery, Zurich (Switzerland)
2014-12-15
The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging.
International Nuclear Information System (INIS)
Matthews, Patrick; Peterson, Dawn
2011-01-01
Corrective Action Unit 106 comprises four corrective action sites (CASs): (1) 05-20-02, Evaporation Pond; (2) 05-23-05, Atmospheric Test Site - Able; (3) 05-45-04, 306 GZ Rad Contaminated Area; (4) 05-45-05, 307 GZ Rad Contaminated Area. The purpose of this CADD/CR is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 106 based on the implementation of corrective actions. The corrective action of clean closure was implemented at CASs 05-45-04 and 05-45-05, while no corrective action was necessary at CASs 05-20-02 and 05-23-05. Corrective action investigation (CAI) activities were performed from October 20, 2010, through June 1, 2011, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 106: Areas 5, 11 Frenchman Flat Atmospheric Sites. The approach for the CAI was divided into two facets: investigation of the primary release of radionuclides, and investigation of other releases (mechanical displacement and chemical releases). The purpose of the CAI was to fulfill data needs as defined during the data quality objective (DQO) process. The CAU 106 dataset of investigation results was evaluated based on a data quality assessment. This assessment demonstrated the dataset is complete and acceptable for use in fulfilling the DQO data needs. Investigation results were evaluated against final action levels (FALs) established in this document. A radiological dose FAL of 25 millirem per year was established based on the Industrial Area exposure scenario (2,250 hours of annual exposure). The only radiological dose exceeding the FAL was at CAS 05-45-05 and was associated with potential source material (PSM). It is also assumed that additional PSM in the form of depleted uranium (DU) and DU-contaminated debris at CASs 05-45-04 and 05-45-05 exceed the FAL. Therefore, corrective actions were undertaken at these CASs that consisted of removing PSM and collecting verification
Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong
2017-10-01
Focusing and imaging through scattering media has been proved possible with high resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM) and thereby the focusing quality can be improved. The correction phase is often found by global searching algorithms, among which Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially among each group of segments. The final correction phase mask is formed by applying correction phases of all interleaved groups together on the SLM. The ISC method has proved particularly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality is improved as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential in applications, such as high-resolution imaging in deep tissue.
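The interleaved grouping can be sketched in a few lines. GA itself is not reproduced here; a simple per-segment exhaustive search stands in for the optimizer, and the focus metric is a toy phase-cancellation model, purely to illustrate how groups partition the SLM segments and are optimized sequentially.

```python
import numpy as np

def interleaved_groups(n_segments, n_groups):
    """Segment i goes to group i % n_groups, so each group spans the aperture."""
    return [list(range(g, n_segments, n_groups)) for g in range(n_groups)]

rng = np.random.default_rng(1)
n_seg = 32
true_phase = rng.uniform(0, 2 * np.pi, n_seg)  # scattering to be cancelled
phi = np.zeros(n_seg)                          # correction mask on the SLM

def focus_intensity(phi):
    # Toy metric: fields add in phase when phi cancels the scattering phase.
    return abs(np.exp(1j * (phi - true_phase)).sum()) ** 2

candidates = np.linspace(0, 2 * np.pi, 16, endpoint=False)
for group in interleaved_groups(n_seg, 4):     # optimize group by group
    for i in group:
        scores = [focus_intensity(np.where(np.arange(n_seg) == i, c, phi))
                  for c in candidates]
        phi[i] = candidates[int(np.argmax(scores))]
```

Because each group samples the whole aperture, every sequential pass works against the full scattering field rather than one contiguous patch, which is the stated advantage of interleaving.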
International Nuclear Information System (INIS)
Barry, J.M.; Pollard, J.P.
1986-11-01
A FORTRAN subroutine MLTGRD is provided to efficiently solve the large systems of linear equations arising from a five-point finite difference discretisation of some elliptic partial differential equations. MLTGRD is a multigrid algorithm which provides multiplicative correction to iterative solution estimates from successively reduced systems of linear equations. It uses the method of implicit non-stationary iteration for all grid levels.
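The coarse-grid correction at the heart of any multigrid method can be illustrated on a 1-D model problem. Note the hedge: MLTGRD uses a multiplicative correction with implicit non-stationary iteration; the sketch below shows the more common additive two-grid cycle for -u'' = f with weighted-Jacobi smoothing, purely to illustrate the smooth/restrict/coarse-solve/prolong/correct structure.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3):
    """Weighted-Jacobi (omega = 2/3) smoothing for -u'' = f, Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] = (2 / 3) * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) + u[1:-1] / 3
    return u

def two_grid(u, f, h):
    """One cycle: pre-smooth, restrict residual (full weighting), solve the
    coarse error equation directly, prolong linearly, correct, post-smooth."""
    u = jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    n_c = u.size // 2 + 1
    rc = np.zeros(n_c)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    H = 2 * h  # coarse-grid spacing
    A = (np.diag(2 * np.ones(n_c - 2)) - np.diag(np.ones(n_c - 3), 1)
         - np.diag(np.ones(n_c - 3), -1)) / (H * H)
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)
    return jacobi(u + e, f, h)

# Model problem: -u'' = pi^2 sin(pi x), exact solution u = sin(pi x)
n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
```

In a full multigrid code the coarse system is itself solved recursively rather than directly, giving the "successively reduced systems" the abstract describes.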
Directory of Open Access Journals (Sweden)
Ray Debraj
2015-01-01
Speeded Up Robust Features (SURF) is used to position a robot with respect to an environment and aid in vision-based robotic navigation. During navigation, irregularities in the terrain, especially in an outdoor environment, may deviate a robot from the track. Another reason for deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations to bring the robot back to the track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of a robot from the trajectory and subsequent restoration through corrective operations. This algorithm is executed in parallel to positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.
High-resolution studies of the structure of the solar atmosphere using a new imaging algorithm
Karovska, Margarita; Habbal, Shadia Rifai
1991-01-01
The results of the application of a new image restoration algorithm developed by Ayers and Dainty (1988) to the multiwavelength EUV/Skylab observations of the solar atmosphere are presented. The application of the algorithm makes it possible to reach a resolution better than 5 arcsec, and thus study the structure of the quiet sun on that spatial scale. The results show evidence for discrete looplike structures in the network boundary, 5-10 arcsec in size, at temperatures of 100,000 K.
Analysis of Hyperspectral Imagery for Oil Spill Detection Using SAM Unmixing Algorithm Techniques
Directory of Open Access Journals (Sweden)
Ahmad Keshavarz
2017-04-01
Oil spill is one of the major marine environmental challenges. The main impacts of this phenomenon are preventing light transmission into deep water and oxygen absorption, which can disturb the photosynthesis process of water plants. In this research, we utilize SpecTIR airborne sensor data to extract and classify oil spills for the Gulf of Mexico Deepwater Horizon (DWH) event of 2010. For this purpose, atmospheric correction is first performed using the FLAASH algorithm. Then, of the 360 total spectral bands, bands 183 to 198 and 255 to 279 were excluded after applying the atmospheric correction algorithm due to low signal-to-noise ratio (SNR). After that, bands 1 to 119 were eliminated for their irrelevance to extracting oil spill spectral endmembers. In the next step, using the MATLAB hyperspectral toolbox, six spectral endmembers corresponding to ratios of oil to water were extracted. Finally, using the extracted endmembers and the SAM classification algorithm, the image was classified into six classes: 100% oil; 80% oil and 20% water; 60% oil and 40% water; 40% oil and 60% water; 20% oil and 80% water; and 100% water.
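The SAM classifier used above is a simple rule: assign each pixel to the endmember with the smallest spectral angle. A self-contained sketch with synthetic spectra (the endmember values are illustrative, not SpecTIR data):

```python
import numpy as np

def spectral_angle(pixel, endmember):
    """Angle (radians) between a pixel spectrum and an endmember spectrum."""
    cos = np.dot(pixel, endmember) / (
        np.linalg.norm(pixel) * np.linalg.norm(endmember))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(cube, endmembers):
    """Assign each pixel the index of the endmember with the smallest angle.

    cube: (n_pixels, n_bands); endmembers: (n_classes, n_bands)."""
    angles = np.array([[spectral_angle(p, e) for e in endmembers]
                       for p in cube])
    return angles.argmin(axis=1)

# Toy example: two synthetic endmembers ("oil" vs. "water") and a mixed pixel
oil = np.array([0.8, 0.6, 0.4, 0.2])
water = np.array([0.1, 0.2, 0.4, 0.8])
cube = np.array([oil, water, 0.9 * oil + 0.1 * water])
labels = sam_classify(cube, np.array([oil, water]))
# → array([0, 1, 0])
```

Because the angle ignores spectral magnitude, SAM is comparatively insensitive to illumination differences, which is why it pairs well with fractional oil/water endmembers.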
Atmospheric Pressure Corrections in Geodesy and Oceanography: a Strategy for Handling Air Tides
Ponte, Rui M.; Ray, Richard D.
2003-01-01
Global pressure data are often needed for processing or interpreting modern geodetic and oceanographic measurements. The most common source of these data is the analysis or reanalysis products of various meteorological centers. Tidal signals in these products can be problematic for several reasons, including potentially aliased sampling of the semidiurnal solar tide as well as the presence of various modeling or timing errors. Building on the work of Van den Dool and colleagues, we lay out a strategy for handling atmospheric tides in (re)analysis data. The procedure also offers a method to account for ocean loading corrections in satellite altimeter data that are consistent with standard ocean-tide corrections. The proposed strategy has immediate application to the on-going Jason-1 and GRACE satellite missions.
Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R Todd; Papademetris, Xenophon
2013-07-01
Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project ( www.bioimagesuite.org ). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non real-time) offline methods and outperformed other real-time methods based on zero order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences.
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-02-26
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
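At heart, such a correction algorithm is a calibration mapping device readings onto chemical reference values; a minimal linear version is sketched below (all numbers hypothetical, standing in for paired Near-IR and reference measurements):

```python
import numpy as np

# Hypothetical paired readings: Near-IR device vs. chemical reference (g/dL)
device = np.array([3.1, 4.0, 4.8, 5.5, 6.2])
reference = np.array([2.8, 3.7, 4.6, 5.4, 6.1])

# Least-squares linear calibration: reference ≈ slope * device + intercept
slope, intercept = np.polyfit(device, reference, 1)

def correct(reading):
    """Apply the linear correction to a raw device reading."""
    return slope * reading + intercept
```

In practice the calibration would be validated on held-out samples (and, as in the study, re-checked after pasteurization) before being applied routinely.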
Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri
1992-01-01
The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results are estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.
Biogeosystem Technique as a method to correct the climate
Kalinitchenko, Valery; Batukaev, Abdulmalik; Batukaev, Magomed; Minkina, Tatiana
2017-04-01
Climate change and the uncertainties of the biosphere are on the agenda. Correcting the climate drivers will make the climate and biosphere more predictable and certain. Direct sequestration of fossil industrial hydrocarbons and of the natural methane excess to reduce the greenhouse effect is a dangerous mistake. Most carbon now exists in the form of geological deposits, and further reduction of the carbon content of the biosphere and atmosphere leads to degradation of life. We propose biological management of the greenhouse gases, changing the ratio of the biological and atmospheric phases of the carbon and water cycles. Biological correction of the carbon cycle is the obvious measure because biological alterations of the Earth's climate have always been an important peculiarity of the planet's history. At the first stage of the Earth's climate correction algorithm we use a few leading principles, as follows: the greater the amount of greenhouse gases in the atmosphere, the higher the greenhouse effect; the higher the biological production of terrestrial ecosystems, the higher the biological sequestration of carbon dioxide from the atmosphere; the more fresh ionized active oxygen is produced biologically, the higher the rate of methane and hydrogen sulfide oxidation in the atmosphere, water, and soil; the more carbon is held as live biological matter in soil and above-ground biomass, the less carbon is in the atmosphere; the smaller the sink of carbon to water systems, the lower the emission of greenhouse gases from water systems; the lower the rate of water consumption per unit of biological production, the lower the transpiration of water vapor as a greenhouse gas; the more mortal biomass and biological and mineral wastes are utilized intra-soil as plant nutrition instead of being mineralized to greenhouse gases, the lower the greenhouse effect; the more fossil industrial hydrocarbons are used, the higher the Earth's biomass can be; the higher the biomass on the Earth, the more ecologically safe food, raw material and biofuel
International Nuclear Information System (INIS)
Jones, Andrew Osler
2004-01-01
There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities, and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict its magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs farther downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity, all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation on the lung. The dose within the lung is a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their ability to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the
De Kesel, Pieter M M; Capiau, Sara; Stove, Veronique V; Lambert, Willy E; Stove, Christophe P
2014-10-01
Although dried blood spot (DBS) sampling is increasingly receiving interest as a potential alternative to traditional blood sampling, the impact of hematocrit (Hct) on DBS results is limiting its final breakthrough in routine bioanalysis. To predict the Hct of a given DBS, potassium (K(+)) proved to be a reliable marker. The aim of this study was to evaluate whether application of an algorithm, based upon predicted Hct or K(+) concentrations as such, allowed correction for the Hct bias. Using validated LC-MS/MS methods, caffeine, chosen as a model compound, was determined in whole blood and corresponding DBS samples with a broad Hct range (0.18-0.47). A reference subset (n = 50) was used to generate an algorithm based on K(+) concentrations in DBS. Application of the developed algorithm on an independent test set (n = 50) alleviated the assay bias, especially at lower Hct values. Before correction, differences between DBS and whole blood concentrations ranged from -29.1 to 21.1%. The mean difference, as obtained by Bland-Altman comparison, was -6.6% (95% confidence interval (CI), -9.7 to -3.4%). After application of the algorithm, differences between corrected and whole blood concentrations lay between -19.9 and 13.9% with a mean difference of -2.1% (95% CI, -4.5 to 0.3%). The same algorithm was applied to a separate compound, paraxanthine, which was determined in 103 samples (Hct range, 0.17-0.47), yielding similar results. In conclusion, a K(+)-based algorithm allows correction for the Hct bias in the quantitative analysis of caffeine and its metabolite paraxanthine.
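The Bland-Altman figures quoted above (a mean percent difference with its 95% CI) can be reproduced for any paired data set; a minimal sketch with hypothetical paired caffeine concentrations (the K(+)-based correction itself is not shown):

```python
import numpy as np

def bland_altman_percent(dbs, whole_blood):
    """Percent differences of DBS vs whole-blood concentrations, returning
    the mean difference and its 95% confidence interval."""
    dbs = np.asarray(dbs, dtype=float)
    wb = np.asarray(whole_blood, dtype=float)
    pct = 100.0 * (dbs - wb) / wb
    mean = float(pct.mean())
    sem = float(pct.std(ddof=1) / np.sqrt(pct.size))
    return mean, (mean - 1.96 * sem, mean + 1.96 * sem)

# Hypothetical paired caffeine concentrations (mg/L), DBS vs whole blood
mean_diff, ci = bland_altman_percent([4.1, 3.4, 5.2, 2.9], [4.4, 3.6, 5.0, 3.2])
```

In the study, this mean difference shrank from -6.6% to -2.1% after the K(+)-based correction was applied.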
International Nuclear Information System (INIS)
Ogino, Takashi; Egawa, Sunao
1991-01-01
New algorithms of CT value correction for reconstructing a radiotherapy simulation image from axial CT images were developed. One, designated the plane weighting method, corrects the CT value in proportion to the position of the beam element passing through the voxel. The other, designated the solid weighting method, corrects the CT value in proportion to the length of the beam element passing through the voxel and the volume of the voxel. Phantom experiments showed fair spatial resolution in the transverse direction. In the longitudinal direction, however, spatial resolution below the slice thickness could not be obtained. Contrast resolution was equivalent for both methods. In patient studies, the reconstructed radiotherapy simulation image was similar in perceived density resolution to a simulation film taken by an X-ray simulator. (author)
Korkin, S.; Lyapustin, A.
2012-12-01
The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimizing a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves the optical parameters of a thin (single scattering) plane-parallel atmosphere irradiated by a collimated, infinitely wide monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and to the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]; in our case it yields analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines of the two fractions, the ratio of the coarse and fine fractions, the atmospheric optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request. [1]. Levenberg K, A method for the solution of certain non-linear problems in least squares, Quarterly of Applied Mathematics, 1944, V.2, P.164-168. [2]. Marquardt D, An algorithm for least-squares estimation of nonlinear parameters, Journal of the Society for Industrial and Applied Mathematics, 1963, V.11, N.2, P.431-441. [3]. Hovenier JW, Multiple scattering of polarized light in planetary atmospheres. Astronomy and Astrophysics, 1971, V.13, P.7 - 29. [4]. Mishchenko MI, Travis LD
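As a sketch of two ingredients named above, the following implements the Henyey-Greenstein phase function and a single damped Gauss-Newton (Levenberg-Marquardt) update; a numerical Jacobian is used here instead of the analytical one from the work, and the toy fit recovers only one parameter rather than the five-parameter retrieval:

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function for asymmetry parameter g."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

def lm_step(jacobian, residual, lam):
    """One damped Gauss-Newton (Levenberg-Marquardt) update:
    solve (J^T J + lam*I) delta = J^T r for delta."""
    jtj = jacobian.T @ jacobian
    jtr = jacobian.T @ residual
    return np.linalg.solve(jtj + lam * np.eye(jtj.shape[0]), jtr)

# Toy one-parameter fit: recover g = 0.6 from noiseless phase-function samples.
mu = np.linspace(-0.9, 0.9, 19)
target = henyey_greenstein(mu, 0.6)
g, eps = 0.4, 1e-6
for _ in range(30):
    r = target - henyey_greenstein(mu, g)
    # forward-difference Jacobian for the single parameter g
    jac = ((henyey_greenstein(mu, g + eps) - henyey_greenstein(mu, g)) / eps)[:, None]
    g += lm_step(jac, r, lam=1e-3)[0]
```

In the actual retrieval, the 5x5 normal-equation system would be solved with Cramer's rule rather than a general linear solver.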
InSAR atmospheric correction using Himawari-8 Geostationary Meteorological Satellite
Kinoshita, Y.; Nimura, T.; Furuta, R.
2017-12-01
The atmospheric delay effect is one of the limitations on accurate surface displacement detection by Synthetic Aperture Radar Interferometry (InSAR). Many previous studies have attempted to mitigate the neutral atmospheric delay in InSAR (e.g., Jolivet et al. 2014; Foster et al. 2006; Kinoshita et al. 2013). Hanssen et al. (2001) investigated the relationship between 27 hourly observations of GNSS precipitable water vapor (PWV) and the infrared brightness temperature derived from satellite imagery, and showed a good correlation. Here we show a preliminary result of a newly developed method for neutral atmospheric delay correction using data from the Himawari-8 Japanese geostationary meteorological satellite. Himawari-8 is the Japanese state-of-the-art geostationary meteorological satellite, with 16 observation channels, spatial resolutions of 0.5 km (visible) and 2.0 km (near-infrared and infrared), and a time interval of 2.5 minutes around Japan. To estimate the relationship between satellite brightness temperature and atmospheric delay, and since the InSAR atmospheric delay is principally the same as that in GNSS, we first compared the Himawari-8 data with zenith tropospheric delay data derived from the dense Japanese GNSS network. The comparison showed that the band with a wavelength of 6.9 μm had the highest correlation with the GNSS observations. Based on this result, we developed an InSAR atmospheric delay model that uses the Himawari-8 6.9 μm band data. For model validation, we generated InSAR images from ESA's C-band Sentinel-1 SLC data with the GAMMA SAR software. We selected two regions around Tokyo and Sapporo (both in Japan) as test sites because of their low temporal decorrelation. The validation result showed that the delay model reasonably estimates large-scale phase variations whose spatial scale is on the order of over 20 km. On the other hand, phase variations of
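A minimal sketch of the comparison step, assuming a simple linear model between brightness temperature and zenith tropospheric delay; the paper's actual model form and coefficients are not given in the abstract, and all numbers below are made up:

```python
import numpy as np

def fit_delay_model(brightness_temp, gnss_ztd):
    """Least-squares linear fit ZTD ~ a*Tb + b between 6.9-um band
    brightness temperature and GNSS zenith tropospheric delay."""
    a, b = np.polyfit(brightness_temp, gnss_ztd, 1)
    return a, b

def predict_delay(tb, a, b):
    """Map a brightness-temperature field to a delay estimate."""
    return a * np.asarray(tb) + b

# Made-up numbers: brightness temperatures (K) and zenith delays (m)
tb = np.array([230.0, 235.0, 240.0, 245.0, 250.0])
ztd = 2.30 + 0.004 * (tb - 240.0)
a, b = fit_delay_model(tb, ztd)
```

Applied per pixel, the predicted delay map would then be differenced between the two acquisition dates and subtracted from the interferogram phase.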
Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi
2016-06-01
A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra given the variability in both environmental mixture composition and PTFE baselines remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of PTFE background, the goal of smoothing splines interpolation is to learn the baseline structure in the background region to predict the baseline structure in the analyte region. We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification
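The core idea, fitting a smoothing spline to the background (analyte-free) subregions only and predicting the baseline under the analyte bands, can be sketched as follows; this uses a toy PTFE-like drift and a single synthetic peak, and omits the study's performance-metric-driven smoothing parameter selection:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def baseline_correct(wavenumber, absorbance, background_mask, s=1e-6):
    """Fit a smoothing spline to analyte-free subregions only, then
    subtract the predicted baseline over the whole spectrum."""
    spline = UnivariateSpline(wavenumber[background_mask],
                              absorbance[background_mask], s=s)
    return absorbance - spline(wavenumber)

# Toy spectrum: a smooth PTFE-like drift plus one synthetic analyte band.
x = np.linspace(1000.0, 4000.0, 600)                  # wavenumber (cm^-1)
drift = 1e-4 * np.sqrt(x - 1000.0 + 1.0)              # slowly varying baseline
peak = 0.05 * np.exp(-(((x - 2900.0) / 40.0) ** 2))   # C-H stretch-like band
spectrum = drift + peak
background = np.abs(x - 2900.0) > 200.0               # analyte-free subregions
corrected = baseline_correct(x, spectrum, background)
```

The spline learns the baseline structure from the background region and interpolates it across the analyte region, leaving the peak intact while flattening the drift.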
Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver
2012-01-01
Large aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a variant of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process on the advanced wavefront sensing and correction testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally, fiducial calibration using the Range-Gate-Metrology technique is carried out, and an algorithm accuracy of <10 nm (<1%) is demonstrated.
International Nuclear Information System (INIS)
Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan
2014-01-01
The maximum likelihood attenuation correction factors (MLACF) algorithm has been developed to calculate the maximum-likelihood estimate of the activity image and the attenuation sinogram in time-of-flight (TOF) positron emission tomography, using only emission data without prior information on the attenuation. We consider the case of a Poisson model of the data, in the absence of scatter or random background. In this case the maximization with respect to the attenuation factors can be achieved in a closed form and the MLACF algorithm works by updating the activity. Despite promising numerical results, the convergence of this algorithm has not been analysed. In this paper we derive the algorithm and demonstrate that the MLACF algorithm monotonically increases the likelihood, is asymptotically regular, and that the limit points of the iteration are stationary points of the likelihood. Because the problem is not convex, however, the limit points might be saddle points or local maxima. To obtain some empirical insight into the latter question, we present data obtained by applying MLACF to 2D simulated TOF data, using a large number of iterations and different initializations. (paper)
Improved Global Ocean Color Using Polymer Algorithm
Steinmetz, Francois; Ramon, Didier; Deschamps, Pierre-Yves; Stum, Jacques
2010-12-01
A global ocean color product has been developed based on the use of the POLYMER algorithm to correct atmospheric scattering and sun glint and to process the data to a Level 2 ocean color product. Thanks to the use of this algorithm, the coverage and accuracy of the MERIS ocean color product have been significantly improved compared to the standard product, thereby increasing its usefulness for global ocean monitoring applications like GLOBCOLOUR. We will present the latest developments of the algorithm, its first application to MODIS data, and its validation against in situ data from the MERMAID database. Examples will be shown of global NRT chlorophyll maps produced by CLS with POLYMER for operational applications such as fishing or the oil and gas industry, as well as its use by Scripps for a NASA study of the Beaufort and Chukchi seas.
A Deep Machine Learning Algorithm to Optimize the Forecast of Atmospherics
Russell, A. M.; Alliss, R. J.; Felton, B. D.
Space-based applications from imaging to optical communications are significantly impacted by the atmosphere. Specifically, the occurrence of clouds and optical turbulence can determine whether a mission is a success or a failure. In the case of space-based imaging applications, clouds produce atmospheric transmission losses that can make it impossible for an electro-optical platform to image its target. Hence, accurate predictions of negative atmospheric effects are a high priority in order to facilitate the efficient scheduling of resources. This study seeks to revolutionize our understanding of and our ability to predict such atmospheric events through the mining of data from a high-resolution Numerical Weather Prediction (NWP) model. Specifically, output from the Weather Research and Forecasting (WRF) model is mined using a Random Forest (RF) ensemble classification and regression approach in order to improve the prediction of low cloud cover over the Haleakala summit of the Hawaiian island of Maui. RF techniques have a number of advantages including the ability to capture non-linear associations between the predictors (in this case physical variables from WRF such as temperature, relative humidity, wind speed and pressure) and the predictand (clouds), which becomes critical when dealing with the complex non-linear occurrence of clouds. In addition, RF techniques are capable of representing complex spatial-temporal dynamics to some extent. Input predictors to the WRF-based RF model are strategically selected based on expert knowledge and a series of sensitivity tests. Ultimately, three types of WRF predictors are chosen: local surface predictors, regional 3D moisture predictors and regional inversion predictors. A suite of RF experiments is performed using these predictors in order to evaluate the performance of the hybrid RF-WRF technique. The RF model is trained and tuned on approximately half of the input dataset and evaluated on the other half. The RF
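A minimal sketch of the RF classification step, with synthetic stand-ins for the WRF predictors and a toy labeling rule (not the study's data, predictors, or tuning):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins for WRF output at the summit (not real model data)
rel_hum = rng.uniform(0.0, 100.0, n)     # relative humidity (%)
temp = rng.uniform(270.0, 300.0, n)      # temperature (K)
wind = rng.uniform(0.0, 20.0, n)         # wind speed (m/s)
pressure = rng.uniform(680.0, 720.0, n)  # pressure (hPa)
# Toy labeling rule: low cloud when the column is humid and relatively cool
cloud = ((rel_hum > 70.0) & (temp < 290.0)).astype(int)

features = np.column_stack([rel_hum, temp, wind, pressure])
# Train on roughly half the data and evaluate on the other half,
# mirroring the train/evaluate split described above.
x_train, x_test, y_train, y_test = train_test_split(
    features, cloud, test_size=0.5, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(x_train, y_train)
accuracy = model.score(x_test, y_test)
```

The ensemble of decision trees captures the non-linear threshold behavior of cloud occurrence without any explicit feature engineering, which is the advantage the abstract highlights.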
Hu, Chuanmin; Lee, Zhongping; Franz, Bryan
2011-01-01
A new empirical algorithm is proposed to estimate surface chlorophyll-a concentrations (Chl) in the global ocean for Chl less than or equal to 0.25 milligrams per cubic meter (approximately 77% of the global ocean area). The algorithm is based on a color index (CI), defined as the difference between the remote sensing reflectance (Rrs, in sr^-1) in the green and a reference formed linearly between Rrs in the blue and red. For low-Chl waters, in situ data showed a tighter (and therefore better) relationship between CI and Chl than between traditional band ratios and Chl, which was further validated using global data collected concurrently by ship-borne and SeaWiFS satellite instruments. Model simulations showed that for low-Chl waters, compared with the band-ratio algorithm, the CI-based algorithm (CIA) was more tolerant to changes in the chlorophyll-specific backscattering coefficient, and performed similarly for different relative contributions of non-phytoplankton absorption. Simulations using existing atmospheric correction approaches further demonstrated that the CIA was much less sensitive than band-ratio algorithms to various errors induced by instrument noise and imperfect atmospheric correction (including sun glint and whitecap corrections). Image and time-series analyses of SeaWiFS and MODIS/Aqua data also showed improved performance in terms of reduced image noise, more coherent spatial and temporal patterns, and consistency between the two sensors. The reduction in noise and other errors is particularly useful for improving the detection of various ocean features such as eddies. Preliminary tests over MERIS and CZCS data indicate that the new approach should be generally applicable to all existing and future ocean color instruments.
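The CI definition above can be written down directly. The band centers below follow SeaWiFS conventions (443, 555, 670 nm) and are an assumption, since the abstract does not list them:

```python
def color_index(rrs_blue, rrs_green, rrs_red,
                lam_blue=443.0, lam_green=555.0, lam_red=670.0):
    """CI: green Rrs minus a reference drawn linearly between the blue
    and red Rrs, evaluated at the green wavelength (band centers in nm
    are assumed, not taken from the abstract)."""
    frac = (lam_green - lam_blue) / (lam_red - lam_blue)
    reference = rrs_blue + frac * (rrs_red - rrs_blue)
    return rrs_green - reference

# Typical clear-water spectrum is blue-dominated, so CI comes out negative.
ci = color_index(0.010, 0.004, 0.0005)
```

Because the CI is a baseline-subtracted difference rather than a ratio, spectrally smooth errors from imperfect atmospheric correction largely cancel, which is the robustness the abstract reports.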
Directory of Open Access Journals (Sweden)
Z. X. Cao
2014-06-01
Retrieving the complex-valued effective permittivity and permeability of electromagnetic metamaterials (EMMs) based on resonant effects from scattering parameters unavoidably involves a complex logarithmic function. When complex values are expressed in terms of magnitude and phase, an infinite number of phase angles is permissible due to the multi-valued property of complex logarithmic functions. Special attention needs to be paid to ensure continuity of the effective permittivity and permeability of lossy metamaterials as frequency sweeps. In this paper, an automated phase correction (APC) algorithm is proposed to properly trace and compensate phase angles of the complex logarithmic function, which may experience abrupt phase jumps near the resonant frequency region of the concerned EMMs; the continuity of the effective optical properties of lossy metamaterials is thereby ensured. The algorithm is then verified by extracting effective optical properties from the simulated scattering parameters of four different types of metamaterial media: a cut-wire cell array, a split ring resonator (SRR) cell array, an electric-LC (E-LC) resonator cell array, and a combined SRR and wire cell array. The results demonstrate that the proposed algorithm is highly accurate and effective.
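The essence of keeping the complex logarithm continuous across a frequency sweep is phase unwrapping; a minimal sketch (the APC algorithm's tracing near resonance is more involved than a plain unwrap):

```python
import numpy as np

def continuous_log(s_param):
    """Complex log of a frequency sweep with the imaginary part unwrapped,
    so it stays continuous across +/-2*pi branch jumps of angle()."""
    return np.log(np.abs(s_param)) + 1j * np.unwrap(np.angle(s_param))

# A sweep whose true phase winds past pi: angle() alone jumps, unwrap doesn't.
true_phase = np.linspace(0.0, 4.0 * np.pi, 200)
sweep = 0.5 * np.exp(1j * true_phase)      # constant-magnitude S-parameter
log_sweep = continuous_log(sweep)
```

Near a sharp resonance the phase can change by more than pi between adjacent frequency points, which defeats a plain unwrap; that is the regime the APC tracing logic is designed to handle.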
Morille, Yohann; Haeffelin, Martial; Drobinski, Philippe; Pelon, Jacques
2007-01-01
Today several lidar networks around the world provide large data sets that are extremely valuable for aerosol and cloud research. Retrieval of atmospheric constituent properties from lidar profiles requires detailed analysis of spatial and temporal variations of the signal. This paper presents an algorithm called STRAT (STRucture of the ATmosphere) designed to retrieve the vertical distribution of cloud and aerosol layers in the boundary layer and through the free trop...
Weighted nonnegative tensor factorization for atmospheric tomography reconstruction
Carmona-Ballester, David; Trujillo-Sevilla, Juan M.; Bonaque-González, Sergio; Gómez-Cárdenes, Óscar; Rodríguez-Ramos, José M.
2018-06-01
Context. Increasing the area on the sky over which atmospheric turbulence can be corrected is a matter of wide interest in astrophysics, especially with a new generation of extremely large telescopes (ELTs) coming in the near future. Aims: In this study we tested whether a method for visual representation in three-dimensional displays, the weighted nonnegative tensor factorization (WNTF), is able to improve the quality of the atmospheric tomography (AT) reconstruction compared to a more standard method such as a randomized Kaczmarz algorithm. Methods: A total of 1000 different atmospheres were simulated and recovered by both methods. Recovery was computed for two and three layers and for four different constellations of laser guide stars (LGS). The performance of both methods was tested by means of the radial average of the Strehl ratio across the field of view of a telescope of 8 m diameter with a sky coverage of 97.8 arcsec. Results: The proposed method significantly outperformed the Kaczmarz algorithm in all tested cases (p ≤ 0.05). In WNTF, the three-layer configuration provided better outcomes, but there was no clear relation between different LGS constellations and the quality of the Strehl ratio maps. Conclusions: The WNTF method is a novel technique in astronomy, and its use to recover atmospheric turbulence profiles was proposed and tested. It showed better quality of reconstruction than a conventional Kaczmarz algorithm, independently of the number and height of recovered atmospheric layers and of the constellation of laser guide stars used. The WNTF method was shown to be a useful tool in highly ill-posed AT problems, where classical algorithms have difficulty producing high Strehl ratio maps.
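For reference, the baseline method named above, the randomized Kaczmarz algorithm, projects the current iterate onto one randomly selected row equation per step; a minimal sketch on a consistent synthetic system (not an atmospheric tomography problem):

```python
import numpy as np

def randomized_kaczmarz(a_mat, b, iters=5000, seed=0):
    """Randomized Kaczmarz: at each step project the iterate onto one
    hyperplane a_i . x = b_i, choosing row i with probability ~ ||a_i||^2."""
    rng = np.random.default_rng(seed)
    x = np.zeros(a_mat.shape[1])
    row_norms_sq = np.einsum('ij,ij->i', a_mat, a_mat)
    probs = row_norms_sq / row_norms_sq.sum()
    for _ in range(iters):
        i = rng.choice(a_mat.shape[0], p=probs)
        x += (b[i] - a_mat[i] @ x) / row_norms_sq[i] * a_mat[i]
    return x

# Consistent overdetermined system: the iterates converge to the solution.
rng = np.random.default_rng(1)
a_mat = rng.normal(size=(40, 10))
x_true = rng.normal(size=10)
x_hat = randomized_kaczmarz(a_mat, a_mat @ x_true)
```

In AT the rows encode turbulence-layer contributions along laser-guide-star lines of sight, and the ill-conditioning of that system is what the WNTF approach is reported to handle better.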
Jiang, Guo-Qing; Xu, Jing; Wei, Jun
2018-04-01
Two algorithms based on machine learning neural networks are proposed—the shallow learning (S-L) and deep learning (D-L) algorithms—that can potentially be used in atmosphere-only typhoon forecast models to provide flow-dependent typhoon-induced sea surface temperature cooling (SSTC) for improving typhoon predictions. The major challenge for existing SSTC algorithms in forecast models is how to accurately predict the SSTC induced by an upcoming typhoon, which requires information not only from historical data but, more importantly, from the target typhoon itself. The S-L algorithm consists of a single layer of neurons with mixed atmospheric and oceanic factors. Such a structure is found to be unable to correctly represent the physical typhoon-ocean interaction. It tends to produce an unstable SSTC distribution, for which any perturbation may lead to changes in both SSTC pattern and strength. The D-L algorithm extends the neural network to a 4 × 5 neuron matrix with atmospheric and oceanic factors separated into different layers of neurons, so that the machine learning can determine the roles of atmospheric and oceanic factors in shaping the SSTC. It therefore produces a stable crescent-shaped SSTC distribution, with its large-scale pattern determined mainly by atmospheric factors (e.g., winds) and small-scale features by oceanic factors (e.g., eddies). Sensitivity experiments reveal that the D-L algorithm reduces maximum wind intensity errors by 60-70% for four case study simulations, compared to their atmosphere-only model runs.
Vector Green's function algorithm for radiative transfer in plane-parallel atmosphere
Energy Technology Data Exchange (ETDEWEB)
Qin Yi [School of Physics, University of New South Wales (Australia)]. E-mail: yi.qin@csiro.au; Box, Michael A. [School of Physics, University of New South Wales (Australia)]
2006-01-15
Green's function is a widely used approach for boundary value problems. In problems related to radiative transfer, Green's function has been found to be useful in land, ocean and atmosphere remote sensing. It is also a key element in higher order perturbation theory. This paper presents an explicit expression of the Green's function, in terms of the source and radiation field variables, for a plane-parallel atmosphere with either vacuum boundaries or a reflecting (BRDF) surface. Full polarization state is considered, but the algorithm has been developed in such a way that it can be easily reduced to solve scalar radiative transfer problems, which makes it possible to implement a single set of code for computing both the scalar and the vector Green's function.
Drift-corrected Odin-OSIRIS ozone product: algorithm and updated stratospheric ozone trends
Directory of Open Access Journals (Sweden)
A. E. Bourassa
2018-01-01
A small long-term drift in the Optical Spectrograph and Infrared Imager System (OSIRIS) stratospheric ozone product, manifested mostly since 2012, is quantified and attributed to a changing bias in the limb pointing knowledge of the instrument. A correction to this pointing drift, using a predictable shape in the measured limb radiance profile, is implemented and applied within the OSIRIS retrieval algorithm. This new data product, version 5.10, displays substantially better long- and short-term agreement with Microwave Limb Sounder (MLS) ozone throughout the stratosphere due to the pointing correction. Previously reported stratospheric ozone trends over the period 1984–2013, which were derived by merging the altitude–number density ozone profile measurements from the Stratospheric Aerosol and Gas Experiment (SAGE II) satellite instrument (1984–2005) and from OSIRIS (2002–2013), are recalculated using the new OSIRIS version 5.10 product and extended to 2017. These results still show statistically significant positive trends throughout the upper stratosphere since 1997, but at weaker levels that are more closely in line with estimates from other data records.
International Nuclear Information System (INIS)
Parra, J.C.; Acevedo, P.S.; Sobrino, J.A.; Morales, L.J.
2006-01-01
Four algorithms based on the split-window technique are applied to estimate land surface temperature from the data provided by the Advanced Very High Resolution Radiometer (AVHRR) sensor, on board the National Oceanic and Atmospheric Administration (NOAA) series of satellites. These algorithms include corrections for atmospheric characteristics and for the emissivity of the different land surfaces. Fourteen AVHRR-NOAA images corresponding to October 2003 and January 2004 were used. Simultaneously, measurements of soil temperature were collected at the Carillanca hydro-meteorological station in the La Araucanía Region, Chile (38 deg 41 min S; 72 deg 25 min W). Of all the algorithms used, the best results correspond to the model proposed by Sobrino and Raissouni (2000), with a mean and standard deviation of the difference between the soil temperature measured in situ and that estimated by the algorithm of -0.06 and 2.11 K, respectively. (Author)
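A generic split-window form can be sketched as below; the coefficients are placeholders for illustration, not the values fitted by Sobrino and Raissouni (2000):

```python
def split_window_lst(t4, t5, emissivity, c0=0.0, c1=1.0, c2=0.0, c3=40.0):
    """Generic split-window form: LST = T4 + c1*(T4 - T5) + c2*(T4 - T5)**2
    + c0 + c3*(1 - emissivity). All coefficients here are placeholders."""
    dt = t4 - t5
    return t4 + c1 * dt + c2 * dt**2 + c0 + c3 * (1.0 - emissivity)

# AVHRR channel 4 and 5 brightness temperatures (K), hypothetical values
lst = split_window_lst(300.0, 298.5, 0.98)
```

The T4 - T5 difference carries the atmospheric (mostly water vapor) correction, while the emissivity term accounts for the departure of the surface from a blackbody.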
A fingerprint key binding algorithm based on vector quantization and error correction
Li, Liang; Wang, Qian; Lv, Ke; He, Ning
2012-04-01
In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g., fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and accessed through fingerprint verification. To accommodate the intrinsic fuzziness of fingerprint variation, vector quantization and error correction techniques are introduced to transform the fingerprint template, which is then bound with the key, after a process of fingerprint registration and extraction of the global ridge pattern. The key itself remains secure because only its hash value is stored, and the key is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
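The binding scheme described above is in the spirit of a fuzzy-commitment construction; the sketch below substitutes a toy 3x repetition code for the paper's vector quantization and error-correction machinery, so all names and parameters are illustrative:

```python
import hashlib

def bind_key(template_bits, key_bits):
    """Bind a key to a binarized fingerprint template (toy sketch)."""
    # Encode the key with a 3x repetition code (stand-in error correction).
    codeword = [b for b in key_bits for _ in range(3)]
    assert len(codeword) == len(template_bits)
    # The stored "locker" reveals neither the key nor the template on its own.
    locker = [c ^ t for c, t in zip(codeword, template_bits)]
    return locker, hashlib.sha256(bytes(key_bits)).hexdigest()

def release_key(locker, query_bits, stored_hash):
    """Release the key only if the query fingerprint is close enough."""
    noisy = [l ^ q for l, q in zip(locker, query_bits)]
    # Majority-vote decoding corrects up to one flipped bit per triple.
    key = [int(sum(noisy[i:i + 3]) >= 2) for i in range(0, len(noisy), 3)]
    return key if hashlib.sha256(bytes(key)).hexdigest() == stored_hash else None
```

As in the abstract, only the hash of the key is stored, so a failed verification yields nothing about the key itself.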
Energy Technology Data Exchange (ETDEWEB)
Sloop, Christy
2013-04-01
This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 569: Area 3 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada. CAU 569 comprises the following nine corrective action sites (CASs):
• 03-23-09, T-3 Contamination Area
• 03-23-10, T-3A Contamination Area
• 03-23-11, T-3B Contamination Area
• 03-23-12, T-3S Contamination Area
• 03-23-13, T-3T Contamination Area
• 03-23-14, T-3V Contamination Area
• 03-23-15, S-3G Contamination Area
• 03-23-16, S-3H Contamination Area
• 03-23-21, Pike Contamination Area
The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 569, based on the implementation of the corrective actions listed in Table ES-2.
Directory of Open Access Journals (Sweden)
Mehrdad Mohammadpour
2013-10-01
Purpose: To assess the safety, efficacy, and predictability of photorefractive keratectomy (PRK) [tissue-saving (TS) versus plano-scan (PS) ablation algorithms] with the Technolas 217z excimer laser for correction of myopic astigmatism. Methods: In this retrospective study, one hundred and seventy eyes of 85 patients were included: 107 eyes (62.9%) with the PS and 63 eyes (37.1%) with the TS algorithm. The TS algorithm was applied for those with central corneal thickness less than 500 µm or estimated residual stromal thickness less than 420 µm. Mitomycin C (MMC) was applied for 120 eyes (70.6%), in cases of an ablation depth more than 60 μm and/or astigmatic correction more than one diopter (D). Mean sphere, cylinder, spherical equivalent (SE) refraction, uncorrected visual acuity (UCVA), and best corrected visual acuity (BCVA) were measured preoperatively and at 4, 12, and 24 weeks postoperatively. Results: One, three, and six months postoperatively, 60%, 92.9%, and 97.5% of eyes had UCVA of 20/20 or better, respectively. Mean preoperative and 1-, 3-, and 6-month postoperative SE were -3.48±1.28 D (-1.00 to -8.75), -0.08±0.62 D, -0.02±0.57 D, and -0.004±0.29 D, respectively. Also, 87.6%, 94.1%, and 100% of eyes were within ±1.0 D of emmetropia, and 68.2%, 75.3%, and 95% were within ±0.5 D. The safety and efficacy indices were 0.99 and 0.99 at 12 weeks and 1.009 and 0.99 at 24 weeks, respectively. There was no clinically or statistically significant difference between the outcomes of the PS and TS algorithms, or between those with or without MMC in either group, in terms of safety, efficacy, predictability, or stability. Dividing the eyes into subjective SE≤4 D and SE≥4 D groups postoperatively, there was no significant difference between the predictability of the two groups. There were no intra- or postoperative complications. Conclusion: Outcomes of PRK for correction of myopic astigmatism showed great promise with both the PS and TS algorithms.
International Nuclear Information System (INIS)
Huijsmans, D.P.
1982-01-01
The aim of this research was to distinguish as accurately as possible between two mechanisms behind a half-daily variation in the detected numbers of neutrons and mesons among the secondary cosmic-ray particles at sea level. These two mechanisms are due to air pressure variations at sea level and affect the number of primary particles with a certain arrival direction. The distribution among arrival directions in the ecliptic plane varies if a gradient exists in the guiding-centre density of primaries in the directions perpendicular to the neutral sheet. Chapter 2 is devoted to a physically and statistically justifiable determination of the barometric coefficient for neutron measurements and air pressures. Chapter 3 deals with the estimation of atmospheric correction coefficients for the elimination of the influence of changing atmospheric conditions on the number of detected mesons. For mesons, the variation of total mass, and also the variations in mass distribution along the trajectory of the mesons, are important. After correction for atmospheric variations using the resulting atmospheric correction coefficients from chapters 2 and 3, the influence of the structure of the interplanetary magnetic field near the earth is examined in chapter 4. Finally, in chapter 5, a power spectral analysis of the variations in the corrected intensities of neutrons and mesons is carried out. Such an analysis decomposes the variance of a time series into contributions within small frequency intervals. From the power spectra of variations on a yearly basis, a statistically founded judgement can be given as to the significance of the semi-diurnal variation during the different phases of the solar magnetic activity cycle. (Auth.)
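The barometric correction of chapter 2 has the standard exponential form; the sketch below assumes a typical sea-level neutron-monitor coefficient of about -0.72 %/hPa, which is an illustrative value rather than the one derived in the thesis:

```python
import math

def pressure_corrected_rate(n_obs, p_hpa, p_ref=1013.25, beta=-0.0072):
    """Reduce an observed neutron count rate to a reference pressure.

    n_obs : observed count rate
    p_hpa : station pressure (hPa)
    beta  : barometric coefficient per hPa (illustrative value only)
    """
    # N_obs = N_corr * exp(beta * (p - p_ref))  =>  invert the attenuation
    return n_obs * math.exp(-beta * (p_hpa - p_ref))
```

Since beta is negative, counts observed under higher-than-reference pressure are corrected upward, which is exactly the atmospheric influence the thesis removes before its spectral analysis.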
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
Kalashnikova, Olga; Garay, Michael; Xu, Feng; Diner, David; Seidel, Felix
2016-07-01
Multiangle spectro-polarimetric measurements have been advocated as an additional tool for better understanding and quantifying the aerosol properties needed for atmospheric correction for ocean color retrievals. The central concern of this work is the assessment of the effects of absorbing aerosol properties on remote sensing reflectance measurement uncertainty caused by neglecting UV-enhanced absorption of carbonaceous particles and by not accounting for dust nonsphericity. In addition, we evaluate the polarimetric sensitivity of absorbing aerosol properties in light of measurement uncertainties achievable for the next generation of multi-angle polarimetric imaging instruments, and demonstrate advantages and disadvantages of wavelength selection in the UV/VNIR range. In this work a vector Markov Chain radiative transfer code including bio-optical models was used to quantitatively evaluate differences in water-leaving radiances between atmospheres containing realistic UV-enhanced and non-spherical aerosols and the SEADAS carbonaceous and dust-like aerosol models. The phase matrices for the spherical smoke particles were calculated using a standard Mie code, while those for non-spherical dust particles were calculated using the numerical approach developed for modeling dust for the AERONET network of ground-based sunphotometers. As a next step, we have developed a retrieval code that employs a coupled Markov Chain (MC) and adding/doubling radiative transfer method for joint retrieval of aerosol properties and water-leaving radiance from Airborne Multiangle SpectroPolarimetric Imager-1 (AirMSPI-1) polarimetric observations. The AirMSPI-1 instrument has been flying aboard the NASA ER-2 high-altitude aircraft since October 2010. AirMSPI typically acquires observations of a target area at 9 view angles between ±67° at 10 m resolution. AirMSPI spectral channels are centered at 355, 380, 445, 470, 555, 660, and 865 nm, with the 470, 660, and 865 nm channels reporting linear polarization.
Kleinert, Anne
2006-01-20
The detectors used in the cryogenic limb-emission sounder MIPAS-B2 (Michelson Interferometer for Passive Atmospheric Sounding) show a nonlinear response, which leads to radiometric errors in the calibrated spectra if the nonlinearity is not taken into account. In the case of emission measurements, the dominant error that arises from the nonlinearity is the changing detector responsivity as the incident photon load changes. The effect of the distortion of a single interferogram can be neglected. A method to characterize the variable responsivity and to correct for this effect is proposed. Furthermore, a detailed error estimation is presented.
International Nuclear Information System (INIS)
Fairbanks, Leandro R.; Barbi, Gustavo L.; Silva, Wiliam T.; Reis, Eduardo G.F.; Borges, Leandro F.; Bertucci, Edenyse C.; Maciel, Marina F.; Amaral, Leonardo L.
2011-01-01
Since the cross sections for various radiation interactions depend upon the tissue material, the presence of heterogeneities affects the final dose delivered. This paper aims to analyze how different treatment planning algorithms (Fast Fourier Transform, Convolution, Superposition, Fast Superposition, and Clarkson) perform when heterogeneity corrections are used. To that end, a Farmer-type ionization chamber was positioned reproducibly (at the time of CT as well as irradiation) inside several phantoms made of aluminum, bone, cork, and solid water slabs. The percent difference between the dose measured and that calculated by the various algorithms was less than 5%. The Convolution method shows better results for high-density materials (difference ∼1%), whereas the Superposition algorithm is more accurate for low densities (around 1.1%). (author)
Digital Repository Service at National Institute of Oceanography (India)
Nagamani, P.V.; Latha, T.P.; Rao, K.H.; Suresh, T.; Choudhury, S.B.; Dutt, C.B.S.; Dadhwal, V.K.
Cloud masking is one of the primary and most important steps in the atmospheric correction procedure, in particular for coastal ocean waters. Cloud masking for ocean colour data processing is based on the assumption that the water reflectance is close...
Shastri, Niket; Pathak, Kamlesh
2018-05-01
The water vapor content of the atmosphere plays a very important role in climate. In this paper the application of GPS signals in meteorology is discussed, a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, such as artificial neural networks, support vector machines, and multiple linear regression, are used to predict precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
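Of the three predictors compared, multiple linear regression is the simplest to sketch. The pure-Python least-squares fit below (via the normal equations) is a generic illustration, not the authors' implementation, and the synthetic predictors are hypothetical:

```python
def fit_mlr(X, y):
    """Ordinary least squares with intercept, via the normal equations."""
    Xa = [[1.0] + list(row) for row in X]          # prepend intercept column
    n, k = len(Xa), len(Xa[0])
    # Build (X'X) beta = X'y
    A = [[sum(Xa[r][i] * Xa[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(Xa[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

def predict(beta, row):
    return beta[0] + sum(w * x for w, x in zip(beta[1:], row))
```

In the study's setting, the rows of `X` would hold GPS-derived predictors and `y` the observed precipitable water vapor; RMSE and MAE are then computed from `predict` residuals.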
Sokolov, Anton; Gengembre, Cyril; Dmitriev, Egor; Delbarre, Hervé
2017-04-01
The problem of classifying local atmospheric meteorological events in the coastal area, such as sea breezes, fogs, and storms, is considered. In-situ meteorological data such as wind speed and direction, temperature, humidity, and turbulence are used as predictors. Local atmospheric events of 2013-2014 were analysed manually to train classification algorithms in the coastal area of the English Channel at Dunkirk (France). Then, ultrasonic anemometer data and LIDAR wind-profiler data were used as predictors. Several algorithms were applied to determine meteorological events from local data, such as a decision tree, the nearest-neighbour classifier, and a support vector machine. A comparison of the classification algorithms was carried out, and the most important predictors for each event type were determined. It was shown that in more than 80 percent of the cases the machine learning algorithms detect the meteorological class correctly. We expect that this methodology could also be applied to classify events from climatological in-situ data or from modelling data. It allows estimating the frequency of each event in the perspective of climate change.
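Among the classifiers compared, the nearest-neighbour rule is simple enough to sketch in a few lines; the toy predictors and event labels below are invented for illustration:

```python
def nearest_neighbour_classify(sample, train):
    """1-NN: return the label of the closest training sample.

    train is a list of (predictor_tuple, label) pairs; predictors are
    assumed to be standardized so Euclidean distance is meaningful.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: dist2(sample, item[0]))[1]
```

A decision tree or support vector machine would replace this distance rule with learned split thresholds or a separating hyperplane over the same meteorological predictors.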
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.
Use of GLOBE Observations to Derive a Landsat 8 Split Window Algorithm for Urban Heat Island
Fagerstrom, L.; Czajkowski, K. P.
2017-12-01
Surface temperature has been studied to investigate the warming of urban climates, also known as urban heat islands, which can impact urban planning, public health, pollution levels, and energy consumption. However, the full potential of remotely sensed images is limited when analyzing land surface temperature due to the daunting task of correcting for atmospheric effects. Landsat 8 has two thermal infrared sensors. With two bands in the infrared region, a split window algorithm (SWA) can be applied to correct for atmospheric effects. This project used in situ surface temperature measurements from NASA's ground observation program, the Global Learning and Observations to Benefit the Environment (GLOBE), to derive the correcting coefficients for use in the SWA. The GLOBE database provided land surface temperature data that coincided with Landsat 8 overpasses. The land surface temperature derived from the Landsat 8 SWA can be used to analyze the urban heat island effect.
International Nuclear Information System (INIS)
Lue Kunhan; Lin Hsinhon; Chuang Kehshih; Kao Chihhao, K.; Hsieh Hungjen; Liu Shuhsin
2014-01-01
In positron emission tomography (PET) of the dopaminergic system, quantitative measurements of nigrostriatal dopamine function are useful for differential diagnosis. A subregional analysis of striatal uptake enables the diagnostic performance to be more powerful. However, the partial volume effect (PVE) induces an underestimation of the true radioactivity concentration in small structures. This work proposes a simple algorithm for subregional analysis of striatal uptake with partial volume correction (PVC) in dopaminergic PET imaging. The PVC algorithm analyzes the separate striatal subregions and takes into account the PVE based on the recovery coefficient (RC). The RC is defined as the ratio of the PVE-uncorrected to PVE-corrected radioactivity concentration, and is derived from a combination of the traditional volume of interest (VOI) analysis and the large VOI technique. The clinical studies, comprising 11 patients with Parkinson's disease (PD) and 6 healthy subjects, were used to assess the impact of PVC on the quantitative measurements. Simulations on a numerical phantom that mimicked realistic healthy and neurodegenerative situations were used to evaluate the performance of the proposed PVC algorithm. In both the clinical and the simulation studies, the striatal-to-occipital ratio (SOR) values for the entire striatum and its subregions were calculated with and without PVC. In the clinical studies, the SOR values in each structure (caudate, anterior putamen, posterior putamen, putamen, and striatum) were significantly higher by using PVC in contrast to those without. Among the PD patients, the SOR values in each structure and quantitative disease severity ratings were shown to be significantly related only when PVC was used. For the simulation studies, the average absolute percentage error of the SOR estimates before and after PVC were 22.74% and 1.54% in the healthy situation, respectively; those in the neurodegenerative situation were 20.69% and 2
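The recovery-coefficient correction amounts to dividing the measured concentration by the RC before forming the striatal-to-occipital ratio; a minimal sketch, with illustrative numbers only:

```python
def recovery_coefficient(measured_conc, true_conc):
    """RC = PVE-uncorrected / PVE-corrected radioactivity concentration."""
    return measured_conc / true_conc

def striatal_to_occipital_ratio(striatal_conc, occipital_conc, rc=1.0):
    """SOR with optional partial volume correction of the striatal VOI."""
    return (striatal_conc / rc) / occipital_conc
```

With RC < 1 (small structures suffering spill-out), the corrected SOR is larger than the uncorrected one, matching the significantly higher post-PVC SOR values reported in the abstract.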
The Bouguer Correction Algorithm for Gravity with Limited Range
MA Jian; WEI Ziqing; WU Lili; YANG Zhenghui
2017-01-01
The Bouguer correction is an important item in gravity reduction, but the traditional Bouguer correction, whether the planar or the spherical variant, suffers from an approximation error because of far-zone virtual terrain. The error grows as the calculation point gets higher. Therefore, gravity reduction using a Bouguer correction with limited range, in accordance with the scope of the topographic correction, was researched in this paper. After that, a simpli...
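For reference, the classical planar (infinite-slab) Bouguer correction that the paper takes as a starting point is:

```python
import math

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def planar_bouguer_mgal(height_m, density=2670.0):
    """Planar Bouguer correction 2*pi*G*rho*h, returned in mGal.

    density defaults to the standard crustal value of 2670 kg/m^3;
    the factor 1e5 converts m/s^2 to mGal.
    """
    return 2.0 * math.pi * G * density * height_m * 1e5
```

This yields the familiar ~0.112 mGal per metre of elevation; the limited-range correction studied in the paper replaces the infinite slab with one truncated to the topographic-correction radius.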
Mao-Gilles Stabilization Algorithm
Directory of Open Access Journals (Sweden)
Jérôme Gilles
2013-07-01
Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo
International Nuclear Information System (INIS)
Tachibana, Masayuki; Noguchi, Yoshitaka; Fukunaga, Jyunichi; Hirano, Naomi; Yoshidome, Satoshi; Hirose, Takaaki
2009-01-01
In a stereotactic lung irradiation study, the monitor units (MU) were calculated by pencil beam convolution with the Batho power-law inhomogeneity correction [PBC (BPL)], a dose calculation algorithm based on past measurements. The MU were then recalculated with the analytical anisotropic algorithm (AAA), a dose calculation algorithm based on theoretical data, and the values from PBC (BPL) and AAA were compared for each field. In a comparison of 1031 fields in 136 cases, the MU calculated by PBC (BPL) were about 2% smaller than those calculated by AAA. This depends on whether the calculation accounts for the spread of secondary electrons. In particular, the difference in MU is influenced by the X-ray energy. At the same X-ray energy, the difference in MU increases when the irradiation field size is small, the lung path length is long, the lung path length percentage is large, and the CT value of the lung is low. (author)
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
In this talk we describe a new type of probabilistic algorithm which we call Bellagio algorithms: randomized algorithms which are guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access to the algorithm. We show a necessary an...
International Nuclear Information System (INIS)
Waldmann, I. P.
2016-01-01
Here, we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrieval of exoplanetary atmospheres frequently requires the preselection of molecular/atomic opacities to be defined by the user. In the era of open-source, automated, and self-sufficient retrieval algorithms, manual input should be avoided. User-dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep-belief neural (DBN) networks trained to accurately recognize molecular signatures for a wide range of planets, atmospheric thermal profiles, and compositions. Reconstructions of the learned features, also referred to as the “dreams” of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work toward retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process.
Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun
2017-09-19
In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.
NWS Corrections to Observations
National Oceanic and Atmospheric Administration, Department of Commerce — Form B-14 is the National Weather Service form entitled 'Notice of Corrections to Weather Records.' The forms are used to make corrections to observations on forms...
Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.
2013-05-01
The Algorithm Development Library (ADL) is a framework that mimics the operational Interface Data Processing Segment (IDPS) system currently used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite, which was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), to be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP, and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward-modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.
Adaptive testing with equated number-correct scoring
van der Linden, Willem J.
1999-01-01
A constrained CAT algorithm is presented that automatically equates the number-correct scores on adaptive tests. The algorithm can be used to equate number-correct scores across different administrations of the same adaptive test as well as to an external reference test. The constraints are derived
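One standard way to equate number-correct scores across forms is IRT true-score equating, sketched below; the 2PL items and bisection tolerance are illustrative assumptions, not the constrained-CAT machinery of the paper:

```python
import math

def p_2pl(theta, a, b):
    """2PL item response probability with discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def expected_score(theta, items):
    """Test characteristic curve: expected number-correct at ability theta."""
    return sum(p_2pl(theta, a, b) for a, b in items)

def true_score_equate(score_x, items_x, items_y, lo=-6.0, hi=6.0):
    """Map a number-correct score on form X to the scale of form Y.

    Invert the form-X test characteristic curve by bisection, then
    evaluate the form-Y curve at the recovered ability.
    """
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if expected_score(mid, items_x) < score_x:
            lo = mid
        else:
            hi = mid
    return expected_score(0.5 * (lo + hi), items_y)
```

In the adaptive setting of the paper, constraints on item selection make the administered forms comparable, so that such number-correct equatings hold across administrations or to an external reference test.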
Beam hardening correction algorithm in microtomography images
International Nuclear Information System (INIS)
Sales, Erika S.; Lima, Inaya C.B.; Lopes, Ricardo T.; Assis, Joaquim T. de
2009-01-01
Quantification of the mineral density of bone samples is directly related to the attenuation coefficient of bone. The X-rays used in microtomography images are polychromatic and have a moderately broad energy spectrum, which causes the low-energy X-rays passing through a sample to be absorbed, leading to a decrease in the attenuation coefficient and possibly to artifacts. This decrease in the attenuation coefficient is due to a process called beam hardening. In this work, the beam hardening in microtomography images of vertebrae of Wistar rats subjected to a study of hyperthyroidism was corrected by the method of linearization of the projections, discretized using an energy spectrum, also called the spectrum of Herman. The results without correction for beam hardening showed significant differences in bone volume, which could lead to a possible diagnosis of osteoporosis. The corrected data still showed a decrease in bone volume, but this decrease was not significant within a confidence interval of 95%. (author)
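Linearization of the projections maps each measured polychromatic projection value onto the scale of an ideal monochromatic one, typically through a calibration curve; the piecewise-linear sketch below, including the calibration values, is purely illustrative of that idea:

```python
from bisect import bisect_left

def linearize_projection(p_meas, calib_meas, calib_ideal):
    """Map a measured polychromatic projection value onto the linear
    (monochromatic) scale by piecewise-linear interpolation of a
    calibration curve (e.g., from a step-wedge phantom).

    calib_meas must be sorted ascending; calib_ideal gives the ideal
    monochromatic projection for each calibration thickness.
    """
    i = min(max(bisect_left(calib_meas, p_meas), 1), len(calib_meas) - 1)
    x0, x1 = calib_meas[i - 1], calib_meas[i]
    y0, y1 = calib_ideal[i - 1], calib_ideal[i]
    return y0 + (y1 - y0) * (p_meas - x0) / (x1 - x0)
```

After this remapping, the reconstructed attenuation coefficients no longer sag toward the object centre, which is the cupping artifact that beam hardening produces.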
Atmospheric correction of Earth-observation remote sensing images
Indian Academy of Sciences (India)
In Earth observation, atmospheric particles severely contaminate, through absorption and scattering, the electromagnetic signal reflected from the Earth's surface. Land surface characterization would benefit greatly if these atmospheric effects could be removed from imagery to retrieve surface reflectance that ...
Directory of Open Access Journals (Sweden)
Pei-Fang (Jennifer) Tsai
2012-01-01
Full Text Available Remanufacturing of used products has become a strategic issue for cost-sensitive businesses. Because the supply of end-of-life (EoL) products is inherently uncertain, reverse logistics can only be sustained with dynamic production planning for the disassembly process. This research investigates the sequencing of disassembly operations as a single-period partial disassembly optimization (SPPDO) problem that minimizes total disassembly cost. AND/OR graph representation is used to include all disassembly sequences of a returned product. A label-correcting algorithm is proposed to find an optimal partial disassembly plan when a specific reusable subpart must be retrieved from the original return. Then, a heuristic procedure that utilizes this polynomial-time algorithm is presented to solve the SPPDO problem. Numerical examples demonstrate the effectiveness of the solution procedure.
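The label-correcting approach mentioned above can be sketched generically. This is a minimal shortest-path version over an ordinary directed graph, not the paper's AND/OR-graph formulation; the graph, costs, and names below are illustrative.

```python
from collections import deque

def label_correcting(adj, source):
    """Generic label-correcting shortest-path search.

    adj: dict mapping node -> list of (successor, cost) pairs.
    Returns the minimal cost from source to every reachable node.
    """
    labels = {source: 0.0}      # best cost found so far per node
    queue = deque([source])     # candidate nodes whose labels may improve others
    while queue:
        u = queue.popleft()
        for v, cost in adj.get(u, []):
            new_label = labels[u] + cost
            # "Correct" the label whenever a cheaper path is found
            if new_label < labels.get(v, float("inf")):
                labels[v] = new_label
                if v not in queue:
                    queue.append(v)
    return labels

# Toy disassembly graph: nodes are disassembly states, edge costs are
# operation costs; we want the cheapest plan that frees the target part.
graph = {
    "product": [("subassembly_a", 3.0), ("subassembly_b", 2.0)],
    "subassembly_a": [("part", 1.0)],
    "subassembly_b": [("part", 4.0)],
}
best_cost = label_correcting(graph, "product")["part"]
```

Here the cheapest plan reaches the part through subassembly_a at total cost 4.0; in the paper, the same labeling idea runs over AND/OR nodes so that alternative and mandatory operations are handled together.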
Analysis of Hyperspectral Imagery for Oil Spill Detection Using SAM Unmixing Algorithm Techniques
Ahmad Keshavarz; Seyed Mohammad Karim Hashemizadeh
2017-01-01
Oil spill is one of the major marine environmental challenges. Its main impacts are preventing light transmission into deep water and hindering oxygen absorption, which can disturb the photosynthesis of aquatic plants. In this research, we use SpecTIR airborne sensor data to extract and classify oil spills from the Gulf of Mexico Deepwater Horizon (DWH) disaster of 2010. For this purpose, atmospheric correction is first performed using the FLAASH algorithm. Then, total 360 sp...
Directory of Open Access Journals (Sweden)
Robert Shuchman
2009-03-01
Full Text Available An advanced operational semi-empirical algorithm for processing satellite remote sensing data in the visible region is described. Based on the Levenberg-Marquardt multivariate optimization procedure, the algorithm is developed for retrieving major water colour producing agents: chlorophyll-a, suspended minerals and dissolved organics. Two assurance units incorporated by the algorithm are intended to flag pixels with inaccurate atmospheric correction and specific hydro-optical properties not covered by the applied hydro-optical model. The hydro-optical model is a set of spectral cross-sections of absorption and backscattering of the colour producing agents. The combination of the optimization procedure and a replaceable hydro-optical model makes the developed algorithm not specific to a particular satellite sensor or a water body. The algorithm performance efficiency is amply illustrated for SeaWiFS, MODIS and MERIS images over a variety of water bodies.
DEFF Research Database (Denmark)
Proud, Simon Richard; Fensholt, R.; Rasmussen, M.O.
2010-01-01
Atmospheric perturbations are a large source of uncertainty in remotely sensed imagery of the Earth's surface. This paper explores the effectiveness of the simplified method for atmospheric correction (SMAC) in reducing the effects of these perturbations in images of the African Continent gathered...... by the Spinning Enhanced Visible & InfraRed Imager (SEVIRI) aboard Meteosat Second Generation (MSG). In order to examine the accuracy of the SMAC we compare its results to those computed by the Second Simulation of the Satellite Signal in the Solar Spectrum (6SV1.1), a highly accurate radiative transfer code......, for a wide range of atmospheric conditions. We find that the SMAC does not offer a high level of accuracy under many sets of atmospheric conditions with under 20% of observations in channels 1 and 2 providing a relative error of less than 10% when compared to 6SV1.1. Those observations involving medium...
Directory of Open Access Journals (Sweden)
V. S. Kudryashov
2016-01-01
Full Text Available The article is devoted to the development of a correction control algorithm for the temperature mode of a periodic rubber-mixing process at JSC "Voronezh tire plant". The algorithm is designed to run in the main controller of the rubber-mixing section, a Siemens S7 CPU319F-3 PN/DP, which forms setpoints for the local temperature controllers HESCH HE086 and Jumo dTRON304 operating the tempering stations. To develop the algorithm, a systematic analysis of the rubber-mixing process as a control object was performed, and a mathematical model of the process was built from heat balance equations describing heat transfer through the walls of the technological devices, the change of coolant temperature, and the temperature of the rubber compound during mixing until discharge from the mixer chamber. Owing to the complexity and nonlinearity of the control object (the rubber mixer), and drawing on existing methods and wide experience in controlling this device in an industrial environment, the correction algorithm is implemented on the basis of an artificial single-layer neural network; it corrects the setpoints of the local controllers for the cooling water temperature and the air temperature in the workshop, which can vary considerably with the season, during prolonged operation of the equipment, or during downtime. The tempering stations are controlled by changing the flow of cold water from the cooler and by on/off control of the heating elements. Analysis of the model experiment results, and of practical tests with the main controller programmed in the STEP 7 environment at the enterprise, showed a decrease in mixing time for different types of rubber through a reduced control error in the heat transfer process.
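As a rough illustration of a single-layer neural-network correction of the kind described above, the sketch below trains a linear single-layer network with the delta (LMS) rule to map cooling-water and shop-air temperatures to a setpoint correction. The data, coefficients, and function names are entirely synthetic assumptions, not the plant's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (illustrative only): inputs are cooling-water and
# shop-air temperatures in deg C; target is the setpoint correction to learn.
X = rng.uniform([5.0, 10.0], [25.0, 35.0], size=(200, 2))
y = X @ np.array([-0.15, -0.05]) + 4.0 + rng.normal(0.0, 0.05, size=200)

# Standardise inputs so the delta rule converges quickly
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma

# Single-layer linear network trained with the delta (LMS) rule
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    err = Xs @ w + b - y              # prediction error on the whole batch
    w -= lr * (Xs.T @ err) / len(y)   # gradient step on the weights
    b -= lr * err.mean()              # gradient step on the bias

def setpoint_correction(t_water, t_air):
    """Correction added to the local controllers' temperature setpoints."""
    z = (np.array([t_water, t_air]) - mu) / sigma
    return float(z @ w + b)
```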
International Nuclear Information System (INIS)
Chan, K.L.; Ning, Z.; Westerdahl, D.; Wong, K.C.; Sun, Y.W.; Hartl, A.; Wenig, M.O.
2014-01-01
In this paper, we present the first dispersive infrared spectroscopic (DIRS) measurement of atmospheric carbon dioxide (CO2) using a new scanning Fabry–Pérot interferometer (FPI) sensor. The sensor measures optical spectra in the mid-infrared (3900 nm to 5220 nm) wavelength range with a full width at half maximum (FWHM) spectral resolution of 78.8 nm at the CO2 absorption band (∼4280 nm) and a sampling resolution of 20 nm. The CO2 concentration is determined by fitting the measured optical absorption spectra to the CO2 reference spectrum. Interference from other major absorbers in the same wavelength range, e.g., carbon monoxide (CO) and water vapor (H2O), was removed by including their reference spectra in the fit as well. Detailed descriptions of the instrumental setup, the retrieval procedure, a modeling study for error analysis, and laboratory validation using standard gas concentrations are presented. An iterative algorithm was developed and tested to account for the nonlinear response of the fit function to the absorption cross sections caused by the broad instrument function. A modeling study of the retrieval algorithm showed that errors due to instrument noise can be considerably reduced by using the dispersive spectral information in the retrieval. The mean measurement error of the prototype DIRS CO2 measurement is about ±2.5 ppmv for 1-minute averaged data, and down to ±0.8 ppmv for 10-minute averaged data. A field test of atmospheric CO2 measurement was carried out at an urban site in Hong Kong for a month and compared to a commercial non-dispersive infrared (NDIR) CO2 analyzer. The 10-minute averaged data show good agreement between the DIRS and NDIR measurements, with a Pearson correlation coefficient (R) of 0.99. This new method offers an alternative approach to atmospheric CO2 measurement featuring high accuracy, correction of nonlinear absorption, and correction of water vapor interference.
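The core retrieval step (fitting the measured spectrum against reference spectra of CO2, CO, and H2O simultaneously) can be illustrated with a linear least-squares fit. The Gaussian "reference spectra" below are toy stand-ins for the real cross sections, and the paper's iterative nonlinearity correction is omitted.

```python
import numpy as np

wl = np.linspace(3900.0, 5220.0, 67)   # wavelength grid in nm (20 nm sampling)

def band(center, width):
    """Toy Gaussian absorption band standing in for a reference cross section."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Illustrative reference spectra for the three absorbers in this range
refs = np.column_stack([
    band(4280.0, 80.0),    # CO2 band near 4280 nm
    band(4600.0, 60.0),    # CO
    band(5100.0, 150.0),   # H2O
])

# Synthetic measured optical-depth spectrum: a known mixture plus noise
truth = np.array([1.2, 0.3, 0.5])
rng = np.random.default_rng(1)
measured = refs @ truth + rng.normal(0.0, 0.01, size=wl.size)

# Fitting all three references at once retrieves the CO2 amount while
# removing the interference of CO and H2O, as the abstract describes
coeffs, *_ = np.linalg.lstsq(refs, measured, rcond=None)
```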
Directory of Open Access Journals (Sweden)
Y. W. Sun
2013-08-01
Full Text Available In this paper, we present an optimized analysis algorithm for non-dispersive infrared (NDIR) in situ monitoring of stack emissions. The proposed algorithm simultaneously compensates for nonlinear absorption and cross interference among different gases. We present a mathematical derivation of the measurement error caused by variations in interference coefficients when nonlinear absorption occurs. The proposed algorithm is derived from a classical one and uses interference functions to quantify cross interference. The interference functions vary proportionally with the nonlinear absorption, so interference coefficients among different gases can be modeled by the interference functions whether the gases exhibit linear or nonlinear absorption. In this study, the simultaneous analysis of two components (CO2 and CO) serves as an example for validating the proposed algorithm. The interference functions in this case are obtained by least-squares fitting with third-order polynomials. Experiments show that the results of cross-interference correction are improved significantly by utilizing the fitted interference functions when nonlinear absorption occurs. The dynamic measurement ranges of CO2 and CO are improved by about a factor of 1.8 and 3.5, respectively. A commercial analyzer with high accuracy was used to validate the CO and CO2 measurements derived from the NDIR analyzer prototype in which the new algorithm was embedded. The comparison of the two analyzers shows that the prototype works well both within the linear and nonlinear ranges.
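The interference-function idea can be sketched as follows: model the crosstalk of one gas into another channel as a third-order polynomial of the interfering absorbance, fit it by least squares, and subtract it. The calibration data and the cubic "interference law" below are synthetic assumptions for illustration.

```python
import numpy as np

# Synthetic calibration data (illustrative): apparent CO-channel absorbance
# caused by CO2 alone, growing nonlinearly with the CO2-channel absorbance.
a_co2 = np.linspace(0.0, 1.5, 20)
true_poly = [0.02, -0.05, 0.12, 0.0]            # assumed cubic interference law
rng = np.random.default_rng(2)
crosstalk_meas = np.polyval(true_poly, a_co2) + rng.normal(0.0, 1e-3, size=a_co2.size)

# Third-order polynomial least-squares fit of the interference function
interf_coeffs = np.polyfit(a_co2, crosstalk_meas, deg=3)

def co_corrected(a_co_raw, a_co2_obs):
    """Subtract the fitted CO2-into-CO interference from the raw CO absorbance."""
    return a_co_raw - np.polyval(interf_coeffs, a_co2_obs)
```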
Energy Technology Data Exchange (ETDEWEB)
Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)
2016-03-15
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples.
Whiteman, D. N.; Cadirola, M.; Venable, D.; Calhoun, M.; Miloshevich, L; Vermeesch, K.; Twigg, L.; Dirisu, A.; Hurst, D.; Hall, E.;
2012-01-01
The MOHAVE-2009 campaign brought together diverse instrumentation for measuring atmospheric water vapor. We report on the participation of the ALVICE (Atmospheric Laboratory for Validation, Interagency Collaboration and Education) mobile laboratory in the MOHAVE-2009 campaign. In appendices we also report on the performance of the corrected Vaisala RS92 radiosonde measurements during the campaign, on a new radiosonde-based calibration algorithm that reduces the influence of atmospheric variability on the derived calibration constant, and on other results of the ALVICE deployment. The MOHAVE-2009 campaign permitted the participating Raman lidar systems to discover and address measurement biases in the upper troposphere and lower stratosphere. The ALVICE lidar system was found to possess a wet bias, which was attributed to fluorescence of insect material deposited on the telescope early in the mission. Other sources of wet biases are discussed, and data from other Raman lidar systems are investigated, revealing that wet biases in upper tropospheric (UT) and lower stratospheric (LS) water vapor measurements appear to be quite common in Raman lidar systems. Lower stratospheric climatology of water vapor is investigated both as a means to check for the existence of these wet biases in Raman lidar data and as a source of correction for the bias. A correction technique is derived and applied to the ALVICE lidar water vapor profiles. Good agreement is found between corrected ALVICE lidar measurements and those of the RS92, frost point hygrometer, and total column water. The correction is offered as a general method both to quality-control Raman water vapor lidar data and to correct those data that have signal-dependent bias. The influence of the correction is shown to be small at regions in the upper troposphere where recent work indicates detection of trends in atmospheric water vapor may be most robust. The correction shown here holds promise for permitting useful upper
Estimate of the atmospheric turbidity from three broad-band solar radiation algorithms. A comparative study
Directory of Open Access Journals (Sweden)
G. López
2004-09-01
Full Text Available Atmospheric turbidity is an important parameter for assessing the air pollution in local areas, as well as being the main parameter controlling the attenuation of solar radiation reaching the Earth's surface under cloudless sky conditions. Among the different turbidity indices, the Ångström turbidity coefficient β is frequently used. In this work, we analyse the performance of three methods based on broad-band solar irradiance measurements in the estimation of β. The evaluation of the performance of the models was undertaken by graphical and statistical (root mean square errors and mean bias errors) means. The data sets used in this study comprise measurements of broad-band solar irradiance obtained at eight radiometric stations and aerosol optical thickness measurements obtained at one co-located radiometric station. Since all three methods require estimates of precipitable water content, three common methods for calculating atmospheric precipitable water content from surface air temperature and relative humidity are evaluated. Results show that these methods exhibit significant differences for low values of precipitable water. The effect of these differences in precipitable water estimates on turbidity algorithms is discussed. Differences in hourly turbidity estimates are later examined. The effects of random errors in pyranometer measurements and cloud interferences on the performance of the models are also presented. Examination of the annual cycle of monthly mean values of β for each location has shown that all three turbidity algorithms are suitable for analysing long-term trends and seasonal patterns.
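The two statistical measures used in the evaluation are straightforward to compute; a minimal sketch follows (the β values below are made up for illustration).

```python
import numpy as np

def rmse(estimated, reference):
    """Root mean square error between model estimates and reference values."""
    d = np.asarray(estimated) - np.asarray(reference)
    return float(np.sqrt(np.mean(d ** 2)))

def mbe(estimated, reference):
    """Mean bias error: positive means the model overestimates on average."""
    return float(np.mean(np.asarray(estimated) - np.asarray(reference)))

# Hypothetical hourly Angstrom beta estimates vs. co-located reference values
beta_model = [0.05, 0.08, 0.12, 0.10]
beta_ref = [0.04, 0.09, 0.10, 0.11]
```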
Energy Technology Data Exchange (ETDEWEB)
Lopez, G.; Batlles, F.J. [Dept. de Ingenieria Electrica y Termica, EPS La Rabida, Univ. de Huelva, Huelva (Spain)
2004-07-01
Atmospheric turbidity is an important parameter for assessing the air pollution in local areas, as well as being the main parameter controlling the attenuation of solar radiation reaching the Earth's surface under cloudless sky conditions. Among the different turbidity indices, the Ångström turbidity coefficient β is frequently used. In this work, we analyse the performance of three methods based on broad-band solar irradiance measurements in the estimation of β. The evaluation of the performance of the models was undertaken by graphical and statistical (root mean square errors and mean bias errors) means. The data sets used in this study comprise measurements of broad-band solar irradiance obtained at eight radiometric stations and aerosol optical thickness measurements obtained at one co-located radiometric station. Since all three methods require estimates of precipitable water content, three common methods for calculating atmospheric precipitable water content from surface air temperature and relative humidity are evaluated. Results show that these methods exhibit significant differences for low values of precipitable water. The effect of these differences in precipitable water estimates on turbidity algorithms is discussed. Differences in hourly turbidity estimates are later examined. The effects of random errors in pyranometer measurements and cloud interferences on the performance of the models are also presented. Examination of the annual cycle of monthly mean values of β for each location has shown that all three turbidity algorithms are suitable for analysing long-term trends and seasonal patterns. (orig.)
Gambacorta, A.; Nalli, N. R.; Tan, C.; Iturbide-Sanchez, F.; Wilson, M.; Zhang, K.; Xiong, X.; Barnet, C. D.; Sun, B.; Zhou, L.; Wheeler, A.; Reale, A.; Goldberg, M.
2017-12-01
The NOAA Unique Combined Atmospheric Processing System (NUCAPS) is the NOAA operational algorithm for retrieving thermodynamic and composition variables from hyperspectral thermal sounders such as CrIS, IASI, and AIRS. The combined use of microwave sounders, such as ATMS, AMSU, and MHS, enables full sounding of the atmospheric column under all-sky conditions. NUCAPS retrieval products are accessible in near real time (about 1.5-hour delay) through the NOAA Comprehensive Large Array-data Stewardship System (CLASS). Since February 2015, NUCAPS retrievals have also been accessible via Direct Broadcast, with an unprecedented low latency of less than 0.5 hours. NUCAPS builds on a long-term, multi-agency investment in algorithm research and development. The uniqueness of this algorithm consists in a number of features that are key to providing highly accurate and stable atmospheric retrievals, suitable for real-time weather and air quality applications. First, maximizing the use of the information content present in hyperspectral thermal measurements forms the foundation of the NUCAPS retrieval algorithm. Second, NUCAPS has a modular, namelist-driven design: it can process multiple hyperspectral infrared sounders (on Aqua, NPP, MetOp, and the JPSS series) by means of the same retrieval software executable and underlying spectroscopy. Finally, a cloud-clearing algorithm and the synergetic use of microwave radiance measurements enable full vertical sounding of the atmosphere under all-sky regimes. As we transition toward improved hyperspectral missions, assessing retrieval skill and consistency across multiple platforms becomes a priority for real-time user applications. The focus of this presentation is a general introduction to the recent improvements in the delivery of the NUCAPS full-spectral-resolution upgrade and an overview of the lessons learned from the 2017 Hazardous Weather Testbed Spring Experiment. Test cases will be shown on the use of NPP and Met
International Nuclear Information System (INIS)
Thing, Rune S.; Bernchou, Uffe; Brink, Carsten; Mainegra-Hing, Ernesto
2013-01-01
Purpose: Cone beam computed tomography (CBCT) image quality is limited by scattered photons. Monte Carlo (MC) simulations provide the ability to predict the patient-specific scatter contamination in clinical CBCT imaging, but lengthy simulations prevent MC-based scatter correction from being fully implemented in a clinical setting. This study investigates the combination of fast MC simulations to predict scatter distributions with a ray tracing algorithm that allows calibration between simulated and clinical CBCT images. Material and methods: An EGSnrc-based user code (egs_cbct) was used to perform MC simulations of an Elekta XVI CBCT imaging system. A 60 keV x-ray source was used, and air kerma was scored at the detector plane. Several variance reduction techniques (VRTs) were used to increase the scatter calculation efficiency. Three patient phantoms based on CT scans were simulated, namely a brain, a thorax and a pelvis scan. A ray tracing algorithm was used to calculate the detector signal due to primary photons. A total of 288 projections were simulated, one for each thread on the computer cluster used for the investigation. Results: Scatter distributions for the brain, thorax and pelvis scans were simulated within 2% statistical uncertainty in two hours per scan. Within the same time, the ray tracing algorithm provided the primary signal for each of the projections. Thus, all the data needed for MC-based scatter correction in clinical CBCT imaging were obtained within two hours per patient, using a full simulation of the clinical CBCT geometry. Conclusions: This study shows that the use of MC-based scatter corrections in CBCT imaging has great potential to improve CBCT image quality. By use of powerful VRTs to predict scatter distributions and a ray tracing algorithm to calculate the primary signal, it is possible to obtain the necessary data for patient-specific MC scatter correction within two hours per patient.
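The combination step (MC scatter estimate plus ray-traced primary) reduces to subtracting a calibrated scatter distribution from the measured projection. The sketch below uses a simple total-signal ratio to match simulated and detector scales; that calibration choice is an assumption for illustration, not the paper's scheme.

```python
import numpy as np

def scatter_corrected_projection(measured, mc_scatter, rt_primary, mc_primary):
    """Subtract a Monte Carlo scatter estimate from a measured CBCT projection.

    measured:   clinical projection (detector units)
    mc_scatter: MC-simulated scatter distribution for the same view
    rt_primary: ray-traced primary signal in detector units
    mc_primary: MC-simulated primary, used here to scale MC output to
                detector units (illustrative calibration)
    """
    scale = rt_primary.sum() / mc_primary.sum()
    return measured - scale * mc_scatter

# Toy numbers: the MC primary is half the detector scale, so the MC scatter
# estimate is doubled before subtraction, leaving the true primary signal.
corrected = scatter_corrected_projection(
    measured=np.array([4.0, 6.0]),
    mc_scatter=np.array([0.5, 0.5]),
    rt_primary=np.array([2.0, 4.0]),
    mc_primary=np.array([1.0, 2.0]),
)
```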
Energy Technology Data Exchange (ETDEWEB)
Romanov, A.; Edstrom, D.; Emanov, F. A.; Koop, I. A.; Perevedentsev, E. A.; Rogovsky, Yu. A.; Shwartz, D. B.; Valishev, A.
2017-03-28
Precise beam based measurement and correction of magnetic optics is essential for the successful operation of accelerators. The LOCO algorithm is a proven and reliable tool, which in some situations can be improved by using a broader class of experimental data. The standard data sets for LOCO include the closed orbit responses to dipole corrector variation, dispersion, and betatron tunes. This paper discusses the benefits from augmenting the data with four additional classes of experimental data: the beam shape measured with beam profile monitors; responses of closed orbit bumps to focusing field variations; betatron tune responses to focusing field variations; BPM-to-BPM betatron phase advances and beta functions in BPMs from turn-by-turn coordinates of kicked beam. All of the described features were implemented in the Sixdsim simulation software that was used to correct the optics of the VEPP-2000 collider, the VEPP-5 injector booster ring, and the FAST linac.
Wojdyga, Krzysztof; Malicki, Marcin
2017-11-01
The constant drive to improve energy efficiency forces activities aimed at reducing energy consumption and hence the amount of contaminant emissions to the atmosphere. Cooling demand, both for air conditioning and process cooling, plays an increasingly important role in the summer balance of the Polish electricity generation and distribution system. In recent years, demand for electricity during the summer months has been increasing steadily and significantly, leading to deficits in energy availability during particularly hot periods. This has caused growing importance of, and interest in, trigeneration power sources and heat recovery systems producing chilled water. The key component of such a system is a thermally driven chiller, usually an absorption chiller based on a lithium bromide and water mixture. Absorption cooling systems also exist in Poland as stand-alone systems, supplied with heat from various sources, generated solely for them or recovered as waste or otherwise useless energy. The publication presents a simple algorithm designed to reduce the amount of heat supplied to absorption chillers producing chilled water for air-conditioning purposes by lowering the cooling water temperature, and its impact on decreasing emissions of harmful substances into the atmosphere. The scale of the environmental advantages has been rated for specific sources, which enabled evaluation and estimation of implementing the simple algorithm at existing national sources.
Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration
Lovejoy, McKenna Roberts
Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors, such as exposure time, temperature, and amplifier choice, affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response, which often leaves unacceptable levels of residual non-uniformity, and calibration often has to be repeated during use to keep the image corrected. This dissertation investigates alternatives to linear NUC algorithms. The goal is to determine and compare nonlinear non-uniformity correction algorithms; ideally the results will provide better NUC performance, resulting in less residual non-uniformity, and reduce the need for recalibration. New approaches to nonlinear NUC, such as higher-order polynomials and exponentials, are considered. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms are compared with common linear non-uniformity correction algorithms. Performance is compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance is improved by identifying and replacing bad pixels prior to correction; two bad-pixel identification and replacement techniques are investigated and compared. Performance is presented in the form of simulation results as well as before-and-after images taken with short-wave infrared cameras. The initial results show, using a third order
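A per-pixel higher-order polynomial NUC of the kind investigated here can be sketched in a few lines: calibrate each pixel against known uniform illumination levels, fit a polynomial mapping pixel response back to scene flux, then evaluate it on new frames. The synthetic focal-plane model below is an assumption for illustration, not any particular sensor.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cal, h, w = 6, 4, 4                  # calibration levels and array size
flux = np.linspace(0.1, 1.0, n_cal)    # known uniform illumination levels

# Synthetic focal plane: each pixel has its own nonlinear photoresponse
gain = rng.uniform(0.8, 1.2, size=(h, w))
offset = rng.uniform(-0.05, 0.05, size=(h, w))
curvature = rng.uniform(-0.2, 0.2, size=(h, w))
responses = (gain * flux[:, None, None]
             + curvature * flux[:, None, None] ** 2 + offset)

# Per-pixel third-order polynomial mapping pixel response back to scene flux
coeffs = np.empty((4, h, w))
for i in range(h):
    for j in range(w):
        coeffs[:, i, j] = np.polyfit(responses[:, i, j], flux, deg=3)

def nuc_correct(frame):
    """Apply the per-pixel nonlinear non-uniformity correction to a raw frame."""
    out = np.empty_like(frame)
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            out[i, j] = np.polyval(coeffs[:, i, j], frame[i, j])
    return out
```

A uniform scene run through the synthetic array comes back uniform after correction, which is exactly the property residual non-uniformity metrics measure.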
BARTTest: Community-Standard Atmospheric Radiative-Transfer and Retrieval Tests
Harrington, Joseph; Himes, Michael D.; Cubillos, Patricio E.; Blecic, Jasmina; Challener, Ryan C.
2018-01-01
Atmospheric radiative transfer (RT) codes are used both to predict planetary and brown-dwarf spectra and in retrieval algorithms to infer atmospheric chemistry, clouds, and thermal structure from observations. Observational plans, theoretical models, and scientific results depend on the correctness of these calculations. Yet, the calculations are complex and the codes implementing them are often written without modern software-verification techniques. The community needs a suite of test calculations with analytically, numerically, or at least community-verified results. We therefore present the Bayesian Atmospheric Radiative Transfer Test Suite, or BARTTest. BARTTest has four categories of tests: analytically verified RT tests of simple atmospheres (single line in single layer, line blends, saturation, isothermal, multiple line-list combination, etc.), community-verified RT tests of complex atmospheres, synthetic retrieval tests on simulated data with known answers, and community-verified real-data retrieval tests. BARTTest is open-source software intended for community use and further development. It is available at https://github.com/ExOSPORTS/BARTTest. We propose this test suite as a standard for verifying atmospheric RT and retrieval codes, analogous to the Held-Suarez test for general circulation models. This work was supported by NASA Planetary Atmospheres grant NX12AI69G, NASA Astrophysics Data Analysis Program grant NNX13AF38G, and NASA Exoplanets Research Program grant NNX17AB62G.
International Nuclear Information System (INIS)
Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.
2011-01-01
Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed pattern noise (FPN), to avoid contamination from nonstochastic processes. The "gold standard" method used for reducing the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used to form the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the noise propagated from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames on the NPS and DQE values and appropriately modifies the gain correction algorithm to compensate for this effect. Results: It is shown that using the suggested gain correction algorithm, a minimum number of reference flat frames (down to one frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the method presented in this study (a) leads to the maximum DQE value that one would obtain with the conventional method and a very large number of frames, and (b) has been compared to an independent gain correction method based on the subtraction of flat-field images, leading to identical DQE values. They believe this provides robust validation of the proposed method.
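For reference, the "gold standard" gain correction that the study starts from normalizes each raw image by an average flat-field. A minimal sketch follows; the paper's modification then removes the dependence on how many flats enter the average, which is not reproduced here.

```python
import numpy as np

def gain_correct(raw, flats):
    """Standard flat-field gain correction of a raw detector image.

    flats: stack of reference flat-field frames, shape (n, h, w). Averaging
    more frames lowers the noise that the division propagates into the
    corrected image, which is the dependence the study compensates for.
    """
    flat_avg = flats.mean(axis=0)
    # Normalising by the average flat removes pixel-gain differences (FPN)
    return raw * (flat_avg.mean() / flat_avg)

# Toy detector with a 2x2 pixel-gain map; the corrected image is uniform
gains = np.array([[1.0, 2.0], [4.0, 5.0]])
flats = np.stack([gains * 100.0] * 3)     # three noise-free flat frames
corrected = gain_correct(gains * 50.0, flats)
```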
Experimental and theoretical studies of near-ground acoustic radiation propagation in the atmosphere
Belov, Vladimir V.; Burkatovskaya, Yuliya B.; Krasnenko, Nikolai P.; Rakov, Aleksandr S.; Rakov, Denis S.; Shamanaeva, Liudmila G.
2017-11-01
Results are presented from experimental and theoretical studies of near-ground propagation of monochromatic acoustic radiation on atmospheric paths from a source to a receiver, taking into account the contribution of multiple scattering from fluctuations of atmospheric temperature and wind velocity, the refraction of sound by wind velocity and temperature gradients, and its reflection by the underlying surface, for different models of the atmosphere and as a function of sound frequency, coefficient of reflection from the underlying surface, propagation distance, and source and receiver altitudes. Calculations were performed by the Monte Carlo method using the local estimation algorithm in a computer program developed by the authors. Results of experimental investigations under controllable conditions are compared with theoretical estimates and with analytical calculations for the Delany-Bazley impedance model. Satisfactory agreement of the data confirms the correctness of the computer program.
Iterative optimization of quantum error correcting codes
International Nuclear Information System (INIS)
Reimpell, M.; Werner, R.F.
2005-01-01
We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step.
Phase correction of MR perfusion/diffusion images
International Nuclear Information System (INIS)
Chenevert, T.L.; Pipe, J.G.; Brunberg, J.A.; Yeung, H.N.
1989-01-01
Apparent diffusion coefficient (ADC) and perfusion MR sequences are exceptionally sensitive to minute motion and, therefore, are prone to bulk motions that hamper ADC/perfusion quantification. The authors have developed a phase correction algorithm to substantially reduce this error. The algorithm uses a diffusion-insensitive data set to correct data that are diffusion sensitive but phase corrupt. An assumption of the algorithm is that bulk motion phase shifts are uniform in one dimension, although they may be arbitrarily large and variable from acquisition to acquisition. This is facilitated by orthogonal section selection. The correction is applied after one Fourier transform of a two-dimensional Fourier transform reconstruction. Imaging experiments on rat and human brain demonstrate significant artifact reduction in ADC and perfusion measurements
Modified Decoding Algorithm of LLR-SPA
Directory of Open Access Journals (Sweden)
Zhongxun Wang
2014-09-01
Full Text Available In wireless sensor networks, energy consumption occurs mainly during information transmission. Low-Density Parity-Check (LDPC) codes can make full use of the channel information to save energy. Building on the widely used decoding algorithm for LDPC codes, this paper proposes a new decoding algorithm based on the LLR-SPA (Sum-Product Algorithm in the Log-Likelihood domain) to improve decoding accuracy. In the modified algorithm, a piecewise linear function is used to approximate the complicated Jacobi correction term in the LLR-SPA decoding algorithm: tangents to the Jacobi correction term are constructed at chosen tangency points, based on the first-order Taylor series. In this way, the proposed piecewise linear approximation offers an almost perfect match to the Jacobi correction term. Meanwhile, the piecewise linear approximation avoids logarithmic operations, which makes it more suitable for practical application. The simulation results show that the proposed algorithm improves the decoding accuracy greatly without a noticeable increase in computational complexity.
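The tangent-based approximation can be sketched directly: the Jacobi correction term ln(1 + e^(-|x|)) is convex in |x|, so tangents taken at a few tangency points lie below it, and their pointwise maximum (clipped at zero) is a close, logarithm-free approximation at decode time. The knot positions below are illustrative, not the paper's calibrated values:

```python
import numpy as np

def jacobi(x):
    """Exact Jacobi correction term ln(1 + exp(-|x|)) used in LLR-SPA."""
    return np.log1p(np.exp(-np.abs(x)))

def jacobi_pwl(x, knots=(0.0, 1.0, 2.5)):
    """First-order Taylor (tangent) lines at each knot; since the term is
    convex in |x|, the max of the tangents, clipped at 0, is a tight lower
    bound that needs no logarithms when evaluated during decoding."""
    u = np.abs(np.asarray(x, dtype=float))
    # tangent at t: f(t) + f'(t) * (u - t), with f'(t) = -1 / (1 + exp(t))
    tangents = np.stack(
        [np.log1p(np.exp(-t)) - (u - t) / (1.0 + np.exp(t)) for t in knots]
    )
    return np.maximum(tangents.max(axis=0), 0.0)

x = np.linspace(-8.0, 8.0, 801)
max_err = float(np.max(jacobi(x) - jacobi_pwl(x)))   # worst-case gap
```

The logarithms appear only once, when the tangent intercepts are precomputed; the per-message work is a handful of multiply-adds and a max.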
Radakovich, Jon; Bosilovich, M.; Chern, Jiun-dar; daSilva, Arlindo
2004-01-01
The NASA/NCAR Finite Volume GCM (fvGCM) with the NCAR CLM (Community Land Model) version 2.0 was integrated into the NASA/GMAO Finite Volume Data Assimilation System (fvDAS). A new method was developed for coupled skin temperature assimilation and bias correction in which the analysis increment and bias correction term are passed into the CLM2 and considered forcing terms in the solution to the energy balance. For our purposes, the fvDAS CLM2 was run at 1 deg. x 1.25 deg. horizontal resolution with 55 vertical levels. We assimilate the ISCCP-DX (30 km resolution) surface temperature product. The atmospheric analysis was performed 6-hourly, while the skin temperature analysis was performed 3-hourly. The bias correction term, which was updated at the analysis times, was added to the skin temperature tendency equation at every timestep. In this presentation, we focus on the validation of the surface energy budget at the in situ reference sites for the Coordinated Enhanced Observation Period (CEOP). We concentrate on sites that include independent skin temperature measurements and complete energy budget observations for the month of July 2001. In addition, MODIS skin temperature is used for validation. Several assimilations were conducted and preliminary results will be presented.
Software for Generating Troposphere Corrections for InSAR Using GPS and Weather Model Data
Moore, Angelyn W.; Webb, Frank H.; Fishbein, Evan F.; Fielding, Eric J.; Owen, Susan E.; Granger, Stephanie L.; Bjoerndahl, Fredrik; Loefgren, Johan; Fang, Peng; Means, James D.;
2013-01-01
Atmospheric errors due to the troposphere are a limiting error source for spaceborne interferometric synthetic aperture radar (InSAR) imaging. This software generates tropospheric delay maps that can be used to correct atmospheric artifacts in InSAR data. The software automatically acquires all needed GPS (Global Positioning System), weather, and Digital Elevation Map data, and generates a tropospheric correction map using a novel algorithm for combining GPS and weather information while accounting for terrain. Existing JPL software was prototypical in nature, required a MATLAB license, required additional steps to acquire and ingest needed GPS and weather data, and did not account for topography in interpolation. Previous software did not achieve a level of automation suitable for integration in a Web portal. This software overcomes these issues. GPS estimates of tropospheric delay are a source of corrections that can be used to form correction maps to be applied to InSAR data, but the spacing of GPS stations is insufficient to remove short-wavelength tropospheric artifacts. This software combines interpolated GPS delay with weather model precipitable water vapor (PWV) and a digital elevation model to account for terrain, increasing the spatial resolution of the tropospheric correction maps and thus removing short wavelength tropospheric artifacts to a greater extent. It will be integrated into a Web portal request system, allowing use in a future L-band SAR Earth radar mission data system. This will be a significant contribution to its technology readiness, building on existing investments in in situ space geodetic networks, and improving timeliness, quality, and science value of the collected data
Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.
Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika
2017-10-01
In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET.
Efficient error correction for next-generation sequencing of viral amplicons.
Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury
2012-06-25
Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error-correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
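A toy version of the k-mer-counting idea (much simplified relative to the published KEC) shows how rare k-mers flag sequencing errors and how the best-supported substitution repairs them. The reads, threshold and scoring rule are all illustrative:

```python
from collections import Counter

K = 4

def kmer_counts(reads, k=K):
    """Count every k-mer across all reads (the k-mer spectrum)."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def support(read, i, base, counts, k=K):
    """Total count of the k-mers that cover position i if it held `base`."""
    r = read[:i] + base + read[i + 1:]
    lo, hi = max(0, i - k + 1), min(i + 1, len(r) - k + 1)
    return sum(counts[r[j:j + k]] for j in range(lo, hi))

def correct_read(read, counts, k=K):
    """Replace a base when some substitution is far better supported by
    the k-mer spectrum (toy illustration, not the published KEC)."""
    out = list(read)
    for i in range(len(read)):
        best = max("ACGT", key=lambda b: support(read, i, b, counts, k))
        if support(read, i, best, counts, k) > 2 * support(read, i, read[i], counts, k):
            out[i] = best
    return "".join(out)

true_seq = "ACGTACGTTGCA"
reads = [true_seq] * 20 + ["ACGTACCTTGCA"]   # one read with a G->C error
counts = kmer_counts(reads)
fixed = correct_read("ACGTACCTTGCA", counts)  # -> "ACGTACGTTGCA"
```

The erroneous base sits only in k-mers seen once, while the corrected base restores k-mers seen twenty times, so the substitution clears the 2x support margin.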
Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing
2017-12-01
We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extends the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force, and other factors on the force measurement are theoretically analysed. A measurement correction method is then proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.
International Nuclear Information System (INIS)
Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun
2017-01-01
Doppler distortion and background noise can reduce the effectiveness of wayside acoustic train bearing monitoring and fault diagnosis. This paper proposes a method of combining a microphone array and matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired when applying matching pursuit to analyze the acoustic array signals. Finally, after obtaining the resampling time series, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and its barely requiring pre-measuring parameters. Simulation and experimental study show that the proposed method is effective in performing wayside acoustic bearing fault diagnosis. (paper)
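The resampling step that undoes the Doppler distortion can be sketched geometrically: once the pass-by geometry (source speed and closest distance, which the array/matching-pursuit stage estimates in the paper) is known, the recording is re-read at the reception times that correspond to a uniform grid of emission times. All parameter values below are illustrative:

```python
import numpy as np

c, v, L = 340.0, 30.0, 5.0            # sound speed, source speed (m/s), offset (m)
fs, f0 = 8000.0, 500.0                # sampling rate and emitted tone (Hz)
t_r = np.arange(-1.0, 1.0, 1.0 / fs)  # uniform reception times; pass-by at t = 0

# Invert t_r = t_e + sqrt(L^2 + (v t_e)^2)/c by fixed-point iteration to get
# the emission time behind every received sample (source is at x = v * t_e)
t_e = t_r.copy()
for _ in range(50):
    t_e = t_r - np.sqrt(L**2 + (v * t_e) ** 2) / c

received = np.cos(2.0 * np.pi * f0 * t_e)   # Doppler-distorted recording

# Correction: resample the recording at the reception times corresponding
# to a *uniform* emission grid; the result is the undistorted source tone
t_e_uni = np.arange(-0.8, 0.8, 1.0 / fs)
t_r_of_e = t_e_uni + np.sqrt(L**2 + (v * t_e_uni) ** 2) / c
corrected = np.interp(t_r_of_e, t_r, received)
residual = float(np.max(np.abs(corrected - np.cos(2.0 * np.pi * f0 * t_e_uni))))
```

The fixed-point iteration converges quickly because its contraction factor is bounded by v/c (about 0.09 here); the remaining residual is linear-interpolation error.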
Zhu, Zhe
2017-08-01
The free and open access to all archived Landsat images in 2008 completely changed the way Landsat data are used, and many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of the Landsat time series used. We review a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divide change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of the algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial scale, are analyzed. Moreover, some of the widely used change detection algorithms are discussed. Finally, we review change detection applications, divided into two categories: change target and change agent detection.
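Of the six algorithm categories, differencing with thresholding is the simplest to sketch: difference two dates and flag pixels whose change exceeds k standard deviations. This is a generic illustration of the category, not any specific published algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
date1 = rng.normal(0.30, 0.02, size=(50, 50))     # e.g. NIR reflectance, time 1
date2 = date1 + rng.normal(0.00, 0.02, size=(50, 50))
date2[10:20, 10:20] -= 0.15                       # simulated disturbance patch

# Differencing + thresholding: flag pixels whose two-date difference
# departs from the scene mean by more than k standard deviations
diff = date2 - date1
k = 2.5
changed = np.abs(diff - diff.mean()) > k * diff.std()
n_changed = int(changed.sum())                    # ~100 pixels of the patch
```

Real Landsat implementations operate on calibrated, atmospherically corrected, cloud-masked reflectance, and often on a per-pixel temporal model rather than a single image pair.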
Energy Technology Data Exchange (ETDEWEB)
Krzyżanowska, A. [AGH-UST, Cracow; Deptuch, G. W. [Fermilab; Maj, P. [AGH-UST, Cracow; Gryboś, P. [AGH-UST, Cracow; Szczygieł, R. [AGH-UST, Cracow
2017-08-01
This paper presents the detailed characterization of a single photon counting chip, named CHASE Jr., built in a CMOS 40-nm process and operating with synchrotron radiation. The chip utilizes an on-chip implementation of the C8P1 algorithm. The algorithm eliminates the charge-sharing-related uncertainties, namely the dependence of the number of registered photons on the discriminator threshold set for monochromatic irradiation, and errors in the assignment of an event to a certain pixel. The article presents a short description of the algorithm as well as the architecture of the CHASE Jr. chip. The analog and digital functionalities allowing for proper operation of the C8P1 algorithm are described, namely an offset correction for two discriminators independently, two-stage gain correction, and different operation modes of the digital blocks. Results of tests of C8P1 operation are presented for the chip bump-bonded to a silicon sensor and exposed to the 3.5-μm-wide pencil beam of 8-keV photons of synchrotron radiation. The sensitivity of the algorithm's performance to the chip settings, as well as to the uniformity of the analog front-end parameters, was studied. The presented results prove that the C8P1 algorithm enables counting all photons hitting the detector in between readout channels and retrieving the actual photon energy.
STAR Algorithm Integration Team - Facilitating operational algorithm development
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
International Nuclear Information System (INIS)
Nakajima, Teruyuki
2010-01-01
I explain the motivation behind our paper 'Algorithms for radiative intensity calculations in moderately thick atmospheres using a truncation approximation' (JQSRT 1988;40:51-69) and discuss our results in a broader historical context.
Study of a Vegetation Index Based on HJ CCD Data's top-of-atmosphere reflectance and FPAR Inversion
International Nuclear Information System (INIS)
Dong, Taifeng; Wu, Bingfang; Meng, Jihua
2014-01-01
The Fraction of Photosynthetically Active Radiation (FPAR) absorbed by plant canopies is a key parameter for monitoring crop condition and estimating crop yield. In general, it is necessary to obtain Top-of-Canopy (TOC) reflectance from optical remote sensing data in digital numbers through atmospheric correction procedures before retrieving FPAR. However, uncertainties exist in the atmospheric correction process that reduce the quality of the TOC reflectance. This paper presents a vegetation index based on Top-of-Atmosphere (TOA) reflectance derived from the HJ-1 CCD satellite for estimating crop FPAR directly. The vegetation index (HJVI) was designed based on the simulated results of a canopy-atmosphere radiative transfer model, including TOA reflectance and the corresponding FPAR. The HJVI takes advantage of information in the green, red, and near-infrared spectral domains with the aim of reducing atmospheric effects and enhancing sensitivity to green vegetation. The HJVI was used to estimate soybean FPAR directly and validated using field measurements. The results indicated that the inversion algorithm produced a good relationship between prediction and measurement (R² = 0.546, RMSE = 0.083) and that the HJVI shows high potential for estimating FPAR directly from HJ-1 TOA reflectance.
Scarino, B. R.; Minnis, P.; Yost, C. R.; Chee, T.; Palikonda, R.
2015-12-01
Single-channel algorithms for satellite thermal-infrared- (TIR-) derived land and sea surface skin temperature (LST and SST) are advantageous in that they can be easily applied to a variety of satellite sensors. They can also accommodate decade-spanning instrument series, particularly for periods when split-window capabilities are not available. However, the benefit of one unified retrieval methodology for all sensors comes at the cost of critical sensitivity to surface emissivity (ɛs) and atmospheric transmittance estimation. It has been demonstrated that as little as 0.01 variance in ɛs can amount to more than a 0.5-K adjustment in retrieved LST values. Atmospheric transmittance requires calculations that employ vertical profiles of temperature and humidity from numerical weather prediction (NWP) models. Selection of a given NWP model can significantly affect LST and SST agreement relative to their respective validation sources. Thus, it is necessary to understand the accuracies of the retrievals for various NWP models to ensure the best LST/SST retrievals. The sensitivities of the single-channel retrievals to surface emittance and NWP profiles are investigated using NASA Langley historic land and ocean clear-sky skin temperature (Ts) values derived from high-resolution 11-μm TIR brightness temperature measured from geostationary satellites (GEOSat) and Advanced Very High Resolution Radiometers (AVHRR). It is shown that mean GEOSat-derived, anisotropy-corrected LST can vary by up to ±0.8 K depending on whether CERES or MODIS ɛs sources are used. Furthermore, the use of either NOAA Global Forecast System (GFS) or NASA Goddard Modern-Era Retrospective Analysis for Research and Applications (MERRA) for the radiative transfer model initial atmospheric state can account for more than 0.5-K variation in mean Ts. The results are compared to measurements from the Surface Radiation Budget Network (SURFRAD), an Atmospheric Radiation Measurement (ARM) Program ground
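The single-channel retrieval and its emissivity sensitivity can be sketched by inverting the band radiative transfer equation L = eps*tau*B(Ts) + L_up + (1 - eps)*tau*L_down with the Planck function at 11 um. The surface and atmospheric term values below are illustrative stand-ins, not NWP or CERES/MODIS products:

```python
import numpy as np

C1, C2, LAM = 1.191042e8, 1.4387752e4, 11.0   # W um^4 m^-2 sr^-1; um K; um

def planck(T):
    """Planck spectral radiance at 11 um (W m^-2 sr^-1 um^-1)."""
    return C1 / (LAM**5 * (np.exp(C2 / (LAM * T)) - 1.0))

def inv_planck(L):
    """Brightness temperature: exact inverse of planck()."""
    return C2 / (LAM * np.log1p(C1 / (LAM**5 * L)))

def retrieve_ts(L_sensor, emis, tau, L_up, L_down):
    """Single-channel skin temperature: invert
    L_sensor = emis*tau*B(Ts) + L_up + (1 - emis)*tau*L_down."""
    B_surf = (L_sensor - L_up - (1.0 - emis) * tau * L_down) / (emis * tau)
    return inv_planck(B_surf)

# Round trip with illustrative surface/atmosphere values
Ts, emis, tau, L_up, L_down = 300.0, 0.97, 0.85, 1.2, 1.5
L = emis * tau * planck(Ts) + L_up + (1.0 - emis) * tau * L_down
ts_hat = retrieve_ts(L, emis, tau, L_up, L_down)

# A 0.01 error in emissivity shifts the retrieval by roughly half a kelvin,
# consistent with the sensitivity quoted in the abstract
dT = retrieve_ts(L, emis - 0.01, tau, L_up, L_down) - Ts
```

The transmittance tau and path radiances L_up/L_down are what the NWP profiles (GFS or MERRA) feed into the radiative transfer model, which is why the profile source shifts the retrieved Ts.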
Revisiting Short-Wave-Infrared (SWIR) Bands for Atmospheric Correction in Coastal Waters
Pahlevan, Nima; Roger, Jean-Claude; Ahmad, Ziauddin
2017-01-01
The shortwave infrared (SWIR) bands on existing Earth Observing missions like MODIS were designed to meet land and atmospheric science requirements. The future geostationary and polar-orbiting ocean color missions, however, require highly sensitive SWIR bands (greater than 1550 nm) to allow for a precise removal of aerosol contributions. This will allow for reasonable retrievals of the remote sensing reflectance (R(sub rs)) using standard NASA atmospheric corrections over turbid coastal waters. Designing, fabricating, and maintaining high-performance SWIR bands at very low signal levels imposes significant costs on dedicated ocean color missions. This study aims at providing a full analysis of the utility of alternative SWIR bands within the 1600 nm atmospheric window if the bands within the 2200 nm window were to be excluded due to engineering/cost constraints. Following a series of sensitivity analyses for various spectral band configurations as a function of water vapor amount, we chose spectral bands centered at 1565 and 1675 nm as suitable alternatives within the 1600 nm window for a future geostationary imager. The sensitivity of this band combination to different aerosol conditions, calibration uncertainties, and extreme water turbidity was studied and compared with that of all band combinations available on existing polar-orbiting missions. The combination of the alternative channels was shown to be as sensitive to test aerosol models as existing near-infrared (NIR) band combinations (e.g., 748 and 869 nm) over clear open ocean waters. It was further demonstrated that while in extremely turbid waters the 1565/1675 band pair yields R(sub rs) retrievals as good as those derived from all other existing SWIR band pairs (greater than 1550 nm), their total calibration uncertainties must be less than 1% to meet current science requirements for ocean color retrievals (i.e., delta R(sub rs)(443) less than 5%). We further show that the aerosol removal using the
Energy Technology Data Exchange (ETDEWEB)
Chan, K.L. [School of Energy and Environment, City University of Hong Kong (Hong Kong); Ning, Z., E-mail: zhining@cityu.edu.hk [School of Energy and Environment, City University of Hong Kong (Hong Kong); Guy Carpenter Climate Change Centre, City University of Hong Kong (Hong Kong); Westerdahl, D. [Ability R and D Energy Research Centre, City University of Hong Kong (Hong Kong); Wong, K.C. [School of Energy and Environment, City University of Hong Kong (Hong Kong); Sun, Y.W. [Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Hefei (China); Hartl, A. [School of Energy and Environment, City University of Hong Kong (Hong Kong); Wenig, M.O. [Meteorological Institute, Ludwig-Maximilians-Universität Munich (Germany)
2014-02-01
In this paper, we present the first dispersive infrared spectroscopic (DIRS) measurement of atmospheric carbon dioxide (CO{sub 2}) using a new scanning Fabry–Pérot interferometer (FPI) sensor. The sensor measures the optical spectra in the mid infrared (3900 nm to 5220 nm) wavelength range with full width half maximum (FWHM) spectral resolution of 78.8 nm at the CO{sub 2} absorption band (∼ 4280 nm) and sampling resolution of 20 nm. The CO{sub 2} concentration is determined from the measured optical absorption spectra by fitting them to the CO{sub 2} reference spectrum. Interference from other major absorbers in the same wavelength range, e.g., carbon monoxide (CO) and water vapor (H{sub 2}O), is removed by including their reference spectra in the fit as well. Detailed descriptions of the instrumental setup, the retrieval procedure, a modeling study for error analysis, and laboratory validation using standard gas concentrations are presented. An iterative algorithm to account for the non-linear response of the fit function to the absorption cross sections due to the broad instrument function was developed and tested. A modeling study of the retrieval algorithm showed that errors due to instrument noise can be considerably reduced by using the dispersive spectral information in the retrieval. The mean measurement error of the prototype DIRS CO{sub 2} measurement for 1 minute averaged data is about ± 2.5 ppmv, and down to ± 0.8 ppmv for 10 minute averaged data. A field test of atmospheric CO{sub 2} measurements was carried out at an urban site in Hong Kong for a month and compared to a commercial non-dispersive infrared (NDIR) CO{sub 2} analyzer. 10 minute averaged data shows good agreement between the DIRS and NDIR measurements with Pearson correlation coefficient (R) of 0.99. This new method offers an alternative approach of atmospheric CO{sub 2} measurement featuring high accuracy, correction of non-linear absorption and interference of water
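The retrieval's core linear step (before the paper's iterative non-linearity correction) can be sketched as a least-squares fit of the measured optical-depth spectrum to reference cross sections of CO2, CO and H2O. The reference shapes, column amounts and noise level below are synthetic stand-ins, not the instrument's calibrated spectra:

```python
import numpy as np

rng = np.random.default_rng(3)
wl = np.linspace(3900.0, 5220.0, 200)                  # wavelength grid (nm)

# Synthetic reference cross sections (arbitrary units, illustrative shapes)
ref_co2 = np.exp(-0.5 * ((wl - 4280.0) / 120.0) ** 2)  # CO2 band near 4280 nm
ref_co = np.exp(-0.5 * ((wl - 4650.0) / 150.0) ** 2)   # CO interference
ref_h2o = 0.3 + 1e-4 * (wl - 3900.0)                   # broad H2O background

# Beer-Lambert optical depth for assumed column amounts, plus sensor noise
true_amounts = np.array([2.0, 0.5, 1.0])
measured = (true_amounts[0] * ref_co2 + true_amounts[1] * ref_co
            + true_amounts[2] * ref_h2o + rng.normal(0.0, 0.01, wl.size))

# Fit the measured spectrum to all three references simultaneously, so the
# CO and H2O interference is taken out along with the CO2 retrieval
A = np.stack([ref_co2, ref_co, ref_h2o], axis=1)
fit, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

Fitting all absorbers jointly is what lets the dispersive spectral information separate CO2 from the interfering species, which a single-band non-dispersive measurement cannot do.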
Mariano, Adrian V.; Grossmann, John M.
2010-11-01
Reflectance-domain methods convert hyperspectral data from radiance to reflectance using an atmospheric compensation model. Material detection and identification are performed by comparing the compensated data to target reflectance spectra. We introduce two radiance-domain approaches, Single atmosphere Adaptive Cosine Estimator (SACE) and Multiple atmosphere ACE (MACE) in which the target reflectance spectra are instead converted into sensor-reaching radiance using physics-based models. For SACE, known illumination and atmospheric conditions are incorporated in a single atmospheric model. For MACE the conditions are unknown so the algorithm uses many atmospheric models to cover the range of environmental variability, and it approximates the result using a subspace model. This approach is sometimes called the invariant method, and requires the choice of a subspace dimension for the model. We compare these two radiance-domain approaches to a Reflectance-domain ACE (RACE) approach on a HYDICE image featuring concealed materials. All three algorithms use the ACE detector, and all three techniques are able to detect most of the hidden materials in the imagery. For MACE we observe a strong dependence on the choice of the material subspace dimension. Increasing this value can lead to a decline in performance.
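All three variants (SACE, MACE, RACE) share the ACE detection statistic: the squared cosine between the background-whitened pixel spectrum and the background-whitened target spectrum. A minimal sketch with synthetic data follows; the spectra, covariance and target fill fraction are illustrative, not from the HYDICE scene:

```python
import numpy as np

def ace_scores(X, target, bg_mean, bg_cov):
    """ACE statistic per pixel (rows of X): squared cosine between the
    background-whitened pixel and the background-whitened target."""
    Ci = np.linalg.inv(bg_cov)
    s = target - bg_mean
    Xc = X - bg_mean
    num = (Xc @ Ci @ s) ** 2
    den = (s @ Ci @ s) * np.einsum("ij,jk,ik->i", Xc, Ci, Xc)
    return num / den

rng = np.random.default_rng(2)
bands = 6
bg_mean = rng.uniform(0.1, 0.4, bands)
A = rng.normal(size=(bands, bands))
bg_cov = 5e-4 * (A @ A.T + bands * np.eye(bands))      # positive definite
target = bg_mean + 0.3 * rng.uniform(0.5, 1.0, bands)  # hypothetical spectrum

X_bg = rng.multivariate_normal(bg_mean, bg_cov, 500)   # background pixels
X_tgt = 0.5 * X_bg[:10] + 0.5 * target                 # 50% target fill

scores_bg = ace_scores(X_bg, target, bg_mean, bg_cov)
scores_tgt = ace_scores(X_tgt, target, bg_mean, bg_cov)
```

The reflectance-domain and radiance-domain variants differ only in which domain the target spectrum s lives in; the detector itself is unchanged.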
Pitarch, Jaime; Ruiz-Verdú, Antonio; Sendra, María. D.; Santoleri, Rosalia
2017-02-01
We studied the performance of the MERIS maximum peak height (MPH) algorithm in the retrieval of chlorophyll-a concentration (CHL), using a matchup data set of Bottom-of-Rayleigh Reflectances (BRR) and CHL from a hypertrophic lake (Albufera de Valencia). The MPH algorithm produced a slight underestimation of CHL in the pixels classified as cyanobacteria (83% of the total) and a strong overestimation in those classified as eukaryotic phytoplankton (17%). In situ biomass data showed that the binary classification of MPH was not appropriate for mixed phytoplankton populations, producing also unrealistic discontinuities in the CHL maps. We recalibrated MPH using our matchup data set and found that a single calibration curve of third degree fitted equally well to all matchups regardless of how they were classified. As a modification to the former approach, we incorporated the Phycocyanin Index (PCI) in the formula, thus taking into account the gradient of phytoplankton composition, which reduced the CHL retrieval errors. By using in situ biomass data, we also proved that PCI was indeed an indicator of cyanobacterial dominance. We applied our recalibration of the MPH algorithm to the whole MERIS data set (2002-2012). Results highlight the usefulness of the MPH algorithm as a tool to monitor eutrophication. The relevance of this fact is higher since MPH does not require a complete atmospheric correction, which often fails over such waters. An adequate flagging or correction of sun glint is advisable though, since the MPH algorithm was sensitive to sun glint.
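The MPH index itself measures the height of the reflectance peak above a baseline interpolated between 664 and 885 nm, with the peak taken over the 681, 709 and 753 nm candidate bands. A sketch with illustrative BRR values follows; the recalibrated third-degree CHL curve and the PCI term from the study are not reproduced here:

```python
def mph(brr):
    """Maximum peak height above the 664-885 nm baseline; the peak is the
    largest of the 681, 709 and 753 nm candidate bands. `brr` maps
    wavelength (nm) to Bottom-of-Rayleigh Reflectance."""
    lam = max((681, 709, 753), key=lambda w: brr[w])
    baseline = brr[664] + (brr[885] - brr[664]) * (lam - 664) / (885 - 664)
    return brr[lam] - baseline

# Illustrative BRR spectrum with a cyanobacteria-like 709 nm peak
brr = {664: 0.010, 681: 0.012, 709: 0.018, 753: 0.011, 885: 0.008}
peak_height = mph(brr)
```

Because the index is built from Bottom-of-Rayleigh Reflectances, only the Rayleigh part of the atmospheric correction is needed, which is why MPH stays usable over waters where full aerosol correction fails.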
Automatic Contextual Text Correction Using the Linguistic Habits Graph (LHG)
Directory of Open Access Journals (Sweden)
Marcin Gadamer
2009-01-01
Full Text Available Automatic text correction is an essential problem for today's text processors and editors. This paper introduces a novel algorithm for the automation of contextual text correction using a Linguistic Habits Graph (LHG), also introduced in this paper. A specialized internet crawler has been constructed to search through web sites in order to build a Linguistic Habits Graph from text corpora gathered from Polish web sites. The correction results achieved by this algorithm using the LHG were compared with commercial programs that also perform text correction: Microsoft Word 2007, OpenOffice Writer 3.0, and the Google search engine. The text correction results achieved were much better than the corrections made by these commercial tools.
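A toy graph-based contextual corrector conveys the flavor of the approach, though it is not the LHG algorithm itself: build a bigram graph from a corpus, then replace a suspicious word with the orthographically similar vocabulary word best supported by edges to its neighbours:

```python
import difflib
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Toy "habit" graph: directed bigram counts harvested from the corpus
graph = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    graph[a][b] += 1
vocab = sorted(set(corpus))

def contextual_correct(words, i):
    """Replace words[i] with the similar vocabulary word best supported
    by the bigram edges to its left and right neighbours."""
    prev_w = words[i - 1] if i > 0 else None
    next_w = words[i + 1] if i + 1 < len(words) else None

    def score(w):
        left = graph[prev_w][w] if prev_w else 0
        right = graph[w][next_w] if next_w else 0
        return left + right

    # Restrict candidates to words that look like the typo
    candidates = difflib.get_close_matches(words[i], vocab, n=3, cutoff=0.5)
    return max(candidates or vocab, key=score)

sentence = "the cat sat on the mta".split()   # 'mta' is a typo for 'mat'
fixed = contextual_correct(sentence, 5)       # -> 'mat'
```

The real LHG is built from a large crawled corpus and encodes richer habit statistics than plain bigrams, but the selection principle, context support from graph edges, is the same.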
Algorithm 426 : Merge sort algorithm [M1
Bron, C.
1972-01-01
Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
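A Python rendering of the recursive two-way merge idea (the original is an ALGOL 60 procedure, so this is a paraphrase, not a transcription):

```python
def merge_sort(a):
    """Recursive two-way merge sort: split, sort each half, merge."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]  # append the leftover run

result = merge_sort([5, 1, 4, 1, 9, 2])   # -> [1, 1, 2, 4, 5, 9]
```

The correctness argument is the one the abstract alludes to: each recursive call returns a sorted list by induction, and the merge of two sorted lists is sorted.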
Off-Angle Iris Correction Methods
Energy Technology Data Exchange (ETDEWEB)
Santos-Villalobos, Hector J [ORNL; Thompson, Joseph T [ORNL; Karakaya, Mahmut [ORNL; Boehnen, Chris Bensing [ORNL
2016-01-01
In many real-world iris recognition systems, obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result, many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images to appear frontal, which makes them compatible with existing iris matchers. The methods include an affine correction, a ray-traced model of the human eye, measured displacements, and a genetic-algorithm-optimized correction. The affine correction represents a simple way to create an iris image that appears frontal, but it does not account for refractive distortions of the cornea. The other methods account for refraction. The ray-traced model simulates the optical properties of the cornea. The other two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming distance scores between off-angle and frontal images. We hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy and that the two data-driven approaches should therefore yield better performance. Results using the commercial VeriEye matcher show that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.
Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno
2015-01-01
For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse-view CT data acquisition. Methods: We investigated sparse-view CT acquisition protocols resulting in ultra-low-dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 seconds. One standard dose and four ultra-low dose levels, namely 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical FDK algorithm and the model-based iterative reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse-view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube-pulsing mode and a continuous-exposure mode for sparse-view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. Results: With sparse-view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image. For the four ultra-low dose levels
Photon attenuation correction technique in SPECT based on nonlinear optimization
International Nuclear Information System (INIS)
Suzuki, Shigehito; Wakabayashi, Misato; Okuyama, Keiichi; Kuwamura, Susumu
1998-01-01
Photon attenuation correction in SPECT was made using a nonlinear optimization theory, in which an optimum image is searched so that the sum of square errors between observed and reprojected projection data is minimized. This correction technique consists of optimization and step-width algorithms, which determine at each iteration a pixel-by-pixel directional value of search and its step-width, respectively. We used the conjugate gradient and quasi-Newton methods as the optimization algorithm, and Curry rule and the quadratic function method as the step-width algorithm. Statistical fluctuations in the corrected image due to statistical noise in the emission projection data grew as the iteration increased, depending on the combination of optimization and step-width algorithms. To suppress them, smoothing for directional values was introduced. Computer experiments and clinical applications showed a pronounced reduction in statistical fluctuations of the corrected image for all combinations. Combinations using the conjugate gradient method were superior in noise characteristic and computation time. The use of that method with the quadratic function method was optimum if noise property was regarded as important. (author)
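The optimization framing above (search for an image x minimizing the sum of squared errors between observed and reprojected projection data) can be illustrated with a generic conjugate-gradient least-squares sketch. This is not the authors' implementation: the SPECT attenuation model, the step-width rules (Curry rule, quadratic function method) and the smoothing of directional values are all omitted, and the system matrix P is an assumed stand-in for the projector.

```python
import numpy as np

def cg_least_squares(P, y, n_iter=50):
    """Minimize ||P x - y||^2 by conjugate gradients applied to the
    normal equations P^T P x = P^T y (CGNR)."""
    x = np.zeros(P.shape[1])
    r = P.T @ (y - P @ x)          # residual of the normal equations
    d = r.copy()                   # pixel-by-pixel search direction
    rs = r @ r
    for _ in range(n_iter):
        Pd = P @ d
        alpha = rs / (Pd @ Pd)     # step width along d
        x += alpha * d
        r -= alpha * (P.T @ Pd)
        rs_new = r @ r
        if rs_new < 1e-12:         # converged
            break
        d = r + (rs_new / rs) * d  # conjugate direction update
        rs = rs_new
    return x
```

For noise-free, consistent data this recovers the image exactly in at most as many iterations as there are unknowns; with noisy emission data, early stopping or smoothing plays the stabilizing role discussed in the abstract.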
Improved multivariate polynomial factoring algorithm
International Nuclear Information System (INIS)
Wang, P.S.
1978-01-01
A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely the leading-coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining the leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically, it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timings are included.
Atmospheric correction of Earth-observation remote sensing images ...
Indian Academy of Sciences (India)
The physics underlying the problem of solar radiation propagation that takes into account … SART code (Spherical Atmosphere Radiation Transfer) … The use of Monte Carlo sampling … length because this soil is formed by clay and sand.
Directory of Open Access Journals (Sweden)
Yu-Ze Zhang
2017-01-01
Full Text Available The Cross-track Infrared Sounder (CrIS) is one of the most advanced hyperspectral instruments and has been used for various atmospheric applications such as atmospheric retrievals and weather forecast modeling. However, because of the specific design purpose of CrIS, little attention has been paid to retrieving land surface parameters from CrIS data. To take full advantage of the rich spectral information in CrIS data to improve land surface retrievals, particularly the acquisition of a continuous Land Surface Emissivity (LSE) spectrum, this paper attempts to simultaneously retrieve a continuous LSE spectrum and the Land Surface Temperature (LST) from CrIS data with atmospheric reanalysis data and the Iterative Spectrally Smooth Temperature and Emissivity Separation (ISSTES) algorithm. The results show that the accuracy of the retrieved LSEs and LST is comparable with current land products. The overall differences of the LST and LSE retrievals are approximately 1.3 K and 1.48%, respectively. However, the LSEs in our study can be provided as a continuous spectrum instead of the single-channel values in traditional products. The retrieved LST and LSEs can now be better used to further analyze surface properties or to improve the retrieval of atmospheric parameters.
Truncation correction for oblique filtering lines
International Nuclear Information System (INIS)
Hoppe, Stefan; Hornegger, Joachim; Lauritsch, Guenter; Dennerlein, Frank; Noo, Frederic
2008-01-01
State-of-the-art filtered backprojection (FBP) algorithms often define the filtering operation to be performed along oblique filtering lines in the detector. A limited scan field of view leads to the truncation of those filtering lines, which causes artifacts in the final reconstructed volume. In contrast to the case where filtering is performed solely along the detector rows, no methods are available for the case of oblique filtering lines. In this work, the authors present two novel truncation correction methods which effectively handle data truncation in this case. Method 1 (basic approach) handles data truncation in two successive preprocessing steps by applying a hybrid data extrapolation method, which is a combination of a water cylinder extrapolation and a Gaussian extrapolation. It is independent of any specific reconstruction algorithm. Method 2 (kink approach) uses similar concepts for data extrapolation as the basic approach but needs to be integrated into the reconstruction algorithm. Experiments are presented from simulated data of the FORBILD head phantom, acquired along a partial-circle-plus-arc trajectory. The theoretically exact M-line algorithm is used for reconstruction. Although the discussion is focused on theoretically exact algorithms, the proposed truncation correction methods can be applied to any FBP algorithm that exposes oblique filtering lines.
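The extrapolation idea behind the basic approach can be illustrated with a toy one-dimensional sketch: extend a truncated projection row with a smooth tail matched to the edge value and slope. This Gaussian roll-off is only a stand-in for illustration; the paper's actual method is a hybrid of water-cylinder and Gaussian extrapolation applied along oblique filtering lines.

```python
import numpy as np

def gaussian_extrapolate(row, n_extra, width=None):
    """Extend a truncated projection row with a smoothly decaying tail.
    The tail matches the edge value and slope, then rolls off like a
    Gaussian so the extrapolated data fade to zero."""
    p = float(row[-1])                 # value at the truncation edge
    s = float(row[-1] - row[-2])       # edge slope (finite difference)
    if p <= 0:
        return np.concatenate([row, np.zeros(n_extra)])
    w = width if width is not None else n_extra / 2.0
    k = np.arange(1, n_extra + 1, dtype=float)
    c = s / p                          # matches d/dk log(tail) at the edge
    tail = p * np.exp(c * k - (k / w) ** 2)
    return np.concatenate([row, tail])
```

Filtering the extended row instead of the truncated one suppresses the bright-rim truncation artifact at the field-of-view boundary.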
Histogram-driven cupping correction (HDCC) in CT
Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.
2010-04-01
Typical cupping correction methods are pre-processing methods that require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPUs). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30 × 40 cm² detector, a kilovoltage on-board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus executing without time-consuming operations. HDCC reduced cupping artifacts without the need for pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. However, the method can also work in combination with other cupping correction algorithms or in a calibration manner.
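The HDCC cost function, the joint entropy of the CT image and its gradient, can be sketched directly; the simplex search over polynomial coefficients is omitted, and the bin count below is an illustrative choice:

```python
import numpy as np

def joint_entropy(img, bins=32):
    """Joint entropy H(I, |grad I|) used as an HDCC-style cost:
    cupping adds low-frequency shading that spreads the joint
    histogram, so flatter (better corrected) images score lower."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)                      # gradient magnitude
    hist, _, _ = np.histogram2d(img.ravel(), grad.ravel(), bins=bins)
    p = hist / hist.sum()                        # joint probabilities
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())        # Shannon entropy in bits
```

A correction routine would evaluate this cost on candidate images formed as linear combinations of basis images and let a simplex optimizer adjust the polynomial coefficients.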
DeCesare, A; Secanell, M; Lagravère, M O; Carey, J
2013-01-01
The purpose of this study is to minimize errors that occur when using a four- vs six-landmark superimposition method in the cranial base to define the co-ordinate system. Cone-beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that were corrected, by a numerical optimization algorithm, for any landmark-location operator error using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement was observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study were significantly reduced, to between 1 mm and 2 mm. When analysing real patient data, it was found that the six-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a six-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous four-point correction algorithm.
Lavender, Samantha; Brito, Fabrice; Aas, Christina; Casu, Francesco; Ribeiro, Rita; Farres, Jordi
2014-05-01
Data challenges are becoming the new method to promote innovation within data-intensive applications, building or evolving user communities and potentially developing sustainable commercial services. These can utilise the vast amount of information (both in scope and volume) that is available online, while profiting from reduced processing costs. Data challenges are also closely related to the recent paradigm shift towards e-Science, also referred to as "data-intensive science". The E-CEO project aims to deliver a collaborative platform that, through data challenge contests, will improve the adoption and outreach of new applications and methods to process Earth Observation (EO) data. Underneath, the backbone must be a common environment where the applications can be developed, deployed and executed. The results then need to be easily published in a common visualization platform for their effective validation, evaluation and transparent peer comparison. Contest #3 is based around the atmospheric correction (AC) of ocean colour data, with a particular focus on the use of auxiliary data files for processing Level 1 products (top-of-atmosphere, TOA, calibrated radiances/reflectances) to Level 2 products (bottom-of-atmosphere, BOA, calibrated radiances/reflectances and derived products). Scientific researchers commonly accept the auxiliary inputs that they have been provided with and/or use the climatological data that accompanies the processing software, often because it can be difficult to obtain multiple data sources and convert them into a format the software accepts. It is therefore proposed to compare various ocean colour AC approaches and, in the process, study the uncertainties associated with using different meteorological auxiliary products for the processing of Medium Resolution Imaging Spectrometer (MERIS) data, i.e. the sensitivity to different atmospheric correction input assumptions.
Denni Algorithm: An Enhancement of the SMS (Scan, Move and Sort) Algorithm
Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.
2017-12-01
Sorting has been a profound area for algorithmic researchers, and many resources are invested in the search for more efficient sorting algorithms. For this purpose, many existing sorting algorithms have been studied in terms of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential techniques of algorithm design are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes the process of sorting more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm was compared with the SMS algorithm, and the results were promising.
SPAM-assisted partial volume correction algorithm for PET
Energy Technology Data Exchange (ETDEWEB)
Cho, Sung Il; Kang, Keon Wook; Lee, Jae Sung; Lee, Dong Soo; Chung, June Key; Soh, Kwang Sup; Lee, Myung Chul [College of Medicine, Seoul National Univ., Seoul (Korea, Republic of)
2000-07-01
A probabilistic atlas of the human brain (Statistical Probability Anatomical Maps: SPAM) was developed by the International Consortium for Brain Mapping (ICBM). It provides a good frame for calculating volumes of interest (VOIs) according to the statistical variability of the human brain in many fields of brain imaging. We show that we can get a more exact quantification of the counts in a VOI by using SPAM in the correction of the partial volume effect for a simulated PET image. The MRI of a patient with dementia was segmented into gray matter and white matter, which were then smoothed to PET resolution. A simulated PET image was made by adding one third of the smoothed white matter to the smoothed gray matter. The spillover effect and partial volume effect were corrected for this simulated PET image with the aid of the segmented and smoothed MR images. The images were spatially normalized to the average brain MRI atlas of the ICBM and multiplied by the probabilities of 98 VOIs of the SPAM images of the Montreal Neurological Institute. After the correction of the partial volume effect, the counts of the frontal, parietal, temporal, and occipital lobes were increased by 38±6%, while those of the hippocampus and amygdala increased by 4±3%. By calculating the counts in a VOI using the product of the probability of the SPAM images and the counts in the simulated PET image, the counts increase and become closer to the true values. SPAM-assisted partial volume correction is useful for the quantification of VOIs in PET images.
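The probability-weighted quantification step can be sketched as follows. This is a hypothetical two-array example of the weighting idea only; the real method uses 98 SPAM probability maps applied after spatial normalization and partial volume correction.

```python
import numpy as np

def voi_counts(pet, prob_map):
    """Probability-weighted mean counts in a VOI: each voxel's PET
    value is weighted by its SPAM membership probability, so voxels
    that are only partly inside the VOI contribute proportionally."""
    w = prob_map.astype(float)
    return float((pet * w).sum() / w.sum())
```

A hard binary mask is the special case where every probability is 0 or 1.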
A general algorithm for distributing information in a graph
Aji, Srinivas M.; McEliece, Robert J.
1997-01-01
We present a general “message-passing” algorithm for distributing information in a graph. This algorithm may help us to understand the approximate correctness of both the Gallager-Tanner-Wiberg algorithm, and the turbo-decoding algorithm.
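A toy example of the message-passing pattern: every node of a tree recovers a global sum from purely local messages, where msg(u→v) carries the total of u's subtree away from v. The decoding applications named above (Gallager-Tanner-Wiberg, turbo decoding) run the same scheme with sum-product updates on a factor graph; this sketch only illustrates the information-distribution idea.

```python
def distribute_sum(adj, value):
    """Message passing on a tree: returns, for every node, the sum of
    all node values, computed from messages between neighbors only."""
    msg = {}                      # memoized directed messages

    def send(u, v):               # message u -> v: sum of u's side of the edge
        if (u, v) not in msg:
            msg[(u, v)] = value[u] + sum(send(w, u) for w in adj[u] if w != v)
        return msg[(u, v)]

    # Each node combines its own value with all incoming messages
    return {v: value[v] + sum(send(w, v) for w in adj[v]) for v in adj}
```

Because messages are memoized, the total work is linear in the number of edges, the hallmark efficiency of such algorithms.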
Automated general temperature correction method for dielectric soil moisture sensors
Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao
2017-08-01
An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks extensively use highly temperature-sensitive dielectric sensors due to their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective at soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be used regardless of differences in sensor type, climatic conditions and soil type, without rainfall data. In this work, an automated general temperature correction method was developed by adapting previously developed temperature correction algorithms based on time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can eliminate temperature effects from dielectric sensor measurements successfully, even without on-site rainfall data. Furthermore, it was found that the actual daily average of SWC has been changed due to temperature effects of dielectric sensors with a
Calibrating a Soil-Vegetation-Atmosphere system with a genetic algorithm
Schneider, S.; Jacques, D.; Mallants, D.
2009-04-01
The accuracy of model predictions is well known to be very sensitive to the quality of model calibration. It is also known that quantifying soil hydraulic parameters in a Soil-Vegetation-Atmosphere (SVA) system is a highly non-linear parameter estimation problem, and that robust methods are needed to prevent the optimization process from converging to non-optimal parameters. Evolutionary algorithms, and specifically genetic algorithms (GAs), are very well suited to such complex parameter optimization problems. The SVA system in this study concerns a pine stand on a heterogeneous sandy soil (podzol) in the north of Belgium (Campine region). Throughfall and other meteorological data and water contents at different soil depths were recorded during one year at a daily time step. The water table level, which varies between 95 and 170 cm, was recorded every 0.5 hours. Based on the profile description, four soil layers were distinguished in the podzol and used for the numerical simulation with the HYDRUS-1D model (Simunek et al., 2005). For the inversion procedure the MYGA program (Yedder, 2002), an elitist GA, was used. Optimization was based on the water content measurements at depths of 10, 20, 40, 50, 60, 70, 90, 110, and 120 cm to estimate the parameters describing the unsaturated hydraulic properties of the different soil layers. Comparison between the modeled and measured water contents shows good agreement during the simulated year. The impact of short, intense rainfall events on the soil water content is also well reproduced. Prediction errors are on average 5%, which is considered a good result. A. Ben Haj Yedder. Numerical optimization and optimal control: (molecular chemistry applications). PhD thesis, Ecole Nationale des Ponts et Chaussées, 2002. Šimůnek, J., M. Th. van Genuchten, and M. Šejna, The HYDRUS-1D software package for simulating the one-dimensional movement
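A minimal elitist GA in the spirit of the inversion procedure might look as follows. This is a generic sketch, not the MYGA program: the fitness function stands in for the misfit between HYDRUS-1D-modeled and measured water contents, and all hyperparameters (population size, tournament size, mutation scale) are illustrative.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=40, n_gen=60,
                     elite=2, mut_sigma=0.1, seed=1):
    """Minimal elitist GA: tournament selection, blend crossover and
    Gaussian mutation over box-bounded real parameters. `fitness`
    is minimized (e.g. a sum of squared water-content errors)."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x, i):                       # keep parameters inside bounds
        lo, hi = bounds[i]
        return min(max(x, lo), hi)

    pop = [[rng.uniform(*bounds[i]) for i in range(dim)]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness)
        new = pop[:elite]                 # elitism: best survive unchanged
        while len(new) < pop_size:
            # Tournament selection of two parents (best of 3 each)
            a, b = (min(rng.sample(pop, 3), key=fitness) for _ in range(2))
            # Blend crossover plus Gaussian mutation
            child = [clip((ai + bi) / 2 + rng.gauss(0, mut_sigma), i)
                     for i, (ai, bi) in enumerate(zip(a, b))]
            new.append(child)
        pop = new
    return min(pop, key=fitness)
```

Elitism guarantees the best candidate never degrades between generations, which is the property that makes such GAs robust on rugged, non-linear misfit surfaces.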
Real-Time Corrected Traffic Correlation Model for Traffic Flow Forecasting
Directory of Open Access Journals (Sweden)
Hua-pu Lu
2015-01-01
Full Text Available This paper focuses on the problem of short-term traffic flow forecasting. The main goal is to put forward a traffic correlation model and a real-time correction algorithm for traffic flow forecasting. The traffic correlation model is established based on the temporal-spatial-historical correlation characteristics of traffic big data. In order to simplify the traffic correlation model, this paper presents a correction-coefficient optimization algorithm. Considering the multistate characteristic of traffic big data, a dynamic part is added to the traffic correlation model. A real-time correction algorithm based on a fuzzy neural network is presented to overcome the nonlinear mapping problem. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling methods.
Solving the simple plant location problem using a data correcting approach
Goldengorin, B.; Tijssen, G.A.; Ghosh, D.; Sierksma, G.
The Data Correcting Algorithm is a branch-and-bound-type algorithm in which the data of a given problem instance are 'corrected' at each branching in such a way that the new instance will be as close as possible to a polynomially solvable instance and the result satisfies an acceptable accuracy (the
Wang, Chunpeng; Lou, Zhengzhao Johnny; Chen, Xiuhong; Zeng, Xiping; Tao, Wei-Kuo; Huang, Xianglei
2014-01-01
Cloud-top temperature (CTT) is an important parameter for convective clouds and is usually different from the 11-micrometer brightness temperature due to non-blackbody effects. This paper presents an algorithm for estimating convective CTT by using simultaneous passive [Moderate Resolution Imaging Spectroradiometer (MODIS)] and active [CloudSat and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)] measurements of clouds to correct for the non-blackbody effect. To do this, a weighting function of the MODIS 11-micrometer band is explicitly calculated by feeding cloud hydrometeor profiles from CloudSat and CALIPSO retrievals and temperature and humidity profiles based on ECMWF analyses into a radiative transfer model. Among 16 837 tropical deep convective clouds observed by CloudSat in 2008, the averaged effective emission level (EEL) of the 11-micrometer channel is located at an optical depth of approximately 0.72, with a standard deviation of 0.3. The distance between the EEL and the cloud-top height determined by CloudSat is shown to be related to a parameter called cloud-top fuzziness (CTF), defined as the vertical separation between -30 and 10 dBZ of CloudSat radar reflectivity. On the basis of these findings, a relationship is then developed between the CTF and the difference between the MODIS 11-micrometer brightness temperature and the physical CTT, the latter being the non-blackbody correction of CTT. Correction of the non-blackbody effect of CTT is applied to analyze convective cloud-top buoyancy. With this correction, about 70% of the convective cores observed by CloudSat in the height range of 6-10 km have positive buoyancy near cloud top, meaning the clouds are still growing vertically, although their final fate cannot be determined from snapshot observations.
Study of lung density corrections in a clinical trial (RTOG 88-08)
International Nuclear Information System (INIS)
Orton, Colin G.; Chungbin, Suzanne; Klein, Eric E.; Gillin, Michael T.; Schultheiss, Timothy E.; Sause, William T.
1998-01-01
Purpose: To investigate the effect of lung density corrections on the dose delivered to lung cancer radiotherapy patients in a multi-institutional clinical trial, and to determine whether commonly available density-correction algorithms are sufficient to improve the accuracy and precision of dose calculation in the clinical trials setting. Methods and Materials: A benchmark problem was designed (and a corresponding phantom fabricated) to test density-correction algorithms under standard conditions for photon beams ranging from 60Co to 24 MV. Point doses and isodose distributions submitted for a Phase III trial in regionally advanced, unresectable non-small-cell lung cancer (Radiation Therapy Oncology Group 88-08) were calculated with and without density correction. Tumor doses were analyzed for 322 patients and 1236 separate fields. Results: For the benchmark problem studied here, the overall correction factor for a four-field treatment varied significantly with energy, ranging from 1.14 (60Co) to 1.05 (24 MV) for measured doses, or 1.17 (60Co) to 1.05 (24 MV) for doses calculated by conventional density-correction algorithms. For the patient data, overall correction factors (calculated) ranged from 0.95 to 1.28, with a mean of 1.05 and a distributional standard deviation of 0.05. The largest corrections were for lateral fields, with a mean correction factor of 1.11 and standard deviation of 0.08. Conclusions: Lung inhomogeneities can lead to significant variations in delivered dose between patients treated in a clinical trial. Existing density-correction algorithms are accurate enough to significantly reduce these variations.
Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming
2017-05-01
Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards, such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA) and has the potential to crossover with eliminated individuals from the population, following the selection of the best candidate. The method then determines a search zone using Markov Chain Monte Carlo (MCMC) sampling, utilizing a partial evaluation strategy. The leak source is then accurately localized using a modified guaranteed convergence particle swarm optimization algorithm with several bad-performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by motionless sensors, and the last stage is based on data from movable robots with sensors. The measurement error adaptability and the effect of the leak source location were analyzed. The test results showed that this three-stage localization process can localize a leak source within 1.0 m of the source for different leak source locations, with measurement error standard deviation smaller than 2.0.
Advanced signal separation and recovery algorithms for digital x-ray spectroscopy
International Nuclear Information System (INIS)
Mahmoud, Imbaby I.; El-Tokhy, Mohamed S.
2015-01-01
X-ray spectroscopy is widely used for in-situ sample analysis. Therefore, spectrum drawing and assessment of x-ray spectroscopy with high accuracy is the main scope of this paper. A lithium-drifted silicon Si(Li) detector, cooled with liquid nitrogen, is used for signal extraction. The resolution of the ADC is 12 bits and its sampling rate is 5 MHz. Hence, different algorithms were implemented and run on a personal computer with an Intel Core i5-3470 CPU at 3.20 GHz. These algorithms are signal preprocessing, signal separation and recovery algorithms, and a spectrum drawing algorithm. Moreover, statistical measurements are used for the evaluation of these algorithms. Signal preprocessing based on DC-offset correction and signal de-noising is performed. DC-offset correction was done by using the minimum value of the radiation signal, while signal de-noising was implemented using a fourth-order finite impulse response (FIR) filter, a linear-phase least-squares FIR filter, complex wavelet transforms (CWT) and Kalman filter methods. We noticed that the Kalman filter achieves a larger peak signal-to-noise ratio (PSNR) and lower error than the other methods, whereas CWT takes much longer to execute. Moreover, three different algorithms that allow correction of x-ray signal overlapping are presented: a 1D non-derivative peak search algorithm, a second-derivative peak search algorithm and an extrema algorithm. Additionally, the effect of the signal separation and recovery algorithms on spectrum drawing is measured, and a comparison between these algorithms is introduced. The obtained results confirm that the second-derivative peak search algorithm as well as the extrema algorithm have very small errors in comparison with the 1D non-derivative peak search algorithm. However, the second-derivative peak search algorithm takes much longer to execute. Therefore, the extrema algorithm introduces better results than the other algorithms. It has the advantage of recovering and
Somayajula, Srikanth Ayyala; Devred, Emmanuel; Bélanger, Simon; Antoine, David; Vellucci, V; Babin, Marcel
2018-04-20
In this study, we report on the performance of satellite-based photosynthetically available radiation (PAR) algorithms used in published oceanic primary production models. The performance of these algorithms was evaluated using buoy observations under clear and cloudy skies, and for the particular case of low sun angles typically encountered at high latitudes or at moderate latitudes in winter. The PAR models consisted of (i) the standard one from the NASA-Ocean Biology Processing Group (OBPG), (ii) the Gregg and Carder (GC) semi-analytical clear-sky model, and (iii) look-up tables based on the Santa Barbara DISORT atmospheric radiative transfer (SBDART) model. Various combinations of atmospheric inputs, empirical cloud corrections, and semi-analytical irradiance models yielded a total of 13 (11 + 2 developed in this study) different PAR products, which were compared with in situ measurements collected at high frequency (15 min) at a buoy site in the Mediterranean Sea (the "BOUée pour l'acquiSition d'une Série Optique à Long termE," or "BOUSSOLE" site). An objective ranking method applied to the algorithm results indicated that seven PAR products out of 13 agreed well with the in situ measurements. Specifically, the OBPG method showed the best overall performance with a root mean square difference (RMSD) (bias) of 19.7% (6.6%) and 10% (6.3%), followed by the look-up-table method with an RMSD (bias) of 25.5% (6.8%) and 9.6% (2.6%), at daily and monthly scales, respectively. Among the four methods based on clear-sky PAR empirically corrected for cloud cover, the Dobson and Smith method consistently underestimated daily PAR while the Budyko formulation overestimated daily PAR. Empirically cloud-corrected methods using cloud fraction (CF) performed better under quasi-clear skies than under overcast skies (CF > 0.7); however, all methods showed larger RMSDs (biases) ranging between 32% and 80.6% (-54.5% to 8.7%). Finally, three methods tested for low sun elevations revealed
Color correction for chromatic distortion in a multi-wavelength digital holographic system
International Nuclear Information System (INIS)
Lin, Li-Chien; Huang, Yi-Lun; Tu, Han-Yen; Lai, Xin-Ji; Cheng, Chau-Jern
2011-01-01
A multi-wavelength digital holographic (MWDH) system has been developed to record and reconstruct color images. Compared with digital cameras, however, high-quality color reproduction is difficult to achieve because of imperfections in the light sources, optical components, optical recording devices, and recording processes. Thus, we face the problem of correcting the colors altered during the digital holographic process. We therefore propose a color correction scheme to correct the chromatic distortion caused by the MWDH system. The scheme consists of two steps: (1) creating a color correction profile and (2) applying it to the correction of the distorted colors. To create the color correction profile, we devise two algorithms: a sequential algorithm and an integrated algorithm. The ColorChecker is used to generate the distorted colors and their desired corrected colors. The relationship between these two sets of color patches is fitted to a specific mathematical model, the parameters of which are estimated to create the profile. The profile is then used to correct the color distortion of images, capturing and preserving the original vibrancy of the reproduced colors for different reconstructed images.
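The profile-creation step can be illustrated with a minimal sketch: fit a mapping from distorted ColorChecker patch values to their reference values by least squares, assuming a simple affine color model. The paper's sequential and integrated algorithms use their own (unspecified) models; everything below is an assumption for illustration.

```python
import numpy as np

def fit_color_profile(distorted, reference):
    """Least-squares fit of an affine color-correction profile.
    distorted, reference: (N, 3) arrays of RGB patch colors."""
    ones = np.ones((distorted.shape[0], 1))
    A = np.hstack([distorted, ones])                   # (N, 4) augmented input
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)  # (4, 3) profile matrix
    return M

def apply_profile(M, colors):
    """Apply the fitted profile to distorted colors."""
    ones = np.ones((colors.shape[0], 1))
    return np.hstack([colors, ones]) @ M

rng = np.random.default_rng(0)
ref = rng.random((24, 3))            # 24 patches, as on a ColorChecker
dist = 0.8 * ref + 0.05              # synthetic chromatic distortion
M = fit_color_profile(dist, ref)
corrected = apply_profile(M, dist)
```

Because the synthetic distortion here is itself affine, the fit recovers the reference patches almost exactly; real holographic distortions would leave a residual.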
Energy Technology Data Exchange (ETDEWEB)
Ming, W.Q.; Chen, J.H., E-mail: jhchen123@hnu.edu.cn
2013-11-15
Three different types of multislice algorithms, namely the conventional multislice (CMS) algorithm, the propagator-corrected multislice (PCMS) algorithm and the fully-corrected multislice (FCMS) algorithm, have been evaluated comparatively with respect to accelerating voltage in transmission electron microscopy. Detailed numerical calculations have been performed to test their validity. The results show that the three algorithms are equivalent for accelerating voltages above 100 kV. However, below 100 kV, the CMS algorithm introduces significant errors, not only for higher-order Laue zone (HOLZ) reflections but also for zero-order Laue zone (ZOLZ) reflections. The differences between the PCMS and FCMS algorithms are negligible and appear mainly in HOLZ reflections. Nonetheless, when the accelerating voltage is lowered to 20 kV or below, the PCMS algorithm also yields results deviating from the FCMS results. The present study demonstrates that the propagation of the electron wave from one slice to the next is actually cross-correlated with the crystal potential in a complex manner, such that when the accelerating voltage is lowered to 10 kV, the accuracy of the algorithms depends on the scattering power of the specimen. - Highlights: • Three multislice algorithms for low-energy transmission electron microscopy are evaluated. • The propagator-corrected algorithm is a good alternative for voltages down to 20 kV. • Below 20 kV, a fully-corrected algorithm has to be employed for quantitative simulations.
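One step of the conventional multislice scheme discussed above can be sketched as follows: transmit the wave through a slice, then propagate to the next slice with the paraxial Fresnel propagator in Fourier space. This is a textbook-style illustration of the CMS baseline only (all parameter values are hypothetical); the propagator-corrected and fully-corrected variants modify this propagation step.

```python
import numpy as np

def cms_step(psi, t_slice, wavelength, dz, dx):
    """One conventional multislice (CMS) step: multiply the wave by
    the slice transmission function, then apply the paraxial Fresnel
    propagator P(k) = exp(-i*pi*lambda*dz*k^2) in Fourier space."""
    n = psi.shape[0]
    k = np.fft.fftfreq(n, d=dx)                  # spatial frequencies
    kx, ky = np.meshgrid(k, k, indexing="ij")
    propagator = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))
    return np.fft.ifft2(np.fft.fft2(psi * t_slice) * propagator)

n = 64
psi0 = np.ones((n, n), dtype=complex)            # incident plane wave
phase = 0.01 * np.random.default_rng(1).random((n, n))
t = np.exp(1j * phase)                           # weak phase object slice
psi1 = cms_step(psi0, t, wavelength=2.5e-12, dz=2e-10, dx=5e-11)
```

Since the slice is a pure phase object and the propagator has unit modulus, the step conserves total intensity, a useful sanity check for any multislice implementation.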
Image defog algorithm based on open close filter and gradient domain recursive bilateral filter
Liu, Daqian; Liu, Wanjun; Zhao, Qingguo; Fei, Bowen
2017-11-01
To address the fuzzy details, color distortion, and low brightness of images produced by the dark channel prior defog algorithm, an image defog algorithm based on an open-close filter and a gradient domain recursive bilateral filter, referred to as OCRBF, is put forward. OCRBF first uses a weighted quadtree to obtain a more accurate global atmospheric light value; it then applies multiple-structure-element morphological open and close filters to the minimum channel map to obtain a rough scattering map from the dark channel prior, uses a variogram to correct the transmittance map, and applies the gradient domain recursive bilateral filter for smoothing; finally, it recovers the image through the image degradation model and adjusts the contrast to obtain a bright, clear, fog-free image. Extensive experimental results show that the proposed method removes fog well and recovers the color and definition of foggy images containing close-range scenes, wide perspectives, and bright areas. Compared with other image defog algorithms, it obtains clearer and more natural fog-free images with more visible detail; moreover, the time complexity of the SIDA algorithm is linear in the number of image pixels.
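The dark channel prior that OCRBF builds on can be sketched in plain NumPy: take the per-pixel minimum over color channels, then a local minimum filter. This is the baseline estimate only; the paper replaces the plain minimum filter with multiple-structure-element morphological open-close filtering, which is not shown here.

```python
import numpy as np

def dark_channel(image, patch=3):
    """Dark channel prior: per-pixel minimum over RGB channels,
    followed by a local minimum filter of size patch x patch."""
    min_ch = image.min(axis=2)               # minimum channel map
    h, w = min_ch.shape
    padded = np.pad(min_ch, patch // 2, mode="edge")
    out = np.empty_like(min_ch)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

hazy = np.random.default_rng(2).random((8, 8, 3))   # synthetic hazy image
dc = dark_channel(hazy)
```

In haze removal, low dark-channel values mark haze-free regions; the transmittance map is then derived from this estimate before refinement and smoothing.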
Using BRDFs for accurate albedo calculations and adjacency effect corrections
Energy Technology Data Exchange (ETDEWEB)
Borel, C.C.; Gerstl, S.A.W.
1996-09-01
In this paper the authors discuss two uses of BRDFs in remote sensing: (1) in determining the clear sky top of the atmosphere (TOA) albedo, (2) in quantifying the effect of the BRDF on the adjacency point-spread function and on atmospheric corrections. The TOA spectral albedo is an important parameter retrieved by the Multi-angle Imaging Spectro-Radiometer (MISR). Its accuracy depends mainly on how well one can model the surface BRDF for many different situations. The authors present results from an algorithm which matches several semi-empirical functions to the nine MISR measured BRFs that are then numerically integrated to yield the clear sky TOA spectral albedo in four spectral channels. They show that absolute accuracies in the albedo of better than 1% are possible for the visible and better than 2% in the near infrared channels. Using a simplified extensive radiosity model, the authors show that the shape of the adjacency point-spread function (PSF) depends on the underlying surface BRDFs. The adjacency point-spread function at a given offset (x,y) from the center pixel is given by the integral of transmission-weighted products of BRDF and scattering phase function along the line of sight.
The Chandra Source Catalog: Algorithms
McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyze the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.
Generating Global Leaf Area Index from Landsat: Algorithm Formulation and Demonstration
Ganguly, Sangram; Nemani, Ramakrishna R.; Zhang, Gong; Hashimoto, Hirofumi; Milesi, Cristina; Michaelis, Andrew; Wang, Weile; Votava, Petr; Samanta, Arindam; Melton, Forrest;
2012-01-01
This paper summarizes the implementation of a physically based algorithm for the retrieval of vegetation green Leaf Area Index (LAI) from Landsat surface reflectance data. The algorithm is based on the canopy spectral invariants theory and provides a computationally efficient way of parameterizing the Bidirectional Reflectance Factor (BRF) as a function of spatial resolution and wavelength. LAI retrievals from the application of this algorithm to aggregated Landsat surface reflectances are consistent with those of MODIS for homogeneous sites represented by different herbaceous and forest cover types. Example results illustrating the physics and performance of the algorithm suggest three key factors that influence the LAI retrieval process: 1) the atmospheric correction procedures to estimate surface reflectances; 2) the proximity of Landsat-observed surface reflectance and corresponding reflectances as characterized by the model simulation; and 3) the quality of the input land cover type in accurately delineating pure vegetated components as opposed to mixed pixels. Accounting for these factors, a pilot implementation of the LAI retrieval algorithm was demonstrated for the state of California utilizing the Global Land Survey (GLS) 2005 Landsat data archive. In a separate exercise, the performance of the LAI algorithm over California was evaluated by using the short-wave infrared band in addition to the red and near-infrared bands. Results show that the algorithm, while ingesting the short-wave infrared band, has the ability to delineate open canopies with understory effects and may provide useful information compared to a more traditional two-band retrieval. Future research will involve implementation of this algorithm at continental scales and a validation exercise will be performed in evaluating the accuracy of the 30-m LAI products at several field sites.
Indian Academy of Sciences (India)
Algorithms - Correctness of Programs. R K Shyamasundar. Series Article. Resonance - Journal of Science Education, Volume 3, Issue 4. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
Discrimination of Biomass Burning Smoke and Clouds in MAIAC Algorithm
Lyapustin, A.; Korkin, S.; Wang, Y.; Quayle, B.; Laszlo, I.
2012-01-01
The multi-angle implementation of atmospheric correction (MAIAC) algorithm makes aerosol retrievals from MODIS data at 1 km resolution, providing information about fine-scale aerosol variability. This information is required in applications such as urban air quality analysis and aerosol source identification. The quality of high resolution aerosol data is directly linked to the quality of the cloud mask, in particular detection of small (sub-pixel) and low clouds. This work continues research in this direction, describing a technique to detect small clouds and introducing a smoke test to discriminate biomass burning smoke from clouds. The smoke test relies on a relative increase of aerosol absorption at the MODIS wavelength of 0.412 micrometers as compared to 0.47-0.67 micrometers, due to multiple scattering and enhanced absorption by organic carbon released during combustion. This general principle has been successfully used in the OMI detection of absorbing aerosols based on UV measurements. This paper provides the algorithm details and illustrates its performance on two examples of wildfires, in the US Pacific Northwest and in Georgia/Florida, in 2007.
Drossart, P.; Combes, M.; Encrenaz, T.; Melchiorri, R.; Fouchet, T.; Forget, F.; Moroz, V.; Ignatiev, N.; Bibring, J.-P.; Langevin, Y.; OMEGA Team
Observations of Mars by the OMEGA/Mars Express experiment provide extended maps of the martian disk at all latitudes, and with various conditions of illumination, between 0.4 and 5 micron. The atmospheric investigations so far conducted by our team are focused on the infrared part of the spectrum (1-5 micron) and include: the development of a correction algorithm for atmospheric gaseous absorption, to give access to fine mineralogic studies largely decorrelated from atmospheric effects; the study of dust opacity effects in the near infrared, with the aim of also correcting the raw spectra for dust opacity perturbation; the study of minor constituents like CO, to search for regional or global variations; and the study of CO2 emission at 4.3 micron related to fluorescent emission. This last effect is prominently detected in limb observations obtained in the 3-axis stabilized mode of Mars Express, with high-altitude emission in the CO2 fundamental at 4.3 micron, usually seen in absorption in nadir observations. These emissions are related to non-LTE atmospheric layers, well above the solid surface in the mesosphere. Such emissions are also present in Earth and Venus limb observations. They are present in nadir observations as well, but are reinforced in limb viewing geometry due to the tangential view. A numerical model of these emissions will be presented.
Self-correcting Multigrid Solver
International Nuclear Information System (INIS)
Lewandowski, Jerome L.V.
2004-01-01
A new multigrid algorithm based on the method of self-correction for the solution of elliptic problems is described. The method exploits information contained in the residual to dynamically modify the source term (right-hand side) of the elliptic problem. It is shown that the self-correcting solver is more efficient at damping the short wavelength modes of the algebraic error than its standard equivalent. When used in conjunction with a multigrid method, the resulting solver displays an improved convergence rate with no additional computational work.
Energy Technology Data Exchange (ETDEWEB)
Matthews, Patrick [Nevada Site Office, Las Vegas, NV (United States)
2016-02-01
CAU 573 comprises the following corrective action sites (CASs): • 05-23-02, GMX Alpha Contaminated Area • 05-45-01, Atmospheric Test Site - Hamilton These two CASs include the release at the Hamilton weapons-related tower test and a series of 29 atmospheric experiments conducted at GMX. The two CASs are located in two distinctly separate areas within Area 5. To facilitate site investigation and data quality objective (DQO) decisions, all identified releases (i.e., CAS components) were organized into study groups. The reporting of investigation results and the evaluation of DQO decisions are at the release level. The corrective action alternatives (CAAs) were evaluated at the FFACO CAS level. The purpose of this CADD/CAP is to evaluate potential CAAs, provide the rationale for the selection of recommended CAAs, and provide the plan for implementation of the recommended CAA for CAU 573. Corrective action investigation (CAI) activities were performed from January 2015 through November 2015, as set forth in the CAU 573 Corrective Action Investigation Plan (CAIP). Analytes detected during the CAI were evaluated against appropriate final action levels (FALs) to identify the contaminants of concern. Assessment of the data generated from investigation activities conducted at CAU 573 revealed the following: • Radiological contamination within CAU 573 does not exceed the FALs (based on the Occasional Use Area exposure scenario). • Chemical contamination within CAU 573 does not exceed the FALs. • Potential source material—including lead plates, lead bricks, and lead-shielded cables—was removed during the investigation and requires no additional corrective action.
Real-time perspective correction in video stream
Directory of Open Access Journals (Sweden)
Glagolev Vladislav
2018-01-01
The paper describes an algorithm for software perspective correction. The algorithm uses the camera's orientation angles and transforms the coordinates of pixels on a source image to coordinates on a virtual image from a camera whose focal plane is perpendicular to the gravity vector. This algorithm can be used as a low-cost replacement for a gyrostabilizer in applications that prohibit movable parts or heavy and pricey equipment.
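The transformation described can be sketched as a rotation-induced homography: with known camera intrinsics, pixels of the tilted camera map to pixels of a virtual gravity-aligned camera through K R^T K^-1. The intrinsic values and the angle convention below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def level_homography(fx, fy, cx, cy, roll, pitch):
    """Homography mapping pixels of a tilted camera to a virtual
    camera whose focal plane is perpendicular to gravity.
    Angles are in radians; intrinsics (fx, fy, cx, cy) are assumed known."""
    K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about x
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about z
    R = Rx @ Rz                      # camera orientation from the IMU angles
    return K @ R.T @ np.linalg.inv(K)

def warp_point(H, u, v):
    """Map one pixel through the homography (homogeneous division)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

H = level_homography(800.0, 800.0, 320.0, 240.0, roll=0.05, pitch=0.1)
u2, v2 = warp_point(H, 320.0, 240.0)
```

With zero roll and pitch the homography reduces to the identity, so the correction leaves an already-level image untouched.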
Palacios, Sherry L.; Schafer, Chris; Broughton, Jennifer; Guild, Liane S.; Kudela, Raphael M.
2013-01-01
There is a need in the Biological Oceanography community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand energy flow through ecosystems, to track the fate of carbon in the ocean, and to detect and monitor for harmful algal blooms (HABs). The ocean color community has responded to this demand with the development of phytoplankton functional type (PFT) discrimination algorithms. These PFT algorithms fall into one of three categories depending on the science application: size-based, biogeochemical function, and taxonomy. The new PFT algorithm Phytoplankton Detection with Optics (PHYDOTax) is an inversion algorithm that discriminates taxon-specific biomass to differentiate among six taxa found in the California Current System: diatoms, dinoflagellates, haptophytes, chlorophytes, cryptophytes, and cyanophytes. PHYDOTax was developed and validated in Monterey Bay, CA for the high resolution imaging spectrometer, Spectroscopic Aerial Mapping System with On-board Navigation (SAMSON - 3.5 nm resolution). PHYDOTax exploits the high spectral resolution of an imaging spectrometer and the improved spatial resolution that airborne data provide for coastal areas. The objective of this study was to apply PHYDOTax to a relatively lower resolution imaging spectrometer to test the algorithm's sensitivity to atmospheric correction, to evaluate capability with other sensors, and to determine if down-sampling spectral resolution would degrade its ability to discriminate among phytoplankton taxa. This study is part of the larger Hyperspectral Infrared Imager (HyspIRI) airborne simulation campaign, which is collecting Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery aboard NASA's ER-2 aircraft during three seasons in each of two years over terrestrial and marine targets in California. Our aquatic component seeks to develop and test algorithms to retrieve water quality properties (e.g. HABs and river plumes) in both marine and in
Haffner, D. P.; McPeters, R. D.; Bhartia, P. K.; Labow, G. J.
2015-12-01
The TOMS V9 total ozone algorithm will be applied to the OMPS Nadir Mapper instrument to supersede the existing V8.6 data product in operational processing and re-processing for public release. Because the quality of the V8.6 data is already quite high, enhancements in V9 mainly concern the information provided by the retrieval and simplifications to the algorithm. The design of the V9 algorithm has been influenced by improvements in our knowledge of atmospheric effects, such as those of clouds made possible by studies with OMI, and also by limitations of the V8 algorithms applied to both OMI and OMPS. But the namesake instruments of the TOMS algorithm are substantially more limited in their spectral and noise characteristics, and a requirement of our algorithm is to apply it also to these discrete-band spectrometers, which date back to 1978. To achieve continuity across all these instruments, the TOMS V9 algorithm continues to use radiances in discrete bands, but now uses Rodgers optimal estimation to retrieve a coarse profile and provide uncertainties for each retrieval. The algorithm remains capable of achieving high-accuracy results with a small number of discrete wavelengths, and in extreme cases, such as unusual profile shapes and high solar zenith angles, the quality of the retrievals is improved. Despite being designed for limited wavelengths, the algorithm can also utilize additional wavelengths from hyperspectral sensors like OMPS to augment the retrieval's error detection and information content, for example SO2 detection and correction of the Ring effect on atmospheric radiances. We discuss these and other aspects of the V9 algorithm as it will be applied to OMPS, and mention potential improvements which aim to take advantage of a synergy between the OMPS Limb Profiler and Nadir Mapper to further improve the quality of total ozone from the OMPS instrument.
Advances in orbit drift correction in the advanced photon source storage ring
International Nuclear Information System (INIS)
Emery, L.; Borland, M.
1997-01-01
The Advanced Photon Source storage ring is required to provide X-ray beams of high positional stability, specified as 17 μm rms in the horizontal plane and 4.4 μm rms in the vertical plane. The authors report on the difficult task of stabilizing the slow drift component of the orbit motion down to a few microns rms using workstation-based orbit correction. There are two aspects to consider separately: the correction algorithm and the configuration of the beam position monitors (BPMs) and correctors. Three notable features of the correction algorithm are: low-pass digital filtering of BPM readbacks; "despiking" of the filtered orbit to desensitize the orbit correction to spurious BPM readbacks without having to change the correction matrix; and BPM intensity-dependent offset compensation. The BPM/corrector configuration includes all of the working BPMs but only a small set of correctors distributed around the ring. Thus only those orbit modes that are most likely to be representative of real beam drift are handled by the correction algorithm.
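The filtering and despiking ideas above can be illustrated with a simple sketch. This is not the APS implementation: the filter order, the median/MAD outlier rule, and all thresholds are assumptions chosen for clarity.

```python
import numpy as np

def lowpass(readings, alpha=0.1):
    """First-order IIR low-pass over successive BPM readbacks,
    standing in for the digital filtering described in the abstract."""
    out = np.empty(len(readings))
    acc = float(readings[0])
    for i, r in enumerate(readings):
        acc += alpha * (float(r) - acc)
        out[i] = acc
    return out

def despike(orbit, threshold=3.0):
    """Replace outlier BPM values with the median orbit value, so a
    single spurious readback does not steer the correction; the
    correction matrix itself is left unchanged."""
    med = np.median(orbit)
    mad = np.median(np.abs(orbit - med)) or 1.0   # guard against zero spread
    cleaned = np.array(orbit, dtype=float)
    cleaned[np.abs(cleaned - med) > threshold * mad] = med
    return cleaned

orbit = np.array([1.0, 1.1, 0.9, 50.0, 1.05])     # one spurious BPM readback
smooth = lowpass(orbit)
clean = despike(orbit)
```

The point of despiking before inverting the orbit response is that a single bad BPM otherwise produces a large, fictitious corrector kick.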
Implementation of electronic crosstalk correction for terra MODIS PV LWIR bands
Geng, Xu; Madhavan, Sriharsha; Chen, Na; Xiong, Xiaoxiong
2015-09-01
The MODerate-resolution Imaging Spectroradiometer (MODIS) is one of the primary instruments in the fleet of NASA's Earth Observing System (EOS) in space. Terra MODIS has completed 15 years of operation, far exceeding its design lifetime of 6 years. The MODIS Level 1B (L1B) processing is the first step in the process chain for deriving various higher-level science products. These products are used mainly in understanding the geophysical changes occurring in the Earth's land, ocean, and atmosphere. The L1B code is designed to carefully calibrate the responses of all the detectors of the 36 spectral bands of MODIS and provide accurate L1B radiances (also reflectances in the case of the Reflective Solar Bands). To fulfill this purpose, Look Up Tables (LUTs), which contain calibration coefficients derived from both on-board calibrators and Earth-view characterized responses, are used in the L1B processing. In this paper, we present the implementation mechanism of the electronic crosstalk correction in the Photo Voltaic (PV) Long Wave InfraRed (LWIR) bands (Bands 27-30). The crosstalk correction involves two vital components. First, a crosstalk correction module is implemented in the L1B code to correct the on-board Blackbody and Earth-View (EV) digital number (dn) responses using a linear correction model. Second, the correction coefficients, derived from the EV observations, are supplied in the form of LUTs. Further, the LUTs contain time stamps reflecting changes in the coefficients, assessed using Noise Equivalent difference Temperature (NEdT) trending. With these algorithms applied in the MODIS L1B processing, it is demonstrated that the corrections restore the radiometric balance of each affected band and substantially reduce the striping noise in the processed images.
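As a rough illustration of the linear correction model (not the operational L1B code), the corrected digital number subtracts coefficient-weighted contributions from the contaminating bands; the coefficient values below are hypothetical stand-ins for the LUT entries.

```python
import numpy as np

def correct_crosstalk(dn_receiver, dn_senders, coeffs):
    """Linear electronic crosstalk correction for one detector sample:
    dn_corrected = dn - sum_j c_j * dn_sender_j, with the c_j taken
    from the time-stamped LUT valid for the current epoch."""
    return dn_receiver - np.dot(coeffs, dn_senders)

dn = 1200.0                                 # receiving-band digital number
senders = np.array([300.0, 150.0, 80.0])    # contaminating band responses
c = np.array([0.02, 0.01, 0.005])           # hypothetical LUT coefficients
corrected = correct_crosstalk(dn, senders, c)
# 1200 - (6.0 + 1.5 + 0.4) = 1192.1
```

In the real processing the same subtraction is applied to both the on-board Blackbody and Earth-View responses before calibration, so the derived radiances stay consistent.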
Wang, W.; Wang, Y.; Hashimoto, H.; Li, S.; Takenaka, H.; Higuchi, A.; Lyapustin, A.; Nemani, R. R.
2017-12-01
The latest generation of geostationary satellite sensors, including the GOES-16/ABI and the Himawari 8/AHI, provide exciting capability to monitor land surface at very high temporal resolutions (5-15 minute intervals) and with spatial and spectral characteristics that mimic the Earth Observing System flagship MODIS. However, geostationary data feature changing sun angles at constant view geometry, which is almost reciprocal to sun-synchronous observations. Such a challenge needs to be carefully addressed before one can exploit the full potential of the new sources of data. Here we take on this challenge with Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm, recently developed for accurate and globally robust applications like the MODIS Collection 6 re-processing. MAIAC first grids the top-of-atmosphere measurements to a fixed grid so that the spectral and physical signatures of each grid cell are stacked ("remembered") over time and used to dramatically improve cloud/shadow/snow detection, which is by far the dominant error source in the remote sensing. It also exploits the changing sun-view geometry of the geostationary sensor to characterize surface BRDF with augmented angular resolution for accurate aerosol retrievals and atmospheric correction. The high temporal resolutions of the geostationary data indeed make the BRDF retrieval much simpler and more robust as compared with sun-synchronous sensors such as MODIS. As a prototype test for the geostationary-data processing pipeline on NASA Earth Exchange (GEONEX), we apply MAIAC to process 18 months of data from Himawari 8/AHI over Australia. We generate a suite of test results, including the input TOA reflectance and the output cloud mask, aerosol optical depth (AOD), and the atmospherically-corrected surface reflectance for a variety of geographic locations, terrain, and land cover types. Comparison with MODIS data indicates a general agreement between the retrieved surface reflectance
Polarimetric Remote Sensing of Atmospheric Particulate Pollutants
Li, Z.; Zhang, Y.; Hong, J.
2018-04-01
Atmospheric particulate pollutants not only reduce atmospheric visibility and change the energy balance of the troposphere, but also affect human and vegetation health. To monitor particulate pollutants, we have established and developed a series of inversion algorithms based on polarimetric remote sensing, which has unique advantages in dealing with atmospheric particulates. A solution is presented for estimating near-surface PM2.5 mass concentrations from combined remote sensing measurements, including polarimetric, active, and infrared techniques. The mean relative error of PM2.5 retrieved from the combined measurements is 35.5% for the case of October 5, 2013, a certain improvement over previous studies. A systematic comparison with ground-based observations further indicates the effectiveness of the inversion algorithm and the reliability of the results. A new generation of polarized sensors (DPC and PCF), whose observations can support these algorithms, will fly on board GF-series satellites to be launched by China in the near future.
Simulating water hammer with corrective smoothed particle method
Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.
2012-01-01
The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in
3D-radiative transfer in terrestrial atmosphere: An efficient parallel numerical procedure
Bass, L. P.; Germogenova, T. A.; Nikolaeva, O. V.; Kokhanovsky, A. A.; Kuznetsov, V. S.
2003-04-01
Algorithmic detectability threshold of the stochastic block model
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
Directory of Open Access Journals (Sweden)
Chao Yang
2016-01-01
Full Text Available The artificial bee colony (ABC) algorithm is a recently introduced optimization method in the research field of swarm intelligence. This paper presents an improved ABC algorithm, named OGABC, based on opposition-based learning (OBL) and a global best search equation, to overcome the shortcomings of slow convergence and entrapment in local optima in the inversion of atmospheric ducts. Taking the inversion of the surface duct using the refractivity-from-clutter (RFC) technique as an example to validate the performance of the proposed OGABC, the inversion results are compared with those of the modified invasive weed optimization (MIWO) and ABC. The radar sea clutter power calculated by the parabolic equation method using the simulated and measured refractivity profiles is utilized to carry out the inversion of the surface duct. The comparative investigation indicates that the performance of OGABC is superior to that of MIWO and ABC in terms of stability, accuracy, and convergence rate during the inversion process.
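The opposition-based learning component can be sketched in a few lines: for a candidate x in the box [lb, ub], its "opposite" point is lb + ub − x, and the better of the pair (by fitness) is kept. This is a hedged sketch of the generic OBL idea only; the quadratic toy cost stands in for the duct-inversion objective, and the names `opposite` and `obl_select` are illustrative, not from the paper.

```python
# Minimal sketch of opposition-based learning (OBL): compare a candidate
# against its "opposite" within the search bounds and keep the better one.
def opposite(x, lb, ub):
    return [l + u - xi for xi, l, u in zip(x, lb, ub)]

def obl_select(x, lb, ub, fitness):
    x_opp = opposite(x, lb, ub)
    return x if fitness(x) <= fitness(x_opp) else x_opp

# Toy cost: distance to an assumed optimum at (1, 1) inside [0, 5]^2.
cost = lambda v: sum((c - 1.0) ** 2 for c in v)
best = obl_select([4.0, 4.0], [0.0, 0.0], [5.0, 5.0], cost)   # -> [1.0, 1.0]
```

In an ABC-style loop, this pairwise comparison is typically applied at population initialization and periodically thereafter, which is where the faster convergence claimed for OGABC-type variants comes from.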
Multiview Trajectory Mapping Using Homography with Lens Distortion Correction
Directory of Open Access Journals (Sweden)
Andrea Cavallaro
2008-11-01
Full Text Available We present a trajectory mapping algorithm for a distributed camera setting that is based on statistical homography estimation accounting for the distortion introduced by camera lenses. Unlike traditional approaches based on the direct linear transformation (DLT) algorithm and singular value decomposition (SVD), the planar homography estimation is derived from renormalization. In addition to this, the algorithm explicitly introduces a correction parameter to account for the nonlinear radial lens distortion, thus improving the accuracy of the transformation. We demonstrate the proposed algorithm by generating mosaics of the observed scenes and by registering the spatial locations of moving objects (trajectories) from multiple cameras on the mosaics. Moreover, we objectively compare the transformed trajectories with those obtained by SVD and least mean square (LMS) methods on standard datasets and demonstrate the advantages of the renormalization and the lens distortion correction.
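A common single-parameter form of the radial correction mentioned above scales each point's offset from the principal point by (1 + k·r²). This is only a generic sketch of such a model, not the paper's exact formulation; the function name, the sign convention, and the value k = 0.1 are assumptions for illustration.

```python
# Sketch of a single-coefficient radial lens-distortion model: a point is
# moved along the ray from the principal point (cx, cy) by a factor that
# grows quadratically with its radius. k > 0 here models one distortion
# direction; real calibrations fit k (and often higher-order terms).
def undistort(x_d, y_d, k, cx=0.0, cy=0.0):
    r2 = (x_d - cx) ** 2 + (y_d - cy) ** 2
    factor = 1.0 + k * r2
    return cx + (x_d - cx) * factor, cy + (y_d - cy) * factor

x_u, y_u = undistort(1.0, 0.0, k=0.1)   # -> (1.1, 0.0)
```

Applying such a correction before (or jointly with) homography estimation is what keeps straight scene lines straight in the mosaics, which is the accuracy gain the abstract refers to.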
MERIS Retrieval of Water Quality Components in the Turbid Albemarle-Pamlico Sound Estuary, USA
Directory of Open Access Journals (Sweden)
Hans W. Paerl
2011-04-01
Full Text Available Two remote-sensing optical algorithms for the retrieval of water quality components (WQCs) in the Albemarle-Pamlico Estuarine System (APES) were developed and validated for chlorophyll a (Chl). Both algorithms were semi-empirical because they incorporated some elements of the optical processes in the atmosphere, water, and air/water interface. One incorporated a very simple atmospheric correction and a modified quasi-single-scattering approximation (QSSA) for estimating the spectral Gordon's parameter, and the second estimated WQCs directly from the top-of-atmosphere satellite radiance without atmospheric corrections. A modified version of the Global Meteorological Database for Solar Energy and Applied Meteorology (METEONORM) was used to estimate directional atmospheric transmittances. The study incorporated in situ Chl data from the Ferry-Based Monitoring (FerryMon) program collected in the Neuse River Estuary (n = 633) and Pamlico Sound (n = 362), along with Medium Resolution Imaging Spectrometer (MERIS) satellite imagery collected (2006–2009) across the APES, providing quasi-coinciding samples for Chl algorithm development and validation. Results indicated a coefficient of determination (R2) of 0.70 and a mean-normalized root-mean-square error (NRMSE) of 52% in the Neuse River Estuary, and R2 = 0.44 (NRMSE = 75%) in the Pamlico Sound, without atmospheric corrections. The simple atmospheric correction tested provided no performance improvements. Algorithm performance demonstrated the potential for supporting long-term operational WQC satellite monitoring in the APES.
Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M
2010-03-15
A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks.
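The point-to-point matching step can be illustrated with a toy nearest-reference search over a chosen spectral window. This is a hedged sketch of the general idea, not the published implementation; the spectra, the window, and the `select_reference` name are invented for the example.

```python
import numpy as np

# Toy point-to-point matching: for a sample spectrum, choose the reference
# spectrum closest over a user-selected spectral window (here, Euclidean
# distance over a slice of channels). All numbers are illustrative.
def select_reference(sample, references, window):
    sample = np.asarray(sample, float)[window]
    dists = [np.linalg.norm(np.asarray(ref, float)[window] - sample)
             for ref in references]
    return int(np.argmin(dists))

refs = [[0.10, 0.20, 0.30, 0.40],
        [0.12, 0.22, 0.33, 0.41],
        [0.50, 0.60, 0.70, 0.80]]
idx = select_reference([0.11, 0.21, 0.32, 0.40], refs, window=slice(0, 3))   # -> 1
```

Subtracting the selected reference spectrum from the sample spectrum is then the background-correction step proper; restricting the comparison to a window is what keeps analyte bands from biasing the match.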
Analysis of a parallel multigrid algorithm
Chan, Tony F.; Tuminaro, Ray S.
1989-01-01
The parallel multigrid algorithm of Frederickson and McBryan (1987) is considered. This algorithm uses multiple coarse-grid problems (instead of one problem) in the hope of accelerating convergence and is found to have a close relationship to traditional multigrid methods. Specifically, the parallel coarse-grid correction operator is identical to a traditional multigrid coarse-grid correction operator, except that the mixing of high and low frequencies caused by aliasing error is removed. Appropriate relaxation operators can be chosen to take advantage of this property. Comparisons between the standard multigrid and the new method are made.
Energy Technology Data Exchange (ETDEWEB)
Albino, Lucas D.; Santos, Gabriela R.; Ribeiro, Victor A.B.; Rodrigues, Laura N., E-mail: lucasdelbem1@gmail.com [Universidade de Sao Paulo (USP), Sao Paulo, SP (Brazil). Faculdade de Medicina. Instituto de Radiologia; Weltman, Eduardo; Braga, Henrique F. [Instituto do Cancer do Estado de Sao Paulo, Sao Paulo, SP (Brazil). Servico de Radioterapia
2013-12-15
The dose accuracy calculated by a treatment planning system is directly related to the chosen algorithm. Nowadays, several dose calculation algorithms are commercially available; they differ in calculation time and accuracy, especially when individual tissue densities are taken into account. The aim of this study was to compare two different calculation algorithms from iPlan®, BrainLAB, in the treatment of pituitary gland tumors with intensity-modulated radiation therapy (IMRT). These tumors are located in a region of tissues with variable electron density. The deviations from the plan with no heterogeneity correction were evaluated. For initial validation of the data inserted into the planning system, an IMRT plan was simulated in an anthropomorphic phantom and the dose distribution was measured with a radiochromic film. Gamma analysis was performed on the film, comparing it with dose distributions calculated with the X-ray Voxel Monte Carlo (XVMC) algorithm and pencil beam convolution (PBC). Next, 33 patient plans, initially calculated with the PBC algorithm, were recalculated with the XVMC algorithm. The treatment-volume and organ-at-risk dose-volume histograms were compared. No relevant differences were found in the dose-volume histograms between XVMC and PBC. However, differences were obtained when comparing each plan with the plan without heterogeneity correction. (author)
A new trajectory correction technique for linacs
International Nuclear Information System (INIS)
Raubenheimer, T.O.; Ruth, R.D.
1990-06-01
In this paper, we describe a new trajectory correction technique for high energy linear accelerators. Current correction techniques force the beam trajectory to follow misalignments of the Beam Position Monitors. Since the particle bunch has a finite energy spread and particles with different energies are deflected differently, this causes "chromatic" dilution of the transverse beam emittance. The algorithm, which we describe in this paper, reduces the chromatic error by minimizing the energy dependence of the trajectory. To test the method we compare the effectiveness of our algorithm with a standard correction technique in simulations on a design linac for a Next Linear Collider. The simulations indicate that chromatic dilution would be debilitating in a future linear collider because of the very small beam sizes required to achieve the necessary luminosity. Thus, we feel that this technique will prove essential for future linear colliders. 3 refs., 6 figs., 2 tabs
See Something, Say Something: Correction of Global Health Misinformation on Social Media.
Bode, Leticia; Vraga, Emily K
2018-09-01
Social media are often criticized for being a conduit for misinformation on global health issues, but may also serve as a corrective to false information. To investigate this possibility, an experiment was conducted exposing users to a simulated Facebook News Feed featuring misinformation and different correction mechanisms (one in which news stories featuring correct information were produced by an algorithm and another where the corrective news stories were posted by other Facebook users) about the Zika virus, a current global health threat. Results show that algorithmic and social corrections are equally effective in limiting misperceptions, and correction occurs for both high and low conspiracy belief individuals. Recommendations for social media campaigns to correct global health misinformation, including encouraging users to refute false or misleading health information, and providing them appropriate sources to accompany their refutation, are discussed.
Energy Technology Data Exchange (ETDEWEB)
Yun, Hyong Geon; Shin, Kyo Chul [Dankook Univ., College of Medicine, Seoul (Korea, Republic of); Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan [Seoul National Univ., College of Medicine, Seoul (Korea, Republic of); Lee, Hyoung Koo [The Catholic Univ., College of Medicine, Seoul (Korea, Republic of)
2002-07-01
The algorithm for estimation of transmission dose was modified for use in partially blocked radiation fields and in cases with tissue deficit. The beam data were measured with a flat solid phantom in various beam-block conditions, and an algorithm for correction of the transmission dose in partially blocked radiation fields was developed from the measured data. The algorithm was tested in some clinical settings with irregularly shaped fields. Another algorithm, for correction of the transmission dose for tissue deficit, was developed by physical reasoning and tested in experimental settings with irregular contours mimicking breast cancer patients, using multiple sheets of solid phantoms. The beam-block correction algorithm could accurately reflect the effect of the beam block, with errors within ±1.0%, both for square fields and for irregularly shaped fields. The correction algorithm for tissue deficit could accurately reflect the effect of tissue deficit, with errors within ±1.0% in most situations and within ±3.0% in experimental settings with irregular contours mimicking the breast cancer treatment set-up. The developed algorithms could accurately estimate the transmission dose in most radiation treatment settings, including irregularly shaped fields and irregularly shaped body contours with tissue deficit, in transmission dosimetry.
Hard decoding algorithm for optimizing thresholds under general Markovian noise
Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond
2017-04-01
Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
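The premise that "Pauli channels can be efficiently simulated" can be seen in a few lines: under depolarizing noise, the error on each qubit is just an independently sampled Pauli letter, so a decoder study only needs to draw and process strings rather than track amplitudes. A minimal sketch with an assumed error rate p = 0.1 (not a value from the paper):

```python
import random

# Sketch of classical Pauli-channel sampling: under depolarizing noise with
# error probability p, each qubit independently receives I with probability
# 1 - p, or one of X, Y, Z with probability p/3 each. A decoder simulation
# only ever sees the sampled Pauli string.
def sample_pauli_string(n_qubits, p, rng):
    paulis = []
    for _ in range(n_qubits):
        if rng.random() < p:
            paulis.append(rng.choice("XYZ"))
        else:
            paulis.append("I")
    return "".join(paulis)

rng = random.Random(0)                       # fixed seed for reproducibility
errors = [sample_pauli_string(5, 0.1, rng) for _ in range(1000)]
```

The point of the abstract is precisely that realistic (non-Pauli, but still Markovian) noise lacks this shortcut, which is why an optimized hard decoder for general completely positive trace-preserving maps is needed.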
Ogohara, Kazunori; Takagi, Masahiro; Murakami, Shin-ya; Horinouchi, Takeshi; Yamada, Manabu; Kouyama, Toru; Hashimoto, George L.; Imamura, Takeshi; Yamamoto, Yukio; Kashimura, Hiroki; Hirata, Naru; Sato, Naoki; Yamazaki, Atsushi; Satoh, Takehiko; Iwagami, Naomoto; Taguchi, Makoto; Watanabe, Shigeto; Sato, Takao M.; Ohtsuki, Shoko; Fukuhara, Tetsuya; Futaguchi, Masahiko; Sakanoi, Takeshi; Kameda, Shingo; Sugiyama, Ko-ichiro; Ando, Hiroki; Lee, Yeon Joo; Nakamura, Masato; Suzuki, Makoto; Hirose, Chikako; Ishii, Nobuaki; Abe, Takumi
2017-12-01
We provide an overview of data products from observations by the Japanese Venus Climate Orbiter, Akatsuki, and describe the definition and content of each data-processing level. Levels 1 and 2 consist of non-calibrated and calibrated radiance (or brightness temperature), respectively, as well as geometry information (e.g., illumination angles). Level 3 data are global-grid data in the regular longitude-latitude coordinate system, produced from the contents of Level 2. Non-negligible errors in navigational data and instrumental alignment can result in serious errors in the geometry calculations. Such errors cause mismapping of the data and lead to inconsistencies between radiances and illumination angles, along with errors in cloud-motion vectors. Thus, we carefully correct the boresight pointing of each camera by fitting an ellipse to the observed Venusian limb to provide improved longitude-latitude maps for Level 3 products, if possible. The accuracy of the pointing correction is also estimated statistically by simulating observed limb distributions. The results show that our algorithm successfully corrects instrumental pointing and will enable a variety of studies on the Venusian atmosphere using Akatsuki data.
On constructing optimistic simulation algorithms for the discrete event system specification
International Nuclear Information System (INIS)
Nutaro, James J.
2008-01-01
This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the correct solution method for your optimization problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Powell method, penalty functions, the augmented Lagrange multiplier method, sequential quadratic programming, and the method of feasible directions.
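Among the techniques listed, the penalty function approach admits a compact illustration: fold an equality constraint into the objective as a quadratic penalty and let the penalty weight grow. The sketch below uses a toy problem (f(x) = x² subject to x = 1) and illustrative step sizes; the function name is an assumption for the example.

```python
# Quadratic penalty method: replace min f(x) s.t. h(x) = 0 with
# unconstrained minimization of f(x) + mu*h(x)^2 for increasing mu.
# Here f(x) = x^2 and h(x) = x - 1, solved by plain gradient descent.
def penalty_minimize(mu, x0=0.0, lr=0.001, steps=5000):
    x = x0
    for _ in range(steps):
        grad = 2.0 * x + 2.0 * mu * (x - 1.0)   # d/dx [x^2 + mu*(x-1)^2]
        x -= lr * grad
    return x

# The unconstrained minimizer is mu/(1 + mu), approaching x = 1 as mu grows.
approx = [penalty_minimize(mu) for mu in (1.0, 10.0, 100.0)]
```

The trade-off the book's treatment addresses is that large mu makes the subproblem ill-conditioned, which motivates the augmented Lagrangian method listed alongside it.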
Efficient Color-Dressed Calculation of Virtual Corrections
Giele, Walter; Winter, Jan
2010-01-01
With the advent of generalized unitarity and parametric integration techniques, the construction of a generic Next-to-Leading Order Monte Carlo becomes feasible. Such a generator will entail the treatment of QCD color in the amplitudes. We extend the concept of color dressing to one-loop amplitudes, resulting in the formulation of an explicit algorithmic solution for the calculation of arbitrary scattering processes at Next-to-Leading Order. The resulting algorithm is of exponential complexity; that is, the numerical evaluation time of the virtual corrections grows by a constant multiplicative factor as the number of external partons is increased. To study the properties of the method, we calculate the virtual corrections to $n$-gluon scattering.
The theory of hybrid stochastic algorithms
International Nuclear Information System (INIS)
Duane, S.; Kogut, J.B.
1986-01-01
The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied. (orig.)
International Nuclear Information System (INIS)
Tang Qiulin; Zeng, Gengsheng L; Gullberg, Grant T
2005-01-01
In this paper, we developed an analytical fan-beam reconstruction algorithm that compensates for uniform attenuation in SPECT. The new fan-beam algorithm is in the form of backprojection first, then filtering, and is mathematically exact. The algorithm is based on three components. The first one is the established generalized central-slice theorem, which relates the 1D Fourier transform of a set of arbitrary data and the 2D Fourier transform of the backprojected image. The second one is the fact that the backprojection of the fan-beam measurements is identical to the backprojection of the parallel measurements of the same object with the same attenuator. The third one is the stable analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan. The fan-beam algorithm is then extended into a cone-beam reconstruction algorithm, where the orbit of the focal point of the cone-beam imaging geometry is a circle. This orbit geometry does not satisfy Tuy's condition and the obtained cone-beam algorithm is an approximation. In the cone-beam algorithm, the cone-beam data are first backprojected into the 3D image volume; then a slice-by-slice filtering is performed. This slice-by-slice filtering procedure is identical to that of the fan-beam algorithm. Both the fan-beam and cone-beam algorithms are efficient, and computer simulations are presented. The new cone-beam algorithm is compared with Bronnikov's cone-beam algorithm, and it is shown to have better performance with noisy projections.
Directory of Open Access Journals (Sweden)
Stefan Leger
2017-10-01
Conclusion: The proposed PCM algorithm led to a significantly improved image quality compared to the originally acquired images, suggesting that it is applicable to the correction of MRI data. Thus it may help to reduce intensity non-uniformity which is an important step for advanced image analysis.
The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes
Energy Technology Data Exchange (ETDEWEB)
Bhatnagar, S.; Cornwell, T. J., E-mail: sbhatnag@nrao.edu [National Radio Astronomy Observatory, 1003 Lopezville Road, Socorro, NM 87801 (United States)
2017-11-01
This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth–Elevation mount antennas, are known a priori. Some, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to an antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.
A formal analysis of a dynamic distributed spanning tree algorithm
Mooij, A.J.; Wesselink, J.W.
2003-01-01
We analyze the spanning tree algorithm in the IEEE 1394.1 draft standard, whose correctness has not previously been proved. This algorithm is a fully-dynamic distributed graph algorithm, which, in general, is hard to develop. The approach we use is to formally develop an algorithm that is
Corrections to the Predictions for Atmospheric Neutrino Observations
Poirier, J.
2000-01-01
The theoretical Monte Carlo calculations of the production of neutrinos by cosmic rays incident upon the earth's atmosphere are examined. The calculations are sensitive to the assumed ratio of pi+ / pi- production cross sections; this ratio appears to be underestimated in the theory relative to the experimentally measured ratio. Since the neutrino detection cross section is three times larger than that for the antineutrino, the theoretically predicted detection ratio (nu_mu / nu_e) is correspo...
"Accelerated Perceptron": A Self-Learning Linear Decision Algorithm
Zuev, Yu. A.
2003-01-01
The class of linear decision rules is studied. A new algorithm for weight correction, called an "accelerated perceptron", is proposed. In contrast to classical Rosenblatt's perceptron this algorithm modifies the weight vector at each step. The algorithm may be employed both in learning and in self-learning modes. The theoretical aspects of the behaviour of the algorithm are studied when the algorithm is used for the purpose of increasing the decision reliability by means of weighted voting. I...
Atmospheres of polygons and knotted polygons
International Nuclear Information System (INIS)
Janse van Rensburg, E J; Rechnitzer, A
2008-01-01
In this paper we define two statistics a+(ω) and a-(ω), the positive and negative atmospheres of a lattice polygon ω of fixed length n. These statistics have the property that ⟨a+(ω)⟩/⟨a-(ω)⟩ = p_{n+2}/p_n, where p_n is the number of polygons of length n, counted modulo translations. We use the pivot algorithm to sample polygons and to compute the corresponding average atmospheres. Using these data, we directly estimate the growth constants of polygons in two and three dimensions. We find that μ = 2.63805 ± 0.00012 in two dimensions and μ = 4.683980 ± 0.000042 ± 0.000067 in three dimensions, where the error bars are 67% confidence intervals and the second error bar in the three-dimensional estimate of μ is an estimated systematic error. We also compute atmospheres of polygons of fixed knot type K sampled by the BFACF algorithm. We discuss the implications of our results and show that different knot types have atmospheres which behave dramatically differently at small values of n.
Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin
2012-06-01
Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change detection analyses, such as normalized difference vegetation index (NDVI) rationing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was utilized to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy, as shown by the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for each satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded consistent errors when predicting the NDVI value from the equation derived by linear regression analysis. The average errors for both proposed atmospheric correction methods were less than 10%.
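For reference, the NDVI rationing mentioned above is a one-line band computation; a minimal sketch, with band reflectances that are illustrative rather than taken from the study's Landsat scenes:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance.
    eps guards against division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Atmospherically corrected reflectances (illustrative values)
nir = np.array([0.45, 0.50, 0.30])
red = np.array([0.08, 0.10, 0.20])
v = ndvi(nir, red)   # dense vegetation yields high NDVI, bare soil low
```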
The correctness of Newman’s typability algorithm and some of its extensions
Geuvers, J.H.; Krebbers, R.
2011-01-01
We study Newman’s typability algorithm (Newman, 1943) [14] for simple type theory. The algorithm originates from 1943, but was left unnoticed until (Newman, 1943) [14] was recently rediscovered by Hindley (2008) [10]. The remarkable thing is that it decides typability without computing a type. We
Energy Technology Data Exchange (ETDEWEB)
Matthews, Patrick K. [Navarro-Intera, LLC (N-I), Las Vegas, NV (United States)
2015-02-01
This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 550: Smoky Contamination Area, Nevada National Security Site, Nevada. CAU 550 includes 19 corrective action sites (CASs), which consist of one weapons-related atmospheric test (Smoky), three safety experiments (Ceres, Oberon, Titania), and 15 debris sites (Table ES-1). The CASs were sorted into the following study groups based on release potential and technical similarities: • Study Group 1, Atmospheric Test • Study Group 2, Safety Experiments • Study Group 3, Washes • Study Group 4, Debris The purpose of this document is to provide justification and documentation supporting the conclusion that no further corrective action is needed for CAU 550 based on implementation of the corrective actions listed in Table ES-1. Corrective action investigation (CAI) activities were performed between August 2012 and October 2013 as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 550: Smoky Contamination Area; and in accordance with the Soils Activity Quality Assurance Plan. The approach for the CAI was to investigate and make data quality objective (DQO) decisions based on the types of releases present. The purpose of the CAI was to fulfill data needs as defined during the DQO process. The CAU 550 dataset of investigation results was evaluated based on a data quality assessment. This assessment demonstrated the dataset is complete and acceptable for use in fulfilling the DQO data needs.
''adding'' algorithm for the Markov chain formalism for radiation transfer
International Nuclear Information System (INIS)
Esposito, L.W.
1979-01-01
The Markov chain radiative transfer method of Esposito and House has been shown to be both efficient and accurate for calculation of the diffuse reflection from a homogeneous scattering planetary atmosphere. The use of a new algorithm similar to the ''adding'' formula of Hansen and Travis extends the application of this formalism to an arbitrarily deep atmosphere. The basic idea for this algorithm is to consider a preceding calculation as a single state of a new Markov chain. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. The time required for the algorithm is comparable to that for a doubling calculation for a homogeneous atmosphere, but for a non-homogeneous atmosphere the new method is considerably faster than the standard ''adding'' routine. As with the standard ''adding'' method, the information on the internal radiation field is lost during the calculation. This method retains the advantage of the earlier Markov chain method that the time required is relatively insensitive to the number of illumination angles or observation angles for which the diffuse reflection is calculated. A technical write-up giving fuller details of the algorithm and a sample code are available from the author.
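The ''adding'' idea can be illustrated in a scalar toy that ignores angular dependence: two layers with reflectances r1, r2 and transmittances t1, t2 combine through a geometric series over repeated inter-layer reflections. This is a hedged sketch of the adding/doubling principle, not the Markov chain formalism itself, with illustrative single-layer values:

```python
def add_layers(r1, t1, r2, t2):
    """Combine two scattering layers (scalar toy): the factor
    1/(1 - r1*r2) sums all orders of reflection between the layers."""
    m = 1.0 / (1.0 - r1 * r2)      # geometric series of inter-reflections
    r = r1 + t1 * r2 * t1 * m      # combined reflectance seen from the top
    t = t1 * t2 * m                # combined transmittance
    return r, t

# Doubling: combine a thin layer with itself to build up optical depth
r, t = 0.01, 0.95
for _ in range(5):                 # each pass doubles the layer thickness
    r, t = add_layers(r, t, r, t)

assert 0.01 < r < 1.0              # reflectance grows with thickness
assert 0.0 < t < 0.95              # transmittance shrinks with thickness
```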
International Nuclear Information System (INIS)
Evenson, Grant
2012-01-01
Corrective Action Unit (CAU) 550 is located in Areas 7, 8, and 10 of the Nevada National Security Site, which is approximately 65 miles northwest of Las Vegas, Nevada. CAU 550, Smoky Contamination Area, comprises 19 corrective action sites (CASs). Based on process knowledge of the releases associated with the nuclear tests and radiological survey information about the location and shape of the resulting contamination plumes, it was determined that some of the CAS releases are co-located and will be investigated as study groups. This document describes the planned investigation of the following CASs (by study group): (1) Study Group 1, Atmospheric Test - CAS 08-23-04, Atmospheric Test Site T-2C; (2) Study Group 2, Safety Experiments - CAS 08-23-03, Atmospheric Test Site T-8B - CAS 08-23-06, Atmospheric Test Site T-8A - CAS 08-23-07, Atmospheric Test Site T-8C; (3) Study Group 3, Washes - Potential stormwater migration of contaminants from CASs; (4) Study Group 4, Debris - CAS 08-01-01, Storage Tank - CAS 08-22-05, Drum - CAS 08-22-07, Drum - CAS 08-22-08, Drums (3) - CAS 08-22-09, Drum - CAS 08-24-03, Battery - CAS 08-24-04, Battery - CAS 08-24-07, Batteries (3) - CAS 08-24-08, Batteries (3) - CAS 08-26-01, Lead Bricks (200) - CAS 10-22-17, Buckets (3) - CAS 10-22-18, Gas Block/Drum - CAS 10-22-19, Drum; Stains - CAS 10-22-20, Drum - CAS 10-24-10, Battery. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each study group. The results of the field investigation will support a defensible evaluation of viable CAAs that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed
Penenko, Alexey; Penenko, Vladimir
2014-05-01
Contact concentration measurement data assimilation is considered for convection-diffusion-reaction models originating in atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimizer of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure for the resulting analysis and reduces the need to calculate the model error covariance matrices that are sought in the conventional approach to data assimilation. The advantage comes at the cost of the adjoint problem solution. This issue is addressed within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data are assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solutions can be constructed based on the matrix sweep method. The data assimilation (or regularization) parameter that regulates the ratio between model and data in the resulting analysis is obtained with the Morozov discrepancy principle. For proper performance the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the chi-squared-based estimate used is an upper bound acts as the assimilation parameter. The solution obtained can be used as the initial guess for data assimilation algorithms that assimilate
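The "matrix sweep method" cited for the one-dimensional splitting stages is the classical Thomas algorithm for tridiagonal systems. A generic sketch follows; the stencil below is a toy diffusion operator, not the paper's model:

```python
import numpy as np

def thomas(a, b, c, d):
    """Tridiagonal solve by forward sweep + back substitution.
    a: sub-diagonal (len n-1), b: diagonal (len n),
    c: super-diagonal (len n-1), d: right-hand side (len n)."""
    n = len(b)
    cp = np.zeros(n - 1)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward sweep
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy 1D diffusion stencil: -u'' = f discretized as a tridiagonal system
n = 6
a = -np.ones(n - 1); c = -np.ones(n - 1); b = 2.0 * np.ones(n)
d = np.ones(n)
x = thomas(a, b, c, d)
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
assert np.allclose(A @ x, d)
```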
Betatron tune correction schemes in nuclotron
International Nuclear Information System (INIS)
Shchepunov, V.A.
1992-01-01
Algorithms of the betatron tune corrections in Nuclotron with sextupolar and octupolar magnets are considered. Second order effects caused by chromaticity correctors are taken into account and sextupolar compensation schemes are proposed to suppress them. 6 refs.; 1 tab
Remsberg, E. E.; Marshall, B. T.; Garcia-Comas, M.; Krueger, D.; Lingenfelser, G. S.; Martin-Torres, J.; Mlynczak, M. G.; Russell, J. M., III; Smith, A. K.; Zhao, Y.;
2008-01-01
The quality of the retrieved temperature-versus-pressure (or T(p)) profiles is described for the middle atmosphere for the publicly available Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) Version 1.07 (V1.07) data set. The primary sources of systematic error for the SABER results below about 70 km are (1) errors in the measured radiances, (2) biases in the forward model, and (3) uncertainties in the corrections for ozone and in the determination of the reference pressure for the retrieved profiles. Comparisons with other correlative data sets indicate that SABER T(p) is too high by 1-3 K in the lower stratosphere but then too low by 1 K near the stratopause and by 2 K in the middle mesosphere. There is little difference between the local thermodynamic equilibrium (LTE) algorithm results below about 70 km from V1.07 and V1.06, but there are substantial improvements/differences for the non-LTE results of V1.07 for the upper mesosphere and lower thermosphere (UMLT) region. In particular, the V1.07 algorithm uses monthly, diurnally averaged CO2 profiles versus latitude from the Whole Atmosphere Community Climate Model. This change has improved the consistency of the character of the tides in its kinetic temperature (T(sub k)). The T(sub k) profiles agree with UMLT values obtained from ground-based measurements of column-averaged OH and O2 emissions and of the Na lidar returns, at least within their mutual uncertainties. SABER T(sub k) values obtained near the mesopause with its daytime algorithm also agree well with the falling sphere climatology at high northern latitudes in summer. It is concluded that the SABER data set can be the basis for improved, diurnal-to-interannual-scale temperatures for the middle atmosphere and especially for its UMLT region.
Algorithms For Integrating Nonlinear Differential Equations
Freed, A. D.; Walker, K. P.
1994-01-01
Improved algorithms developed for use in numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with conventional integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. Accuracies attainable demonstrated by applying them to systems of nonlinear, first-order differential equations that arise in study of viscoplastic behavior, spread of acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.
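The improved integrators themselves are not specified in this brief; for context, a baseline explicit fourth-order Runge-Kutta step applied to one of the cited example systems (predator/prey) might look like the following, with coefficients and initial populations chosen purely for illustration:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def lotka_volterra(t, y, a=1.0, b=0.1, c=1.5, d=0.075):
    """Predator/prey (Lotka-Volterra) equations; parameters illustrative."""
    prey, pred = y
    return np.array([a * prey - b * prey * pred,
                     -c * pred + d * prey * pred])

y = np.array([10.0, 5.0])          # initial prey and predator populations
for _ in range(1000):              # integrate to t = 10 with h = 0.01
    y = rk4_step(lotka_volterra, 0.0, y, 0.01)

assert np.all(y > 0)               # populations stay positive
```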
Making the error-controlling algorithm of observable operator models constructive.
Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael
2009-12-01
Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. The run time is faster by at least one order of magnitude than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.
A 3D inversion for all-space magnetotelluric data with static shift correction
Zhang, Kun
2017-04-01
Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is an automatic computer processing technique with no additional cost: it avoids extra field work and indoor processing, and gives good results. The 3D inversion algorithm improves on Zhang et al. (2013), which is based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface stations, seabed stations, and underground stations can be used in the inversion algorithm.
Global intensity correction in dynamic scenes
Withagen, P.J.; Schutte, K.; Groen, F.C.A.
2007-01-01
Changing image intensities causes problems for many computer vision applications operating in unconstrained environments. We propose generally applicable algorithms to correct for global differences in intensity between images recorded with a static or slowly moving camera, regardless of the cause
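The paper's algorithms are not detailed in this excerpt; as a minimal illustration of global intensity correction, a single multiplicative gain between two images can be estimated in closed form by least squares. The gain model and synthetic data below are assumptions for the sketch:

```python
import numpy as np

def global_gain(reference, image):
    """Least-squares global gain g minimizing ||reference - g*image||^2;
    the closed-form solution is g = <ref, img> / <img, img>."""
    ref = reference.ravel().astype(float)
    img = image.ravel().astype(float)
    return float(ref @ img) / float(img @ img)

rng = np.random.default_rng(0)
ref = rng.uniform(50, 200, size=(32, 32))
img = 0.7 * ref                       # simulated global intensity change
g = global_gain(ref, img)
corrected = g * img                   # intensity-corrected image

assert abs(g - 1 / 0.7) < 1e-6        # recovered the inverse gain
assert np.allclose(corrected, ref)
```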
The influence of the atmosphere on geoid and potential coefficient determinations from gravity data
Rummel, R.; Rapp, R. H.
1976-01-01
For the precise computation of geoid undulations the effect of the attraction of the atmosphere on the solution of the basic boundary value problem of gravimetric geodesy must be considered. This paper extends the theory of Moritz for deriving an atmospheric correction to the case when the undulations are computed by combining anomalies in a cap surrounding the computation point with information derived from potential coefficients. The correction term is a function of the cap size and the topography within the cap. It reaches a value of 3.0 m for a cap size of 30 deg, variations on the decimeter level being caused by variations in the topography. The effect of the atmospheric correction terms on potential coefficients is found to be small, reaching a maximum of 0.0055 millionths at n = 2, m = 2 when terrestrial gravity data are considered. The magnitude of this correction indicates that in future potential coefficient determination from gravity data the atmospheric correction should be made to such data.
Assessing the Application of Cloud-Shadow Atmospheric Correction Algorithm on HICO
2014-05-01
August 30, 2011, and over northern Gulf of Mexico on March 13, 2012, for which in situ AERONET-OC data were also acquired from the Acqua Alta...chlorophyll fluorescence in eutrophic turbid waters is to fill the 670-nm reflectance trough and to augment the shorter wavelength shoulder of the 690...also be used for chlorophyll estimation in the turbid waters [26]. For the July 13, 2010, Azov Sea scene, Gitelson et al. [16] determined the
Robust Active Label Correction
DEFF Research Database (Denmark)
Kremer, Jan; Sha, Fei; Igel, Christian
2018-01-01
Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To minimize … . To select labels for correction, we adopt the active learning strategy of maximizing the expected model change. We consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing …
Goldengorin, B.; Ghosh, D.
Maximization of submodular functions on a ground set is an NP-hard combinatorial optimization problem. Data correcting algorithms are among the several algorithms suggested for solving this problem exactly and approximately. From the point of view of Hasse diagrams, data correcting algorithms use
Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei
2018-04-01
We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer layer resistivity based on the series relationship model of the layered resistance, the apparent resistivity and diffusion depth of the different time channels are approximately replaced by related model parameters; that is, we propose an apparent resistivity correction algorithm. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
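The Halley's optimisation referred to above is the cubically convergent root-finding iteration; here is a generic sketch applied to a toy equation, not to the whole-space transient field expression of the paper:

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method: x <- x - 2 f f' / (2 f'^2 - f f'').
    Cubically convergent near a simple root."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        dx = 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
        x -= dx
        if abs(dx) < tol:
            break
    return x

# Toy example: square root of 2 via f(x) = x^2 - 2
root = halley(lambda x: x * x - 2,   # f
              lambda x: 2 * x,       # f'
              lambda x: 2.0,         # f''
              x0=1.0)
assert abs(root**2 - 2) < 1e-10
```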
Weighted divergence correction scheme and its fast implementation
Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun
2017-05-01
Forcing experimental volumetric velocity fields to satisfy mass conservation principles has proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light-thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. This paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS or WDCS is developed, making the correction process significantly cheaper to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, showing that WDCS achieves better performance than DCS in improving some flow statistics.
Fast algorithm for Morphological Filters
International Nuclear Information System (INIS)
Lou Shan; Jiang Xiangqian; Scott, Paul J
2011-01-01
In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time-consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.
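For contrast with the proposed alpha-shape approach, the naive morphological closing it improves upon can be sketched for a 1D profile. This sketch uses a flat structuring element and simple edge padding (a simplification of the paper's disk element and reflective padding), costing O(n·k) per filter versus the proposed O(n log n):

```python
import numpy as np

def dilate(profile, k):
    """Naive gray-scale dilation: sliding max over a window of width 2k+1."""
    padded = np.pad(profile, k, mode='edge')   # edge padding at open ends
    return np.array([padded[i:i + 2 * k + 1].max()
                     for i in range(len(profile))])

def erode(profile, k):
    """Naive gray-scale erosion: sliding min over a window of width 2k+1."""
    padded = np.pad(profile, k, mode='edge')
    return np.array([padded[i:i + 2 * k + 1].min()
                     for i in range(len(profile))])

def closing(profile, k):
    """Morphological closing = dilation followed by erosion."""
    return erode(dilate(profile, k), k)

z = np.array([0., 0., 1., 0., 0., 0., 1., 0., 0.])   # toy surface profile
env = closing(z, 2)
assert np.all(env >= z)   # the closing envelope lies on or above the profile
```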
Biased Monte Carlo algorithms on unitary groups
International Nuclear Information System (INIS)
Creutz, M.; Gausterer, H.; Sanielevici, S.
1989-01-01
We introduce a general updating scheme for the simulation of physical systems defined on unitary groups, which eliminates the systematic errors due to inexact exponentiation of algebra elements. The essence is to work directly with group elements for the stochastic noise. Particular cases of the scheme include the algorithm of Metropolis et al., overrelaxation algorithms, and globally corrected Langevin and hybrid algorithms. The latter are studied numerically for the case of SU(3) theory
Impact of MODIS SWIR Band Calibration Improvements on Level-3 Atmospheric Products
Wald, Andrew; Levy, Robert; Angal, Amit; Geng, Xu; Xiong, Jack; Hoffman, Kurt
2016-01-01
The spectral reflectance measured by the MODIS reflective solar bands (RSB) is used for retrieving many atmospheric science products. The accuracy of these products depends on the accuracy of the calibration of the RSB. To this end, the RSB of the MODIS instruments are primarily calibrated on-orbit using regular solar diffuser (SD) observations. For wavelengths beyond 0.94 microns, the MODIS Characterization Support Team (MCST) developed, in MODIS Collection 6 (C6), a time-dependent correction using observations from pseudo-invariant earth-scene targets. This correction has been implemented in C6 for the Terra MODIS 1.24 micron band over the entire mission, and for the 1.375 micron band in the forward processing. As the instruments continue to operate beyond their design lifetime of six years, a similar correction is planned for other short-wave infrared (SWIR) bands as well. MODIS SWIR bands are used in deriving atmosphere products, including aerosol optical thickness, atmospheric total column water vapor, cloud fraction and cloud optical depth. The SD degradation correction in Terra bands 5 and 26 impacts the spectral radiance and therefore the retrieval of these atmosphere products. Here, we describe the corrections to Bands 5 (1.24 microns) and 26 (1.375 microns), and produce three sets (B5, B26 correction on/on, on/off, and off/off) of Terra-MODIS Level 1B (calibrated radiance product) data. By comparing products derived from these corrected and uncorrected Terra MODIS Level 1B (L1B) calibrations, dozens of L3 atmosphere products are surveyed for changes caused by the corrections, and representative results are presented. Aerosol and water vapor products show only small local changes, while some cloud products can change locally by > 10%, which is a large change.
Quantum Computations: Fundamentals and Algorithms
International Nuclear Information System (INIS)
Duplij, S.A.; Shapoval, I.I.
2007-01-01
Basic concepts of quantum information theory and the principles of quantum computation are reviewed, together with the possibility of building on this basis a device unique in computing power and operating principle: the quantum computer. The main building blocks of quantum logic and schemes for implementing quantum computations are presented, as well as some effective quantum algorithms known today that realize the advantages of quantum computation over classical computation. Among them a special place is held by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on the stability of a quantum computer, and methods of quantum error correction are described.
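As a concrete illustration of one of the named algorithms, Grover's search on two qubits (N = 4) can be simulated with dense linear algebra; for this special case a single oracle-plus-diffusion iteration already concentrates all amplitude on the marked item. The state-vector simulation below is a sketch, not a description of any particular quantum-computing framework:

```python
import numpy as np

# 2-qubit Grover search over N = 4 items, with one marked index
N, marked = 4, 2
state = np.full(N, 1 / np.sqrt(N))        # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1               # phase-flip the marked state

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about mean

state = diffusion @ (oracle @ state)      # one Grover iteration
probs = np.abs(state) ** 2

assert np.argmax(probs) == marked
assert probs[marked] > 0.99               # essentially certain after 1 step
```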
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-21
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
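The fitting step of the proposed correction can be mimicked in a 1D toy where the filtered-backprojection reconstructions are replaced by simple stand-in arrays: basis "images" are powers of the raw data, and their weights are found by least squares against the low-scatter prior. All data below are synthetic assumptions, and the reconstruction step is elided:

```python
import numpy as np

rng = np.random.default_rng(1)
prior = rng.uniform(0.5, 1.5, size=100)   # prior image with low scatter (toy 1D)
raw = prior + 0.2                         # scatter modeled as a shading offset

# Basis "images" from powers of the raw data (stand-ins for FBP reconstructions
# of the projection data raised to different powers)
basis = np.stack([raw**p for p in (0, 1, 2)], axis=1)

# Weights minimize || prior - sum_k w_k * basis_k ||
w, *_ = np.linalg.lstsq(basis, prior, rcond=None)
corrected = basis @ w                     # scatter-corrected estimate

# The weighted combination matches the prior far better than the raw data
assert np.mean((corrected - prior)**2) < np.mean((raw - prior)**2)
```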
International Nuclear Information System (INIS)
Boehlecke, Robert
2004-01-01
The six bunkers included in CAU 204 were primarily used to monitor atmospheric testing or store munitions. The 'Corrective Action Investigation Plan (CAIP) for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada' (NNSA/NV, 2002a) provides information relating to the history, planning, and scope of the investigation; therefore, it will not be repeated in this CADD. This CADD identifies potential corrective action alternatives and provides a rationale for the selection of a recommended corrective action alternative for each CAS within CAU 204. The evaluation of corrective action alternatives is based on process knowledge and the results of investigative activities conducted in accordance with the CAIP (NNSA/NV, 2002a) that was approved prior to the start of the Corrective Action Investigation (CAI). Record of Technical Change (ROTC) No. 1 to the CAIP (approval pending) documents changes to the preliminary action levels (PALs) agreed to by the Nevada Division of Environmental Protection (NDEP) and DOE, National Nuclear Security Administration Nevada Site Office (NNSA/NSO). This ROTC specifically discusses the radiological PALs and their application to the findings of the CAU 204 corrective action investigation. The scope of this CADD consists of the following: (1) Develop corrective action objectives; (2) Identify corrective action alternative screening criteria; (3) Develop corrective action alternatives; (4) Perform detailed and comparative evaluations of corrective action alternatives in relation to corrective action objectives and screening criteria; and (5) Recommend and justify a preferred corrective action alternative for each CAS within CAU 204
Construct validation of an interactive digital algorithm for ostomy care.
Beitz, Janice M; Gerlach, Mary A; Schafer, Vickie
2014-01-01
The purpose of this study was to evaluate construct validity for a previously face and content validated Ostomy Algorithm using digital real-life clinical scenarios. A cross-sectional, mixed-methods Web-based survey design study was conducted. Two hundred ninety-seven English-speaking RNs completed the study; participants practiced in both acute care and postacute settings, with 1 expert ostomy nurse (WOC nurse) and 2 nonexpert nurses. Following written consent, respondents answered demographic questions and completed a brief algorithm tutorial. Participants were then presented with 7 ostomy-related digital scenarios consisting of real-life photos and pertinent clinical information. Respondents used the 11 assessment components of the digital algorithm to choose management options. Participant written comments about the scenarios and the research process were collected. The mean overall percentage of correct responses was 84.23%. Mean percentage of correct responses for respondents with a self-reported basic ostomy knowledge was 87.7%; for those with a self-reported intermediate ostomy knowledge was 85.88% and those who were self-reported experts in ostomy care achieved 82.77% correct response rate. Five respondents reported having no prior ostomy care knowledge at screening and achieved an overall 45.71% correct response rate. No negative comments regarding the algorithm were recorded by participants. The new standardized Ostomy Algorithm remains the only face, content, and construct validated digital clinical decision instrument currently available. Further research on application at the bedside while tracking patient outcomes is warranted.
Risk assessment of atmospheric emissions using machine learning
Cervone, G.; Franzese, P.; Ezber, Y.; Boybeyi, Z.
2008-01-01
Supervised and unsupervised machine learning algorithms are used to perform statistical and logical analysis of several transport and dispersion model runs which simulate emissions from a fixed source under different atmospheric conditions.
First, a clustering algorithm is used to automatically group the results of different transport and dispersion simulations according to specific cloud characteristics. Then, a symbolic classification algorithm is employed to find complex non-linear relationships between the meteorological input conditions and each cluster of clouds.
Directory of Open Access Journals (Sweden)
Rodrigo Moura Pereira
2016-06-01
Full Text Available Large farmland areas and the knowledge on the interaction between solar radiation and vegetation canopies have increased the use of data from orbital remote sensors in sugarcane monitoring. However, the constituents of the atmosphere affect the reflectance values obtained by imaging sensors. This study aimed at improving a sugarcane Leaf Area Index (LAI) estimation model based on the Normalized Difference Vegetation Index (NDVI) subjected to atmospheric correction. The model generated by the NDVI with atmospheric correction showed the best results (R2 = 0.84; d = 0.95; MAE = 0.44; RMSE = 0.55), in relation to the other models compared. LAI estimation with this model, during the sugarcane plant cycle, reached a maximum of 4.8 at the vegetative growth phase and 2.3 at the end of the maturation phase. Thus, the use of atmospheric correction to estimate the sugarcane LAI is recommended, since this procedure increases the correlations between the LAI estimated by image and by plant parameters.
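A minimal sketch of the NDVI-to-LAI chain described above (the regression form and all coefficients below are invented for illustration, not the study's fitted model):

```python
# NDVI from atmospherically corrected surface reflectances, followed by an
# empirical LAI regression. Coefficients a and b are made up, not the paper's.
def ndvi(red, nir):
    return (nir - red) / (nir + red)

def lai(ndvi_value, a=0.18, b=4.8):
    # hypothetical quadratic regression: LAI = b * NDVI^2 + a
    return b * ndvi_value ** 2 + a

v = ndvi(red=0.05, nir=0.45)
print(round(v, 2), round(lai(v), 2))
```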
Exact fan-beam and 4π-acquisition cone-beam SPECT algorithms with uniform attenuation correction
International Nuclear Information System (INIS)
Tang Qiulin; Zeng, Gengsheng L.; Wu Jiansheng; Gullberg, Grant T.
2005-01-01
This paper presents analytical fan-beam and cone-beam reconstruction algorithms that compensate for uniform attenuation in single photon emission computed tomography. First, a fan-beam algorithm is developed by obtaining a relationship between the two-dimensional (2D) Fourier transform of parallel-beam projections and fan-beam projections. Using this relationship, 2D Fourier transforms of equivalent parallel-beam projection data are obtained from the fan-beam projection data. Then a quasioptimal analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan, is used to reconstruct the image. A cone-beam algorithm is developed by extending the fan-beam algorithm to 4π solid angle geometry. The cone-beam algorithm is also an exact algorithm
Hu, Chuanmin; Lee, Zhongping; Muller-Karger, Frank E.; Carder, Kendall L.
2003-05-01
A spectra-matching optimization algorithm, designed for hyperspectral sensors, has been implemented to process SeaWiFS-derived multi-spectral water-leaving radiance data. The algorithm has been tested over Southwest Florida coastal waters. The total spectral absorption and backscattering coefficients can be well partitioned with the inversion algorithm, resulting in RMS errors generally less than 5% in the modeled spectra. For extremely turbid waters that come from either river runoff or sediment resuspension, the RMS error is in the range of 5-15%. The bio-optical parameters derived in this optically complex environment agree well with those obtained in situ. Further, the ability to separate backscattering (a proxy for turbidity) from the satellite signal makes it possible to trace water movement patterns, as indicated by the total absorption imagery. The derived patterns agree with those from concurrent surface drifters. For waters where CDOM overwhelmingly dominates the optical signal, however, the procedure tends to regard CDOM as the sole source of absorption, implying the need for better atmospheric correction and for adjustment of some model coefficients for this particular region.
Inverse correction of Fourier transforms for one-dimensional strongly ...
African Journals Online (AJOL)
Hsin Ying-Fei
2016-05-01
As it is widely used in periodic lattice design theory and is particularly useful in aperiodic lattice design [12,13], the accuracy of the FT algorithm under strong scattering conditions is the focus of this paper. We propose an inverse correction approach for the inaccurate FT algorithm in strongly scattering ...
Risk assessment of atmospheric emissions using machine learning
Directory of Open Access Journals (Sweden)
G. Cervone
2008-09-01
Full Text Available Supervised and unsupervised machine learning algorithms are used to perform statistical and logical analysis of several transport and dispersion model runs which simulate emissions from a fixed source under different atmospheric conditions.
First, a clustering algorithm is used to automatically group the results of different transport and dispersion simulations according to specific cloud characteristics. Then, a symbolic classification algorithm is employed to find complex non-linear relationships between the meteorological input conditions and each cluster of clouds. The patterns discovered are provided in the form of probabilistic measures of contamination, thus suitable for result interpretation and dissemination.
The learned patterns can be used for quick assessment of the areas at risk and of the fate of potentially hazardous contaminants released in the atmosphere.
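The first (clustering) stage of the two-stage scheme above can be sketched as follows; the cloud descriptors, values, and the use of k-means are illustrative assumptions, not the study's actual features or algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated "simulation outputs": each row is one dispersion run described by
# invented cloud descriptors (plume length, width, peak concentration).
clouds = np.vstack([
    rng.normal([10.0, 2.0, 5.0], 0.5, (20, 3)),   # compact, concentrated plumes
    rng.normal([40.0, 8.0, 1.0], 0.5, (20, 3)),   # long, diluted plumes
])

def kmeans(x, k=2, iters=50):
    """Minimal k-means clustering, standing in for stage one of the method."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(clouds)
# Stage two (the symbolic classifier mapping meteorological inputs to each
# cluster) would then be trained on these labels; omitted here for brevity.
print(len(labels), sorted({int(l) for l in labels}))
```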
DNA-based watermarks using the DNA-Crypt algorithm
Directory of Open Access Journals (Sweden)
Barnekow Angelika
2007-05-01
Full Text Available Abstract Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.
DNA-based watermarks using the DNA-Crypt algorithm.
Heider, Dominik; Barnekow, Angelika
2007-05-29
The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.
DNA-based watermarks using the DNA-Crypt algorithm
Heider, Dominik; Barnekow, Angelika
2007-01-01
Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434
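The least-significant-base idea shared by the three records above can be illustrated with a toy synonymous-codon scheme (the two-codon table below is invented and far smaller than any real watermarking code; it is not DNA-Crypt's actual encoding):

```python
# Hide one bit per codon in the third ("wobble") position by choosing between
# two synonymous codons, so the translated protein is unchanged.
SYNONYMS = {  # amino acid -> (codon encoding bit 0, codon encoding bit 1)
    "F": ("TTT", "TTC"),
    "L": ("CTT", "CTC"),
    "P": ("CCT", "CCC"),
}
CODON_TO_AA = {c: aa for aa, pair in SYNONYMS.items() for c in pair}

def embed(codons, bits):
    """Replace each codon by the synonym carrying the next watermark bit."""
    out, it = [], iter(bits)
    for codon in codons:
        bit = next(it, None)
        out.append(codon if bit is None else SYNONYMS[CODON_TO_AA[codon]][bit])
    return out

def extract(codons):
    """Read the watermark back from the synonym choices."""
    return [SYNONYMS[CODON_TO_AA[c]].index(c) for c in codons]

gene = ["TTT", "CTC", "CCT", "TTC"]
marked = embed(gene, [1, 0, 1, 1])
# Translation is unchanged: same amino acids before and after embedding.
assert [CODON_TO_AA[c] for c in marked] == [CODON_TO_AA[c] for c in gene]
print("".join(marked))  # → TTCCTTCCCTTC
```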
Moisture Forecast Bias Correction in GEOS DAS
Dee, D.
1999-01-01
Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can be easily incorporated in an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.
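A minimal sketch of a sequential bias estimator of this general kind (the gain, bias, and noise values are invented; this is not the GEOS DAS scheme itself):

```python
import random
random.seed(1)

def assimilate(truth, bias=1.5, noise=0.3, gain=0.2, steps=200):
    """Running bias estimate updated from forecast-minus-observation innovations."""
    b_hat, residuals = 0.0, []
    for _ in range(steps):
        obs = truth + random.gauss(0.0, noise)   # rawinsonde-like observation
        forecast = truth + bias                  # systematically biased forecast
        corrected = forecast - b_hat             # bias-corrected forecast
        b_hat += gain * (corrected - obs)        # sequential bias update
        residuals.append(corrected - obs)
    return b_hat, residuals

b_hat, residuals = assimilate(truth=10.0)
print(round(b_hat, 2))  # the estimate settles near the imposed bias of 1.5
```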
Design Strategy for a Pipelined ADC Employing Digital Post-correction
Harpe, P.J.A.; Zanikopoulos, A.; Hegt, J.A.; Roermund, van A.H.M.
2004-01-01
This paper describes how the usage of digital post-correction techniques in pipelined analog-to-digital converters (ADCs) can be exploited optimally during the design phase of the converter. It is known that post-correction algorithms reduce the influence of several circuit impairments on the
Algorithms for orbit control on SPEAR
International Nuclear Information System (INIS)
Corbett, J.; Keeley, D.; Hettel, R.; Linscott, I.; Sebek, J.
1994-06-01
A global orbit feedback system has been installed on SPEAR to help stabilize the position of the photon beams. The orbit control algorithms depend on either harmonic reconstruction of the orbit or eigenvector decomposition. The orbit motion is corrected by dipole corrector kicks determined from the inverse corrector-to-BPM response matrix. This paper outlines features of these control algorithms as applied to SPEAR.
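The corrector-kick computation can be illustrated with a small synthetic response matrix (dimensions and values are made up; SPEAR's actual matrices and harmonic/eigenvector machinery are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

n_bpm, n_corr = 12, 6
R = rng.normal(size=(n_bpm, n_corr))   # BPM displacement per unit corrector kick
x0 = R @ rng.normal(size=n_corr)       # a distortion the correctors can reach

kicks = -np.linalg.pinv(R) @ x0        # least-squares corrector settings
residual = x0 + R @ kicks              # orbit after applying the kicks

print(np.linalg.norm(residual) < 1e-9)  # True: reachable distortion is removed
```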
Hologram production and representation for corrected image
Jiao, Gui Chao; Zhang, Rui; Su, Xue Mei
2015-12-01
In this paper, a CCD sensor is used to record distorted homemade grid images taken by a wide-angle camera. The distorted images are corrected using position calibration and gray-level correction methods implemented with VC++ 6.0 and OpenCV software. Holograms of the corrected pictures are then produced. Clearly reproduced images are obtained by applying the Fresnel algorithm in the reconstruction, reducing the object and reference light from Fresnel diffraction to remove the zero-order part of the reproduced images. The investigation is useful in optical information processing and image encryption transmission.
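A single-FFT Fresnel propagation with crude zero-order suppression, in the spirit of the reconstruction described above (grid, wavelength, distance, and the DC-masking step are all illustrative assumptions, not the paper's parameters):

```python
import numpy as np

N, dx, wl, z = 256, 10e-6, 633e-9, 0.05     # grid, pixel pitch, wavelength, distance
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)

hologram = np.ones((N, N))
hologram[96:160, 96:160] = 0.0              # a simple square "object" in the hologram

# Single-FFT Fresnel transform (constant phase factors omitted).
field = hologram * np.exp(1j * np.pi / (wl * z) * (X ** 2 + Y ** 2))
spectrum = np.fft.fftshift(np.fft.fft2(field))
spectrum[N // 2, N // 2] = 0.0              # crude zero-order (DC) suppression
recon = np.abs(spectrum)

print(recon.shape, float(recon[N // 2, N // 2]))
```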
A correctness proof of sorting by means of formal procedures
Fokkinga, M.M.
1987-01-01
We consider a recursive sorting algorithm in which, in each invocation, a new variable and a new procedure (using the variable globally) are defined and the procedure is passed to recursive calls. This algorithm is proved correct with Hoare-style pre- and postassertions. We also discuss the same
Fully 3D refraction correction dosimetry system
International Nuclear Information System (INIS)
Manjappa, Rakesh; Makki, S Sharath; Kanhirodan, Rajan; Kumar, Rajesh; Vasu, Ram Mohan
2016-01-01
The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive index corrected ART (ART-rc) algorithm. Reconstruction based on the FDK algorithm for cone-beam tomography has also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched
Fully 3D refraction correction dosimetry system.
Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan
2016-02-21
The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive index corrected ART (ART-rc) algorithm. Reconstruction based on the FDK algorithm for cone-beam tomography has also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched
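The conventional ART iteration underlying both records above can be sketched with a Kaczmarz-style update (the system matrix is a tiny made-up example; the refraction-corrected ray paths that distinguish ART-rc are omitted):

```python
import numpy as np

def art(A, p, iters=200, relax=0.5):
    """Kaczmarz-style ART: sweep over rays, projecting onto each ray equation."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for a_i, p_i in zip(A, p):
            x += relax * (p_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[1.0, 1.0, 0.0],      # each row: which voxels a ray crosses
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([0.2, 0.5, 0.3])  # "dose" values to recover
x = art(A, A @ x_true)
print(np.round(x, 3))  # → [0.2 0.5 0.3]
```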
Calculation of infrared radiation in the atmosphere by a numerical method
International Nuclear Information System (INIS)
Nunes, G.S.S.; Viswanadham, Y.
1981-01-01
A numerical method is described for the calculation of the atmospheric infrared flux and radiative cooling rate in the atmosphere. It is suitable for use at all levels below the lower stratosphere. The square-root pressure correction factor is incorporated in the computation of the corrected optical depth. The water vapour flux emissivity data of Staley and Jurica are used in the model. The versatility of the computing scheme suggests that this method is adequate to evaluate infrared flux and flux divergence in problems involving a large amount of atmospheric data. (Author)
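The square-root pressure correction of the water-vapour path can be sketched as follows (the layer values and the trapezoidal quadrature are illustrative, not the paper's scheme):

```python
import math

G, P0 = 9.81, 101325.0  # gravity (m/s^2), reference surface pressure (Pa)

def corrected_path(levels):
    """levels: list of (pressure Pa, specific humidity kg/kg), top to bottom.

    Pressure-scaled water-vapour path u* = (1/g) * integral of q*sqrt(p/p0) dp,
    accumulated layer by layer with the trapezoid rule (kg/m^2).
    """
    u = 0.0
    for (p1, q1), (p2, q2) in zip(levels, levels[1:]):
        f1 = q1 * math.sqrt(p1 / P0)
        f2 = q2 * math.sqrt(p2 / P0)
        u += 0.5 * (f1 + f2) * (p2 - p1) / G
    return u

# A made-up moist profile from 300 hPa down to 900 hPa.
profile = [(30000.0, 0.0004), (50000.0, 0.002), (70000.0, 0.005), (90000.0, 0.009)]
print(round(corrected_path(profile), 1))
```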
Correction of longitudinal errors in accelerators for heavy-ion fusion
International Nuclear Information System (INIS)
Sharp, W.M.; Callahan, D.A.; Barnard, J.J.; Langdon, A.B.; Fessenden, T.J.
1993-01-01
Longitudinal space-charge waves develop on a heavy-ion inertial-fusion pulse from initial mismatches or from inappropriately timed or shaped accelerating voltages. Without correction, waves moving backward along the beam can grow due to the interaction with their resistively retarded image fields, eventually degrading the longitudinal emittance. A simple correction algorithm is presented here that uses a time-dependent axial electric field to reverse the direction of backward-moving waves. The image fields then damp these forward-moving waves. The method is demonstrated by fluid simulations of an idealized inertial-fusion driver, and practical problems in implementing the algorithm are discussed.
Correction of rotational distortion for catheter-based en face OCT and OCT angiography
Ahsen, Osman O.; Lee, Hsiang-Chieh; Giacomelli, Michael G.; Wang, Zhao; Liang, Kaicheng; Tsai, Tsung-Han; Potsaid, Benjamin; Mashimo, Hiroshi; Fujimoto, James G.
2015-01-01
We demonstrate a computationally efficient method for correcting the nonuniform rotational distortion (NURD) in catheter-based imaging systems to improve endoscopic en face optical coherence tomography (OCT) and OCT angiography. The method performs nonrigid registration using fiducial markers on the catheter to correct rotational speed variations. Algorithm performance is investigated with an ultrahigh-speed endoscopic OCT system and micromotor catheter. Scan nonuniformity is quantitatively characterized, and artifacts from rotational speed variations are significantly reduced. Furthermore, we present endoscopic en face OCT and OCT angiography images of human gastrointestinal tract in vivo to demonstrate the image quality improvement using the correction algorithm. PMID:25361133
Theoretical algorithms for satellite-derived sea surface temperatures
Barton, I. J.; Zavody, A. M.; O'Brien, D. M.; Cutten, D. R.; Saunders, R. W.; Llewellyn-Jones, D. T.
1989-03-01
Reliable climate forecasting using numerical models of the ocean-atmosphere system requires accurate data sets of sea surface temperature (SST) and surface wind stress. Global sets of these data will be supplied by the instruments to fly on the ERS 1 satellite in 1990. One of these instruments, the Along-Track Scanning Radiometer (ATSR), has been specifically designed to provide SST in cloud-free areas with an accuracy of 0.3 K. The expected capabilities of the ATSR can be assessed using transmission models of infrared radiative transfer through the atmosphere. The performances of several different models are compared by estimating the infrared brightness temperatures measured by the NOAA 9 AVHRR for three standard atmospheres. Of these, a computationally quick spectral band model is used to derive typical AVHRR and ATSR SST algorithms in the form of linear equations. These algorithms show that a low-noise 3.7-μm channel is required to give the best satellite-derived SST and that the design accuracy of the ATSR is likely to be achievable. The inclusion of extra water vapor information in the analysis did not improve the accuracy of multiwavelength SST algorithms, but some improvement was noted with the multiangle technique. Further modeling is required with atmospheric data that include both aerosol variations and abnormal vertical profiles of water vapor and temperature.
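The linear SST algorithms discussed above take the split-window form SST = a0 + a1*T11 + a2*(T11 - T12); here is a sketch with fabricated matchups (the fitted coefficients are not ATSR or AVHRR operational values):

```python
import numpy as np

rng = np.random.default_rng(3)

# Fabricated matchup set: water vapour depresses the 12 um brightness
# temperature more than the 11 um one, which the channel difference exploits.
sst = rng.uniform(272.0, 303.0, 500)                # "true" SST (K)
wv = rng.uniform(0.0, 5.0, 500)                     # column water vapour proxy
t11 = sst - 0.4 * wv + rng.normal(0, 0.05, 500)     # 11 um brightness temp
t12 = sst - 0.9 * wv + rng.normal(0, 0.05, 500)     # 12 um: stronger absorption

X = np.column_stack([np.ones_like(t11), t11, t11 - t12])
a = np.linalg.lstsq(X, sst, rcond=None)[0]          # fit a0, a1, a2
rmse = np.sqrt(np.mean((X @ a - sst) ** 2))
print(round(float(rmse), 3))  # residual scatter of the two-channel algorithm
```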
Projection correction for the pixel-by-pixel basis in diffraction enhanced imaging
International Nuclear Information System (INIS)
Huang Zhifeng; Kang Kejun; Li Zheng
2006-01-01
Theories and methods of x-ray diffraction enhanced imaging (DEI) and computed tomography based on DEI (DEI-CT) have been investigated recently. However, the phenomenon of projection offsets, which may affect the accuracy of the results of extraction methods for refraction-angle images and of reconstruction algorithms for DEI-CT, has seldom been considered. This paper focuses on it. Projection offsets are revealed distinctly according to the equivalent rectilinear propagation model of the DEI. An effective correction method using the equivalent positions of projection data is then presented to eliminate the errors induced by projection offsets. The correction method is validated by a computer simulation experiment, and extraction methods or reconstruction algorithms based on the corrected data can give more accurate results. The limitations of the correction method are discussed at the end.
Palacios, S. L.; Thompson, D. R.; Kudela, R. M.; Negrey, K.; Guild, L. S.; Gao, B. C.; Green, R. O.; Torres-Perez, J. L.
2015-12-01
There is a need in the ocean color community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand ocean biodiversity, to track energy flow through ecosystems, and to identify and monitor for harmful algal blooms. Imaging spectrometer measurements enable use of sophisticated spectroscopic algorithms for applications such as differentiating among coral species, evaluating iron stress of phytoplankton, and discriminating phytoplankton taxa. These advanced algorithms rely on the fine scale, subtle spectral shape of the atmospherically corrected remote sensing reflectance (Rrs) spectrum of the ocean surface. As a consequence, these algorithms are sensitive to inaccuracies in the retrieved Rrs spectrum that may be related to the presence of nearby clouds, inadequate sensor calibration, low sensor signal-to-noise ratio, glint correction, and atmospheric correction. For the HyspIRI Airborne Campaign, flight planning considered optimal weather conditions to avoid flights with significant cloud/fog cover. Although best suited for terrestrial targets, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has enough signal for some coastal chlorophyll algorithms and meets sufficient calibration requirements for most channels. However, the coastal marine environment has special atmospheric correction needs due to error that may be introduced by aerosols and terrestrially sourced atmospheric dust and riverine sediment plumes. For this HyspIRI campaign, careful attention has been given to the correction of AVIRIS imagery of the Monterey Bay to optimize ocean Rrs retrievals for use in estimating chlorophyll (OC3 algorithm) and phytoplankton functional type (PHYDOTax algorithm) data products. This new correction method has been applied to several image collection dates during two oceanographic seasons - upwelling and the warm, stratified oceanic period for 2013 and 2014. These two periods are dominated by either diatom blooms (occasionally
A new bio-optical algorithm for the remote sensing of algal blooms in complex ocean waters
Shanmugam, Palanisamy
2011-04-01
A new bio-optical algorithm has been developed to provide accurate assessments of chlorophyll a (Chl a) concentration for detection and mapping of algal blooms from satellite data in optically complex waters, where the presence of suspended sediments and dissolved substances can interfere with phytoplankton signal and thus confound conventional band ratio algorithms. A global data set of concurrent measurements of pigment concentration and radiometric reflectance was compiled and used to develop this algorithm that uses the normalized water-leaving radiance ratios along with an algal bloom index (ABI) between three visible bands to determine Chl a concentrations. The algorithm is derived using Sea-viewing Wide Field-of-view Sensor bands, and it is subsequently tuned to be applicable to Moderate Resolution Imaging Spectroradiometer (MODIS)/Aqua data. When compared with large in situ data sets and satellite matchups in a variety of coastal and ocean waters the present algorithm makes good retrievals of the Chl a concentration and shows statistically significant improvement over current global algorithms (e.g., OC3 and OC4v4). An examination of the performance of these algorithms on several MODIS/Aqua images in complex waters of the Arabian Sea and west Florida shelf shows that the new algorithm provides a better means for detecting and differentiating algal blooms from other turbid features, whereas the OC3 algorithm has significant errors although yielding relatively consistent results in clear waters. These findings imply that, provided that an accurate atmospheric correction scheme is available to deal with complex waters, the current MODIS/Aqua, MERIS and OCM data could be extensively used for quantitative and operational monitoring of algal blooms in various regional and global waters.
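A band-ratio form of the OC3/OC4 type mentioned above can be sketched as follows (the polynomial coefficients are placeholders, not the operational values, and the new algorithm's additional ABI term is not reproduced):

```python
import math

A = [0.283, -2.753, 1.457, 0.659, -1.403]   # placeholder polynomial coefficients

def chl_oc3(rrs443, rrs489, rrs555):
    """Maximum band ratio: larger blue band over green, log10-polynomial."""
    r = math.log10(max(rrs443, rrs489) / rrs555)
    return 10 ** sum(a * r ** i for i, a in enumerate(A))

clear = chl_oc3(0.008, 0.006, 0.002)   # high blue/green ratio: clear water
green = chl_oc3(0.002, 0.003, 0.004)   # low ratio: pigmented, bloom-like water
print(clear < green)  # → True: a lower ratio maps to higher chlorophyll
```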
Energy Technology Data Exchange (ETDEWEB)
Grant Evenson
2012-05-01
Corrective Action Unit (CAU) 550 is located in Areas 7, 8, and 10 of the Nevada National Security Site, which is approximately 65 miles northwest of Las Vegas, Nevada. CAU 550, Smoky Contamination Area, comprises 19 corrective action sites (CASs). Based on process knowledge of the releases associated with the nuclear tests and radiological survey information about the location and shape of the resulting contamination plumes, it was determined that some of the CAS releases are co-located and will be investigated as study groups. This document describes the planned investigation of the following CASs (by study group): (1) Study Group 1, Atmospheric Test - CAS 08-23-04, Atmospheric Test Site T-2C; (2) Study Group 2, Safety Experiments - CAS 08-23-03, Atmospheric Test Site T-8B - CAS 08-23-06, Atmospheric Test Site T-8A - CAS 08-23-07, Atmospheric Test Site T-8C; (3) Study Group 3, Washes - Potential stormwater migration of contaminants from CASs; (4) Study Group 4, Debris - CAS 08-01-01, Storage Tank - CAS 08-22-05, Drum - CAS 08-22-07, Drum - CAS 08-22-08, Drums (3) - CAS 08-22-09, Drum - CAS 08-24-03, Battery - CAS 08-24-04, Battery - CAS 08-24-07, Batteries (3) - CAS 08-24-08, Batteries (3) - CAS 08-26-01, Lead Bricks (200) - CAS 10-22-17, Buckets (3) - CAS 10-22-18, Gas Block/Drum - CAS 10-22-19, Drum; Stains - CAS 10-22-20, Drum - CAS 10-24-10, Battery. These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives (CAAs). Additional information will be obtained by conducting a corrective action investigation before evaluating CAAs and selecting the appropriate corrective action for each study group. The results of the field investigation will support a defensible evaluation of viable CAAs that will be presented in the Corrective Action Decision Document. The sites will be investigated based on the data quality objectives (DQOs) developed
Upper Bounds on the Number of Errors Corrected by the Koetter–Vardy Algorithm
DEFF Research Database (Denmark)
Justesen, Jørn
2007-01-01
By introducing a few simplifying assumptions we derive a simple condition for successful decoding using the Koetter-Vardy algorithm for soft-decision decoding of Reed-Solomon codes. We show that the algorithm has a significant advantage over hard decision decoding when the code rate is low, when ...
Bio-Inspired Genetic Algorithms with Formalized Crossover Operators for Robotic Applications.
Zhang, Jie; Kang, Man; Li, Xiaojuan; Liu, Geng-Yang
2017-01-01
Genetic algorithms are widely adopted to solve optimization problems in robotic applications. In such safety-critical systems, it is vitally important to formally prove correctness when genetic algorithms are applied. This paper focuses on formal modeling of crossover operations, which are among the most important operations in genetic algorithms. Specifically, we formalize, for the first time, crossover operations with higher-order logic based on HOL4, which is easy to deploy given its user-friendly programming environment. With correctness-guaranteed formalized crossover operations, we can safely apply them in robotic applications. We implement our technique to solve a path planning problem using a genetic algorithm with our formalized crossover operations, and the results show the effectiveness of our technique.
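A single-point crossover, together with the kind of correctness property such formalizations prove, can be sketched as follows (this is ordinary Python, not the paper's HOL4 formalization):

```python
def crossover(p1, p2, point):
    """Single-point crossover; parents must share a length and 0 < point < len."""
    assert 0 < point < len(p1) == len(p2)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

p1, p2 = [0, 1, 2, 3, 4], [5, 6, 7, 8, 9]
c1, c2 = crossover(p1, p2, 2)
print(c1, c2)  # → [0, 1, 7, 8, 9] [5, 6, 2, 3, 4]

# Correctness property of the kind a formal model would prove: at every locus
# the children's genes are exactly the parents' genes.
assert all({a, b} == {c, d} for a, b, c, d in zip(p1, p2, c1, c2))
```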
Effect of Inhomogeneity correction for lung volume model in TPS
International Nuclear Information System (INIS)
Chung, Se Young; Lee, Sang Rok; Kim, Young Bum; Kwon, Young Ho
2004-01-01
The phantom that includes high density materials such as steel was custom-made to fix lung and bone in order to evaluation inhomogeneity correction at the time of conducting radiation therapy to treat lung cancer. Using this, values resulting from the inhomogeneous correction algorithm are compared on the 2 and 3 dimensional radiation therapy planning systems. Moreover, change in dose calculation was evaluated according to inhomogeneous by comparing with the actual measurement. As for the image acquisition, inhomogeneous correction phantom(Pig's vertebra, steel(8.21 g/cm 3 ), cork(0.23 g/cm 3 )) that was custom-made and the CT(Volume zoom, Siemens, Germany) were used. As for the radiation therapy planning system, Marks Plan(2D) and XiO(CMS, USA, 3D) were used. To compare with the measurement value, linear accelerator(CL/1800, Varian, USA) and ion chamber were used. Image, obtained from the CT was used to obtain point dose and dose distribution from the region of interest (ROI) while on the radiation therapy planning device. After measurement was conducted under the same conditions, value on the treatment planning device and measured value were subjected to comparison and analysis. And difference between the resulting for the evaluation on the use (or non-use) of inhomogeneity correction algorithm, and diverse inhomogeneity correction algorithm that is included in the radiation therapy planning device was compared as well. As result of comparing the results of measurement value on the region of interest within the inhomogeneity correction phantom and the value that resulted from the homogeneous and inhomogeneous correction, gained from the therapy planning device, margin of error of the measurement value and inhomogeneous correction value at the location 1 of the lung showed 0.8% on 2D and 0.5% on 3D. Margin of error of the measurement value and inhomogeneous correction value at the location 1 of the steel showed 12% on 2D and 5% on 3D, however, it is possible to
Research and implementation of finger-vein recognition algorithm
Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin
2017-06-01
In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to a bidirectional gray-projection method. Inspired by the fact that features in vein areas have a valley-like appearance, a novel method is proposed to extract the center and width of palm veins based on multi-directional gradients; it is easy to compute, quick and stable. On this basis, an encoding method is designed to determine the gray-value distribution of the texture image, which effectively overcomes texture-extraction errors at the edges. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray-value matching algorithm. Experimental results on pairs of matched palm images show that the proposed method achieves an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has clear advantages in texture extraction efficiency, matching accuracy and algorithm efficiency.
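The equal error rate (EER) quoted above is the operating point where false acceptances and false rejections balance. A minimal threshold-sweep estimator (an illustration, not the authors' matching pipeline) might look like:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep a threshold over all scores; the EER is where the false
    rejection rate (genuine scores below threshold) meets the false
    acceptance rate (impostor scores at/above threshold).
    Convention here: higher score = better match."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine < t).mean() for t in thresholds])
    far = np.array([(impostor >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(frr - far))  # closest crossing of the two curves
    return (frr[i] + far[i]) / 2.0
```

With well-separated genuine and impostor score distributions the estimate approaches the overlap of their tails.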
Glaser, Johann; Beisteiner, Roland; Bauer, Herbert; Fischmeister, Florian Ph S
2013-11-09
In concurrent EEG/fMRI recordings, EEG data are impaired by fMRI gradient artifacts that exceed the EEG signal by several orders of magnitude. While several algorithms exist to correct the EEG data, they lack the flexibility to leave out or add new processing steps. The open-source MATLAB toolbox FACET presented here is a modular toolbox for fast and flexible correction and evaluation of imaging artifacts in concurrently recorded EEG datasets. It consists of an Analysis, a Correction and an Evaluation framework, allowing the user to choose among different artifact correction methods with various pre- and post-processing steps to form flexible combinations. The quality of the chosen correction approach can then be evaluated and compared across different settings. FACET was evaluated on a dataset provided with the FMRIB plugin for EEGLAB using two different correction approaches: Averaged Artifact Subtraction (AAS; Allen et al., NeuroImage 12(2):230-239, 2000) and fMRI Artifact Slice Template Removal (FASTR; Niazy et al., NeuroImage 28(3):720-737, 2005). The obtained results were compared to the FASTR algorithm implemented in the EEGLAB plugin FMRIB. No differences were found between the FACET implementation of FASTR and the original algorithm across all gradient-artifact-relevant performance indices. The FACET toolbox not only provides facilities for all three modalities, data analysis, artifact correction, and evaluation and documentation of the results, but also offers an easily extendable framework for the development and evaluation of new approaches.
Task Refinement for Autonomous Robots using Complementary Corrective Human Feedback
Directory of Open Access Journals (Sweden)
Cetin Mericli
2011-06-01
A robot can perform a given task through a policy that maps its sensed state to appropriate actions. We assume that a hand-coded controller can achieve such a mapping only for the basic cases of the task. Refining the controller becomes harder and more tedious and error-prone as the complexity of the task increases. In this paper, we present a new learning-from-demonstration approach to improve the robot's performance through the use of corrective human feedback as a complement to an existing hand-coded algorithm. The human teacher observes the robot as it performs the task using the hand-coded algorithm and takes over control to correct the behavior when the robot selects a wrong action. Corrections are captured as new state-action pairs, and during autonomous execution the default controller output is replaced by the demonstrated corrections whenever the current state of the robot is determined to be similar to a previously corrected state in the correction database. The proposed approach is applied to a complex ball-dribbling task performed against stationary defender robots in a robot soccer scenario using physical Aldebaran Nao humanoid robots. The results of our experiments show an improvement in the robot's performance when the default hand-coded controller is augmented with corrective human demonstration.
Generalised Batho correction factor
International Nuclear Information System (INIS)
Siddon, R.L.
1984-01-01
There are various approximate algorithms available to calculate the radiation dose in the presence of a heterogeneous medium. The Webb and Fox product over layers formulation of the generalised Batho correction factor requires determination of the number of layers and the layer densities for each ray path. It has been shown that the Webb and Fox expression is inefficient for the heterogeneous medium which is expressed as regions of inhomogeneity rather than layers. The inefficiency of the layer formulation is identified as the repeated problem of determining for each ray path which inhomogeneity region corresponds to a particular layer. It has been shown that the formulation of the Batho correction factor as a product over inhomogeneity regions avoids that topological problem entirely. The formulation in terms of a product over regions simplifies the computer code and reduces the time required to calculate the Batho correction factor for the general heterogeneous medium. (U.K.)
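The computational advantage of a product over regions can be sketched as follows. This is only an illustration of the region-product idea with an exponential placeholder for the tissue ratio T(d); it is not Webb and Fox's exact formulation, and `mu` and the region list are made-up values:

```python
import math

def batho_region_product(regions, tmr):
    """Correction factor as a product over inhomogeneity regions along a ray.
    `regions` is a list of (z_near, z_far, density): each region's distances
    above the calculation point (z_far > z_near) and its density relative to
    water. With an exponential tissue ratio this reduces to the familiar
    effective-pathlength correction, exp(-mu * (radiological - geometric depth))."""
    cf = 1.0
    total_depth = max(z_far for _, z_far, _ in regions)
    for z_near, z_far, rho in regions:
        cf *= (tmr(z_far) / tmr(z_near)) ** rho
    # normalise by the homogeneous (water) attenuation over the same depth
    return cf / (tmr(total_depth) / tmr(0.0))

mu = 0.05  # placeholder linear attenuation coefficient, 1/mm (made-up)
tmr = lambda d: math.exp(-mu * d)

# water slab 0-30 mm, lung-like region (rho=0.25) 30-80 mm, water 80-100 mm
regions = [(0, 30, 1.0), (30, 80, 0.25), (80, 100, 1.0)]
print(batho_region_product(regions, tmr))
```

Each region contributes one factor regardless of how many layers it would span in a layer formulation, which is the efficiency gain the abstract describes.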
An algorithm for gluinos on the lattice
International Nuclear Information System (INIS)
Montvay, I.
1995-10-01
Luescher's local bosonic algorithm for Monte Carlo simulations of quantum field theories with fermions is applied to the simulation of a possibly supersymmetric Yang-Mills theory with a Majorana fermion in the adjoint representation. Combined with a correction step in a two-step polynomial approximation scheme, the obtained algorithm seems to be promising and could be competitive with more conventional algorithms based on discretized classical (''molecular dynamics'') equations of motion. The application of the considered polynomial approximation scheme to optimized hopping parameter expansions is also discussed. (orig.)
A Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images
Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei
2018-04-01
Topographic correction of surface reflectance in rugged terrain is a prerequisite for quantitative remote sensing applications in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve slope-surface reflectance from high-quality satellite imagery such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, the accurate sensor calibration parameters and atmospheric conditions required by physics-based topographic correction models are sometimes unavailable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained directly from DN data; the model was tested and verified with imagery from the Chinese satellites HJ and GF. The results for HJ show that the correlation factor was reduced by almost 85% for the near-infrared bands and the overall classification accuracy increased by 14% after correction. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
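A widely used member of this semi-empirical family is the C-correction, which needs only the image itself plus the illumination geometry. A minimal sketch (an illustration of the general approach, not the authors' specific model) is:

```python
import numpy as np

def c_correction(reflectance, cos_i, cos_sza):
    """Semi-empirical C-correction for topographic effects.
    Regress reflectance against the cosine of the local illumination angle
    cos_i, then rescale each pixel so slopes facing toward and away from
    the sun agree:
        rho_corr = rho * (cos_sza + c) / (cos_i + c),  c = b / m
    where m, b come from the linear fit rho = m * cos_i + b."""
    m, b = np.polyfit(cos_i, reflectance, 1)
    c = b / m
    return reflectance * (cos_sza + c) / (cos_i + c)
```

Because `c` is estimated from the image itself, no sensor calibration parameters or atmospheric inputs are needed, which is the practical advantage the abstract emphasizes.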
Lazaro, Clara; Fernandes, Joanna M.
2015-12-01
The GNSS-derived Path Delay (GPD) and the Data Combination (DComb) algorithms were developed by the University of Porto (U.Porto), in the scope of different projects funded by ESA, to compute a continuous and improved wet tropospheric correction (WTC) for use in satellite altimetry. Both algorithms are mission independent and are based on a linear space-time objective analysis procedure that combines various wet path delay data sources. A new algorithm that takes the best of each aforementioned algorithm (GNSS-derived Path Delay Plus, GPD+) has been developed at U.Porto in the scope of the SL_cci project, where the use of consistent datasets that are stable in time is of major importance. The algorithm has been applied to the eight main altimetric missions (TOPEX/Poseidon, Jason-1, Jason-2, ERS-1, ERS-2, Envisat, CryoSat-2 and SARAL). The upcoming Sentinel-3 possesses a two-channel on-board radiometer similar to those deployed on ERS-1/2 and Envisat. Consequently, fine-tuning the GPD+ algorithm to these missions' datasets shall enrich it by increasing its capability to deal quickly with Sentinel-3 data. Foreseeing that the computation of an improved MWR-based WTC for use with Sentinel-3 data will be required, this study focuses on the results obtained for the ERS-1/2 and Envisat missions, which are expected to give insight into the computation of this correction for the upcoming ESA altimetric mission. The various WTC corrections available for each mission (in general, the original correction derived from the on-board MWR, the model correction and the one derived from GPD+) are inter-compared either directly or using various sea level anomaly variance statistical analyses. Results show that the GPD+ algorithm is efficient in generating global and continuous datasets, corrected for land and ice contamination and for spurious measurements of instrumental origin, with significant impacts on all ESA missions.
Algorithm of Functional Musculoskeletal Disorders Diagnostics
Alexandra P. Eroshenko
2012-01-01
The article scientifically justifies the algorithm of complex diagnostics of functional musculoskeletal disorders during resort treatment, aimed at the optimal application of modern methods of physical rehabilitation (formation of correction programs) based on the findings of diagnostic methodologies.
Triangle bipolar pulse shaping and pileup correction based on DSP
International Nuclear Information System (INIS)
Esmaeili-sani, Vahid; Moussavi-zarandi, Ali; Akbar-ashrafi, Nafiseh; Boghrati, Behzad
2011-01-01
Programmable Digital Signal Processing (DSP) microprocessors are capable of running complex discrete signal processing algorithms at clock rates above 50 MHz. This, combined with their low cost, ease of use and dedicated hardware, makes them an ideal option for spectrometer data acquisition systems. For this generation of spectrometers, functions that are typically performed in dedicated circuits, or offline, are being migrated to the field-programmable gate array (FPGA). This will not only reduce the electronics, but the features of modern FPGAs can be utilized to add considerable signal processing power and produce higher-resolution spectra. In this paper we report on an all-digital triangle bipolar pulse shaping and pileup correction algorithm being developed for the DSP. The pileup mitigation algorithm allows spectrometers to run at higher count rates or with multiple sources without imposing large data losses due to the overlapping of scintillation signals. This correction technique utilizes a very narrow bipolar triangle digital pulse shaping algorithm to extract energy information for most pileup events.
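The effect of a narrow bipolar triangle shaper on piled-up scintillation pulses can be sketched with a simple FIR convolution. This is an illustration of the shaping idea only; the pulse model, decay constant and shaper width are made-up values:

```python
import numpy as np

def bipolar_triangle_kernel(width):
    """A narrow bipolar triangle: a unipolar triangle followed by its
    mirror image, so the kernel has zero net area (baseline-restoring)."""
    tri = np.concatenate([np.arange(1, width + 1), np.arange(width - 1, 0, -1)])
    tri = tri / tri.sum()
    return np.concatenate([tri, -tri])

def shape(signal, width):
    # Convolving with the short bipolar kernel turns each slow pulse edge
    # into a narrow peak, so overlapped events separate.
    return np.convolve(signal, bipolar_triangle_kernel(width), mode="same")

# two exponential scintillation pulses that pile up (onsets 30 samples apart)
t = np.arange(400)
pulse = lambda t0, amp: amp * np.exp(-(t - t0) / 40.0) * (t >= t0)
signal = pulse(100, 1.0) + pulse(130, 0.8)
shaped = shape(signal, 8)
```

After shaping, the two onsets appear as two distinct narrow peaks even though the raw pulses overlap heavily, which is the basis for recovering energy information from pileup events.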
Algorithms for Cytoplasm Segmentation of Fluorescence Labelled Cells
Directory of Open Access Journals (Sweden)
Carolina Wählby
2002-01-01
Automatic cell segmentation has various applications in cytometry, and while the nucleus is often very distinct and easy to identify, the cytoplasm provides much more of a challenge. A new combination of image analysis algorithms for the segmentation of cells imaged by fluorescence microscopy is presented. The algorithm consists of an image pre-processing step, a general segmentation and merging step, followed by a segmentation quality measurement. The quality measurement consists of a statistical analysis of a number of shape-descriptive features. Objects whose features differ from those of correctly segmented single cells can be further processed by a splitting step. By statistical analysis we therefore get a feedback system for the separation of clustered cells. After segmentation is completed, the quality of the final segmentation is evaluated. By training the algorithm on a representative set of training images, the algorithm is made fully automatic for subsequent images created under similar conditions. Automatic cytoplasm segmentation was tested on CHO cells stained with calcein. The fully automatic method showed between 89% and 97% correct segmentation as compared to manual segmentation.
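The shape-descriptive features mentioned above can be as simple as area and circularity. A hedged sketch of how such features could flag under-segmented clusters for the splitting step (the thresholds are made-up values, not the paper's trained ones):

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2 equals 1 for a perfect disc and drops for elongated
    or merged objects, so low values suggest an under-segmented cluster."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def needs_splitting(area, perimeter, area_limit, circ_limit=0.7):
    # Flag objects that are too large or too irregular compared with the
    # statistics of correctly segmented single cells.
    return area > area_limit or circularity(area, perimeter) < circ_limit
```

In the paper's scheme, such limits would be learned from the training images rather than fixed by hand; objects flagged here would be passed to the splitting step, closing the feedback loop.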
Li, Yinlin; Kundu, Bijoy K.
2018-03-01
The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles, and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial-immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late-time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under the curves (AUCs) of the model-corrected input function (MCIF) and the 18F-FDG influx constant Ki. Moreover, the elapsed time was used to measure convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with a correlation coefficient of 0.9706. Similar results can be seen in the overall Ki error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Ki percentage errors are lower than those of previously published methods. The computation time of the hybrid method is also several times lower than using just a stochastic
Solving the simple plant location problem using a data correcting approach
Goldengorin, Boris
2001-01-01
The Data Correcting Algorithm is a branch and bound algorithm in which the data of a given problem instance is 'corrected' at each branching in such a way that the new instance will be as close as possible to a polynomially solvable instance and the result satisfies an acceptable accuracy (the
Energy Technology Data Exchange (ETDEWEB)
Matthews, Patrick
2014-05-01
Corrective Action Unit (CAU) 573 is located in Area 5 of the Nevada National Security Site, which is approximately 65 miles northwest of Las Vegas, Nevada. CAU 573 is a grouping of sites where there has been a suspected release of contamination associated with non-nuclear experiments and nuclear testing. This document describes the planned investigation of CAU 573, which comprises the following corrective action sites (CASs): • 05-23-02, GMX Alpha Contaminated Area • 05-45-01, Atmospheric Test Site - Hamilton These sites are being investigated because existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives.
Advanced Corrections for InSAR Using GPS and Numerical Weather Models
Cossu, F.; Foster, J. H.; Amelung, F.; Varugu, B. K.; Businger, S.; Cherubini, T.
2017-12-01
We present results from an investigation into the application of numerical weather models for generating tropospheric correction fields for Interferometric Synthetic Aperture Radar (InSAR). We apply the technique to data acquired from a UAVSAR campaign as well as from the COSMO-SkyMed satellites. The complex spatial and temporal changes in the atmospheric propagation delay of the radar signal remain the single biggest factor limiting InSAR's potential for hazard monitoring and mitigation. A new generation of InSAR systems is being built and launched, and optimizing the science and hazard applications of these systems requires advanced methodologies to mitigate tropospheric noise. We use the Weather Research and Forecasting (WRF) model to generate a 900 m spatial resolution atmospheric model covering the Big Island of Hawaii and an even higher resolution 300 m grid over the Mauna Loa and Kilauea volcanoes. By comparing a range of approaches, from the simplest, using reanalyses based on typically available meteorological observations, through to the "kitchen-sink" approach of assimilating all relevant data sets into our custom analyses, we examine the impact of the additional data sets on the atmospheric models and their effectiveness in correcting InSAR data. We focus particularly on the assimilation of information from the more than 60 GPS sites on the island. We ingest zenith tropospheric delay estimates from these sites directly into the WRF analyses, and also perform double-difference tomography using the phase residuals from the GPS processing to robustly incorporate heterogeneous information from the GPS data into the atmospheric models. We assess our performance through comparisons of our atmospheric models with external observations not ingested into the model, and through the effectiveness of the derived phase screens in reducing InSAR variance. Comparison of the InSAR data, our atmospheric analyses, and assessments of the active local and mesoscale
Two-loop QED corrections to the Altarelli-Parisi splitting functions
Energy Technology Data Exchange (ETDEWEB)
Florian, Daniel de [International Center for Advanced Studies (ICAS), UNSAM,Campus Miguelete, 25 de Mayo y Francia (1650) Buenos Aires (Argentina); Sborlini, Germán F.R.; Rodrigo, Germán [Instituto de Física Corpuscular, Universitat de València,Consejo Superior de Investigaciones Científicas,Parc Científic, E-46980 Paterna, Valencia (Spain)
2016-10-11
We compute the two-loop QED corrections to the Altarelli-Parisi (AP) splitting functions by using a deconstructive algorithmic Abelianization of the well-known NLO QCD corrections. We present explicit results for the full set of splitting kernels in a basis that includes the leptonic distribution functions that, starting from this order in the QED coupling, couple to the partonic densities. Finally, we perform a phenomenological analysis of the impact of these corrections in the splitting functions.
Integral image rendering procedure for aberration correction and size measurement.
Sommer, Holger; Ihrig, Andreas; Ebenau, Melanie; Flühs, Dirk; Spaan, Bernhard; Eichmann, Marion
2014-05-20
The challenge in rendering integral images is to use as much information preserved by the light field as possible to reconstruct a captured scene in a three-dimensional way. We propose a rendering algorithm based on the projection of rays through a detailed simulation of the optical path, considering all the physical properties and locations of the optical elements. The rendered images contain information about the correct size of imaged objects without the need to calibrate the imaging device. Additionally, aberrations of the optical system may be corrected, depending on the setup of the integral imaging device. We show simulation data that illustrates the aberration correction ability and experimental data from our plenoptic camera, which illustrates the capability of our proposed algorithm to measure size and distance. We believe this rendering procedure will be useful in the future for three-dimensional ophthalmic imaging of the human retina.
Directory of Open Access Journals (Sweden)
Claudia Giardino
2010-06-01
In this study we present calibration/validation activities associated with satellite MERIS image processing aimed at estimating chl a and CDOM in the Curonian Lagoon. Field data were used to validate the performance of two atmospheric correction algorithms, to build a band-ratio algorithm for chl a and to validate MERIS-derived maps. The neural-network-based Case 2 Regional processor was found suitable for mapping CDOM; for chl a, the band-ratio algorithm applied to image data corrected with the 6S code was found more appropriate. Maps were in agreement with in situ measurements. This study confirmed the importance of atmospheric correction for estimating water quality and demonstrated the usefulness of MERIS in investigating eutrophic aquatic ecosystems.
A Universal De-Noising Algorithm for Ground-Based LIDAR Signal
Ma, Xin; Xiang, Chengzhi; Gong, Wei
2016-06-01
Ground-based lidar, an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it can provide vertical atmospheric profiles. However, noise in a lidar signal is unavoidable, which leads to difficulties and complexities when searching for more information. Every de-noising method has its own characteristics but also certain limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm is proposed to enhance the SNR of a ground-based lidar signal, based on signal segmentation and reconstruction. The signal segmentation, serving as the keystone of the algorithm, divides the lidar signal into three different parts, which are processed by different de-noising methods according to their own characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of tests on simulated signals and a real dual field-of-view lidar signal shows the feasibility of the universal de-noising algorithm.
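The segment-then-splice idea can be sketched with three smoothing windows matched to the SNR of each range section. The segment boundaries and window sizes below are made-up values, and the paper applies different de-noising methods per segment, not necessarily moving averages:

```python
import numpy as np

def segment_and_denoise(signal, near_end, far_start):
    """Split a range-resolved lidar return into near-field, transition and
    far-field sections, smooth each with a window matched to its SNR
    (light smoothing where the signal is strong, heavy smoothing where
    noise dominates), then splice the sections end to end."""
    def moving_average(x, w):
        return np.convolve(x, np.ones(w) / w, mode="same")
    near = moving_average(signal[:near_end], 3)         # high SNR: keep detail
    mid = moving_average(signal[near_end:far_start], 11)
    far = moving_average(signal[far_start:], 31)        # low SNR: smooth hard
    return np.concatenate([near, mid, far])
```

Because the lidar return falls off rapidly with range, the far field tolerates aggressive smoothing that would destroy near-field structure; treating the sections separately is what makes a single algorithm usable across the whole profile.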
Synthesis of atmospheric turbulence point spread functions by sparse and redundant representations
Hunt, Bobby R.; Iler, Amber L.; Bailey, Christopher A.; Rucci, Michael A.
2018-02-01
Atmospheric turbulence is a fundamental problem in imaging through long slant ranges, horizontal-range paths, or uplooking astronomical cases through the atmosphere. An essential characterization of atmospheric turbulence is the point spread function (PSF). Turbulence images can be simulated to study basic questions, such as image quality and image restoration, by synthesizing PSFs of desired properties. In this paper, we report on a method to synthesize PSFs of atmospheric turbulence. The method uses recent developments in sparse and redundant representations. From a training set of measured atmospheric PSFs, we construct a dictionary of "basis functions" that characterize the atmospheric turbulence PSFs. A PSF can be synthesized from this dictionary by a properly weighted combination of dictionary elements. We disclose an algorithm to synthesize PSFs from the dictionary. The algorithm can synthesize PSFs in three orders of magnitude less computing time than conventional wave optics propagation methods. The resulting PSFs are also shown to be statistically representative of the turbulence conditions that were used to construct the dictionary.
The theory of hybrid stochastic algorithms
International Nuclear Information System (INIS)
Kennedy, A.D.
1989-01-01
These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs
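The Hybrid Monte Carlo construction described here, Molecular Dynamics with a leapfrog integrator followed by a Metropolis accept/reject step, can be sketched for a single harmonic mode with action S(q) = q²/2 (a stand-in for one momentum mode of free field theory):

```python
import math, random

def leapfrog(q, p, eps, n_steps, grad):
    # Leapfrog: half-step momentum, full-step position, half-step momentum.
    p -= 0.5 * eps * grad(q)
    for _ in range(n_steps - 1):
        q += eps * p
        p -= eps * grad(q)
    q += eps * p
    p -= 0.5 * eps * grad(q)
    return q, p

def hmc_sample(n_samples, eps=0.2, n_steps=10, seed=0):
    """Hybrid Monte Carlo for S(q) = q^2/2: the Metropolis step on the
    Hamiltonian H = p^2/2 + S(q) corrects the leapfrog integration error
    exactly, so samples follow exp(-S) regardless of the step size."""
    rng = random.Random(seed)
    grad = lambda x: x  # dS/dq
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                 # refresh momentum
        h_old = 0.5 * p * p + 0.5 * q * q
        q_new, p_new = leapfrog(q, p, eps, n_steps, grad)
        h_new = 0.5 * p_new * p_new + 0.5 * q_new * q_new
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new                           # accept; else keep old q
        samples.append(q)
    return samples
```

For this Gaussian target the sample mean and variance should approach 0 and 1; the same accept/reject mechanism is what makes the Hybrid algorithm exact while Langevin and pure Molecular Dynamics carry step-size errors.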
International Nuclear Information System (INIS)
Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.; Sanuki, T.
2007-01-01
Using the 'modified DPMJET-III' model explained in the previous paper [T. Sanuki et al., preceding Article, Phys. Rev. D 75, 043005 (2007).], we calculate the atmospheric neutrino flux. The calculation scheme is almost the same as HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004).], but the usage of the 'virtual detector' is improved to reduce the error due to it. Then we study the uncertainty of the calculated atmospheric neutrino flux summarizing the uncertainties of individual components of the simulation. The uncertainty of K-production in the interaction model is estimated using other interaction models: FLUKA'97 and FRITIOF 7.02, and modifying them so that they also reproduce the atmospheric muon flux data correctly. The uncertainties of the flux ratio and zenith angle dependence of the atmospheric neutrino flux are also studied
Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques
2015-12-01
In the event of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action by first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to that of the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides better sampling efficiency by reusing all the generated samples.
Automated Detection of Oscillating Regions in the Solar Atmosphere
Ireland, J.; Marsh, M. S.; Kucera, T. A.; Young, C. A.
2010-01-01
Recently observed oscillations in the solar atmosphere have been interpreted and modeled as magnetohydrodynamic wave modes. This has allowed for the estimation of parameters that are otherwise hard to derive, such as the coronal magnetic-field strength. This work crucially relies on the initial detection of the oscillations, which is commonly done manually. The volume of Solar Dynamics Observatory (SDO) data will make manual detection inefficient for detecting all of the oscillating regions. An algorithm is presented that automates the detection of areas of the solar atmosphere that support spatially extended oscillations. The algorithm identifies areas in the solar atmosphere whose oscillation content is described by a single, dominant oscillation within a user-defined frequency range. The method is based on Bayesian spectral analysis of time series and image filtering. A Bayesian approach sidesteps the need for an a priori noise estimate to calculate rejection criteria for the observed signal, and it also provides estimates of oscillation frequency, amplitude, and noise, and the error in all of these quantities, in a self-consistent way. The algorithm also introduces the notion of quality measures for those regions for which a positive detection is claimed, allowing for simple post-detection discrimination by the user. The algorithm is demonstrated on two Transition Region and Coronal Explorer (TRACE) datasets, and comments regarding its suitability for oscillation detection in SDO are made.
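A crude non-Bayesian stand-in for the detection step, finding the dominant periodogram peak in a user-defined frequency band and scoring its dominance, can be sketched as follows (the paper's Bayesian machinery, noise model and error estimates are not reproduced):

```python
import numpy as np

def dominant_oscillation(signal, dt, f_lo, f_hi):
    """Locate the strongest periodogram peak inside [f_lo, f_hi] and report
    how dominant it is over the band's mean power, a crude analogue of the
    paper's quality measures for claimed detections."""
    freqs = np.fft.rfftfreq(len(signal), dt)
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    f_peak = freqs[band][np.argmax(power[band])]
    quality = power[band].max() / power[band].mean()  # peak-to-mean ratio
    return f_peak, quality
```

A high peak-to-mean ratio indicates a single dominant oscillation in the band; the Bayesian treatment additionally supplies amplitude, noise and error estimates without a separate noise model.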
Distribution Bottlenecks in Classification Algorithms
Zwartjes, G.J.; Havinga, Paul J.M.; Smit, Gerardus Johannes Maria; Hurink, Johann L.
2012-01-01
The abundance of data available on Wireless Sensor Networks makes online processing necessary. In industrial applications, for example, the correct operation of equipment can be the point of interest while raw sampled data is of minor importance. Classification algorithms can be used to make state
The variable refractive index correction algorithm based on a stereo light microscope
International Nuclear Information System (INIS)
Pei, W; Zhu, Y Y
2010-01-01
Refraction occurs at least twice on both the top and the bottom surfaces of the plastic plate covering the micro channel in a microfluidic chip. The refraction and the nonlinear model of a stereo light microscope (SLM) may severely affect measurement accuracy. In this paper, we study the correlation between the optical paths of the SLM and present an algorithm to adjust the refractive index based on the SLM. Our algorithm quantifies the influence of the cover plate and the double optical paths on measurement accuracy, and realizes non-destructive, non-contact and precise 3D measurement of a transparent, closed container.
Detection and correction of patient movement in prostate brachytherapy seed reconstruction
Lam, Steve T.; Cho, Paul S.; Marks, Robert J., II; Narayanan, Sreeram
2005-05-01
Intraoperative dosimetry of prostate brachytherapy can help optimize the dose distribution and potentially improve clinical outcome. Evaluation of dose distribution during the seed implant procedure requires the knowledge of 3D seed coordinates. Fluoroscopy-based seed localization is a viable option. From three x-ray projections obtained at different gantry angles, 3D seed positions can be determined. However, when local anaesthesia is used for prostate brachytherapy, the patient movement during fluoroscopy image capture becomes a practical problem. If uncorrected, the errors introduced by patient motion between image captures would cause seed mismatches. Subsequently, the seed reconstruction algorithm would either fail to reconstruct or yield erroneous results. We have developed an algorithm that permits detection and correction of patient movement that may occur between fluoroscopy image captures. The patient movement is decomposed into translational shifts along the tabletop and rotation about an axis perpendicular to the tabletop. The property of spatial invariance of the co-planar imaging geometry is used for lateral movement correction. Cranio-caudal movement is corrected by analysing the perspective invariance along the x-ray axis. Rotation is estimated by an iterative method. The method can detect and correct for the range of patient movement commonly seen in the clinical environment. The algorithm has been implemented for routine clinical use as the preprocessing step for seed reconstruction.
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro
2016-09-01
This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
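The gain from adding a prediction step can be seen on a toy drifting objective. The function, step sizes and sampling interval below are invented, and the update is a simplified gradient-tracking scheme in the spirit of GTT, not the paper's exact algorithm:

```python
import numpy as np

# Toy time-varying objective f(x; t) = 0.5 * (x - a(t))**2 whose
# minimizer a(t) = sin(t) drifts; a'(t) is assumed known for prediction.
h = 0.1          # sampling interval
a, da = np.sin, np.cos

def track(n_steps, predict):
    x, errs = 0.0, []
    for k in range(n_steps):
        t = k * h
        if predict:
            # prediction: follow the optimizer flow
            # x_dot = -(f_xx)^(-1) f_tx, which here equals a'(t)
            x = x + h * da(t)
        # correction: one gradient step on the newly sampled objective
        x = x - 0.8 * (x - a(t + h))
        errs.append(abs(x - a(t + h)))
    return float(np.mean(errs[20:]))     # steady-state tracking error

err_correction_only = track(200, predict=False)
err_pred_corr = track(200, predict=True)
print(err_correction_only, err_pred_corr)
```

The correction-only tracker lags the moving optimizer by an error of order $h$, while the predicted iterate starts each correction within $O(h^2)$ of the new optimum, mirroring the bound quoted above.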
Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams
International Nuclear Information System (INIS)
Papanikolaou, Niko; Stathakis, Sotirios
2009-01-01
Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning there has been a steady introduction of new methods to refine treatment delivery. Imaging continues to be an integral part of the planning, but also the delivery, of modern radiotherapy. However, all the efforts of image guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else that we pride ourselves in having in the armamentarium can fall short, unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and maybe one day will be the algorithm of choice for all particle treatment planning in radiation therapy.
Malicious Cognitive User Identification Algorithm in Centralized Spectrum Sensing System
Directory of Open Access Journals (Sweden)
Jingbo Zhang
2017-11-01
Collaborative spectrum sensing can fuse the perceived results of multiple cognitive users, and thus will improve the accuracy of the perceived results. However, the multi-source features of the perceived results lead to security problems in the system. When there is a high probability of a malicious user attack, the traditional algorithm can correctly identify the malicious users. However, when the probability of attack by malicious users is reduced, it is almost impossible to use the traditional algorithm to correctly distinguish between honest users and malicious users, which greatly reduces the perceived performance. To address the problem above, based on the β function and a feedback iteration mathematical method, this paper proposes a malicious user identification algorithm under multi-channel cooperative conditions (β-MIAMC), which involves comprehensively assessing each cognitive user's performance on multiple sub-channels to identify the malicious users. Simulation results show that under the same attack probability, compared with the traditional algorithm, the β-MIAMC algorithm can more accurately identify the malicious users, reducing the false alarm probability of malicious users by more than 20%. When the attack probability is greater than 7%, the proposed algorithm can identify the malicious users with 100% certainty.
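The β-function reputation idea can be sketched for a single channel (the paper works across multiple sub-channels); user counts, error rates and the 0.8 reputation threshold below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_rounds = 10, 400
malicious = {0, 1}       # hypothetical attackers
p_attack = 0.3           # per-round probability a malicious user lies
p_err = 0.05             # honest sensing error

alpha = np.ones(n_users)     # Beta-function reputation counters
beta = np.ones(n_users)

for _ in range(n_rounds):
    truth = int(rng.integers(0, 2))          # channel busy / idle
    reports = np.full(n_users, truth)
    reports[rng.random(n_users) < p_err] ^= 1   # honest errors
    for u in malicious:
        if rng.random() < p_attack:
            reports[u] = 1 - truth           # deliberate inversion
    fused = int(reports.sum() > n_users / 2) # majority fusion
    agree = reports == fused
    alpha += agree                           # Beta(alpha, beta) update
    beta += ~agree

reputation = alpha / (alpha + beta)
suspects = {int(u) for u in np.where(reputation < 0.8)[0]}
print(reputation.round(3), suspects)
```

Each user's agreement history with the fused decision accumulates in a Beta distribution whose mean is the reputation; users who invert their reports often enough fall below the threshold.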
2013-04-12
... DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration Proposed Information Collection; Comment Request; Fish and Seafood Promotion; Correction AGENCY: National Oceanic and Atmospheric... Federal Register (78 FR 20092) on the proposed information collection, Fish and Seafood Promotion. The...
A Simplified Algorithm for Statistical Investigation of Damage Spreading
International Nuclear Information System (INIS)
Gecow, Andrzej
2009-01-01
On the way to simulating the adaptive evolution of a complex system describing a living object or a human-developed project, a fitness should be defined on node states or network external outputs. Feedbacks lead to circular attractors of these states or outputs, which makes it difficult to define a fitness. The main statistical effects of the adaptive condition result from a small-change tendency, and to appear they only need a statistically correct size of damage initiated by an evolutionary change of the system. This observation allows cutting the feedback loops and, in effect, obtaining a particular statistically correct state instead of a long circular attractor, which in the quenched model is expected for a chaotic network with feedback. Defining fitness on such states is simple. We calculate only damaged nodes, and only once. Such an algorithm is optimal for the investigation of damage spreading, i.e., statistical connections between the structural parameters of the initial change and the size of the resulting damage. It is a reversed-annealed method: functions and states (signals) may be randomly substituted, but connections are important and are preserved. The small damages important for adaptive evolution are correctly depicted, in contrast to the Derrida annealed approximation, which expects equilibrium levels for large networks. The algorithm indicates these levels correctly. The relevant program in Pascal, which executes the algorithm for a wide range of parameters, can be obtained from the author.
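Damage spreading on a network whose feedback loops have already been cut reduces to a single feed-forward pass, which is what makes the fitness evaluation cheap. A toy Boolean-network version, with invented sizes and random truth tables:

```python
import random

rng = random.Random(6)
n, k = 200, 2
# Feed-forward wiring: node i reads only earlier nodes, so the feedback
# loops are already "cut" and one pass gives a well-defined state.
inputs = {i: [rng.randrange(i) for _ in range(k)] for i in range(2, n)}
tables = {i: [rng.randrange(2) for _ in range(2 ** k)] for i in range(2, n)}

def evaluate(x0, x1):
    state = [x0, x1] + [0] * (n - 2)
    for i in range(2, n):
        a, b = (state[j] for j in inputs[i])
        state[i] = tables[i][2 * a + b]   # random Boolean function
    return state

s_ref = evaluate(0, 0)
s_dmg = evaluate(1, 0)     # flip one external input
damage = sum(r != d for r, d in zip(s_ref, s_dmg))
print(damage)              # number of node states changed by the flip
```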
Nikulin, Vladimir V.
2005-10-01
The performance of mobile laser communication systems operating within Earth's atmosphere is generally limited by the pointing errors due to movement of the platforms and mechanical vibrations. In addition, atmospheric turbulence causes changes of the refractive index along the propagation path, creating random redistribution of the optical energy in the spatial domain. Under adverse conditions these effects lead to increased bit error rate. While traditional approaches provide separate treatment of these problems, suggesting high-bandwidth beam steering systems for tracking and wavefront control for the mitigation of atmospheric effects, the two tasks can be integrated. This paper presents a hybrid laser beam-steering-wavefront-control system comprising an electrically addressed spatial light modulator (SLM) installed on the Omni-Wrist sensor mount. The function of the Omni-Wrist is to provide coarse steering over a wide range of pointing angles, while that of the SLM is twofold: wavefront correction and fine steering. The control law for the Omni-Wrist is synthesized using a decentralized approach that provides independent access to the azimuth and declination channels; calculation of the required phase profile for the SLM is optimization-based. This paper presents the control algorithms, the approach to coordinating the operation of the two systems, and the results.
Skornitzke, S; Fritz, F; Klauss, M; Pahn, G; Hansen, J; Hirsch, J; Grenacher, L; Kauczor, H-U; Stiller, W
2015-02-01
To compare six different scenarios for correcting for breathing motion in abdominal dual-energy CT (DECT) perfusion measurements. Rigid [RRComm(80 kVp)] and non-rigid [NRComm(80 kVp)] registration of commercially available CT perfusion software, custom non-rigid registration [NRCustom(80 kVp), demons algorithm] and a control group [CG(80 kVp)] without motion correction were evaluated using 80 kVp images. Additionally, NRCustom was applied to dual-energy (DE)-blended [NRCustom(DE)] and virtual non-contrast [NRCustom(VNC)] images, yielding six evaluated scenarios. After motion correction, perfusion maps were calculated using a combined maximum slope/Patlak model. For qualitative evaluation, three blinded radiologists independently rated motion correction quality and resulting perfusion maps on a four-point scale (4 = best, 1 = worst). For quantitative evaluation, relative changes in metric values, R² and residuals of perfusion model fits were calculated. For motion-corrected images, mean ratings differed significantly [NRCustom(80 kVp) and NRCustom(DE), 3.3; NRComm(80 kVp), 3.1; NRCustom(VNC), 2.9; RRComm(80 kVp), 2.7; CG(80 kVp), 2.7; all p VNC), 22.8%; RRComm(80 kVp), 0.6%; CG(80 kVp), 0%]. Regarding perfusion maps, NRCustom(80 kVp) and NRCustom(DE) were rated highest [NRCustom(80 kVp), 3.1; NRCustom(DE), 3.0; NRComm(80 kVp), 2.8; NRCustom(VNC), 2.6; CG(80 kVp), 2.5; RRComm(80 kVp), 2.4] and had significantly higher R² and lower residuals. Correlation between qualitative and quantitative evaluation was low to moderate. Non-rigid motion correction improves spatial alignment of the target region and the fit of CT perfusion models. Using DE-blended and DE-VNC images for deformable registration offers no significant improvement. Non-rigid algorithms improve the quality of abdominal CT perfusion measurements but do not benefit from DECT post-processing.
Self-corrected chip-based dual-comb spectrometer.
Hébert, Nicolas Bourbeau; Genest, Jérôme; Deschênes, Jean-Daniel; Bergeron, Hugo; Chen, George Y; Khurmi, Champak; Lancaster, David G
2017-04-03
We present a dual-comb spectrometer based on two passively mode-locked waveguide lasers integrated in a single Er-doped ZBLAN chip. This original design yields two free-running frequency combs having a high level of mutual stability. We developed in parallel a self-correction algorithm that compensates residual relative fluctuations and yields mode-resolved spectra without the help of any reference laser or control system. Fluctuations are extracted directly from the interferograms using the concept of ambiguity function, which leads to a significant simplification of the instrument that will greatly ease its widespread adoption and commercial deployment. Comparison with a correction algorithm relying on a single-frequency laser indicates discrepancies of only 50 attoseconds on optical timings. The capacities of this instrument are finally demonstrated with the acquisition of a high-resolution molecular spectrum covering 20 nm. This new chip-based multi-laser platform is ideal for the development of high-repetition-rate, compact and fieldable comb spectrometers in the near- and mid-infrared.
Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam
2016-07-01
Inventory has been a major concern in supply chains, and much research has been done lately on inventory control, which has brought forth a number of methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research is aimed at providing a better replenishment policy for multi-product, single-supplier situations for chemical raw materials of textile industries in Bangladesh. It is assumed that industries currently pursue an individual replenishment system. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that will cause the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy has been used; it is suggested that indirect grouping outperforms direct grouping when the major cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is exercised for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for individual items, so the replenishment cycle time for each product is found as T×ki. Firstly, based on data, a comparison between the currently prevailing (individual) process and RAND is provided using actual demands, which shows a 49% improvement in the total cost of replenishment. Secondly, discrepancies in demand are corrected by using Holt's method; however, demands can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, application of RAND with corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
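The RAND-style alternation between the basic cycle time T and the integer multipliers ki can be sketched as follows; the cost data are invented and the update is a simplified version of the Kaspi-Rosenblatt iteration, not their exact procedure:

```python
import math

# Invented data: shared major ordering cost S, and per-item minor cost
# s_i, annual demand d_i and holding cost h_i.
S = 500.0
items = [          # (s_i, d_i, h_i)
    (20.0, 1200.0, 4.0),
    (15.0,  800.0, 6.0),
    (30.0,  300.0, 5.0),
]

def total_cost(T, ks):
    order = (S + sum(s / k for (s, _, _), k in zip(items, ks))) / T
    hold = (T / 2.0) * sum(k * d * h for (_, d, h), k in zip(items, ks))
    return order + hold

# Alternate between the best integer multipliers k_i for the current
# basic cycle T and the best T for those multipliers.
T = math.sqrt(2 * S / sum(d * h for _, d, h in items))
for _ in range(20):
    ks = [max(1, round(math.sqrt(2 * s / (h * d * T * T))))
          for s, d, h in items]
    T = math.sqrt(2 * (S + sum(s / k for (s, _, _), k in zip(items, ks)))
                  / sum(k * d * h for (_, d, h), k in zip(items, ks)))

cost_joint = total_cost(T, ks)
# item-by-item EOQ ordering for comparison: each item pays S + s_i
cost_individual = sum(math.sqrt(2 * (S + s) * d * h) for s, d, h in items)
print(round(T, 3), ks, round(cost_joint), round(cost_individual))
```

On this toy data the joint policy undercuts item-by-item EOQ ordering because the major cost S is paid once per cycle rather than once per item.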
International Nuclear Information System (INIS)
Huang Zhenghai; Gu Weizhe
2008-01-01
In this paper, we construct an augmented system of the standard monotone linear complementarity problem (LCP), and establish the relations between the augmented system and the LCP. We present a smoothing-type algorithm for solving the augmented system. The algorithm is shown to be globally convergent without assuming any prior knowledge of feasibility/infeasibility of the problem. In particular, if the LCP has a solution, then the algorithm either generates a maximal complementary solution of the LCP or correctly detects the solvability of the LCP; in the latter case, an existing smoothing-type algorithm can be directly applied to solve the LCP without any additional assumption, and it generates a maximal complementary solution of the LCP. If the LCP is infeasible, then the algorithm correctly detects the infeasibility of the LCP. To the best of our knowledge, such properties have not appeared in the existing literature for smoothing-type algorithms.
Predicting Top-of-Atmosphere Thermal Radiance Using MERRA-2 Atmospheric Data with Deep Learning
Directory of Open Access Journals (Sweden)
Tania Kleynhans
2017-11-01
Image data from space-borne thermal infrared (IR) sensors are used for a variety of applications; however, they are often limited by their temporal resolution (i.e., repeat coverage). To potentially increase the temporal availability of thermal image data, a study was performed to determine the extent to which thermal image data can be simulated from available atmospheric and surface data. The work conducted here explored the use of Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), developed by the National Aeronautics and Space Administration (NASA), to predict top-of-atmosphere (TOA) thermal IR radiance globally at time scales finer than available satellite data. For this case study, TOA radiance data were derived for band 31 (10.97 μm) of the Moderate-Resolution Imaging Spectroradiometer (MODIS) sensor. Two approaches have been followed, namely an atmospheric radiative transfer forward modeling approach and a supervised learning approach. The first approach uses forward modeling to predict TOA radiance from the available surface and atmospheric data. The second approach applied four different supervised learning algorithms to the atmospheric data: a linear least squares regression model, a non-linear support vector regression (SVR) model, a multi-layer perceptron (MLP), and a convolutional neural network (CNN). This research found that the multi-layer perceptron model produced the lowest overall error rates, with a root mean square error (RMSE) of 1.36 W/m²·sr·μm when compared to actual Terra/MODIS band 31 image data. These studies found that for radiances above 6 W/m²·sr·μm, the forward modeling approach could predict TOA radiance to within 12 percent, and the best supervised learning approach can predict TOA radiance to within 11 percent.
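The simplest of the four learners, linear least squares, can be sketched on synthetic stand-ins for the MERRA-2 predictors; the features, coefficients and noise level below are invented and bear no relation to real MODIS band 31 radiances:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
# Invented stand-ins for two MERRA-2 style predictors.
t_surf = rng.uniform(250.0, 320.0, n)    # surface temperature, K
wv = rng.uniform(0.5, 5.0, n)            # column water vapour, cm
# Made-up linear "radiance" with noise (units arbitrary).
radiance = 0.03 * t_surf - 0.4 * wv + rng.normal(0.0, 0.1, n)

# Fit on the first 1500 samples, evaluate RMSE on the held-out 500.
X = np.column_stack([t_surf, wv, np.ones(n)])
coef, *_ = np.linalg.lstsq(X[:1500], radiance[:1500], rcond=None)
pred = X[1500:] @ coef
rmse = float(np.sqrt(np.mean((pred - radiance[1500:]) ** 2)))
print(rmse)  # close to the 0.1 noise floor
```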
Optimization of Broadband Wavefront Correction at the Princeton High Contrast Imaging Laboratory
Groff, Tyler Dean; Kasdin, N.; Carlotti, A.
2011-01-01
Wavefront control for imaging of terrestrial planets using coronagraphic techniques requires improving the performance of the wavefront control techniques to expand the correction bandwidth and the size of the dark hole over which it is effective. At the Princeton High Contrast Imaging Laboratory we have focused on increasing the search area using two deformable mirrors (DMs) in series to achieve symmetric correction by correcting both amplitude and phase aberrations. Here we are concerned with increasing the bandwidth of light over which this correction is effective so we include a finite bandwidth into the optimization problem to generate a new stroke minimization algorithm. This allows us to minimize the actuator stroke on the DMs given contrast constraints at multiple wavelengths which define a window over which the dark hole will persist. This windowed stroke minimization algorithm is written in such a way that a weight may be applied to dictate the relative importance of the outer wavelengths to the central wavelength. In order to supply the estimates at multiple wavelengths a functional relationship to a central estimation wavelength is formed. Computational overhead and new experimental results of this windowed stroke minimization algorithm are discussed. The tradeoff between symmetric correction and achievable bandwidth is compared to the observed contrast degradation with wavelength in the experimental results. This work is supported by NASA APRA Grant #NNX09AB96G. The author is also supported under an NESSF Fellowship.
Software Package for Optics Measurement and Correction in the LHC
Aiba, M; Tomas, R; Vanbavinckhove, G
2010-01-01
A software package has been developed for the LHC on-line optics measurement and correction. This package includes several different algorithms to measure phase advance, beta functions, dispersion, coupling parameters and even some non-linear terms. A Graphical User Interface provides visualization tools to compare measurements to model predictions, fit analytical formula, localize error sources and compute and send corrections to the hardware.
Modeling Effectivity of Atmospheric Advection-Diffusion Processes
International Nuclear Information System (INIS)
Brojewski, R.
1999-01-01
Some methods of solving advection-diffusion problems useful in the field of atmospheric physics are presented and analyzed in the paper. The most effective one (from the point of view of computer applications) was chosen: the method of problem decomposition with respect to the spatial directions, followed by a secondary decomposition of the problem with respect to the physical phenomena. By introducing some corrections to the classical numerical methods, a hybrid was obtained, composed of the finite element method for the advection problems and the implicit method with averaging for the diffusion processes. This hybrid method, together with the corrections, produces a very effective means for solving problems of substance transport in the atmosphere. (author)
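The phenomenon splitting can be illustrated in one dimension: each time step applies an explicit advection substep and then an implicit diffusion substep. Grid, wind speed and diffusivity are invented, and upwind differencing plus backward Euler stand in for the paper's finite element and averaged implicit methods:

```python
import numpy as np

nx, dx, dt = 100, 1.0, 0.4
u, K = 1.0, 0.5                    # wind speed, diffusivity (invented)
x = np.arange(nx) * dx
c = np.exp(-0.5 * ((x - 20.0) / 3.0) ** 2)   # initial plume at x = 20

# Implicit (backward Euler) diffusion matrix (I - dt*K*D2).
r = dt * K / dx ** 2
A = (np.eye(nx) * (1 + 2 * r)
     + np.diag([-r] * (nx - 1), 1)
     + np.diag([-r] * (nx - 1), -1))

for _ in range(50):
    # substep 1: explicit upwind advection (stable since u*dt/dx <= 1)
    c = c - u * dt / dx * (c - np.roll(c, 1))
    # substep 2: implicit diffusion
    c = np.linalg.solve(A, c)

print(c.argmax() * dx)   # plume centre has advected downwind of x = 20
```

Splitting lets each sub-problem use the scheme best suited to it: an unconditionally stable implicit solve for stiff diffusion, and a cheap explicit step for advection.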
Performance analysis of a decoding algorithm for algebraic-geometry codes
DEFF Research Database (Denmark)
Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund
1999-01-01
The fast decoding algorithm for one point algebraic-geometry codes of Sakata, Elbrond Jensen, and Hoholdt corrects all error patterns of weight less than half the Feng-Rao minimum distance. In this correspondence we analyze the performance of the algorithm for heavier error patterns. It turns out...
Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation
Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei
2016-11-01
Cooled infrared detector arrays often suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared data in comparison to several previously published methods. This algorithm not only effectively corrects common FPN such as stripes, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we present our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
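The temporal high-pass idea with a ghosting threshold can be sketched on synthetic frames; all parameters below are invented and this is not the THP&GM algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(3)
h, w, n_frames = 32, 32, 300
fpn = rng.normal(0.0, 5.0, (h, w))   # synthetic fixed-pattern offsets

offset_est = np.zeros((h, w))
alpha = 0.02       # slow temporal high-pass learning rate
thresh = 3.0       # freeze updates where the frame changes fast in time
prev = None
for k in range(n_frames):
    scene = 100.0 + 10.0 * np.sin(2 * np.pi * k / 150.0)  # drifting flat scene
    frame = scene + fpn + rng.normal(0.0, 0.5, (h, w))
    corrected = frame - offset_est
    if prev is not None:
        static = np.abs(frame - prev) < thresh   # anti-ghosting mask
        resid = corrected - corrected.mean()     # per-pixel deviation
        offset_est[static] += alpha * resid[static]
    prev = frame

raw_std = float(np.std(frame))        # FPN dominates the raw frame
corr_std = float(np.std(corrected))   # corrected frame is nearly flat
print(raw_std, corr_std)
```

Pixels whose temporal change exceeds the threshold are excluded from the offset update, which is the mechanism that suppresses ghosting when real scene content moves through a pixel.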
Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction
Directory of Open Access Journals (Sweden)
Tianzhou Chen
2013-09-01
Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant in industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented to compensate for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes quickly. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients of neighboring sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.
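The dominant environmental effect, the temperature dependence of the speed of sound, already shows why uncompensated readings drift; the standard approximation below is textbook physics, independent of the paper's NEC scheme:

```python
import math

# Speed of sound in dry air as a function of temperature (standard
# approximation).
def speed_of_sound(temp_c):
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)   # m/s

true_distance = 2.000    # metres (invented scenario)
temp = 35.0              # actual air temperature, deg C
echo_time = 2 * true_distance / speed_of_sound(temp)

naive = speed_of_sound(20.0) * echo_time / 2       # fixed 20 C assumption
corrected = speed_of_sound(temp) * echo_time / 2   # temperature-corrected

err_naive = abs(naive - true_distance) / true_distance
err_corrected = abs(corrected - true_distance) / true_distance
print(err_naive, err_corrected)
```

A 15-degree temperature error alone produces a ranging error of roughly 2.5%, already larger than the 2.20% bound the NEC system reports, which is why environmental compensation matters.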
Energy Technology Data Exchange (ETDEWEB)
Pontone, Gianluca; Bertella, Erika; Baggiano, Andrea; Mushtaq, Saima; Loguercio, Monica; Segurini, Chiara; Conte, Edoardo; Beltrama, Virginia; Annoni, Andrea; Formenti, Alberto; Petulla, Maria; Trabattoni, Daniela; Pepi, Mauro [Centro Cardiologico Monzino, IRCCS, Milan (Italy); Andreini, Daniele; Montorsi, Piero; Bartorelli, Antonio L. [Centro Cardiologico Monzino, IRCCS, Milan (Italy); University of Milan, Department of Cardiovascular Sciences and Community Health, Milan (Italy); Guaricci, Andrea I. [University of Foggia, Department of Cardiology, Foggia (Italy)
2016-01-15
The aim of this study was to evaluate the impact of a novel intra-cycle motion correction algorithm (MCA) on overall evaluability and diagnostic accuracy of cardiac computed tomography coronary angiography (CCT). From a cohort of 900 consecutive patients referred for CCT for suspected coronary artery disease (CAD), we enrolled 160 (18%) patients (mean age 65.3 ± 11.7 years, 101 male) with at least one coronary segment classified as non-evaluable for motion artefacts. The CCT data sets were evaluated using a standard reconstruction algorithm (SRA) and the MCA and compared in terms of subjective image quality, evaluability and diagnostic accuracy. The mean heart rate during the examination was 68.3 ± 9.4 bpm. The MCA showed a higher Likert score (3.1 ± 0.9 vs. 2.5 ± 1.1, p < 0.001) and evaluability (94% vs. 79%, p < 0.001) than the SRA. In a 45-patient subgroup studied by clinically indicated invasive coronary angiography, specificity, positive predictive value and accuracy were higher with the MCA vs. the SRA in segment-based and vessel-based models, respectively (87% vs. 73%, 50% vs. 34%, 85% vs. 73%, p < 0.001; and 62% vs. 28%, 66% vs. 51% and 75% vs. 57%, p < 0.001). In a patient-based model, the MCA showed higher accuracy vs. the SRA (93% vs. 76%, p < 0.05). The MCA can significantly improve subjective image quality, overall evaluability and diagnostic accuracy of CCT. (orig.)
Segmentation-free empirical beam hardening correction for CT
Energy Technology Data Exchange (ETDEWEB)
Schüller, Sören; Sawall, Stefan [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg 69120 (Germany); Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich [Sirona Dental Systems GmbH, Fabrikstraße 31, 64625 Bensheim (Germany); Kachelrieß, Marc, E-mail: marc.kachelriess@dkfz.de [German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)
2015-02-15
Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the used spectrum, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique are both relying on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening by itself, scatter, and other effects, which diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similar to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data, which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the
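The water-precorrection baseline that EBHC and the proposed method build on can be sketched with a toy three-bin spectrum; the energies, weights and attenuation values below are illustrative, not a real spectrum:

```python
import numpy as np

# Toy three-bin x-ray spectrum and water attenuation (illustrative values).
energies = np.array([40.0, 60.0, 80.0])   # keV (for orientation only)
weights = np.array([0.3, 0.5, 0.2])       # spectral weights, sum to 1
mu_water = np.array([0.27, 0.21, 0.18])   # 1/cm

L = np.linspace(0.0, 40.0, 200)           # water thickness, cm
I = (weights[:, None] * np.exp(-np.outer(mu_water, L))).sum(axis=0)
p_poly = -np.log(I)                       # measured, beam-hardened projection
p_mono = mu_water[1] * L                  # ideal monochromatic reference

# Water precorrection: fit a polynomial mapping hardened projections back
# onto the linear monochromatic ones.
coeffs = np.polyfit(p_poly, p_mono, 4)
p_corr = np.polyval(coeffs, p_poly)

err_before = float(np.max(np.abs(p_poly - p_mono)))
err_after = float(np.max(np.abs(p_corr - p_mono)))
print(err_before, err_after)
```

This linearisation handles only the single dominant tissue class (water); the streaks between dense objects discussed above are exactly what such a one-material mapping cannot remove.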
A new algorithm for coding geological terminology
Apon, W.
The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and to link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with the algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Fewer than 2% of the codes produced by the algorithm are erroneous.
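The three-step "direct method" described above can be sketched as follows. The word-combination table, the code symbols, and the correction rule are invented for illustration; they are not the Survey's actual auxiliary files.

```python
# Sketch of the three-step "direct method" for coding lithologic well logs.
# WORD_CODES and INVALID_PAIRS stand in for the two auxiliary files; all
# phrases and codes here are hypothetical.

WORD_CODES = {  # auxiliary file 1: defined word combinations -> codes
    "fine sand": "ZF",
    "coarse sand": "ZC",
    "clay": "K",
    "gravel": "G",
}
INVALID_PAIRS = {  # auxiliary file 2: incorrect code combinations -> correction
    ("ZF", "ZC"): ("ZF",),  # fine and coarse sand together: keep fine sand
}

def encode_log(description):
    text = description.lower()
    # Step 1: search for defined word combinations and assign codes
    # (longest phrases first, so "coarse sand" is matched before "sand")
    codes = [code for phrase, code in
             sorted(WORD_CODES.items(), key=lambda kv: -len(kv[0]))
             if phrase in text]
    # Step 2: delete duplicated codes, preserving order
    seen, unique = set(), []
    for c in codes:
        if c not in seen:
            seen.add(c)
            unique.append(c)
    # Step 3: correct incorrect code combinations
    for (a, b), replacement in INVALID_PAIRS.items():
        if a in unique and b in unique:
            unique = [c for c in unique if c not in (a, b)]
            unique.extend(replacement)
    return unique

print(encode_log("Fine sand with some clay and fine sand lenses"))
```

The duplicate "fine sand" mention yields a single `ZF` code, illustrating step 2; a log mentioning both sand classes would trigger the step-3 correction rule.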
National Research Council Canada - National Science Library
Matson, Charles; Haji, Alim
2007-01-01
Multi-frame blind deconvolution (MFBD) algorithms can be used to generate a deblurred image of an object from a sequence of short-exposure and atmospherically-blurred images of the object by jointly estimating the common object...
Quantum error correction for beginners
International Nuclear Information System (INIS)
Devitt, Simon J; Nemoto, Kae; Munro, William J
2013-01-01
Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang
2016-01-01
Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic parameter adaptation operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 success criterion combined with a learning factor was used to control the dynamic parameter adaptation process. The crossover operation of the genetic algorithm was used to guarantee population diversity. The new hybrid algorithm has better local search ability and achieves superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data were used. The numerical experiments demonstrate that the DACS-CO algorithm provides an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter.
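The Rechenberg 1/5 success rule that drives the dynamic parameter adaptation can be sketched in isolation. The objective function, constants, and simple random-search loop below are illustrative assumptions, not the paper's refractivity cost function or the full DACS-CO algorithm.

```python
import random

# Sketch of Rechenberg's 1/5 success rule for dynamically adapting a step-size
# parameter. The quadratic objective and all constants are illustrative.

def one_fifth_rule(step, success_rate, factor=0.85):
    # Shrink the step when fewer than 1/5 of trial moves improve the cost,
    # grow it when more than 1/5 improve it.
    if success_rate < 0.2:
        return step * factor
    if success_rate > 0.2:
        return step / factor
    return step

def adaptive_search(f, x0, step=1.0, epochs=50, trials_per_epoch=10, seed=0):
    rng = random.Random(seed)
    x, best = x0, f(x0)
    for _ in range(epochs):
        successes = 0
        for _ in range(trials_per_epoch):
            cand = x + rng.gauss(0.0, step)
            val = f(cand)
            if val < best:  # accept only improving moves
                x, best = cand, val
                successes += 1
        step = one_fifth_rule(step, successes / trials_per_epoch)
    return x, best

x, val = adaptive_search(lambda x: (x - 3.0) ** 2, x0=10.0)
print(round(x, 2), round(val, 4))
```

In DACS-CO this kind of adaptation controls the CS search parameters rather than a single scalar step, but the feedback principle is the same: the observed success rate of recent moves steers the exploration scale.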
2013-01-01
Background In concurrent EEG/fMRI recordings, EEG data are impaired by the fMRI gradient artifacts, which exceed the EEG signal by several orders of magnitude. While several algorithms exist to correct the EEG data, these algorithms lack the flexibility to either leave out or add new steps. The open-source MATLAB toolbox FACET presented here is a modular toolbox for the fast and flexible correction and evaluation of imaging artifacts from concurrently recorded EEG datasets. It consists of an Analysis, a Correction and an Evaluation framework allowing the user to choose from different artifact correction methods with various pre- and post-processing steps to form flexible combinations. The quality of the chosen correction approach can then be evaluated and compared to different settings. Results FACET was evaluated on a dataset provided with the FMRIB plugin for EEGLAB using two different correction approaches: Averaged Artifact Subtraction (AAS, Allen et al., NeuroImage 12(2):230–239, 2000) and the FMRI Artifact Slice Template Removal (FASTR, Niazy et al., NeuroImage 28(3):720–737, 2005). The obtained results were compared to those of the FASTR algorithm implemented in the EEGLAB plugin FMRIB. No differences were found between the FACET implementation of FASTR and the original algorithm across all gradient artifact relevant performance indices. Conclusion The FACET toolbox not only provides facilities for all three modalities: data analysis, artifact correction as well as evaluation and documentation of the results, but it also offers an easily extendable framework for development and evaluation of new approaches. PMID:24206927
The Innsbruck/ESO sky models and telluric correction tools*
Directory of Open Access Journals (Sweden)
Kimeswenger S.
2015-01-01
While ground-based astronomical observatories only have to correct for the line-of-sight integral of these effects, Čerenkov telescopes use the atmosphere as the primary detector. The measured radiation originates at lower altitudes and does not pass through the entire atmosphere. Thus, good knowledge of the profile of the atmosphere at any time is required, and this cannot be achieved by photometric measurements of stellar sources. We show here the capabilities of our sky background model and data reduction tools for ground-based optical/infrared telescopes. Furthermore, we discuss the feasibility of monitoring the atmosphere above any observing site, and thus the possible application of the method to Čerenkov telescopes.
Quality Assessment of Collection 6 MODIS Atmospheric Science Products
Manoharan, V. S.; Ridgway, B.; Platnick, S. E.; Devadiga, S.; Mauoka, E.
2015-12-01
Since the launch of the NASA Terra and Aqua satellites in December 1999 and May 2002, respectively, atmosphere and land data acquired by the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor on-board these satellites have been reprocessed five times at the MODAPS (MODIS Adaptive Processing System) located at NASA GSFC. The global land and atmosphere products use science algorithms developed by the NASA MODIS science team investigators. MODAPS completed Collection 6 reprocessing of MODIS Atmosphere science data products in April 2015 and is currently generating the Collection 6 products using the latest version of the science algorithms. This reprocessing has generated one of the longest time series of consistent data records for understanding cloud, aerosol, and other constituents in the earth's atmosphere. It is important to carefully evaluate and assess the quality of this data and remove any artifacts to maintain a useful climate data record. Quality Assessment (QA) is an integral part of the processing chain at MODAPS. This presentation will describe the QA approaches and tools adopted by the MODIS Land/Atmosphere Operational Product Evaluation (LDOPE) team to assess the quality of MODIS operational Atmospheric products produced at MODAPS. Some of the tools include global high resolution images, time series analysis and statistical QA metrics. The new high resolution global browse images with pan and zoom have provided the ability to perform QA of products in real time through synoptic QA on the web. This global browse generation has been useful in identifying production error, data loss, and data quality issues from calibration error, geolocation error and algorithm performance. A time series analysis for various science datasets in the Level-3 monthly product was recently developed for assessing any long term drifts in the data arising from instrument errors or other artifacts. This presentation will describe and discuss some test cases from the
Effects of visualization on algorithm comprehension
Mulvey, Matthew
Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis discusses the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
Methods to Increase Educational Effectiveness in an Adult Correctional Setting.
Kuster, Byron
1998-01-01
A correctional educator reflects on methods that improve instructional effectiveness. These include teacher-student collaboration, clear goals, student accountability, positive classroom atmosphere, high expectations, and mutual respect. (SK)
A Horizontal Tilt Correction Method for Ship License Numbers Recognition
Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi
2018-02-01
An automatic ship license numbers (SLNs) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships usually have large vertical or horizontal angles, which significantly decreases the accuracy and robustness of an SLNs recognition system. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task in three main steps. First, an MSER-based characters' center-points computation algorithm is designed to compute the accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using an M-estimator algorithm; the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to rotate the input SLN into the horizontal orientation. In experiments on 200 tilted SLN images, the proposed method proves effective, with a tilt correction rate of 80.5%.
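The geometry of the second and third steps, fitting a line to the character center-points, taking its slope as the tilt angle, and rotating the image back to horizontal, can be sketched as follows. Ordinary least squares stands in here for the paper's L1-L2 M-estimator fit, and the center-points are synthetic.

```python
import math

# Sketch of the tilt-correction geometry: estimate the tilt angle from a line
# fitted to character center-points, then rotate so the baseline is horizontal.
# Least squares replaces the paper's M-estimator fit; the points are invented.

def tilt_angle(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    return math.atan2(sxy, sxx)  # radians; slope angle of the fitted line

def rotate(points, angle, center=(0.0, 0.0)):
    # Rotate by -angle about `center`, which levels the fitted baseline.
    c, s = math.cos(-angle), math.sin(-angle)
    cx, cy = center
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in points]

# Center-points of six characters on a line tilted by 10 degrees.
pts = [(i * 10.0, i * 10.0 * math.tan(math.radians(10.0))) for i in range(6)]
a = tilt_angle(pts)
print(round(math.degrees(a), 1))  # estimated tilt angle in degrees
flat = rotate(pts, a)
print(max(abs(y) for _, y in flat))  # residual vertical spread after correction
```

An M-estimator would down-weight outlier center-points (e.g. a mis-detected character) before this same angle-and-rotate step; with clean synthetic points the least-squares fit already recovers the exact angle.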
Petri nets SM-cover-based on heuristic coloring algorithm
Tkacz, Jacek; Doligalski, Michał
2015-09-01
In the paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The presented algorithm reduces the Petri net in order to lower the computational complexity, and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The found SM-cover will also be used in the development of algorithms for decomposition and for modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.
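A greedy coloring heuristic of the general kind the abstract relies on can be sketched as follows: places that must not share a state machine are linked in a conflict graph, and each color class then becomes one SM-subnet candidate. The conflict graph below is invented for illustration; the actual algorithm additionally reduces and interprets the net.

```python
# Sketch of a greedy graph-coloring heuristic as used for SM-cover search.
# Nodes are Petri net places; edges connect places that cannot belong to the
# same State Machine subnet. The example graph is hypothetical.

def greedy_coloring(conflicts):
    # conflicts: dict mapping each node to the set of nodes it conflicts with
    colors = {}
    # Color high-degree (most constrained) nodes first.
    for node in sorted(conflicts, key=lambda n: -len(conflicts[n])):
        used = {colors[n] for n in conflicts[node] if n in colors}
        colors[node] = next(c for c in range(len(conflicts)) if c not in used)
    return colors

# Places p1..p4; concurrent places conflict and must get different colors.
conflicts = {
    "p1": {"p2", "p3", "p4"},
    "p2": {"p1", "p3"},
    "p3": {"p1", "p2"},
    "p4": {"p1"},
}
coloring = greedy_coloring(conflicts)
print(max(coloring.values()) + 1)  # number of SM-subnets in this cover
```

As the abstract notes for its own heuristic, a greedy cover is not necessarily optimal (minimum-coloring is NP-hard), but it is cheap enough for rapid prototyping.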
Ionospheric correction for spaceborne single-frequency GPS based ...
Indian Academy of Sciences (India)
A modified ionospheric correction method and the corresponding approximate algorithm for spaceborne single-frequency Global Positioning System (GPS) users are proposed in this study. Single Layer Model (SLM) mapping function for spaceborne GPS was analyzed. SLM mapping functions at different altitudes were ...
NPOESS Tools for Rapid Algorithm Updates
Route, G.; Grant, K. D.; Hughes, B.; Reed, B.
2009-12-01
The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system: the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. The IDPS processes both NPP and NPOESS satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. Northrop Grumman Aerospace Systems' Algorithms and Data Products (A&DP) organization is responsible for the algorithms that produce the EDRs, including their quality aspects. As the Calibration and Validation activities move forward following both the NPP launch and subsequent NPOESS launches, rapid algorithm updates may be required. Raytheon and Northrop Grumman have developed tools and processes to enable changes to be evaluated, tested, and moved into the operational baseline in a rapid and efficient manner. This presentation will provide an overview of the tools available to the Cal/Val teams to ensure rapid and accurate assessment of algorithm changes, along with the processes in place to ensure baseline integrity.
Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling
Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng
2016-01-01
Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and wider coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath. PMID:27428974
Energy Technology Data Exchange (ETDEWEB)
Liu, Xinmin; Belcher, Andrew H.; Grelewicz, Zachary; Wiersma, Rodney D., E-mail: rwiersma@uchicago.edu [Department of Radiation and Cellular Oncology, The University of Chicago, Chicago, Illinois 60637 (United States)
2015-06-15
Purpose: To develop a control system to correct both translational and rotational head motion deviations in real-time during frameless stereotactic radiosurgery (SRS). Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam off (gating) tolerance constants for patient safety. Testing of the algorithm was done using a 4D (XYZ + pitch) robotic stage, an infrared head position sensing unit and a control computer. The measured head position signal was processed and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation. Results: The control of the translation of a brain target was decoupled with the control of the rotation. For a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. For a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° over 98.5% of the time, while it was 10.7% without correction. Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS.
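The beam-off gating safety check described in the Methods can be sketched as a simple predicate: the Linac beam is held off whenever the measured 6DOF head deviation exceeds the translational or angular tolerance. The tolerance values and the deviation metric below are illustrative assumptions, not the authors' clinical settings.

```python
import math

# Sketch of a 6DOF gating check: gate (turn off) the beam when head motion
# exceeds translational or angular tolerances. All constants are hypothetical.

TRANS_TOL_MM = 0.5  # hypothetical translational gating tolerance
ANG_TOL_DEG = 0.3   # hypothetical angular gating tolerance

def beam_on(dx, dy, dz, pitch, yaw, roll):
    # Combine translations into a single displacement magnitude; take the
    # worst-case rotation axis for the angular check.
    trans = math.sqrt(dx * dx + dy * dy + dz * dz)
    ang = max(abs(pitch), abs(yaw), abs(roll))
    return trans <= TRANS_TOL_MM and ang <= ANG_TOL_DEG

print(beam_on(0.2, 0.1, 0.1, 0.1, 0.0, 0.05))  # within both tolerances
print(beam_on(0.2, 0.1, 0.1, 0.4, 0.0, 0.05))  # pitch exceeds the tolerance
```

In the actual system this check runs in real time alongside the feed-forward correction, so the stage compensates small deviations and the gate only trips when compensation cannot keep up.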
Energy Technology Data Exchange (ETDEWEB)
Robert Boehlecke
2004-04-01
The six bunkers included in CAU 204 were primarily used to monitor atmospheric testing or store munitions. The "Corrective Action Investigation Plan (CAIP) for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada" (NNSA/NV, 2002a) provides information relating to the history, planning, and scope of the investigation; therefore, it will not be repeated in this CADD. This CADD identifies potential corrective action a