WorldWideScience

Sample records for psf-matched difference imaging

  1. Generalized PSF modeling for optimized quantitation in PET imaging.

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF
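
    As a rough, hedged illustration of the kind of reconstruction discussed above, the sketch below applies an EM (Richardson-Lucy) update with an image-space Gaussian PSF model; the tomographic projector is omitted for brevity, and the function name and psf_sigma parameter are placeholders, not the authors' code. Setting psf_sigma above or below the value used to blur the data mimics the over-/under-estimated PSF experiments.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def em_psf_recon(measured, psf_sigma, n_iter=30):
            # EM update with an image-space Gaussian PSF model (Richardson-Lucy).
            # A real OS-EM implementation would interleave this blur with
            # forward- and back-projection through the scanner geometry.
            x = np.full_like(measured, measured.mean(), dtype=float)
            norm = gaussian_filter(np.ones_like(x), psf_sigma)  # sensitivity term
            for _ in range(n_iter):
                forward = gaussian_filter(x, psf_sigma) + 1e-12
                x *= gaussian_filter(measured / forward, psf_sigma) / norm
            return x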

  2. Noise and signal properties in PSF-based fully 3D PET image reconstruction: an experimental evaluation

    International Nuclear Information System (INIS)

    Tong, S; Alessio, A M; Kinahan, P E

    2010-01-01

    The addition of accurate system modeling in PET image reconstruction results in images with distinct noise texture and characteristics. In particular, the incorporation of point spread functions (PSF) into the system model has been shown to visually reduce image noise, but the noise properties have not been thoroughly studied. This work offers a systematic evaluation of noise and signal properties in different combinations of reconstruction methods and parameters. We evaluate two fully 3D PET reconstruction algorithms: (1) OSEM with exact scanner line of response modeled (OSEM+LOR), (2) OSEM with line of response and a measured point spread function incorporated (OSEM+LOR+PSF), in combination with the effects of four post-reconstruction filtering parameters and 1-10 iterations, representing a range of clinically acceptable settings. We used a modified NEMA image quality (IQ) phantom, which was filled with 68 Ge and consisted of six hot spheres of different sizes with a target/background ratio of 4:1. The phantom was scanned 50 times in 3D mode on a clinical system to provide independent noise realizations. Data were reconstructed with OSEM+LOR and OSEM+LOR+PSF using different reconstruction parameters, and our implementations of the algorithms match the vendor's product algorithms. With access to multiple realizations, background noise characteristics were quantified with four metrics. Image roughness and the standard deviation image measured the pixel-to-pixel variation; background variability and ensemble noise quantified the region-to-region variation. Image roughness is the image noise perceived when viewing an individual image. At matched iterations, the addition of PSF leads to images with less noise defined as image roughness (reduced by 35% for unfiltered data) and as the standard deviation image, while it has no effect on background variability or ensemble noise. In terms of signal to noise performance, PSF-based reconstruction has a 7% improvement in
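
    A minimal sketch, under assumed metric definitions, of how such background-noise metrics can be computed from a stack of independent replicate reconstructions (numpy assumed; the paper's exact ROI layout is not reproduced):

        import numpy as np

        def background_noise_metrics(recons, bg_mask):
            # recons: (R, H, W) stack of independent noise realizations.
            # bg_mask: boolean background region in the image plane.
            vals = recons[:, bg_mask]                                         # (R, Nvox)
            image_roughness = (vals.std(axis=1) / vals.mean(axis=1)).mean()  # pixel-to-pixel
            sd_image = recons.std(axis=0)                                     # voxel-wise std across realizations
            roi_means = vals.mean(axis=1)
            ensemble_noise = roi_means.std() / roi_means.mean()              # ROI mean varying across realizations
            # Background variability would additionally use several ROIs per image.
            return image_roughness, sd_image, ensemble_noise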

  3. Matching rendered and real world images by digital image processing

    Science.gov (United States)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real world images with those rendered from virtual space software reveals a more or less visible mismatch between the corresponding image quality levels. Rendered images are produced by software whose quality is limited only by the output resolution. Real world images are taken with cameras that introduce a number of image degradation factors, such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all those image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object by the system PSF, its characterization shows the amount of image degradation added to any taken picture. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match both virtual and real world image qualities. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter derived from the capturing system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
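
    The core of the degradation step can be sketched as follows, assuming the measured MTF is summarized by its 50% frequency and the PSF is approximated as Gaussian (function names are illustrative only):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def sigma_from_mtf50(f50_cyc_per_px):
            # For a Gaussian PSF, MTF(f) = exp(-2*pi^2*sigma^2*f^2);
            # solving MTF(f50) = 0.5 gives the blur width in pixels.
            return np.sqrt(np.log(2.0) / 2.0) / (np.pi * f50_cyc_per_px)

        def degrade_rendered(rendered, f50_cyc_per_px):
            # Blur the ideal rendered image so its sharpness matches the camera.
            return gaussian_filter(rendered.astype(float), sigma_from_mtf50(f50_cyc_per_px))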

  4. Practical considerations for image-based PSF and blobs reconstruction in PET

    International Nuclear Information System (INIS)

    Stute, Simon; Comtat, Claude

    2013-01-01

    Iterative reconstructions in positron emission tomography (PET) need a model relating the recorded data to the object/patient being imaged, called the system matrix (SM). The more realistic this model, the better the spatial resolution in the reconstructed images. However, a serious concern when using a SM that accurately models the resolution properties of the PET system is the undesirable edge artefact, visible through oscillations near sharp discontinuities in the reconstructed images. This artefact is a natural consequence of solving an ill-conditioned inverse problem, where the recorded data are band-limited. In this paper, we focus on practical aspects when considering image-based point-spread function (PSF) reconstructions. To remove the edge artefact, we propose to use a particular case of the method of sieves (Grenander 1981 Abstract Inference New York: Wiley), which simply consists in performing a standard PSF reconstruction, followed by a post-smoothing using the PSF as the convolution kernel. Using analytical simulations, we investigate the impact of different reconstruction and PSF modelling parameters on the edge artefact and its suppression, in the case of noise-free data and an exactly known PSF. Using Monte-Carlo simulations, we assess the proposed method of sieves with respect to the choice of the geometric projector and the PSF model used in the reconstruction. When the PSF model is accurately known, we show that the proposed method of sieves succeeds in completely suppressing the edge artefact, though after a number of iterations higher than typically used in practice. When applying the method to realistic data (i.e. unknown true SM and noisy data), we show that the choice of the geometric projector and the PSF model does not impact the results in terms of noise and contrast recovery, as long as the PSF has a width close to the true PSF one. Equivalent results were obtained using either blobs or voxels in the same conditions (i.e. the blob
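
    The post-smoothing step of the method of sieves described above amounts to one extra convolution after the PSF-based reconstruction; a minimal sketch, with an isotropic Gaussian standing in for the modelled PSF:

        from scipy.ndimage import gaussian_filter

        def sieve_post_smooth(psf_recon, psf_sigma_vox):
            # Run a standard PSF reconstruction first, then post-smooth it
            # using the (approximate) PSF itself as the convolution kernel.
            return gaussian_filter(psf_recon, psf_sigma_vox)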

  5. Implementation and Application of PSF-Based EPI Distortion Correction to High Field Animal Imaging

    Directory of Open Access Journals (Sweden)

    Dominik Paul

    2009-01-01

    The purpose of this work is to demonstrate the functionality and performance of a PSF-based geometric distortion correction for high-field functional animal EPI. The EPI method was extended to measure the PSF and a postprocessing chain was implemented in Matlab for offline distortion correction. The correction procedure was applied to phantom and in vivo imaging of mice and rats at 9.4T using different SE-EPI and DWI-EPI protocols. Results show a significant improvement in image quality for single- and multishot EPI. Using a reduced FOV in the PSF encoding direction clearly reduced the acquisition time for PSF data by an acceleration factor of 2 or 4, without affecting the correction quality.

  6. Model-based PSF and MTF estimation and validation from skeletal clinical CT images.

    Science.gov (United States)

    Pakdel, Amirreza; Mainprize, James G; Robert, Normand; Fialkov, Jeffery; Whyne, Cari M

    2014-01-01

    A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images, using bone interfaces to estimate the point-spread function (PSF). This study validates the accuracy of the PSFs estimated with this method from various clinical CT images featuring cortical bones. Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions, were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and along the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates depended on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high-resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption, but less accurate when using a high-resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need for phantoms or any prior knowledge about the scanner-specific parameters.
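
    For readers relating the reported numbers, the Gaussian-PSF conversions used implicitly above can be written as follows (a simple sketch, not the authors' code):

        import numpy as np

        def fwhm_from_sigma(sigma_mm):
            # FWHM of a Gaussian PSF from its standard deviation.
            return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_mm

        def freq_at_mtf_level(sigma_mm, level):
            # Frequency (cycles/mm) at which the Gaussian MTF
            # exp(-2*pi^2*sigma^2*f^2) falls to `level` (e.g. 0.5, 0.1).
            return np.sqrt(np.log(1.0 / level) / 2.0) / (np.pi * sigma_mm)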

  7. Model-based PSF and MTF estimation and validation from skeletal clinical CT images

    International Nuclear Information System (INIS)

    Pakdel, Amirreza; Mainprize, James G.; Robert, Normand; Fialkov, Jeffery; Whyne, Cari M.

    2014-01-01

    Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images using bone interfaces to estimate the point-spread-function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need phantoms or any prior knowledge about the

  8. Model-based PSF and MTF estimation and validation from skeletal clinical CT images

    Energy Technology Data Exchange (ETDEWEB)

    Pakdel, Amirreza [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Mainprize, James G.; Robert, Normand [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5 (Canada); Fialkov, Jeffery [Division of Plastic Surgery, Sunnybrook Health Sciences Center, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, University of Toronto, Toronto, Ontario M5S 3M2 (Canada); Whyne, Cari M., E-mail: cari.whyne@sunnybrook.ca [Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada and Department of Surgery, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3M2 (Canada)

    2014-01-15

    Purpose: A method was developed to correct for systematic errors in estimating the thickness of thin bones due to image blurring in CT images using bone interfaces to estimate the point-spread-function (PSF). This study validates the accuracy of the PSFs estimated using said method from various clinical CT images featuring cortical bones. Methods: Gaussian PSFs, characterized by a different extent in the z (scan) direction than in the x and y directions were obtained using our method from 11 clinical CT scans of a cadaveric craniofacial skeleton. These PSFs were estimated for multiple combinations of scanning parameters and reconstruction methods. The actual PSF for each scan setting was measured using the slanted-slit technique within the image slice plane and the longitudinal axis. The Gaussian PSF and the corresponding modulation transfer function (MTF) are compared against the actual PSF and MTF for validation. Results: The differences (errors) between the actual and estimated full-width half-max (FWHM) of the PSFs were 0.09 ± 0.05 and 0.14 ± 0.11 mm for the xy and z axes, respectively. The overall errors in the predicted frequencies measured at 75%, 50%, 25%, 10%, and 5% MTF levels were 0.06 ± 0.07 and 0.06 ± 0.04 cycles/mm for the xy and z axes, respectively. The accuracy of the estimates was dependent on whether they were reconstructed with a standard kernel (Toshiba's FC68, mean error of 0.06 ± 0.05 mm, MTF mean error 0.02 ± 0.02 cycles/mm) or a high resolution bone kernel (Toshiba's FC81, PSF FWHM error 0.12 ± 0.03 mm, MTF mean error 0.09 ± 0.08 cycles/mm). Conclusions: The method is accurate in 3D for an image reconstructed using a standard reconstruction kernel, which conforms to the Gaussian PSF assumption but less accurate when using a high resolution bone kernel. The method is a practical and self-contained means of estimating the PSF in clinical CT images featuring cortical bones, without the need phantoms or any prior knowledge

  9. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Directory of Open Access Journals (Sweden)

    González Adriana

    2016-01-01

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  10. Image-based Modeling of PSF Deformation with Application to Limited Angle PET Data

    Science.gov (United States)

    Matej, Samuel; Li, Yusheng; Panetta, Joseph; Karp, Joel S.; Surti, Suleman

    2016-01-01

    The point-spread-functions (PSFs) of reconstructed images can be deformed due to detector effects such as resolution blurring and parallax error, data acquisition geometry such as insufficient sampling or limited angular coverage in dual-panel PET systems, or reconstruction imperfections/simplifications. PSF deformation decreases quantitative accuracy and its spatial variation lowers consistency of lesion uptake measurement across the imaging field-of-view (FOV). This can be a significant problem with dual panel PET systems even when using TOF data and image reconstruction models of the detector and data acquisition process. To correct for the spatially variant reconstructed PSF distortions we propose to use an image-based resolution model (IRM) that includes such image PSF deformation effects. Originally the IRM was mostly used for approximating data resolution effects of standard PET systems with full angular coverage in a computationally efficient way, but recently it was also used to mitigate effects of simplified geometric projectors. Our work goes beyond this by including into the IRM reconstruction imperfections caused by combination of the limited angle, parallax errors, and any other (residual) deformation effects and testing it for challenging dual panel data with strongly asymmetric and variable PSF deformations. We applied and tested these concepts using simulated data based on our design for a dedicated breast imaging geometry (B-PET) consisting of dual-panel, time-of-flight (TOF) detectors. We compared two image-based resolution models; i) a simple spatially invariant approximation to PSF deformation, which captures only the general PSF shape through an elongated 3D Gaussian function, and ii) a spatially variant model using a Gaussian mixture model (GMM) to more accurately capture the asymmetric PSF shape in images reconstructed from data acquired with the B-PET scanner geometry. Results demonstrate that while both IRMs decrease the overall uptake

  11. PSF Estimation of Space-Variant Ultra-Wide Field of View Imaging Systems

    Directory of Open Access Journals (Sweden)

    Petr Janout

    2017-02-01

    Ultra-wide field of view (UWFOV) imaging systems are affected by various aberrations, most of which are highly angle-dependent. A description of UWFOV imaging systems, such as microscopy optics, security camera systems and other special space-variant imaging systems, is a difficult task that can be achieved by estimating the Point Spread Function (PSF) of the system. This paper proposes a novel method for modeling the space-variant PSF of an imaging system using the Zernike polynomial wavefront description. The PSF estimation algorithm is based on obtaining field-dependent expansion coefficients of the Zernike polynomials by fitting real image data of the analyzed imaging system, using an iterative approach with an initial estimate of the fitting parameters to ensure convergence robustness. The method is promising as an alternative to the standard approach based on Shack–Hartmann interferometry, since the estimate of the aberration coefficients is processed directly in the image plane. This approach is tested on simulated and laboratory-acquired image data and generally shows good agreement. The resulting data are compared with the results of other modeling methods. The proposed PSF estimation method provides around 5% accuracy of the optical system model.
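
    A hedged sketch of the Zernike-to-PSF link underlying such models: a wavefront with a single defocus term on a circular pupil, propagated to a far-field PSF via an FFT. The field dependence studied in the paper would enter by making the coefficients functions of image position; the grid size and coefficient value below are arbitrary.

        import numpy as np

        def psf_from_defocus(n=256, z4_waves=0.5):
            x = np.linspace(-1.0, 1.0, n)
            X, Y = np.meshgrid(x, x)
            r2 = X**2 + Y**2
            pupil = (r2 <= 1.0).astype(float)
            W = z4_waves * np.sqrt(3.0) * (2.0 * r2 - 1.0)   # Zernike defocus term, in waves
            field = pupil * np.exp(2j * np.pi * W)
            psf = np.abs(np.fft.fftshift(np.fft.fft2(field, s=(2 * n, 2 * n)))) ** 2
            return psf / psf.sum()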

  12. Computing the PSF with high-resolution reconstruction technique

    Science.gov (United States)

    Su, Xiaofeng; Chen, FanSheng; Yang, Xue; Xue, Yulong; Dong, YucCui

    2016-05-01

    The point spread function (PSF) is an important characteristic of an imaging system; it describes the system's filtering behaviour. The image is blurred when the PSF is poor, and sharp when it is good. In remote sensing image processing, the image can be restored using the PSF of the imaging system to obtain a clearer picture, so measuring the PSF of the system is essential. Usually the knife-edge method, the line spread function (LSF) method or the streak plate method is used to obtain the modulation transfer function (MTF), from which the PSF of the system is then calculated. In the knife-edge method, the non-uniformity (NU) of the detector leads to unstable precision in the estimated edge angle; the streak plate gives a more stable MTF, but only at one frequency point in one direction, so it is of limited use for obtaining a high-precision PSF. In this paper, we use the image of a point target directly, combined with the energy concentration, to calculate the PSF. First we build a point-matrix target board and ensure that each point images to a sub-pixel position on the detector array; then we use the center of gravity to locate the point-target images and obtain the energy concentration; then we fuse the target images together using their sub-pixel positions to obtain a stable PSF of the system. Finally, simulation results are used to confirm the accuracy of the method.
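
    The center-of-gravity localization mentioned above reduces to an intensity-weighted centroid of each point-target patch; a minimal sketch (numpy assumed):

        import numpy as np

        def subpixel_centroid(patch):
            # Intensity-weighted center of gravity, giving the sub-pixel
            # location used to align and fuse the point-target images.
            ys, xs = np.indices(patch.shape)
            total = float(patch.sum())
            return (ys * patch).sum() / total, (xs * patch).sum() / total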

  13. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    Science.gov (United States)

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
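
    A simplified 2D sketch of the simulation idea (blur an object function with a PSF, resample at the clinical pixel pitch, measure an ROI); the Gaussian PSF, grid settings and ROI size are placeholders, not the authors' measured values:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def simulated_nodule_density(diam_mm, nodule_hu, psf_sigma_mm,
                                     grid_mm=0.1, pixel_mm=0.625, offset_mm=0.0):
            half = 2.0 * diam_mm
            x = np.arange(-half, half, grid_mm)
            X, Y = np.meshgrid(x, x)
            obj = np.where(X**2 + Y**2 <= (diam_mm / 2.0) ** 2, nodule_hu, -1000.0)
            img = gaussian_filter(obj, psf_sigma_mm / grid_mm)      # PSF blurring
            step = int(round(pixel_mm / grid_mm))
            start = int(round(offset_mm / grid_mm))
            xs = x[start::step]                                     # resampled pixel grid
            XX, YY = np.meshgrid(xs, xs)
            roi = XX**2 + YY**2 <= (diam_mm / 4.0) ** 2             # central measurement ROI
            return img[start::step, start::step][roi].mean()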

  14. Fuzzy Matching Based on Gray-scale Difference for Quantum Images

    Science.gov (United States)

    Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia

    2018-05-01

    Quantum image processing has recently emerged as an essential problem in practical tasks, e.g. real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is very similar to the template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If all of the obtained gray-scale differences are not greater than the threshold value, it indicates a successful fuzzy matching of quantum images. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at a low cost and also enables an exponentially significant speedup via quantum parallel computation.

  15. Iterative PSF Estimation and Its Application to Shift Invariant and Variant Blur Reduction

    Directory of Open Access Journals (Sweden)

    Seung-Won Jung

    2009-01-01

    Among image restoration approaches, image deconvolution has been considered a powerful solution. In image deconvolution, a point spread function (PSF), which describes the blur of the image, needs to be determined. Therefore, in this paper, we propose an iterative PSF estimation algorithm which is able to estimate an accurate PSF. In real-world motion-blurred images, a simple parametric model of the PSF fails when a camera moves in an arbitrary direction with an inconsistent speed during an exposure time. Moreover, the PSF normally changes with spatial location. In order to accurately estimate the complex PSF of a real motion-blurred image, we iteratively update the PSF by using a directional spreading operator. The directional spreading is applied to the PSF when it reduces the amount of the blur and the restoration artifacts. Then, to generalize the proposed technique to the linear shift variant (LSV) model, a piecewise invariant approach is adopted by the proposed image segmentation method. Experimental results show that the proposed method effectively estimates the PSF and restores the degraded images.

  16. Relationship between line spread function (LSF), or slice sensitivity profile (SSP), and point spread function (PSF) in CT image system

    International Nuclear Information System (INIS)

    Ohkubo, Masaki; Wada, Shinichi; Kobayashi, Teiji; Lee, Yongbum; Tsai, Du-Yih

    2004-01-01

    In the CT imaging system, we revealed the relationship between the line spread function (LSF), or slice sensitivity profile (SSP), and the point spread function (PSF). For this system, the following equation has been reported: I(x,y) = O(x,y) ** PSF(x,y), in which I(x,y) and O(x,y) are the CT image and object function, respectively, and ** denotes 2-dimensional convolution. In the same way, the following 3-dimensional expression applies: I'(x,y,z) = O'(x,y,z) *** PSF'(x,y,z), in which the z-axis is the direction perpendicular to the x/y scan plane. We defined the CT image system as separable when the above two equations could be transformed into the following equations: I(x,y) = [O(x,y) * LSF_x(x)] * LSF_y(y) and I'(x,y,z) = [O'(x,y,z) * SSP(z)] ** PSF(x,y), respectively, in which LSF_x(x) and LSF_y(y) are the LSFs in the x- and y-directions. Previous reports on the LSF and SSP are considered to assume a separable system. Under the separable-system condition, we derived the following equations: PSF(x,y) = LSF_x(x) · LSF_y(y) and PSF'(x,y,z) = PSF(x,y) · SSP(z). They were validated by computer simulations. When studies based on the 1-dimensional LSF and SSP functions are extended to those based on the 2- or 3-dimensional PSF, the derived equations are required. (author)
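
    The separability relations can be checked numerically in a few lines; the Gaussian LSFs below are arbitrary stand-ins:

        import numpy as np
        from scipy.signal import fftconvolve

        x = np.linspace(-5.0, 5.0, 101)
        lsf_x = np.exp(-x**2 / 2.0); lsf_x /= lsf_x.sum()
        lsf_y = np.exp(-x**2 / 8.0); lsf_y /= lsf_y.sum()
        psf = np.outer(lsf_y, lsf_x)                          # PSF(x,y) = LSF_x(x) * LSF_y(y)

        obj = np.zeros((101, 101)); obj[30:70, 45:55] = 1.0   # simple object function
        img_2d = fftconvolve(obj, psf, mode='same')
        img_sep = fftconvolve(fftconvolve(obj, lsf_x[None, :], mode='same'),
                              lsf_y[:, None], mode='same')
        print(np.max(np.abs(img_2d - img_sep)))               # negligible: both routes agree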

  17. Experimental evaluation and basis function optimization of the spatially variant image-space PSF on the Ingenuity PET/MR scanner

    International Nuclear Information System (INIS)

    Kotasidis, Fotis A.; Zaidi, Habib

    2014-01-01

    Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherical symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom, and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function
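
    The basis functions referred to above are generalized Kaiser-Bessel "blobs"; a short sketch of the radial profile, with illustrative (not scanner-default) parameter values, shows the radius and shape parameters that were varied:

        import numpy as np
        from scipy.special import iv

        def kaiser_bessel_blob(r, radius=2.5, alpha=10.4, m=2):
            # Generalized Kaiser-Bessel basis function; `radius` and the shape
            # parameter `alpha` are the knobs explored in the study above.
            r = np.asarray(r, dtype=float)
            t = np.sqrt(np.clip(1.0 - (r / radius) ** 2, 0.0, None))
            return np.where(r <= radius, (t ** m) * iv(m, alpha * t) / iv(m, alpha), 0.0)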

  18. Experimental evaluation and basis function optimization of the spatially variant image-space PSF on the Ingenuity PET/MR scanner

    Energy Technology Data Exchange (ETDEWEB)

    Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland and Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Zaidi, Habib [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva (Switzerland); Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, 9700 RB (Netherlands)

    2014-06-15

    Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherical symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom, and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis

  19. Iterative PSF Estimation and Its Application to Shift Invariant and Variant Blur Reduction

    OpenAIRE

    Seung-Won Jung; Byeong-Doo Choi; Sung-Jea Ko

    2009-01-01

    Among image restoration approaches, image deconvolution has been considered a powerful solution. In image deconvolution, a point spread function (PSF), which describes the blur of the image, needs to be determined. Therefore, in this paper, we propose an iterative PSF estimation algorithm which is able to estimate an accurate PSF. In real-world motion-blurred images, a simple parametric model of the PSF fails when a camera moves in an arbitrary direction with an inconsistent speed during an e...

  20. Validation of PSF-based 3D reconstruction for myocardial blood flow measurements with Rb-82 PET

    DEFF Research Database (Denmark)

    Tolbod, Lars Poulsen; Christensen, Nana Louise; Møller, Lone W.

    Aim: The use of PSF-based 3D reconstruction algorithms (PSF) is desirable in most clinical PET-exams due to their superior image quality. Rb-82 cardiac PET is inherently noisy due to short half-life and prompt gammas and would presumably benefit from PSF. However, the quantitative behavior of PSF ... images, filtered backprojection (FBP). Furthermore, since myocardial segmentation might be affected by image quality, two different approaches to segmentation implemented in standard software (Carimas (Turku PET Centre) and QPET (Cedar Sinai)) are utilized. Method: 14 dynamic rest-stress Rb-82 patient-scans performed on a GE Discovery 690 PET/CT were included. Images were reconstructed in an isotropic matrix (3.27x3.27x3.27 mm) using PSF (SharpIR: 3 iterations and 21 subsets) and FBP (FORE FBP) with the same edge-preserving filter (3D Butterworth: cut-off 10 mm, power 10). Analysis: The dynamic PET ...

  1. A novel SURE-based criterion for parametric PSF estimation.

    Science.gov (United States)

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSF, involving a scaling factor that controls the blur size. A typical example of such parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
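
    For orientation only, the "Wiener processing" for one candidate Gaussian PSF scale can be sketched as below; the blur-SURE criterion that selects the scale, and the multi-Wiener SURE-LET deconvolution, are not reproduced here:

        import numpy as np

        def gaussian_otf(shape, sigma_px):
            fy = np.fft.fftfreq(shape[0])[:, None]
            fx = np.fft.fftfreq(shape[1])[None, :]
            return np.exp(-2.0 * np.pi**2 * sigma_px**2 * (fx**2 + fy**2))

        def wiener_restore(blurred, sigma_px, nsr=1e-2):
            # Frequency-domain Wiener filter for one candidate PSF scale;
            # scanning sigma_px and scoring each result forms the estimation loop.
            H = gaussian_otf(blurred.shape, sigma_px)
            Y = np.fft.fft2(blurred)
            return np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H)**2 + nsr)))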

  2. Implications of a wavelength dependent PSF for weak lensing measurements.

    Science.gov (United States)

    Eriksen, Martin; Hoekstra, Henk

    2018-05-01

    The convolution of galaxy images by the point-spread function (PSF) is the dominant source of bias for weak gravitational lensing studies, and an accurate estimate of the PSF is required to obtain unbiased shape measurements. The PSF estimate for a galaxy depends on its spectral energy distribution (SED), because the instrumental PSF is generally a function of the wavelength. In this paper we explore various approaches to determine the resulting `effective' PSF using broad-band data. Considering the Euclid mission as a reference, we find that standard SED template fitting methods result in biases that depend on source redshift, although this may be remedied if the algorithms can be optimised for this purpose. Using a machine-learning algorithm we show that, at least in principle, the required accuracy can be achieved with the current survey parameters. It is also possible to account for the correlations between photometric redshift and PSF estimates that arise from the use of the same photometry. We explore the impact of errors in photometric calibration, errors in the assumed wavelength dependence of the PSF model and limitations of the adopted template libraries. Our results indicate that the required accuracy for Euclid can be achieved using the data that are planned to determine photometric redshifts.
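
    The notion of an "effective" PSF for broad-band data amounts to a weighted average of monochromatic PSFs; a minimal sketch with hypothetical inputs (all array names are placeholders):

        import numpy as np

        def effective_psf(psf_cube, wavelengths, sed, throughput):
            # psf_cube: (N_lambda, H, W) monochromatic PSFs; the broad-band PSF
            # is the SED- and throughput-weighted average over the pass-band.
            dl = np.gradient(wavelengths)
            w = sed * throughput * dl
            w = w / w.sum()
            return np.sum(psf_cube * w[:, None, None], axis=0)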

  3. Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter

    Energy Technology Data Exchange (ETDEWEB)

    Ruffio, Jean-Baptiste; Macintosh, Bruce; Nielsen, Eric L.; Czekala, Ian; Bailey, Vanessa P.; Follette, Katherine B. [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA, 94305 (United States); Wang, Jason J.; Rosa, Robert J. De; Duchêne, Gaspard [Astronomy Department, University of California, Berkeley CA, 94720 (United States); Pueyo, Laurent [Space Telescope Science Institute, Baltimore, MD, 21218 (United States); Marley, Mark S. [NASA Ames Research Center, Mountain View, CA, 94035 (United States); Arriaga, Pauline; Fitzgerald, Michael P. [Department of Physics and Astronomy, University of California, Los Angeles, CA, 90095 (United States); Barman, Travis [Lunar and Planetary Laboratory, University of Arizona, Tucson AZ, 85721 (United States); Bulger, Joanna [Subaru Telescope, NAOJ, 650 North A’ohoku Place, Hilo, HI 96720 (United States); Chilcote, Jeffrey [Dunlap Institute for Astronomy and Astrophysics, University of Toronto, Toronto, ON, M5S 3H4 (Canada); Cotten, Tara [Department of Physics and Astronomy, University of Georgia, Athens, GA, 30602 (United States); Doyon, Rene [Institut de Recherche sur les Exoplanètes, Départment de Physique, Université de Montréal, Montréal QC, H3C 3J7 (Canada); Gerard, Benjamin L. [University of Victoria, 3800 Finnerty Road, Victoria, BC, V8P 5C2 (Canada); Goodsell, Stephen J., E-mail: jruffio@stanford.edu [Gemini Observatory, 670 N. A’ohoku Place, Hilo, HI, 96720 (United States); and others

    2017-06-10

    We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loéve image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectra template and KLIP reduction aggressiveness to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanets with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale accounting for both planet completeness and false-positive rate. We show that the new forward model matched filter allows the detection of 50% fainter objects than a conventional cross-correlation technique with a Gaussian PSF template for the same false-positive rate.
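
    At its core, the detection statistic is a whitened matched filter evaluated per candidate location; a toy sketch (the paper's template is the KLIP forward model of the planet PSF, which is not reproduced here):

        import numpy as np

        def matched_filter_snr(data_stamp, template_stamp, noise_sigma):
            # Whitened matched filter S/N for one candidate position.
            d = (data_stamp / noise_sigma).ravel()
            t = (template_stamp / noise_sigma).ravel()
            return float(t @ d) / float(np.sqrt(t @ t))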

  4. Effective image differencing with convolutional neural networks for real-time transient hunting

    Science.gov (United States)

    Sedaghat, Nima; Mahabal, Ashish

    2018-06-01

    Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than individual images and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like Zwicky Transient Facility and Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
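
    A toy fully convolutional stand-in (PyTorch assumed) for the idea of mapping a (new, reference) image pair directly to a transient map; the real network in the paper is much deeper and trained on survey data:

        import torch
        import torch.nn as nn

        class TinyDiffNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))

            def forward(self, pair):        # pair: (batch, 2, H, W) new + reference
                return self.net(pair)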

  5. PSF support pilot program

    Science.gov (United States)

    Anderson, Jay

    2013-10-01

    The goal of this program is to observe the center of Omega Cen (which has a nice flat distribution of reasonably-spaced-out stars) in order to construct a PSF model for ACS's three workhorse filters: F435W, F606W, and F814W. These also happen to be the three ACS filters that will be used in the Frontier-Field program. PI-Anderson will use the data to construct a 9x10 array of fiducial PSFs that describe the static variation of the PSF across the frame for each filter. He will also provide some simple routines that the public can use to insert PSFs into images. The observations will dither the center of the cluster around in a circle with a radius of about 30" such that any single star never falls in the ACS gap more than once. This has the additional benefit that we can use this large dither to validate or improve the distortion solution at the same time we are solving for the PSF. We will get four exposures through each of the ACS filters. The exposure times for the three ACS filters (F435W, F606W, and F814W) were chosen to maximize the number of bright unsaturated stars while simultaneously minimizing the number of saturated stars present. To do this, we made sure that the SGB (which is where the LF rises precipitously) is just below the saturation level. We used archival images from GO-9444 and GO-10775 to ensure that 339s for F435W, 80s in F606W, and 90s in F814W is perfect for this. In addition to the ACS exposures, we also take parallels with WFC3/IR. These exposures will sample a field that is 6' off center. The core radius is 2.5', so this outer field should have a density that is 5x lower than at the center, meaning the typical star is maybe 2.5x farther away. This should compensate for the larger WFC3/IR pixels and will allow us to construct PSFs that are appropriate. We take a total of 32 WFC3/IR exposures, each with an exposure time of 103s, and divide these 32 exposures among the four FF WFC3/IR filters: F105W, F125W, F140W, and F160W. We will use

  6. The Chandra X-ray Observatory PSF Library

    Science.gov (United States)

    Karovska, M.; Beikman, S. J.; Elvis, M. S.; Flanagan, J. M.; Gaetz, T.; Glotfelty, K. J.; Jerius, D.; McDowell, J. C.; Rots, A. H.

    Pre-flight and on-orbit calibration of the Chandra X-Ray Observatory provided a unique base for developing detailed models of the optics and detectors. Using these models we have produced a set of simulations of the Chandra point spread function (PSF) which is available to the users via PSF library files. We describe here how the PSF models are generated and the design and content of the Chandra PSF library files.

  7. Automatic UAV Image Geo-Registration by Matching UAV Images to Georeferenced Image Data

    Directory of Open Access Journals (Sweden)

    Xiangyu Zhuo

    2017-04-01

    Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached, or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images lies in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method is comprised of a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also global accuracy comparable to that of the reference images.
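
    For context, a conventional baseline of the matching-plus-verification idea (OpenCV assumed); the paper's dense detector and one-to-many strategy, which handle the scale and rotation gap where SIFT fails, are not reproduced here:

        import cv2
        import numpy as np

        def match_to_reference(img_uav, img_ref):
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(img_uav, None)
            k2, d2 = sift.detectAndCompute(img_ref, None)
            pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
            good = [m for m, n in pairs if m.distance < 0.8 * n.distance]   # ratio test
            src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)      # geometric check
            return H, int(inliers.sum())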

  8. A FPGA-based architecture for real-time image matching

    Science.gov (United States)

    Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo

    2013-10-01

    Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images taken at different viewpoints or at different times from the same scene. However, its large computational complexity has been a challenge for most embedded systems. This paper proposes a single FPGA-based image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction and BRIEF matching. It optimizes the FPGA architecture for the SIFT feature detection to reduce FPGA resource utilization. Moreover, we also implement BRIEF description and matching on the FPGA. The proposed system can perform image matching at 30 fps (frames per second) for 1280x720 images. Its processing speed can meet the demand of most real-life computer vision applications.

  9. Unsupervised image matching based on manifold alignment.

    Science.gov (United States)

    Pei, Yuru; Huang, Fengchun; Shi, Fuhao; Zha, Hongbin

    2012-08-01

    This paper challenges the issue of automatic matching between two image sets with similar intrinsic structures and different appearances, especially when there is no prior correspondence. An unsupervised manifold alignment framework is proposed to establish correspondence between data sets by a mapping function in the mutual embedding space. We introduce a local similarity metric based on parameterized distance curves to represent the connection of one point with the rest of the manifold. A small set of valid feature pairs can be found without manual interactions by matching the distance curve of one manifold with the curve cluster of the other manifold. To avoid potential confusions in image matching, we propose an extended affine transformation to solve the nonrigid alignment in the embedding space. The comparatively tight alignments and the structure preservation can be obtained simultaneously. The point pairs with the minimum distance after alignment are viewed as the matchings. We apply manifold alignment to image set matching problems. The correspondence between image sets of different poses, illuminations, and identities can be established effectively by our approach.

  10. Generation of synthetic image sequences for the verification of matching and tracking algorithms for deformation analysis

    Science.gov (United States)

    Bethmann, F.; Jepping, C.; Luhmann, T.

    2013-04-01

    This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and their imaging properties. The procedure does not focus on the creation of photo-realistic images under consideration of complex imaging and reflection models as they are used by common computer graphics programs. In contrast, the method is designed with the main emphasis on geometrically correct synthetic images without radiometric impact. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models, to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. In the first instance the paper describes the process of image simulation under consideration of colour value interpolation, MTF/PSF and so on. Subsequently the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as they have been developed at IAPG for deformation measurement in car safety testing.

  11. A Quick and Affine Invariance Matching Method for Oblique Images

    Directory of Open Access Journals (Sweden)

    XIAO Xiongwu

    2015-04-01

    This paper proposes a quick, affine-invariant matching method for oblique images. It calculates the initial affine matrix by making full use of the two estimated camera axis orientation parameters of an oblique image, then recovers the oblique image to a rectified image by applying the inverse affine transform, and leaves the rest to the SIFT method. We used the nearest neighbor distance ratio (NNDR), normalized cross correlation (NCC) measure constraints and a consistency check to get the coarse matches, then used the RANSAC method to calculate the fundamental matrix and the homography matrix. We kept the matches that were interior points when calculating the homography matrix, then calculated the average value of the matches' principal direction differences. During the matching process, we got the initial matching features by the nearest neighbor (NN) matching strategy, then used the epipolar constraints, homography constraints, NCC measure constraints and a consistency check of the initial matches' principal direction differences against the calculated average value of the interior matches' principal direction differences to eliminate false matches. Experiments conducted on three pairs of typical oblique images demonstrate that our method takes about the same time as SIFT to match a pair of oblique images, with plenty of evenly distributed corresponding points and an extremely low mismatching rate.
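
    The rectification step that precedes SIFT can be sketched as a single inverse affine warp (OpenCV assumed; `affine_2x3` is a hypothetical input built from the two estimated camera-axis orientation parameters):

        import cv2
        import numpy as np

        def rectify_oblique(img, affine_2x3):
            # Undo the estimated affine distortion so that a standard detector
            # sees an approximately nadir view of the oblique image.
            inv = cv2.invertAffineTransform(np.asarray(affine_2x3, dtype=np.float32))
            h, w = img.shape[:2]
            return cv2.warpAffine(img, inv, (w, h))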

  12. Performance measurement of PSF modeling reconstruction (True X) on Siemens Biograph TruePoint TrueV PET/CT.

    Science.gov (United States)

    Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung

    2014-05-01

    The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called the True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality on a B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with the B-TPTV PET and compared it with a microPET R4 scanner. Spatial resolution was measured at the center and at 8 cm offset from the center in the transverse plane with warm background activity. True X, ordered subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare image quality of the B-TPTV PET with that of the microPET R4. When True X reconstruction was used, spatial resolution was improved. The RC with True X reconstruction was higher than that with the FBP method and the OSEM without PSF modeling method on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on the microPET R4. SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of the True X reconstruction. Spatial resolution with True X reconstruction was improved by 45 % and its % contrast was significantly improved compared to those with the conventional OSEM without PSF modeling reconstruction algorithm. The noise level was higher than that with the other reconstruction algorithms. Therefore, True X reconstruction should be used with caution when quantifying PET data.

  13. Identification and characterization of mouse PSF1-binding protein, SLD5

    International Nuclear Information System (INIS)

    Kong, Lingyu; Ueno, Masaya; Itoh, Machiko; Yoshioka, Katsuji; Takakura, Nobuyuki

    2006-01-01

    Although most somatic cells cannot proliferate, immature cells proliferate continuously to produce mature cells. Recently, we cloned mouse PSF1 from a hematopoietic stem cell specific cDNA library and reported that PSF1 is indispensable for the proliferation of immature cells. To identify the PSF1-binding protein, we used the yeast two-hybrid system with PSF1 as bait, and identified and cloned SLD5. SLD5 interacted with a central region of PSF1. Tissue distribution of SLD5 was quite similar to that of PSF1. When overexpressed, SLD5 protein was co-localized with PSF1. These data suggest that PSF1 and SLD5 may cooperate in the proliferation of immature cell populations

  14. LWR surveillance dosimetry improvement program: PSF metallurgical blind test results

    International Nuclear Information System (INIS)

    Kam, F.B.K.; Stallmann, F.W.; Guthrie, G.; McElroy, W.N.

    1985-01-01

The ORR-PSF benchmark experiment was designed to simulate the surveillance capsule-pressure vessel configuration in power reactors and to test the validity of procedures which determine the radiation damage in the vessel from test results in the surveillance capsule. The PSF metallurgical blind test was initiated to give participants an opportunity to test their current embrittlement prediction methodologies. Experimental results were withheld from the participants except for the type of information which is normally contained in surveillance reports. Preliminary analysis of the PSF metallurgical blind test results shows that: (1) current prediction methodologies, as used by the PSF Blind Test participants, are adequate, falling within ±20 °C of the measured values for ΔNDT. None of the different methods is clearly superior; (2) the proposed revision of Reg. Guide 1.99 (Rev. 2) gives a better representation of the fluence and chemistry dependency of ΔNDT than the current version (Rev. 1); and (3) fluence rate effects can be seen but not quantified. Fluence spectral effects are too small to be detectable in this experiment. (orig.)

  15. Technical Note: Deformable image registration on partially matched images for radiotherapy applications

    International Nuclear Information System (INIS)

    Yang Deshan; Goddu, S. Murty; Lu Wei; Pechenaya, Olga L.; Wu Yu; Deasy, Joseph O.; El Naqa, Issam; Low, Daniel A.

    2010-01-01

    In radiation therapy applications, deformable image registrations (DIRs) are often carried out between two images that only partially match. Image mismatching could present as superior-inferior coverage differences, field-of-view (FOV) cutoffs, or motion crossing the image boundaries. In this study, the authors propose a method to improve the existing DIR algorithms so that DIR can be carried out in such situations. The basic idea is to extend the image volumes and define the extension voxels (outside the FOV or outside the original image volume) as NaN (not-a-number) values that are transparent to all floating-point computations in the DIR algorithms. Registrations are then carried out with one additional rule that NaN voxels can match any voxels. In this way, the matched sections of the images are registered properly, and the mismatched sections of the images are registered to NaN voxels. This method makes it possible to perform DIR on partially matched images that otherwise are difficult to register. It may also improve DIR accuracy, especially near or in the mismatched image regions.
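
A minimal sketch of the NaN-extension idea is given below, assuming numpy volumes on a common grid; the helper names are hypothetical and the similarity term is a plain SSD that simply ignores NaN voxels, not the authors' registration code.

```python
# Sketch of the NaN-padding idea from the abstract (illustrative, not the authors' code):
# extend both volumes to a common grid, mark out-of-FOV voxels as NaN, and use a
# similarity metric in which NaN voxels effectively match anything (they are ignored).
import numpy as np

def pad_to_shape(volume, shape):
    """Embed `volume` in a larger array whose extension voxels are NaN (out of FOV)."""
    out = np.full(shape, np.nan, dtype=float)
    slices = tuple(slice(0, s) for s in volume.shape)
    out[slices] = volume
    return out

def ssd_ignoring_nan(fixed, warped_moving):
    """Sum of squared differences over voxels where both images are defined."""
    valid = ~np.isnan(fixed) & ~np.isnan(warped_moving)
    diff = fixed[valid] - warped_moving[valid]
    return np.sum(diff ** 2) / max(valid.sum(), 1)
```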

  16. An accurate algorithm to match imperfectly matched images for lung tumor detection without markers.

    Science.gov (United States)

    Rozario, Timothy; Bereg, Sergey; Yan, Yulong; Chiu, Tsuicheng; Liu, Honghuan; Kearney, Vasant; Jiang, Lan; Mao, Weihua

    2015-05-08

In order to locate lung tumors on kV projection images without internal markers, digitally reconstructed radiographs (DRRs) are created and compared with projection images. However, lung tumors always move due to respiration and their locations change on projection images while they are static on DRRs. In addition, global image intensity discrepancies exist between DRRs and projections due to their different image orientations, scattering, and noise. This adversely affects comparison accuracy. A simple but efficient comparison algorithm is reported to match imperfectly matched projection images and DRRs. The kV projection images were matched with different DRRs in two steps. Preprocessing was performed in advance to generate two sets of DRRs. The tumors were removed from the planning 3D CT or a single phase of planning 4D CT images using planning contours of tumors. DRRs of background and DRRs of tumors were generated separately for every projection angle. The first step was to match projection images with DRRs of background signals. This method divided global images into a matrix of small tiles and similarities were evaluated by calculating normalized cross-correlation (NCC) between corresponding tiles on projections and DRRs. The tile configuration (tile locations) was automatically optimized to keep the tumor within a single projection tile that had a poor match with the corresponding DRR tile. A pixel-based linear transformation was determined by linear interpolations of tile transformation results obtained during tile matching. The background DRRs were transformed to the projection image level and subtracted from it. The resulting subtracted image now contained only the tumor. The second step was to register DRRs of tumors to the subtracted image to locate the tumor. This method was successfully applied to kV fluoro images (about 1000 images) acquired on a Vero (BrainLAB) for dynamic tumor tracking on phantom studies. Radiation opaque markers were
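
The tile-wise similarity step can be illustrated with the hedged sketch below; the tile grid, helper names and the use of a plain NCC score are assumptions, not the authors' implementation.

```python
# Illustrative sketch of the tile-wise similarity step: split both images into a grid
# of tiles and score each corresponding tile pair with normalized cross-correlation.
import numpy as np

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def tile_ncc_map(projection, drr, tiles=(8, 8)):
    """Return per-tile NCC scores; a low score flags a tile such as the one
    containing the moving tumor."""
    h, w = projection.shape
    th, tw = h // tiles[0], w // tiles[1]
    scores = np.zeros(tiles)
    for i in range(tiles[0]):
        for j in range(tiles[1]):
            p = projection[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            d = drr[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            scores[i, j] = ncc(p.astype(float), d.astype(float))
    return scores
```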

  17. OBJECT-SPACE MULTI-IMAGE MATCHING OF MOBILE-MAPPING-SYSTEM IMAGE SEQUENCES

    Directory of Open Access Journals (Sweden)

    Y. C. Chen

    2012-07-01

Full Text Available This paper proposes an object-space multi-image matching procedure of terrestrial MMS (Mobile Mapping System) image sequences to determine the coordinates of an object point automatically and reliably. This image matching procedure can be applied to find conjugate points of MMS image sequences efficiently. Conventional area-based image matching methods are not reliable to deliver accurate matching results for this application due to image scale variations, viewing angle variations, and object occlusions. In order to deal with these three matching problems, an object space multi-image matching is proposed. A modified NCC (Normalized Cross Correlation) coefficient is proposed to measure the similarity of image patches. A modified multi-window matching procedure will also be introduced to solve the problem of object occlusion. A coarse-to-fine procedure with a combination of object-space multi-image matching and multi-window matching is adopted. The proposed procedure has been implemented for the purpose of matching terrestrial MMS image sequences. The ratio of correct matches of this experiment was about 80 %. By providing an approximate conjugate point in an overlapping image manually, most of the incorrect matches could be fixed properly and the ratio of correct matches was improved up to 98 %.

  18. Point spread functions and deconvolution of ultrasonic images.

    Science.gov (United States)

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
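
A minimal sketch of PSF-based Richardson-Lucy deconvolution using scikit-image is shown below; the Gaussian stand-in PSF is an assumption (the article derives an analytic PSF for planar transducers), and the total-variation regularization used in the article's best-performing variant is not included.

```python
# Minimal Richardson-Lucy deconvolution sketch with a stand-in Gaussian PSF.
import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

def gaussian_psf(size=15, sigma=2.0):
    """Stand-in PSF; the article uses an analytic PSF for planar transducers."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

psf = gaussian_psf()
true_object = np.zeros((128, 128))
true_object[60:68, 60:68] = 1.0                       # simple reflector
blurred = fftconvolve(true_object, psf, mode="same")  # simulated C-scan
blurred += 0.01 * np.random.default_rng(0).standard_normal(blurred.shape)

# 30 Richardson-Lucy iterations (no TV regularization in this sketch).
restored = restoration.richardson_lucy(np.clip(blurred, 0, None), psf, 30)
```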

  19. SU-G-IeP3-08: Image Reconstruction for Scanning Imaging System Based On Shape-Modulated Point Spreading Function

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Ruixing; Yang, LV [College of Optoelectronic Science and Engineering, National University of Defense Technology, Changsha, Hunan (China); Xu, Kele [College of Electronical Science and Engineering, National University of Defense Technology, Changsha, Hunan (China); Zhu, Li [Institute of Electrostatic and Electromagnetic Protection, Mechanical Engineering College, Shijiazhuang, Hebei (China)

    2016-06-15

Purpose: Deconvolution is a widely used tool in the field of image reconstruction when the linear imaging system has been blurred by an imperfect system transfer function. However, due to the Gaussian-like distribution of the point spread function (PSF), the coherent high-frequency components of the image are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is acquired. We propose a novel method for deconvolution of images obtained with a shape-modulated PSF. Methods: We use two different types of PSF, a Gaussian shape and a donut shape, to convolve the original image in order to simulate the scanning imaging process. By deconvolving the two images with the corresponding given priors, the image quality of the deblurred images is compared. We then find the critical size of the donut shape that, compared with the Gaussian shape, gives similar deconvolution results. Calculation of the tight-focusing process using a radially polarized beam shows that such a donut size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the full width at half maximum (FWHM) ratio of the donut and Gaussian shapes is set to about 1.83, similar resolution results are obtained through our deconvolution method. Decreasing the size of the donut favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF, compared with the non-modulated Gaussian PSF. A donut with a size smaller than our critical value is obtained. Conclusion: The donut-shaped PSF is shown to be useful and achievable in imaging and deconvolution processing, and is expected to have practical applications in high-resolution imaging of biological samples.
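
The comparison of the two scanning modes can be mimicked with the hedged sketch below, which builds a Gaussian PSF and a ring-shaped stand-in for the donut PSF at the FWHM ratio of about 1.83 quoted in the abstract and blurs a test image with each; the PSF models and sizes are illustrative assumptions, not the authors' simulation.

```python
# Illustrative sketch: Gaussian PSF vs a ring-shaped stand-in for a donut PSF,
# with the donut FWHM set to ~1.83x the Gaussian FWHM as in the abstract.
import numpy as np
from scipy.signal import fftconvolve

def radial_grid(size):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.hypot(xx, yy)

def gaussian_psf(size, fwhm):
    sigma = fwhm / 2.355
    r = radial_grid(size)
    psf = np.exp(-r ** 2 / (2 * sigma ** 2))
    return psf / psf.sum()

def donut_psf(size, fwhm):
    """Simple ring-shaped stand-in; `fwhm` here sets the ring diameter."""
    r = radial_grid(size)
    ring = np.exp(-((r - fwhm / 2) ** 2) / (2 * (fwhm / 6) ** 2))
    return ring / ring.sum()

g = gaussian_psf(33, fwhm=4.0)
d = donut_psf(33, fwhm=4.0 * 1.83)   # FWHM ratio from the abstract
image = np.zeros((128, 128))
image[64, 64] = 1.0                  # point-like test object
blurred_gaussian = fftconvolve(image, g, mode="same")
blurred_donut = fftconvolve(image, d, mode="same")
```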

  20. Sidescan Sonar Image Matching Using Cross Correlation

    DEFF Research Database (Denmark)

    Thisen, Erik; Sørensen, Helge Bjarup Dissing; Stage, Bjarne

    2003-01-01

    When surveying an area for sea mines with a sidescan sonar, the ability to find the same object in two different sonar images is helpful to determine the nature of the object. The main problem with matching two sidescan sonar images is that a scene changes appearance when viewed from different vi...

  1. Image matching navigation based on fuzzy information

    Institute of Scientific and Technical Information of China (English)

    田玉龙; 吴伟仁; 田金文; 柳健

    2003-01-01

In conventional image matching methods, the matching process is mostly based on image statistical information. One aspect neglected by all these methods is that much fuzzy information is contained in these images. A new fuzzy matching algorithm based on fuzzy similarity for navigation is presented in this paper. Because fuzzy theory can describe the fuzzy information contained in images well, an image matching method based on fuzzy similarity can be expected to produce good performance. Experimental results using the matching algorithm based on fuzzy information also demonstrate its reliability and practicability.

  2. Poor textural image tie point matching via graph theory

    Science.gov (United States)

    Yuan, Xiuxiao; Chen, Shiyu; Yuan, Wei; Cai, Yang

    2017-07-01

    Feature matching aims to find corresponding points to serve as tie points between images. Robust matching is still a challenging task when input images are characterized by low contrast or contain repetitive patterns, occlusions, or homogeneous textures. In this paper, a novel feature matching algorithm based on graph theory is proposed. This algorithm integrates both geometric and radiometric constraints into an edge-weighted (EW) affinity tensor. Tie points are then obtained by high-order graph matching. Four pairs of poor textural images covering forests, deserts, bare lands, and urban areas are tested. For comparison, three state-of-the-art matching techniques, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), and features from accelerated segment test (FAST), are also used. The experimental results show that the matching recall obtained by SIFT, SURF, and FAST varies from 0 to 35% in different types of poor textures. However, through the integration of both geometry and radiometry and the EW strategy, the recall obtained by the proposed algorithm is better than 50% in all four image pairs. The better matching recall improves the number of correct matches, dispersion, and positional accuracy.

  3. Content Based Image Matching for Planetary Science

    Science.gov (United States)

    Deans, M. C.; Meyer, C.

    2006-12-01

    Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are only searchable by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on the visual textures in images. For every image in a database, a series of filters compute the image response to localized frequencies and orientations. Filter responses are turned into a low dimensional descriptor vector, generating a 37 dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours. Image matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data. The first database consists of 7200 images from the MER Microscopic Imager. The second database consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), which were cropped into 1024×1024 sub-images for consistency. The third database consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand labeling results. User tests verify approximately 20% false positive rate for the top 14 results for MOC NA and MER MI data. This means typically 10 to 12 results out of 14 match the query image sufficiently. This represents a powerful search tool for databases of thousands of images where the a priori match probability for an image might be less than 1%. Qualitatively, correct
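
The fingerprint-and-query idea can be sketched as below; the filter bank, descriptor length and helper names are placeholders (the abstract's descriptor is 37-dimensional), not the authors' implementation.

```python
# Hedged sketch of texture fingerprinting and nearest-neighbor retrieval:
# multi-scale Gaussian-derivative responses are reduced to a short descriptor,
# and query images are matched by Euclidean distance in descriptor space.
import numpy as np
from scipy import ndimage

def texture_fingerprint(image, sigmas=(1, 2, 4, 8)):
    """Mean and std of derivative-of-Gaussian responses at several scales/orientations."""
    feats = []
    for s in sigmas:
        for order in ((0, 1), (1, 0), (1, 1)):
            resp = ndimage.gaussian_filter(image.astype(float), s, order=order)
            feats.extend([resp.mean(), resp.std()])
    feats.append(float(image.std()))
    return np.asarray(feats)

def query(db_fingerprints, query_fp, k=14):
    """Return indices of the k database images closest to the query fingerprint."""
    dists = np.linalg.norm(db_fingerprints - query_fp, axis=1)
    return np.argsort(dists)[:k]
```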

  4. Matching methods evaluation framework for stereoscopic breast x-ray images.

    Science.gov (United States)

    Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric

    2016-01-01

Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special focus was devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but without considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated and four of them [locally scaled sum of absolute differences (LSAD), zero mean sum of absolute differences, zero mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well with an average error score of 0.04 (0 is a perfect match). LSAD was selected for generating the disparity maps.
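
A locally scaled SAD (LSAD)-style block-matching sketch is shown below for illustration; the window size, disparity range and exact local scaling are assumptions rather than the evaluated implementations.

```python
# Minimal block-matching sketch using a locally scaled sum of absolute differences
# (LSAD)-style cost; returns an integer disparity map for a rectified stereo pair.
import numpy as np

def lsad_disparity(left, right, max_disp=32, win=5):
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(float)
                scale = ref.mean() / (cand.mean() + 1e-9)   # local intensity scaling
                cost = np.abs(ref - scale * cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```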

  5. Image Relaxation Matching Based on Feature Points for DSM Generation

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shunyi; ZHANG Zuxun; ZHANG Jianqing

    2004-01-01

In photogrammetry and remote sensing, image matching is a basic and crucial process for automatic DEM generation. In this paper we present an image relaxation matching method based on feature points. This method can be considered an extension of regular grid-point-based matching. It avoids the shortcomings of grid-point-based matching. For example, with this method we can avoid low-texture or even textureless areas, where errors frequently appear in cross-correlation matching. Meanwhile, it makes full use of mature techniques such as probability relaxation, image pyramids and the like, which have already been used successfully in the grid-point matching process. Application of the technique to DEM generation in different regions proved that it is more reasonable and reliable.

  6. THE EFFECT OF IMAGE ENHANCEMENT METHODS DURING FEATURE DETECTION AND MATCHING OF THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    O. Akcay

    2017-05-01

Full Text Available Successful image matching is essential for an accurate automatic photogrammetric process. Feature detection, extraction and matching algorithms perform well on high-resolution images. However, images from cameras equipped with low-resolution thermal sensors are problematic for the current algorithms. In this paper, some digital image processing techniques were applied to low-resolution images taken with an Optris PI 450 lightweight thermal camera (382 x 288 pixel optical resolution) to increase extraction and matching performance. Image enhancement methods that adjust low-quality digital thermal images were used to produce images more suitable for detection and extraction. Three main digital image processing techniques (histogram equalization, high-pass filtering and low-pass filtering) were considered to increase the signal-to-noise ratio, sharpen the image, and remove noise, respectively. The pre-processed images were then evaluated using current detection and feature extraction methods, the Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF) algorithms. The results showed that some enhancement methods increased the number of extracted features and decreased blunder errors during image matching. Consequently, the effects of the different pre-processing techniques are compared in the paper.
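
One of the pre-processing/detection combinations discussed above can be sketched with OpenCV as follows; the input file name is assumed, and MSER is used alone because SURF lives in the non-free opencv-contrib module.

```python
# Sketch: histogram equalization of an 8-bit thermal frame followed by MSER detection,
# comparing the number of detected regions before and after enhancement.
import cv2

thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed input file
equalized = cv2.equalizeHist(thermal)

mser = cv2.MSER_create()
regions_raw, _ = mser.detectRegions(thermal)
regions_eq, _ = mser.detectRegions(equalized)
print(len(regions_raw), "regions before equalization,", len(regions_eq), "after")
```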

  7. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    Energy Technology Data Exchange (ETDEWEB)

Kotasidis, Fotis A. [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland and Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, M20 3LJ, Manchester (United Kingdom); Angelis, Georgios I. [Faculty of Health Sciences, Brain and Mind Research Institute, University of Sydney, NSW 2006, Sydney (Australia); Anton-Rodriguez, Jose; Matthews, Julian C. [Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Reader, Andrew J. [Montreal Neurological Institute, McGill University, Montreal QC H3A 2B4, Canada and Department of Biomedical Engineering, Division of Imaging Sciences and Biomedical Engineering, King's College London, St. Thomas' Hospital, London SE1 7EH (United Kingdom); Zaidi, Habib [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva (Switzerland); Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, PO Box 30 001, Groningen 9700 RB (Netherlands)

    2014-05-15

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution

  8. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    International Nuclear Information System (INIS)

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    2014-01-01

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution

  9. Isotope specific resolution recovery image reconstruction in high resolution PET imaging.

    Science.gov (United States)

    Kotasidis, Fotis A; Angelis, Georgios I; Anton-Rodriguez, Jose; Matthews, Julian C; Reader, Andrew J; Zaidi, Habib

    2014-05-01

    Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction. The

  10. LINE-BASED MULTI-IMAGE MATCHING FOR FAÇADE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    T. A. Teo

    2012-07-01

Full Text Available This research integrates existing LOD 2 building models and multiple close-range images for façade structural line extraction. The major tasks are orientation determination and multiple image matching. In the orientation determination, Speeded Up Robust Features (SURF) is applied to extract tie points automatically. Then, tie points and control points are combined for block adjustment. An object-based multi-image matching is proposed to extract the façade structural lines. The 2D lines in image space are extracted by the Canny operator followed by the Hough transform. The role of the LOD 2 building models is to correct the tilt displacement of images from different views. The wall of the LOD 2 model is also used to generate hypothesis planes for similarity measurement. Finally, the average normalized cross correlation is calculated to obtain the best location in object space. The test images are acquired by a non-metric camera, a Nikon D2X. The total number of images is 33. The experimental results indicate that the accuracy of orientation determination is about 1 pixel from 2515 tie points and 4 control points. They also indicate that line-based matching is more flexible than point-based matching.
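
The 2D line extraction step (Canny edges followed by a Hough transform) can be sketched with OpenCV as below; the input file name and thresholds are illustrative assumptions, not the authors' settings.

```python
# Sketch of the structural line extraction step: Canny edge detection followed by a
# probabilistic Hough transform.
import cv2
import numpy as np

facade = cv2.imread("facade.jpg", cv2.IMREAD_GRAYSCALE)  # assumed input image
edges = cv2.Canny(facade, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=5)
print(0 if lines is None else len(lines), "candidate structural line segments")
```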

  11. Nuclear safety research project (PSF). 1999 annual report

    International Nuclear Information System (INIS)

    Muehl, B.

    2000-08-01

The reactor safety R and D work of the Karlsruhe Research Centre (FZK) has been part of the Nuclear Safety Research Project (PSF) since 1990. The present annual report summarizes the R and D results of PSF during 1999. The research tasks cover three main topics: Light Water Reactor safety, innovative systems, and studies related to the transmutation of actinides. The importance of the Light Water Reactor safety, however, has decreased during the last year in favour of the transmutation of actinides. Numerous institutes of the research centre contribute to the PSF programme, as well as several external partners. The tasks are coordinated in agreement with internal and external working groups. The contributions to this report, which are either written in German or in English, correspond to the status of early/mid 2000. (orig.)

  12. High Density Aerial Image Matching: State-of-the-Art and Future Prospects

    Science.gov (United States)

    Haala, N.; Cavegn, S.

    2016-06-01

Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project "Benchmark on High Density Aerial Image Matching", which aims at the evaluation of photogrammetric 3D data capture in view of the current developments in dense multi-view stereo-image matching. Originally, the test aimed at image-based DSM computation from conventional aerial image flights for different land-use and image block configurations. The second phase then put an additional focus on high quality, high resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights as well as the generation of filtered point clouds as additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state-of-the-art in airborne image matching with a special focus on high-quality geometric data capture in urban scenarios.

  13. Fast image matching algorithm based on projection characteristics

    Science.gov (United States)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one-dimensional information and then matches and identifies through one-dimensional correlation; moreover, because normalization has been performed, correct matching is still achieved when the image brightness or signal amplitude increases proportionally. Experimental results show that the projection-based image registration method proposed in this article greatly improves matching speed while maintaining matching accuracy.
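
A minimal sketch of the projection idea is given below, assuming numpy grayscale arrays: the template and each candidate window are collapsed to row/column profiles, normalized, and scored with 1D correlation; the function names and the exhaustive search are illustrative simplifications, not the paper's optimized implementation.

```python
# Sketch of projection-based template matching: reduce 2D patches to 1D row/column
# profiles and locate the template with normalized 1D correlation.
import numpy as np

def normalized(v):
    v = v - v.mean()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def match_by_projection(search, template):
    """Return (row, col) of the best match using horizontal/vertical projections."""
    th, tw = template.shape
    t_rows = normalized(template.sum(axis=1))
    t_cols = normalized(template.sum(axis=0))
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(search.shape[0] - th + 1):
        rows = normalized(search[r:r + th, :].sum(axis=1))
        row_score = float(rows @ t_rows)
        for c in range(search.shape[1] - tw + 1):
            cols = normalized(search[r:r + th, c:c + tw].sum(axis=0))
            score = row_score + float(cols @ t_cols)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```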

  14. Illumination invariant feature point matching for high-resolution planetary remote sensing images

    Science.gov (United States)

    Wu, Bo; Zeng, Hai; Hu, Han

    2018-03-01

    Despite its success with regular close-range and remote-sensing images, the scale-invariant feature transform (SIFT) algorithm is essentially not invariant to illumination differences due to the use of gradients for feature description. In planetary remote sensing imagery, which normally lacks sufficient textural information, salient regions are generally triggered by the shadow effects of keypoints, reducing the matching performance of classical SIFT. Based on the observation of dual peaks in a histogram of the dominant orientations of SIFT keypoints, this paper proposes an illumination-invariant SIFT matching method for high-resolution planetary remote sensing images. First, as the peaks in the orientation histogram are generally aligned closely with the sub-solar azimuth angle at the time of image collection, an adaptive suppression Gaussian function is tuned to level the histogram and thereby alleviate the differences in illumination caused by a changing solar angle. Next, the suppression function is incorporated into the original SIFT procedure for obtaining feature descriptors, which are used for initial image matching. Finally, as the distribution of feature descriptors changes after anisotropic suppression, and the ratio check used for matching and outlier removal in classical SIFT may produce inferior results, this paper proposes an improved matching procedure based on cross-checking and template image matching. The experimental results for several high-resolution remote sensing images from both the Moon and Mars, with illumination differences of 20°-180°, reveal that the proposed method retrieves about 40%-60% more matches than the classical SIFT method. The proposed method is of significance for matching or co-registration of planetary remote sensing images for their synergistic use in various applications. It also has the potential to be useful for flyby and rover images by integrating with the affine invariant feature detectors.

  15. Implementation of PSF engineering in high-resolution 3D microscopy imaging with a LCoS (reflective) SLM

    Science.gov (United States)

    King, Sharon V.; Doblas, Ana; Patwary, Nurmohammed; Saavedra, Genaro; Martínez-Corral, Manuel; Preza, Chrysanthe

    2014-03-01

Wavefront coding techniques are currently used to engineer unique point spread functions (PSFs) that enhance existing microscope modalities or create new ones. Previous work in this field demonstrated that simulated intensity PSFs encoded with a generalized cubic phase mask (GCPM) are invariant to spherical aberration or misfocus, depending on parameter selection. Additional work demonstrated that simulated PSFs encoded with a squared cubic phase mask (SQUBIC) produce a depth invariant focal spot for application in confocal scanning microscopy. Implementation of PSF engineering theory with a liquid crystal on silicon (LCoS) spatial light modulator (SLM) enables validation of WFC phase mask designs and parameters by manipulating optical wavefront properties with a programmable diffractive element. To validate and investigate parameters of the GCPM and SQUBIC WFC masks, we implemented PSF engineering in an upright microscope modified with a dual camera port and an LCoS SLM. We present measured WFC PSFs and compare them to simulated PSFs through analysis of their effect on the microscope imaging system properties. Experimentally acquired PSFs show the same intensity distribution as simulation for the GCPM phase mask, the SQUBIC mask and the well-known and characterized cubic-phase mask (CPM), first applied to high-NA microscopy by Arnison et al. for extending depth of field. These measurements provide experimental validation of new WFC masks and demonstrate the use of the LCoS SLM as a WFC design tool. Although efficiency improvements are needed, this application of LCoS technology renders the microscope capable of switching among multiple WFC modes.

  16. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  17. Probabilistic seismic history matching using binary images

    Science.gov (United States)

    Davolio, Alessandra; Schiozer, Denis Jose

    2018-02-01

    Currently, the goal of history-matching procedures is not only to provide a model matching any observed data but also to generate multiple matched models to properly handle uncertainties. One such approach is a probabilistic history-matching methodology based on the discrete Latin Hypercube sampling algorithm, proposed in previous works, which was particularly efficient for matching well data (production rates and pressure). 4D seismic (4DS) data have been increasingly included into history-matching procedures. A key issue in seismic history matching (SHM) is to transfer data into a common domain: impedance, amplitude or pressure, and saturation. In any case, seismic inversions and/or modeling are required, which can be time consuming. An alternative to avoid these procedures is using binary images in SHM as they allow the shape, rather than the physical values, of observed anomalies to be matched. This work presents the incorporation of binary images in SHM within the aforementioned probabilistic history matching. The application was performed with real data from a segment of the Norne benchmark case that presents strong 4D anomalies, including softening signals due to pressure build up. The binary images are used to match the pressurized zones observed in time-lapse data. Three history matchings were conducted using: only well data, well and 4DS data, and only 4DS. The methodology is very flexible and successfully utilized the addition of binary images for seismic objective functions. Results proved the good convergence of the method in few iterations for all three cases. The matched models of the first two cases provided the best results, with similar well matching quality. The second case provided models presenting pore pressure changes according to the expected dynamic behavior (pressurized zones) observed on 4DS data. The use of binary images in SHM is relatively new with few examples in the literature. This work enriches this discussion by presenting a new
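
A hedged sketch of a binary-image objective function in the spirit of the abstract is given below: the observed and simulated anomaly masks are compared with a simple overlap (Jaccard) score; the actual objective function and workflow are the authors' own.

```python
# Sketch: compare an observed 4D-seismic anomaly mask with a simulated one using
# 1 - Jaccard index, so that 0 means the anomaly shapes match perfectly.
import numpy as np

def binary_mismatch(observed_mask, simulated_mask):
    """Shape mismatch between two boolean anomaly maps (0 = perfect shape match)."""
    observed_mask = observed_mask.astype(bool)
    simulated_mask = simulated_mask.astype(bool)
    union = np.logical_or(observed_mask, simulated_mask).sum()
    if union == 0:
        return 0.0
    inter = np.logical_and(observed_mask, simulated_mask).sum()
    return 1.0 - inter / union
```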

  18. PIV uncertainty quantification by image matching

    International Nuclear Information System (INIS)

    Sciacchitano, Andrea; Scarano, Fulvio; Wieneke, Bernhard

    2013-01-01

    A novel method is presented to quantify the uncertainty of PIV data. The approach is a posteriori, i.e. the unknown actual error of the measured velocity field is estimated using the velocity field itself as input along with the original images. The principle of the method relies on the concept of super-resolution: the image pair is matched according to the cross-correlation analysis and the residual distance between matched particle image pairs (particle disparity vector) due to incomplete match between the two exposures is measured. The ensemble of disparity vectors within the interrogation window is analyzed statistically. The dispersion of the disparity vector returns the estimate of the random error, whereas the mean value of the disparity indicates the occurrence of a systematic error. The validity of the working principle is first demonstrated via Monte Carlo simulations. Two different interrogation algorithms are considered, namely the cross-correlation with discrete window offset and the multi-pass with window deformation. In the simulated recordings, the effects of particle image displacement, its gradient, out-of-plane motion, seeding density and particle image diameter are considered. In all cases good agreement is retrieved, indicating that the error estimator is able to follow the trend of the actual error with satisfactory precision. Experiments where time-resolved PIV data are available are used to prove the concept under realistic measurement conditions. In this case the ‘exact’ velocity field is unknown; however a high accuracy estimate is obtained with an advanced interrogation algorithm that exploits the redundant information of highly temporally oversampled data (pyramid correlation, Sciacchitano et al (2012 Exp. Fluids 53 1087–105)). The image-matching estimator returns the instantaneous distribution of the estimated velocity measurement error. The spatial distribution compares very well with that of the actual error with maxima in the

  19. Preparation, characterisation and critical flux determination of graphene oxide blended polysulfone (PSf) membranes in an MBR system.

    Science.gov (United States)

    Ravishankar, Harish; Roddick, Felicity; Navaratna, Dimuth; Jegatheesan, Veeriah

    2018-05-01

Microfiltration membranes having different blends of graphene oxide (GO) (0-1 wt%) and polysulfone (PSf) (15-20 wt%) were prepared using the classical non-solvent induced phase inversion process. The prepared membranes were characterised for their structural morphology, surface properties, mechanical strength, porosity and pure water flux. Based on the initial characterisation results, four membranes (15 wt% PSf, 15 wt% PSf + 0.25 wt% GO, 15 wt% PSf + 1 wt% GO and 20 wt% PSf + 1 wt% GO) were chosen for the critical flux study, which was conducted using the flux-step method in a lab-scale MBR system. In order to study the application potential of GO blended membranes, the critical flux of each membrane was evaluated in two operational modes, i.e., continuous and intermittent with backwash. The membranes with maximal GO concentration (15 wt% PSf + 1 wt% GO and 20 wt% PSf + 1 wt% GO) showed higher critical flux (16.5, 12.8 L/m²h and 19, 15 L/m²h for continuous and intermittent modes, respectively). It was observed that the operational modes did not have a significant effect on the critical flux of the membranes with low GO concentration (15 wt% PSf and 15 wt% PSf + 0.25 wt% GO), indicating a minimum of 1 wt% GO was required for an observable effect that favoured the intermittent mode of operation. From these results, the ideal operating condition was determined (i.e., flux maintained at 6.4 L/m²h under intermittent operation) and the membranes 15 wt% PSf and 15 wt% PSf + 1 wt% GO were studied for their long-term operation. The positive effect of GO on filtration time, cleaning frequency and fouling resistance was demonstrated through the long-term TMP profiles of the membranes, indicating the suitability of GO blended membranes for real-time wastewater treatment. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. An improved ASIFT algorithm for indoor panorama image matching

    Science.gov (United States)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

The generation of 3D models for indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in one single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm to implement these functions. Compared with the SIFT algorithm, more feature points can be generated and the matching accuracy of the ASIFT algorithm is higher, even for panoramic images with obvious distortions. However, the algorithm is time-consuming because of its complex operations and does not perform well for some indoor scenes under poor light or without rich textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from the affine transformation of tilt and rotation of the images to the tilt affine transformation only. Finally, the results are re-projected to the panoramic image space. Experiments in different environments show that this method can not only ensure the precision of feature point extraction and matching, but also greatly reduce the computing time.

  1. Analysis and improvement of the quantum image matching

    Science.gov (United States)

    Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin

    2017-11-01

We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than that of the classical exhaustive algorithm, there may be an error in it: after matching the area between two images, only the pixel at the upper left corner of the matched area played a part in the following steps. That is to say, the paper only matched one pixel, instead of an area. If more than one pixel in the big image is the same as the one at the upper left corner of the small image, the algorithm will randomly measure one of them, which causes the error. In this paper, an improved version is presented which takes full advantage of the whole matched area to locate a small image in a big image. The theoretical analysis indicates that the network complexity is higher than that of the previous algorithm, but it is still far lower than that of the classical algorithm. Hence, this algorithm is still efficient.

  2. Preparation of PANI/PSF conductive composite films and their characteristic

    Institute of Scientific and Technical Information of China (English)

    Yang Yuying; Shang Xiuli; Kong Chao; Zhao Hongxiao; Hu Zhong'ai

    2006-01-01

Polyaniline (PANI)/polysulfone (PSF) composite films are successfully prepared by phase separation and one-step in-situ polymerization. It is found that the head-on face (in contact with the solution) of the films is green while the back face is white. The chemical composition and the surface morphology of both surfaces of the films are characterized by FT-IR spectra and SEM, respectively. The effects of the polymerization temperature, time and reactant concentration on the electrical properties of the films are discussed in detail. The thermo-oxidative degradation of the films is studied by thermogravimetric analysis (TGA). The results indicate that the thermal stability of the PANI/PSF films is higher than that of the pure PSF film.

  3. Multi-image Matching of Airborne SAR Imagery by SANCC

    Directory of Open Access Journals (Sweden)

    DING Hao

    2015-03-01

Full Text Available In order to improve the accuracy of SAR matching, a multi-image matching method based on the sum of adaptive normalized cross-correlation (SANCC) is proposed. It effectively utilizes the geometric and radiometric information of multi-baseline synthetic aperture radar (SAR) images. Firstly, imaging parameters, platform parameters and an approximate digital surface model (DSM) are used to predict the matching line. Secondly, similarity and proximity from Gestalt theory are introduced into SANCC, and the SANCC measures of potential matching points along the matching line are calculated. Thirdly, multi-image matching results and the object coordinates of matching points are obtained with a winner-take-all (WTA) optimization strategy. The approach has been demonstrated with airborne SAR images acquired by a Chinese airborne SAR system (CASMSAR). The experimental results indicate that the proposed algorithm is effective in providing dense and accurate matching points, reducing the number of mismatches caused by repetitive textures, and offering a better solution for matching in poorly textured areas.

  4. Self-Similar Spin Images for Point Cloud Matching

    Science.gov (United States)

    Pulido, Daniel

    based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two points clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and stereo-image derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case as being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors it is expected to yield a more robust method to detect points that are present in one cloud but not the other. This approach is applied at multiple resolutions. Therefore, changes detected at the coarsest level will yield large missing targets and at finer levels will yield smaller targets.

  5. POOR TEXTURAL IMAGE MATCHING BASED ON GRAPH THEORY

    Directory of Open Access Journals (Sweden)

    S. Chen

    2016-06-01

Full Text Available Image matching lies at the heart of photogrammetry and computer vision. For poor textural images, the matching result is affected by low contrast, repetitive patterns, discontinuity or occlusion, and few or homogeneous textures. Recently, graph matching has become popular for its integration of geometric and radiometric information. Focusing on the poor textural image matching problem, an edge-weight strategy is proposed to improve the graph matching algorithm. A series of experiments have been conducted covering four typical landscapes: forest, desert, farmland, and urban areas. It is experimentally found that the new algorithm achieves better performance. Compared to SIFT, double the number of corresponding points were acquired, and the overall recall rate reached up to 68%, which verifies the feasibility and effectiveness of the algorithm.

  6. Difference Image Analysis of Galactic Microlensing. II. Microlensing Events

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, C.; Allsman, R. A.; Alves, D.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K. (and others)

    1999-09-01

The MACHO collaboration has been carrying out difference image analysis (DIA) since 1996 with the aim of increasing the sensitivity to the detection of gravitational microlensing. This is a preliminary report on the application of DIA to galactic bulge images in one field. We show how the DIA technique significantly increases the number of detected lensing events, by removing the positional dependence of traditional photometry schemes and lowering the microlensing event detection threshold. This technique, unlike PSF photometry, gives the unblended colors and positions of the microlensing source stars. We present a set of criteria for selecting microlensing events from objects discovered with this technique. The 16 pixel and classical microlensing events discovered with the DIA technique are presented. (c) 1999. The American Astronomical Society.

  7. How to COAAD Images. II. A Coaddition Image that is Optimal for Any Purpose in the Background-dominated Noise Limit

    Energy Technology Data Exchange (ETDEWEB)

    Zackay, Barak; Ofek, Eran O. [Department of Particle Physics and Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel)

    2017-02-20

    Image coaddition is one of the most basic operations that astronomers perform. In Paper I, we presented the optimal ways to coadd images in order to detect faint sources and to perform flux measurements under the assumption that the noise is approximately Gaussian. Here, we build on these results and derive from first principles a coaddition technique that is optimal for any hypothesis testing and measurement (e.g., source detection, flux or shape measurements, and star/galaxy separation), in the background-noise-dominated case. This method has several important properties. The pixels of the resulting coadded image are uncorrelated. This image preserves all the information (from the original individual images) on all spatial frequencies. Any hypothesis testing or measurement that can be done on all the individual images simultaneously, can be done on the coadded image without any loss of information. The PSF of this image is typically as narrow, or narrower than the PSF of the best image in the ensemble. Moreover, this image is practically indistinguishable from a regular single image, meaning that any code that measures any property on a regular astronomical image can be applied to it unchanged. In particular, the optimal source detection statistic derived in Paper I is reproduced by matched filtering this image with its own PSF. This coaddition process, which we call proper coaddition, can be understood as the maximum signal-to-noise ratio measurement of the Fourier transform of the image, weighted in such a way that the noise in the entire Fourier domain is of equal variance. This method has important implications for multi-epoch seeing-limited deep surveys, weak lensing galaxy shape measurements, and diffraction-limited imaging via speckle observations. The last topic will be covered in depth in future papers. We provide an implementation of this algorithm in MATLAB.
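
A hedged numpy sketch of a PSF- and noise-weighted Fourier-domain coaddition in the spirit of the abstract is shown below; it assumes registered, background-subtracted images with unit flux zero points and PSF arrays of the same shape as the images, and the exact estimator and its PSF are given in the paper itself.

```python
# Hedged sketch: combine registered exposures in the Fourier domain, weighting each
# by its matched filter (conjugate PSF) over its background variance, and whiten the
# result so the coadded noise is (approximately) uncorrelated.
import numpy as np

def proper_coadd_sketch(images, psfs, sigmas):
    """images, psfs: lists of same-shape 2D arrays (PSFs centered); sigmas: background std devs."""
    num = np.zeros(images[0].shape, dtype=complex)
    den = np.zeros(images[0].shape, dtype=float)
    for img, psf, sigma in zip(images, psfs, sigmas):
        P = np.fft.fft2(np.fft.ifftshift(psf))          # PSF transfer function
        num += np.conj(P) * np.fft.fft2(img) / sigma**2  # matched-filter weighting
        den += np.abs(P)**2 / sigma**2                   # noise normalization
    coadd = np.fft.ifft2(num / np.sqrt(den + 1e-12))
    return coadd.real
```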

  8. Analysis of Operators Comments on the PSF Questionnaire of the Task Complexity Experiment 2003/2004

    Energy Technology Data Exchange (ETDEWEB)

    Torralba, B.; Martinez-Arias, R.

    2007-07-01

Human Reliability Analysis (HRA) methods usually take into account the effect of Performance Shaping Factors (PSF). Therefore, the adequate treatment of PSFs in HRA of Probabilistic Safety Assessment (PSA) models has a crucial importance. There is an important need for collecting PSF data based on simulator experiments. During the task complexity experiment 2003-2004, carried out in the BWR simulator of Halden Man-Machine Laboratory (HAMMLAB), data on PSFs were collected by means of a PSF Questionnaire. Seven crews (composed of shift supervisor, reactor operator and turbine operator) from Swedish Nuclear Power Plants participated in the experiment. The PSF Questionnaire collected data on the factors: procedures, training and experience, indications, controls, team management, team communication, individual work practice, available time for the tasks, number of tasks or information load, masking and seriousness. The main statistically significant results are presented in Performance Shaping Factors data collection and analysis of the task complexity experiment 2003/2004 (HWR-810). The analysis of the comments about PSFs, which were provided by operators on the PSF Questionnaire, is described. The comments provided for each PSF in the scenarios have been summarised using a content analysis technique. (Author)

  9. Analysis of Operators Comments on the PSF Questionnaire of the Task Complexity Experiment 2003/2004

    International Nuclear Information System (INIS)

    Torralba, B.; Martinez-Arias, R.

    2007-01-01

    Human Reliability Analysis (HRA) methods usually take into account the effect of Performance Shaping Factors (PSF). Therefore, the adequate treatment of PSFs in HRA of Probabilistic Safety Assessment (PSA) models has a crucial importance. There is an important need for collecting PSF data based on simulator experiments. During the task complexity experiment 2003-2004, carried out in the BWR simulator of Halden Man-Machine Laboratory (HAMMLAB), there was a data collection on PSF by means of a PSF Questionnaire. Seven crews (composed of shift supervisor, reactor operator and turbine operator) from Swedish Nuclear Power Plants participated in the experiment. The PSF Questionnaire collected data on the factors: procedures, training and experience, indications, controls, team management, team communication, individual work practice, available time for the tasks, number of tasks or information load, masking and seriousness. The main statistically significant results are presented in Performance Shaping Factors data collection and analysis of the task complexity experiment 2003/2004 (HWR-810). The analysis of the comments about PSFs, which were provided by operators on the PSF Questionnaire, is described. The comments provided for each PSF in the scenarios have been summarised using a content analysis technique. (Author)

  10. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    International Nuclear Information System (INIS)

    Sensakovic, William F.; O'Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-01-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA² by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing

  11. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    OpenAIRE

    Kotasidis Fotis A.; Kotasidis Fotis A.; Angelis Georgios I.; Anton-Rodriguez Jose; Matthews Julian C.; Reader Andrew J.; Reader Andrew J.; Zaidi Habib; Zaidi Habib; Zaidi Habib

    2014-01-01

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction usually ...

  12. An Image Matching Method Based on Fourier and LOG-Polar Transform

    Directory of Open Access Journals (Sweden)

    Zhijia Zhang

    2014-04-01

    Full Text Available Traditional template matching methods are not appropriate for situations involving large-angle rotation between two images in online detection for industrial production. To address this problem, a Fourier transform algorithm was introduced to correct the image rotation angle based on its rotational invariance in the time-frequency domain, orienting the image under test in the same direction as the reference image; the images are then matched using a matching algorithm based on the log-polar transform. Compared with current matching algorithms, experimental results show that the proposed algorithm can not only match two images with rotation of arbitrary angle, but also possesses high matching accuracy and applicability. In addition, the validity and reliability of the algorithm were verified by a simulated matching experiment targeting circular images.
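
    The rotation-correction idea described above can be sketched with off-the-shelf tools: the FFT magnitude spectrum is insensitive to translation but rotates with the image, so resampling it to log-polar coordinates turns rotation into a shift that phase correlation can recover. The snippet below assumes scikit-image and equal-sized grayscale inputs, and the sign convention of the recovered angle may need flipping; it illustrates the general Fourier/log-polar approach rather than the authors' implementation.

```python
import numpy as np
from skimage.transform import warp_polar
from skimage.registration import phase_cross_correlation

def estimate_rotation(reference, test, radius=None):
    """Estimate the rotation angle (degrees) between two equal-shape grayscale images."""
    if radius is None:
        radius = min(reference.shape) // 2
    # The magnitude spectrum ignores translation but rotates with the image.
    ref_spec = np.abs(np.fft.fftshift(np.fft.fft2(reference)))
    tst_spec = np.abs(np.fft.fftshift(np.fft.fft2(test)))
    # Log-polar resampling turns rotation into a shift along the angular axis.
    ref_lp = warp_polar(ref_spec, radius=radius, scaling='log')
    tst_lp = warp_polar(tst_spec, radius=radius, scaling='log')
    shift, _, _ = phase_cross_correlation(ref_lp, tst_lp)
    # The angular axis spans 360 degrees over the rows of the log-polar image.
    return shift[0] * 360.0 / ref_lp.shape[0]

# Typical use: rotate the test image back by the recovered angle (the sign
# convention may need flipping), then apply any conventional matcher.
```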

  13. Deconvolving the Nucleus of Centaurus A Using Chandra PSF Library

    Science.gov (United States)

    Karovska, Margarita

    2000-01-01

    Centaurus A (NGC 5128) is a giant early-type galaxy containing the nearest (at 3.5 Mpc) radio-bright Active Galactic Nucleus (AGN). Cen A has been observed with the High Resolution Camera (HRC) on the Chandra X-ray Observatory on several occasions since the launch in July 1999. The high-angular-resolution (less than 0.5 arcsecond) Chandra/HRC images reveal X-ray multi-scale structures in this object with unprecedented detail and clarity, including the bright nucleus believed to be associated with a supermassive black hole. We explored the spatial extent of the Cen A nucleus using deconvolution techniques on the full-resolution Chandra images. Model point spread functions (PSFs) were derived from the standard Chandra raytrace PSF library as well as unresolved point sources observed with Chandra. The deconvolved images show that the Cen A nucleus is resolved and asymmetric. We discuss several possible causes of this extended emission and of the asymmetries.
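
    Deconvolution with a model PSF, as described above, can be approximated with a standard Richardson-Lucy routine. The sketch below uses scikit-image and a synthetic Gaussian stand-in for the PSF; the Gaussian width and iteration count are illustrative assumptions, not values from the Chandra raytrace PSF library.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def deconvolve_with_psf(image, psf, iterations=30):
    """Richardson-Lucy deconvolution of a counts image with a model PSF."""
    image = image.astype(float)
    image /= image.max()                 # the routine expects values roughly in [0, 1]
    return richardson_lucy(image, psf, iterations)

# Illustrative Gaussian stand-in for a model PSF (not a raytraced instrument PSF):
y, x = np.mgrid[-16:17, -16:17]
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
psf /= psf.sum()
# deconvolved = deconvolve_with_psf(observed_image, psf)
```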

  14. In-flight PSF calibration of the NuSTAR hard X-ray optics

    DEFF Research Database (Denmark)

    An, Hongjun; Madsen, Kristin K.; Westergaard, Niels J.

    2014-01-01

    We present results of the point spread function (PSF) calibration of the hard X-ray optics of the Nuclear Spectroscopic Telescope Array (NuSTAR). Immediately post-launch, NuSTAR observed bright point sources such as Cyg X-1, Vela X-1, and Her X-1 for the PSF calibration. We use the point source observations taken at several off-axis angles together with a ray-trace model to characterize the in-orbit angular response, and find that the ray-trace model alone does not fit the observed event distributions and that applying empirical corrections to the ray-trace model improves the fit significantly. We describe the corrections applied to the ray-trace model and show that the uncertainties in the enclosed energy fraction (EEF) of the new PSF model are less than or similar to 3% for extraction apertures of R greater than or similar to 60" with no significant energy dependence. We also show that the PSF...

  15. Evaluation of polynomial image deformation for matching of 3D- abdominal MR-images using anatomical landmarks and for atlas construction

    CERN Document Server

    Kimiaei, S; Jonsson, E; Crafoord, J; Maguire, G Q

    1999-01-01

    The aim of this study is to compare and evaluate the potential usability of linear and non-linear (polynomial) 3D-warping for constructing an atlas by matching abdominal MR-images from a number of different individuals using manually picked anatomical landmarks. The significance of this study lies in the fact that it illustrates the potential to use polynomial matching at a local or organ level. This is a necessary requirement for constructing an atlas and for fine intra-patient image matching and fusion. Finally, 3D-image warping using anatomical landmarks for inter-patient intra-modality image co-registration and fusion was found to be a very powerful and robust method. Additionally, it can be used for intra-patient inter-modality image matching.

  16. AN INTEGRATED RANSAC AND GRAPH BASED MISMATCH ELIMINATION APPROACH FOR WIDE-BASELINE IMAGE MATCHING

    Directory of Open Access Journals (Sweden)

    M. Hasheminasab

    2015-12-01

    Full Text Available In this paper we propose an integrated approach to increase the precision of feature point matching. Many different algorithms have been developed to optimize short-baseline image matching, while wide-baseline image matching remains difficult to handle because of illumination differences and viewpoint changes. Fortunately, recent developments in the automatic extraction of local invariant features make wide-baseline image matching possible. Matching algorithms based on the local feature similarity principle use feature descriptors to establish correspondences between feature point sets. To date, the most remarkable descriptor is the scale-invariant feature transform (SIFT) descriptor, which is invariant to image rotation and scale, and remains robust across a substantial range of affine distortion, presence of noise, and changes in illumination. The epipolar constraint based on RANSAC (random sample consensus) is a conventional model for mismatch elimination, particularly in computer vision. Because only the distance from the epipolar line is considered, a few false matches remain in matching results selected on the basis of epipolar geometry and RANSAC. Aguilar et al. proposed the Graph Transformation Matching (GTM) algorithm to remove outliers, which has difficulties when the mismatched points are surrounded by the same local neighbour structure. In this study, to overcome the limitations mentioned above, a new three-step matching scheme is presented in which the SIFT algorithm is used to obtain initial corresponding point sets. In the second step, in order to reduce the outliers, the RANSAC algorithm is applied. Finally, to remove the remaining mismatches, the GTM is implemented based on the adjacent K-NN graph. Four different close-range image datasets with changes in viewpoint are utilized to evaluate the performance of the proposed method and the experimental results indicate its robustness and
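
    The first two stages of the three-step scheme described above (SIFT correspondences followed by RANSAC filtering on the epipolar geometry) might be sketched with OpenCV as below. The ratio-test threshold and RANSAC tolerance are illustrative assumptions, at least eight tentative matches are assumed to exist, and the final graph-based (GTM) stage is not reproduced here.

```python
import cv2
import numpy as np

def sift_ransac_matches(img1, img2, ratio=0.75, ransac_thresh=1.0):
    """SIFT matching followed by epipolar (fundamental-matrix) RANSAC filtering."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe ratio test on the two nearest neighbours gives the initial matches.
    good = []
    for pair in cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC on the epipolar constraint removes most gross outliers
    # (at least 8 tentative matches are assumed to be available).
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, ransac_thresh, 0.99)
    if mask is None:
        return good, None
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return inliers, F
```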

  17. An adaptive clustering algorithm for image matching based on corner feature

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms often cannot balance real-time performance and accuracy well. To solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method is based on the similarity of the vectors formed by matching point pairs, and adaptive clustering is performed on the matching point pairs. Harris corner detection is carried out first: the feature points of the reference image and the perceived image are extracted, and the feature points of the two images are initially matched by the Normalized Cross-Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve the matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is used to match the matching points after clustering. The experimental results show that the proposed algorithm can effectively eliminate most of the wrong matching points while the correct matching points are retained, improving the accuracy of RANSAC matching and reducing the computational load of the whole matching process at the same time.
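
    A minimal version of the first matching stage described above (Harris corner extraction followed by normalized cross-correlation of local patches) might look like the following. The window size, corner count, and NCC acceptance threshold are illustrative assumptions, and the adaptive clustering and RANSAC stages are omitted.

```python
import cv2
import numpy as np

def harris_corners(gray, max_pts=200, quality=0.01):
    """Harris corner extraction returning integer (x, y) pixel coordinates."""
    pts = cv2.goodFeaturesToTrack(gray, max_pts, quality, minDistance=10,
                                  useHarrisDetector=True, k=0.04)
    return pts.reshape(-1, 2).astype(int)

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def match_by_ncc(ref, sensed, half=7, accept=0.8):
    """Pair each reference corner with the sensed corner of highest NCC score."""
    pairs = []
    sensed_pts = harris_corners(sensed)
    for (x1, y1) in harris_corners(ref):
        p1 = ref[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1].astype(float)
        if p1.shape != (2 * half + 1, 2 * half + 1):
            continue                                 # corner too close to the border
        best, best_pt = accept, None
        for (x2, y2) in sensed_pts:
            p2 = sensed[y2 - half:y2 + half + 1, x2 - half:x2 + half + 1].astype(float)
            if p2.shape == p1.shape:
                score = ncc(p1, p2)
                if score > best:
                    best, best_pt = score, (x2, y2)
        if best_pt is not None:
            pairs.append(((x1, y1), best_pt))
    return pairs
```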

  18. Image matching as a data source for forest inventory - Comparison of Semi-Global Matching and Next-Generation Automatic Terrain Extraction algorithms in a typical managed boreal forest environment

    Science.gov (United States)

    Kukkonen, M.; Maltamo, M.; Packalen, P.

    2017-08-01

    Image matching is emerging as a compelling alternative to airborne laser scanning (ALS) as a data source for forest inventory and management. There is currently an open discussion in the forest inventory community about whether, and to what extent, the new method can be applied to practical inventory campaigns. This paper aims to contribute to this discussion by comparing two different image matching algorithms (Semi-Global Matching [SGM] and Next-Generation Automatic Terrain Extraction [NGATE]) and ALS in a typical managed boreal forest environment in southern Finland. Spectral features from unrectified aerial images were included in the modeling and the potential of image matching in areas without a high resolution digital terrain model (DTM) was also explored. Plot level predictions for total volume, stem number, basal area, height of basal area median tree and diameter of basal area median tree were modeled using an area-based approach. Plot level dominant tree species were predicted using a random forest algorithm, also using an area-based approach. The statistical difference between the error rates from different datasets was evaluated using a bootstrap method. Results showed that ALS outperformed image matching with every forest attribute, even when a high resolution DTM was used for height normalization and spectral information from images was included. Dominant tree species classification with image matching achieved accuracy levels similar to ALS regardless of the resolution of the DTM when spectral metrics were used. Neither of the image matching algorithms consistently outperformed the other, but there were noticeably different error rates depending on the parameter configuration, spectral band, resolution of DTM, or response variable. This study showed that image matching provides reasonable point cloud data for forest inventory purposes, especially when a high resolution DTM is available and information from the understory is redundant.
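
    The bootstrap comparison of error rates mentioned above can be sketched as follows: plot-level prediction errors from two data sources are resampled with replacement and the difference in RMSE is examined across replicates. The replicate count and 95% interval are illustrative assumptions rather than the settings used in the study.

```python
import numpy as np

def bootstrap_rmse_difference(err_a, err_b, n_boot=2000, seed=0):
    """Bootstrap the RMSE difference between two paired sets of plot-level errors.

    err_a, err_b : 1-D arrays of prediction errors (observed - predicted)
                   for the same field plots from two data sources.
    Returns the observed RMSE difference and a 95% bootstrap interval.
    """
    rng = np.random.default_rng(seed)
    err_a, err_b = np.asarray(err_a, float), np.asarray(err_b, float)
    n = err_a.size
    rmse = lambda e: np.sqrt(np.mean(e ** 2))
    observed = rmse(err_a) - rmse(err_b)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample plots with replacement
        diffs[i] = rmse(err_a[idx]) - rmse(err_b[idx])
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return observed, (lo, hi)   # an interval excluding zero suggests a significant difference
```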

  19. PSF blind test SSC, SPVC, and SVBC physics-dosimetry-metallurgy data packages

    International Nuclear Information System (INIS)

    1984-01-01

    Information is presented concerning the final PSF radiometric data; calculated spectral fluences and dosimeter activities for the metallurgical blind test irradiations at the ORR-PSF; fabrication data package for HEDL dosimetry in the ORNL Poolside Facility LWR pressure vessel mock-up irradiation; SSC-1; NUREG-CR-3457; and NUREG-CR-3295

  20. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Sensakovic, William F.; O' Dell, M.C.; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura [Florida Hospital, Imaging Administration, Orlando, FL (United States)

    2016-10-15

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA{sup 2} by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  1. Image quality and dose differences caused by vendor-specific image processing of neonatal radiographs.

    Science.gov (United States)

    Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura

    2016-10-01

    Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA(2) by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image

  2. Signature detection and matching for document image retrieval.

    Science.gov (United States)

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from clustered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.

  3. FEATURE MATCHING OF HISTORICAL IMAGES BASED ON GEOMETRY OF QUADRILATERALS

    Directory of Open Access Journals (Sweden)

    F. Maiwald

    2018-05-01

    Full Text Available This contribution shows an approach to match historical images from the photo library of the Saxon State and University Library Dresden (SLUB) in the context of a historical three-dimensional city model of Dresden. In comparison to recent images, historical photography presents diverse factors which make automatic image analysis (feature detection, feature matching and relative orientation of images) difficult. Due to e.g. film grain, dust particles or the digitization process, historical images are often covered by noise interfering with the image signal needed for robust feature matching. The presented approach uses quadrilaterals in image space as these are commonly available in man-made structures and façade images (windows, stones, claddings). It is explained how to generally detect quadrilaterals in images. Consequently, the properties of the quadrilaterals as well as the relationship to neighbouring quadrilaterals are used for the description and matching of feature points. The results show that most of the matches are robust and correct but still small in number.

  4. Harmonizing SUVs in multicentre trials when using different generation PET systems: prospective validation in non-small cell lung cancer patients

    Energy Technology Data Exchange (ETDEWEB)

    Lasnon, Charline; Quak, Elske [Francois Baclesse Cancer Centre, Nuclear Medicine Department, Caen (France); Desmonts, Cedric [Caen University Hospital, Nuclear Medicine Department, Caen (France); Gervais, Radj; Do, Pascal; Dubos-Arvis, Catherine [Francois Baclesse Cancer Centre, Thoracic Oncology, Caen (France); Aide, Nicolas [Francois Baclesse Cancer Centre, Nuclear Medicine Department, Caen (France); Centre Francois Baclesse, Service de Medecine Nucleaire, Caen cedex 5 (France)

    2013-07-15

    We prospectively evaluated whether a strategy using point spread function (PSF) reconstruction for both diagnostic and quantitative analysis in non-small cell lung cancer (NSCLC) patients meets the European Association of Nuclear Medicine (EANM) guidelines for harmonization of quantitative values. The NEMA NU-2 phantom was used to determine the optimal filter to apply to PSF-reconstructed images in order to obtain recovery coefficients (RCs) fulfilling the EANM guidelines for tumour positron emission tomography (PET) imaging (PSF{sub EANM}). PET data of 52 consecutive NSCLC patients were reconstructed with unfiltered PSF reconstruction (PSF{sub allpass}), PSF{sub EANM} and with a conventional ordered subset expectation maximization (OSEM) algorithm known to meet EANM guidelines. To mimic a situation in which a patient would undergo pre- and post-therapy PET scans on different generation PET systems, standardized uptake values (SUVs) for OSEM reconstruction were compared to SUVs for PSF{sub EANM} and PSF{sub allpass} reconstruction. Overall, in 195 lesions, Bland-Altman analysis demonstrated that the mean ratio between PSF{sub EANM} and OSEM data was 1.03 [95 % confidence interval (CI) 0.94-1.12] and 1.02 (95 % CI 0.90-1.14) for SUV{sub max} and SUV{sub mean}, respectively. No difference was noticed when analysing lesions based on their size and location or on patient body habitus and image noise. Ten patients (84 lesions) underwent two PET scans for response monitoring. Using the European Organization for Research and Treatment of Cancer (EORTC) criteria, there was an almost perfect agreement between OSEM{sub PET1}/OSEM{sub PET2} (current standard) and OSEM{sub PET1}/PSF{sub EANM-PET2} or PSF{sub EANM-PET1}/OSEM{sub PET2} with kappa values of 0.95 (95 % CI 0.91-1.00) and 0.99 (95 % CI 0.96-1.00), respectively. The use of PSF{sub allpass} either for pre- or post-treatment (i.e. OSEM{sub PET1}/PSF{sub allpass-PET2} or PSF{sub allpass-PET1}/OSEM{sub PET2}) showed
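
    The harmonization strategy above boils down to applying a post-filter to the PSF-reconstructed images until the phantom recovery coefficients fall inside the EANM band, then comparing SUVs against the conventional OSEM reconstruction. A rough sketch is given below; the Gaussian filter shape, its width, and the voxel size are illustrative assumptions that would in practice be determined from the NEMA phantom measurements.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harmonize_psf_image(psf_volume, fwhm_mm=7.0, voxel_mm=(4.0, 4.0, 4.0)):
    """Apply a Gaussian post-filter to a PSF-reconstructed 3-D volume; the FWHM is
    a placeholder that would normally be tuned on NEMA phantom recovery curves."""
    sigma_vox = [fwhm_mm / 2.355 / v for v in voxel_mm]   # 2.355 = 2*sqrt(2*ln 2)
    return gaussian_filter(psf_volume, sigma=sigma_vox)

def suvmax_ratio(lesion_mask, harmonized, conventional):
    """Ratio of SUVmax in a lesion between the filtered PSF and OSEM volumes."""
    return harmonized[lesion_mask].max() / conventional[lesion_mask].max()
```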

  5. Reliable Line Matching Algorithm for Stereo Images with Topological Relationship

    Directory of Open Access Journals (Sweden)

    WANG Jingxue

    2017-11-01

    Full Text Available Because of the lack of relationships between matching line and adjacent lines in the process of individual line matching, and the weak reliability of the individual line descriptor facing on discontinue texture, this paper presents a reliable line matching algorithm for stereo images with topological relationship. The algorithm firstly generates grouped line pairs from lines extracted from the reference image and searching image according to the basic topological relationships such as distance and angle between the lines. Then it takes the grouped line pairs as matching primitives, and matches these grouped line pairs by using epipolar constraint, homography constraint, quadrant constraint and gray correlation constraint of irregular triangle in order. And finally, it resolves the corresponding line pairs into two pairs of corresponding individual lines, and obtains one to one matching results after the post-processing of integrating, fitting, and checking. This paper adopts digital aerial images and close-range images with typical texture features to deal with the parameter analysis and line matching, and the experiment results demonstrate that the proposed algorithm in this paper can obtain reliable line matching results.

  6. Murine hematopoietic stem cell dormancy controlled by induction of a novel short form of PSF1 by histone deacetylase inhibitors

    International Nuclear Information System (INIS)

    Han, Yinglu; Gong, Zhi-Yuan; Takakura, Nobuyuki

    2015-01-01

    Hematopoietic stem cells (HSCs) can survive long-term in a state of dormancy. Little is known about how histone deacetylase inhibitors (HDACi) affect HSC kinetics. Here, we use trichostatin A (TSA), a histone deacetylase inhibitor, to enforce histone acetylation and show that this suppresses cell cycle entry by dormant HSCs. Previously, we found that haploinsufficiency of PSF1, a DNA replication factor, led to attenuation of the bone marrow (BM) HSC pool size and lack of acute proliferation after 5-FU ablation. Because PSF1 protein is present in CD34 + transiently amplifying HSCs but not in CD34 − long-term reconstituting-HSCs which are resting in a dormant state, we analyzed the relationship between dormancy and PSF1 expression, and how a histone deacetylase inhibitor affects this. We found that CD34 + HSCs produce long functional PSF1 (PSF1a) but CD34 − HSCs produce a shorter possibly non-functional PSF1 (PSF1b, c, dominantly PSF1c). Using PSF1a-overexpressing NIH-3T3 cells in which the endogenous PSF1 promoter is suppressed, we found that TSA treatment promotes production of the shorter form of PSF1 possibly by inducing recruitment of E2F family factors upstream of the PSF1 transcription start site. Our data document one mechanism by which histone deacetylase inhibitors affect the dormancy of HSCs by regulating the DNA replication factor PSF1. - Highlights: • Hematopoetic stem cell dormancy is controlled by histone deacetylation inhibitors. • Dormancy of HSCs is associated with a shorter form of non-functional PSF1. • Histone deacetylase inhibitors suppress PSF1 promoter activity

  7. Murine hematopoietic stem cell dormancy controlled by induction of a novel short form of PSF1 by histone deacetylase inhibitors

    Energy Technology Data Exchange (ETDEWEB)

    Han, Yinglu; Gong, Zhi-Yuan [Department of Signal Transduction, Research Institute for Microbial Diseases, Osaka University, 3-1 Yamada-oka, Suita, Osaka 565-0871 (Japan); Takakura, Nobuyuki, E-mail: ntakaku@biken.osaka-u.ac.jp [Department of Signal Transduction, Research Institute for Microbial Diseases, Osaka University, 3-1 Yamada-oka, Suita, Osaka 565-0871 (Japan); Japan Science Technology Agency, CREST, K' s Gobancho, 7, Gobancho, Chiyoda-ku, Tokyo 102-0076 (Japan)

    2015-06-10

    Hematopoietic stem cells (HSCs) can survive long-term in a state of dormancy. Little is known about how histone deacetylase inhibitors (HDACi) affect HSC kinetics. Here, we use trichostatin A (TSA), a histone deacetylase inhibitor, to enforce histone acetylation and show that this suppresses cell cycle entry by dormant HSCs. Previously, we found that haploinsufficiency of PSF1, a DNA replication factor, led to attenuation of the bone marrow (BM) HSC pool size and lack of acute proliferation after 5-FU ablation. Because PSF1 protein is present in CD34{sup +} transiently amplifying HSCs but not in CD34{sup −} long-term reconstituting-HSCs which are resting in a dormant state, we analyzed the relationship between dormancy and PSF1 expression, and how a histone deacetylase inhibitor affects this. We found that CD34{sup +} HSCs produce long functional PSF1 (PSF1a) but CD34{sup −} HSCs produce a shorter possibly non-functional PSF1 (PSF1b, c, dominantly PSF1c). Using PSF1a-overexpressing NIH-3T3 cells in which the endogenous PSF1 promoter is suppressed, we found that TSA treatment promotes production of the shorter form of PSF1 possibly by inducing recruitment of E2F family factors upstream of the PSF1 transcription start site. Our data document one mechanism by which histone deacetylase inhibitors affect the dormancy of HSCs by regulating the DNA replication factor PSF1. - Highlights: • Hematopoetic stem cell dormancy is controlled by histone deacetylation inhibitors. • Dormancy of HSCs is associated with a shorter form of non-functional PSF1. • Histone deacetylase inhibitors suppress PSF1 promoter activity.

  8. Automated matching of corresponding seed images of three simulator radiographs to allow 3D triangulation of implanted seeds

    Science.gov (United States)

    Altschuler, Martin D.; Kassaee, Alireza

    1997-02-01

    To match corresponding seed images in different radiographs so that the 3D seed locations can be triangulated automatically and without ambiguity requires (at least) three radiographs taken from different perspectives, and an algorithm that finds the proper permutations of the seed-image indices. Matching corresponding images in only two radiographs introduces inherent ambiguities which can be resolved only with the use of non-positional information obtained with intensive human effort. Matching images in three or more radiographs is an `NP (nondeterministic polynomial time)-complete' problem. Although the matching problem is fundamental, current methods for three-radiograph seed-image matching use `local' (seed-by-seed) methods that may lead to incorrect matchings. We describe a permutation-sampling method which not only gives good `global' (full permutation) matches for the NP-complete three-radiograph seed-matching problem, but also determines the reliability of the radiographic data themselves, namely, whether the patient moved in the interval between radiographic perspectives.

  9. Characterization and simulation of noise in PET images reconstructed with OSEM: Development of a method for the generation of synthetic images.

    Science.gov (United States)

    Castro, P; Huerga, C; Chamorro, P; Garayoa, J; Roch, M; Pérez, L

    2018-04-17

    The goals of the study are to characterize imaging properties in 2D PET images reconstructed with the iterative algorithm ordered-subset expectation maximization (OSEM) and to propose a new method for the generation of synthetic images. The noise is analyzed in terms of its magnitude, spatial correlation, and spectral distribution through the standard deviation, autocorrelation function, and noise power spectrum (NPS), respectively. Their variations with position and activity level are also analyzed. This noise analysis is based on phantom images acquired from 18F uniform distributions. Experimental recovery coefficients of hot spheres in different backgrounds are employed to study the spatial resolution of the system through the point spread function (PSF). The NPS and PSF functions provide the baseline for the proposed simulation method: convolution with the PSF as kernel and noise addition from the NPS. The noise spectral analysis shows that the main contribution is of random nature. It is also proven that attenuation correction does not alter noise texture but modifies its magnitude. Finally, synthetic images of 2 phantoms, one of them an anatomical brain, are quantitatively compared with experimental images, showing good agreement in terms of pixel values and pixel correlations. Thus, the contrast-to-noise ratio for the biggest sphere in the NEMA IEC phantom is 10.7 for the synthetic image and 8.8 for the experimental image. The properties of the analyzed OSEM-PET images can be described by NPS and PSF functions. Synthetic images, even anatomical ones, are successfully generated by the proposed method based on the NPS and PSF. Copyright © 2018 Sociedad Española de Medicina Nuclear e Imagen Molecular. Published by Elsevier España, S.L.U. All rights reserved.
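
    The proposed generation scheme (convolution of a noise-free object with the measured PSF, plus noise whose spectrum follows the measured NPS) can be sketched as below. The Gaussian PSF and the treatment of the NPS normalization are placeholder assumptions for the measured functions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_pet_slice(activity_map, psf_sigma_px, nps, rng=None):
    """Generate a synthetic reconstructed slice: blur with the PSF, then add
    correlated noise shaped by the square root of the noise power spectrum.

    activity_map : 2-D noise-free object (e.g., digital phantom slice)
    psf_sigma_px : Gaussian stand-in for the measured PSF width, in pixels
    nps          : 2-D noise power spectrum, same shape, DC at the [0, 0] corner
                   (normalization conventions vary and may need rescaling)
    """
    rng = rng or np.random.default_rng()
    blurred = gaussian_filter(activity_map.astype(float), psf_sigma_px)
    white = rng.standard_normal(activity_map.shape)
    # Shape white noise: multiply its spectrum by sqrt(NPS) and transform back.
    shaped = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(np.maximum(nps, 0))).real
    return blurred + shaped
```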

  10. Object-Oriented Hierarchy Radiation Consistency for Different Temporal and Different Sensor Images

    Directory of Open Access Journals (Sweden)

    Nan Su

    2018-02-01

    Full Text Available In this paper, we propose a novel object-oriented hierarchy radiation consistency method for dense matching of different-temporal and different-sensor data in 3D reconstruction. For different-temporal images, our illumination consistency method solves both the illumination uniformity of a single image and the relative illumination normalization of image pairs. In the relative illumination normalization step in particular, singular value equalization and the linear relationship of invariant pixels are used for the initial global illumination normalization and the object-oriented refined illumination normalization, respectively. For different-sensor images, we propose the union group sparse method, which is based on improving the original group sparse model. The different-sensor images are set to a similar smoothness level by applying the same singular value threshold from the union group matrix. Our method comprehensively considers the factors that influence dense matching of different-temporal and different-sensor stereoscopic image pairs in order to simultaneously improve illumination consistency and smoothness consistency. The radiation consistency experiments verify the effectiveness and superiority of the proposed method in comparison with two other methods. Moreover, in the dense matching experiment on mixed stereoscopic image pairs, our method shows clear advantages for objects in urban areas.

  11. Determination of point spread function for a flat-panel X-ray imager and its application in image restoration

    International Nuclear Information System (INIS)

    Jeon, Sungchae; Cho, Gyuseong; Huh, Young; Jin, Seungoh; Park, Jongduk

    2006-01-01

    We investigate image blur estimation methods, namely the modified Richardson-Lucy (R-L) estimator and the Wiener estimator. Based on the empirical model of the PSF, image restoration is applied to radiological images. The accuracy of the PSF estimation under Poisson noise and readout electronic noise is significantly better for the R-L estimator than for the Wiener estimator. In the image restoration using the 2-D PSF from the R-L estimator, the result shows a good improvement in the low and middle ranges of spatial frequency.
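
    For illustration, a frequency-domain Wiener restoration using an estimated 2-D PSF might be written as follows; the scalar regularization constant stands in for the noise-to-signal power ratio and is an assumption, not a value from the study.

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=0.01):
    """Restore an image given an estimated PSF using a Wiener filter.

    blurred : 2-D degraded image
    psf     : 2-D PSF, same shape as the image and centred
    nsr     : scalar noise-to-signal power ratio (regularization); in practice it
              would be derived from the measured noise statistics.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))       # optical transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener deconvolution filter
    return np.fft.ifft2(W * G).real
```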

  12. TEXTURE-AWARE DENSE IMAGE MATCHING USING TERNARY CENSUS TRANSFORM

    Directory of Open Access Journals (Sweden)

    H. Hu

    2016-06-01

    Full Text Available Textureless areas and geometric discontinuities are major problems in state-of-the-art dense image matching methods, as they can cause visually significant noise and the loss of sharp features. The binary census transform is one of the best matching cost methods, but in textureless areas, where the intensity values are similar, it suffers from small random noise. Global optimization for disparity computation is inherently sensitive to parameter tuning in complex urban scenes, and must compromise between smoothness and discontinuities. The aim of this study is to provide a method to overcome these issues in dense image matching, by extending the industry-proven Semi-Global Matching through (1) developing a ternary census transform, which takes three outputs in a single order comparison and encodes the results in two bits rather than one, and (2) using texture information to self-tune the parameters, which both preserves sharp edges and enforces smoothness when necessary. Experimental results using various datasets from different platforms have shown that the visual qualities of the triangulated point clouds in urban areas can be largely improved by these proposed methods.
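
    The two-bit ternary census encoding described above can be sketched as follows: each neighbour is compared against the centre pixel with a small tolerance, yielding darker, similar, or brighter, and each outcome is packed into two bits. The window radius and tolerance are illustrative assumptions, and the cost function is a simple symbol-wise Hamming-style count.

```python
import numpy as np

def ternary_census(img, radius=2, eps=4):
    """Ternary census transform: for every pixel, compare each neighbour in a
    (2*radius+1)^2 window with the centre value and encode the 3-way outcome
    (darker / similar / brighter) in two bits per neighbour."""
    img = img.astype(np.int32)
    h, w = img.shape
    # With radius=2 there are 24 neighbours, i.e. 48 bits, which fits in uint64.
    codes = np.zeros((h, w), dtype=np.uint64)
    pad = np.pad(img, radius, mode='edge')
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            neigh = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            sym = np.where(neigh < img - eps, 1,
                           np.where(neigh > img + eps, 2, 0)).astype(np.uint64)
            codes = codes * np.uint64(4) + sym       # shift in the next 2-bit symbol
    return codes

def census_cost(code_a, code_b):
    """Matching cost: number of differing 2-bit symbols between two codes."""
    x = int(code_a) ^ int(code_b)
    cost = 0
    while x:
        cost += (x & 0b11) != 0
        x >>= 2
    return cost
```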

  13. Matching of Remote Sensing Images with Complex Background Variations via Siamese Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Haiqing He

    2018-02-01

    Full Text Available Feature-based matching methods have been widely used in remote sensing image matching given their capability to achieve excellent performance despite image geometric and radiometric distortions. However, most of the feature-based methods are unreliable for complex background variations, because the gradient or other image grayscale information used to construct the feature descriptor is sensitive to image background variations. Recently, deep learning-based methods have been proven suitable for high-level feature representation and comparison in image matching. Inspired by the progresses made in deep learning, a new technical framework for remote sensing image matching based on the Siamese convolutional neural network is presented in this paper. First, a Siamese-type network architecture is designed to simultaneously learn the features and the corresponding similarity metric from labeled training examples of matching and non-matching true-color patch pairs. In the proposed network, two streams of convolutional and pooling layers sharing identical weights are arranged without the manually designed features. The number of convolutional layers is determined based on the factors that affect image matching. The sigmoid function is employed to compute the matching and non-matching probabilities in the output layer. Second, a gridding sub-pixel Harris algorithm is used to obtain the accurate localization of candidate matches. Third, a Gaussian pyramid coupling quadtree is adopted to gradually narrow down the searching space of the candidate matches, and multiscale patches are compared synchronously. Subsequently, a similarity measure based on the output of the sigmoid is adopted to find the initial matches. Finally, the random sample consensus algorithm and the whole-to-local quadratic polynomial constraints are used to remove false matches. In the experiments, different types of satellite datasets, such as ZY3, GF1, IKONOS, and Google Earth images
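
    A minimal Siamese patch-matching network in the spirit described above might look like the following PyTorch sketch: two weight-sharing convolutional streams, feature concatenation, and a sigmoid matching probability trained with binary cross-entropy. The layer sizes and patch dimensions are illustrative assumptions, not the architecture of the paper.

```python
import torch
import torch.nn as nn

class SiameseMatcher(nn.Module):
    """Weight-sharing two-stream network that outputs a match probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(             # shared branch applied to both patches
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(                 # learned similarity metric
            nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, patch_a, patch_b):
        fa = self.features(patch_a).flatten(1)     # identical weights for both streams
        fb = self.features(patch_b).flatten(1)
        return self.head(torch.cat([fa, fb], dim=1)).squeeze(1)

# Training would use binary cross-entropy on labelled matching / non-matching pairs:
# model = SiameseMatcher()
# loss = nn.BCELoss()(model(a_batch, b_batch), labels.float())
```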

  14. IMAGING THE EPOCH OF REIONIZATION: LIMITATIONS FROM FOREGROUND CONFUSION AND IMAGING ALGORITHMS

    International Nuclear Information System (INIS)

    Vedantham, Harish; Udaya Shankar, N.; Subrahmanyan, Ravi

    2012-01-01

    Tomography of redshifted 21 cm transition from neutral hydrogen using Fourier synthesis telescopes is a promising tool to study the Epoch of Reionization (EoR). Limiting the confusion from Galactic and extragalactic foregrounds is critical to the success of these telescopes. The instrumental response or the point-spread function (PSF) of such telescopes is inherently three dimensional with frequency mapping to the line-of-sight (LOS) distance. EoR signals will necessarily have to be detected in data where continuum confusion persists; therefore, it is important that the PSF has acceptable frequency structure so that the residual foreground does not confuse the EoR signature. This paper aims to understand the three-dimensional PSF and foreground contamination in the same framework. We develop a formalism to estimate the foreground contamination along frequency, or equivalently LOS dimension, and establish a relationship between foreground contamination in the image plane and visibility weights on the Fourier plane. We identify two dominant sources of LOS foreground contamination—'PSF contamination' and 'gridding contamination'. We show that PSF contamination is localized in LOS wavenumber space, beyond which there potentially exists an 'EoR window' with negligible foreground contamination where we may focus our efforts to detect EoR. PSF contamination in this window may be substantially reduced by judicious choice of a frequency window function. Gridding and imaging algorithms create additional gridding contamination and we propose a new imaging algorithm using the Chirp Z Transform that significantly reduces this contamination. Finally, we demonstrate the analytical relationships and the merit of the new imaging algorithm for the case of imaging with the Murchison Widefield Array.

  15. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of a correct match is raised greatly; the effect is especially remarkable for low-S/N image pairs.

  16. Invariant Feature Matching for Image Registration Application Based on New Dissimilarity of Spatial Features

    Science.gov (United States)

    Mousavi Kahaki, Seyed Mostafa; Nordin, Md Jan; Ashtari, Amir H.; J. Zahra, Sophia

    2016-01-01

    An invariant feature matching method is proposed as a spatially invariant feature matching approach. Deformation effects, such as affine and homography, change the local information within the image and can result in ambiguous local information pertaining to image points. New method based on dissimilarity values, which measures the dissimilarity of the features through the path based on Eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics—such as normalized cross-correlation, squared sum of intensity differences and correlation coefficient—are insufficient for achieving adequate results under different image deformations. Thus, new descriptor’s similarity metrics based on normalized Eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The output results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence. PMID:26985996

  17. Invariant Feature Matching for Image Registration Application Based on New Dissimilarity of Spatial Features.

    Directory of Open Access Journals (Sweden)

    Seyed Mostafa Mousavi Kahaki

    Full Text Available An invariant feature matching method is proposed as a spatially invariant feature matching approach. Deformation effects, such as affine and homography, change the local information within the image and can result in ambiguous local information pertaining to image points. New method based on dissimilarity values, which measures the dissimilarity of the features through the path based on Eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics--such as normalized cross-correlation, squared sum of intensity differences and correlation coefficient--are insufficient for achieving adequate results under different image deformations. Thus, new descriptor's similarity metrics based on normalized Eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The output results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence.

  18. Spatial Specificity in Spatiotemporal Encoding and Fourier Imaging

    Science.gov (United States)

    Goerke, Ute

    2015-01-01

    Purpose Ultrafast imaging techniques based on spatiotemporal encoding (SPEN), such as RASER (rapid acquisition with sequential excitation and refocusing), are a promising new class of sequences since they are largely insensitive to the magnetic field variations which cause signal loss and geometric distortion in EPI. So far, attempts to theoretically describe the point-spread-function (PSF) for the original SPEN-imaging techniques have yielded limited success. To fill this gap, a novel definition for an apparent PSF is proposed. Theory Spatial resolution in SPEN-imaging is determined by the spatial phase dispersion imprinted on the acquired signal by a frequency-swept excitation or refocusing pulse. The resulting signal attenuation increases with larger distance from the vertex of the quadratic phase profile. Methods Bloch simulations and experiments were performed to validate the theoretical derivations. Results The apparent PSF quantifies the fractional contribution of magnetization to a voxel's signal as a function of distance to the voxel. In contrast, the conventional PSF represents the signal intensity at various locations. Conclusion The definition of the conventional PSF fails for SPEN-imaging since only the phase of the isochromats, but not the amplitude of the signal, varies. The concept of the apparent PSF is shown to be generalizable to conventional Fourier-imaging techniques. PMID:26712657

  19. PSF interlaboratory comparison

    International Nuclear Information System (INIS)

    Kellogg, L.S.; Lippincott, E.P.

    1982-01-01

    Two experiments for interlaboratory verification of radiometric analysis methods have been conducted with dosimeters irradiated in the Oak Ridge Research Reactor (ORR) Poolside Facility (PSF) Surveillance Dosimeter Measurement Facility (SDMF). In a preliminary analysis of data supplied by the six participants, biases as large as 60% were observed which could lead to errors of this general magnitude in reported surveillance capsule fluence values. As a result of these comparisons, problems were identified and the spread in final values was greatly reduced. Relative agreement among the final results reported by four of the laboratories now appears to be satisfactory (the non-fission dosimeter results generally falling within ±5% and the fission dosimeter results within ±10%), but improvement is required in order to routinely meet Reactor Vessel Material Surveillance Program goals.

  20. The Demographics and Properties of Wide-Orbit, Planetary-Mass Companions from PSF Fitting of Spitzer/IRAC Images

    Science.gov (United States)

    Martinez, Raquel; Kraus, Adam L.

    2017-06-01

    Over the past decade, a growing population of planetary-mass companions (PMCs) has been discovered at wide separations (>100 AU) from their host stars, challenging existing models of both star and planet formation. It is unclear whether these systems represent the low-mass extreme of stellar binary formation or the high-mass and wide-orbit extreme of planet formation theories, as various proposed formation pathways inadequately explain the physical and orbital aspects of these systems. Even so, determining which scenario best reproduces the observed characteristics of the PMCs will come once a statistically robust sample of directly imaged PMCs is found and studied. We are developing an automated pipeline to search for wide-orbit PMCs to young stars in Spitzer/IRAC images. A Markov Chain Monte Carlo (MCMC) algorithm is the backbone of our novel point spread function (PSF) subtraction routine that efficiently creates and subtracts χ2-minimizing instrumental PSFs, simultaneously measuring astrometry and infrared photometry of these systems across the four IRAC channels (3.6 μm, 4.5 μm, 5.8 μm, and 8 μm). In this work, we present the results of a Spitzer/IRAC archival imaging study of 11 young, low-mass (0.044-0.88 M⊙; K3.5-M7.5) stars known to have faint, low-mass companions in 3 nearby star-forming regions (Chameleon, Taurus, and Upper Scorpius). We characterize the systems found to have low-mass companions with non-zero [I1] - [I4] colors, potentially signifying the presence of a circum(sub?)stellar disk. Plans for future pipeline improvements and paths forward will also be discussed. Once this computational foundation is optimized, the stage is set to quickly scour the nearby star-forming regions already imaged by Spitzer, identify potential candidates for further characterization with ground- or space-based telescopes, and increase the number of widely-separated PMCs known.
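
    The core fitting step described above (creating and subtracting a chi-squared-minimizing instrumental PSF model for the primary and a candidate companion) can be illustrated with an ordinary least-squares fit. The Gaussian PSF model and the two-source parameterization below are stand-ins for the IRAC PSFs and the MCMC machinery of the actual pipeline.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_psf(shape, x0, y0, amp, sigma=1.8):
    """Gaussian stand-in for an instrumental PSF (a real pipeline would use the
    instrument's PSF/PRF models instead of this analytic form)."""
    y, x = np.indices(shape)
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def fit_primary_and_companion(image, p0):
    """Least-squares fit of two point sources; p0 = (x1, y1, a1, x2, y2, a2)."""
    def residuals(p):
        model = gaussian_psf(image.shape, *p[:3]) + gaussian_psf(image.shape, *p[3:])
        return (image - model).ravel()
    fit = least_squares(residuals, p0)
    best = fit.x
    companion_only = image - gaussian_psf(image.shape, *best[:3])   # subtract the primary
    return best, companion_only
```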

  1. Quick probabilistic binary image matching: changing the rules of the game

    Science.gov (United States)

    Mustafa, Adnan A. Y.

    2016-09-01

    A Probabilistic Matching Model for Binary Images (PMMBI) is presented that predicts the probability of matching binary images with any level of similarity. The model relates the number of mappings, the amount of similarity between the images and the detection confidence. We show the advantage of using a probabilistic approach to matching in similarity space as opposed to a linear search in size space. With PMMBI a complete model is available to predict the quick detection of dissimilar binary images. Furthermore, the similarity between the images can be measured to a good degree if the images are highly similar. PMMBI shows that only a few pixels need to be compared to detect dissimilarity between images, as low as two pixels in some cases. PMMBI is image size invariant; images of any size can be matched at the same quick speed. Near-duplicate images can also be detected without much difficulty. We present tests on real images that show the prediction accuracy of the model.
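
    The central idea above, that comparing a small random sample of pixel positions is usually enough to reject a dissimilar binary image, can be sketched as below. The sample size, mismatch tolerance, and early-exit rule are illustrative assumptions rather than the published probabilistic model.

```python
import numpy as np

def probably_dissimilar(img_a, img_b, max_samples=64, mismatch_tol=0, rng=None):
    """Randomly probe pixel positions of two equal-sized binary images and stop
    as soon as more than `mismatch_tol` mismatches are seen; dissimilar images
    are typically rejected after only a handful of comparisons.
    (max_samples must not exceed the number of pixels.)"""
    rng = rng or np.random.default_rng()
    flat_a, flat_b = img_a.ravel(), img_b.ravel()
    mismatches = 0
    for idx in rng.choice(flat_a.size, size=max_samples, replace=False):
        if flat_a[idx] != flat_b[idx]:
            mismatches += 1
            if mismatches > mismatch_tol:
                return True, mismatches        # early exit: almost surely different
    return False, mismatches                   # images are likely similar
```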

  2. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

    Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The result shows that acceptable accuracy of the EOPs of a single image is achievable using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  3. An Image Matching Algorithm Integrating Global SRTM and Image Segmentation for Multi-Source Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Xiao Ling

    2016-08-01

    Full Text Available This paper presents a novel image matching method for multi-source satellite images, which integrates global Shuttle Radar Topography Mission (SRTM) data and image segmentation to achieve robust and numerous correspondences. This method first generates the epipolar lines as a geometric constraint assisted by global SRTM data, after which the seed points are selected and matched. To produce more reliable matching results, a region segmentation-based matching propagation is proposed in this paper, whereby the region segmentations are extracted by image segmentation and are considered to be a spatial constraint. Moreover, a similarity measure integrating Distance, Angle and Normalized Cross-Correlation (DANCC), which considers geometric similarity and radiometric similarity, is introduced to find the optimal correspondences. Experiments using typical satellite images acquired from Resources Satellite-3 (ZY-3), Mapping Satellite-1, SPOT-5 and Google Earth demonstrated that the proposed method is able to produce reliable and accurate matching results.

  4. Robust and efficient method for matching features in omnidirectional images

    Science.gov (United States)

    Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan

    2018-04-01

    Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints to a unit sphere and applies the fixed test set in BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct the distortion invariant descriptor. TPBRIEF directly enables keypoint detecting and feature describing on original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in a grid pair where the two matched points located. Experiments show that TPBRIEF greatly improves the feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.

  5. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    Science.gov (United States)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.

  6. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Chong Fan

    2017-02-01

    Full Text Available To solve the problem of inaccurate estimation of the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit the visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important for estimating the PSF of the HR image from multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher-quality reconstructed images than the blind SR method and the bicubic interpolation method.
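
    As a rough illustration of the POCS idea described above, the sketch below iterates a data-consistency projection for each low-resolution observation: the current high-resolution estimate is blurred with an estimated PSF, downsampled, compared with the observed LR frame, and corrected wherever the residual exceeds a bound. The Gaussian PSF approximation, the integer scale factor and the bound delta are illustrative assumptions, the LR frames are assumed to be pre-registered, and this is not the authors' implementation.

```python
# Minimal POCS-style super-resolution sketch using an estimated (Gaussian) PSF.
# All names and parameter values are illustrative, not taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def pocs_sr(lr_images, scale=2, psf_sigma=1.0, n_iter=20, delta=2.0):
    """Reconstruct an HR image from registered LR frames by projecting onto
    the constraint sets |simulated_LR - observed_LR| <= delta."""
    hr = zoom(lr_images[0].astype(float), scale, order=3)   # initial HR estimate
    for _ in range(n_iter):
        for lr in lr_images:
            # simulate the LR observation: blur with the PSF, then downsample
            simulated = zoom(gaussian_filter(hr, psf_sigma), 1.0 / scale, order=1)
            residual = lr - simulated
            # project only where the data-consistency bound is violated
            correction = np.where(np.abs(residual) > delta,
                                  residual - np.sign(residual) * delta, 0.0)
            hr += zoom(correction, scale, order=1) / len(lr_images)  # back-project
    return hr
```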

  7. A blur-invariant local feature for motion blurred image matching

    Science.gov (United States)

    Tong, Qiang; Aoki, Terumasa

    2017-07-01

    Image matching between a blurred (caused by camera motion, out of focus, etc.) image and a non-blurred image is a critical task for many image/video applications. However, most of the existing local feature schemes fail at this task. This paper presents a blur-invariant descriptor and a novel local feature scheme including the descriptor and the interest point detector based on moment symmetry, the authors' previous work. The descriptor is based on a new concept, the center peak moment-like element (CPME), which is robust to blur and boundary effects. By constructing CPMEs, the descriptor is also distinctive and suitable for image matching. Experimental results show our scheme outperforms state-of-the-art methods for blurred image matching.

  8. HIV-1 pre-mRNA commitment to Rev mediated export through PSF and Matrin 3

    International Nuclear Information System (INIS)

    Kula, Anna; Gharu, Lavina; Marcello, Alessandro

    2013-01-01

    Human immunodeficiency virus gene expression and replication are regulated at several levels. Incompletely spliced viral RNAs and full-length genomic RNA contain the RRE element and are bound by the viral trans-acting protein Rev to be transported out of the nucleus. Previously we found that the nuclear matrix protein MATR3 was a cofactor of Rev-mediated RNA export. Here we show that the pleiotropic protein PSF binds viral RNA and is associated with MATR3. PSF is involved in the maintenance of a pool of RNA available for Rev activity. However, while Rev and PSF bind the viral pre-mRNA at the site of viral transcription, MATR3 interacts at a subsequent step. We propose that PSF and MATR3 define a novel pathway for RRE-containing HIV-1 RNAs that is hijacked by the viral Rev protein.

  9. Feature Matching for SAR and Optical Images Based on Gaussian-Gamma-shaped Edge Strength Map

    Directory of Open Access Journals (Sweden)

    CHEN Min

    2016-03-01

    Full Text Available A matching method for SAR and optical images, robust to pixel noise and nonlinear grayscale differences, is presented. Firstly, a rough correction to eliminate rotation and scale change between images is performed. Secondly, features robust to the speckle noise of SAR images are detected by improving the original phase-congruency-based method. Then, feature descriptors are constructed on the Gaussian-Gamma-shaped edge strength map according to the histogram of oriented gradient pattern. Finally, descriptor similarity and geometrical relationship are combined to constrain the matching process. The experimental results demonstrate that the proposed method provides a significant improvement in the number of correct matches and in image registration accuracy compared with other traditional methods.

  10. Effect of Evaporation Time on Separation Performance of Polysulfone/Cellulose Acetate (PSF/CA) Membrane

    Science.gov (United States)

    Syahbanu, Intan; Piluharto, Bambang; Khairi, Syahrul; Sudarko

    2018-01-01

    Polysulfone and cellulose acetate are common materials in separation. In this research, a polysulfone/cellulose acetate (PSF/CA) blend membrane was prepared. The aim of this research was to study the effect of evaporation time in the casting of the PSF/CA membrane and its performance in filtration. CA was obtained by an acetylation process of bacterial cellulose (BC) from fermentation of coconut water. Fourier Transform Infra-Red (FTIR) spectroscopy was used to examine the functional groups of BC, CA and commercial cellulose acetate. Substitution of acetyl groups was determined by a titration method. Blend membranes were prepared through a phase-inversion technique in which the composition of PSF/PEG/CA/NMP (%w) was 15/5/5/75. Polyethylene glycol (PEG) and N-methyl-2-pyrrolidone (NMP) acted as the pore-forming agent and the solvent, respectively. Different evaporation times were used as the parameter to examine water uptake, flux, and morphology of the PSF/CA blend membranes. FTIR spectra of CA show a characteristic peak of the acetyl group at 1220 cm-1, indicating that BC was acetylated successfully. The degree of substitution of BCA was found to be 2.62. The highest water flux at 2 bar, 106.31 L.m-2.h-1, was obtained at the 0-minute evaporation time and decreased with increasing evaporation time. The morphology of the PSF/BCA blend membranes, investigated by Scanning Electron Microscopy (SEM), showed that porous asymmetric membranes were formed.

  11. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detec...... of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches....
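
    The Delaunay consistency idea can be sketched as follows: because each matched pair indexes the same physical point in both images, Delaunay triangulations built over the two matched point sets should contain mostly the same triangles when the matches are geometrically consistent. The fraction of shared triangles below is an illustrative consistency score, not the isomorphism test used in the paper.

```python
# Hedged sketch: compare Delaunay triangulations of matched SIFT points.
from scipy.spatial import Delaunay

def delaunay_consistency(pts_ref, pts_sensed):
    """pts_ref[i] and pts_sensed[i] are the coordinates of the i-th matched pair."""
    tri_ref = {tuple(sorted(s)) for s in Delaunay(pts_ref).simplices}
    tri_sen = {tuple(sorted(s)) for s in Delaunay(pts_sensed).simplices}
    shared = tri_ref & tri_sen
    # 1.0 means the two triangulations are identical (as sets of index triples)
    return len(shared) / max(len(tri_ref | tri_sen), 1)
```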

  12. The edge artifact in the point-spread function-based PET reconstruction at different sphere-to-background ratios of radioactivity.

    Science.gov (United States)

    Kidera, Daisuke; Kihara, Ken; Akamatsu, Go; Mikasa, Shohei; Taniguchi, Takafumi; Tsutsui, Yuji; Takeshita, Toshiki; Maebatake, Akira; Miwa, Kenta; Sasaki, Masayuki

    2016-02-01

    The aim of this study was to quantitatively evaluate the edge artifacts in PET images reconstructed using the point-spread function (PSF) algorithm at different sphere-to-background ratios of radioactivity (SBRs). We used a NEMA IEC body phantom consisting of six spheres with 37, 28, 22, 17, 13 and 10 mm in inner diameter. The background was filled with (18)F solution with a radioactivity concentration of 2.65 kBq/mL. We prepared three sets of phantoms with SBRs of 16, 8, 4 and 2. The PET data were acquired for 20 min using a Biograph mCT scanner. The images were reconstructed with the baseline ordered subsets expectation maximization (OSEM) algorithm, and with the OSEM + PSF correction model (PSF). For the image reconstruction, the number of iterations ranged from one to 10. The phantom PET image analyses were performed by a visual assessment of the PET images and profiles, a contrast recovery coefficient (CRC), which is the ratio of SBR in the images to the true SBR, and the percent change in the maximum count between the OSEM and PSF images (Δ % counts). In the PSF images, the spheres with a diameter of 17 mm or larger were surrounded by a dense edge in comparison with the OSEM images. In the spheres with a diameter of 22 mm or smaller, an overshoot appeared in the center of the spheres as a sharp peak in the PSF images in low SBR. These edge artifacts were clearly observed in relation to the increase of the SBR. The overestimation of the CRC was observed in 13 mm spheres in the PSF images. In the spheres with a diameter of 17 mm or smaller, the Δ % counts increased with an increasing SBR. The Δ % counts increased to 91 % in the 10-mm sphere at the SBR of 16. The edge artifacts in the PET images reconstructed using the PSF algorithm increased with an increasing SBR. In the small spheres, the edge artifact was observed as a sharp peak at the center of spheres and could result in overestimation.
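
    For reference, the two phantom metrics used above can be computed from region-of-interest statistics roughly as follows; the ROI masks and the true SBR are assumed to be available, and all names are hypothetical.

```python
# Illustrative computation of the contrast recovery coefficient (CRC) and the
# percent change in maximum counts between OSEM and OSEM+PSF reconstructions.
import numpy as np

def contrast_recovery(image, sphere_mask, background_mask, true_sbr):
    # CRC = measured sphere-to-background ratio divided by the true SBR
    measured_sbr = image[sphere_mask].mean() / image[background_mask].mean()
    return measured_sbr / true_sbr

def delta_percent_counts(osem_image, psf_image, sphere_mask):
    # percent change in the maximum count inside the sphere (OSEM vs PSF)
    c_osem = osem_image[sphere_mask].max()
    c_psf = psf_image[sphere_mask].max()
    return 100.0 * (c_psf - c_osem) / c_osem
```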

  13. Image matching in Bayer raw domain to de-noise low-light still images, optimized for real-time implementation

    Science.gov (United States)

    Romanenko, I. V.; Edirisinghe, E. A.; Larkin, D.

    2013-03-01

    Temporal accumulation of images is a well-known approach to improve the signal-to-noise ratio of still images taken in low-light conditions. However, the complexity of known algorithms often leads to high hardware resource usage, increased memory bandwidth and computational complexity, making their practical use impossible. In our research we attempt to solve this problem with an implementation of a practical spatial-temporal de-noising algorithm, based on image accumulation. Image matching and spatial-temporal filtering were performed in Bayer RAW data space, which allowed us to benefit from predictable sensor noise characteristics and to use a range of algorithmic optimizations. The proposed algorithm accurately compensates for global and local motion and efficiently removes different kinds of noise in noisy images taken in low-light conditions. In our algorithm we were able to perform global and local motion compensation in Bayer RAW data space, while preserving the resolution and effectively improving the signal-to-noise ratios of moving objects as well as of the non-stationary background. The proposed algorithm is suitable for implementation in commercial-grade FPGAs and is capable of processing 16 MP images at the capture rate (10 frames per second). The main challenge for matching between still images is the compromise between the quality of the motion prediction and the complexity of the algorithm and the required memory bandwidth. Still images taken in a burst sequence must be aligned to compensate for background motion and foreground object movements in a scene. High-resolution still images coupled with significant time between successive frames can produce large displacements between images, which creates additional difficulty for image matching algorithms. In photo applications it is very important that the noise is efficiently removed in both static and non-static background as well as in moving objects, while maintaining the resolution of the image. In our proposed

  14. Improved SURF Algorithm and Its Application in Seabed Relief Image Matching

    Directory of Open Access Journals (Sweden)

    Zhang Hong-Mei

    2017-01-01

    Full Text Available Matching based on seabed relief images is widely used in underwater relief-matching navigation and target recognition. However, influenced by various factors, conventional matching algorithms have difficulty obtaining ideal results when matching seabed relief images. The SURF (Speeded Up Robust Features) algorithm achieves matching based on feature point pairs and can produce good results in seabed relief image matching. However, in practical applications, the traditional SURF algorithm is prone to false matches, especially when the area’s features are similar or not obvious. In order to improve the robustness of the algorithm, this paper proposes an improved matching algorithm, which combines the SURF and RANSAC (Random Sample Consensus) algorithms. The new algorithm integrates the advantages of both: first, the SURF algorithm is applied to detect and extract feature points and to perform pre-matching. Second, the RANSAC algorithm is utilized to eliminate mismatching points, and accurate matching is then accomplished with the correct matching points. The experimental results show that the improved algorithm overcomes the mismatching problem effectively and has better precision and faster speed than the traditional SURF algorithm.
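
    A minimal sketch of the SURF-plus-RANSAC combination, using OpenCV (SURF requires the opencv-contrib build); the Hessian threshold, the ratio-test value and the choice of a homography as the geometric model are illustrative assumptions rather than the settings used in the paper.

```python
# Hedged sketch: SURF feature pre-matching followed by RANSAC outlier rejection.
import cv2
import numpy as np

def match_surf_ransac(img1, img2, ratio=0.75, ransac_thresh=3.0):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # pre-matching with a nearest-neighbour ratio test
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in raw if m.distance < ratio * n.distance]

    # RANSAC eliminates mismatches while fitting a geometric model (homography here)
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    inliers = [g for g, keep in zip(good, inlier_mask.ravel()) if keep]
    return inliers, H
```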

  15. Blind estimation of blur in hyperspectral images

    Science.gov (United States)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2017-10-01

    Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present in the original image, only blind restoration methods can be considered. By blind, we mean no knowledge of the blur point spread function (PSF), the original latent channel, or the noise level. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme. For each degraded channel, the sequential scheme estimates the blur point spread function (PSF) in a first stage and deconvolves the degraded channel in a second and final stage using the previously estimated PSF. We propose a new component-wise blind method for estimating the blur PSF effectively and accurately. This method follows recent approaches suggesting the detection, selection and use of sufficiently salient edges in the currently processed channel to support the regularized blur PSF estimation. Several modifications are introduced in our work. A new selection of salient edges, obtained by adequately thresholding the cumulative distribution of their corresponding gradient magnitudes, is introduced. Besides, quasi-automatic and spatially adaptive tuning of the involved regularization parameters is considered. To prove the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods of the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the l1-norm) of the blur PSF estimation error. The tests are performed on a synthetic hyperspectral image. This synthetic hyperspectral image has been built from various samples from classified areas of a real-life hyperspectral image, in order to benefit from realistic spatial

  16. A method of PSF generation for 3D brightfield deconvolution.

    Science.gov (United States)

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.
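
    Once a PSF has been extracted, a standard non-blind deconvolution such as Richardson-Lucy can be applied to the Z-stack. The loop below is a generic sketch of that step, not the author's code; it assumes the stack and PSF are non-negative 3D arrays.

```python
# Minimal Richardson-Lucy deconvolution loop for a 3D Z-stack and an extracted PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(zstack, psf, n_iter=30, eps=1e-12):
    estimate = np.full_like(zstack, zstack.mean(), dtype=float)  # flat start
    psf_flipped = psf[::-1, ::-1, ::-1]                          # mirrored PSF
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')        # forward model
        ratio = zstack / (blurred + eps)                         # data / prediction
        estimate *= fftconvolve(ratio, psf_flipped, mode='same') # multiplicative update
    return estimate
```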

  17. Clinical evaluation of PET image reconstruction using a spatial resolution model

    DEFF Research Database (Denmark)

    Andersen, Flemming Littrup; Klausen, Thomas Levin; Loft, Annika

    2013-01-01

    PURPOSE: PET image resolution is variable across the measured field-of-view and described by the point spread function (PSF). When accounting for the PSF during PET image reconstruction, image resolution is improved and partial volume effects are reduced. Here, we evaluate the effect of PSF......-based reconstruction on lesion quantification in routine clinical whole-body (WB) PET/CT imaging. MATERIALS AND METHODS: 41 oncology patients were referred for a WB-PET/CT examination (Biograph 40 TruePoint). Emission data were acquired at 2.5 min/bed at 1 h p.i. of 400 MBq [18F]-FDG. Attenuation-corrected PET images were...... reconstructed on 336×336-matrices using: (R1) standard AW-OSEM (4 iter, 8 subsets, 4 mm Gaussian) and (R2) AW-OSEM with PSF (3 iter, 21 subsets, 2 mm). Blinded and randomised reading of R1- and R2-PET images was performed. Individual lesions were located and counted independently on both sets of images...

  18. AN AERIAL-IMAGE DENSE MATCHING APPROACH BASED ON OPTICAL FLOW FIELD

    Directory of Open Access Journals (Sweden)

    W. Yuan

    2016-06-01

    Full Text Available Dense matching plays an important role in many fields, such as DEM (digital elevation model) production, robot navigation and 3D environment reconstruction. Traditional approaches may meet the demand for accuracy, but their calculation time and output density are hardly acceptable. Focusing on matching efficiency and the feasibility of matching complex terrain surfaces, an aerial-image dense matching method based on the optical flow field is proposed in this paper. First, highly accurate and uniformly distributed control points are extracted by using a feature-based matching method. Then the optical flow is calculated by using these control points, so as to determine the similar region between the two images. Second, the optical flow field is interpolated using multi-level B-spline interpolation in the similar region to accomplish pixel-by-pixel coarse matching. Finally, the coarse matching results are refined based on a combined constraint, which identifies the same points between images. The experimental results show that our method can achieve per-pixel dense matching, that the matching accuracy reaches the sub-pixel level, and that it fully meets the dense matching requirements of three-dimensional reconstruction and automatic DSM generation. The comparison experiments demonstrate that our approach's matching efficiency is higher than that of semi-global matching (SGM) and patch-based multi-view stereo matching (PMVS), which verifies the feasibility and effectiveness of the algorithm.

  19. Person identification based on multiscale matching of cortical images

    NARCIS (Netherlands)

    Kruizinga, P; Petkov, N; Hertzberger, B; Serazzi, G

    1995-01-01

    A set of so-called cortical images, motivated by the function of simple cells in the primary visual cortex of mammals, is computed from each of two input images and an image pyramid is constructed for each cortical image. The two sets of cortical image pyramids are matched synchronously and an

  20. A Spherical Model Based Keypoint Descriptor and Matching Algorithm for Omnidirectional Images

    Directory of Open Access Journals (Sweden)

    Guofeng Tong

    2014-04-01

    Full Text Available Omnidirectional images generally have nonlinear distortion in the radial direction. Unfortunately, traditional algorithms such as the scale-invariant feature transform (SIFT) and Descriptor-Nets (D-Nets) do not work well in matching omnidirectional images because they are incapable of dealing with the distortion. In order to solve this problem, a new voting algorithm is proposed based on the spherical model and the D-Nets algorithm. Because the spherical-based keypoint descriptor contains the distortion information of omnidirectional images, the proposed matching algorithm is invariant to distortion. Keypoint matching experiments are performed on three pairs of omnidirectional images, and a comparison is made among the proposed algorithm, SIFT and D-Nets. The result shows that the proposed algorithm is more robust and more precise than SIFT and D-Nets in matching omnidirectional images. Compared with SIFT and D-Nets, the proposed algorithm has two main advantages: (a) there are more real matching keypoints; (b) the coverage range of the matching keypoints is wider, including the seriously distorted areas.

  1. MULTI-TEMPORAL AND MULTI-SENSOR IMAGE MATCHING BASED ON LOCAL FREQUENCY INFORMATION

    Directory of Open Access Journals (Sweden)

    X. Liu

    2012-08-01

    Full Text Available Image matching is often one of the first tasks in many photogrammetry and remote sensing applications. This paper presents an efficient approach to automated multi-temporal and multi-sensor image matching based on local frequency information. Two new independent image representations, Local Average Phase (LAP) and Local Weighted Amplitude (LWA), are presented to emphasize the common scene information, while suppressing the non-common illumination and sensor-dependent information. In order to obtain the two representations, local frequency information is first obtained from a Log-Gabor wavelet transformation, which is similar to that of the human visual system; then the outputs of the odd and even symmetric filters are used to construct the LAP and LWA. The LAP and LWA emphasize the phase and amplitude information, respectively. As these two representations are both derivative-free and threshold-free, they are robust to noise and can keep as much of the image detail as possible. A new Compositional Similarity Measure (CSM) is also presented to combine the LAP and LWA with the same weight for measuring the similarity of multi-temporal and multi-sensor images. The CSM can make the LAP and LWA compensate for each other and can make full use of the amplitude and phase of local frequency information. In many image matching applications, the template is usually selected without consideration of its matching robustness and accuracy. In order to overcome this problem, a local best matching point detection is presented to detect the best matching template. In the detection method, we employ self-similarity analysis to identify the template with the highest matching robustness and accuracy. Experimental results using some real images and simulation images demonstrate that the presented approach is effective for matching image pairs with significant scene and illumination changes and that it has advantages over other state-of-the-art approaches, which include: the

  2. A new registration method with voxel-matching technique for temporal subtraction images

    Science.gov (United States)

    Itai, Yoshinori; Kim, Hyoungseop; Ishikawa, Seiji; Katsuragawa, Shigehiko; Doi, Kunio

    2008-03-01

    A temporal subtraction image, which is obtained by subtraction of a previous image from a current one, can be used for enhancing interval changes on medical images by removing most of normal structures. One of the important problems in temporal subtraction is that subtraction images commonly include artifacts created by slight differences in the size, shape, and/or location of anatomical structures. In this paper, we developed a new registration method with voxel-matching technique for substantially removing the subtraction artifacts on the temporal subtraction image obtained from multiple-detector computed tomography (MDCT). With this technique, the voxel value in a warped (or non-warped) previous image is replaced by a voxel value within a kernel, such as a small cube centered at a given location, which would be closest (identical or nearly equal) to the voxel value in the corresponding location in the current image. Our new method was examined on 16 clinical cases with MDCT images. Preliminary results indicated that interval changes on the subtraction images were enhanced considerably, with a substantial reduction of misregistration artifacts. The temporal subtraction images obtained by use of the voxel-matching technique would be very useful for radiologists in the detection of interval changes on MDCT images.
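
    The voxel-matching step itself can be written down directly, if inefficiently: for every voxel, the warped previous image is sampled in a small cube and the value closest to the current image's voxel is kept before subtraction. The kernel radius here is an arbitrary illustration, and this is a generic sketch of the described idea rather than the authors' implementation.

```python
# Hedged sketch of voxel matching prior to temporal subtraction.
import numpy as np

def voxel_matched_subtraction(current, warped_previous, radius=1):
    matched = np.empty_like(warped_previous)
    zmax, ymax, xmax = current.shape
    for z in range(zmax):
        for y in range(ymax):
            for x in range(xmax):
                z0, z1 = max(z - radius, 0), min(z + radius + 1, zmax)
                y0, y1 = max(y - radius, 0), min(y + radius + 1, ymax)
                x0, x1 = max(x - radius, 0), min(x + radius + 1, xmax)
                kernel = warped_previous[z0:z1, y0:y1, x0:x1]
                # keep the neighbourhood value closest to the current voxel value
                idx = np.abs(kernel - current[z, y, x]).argmin()
                matched[z, y, x] = kernel.ravel()[idx]
    return current - matched   # temporal subtraction image with fewer artifacts
```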

  3. Four-dimensional dose reconstruction through in vivo phase matching of cine images of electronic portal imaging device.

    Science.gov (United States)

    Yoon, Jihyung; Jung, Jae Won; Kim, Jong Oh; Yi, Byong Yong; Yeo, Inhwan

    2016-07-01

    A method is proposed to reconstruct a four-dimensional (4D) dose distribution using phase matching of measured cine images to precalculated images of electronic portal imaging device (EPID). (1) A phantom, designed to simulate a tumor in lung (a polystyrene block with a 3 cm diameter embedded in cork), was placed on a sinusoidally moving platform with an amplitude of 1 cm and a period of 4 s. Ten-phase 4D computed tomography (CT) images of the phantom were acquired. A planning target volume (PTV) was created by adding a margin of 1 cm around the internal target volume of the tumor. (2) Three beams were designed, which included a static beam, a theoretical dynamic beam, and a planning-optimized dynamic beam (PODB). While the theoretical beam was made by manually programming a simplistic sliding leaf motion, the planning-optimized beam was obtained from treatment planning. From the three beams, three-dimensional (3D) doses on the phantom were calculated; 4D dose was calculated by means of the ten phase images (integrated over phases afterward); serving as "reference" images, phase-specific EPID dose images under the lung phantom were also calculated for each of the ten phases. (3) Cine EPID images were acquired while the beams were irradiated to the moving phantom. (4) Each cine image was phase-matched to a phase-specific CT image at which common irradiation occurred by intercomparing the cine image with the reference images. (5) Each cine image was used to reconstruct dose in the phase-matched CT image, and the reconstructed doses were summed over all phases. (6) The summation was compared with forwardly calculated 4D and 3D dose distributions. Accounting for realistic situations, intratreatment breathing irregularity was simulated by assuming an amplitude of 0.5 cm for the phantom during a portion of breathing trace in which the phase matching could not be performed. Intertreatment breathing irregularity between the time of treatment and the time of planning CT was

  4. Four-dimensional dose reconstruction through in vivo phase matching of cine images of electronic portal imaging device

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jihyung; Jung, Jae Won, E-mail: jungj@ecu.edu [Department of Physics, East Carolina University, Greenville, North Carolina 27858 (United States); Kim, Jong Oh [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, Pennsylvania 15232 (United States); Yi, Byong Yong [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland 21201 (United States); Yeo, Inhwan [Department of Radiation Medicine, Loma Linda University Medical Center, Loma Linda, California 92354 (United States)

    2016-07-15

    Purpose: A method is proposed to reconstruct a four-dimensional (4D) dose distribution using phase matching of measured cine images to precalculated images of electronic portal imaging device (EPID). Methods: (1) A phantom, designed to simulate a tumor in lung (a polystyrene block with a 3 cm diameter embedded in cork), was placed on a sinusoidally moving platform with an amplitude of 1 cm and a period of 4 s. Ten-phase 4D computed tomography (CT) images of the phantom were acquired. A planning target volume (PTV) was created by adding a margin of 1 cm around the internal target volume of the tumor. (2) Three beams were designed, which included a static beam, a theoretical dynamic beam, and a planning-optimized dynamic beam (PODB). While the theoretical beam was made by manually programming a simplistic sliding leaf motion, the planning-optimized beam was obtained from treatment planning. From the three beams, three-dimensional (3D) doses on the phantom were calculated; 4D dose was calculated by means of the ten phase images (integrated over phases afterward); serving as “reference” images, phase-specific EPID dose images under the lung phantom were also calculated for each of the ten phases. (3) Cine EPID images were acquired while the beams were irradiated to the moving phantom. (4) Each cine image was phase-matched to a phase-specific CT image at which common irradiation occurred by intercomparing the cine image with the reference images. (5) Each cine image was used to reconstruct dose in the phase-matched CT image, and the reconstructed doses were summed over all phases. (6) The summation was compared with forwardly calculated 4D and 3D dose distributions. Accounting for realistic situations, intratreatment breathing irregularity was simulated by assuming an amplitude of 0.5 cm for the phantom during a portion of breathing trace in which the phase matching could not be performed. Intertreatment breathing irregularity between the time of treatment and the

  5. Fingerprint matching algorithm for poor quality images

    Directory of Open Access Journals (Sweden)

    Vedpal Singh

    2015-04-01

    Full Text Available The main aim of this study is to establish an efficient platform for fingerprint matching for low-quality images. Generally, fingerprint matching approaches use minutiae points for authentication. However, this is not a reliable authentication method for low-quality images. To overcome this problem, the current study proposes a fingerprint matching methodology based on normalised cross-correlation, which would improve performance and reduce miscalculations during authentication, and would decrease the computational complexity. The error rate of the proposed method is 5.4%, which is less than the two-dimensional (2D) dynamic programming (DP) error rate of 5.6%, while Lee's method produces 5.9% and the combined method has a 6.1% error rate. The genuine accept rate at a 1% false accept rate is 89.3%, and at the 0.1% value it is 96.7%, which is higher. The outcome of this study suggests that the proposed methodology has a low error rate with minimum computational effort as compared with existing methods such as Lee's method, 2D DP and the combined method.
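
    The core similarity used above, normalised cross-correlation, reduces to a few lines. The function below is a generic zero-mean NCC between two equally sized image regions, offered only as an illustration of the measure, not the authors' full matching pipeline.

```python
# Zero-mean normalised cross-correlation between two equally sized regions.
import numpy as np

def normalized_cross_correlation(a, b):
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```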

  6. Imaging theory of nonlinear second harmonic and third harmonic generations in confocal microscopy

    Institute of Scientific and Technical Information of China (English)

    TANG Zhilie; XING Da; LIU Songhao

    2004-01-01

    The imaging theory of nonlinear second harmonic generation (SHG) and third harmonic generation (THG) in confocal microscopy is presented in this paper. The nonlinear effect of SHG and THG on the imaging properties of confocal microscopy has been analyzed in detail by the imaging theory. It is proved that the imaging process of SHG and THG in confocal microscopy, which is different from conventional coherent imaging or incoherent imaging, can be divided into two different processes of coherent imaging. The three-dimensional point spread functions (3D-PSF) of SHG and THG confocal microscopy are derived based on the nonlinear principles of SHG and THG. The imaging properties of SHG and THG confocal microscopy are discussed in detail according to their 3D-PSFs. It is shown that the resolution of SHG and THG confocal microscopy is higher than that of single- and two-photon confocal microscopy.

  7. PLANE MATCHING WITH OBJECT-SPACE SEARCHING USING INDEPENDENTLY RECTIFIED IMAGES

    Directory of Open Access Journals (Sweden)

    H. Takeda

    2012-07-01

    Full Text Available In recent years, the social situation in cities has changed significantly, for example through redevelopment following the massive earthquake and through large-scale urban development. Numerical simulations can be used to study these phenomena; such simulations require the construction of high-definition three-dimensional city models that accurately reflect the real world. Progress in sensor technology allows us to easily obtain multi-view images. However, the existing multi-image matching techniques are inadequate. In this paper, we propose a new technique for multi-image matching. Since the existing method of feature searching is complicated, we have developed a rectification method that can be applied independently to each image and does not depend on the stereo pair. The focus of our study is the object-space searching method, which produces mismatches due to occlusion or distortion of wall textures in images. Our proposed technique can also match the building wall surface. The proposed technique has several advantages, and its usefulness is clarified through an experiment using actual images.

  8. An effective approach for iris recognition using phase-based image matching.

    Science.gov (United States)

    Miyazawa, Kazuyuki; Ito, Koichi; Aoki, Takafumi; Kobayashi, Koji; Nakajima, Hiroshi

    2008-10-01

    This paper presents an efficient algorithm for iris recognition using phase-based image matching, an image matching technique that uses the phase components of the 2D Discrete Fourier Transforms (DFTs) of the given images. Experimental evaluation using the CASIA iris image databases (versions 1.0 and 2.0) and the Iris Challenge Evaluation (ICE) 2005 database clearly demonstrates that the use of phase components of iris images makes it possible to achieve highly accurate iris recognition with a simple matching algorithm. This paper also discusses major implementation issues of our algorithm. In order to reduce the size of iris data and to prevent the visibility of iris images, we introduce the idea of a 2D Fourier Phase Code (FPC) for representing iris information. The 2D FPC is particularly useful for implementing compact iris recognition devices using state-of-the-art Digital Signal Processing (DSP) technology.
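
    Phase-based matching of this kind is often implemented as phase-only correlation: only the phase of the cross-spectrum is kept, and the height and location of the resulting correlation peak give the matching score and the translation. The sketch below is a generic illustration of that idea, not the authors' band-limited POC or FPC implementation.

```python
# Generic phase-only correlation (POC) between two images of equal size.
import numpy as np

def phase_only_correlation(img1, img2, eps=1e-12):
    F = np.fft.fft2(img1)
    G = np.fft.fft2(img2)
    cross = F * np.conj(G)
    # normalise away the amplitude, keep only the phase of the cross-spectrum
    poc = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    peak = poc.max()                                    # matching score
    shift = np.unravel_index(poc.argmax(), poc.shape)   # (circular) translation estimate
    return peak, shift
```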

  9. COSMIC SHEAR MEASUREMENT USING AUTO-CONVOLVED IMAGES

    International Nuclear Information System (INIS)

    Li, Xiangchong; Zhang, Jun

    2016-01-01

    We study the possibility of using quadrupole moments of auto-convolved galaxy images to measure cosmic shear. The autoconvolution of an image corresponds to the inverse Fourier transformation of its power spectrum. The new method has the following advantages: the smearing effect due to the point-spread function (PSF) can be corrected by subtracting the quadrupole moments of the auto-convolved PSF; the centroid of the auto-convolved image is trivially identified; the systematic error due to noise can be directly removed in Fourier space; the PSF image can also contain noise, the effect of which can be similarly removed. With a large ensemble of simulated galaxy images, we show that the new method can reach a sub-percent level accuracy under general conditions, albeit with increasingly large stamp size for galaxies of less compact profiles.
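
    A bare-bones version of the estimator's ingredients is sketched below: the auto-convolved image is obtained as the inverse FFT of the power spectrum, its quadrupole moments are measured about the (trivially central) centroid, and the PSF smearing is corrected by subtracting the moments of the auto-convolved PSF. Weighting, noise-power subtraction and the conversion of moments to shear are omitted, so this is only an illustration of the described idea.

```python
# Hedged sketch of auto-convolution and PSF-corrected quadrupole moments.
import numpy as np

def autoconvolve(img):
    power = np.abs(np.fft.fft2(img)) ** 2            # power spectrum
    return np.fft.fftshift(np.fft.ifft2(power).real) # auto-convolved image, centred

def quadrupole_moments(img):
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0          # centroid is the array centre
    w = img.sum()
    qxx = (img * (x - cx) ** 2).sum() / w
    qyy = (img * (y - cy) ** 2).sum() / w
    qxy = (img * (x - cx) * (y - cy)).sum() / w
    return qxx, qyy, qxy

def psf_corrected_moments(galaxy, psf):
    g = quadrupole_moments(autoconvolve(galaxy))
    p = quadrupole_moments(autoconvolve(psf))
    return tuple(gi - pi for gi, pi in zip(g, p))    # smearing removed by subtraction
```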

  10. Use of the probability of stone formation (PSF) score to assess stone forming risk and treatment response in a cohort of Brazilian stone formers.

    Science.gov (United States)

    Turney, Benjamin; Robertson, William; Wiseman, Oliver; Amaro, Carmen Regina P R; Leitão, Victor A; Silva, Isabela Leme da; Amaro, João Luiz

    2014-01-01

    The aim was to confirm that PSF (probability of stone formation) changed appropriately following medical therapy on recurrent stone formers. Data were collected on 26 Brazilian stone-formers. A baseline 24-hour urine collection was performed prior to treatment. Details of the medical treatment initiated for stone-disease were recorded. A PSF calculation was performed on the 24 hour urine sample using the 7 urinary parameters required: voided volume, oxalate, calcium, urate, pH, citrate and magnesium. A repeat 24-hour urine sample was performed for PSF calculation after treatment. Comparison was made between the PSF scores before and during treatment. At baseline, 20 of the 26 patients (77%) had a high PSF score (> 0.5). Of the 26 patients, 17 (65%) showed an overall reduction in their PSF profiles with a medical treatment regimen. Eleven patients (42%) changed from a high risk (PSF > 0.5) to a low risk (PSF < 0.5) during both assessments. The PSF score reduced following medical treatment in the majority of patients in this cohort.

  11. Use of the probability of stone formation (PSF score to assess stone forming risk and treatment response in a cohort of Brazilian stone formers

    Directory of Open Access Journals (Sweden)

    Benjamin Turney

    2014-08-01

    Full Text Available Introduction The aim was to confirm that PSF (probability of stone formation) changed appropriately following medical therapy on recurrent stone formers. Materials and Methods Data were collected on 26 Brazilian stone-formers. A baseline 24-hour urine collection was performed prior to treatment. Details of the medical treatment initiated for stone-disease were recorded. A PSF calculation was performed on the 24 hour urine sample using the 7 urinary parameters required: voided volume, oxalate, calcium, urate, pH, citrate and magnesium. A repeat 24-hour urine sample was performed for PSF calculation after treatment. Comparison was made between the PSF scores before and during treatment. Results At baseline, 20 of the 26 patients (77%) had a high PSF score (> 0.5). Of the 26 patients, 17 (65%) showed an overall reduction in their PSF profiles with a medical treatment regimen. Eleven patients (42%) changed from a high risk (PSF > 0.5) to a low risk (PSF < 0.5) during both assessments. Conclusions The PSF score reduced following medical treatment in the majority of patients in this cohort.

  12. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

    Full Text Available In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite process for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: (1) feature extraction, (2) similarity measure and matching, and (3) adjustment of the EOPs of a single image. For feature extraction, we propose two types of matching cues: edged corner points representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. Final matched corners are used for adjusting the EOPs of the single airborne image by the least-squares method based on collinearity equations. The result shows that acceptable accuracy of the single image's EOPs is achievable by the proposed registration approach as an alternative to a labour-intensive manual registration process.

  13. An accelerated image matching technique for UAV orthoimage registration

    Science.gov (United States)

    Tsai, Chung-Hsien; Lin, Yu-Ching

    2017-06-01

    Using an Unmanned Aerial Vehicle (UAV) drone with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme, including an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points for image registration. To determine a match between two input images, feature descriptors from one image are compared with those from another image. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible in the stage of matching feature points, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively. ABRISK is 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average root mean square error (RMSE) of 2.58 m for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.

  14. Central obscuration effects on optical synthetic aperture imaging

    Science.gov (United States)

    Wang, Xue-wen; Luo, Xiao; Zheng, Li-gong; Zhang, Xue-jun

    2014-02-01

    Because the central obscuration problem exists in most optical synthetic aperture systems, it is necessary to analyze its effects on their imaging performance. Based on incoherent diffraction-limited imaging theory, a Golay-3 type synthetic aperture system was used to study the central obscuration effects on the point spread function (PSF) and the modulation transfer function (MTF). It was found that the central obscuration does not affect the width of the central peak of the PSF or the cutoff spatial frequency of the MTF, but attenuates the first sidelobe of the PSF and the mid-frequencies of the MTF. An imaging simulation of a Golay-3 type synthetic aperture system with central obscuration confirmed this conclusion. Finally, a Wiener filter restoration algorithm was used to restore the images of this system, and the restored images were clearly better.
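
    The restoration step mentioned at the end is a standard frequency-domain Wiener filter; a minimal sketch is given below, where the noise-to-signal constant K is a tuning parameter chosen purely for illustration and the PSF is assumed to be centred at the array origin.

```python
# Frequency-domain Wiener restoration given an estimate of the system PSF.
import numpy as np

def wiener_restore(blurred, psf, K=0.01):
    H = np.fft.fft2(psf, s=blurred.shape)        # optical transfer function
    G = np.fft.fft2(blurred)
    wiener = np.conj(H) / (np.abs(H) ** 2 + K)   # Wiener inverse filter
    return np.fft.ifft2(wiener * G).real
```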

  15. Optimal Matched Filter in the Low-number Count Poisson Noise Regime and Implications for X-Ray Source Detection

    Science.gov (United States)

    Ofek, Eran O.; Zackay, Barak

    2018-04-01

    Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
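
    In the low-count Poisson regime, the Neyman-Pearson statistic amounts to cross-correlating the counts image with ln(1 + alpha*P/B) rather than with the PSF P itself. The sketch below follows that form for a flat background; the assumed source flux alpha, the flat background and the absence of a calibrated false-alarm threshold are simplifications, and this is not the MATLAB implementation the authors provide.

```python
# Hedged sketch of a matched filter for Poisson noise (flat background B, flux alpha).
import numpy as np
from scipy.signal import fftconvolve

def poisson_matched_filter(counts, psf, background, alpha):
    # per-pixel weight of the log-likelihood ratio for a source of flux alpha
    kernel = np.log1p(alpha * psf / background)
    # correlation = convolution with the flipped kernel
    score = fftconvolve(counts, kernel[::-1, ::-1], mode='same')
    score -= alpha * psf.sum()   # constant offset of the likelihood ratio
    return score

# Candidate detections would be local maxima of `score` above a threshold set by
# the desired false-alarm probability (not derived in this sketch).
```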

  16. SU-C-207B-07: Deep Convolutional Neural Network Image Matching for Ultrasound Guidance in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, N; Najafi, M; Hancock, S; Hristov, D [Stanford University Cancer Center, Palo Alto, CA (United States)

    2016-06-15

    Purpose: Robust matching of ultrasound images is a challenging problem as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Thus our objective is to overcome this obstacle by designing and evaluating an image block matching framework based on a two-channel deep convolutional neural network. Methods: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]. To obtain the similarity between two 3D image blocks A and B, the 3D image blocks are divided into 2D patches Ai and Bi. The similarity is then calculated as the average similarity score of Ai and Bi. The neural network was then trained with public non-medical image pairs, and subsequently evaluated on ultrasound image blocks for the following scenarios: (S1) same image blocks with/without shifts (A and A-shift-x); (S2) non-related random block pairs; (S3) ground truth registration matched pairs of different ultrasound images with/without shifts (A-i and A-reg-i-shift-x). Results: For S1 the similarity scores of A and A-shift-x were 32.63, 18.38, 12.95, 9.23, 2.15 and 0.43 for x ranging from 0 mm to 10 mm in 2 mm increments. For S2 the average similarity score for non-related block pairs was −1.15. For S3 the average similarity score of ground truth registration matched blocks A-i and A-reg-i-shift-0 (1≤i≤5) was 12.37. After translating A-reg-i-shift-0 by 0 mm, 2 mm, 4 mm, 6 mm, 8 mm, and 10 mm, the average similarity scores of A-i and A-reg-i-shift-x were 11.04, 8.42, 4.56, 2.27, and 0.29 respectively. Conclusion: The proposed method correctly assigns the highest similarity to corresponding 3D ultrasound image blocks despite differences in image content and thus can form the basis for ultrasound image registration and tracking. [1] Zagoruyko, Komodakis, "Learning to compare image patches via convolutional neural networks", IEEE CVPR 2015, pp. 4353–4361.

  17. SU-C-207B-07: Deep Convolutional Neural Network Image Matching for Ultrasound Guidance in Radiotherapy

    International Nuclear Information System (INIS)

    Zhu, N; Najafi, M; Hancock, S; Hristov, D

    2016-01-01

    Purpose: Robust matching of ultrasound images is a challenging problem as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Thus our objective is to overcome this obstacle by designing and evaluating an image block matching framework based on a two-channel deep convolutional neural network. Methods: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]. To obtain the similarity between two 3D image blocks A and B, the 3D image blocks are divided into 2D patches Ai and Bi. The similarity is then calculated as the average similarity score of Ai and Bi. The neural network was then trained with public non-medical image pairs, and subsequently evaluated on ultrasound image blocks for the following scenarios: (S1) same image blocks with/without shifts (A and A-shift-x); (S2) non-related random block pairs; (S3) ground truth registration matched pairs of different ultrasound images with/without shifts (A-i and A-reg-i-shift-x). Results: For S1 the similarity scores of A and A-shift-x were 32.63, 18.38, 12.95, 9.23, 2.15 and 0.43 for x ranging from 0 mm to 10 mm in 2 mm increments. For S2 the average similarity score for non-related block pairs was −1.15. For S3 the average similarity score of ground truth registration matched blocks A-i and A-reg-i-shift-0 (1≤i≤5) was 12.37. After translating A-reg-i-shift-0 by 0 mm, 2 mm, 4 mm, 6 mm, 8 mm, and 10 mm, the average similarity scores of A-i and A-reg-i-shift-x were 11.04, 8.42, 4.56, 2.27, and 0.29 respectively. Conclusion: The proposed method correctly assigns the highest similarity to corresponding 3D ultrasound image blocks despite differences in image content and thus can form the basis for ultrasound image registration and tracking. [1] Zagoruyko, Komodakis, "Learning to compare image patches via convolutional neural networks", IEEE CVPR 2015, pp. 4353–4361.

  18. Gun bore flaw image matching based on improved SIFT descriptor

    Science.gov (United States)

    Zeng, Luan; Xiong, Wei; Zhai, You

    2013-01-01

    In order to increase the operation speed and matching ability of the SIFT algorithm, the SIFT descriptor and matching strategy are improved. First, a method of constructing the feature descriptor based on sector areas is proposed. By computing the gradient histogram of location bins that are partitioned into 6 sector areas, a descriptor with 48 dimensions is constructed. This reduces the dimension of the feature vector and decreases the complexity of constructing the descriptor. Second, a strategy is introduced that partitions the circular region into 6 identical sector areas starting from the dominant orientation. Consequently, the computational complexity is reduced because the rotation operation for the area is no longer needed. The experimental results indicate that, compared with the OpenCV SIFT implementation, the average matching speed of the new method increases by about 55.86%. The matching accuracy is also maintained even under some variation of viewpoint, illumination, rotation, scale and defocus. The new method achieved satisfactory results in gun bore flaw image matching. Keywords: Metrology, Flaw image matching, Gun bore, Feature descriptor

  19. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of the UAV trajectory using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of the UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
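
    A simplified version of this pose-estimation chain (SURF matching, RANSAC outlier rejection while fitting the essential matrix, and relative pose recovery from the inliers) can be written with OpenCV as below; standard RANSAC is used in place of Preemptive RANSAC, SURF requires the opencv-contrib build, and the intrinsics matrix K and the thresholds are placeholders.

```python
# Hedged sketch: relative pose between two video frames from SURF matches.
import cv2
import numpy as np

def relative_pose(frame1, frame2, K):
    surf = cv2.xfeatures2d.SURF_create(400)
    kp1, des1 = surf.detectAndCompute(frame1, None)
    kp2, des2 = surf.detectAndCompute(frame2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC separates inliers from outliers while estimating the essential matrix
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t   # relative rotation and (unit-scale) translation
```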

  20. Fan fault diagnosis based on symmetrized dot pattern analysis and image matching

    Science.gov (United States)

    Xu, Xiaogang; Liu, Haixiao; Zhu, Hao; Wang, Songling

    2016-07-01

    To detect the mechanical failure of fans, a new diagnostic method based on the symmetrized dot pattern (SDP) analysis and image matching is proposed. Vibration signals of 13 kinds of running states are acquired on a centrifugal fan test bed and reconstructed by the SDP technique. The SDP pattern templates of each running state are established. An image matching method is performed to diagnose the fault. In order to improve the diagnostic accuracy, the single template, multiple templates and clustering fault templates are used to perform the image matching.

  1. LWR surveillance dosimetry improvement program: PSF metallurgical blind test results

    International Nuclear Information System (INIS)

    Kam, F.B.K.; Maerker, R.E.; Stallmann, F.W.

    1984-01-01

    The metallurgical irradiation experiment at the Oak Ridge Research Reactor Poolside Facility (ORR-PSF) was designed as a benchmark to test the accuracy of radiation embrittlement predictions in the pressure vessel wall of light water reactors on the basis of results from surveillance capsules. The PSF metallurgical Blind Test is concerned with the simulated surveillance capsule (SSC) and the simulated pressure vessel capsule (SPVC). The data from the ORR-PSF benchmark experiment are the basis for comparison with the predictions made by participants of the metallurgical ''Blind Test''. The Blind Test required the participants to predict the embrittlement of the irradiated specimen based only on dosimetry and metallurgical data from the SSC1 capsule. This exercise included both the prediction of damage fluence and the prediction of embrittlement based on the predicted fluence. A variety of prediction methodologies was used by the participants. No glaring biases or other deficiencies were found, but neither were any of the methods clearly superior to the others. Closer analysis shows a rather complex and poorly understood relation between fluence and material damage. Many prediction formulas can give an adequate approximation, but further improvement of the prediction methodology is unlikely at this time given the many unknown factors. Instead, attention should be focused on determining realistic uncertainties for the predicted material changes. The Blind Test comparisons provide some clues for the size of these uncertainties. In particular, higher uncertainties must be assigned to materials whose chemical composition lies outside the data set for which the prediction formula was obtained. 16 references, 14 figures, 5 tables

  2. SU-E-I-74: Image-Matching Technique of Computed Tomography Images for Personal Identification: A Preliminary Study Using Anthropomorphic Chest Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Matsunobu, Y; Shiotsuki, K [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka (Japan); Morishita, J [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, JP (Japan)

    2015-06-15

    Purpose: Fingerprints, dental impressions, and DNA are used to identify unidentified bodies in forensic medicine. Cranial computed tomography (CT) images and/or dental radiographs are also used for identification. Radiological identification is important, particularly in the absence of comparative fingerprints, dental impressions, and DNA samples. The development of an automated radiological identification system for unidentified bodies is desirable. We investigated the potential usefulness of bone structure for matching chest CT images. Methods: CT images of three anthropomorphic chest phantoms were obtained on different days in various settings. One of the phantoms was assumed to be an unidentified body. The bone image and the bone image with soft tissue (BST image) were extracted from the CT images. To examine the usefulness of the bone image and/or the BST image, the similarities between the two-dimensional (2D) or three-dimensional (3D) images of the same and different phantoms were evaluated in terms of the normalized cross-correlation value (NCC). Results: For the 2D and 3D BST images, the NCCs obtained from the same phantom assumed to be an unidentified body (2D, 0.99; 3D, 0.93) were higher than those for the different phantoms (2D, 0.95 and 0.91; 3D, 0.89 and 0.80). The NCCs for the same phantom (2D, 0.95; 3D, 0.88) were greater compared to those of the different phantoms (2D, 0.61 and 0.25; 3D, 0.23 and 0.10) for the bone image. The difference in the NCCs between the same and different phantoms tended to be larger for the bone images than for the BST images. These findings suggest that the image-matching technique is more useful when utilizing the bone image than when utilizing the BST image to identify different people. Conclusion: This preliminary study indicated that evaluating the similarity of bone structure in 2D and 3D images is potentially useful for identifying an unidentified body.

  3. SU-E-I-74: Image-Matching Technique of Computed Tomography Images for Personal Identification: A Preliminary Study Using Anthropomorphic Chest Phantoms

    International Nuclear Information System (INIS)

    Matsunobu, Y; Shiotsuki, K; Morishita, J

    2015-01-01

    Purpose: Fingerprints, dental impressions, and DNA are used to identify unidentified bodies in forensic medicine. Cranial computed tomography (CT) images and/or dental radiographs are also used for identification. Radiological identification is important, particularly in the absence of comparative fingerprints, dental impressions, and DNA samples. The development of an automated radiological identification system for unidentified bodies is desirable. We investigated the potential usefulness of bone structure for matching chest CT images. Methods: CT images of three anthropomorphic chest phantoms were obtained on different days in various settings. One of the phantoms was assumed to be an unidentified body. The bone image and the bone image with soft tissue (BST image) were extracted from the CT images. To examine the usefulness of the bone image and/or the BST image, the similarities between the two-dimensional (2D) or three-dimensional (3D) images of the same and different phantoms were evaluated in terms of the normalized cross-correlation value (NCC). Results: For the 2D and 3D BST images, the NCCs obtained from the same phantom assumed to be an unidentified body (2D, 0.99; 3D, 0.93) were higher than those for the different phantoms (2D, 0.95 and 0.91; 3D, 0.89 and 0.80). The NCCs for the same phantom (2D, 0.95; 3D, 0.88) were greater than those of the different phantoms (2D, 0.61 and 0.25; 3D, 0.23 and 0.10) for the bone image. The difference in the NCCs between the same and different phantoms tended to be larger for the bone images than for the BST images. These findings suggest that the image-matching technique is more useful when utilizing the bone image than when utilizing the BST image to identify different people. Conclusion: This preliminary study indicated that evaluating the similarity of bone structure in 2D and 3D images is potentially useful for identifying an unidentified body

  4. MapX: 2D XRF for Planetary Exploration - Image Formation and Optic Characterization

    Science.gov (United States)

    Sarrazin, P.; Blake, D.; Gailhanou, M.; Marchis, F.; Chalumeau, C.; Webb, S.; Walter, P.; Schyns, E.; Thompson, K.; Bristow, T.

    2018-04-01

    Map-X is a planetary instrument concept for 2D X-Ray Fluorescence (XRF) spectroscopy. The instrument is placed directly on the surface of an object and held in a fixed position during the measurement. The formation of XRF images on the CCD detector relies on a multichannel optic configured for 1:1 imaging and can be analyzed through the point spread function (PSF) of the optic. The PSF can be directly measured using a micron-sized monochromatic X-ray source in place of the sample. Such PSF measurements were carried out at the Stanford Synchrotron and are compared with ray tracing simulations. It is shown that artifacts are introduced by the periodicity of the PSF at the channel scale and the proximity of the CCD pixel size and the optic channel size. A strategy of sub-channel random moves was used to cancel out these artifacts and provide a clean experimental PSF directly usable for XRF image deconvolution.

  5. ORNL evaluation of the ORR-PSF metallurgical experiment and blind test

    International Nuclear Information System (INIS)

    Stallmann, F.W.

    1984-01-01

    A methodology is described to evaluate the dosimetry and metallurgical data from the two-year ORR-PSF metallurgical irradiation experiment. The first step is to obtain a three-dimensional map of damage exposure parameter values based on neutron transport calculations and dosimetry measurements which are obtained by means of the LSL-M2 adjustment procedure. Metallurgical test data are then combined with damage parameter, temperature, and chemistry information to determine the correlation between radiation and steel embrittlement in reactor pressure vessels including estimates for the uncertainties. Statistical procedures for the evaluation of Charpy data, developed earlier, are used for this investigation. The data obtained in this investigation provide a benchmark against which the predictions of the PSF Blind Test can be compared. The results of this investigation and the Blind Test comparison are discussed

  6. Composite Match Index with Application of Interior Deformation Field Measurement from Magnetic Resonance Volumetric Images of Human Tissues

    Directory of Open Access Journals (Sweden)

    Penglin Zhang

    2012-01-01

    Full Text Available Whereas a variety of different feature-point matching approaches have been reported in computer vision, few have been applied to images of nonrigid, nonuniform human tissues. The present work is concerned with interior deformation field measurement of complex human tissues from three-dimensional magnetic resonance (MR) volumetric images. To improve the reliability of matching results, this paper proposes a composite match index (CMI) as the foundation of multimethod fusion, increasing the reliability of the individual methods. We discuss the definition, components, and weight determination of the CMI. To test the validity of the proposed approach, it is applied to actual MR volumetric images obtained from a volunteer's calf. The main result is consistent with the actual condition.

  7. Image-based point spread function implementation in a fully 3D OSEM reconstruction algorithm for PET.

    Science.gov (United States)

    Rapisarda, E; Bettinardi, V; Thielemans, K; Gilardi, M C

    2010-07-21

    The interest in positron emission tomography (PET) and particularly in hybrid integrated PET/CT systems has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution, due to several physical factors originating both at the emission level (e.g. positron range, photon non-collinearity) and at the detection level (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals and depth of interaction). To improve the spatial resolution of the images, one possible approach is to measure the point spread function (PSF) of the system and then account for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring (22)Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians to the (22)Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.
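
    The image-based PSF model described here amounts to fitting an asymmetric 2D Gaussian to a reconstructed point-source image; a sketch of such a fit with scipy.optimize.curve_fit on synthetic data (no rotation term, and the parameter names are purely illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy):
    """Asymmetric 2D Gaussian with independent x/y widths (no rotation term)."""
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                           + (y - y0) ** 2 / (2 * sy ** 2)))).ravel()

# Synthetic point-source image standing in for a reconstructed 22Na source.
y, x = np.mgrid[0:64, 0:64]
truth = gauss2d((x, y), 1.0, 31.5, 32.5, 2.1, 3.4).reshape(64, 64)
img = truth + 0.01 * np.random.default_rng(1).normal(size=truth.shape)

p0 = (img.max(), 32, 32, 2.0, 2.0)                  # initial guess
popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
print("fitted sigmas (pixels):", popt[3], popt[4])   # widths of the PSF model
```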

  8. Saturated virtual fluorescence emission difference microscopy based on detector array

    Science.gov (United States)

    Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Ge, Baoliang; Wang, Wensheng; Liu, Xu

    2017-07-01

    Virtual fluorescence emission difference microscopy (vFED) has been proposed recently to enhance the lateral resolution of confocal microscopy with a detector array, implemented by scanning a doughnut-shaped pattern. Theoretically, the resolution can be enhanced by around 1.3-fold compared with confocal microscopy. To further improve the resolving ability of vFED, a novel method is presented utilizing fluorescence saturation for super-resolution imaging, which we call saturated virtual fluorescence emission difference microscopy (svFED). With a point detector array, matched solid and hollow point spread functions (PSFs) can be obtained by photon reassignment, and the difference between them can be used to boost the transverse resolution. Results show that the diffraction barrier can be surpassed by at least 34% compared with vFED and that the resolution is around 2-fold higher than in confocal microscopy.
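
    The core emission-difference operation in FED-type methods is a weighted subtraction of the hollow-PSF image from the solid-PSF image with negative pixels clipped; a minimal sketch under that assumption (the subtraction factor r is a generic tuning parameter, not a value from the paper):

```python
import numpy as np

def fed_difference(solid: np.ndarray, hollow: np.ndarray, r: float = 0.7) -> np.ndarray:
    """Emission-difference image: solid-spot image minus r times the doughnut image,
    with negative pixels clipped to zero."""
    diff = solid.astype(np.float64) - r * hollow.astype(np.float64)
    return np.clip(diff, 0.0, None)

# Usage on two pre-computed scan images of the same field of view.
solid = np.random.rand(128, 128)    # placeholder for the solid-PSF image
hollow = np.random.rand(128, 128)   # placeholder for the hollow-PSF image
enhanced = fed_difference(solid, hollow, r=0.7)
```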

  9. Detecting unresolved binary stars in Euclid VIS images

    Science.gov (United States)

    Kuntzer, T.; Courbin, F.

    2017-10-01

    Measuring a weak gravitational lensing signal to the level required by the next generation of space-based surveys demands exquisite reconstruction of the point-spread function (PSF). However, unresolved binary stars can significantly distort the PSF shape. In an effort to mitigate this bias, we aim at detecting unresolved binaries in realistic Euclid stellar populations. We tested methods in numerical experiments where (I) the PSF shape is known to Euclid requirements across the field of view; and (II) the PSF shape is unknown. We drew simulated catalogues of PSF shapes for this proof-of-concept paper. Following the Euclid survey plan, the objects were observed four times. We propose three methods to detect unresolved binary stars. The detection is based on the systematic and correlated biases between exposures of the same object. One method is a simple correlation analysis, while the two others use supervised machine-learning algorithms (random forest and artificial neural network). In both experiments, we demonstrate the ability of our methods to detect unresolved binary stars in simulated catalogues. The performance depends on the level of prior knowledge of the PSF shape and the shape measurement errors. Good detection performance is observed in both experiments. Full complexity, in terms of the images and the survey design, is not included, but key aspects of a more mature pipeline are discussed. Finding unresolved binaries in objects used for PSF reconstruction increases the quality of the PSF determination at arbitrary positions. We show, using different approaches, that we are able to detect at least the binary stars that are most damaging for the PSF reconstruction process. The code corresponding to the algorithms used in this work and all scripts to reproduce the results are publicly available from a GitHub repository accessible via http://lastro.epfl.ch/software
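
    One of the supervised detectors mentioned (the random forest) can be sketched with scikit-learn on simulated per-exposure residuals; the feature layout below (four exposures, two residual ellipticity components each, with binaries given a correlated offset) is only an illustration of the idea, not the paper's exact feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
# Residual ellipticities (observed minus PSF model) for 4 exposures x 2 components.
single = rng.normal(0.0, 0.01, size=(n, 8))      # single stars: pure noise
binary = rng.normal(0.0, 0.01, size=(n, 8))
binary += rng.normal(0.02, 0.01, size=(n, 1))    # binaries: correlated bias across exposures
X = np.vstack([single, binary])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```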

  10. Enhancement of antibacterial activity in nanofillers incorporated PSF/PVP membranes

    Science.gov (United States)

    Pramila, P.; Gopalakrishnan, N.

    2018-04-01

    An attempt has been made to investigate nanofiller-incorporated polysulfone (PSF) and polyvinylpyrrolidone (PVP) polymer membranes prepared by the phase inversion method. Initially, the nanofillers, viz. zinc oxide (ZnO) nanoparticles and graphene oxide-zinc oxide (GO-ZnO) nanocomposite, were synthesized and then directly incorporated into the PSF/PVP blend during the preparation of the membranes. The prepared membranes have been subjected to FE-SEM, AFM, BET, contact angle, tensile test and anti-bacterial studies. Significant membrane morphologies and nanoporous properties have been observed by FE-SEM and BET, respectively. It has been observed that the hydrophilicity, mechanical strength and water permeability of the ZnO and GO-ZnO incorporated membranes were enhanced compared with the bare membrane. Antibacterial activity was assessed by measuring the inhibition zones formed around the membrane by the disc-diffusion method using Escherichia coli (gram-negative) as a model bacterium. Again, it has been observed that the nanofiller-incorporated membranes exhibit higher antibacterial performance than the bare membrane.

  11. An automatic system for segmentation, matching, anatomical labeling and measurement of airways from CT images

    DEFF Research Database (Denmark)

    Petersen, Jens; Feragen, Aasa; Owen, Megan

    segmental branches, and longitudinal matching of airway branches in repeated scans of the same subject. Methods and Materials: The segmentation process begins from an automatically detected seed point in the trachea. The airway centerline tree is then constructed by iteratively adding locally optimal paths … differences. Results: The segmentation method has been used on 9711 low dose CT images from the Danish Lung Cancer Screening Trial (DLCST). Manual inspection of thumbnail images revealed gross errors in a total of 44 images. 29 were missing branches at the lobar level and only 15 had obvious false positives … measurements to segments matched in multiple images of the same subject using image registration was observed to increase their reproducibility. The anatomical branch labeling tool was validated on a subset of 20 subjects, 5 of each category: asymptomatic, mild, moderate and severe COPD. The average inter…

  12. A Novel Artificial Bee Colony Algorithm Based on Internal-Feedback Strategy for Image Template Matching

    Directory of Open Access Journals (Sweden)

    Bai Li

    2014-01-01

    Full Text Available Image template matching refers to the technique of locating a given reference image over a source image such that they are the most similar. It is a fundamental task in the field of visual target recognition. In general, there are two critical aspects of a template matching scheme. One is the similarity measurement and the other is the best-match location search. In this work, we choose the well-known normalized cross correlation model as the similarity criterion. The search for the best-match location is carried out through an internal-feedback artificial bee colony (IF-ABC) algorithm. The IF-ABC algorithm is highlighted by its effort to fight against premature convergence. This purpose is achieved by discarding the conventional roulette selection procedure of the ABC algorithm, so as to give each employed bee an equal chance to be followed by the onlooker bees in the local search phase. Besides that, we also suggest efficiently utilizing the internal convergence states as feedback guidance for the search intensity in subsequent iteration cycles. We have investigated four ideal template matching cases as well as four actual cases using different search algorithms. Our simulation results show that the IF-ABC algorithm is more effective and robust for this template matching task than the conventional ABC and two state-of-the-art modified ABC algorithms.
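
    The similarity criterion adopted here, normalized cross correlation between the template and each candidate window, can be written down directly; the exhaustive search below shows the objective that the bee-colony search optimizes, not the IF-ABC algorithm itself:

```python
import numpy as np

def ncc_score(window: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross correlation between a candidate window and the template."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.linalg.norm(w) * np.linalg.norm(t)
    return float((w * t).sum() / denom) if denom > 0 else -1.0

def best_match(source: np.ndarray, template: np.ndarray):
    """Exhaustive search for the top-left corner maximizing NCC."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(source.shape[0] - th + 1):
        for j in range(source.shape[1] - tw + 1):
            s = ncc_score(source[i:i + th, j:j + tw], template)
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos, best

src = np.random.rand(80, 80)
tpl = src[20:36, 30:46].copy()      # template cut from the source image
print(best_match(src, tpl))          # expected: ((20, 30), ~1.0)
```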

  13. A single-photon ecat reconstruction procedure based on a PSF model

    International Nuclear Information System (INIS)

    Ying-Lie, O.

    1984-01-01

    Emission Computed Axial Tomography (ECAT) has been applied in nuclear medicine for the past few years. Owing to attenuation and scatter along the ray path, adequate correction methods are required. In this thesis, a correction method for attenuation, detector response and Compton scatter has been proposed. The method developed is based on a PSF model. The parameters of the models were derived by fitting experimental and simulation data. Because of its flexibility, a Monte Carlo simulation method has been employed. Using the PSF models, it was found that the ECAT problem can be described by the added modified equation. Application of the reconstruction procedure to simulation data yields satisfactory results. The algorithm tends to amplify noise and distortion in the data, however. Therefore, the applicability of the method to patient studies remains to be seen. (Auth.)

  14. 4D rotational x-ray imaging of wrist joint dynamic motion

    International Nuclear Information System (INIS)

    Carelsen, Bart; Bakker, Niels H.; Strackee, Simon D.; Boon, Sjirk N.; Maas, Mario; Sabczynski, Joerg; Grimbergen, Cornelis A.; Streekstra, Geert J.

    2005-01-01

    Current methods for imaging joint motion are limited to either two-dimensional (2D) video fluoroscopy, or to animated motions from a series of static three-dimensional (3D) images. 3D movement patterns can be detected from biplane fluoroscopy images matched with computed tomography images. This involves several x-ray modalities and sophisticated 2D to 3D matching for the complex wrist joint. We present a method for the acquisition of dynamic 3D images of a moving joint. In our method a 3D-rotational x-ray (3D-RX) system is used to image a cyclically moving joint. The cyclic motion is synchronized to the x-ray acquisition to yield multiple sets of projection images, which are reconstructed to a series of time resolved 3D images, i.e., four-dimensional rotational x-ray (4D-RX). To investigate the obtained image quality, the full width at half maximum (FWHM) of the point spread function (PSF), determined via the edge spread function, and the contrast-to-noise ratio (CNR) between air and phantom were measured on reconstructions of a bullet and rod phantom, using 4D-RX as well as stationary 3D-RX images. The CNR in volume reconstructions based on 251 projection images in the static situation and on 41 and 34 projection images of a moving phantom was 6.9, 3.0, and 2.9, respectively. The average FWHM of the PSF of these same images was 1.1, 1.7, and 2.2 mm orthogonal to the direction of motion and 0.6, 0.7, and 1.0 mm parallel to it. The main deterioration of 4D-RX images compared to 3D-RX images is due to the low number of projection images used and not to the motion of the object. Using 41 projection images seems the best setting for the current system. Experiments on a postmortem wrist show the feasibility of the method for imaging 3D dynamic joint motion. We expect that 4D-RX will pave the way to improved assessment of joint disorders by detection of 3D dynamic motion patterns in joints
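
    The FWHM-via-edge-spread-function measurement mentioned above reduces to differentiating a 1D edge profile into a line spread function and reading off its full width at half maximum; a sketch on a synthetic edge (the blur width and sampling step are illustrative, not values from the study):

```python
import numpy as np
from scipy.special import erf

def fwhm_from_esf(esf: np.ndarray, pixel_size_mm: float) -> float:
    """Differentiate an edge spread function into a line spread function and return
    its full width at half maximum, interpolating the half-maximum crossings."""
    lsf = np.gradient(esf)
    lsf = lsf / lsf.max()
    above = np.where(lsf >= 0.5)[0]
    left, right = above[0], above[-1]
    xl = left - (lsf[left] - 0.5) / (lsf[left] - lsf[left - 1])
    xr = right + (lsf[right] - 0.5) / (lsf[right] - lsf[right + 1])
    return (xr - xl) * pixel_size_mm

# Synthetic edge blurred by a Gaussian PSF with sigma = 0.5 mm, sampled every 0.05 mm.
x = np.arange(-10, 10.05, 0.05)
esf = 0.5 * (1 + erf(x / (np.sqrt(2) * 0.5)))
print(fwhm_from_esf(esf, 0.05))   # expected about 2.355 * 0.5 ≈ 1.18 mm
```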

  15. AN EVOLUTIONARY ALGORITHM FOR FAST INTENSITY BASED IMAGE MATCHING BETWEEN OPTICAL AND SAR SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    P. Fischer

    2018-04-01

    Full Text Available This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images of different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions are introduced using techniques like hybridization (e.g. local search) and others to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.

  16. Fixed-pattern noise correction method based on improved moment matching for a TDI CMOS image sensor.

    Science.gov (United States)

    Xu, Jiangtao; Nie, Huafeng; Nie, Kaiming; Jin, Weimin

    2017-09-01

    In this paper, an improved moment matching method based on a spatial correlation filter (SCF) and bilateral filter (BF) is proposed to correct the fixed-pattern noise (FPN) of a time-delay-integration CMOS image sensor (TDI-CIS). First, the values of row FPN (RFPN) and column FPN (CFPN) are estimated and added to the original image through the SCF and BF, respectively. Then the filtered image is processed by an improved moment matching method with a moving window. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination, the standard deviation of the row mean vector (SDRMV) decreases from 5.6761 LSB to 0.1948 LSB, while the standard deviation of the column mean vector (SDCMV) decreases from 15.2005 LSB to 13.1949 LSB. In addition, for different images captured by different TDI-CISs, the average decreases in SDRMV and SDCMV are 5.4922 LSB and 2.0357 LSB, respectively. Comparative experimental results indicate that the proposed method can effectively correct the FPNs of different TDI-CISs while maintaining image details without any auxiliary equipment.
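
    Plain moment matching for column FPN (without the SCF/BF pre-filtering proposed in the paper) rescales each column so that its mean and standard deviation match global reference values; a minimal sketch on a simulated uniform-illumination frame:

```python
import numpy as np

def moment_match_columns(img: np.ndarray) -> np.ndarray:
    """Force every column to the global mean/std, suppressing column fixed-pattern noise."""
    img = img.astype(np.float64)
    col_mean = img.mean(axis=0)
    col_std = img.std(axis=0)
    col_std[col_std == 0] = 1.0
    target_mean, target_std = img.mean(), img.std()
    return (img - col_mean) / col_std * target_std + target_mean

# Uniform-illumination frame corrupted by per-column gain and offset errors.
rng = np.random.default_rng(3)
clean = np.full((128, 128), 100.0) + rng.normal(0, 1, (128, 128))
gains, offsets = 1 + 0.05 * rng.normal(size=128), 5 * rng.normal(size=128)
raw = clean * gains + offsets
corrected = moment_match_columns(raw)
print(raw.mean(axis=0).std(), corrected.mean(axis=0).std())  # column-mean spread shrinks
```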

  17. IMPROVED TOPOGRAPHIC MODELS VIA CONCURRENT AIRBORNE LIDAR AND DENSE IMAGE MATCHING

    Directory of Open Access Journals (Sweden)

    G. Mandlburger

    2017-09-01

    Full Text Available Modern airborne sensors integrate laser scanners and digital cameras for capturing topographic data at high spatial resolution. The capability of penetrating vegetation through small openings in the foliage and the high ranging precision in the cm range have made airborne LiDAR the prime terrain acquisition technique. In recent years dense image matching has evolved rapidly and meanwhile outperforms laser scanning in terms of the achievable spatial resolution of the derived surface models. In our contribution we analyze the inherent properties and review the typical processing chains of both acquisition techniques. In addition, we present potential synergies of jointly processing image and laser data, with emphasis on sensor orientation and point cloud fusion for digital surface model derivation. Test data were concurrently acquired with the RIEGL LMS-Q1560 sensor over the city of Melk, Austria, in January 2016 and served as the basis for testing innovative processing strategies. We demonstrate that (i) systematic effects in the resulting scanned and matched 3D point clouds can be minimized based on a hybrid orientation procedure, (ii) systematic differences of the individual point clouds are observable at penetrable, vegetated surfaces due to the different measurement principles, and (iii) improved digital surface models can be derived by combining the higher density of the matching point cloud and the higher reliability of the LiDAR point cloud, especially in the narrow alleys and courtyards of the study site, a medieval city.

  18. Multiple Constraints Based Robust Matching of Poor-Texture Close-Range Images for Monitoring a Simulated Landslide

    Directory of Open Access Journals (Sweden)

    Gang Qiao

    2016-05-01

    Full Text Available Landslides are one of the most destructive geo-hazards and can pose great threats to both human lives and infrastructure. Landslide monitoring has always been a research hotspot. In particular, landslide simulation experimentation is an effective tool in landslide research to obtain critical parameters that help understand the mechanism and evaluate the triggering and controlling factors of slope failure. Compared with other traditional geotechnical monitoring approaches, the close-range photogrammetry technique shows potential in tracking and recording the 3D surface deformation and failure processes. In such cases, image matching usually plays a critical role in stereo image processing for the 3D geometric reconstruction. However, complex imaging conditions such as rainfall, mass movement, illumination, and ponding reduce the texture quality of the stereo images, bringing about difficulties in the image matching process and resulting in very sparse matches. To address this problem, this paper presents a multiple-constraints based robust image matching approach for poor-texture close-range images, particularly useful in monitoring a simulated landslide. The Scale Invariant Feature Transform (SIFT) algorithm was first applied to the stereo images to generate scale-invariant feature points, followed by a two-step matching process: feature-based image matching and area-based image matching. In the first, feature-based matching step, the triangulation process was performed based on the SIFT matches filtered by the Fundamental Matrix (FM) and a robust checking procedure, to serve as the basic constraints for feature-based iterated matching of all the non-matched SIFT-derived feature points inside each triangle. In the following area-based image-matching step, the corresponding points of the non-matched features in each triangle of the master image were predicted in the homologous triangle of the searching image by using geometric
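
    The first, feature-based stage (SIFT matches filtered by the fundamental matrix with a robust check) can be sketched with OpenCV, assuming a build that includes SIFT; the triangulation-constrained densification and the area-based stage are not reproduced, and the file names are hypothetical:

```python
import cv2
import numpy as np

def sift_fm_matches(img_left, img_right):
    """SIFT matching with Lowe's ratio test, then epipolar filtering by a RANSAC
    fundamental-matrix estimate; returns the surviving point correspondences."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_left, None)
    k2, d2 = sift.detectAndCompute(img_right, None)
    raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in raw if m.distance < 0.8 * n.distance]   # ratio test
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    # needs at least 8 surviving matches for the RANSAC fundamental-matrix estimate
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    keep = mask.ravel() == 1
    return pts1[keep], pts2[keep]

# left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
# right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# p1, p2 = sift_fm_matches(left, right)
```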

  19. A Novel Fast and Robust Binary Affine Invariant Descriptor for Image Matching

    Directory of Open Access Journals (Sweden)

    Xiujie Qu

    2014-01-01

    Full Text Available As current binary descriptors suffer from high computational complexity, a lack of affine invariance, and high false matching rates under viewpoint changes, a new binary affine invariant descriptor, called BAND, is proposed. Different from other descriptors, BAND has an irregular pattern, which is based on a local affine invariant region surrounding a feature point, and it has five orientations, which are obtained efficiently by LBP. Ultimately, a 256-bit binary string is computed by a simple random sampling pattern. Experimental results demonstrate that BAND matches well under rotation, image zooming, noise, lighting changes, and small-scale perspective transformation. It has better matching performance than current mainstream descriptors, while requiring less time.

  20. Point spread function modeling and image restoration for cone-beam CT

    International Nuclear Information System (INIS)

    Zhang Hua; Shi Yikai; Huang Kuidong; Xu Zhe

    2015-01-01

    X-ray cone-beam computed tomography (CT) has notable features such as high efficiency and precision, and is widely used in the fields of medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. To address the problems of projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is proposed first. The general PSF model of cone-beam CT is established, and based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurements, which greatly improves the practical convenience of cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection images and slice images clearer after restoration, and keeps the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. (authors)
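
    Once a PSF model is available, a standard non-blind restoration baseline is Wiener deconvolution in the Fourier domain; a sketch of that baseline only (the paper's pre-filtering and pre-segmentation steps are not reproduced, and the Gaussian kernel simply stands in for a modelled cone-beam PSF):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wiener_deconvolve(blurred: np.ndarray, psf: np.ndarray, k: float = 0.01) -> np.ndarray:
    """Frequency-domain Wiener filter with a constant noise-to-signal ratio k."""
    psf_pad = np.zeros_like(blurred, dtype=np.float64)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()
    # centre the kernel at the origin so the restored image is not shifted
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))

# Gaussian kernel standing in for a modelled projection-image PSF.
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 1.5 ** 2))
image = np.random.default_rng(0).random((256, 256))
blurred = gaussian_filter(image, sigma=1.5)        # simulated degraded projection
restored = wiener_deconvolve(blurred, psf, k=0.005)
```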

  1. Automatic Matching of Multi-Source Satellite Images: A Case Study on ZY-1-02C and ETM+

    Directory of Open Access Journals (Sweden)

    Bo Wang

    2017-10-01

    Full Text Available The ever-growing number of applications for satellites is being compromised by their poor direct positioning precision. Existing orthoimages, such as enhanced thematic mapper (ETM+) orthoimages, can provide georeferences or improve the geo-referencing accuracy of satellite images, such as ZY-1-02C images that have unsatisfactory positioning precision, thus enhancing their processing efficiency and application. In this paper, a feasible image matching approach using multi-source satellite images is proposed on the basis of an experiment carried out with ZY-1-02C Level 1 images and ETM+ orthoimages. The proposed approach overcame differences in rotation angle, scale, and translation between images. The rotation and scale variances were evaluated on the basis of rational polynomial coefficients. The translation vectors were generated after blocking by overall phase correlation. Then, normalized cross-correlation and least-squares matching were applied for matching. Finally, the gross errors of the corresponding points were eliminated by local statistical vectors in a TIN structure. Experimental results showed a matching precision of less than two pixels (root-mean-square error), and comparison results indicated that the proposed method outperforms Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), and Affine-Scale Invariant Feature Transform (A-SIFT) in terms of reliability and efficiency.
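
    The translation estimation step can be illustrated with standard FFT phase correlation; a minimal sketch for integer-pixel shifts (the block-wise processing used in the paper is omitted):

```python
import numpy as np

def phase_correlation_shift(ref: np.ndarray, moved: np.ndarray):
    """Estimate the integer (dy, dx) such that moved ≈ np.roll(ref, (dy, dx), axis=(0, 1))."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moved)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.real(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half-space to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(7)
ref = rng.random((128, 128))
moved = np.roll(ref, shift=(5, -9), axis=(0, 1))   # known circular shift
print(phase_correlation_shift(ref, moved))          # expected (5, -9)
```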

  2. 3D OBJECT COORDINATES EXTRACTION BY RADARGRAMMETRY AND MULTI STEP IMAGE MATCHING

    Directory of Open Access Journals (Sweden)

    A. Eftekhari

    2013-09-01

    Full Text Available Nowadays, with high-resolution SAR imaging systems such as Radarsat-2, TerraSAR-X and COSMO-SkyMed, three-dimensional terrain data extraction using SAR images is growing. InSAR and radargrammetry are the two most common approaches for deriving 3D object coordinates from SAR images. Research has shown that extracting terrain elevation data using the satellite repeat-pass SAR interferometry technique is problematic due to atmospheric factors and the lack of coherence between the images in areas with dense vegetation cover, so the radargrammetry technique can be effective. Generally, height derivation by radargrammetry consists of two stages: image matching and space intersection. In this paper we propose a multi-stage algorithm founded on the combination of feature-based and area-based image matching. The RPCs calculated for each image are then used to extract the 3D coordinates of the matched points. Finally, the calculated coordinates are compared with coordinates extracted from a 1 m DEM. The results show a root mean square error of 3.09 m for 360 points. We use a pair of spotlight TerraSAR-X images of JAM (Iran) in this article.

  3. Modification of PSf/SPSf Blended Porous Support for Improving the Reverse Osmosis Performance of Aromatic Polyamide Thin Film Composite Membranes

    Directory of Open Access Journals (Sweden)

    Li-Fen Liu

    2018-06-01

    Full Text Available In this study, modification of polysulfone (PSf)/sulfonated polysulfone (SPSf) blended porous ultrafiltration (UF) support membranes was proposed to improve the reverse osmosis (RO) performance of aromatic polyamide thin film composite (TFC) membranes. The synergistic effects of solvent, polymer concentration, and SPSf doping content in the casting solution on the properties of both the porous supports and the RO membranes were investigated systematically. SEM and AFM were combined to characterize the physical properties of the membranes, including surface pore characteristics (porosity, mean pore radius), surface morphology, and section structure. A contact angle meter was used to analyze the membrane surface hydrophilicity. Permeation experiments were carried out to evaluate the separation performance of the membranes. The results showed that the PSf/SPSf blended porous support modified with 6 wt % SPSf in the presence of DMF and 14 wt % PSf had higher porosity, a bigger pore diameter, and a rougher and more hydrophilic surface, which was more beneficial for the fabrication of a polyamide TFC membrane with favorable reverse osmosis performance. This modified PSf/SPSf support endowed the RO membrane with a more hydrophilic surface, higher water flux (about 1.2 times), as well as a slight increase in salt rejection compared with the nascent PSf support. In summary, this work provides a new facile method to improve the separation performance of polyamide TFC RO membranes via the modification of the conventional PSf porous support with SPSf.

  4. The Fresnel Zone Light Field Spectral Imager

    Science.gov (United States)

    2017-03-23

    detection efficiency for weak signals. Additionally, further study should be done on spectral calibration methods for a FZLFSI. When dealing with weak … detection assembly. The different image formation planes for each wavelength are constructed synthetically through processing the collected light … a single micro-lens image. This characteristic also holds for wavelengths other than the design wavelength. … modified light field PSF is detected

  5. Local Deep Hashing Matching of Aerial Images Based on Relative Distance and Absolute Distance Constraints

    Directory of Open Access Journals (Sweden)

    Suting Chen

    2017-12-01

    Full Text Available Aerial images are characterized by high resolution and complex backgrounds, and their matching usually requires large amounts of computation; however, most algorithms used in matching of aerial images adopt shallow hand-crafted features expressed as floating-point descriptors (e.g., SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features)), which may suffer from poor matching speed and limited representational power. Here, we propose a novel Local Deep Hashing Matching (LDHM) method for matching of aerial images with large size and with lower complexity or faster matching speed. The basic idea of the proposed algorithm is to utilize a deep network model in the local area of the aerial images, and to learn the local features as well as the hash function of the images. Firstly, according to the course overlap rate of the aerial images, the algorithm extracts the local areas for matching to avoid the processing of redundant information. Secondly, a triplet network structure is proposed to mine the deep features of the patches of the local image, and the learned features are fed to the hash layer, thus obtaining a binary hash code representation. Thirdly, constraints on the absolute distance of the positive samples are added on the basis of the triplet loss, and a new objective function is constructed to optimize the parameters of the network and enhance the discriminating capability of the image patch features. Finally, the obtained deep hash code of each image patch is used for similarity comparison of the image patches in the Hamming space to complete the matching of the aerial images. The proposed LDHM algorithm was evaluated on the UltraCam-D dataset and a set of actual aerial images; the simulation results demonstrate that it may significantly outperform state-of-the-art algorithms in terms of efficiency and performance.

  6. Artificial intelligence (AI)-based relational matching and multimodal medical image fusion: generalized 3D approaches

    Science.gov (United States)

    Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.

    1994-09-01

    A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms, in particular knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a 'goodness of matching' function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to the regional centers of gravity, the images are ready for fusion and visualization into a single 3D image of higher clarity.

  7. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    Science.gov (United States)

    Xia, G.; Hu, C.

    2018-04-01

    The digitalization of cultural heritage based on ground laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction with the complete point cloud and high-resolution images requires the matching of image and point cloud, the acquisition of homologous feature points, data registration, etc. However, the one-to-one correspondence between an image and its corresponding point cloud depends on inefficient manual search. The effective classification and management of a large number of images, and the matching of large images with the corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large scale images and terrestrial LiDAR based on APP synergy of a mobile phone. Firstly, we develop an APP based on Android that takes pictures and records the related classification information. Secondly, all the images are automatically grouped with the recorded information. Thirdly, a matching algorithm is used to match the global and local images. According to the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of the image and its corresponding laser radar point cloud is realized. Finally, the mapping relationship between the global image, the local images and the intensity image is established according to homologous feature points, so that the data structure of the global image, the local images within the global image and the point cloud corresponding to each local image can be established, and visualization, management and querying of the images can be carried out.

  8. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    Science.gov (United States)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry, because the change of gray-scale or texture is not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To make full use of the geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper, considering the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm offers three improvements to image matching. Firstly, shape factor, fuzzy mathematics and gray-scale projection are introduced into the design of a synthetical matching measure. Secondly, the topological connection relations of matching points in a Delaunay triangulated network and the epipolar line are used to decide the matching order and narrow the search scope for the conjugate point of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least-squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm has a higher matching speed and matching accuracy than a pyramid image matching algorithm based on gray-scale correlation.

  9. Quantification of rat brain SPECT with 123I-ioflupane: evaluation of different reconstruction methods and image degradation compensations using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Roé-Vellvé, N; Pino, F; Cot, A; Ros, D; Falcon, C; Gispert, J D; Pavía, J; Marin, C

    2014-01-01

    SPECT studies with 123 I-ioflupane facilitate the diagnosis of Parkinson's disease (PD). The effect of image degradations on quantification has been extensively evaluated in human studies, but their impact on studies of experimental PD models is still unclear. The aim of this work was to assess the effect of compensating for the degrading phenomena on the quantification of small animal SPECT studies using 123 I-ioflupane. This assessment enabled us to evaluate the feasibility of quantitatively detecting small pathological changes using different reconstruction methods and levels of compensation for the image degrading phenomena. Monte Carlo simulated studies of a rat phantom were reconstructed and quantified. Compensations for the point spread function (PSF), scattering, attenuation and partial volume effect were progressively included in the quantification protocol. A linear relationship was found between calculated and simulated specific uptake ratio (SUR) in all cases. In order to significantly distinguish disease stages, noise reduction during the reconstruction process was the most relevant factor, followed by PSF compensation. The smallest detectable SUR interval was determined by biological variability rather than by image degradations or coregistration errors. The quantification methods that gave the best results allowed us to distinguish PD stages with SUR values as close as 0.5 using groups of six rats to represent each stage. (paper)
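
    The specific uptake ratio (SUR) quantified here is conventionally computed as the target-region uptake relative to a reference region; a one-function sketch under that assumption (the region masks and the toy values are illustrative, not from the study):

```python
import numpy as np

def specific_uptake_ratio(img: np.ndarray, target_mask: np.ndarray,
                          reference_mask: np.ndarray) -> float:
    """SUR = (mean target-region uptake - mean reference uptake) / mean reference uptake."""
    target = float(img[target_mask].mean())
    reference = float(img[reference_mask].mean())
    return (target - reference) / reference

# Toy example: 20% higher uptake in the target region gives an SUR of about 0.2.
img = np.ones((64, 64))
target_mask = np.zeros_like(img, dtype=bool); target_mask[20:30, 20:30] = True
reference_mask = np.zeros_like(img, dtype=bool); reference_mask[40:60, 40:60] = True
img[target_mask] = 1.2
print(specific_uptake_ratio(img, target_mask, reference_mask))
```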

  10. DEM GENERATION FROM HIGH RESOLUTION SATELLITE IMAGES THROUGH A NEW 3D LEAST SQUARES MATCHING ALGORITHM

    Directory of Open Access Journals (Sweden)

    T. Kim

    2012-09-01

    Full Text Available Automated generation of digital elevation models (DEMs) from high resolution satellite images (HRSIs) has been an active research topic for many years. However, stereo matching of HRSIs, in particular based on image-space search, is still difficult due to occlusions and building facades within them. Object-space matching schemes, proposed to overcome these problems, are often very time consuming and sensitive to the dimensions of the voxels. In this paper, we tried a new least squares matching (LSM) algorithm that works in a 3D object space. The algorithm starts with an initial height value at one location of the object space. From this 3D point, the left and right image points are projected. The true height is calculated by iterative least squares estimation based on the grey level differences between the left and right patches centred on the projected left and right points. We tested the 3D LSM on the WorldView images over 'Terrassa Sud' provided by the ISPRS WG I/4. We also compared the performance of the 3D LSM with correlation matching based on 2D image space and correlation matching based on 3D object space. The accuracy of the DEM from each method was analysed against the ground truth. Test results showed that 3D LSM offers more accurate DEMs than the conventional matching algorithms. Results also showed that 3D LSM is sensitive to the accuracy of the initial height value used to start the estimation. We combined the 3D correlation matching (COM) and the 3D LSM for accurate and robust DEM generation from HRSIs. The major contribution of this paper is that we proposed and validated that LSM can be applied in object space and that the combination of 3D correlation and 3D LSM can be a good solution for automated DEM generation from HRSIs.

  11. Correlating PSf Support Physicochemical Properties with the Formation of Piperazine-Based Polyamide and Evaluating the Resultant Nanofiltration Membrane Performance

    Directory of Open Access Journals (Sweden)

    Micah Belle Marie Yap Ang

    2017-10-01

    Full Text Available Membrane support properties influence the performance of thin-film composite nanofiltration membranes. We fabricated several polysulfone (PSf) supports. The physicochemical properties of PSf were altered by adding polyethylene glycol (PEG) of varying molecular weights (200–35,000 g/mol). This alteration facilitated the formation of a thin polyamide layer on the PSf surface during the interfacial polymerization reaction involving an aqueous solution of piperazine containing 4-aminobenzoic acid and an organic solution of trimesoyl chloride. Attenuated total reflectance-Fourier transform infrared validated the presence of PEG in the membrane support. Scanning electron microscopy and atomic force microscopy illustrated that the thin-film polyamide layer morphology transformed from a rough to a smooth surface. A cross-flow filtration test indicated that a thin-film composite polyamide membrane comprising a PSf support (TFC-PEG20k) with a low surface porosity, small pore size, and suitable hydrophilicity delivered the highest water flux and separation efficiency (J = 81.1 ± 6.4 L·m−2·h−1, RNa2SO4 = 91.1% ± 1.8%, and RNaCl = 35.7% ± 3.1% at 0.60 MPa). This membrane had a molecular weight cutoff of 292 g/mol and also a high rejection for negatively charged dyes. Therefore, a PSf support exhibiting suitable physicochemical properties endowed a thin-film composite polyamide membrane with high performance.

  12. Physics-based shape matching for intraoperative image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Suwelack, Stefan, E-mail: suwelack@kit.edu; Röhl, Sebastian; Bodenstedt, Sebastian; Reichard, Daniel; Dillmann, Rüdiger; Speidel, Stefanie [Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Adenauerring 2, Karlsruhe 76131 (Germany); Santos, Thiago dos; Maier-Hein, Lena [Computer-assisted Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg 69120 (Germany); Wagner, Martin; Wünscher, Josephine; Kenngott, Hannes; Müller, Beat P. [General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 110, Heidelberg 69120 (Germany)

    2014-11-01

    Purpose: Soft-tissue deformations can severely degrade the validity of preoperative planning data during computer assisted interventions. Intraoperative imaging such as stereo endoscopic, time-of-flight, or laser range scanner data can be used to compensate for these movements. In this context, the intraoperative surface has to be matched to the preoperative model. The shape matching is especially challenging in the intraoperative setting due to noisy sensor data, only partially visible surfaces, ambiguous shape descriptors, and real-time requirements. Methods: A novel physics-based shape matching (PBSM) approach to register intraoperatively acquired surface meshes to preoperative planning data is proposed. The key idea of the method is to describe the nonrigid registration process as an electrostatic–elastic problem, where an elastic body (preoperative model) that is electrically charged slides into an oppositely charged rigid shape (intraoperative surface). It is shown that the corresponding energy functional can be efficiently solved using the finite element (FE) method. It is also demonstrated how PBSM can be combined with rigid registration schemes for robust nonrigid registration of arbitrarily aligned surfaces. Furthermore, it is shown how the approach can be combined with landmark based methods, and its application to image guidance in laparoscopic interventions is outlined. Results: A profound analysis of the PBSM scheme based on in silico and phantom data is presented. Simulation studies on several liver models show that the approach is robust to the initial rigid registration and to parameter variations. The studies also reveal that the method achieves submillimeter registration accuracy (mean error between 0.32 and 0.46 mm). An unoptimized, single core implementation of the approach achieves near real-time performance (2 TPS, 7–19 s total registration time). It outperforms established methods in terms of speed and accuracy. Furthermore, it is shown that the

  13. Shadow Areas Robust Matching Among Image Sequence in Planetary Landing

    Science.gov (United States)

    Ruoyan, Wei; Xiaogang, Ruan; Naigong, Yu; Xiaoqing, Zhu; Jia, Lin

    2017-01-01

    In this paper, an approach for robust matching of shadow areas in autonomous visual navigation and planetary landing is proposed. The approach begins with detecting shadow areas, which are extracted by Maximally Stable Extremal Regions (MSER). Then, an affine normalization algorithm is applied to normalize the areas. Thirdly, a descriptor called Multiple Angles-SIFT (MA-SIFT), derived from SIFT, is proposed; this descriptor can extract more features from an area. Finally, to eliminate the influence of outliers, an improved RANSAC method based on the Skinner Operation Condition is proposed to extract inliers. At last, a series of experiments was conducted to test the performance of the proposed approach; the results show that the approach can maintain the matching accuracy at a high level even when the differences among the images are obvious, with no attitude measurements supplied.

  14. Deconvolution of Defocused Image with Multivariate Local Polynomial Regression and Iterative Wiener Filtering in DWT Domain

    Directory of Open Access Journals (Sweden)

    Liyun Su

    2010-01-01

    obtaining the point spread function (PSF) parameter, an iterative Wiener filter is adopted to complete the restoration. We experimentally illustrate its performance on simulated data and a real blurred image. Results show that the proposed PSF parameter estimation technique and the image restoration method are effective.

  15. Adequação de recursos humanos ao PSF: percepção de formandos de dois modelos de formação acadêmica em odontologia [The Family Health Program (FHP) and human resources: perceptions of students from two different dentistry schools]

    Directory of Open Access Journals (Sweden)

    Heriberto Fiúza Sanchez

    2008-04-01

    Full Text Available The Family Health Program (Programa Saúde da Família, PSF) was established by the Brazilian federal government with the aim of reorienting the health care model. The human resources involved must be prepared to achieve the objectives that the PSF proposes. The purpose of this article was to evaluate the wishes, perceptions, and preparation of dentistry students regarding the principles of the PSF at two different dentistry schools, referred to here as Schools 1 and 2. The study also analysed whether the schools had a transformative influence on the students, graduating them with the social commitment and humanitarian sensibility considered important for those who wish to work in the PSF. Individual questionnaires were applied by a single researcher. The responses were analysed with the Epi-Info program. The results showed that the desire to work in the PSF prevails among the students for reasons linked to the difficulties of the job market, and the students frequently cite technique as the main characteristic a dentist needs in order to work in the PSF. On the other hand, statistically significant differences were found between the students of the two schools, pointing to a probable influence of the supervised internship, delivered as a rural internship, on the education of the students of School 1, possibly better preparing them for the PSF.

  16. Integration of prior knowledge into dense image matching for video surveillance

    Science.gov (United States)

    Menze, M.; Heipke, C.

    2014-08-01

    Three-dimensional information from dense image matching is a valuable input for a broad range of vision applications. While reliable approaches exist for dedicated stereo setups, they do not easily generalize to more challenging camera configurations. In the context of video surveillance, the typically large spatial extent of the region of interest and repetitive structures in the scene render the application of dense image matching a challenging task. In this paper we present an approach that derives strong prior knowledge from a planar approximation of the scene. This information is integrated into a graph-cut based image matching framework that treats the assignment of optimal disparity values as a labelling task. Introducing the planar prior heavily reduces ambiguities together with the search space and increases computational efficiency. The results provide a proof of concept of the proposed approach. It allows the reconstruction of dense point clouds in more general surveillance camera setups with wider stereo baselines.

  17. Advanced Tie Feature Matching for the Registration of Mobile Mapping Imaging Data and Aerial Imagery

    Science.gov (United States)

    Jende, P.; Peter, M.; Gerke, M.; Vosselman, G.

    2016-06-01

    Mobile Mapping's ability to acquire high-resolution ground data is offset by the unreliable localisation capabilities of satellite-based positioning systems in urban areas. Buildings form canyons that impede a direct line-of-sight to navigation satellites, making it difficult to accurately estimate the mobile platform's position. Consequently, the positioning quality of the acquired data products is considerably diminished. This issue has been widely addressed in the literature and in research projects. However, consistent compliance with sub-decimetre accuracy, as well as the correction of errors in height, remains unsolved. We propose a novel approach to enhance Mobile Mapping (MM) image orientation based on the utilisation of highly accurate orientation parameters derived from aerial imagery. In addition, the diminished exterior orientation parameters of the MM platform will be utilised, as they enable the application of the accurate matching techniques needed to derive reliable tie information. This tie information will then be used within an adjustment solution to correct affected MM data. This paper presents an advanced feature matching procedure as a prerequisite to the aforementioned orientation update. MM data is ortho-projected to gain a higher resemblance to aerial nadir data, simplifying the images' geometry for matching. By utilising MM exterior orientation parameters, search windows may be used in conjunction with selective keypoint detection and template matching. However, since the data originate from different sensor systems, difficulties arise with respect to changes in illumination, radiometry and a different original perspective. To respond to these challenges for feature detection, the procedure relies on detecting keypoints in only one image. Initial tests indicate a considerable improvement in comparison to classic detector/descriptor approaches in this particular matching scenario. This method leads to a significant reduction of outliers due to the limited availability

  18. A Frequency Matching Method for Generation of a Priori Sample Models from Training Images

    DEFF Research Database (Denmark)

    Lange, Katrine; Cordua, Knud Skou; Frydendall, Jan

    2011-01-01

    This paper presents a Frequency Matching Method (FMM) for generation of a priori sample models based on training images and illustrates its use by an example. In geostatistics, training images are used to represent a priori knowledge or expectations of models, and the FMM can be used to generate new images that share the same multi-point statistics as a given training image. The FMM proceeds by iteratively updating voxel values of an image until the frequency of patterns in the image matches the frequency of patterns in the training image, making the resulting image statistically indistinguishable from the training image.
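
    The central statistic in the FMM is a frequency distribution of small patterns extracted from the training image; a minimal sketch that counts 2×2 patterns in a binary image and compares two histograms (the iterative voxel-update loop itself is not shown, and the L1 mismatch measure is only an illustration):

```python
import numpy as np
from collections import Counter

def pattern_frequencies(img: np.ndarray, size: int = 2) -> Counter:
    """Histogram of size-by-size patterns over all overlapping positions."""
    counts = Counter()
    rows, cols = img.shape
    for i in range(rows - size + 1):
        for j in range(cols - size + 1):
            counts[tuple(img[i:i + size, j:j + size].ravel())] += 1
    return counts

def frequency_mismatch(a: Counter, b: Counter) -> int:
    """Simple L1 distance between two pattern histograms."""
    keys = set(a) | set(b)
    return sum(abs(a[k] - b[k]) for k in keys)

training = (np.random.default_rng(1).random((64, 64)) > 0.7).astype(int)
candidate = (np.random.default_rng(2).random((64, 64)) > 0.5).astype(int)
print(frequency_mismatch(pattern_frequencies(training), pattern_frequencies(candidate)))
```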

  19. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    Science.gov (United States)

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
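
    For a linear model g = Hf + n with Tikhonov regularization, the GCV score can be evaluated from the SVD of H; a small dense-matrix sketch of that criterion on a toy 1D deblurring problem (the Lanczos and Gauss-quadrature approximations proposed in the paper for large problems are not reproduced):

```python
import numpy as np

def gcv_score(H: np.ndarray, g: np.ndarray, lam: float) -> float:
    """Generalized cross-validation score for the Tikhonov-regularized solution of Hf ≈ g."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    filt = s ** 2 / (s ** 2 + lam)            # filter factors of the influence matrix
    Utg = U.T @ g
    residual = np.sum(((1 - filt) * Utg) ** 2) + (g @ g - Utg @ Utg)
    trace = H.shape[0] - np.sum(filt)          # trace of (I - influence matrix)
    return residual / trace ** 2

# Toy 1D deblurring problem: pick the lambda minimizing GCV over a log grid.
rng = np.random.default_rng(0)
n = 64
x = np.linspace(-3, 3, n)
H = np.exp(-(x[:, None] - x[None, :]) ** 2)    # Gaussian blur matrix
f_true = np.sin(x) + 0.5 * np.cos(3 * x)
g = H @ f_true + 0.01 * rng.normal(size=n)
lams = np.logspace(-8, 0, 50)
best = lams[int(np.argmin([gcv_score(H, g, l) for l in lams]))]
print("GCV-selected lambda:", best)
```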

  20. Method for estimating modulation transfer function from sample images.

    Science.gov (United States)

    Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta

    2018-02-01

    The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
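    A compact way to see the procedure described here is: take the 2D Fourier transform of the image, fit the logarithm of the squared norm as a linear function of squared spatial frequency, interpret the slope as a Gaussian PSF, and read off the MTF. The sketch below follows that logic under the simplifying assumption that the underlying scene power spectrum is roughly flat over the fitted band; the function name and frequency cut-off are illustrative.

```python
import numpy as np


def estimate_gaussian_mtf(image, max_freq=0.35):
    """Estimate an MTF by fitting a Gaussian PSF model to the image power spectrum.

    Assumes log|F(image)|^2 is approximately linear in squared spatial frequency,
    which holds when the PSF is Gaussian and the scene spectrum is roughly flat.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)
    fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
    freq2 = fx[None, :] ** 2 + fy[:, None] ** 2        # squared frequency distance from the origin
    band = (freq2 > 0) & (freq2 < max_freq ** 2)        # exclude DC and the noise floor
    slope, _ = np.polyfit(freq2[band], log_power[band], 1)
    # For a Gaussian PSF, log|F|^2 falls linearly with f^2, so MTF(f) = exp(slope * f^2 / 2).
    freqs = np.linspace(0.0, 0.5, 101)                  # cycles per pixel up to Nyquist
    return freqs, np.exp(0.5 * slope * freqs ** 2)
```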

  1. Inverse consistent non-rigid image registration based on robust point set matching

    Science.gov (United States)

    2014-01-01

    Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because it is a unidirectional image matching approach. Improving image registration based on RPM is therefore an important issue. Methods In our work, a consistent image registration approach based on point-set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between the two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated from both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into the point matching in order to improve image matching. Results Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is well preserved by our algorithm, even for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated. Again, our algorithm achieves lower registration errors in the same iteration number

  2. Long range image enhancement

    CSIR Research Space (South Africa)

    Duvenhage, B

    2015-11-01

    Full Text Available the surveillance system performance. This paper discusses an image processing method that tracks the behaviour of the PSF and then de-warps the image to reduce the disruptive effects of turbulence. Optical flow, an average image filter and a simple unsharp mask...

  3. Mix-and-match holography

    KAUST Repository

    Peng, Yifan

    2017-11-22

    Computational caustics and light steering displays offer a wide range of interesting applications, ranging from art works and architectural installations to energy efficient HDR projection. In this work we expand on this concept by encoding several target images into pairs of front and rear phase-distorting surfaces. Different target holograms can be decoded by mixing and matching different front and rear surfaces under specific geometric alignments. Our approach, which we call mix-and-match holography, is made possible by moving from a refractive caustic image formation process to a diffractive, holographic one. This provides the extra bandwidth that is required to multiplex several images into pairing surfaces.

  4. Spontaneous reorientation is guided by perceived surface distance, not by image matching or comparison.

    Directory of Open Access Journals (Sweden)

    Sang Ah Lee

    Full Text Available Humans and animals recover their sense of position and orientation using properties of the surface layout, but the processes underlying this ability are disputed. Although behavioral and neurophysiological experiments on animals long have suggested that reorientation depends on representations of surface distance, recent experiments on young children join experimental studies and computational models of animal navigation to suggest that reorientation depends either on processing of any continuous perceptual variables or on matching of 2D, depthless images of the landscape. We tested the surface distance hypothesis against these alternatives through studies of children, using environments whose 3D shape and 2D image properties were arranged to enhance or cancel impressions of depth. In the absence of training, children reoriented by subtle differences in perceived surface distance under conditions that challenge current models of 2D-image matching or comparison processes. We provide evidence that children's spontaneous navigation depends on representations of 3D layout geometry.

  5. High Resolution Imaging of the Sun with CORONAS-1

    Science.gov (United States)

    Karovska, Margarita

    1998-01-01

    We applied several image restoration and enhancement techniques to CORONAS-I images. We carried out a characterization of the Point Spread Function (PSF) using the unique capability of the Blind Iterative Deconvolution (BID) technique, which recovers the real PSF at a given location and time of observation when only limited a priori information is available on its characteristics. We also applied image enhancement techniques to extract the small-scale structure embedded in bright large-scale structures on the disk and on the limb. The results demonstrate the capability of image post-processing to substantially increase the yield from space observations by improving the resolution and reducing noise in the images.

  6. Keyframes Global Map Establishing Method for Robot Localization through Content-Based Image Matching

    Directory of Open Access Journals (Sweden)

    Tianyang Cao

    2017-01-01

    Full Text Available Self-localization and mapping are important for indoor mobile robots. We report a robust algorithm for map building and subsequent localization especially suited for indoor floor-cleaning robots. Common methods, for example SLAM, can easily be kidnapped by collisions or disturbed by similar objects. Therefore, a keyframes-based global map establishing method for robot localization in multiple rooms and corridors is needed. Content-based image matching is the core of this method. It is designed for this situation by establishing keyframes containing both floor and distorted wall images. Image distortion, caused by the robot's view angle and movement, is analyzed and derived. An image matching solution is presented, consisting of the extraction of overlap regions of keyframes and overlap-region rebuilding through sub-block matching. To improve accuracy, ceiling-point detection and mismatched sub-block checking are incorporated. This matching method can process environment video effectively. In experiments, less than 5% of frames are extracted as keyframes to build the global map; these keyframes have large spatial separation and overlap each other. Through this method, the robot can localize itself by matching its real-time vision frames with our keyframes map. Even with many similar objects or backgrounds in the environment, or when the robot is kidnapped, robot localization is achieved with a position RMSE <0.5 m.

  7. Relationship between x-ray illumination field size and flat field intensity and its impacts on x-ray imaging

    International Nuclear Information System (INIS)

    Dong Xue; Niu Tianye; Jia Xun; Zhu Lei

    2012-01-01

    full-width-at-half-maximum (FWHM) of around 0.4 mm, while non-negligible off-focal-spot radiation is observed at a distance of over 2 mm from the center. The measured detector PSF has an FWHM of 0.510 mm, with a shape close to Gaussian. From these two distributions, the authors calculate the estimated I₀ values at different collimator settings. The I₀ variation mainly comes from the focal spot effect. The estimation matches the measurements well at different collimator widths in both horizontal and vertical directions, with an average error of less than 3%. Our method improves the accuracy of conventional scatter measurements, where the scatter is measured as the difference between fan-beam and cone-beam projections. On a uniform water cylinder phantom, a more accurate I₀ suppresses the unfaithful high-frequency signals at the object boundaries of the measured scatter, and the SPR estimation error is reduced from 0.158 to 0.014. The proposed I₀ estimation also reduces the reconstruction error on the Catphan©600 phantom from about 20 HU in the selected regions of interest to less than 4 HU. Conclusions: The I₀ variation is identified as one additional error source in x-ray imaging. By measuring the focal-spot distribution and the detector PSF, the authors propose an accurate method of estimating the I₀ value for different illumination field sizes. The method obtains more accurate scatter measurements and therefore facilitates scatter correction algorithm designs. As correction methods for other CBCT artifacts become more successful, our research is significant in further improving CBCT imaging accuracy.

  8. Iris Matching Based on Personalized Weight Map.

    Science.gov (United States)

    Dong, Wenbo; Sun, Zhenan; Tan, Tieniu

    2011-09-01

    Iris recognition typically involves three steps, namely, iris image preprocessing, feature extraction, and feature matching. The first two steps of iris recognition have been well studied, but the last step is less addressed. Each human iris has its unique visual pattern and local image features also vary from region to region, which leads to significant differences in robustness and distinctiveness among the feature codes derived from different iris regions. However, most state-of-the-art iris recognition methods use a uniform matching strategy, where features extracted from different regions of the same person or the same region for different individuals are considered to be equally important. This paper proposes a personalized iris matching strategy using a class-specific weight map learned from the training images of the same iris class. The weight map can be updated online during the iris recognition procedure when the successfully recognized iris images are regarded as the new training data. The weight map reflects the robustness of an encoding algorithm on different iris regions by assigning an appropriate weight to each feature code for iris matching. Such a weight map trained by sufficient iris templates is convergent and robust against various noise. Extensive and comprehensive experiments demonstrate that the proposed personalized iris matching strategy achieves much better iris recognition performance than uniform strategies, especially for poor quality iris images.
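    The matching strategy described here amounts to replacing the usual uniform Hamming distance between iris codes with one weighted by a per-bit reliability map learned from training images of the same iris class. A minimal sketch of such a weighted comparison is given below; the array names and the normalisation are assumptions, not the paper's exact formulation.

```python
import numpy as np


def weighted_hamming_distance(code_a, code_b, mask_a, mask_b, weight_map, eps=1e-12):
    """Hamming distance between binary iris codes, weighted by a per-bit reliability map.

    code_*     : boolean feature codes of the two irises
    mask_*     : boolean validity masks (True where the bit is not occluded/noisy)
    weight_map : per-bit weights learned from enrolment images of the iris class
    """
    valid = mask_a & mask_b                       # compare only bits valid in both codes
    disagreement = (code_a != code_b) & valid
    weighted_errors = weight_map[disagreement].sum()
    total_weight = (weight_map * valid).sum()
    return weighted_errors / (total_weight + eps)
```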

  9. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.

  10. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    Science.gov (United States)

    Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

    1988-01-01

    A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on board the SEASAT spacecraft. The researchers describe a parallel algorithm, implemented on the MPP, for locating corresponding objects based on their translationally and rotationally invariant features. The algorithm first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.

  11. Composite multi-lobe descriptor for cross spectral face recognition: matching active IR to visible light images

    Science.gov (United States)

    Cao, Zhicheng; Schmid, Natalia A.

    2015-05-01

    Matching facial images across the electromagnetic spectrum presents a challenging problem in the field of biometrics and identity management. An example of this problem includes cross spectral matching of active infrared (IR) face images or thermal IR face images against a dataset of visible light images. This paper describes a new operator named Composite Multi-Lobe Descriptor (CMLD) for facial feature extraction in cross spectral matching of near-infrared (NIR) or short-wave infrared (SWIR) against visible light images. The new operator is inspired by the design of ordinal measures. The operator combines Gaussian-based multi-lobe kernel functions, Local Binary Pattern (LBP), generalized LBP (GLBP) and Weber Local Descriptor (WLD) and modifies them into multi-lobe functions with smoothed neighborhoods. The new operator encodes both the magnitude and phase responses of Gabor filters. The combination of LBP and WLD utilizes both the orientation and intensity information of edges. The introduction of multi-lobe functions with smoothed neighborhoods further makes the proposed operator robust against noise and poor image quality. Output templates are transformed into histograms and then compared by means of a symmetric Kullback-Leibler metric resulting in a matching score. The performance of the multi-lobe descriptor is compared with that of other operators such as LBP, Histogram of Oriented Gradients (HOG), ordinal measures, and their combinations. The experimental results show that in many cases the proposed method, CMLD, outperforms the other operators and their combinations. In addition to different infrared spectra, various standoff distances from close-up (1.5 m) to intermediate (50 m) and long (106 m) are also investigated in this paper. The performance of CMLD is evaluated for each of the three distance cases.

  12. Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging

    Science.gov (United States)

    Lin, Bingxiong; Sun, Yu; Qian, Xiaoning

    2013-03-01

    Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that can improve feature matching performance by exploiting the inherent geometric property of the organ surfaces. Recently, intensity based template image tracking using a Thin Plate Spline (TPS) model has been extended for 3D surface tracking with stereo cameras. The intensity based tracking is also used here for 3D reconstruction of internal organ surfaces. To overcome the small displacement requirement of intensity based tracking, feature point correspondences are used for proper initialization of the nonlinear optimization in the intensity based method. Second, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicon phantoms and in vivo images. The experimental results show that our method is much more robust with respect to the view angle changes than other state-of-the-art methods.
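    The thin-plate spline model referred to here maps a set of feature-point correspondences to a smooth non-rigid warp of the image plane. A minimal sketch using SciPy's radial basis interpolator with a thin-plate-spline kernel is shown below; the smoothing value and coordinate conventions are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator


def tps_warp_points(src_pts, dst_pts, query_pts, smoothing=0.0):
    """Map query points through a thin-plate-spline warp fitted to src -> dst correspondences.

    src_pts, dst_pts : (N, 2) matched feature coordinates in the two images
    query_pts        : (M, 2) coordinates to transform with the fitted warp
    """
    tps = RBFInterpolator(src_pts, dst_pts, kernel="thin_plate_spline", smoothing=smoothing)
    return tps(np.asarray(query_pts, dtype=float))
```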

  13. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Huichen Yan

    2015-10-01

    Full Text Available Matched field processing (MFP is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP model from wave propagation theory by using randomly deployed sensors. In addition, the model’s recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method.

  14. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    Science.gov (United States)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    An image block matching algorithm based on the motion vectors of correlative pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique which obtains the relative motion among frames of dynamic image sequences by means of digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. The matching parameters based on these vectors simultaneously contain the information of the vectors in the transverse and vertical directions of the image blocks, so better matching information can be obtained after performing the correlation operation in the oblique direction. An iterative weighted least-squares method is used to eliminate block matching errors; the weights are related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image can be obtained by weighted least squares from the estimates of blocks chosen evenly from the image (see the sketch below). The shaking image can then be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by using a simulated-annealing search in the block matching. An image processing system based on a DSP was used to test this algorithm. The core processor in the DSP system is a TMS320C6416 from TI, and a CCD camera with a resolution of 720×576 pixels was chosen as the input video signal. Experimental results show that the algorithm runs on the real-time processing system with accurate matching precision.
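    The global motion model used above, a rotation about an image-plane centre plus a translation, can be fitted to block motion vectors by (weighted) least squares. A minimal small-angle sketch follows; the linearised model and the optional weights are illustrative and do not reproduce the paper's iterative reweighting scheme.

```python
import numpy as np


def fit_rotation_translation(block_centres, motion_vectors, weights=None):
    """Least-squares fit of a small global rotation plus translation to block motion vectors.

    For a block centre (x, y) and a small rotation angle a about the origin with
    translation (tx, ty), the predicted displacement is u = -a*y + tx, v = a*x + ty.
    Returns (a, tx, ty).
    """
    x, y = block_centres[:, 0], block_centres[:, 1]
    u, v = motion_vectors[:, 0], motion_vectors[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    A = np.vstack([np.column_stack([-y, ones, zeros]),   # equations for the u components
                   np.column_stack([x, zeros, ones])])   # equations for the v components
    b = np.concatenate([u, v])
    if weights is not None:                               # optional per-block reliability weights
        w = np.sqrt(np.concatenate([weights, weights]))
        A, b = A * w[:, None], b * w
    a, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
    return a, tx, ty
```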

  15. Spatial resolution of the HRRT PET scanner using 3D-OSEM PSF reconstruction

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter; Sibomana, Merence; Keller, Sune Høgild

    2009-01-01

    The spatial resolution of the Siemens High Resolution Research Tomograph (HRRT) dedicated brain PET scanner installed at Copenhagen University Hospital (Rigshospitalet) was measured using a point-source phantom with high statistics. Further, it was demonstrated how the newly developed 3D-OSEM PSF...

  16. Hyperspectral imaging of cuttlefish camouflage indicates good color match in the eyes of fish predators.

    Science.gov (United States)

    Chiao, Chuan-Chin; Wickiser, J Kenneth; Allen, Justine J; Genter, Brock; Hanlon, Roger T

    2011-05-31

    Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision.

  17. Landmark matching based retinal image alignment by enforcing sparsity in correspondence matrix.

    Science.gov (United States)

    Zheng, Yuanjie; Daniel, Ebenezer; Hunter, Allan A; Xiao, Rui; Gao, Jianbin; Li, Hongsheng; Maguire, Maureen G; Brainard, David H; Gee, James C

    2014-08-01

    Retinal image alignment is fundamental to many applications in the diagnosis of eye diseases. In this paper, we address the problem of landmark matching based retinal image alignment. We propose a novel landmark matching formulation that enforces sparsity in the correspondence matrix and offer solutions based on linear programming. The proposed formulation not only enables a joint estimation of the landmark correspondences and a predefined transformation model, but also combines the benefits of the softassign strategy (Chui and Rangarajan, 2003) and the combinatorial optimization of linear programming. We also introduce a set of reinforced self-similarity descriptors which can better characterize local photometric and geometric properties of the retinal image. Theoretical analysis and experimental results with both fundus color images and angiogram images show the superior performance of our algorithms compared to several state-of-the-art techniques. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Image/patient registration from (partial) projection data by the Fourier phase matching method

    International Nuclear Information System (INIS)

    Weiguo Lu; You, J.

    1999-01-01

    A technique for 2D or 3D image/patient registration, PFPM (projection based Fourier phase matching method), is proposed. This technique provides image/patient registration directly from sequential tomographic projection data. The method can also deal with image files by generating 2D Radon transforms slice by slice. The registration in projection space is done by calculating a Fourier invariant (FI) descriptor for each one-dimensional projection datum, and then registering the FI descriptor by the Fourier phase matching (FPM) method. The algorithm has been tested on both synthetic and experimental data. When dealing with translated, rotated and uniformly scaled 2D image registration, the performance of the PFPM method is comparable to that of the IFPM (image based Fourier phase matching) method in robustness, efficiency, insensitivity to the offset between images, and registration time. The advantages of the former are that subpixel resolution is feasible, and it is more insensitive to image noise due to the averaging effect of the projection acquisition. Furthermore, the PFPM method offers the ability to generalize to 3D image/patient registration and to register partial projection data. By applying patient registration directly from tomographic projection data, image reconstruction is not needed in the therapy set-up verification, thus reducing computational time and artefacts. In addition, real time registration is feasible. Registration from partial projection data meets the geometry and dose requirements in many application cases and makes dynamic set-up verification possible in tomotherapy. (author)
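    The Fourier phase matching idea can be illustrated with the closely related phase-correlation estimator for a pure translation: the normalised cross-power spectrum of two signals has a phase that encodes their relative shift. The sketch below applies this to 2D images as a stand-in only; the actual PFPM method works on 1D Radon projections, uses a Fourier-invariant descriptor, and also handles rotation and scale.

```python
import numpy as np


def phase_correlation_shift(img_a, img_b, eps=1e-12):
    """Estimate the integer-pixel translation of img_b relative to img_a by phase correlation."""
    F_a, F_b = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross_power = F_a * np.conj(F_b)
    cross_power /= np.abs(cross_power) + eps                 # keep only the phase difference
    correlation = np.fft.ifft2(cross_power).real
    peak = np.array(np.unravel_index(np.argmax(correlation), correlation.shape), dtype=float)
    dims = np.array(correlation.shape, dtype=float)
    peak[peak > dims / 2] -= dims[peak > dims / 2]            # wrap shifts into the signed range
    return peak                                               # (row_shift, col_shift)
```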

  19. Improvement of temporal and dynamic subtraction images on abdominal CT using 3D global image matching and nonlinear image warping techniques

    International Nuclear Information System (INIS)

    Okumura, E; Sanada, S; Suzuki, M; Takemura, A; Matsui, O

    2007-01-01

    Accurate registration of the corresponding non-enhanced and arterial-phase CT images is necessary to create temporal and dynamic subtraction images for the enhancement of subtle abnormalities. However, respiratory movement causes misregistration at the periphery of the liver. To reduce these misregistration errors, we developed a temporal and dynamic subtraction technique to enhance small HCC by 3D global matching and nonlinear image warping techniques. The study population consisted of 21 patients with HCC. Using the 3D global matching and nonlinear image warping technique, we registered current and previous arterial-phase CT images or current non-enhanced and arterial-phase CT images obtained in the same position. The temporal subtraction image was obtained by subtracting the previous arterial-phase CT image from the warped current arterial-phase CT image. The dynamic subtraction image was obtained by the subtraction of the current non-enhanced CT image from the warped current arterial-phase CT image. The percentage of fair or superior temporal subtraction images increased from 52.4% to 95.2% using the new technique, while on the dynamic subtraction images, the percentage increased from 66.6% to 95.2%. The new subtraction technique may facilitate the diagnosis of subtle HCC based on the superior ability of these subtraction images to show nodular and/or ring enhancement

  20. Processing and evaluation of image matching tools in radiotherapy

    International Nuclear Information System (INIS)

    Bondiau, P.Y.

    2004-11-01

    Cancer is a major public health problem. Treatment can be delivered in a general or loco-regional way; in the latter case medical images are important as they specify the localization of the tumour. The objective of radiotherapy is to deliver a curative dose of radiation to the target volume while sparing the organs at risk (OAR). Accurate localization of the target volumes as well as of the OAR makes it possible to define the ballistics of the irradiation beams. After describing the principles of radiotherapy and cancer treatment, we specify the clinical stakes of ocular, cerebral and prostatic tumours. We present a state of the art of image matching, reviewing the various techniques with a didactic aim with respect to the medical community. Matching results are presented within the framework of cerebral and prostatic radiotherapy planning in order to specify the types of matching applicable in oncology, and more particularly in radiotherapy. We then present the prospects for this type of application in various anatomical areas. Applications of automatic segmentation, and the evaluation of the results in the framework of brain tumours, are described after a review of the various segmentation methods according to anatomical localization. We also describe an original application: the digital simulation of virtual tumour growth and its comparison with the real growth of a cerebral tumour in a patient. Lastly, we outline possible future developments of image processing tools in radiotherapy, as well as research directions to be explored in oncology. (author)

  1. Model-based restoration using light vein for range-gated imaging systems.

    Science.gov (United States)

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen

    2016-09-10

    The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, platform vibrations, and so on. The characteristics of low illumination, few details, and high noise cause state-of-the-art restoration methods to fail. In this paper, we present a restoration method designed specifically for range-gated imaging systems. The degradation process is divided into two parts: a static part and a dynamic part. For the static part, we establish the physical model of the imaging system according to laser transmission theory and estimate the static point spread function (PSF). For the dynamic part, a so-called light-vein feature extraction method is presented to estimate the blur parameters of the atmospheric disturbance and platform movement, which contribute to the dynamic PSF. Finally, combining the static and dynamic PSFs, an iterative updating framework is used to restore the image. Compared with state-of-the-art methods, the proposed method can effectively suppress ringing artifacts and achieves better performance for a range-gated imaging system.

  2. Galaxy–Galaxy Weak-lensing Measurements from SDSS. I. Image Processing and Lensing Signals

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Wentao [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Nandan Road 80, Shanghai 200030 (China); Yang, Xiaohu; Zhang, Jun; Tweed, Dylan [Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213 (United States); Fu, Liping; Shu, Chenggang [Shanghai Key Lab for Astrophysics, Shanghai Normal University, 100 Guilin Road, 200234, Shanghai (China); Mo, H. J. [Department of Astronomy, University of Massachusetts, Amherst, MA 01003-9305 (United States); Bosch, Frank C. van den [Department of Astronomy, Yale University, P.O. Box 208101, New Haven, CT 06520-8101 (United States); Li, Ran [Key Laboratory for Computational Astrophysics, Partner Group of the Max Planck Institute for Astrophysics, National Astronomical Observatories, Chinese Academy of Sciences, Beijing, 100012 (China); Li, Nan [Department of Astronomy and Astrophysics, The University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Liu, Xiangkun; Pan, Chuzhong [Department of Astronomy, Peking University, Beijing 100871 (China); Wang, Yiran [Department of Astronomy, University of Illinois at Urbana-Champaign, 1002 W. Green Street, Urbana, IL 61801 (United States); Radovich, Mario, E-mail: walt@shao.ac.cn, E-mail: xyang@sjtu.edu.cn [INAF-Osservatorio Astronomico di Napoli, via Moiariello 16, I-80131 Napoli (Italy)

    2017-02-10

    We present our image processing pipeline that corrects the systematics introduced by the point-spread function (PSF). Using this pipeline, we processed Sloan Digital Sky Survey (SDSS) DR7 imaging data in the r band and generated a galaxy catalog containing the shape information. Based on our shape measurements of the galaxy images from SDSS DR7, we extract the galaxy–galaxy (GG) lensing signals around foreground spectroscopic galaxies binned in different luminosities and stellar masses. We estimated the systematics, e.g., selection bias, PSF reconstruction bias, PSF dilution bias, shear responsivity bias, and noise rectification bias, which in total lie between −9.1% and 20.8% at the 2σ level. The overall GG lensing signals we measured are in good agreement with Mandelbaum et al. The reduced χ² values between the two measurements in different luminosity bins range from 0.43 to 0.83. Larger reduced χ² values, from 0.60 to 1.87, are seen for the different stellar mass bins, which is mainly caused by the different stellar mass estimators. The results in this paper, with a higher signal-to-noise ratio due to the larger survey area compared with SDSS DR4, confirm that more luminous/massive galaxies bear stronger GG lensing signals. We divide the foreground galaxies into red/blue and star-forming/quenched subsamples and measure their GG lensing signals. We find that, at a given stellar mass/luminosity, the red/quenched galaxies have stronger GG lensing signals than their counterparts, especially at large radii. These GG lensing signals can be used to probe the galaxy–halo mass relations and their environmental dependences in the halo occupation or conditional luminosity function framework.

  3. Characterization of lens based photoacoustic imaging system

    Directory of Open Access Journals (Sweden)

    Kalloor Joseph Francis

    2017-12-01

    Full Text Available Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics of a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over the 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.

  4. Characterization of lens based photoacoustic imaging system.

    Science.gov (United States)

    Francis, Kalloor Joseph; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund

    2017-12-01

    Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics of a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over the 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.

  5. Text Character Extraction Implementation from Captured Handwritten Image to Text Conversion using Template Matching Technique

    Directory of Open Access Journals (Sweden)

    Barate Seema

    2016-01-01

    Full Text Available Images contain various types of useful information that should be extracted whenever required. Various algorithms and methods have been proposed to extract text from a given image, so that the user is able to access the text in any image. Variations in text may occur because of differences in size, style, orientation and alignment of the text, while low image contrast and composite backgrounds complicate text extraction. If an application can extract and recognize such text accurately in real time, it can serve many important applications such as document analysis, vehicle license plate extraction and text-based image indexing, many of which have become realities in recent years. To overcome the above problems we develop an application that converts an image into text using algorithms such as bounding box, the HSV model, blob analysis, template matching and template generation.

  6. Semi-blind sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O

    2012-09-01

    We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.

  7. Fusion of different modalities of imaging the wrist

    International Nuclear Information System (INIS)

    Verdenet, J.; Garbuio, P.; Runge, M.; Cardot, J.C.

    1997-01-01

    Standard radiographic images cannot always reveal a fracture of one of the wrist bones. An earlier study showed that 40% of patients presenting with a suspected fracture and a normal radiograph had a fracture confirmed and quantified by MRI and scintigraphy. Scintigraphy alone, however, does not allow the fracture to be localized precisely, and consequently we developed a code to fuse the radiographic and scintigraphic images fully automatically, without using external markers. The code has been installed on a PC and uses the Matlab environment. Starting from histogram processing, the contours are delineated on the interpolated radiographic and scintigraphic images. For matching there are three degrees of freedom: one rotation and two translations (along the x and y axes). The internal axis of the forearm was chosen to apply the rotation and translation. The forearm thickness, identical in each modality, allows the images to be matched properly. We obtain an anatomical image onto which the contours and the hyperfixation zones of the scintigraphy are superimposed. On a set of 100 examinations we observed 38 fractures, and the distinction between a fracture of the scaphoid and of another wrist bone was confirmed in 93% of cases

  8. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    International Nuclear Information System (INIS)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-01-01

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  9. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    Energy Technology Data Exchange (ETDEWEB)

    Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036, Spain and Servei de Física Mèdica i Protecció Radiològica, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08907 (Spain); Roé, Nuria [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036 (Spain); Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Complexo Hospitalario Universitario de Santiago de Compostela 15706, Spain and Grupo de Imagen Molecular, Instituto de Investigacións Sanitarias de Santiago de Compostela (IDIS), Galicia 15782 (Spain); Falcon, Carles; Ros, Domènec [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain and CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Pavía, Javier [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 080836 (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); and Servei de Medicina Nuclear, Hospital Clínic, Barcelona 08036 (Spain)

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  10. Influence of the beam divergence on the quality neutron radiographic images improved by Richardson-Lucy deconvolution

    International Nuclear Information System (INIS)

    Almeida, Gevaldo L. de; Silvani, Maria Ines; Lopes, Ricardo T.

    2010-01-01

    Full text: Images produced by radiation transmission, as many others, are affected by disturbances caused by random and systematic uncertainties. Those caused by noise or statistical dispersion can be diminished by a filtering procedure which eliminates the high frequencies associated with the noise, but unfortunately also those belonging to the signal itself. Systematic uncertainties, in principle, could be removed more effectively if one knows the spoiling convolution function causing the degradation of the image. This function depends upon the detector resolution and the non-punctual character of the source employed in the acquisition, which blur the image, making a single point appear as a spot with a vanishing edge. For an extended source with a reasonably parallel beam, the penumbra degrading the image is caused by the unavoidable beam divergence. In both cases, the essential information needed to improve the degraded image is the law of transformation of a single point into a blurred spot, known as the point spread function (PSF). Even for an isotropic system, where this function would have a symmetric bell-like shape, it is very difficult to obtain experimentally and to apply to the data processing. For this reason it is usually replaced by an approximate analytical function such as a Gaussian or Lorentzian. In this work, Richardson-Lucy deconvolution has been applied to improve thermal neutron radiographic images acquired with imaging plates, using a Gaussian PSF as deconvolutor. Due to the divergence of the neutron beam, reaching 1 deg 16', the penumbra affecting the final image depends upon the object-detector gap. Moreover, even if the object were placed in direct contact with the detector, the non-zero dimension of the object along the beam path would produce penumbrae of different magnitudes, i.e., the spatial resolution of the system would depend upon the object-detector arrangement. This means that the width of the PSF increases
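    Richardson-Lucy deconvolution with a Gaussian PSF, as used here, follows the standard multiplicative update: blur the current estimate, form the ratio with the measured image, and correlate the ratio with the (flipped) PSF. A self-contained sketch is shown below; the PSF width, iteration count and stopping rule are assumptions the user would have to match to the actual beam divergence and object-detector gap.

```python
import numpy as np
from scipy.signal import fftconvolve


def gaussian_psf(size, sigma):
    """Isotropic Gaussian kernel used as an approximate PSF (the deconvolutor)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()


def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Plain Richardson-Lucy iteration for a non-negative image degraded by a known PSF."""
    estimate = np.full(image.shape, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)                          # measured data / current prediction
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")  # multiplicative correction
    return estimate
```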

  11. Optical image encryption method based on incoherent imaging and polarized light encoding

    Science.gov (United States)

    Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.

    2018-05-01

    We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with an incoherent imaging. Incoherent imaging is the core component of this proposal, in which the incoherent point-spread function (PSF) of the imaging system serves as the main key to encode the input intensity distribution thanks to a convolution operation. An array of retarders and polarizers is placed on the input plane of the imaging structure to encrypt the polarized state of light based on Mueller polarization calculus. The proposal makes full use of randomness of polarization parameters and incoherent PSF so that a multidimensional key space is generated to deal with illegal attacks. Mueller polarization calculus and incoherent illumination of imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording related to a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transition because information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
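    The intensity-only encoding step described here is, at its core, a convolution of the input intensity image with a secret incoherent PSF that acts as the key. The sketch below illustrates that step and a simple Wiener-type decryption with the known key under a circular-convolution model; it ignores the polarization-encoding layer, and the regularisation constant is an assumption.

```python
import numpy as np


def encode_intensity(image, psf_key):
    """Encrypt an intensity image by (circular) convolution with the secret incoherent PSF key."""
    H = np.fft.fft2(psf_key, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))


def decode_intensity(cipher, psf_key, reg=1e-3):
    """Wiener-style decryption using the known PSF key (illustrative, not the paper's exact decoder)."""
    H = np.fft.fft2(psf_key, s=cipher.shape)
    C = np.fft.fft2(cipher)
    F = C * np.conj(H) / (np.abs(H) ** 2 + reg)    # regularised inverse filter in Fourier space
    return np.real(np.fft.ifft2(F))
```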

  12. Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use

    Science.gov (United States)

    Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil

    2013-01-01

    The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and to evaluate classification measures exploiting the characteristic signatures of such histograms. Two histogram-matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California was used for assessing the utility of curve matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram-matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
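    The object-based classifiers compared here differ only in how an image object's digital-number histogram is scored against each class: by a curve-matching measure versus by the distance of the object mean to the class mean. A minimal sketch of the histogram-based rule using histogram intersection follows; the bin count, value range and the intersection measure itself are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np


def object_histogram(pixel_values, bins=64, value_range=(0, 255)):
    """Normalised digital-number histogram of one image object's pixels."""
    hist, _ = np.histogram(pixel_values, bins=bins, range=value_range, density=True)
    return hist


def classify_by_histogram(pixel_values, class_histograms):
    """Assign the class whose reference histogram best matches the object's histogram.

    class_histograms maps class labels to reference histograms built from training objects;
    histogram intersection (sum of bin-wise minima) is used as the matching score.
    """
    hist = object_histogram(pixel_values)
    scores = {label: np.minimum(hist, ref).sum() for label, ref in class_histograms.items()}
    return max(scores, key=scores.get)
```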

  13. Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing

    Science.gov (United States)

    Li-Chee-Ming, J.; Armenakis, C.

    2017-05-01

    This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.

  14. SPECT imaging with resolution recovery

    International Nuclear Information System (INIS)

    Bronnikov, A. V.

    2011-01-01

    Single-photon emission computed tomography (SPECT) is a method of choice for imaging spatial distributions of radioisotopes. Many applications of this method are found in the nuclear industry, medicine, and biomedical research. We study mathematical modeling of a micro-SPECT system by using a point-spread function (PSF) and implement an OSEM-based iterative algorithm for image reconstruction with resolution recovery. Unlike other known implementations of the OSEM algorithm, we apply an efficient computation scheme based on a useful approximation of the PSF, which ensures relatively fast computations. The proposed approach can be applied to data acquired with any type of collimator, including parallel-beam, fan-beam, cone-beam and pinhole collimators. Experimental results obtained with a micro-SPECT system demonstrate the high efficiency of the resolution recovery. (authors)

  15. Precision 3d Surface Reconstruction from Lro Nac Images Using Semi-Global Matching with Coupled Epipolar Rectification

    Science.gov (United States)

    Hu, H.; Wu, B.

    2017-07-01

    The Narrow-Angle Camera (NAC) on board the Lunar Reconnaissance Orbiter (LRO) comprises a pair of closely attached high-resolution push-broom sensors, in order to improve the swath coverage. However, the two image sensors do not share the same lenses and cannot be modelled geometrically using a single physical model. Thus, previous works on dense matching of stereo pairs of NAC images would generally create two to four stereo models, each with an irregular and overlapping region of varying size. Semi-Global Matching (SGM) is a well-known dense matching method and has been widely used for image-based 3D surface reconstruction. SGM is a global matching algorithm relying on global inference in a larger context rather than on individual pixels to establish stable correspondences. The stereo configuration of LRO NAC images causes severe problems for image matching methods such as SGM, which emphasize a global matching strategy. Aiming at using SGM for matching LRO NAC stereo pairs for precision 3D surface reconstruction, this paper presents a coupled epipolar rectification method for LRO NAC stereo images, which merges the image pair in the disparity space so that only one stereo model needs to be estimated. For a stereo pair (four NAC images), the method starts with boresight calibration by finding correspondences in the small overlapping stripe between each pair of NAC images and bundle adjustment of the stereo pair, in order to remove the vertical disparities. Then, the dominant direction of the images is estimated by iteratively projecting the center of the coverage area onto the reference image and back-projecting it onto the bounding-box plane determined by the image orientation parameters. The dominant direction determines an affine model, by which the pair of NAC images is warped onto the object space at a given ground resolution while a mask is produced indicating the owner of each pixel. SGM is then used to generate a disparity

  16. PRECISION 3D SURFACE RECONSTRUCTION FROM LRO NAC IMAGES USING SEMI-GLOBAL MATCHING WITH COUPLED EPIPOLAR RECTIFICATION

    Directory of Open Access Journals (Sweden)

    H. Hu

    2017-07-01

    Full Text Available The Narrow-Angle Camera (NAC) on board the Lunar Reconnaissance Orbiter (LRO) comprises a pair of closely attached high-resolution push-broom sensors, in order to improve the swath coverage. However, the two image sensors do not share the same lenses and cannot be modelled geometrically using a single physical model. Thus, previous works on dense matching of stereo pairs of NAC images would generally create two to four stereo models, each with an irregular and overlapping region of varying size. Semi-Global Matching (SGM) is a well-known dense matching method and has been widely used for image-based 3D surface reconstruction. SGM is a global matching algorithm relying on global inference in a larger context rather than on individual pixels to establish stable correspondences. The stereo configuration of LRO NAC images causes severe problems for image matching methods such as SGM, which emphasize a global matching strategy. Aiming at using SGM for matching LRO NAC stereo pairs for precision 3D surface reconstruction, this paper presents a coupled epipolar rectification method for LRO NAC stereo images, which merges the image pair in the disparity space so that only one stereo model needs to be estimated. For a stereo pair (four NAC images), the method starts with boresight calibration by finding correspondences in the small overlapping stripe between each pair of NAC images and bundle adjustment of the stereo pair, in order to remove the vertical disparities. Then, the dominant direction of the images is estimated by iteratively projecting the center of the coverage area onto the reference image and back-projecting it onto the bounding-box plane determined by the image orientation parameters. The dominant direction determines an affine model, by which the pair of NAC images is warped onto the object space at a given ground resolution while a mask is produced indicating the owner of each pixel. SGM is then used to

  17. IMAGE ANALYSIS FOR COSMOLOGY: RESULTS FROM THE GREAT10 STAR CHALLENGE

    International Nuclear Information System (INIS)

    Kitching, T. D.; Heymans, C.; Rowe, B.; Witherick, D.; Gill, M.; Massey, R.; Courbin, F.; Gentile, M.; Meylan, G.; Georgatzis, K.; Gruen, D.; Kilbinger, M.; Li, G. L.; Mariglis, A. P.; Storkey, A.; Xin, B.

    2013-01-01

    We present the results from the first public blind point-spread function (PSF) reconstruction challenge, the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) Star Challenge. Reconstruction of a spatially varying PSF, sparsely sampled by stars, at non-star positions is a critical part in the image analysis for weak lensing where inaccuracies in the modeled ellipticity e and size R² can impact the ability to measure the shapes of galaxies. This is of importance because weak lensing is a particularly sensitive probe of dark energy and can be used to map the mass distribution of large scale structure. Participants in the challenge were presented with 27,500 stars over 1300 images subdivided into 26 sets, where in each set a category change was made in the type or spatial variation of the PSF. Thirty submissions were made by nine teams. The best methods reconstructed the PSF with an accuracy of σ(e) ≈ 2.5 × 10⁻⁴ and σ(R²)/R² ≈ 7.4 × 10⁻⁴. For a fixed pixel scale, narrower PSFs were found to be more difficult to model than larger PSFs, and the PSF reconstruction was severely degraded with the inclusion of an atmospheric turbulence model (although this result is likely to be a strong function of the amplitude of the turbulence power spectrum).
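
    As an illustration of the kind of task the challenge poses, the sketch below fits a low-order polynomial surface to a PSF quantity (here an ellipticity component) sampled at star positions and evaluates it at arbitrary field positions. The star positions, noise level and polynomial order are invented for the example and do not correspond to any submitted method.

```python
import numpy as np

def fit_psf_field(star_xy, star_val, order=2):
    """Fit a 2D polynomial surface to a PSF quantity (e.g. an ellipticity
    component or R^2) sampled at star positions; return a callable model."""
    def design(xy):
        x, y = xy[:, 0], xy[:, 1]
        return np.column_stack([x**i * y**j
                                for i in range(order + 1)
                                for j in range(order + 1 - i)])
    coeff, *_ = np.linalg.lstsq(design(star_xy), star_val, rcond=None)
    return lambda xy: design(xy) @ coeff

# Hypothetical field: 500 stars on a 4096x4096 image with noisy e1 measurements
rng = np.random.default_rng(0)
stars = rng.uniform(0, 4096, size=(500, 2))
e1_true = 1e-3 * (stars[:, 0] / 4096) + 5e-4 * (stars[:, 1] / 4096) ** 2
e1_obs = e1_true + rng.normal(0, 1e-4, 500)        # measurement noise

e1_model = fit_psf_field(stars, e1_obs, order=2)
galaxies = rng.uniform(0, 4096, size=(1000, 2))    # non-star positions
print(e1_model(galaxies)[:5])                      # interpolated PSF ellipticity
```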

  18. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins.

    Science.gov (United States)

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N Samba; Gopalaswamy, Arjun M; Karanth, K Ullas

    2009-06-23

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin.

  19. A spot-matching method using cumulative frequency matrix in 2D gel images

    Science.gov (United States)

    Han, Chan-Myeong; Park, Joon-Ho; Chang, Chu-Seok; Ryoo, Myung-Chun

    2014-01-01

    A new method for spot matching in two-dimensional gel electrophoresis images using a cumulative frequency matrix is proposed. The method improves on the weak points of the previous method called ‘spot matching by topological patterns of neighbour spots’. It accumulates the frequencies of neighbour spot pairs produced throughout the entire matching process and determines spot pairs one by one in order of decreasing frequency. Spot matching by frequencies of neighbour spot pairs shows a fairly better performance. In addition, it can give researchers a hint as to whether the matching results are trustworthy, which can save a lot of effort in verifying the results. PMID:26019609
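
    The abstract does not spell out the algorithm, so the sketch below only illustrates the general idea of accumulating a cumulative frequency over candidate spot pairs and accepting pairs greedily in order of frequency; the vote-generation step and the example votes are hypothetical.

```python
from collections import Counter

def match_by_cumulative_frequency(candidate_pair_votes):
    """candidate_pair_votes: iterable of (spot_a, spot_b) pairs proposed by
    repeated neighbour-topology comparisons over the whole matching process.
    Pairs are accepted greedily in order of accumulated frequency, enforcing
    a one-to-one assignment."""
    freq = Counter(candidate_pair_votes)          # sparse cumulative frequency matrix
    matched, used_a, used_b = [], set(), set()
    for (a, b), n in freq.most_common():
        if a not in used_a and b not in used_b:
            matched.append((a, b, n))             # n can serve as a confidence hint
            used_a.add(a)
            used_b.add(b)
    return matched

# Hypothetical votes collected from several neighbourhood comparisons
votes = [(1, 'A'), (1, 'A'), (1, 'B'), (2, 'B'), (2, 'B'), (2, 'B'), (3, 'C')]
print(match_by_cumulative_frequency(votes))
# [(2, 'B', 3), (1, 'A', 2), (3, 'C', 1)]
```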

  20. Quantification of dopaminergic neurotransmission SPECT studies with 123I-labelled radioligands. A comparison between different imaging systems and data acquisition protocols using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Crespo, Cristina; Aguiar, Pablo; Gallego, Judith; Cot, Albert; Falcon, Carles; Ros, Domenec; Bullich, Santiago; Pareto, Deborah; Sempau, Josep; Lomena, Francisco; Calvino, Francisco; Pavia, Javier

    2008-01-01

    123 I-labelled radioligands are commonly used for single-photon emission computed tomography (SPECT) imaging of the dopaminergic system to study the dopamine transporter binding. The aim of this work was to compare the quantitative capabilities of two different SPECT systems through Monte Carlo (MC) simulation. The SimSET MC code was employed to generate simulated projections of a numerical phantom for two gamma cameras equipped with a parallel and a fan-beam collimator, respectively. A fully 3D iterative reconstruction algorithm was used to compensate for attenuation, the spatially variant point spread function (PSF) and scatter. A post-reconstruction partial volume effect (PVE) compensation was also developed. For both systems, the correction for all degradations and PVE compensation resulted in recovery factors of the theoretical specific uptake ratio (SUR) close to 100%. For a SUR value of 4, the recovered SUR for the parallel imaging system was 33% for a reconstruction without corrections (OSEM), 45% for a reconstruction with attenuation correction (OSEM-A), 56% for a 3D reconstruction with attenuation and PSF corrections (OSEM-AP), 68% for OSEM-AP with scatter correction (OSEM-APS) and 97% for OSEM-APS plus PVE compensation (OSEM-APSV). For the fan-beam imaging system, the recovered SUR was 41% without corrections, 55% for OSEM-A, 65% for OSEM-AP, 75% for OSEM-APS and 102% for OSEM-APSV. Our findings indicate that the correction for degradations increases the quantification accuracy, with PVE compensation playing a major role in the SUR quantification. The proposed methodology allows us to reach similar SUR values for different SPECT systems, thereby allowing a reliable standardisation in multicentric studies. (orig.)
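
    For orientation, recovery here is quoted as the recovered SUR relative to the true SUR; the short calculation below simply converts the percentages reported for the parallel-collimator system back into absolute SUR values for a true SUR of 4. It is an illustration of the arithmetic, not additional data.

```python
def recovered_sur(true_sur, recovery_fraction):
    """Recovery factors in the abstract are quoted as recovered SUR / true SUR."""
    return true_sur * recovery_fraction

true_sur = 4.0
for label, rec in [("OSEM", 0.33), ("OSEM-A", 0.45), ("OSEM-AP", 0.56),
                   ("OSEM-APS", 0.68), ("OSEM-APSV", 0.97)]:
    print(f"{label:10s} recovered SUR ≈ {recovered_sur(true_sur, rec):.2f}")
```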

  1. Image enhancement through deconvolution

    International Nuclear Information System (INIS)

    Zhang, Xiaodong; Jacobsen, C.; Williams, S.

    1993-01-01

    Several groups have been developing X-ray microscopes for studies of biological and materials specimens at suboptical resolution. The XIA Scanning Transmission X-ray Microscope at Brookhaven National Laboratory has achieved 55 nm Rayleigh resolution, and is limited by the 45 nm finest zone width of the zone plate used to focus the X-rays. In principle, features as small as half the outermost zone width, or 23 nm, can be observed in the microscope, though with reduced contrast in the image. One approach to recover the object from the image is to deconvolve the image with the Point Spread Function (PSF) of the optic system. Towards this end, the magnitude of the Fourier transform of the PSF, the Modulation Transfer Function, has been experimentally determined and agrees reasonably well with the calculations using the known parameters of the microscope. To minimize artifacts in the deconvolved images, large signal-to-noise ratios are required in the original image, and high frequency filters can be used to reduce the noise at the expense of resolution. In this way the authors are able to recover the original contrast of high resolution features in the images
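
    A generic sketch of the deconvolution approach described: divide the image spectrum by the MTF and apply a high-frequency filter to limit noise amplification. The Gaussian MTF and Butterworth-style filter below are placeholders for the measured MTF and the filter actually used.

```python
import numpy as np

def deconvolve_with_mtf(image, mtf, hf_filter, eps=1e-3):
    """Divide the image spectrum by the (measured) MTF, apodised by a
    high-frequency filter to keep noise amplification under control."""
    F = np.fft.fft2(image)
    restored = F * hf_filter / np.maximum(mtf, eps)
    return np.real(np.fft.ifft2(restored))

# Placeholder MTF and low-pass filter defined on a 256x256 frequency grid
n = 256
fx = np.fft.fftfreq(n)
fy = np.fft.fftfreq(n)
FX, FY = np.meshgrid(fx, fy, indexing="ij")
rho = np.hypot(FX, FY)
mtf = np.exp(-(rho / 0.25) ** 2)              # stand-in for the measured MTF
hf_filter = 1.0 / (1.0 + (rho / 0.35) ** 8)   # Butterworth-like noise cut-off

image = np.random.default_rng(1).random((n, n))
restored = deconvolve_with_mtf(image, mtf, hf_filter)
print(restored.shape)
```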

  2. Desempenho do PSF no Sul e no Nordeste do Brasil: avaliação institucional e epidemiológica da Atenção Básica à Saúde Performance of the PSF in the Brazilian South and Northeast: institutional and epidemiological Assessment of Primary Health Care

    Directory of Open Access Journals (Sweden)

    Luiz Augusto Facchini

    2006-09-01

    Full Text Available The research, developed within the Proesf baseline studies, analysed the performance of the Family Health Program (PSF) in 41 municipalities of the states of Alagoas, Paraíba, Pernambuco, Piauí, Rio Grande do Norte, Rio Grande do Sul and Santa Catarina. It used a cross-sectional design with an external comparison group (traditional primary care). Forty-one presidents of Municipal Health Councils, 29 municipal health secretaries and 32 primary care coordinators were interviewed. The structure and work process of 234 Basic Health Units (UBS) were characterized, covering 4,749 health workers, 4,079 children, 3,945 women, 4,060 adults and 4,006 elderly people. Quality control reached 6% of the sampled households. PSF coverage grew more in the Northeast than in the South from 1999 to 2004. Fewer than half of the workers had entered through public competitive examinations, and precarious employment was more frequent in the PSF than in traditional UBS. The findings suggest that the performance of Primary Health Care (ABS) is still far from the prescriptions of the SUS. Less than half of the potential demand used the UBS of its coverage area. The supply of health actions, their utilization and contact through programmatic actions were more adequate in the PSF.

  3. Karlsruhe Research Center, Nuclear Safety Research Project (PSF). Annual report 1994

    International Nuclear Information System (INIS)

    Hueper, R.

    1995-08-01

    The reactor safety R and D work of the Karlsruhe Research Centre (FZKA) has been part of the Nuclear Safety Research Project (PSF) since 1990. The present annual report 1994 summarizes the R and D results. The research tasks are coordinated in agreement with internal and external working groups. The contributions to this report correspond to the status of early 1995. An abstract in English precedes each of them whenever the respective article is written in German. (orig.)

  4. Harmonizing FDG PET quantification while maintaining optimal lesion detection: prospective multicentre validation in 517 oncology patients

    International Nuclear Information System (INIS)

    Quak, Elske; Le Roux, Pierre-Yves; Robin, Philippe; Bourhis, David; Salaun, Pierre-Yves; Hofman, Michael S.; Callahan, Jason; Binns, David; Hicks, Rodney J.; Desmonts, Cedric; Aide, Nicolas

    2015-01-01

    Point-spread function (PSF) or PSF + time-of-flight (TOF) reconstruction may improve lesion detection in oncologic PET, but can alter quantitation resulting in variable standardized uptake values (SUVs) between different PET systems. This study aims to validate a proprietary software tool (EQ.PET) to harmonize SUVs across different PET systems independent of the reconstruction algorithm used. NEMA NU2 phantom data were used to calculate the appropriate filter for each PSF or PSF+TOF reconstruction from three different PET systems, in order to obtain EANM compliant recovery coefficients. PET data from 517 oncology patients were reconstructed with a PSF or PSF+TOF reconstruction for optimal tumour detection and an ordered subset expectation maximization (OSEM3D) reconstruction known to fulfil EANM guidelines. Post-reconstruction, the proprietary filter was applied to the PSF or PSF+TOF data (PSF-EQ or PSF+TOF-EQ). SUVs for PSF or PSF+TOF and PSF-EQ or PSF+TOF-EQ were compared to SUVs for the OSEM3D reconstruction. The impact of potential confounders on the EQ.PET methodology including lesion and patient characteristics was studied, as was the adherence to imaging guidelines. For the 1380 tumour lesions studied, Bland-Altman analysis showed a mean ratio between PSF or PSF+TOF and OSEM3D of 1.46 (95% CI: 0.86-2.06) and 1.23 (95% CI: 0.95-1.51) for SUVmax and SUVpeak, respectively. Application of the proprietary filter improved these ratios to 1.02 (95% CI: 0.88-1.16) and 1.04 (95% CI: 0.92-1.17) for SUVmax and SUVpeak, respectively. The influence of the different confounding factors studied (lesion size, location, radial offset and patient's BMI) was less than 5%. Adherence to the European Association of Nuclear Medicine (EANM) guidelines for tumour imaging was good. These data indicate that it is not necessary to sacrifice the superior lesion detection and image quality achieved by newer reconstruction techniques in the quest for harmonizing quantitative
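
    EQ.PET itself is proprietary, so the sketch below only illustrates the underlying idea: choose a Gaussian post-filter for the PSF(+TOF) reconstruction such that phantom recovery coefficients fall inside a harmonized band. The phantom, sphere mask, target band and sigma search grid are all invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pick_harmonizing_sigma(psf_image, sphere_masks, true_values, rc_bands, sigmas):
    """Return the smallest Gaussian sigma (in voxels) whose recovery coefficients
    (max voxel value / true activity) fall inside the target band for all spheres."""
    for sigma in sigmas:
        smoothed = gaussian_filter(psf_image, sigma)
        rcs = [smoothed[m].max() / t for m, t in zip(sphere_masks, true_values)]
        if all(lo <= rc <= hi for rc, (lo, hi) in zip(rcs, rc_bands)):
            return sigma, rcs
    return None, None

# Hypothetical phantom: one hot sphere with PSF-style edge overshoot
img = np.zeros((64, 64, 64))
zz, yy, xx = np.ogrid[:64, :64, :64]
sphere = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 <= 6 ** 2
img[sphere] = 1.2                         # overshoot above the true value of 1.0

sigma, rcs = pick_harmonizing_sigma(img, [sphere], [1.0], [(0.9, 1.1)],
                                    sigmas=np.arange(0.5, 4.0, 0.25))
print(sigma, rcs)
```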

  5. MBR-SIFT: A mirror reflected invariant feature descriptor using a binary representation for image matching.

    Directory of Open Access Journals (Sweden)

    Mingzhe Su

    Full Text Available The traditional scale invariant feature transform (SIFT) method can extract distinctive features for image matching. However, it is extremely time-consuming in SIFT matching because of the use of the Euclidean distance measure. Recently, many binary SIFT (BSIFT) methods have been developed to improve matching efficiency; however, none of them is invariant to mirror reflection. To address these problems, in this paper, we present a horizontal or vertical mirror reflection invariant binary descriptor named MBR-SIFT, in addition to a novel image matching approach. First, 16 cells in the local region around the SIFT keypoint are reorganized, and then the 128-dimensional vector of the SIFT descriptor is transformed into a reconstructed vector according to eight directions. Finally, the MBR-SIFT descriptor is obtained after binarization and reverse coding. To improve the matching speed and accuracy, a fast matching algorithm that includes a coarse-to-fine two-step matching strategy in addition to two similarity measures for the MBR-SIFT descriptor are proposed. Experimental results on the UKBench dataset show that the proposed method not only solves the problem of mirror reflection, but also ensures desirable matching accuracy and speed.

  6. MBR-SIFT: A mirror reflected invariant feature descriptor using a binary representation for image matching.

    Science.gov (United States)

    Su, Mingzhe; Ma, Yan; Zhang, Xiangfen; Wang, Yan; Zhang, Yuping

    2017-01-01

    The traditional scale invariant feature transform (SIFT) method can extract distinctive features for image matching. However, it is extremely time-consuming in SIFT matching because of the use of the Euclidean distance measure. Recently, many binary SIFT (BSIFT) methods have been developed to improve matching efficiency; however, none of them is invariant to mirror reflection. To address these problems, in this paper, we present a horizontal or vertical mirror reflection invariant binary descriptor named MBR-SIFT, in addition to a novel image matching approach. First, 16 cells in the local region around the SIFT keypoint are reorganized, and then the 128-dimensional vector of the SIFT descriptor is transformed into a reconstructed vector according to eight directions. Finally, the MBR-SIFT descriptor is obtained after binarization and reverse coding. To improve the matching speed and accuracy, a fast matching algorithm that includes a coarse-to-fine two-step matching strategy in addition to two similarity measures for the MBR-SIFT descriptor are proposed. Experimental results on the UKBench dataset show that the proposed method not only solves the problem of mirror reflection, but also ensures desirable matching accuracy and speed.
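
    The exact MBR-SIFT bit layout is not reproduced here; the sketch shows the generic pattern such binary descriptors enable: a cheap Hamming-distance coarse step followed by a stricter ratio test. Descriptor contents and thresholds are made up.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_binary_descriptors(desc1, desc2, coarse_thresh=40, ratio=0.8):
    """Coarse step: keep candidates whose Hamming distance is below a loose
    threshold. Fine step: accept a match only if the best candidate beats the
    second best by the given ratio (Lowe-style test)."""
    matches = []
    for i, d1 in enumerate(desc1):
        dists = np.array([hamming(d1, d2) for d2 in desc2])
        cand = np.where(dists < coarse_thresh)[0]
        if len(cand) == 0:
            continue
        order = cand[np.argsort(dists[cand])]
        best = order[0]
        if len(order) == 1 or dists[best] < ratio * dists[order[1]]:
            matches.append((i, int(best), int(dists[best])))
    return matches

rng = np.random.default_rng(2)
desc1 = rng.integers(0, 256, size=(5, 16), dtype=np.uint8)   # 128-bit descriptors
desc2 = desc1.copy()
desc2[0, 0] ^= 0b00000111                                    # flip 3 bits of one descriptor
print(match_binary_descriptors(desc1, desc2))
```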

  7. Application of four different football match analysis systems

    DEFF Research Database (Denmark)

    Randers, Morten B; Mujika, Inigo; Hewitt, Adam

    2010-01-01

    Using a video-based time-motion analysis system, a semi-automatic multiple-camera system, and two commercially available GPS systems (GPS-1; 5 Hz and GPS-2; 1 Hz), we compared activity pattern and fatigue development in the same football match. Twenty football players competing in the Spanish...... a football game and can be used to study game-induced fatigue. Rather large between-system differences were present in the determination of the absolute distances covered, meaning that any comparisons of results between different match analysis systems should be done with caution....

  8. Quantification of dopaminergic neurotransmission SPECT studies with {sup 123}I-labelled radioligands. A comparison between different imaging systems and data acquisition protocols using Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Crespo, Cristina; Aguiar, Pablo [Universitat de Barcelona - IDIBAPS, Unitat de Biofisica i Bioenginyeria, Departament de Ciencies Fisiologiques I, Facultat de Medicina, Barcelona (Spain); Gallego, Judith [Universitat Politecnica de Catalunya, Institut de Tecniques Energetiques, Barcelona (Spain); Institut de Bioenginyeria de Catalunya, Barcelona (Spain); Cot, Albert [Universitat de Barcelona - IDIBAPS, Unitat de Biofisica i Bioenginyeria, Departament de Ciencies Fisiologiques I, Facultat de Medicina, Barcelona (Spain); Universitat Politecnica de Catalunya, Seccio d' Enginyeria Nuclear, Departament de Fisica i Enginyeria Nuclear, Barcelona (Spain); Falcon, Carles; Ros, Domenec [Universitat de Barcelona - IDIBAPS, Unitat de Biofisica i Bioenginyeria, Departament de Ciencies Fisiologiques I, Facultat de Medicina, Barcelona (Spain); CIBER en Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain); Bullich, Santiago [Hospital del Mar, Center for Imaging in Psychiatry, CRC-MAR, Barcelona (Spain); Pareto, Deborah [CIBER en Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain); PRBB, Institut d' Alta Tecnologia, Barcelona (Spain); Sempau, Josep [Universitat Politecnica de Catalunya, Institut de Tecniques Energetiques, Barcelona (Spain); CIBER en Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain); Lomena, Francisco [IDIBAPS, Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Calvino, Francisco [Universitat Politecnica de Catalunya, Institut de Tecniques Energetiques, Barcelona (Spain); Universitat Politecnica de Catalunya, Seccio d' Enginyeria Nuclear, Departament de Fisica i Enginyeria Nuclear, Barcelona (Spain); Pavia, Javier [CIBER en Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain); IDIBAPS, Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain)

    2008-07-15

    123I-labelled radioligands are commonly used for single-photon emission computed tomography (SPECT) imaging of the dopaminergic system to study the dopamine transporter binding. The aim of this work was to compare the quantitative capabilities of two different SPECT systems through Monte Carlo (MC) simulation. The SimSET MC code was employed to generate simulated projections of a numerical phantom for two gamma cameras equipped with a parallel and a fan-beam collimator, respectively. A fully 3D iterative reconstruction algorithm was used to compensate for attenuation, the spatially variant point spread function (PSF) and scatter. A post-reconstruction partial volume effect (PVE) compensation was also developed. For both systems, the correction for all degradations and PVE compensation resulted in recovery factors of the theoretical specific uptake ratio (SUR) close to 100%. For a SUR value of 4, the recovered SUR for the parallel imaging system was 33% for a reconstruction without corrections (OSEM), 45% for a reconstruction with attenuation correction (OSEM-A), 56% for a 3D reconstruction with attenuation and PSF corrections (OSEM-AP), 68% for OSEM-AP with scatter correction (OSEM-APS) and 97% for OSEM-APS plus PVE compensation (OSEM-APSV). For the fan-beam imaging system, the recovered SUR was 41% without corrections, 55% for OSEM-A, 65% for OSEM-AP, 75% for OSEM-APS and 102% for OSEM-APSV. Our findings indicate that the correction for degradations increases the quantification accuracy, with PVE compensation playing a major role in the SUR quantification. The proposed methodology allows us to reach similar SUR values for different SPECT systems, thereby allowing a reliable standardisation in multicentric studies. (orig.)

  9. Development of Neuromorphic Sift Operator with Application to High Speed Image Matching

    Science.gov (United States)

    Shankayi, M.; Saadatseresht, M.; Bitetto, M. A. V.

    2015-12-01

    There was always a speed/accuracy challenge in the photogrammetric mapping process, including feature detection and matching. Most previous research has improved algorithm speed through simplifications or software modifications that trade off the accuracy of the image matching process. This research tries to improve speed without affecting the accuracy of the same algorithm, using neuromorphic techniques. In this research we have developed a general design of a neuromorphic ASIC to handle algorithms such as SIFT. We have also investigated the neural assignment in each step of the SIFT algorithm. With a rough estimation based on the delay of the elements used, including MAC and comparator, we have estimated the resulting chip's performance for 3 scenarios: Full HD movie (videogrammetry), 24 MP (UAV photogrammetry), and 88 MP image sequence. Our estimations led to approximately 3000 fps for Full HD movie, 250 fps for the 24 MP image sequence and 68 fps for the 88 MP Ultracam image sequence, which can be a huge improvement for current photogrammetric processing systems. We also estimated a power consumption of less than 10 watts, which is far lower than that of current workflows.

  10. Real-time detection of natural objects using AM-coded spectral matching imager

    Science.gov (United States)

    Kimachi, Akira

    2005-01-01

    This paper describes application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with some specific activity states. The AM-SMI produces correlation between spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation (AM) of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring dynamic behavior of natural objects in real-time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.
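
    A numerical illustration of the modulation/demodulation principle: each spectral channel is modulated on an orthogonal carrier, and demodulating the summed signal with reference-weighted carriers yields, per pixel, the correlation between the pixel's spectral function and the reference. Channel count, carriers and spectra are synthetic, and DC terms are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ch, n_samp = 12, 1024
t = np.arange(n_samp)
# Orthogonal carriers: one sinusoid per spectral channel (distinct integer frequencies)
freqs = np.arange(1, n_ch + 1)
carriers = np.cos(2 * np.pi * freqs[:, None] * t[None, :] / n_samp)   # (n_ch, n_samp)

reference = rng.random(n_ch)                 # reference spectral function
pixels = rng.random((4, n_ch))               # spectral reflectances of 4 example pixels
pixels[0] = reference                        # pixel 0 matches the reference exactly

# Detected signal per pixel: sum over channels of reflectance * modulated illumination
signal = pixels @ carriers                   # (4, n_samp)

# On-sensor demodulation: correlate with the reference-weighted carrier sum
demod_carrier = reference @ carriers                     # (n_samp,)
correlation = signal @ demod_carrier / (n_samp / 2)      # approximately pixels @ reference
print(correlation)
print(pixels @ reference)                    # should agree closely with the line above
```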

  11. Performance Evaluation of Tree Object Matching

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Kreiborg, Sven

    2005-01-01

    Multi-Scale Singularity Trees (MSSTs) represent the deep structure of images in scale-space and provide both the connections between image features at different scales and their strengths. In this report we present and evaluate an algorithm that exploits the MSSTs for image matching. Two versions...

  12. Characterization of statistical prior image constrained compressed sensing (PICCS): II. Application to dose reduction

    International Nuclear Information System (INIS)

    Lauzier, Pascal Thériault; Chen Guanghong

    2013-01-01

    Purpose: The ionizing radiation imparted to patients during computed tomography exams is raising concerns. This paper studies the performance of a scheme called dose reduction using prior image constrained compressed sensing (DR-PICCS). The purpose of this study is to characterize the effects of a statistical model of x-ray detection in the DR-PICCS framework and its impact on spatial resolution. Methods: Both numerical simulations with known ground truth and an in vivo animal dataset were used in this study. In numerical simulations, a phantom was simulated with Poisson noise and with varying levels of eccentricity. Both the conventional filtered backprojection (FBP) and the PICCS algorithms were used to reconstruct images. In PICCS reconstructions, the prior image was generated using two different denoising methods: a simple Gaussian blur and a more advanced diffusion filter. Due to the lack of shift-invariance in nonlinear image reconstruction such as the one studied in this paper, the concept of local spatial resolution was used to study the sharpness of a reconstructed image. Specifically, a directional metric of image sharpness, the so-called pseudo-point spread function (pseudo-PSF), was employed to investigate local spatial resolution. Results: In the numerical studies, the pseudo-PSF was reduced from twice the voxel width in the prior image down to less than 1.1 times the voxel width in DR-PICCS reconstructions when the statistical model was not included. At the same noise level, when statistical weighting was used, the pseudo-PSF width in DR-PICCS reconstructed images varied between 1.5 and 0.75 times the voxel width depending on the direction along which it was measured. However, this anisotropy was largely eliminated when the prior image was generated using diffusion filtering; the pseudo-PSF width was reduced to below one voxel width in that case. In the in vivo study, a fourfold improvement in CNR was achieved while qualitatively maintaining sharpness

  13. Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images

    Directory of Open Access Journals (Sweden)

    Nina Merkle

    2017-06-01

    Full Text Available Improving the geo-localization of optical satellite images is an important pre-processing step for many remote sensing tasks like monitoring by image time series or scene analysis after sudden events. These tasks require geo-referenced and precisely co-registered multi-sensor data. Images captured by the high resolution synthetic aperture radar (SAR) satellite TerraSAR-X exhibit an absolute geo-location accuracy within a few decimeters. These images therefore represent a reliable source to improve the geo-location accuracy of optical images, which is in the order of tens of meters. In this paper, a deep learning-based approach for the geo-localization accuracy improvement of optical satellite images through SAR reference data is investigated. Image registration between SAR and optical images requires few, but accurate and reliable matching points. These are derived from a Siamese neural network. The network is trained using TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe, in order to learn the two-dimensional spatial shifts between optical and SAR image patches. Results confirm that accurate and reliable matching points can be generated with higher matching accuracy and precision with respect to state-of-the-art approaches.

  14. IMAGE ANALYSIS FOR COSMOLOGY: RESULTS FROM THE GREAT10 STAR CHALLENGE

    Energy Technology Data Exchange (ETDEWEB)

    Kitching, T. D.; Heymans, C. [Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey RH5 6NT (United Kingdom); Rowe, B.; Witherick, D. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom); Gill, M. [Center for Cosmology and AstroParticle Physics, Physics Department, The Ohio State University, Columbus, OH (United States); Massey, R. [Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE (United Kingdom); Courbin, F.; Gentile, M.; Meylan, G. [Laboratoire d' Astrophysique, Ecole Polytechnique Federale de Lausanne (EPFL) (Switzerland); Georgatzis, K. [Department of Information and Computer Science, Aalto University, P.O. Box 15400, FI-00076 Aalto (Finland); Gruen, D. [Department of Physics and Astronomy, 209 South 33rd Street, University of Pennsylvania, Philadelphia, PA 19104 (United States); Kilbinger, M. [Excellence Cluster Universe, Boltzmannstr. 2, D-85748 Garching (Germany); Li, G. L. [Purple Mountain Observatory, 2 West Beijing Road, Nanjing 210008 (China); Mariglis, A. P.; Storkey, A. [School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB (United Kingdom); Xin, B., E-mail: t.kitching@ucl.ac.uk [Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907 (United States)

    2013-04-01

    We present the results from the first public blind point-spread function (PSF) reconstruction challenge, the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) Star Challenge. Reconstruction of a spatially varying PSF, sparsely sampled by stars, at non-star positions is a critical part in the image analysis for weak lensing where inaccuracies in the modeled ellipticity e and size R² can impact the ability to measure the shapes of galaxies. This is of importance because weak lensing is a particularly sensitive probe of dark energy and can be used to map the mass distribution of large scale structure. Participants in the challenge were presented with 27,500 stars over 1300 images subdivided into 26 sets, where in each set a category change was made in the type or spatial variation of the PSF. Thirty submissions were made by nine teams. The best methods reconstructed the PSF with an accuracy of σ(e) ≈ 2.5 × 10⁻⁴ and σ(R²)/R² ≈ 7.4 × 10⁻⁴. For a fixed pixel scale, narrower PSFs were found to be more difficult to model than larger PSFs, and the PSF reconstruction was severely degraded with the inclusion of an atmospheric turbulence model (although this result is likely to be a strong function of the amplitude of the turbulence power spectrum).

  15. A PSF-shape-based beamforming strategy for robust 2D motion estimation in ultrafast data

    NARCIS (Netherlands)

    Saris, Anne E.C.M.; Fekkes, Stein; Nillesen, Maartje; Hansen, Hendrik H.G.; de Korte, Chris L.

    2018-01-01

    This paper presents a framework for motion estimation in ultrafast ultrasound data. It describes a novel approach for determining the sampling grid for ultrafast data based on the system's point-spread-function (PSF). As a consequence, the cross-correlation functions (CCF) used in the speckle

  16. Automated image-matching technique for comparative diagnosis of the liver on CT examination

    International Nuclear Information System (INIS)

    Okumura, Eiichiro; Sanada, Shigeru; Suzuki, Masayuki; Tsushima, Yoshito; Matsui, Osamu

    2005-01-01

    When interpreting enhanced computed tomography (CT) images of the upper abdomen, radiologists visually select a set of images of the same anatomical positions from two or more CT image series (i.e., non-enhanced and contrast-enhanced CT images at arterial and delayed phase) to depict and to characterize any abnormalities. The same process is also necessary to create subtraction images by computer. We have developed an automated image selection system using a template-matching technique that allows the recognition of image sets at the same anatomical position from two CT image series. Using the template-matching technique, we compared several anatomical structures in each CT image at the same anatomical position. As the position of the liver may shift according to respiratory movement, not only the shape of the liver but also the gallbladder and other prominent structures included in the CT images were compared to allow appropriate selection of a set of CT images. This novel technique was applied in 11 upper abdominal CT examinations. In CT images with a slice thickness of 7.0 or 7.5 mm, the percentage of image sets selected correctly by the automated procedure was 86.6±15.3% per case. In CT images with a slice thickness of 1.25 mm, the percentages of correct selection of image sets by the automated procedure were 79.4±12.4% (non-enhanced and arterial-phase CT images) and 86.4±10.1% (arterial- and delayed-phase CT images). This automated method is useful for assisting in interpreting CT images and in creating digital subtraction images. (author)
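
    A simplified sketch of selecting the anatomically corresponding slice in a second series by normalized cross-correlation of a template window, broadly the kind of template matching described; the image series and the window coordinates are placeholders.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized 2D regions."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_matching_slice(reference_slice, candidate_series, window):
    """Return the index of the slice in the second series whose anatomy most
    resembles the reference slice inside the given (row, col) window."""
    r0, r1, c0, c1 = window
    tmpl = reference_slice[r0:r1, c0:c1]
    scores = [ncc(tmpl, s[r0:r1, c0:c1]) for s in candidate_series]
    return int(np.argmax(scores)), scores

# Hypothetical 3-slice series shifted by one slice relative to the reference series
rng = np.random.default_rng(4)
series_a = rng.random((3, 128, 128))
series_b = np.roll(series_a, 1, axis=0) + 0.05 * rng.random((3, 128, 128))
idx, scores = best_matching_slice(series_a[0], series_b, (32, 96, 32, 96))
print(idx, [round(s, 3) for s in scores])    # expect idx == 1
```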

  17. The Sloan Digital Sky Survey COADD: 275 deg2 of deep Sloan Digital Sky Survey imaging on stripe 82

    International Nuclear Information System (INIS)

    Annis, James; Soares-Santos, Marcelle; Dodelson, Scott; Hao, Jiangang; Jester, Sebastian; Johnston, David E.; Kubo, Jeffrey M.; Lampeitl, Hubert; Lin, Huan; Miknaitis, Gajus; Yanny, Brian; Strauss, Michael A.; Gunn, James E.; Lupton, Robert H.; Becker, Andrew C.; Ivezić, Željko; Fan, Xiaohui; Jiang, Linhua; Seo, Hee-Jong; Simet, Melanie

    2014-01-01

    We present details of the construction and characterization of the coaddition of the Sloan Digital Sky Survey (SDSS) Stripe 82 ugriz imaging data. This survey consists of 275 deg 2 of repeated scanning by the SDSS camera over –50° ≤ α ≤ 60° and –1.°25 ≤ δ ≤ +1.°25 centered on the Celestial Equator. Each piece of sky has ∼20 runs contributing and thus reaches ∼2 mag fainter than the SDSS single pass data, i.e., to r ∼ 23.5 for galaxies. We discuss the image processing of the coaddition, the modeling of the point-spread function (PSF), the calibration, and the production of standard SDSS catalogs. The data have an r-band median seeing of 1.''1 and are calibrated to ≤1%. Star color-color, number counts, and PSF size versus modeled size plots show that the modeling of the PSF is good enough for precision five-band photometry. Structure in the PSF model versus magnitude plot indicates minor PSF modeling errors, leading to misclassification of stars as galaxies, as verified using VVDS spectroscopy. There are a variety of uses for this wide-angle deep imaging data, including galactic structure, photometric redshift computation, cluster finding and cross wavelength measurements, weak lensing cluster mass calibrations, and cosmic shear measurements.

  18. Mix-and-match holography

    KAUST Repository

    Peng, Yifan; Dun, Xiong; Sun, Qilin; Heidrich, Wolfgang

    2017-01-01

    target images into pairs of front and rear phase-distorting surfaces. Different target holograms can be decoded by mixing and matching different front and rear surfaces under specific geometric alignments. We call this approach mix-and-match holography. We derive a detailed image formation model for the setting of holographic projection displays, as well as a multiplexing method based on a combination of phase retrieval methods and complex matrix factorization. We demonstrate several application scenarios in both simulation and physical prototypes.

  19. Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair

    Directory of Open Access Journals (Sweden)

    Won-Jae Park

    2017-06-01

    Full Text Available In this paper, a high dynamic range (HDR) imaging method based on the stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, the radiance values in under/over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped by using the estimated disparity between the initial stereo HDR images and then effective hole-filling is applied to the warped AV HDR image. To reconstruct the final MV HDR image, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using the weight map. The experimental results demonstrate objectively and subjectively that the proposed stereo HDR imaging method provides better performance compared to the conventional method.

  20. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    NARCIS (Netherlands)

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to

  1. Improving Image Matching by Reducing Surface Reflections Using Polarising Filter Techniques

    Science.gov (United States)

    Conen, N.; Hastedt, H.; Kahmen, O.; Luhmann, T.

    2018-05-01

    In dense stereo matching applications surface reflections may lead to incorrect measurements and blunders in the resulting point cloud. To overcome the problem of disturbing reflexions polarising filters can be mounted on the camera lens and light source. Reflections in the images can be suppressed by crossing the polarising direction of the filters leading to homogeneous illuminated images and better matching results. However, the filter may influence the camera's orientation parameters as well as the measuring accuracy. To quantify these effects, a calibration and an accuracy analysis is conducted within a spatial test arrangement according to the German guideline VDI/VDE 2634.1 (2002) using a DSLR with and without polarising filter. In a second test, the interior orientation is analysed in more detail. The results do not show significant changes of the measuring accuracy in object space and only very small changes of the interior orientation (Δc ≤ 4 μm) with the polarising filter in use. Since in medical applications many tiny reflections are present and impede robust surface measurements, a prototypic trinocular endoscope is equipped with polarising technique. The interior and relative orientation is determined and analysed. The advantage of the polarising technique for medical image matching is shown in an experiment with a moistened pig kidney. The accuracy and completeness of the resulting point cloud can be improved clearly when using polarising filters. Furthermore, an accuracy analysis using a laser triangulation system is performed and the special reflection properties of metallic surfaces are presented.

  2. IMPROVING IMAGE MATCHING BY REDUCING SURFACE REFLECTIONS USING POLARISING FILTER TECHNIQUES

    Directory of Open Access Journals (Sweden)

    N. Conen

    2018-05-01

    Full Text Available In dense stereo matching applications surface reflections may lead to incorrect measurements and blunders in the resulting point cloud. To overcome the problem of disturbing reflexions polarising filters can be mounted on the camera lens and light source. Reflections in the images can be suppressed by crossing the polarising direction of the filters leading to homogeneous illuminated images and better matching results. However, the filter may influence the camera’s orientation parameters as well as the measuring accuracy. To quantify these effects, a calibration and an accuracy analysis is conducted within a spatial test arrangement according to the German guideline VDI/VDE 2634.1 (2002) using a DSLR with and without polarising filter. In a second test, the interior orientation is analysed in more detail. The results do not show significant changes of the measuring accuracy in object space and only very small changes of the interior orientation (Δc ≤ 4 μm) with the polarising filter in use. Since in medical applications many tiny reflections are present and impede robust surface measurements, a prototypic trinocular endoscope is equipped with polarising technique. The interior and relative orientation is determined and analysed. The advantage of the polarising technique for medical image matching is shown in an experiment with a moistened pig kidney. The accuracy and completeness of the resulting point cloud can be improved clearly when using polarising filters. Furthermore, an accuracy analysis using a laser triangulation system is performed and the special reflection properties of metallic surfaces are presented.

  3. Reconstruction of a cone-beam CT image via forward iterative projection matching

    International Nuclear Information System (INIS)

    Brock, R. Scott; Docef, Alen; Murphy, Martin J.

    2010-01-01

    Purpose: To demonstrate the feasibility of reconstructing a cone-beam CT (CBCT) image by deformably altering a prior fan-beam CT (FBCT) image such that it matches the anatomy portrayed in the CBCT projection data set. Methods: A prior FBCT image of the patient is assumed to be available as a source image. A CBCT projection data set is obtained and used as a target image set. A parametrized deformation model is applied to the source FBCT image, digitally reconstructed radiographs (DRRs) that emulate the CBCT projection image geometry are calculated and compared to the target CBCT projection data, and the deformation model parameters are adjusted iteratively until the DRRs optimally match the CBCT projection data set. The resulting deformed FBCT image is hypothesized to be an accurate representation of the patient's anatomy imaged by the CBCT system. The process is demonstrated via numerical simulation. A known deformation is applied to a prior FBCT image and used to create a synthetic set of CBCT target projections. The iterative projection matching process is then applied to reconstruct the deformation represented in the synthetic target projections; the reconstructed deformation is then compared to the known deformation. The sensitivity of the process to the number of projections and the DRR/CBCT projection mismatch is explored by systematically adding noise to and perturbing the contrast of the target projections relative to the iterated source DRRs and by reducing the number of projections. Results: When there is no noise or contrast mismatch in the CBCT projection images, a set of 64 projections allows the known deformed CT image to be reconstructed to within a nRMS error of 1% and the known deformation to within a nRMS error of 7%. A CT image nRMS error of less than 4% is maintained at noise levels up to 3% of the mean projection intensity, at which the deformation error is 13%. At 1% noise level, the number of projections can be reduced to 8 while maintaining

  4. Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier.

    Directory of Open Access Journals (Sweden)

    Nogol Memari

    Full Text Available The structure and appearance of the blood vessel network in retinal fundus images is an essential part of diagnosing various problems associated with the eyes, such as diabetes and hypertension. In this paper, an automatic retinal vessel segmentation method utilizing matched filter techniques coupled with an AdaBoost classifier is proposed. The fundus image is enhanced using morphological operations, the contrast is increased using the contrast limited adaptive histogram equalization (CLAHE) method and the inhomogeneity is corrected using the Retinex approach. Then, the blood vessels are enhanced using a combination of B-COSFIRE and Frangi matched filters. From this preprocessed image, different statistical features are computed on a pixel-wise basis and used in an AdaBoost classifier to extract the blood vessel network inside the image. Finally, the segmented images are postprocessed to remove the misclassified pixels and regions. The proposed method was validated using publicly accessible Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE) and Child Heart and Health Study in England (CHASE_DB1) datasets commonly used for determining the accuracy of retinal vessel segmentation methods. The accuracy of the proposed segmentation method was comparable to other state of the art methods while being very close to the manual segmentation provided by the second human observer with an average accuracy of 0.972, 0.951 and 0.948 in DRIVE, STARE and CHASE_DB1 datasets, respectively.
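
    A compressed sketch of such a pipeline built from common library routines (CLAHE, a Frangi vesselness filter, pixel-wise features and AdaBoost). It is not the authors' implementation: the Retinex and B-COSFIRE steps are omitted, and the training mask below is synthetic.

```python
import numpy as np
from skimage import exposure, filters
from sklearn.ensemble import AdaBoostClassifier

def pixel_features(green):
    """Per-pixel features: CLAHE-enhanced intensity, Frangi vesselness and a
    local contrast term (pixel minus its Gaussian-smoothed neighbourhood)."""
    clahe = exposure.equalize_adapthist(green, clip_limit=0.03)
    vesselness = filters.frangi(clahe)
    local_mean = filters.gaussian(clahe, sigma=5)
    feats = np.stack([clahe, vesselness, clahe - local_mean], axis=-1)
    return feats.reshape(-1, feats.shape[-1])

rng = np.random.default_rng(5)
train_img = rng.random((64, 64))                  # stand-in green channel
v = filters.frangi(train_img)
train_mask = v > np.percentile(v, 90)             # stand-in for a manual vessel mask

clf = AdaBoostClassifier(n_estimators=50)
clf.fit(pixel_features(train_img), train_mask.ravel().astype(int))

test_img = rng.random((64, 64))
segmentation = clf.predict(pixel_features(test_img)).reshape(test_img.shape)
print(int(segmentation.sum()), "pixels labelled as vessel")
```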

  5. A Matching Method for Vehicle-borne Panoramic Image Sequence Based on Adaptive Structure from Motion Feature

    Directory of Open Access Journals (Sweden)

    ZHANG Zhengpeng

    2015-10-01

    Full Text Available Panoramic image matching constrained by local structure-from-motion similarity features is an important method; the process requires multivariate kernel density estimation of the structure-from-motion features using nonparametric mean shift. Proper selection of the kernel bandwidth is critical for the convergence speed and accuracy of the matching method. In this work, a variable-bandwidth matching method with adaptive structure-from-motion features for panoramic images is proposed. First, the bandwidth matrix is defined using the locally adaptive spatial structure of each sampling point in the spatial and optical-flow domains. The relaxation diffusion process of the structure-from-motion similarity feature is described by distance weighting of the local optical-flow feature vectors. The expression of the adaptive multivariate kernel density function is then given, and the solution of the mean shift vector, the termination conditions and the seed point selection method are discussed. Finally, multi-scale SIFT features and structure features are fused to establish a unified panoramic image matching framework. Spherical panoramic images from a vehicle-borne mobile measurement system are used for a detailed comparison between fixed and adaptive bandwidths. The results show that the adaptive bandwidth copes well with changes in the inlier ratio and in object-space scale. The proposed method realizes an adaptive similarity measure for structure-from-motion features and improves the number of correct matching points and the matching rate; experimental results have shown our method to be robust.

  6. A Registration Scheme for Multispectral Systems Using Phase Correlation and Scale Invariant Feature Matching

    Directory of Open Access Journals (Sweden)

    Hanlun Li

    2016-01-01

    Full Text Available In the past few years, many multispectral systems which consist of several identical monochrome cameras equipped with different bandpass filters have been developed. However, due to the significant difference in the intensity between different band images, image registration becomes very difficult. Considering the common structural characteristic of the multispectral systems, this paper proposes an effective method for registering different band images. First we use the phase correlation method to calculate the parameters of a coarse-offset relationship between different band images. Then we use the scale invariant feature transform (SIFT) to detect the feature points. For every feature point in a reference image, we can use the coarse-offset parameters to predict the location of its matching point. We only need to compare the feature point in the reference image with the several near feature points from the predicted location instead of the feature points all over the input image. Our experiments show that this method not only avoids false matches and increases correct matches, but also solves the matching problem between an infrared band image and a visible band image in cases lacking man-made objects.
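
    A sketch of the two-stage idea: estimate the coarse offset between band images by phase correlation, then keep only keypoint matches consistent with the predicted offset. The images are random stand-ins and the SIFT detection/description step is abstracted into ready-made keypoint pairs.

```python
import numpy as np

def phase_correlation_offset(img_ref, img_in):
    """Coarse translation (dy, dx) such that img_in is approximately img_ref
    shifted by (dy, dx), estimated from the normalized cross-power spectrum."""
    Fr, Fi = np.fft.fft2(img_ref), np.fft.fft2(img_in)
    cross = Fi * np.conj(Fr)
    cross /= np.abs(cross) + 1e-12
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > img_ref.shape[0] // 2:
        dy -= img_ref.shape[0]
    if dx > img_ref.shape[1] // 2:
        dx -= img_ref.shape[1]
    return int(dy), int(dx)

def filter_matches_by_offset(pairs, coarse, tol=3.0):
    """Keep candidate matches ((y_ref, x_ref), (y_in, x_in)) whose displacement
    agrees with the coarse phase-correlation offset within a tolerance."""
    dy, dx = coarse
    return [((yr, xr), (yi, xi)) for (yr, xr), (yi, xi) in pairs
            if abs((yi - yr) - dy) <= tol and abs((xi - xr) - dx) <= tol]

rng = np.random.default_rng(6)
ref = rng.random((128, 128))
inp = np.roll(ref, shift=(5, -7), axis=(0, 1))      # second band offset by (5, -7)
offset = phase_correlation_offset(ref, inp)
print(offset)                                        # expect roughly (5, -7)
print(filter_matches_by_offset([((10, 20), (15, 13)), ((30, 40), (60, 70))], offset))
```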

  7. Quality control and authentication of packaged integrated circuits using enhanced-spatial-resolution terahertz time-domain spectroscopy and imaging

    Science.gov (United States)

    Ahi, Kiarash; Shahbazmohamadi, Sina; Asadizanjani, Navid

    2018-05-01

    In this paper, a comprehensive set of techniques for quality control and authentication of packaged integrated circuits (IC) using terahertz (THz) time-domain spectroscopy (TDS) is developed. By material characterization, the presence of unexpected materials in counterfeit components is revealed. Blacktopping layers are detected using THz time-of-flight tomography, and thickness of hidden layers is measured. Sanded and contaminated components are detected by THz reflection-mode imaging. Differences between inside structures of counterfeit and authentic components are revealed through developing THz transmission imaging. For enabling accurate measurement of features by THz transmission imaging, a novel resolution enhancement technique (RET) has been developed. This RET is based on deconvolution of the THz image and the THz point spread function (PSF). The THz PSF is mathematically modeled through incorporating the spectrum of the THz imaging system, the axis of propagation of the beam, and the intensity extinction coefficient of the object into a Gaussian beam distribution. As a result of implementing this RET, the accuracy of the measurements on THz images has been improved from 2.4 mm to 0.1 mm and bond wires as small as 550 μm inside the packaging of the ICs are imaged.
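
    The paper's PSF model folds the system spectrum, propagation axis and extinction coefficient into a Gaussian beam distribution; the sketch below keeps only the final step of that idea, deconvolving an image with a modelled Gaussian PSF (here via Richardson-Lucy rather than the authors' scheme). All beam and target parameters are invented.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

def gaussian_psf(size, fwhm_px):
    """Isotropic Gaussian PSF as a stand-in for the modelled THz beam profile."""
    sigma = fwhm_px / 2.355
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

rng = np.random.default_rng(7)
truth = np.zeros((128, 128))
truth[40:50, 30:100] = 1.0                      # a bond-wire-like bar target
psf = gaussian_psf(31, fwhm_px=9)               # hypothetical beam FWHM in pixels
blurred = fftconvolve(truth, psf, mode="same") + 0.01 * rng.random(truth.shape)

restored = richardson_lucy(blurred, psf, clip=False)
# deconvolution should reduce the mean absolute error relative to the blurred image
print(float(np.abs(blurred - truth).mean()), float(np.abs(restored - truth).mean()))
```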

  8. GPU-based simulation of optical propagation through turbulence for active and passive imaging

    Science.gov (United States)

    Monnier, Goulven; Duval, François-Régis; Amram, Solène

    2014-10-01

    IMOTEP is a GPU-based (Graphics Processing Units) software relying on a fast parallel implementation of Fresnel diffraction through successive phase screens. Its applications include active imaging, laser telemetry and passive imaging through turbulence with anisoplanatic spatial and temporal fluctuations. Thanks to parallel implementation on GPU, speedups ranging from 40X to 70X are achieved. The present paper gives a brief overview of IMOTEP models, algorithms, implementation and user interface. It then focuses on major improvements recently brought to the anisoplanatic imaging simulation method. Previously, we took advantage of the computational power offered by the GPU to develop a simulation method based on large series of deterministic realisations of the PSF distorted by turbulence. The phase screen propagation algorithm, by reproducing higher moments of the incident wavefront distortion, provides realistic PSFs. However, we first used a coarse Gaussian model to fit the numerical PSFs and characterise their spatial statistics through only 3 parameters (two-dimensional displacements of centroid and width). However, this approach was unable to reproduce the effects related to the details of the PSF structure, especially the "speckles" leading to prominent high-frequency content in short-exposure images. To overcome this limitation, we recently implemented a new empirical model of the PSF, based on Principal Components Analysis (PCA), intended to capture most of the PSF complexity. The GPU implementation allows estimating and handling efficiently the numerous (up to several hundreds) principal components typically required under the strong turbulence regime. A first demanding computational step involves PCA, phase screen propagation and covariance estimates. In a second step, realistic instantaneous images, fully accounting for anisoplanatic effects, are quickly generated. Preliminary results are presented.

  9. Image Registration for PET/CT and CT Images with Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Lee, Hak Jae; Kim, Yong Kwon; Lee, Ki Sung; Choi, Jong Hak; Kim, Chang Kyun; Moon, Guk Hyun; Joo, Sung Kwan; Kim, Kyeong Min; Cheon, Gi Jeong

    2009-01-01

    Image registration is a fundamental task in image processing used to match two or more images. It gives new information to the radiologists by matching images from different modalities. The objective of this study is to develop a 2D image registration algorithm for PET/CT and CT images acquired by different systems at different times. We matched two CT images first (one from standalone CT and the other from PET/CT) that contain affluent anatomical information. Then, we geometrically transformed the PET image according to the results of transformation parameters calculated by the previous step. We have used an affine transform to match the target and reference images. For the similarity measure, mutual information was explored. Use of a particle swarm algorithm optimized the performance by finding the best matched parameter set within a reasonable amount of time. The results show good agreement of the images between PET/CT and CT. We expect the proposed algorithm can be used not only for PET/CT and CT image registration but also for different multi-modality imaging systems such as SPECT/CT, MRI/PET and so on.
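
    The similarity measure can be sketched compactly: mutual information estimated from the joint intensity histogram of the two images under a candidate transform. The particle-swarm search over affine parameters is left out, and the images below are random stand-ins.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally sized images, estimated from
    their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(8)
ct = rng.random((128, 128))
pet_like = 0.6 * ct + 0.4 * rng.random((128, 128))   # partially dependent image
unrelated = rng.random((128, 128))
print(mutual_information(ct, pet_like), mutual_information(ct, unrelated))
# the dependent pair should score noticeably higher
```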

  10. Lung metastases detection in CT images using 3D template matching

    International Nuclear Information System (INIS)

    Wang, Peng; DeNunzio, Andrea; Okunieff, Paul; O'Dell, Walter G.

    2007-01-01

    The aim of this study is to demonstrate a novel, fully automatic computer detection method applicable to metastatic tumors to the lung with a diameter of 4-20 mm in high-risk patients using typical computed tomography (CT) scans of the chest. Three-dimensional (3D) spherical tumor appearance models (templates) of various sizes were created to match representative CT imaging parameters and to incorporate partial volume effects. Taking into account the variability in the location of CT sampling planes cut through the spherical models, three offsetting template models were created for each appearance model size. Lung volumes were automatically extracted from computed tomography images and the correlation coefficients between the subregions around each voxel in the lung volume and the set of appearance models were calculated using a fast frequency domain algorithm. To determine optimal parameters for the templates, simulated tumors of varying sizes and eccentricities were generated and superposed onto a representative human chest image dataset. The method was applied to real image sets from 12 patients with known metastatic disease to the lung. A total of 752 slices and 47 identifiable tumors were studied. Spherical templates of three sizes (6, 8, and 10 mm in diameter) were used on the patient image sets; all 47 true tumors were detected with the inclusion of only 21 false positives. This study demonstrates that an automatic and straightforward 3D template-matching method, without any complex training or postprocessing, can be used to detect small lung metastases quickly and reliably in the clinical setting

  11. Blurred image restoration using knife-edge function and optimal window Wiener filtering

    Science.gov (United States)

    Zhou, Shudao; Yan, Wei

    2018-01-01

    Motion blur in images is usually modeled as the convolution of a point spread function (PSF) and the original image represented as pixel intensities. The knife-edge function can be used to model various types of motion-blurs, and hence it allows for the construction of a PSF and accurate estimation of the degradation function without knowledge of the specific degradation model. This paper addresses the problem of image restoration using a knife-edge function and optimal window Wiener filtering. In the proposed method, we first calculate the motion-blur parameters and construct the optimal window. Then, we use the detected knife-edge function to obtain the system degradation function. Finally, we perform Wiener filtering to obtain the restored image. Experiments show that the restored image has improved resolution and contrast parameters with clear details and no discernible ringing effects. PMID:29377950
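
    A one-dimensional sketch of the chain the abstract describes: differentiate an edge profile to estimate a PSF (line-spread function), then build a Wiener filter from it. A static Gaussian blur stands in for motion blur, and the noise-to-signal ratio and window sizes are assumptions.

```python
import numpy as np

def psf_from_edge(blurred_edge_profile, half_width=20):
    """Differentiate an edge-spread profile and crop a window centred on the
    gradient peak to obtain a 1D PSF (line-spread) estimate."""
    lsf = np.clip(np.gradient(blurred_edge_profile), 0, None)
    peak = int(np.argmax(lsf))
    win = lsf[peak - half_width: peak + half_width + 1]
    return win / win.sum()

def wiener_deconvolve(signal, psf_centered, nsr=0.01):
    """Wiener filter W = H* / (|H|^2 + NSR), with the PSF peak moved to index 0."""
    pad = np.zeros(signal.size)
    pad[:psf_centered.size] = psf_centered
    pad = np.roll(pad, -(psf_centered.size // 2))
    H = np.fft.fft(pad)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(signal) * W))

# Synthetic bar target blurred by a Gaussian kernel (stand-in for the true PSF)
x = np.arange(256)
truth = ((x > 100) & (x < 160)).astype(float)
kernel = np.exp(-0.5 * (np.arange(-20, 21) / 4.0) ** 2)
kernel /= kernel.sum()
blurred = np.convolve(truth, kernel, mode="same")

psf_est = psf_from_edge(blurred)                # estimated from the bar's rising edge
restored = wiener_deconvolve(blurred, psf_est)
# restoration should bring the profile closer to the ideal bar
print(np.abs(blurred - truth).mean(), np.abs(restored - truth).mean())
```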

  12. Gender differences in game responses during badminton match play.

    Science.gov (United States)

    Fernandez-Fernandez, Jaime; de la Aleja Tellez, Jose G; Moya-Ramon, Manuel; Cabello-Manrique, David; Mendez-Villanueva, Alberto

    2013-09-01

    The aim of this study was to evaluate possible gender differences in match play activity pattern [rally duration, rest time between rallies, effective playing time, and strokes performed during a rally] and exercise intensity (heart rate [HR], blood lactate [La], and subjective ratings of perceived exertion [RPE]) during 9 simulated badminton matches in male (n = 8) and female (n = 8) elite junior (16.0 ± 1.4 years) players. Results showed significant differences (all p < 0.05) in match play activity patterns between genders. No differences (p > 0.05; ES = -0.33 to 0.08) were observed between female and male players in average HR (174 ± 7 vs. 170 ± 9 b·min(-1)), %HRmax (89.2 ± 4.0% vs. 85.9 ± 4.3%), La (2.5 ± 1.3 vs. 3.2 ± 1.8 mmol·L(-1)), and RPE values (14.2 ± 1.9 vs. 14.6 ± 1.8) during match play, although male players spent more time (moderate effect sizes) at intensities between 81 and 90% HRmax (35.3 ± 17.9 vs. 25.3 ± 13.6; p < 0.05; ES = 0.64) in the second game. There seemed to be a trend toward an increased playing intensity (i.e., higher HR, La, and RPE) from the first to the second game, highlighting the higher exercise intensity experienced during the last part of the match. The clear between-gender differences in activity patterns induced only slightly different physiological responses.

  13. Stereo matching and view interpolation based on image domain triangulation.

    Science.gov (United States)

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach to the stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities, generating a full 3D mesh for each camera (view); these meshes are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedure to fill holes.

  14. SU-E-T-499: Comparison of Measured Tissue Phantom Ratios (TPR) Against Calculated From Percent Depth Doses (PDD) with and Without Peak Scatter Factor (PSF) in 6MV Open Beam

    International Nuclear Information System (INIS)

    Narayanasamy, G; Cruz, W; Gutierrez, Alonso; Mavroidis, Panayiotis; Papanikolaou, N; Stathakis, S; Breton, C

    2014-01-01

    Purpose: To examine the accuracy of measured tissue phantom ratio (TPR) values against TPR calculated from percentage depth dose (PDD) with and without peak scatter factor (PSF) correction. Methods: For a 6 MV open beam, TPR and PDD values were measured using PTW Semiflex (31010) ionization field and reference chambers (0.125 cc volume) in a PTW MP3-M water tank. PDD curves were measured at an SSD of 100 cm for 7 square fields from 3 cm to 30 cm. The TPR values were measured up to 22 cm depth for the same fields by a continuous water-draining method with the ionization chamber static at 100 cm from the source. A comparison study was performed between (a) measured TPR, (b) TPR calculated from PDD without PSF, (c) TPR calculated from PDD with PSF, and (d) clinical TPR from RadCalc (ver 6.2, Sun Nuclear Corp). Results: TPR values show both field-size and depth dependence. For 10cmx10cm, the differences in surface dose (ΔDs) and dose at 10 cm depth (ΔD10) were <0.5%, and the differences in dmax (Δdmax) were <2 mm for the 4 methods. The corresponding values for 30cmx30cm were ΔDs, ΔD10 <0.2% and Δdmax <3 mm. Even though for the 3cmx3cm field ΔDs and ΔD10 were <1% and Δdmax <1 mm, the calculated TPR values with and without PSF correction differed by 2% at >20 cm depth. In all field sizes at depths >28 cm, the clinical TPR values of (d) were larger than those from (b) and (c) by >3%. Conclusion: Measured TPR in method (a) differs from calculated TPR in methods (b) and (c) to within 1% for depths <28 cm in all 7 fields in the open 6 MV beam. The dmax values are within 3 mm of each other. The largest deviation, >3%, was observed in the clinical TPR values of method (d) for all fields at depths >28 cm
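
    For reference, the conversion underlying methods (b) and (c) can be written as a small helper. This is the textbook SSD-to-isocentric relation, not code from the abstract, and the field sizes entering the PSF ratio are our assumptions.

      def tpr_from_pdd(pdd, d, d_max, f, psf_ratio=1.0):
          # pdd: percentage depth dose (%) at depth d for source-surface distance f
          # psf_ratio = 1.0 reproduces method (b) (no PSF correction);
          # psf_ratio = PSF(r_dmax) / PSF(r_d) reproduces method (c).
          return (pdd / 100.0) * ((f + d) / (f + d_max)) ** 2 * psf_ratio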

  15. A comparative study between matched and mis-matched projection/back projection pairs used with ASIRT reconstruction method

    International Nuclear Information System (INIS)

    Guedouar, R.; Zarrad, B.

    2010-01-01

    For algebraic reconstruction techniques, both forward and back projection operators are needed. The ability to perform accurate reconstruction relies fundamentally on the forward projection and back projection methods, which are usually the transpose of each other. Even though mis-matched pairs may introduce additional errors during the iterative process, the usefulness of mis-matched projector/back projector pairs has been demonstrated in image reconstruction. This work investigates the performance of matched and mis-matched reconstruction pairs using popular forward projectors and their transposes in reconstruction tasks with additive simultaneous iterative reconstruction techniques (ASIRT) in a parallel-beam geometry. Simulated noiseless phantoms are used to compare the performance of the investigated pairs in terms of the root mean squared error (RMSE), calculated between reconstructed slices and the reference in different regions. Results show that mis-matched projection/back projection pairs can yield more accurate reconstructed images than matched ones. The forward projection operator performance seems independent of the choice of the back projection operator and vice versa.
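
    The iterative scheme being compared can be summarised with a short sketch. The following Python fragment is a generic additive SIRT with explicit projector matrices, not the authors' implementation; the normalisations and relaxation factor are assumptions. It makes the matched/mis-matched distinction concrete: passing B = A.T gives a matched pair, while any other back projector gives a mis-matched one.

      import numpy as np

      def sirt(A, B, p, n_iter=100, relax=1.0):
          # A: forward projector (n_rays x n_voxels); B: back projector
          # (n_voxels x n_rays). B = A.T is the matched choice.
          row_sum = np.maximum(A.sum(axis=1), 1e-12)   # per-ray normalisation
          col_sum = np.maximum(B.sum(axis=1), 1e-12)   # per-voxel normalisation
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              residual = (p - A @ x) / row_sum
              x = x + relax * (B @ residual) / col_sum
          return x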

  16. A comparative study between matched and mis-matched projection/back projection pairs used with ASIRT reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Guedouar, R., E-mail: raja_guedouar@yahoo.f [Higher School of Health Sciences and Techniques of Monastir, Av. Avicenne, 5060 Monastir, B.P. 128 (Tunisia); Zarrad, B., E-mail: boubakerzarrad@yahoo.f [Higher School of Health Sciences and Techniques of Monastir, Av. Avicenne, 5060 Monastir, B.P. 128 (Tunisia)

    2010-07-21

    For algebraic reconstruction techniques, both forward and back projection operators are needed. The ability to perform accurate reconstruction relies fundamentally on the forward projection and back projection methods, which are usually the transpose of each other. Even though mis-matched pairs may introduce additional errors during the iterative process, the usefulness of mis-matched projector/back projector pairs has been demonstrated in image reconstruction. This work investigates the performance of matched and mis-matched reconstruction pairs using popular forward projectors and their transposes in reconstruction tasks with additive simultaneous iterative reconstruction techniques (ASIRT) in a parallel-beam geometry. Simulated noiseless phantoms are used to compare the performance of the investigated pairs in terms of the root mean squared error (RMSE), calculated between reconstructed slices and the reference in different regions. Results show that mis-matched projection/back projection pairs can yield more accurate reconstructed images than matched ones. The forward projection operator performance seems independent of the choice of the back projection operator and vice versa.

  17. Target Matching Recognition for Satellite Images Based on the Improved FREAK Algorithm

    Directory of Open Access Journals (Sweden)

    Yantong Chen

    2016-01-01

    Full Text Available Satellite remote sensing image target matching recognition exhibits poor robustness and accuracy because of unsuitable feature extractors and the large data volume. To address this problem, we propose a new feature extraction algorithm for fast target matching recognition that comprises an improved features from accelerated segment test (FAST) feature detector and a binary fast retina keypoint (FREAK) feature descriptor. To improve robustness, we extend the FAST feature detector by applying scale space theory and then transform the feature vector acquired by the FREAK descriptor from decimal into binary. Working in this binary space reduces the quantity of data handled and improves matching accuracy. Simulation test results show that our algorithm outperforms other relevant methods in terms of robustness and accuracy.
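
    A baseline FAST + FREAK pipeline, against which the paper's improvements (scale-space FAST, modified binary descriptor handling) can be understood, might look like the following OpenCV sketch in Python. FREAK lives in the opencv-contrib package; the threshold and the cross-check matching strategy are illustrative choices, not taken from the paper.

      import cv2

      def match_freak(img1, img2, max_matches=100):
          # Detect FAST corners, describe them with FREAK (binary descriptor),
          # and match with Hamming distance plus cross-checking.
          fast = cv2.FastFeatureDetector_create(threshold=20)
          freak = cv2.xfeatures2d.FREAK_create()
          kp1 = fast.detect(img1, None)
          kp2 = fast.detect(img2, None)
          kp1, des1 = freak.compute(img1, kp1)
          kp2, des2 = freak.compute(img2, kp2)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
          return kp1, kp2, matches[:max_matches]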

  18. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    Science.gov (United States)

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate observer preference for the image quality of chest radiography using a deconvolution algorithm of the point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with that of original chest radiography for visualization of anatomic regions of the chest. Fifty prospectively enrolled pairs of posteroanterior chest radiographs, collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by the scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality on a 5-point preference scale. The significance of the differences in reader preference was tested with a Wilcoxon signed rank test. All four readers preferred the images processed with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0; p < 0.05). In conclusion, the visibility of chest anatomical structures on images processed with the PSF deconvolution algorithm was superior to that of original chest radiography.

  19. Action verbs are processed differently in metaphorical and literal sentences depending on the semantic match of visual primes

    Directory of Open Access Journals (Sweden)

    Melissa eTroyer

    2014-12-01

    Full Text Available Language comprehension requires rapid and flexible access to information stored in long-term memory, likely influenced by activation of rich world knowledge and by brain systems that support the processing of sensorimotor content. We hypothesized that while literal language about biological motion might rely on neurocognitive representations of biological motion specific to the details of the actions described, metaphors rely on more generic representations of motion. In a priming and self-paced reading paradigm, participants saw video clips or images of (a) an intact point-light walker or (b) a scrambled control and read sentences containing literal or metaphoric uses of biological motion verbs either closely or distantly related to the depicted action (walking). We predicted that reading times for literal and metaphorical sentences would show differential sensitivity to the match between the verb and the visual prime. In Experiment 1, we observed interactions between the prime type (walker or scrambled video) and the verb type (close or distant match) for both literal and metaphorical sentences, but with strikingly different patterns. We found no difference in the verb region of literal sentences for Close-Match verbs after walker or scrambled motion primes, but Distant-Match verbs were read more quickly following walker primes. For metaphorical sentences, the results were roughly reversed, with Distant-Match verbs being read more slowly following a walker compared to scrambled motion. In Experiment 2, we observed a similar pattern following still image primes, though critical interactions emerged later in the sentence. We interpret these findings as evidence for shared recruitment of cognitive and neural mechanisms for processing visual and verbal biological motion information. Metaphoric language using biological motion verbs may recruit neurocognitive mechanisms similar to those used in processing literal language but be represented in a less

  20. Method to restore images from chaotic frequency-down-converted light using phase matching

    International Nuclear Information System (INIS)

    Andreoni, Alessandra; Puddu, Emiliano; Bondani, Maria

    2006-01-01

    We present an optical frequency-down-conversion process of the image of an object illuminated with chaotic light, in which the low-frequency field entering the second-order nonlinear crystal is also chaotic. We show that fulfillment of the phase-matching conditions by the chaotic interacting fields provides the rules for retrieving the object image by calculating suitable correlations of the local intensity fluctuations, even if only a single record of the down-converted chaotic image is available

  1. Robust stereo matching with trinary cross color census and triple image-based refinements

    Science.gov (United States)

    Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr

    2017-12-01

    For future 3D TV broadcasting systems and navigation applications, it is necessary to have accurate stereo matching that can precisely estimate the depth map from two separated cameras. In this paper, we first propose a trinary cross color (TCC) census transform, which helps achieve an accurate raw disparity matching cost at low computational cost. A two-pass cost aggregation (TPCA) is then used to compute the aggregated cost, and the disparity map is obtained by a range winner-take-all (RWTA) process and a white-hole filling procedure. To further enhance accuracy, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Image-based refinements for the mismatched and occluded pixels are then proposed to correct the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps at reasonable computation cost.
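
    To make the census idea concrete, the sketch below shows a plain trinary census transform in Python/numpy: each neighbour is coded as darker, similar, or brighter than the centre pixel, and the raw matching cost between two pixels is the number of differing codes. This only illustrates the underlying principle; the paper's TCC descriptor additionally works across colour channels and uses its own support pattern, and the window size and threshold here are assumptions.

      import numpy as np

      def trinary_census(img, window=5, eps=2.0):
          # Per-pixel trinary code over a square neighbourhood.
          img = np.asarray(img, dtype=float)
          h, w = img.shape
          r = window // 2
          pad = np.pad(img, r, mode="edge")
          codes = []
          for dy in range(-r, r + 1):
              for dx in range(-r, r + 1):
                  if dy == 0 and dx == 0:
                      continue
                  neigh = pad[r + dy:r + dy + h, r + dx:r + dx + w]
                  code = np.ones((h, w), dtype=np.uint8)   # 1 = similar
                  code[neigh < img - eps] = 0              # 0 = clearly darker
                  code[neigh > img + eps] = 2              # 2 = clearly brighter
                  codes.append(code)
          return np.stack(codes, axis=-1)

      def census_cost(desc_left, desc_right):
          # Generalised Hamming distance between trinary descriptors.
          return np.count_nonzero(desc_left != desc_right, axis=-1)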

  2. Motion-compensated PET image reconstruction with respiratory-matched attenuation correction using two low-dose inhale and exhale CT images

    International Nuclear Information System (INIS)

    Nam, Woo Hyun; Ahn, Il Jun; Ra, Jong Beom; Kim, Kyeong Min; Kim, Byung Il

    2013-01-01

    Positron emission tomography (PET) is widely used for diagnosis and follow up assessment of radiotherapy. However, thoracic and abdominal PET suffers from false staging and incorrect quantification of the radioactive uptake of lesion(s) due to respiratory motion. Furthermore, respiratory motion-induced mismatch between a computed tomography (CT) attenuation map and PET data often leads to significant artifacts in the reconstructed PET image. To solve these problems, we propose a unified framework for respiratory-matched attenuation correction and motion compensation of respiratory-gated PET. For the attenuation correction, the proposed algorithm manipulates a 4D CT image virtually generated from two low-dose inhale and exhale CT images, rather than a real 4D CT image which significantly increases the radiation burden on a patient. It also utilizes CT-driven motion fields for motion compensation. To realize the proposed algorithm, we propose an improved region-based approach for non-rigid registration between body CT images, and we suggest a selection scheme of 3D CT images that are respiratory-matched to each respiratory-gated sinogram. In this work, the proposed algorithm was evaluated qualitatively and quantitatively by using patient datasets including lung and/or liver lesion(s). Experimental results show that the method can provide much clearer organ boundaries and more accurate lesion information than existing algorithms by utilizing two low-dose CT images. (paper)

  3. 18F-FDG PET/CT heterogeneity quantification through textural features in the era of harmonisation programs: a focus on lung cancer

    Energy Technology Data Exchange (ETDEWEB)

    Lasnon, Charline [University Hospital, Nuclear Medicine Department, Caen (France); Biologie et Therapies Innovantes des Cancers Localement Agressifs, Universite de Caen Normandie, INSERM, Caen (France); Normandie University, Caen (France); Majdoub, Mohamed; Lavigne, Brice; Visvikis, Dimitris [LaTIM, INSERM UMR 1101, Brest (France); Do, Pascal [Thoracic Oncology, Francois Baclesse Cancer Centre, Caen (France); Madelaine, Jeannick [Caen University Hospital, Pulmonology Department, Caen (France); Hatt, Mathieu [LaTIM, INSERM UMR 1101, Brest (France); CHRU Morvan, INSERM UMR 1101, Laboratoire de Traitement de l' Information Medicale (LaTIM), Groupe ' Imagerie multi-modalite quantitative pour le diagnostic et la therapie' , Brest (France); Aide, Nicolas [University Hospital, Nuclear Medicine Department, Caen (France); Biologie et Therapies Innovantes des Cancers Localement Agressifs, Universite de Caen Normandie, INSERM, Caen (France); Normandie University, Caen (France); Caen University Hospital, Nuclear Medicine Department, Caen (France)

    2016-12-15

    Quantification of tumour heterogeneity in PET images has recently gained interest, but has been shown to be dependent on image reconstruction. This study aimed to evaluate the impact of the EANM/EARL accreditation program on selected 18F-FDG heterogeneity metrics. To carry out our study, we prospectively analysed 71 tumours in 60 biopsy-proven lung cancer patient acquisitions reconstructed with unfiltered point spread function (PSF) positron emission tomography (PET) images (optimised for diagnostic purposes), PSF-reconstructed images with a 7-mm Gaussian filter (PSF7) chosen to meet European Association of Nuclear Medicine (EANM) 1.0 harmonising standards, and EANM Research Ltd. (EARL)-compliant ordered subset expectation maximisation (OSEM) images. Delineation was performed with fuzzy locally adaptive Bayesian (FLAB) algorithm on PSF images and reported on PSF7 and OSEM ones, and with a 50 % standardised uptake values (SUV)max threshold (SUVmax50%) applied independently to each image. Robust and repeatable heterogeneity metrics including 1st-order [area under the curve of the cumulative histogram (CHAUC)], 2nd-order (entropy, correlation, and dissimilarity), and 3rd-order [high-intensity larger area emphasis (HILAE) and zone percentage (ZP)] textural features (TF) were statistically compared. Volumes obtained with SUVmax50% were significantly smaller than FLAB-derived ones, and were significantly smaller in PSF images compared to OSEM and PSF7 images. PSF-reconstructed images showed significantly higher SUVmax and SUVmean values, as well as heterogeneity for CHAUC, dissimilarity, correlation, and HILAE, and a wider range of heterogeneity values than OSEM images for most of the metrics considered, especially when analysing larger tumours. Histological subtypes had no impact on TF distribution. No significant difference was observed between any of the considered metrics (SUV or heterogeneity features) that we

  4. 18F-FDG PET/CT heterogeneity quantification through textural features in the era of harmonisation programs: a focus on lung cancer.

    Science.gov (United States)

    Lasnon, Charline; Majdoub, Mohamed; Lavigne, Brice; Do, Pascal; Madelaine, Jeannick; Visvikis, Dimitris; Hatt, Mathieu; Aide, Nicolas

    2016-12-01

    Quantification of tumour heterogeneity in PET images has recently gained interest, but has been shown to be dependent on image reconstruction. This study aimed to evaluate the impact of the EANM/EARL accreditation program on selected 18 F-FDG heterogeneity metrics. To carry out our study, we prospectively analysed 71 tumours in 60 biopsy-proven lung cancer patient acquisitions reconstructed with unfiltered point spread function (PSF) positron emission tomography (PET) images (optimised for diagnostic purposes), PSF-reconstructed images with a 7-mm Gaussian filter (PSF 7 ) chosen to meet European Association of Nuclear Medicine (EANM) 1.0 harmonising standards, and EANM Research Ltd. (EARL)-compliant ordered subset expectation maximisation (OSEM) images. Delineation was performed with fuzzy locally adaptive Bayesian (FLAB) algorithm on PSF images and reported on PSF 7 and OSEM ones, and with a 50 % standardised uptake values (SUV) max threshold (SUV max50% ) applied independently to each image. Robust and repeatable heterogeneity metrics including 1st-order [area under the curve of the cumulative histogram (CH AUC )], 2nd-order (entropy, correlation, and dissimilarity), and 3rd-order [high-intensity larger area emphasis (HILAE) and zone percentage (ZP)] textural features (TF) were statistically compared. Volumes obtained with SUV max50% were significantly smaller than FLAB-derived ones, and were significantly smaller in PSF images compared to OSEM and PSF 7 images. PSF-reconstructed images showed significantly higher SUVmax and SUVmean values, as well as heterogeneity for CH AUC , dissimilarity, correlation, and HILAE, and a wider range of heterogeneity values than OSEM images for most of the metrics considered, especially when analysing larger tumours. Histological subtypes had no impact on TF distribution. No significant difference was observed between any of the considered metrics (SUV or heterogeneity features) that we extracted from OSEM and PSF 7

  5. "1"8F-FDG PET/CT heterogeneity quantification through textural features in the era of harmonisation programs: a focus on lung cancer

    International Nuclear Information System (INIS)

    Lasnon, Charline; Majdoub, Mohamed; Lavigne, Brice; Visvikis, Dimitris; Do, Pascal; Madelaine, Jeannick; Hatt, Mathieu; Aide, Nicolas

    2016-01-01

    Quantification of tumour heterogeneity in PET images has recently gained interest, but has been shown to be dependent on image reconstruction. This study aimed to evaluate the impact of the EANM/EARL accreditation program on selected 18F-FDG heterogeneity metrics. To carry out our study, we prospectively analysed 71 tumours in 60 biopsy-proven lung cancer patient acquisitions reconstructed with unfiltered point spread function (PSF) positron emission tomography (PET) images (optimised for diagnostic purposes), PSF-reconstructed images with a 7-mm Gaussian filter (PSF7) chosen to meet European Association of Nuclear Medicine (EANM) 1.0 harmonising standards, and EANM Research Ltd. (EARL)-compliant ordered subset expectation maximisation (OSEM) images. Delineation was performed with fuzzy locally adaptive Bayesian (FLAB) algorithm on PSF images and reported on PSF7 and OSEM ones, and with a 50 % standardised uptake values (SUV)max threshold (SUVmax50%) applied independently to each image. Robust and repeatable heterogeneity metrics including 1st-order [area under the curve of the cumulative histogram (CHAUC)], 2nd-order (entropy, correlation, and dissimilarity), and 3rd-order [high-intensity larger area emphasis (HILAE) and zone percentage (ZP)] textural features (TF) were statistically compared. Volumes obtained with SUVmax50% were significantly smaller than FLAB-derived ones, and were significantly smaller in PSF images compared to OSEM and PSF7 images. PSF-reconstructed images showed significantly higher SUVmax and SUVmean values, as well as heterogeneity for CHAUC, dissimilarity, correlation, and HILAE, and a wider range of heterogeneity values than OSEM images for most of the metrics considered, especially when analysing larger tumours. Histological subtypes had no impact on TF distribution. No significant difference was observed between any of the considered metrics (SUV or heterogeneity features) that we extracted from OSEM and PSF7

  6. Use of the probability of stone formation (PSF) score to assess stone forming risk and treatment response in a cohort of Brazilian stone formers

    OpenAIRE

    Turney, Benjamin; Robertson, William; Wiseman, Oliver; Amaro, Carmen Regina P. R.; Leitão, Victor A.; Silva, Isabela Leme da; Amaro, João Luiz

    2014-01-01

    Introduction: The aim was to confirm that the PSF (probability of stone formation) changed appropriately following medical therapy in recurrent stone formers. Materials and Methods: Data were collected on 26 Brazilian stone formers. A baseline 24-hour urine collection was performed prior to treatment. Details of the medical treatment initiated for stone disease were recorded. A PSF calculation was performed on the 24-hour urine sample using the 7 urinary parameters required: voided volume, oxalate...

  7. An Adaptive Dense Matching Method for Airborne Images Using Texture Information

    Directory of Open Access Journals (Sweden)

    ZHU Qing

    2017-01-01

    Full Text Available Semi-global matching (SGM) is essentially a discrete optimization of the disparity value of each pixel under the assumption of disparity continuity. SGM controls the influence of disparity discontinuities through a set of penalty parameters. With smaller parameters, the continuity constraint is weakened, which causes significant noise in planar and textureless areas, reflected as fluctuations on the final surface reconstruction. On the other hand, larger parameters impose too strong a constraint on continuity, which may lead to the loss of sharp features. To address this problem, this paper proposes an adaptive dense stereo matching method for airborne images using texture information. First, the texture is quantified, and under the assumption that disparity variation is directly proportional to the texture information, the adaptive parameters are set accordingly. Second, SGM is adopted to optimize the discrete disparities using the adaptively tuned parameters. Experimental evaluations using the ISPRS benchmark dataset and images obtained by the SWDC-5 show that the proposed method significantly improves the visual quality of the point clouds.
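
    One plausible way to realise the texture-adaptive penalties described above (purely illustrative; the texture measure, the mapping, and the constants are our assumptions, not the authors') is to derive a per-pixel smoothness penalty from a smoothed gradient magnitude, so that texture-poor regions get a stronger continuity constraint:

      import numpy as np
      from scipy.ndimage import sobel, uniform_filter

      def adaptive_p2(image, p2_min=32.0, p2_max=256.0):
          # Quantify local texture with the smoothed gradient magnitude and
          # map it to the SGM smoothness penalty: low texture -> large P2
          # (strong continuity), strong texture -> small P2.
          img = image.astype(float)
          gx = sobel(img, axis=1)
          gy = sobel(img, axis=0)
          texture = uniform_filter(np.hypot(gx, gy), size=9)
          t = texture / np.maximum(texture.max(), 1e-12)   # 0 (flat) .. 1 (textured)
          return p2_max - (p2_max - p2_min) * t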

  8. Ground and space-based separate PSF photometry of Pluto and Charon from New Horizons and Magellan

    Science.gov (United States)

    Zangari, Amanda M.; Stern, S. A.; Young, L. A.; Weaver, H. A.; Olkin, C.; Buratti, B. J.; Spencer, J.; Ennico, K.

    2013-10-01

    While Pluto and Charon are easily resolvable in some space-based telescopes, ground-based imaging of Pluto and Charon can yield separate PSF photometry in excellent seeing. We present B and Sloan g', r', i', and z' separate photometry of Pluto and Charon taken at the Magellan Clay telescope using LDSS-3. In 2011, observations were made on 7, 8, 9, 19, and 20 March, at 9:00 UT, covering sub-Earth longitudes 130°, 74°, 17°, 175° and 118°. The solar phase angle ranged from 1.66-1.68° to 1.76-1.77°. In 2012, observations were made on February 28, 29 and March 1 at 9:00 UT covering longitudes 342°, 110° and 53° and on May 30 and 31 at 9:30 UT and 7:00 UT, covering longitudes 358° and 272°. Solar phase angles were 1.53-1.56° and 0.89-0.90°. All longitudes use the convention of zero at the sub-Charon longitude and decrease in time. Seeing ranged from 0.46 to 1.26 arcseconds. We find that the mean rotationally-averaged Charon-to-Pluto light ratio is 0.142±0.003 for Sloan r', i' and z'. Charon is brighter in B and g', with light ratios of 0.182±0.003 and 0.178±0.002, respectively. Additionally, we present separate PSF photometry of Pluto and Charon from New Horizons images taken by the LORRI instrument on 1 and 3 July 2013 at 17:00 UT and 23:00 UT, sub-Earth longitudes 251° and 125°. We find that the rotation-dependent variations in the light ratio are consistent with earlier estimates such as those from Buie et al. 2010, AJ 139, 1117-1127. However, at a solar phase angle of 10.9°, Charon appears 0.25 magnitudes fainter relative to Pluto at the same rotational phase than in measurements from the ground at the largest possible solar phase angle. Thus we provide the first estimate of a Pluto phase curve beyond 2°. These results represent some of the first Pluto science from New Horizons. This work has been funded in part by NASA Planetary Astronomy Grant NNX10AB27G and NSF Award 0707609 to MIT and by NASA's New Horizons mission to Pluto.

  9. Chandra's Ultimate Angular Resolution: Studies of the HRC-I Point Spread Function

    Science.gov (United States)

    Juda, Michael; Karovska, M.

    2010-03-01

    The Chandra High Resolution Camera (HRC) should provide an ideal imaging match to the High-Resolution Mirror Assembly (HRMA). The laboratory-measured intrinsic resolution of the HRC is 20 microns FWHM. HRC event positions are determined via a centroiding method rather than by using discrete pixels. This event position reconstruction method and any non-ideal performance of the detector electronics can introduce distortions in event locations that, when combined with spacecraft dither, produce artifacts in source images. We compare ray-traces of the HRMA response to "on-axis" observations of AR Lac and Capella as they move through their dither patterns to images produced from filtered event lists to characterize the effective intrinsic PSF of the HRC-I. A two-dimensional Gaussian, which is often used to represent the detector response, is NOT a good representation of the intrinsic PSF of the HRC-I; the actual PSF has a sharper peak and additional structure which will be discussed. This work was supported under NASA contract NAS8-03060.

  10. Complementary Cohort Strategy for Multimodal Face Pair Matching

    DEFF Research Database (Denmark)

    Sun, Yunlian; Nasrollahi, Kamal; Sun, Zhenan

    2016-01-01

    Face pair matching is the task of determining whether two face images represent the same person. Due to the limited expressive information embedded in the two face images as well as various sources of facial variations, it becomes a quite difficult problem. Towards the issue of few available images provided to represent each face, we propose to exploit an extra cohort set (identities in the cohort set are different from those being compared) by a series of cohort list comparisons. Useful cohort coefficients are then extracted from both sorted cohort identities and sorted cohort images for complementary information. To augment its robustness to complicated facial variations, we further employ multiple face modalities owing to their complementary value to each other for the face pair matching task. The final decision is made by fusing the extracted cohort coefficients with the direct matching...

  11. Polarization behaviour of polyvinylidenefluoride-polysulfone (PVDF: PSF) blends under high field and high temperature condition

    Science.gov (United States)

    Shrivas, Sandhya; Patel, Swarnim; Dubey, R. K.; Keller, J. M.

    2018-05-01

    Thermally stimulated discharge currents of PVDF:PSF blend samples in ratios 80:20 and 95:05, prepared by the solution cast technique, have been studied as a function of polarizing field and polarizing temperature. The temperature corresponding to a peak in the TSDC is found to be independent of the polarizing field but dependent on the polarizing temperature.

  12. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Matched Filtering and Convolutional Neural Network.

    Science.gov (United States)

    Chen, Shuo; Luo, Chenggao; Wang, Hongqiang; Deng, Bin; Cheng, Yongqiang; Zhuang, Zhaowen

    2018-04-26

    As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. However, there are still two problems in three-dimensional (3D) TCAI. Firstly, the large-scale reference-signal matrix based on meshing the 3D imaging area creates a heavy computational burden, thus leading to unsatisfactory efficiency. Secondly, it is difficult to resolve the target under low signal-to-noise ratio (SNR). In this paper, we propose a 3D imaging method based on matched filtering (MF) and convolutional neural network (CNN), which can reduce the computational burden and achieve high-resolution imaging for low SNR targets. In terms of the frequency-hopping (FH) signal, the original echo is processed with MF. By extracting the processed echo in different spike pulses separately, targets in different imaging planes are reconstructed simultaneously to decompose the global computational complexity, and then are synthesized together to reconstruct the 3D target. Based on the conventional TCAI model, we deduce and build a new TCAI model based on MF. Furthermore, the convolutional neural network (CNN) is designed to teach the MF-TCAI how to reconstruct the low SNR target better. The experimental results demonstrate that the MF-TCAI achieves impressive performance on imaging ability and efficiency under low SNR. Moreover, the MF-TCAI has learned to better resolve the low-SNR 3D target with the help of CNN. In summary, the proposed 3D TCAI can achieve: (1) low-SNR high-resolution imaging by using MF; (2) efficient 3D imaging by downsizing the large-scale reference-signal matrix; and (3) intelligent imaging with CNN. Therefore, the TCAI based on MF and CNN has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
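
    The matched-filtering stage itself is standard: the received echo is correlated with a conjugate time-reversed copy of the transmitted reference, which maximises output SNR under white noise. A minimal one-dimensional Python sketch follows (the signal shapes are illustrative; the actual TCAI processing operates on the frequency-hopping echoes described above):

      import numpy as np

      def matched_filter(echo, reference):
          # Correlate the echo with the conjugate time-reversed reference,
          # i.e. convolve with the matched-filter impulse response.
          kernel = np.conj(reference[::-1])
          return np.convolve(echo, kernel, mode="same")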

  13. Entropy-Weighted Instance Matching Between Different Sourcing Points of Interest

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-01-01

    Full Text Available The crucial problem for integrating geospatial data is finding the corresponding objects (the counterparts) from different sources. Most current studies focus on object matching with individual attributes such as spatial, name, or other attributes, which avoids the difficulty of integrating those attributes, but at the cost of less effective matching. In this study, we propose an approach for matching instances by integrating heterogeneous attributes with the allocation of suitable attribute weights via information entropy. First, a normalized similarity formula is developed, which can simplify the calculation of spatial attribute similarity. Second, sound-based and word segmentation-based methods are adopted to eliminate the semantic ambiguity when there is a lack of a normative coding standard in geospatial data to express the name attribute. Third, category mapping is established to address the heterogeneity among different classifications. Finally, to address the non-linear characteristic of attribute similarity, the weights of the attributes are calculated from the entropy of the attributes. Experiments demonstrate that the Entropy-Weighted Approach (EWA) has good performance both in terms of precision and recall for instance matching from different data sets.
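
    The entropy-based weighting in the final step can be illustrated with the standard entropy-weight scheme. This is a sketch consistent with the description, not the authors' exact formulation; it assumes more than one candidate and per-attribute similarity values in [0, 1].

      import numpy as np

      def entropy_weights(sim):
          # sim: (n_candidates, n_attributes) similarity matrix; attributes whose
          # values discriminate more between candidates (lower entropy) get
          # larger weights. Requires n_candidates > 1.
          n = sim.shape[0]
          p = sim / np.maximum(sim.sum(axis=0, keepdims=True), 1e-12)
          logs = np.log(np.where(p > 0, p, 1.0))
          e = -(p * logs).sum(axis=0) / np.log(n)
          d = 1.0 - e
          return d / np.maximum(d.sum(), 1e-12)

      def overall_similarity(sim):
          # Weighted overall similarity of each candidate to the query instance.
          return sim @ entropy_weights(sim)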

  14. Transport Properties, Mechanical Behavior, Thermal and Chemical Resistance of Asymmetric Flat Sheet Membrane Prepared from PSf/PVDF Blended Membrane on Gauze Supporting Layer

    Directory of Open Access Journals (Sweden)

    Nita Kusumawati

    2018-05-01

    Full Text Available An asymmetric polysulfone (PSf) membrane was prepared by the phase inversion method, blended with polyvinylidene fluoride (PVDF), on a gauze solid support. The casting solution composition was optimized to obtain the PSf/PVDF membrane with the best characteristics and permeability. The results show that blending PSf with PVDF polymer using the phase inversion method is very helpful in creating an asymmetric porous membrane. Increasing the PVDF level in the casting solution increased the formation of the asymmetric structure and the corresponding membrane flux. Thermal testing by Differential Scanning Calorimetry (DSC)-Thermal Gravimetric Analysis (TGA) shows that the membrane is resistant up to a temperature of 460 °C. Membrane resistance to acid is evident from the absence of detectable changes in the infrared spectra after immersion in H2SO4 6–98 v/v%, whereas a membrane colour change from white to brownish and black is detected after immersion in sodium hydroxide (NaOH) 0.15–80 w/v%.

  15. Body-image, quality of life and psychological distress: a comparison between kidney transplant patients and a matching healthy sample.

    Science.gov (United States)

    Yagil, Yaron; Geller, Shulamit; Levy, Sigal; Sidi, Yael; Aharoni, Shiri

    2018-04-01

    The purpose of the current study was to assess the uniqueness of the condition of kidney transplant recipients in comparison to a sample of matching healthy peers in relation to body-image dissatisfaction and identification, quality of life, and psychological distress (PsD). Participants were 45 kidney transplant recipients who were under follow-up care at a Transplant Unit of a major Medical Center, and a sample of 45 matching healthy peers. Measures were taken using self-report questionnaires [Body-Image Ideals Questionnaire (BIIQ), Body Identification Questionnaire (BIQ), Brief Symptoms Inventory (BSI), and the SF-12]. The major findings were the following: (i) kidney transplant recipients reported lower levels of quality of life and higher levels of PsD when compared to their healthy peers; (ii) no difference in body-image dissatisfaction was found between the two studied groups; (iii) significant correlations between body-image dissatisfaction, quality of life, and PsD were found only in the kidney transplant recipients. The kidney transplantation condition has a moderating effect on the association between body-image dissatisfaction and PsD but not on the association between body-image dissatisfaction and quality of life; (iv) kidney transplant recipients experienced higher levels of body identification than did their healthy peers. Taken together, these findings highlight the unique condition of kidney transplant recipients, as well as the function that body-image plays within the self.

  16. Frameless image registration of X-ray CT and SPECT by volume matching

    International Nuclear Information System (INIS)

    Tanaka, Yuko; Kihara, Tomohiko; Yui, Nobuharu; Kinoshita, Fujimi; Kamimura, Yoshitsugu; Yamada, Yoshifumi.

    1998-01-01

    Image registration of functional (SPECT) and morphological (X-ray CT/MRI) images is studied in order to improve the accuracy and quantitative capability of image diagnosis. We have developed a new frameless registration method for X-ray CT and SPECT images that uses the transmission CT image acquired for absorption correction of the SPECT images. This automated registration method calculates the transformation matrix between the two image coordinate systems by an optimization method. The registration is based on the similar physical properties of the X-ray CT and transmission CT images; the three-dimensional overlap of the bone region is used for image matching. We verified with a phantom test that the method provides registration errors within two millimeters. We also visually evaluated the accuracy of the registration method in an application study with SPECT, X-ray CT, and transmission CT head images. This method can be carried out accurately without any frames. We expect this registration method to become an efficient tool for improving image diagnosis and medical treatment. (author)

  17. Object-based connectedness facilitates matching

    NARCIS (Netherlands)

    Koning, A.R.; Lier, R.J. van

    2003-01-01

    In two matching tasks, participants had to match two images of object pairs. Image-based (IB) connectedness refers to connectedness between the objects in an image. Object-based (OB) connectedness refers to connectedness between the interpreted objects. In Experiment 1, a monocular depth cue

  18. A patch-based method for the evaluation of dense image matching quality

    NARCIS (Netherlands)

    Zhang, Zhenchao; Gerke, Markus; Vosselman, George; Yang, Michael Ying

    2018-01-01

    Airborne laser scanning and photogrammetry are two main techniques to obtain 3D data representing the object surface. Due to the high cost of laser scanning, we want to explore the potential of using point clouds derived by dense image matching (DIM), as effective alternatives to laser scanning

  19. Evaluation of different set-up error corrections on dose-volume metrics in prostate IMRT using CBCT images

    International Nuclear Information System (INIS)

    Hirose, Yoshinori; Tomita, Tsuneyuki; Kitsuda, Kenji; Notogawa, Takuya; Miki, Katsuhito; Nakamura, Mitsuhiro; Nakamura, Kiyonao; Ishigaki, Takashi

    2014-01-01

    We investigated the effect of different set-up error corrections on dose-volume metrics in intensity-modulated radiotherapy (IMRT) for prostate cancer under different planning target volume (PTV) margin settings using cone-beam computed tomography (CBCT) images. A total of 30 consecutive patients who underwent IMRT for prostate cancer were retrospectively analysed, and 7-14 CBCT datasets were acquired per patient. Interfractional variations in dose-volume metrics were evaluated under six different set-up error corrections, including tattoo, bony anatomy, and four different target matching groups. Set-up errors were incorporated into planning the isocenter position, and dose distributions were recalculated on CBCT images. These processes were repeated under two different PTV margin settings. In the on-line bony anatomy matching groups, systematic error (Σ) was 0.3 mm, 1.4 mm, and 0.3 mm in the left-right, anterior-posterior (AP), and superior-inferior directions, respectively. Σ in three successive off-line target matchings was finally comparable with that in the on-line bony anatomy matching in the AP direction. Although doses to the rectum and bladder wall were reduced for a small PTV margin, averaged reductions in the volume receiving 100% of the prescription dose from planning were within 2.5% under all PTV margin settings for all correction groups, with the exception of the tattoo set-up error correction only (≥ 5.0%). Analysis of variance showed no significant difference between on-line bony anatomy matching and target matching. While variations between the planned and delivered doses were smallest when target matching was applied, the use of bony anatomy matching still ensured the planned doses. (author)

  20. A segmentation and point-matching enhanced efficient deformable image registration method for dose accumulation between HDR CT images

    International Nuclear Information System (INIS)

    Zhen, Xin; Chen, Haibin; Zhou, Linghong; Yan, Hao; Jiang, Steve; Jia, Xun; Gu, Xuejun; Mell, Loren K; Yashar, Catheryn M; Cervino, Laura

    2015-01-01

    Deformable image registration (DIR) of fractional high-dose-rate (HDR) CT images is challenging due to the presence of applicators in the brachytherapy image. Point-to-point correspondence fails because of the undesired deformation vector fields (DVF) propagated from the applicator region (AR) to the surrounding tissues, which can potentially introduce significant DIR errors in dose mapping. This paper proposes a novel segmentation and point-matching enhanced efficient DIR (named SPEED) scheme to facilitate dose accumulation among HDR treatment fractions. In SPEED, a semi-automatic seed point generation approach is developed to obtain the incremented fore/background point sets to feed the random walks algorithm, which is used to segment and remove the AR, leaving empty AR cavities in the HDR CT images. A feature-based ‘thin-plate-spline robust point matching’ algorithm is then employed for AR cavity surface point matching. With the resulting mapping, a DVF defined on each voxel is estimated by B-spline approximation, which serves as the initial DVF for the subsequent Demons-based DIR between the AR-free HDR CT images. The calculated DVF via Demons combined with the initial one serves as the final DVF to map doses between HDR fractions. The segmentation and registration accuracy are quantitatively assessed by nine clinical HDR cases from three gynecological cancer patients. The quantitative analysis and visual inspection of the DIR results indicate that SPEED can suppress the impact of the applicator on DIR, and accurately register HDR CT images as well as deform and add interfractional HDR doses. (paper)

  1. A segmentation and point-matching enhanced efficient deformable image registration method for dose accumulation between HDR CT images

    Science.gov (United States)

    Zhen, Xin; Chen, Haibin; Yan, Hao; Zhou, Linghong; Mell, Loren K.; Yashar, Catheryn M.; Jiang, Steve; Jia, Xun; Gu, Xuejun; Cervino, Laura

    2015-04-01

    Deformable image registration (DIR) of fractional high-dose-rate (HDR) CT images is challenging due to the presence of applicators in the brachytherapy image. Point-to-point correspondence fails because of the undesired deformation vector fields (DVF) propagated from the applicator region (AR) to the surrounding tissues, which can potentially introduce significant DIR errors in dose mapping. This paper proposes a novel segmentation and point-matching enhanced efficient DIR (named SPEED) scheme to facilitate dose accumulation among HDR treatment fractions. In SPEED, a semi-automatic seed point generation approach is developed to obtain the incremented fore/background point sets to feed the random walks algorithm, which is used to segment and remove the AR, leaving empty AR cavities in the HDR CT images. A feature-based ‘thin-plate-spline robust point matching’ algorithm is then employed for AR cavity surface point matching. With the resulting mapping, a DVF defined on each voxel is estimated by B-spline approximation, which serves as the initial DVF for the subsequent Demons-based DIR between the AR-free HDR CT images. The calculated DVF via Demons combined with the initial one serves as the final DVF to map doses between HDR fractions. The segmentation and registration accuracy are quantitatively assessed by nine clinical HDR cases from three gynecological cancer patients. The quantitative analysis and visual inspection of the DIR results indicate that SPEED can suppress the impact of the applicator on DIR, and accurately register HDR CT images as well as deform and add interfractional HDR doses.

  2. MO-C-17A-11: A Segmentation and Point Matching Enhanced Deformable Image Registration Method for Dose Accumulation Between HDR CT Images

    International Nuclear Information System (INIS)

    Zhen, X; Chen, H; Zhou, L; Yan, H; Jiang, S; Jia, X; Gu, X; Mell, L; Yashar, C; Cervino, L

    2014-01-01

    Purpose: To propose and validate a novel and accurate deformable image registration (DIR) scheme to facilitate dose accumulation among treatment fractions of high-dose-rate (HDR) gynecological brachytherapy. Method: We have developed a method to adapt DIR algorithms to gynecologic anatomies with HDR applicators by incorporating a segmentation step and a point-matching step into an existing DIR framework. In the segmentation step, the random walks algorithm is used to accurately segment and remove the applicator region (AR) in the HDR CT image. A semi-automatic seed point generation approach is developed to obtain the incremented foreground and background point sets to feed the random walks algorithm. In the subsequent point-matching step, a feature-based thin-plate spline-robust point matching (TPS-RPM) algorithm is employed for AR surface point matching. With the resulting mapping, a DVF characteristic of the deformation between the two AR surfaces is generated by B-spline approximation, which serves as the initial DVF for the following Demons DIR between the two AR-free HDR CT images. Finally, the calculated DVF via Demons combined with the initial one serves as the final DVF to map doses between HDR fractions. Results: The segmentation and registration accuracy are quantitatively assessed by nine clinical HDR cases from three gynecological cancer patients. The quantitative results as well as the visual inspection of the DIR indicate that our proposed method can suppress the interference of the applicator with the DIR algorithm, and accurately register HDR CT images as well as deform and add interfractional HDR doses. Conclusions: We have developed a novel and robust DIR scheme that can perform registration between HDR gynecological CT images and yield accurate registration results. This new DIR scheme has potential for accurate interfractional HDR dose accumulation. This work is supported in part by the National Natural Science Foundation of China (no 30970866 and no

  3. Evaluation of an automated deformable image matching method for quantifying lung motion in respiration-correlated CT images

    International Nuclear Information System (INIS)

    Pevsner, A.; Davis, B.; Joshi, S.; Hertanto, A.; Mechalakos, J.; Yorke, E.; Rosenzweig, K.; Nehmeh, S.; Erdi, Y.E.; Humm, J.L.; Larson, S.; Ling, C.C.; Mageras, G.S.

    2006-01-01

    We have evaluated an automated registration procedure for predicting tumor and lung deformation based on CT images of the thorax obtained at different respiration phases. The method uses a viscous fluid model of tissue deformation to map voxels from one CT dataset to another. To validate the deformable matching algorithm we used a respiration-correlated CT protocol to acquire images at different phases of the respiratory cycle for six patients with nonsmall cell lung carcinoma. The position and shape of the deformable gross tumor volumes (GTV) at the end-inhale (EI) phase predicted by the algorithm was compared to those drawn by four observers. To minimize interobserver differences, all observers used the contours drawn by a single observer at end-exhale (EE) phase as a guideline to outline GTV contours at EI. The differences between model-predicted and observer-drawn GTV surfaces at EI, as well as differences between structures delineated by observers at EI (interobserver variations) were evaluated using a contour comparison algorithm written for this purpose, which determined the distance between the two surfaces along different directions. The mean and 90% confidence interval for model-predicted versus observer-drawn GTV surface differences over all patients and all directions were 2.6 and 5.1 mm, respectively, whereas the mean and 90% confidence interval for interobserver differences were 2.1 and 3.7 mm. We have also evaluated the algorithm's ability to predict normal tissue deformations by examining the three-dimensional (3-D) vector displacement of 41 landmarks placed by each observer at bronchial and vascular branch points in the lung between the EE and EI image sets (mean and 90% confidence interval displacements of 11.7 and 25.1 mm, respectively). The mean and 90% confidence interval discrepancy between model-predicted and observer-determined landmark displacements over all patients were 2.9 and 7.3 mm, whereas interobserver discrepancies were 2.8 and 6

  4. High throughput static and dynamic small animal imaging using clinical PET/CT: potential preclinical applications

    International Nuclear Information System (INIS)

    Aide, Nicolas; Desmonts, Cedric; Agostini, Denis; Bardet, Stephane; Bouvard, Gerard; Beauregard, Jean-Mathieu; Roselt, Peter; Neels, Oliver; Beyer, Thomas; Kinross, Kathryn; Hicks, Rodney J.

    2010-01-01

    The objective of the study was to evaluate state-of-the-art clinical PET/CT technology in performing static and dynamic imaging of several mice simultaneously. A mouse-sized phantom was imaged mimicking simultaneous imaging of three mice with computation of recovery coefficients (RCs) and spillover ratios (SORs). Fifteen mice harbouring abdominal or subcutaneous tumours were imaged on clinical PET/CT with point spread function (PSF) reconstruction after injection of [18F]fluorodeoxyglucose or [18F]fluorothymidine. Three of these mice were imaged alone and simultaneously at radial positions -5, 0 and 5 cm. The remaining 12 tumour-bearing mice were imaged in groups of 3 to establish the quantitative accuracy of PET data using ex vivo gamma counting as the reference. Finally, a dynamic scan was performed in three mice simultaneously after the injection of 68 Ga-ethylenediaminetetraacetic acid (EDTA). For typical lesion sizes of 7-8 mm, phantom experiments indicated RCs of 0.42 and 0.76 for ordered subsets expectation maximization (OSEM) and PSF reconstruction, respectively. For PSF reconstruction, SOR air and SOR water were 5.3 and 7.5%, respectively. Strong correlations between PET data and ex vivo gamma counting were observed (r² = 0.97; r² = 0.98, slope = 0.89; r² = 0.96, slope = 0.62), and time-activity curves could be derived from the 68 Ga-EDTA dynamic acquisition. New generation clinical PET/CT can be used for simultaneous imaging of multiple small animals in experiments requiring high throughput and where a dedicated small animal PET system is not available. (orig.)

  5. Preliminary investigations into macroscopic attenuated total reflection-fourier transform infrared imaging of intact spherical domains: spatial resolution and image distortion.

    Science.gov (United States)

    Everall, Neil J; Priestnall, Ian M; Clarke, Fiona; Jayes, Linda; Poulter, Graham; Coombs, David; George, Michael W

    2009-03-01

    This paper describes preliminary investigations into the spatial resolution of macro attenuated total reflection (ATR) Fourier transform infrared (FT-IR) imaging and the distortions that arise when imaging intact, convex domains, using spheres as an extreme example. The competing effects of shallow evanescent wave penetration and blurring due to finite spatial resolution meant that spheres within the range 20-140 microm all appeared to be approximately the same size ( approximately 30-35 microm) when imaged with a numerical aperture (NA) of approximately 0.2. A very simple model was developed that predicted this extreme insensitivity to particle size. On the basis of these studies, it is anticipated that ATR imaging at this NA will be insensitive to the size of intact highly convex objects. A higher numerical aperture device should give a better estimate of the size of small spheres, owing to superior spatial resolution, but large spheres should still appear undersized due to the shallow sampling depth. An estimate of the point spread function (PSF) was required in order to develop and apply the model. The PSF was measured by imaging a sharp interface; assuming an Airy profile, the PSF width (distance from central maximum to first minimum) was estimated to be approximately 20 and 30 microm for IR bands at 1600 and 1000 cm(-1), respectively. This work has two significant limitations. First, underestimation of domain size only arises when imaging intact convex objects; if surfaces are prepared that randomly and representatively section through domains, the images can be analyzed to calculate parameters such as domain size, area, and volume. Second, the model ignores reflection and refraction and assumes weak absorption; hence, the predicted intensity profiles are not expected to be accurate; they merely give a rough estimate of the apparent sphere size. Much further work is required to place the field of quantitative ATR-FT-IR imaging on a sound basis.

  6. Nonlinear matching measure for the analysis of on-off type DNA microarray images

    Science.gov (United States)

    Kim, Jong D.; Park, Misun; Kim, Jongwon

    2003-07-01

    In this paper, we propose a new nonlinear matching measure for automatic analysis of on-off type DNA microarray images in which the hybridized spots are detected by template matching. The target spots of HPV DNA chips are designed for genotyping the human papilloma virus (HPV). The proposed measure is obtained by binary thresholding over the whole template region and counting the number of white pixels inside the spotted area. This measure is evaluated in terms of the accuracy of the estimated marker location and shows better performance than the normalized covariance.
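
    The measure as described is straightforward to state in code; the sketch below (the names and the explicit spot mask are illustrative, not from the paper) binarises a template-sized patch and counts the white pixels falling inside the expected spot area:

      import numpy as np

      def spot_match_measure(patch, spot_mask, thresh):
          # patch: template-sized image region; spot_mask: boolean mask of the
          # expected spot area; thresh: binarisation threshold.
          binary = patch > thresh
          return int(np.count_nonzero(binary & spot_mask))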

  7. Preparation, characterization and gas permeation study of PSf/MgO nanocomposite membrane

    Directory of Open Access Journals (Sweden)

    S. M. Momeni

    2013-09-01

    Full Text Available Nanocomposite membranes composed of polymer and inorganic nanoparticles are a novel method to enhance gas separation performance. In this study, membranes were fabricated from polysulfone (PSf) containing magnesium oxide (MgO) nanoparticles and gas permeation properties of the resulting membranes were investigated. Membranes were prepared by solution blending and phase inversion methods. Morphology of the membranes, void formations, MgO distribution and aggregates were observed by SEM analysis. Furthermore, thermal stability, residual solvent in the membrane film and structural ruination of membranes were analyzed by thermal gravimetric analysis (TGA). The effects of MgO nanoparticles on the glass transition temperature (Tg) of the prepared nanocomposites were studied by differential scanning calorimetry (DSC). The Tg of nanocomposite membranes increased with MgO loading. Fourier transform infrared (FTIR) spectra of nanocomposite membranes were analyzed to identify the variations of the bonds. The results obtained from gas permeation experiments with a constant pressure setup showed that adding MgO nanoparticles to the polymeric membrane structure increased the permeability of the membranes. At 30 wt% MgO loading, the CO2 permeability was enhanced from 25.75×10⁻¹⁶ to 47.12×10⁻¹⁶ mol·m/(m²·s·Pa) and the CO2/CH4 selectivity decreased from 30.84 to 25.65 when compared with pure PSf. For H2, the permeability was enhanced from 44.05×10⁻¹⁶ to 67.3×10⁻¹⁶ mol·m/(m²·s·Pa), whereas the H2/N2 selectivity decreased from 47.11 to 33.58.

  8. Validation of the blurring of a small object on CT images calculated on the basis of three-dimensional spatial resolution

    International Nuclear Information System (INIS)

    Okubo, Masaki; Wada, Shinichi; Saito, Masatoshi

    2005-01-01

    We determine the three-dimensional (3D) blurring of a small object in computed tomography (CT) images, calculated on the basis of the 3D spatial resolution. The resolution was characterized by the point spread function (PSF), the line spread function (LSF) and the slice sensitivity profile (SSP). We first systematically arranged the expressions in a model of the imaging system in order to calculate 3D images under various conditions of spatial resolution. As a small object, we made a blood vessel phantom in which the direction of the vessel was parallel neither to the xy scan-plane nor to the z-axis perpendicular to the scan-plane. Therefore, when the phantom is scanned, blurring is induced along all axes of the image, and the 3D spatial resolution is essential to predict the image blurring of the phantom. The LSF and SSP were measured on our scanner, and the two-dimensional (2D) PSF in the scan-plane was derived from the LSF by solving an integral equation. We obtained 3D images by convolving the 3D object function of the phantom with both the 2D PSF and the SSP, corresponding to a 3D convolution. The calculated images showed good agreement with the scanned images. Our technique of determining 3D blurring offers an accuracy advantage in 3D shape (size) and density measurements of small objects. (author)
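
    A simplified sketch of the 3D convolution described above, approximating both the in-plane PSF and the SSP by Gaussians (the widths and the toy phantom are illustrative; the paper derives the 2D PSF from a measured LSF):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def blur_ct_volume(obj, psf_sigma_xy, ssp_sigma_z):
            """Convolve a 3D object function with a 2D in-plane PSF and a 1D
            slice sensitivity profile (SSP) along z, i.e. a full 3D blur."""
            # axes ordered (z, y, x); sigma given per axis in voxels
            return gaussian_filter(obj, sigma=(ssp_sigma_z, psf_sigma_xy, psf_sigma_xy))

        # example: a thin oblique "vessel" of ones in an empty volume
        vol = np.zeros((64, 64, 64))
        idx = np.arange(64)
        vol[idx, idx // 2 + 16, idx // 2 + 16] = 1.0
        blurred = blur_ct_volume(vol, psf_sigma_xy=1.5, ssp_sigma_z=2.0)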

  9. MEMORY EFFICIENT SEMI-GLOBAL MATCHING

    Directory of Open Access Journals (Sweden)

    H. Hirschmüller

    2012-07-01

    Full Text Available Semi-Global Matching (SGM) is a robust stereo method that has proven its usefulness in various applications ranging from aerial image matching to driver assistance systems. It supports pixelwise matching for maintaining sharp object boundaries and fine structures and can be implemented efficiently on different computation hardware. Furthermore, the method is not sensitive to the choice of parameters. The structure of the matching algorithm is well suited to processing by highly parallel hardware, e.g. FPGAs and GPUs. The drawback of SGM is the temporary memory requirement that depends on the number of pixels and the disparity range. On the one hand this results in long idle times due to the bandwidth limitations of the external memory, and on the other hand the capacity bounds are quickly reached. A full HD image with a size of 1920 × 1080 pixels and a disparity range of 512 pixels already requires 1 billion elements, i.e. at least several GB of RAM depending on the element size, which is not available on standard FPGA and GPU boards. The novel memory-efficient (eSGM) method is an advancement in which the amount of temporary memory depends only on the number of pixels and not on the disparity range. This permits matching of huge images in one piece and reduces the memory-bandwidth requirements for real-time mobile robotics. The feature comes at the cost of 50% more compute operations compared to SGM. This overhead is compensated by the previously idle compute logic within the FPGA and the GPU and therefore results in an overall performance increase. We show that eSGM produces the same high-quality disparity images as SGM and demonstrate its performance both on an aerial image pair with 142 MPixel and within a real-time mobile robotic application. We have implemented the new method on the CPU, GPU and FPGA. We conclude that eSGM is advantageous for a GPU implementation and essential for an implementation on our FPGA.
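
    A quick check of the memory figure quoted above: the number of temporary cost values is simply width × height × disparity range (the bytes-per-element values below are illustrative):

        width, height, disparities = 1920, 1080, 512
        elements = width * height * disparities            # ≈ 1.06e9 cost values
        for bytes_per_element in (1, 2, 4):
            gib = elements * bytes_per_element / 2**30
            print(f"{bytes_per_element} B/element -> {gib:.1f} GiB of temporary memory")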

  10. OPERA, an automatic PSF reconstruction software for Shack-Hartmann AO systems: application to Altair

    Science.gov (United States)

    Jolissaint, Laurent; Veran, Jean-Pierre; Marino, Jose

    2004-10-01

    When doing high angular resolution imaging with adaptive optics (AO), it is of crucial importance to have an accurate knowledge of the point spread function associated with each observation. Applications are numerous: image contrast enhancement by deconvolution, improved photometry and astrometry, as well as real-time AO performance evaluation. In this paper, we present our work on automatic PSF reconstruction based on control loop data acquired simultaneously with the observation. This problem has already been solved for curvature AO systems. To adapt this method to another type of WFS, a specific analytical noise propagation model must be established. For the Shack-Hartmann WFS, we are able to derive a very accurate estimate of the noise on each slope measurement, based on the covariances of the WFS CCD pixel values in the corresponding sub-aperture. These covariances can be either derived off-line from telemetry data, or calculated by the AO computer during the acquisition. We present improved methods to determine (1) r0 from the DM drive commands, which includes an estimation of the outer scale L0, and (2) the contribution of the high spatial frequency component of the turbulent phase, which is not corrected by the AO system and is scaled by r0. This new method has been implemented in an IDL-based software package called OPERA (Performance of Adaptive Optics). We have tested OPERA on Altair, the recently commissioned Gemini-North AO system, and present our preliminary results. We also summarize the AO data required to run OPERA on any other AO system.

  11. Static telescope aberration measurement using lucky imaging techniques

    Science.gov (United States)

    López-Marrero, Marcos; Rodríguez-Ramos, Luis Fernando; Marichal-Hernández, José Gil; Rodríguez-Ramos, José Manuel

    2012-07-01

    A procedure has been developed to compute static aberrations once the telescope PSF has been measured with the lucky imaging technique, using a star close to the object of interest as the point source probing the optical system. This PSF is iteratively turned into a phase map at the pupil using the Gerchberg-Saxton algorithm and then converted into the appropriate actuation information for a deformable mirror having a low actuator count but large stroke capability. The main advantage of this procedure is that it corrects static aberrations for the specific pointing direction of interest, without the need for a wavefront sensor.
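
    A minimal sketch of the Gerchberg-Saxton step described above, assuming a measured PSF image and a known binary pupil aperture of the same array size (array names and the iteration count are illustrative):

        import numpy as np

        def gerchberg_saxton(psf, aperture, n_iter=100):
            """Recover the pupil phase whose focal-plane intensity matches the
            measured PSF, under a known pupil-amplitude (aperture) constraint."""
            focal_amp = np.sqrt(psf)
            pupil = aperture * np.exp(1j * np.random.uniform(0, 2 * np.pi, aperture.shape))
            for _ in range(n_iter):
                focal = np.fft.fft2(pupil)
                focal = focal_amp * np.exp(1j * np.angle(focal))   # impose measured amplitude
                pupil = np.fft.ifft2(focal)
                pupil = aperture * np.exp(1j * np.angle(pupil))    # impose pupil support
            return np.angle(pupil) * aperture                      # estimated phase map

        # the phase map would then be converted into actuator commands for the
        # low-actuator-count, large-stroke deformable mirror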

  12. A NEW IMAGE REGISTRATION METHOD FOR GREY IMAGES

    Institute of Scientific and Technical Information of China (English)

    Nie Xuan; Zhao Rongchun; Jiang Zetao

    2004-01-01

    The proposed algorithm relies on a group of new formulas for calculating the tangent slope of image edge curves, so as to exploit their angle features. These tangent-angle features are used to estimate the rotation parameters of the geometric transform automatically, enabling rough matching of images with a large rotation difference. After angle compensation, matching point sets are searched for with a correlation criterion and the parameters of the affine transform are calculated, enabling higher-precision correction of rotation and translation. Finally, precise matching of the images is achieved with a relaxation iteration method. Compared with the registration approach based on wavelet direction-angle features, the matching algorithm based on the tangent features of image edges is more robust and achieves precise registration of various images. Furthermore, it is also helpful in graphics matching.

  13. Enhanced simulator software for image validation and interpretation for multimodal localization super-resolution fluorescence microscopy

    Science.gov (United States)

    Erdélyi, Miklós; Sinkó, József; Gajdos, Tamás; Novák, Tibor

    2017-02-01

    Optical super-resolution techniques such as single molecule localization have become one of the most dynamically developed areas in optical microscopy. These techniques routinely provide images of fixed cells or tissues with sub-diffraction spatial resolution, and can even be applied for live cell imaging under appropriate circumstances. Localization techniques are based on the precise fitting of the point spread functions (PSF) to the measured images of stochastically excited, identical fluorescent molecules. These techniques require controlling the rate between the on, off and the bleached states, keeping the number of active fluorescent molecules at an optimum value, so their diffraction limited images can be detected separately both spatially and temporally. Because of the numerous (and sometimes unknown) parameters, the imaging system can only be handled stochastically. For example, the rotation of the dye molecules obscures the polarization dependent PSF shape, and only an averaged distribution - typically estimated by a Gaussian function - is observed. TestSTORM software was developed to generate image stacks for traditional localization microscopes, where localization meant the precise determination of the spatial position of the molecules. However, additional optical properties (polarization, spectra, etc.) of the emitted photons can be used for further monitoring the chemical and physical properties (viscosity, pH, etc.) of the local environment. The image stack generating program was upgraded by several new features, such as: multicolour, polarization dependent PSF, built-in 3D visualization, structured background. These features make the program an ideal tool for optimizing the imaging and sample preparation conditions.

  14. Least median of squares filtering of locally optimal point matches for compressible flow image registration

    International Nuclear Information System (INIS)

    Castillo, Edward; Guerrero, Thomas; Castillo, Richard; White, Benjamin; Rojo, Javier

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least-squares compressible flow model that describes the displacement of a single voxel and lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert-determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. (paper)
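
    A minimal sketch of least-median-of-squares filtering of point matches, shown here for a simple translation model (the model, sample counts and cut-off are illustrative; the paper filters local minimizers of a compressible flow model and also uses a forward search):

        import numpy as np

        def lmeds_translation(src, dst, n_trials=500, seed=None):
            """Robustly estimate a translation from matched points and flag
            matches whose residual is far above the best median."""
            rng = np.random.default_rng(seed)
            best_t, best_med = None, np.inf
            for _ in range(n_trials):
                i = rng.integers(len(src))                 # minimal sample: one match
                t = dst[i] - src[i]
                med = np.median(np.sum((dst - (src + t)) ** 2, axis=1))
                if med < best_med:
                    best_med, best_t = med, t
            resid = np.sum((dst - (src + best_t)) ** 2, axis=1)
            inliers = resid <= 9.0 * best_med              # heuristic cut-off
            return best_t, inliers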

  15. Control of the ORR-PSF pressure-vessel surveillance irradiation experiment temperature

    International Nuclear Information System (INIS)

    Miller, L.F.

    1982-01-01

    Control of the Oak Ridge Research Reactor Pool Side Facility (ORR-PSF) pressure vessel surveillance irradiation experiment temperature is implemented by digital computer control of electrical heaters under fixed cooling conditions. Cooling is accomplished with continuous flows of water in pipes between specimen sets and of helium-neon gas in the specimen set housings. Control laws are obtained from solutions of the discrete-time Riccati equation and are implemented with direct digital control of solid-state relays in the electrical heater circuit. The power dissipated by the heaters is determined by variac settings and the percentage of time that the solid-state relays allow power to be supplied to the heaters. Control demands are updated every forty seconds.
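
    A minimal sketch of deriving a control law from the discrete-time Riccati equation, using a toy first-order thermal model (the model and all numerical values are illustrative, not taken from the experiment):

        import numpy as np
        from scipy.linalg import solve_discrete_are

        # x[k+1] = A x[k] + B u[k]; state = temperature error, input = heater power
        A = np.array([[0.95]])
        B = np.array([[0.08]])
        Q = np.array([[1.0]])      # state weighting
        R = np.array([[0.1]])      # control-effort weighting

        P = solve_discrete_are(A, B, Q, R)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback law u[k] = -K x[k]
        print("feedback gain K =", K)

        # in the experiment the demanded power is realized by duty-cycling
        # solid-state relays in the heater circuit, updated every 40 s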

  16. Tilted Light Sheet Microscopy with 3D Point Spread Functions for Single-Molecule Super-Resolution Imaging in Mammalian Cells.

    Science.gov (United States)

    Gustavsson, Anna-Karin; Petrov, Petar N; Lee, Maurice Y; Shechtman, Yoav; Moerner, W E

    2018-02-01

    To obtain a complete picture of subcellular nanostructures, cells must be imaged with high resolution in all three dimensions (3D). Here, we present tilted light sheet microscopy with 3D point spread functions (TILT3D), an imaging platform that combines a novel, tilted light sheet illumination strategy with engineered long axial range point spread functions (PSFs) for low-background, 3D super localization of single molecules as well as 3D super-resolution imaging in thick cells. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The axial positions of the single molecules are encoded in the shape of the PSF rather than in the position or thickness of the light sheet, and the light sheet can therefore be formed using simple optics. The result is flexible and user-friendly 3D super-resolution imaging with tens of nm localization precision throughout thick mammalian cells. We validated TILT3D for 3D super-resolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina using the double-helix PSF for single-molecule detection and the recently developed Tetrapod PSF for fiducial bead tracking and live axial drift correction. We envision TILT3D to become an important tool not only for 3D super-resolution imaging, but also for live whole-cell single-particle and single-molecule tracking.

  17. Tilted light sheet microscopy with 3D point spread functions for single-molecule super-resolution imaging in mammalian cells

    Science.gov (United States)

    Gustavsson, Anna-Karin; Petrov, Petar N.; Lee, Maurice Y.; Shechtman, Yoav; Moerner, W. E.

    2018-02-01

    To obtain a complete picture of subcellular nanostructures, cells must be imaged with high resolution in all three dimensions (3D). Here, we present tilted light sheet microscopy with 3D point spread functions (TILT3D), an imaging platform that combines a novel, tilted light sheet illumination strategy with engineered long axial range point spread functions (PSFs) for low-background, 3D super localization of single molecules as well as 3D super-resolution imaging in thick cells. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The axial positions of the single molecules are encoded in the shape of the PSF rather than in the position or thickness of the light sheet, and the light sheet can therefore be formed using simple optics. The result is flexible and user-friendly 3D super-resolution imaging with tens of nm localization precision throughout thick mammalian cells. We validated TILT3D for 3D superresolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina using the double-helix PSF for single-molecule detection and the recently developed Tetrapod PSF for fiducial bead tracking and live axial drift correction. We envision TILT3D to become an important tool not only for 3D super-resolution imaging, but also for live whole-cell single-particle and single-molecule tracking.

  18. Predicting CT Image From MRI Data Through Feature Matching With Learned Nonlinear Local Descriptors.

    Science.gov (United States)

    Yang, Wei; Zhong, Liming; Chen, Yang; Lin, Liyan; Lu, Zhentai; Liu, Shupeng; Wu, Yao; Feng, Qianjin; Chen, Wufan

    2018-04-01

    Attenuation correction for positron-emission tomography (PET)/magnetic resonance (MR) hybrid imaging systems and dose planning for MR-based radiation therapy remain challenging due to insufficient high-energy photon attenuation information. We present a novel approach that uses learned nonlinear local descriptors and feature matching to predict pseudo computed tomography (pCT) images from T1-weighted and T2-weighted magnetic resonance imaging (MRI) data. The nonlinear local descriptors are obtained by projecting the linear descriptors into a nonlinear high-dimensional space using an explicit feature map and low-rank approximation with supervised manifold regularization. The nearest neighbors of each local descriptor in the input MR images are searched in a constrained spatial range of the MR images in the training dataset. The pCT patches are then estimated through k-nearest-neighbor regression. The proposed method for pCT prediction is quantitatively analyzed on a dataset consisting of paired brain MRI and CT images from 13 subjects. Our method generates pCT images with a mean absolute error (MAE) of 75.25 ± 18.05 Hounsfield units, a peak signal-to-noise ratio of 30.87 ± 1.15 dB, a relative MAE of 1.56 ± 0.5% in PET attenuation correction, and a dose relative structure volume difference of 0.055 ± 0.107%, as compared with true CT. The experimental results also show that our method outperforms four state-of-the-art methods.

  19. A novel iris patterns matching algorithm of weighted polar frequency correlation

    Science.gov (United States)

    Zhao, Weijie; Jiang, Linhua

    2014-11-01

    Iris recognition is recognized as one of the most accurate techniques for biometric authentication. In this paper, we present a novel correlation method - Weighted Polar Frequency Correlation (WPFC) - to match and evaluate two iris images; in fact, it can also be used to evaluate the similarity of any two images. The WPFC method is a novel matching and evaluating method for iris image matching, which is completely different from the conventional methods. For instance, the classical John Daugman's method of iris recognition uses 2D Gabor wavelets to extract features of the iris image into a compact bit stream, and then matches two bit streams with the Hamming distance. Our new method is based on correlation in the polar coordinate system in the frequency domain with regulated weights. The new method is motivated by the observation that the iris patterns that carry the most information for recognition are the fine, high-frequency structures rather than the gross shapes of the iris images. Therefore, we transform iris images into the frequency domain and set different weights for different frequencies. The correlation of two iris images is then calculated in the frequency domain. We evaluate the iris images by summing the discrete correlation values with regulated weights and comparing the value with a preset threshold to tell whether the two iris images were captured from the same person or not. Experiments are carried out on both the CASIA database and self-obtained images. The results show that our method is functional and reliable. Our method provides a new prospect for iris recognition systems.

  20. On an image reconstruction method for ECT

    Science.gov (United States)

    Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro

    2007-04-01

    An image produced by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method based on deconvolution. In this method, the point spread function (PSF) and the line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data-processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10mm) with an artificial machined hole and a notch flaw were acquired with differential coil type sensors (produced by ZETEC Inc). Those data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the responses of the holes interfere, and the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been shown by many results in which a much finer image than the original has been reconstructed.
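
    A minimal sketch of the deconvolution step implied above, using a regularized (Wiener-like) inverse filter in the frequency domain (the regularization constant is illustrative; the paper determines the PSF/LSF from ECT data of a machined hole and a line flaw):

        import numpy as np

        def deconvolve_2d(measured, psf, eps=1e-2):
            """Estimate the flaw shape from blurred ECT data, assuming
            measured ≈ flaw * psf (2D convolution, PSF centred at the array origin)."""
            H = np.fft.fft2(psf, s=measured.shape)
            G = np.fft.fft2(measured)
            F = G * np.conj(H) / (np.abs(H) ** 2 + eps)    # regularized inverse filter
            return np.real(np.fft.ifft2(F))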

  1. Same/different concept learning by capuchin monkeys in matching-to-sample tasks.

    Directory of Open Access Journals (Sweden)

    Valentina Truppa

    Full Text Available The ability to understand similarities and analogies is a fundamental aspect of human advanced cognition. Although subject of considerable research in comparative cognition, the extent to which nonhuman species are capable of analogical reasoning is still debated. This study examined the conditions under which tufted capuchin monkeys (Cebus apella) acquire a same/different concept in a matching-to-sample task on the basis of relational similarity among multi-item stimuli. We evaluated (i) the ability of five capuchin monkeys to learn the same/different concept on the basis of the number of items composing the stimuli and (ii) the ability to match novel stimuli after training with both several small stimulus sets and a large stimulus set. We found the first evidence of same/different relational matching-to-sample abilities in a New World monkey and demonstrated that the ability to match novel stimuli is within the capacity of this species. Therefore, analogical reasoning can emerge in monkeys under specific training conditions.

  2. Same/Different Concept Learning by Capuchin Monkeys in Matching-to-Sample Tasks

    Science.gov (United States)

    Truppa, Valentina; Piano Mortari, Eva; Garofoli, Duilio; Privitera, Sara; Visalberghi, Elisabetta

    2011-01-01

    The ability to understand similarities and analogies is a fundamental aspect of human advanced cognition. Although subject of considerable research in comparative cognition, the extent to which nonhuman species are capable of analogical reasoning is still debated. This study examined the conditions under which tufted capuchin monkeys (Cebus apella) acquire a same/different concept in a matching-to-sample task on the basis of relational similarity among multi-item stimuli. We evaluated (i) the ability of five capuchin monkeys to learn the same/different concept on the basis of the number of items composing the stimuli and (ii) the ability to match novel stimuli after training with both several small stimulus sets and a large stimulus set. We found the first evidence of same/different relational matching-to-sample abilities in a New World monkey and demonstrated that the ability to match novel stimuli is within the capacity of this species. Therefore, analogical reasoning can emerge in monkeys under specific training conditions. PMID:21858225

  3. A PSF-Shape-Based Beamforming Strategy for Robust 2D Motion Estimation in Ultrafast Data

    Directory of Open Access Journals (Sweden)

    Anne E. C. M. Saris

    2018-03-01

    Full Text Available This paper presents a framework for motion estimation in ultrafast ultrasound data. It describes a novel approach for determining the sampling grid for ultrafast data based on the system’s point-spread-function (PSF). As a consequence, the cross-correlation functions (CCF) used in the speckle tracking (ST) algorithm will have circular-shaped peaks, which can be interpolated using a 2D interpolation method to estimate subsample displacements. Carotid artery wall motion and parabolic blood flow simulations together with rotating disk experiments using a Verasonics Vantage 256 are used for performance evaluation. Zero-degree plane wave data were acquired using an ATL L5-12 (fc = 9 MHz) transducer for a range of pulse repetition frequencies (PRFs), resulting in 0–600 µm inter-frame displacements. The proposed methodology was compared to data beamformed on a conventionally spaced grid, combined with the commonly used 1D parabolic interpolation. The PSF-shape-based beamforming grid combined with 2D cubic interpolation showed the most accurate and stable performance with respect to the full range of inter-frame displacements, both for the assessment of blood flow and vessel wall dynamics. The proposed methodology can be used as a protocolled way to beamform ultrafast data and obtain accurate estimates of tissue motion.
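
    A simplified sketch of subsample peak estimation on a 2D cross-correlation function; a separable parabolic fit around the integer peak is used here as a stand-in for the 2D cubic interpolation used in the paper (array names are illustrative, and the peak is assumed not to lie on the border):

        import numpy as np

        def subsample_peak(ccf):
            """Return the (row, column) peak position of a 2D CCF with subsample precision."""
            iy, ix = np.unravel_index(np.argmax(ccf), ccf.shape)

            def parabolic_offset(m, c, p):                 # samples left/centre/right of the peak
                denom = m - 2 * c + p
                return 0.0 if denom == 0 else 0.5 * (m - p) / denom

            dy = parabolic_offset(ccf[iy - 1, ix], ccf[iy, ix], ccf[iy + 1, ix])
            dx = parabolic_offset(ccf[iy, ix - 1], ccf[iy, ix], ccf[iy, ix + 1])
            return iy + dy, ix + dx                        # peak location in samples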

  4. Robust through-the-wall radar image classification using a target-model alignment procedure.

    Science.gov (United States)

    Smith, Graeme E; Mobasseri, Bijan G

    2012-02-01

    A through-the-wall radar image (TWRI) bears little resemblance to the equivalent optical image, making it difficult to interpret. To maximize the intelligence that may be obtained, it is desirable to automate the classification of targets in the image to support human operators. This paper presents a technique for classifying stationary targets based on the high-range resolution profile (HRRP) extracted from 3-D TWRIs. The dependence of the image on the target location is discussed using a system point spread function (PSF) approach. It is shown that the position dependence will cause a classifier to fail, unless the image to be classified is aligned to a classifier-training location. A target image alignment technique based on deconvolution of the image with the system PSF is proposed. Comparison of the aligned target images with measured images shows the alignment process introducing normalized mean squared error (NMSE) ≤ 9%. The HRRP extracted from aligned target images are classified using a naive Bayesian classifier supported by principal component analysis. The classifier is tested using a real TWRI of canonical targets behind a concrete wall and shown to obtain correct classification rates ≥ 97%. © 2011 IEEE

  5. An automated patient recognition method based on an image-matching technique using previous chest radiographs in the picture archiving and communication system environment

    International Nuclear Information System (INIS)

    Morishita, Junji; Katsuragawa, Shigehiko; Kondo, Keisuke; Doi, Kunio

    2001-01-01

    An automated patient recognition method for correcting 'wrong' chest radiographs being stored in a picture archiving and communication system (PACS) environment has been developed. The method is based on an image-matching technique that uses previous chest radiographs. For identification of a 'wrong' patient, the correlation value was determined for a previous image of a patient and a new, current image of the presumed corresponding patient. The current image was shifted horizontally and vertically and rotated, so that we could determine the best match between the two images. The results indicated that the correlation values between the current and previous images for the same, 'correct' patients were generally greater than those for different, 'wrong' patients. Although the two histograms for the same patient and for different patients overlapped at correlation values greater than 0.80, most parts of the histograms were separated. The correlation value was compared with a threshold value that was determined based on an analysis of the histograms of correlation values obtained for the same patient and for different patients. If the current image is considered potentially to belong to a 'wrong' patient, then a warning sign with the probability of a 'wrong' patient is provided to alert radiology personnel. Our results indicate that at least half of the 'wrong' images in our database can be identified correctly with the method described in this study. The receiver operating characteristic curve indicated a high overall performance of the system. The results also indicate that some readings of 'wrong' images for a given patient in the PACS environment can be prevented by use of the method we developed. Therefore an automated warning system for patient recognition would be useful in correcting 'wrong' images being stored in the PACS environment.
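
    A minimal sketch of the matching step described above: the current image is rotated and shifted over a small search range and the normalized correlation with the previous image is maximized (search ranges and step sizes are illustrative):

        import numpy as np
        from scipy.ndimage import rotate, shift

        def best_correlation(previous, current, max_shift=8, angles=(-3, 0, 3)):
            """Return the highest normalized correlation over small rotations
            and translations of the current image."""
            best = -1.0
            for ang in angles:
                rot = rotate(current, ang, reshape=False)
                for dy in range(-max_shift, max_shift + 1, 2):
                    for dx in range(-max_shift, max_shift + 1, 2):
                        cand = shift(rot, (dy, dx))
                        c = np.corrcoef(previous.ravel(), cand.ravel())[0, 1]
                        best = max(best, c)
            return best

        # a correlation below a threshold (around 0.80 in the study) flags a
        # possible 'wrong' patient and triggers a warning to radiology personnel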

  6. Different Loci of Semantic Interference in Picture Naming vs. Word-Picture Matching Tasks

    OpenAIRE

    Harvey, Denise Y.; Schnur, Tatiana T.

    2016-01-01

    Naming pictures and matching words to pictures belonging to the same semantic category impairs performance relative to when stimuli come from different semantic categories (i.e., semantic interference). Despite similar semantic interference phenomena in both picture naming and word-picture matching tasks, the locus of interference has been attributed to different levels of the language system – lexical in naming and semantic in word-picture matching. Although both tasks involve access to shar...

  7. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  8. A Learning-Based Steganalytic Method against LSB Matching Steganography

    Directory of Open Access Journals (Sweden)

    Z. Xia

    2011-04-01

    Full Text Available This paper considers the detection of spatial-domain least significant bit (LSB) matching steganography in gray images. Natural images hold some inherent properties, such as the histogram, dependence between neighboring pixels, and dependence among pixels that are not adjacent to each other. These properties are likely to be disturbed by LSB matching. Firstly, the histogram becomes smoother after LSB matching. Secondly, the two kinds of dependence are weakened by the message embedding. Accordingly, three features, based respectively on the image histogram, the neighborhood degree histogram and the run-length histogram, are extracted first. Then, a support vector machine is utilized to learn and discriminate the difference of features between cover and stego images. Experimental results prove that the proposed method possesses reliable detection ability and outperforms two previous state-of-the-art methods. Furthermore, conclusions are drawn by analyzing the individual performance of the three features and their fused feature.

  9. Approaches for Stereo Matching

    Directory of Open Access Journals (Sweden)

    Takouhi Ozanian

    1995-04-01

    Full Text Available This review focuses on the last decade's development of the computational stereopsis for recovering three-dimensional information. The main components of the stereo analysis are exposed: image acquisition and camera modeling, feature selection, feature matching and disparity interpretation. A brief survey is given of the well known feature selection approaches and the estimation parameters for this selection are mentioned. The difficulties in identifying correspondent locations in the two images are explained. Methods as to how effectively to constrain the search for correct solution of the correspondence problem are discussed, as are strategies for the whole matching process. Reasons for the occurrence of matching errors are considered. Some recently proposed approaches, employing new ideas in the modeling of stereo matching in terms of energy minimization, are described. Acknowledging the importance of computation time for real-time applications, special attention is paid to parallelism as a way to achieve the required level of performance. The development of trinocular stereo analysis as an alternative to the conventional binocular one, is described. Finally a classification based on the test images for verification of the stereo matching algorithms, is supplied.

  10. GOTHIC CHURCHES IN PARIS ST GERVAIS ET ST PROTAIS IMAGE MATCHING 3D RECONSTRUCTION TO UNDERSTAND THE VAULTS SYSTEM GEOMETRY

    Directory of Open Access Journals (Sweden)

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view this is our workflow: - theoretical study about geometrical configuration of rib vault systems; - 3D model based on theoretical hypothesis about geometric definition of the vaults' form; - 3D model based on image matching 3D reconstruction methods; - comparison between 3D theoretical model and 3D model based on image matching;

  11. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    Science.gov (United States)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ  =  0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ  =  0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that
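
    A minimal sketch of Richardson-Lucy deconvolution with a measured PSF (plain, undamped form; the damping parameter λ used in the study belongs to a modified update that is not reproduced here):

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=15, eps=1e-12):
            """Iteratively deconvolve a planar gamma-camera image with the
            system PSF to reduce septal-penetration 'spoke' artefacts."""
            psf_flipped = psf[::-1, ::-1]
            estimate = np.full_like(image, image.mean(), dtype=float)
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / (blurred + eps)
                estimate *= fftconvolve(ratio, psf_flipped, mode="same")
            return estimate

        # scatter correction (e.g. triple-energy-window) applied before
        # deconvolution gave the best artefact suppression in the study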

  12. Dificuldades do trabalho médico no PSF Difficulties of medical working at the family health program

    Directory of Open Access Journals (Sweden)

    Fernanda Gaspar Antonini Vasconcelos

    2011-01-01

    Full Text Available This study aims to identify the profile of the physicians who work or have worked in the PSF (Family Health Program), their main difficulties, and the percentage of family health teams without a physician in the city of São Paulo. For this purpose, a questionnaire based on the main statements from Capozzolo's study was used, with data collected from January to May 2008, together with primary care data from October to December 2007. The main results include less than five years since graduation for most of the interviewees, and affinity for the PSF as the motivation for the work. The main difficulties concern the high demand, the high incidence of complex cases, difficulty with referrals, a division of working time that does not match health needs, and the lack of incentive for specialization. The primary care data showed that the Eastern Coordination suffered the greatest shortage of physicians in the period analyzed, with rates around 20% and 40%, an increase in the deficit towards the end of the year, and the persistence of the deficit in some units.

  13. First indirect x-ray imaging tests with an 88-mm diameter single crystal

    Energy Technology Data Exchange (ETDEWEB)

    Lumpkin, A. H. [Fermilab]; Macrander, A. T. [Argonne]

    2017-02-01

    Using the 1-BM-C beamline at the Advanced Photon Source (APS), we have performed the initial indirect x-ray imaging point-spread-function (PSF) test of a unique 88-mm diameter YAG:Ce single crystal of only 100-micron thickness. The crystal was bonded to a fiber optic plate (FOP) for mechanical support and to allow the option for FO coupling to a large format camera. This configuration resolution was compared to that of self-supported 25-mm diameter crystals, with and without an Al reflective coating. An upstream monochromator was used to select 17-keV x-rays from the broadband APS bending magnet source of synchrotron radiation. The upstream, adjustable Mo collimators were then used to provide a series of x-ray source transverse sizes from 200 microns down to about 15-20 microns (FWHM) at the crystal surface. The emitted scintillator radiation was in this case lens coupled to the ANDOR Neo sCMOS camera, and the indirect x-ray images were processed offline by a MATLAB-based image processing program. Based on single Gaussian peak fits to the x-ray image projected profiles, we observed a 10.5 micron PSF. This sample thus exhibited superior spatial resolution to standard P43 polycrystalline phosphors of the same thickness which would have about a 100-micron PSF. Lastly, this single crystal resolution combined with the 88-mm diameter makes it a candidate to support future x-ray diffraction or wafer topography experiments.

  14. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    Science.gov (United States)

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and has the advantage of high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement for evaluation of the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among the variety of PSF measurement methods, the point source method has been widely used because it is easy to operate and the measurement results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of the point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established and the effect of the point source size on the full width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin doped with polystyrene microspheres of different sizes is designed. The PSFs of the CRM are measured with the different microsphere sizes and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.

  15. Lattice and strain analysis of atomic resolution Z-contrast images based on template matching

    Energy Technology Data Exchange (ETDEWEB)

    Zuo, Jian-Min, E-mail: jianzuo@uiuc.edu [Department of Materials Science and Engineering, University of Illinois, Urbana, IL 61801 (United States); Seitz Materials Research Laboratory, University of Illinois, Urbana, IL 61801 (United States); Shah, Amish B. [Center for Microanalysis of Materials, Materials Research Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Kim, Honggyu; Meng, Yifei; Gao, Wenpei [Department of Materials Science and Engineering, University of Illinois, Urbana, IL 61801 (United States); Seitz Materials Research Laboratory, University of Illinois, Urbana, IL 61801 (United States); Rouviére, Jean-Luc [CEA-INAC/UJF-Grenoble UMR-E, SP2M, LEMMA, Minatec, Grenoble 38054 (France)

    2014-01-15

    A real space approach is developed based on template matching for quantitative lattice analysis using atomic resolution Z-contrast images. The method, called TeMA, uses the template of an atomic column, or a group of atomic columns, to transform the image into a lattice of correlation peaks. This is helped by using a local intensity adjusted correlation and by the design of templates. Lattice analysis is performed on the correlation peaks. A reference lattice is used to correct for scan noise and scan distortions in the recorded images. Using these methods, we demonstrate that a precision of few picometers is achievable in lattice measurement using aberration corrected Z-contrast images. For application, we apply the methods to strain analysis of a molecular beam epitaxy (MBE) grown LaMnO{sub 3} and SrMnO{sub 3} superlattice. The results show alternating epitaxial strain inside the superlattice and its variations across interfaces at the spatial resolution of a single perovskite unit cell. Our methods are general, model free and provide high spatial resolution for lattice analysis. - Highlights: • A real space approach is developed for strain analysis using atomic resolution Z-contrast images and template matching. • A precision of few picometers is achievable in the measurement of lattice displacements. • The spatial resolution of a single perovskite unit cell is demonstrated for a LaMnO{sub 3} and SrMnO{sub 3} superlattice grown by MBE.
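
    A simplified sketch of the template-matching stage: a template of one atomic column is correlated with the Z-contrast image and the correlation peaks form the measured lattice (thresholds and distances are illustrative; the local intensity adjustment and scan-distortion correction of the paper are not reproduced here):

        import numpy as np
        from skimage.feature import match_template, peak_local_max

        def column_positions(image, template, min_sep=5, rel_threshold=0.5):
            """Turn an atomic-resolution image into a lattice of correlation peaks."""
            corr = match_template(image, template, pad_input=True)
            peaks = peak_local_max(corr, min_distance=min_sep,
                                   threshold_abs=rel_threshold * corr.max())
            return peaks                                   # (row, col) of each atomic column

        # displacements of these peaks from an ideal reference lattice give the
        # local lattice parameters and strain at (sub-)unit-cell resolution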

  16. A Directional Antenna in a Matching Liquid for Microwave Radar Imaging

    Directory of Open Access Journals (Sweden)

    Saeed I. Latif

    2015-01-01

    Full Text Available The detailed design equations and antenna parameters for a directional antenna for breast imaging are presented in this paper. The antenna was designed so that it could be immersed in canola oil to achieve efficient coupling of the electromagnetic energy to the breast tissue. Ridges were used in the horn antenna to increase the operating bandwidth. The antenna has an exponentially tapered section for impedance matching. The double-ridged horn antenna has a wideband performance from 1.5 GHz to 5 GHz (3.75 GHz or 110% of impedance bandwidth), which is suitable for breast microwave radar imaging. The fabricated antenna was tested and compared with simulated results, and similar bandwidths were obtained. Experiments were conducted on breast phantoms using these antennas, to detect a simulated breast lesion. The reconstructed image from the experiments shows distinguishable tumor responses indicating promising results for successful breast cancer detection.

  17. Relevance of Postoperative Magnetic Resonance Images in Evaluating Epidural Hematoma After Thoracic Fixation Surgery.

    Science.gov (United States)

    Shin, Hong Kyung; Choi, Il; Roh, Sung Woo; Rhim, Seung Chul; Jeon, Sang Ryong

    2017-11-01

    It is difficult to evaluate the significant findings of epidural hematoma in magnetic resonance images (MRIs) obtained immediately after thoracic posterior screw fixation (PSF). Prospectively, immediate postoperative MRI was performed in 10 patients who underwent thoracic PSF from April to December 2013. Additionally, we retrospectively analyzed the MRIs from 3 patients before hematoma evacuation out of 260 patients who underwent thoracic PSF from January 2000 to March 2013. The MRI findings of 9 out of the 10 patients, consecutively collected after thoracic PSF, showed neurologic recovery with a well-preserved cerebrospinal fluid (CSF) space and no prominent hemorrhage. Even though there were metal artifacts at the level of the pedicle screws, the preserved CSF space was observed. In contrast, the MRI of 1 patient with poor neurologic outcome demonstrated a typical hematoma and slight spinal cord compression and reduced CSF space. In the retrospective analysis of the 3 patients who showed definite motor weakness in the lower extremities after their first thoracic fusion surgery and underwent hematoma evacuation, the magnetic resonance images before hematoma evacuation also revealed hematoma compressing the spinal cord and diminished CSF space. This study shows that epidural hematomas can be detected on MRI performed immediately after thoracic fixation surgery, despite metal artifacts and findings such as hematoma causing spinal cord compression. Loss of CSF space should be considered to be associated with neurologic deficit. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. A PSF-Shape-Based Beamforming Strategy for Robust 2D Motion Estimation in Ultrafast Data

    OpenAIRE

    Anne E. C. M. Saris; Stein Fekkes; Maartje M. Nillesen; Hendrik H. G. Hansen; Chris L. de Korte

    2018-01-01

    This paper presents a framework for motion estimation in ultrafast ultrasound data. It describes a novel approach for determining the sampling grid for ultrafast data based on the system’s point-spread-function (PSF). As a consequence, the cross-correlation functions (CCF) used in the speckle tracking (ST) algorithm will have circular-shaped peaks, which can be interpolated using a 2D interpolation method to estimate subsample displacements. Carotid artery wall motion and parabolic blood flow...

  19. Automatic block-matching registration to improve lung tumor localization during image-guided radiotherapy

    Science.gov (United States)

    Robertson, Scott Patrick

    To improve relatively poor outcomes for locally-advanced lung cancer patients, many current efforts are dedicated to minimizing uncertainties in radiotherapy. This enables the isotoxic delivery of escalated tumor doses, leading to better local tumor control. The current dissertation specifically addresses inter-fractional uncertainties resulting from patient setup variability. An automatic block-matching registration (BMR) algorithm is implemented and evaluated for the purpose of directly localizing advanced-stage lung tumors during image-guided radiation therapy. In this algorithm, small image sub-volumes, termed "blocks", are automatically identified on the tumor surface in an initial planning computed tomography (CT) image. Each block is independently and automatically registered to daily images acquired immediately prior to each treatment fraction. To improve the accuracy and robustness of BMR, this algorithm incorporates multi-resolution pyramid registration, regularization with a median filter, and a new multiple-candidate-registrations technique. The result of block-matching is a sparse displacement vector field that models local tissue deformations near the tumor surface. The distribution of displacement vectors is aggregated to obtain the final tumor registration, corresponding to the treatment couch shift for patient setup correction. Compared to existing rigid and deformable registration algorithms, the final BMR algorithm significantly improves the overlap between target volumes from the planning CT and registered daily images. Furthermore, BMR results in the smallest treatment margins for the given study population. However, despite these improvements, large residual target localization errors were noted, indicating that purely rigid couch shifts cannot correct for all sources of inter-fractional variability. Further reductions in treatment uncertainties may require the combination of high-quality target localization and adaptive radiotherapy.
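
    A minimal sketch of the block-matching idea: small sub-volumes on the tumor surface are each registered to the daily image by an exhaustive shift search, and the per-block displacements are aggregated into a single couch-shift estimate (the SSD criterion, search range and median aggregation are illustrative simplifications of the full BMR algorithm):

        import numpy as np

        def match_block(block, daily, corner, search=5):
            """Exhaustive grid search for the integer shift that minimizes the SSD
            between a planning-CT block and the daily image (bounds not checked)."""
            bz, by, bx = block.shape
            cz, cy, cx = corner
            best, best_shift = np.inf, (0, 0, 0)
            for dz in range(-search, search + 1):
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        z, y, x = cz + dz, cy + dy, cx + dx
                        cand = daily[z:z + bz, y:y + by, x:x + bx]
                        ssd = np.sum((cand - block) ** 2)
                        if ssd < best:
                            best, best_shift = ssd, (dz, dy, dx)
            return best_shift

        def aggregate_shift(shifts):
            """Aggregate per-block displacements into one setup correction."""
            return np.median(np.asarray(shifts), axis=0)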

  20. COMPARISON OF POINT CLOUDS DERIVED FROM AERIAL IMAGE MATCHING WITH DATA FROM AIRBORNE LASER SCANNING

    Directory of Open Access Journals (Sweden)

    Dominik Wojciech

    2017-04-01

    Full Text Available The aim of this study was to investigate the properties of point clouds derived from aerial image matching and to compare them with point clouds from airborne laser scanning. A set of aerial images acquired in the years 2010-2013 over the city of Elblag were used for the analysis. Images were acquired with the use of three digital cameras: DMC II 230, DMC I and DigiCAM60, with a GSD varying from 4.5 cm to 15 cm. Eight sets of images that were used in the study were acquired at different stages of the growing season – from March to December. Two LiDAR point clouds were used for the comparison – one with a density of 1.3 p/m² and a second with a density of 10 p/m². Based on the input images, point clouds were created with the use of the semi-global matching method. The properties of the obtained point clouds were analyzed in three ways: – by comparison of the vertical accuracy of the point clouds with reference to a terrain profile surveyed on bare ground with the GPS-RTK method – by visual assessment of point cloud profiles generated both from SGM and LiDAR point clouds – by visual assessment of a digital surface model generated from an SGM point cloud with reference to a digital surface model generated from a LiDAR point cloud. The conducted studies allowed a number of observations about the quality of SGM point clouds to be formulated with respect to different factors. The main factors having influence on the quality of SGM point clouds are GSD and base/height ratio. The essential problem related to SGM point clouds are areas covered with vegetation, where SGM point clouds are visibly worse in terms of both accuracy and the representation of the terrain surface. It is difficult to expect that in these areas SGM point clouds could replace LiDAR point clouds. This leads to a general conclusion that SGM point clouds are less reliable, more unpredictable and are dependent on more factors than LiDAR point clouds. Nevertheless, SGM point

  1. Multiplicity and properties of Kepler planet candidates: High spatial imaging and RV studies*

    Directory of Open Access Journals (Sweden)

    Aceituno J.

    2013-04-01

    Full Text Available The Kepler space telescope is discovering thousands of new planet candidates. However, a follow-up program is needed in order to reject false candidates and to fully characterize the bona-fide exoplanets. Our main aims are: (1) detect and analyze close companions inside the typical Kepler PSF to study whether they are responsible for the dips in the Kepler light curves; (2) study the change in the stellar and planetary parameters due to the presence of an unresolved object; (3) help to validate those Kepler Objects of Interest that do not present any object inside the Kepler PSF; and (4) study the multiplicity rate in planet host candidates. Such a large sample of observed planet host candidates allows us to do statistics about the presence of close (visual or bound) companions to the harboring star. We present here Lucky Imaging observations for a total of 98 Kepler Objects of Interest. This technique is based on the acquisition of thousands of very short exposure time images. A selection and combination of a small fraction of the best quality frames then provides a high resolution image with objects having a 0.1 arcsec PSF. We applied this technique to carry out observations in the Sloan i and Sloan z filters of our Kepler candidates. We find blended objects inside the Kepler PSF for a significant percentage of KOIs. On one hand, only 58.2% of the hosts do not present any object within 6 arcsec. On the other hand, we have found 19 companions closer than 3 arcsec in 17 KOIs. According to their magnitudes and i − z color, 8 of them could be physically bound to the host star. We are also collecting high-spectral-resolution spectroscopy in order to derive the planet properties.
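
    A minimal sketch of the lucky-imaging selection step described above: from thousands of short exposures only the sharpest few percent are kept, re-centred on their brightest pixel and combined (the selection fraction and the brightest-pixel sharpness proxy are illustrative simplifications):

        import numpy as np

        def lucky_combine(frames, keep_fraction=0.05):
            """Select the sharpest frames, align them on their peak and average."""
            frames = np.asarray(frames, dtype=float)       # shape (n_frames, ny, nx)
            quality = frames.max(axis=(1, 2))              # peak brightness as sharpness proxy
            n_keep = max(1, int(keep_fraction * len(frames)))
            best = np.argsort(quality)[-n_keep:]

            ny, nx = frames.shape[1:]
            stack = np.zeros((ny, nx))
            for i in best:
                py, px = np.unravel_index(np.argmax(frames[i]), (ny, nx))
                stack += np.roll(np.roll(frames[i], ny // 2 - py, axis=0),
                                 nx // 2 - px, axis=1)
            return stack / n_keep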

  2. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    Science.gov (United States)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-03-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines with separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey (SDSS) r-band images with artificial AGN point sources added, which are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source (PS) is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover PS and host galaxy magnitudes with smaller systematic error and a lower average scatter (49%). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening of the PSF width of ±50% if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easier to use than parametric methods as it requires no input parameters.

  3. Use of the contingency matrix in the TRACK-MATCH procedure for two projections

    International Nuclear Information System (INIS)

    Baginyan, S.A.; Moroz, V.I.

    1985-01-01

    When analysing the work of geometrical reconstruction programs it is noted that if the TRACK-MATCH procedure is successful, it guarantees the correctness of the event measurement. This serves as a basis for applying the TRACK-MATCH procedure as a test for event mask measurements. Such use of the procedure does not require point-to-point correspondence between track images in different projections; it is sufficient to establish that the TRACK-MATCH procedure admits a solution. It is shown that the problem of point-to-point correspondence between track images in different projections reduces to analysis of the contingency matrix, and that a non-zero determinant of the contingency matrix is sufficient for the TRACK-MATCH procedure to have a solution.

  4. MO-FG-CAMPUS-IeP1-01: Alternative K-Edge Filters for Low-Energy Image Acquisition in Contrast Enhanced Spectral Mammography

    Energy Technology Data Exchange (ETDEWEB)

    Shrestha, S; Vedantham, S; Karellas, A [University of Massachusetts Medical School, Worcester, MA (United States)

    2016-06-15

    Purpose: In Contrast Enhanced Spectral Mammography (CESM), Rh filter is often used during low-energy image acquisition. The potential for using Ag, In and Sn filters, which exhibit K-edge closer to, and just below that of Iodine, instead of the Rh filter, was investigated for the low-energy image acquisition. Methods: Analytical computations of the half-value thickness (HVT) and the photon fluence per mAs (photons/mm2/mAs) for 50µm Rh were compared with other potential K-edge filters (Ag, In and Sn), all with K-absorption edge below that of Iodine. Two strategies were investigated: fixed kVp and filter thickness (50µm for all filters) resulting in HVT variation, and fixed kVp and HVT resulting in variation in Ag, In and Sn thickness. Monte Carlo simulations (GEANT4) were conducted to determine if the scatter-to-primary ratio (SPR) and the point spread function of scatter (scatter PSF) differed between Rh and other K-edge filters. Results: Ag, In and Sn filters (50µm thick) increased photon fluence/mAs by 1.3–1.4, 1.8–2, and 1.7–2 at 28-32 kVp compared to 50µm Rh, which could decrease exposure time. Additionally, the fraction of spectra closer to and just below Iodine’s K-edge increased with these filters, which could improve post-subtraction image contrast. For HVT matched to 50µm Rh filtered spectra, the thickness range for Ag, In, and Sn were (41,44)µm, (49,55)µm and (45,53)µm, and increased photon fluence/mAs by 1.5–1.7, 1.6–2, and 1.6–2.2, respectively. Monte Carlo simulations showed that neither the SPR nor the scatter PSF of Ag, In and Sn differed from Rh, indicating no additional detriment due to x-ray scatter. Conclusion: The use of Ag, In and Sn filters for low-energy image acquisition in CESM is potentially feasible and could decrease exposure time and may improve post-subtraction image contrast. Effect of these filters on radiation dose, contrast, noise and associated metrics are being investigated. Funding Support: Supported in

  5. Establishment of Imaging Spectroscopy of Nuclear Gamma-Rays based on Geometrical Optics.

    Science.gov (United States)

    Tanimori, Toru; Mizumura, Yoshitaka; Takada, Atsushi; Miyamoto, Shohei; Takemura, Taito; Kishimoto, Tetsuro; Komura, Shotaro; Kubo, Hidetoshi; Kurosawa, Shunsuke; Matsuoka, Yoshihiro; Miuchi, Kentaro; Mizumoto, Tetsuya; Nakamasu, Yuma; Nakamura, Kiseki; Parker, Joseph D; Sawano, Tatsuya; Sonoda, Shinya; Tomono, Dai; Yoshikawa, Kei

    2017-02-03

    Since the discovery of nuclear gamma-rays, their imaging has been limited to pseudo imaging techniques, such as the Compton Camera (CC) and the coded mask. Pseudo imaging does not keep physical information (intensity, or brightness in Optics) along a ray, and thus is capable of no more than qualitative imaging of bright objects. To attain quantitative imaging, cameras that realize geometrical optics are essential, which, for nuclear MeV gammas, is only possible via complete reconstruction of the Compton process. Recently we have revealed that the "Electron Tracking Compton Camera" (ETCC) provides a well-defined Point Spread Function (PSF). The information of an incoming gamma is kept along a ray with the PSF, which is equivalent to geometrical optics. Here we present an imaging-spectroscopic measurement with the ETCC. Our results highlight the intrinsic difficulty with CCs in performing accurate imaging, and show that the ETCC surmounts this problem. The imaging capability also helps the ETCC suppress the noise level dramatically by ~3 orders of magnitude without a shielding structure. Furthermore, full reconstruction of the Compton process with the ETCC provides spectra free of Compton edges. These results mark the first proper imaging of nuclear gammas based on genuine geometrical optics.

  6. Design Studies of a CZT-based Detector Combined with a Pixel-Geometry-Matching Collimator for SPECT Imaging.

    Science.gov (United States)

    Weng, Fenghua; Bagchi, Srijeeta; Huang, Qiu; Seo, Youngho

    2013-10-01

    Single Photon Emission Computed Tomography (SPECT) suffers limited efficiency due to the need for collimators. Collimator properties largely determine the data statistics and image quality. Various materials and configurations of collimators have been investigated over many years. The main thrust of our study is to evaluate the design of pixel-geometry-matching collimators and to investigate their potential performance using Geant4 Monte Carlo simulations. Here, a pixel-geometry-matching collimator is defined as a collimator which is divided into the same number of pixels as the detector and in which the center of each pixel in the collimator corresponds one-to-one to that in the detector. The detector is made of Cadmium Zinc Telluride (CZT), which is one of the most promising materials for detecting hard X-rays and γ-rays due to its ability to obtain good energy resolution and high light output at room temperature. For our current project, we have designed a large-area, CZT-based gamma camera (20.192 cm×20.192 cm) with a small pixel pitch (1.60 mm). The detector is pixelated and hence the intrinsic resolution can be as small as the size of the pixel. The collimator material, collimator hole geometry, detection efficiency, and spatial resolution of the CZT detector combined with the pixel-matching collimator were calculated and analyzed under different conditions. From the simulation studies, we found that such a camera using rectangular holes has promising imaging characteristics in terms of spatial resolution, detection efficiency, and energy resolution.

  7. Joint depth map and color consistency estimation for stereo images with different illuminations and cameras.

    Science.gov (United States)

    Heo, Yong Seok; Lee, Kyoung Mu; Lee, Sang Uk

    2013-05-01

    In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo images. In general, stereo matching and enforcing color consistency between stereo images are a chicken-and-egg problem, since it is not a trivial task to achieve both goals simultaneously. Hence, we have developed an iterative framework in which these two processes can boost each other. First, we transform the input color images to log-chromaticity color space, from which a linear relationship can be established while constructing a joint pdf of the transformed left and right color images. From this joint pdf, we can estimate a linear function that relates the corresponding pixels in stereo images. Based on this linear property, we present a new stereo matching cost by combining Mutual Information (MI), the SIFT descriptor, and segment-based plane-fitting to robustly find correspondences for stereo image pairs which undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which conversely boosts the disparity map estimation. Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo images with severe radiometric differences.
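
    The log-chromaticity step is standard enough to sketch; the implementation below (with made-up placeholder images) divides each channel by the per-pixel geometric mean and takes the logarithm, so multiplicative radiometric changes between the left and right views become approximately additive.

        import numpy as np

        def log_chromaticity(rgb, eps=1e-6):
            """Map an HxWx3 RGB image to log-chromaticity coordinates."""
            rgb = rgb.astype(np.float64) + eps
            geo_mean = np.cbrt(rgb[..., 0] * rgb[..., 1] * rgb[..., 2])
            return np.log(rgb / geo_mean[..., None])

        left_lc = log_chromaticity(np.random.rand(480, 640, 3))    # placeholder left image
        right_lc = log_chromaticity(np.random.rand(480, 640, 3))   # placeholder right image
        # A joint pdf of left_lc and right_lc values would be built next to
        # estimate the linear relation used in the matching cost.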

  8. Detection and Counting of Orchard Trees from Vhr Images Using a Geometrical-Optical Model and Marked Template Matching

    Science.gov (United States)

    Maillard, Philippe; Gomes, Marília F.

    2016-06-01

    This article presents an original algorithm created to detect and count trees in orchards using very high resolution images. The algorithm is based on an adaptation of the "template matching" image processing approach, in which the template is based on a "geometrical-optical" model created from a series of parameters, such as illumination angles, maximum and ambient radiance, and tree size specifications. The algorithm is tested on four images from different regions of the world and different crop types; these images were all obtained from the GoogleEarth application. Results show that the algorithm is very efficient at detecting and counting trees as long as their spectral and spatial characteristics are relatively constant. For walnut, mango and orange trees, the overall accuracy was clearly above 90%. However, the overall success rate for apple trees fell under 75%. It appears that the openness of the apple tree crown is most probably responsible for this poorer result. The algorithm is fully explained with a step-by-step description. At this stage, the algorithm still requires quite a bit of user interaction. The automatic determination of most of the required parameters is under development.
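
    The core template-matching step can be sketched as below. The geometrical-optical tree model itself (illumination angles, radiances, crown size) is not reproduced; a synthetic Gaussian "crown" and a synthetic scene stand in for it, and OpenCV's normalised cross-correlation plays the role of the matcher.

        import numpy as np
        import cv2

        # Synthetic scene: dark background with a few bright, crown-like blobs.
        scene = np.zeros((200, 200), dtype=np.float32)
        size = 21
        y, x = np.mgrid[:size, :size] - size // 2
        crown = np.exp(-(x**2 + y**2) / (2 * 5.0**2)).astype(np.float32)
        for cy, cx in [(40, 40), (40, 120), (120, 80)]:
            scene[cy:cy + size, cx:cx + size] += crown
        scene += 0.01 * np.random.rand(200, 200).astype(np.float32)   # avoid flat regions

        response = cv2.matchTemplate(scene, crown, cv2.TM_CCOEFF_NORMED)
        candidates = np.argwhere(response > 0.9)     # candidate tree positions (row, col)
        print(len(candidates), "candidate detections")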

  9. Different Loci of Semantic Interference in Picture Naming vs. Word-Picture Matching Tasks.

    Science.gov (United States)

    Harvey, Denise Y; Schnur, Tatiana T

    2016-01-01

    Naming pictures and matching words to pictures belonging to the same semantic category impairs performance relative to when stimuli come from different semantic categories (i.e., semantic interference). Despite similar semantic interference phenomena in both picture naming and word-picture matching tasks, the locus of interference has been attributed to different levels of the language system - lexical in naming and semantic in word-picture matching. Although both tasks involve access to shared semantic representations, the extent to which interference originates and/or has its locus at a shared level remains unclear, as these effects are often investigated in isolation. We manipulated semantic context in cyclical picture naming and word-picture matching tasks, and tested whether factors tapping semantic-level (generalization of interference to novel category items) and lexical-level processes (interactions with lexical frequency) affected the magnitude of interference, while also assessing whether interference occurs at a shared processing level(s) (transfer of interference across tasks). We found that semantic interference in naming was sensitive to both semantic- and lexical-level processes (i.e., larger interference for novel vs. old and low- vs. high-frequency stimuli), consistent with a semantically mediated lexical locus. Interference in word-picture matching exhibited stable interference for old and novel stimuli and did not interact with lexical frequency. Further, interference transferred from word-picture matching to naming. Together, these experiments provide evidence to suggest that semantic interference in both tasks originates at a shared processing stage (presumably at the semantic level), but that it exerts its effect at different loci when naming pictures vs. matching words to pictures.

  10. Restoration of non-uniform exposure motion blurred image

    Science.gov (United States)

    Luo, Yuanhong; Xu, Tingfa; Wang, Ningming; Liu, Feng

    2014-11-01

    Restoring motion-blurred images is a key technology in opto-electronic detection systems. Imaging sensors such as CCDs and infrared imaging sensors are mounted on motion platforms and move quickly together with them, and as a result the images become blurred. This image degradation causes great trouble for subsequent tasks such as object detection, target recognition and tracking, so motion-blurred images must be restored before motion targets can be detected in the subsequent images. Driven by the demands of real weapon tasks and the need to deal with targets in complex backgrounds, this work uses new theories from image processing and computer vision to study motion deblurring and motion detection. The principal content is as follows: 1) When prior knowledge about the degradation function is unknown, uniformly motion-blurred images are restored. At first, the blur parameters of the PSF (point spread function), namely the motion blur extent and direction, are estimated individually in the logarithmic frequency domain. The direction of the PSF is calculated by extracting the central light line of the spectrum, and the extent is computed by minimizing the correlation between the Fourier spectrum of the blurred image and a detecting function. Moreover, in order to remove the stripes in the deblurred image, a windowing technique is employed in the algorithm, which makes the deblurred image clear. 2) According to the principle of infrared image non-uniform exposure, a new restoration model for infrared blurred images is developed. The infrared image non-uniform exposure curve is fitted from experimental data, and the blurred images are restored using the fitted curve.

  11. Using maximum topology matching to explore differences in species distribution models

    Science.gov (United States)

    Poco, Jorge; Doraiswamy, Harish; Talbert, Marian; Morisette, Jeffrey; Silva, Claudio

    2015-01-01

    Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high-dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and helps guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow only manual exploration of the models, usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high-dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching, which computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topology matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.

  12. Unfamiliar face matching with photographs of infants and children

    Directory of Open Access Journals (Sweden)

    Robin S.S. Kramer

    2018-06-01

    Background Infants and children travel using passports that are typically valid for five years (e.g. Canada, United Kingdom, United States and Australia). These individuals may also need to be identified using images taken from videos and other sources in forensic situations including child exploitation cases. However, few researchers have examined how useful these images are as a means of identification. Methods We investigated the effectiveness of photo identification for infants and children using a face matching task, where participants were presented with two images simultaneously and asked whether the images depicted the same child or two different children. In Experiment 1, both images showed an infant (<1 year old), whereas in Experiment 2, one image again showed an infant but the second image of the child was taken at 4–5 years of age. In Experiments 3a and 3b, we asked participants to complete shortened versions of both these tasks (selecting the most difficult trials) as well as the short version Glasgow face matching test. Finally, in Experiment 4, we investigated whether information regarding the sex of the infants and children could be accurately perceived from the images. Results In Experiment 1, we found low levels of performance (72% accuracy) for matching two infant photos. For Experiment 2, performance was lower still (64% accuracy) when infant and child images were presented, given the significant changes in appearance that occur over the first five years of life. In Experiments 3a and 3b, when participants completed both these tasks, as well as a measure of adult face matching ability, we found lowest performance for the two infant tasks, along with mixed evidence of within-person correlations in sensitivities across all three tasks. The use of only same-sex pairings on mismatch trials, in comparison with random pairings, had little effect on performance measures. In Experiment 4, accuracy when judging the sex of infants was at

  13. Aerial image geolocalization by matching its line structure with route map

    Science.gov (United States)

    Kunina, I. A.; Terekhin, A. P.; Khanipov, T. M.; Kuznetsova, E. G.; Nikolaev, D. P.

    2017-03-01

    The classic way to geolocate aerial photographs is to bind their local coordinates to a geographic coordinate system using GPS and IMU data. At the same time, the possibility of geolocation in a jammed navigation field is also of practical interest. In this paper we consider an approach to visual localization relative to a vector road map without GPS. We suggest a geolocalization algorithm which detects image line segments and looks for a geometrical transformation that provides the best mapping between the obtained segment set and the line segments of the road map. We assume that IMU and altimeter data are still available, which allows us to work with orthorectified images. The problem is hence reduced to a search for a transformation which contains an arbitrary shift and bounded rotation and scaling relative to the vector map. These parameters are estimated using RANSAC by matching straight line segments from the image to vector map segments. We also investigate how the proposed algorithm's stability is influenced by the segment coordinates (two spatial and one angular).

  14. Match activities of elite women soccer players at different performance levels

    DEFF Research Database (Denmark)

    Mohr, Magni; Krustrup, Peter; Andersson, Helena

    2008-01-01

    We sought to study the physical demands and match performance of women soccer players. Nineteen top-class and 15 high-level players were individually videotaped in competitive matches, and time-motion analyses were performed. The players changed locomotor activity more than 1,300 times in a game. The main findings were that, among women soccer players, (1) top-class international players perform more intervals of high-intensity running than elite players at a lower level, (2) fatigue develops temporarily during and towards the end of a game, and (3) defenders have lower work rates than midfielders and attackers. The difference in high-intensity running between the two levels demonstrates the importance of intense intermittent exercise for match performance in women's soccer.

  15. A long baseline global stereo matching based upon short baseline estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Li, Zigang; Gu, Feifei; Zhao, Zixin; Ma, Yueyang; Fang, Meiqi

    2018-05-01

    In global stereo vision, balancing the matching efficiency and computing accuracy seems to be impossible because they contradict each other. In the case of a long baseline, this contradiction becomes more prominent. In order to solve this difficult problem, this paper proposes a novel idea to improve both the efficiency and accuracy in global stereo matching for a long baseline. In this way, the reference images located between the long baseline image pairs are firstly chosen to form the new image pairs with short baselines. The relationship between the disparities of pixels in the image pairs with different baselines is revealed by considering the quantized error so that the disparity search range under the long baseline can be reduced by guidance of the short baseline to gain matching efficiency. Then, the novel idea is integrated into the graph cuts (GCs) to form a multi-step GC algorithm based on the short baseline estimation, by which the disparity map under the long baseline can be calculated iteratively on the basis of the previous matching. Furthermore, the image information from the pixels that are non-occluded under the short baseline but are occluded for the long baseline can be employed to improve the matching accuracy. Although the time complexity of the proposed method depends on the locations of the chosen reference images, it is usually much lower for a long baseline stereo matching than when using the traditional GC algorithm. Finally, the validity of the proposed method is examined by experiments based on benchmark datasets. The results show that the proposed method is superior to the traditional GC method in terms of efficiency and accuracy, and thus it is suitable for long baseline stereo matching.
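
    The key geometric relation can be stated compactly: for a rectified parallel setup, disparity is d = f·B/Z, so disparities under two baselines scale as their baseline ratio. The sketch below (with illustrative numbers, not the paper's data) shows how a quantised short-baseline disparity bounds the per-pixel search range under the long baseline.

        import numpy as np

        B_short, B_long = 0.06, 0.24              # baselines in metres (illustrative)
        ratio = B_long / B_short

        d_short = np.array([[12.0, 13.0],
                            [12.5, 14.0]])        # short-baseline disparity map (pixels)
        quant_err = 0.5                           # assumed quantisation error of d_short

        # Long-baseline disparity is confined to a narrow band per pixel,
        # which is the range the graph-cut optimisation then searches.
        d_long_min = ratio * (d_short - quant_err)
        d_long_max = ratio * (d_short + quant_err)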

  16. Constructing New Biorthogonal Wavelet Type which Matched for Extracting the Iris Image Features

    International Nuclear Information System (INIS)

    Isnanto, R Rizal; Suhardjo; Susanto, Adhi

    2013-01-01

    Some previous research has been carried out to obtain new types of wavelets. For iris recognition using orthogonal or biorthogonal wavelets, the Haar filter had been found to be the most suitable for recognizing the iris image. However, a new wavelet should be designed to find the one best matched to extracting the iris image features, so that it can easily be applied for identification, recognition, or authentication purposes. In this research, a new biorthogonal wavelet was designed based on Haar filter properties and Haar's orthogonality conditions. As a result, a new biorthogonal 5/7 filter type wavelet was obtained which, in terms of its mean-squared error (MSE) and Euclidean distance parameters, extracts the iris image features better than other types of wavelets, including Haar.

  17. A new signal restoration method based on deconvolution of the Point Spread Function (PSF) for the Flat-Field Holographic Concave Grating UV spectrometer system

    Science.gov (United States)

    Dai, Honglin; Luo, Yongdao

    2013-12-01

    In recent years, with the development of the Flat-Field Holographic Concave Grating, it has been adopted by all kinds of UV spectrometers. By means of a single optical surface, the Flat-Field Holographic Concave Grating can implement both dispersion and imaging, which makes the UV spectrometer system design quite compact. However, the calibration of the Flat-Field Holographic Concave Grating is very difficult, and various factors make its imaging quality hard to guarantee, so the spectrum signal has to be processed with signal restoration before use. Guided by the theory of signals and systems, and after a series of experiments, we found that our UV spectrometer system is a Linear Space-Variant System. This means that the PSF of every pixel of the system, which contains thousands of pixels, would have to be measured; obviously, that is a large amount of computation. To deal with this problem, we propose a novel signal restoration method. This method divides the system into several Linear Space-Invariant subsystems and then performs signal restoration with their PSFs. Our experiments show that this method is effective and inexpensive.
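
    The idea of splitting a space-variant system into locally space-invariant subsystems can be sketched as below: the spectrum is cut into a few bands, each band is deconvolved with its own (here invented, Gaussian) PSF, and the pieces are reassembled. Band edges, PSF widths and the Wiener regularisation are placeholders, not calibrated instrument values.

        import numpy as np

        def wiener_deconvolve(signal, psf, snr=100.0):
            """1-D Wiener deconvolution of a spectrum segment by its local PSF."""
            n = len(signal)
            kernel = np.zeros(n)
            kernel[:len(psf)] = psf
            kernel = np.roll(kernel, -(len(psf) // 2))        # centre the PSF at sample 0
            H = np.fft.rfft(kernel)
            S = np.fft.rfft(signal)
            return np.fft.irfft(S * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr), n)

        spectrum = np.random.rand(2048)                        # placeholder measured spectrum
        band_edges = [0, 512, 1024, 1536, 2048]                # four space-invariant sub-bands
        widths = (1.5, 2.0, 2.5, 3.0)                          # assumed local PSF widths (pixels)

        restored = np.empty_like(spectrum)
        for (lo, hi), w in zip(zip(band_edges[:-1], band_edges[1:]), widths):
            psf = np.exp(-0.5 * (np.arange(-10, 11) / w) ** 2)
            restored[lo:hi] = wiener_deconvolve(spectrum[lo:hi], psf / psf.sum())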

  18. MR angiography with a matched filter

    International Nuclear Information System (INIS)

    De Castro, J.B.; Riederer, S.J.; Lee, J.N.

    1987-01-01

    The technique of matched filtering was applied to a series of cine MR images. The filter was devised to yield a subtraction angiographic image in which direct current components present in the cine series are removed and the signal-to-noise ratio (S/N) of the vascular structures is optimized. The S/N of a matched filter was compared with that of a simple subtraction, in which an image with high flow is subtracted from one with low flow. Experimentally, a range of results from minimal improvement to significant (60%) improvement in S/N was seen in the comparisons of matched filtered subtraction with simple subtraction
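
    A minimal sketch of the matched-filter combination, with made-up data in place of the cine acquisition: each frame is weighted by a zero-mean version of the expected vascular signal time course, so static (direct current) tissue cancels while vascular signal adds coherently.

        import numpy as np

        frames = np.random.rand(16, 128, 128)                    # placeholder cine series (t, y, x)
        expected_flow = np.sin(np.linspace(0, 2 * np.pi, 16))    # assumed vascular waveform

        weights = expected_flow - expected_flow.mean()           # zero mean removes DC components
        weights /= np.linalg.norm(weights)                       # unit-energy matched filter

        angiogram = np.tensordot(weights, frames, axes=(0, 0))   # matched-filtered subtraction image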

  19. Semi-Automatic Removal of Foreground Stars from Images of Galaxies

    Science.gov (United States)

    Frei, Zsolt

    1996-07-01

    A new procedure, designed to remove foreground stars from galaxy profiles, is presented here. Although several programs exist for stellar and faint object photometry, none of them treat star removal from the images very carefully. I present my attempt to develop such a system, and briefly compare the performance of my software to one of the well-known stellar photometry packages, DAOPhot (Stetson 1987). The major steps in my procedure are: (1) automatic construction of an empirical 2D point spread function from well separated stars that are situated off the galaxy; (2) automatic identification of those peaks that are likely to be foreground stars, scaling the PSF and removing these stars, and patching residuals (in the automatically determined smallest possible area where residuals are truly significant); and (3) cosmetic fixing of remaining degradations in the image. The algorithm and software presented here are significantly better for automatic removal of foreground stars from images of galaxies than DAOPhot or similar packages, since: (a) the most suitable stars are selected automatically from the image for the PSF fit; (b) after star removal an intelligent and automatic procedure removes any possible residuals; (c) an unlimited number of images can be cleaned in one run without any user interaction whatsoever. (SECTION: Computing and Data Analysis)
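
    Step (2) of the procedure, scaling the empirical PSF to a detected stellar peak and subtracting it, can be sketched as follows; the PSF stamp, image and peak position are placeholders rather than output of the actual pipeline, and the background under the star is ignored for brevity.

        import numpy as np

        def subtract_star(image, psf, y0, x0):
            """Least-squares scale the PSF to the data around (y0, x0) and subtract it in place."""
            h, w = psf.shape
            ys, xs = y0 - h // 2, x0 - w // 2
            stamp = image[ys:ys + h, xs:xs + w]
            scale = np.sum(stamp * psf) / np.sum(psf * psf)   # linear least-squares amplitude
            image[ys:ys + h, xs:xs + w] = stamp - scale * psf
            return image

        galaxy = np.random.rand(256, 256)                      # placeholder galaxy image
        yy, xx = np.mgrid[-7:8, -7:8]
        psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))          # stand-in for the empirical 2D PSF
        psf /= psf.sum()
        galaxy = subtract_star(galaxy, psf, y0=100, x0=120)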

  20. Biochemical Differences Between Official and Simulated Mixed Martial Arts (MMA) Matches.

    Science.gov (United States)

    Silveira Coswig, Victor; Hideyoshi Fukuda, David; de Paula Ramos, Solange; Boscolo Del Vecchio, Fabricio

    2016-06-01

    One of the goals of training in combat sports is to mimic real situations. For mixed martial arts (MMA), simulated sparring matches are a frequent component of training, but there is a lack of knowledge concerning the differences between sparring and competitive environments. The main objective of this study was to compare biochemical responses to sparring and official MMA matches. Twenty-five male professional MMA fighters were evaluated during official events (OFF = 12) and simulated matches (SIM = 13). For both situations, blood samples were taken before (PRE) and immediately after (POST) matches. For statistical analysis, two-way analyses of variance (time x group and time x winner) were used to compare the dependent parametric variables. For non-parametric data, the Kruskal-Wallis test was used and differences were confirmed by Mann-Whitney tests. No significant differences were observed among the groups for demographic variables. The athletes were 26.5 ± 5 years old, with 80 ± 10 kg body mass, 1.74 ± 0.05 m height and 39.4 ± 25 months of training experience. Primary results indicated higher blood glucose concentration prior to fights for the OFF group (OFF = 6.1 ± 1.2 mmol/L and SIM = 4.4 ± 0.7 mmol/L; P < 0.01) and higher ALT values for the OFF group at both time points (OFF: PRE = 41.2 ± 12 U/L, POST = 44.2 ± 14.1 U/L; SIM: PRE = 28.1 ± 13.8 U/L, POST = 30.5 ± 12.5 U/L; P = 0.001). In addition, blood lactate showed similar responses for both groups (OFF: PRE = 4 [3.4 - 4.4] mmol/L, POST = 16.9 [13.8 - 23.5] mmol/L; SIM: PRE = 3.8 [2.8 - 5.5] mmol/L, POST = 16.8 [12.3 - 19.2] mmol/L; P < 0.001). In conclusion, official and simulated MMA matches induce similar high-intensity glycolytic demands and minimal changes to biochemical markers of muscle damage immediately following the fights. Glycolytic availability prior to the fights was raised exclusively in response to official matches.

  1. On the use of INS to improve Feature Matching

    Science.gov (United States)

    Masiero, A.; Guarnieri, A.; Vettore, A.; Pirotti, F.

    2014-11-01

    The continuous technological improvement of mobile devices opens the frontiers of Mobile Mapping systems to very compact systems, i.e. a smartphone or a tablet. This motivates the development of efficient 3D reconstruction techniques based on the sensors typically embedded in such devices, i.e. imaging sensors, GPS and an Inertial Navigation System (INS). Such methods usually exploit photogrammetry techniques (structure from motion) to provide an estimation of the geometry of the scene. Indeed, 3D reconstruction techniques (e.g. structure from motion) rely on the use of features properly matched in different images to compute the 3D positions of objects by means of triangulation. Hence, correct feature matching is of fundamental importance to ensure good quality 3D reconstructions. Matching methods are based on the appearance of features, which can change as a consequence of variations in camera position and orientation and in environment illumination. For this reason, several methods have been developed in recent years in order to provide feature descriptors that are robust (ideally invariant) to such variations, e.g. the Scale-Invariant Feature Transform (SIFT), Affine SIFT, the Hessian affine and Harris affine detectors, and Maximally Stable Extremal Regions (MSER). This work deals with the integration of information provided by the INS into the feature matching procedure: a previously developed navigation algorithm is used to constantly estimate the device position and orientation. Then, such information is exploited to estimate the transformation of feature regions between two camera views. This makes it possible to compare regions from different images associated with the same feature as seen from the same point of view, significantly easing the comparison of feature characteristics and, consequently, improving matching. SIFT-like descriptors are used in order to ensure good matching results in the presence of illumination variations and to compensate for the approximations related to the estimation

  2. Influence of the partial volume correction method on (18)F-fluorodeoxyglucose brain kinetic modelling from dynamic PET images reconstructed with resolution model based OSEM.

    Science.gov (United States)

    Bowen, Spencer L; Byars, Larry G; Michel, Christian J; Chonde, Daniel B; Catana, Ciprian

    2013-10-21

    Kinetic parameters estimated from dynamic (18)F-fluorodeoxyglucose ((18)F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting (18)F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in

  3. Spin-image surface matching based target recognition in laser radar range imagery

    International Nuclear Information System (INIS)

    Li, Wang; Jian-Feng, Sun; Qi, Wang

    2010-01-01

    We explore the problem of in-plane rotation-invariance existing in the vertical detection of laser radar (Ladar) using the algorithm of spin-image surface matching. The method used to recognize the target in the range imagery of Ladar is time-consuming, owing to its complicated procedure, which violates the requirement of real-time target recognition in practical applications. To simplify the troublesome procedure, we improve the spin-image algorithm by introducing a statistical correlation coefficient into target recognition in the range imagery of Ladar. The system performance is demonstrated on sixteen simulated noisy range images with targets rotated through an arbitrary angle in plane. The high efficiency and acceptable recognition rate obtained herein testify to the validity of the improved algorithm for practical applications. The proposed algorithm not only solves the problem of in-plane rotation-invariance rationally, but also meets the real-time requirement. This paper ends with a comparison of the proposed method and the previous one. (classical areas of phenomenology)

  4. Stereo matching using epipolar distance transform.

    Science.gov (United States)

    Yang, Qingxiong; Ahuja, Narendra

    2012-10-01

    In this paper, we propose a simple but effective image transform, called the epipolar distance transform, for matching low-texture regions. It converts image intensity values to a relative location inside a planar segment along the epipolar line, such that pixels in the low-texture regions become distinguishable. We theoretically prove that the transform is affine invariant, thus the transformed images can be directly used for stereo matching. Any existing stereo algorithms can be directly used with the transformed images to improve reconstruction accuracy for low-texture regions. Results on real indoor and outdoor images demonstrate the effectiveness of the proposed transform for matching low-texture regions, keypoint detection, and description for low-texture scenes. Our experimental results on Middlebury images also demonstrate the robustness of our transform for highly textured scenes. The proposed transform has a great advantage, its low computational complexity. It was tested on a MacBook Air laptop computer with a 1.8 GHz Core i7 processor, with a speed of about 9 frames per second for a video graphics array-sized image.

  5. Efficient line matching with homography

    Science.gov (United States)

    Shen, Yan; Dai, Yuxing; Zhu, Zhiliang

    2018-03-01

    In this paper, we propose a novel approach to line matching based on homography. The basic idea is to use cheaply obtainable matched points to boost the similarity between two images. Two types of homography, estimated by direct linear transformation, are used to transform the images and extract their similar parts, laying a foundation for the use of optical flow tracking. The merit of this similarity is that rapid matching can be achieved by regionalizing line segments and searching locally. For multiple homography estimation, which can perform better than one global homography, we introduce the rank-one modification method of singular value decomposition to reduce the computational cost. The proposed approach results in point-to-point matches, which can be utilized seamlessly with state-of-the-art point-match-based structure from motion (SfM) frameworks. The outstanding performance and feasible robustness of our approach are demonstrated in this paper.
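
    A minimal sketch of the homography step under assumed point matches: the matched points feed OpenCV's DLT/RANSAC homography estimator, and the resulting H warps one image towards the other so that line segments can be compared and tracked locally in a common frame.

        import numpy as np
        import cv2

        # Synthetic point matches standing in for the cheaply obtained matched points.
        pts1 = np.float32([[10, 10], [200, 30], [40, 180], [220, 200], [120, 90]])
        H_true = np.array([[1.02, 0.01, 5.0],
                           [0.00, 0.98, -3.0],
                           [0.00, 0.00, 1.0]])
        pts2 = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H_true).reshape(-1, 2)

        H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, ransacReprojThreshold=3.0)

        img1 = np.random.randint(0, 255, (240, 320), dtype=np.uint8)   # placeholder image
        warped = cv2.warpPerspective(img1, H, (320, 240))              # img1 mapped into img2's frame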

  6. Technical performance and match-to-match variation in elite football teams.

    Science.gov (United States)

    Liu, Hongyou; Gómez, Miguel-Angel; Gonçalves, Bruno; Sampaio, Jaime

    2016-01-01

    Recent research suggests that match-to-match variation adds important information to performance descriptors in team sports, as it helps measure how players fine-tune their tactical behaviours and technical actions to the extreme dynamical environments. The current study aims to identify the differences in technical performance of players from strong and weak teams and to explore match-to-match variation of players' technical match performance. Performance data of all the 380 matches of season 2012-2013 in the Spanish First Division Professional Football League were analysed. Twenty-one performance-related match actions and events were chosen as variables in the analyses. Players' technical performance profiles were established by unifying count values of each action or event of each player per match into the same scale. Means of these count values of players from Top3 and Bottom3 teams were compared and plotted into radar charts. Coefficient of variation of each match action or event within a player was calculated to represent his match-to-match variation of technical performance. Differences in the variation of technical performances of players across different match contexts (team and opposition strength, match outcome and match location) were compared. All the comparisons were achieved by the magnitude-based inferences. Results showed that technical performances differed between players of strong and weak teams from different perspectives across different field positions. Furthermore, the variation of the players' technical performance is affected by the match context, with effects from team and opposition strength greater than effects from match location and match outcome.

  7. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which r...

  8. Multi-patch matching for person re-identification

    Science.gov (United States)

    Labidi, Hocine; Luo, Sen-Lin; Boubekeur, Mohamed B.; Benlefki, Tarek

    2015-08-01

    Recognizing a target object across non-overlapping distributed cameras is known in the computer vision community as the problem of person re-identification. In this paper, a multi-patch matching method for person re-identification is presented. It starts from the assumption that the appearance (clothes) of a person does not change while passing through different cameras' fields of view, which means that regions with the same color in the target image will remain identical across cameras. First, we extract distinctive features in the training procedure, where each target image is divided into small patches, and SIFT features and LAB color histograms are computed for each patch. Then we use the KNN approach to detect groups of patches with high similarity in the target image, and a bi-directional weighted group matching mechanism is used for the re-identification. Experiments on the challenging VIPeR dataset show that the proposed method outperforms several baselines and state-of-the-art approaches.

  9. Matched-Filter Thermography

    Directory of Open Access Journals (Sweden)

    Nima Tabatabaei

    2018-04-01

    Conventional infrared thermography techniques, including pulsed and lock-in thermography, have shown great potential for non-destructive evaluation of a broad spectrum of materials, spanning from metals to polymers to biological tissues. However, the performance of these techniques is often limited due to the diffuse nature of thermal wave fields, resulting in an inherent compromise between inspection depth and depth resolution. Recently, matched-filter thermography has been introduced as a means for overcoming this classic limitation to enable depth-resolved subsurface thermal imaging and improve axial/depth resolution. This paper reviews the basic principles and experimental results of matched-filter thermography: first, mathematical and signal processing concepts related to matched filtering and pulse compression are discussed. Next, theoretical modeling of thermal-wave responses to matched-filter thermography using two categories of pulse compression techniques (linear frequency modulation and binary phase coding) is reviewed. Key experimental results from the literature demonstrating the maintenance of axial resolution while inspecting deep into opaque and turbid media are also presented and discussed. Finally, the concept of thermal coherence tomography for deconvolution of thermal responses of axially superposed sources and creation of depth-selective images in a diffusion-wave field is reviewed.
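
    The linear-frequency-modulation pulse compression that underlies matched-filter thermography can be illustrated with a one-dimensional toy example; the sweep parameters, delay and noise level below are illustrative only, not values from the reviewed instruments.

        import numpy as np
        from scipy.signal import chirp, correlate

        fs = 1000.0                                    # sampling rate (Hz)
        t = np.arange(0, 2.0, 1.0 / fs)
        reference = chirp(t, f0=0.1, f1=10.0, t1=2.0)  # emitted linear frequency sweep

        delay = 0.3                                    # assumed subsurface thermal delay (s)
        n_delay = int(delay * fs)
        response = np.zeros_like(t)
        response[n_delay:] = 0.5 * reference[:len(t) - n_delay]
        response += 0.2 * np.random.randn(len(t))      # measurement noise

        compressed = correlate(response, reference, mode='full')
        lag = (np.argmax(compressed) - (len(t) - 1)) / fs   # recovered delay, close to 0.3 s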

  10. Fully automated calculation of image-derived input function in simultaneous PET/MRI in a sheep model

    International Nuclear Information System (INIS)

    Jochimsen, Thies H.; Zeisig, Vilia; Schulz, Jessica; Werner, Peter; Patt, Marianne; Patt, Jörg; Dreyer, Antje Y.; Boltze, Johannes; Barthel, Henryk; Sabri, Osama; Sattler, Bernhard

    2016-01-01

    Obtaining the arterial input function (AIF) from image data in dynamic positron emission tomography (PET) examinations is a non-invasive alternative to arterial blood sampling. In simultaneous PET/magnetic resonance imaging (PET/MRI), high-resolution MRI angiographies can be used to define major arteries for correction of partial-volume effects (PVE) and point spread function (PSF) response in the PET data. The present study describes a fully automated method to obtain the image-derived input function (IDIF) in PET/MRI. Results are compared to those obtained by arterial blood sampling. To segment the trunk of the major arteries in the neck, a high-resolution time-of-flight MRI angiography was postprocessed by a vessel-enhancement filter based on the inertia tensor. Together with the measured PSF of the PET subsystem, the arterial mask was used for geometrical deconvolution, yielding the time-resolved activity concentration averaged over a major artery. The method was compared to manual arterial blood sampling at the hind leg of 21 sheep (animal stroke model) during measurement of blood flow with O15-water. Absolute quantification of activity concentration was compared after bolus passage during steady state, i.e., between 2.5- and 5-min post injection. Cerebral blood flow (CBF) values from blood sampling and IDIF were also compared. The cross-calibration factor obtained by comparing activity concentrations in blood samples and IDIF during steady state is 0.98 ± 0.10. In all examinations, the IDIF provided a much earlier and sharper bolus peak than in the time course of activity concentration obtained by arterial blood sampling. CBF using the IDIF was 22 % higher than CBF obtained by using the AIF yielded by blood sampling. The small deviation between arterial blood sampling and IDIF during steady state indicates that correction of PVE and PSF is possible with the method presented. The differences in bolus dynamics and, hence, CBF values can be explained by the

  12. Multi-modal image registration: matching MRI with histology

    Science.gov (United States)

    Alic, Lejla; Haeck, Joost C.; Klein, Stefan; Bol, Karin; van Tiel, Sandra T.; Wielopolski, Piotr A.; Bijster, Magda; Niessen, Wiro J.; Bernsen, Monique; Veenland, Jifke F.; de Jong, Marion

    2010-03-01

    Spatial correspondence between histology and multi-sequence MRI can provide information about the capabilities of non-invasive imaging to characterize cancerous tissue. However, shrinkage and deformation occurring during the excision of the tumor and the histological processing complicate the co-registration of MR images with histological sections. This work proposes a methodology to establish a detailed 3D relation between histology sections and in vivo MRI tumor data. The key features of the methodology are a very dense histological sampling (up to 100 histology slices per tumor), mutual-information-based non-rigid B-spline registration, the utilization of the whole 3D data sets, and the exploitation of an intermediate ex vivo MRI. In this proof-of-concept paper, the methodology was applied to one tumor. We found that, after registration, the visual alignment of tumor borders and internal structures was fairly accurate. Utilizing the intermediate ex vivo MRI, it was possible to account for changes caused by the excision of the tumor: we observed a tumor expansion of 20%. Also the effects of fixation, dehydration and histological sectioning could be determined: 26% shrinkage of the tumor was found. The annotation of viable tissue, performed in histology and transformed to the in vivo MRI, matched clearly with high-intensity regions in MRI. With this methodology, histological annotation can be directly related to the corresponding in vivo MRI. This is a vital step for evaluating the feasibility of multi-spectral MRI to depict histological ground truth.

  13. Simultaneous Semi-Coupled Dictionary Learning for Matching in Canonical Space.

    Science.gov (United States)

    Das, Nilotpal; Mandal, Devraj; Biswas, Soma

    2017-05-24

    Cross-modal recognition and matching with privileged information are important challenging problems in the field of computer vision. The cross-modal scenario deals with matching across different modalities and needs to take care of the large variations present across and within each modality. The privileged information scenario deals with the situation that all the information available during training may not be available during the testing stage and hence algorithms need to leverage the extra information from the training stage itself. We show that for multi-modal data, either one of the above situations may arise if one modality is absent during testing. Here, we propose a novel framework which can handle both these scenarios seamlessly with applications to matching multi-modal data. The proposed approach jointly uses data from the two modalities to build a canonical representation which encompasses information from both the modalities. We explore four different types of canonical representations for different types of data. The algorithm computes dictionaries and canonical representation for data from both the modalities such that the transformed sparse coefficients of both the modalities are equal to that of the canonical representation. The sparse coefficients are finally matched using Mahalanobis metric. Extensive experiments on different datasets, involving RGBD, text-image and audio-image data show the effectiveness of the proposed framework.

  14. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    Science.gov (United States)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified that the proposed reconstruction method offers promising capabilities compared with conventional regularization.
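
    The proximal machinery behind this kind of L1-regularised reconstruction can be sketched generically: a gradient step on the data-fidelity term, soft-thresholding of transform coefficients, and a positivity projection. The block-matching transform itself is replaced here by an orthonormal DCT and the projector by a small random matrix, so this is an assumption-laden stand-in rather than the BMSR algorithm.

        import numpy as np
        from scipy.fftpack import dct, idct

        def soft_threshold(x, tau):
            """Proximal operator of tau * ||x||_1."""
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def ista_step(x, A, b, step, tau):
            """One ISTA-style iteration on 0.5*||Ax - b||^2 + tau*||Tx||_1."""
            z = x - step * (A.T @ (A @ x - b))          # gradient step on the data term
            coeffs = soft_threshold(dct(z, norm='ortho'), tau)
            return idct(coeffs, norm='ortho')

        A = np.random.rand(64, 128)                     # stand-in (incomplete) projection operator
        x_true = np.zeros(128)
        x_true[::16] = 1.0
        b = A @ x_true
        x = np.zeros(128)
        for _ in range(300):
            x = np.maximum(ista_step(x, A, b, step=1e-4, tau=1e-5), 0.0)   # positivity constraint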

  15. PTB-associated splicing factor (PSF) functions as a repressor of STAT6-mediated Igε gene transcription by recruitment of HDAC1

    DEFF Research Database (Denmark)

    Dong, Lijie; Zhang, Xinyu; Fu, Xiao

    2010-01-01

    understood. Here we identified by proteomic approach that PTB-associated splicing factor (PSF) interacts with STAT6. In cells the interaction required IL-4 stimulation and was observed both with endogenous and ectopically expressed proteins. The ligand dependency of the interaction suggested involvement...

  16. PIMR: Parallel and Integrated Matching for Raw Data.

    Science.gov (United States)

    Li, Zhenghao; Yang, Junying; Zhao, Jiaduo; Han, Peng; Chai, Zhi

    2016-01-02

    With the trend of high-resolution imaging, computational costs of image matching have substantially increased. In order to find the compromise between accuracy and computation in real-time applications, we bring forward a fast and robust matching algorithm, named parallel and integrated matching for raw data (PIMR). This algorithm not only effectively utilizes the color information of raw data, but also designs a parallel and integrated framework to shorten the time-cost in the demosaicing stage. Experiments show that compared to existing state-of-the-art methods, the proposed algorithm yields a comparable recognition rate, while the total time-cost of imaging and matching is significantly reduced.

  17. Global stereo matching algorithm based on disparity range estimation

    Science.gov (United States)

    Li, Jing; Zhao, Hong; Gu, Feifei

    2017-09-01

    Global stereo matching algorithms are highly accurate for disparity map estimation, but the time consumed in the optimization process remains a major obstacle, especially for image pairs with high resolution and a large baseline setting. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for global stereo matching is proposed to estimate the disparity map of rectified stereo images in this paper. The projective geometry of a parallel binocular stereo vision setup is investigated to reveal a relationship between the two disparities at each pixel in rectified stereo images with different baselines, which, taking the quantization error into account, can be used to quickly obtain a predicted disparity map under a long baseline from the estimate under the short one. Then, the drastically reduced disparity ranges at each pixel under the long baseline setting can be determined from the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into graph cuts with expansion moves to estimate the precise disparity map, which greatly reduces the computational cost without loss of accuracy in the stereo matching, especially for dense global stereo matching, compared to the traditional algorithm. Experimental results with the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.

  18. Optimization of brain PET imaging for a multicentre trial: the French CATI experience.

    Science.gov (United States)

    Habert, Marie-Odile; Marie, Sullivan; Bertin, Hugo; Reynal, Moana; Martini, Jean-Baptiste; Diallo, Mamadou; Kas, Aurélie; Trébossen, Régine

    2016-12-01

    CATI is a French initiative launched in 2010 to handle the neuroimaging of a large cohort of subjects recruited for an Alzheimer's research program called MEMENTO. This paper presents our test protocol and results obtained for the 22 PET centres (overall 13 different scanners) involved in the MEMENTO cohort. We determined acquisition parameters using phantom experiments prior to patient studies, with the aim of optimizing PET quantitative values to the highest possible per site, while reducing, if possible, variability across centres. Jaszczak's and 3D-Hoffman's phantom measurements were used to assess image spatial resolution (ISR), recovery coefficients (RC) in hot and cold spheres, and signal-to-noise ratio (SNR). For each centre, the optimal reconstruction parameters were chosen as those maximizing ISR and RC without a noticeable decrease in SNR. Point-spread-function (PSF) modelling reconstructions were discarded. The three figures of merit extracted from the images reconstructed with optimized parameters and routine schemes were compared, as were volumes of interest ratios extracted from Hoffman acquisitions. The net effect of the 3D-OSEM reconstruction parameter optimization was investigated on a subset of 18 scanners without PSF modelling reconstruction. Compared to the routine parameters of the 22 PET centres, average RC in the two smallest hot and cold spheres and average ISR remained stable or were improved with the optimized reconstruction, at the expense of slight SNR degradation, while the dispersion of values was reduced. For the subset of scanners without PSF modelling, the mean RC of the smallest hot sphere obtained with the optimized reconstruction was significantly higher than with routine reconstruction. The putamen and caudate-to-white matter ratios measured on 3D-Hoffman acquisitions of all centres were also significantly improved by the optimization, while the variance was reduced. This study provides guidelines for optimizing quantitative

  19. Histogram Matching Extends Acceptable Signal Strength Range on Optical Coherence Tomography Images

    Science.gov (United States)

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Sigal, Ian A.; Kagemann, Larry; Schuman, Joel S.

    2015-01-01

    Purpose. We minimized the influence of image quality variability, as measured by signal strength (SS), on optical coherence tomography (OCT) thickness measurements using the histogram matching (HM) method. Methods. We scanned 12 eyes from 12 healthy subjects with the Cirrus HD-OCT device to obtain a series of OCT images with a wide range of SS (maximal range, 1–10) at the same visit. For each eye, the histogram of an image with the highest SS (best image quality) was set as the reference. We applied HM to the images with lower SS by shaping the input histogram into the reference histogram. Retinal nerve fiber layer (RNFL) thickness was automatically measured before and after HM processing (defined as original and HM measurements), and compared to the device output (device measurements). Nonlinear mixed effects models were used to analyze the relationship between RNFL thickness and SS. In addition, the lowest tolerable SSs, which gave the RNFL thickness within the variability margin of manufacturer recommended SS range (6–10), were determined for device, original, and HM measurements. Results. The HM measurements showed less variability across a wide range of image quality than the original and device measurements (slope = 1.17 vs. 4.89 and 1.72 μm/SS, respectively). The lowest tolerable SS was successfully reduced to 4.5 after HM processing. Conclusions. The HM method successfully extended the acceptable SS range on OCT images. This would qualify more OCT images with low SS for clinical assessment, broadening the OCT application to a wider range of subjects. PMID:26066749
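
    Histogram matching itself is a standard operation and can be sketched as follows, with random arrays standing in for the Cirrus B-scans: intensities of the low-signal-strength image are remapped so that their cumulative distribution matches that of the best-quality scan of the same eye.

        import numpy as np

        def match_histogram(source, reference):
            """Remap source intensities so their CDF matches the reference CDF."""
            s_values, s_counts = np.unique(source.ravel(), return_counts=True)
            r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
            s_cdf = np.cumsum(s_counts) / source.size
            r_cdf = np.cumsum(r_counts) / reference.size
            mapped = np.interp(s_cdf, r_cdf, r_values)     # source quantile -> reference intensity
            return np.interp(source.ravel(), s_values, mapped).reshape(source.shape)

        reference_scan = np.random.rand(496, 512) ** 0.5   # placeholder high-SS image
        low_ss_scan = np.random.rand(496, 512) ** 2.0      # placeholder low-SS image
        matched = match_histogram(low_ss_scan, reference_scan)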

  20. Accurate estimation of motion blur parameters in noisy remote sensing image

    Science.gov (United States)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and ground objects is one of the most common causes of remote sensing image degradation. It seriously weakens image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, so identifying the motion blur direction and length accurately is crucial for building the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain these parameters by using the Radon transform. However, the heavy noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore the remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
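    Setting the GrabCut segmentation step aside, the core idea of reading the blur orientation from the stripes in the log-spectrum can be sketched with scikit-image as below. This is a simplified illustration under those assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.transform import radon

def estimate_stripe_angle(image):
    """Return the orientation (degrees) of the stripes in the image spectrum.

    The dark/bright stripes in the centred log-magnitude spectrum are
    perpendicular to the motion direction; the Radon projection whose profile
    has the largest variance is taken along the stripe orientation.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    log_spec = np.log1p(spectrum)

    angles = np.arange(0.0, 180.0, 1.0)
    sinogram = radon(log_spec, theta=angles, circle=False)

    # Per-angle variance across detector positions; the maximum marks the stripes.
    stripe_angle = angles[np.argmax(np.var(sinogram, axis=0))]
    return stripe_angle  # motion direction is perpendicular to this angle
```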

  1. Edge Artifacts in Point Spread Function-based PET Reconstruction in Relation to Object Size and Reconstruction Parameters

    Directory of Open Access Journals (Sweden)

    Yuji Tsutsui

    2017-06-01

    Objective(s): We evaluated edge artifacts in relation to phantom diameter and reconstruction parameters in point spread function (PSF)-based positron emission tomography (PET) image reconstruction. Methods: PET data were acquired from an original cone-shaped phantom filled with 18F solution (21.9 kBq/mL) for 10 min using a Biograph mCT scanner. The images were reconstructed using the baseline ordered subsets expectation maximization (OSEM) algorithm and OSEM with the PSF correction model. The reconstruction parameters included a pixel size of 1.0, 2.0, or 3.0 mm, 1-12 iterations, 24 subsets, and a full width at half maximum (FWHM) of the post-filter Gaussian filter of 1.0, 2.0, or 3.0 mm. We compared both the maximum recovery coefficient (RCmax) and the mean recovery coefficient (RCmean) in the phantom at different diameters. Results: The OSEM images had no edge artifacts, but the OSEM with PSF images had a dense edge delineating the hot phantom at diameters of 10 mm or more and a dense spot at the center at diameters of 8 mm or less. The dense edge was clearly observed on images with a small pixel size, a Gaussian filter with a small FWHM, and a high number of iterations. At a phantom diameter of 6-7 mm, the RCmax for the OSEM and OSEM with PSF images was 60% and 140%, respectively (pixel size: 1.0 mm; FWHM of the Gaussian filter: 2.0 mm; iterations: 2). The RCmean of the OSEM with PSF images did not exceed 100%. Conclusion: PSF-based image reconstruction resulted in edge artifacts, the degree of which depends on the pixel size, number of iterations, FWHM of the Gaussian filter, and object size.
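    As a minimal illustration of the figures of merit used above, the maximum and mean recovery coefficients of a region can be computed from a reconstructed image and the known true activity concentration. The array and variable names below are hypothetical.

```python
import numpy as np

def recovery_coefficients(recon, mask, true_concentration):
    """Return (RCmax, RCmean) in percent for the voxels selected by `mask`.

    `recon` is the reconstructed activity image, `mask` a boolean array of the
    same shape marking the phantom cross-section, and `true_concentration` the
    known activity concentration filled into the phantom (same units as recon).
    """
    values = recon[mask]
    rc_max = 100.0 * values.max() / true_concentration
    rc_mean = 100.0 * values.mean() / true_concentration
    return rc_max, rc_mean

# A hot region reconstructed with edge overshoot (Gibbs-like artifact) can
# yield an RCmax above 100% even while RCmean stays below 100%.
```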

  2. Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images

    Directory of Open Access Journals (Sweden)

    Victor Lawrence

    2012-07-01

    Electro-optic (EO) image sensors exhibit high resolution and low noise levels in daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperatures. Therefore, we propose a novel framework of IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than the superimposed images. Additionally, following the same steps, the simulation results show a blended IR image of better quality when only the original IR image is available.
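    Steps (2) and (4) can be sketched with OpenCV as below, assuming the inverse-filter transformation of the IR image and the registration have already been performed; the Canny thresholds and the blending weight are illustrative choices, not values from the paper.

```python
import cv2
import numpy as np

def blend_eo_edges_onto_ir(ir_registered, eo_image, alpha=0.3):
    """Blend EO edge information into a co-registered IR image.

    `ir_registered` and `eo_image` are 8-bit single-channel images of the same
    size; `alpha` controls how strongly the EO edges are mixed in.
    """
    # Step (2): edge detection on the high-resolution EO image.
    edges = cv2.Canny(eo_image, threshold1=50, threshold2=150)

    # Step (4): blend (rather than hard-superimpose) the edge map into the IR image.
    blended = cv2.addWeighted(ir_registered.astype(np.float32), 1.0,
                              edges.astype(np.float32), alpha, 0.0)
    return np.clip(blended, 0, 255).astype(np.uint8)
```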

  3. Deep convolutional neural networks for building extraction from orthoimages and dense image matching point clouds

    Science.gov (United States)

    Maltezos, Evangelos; Doulamis, Nikolaos; Doulamis, Anastasios; Ioannidis, Charalabos

    2017-10-01

    Automatic extraction of buildings from remote sensing data is an attractive research topic, useful for several applications such as cadastre and urban planning, but it remains a challenging task. This is mainly due to the inherent artifacts of the data used and the differences in viewpoint, surrounding environment, and the complex shape and size of the buildings. This paper introduces an efficient deep learning framework based on convolutional neural networks (CNNs) for building extraction from orthoimages. In contrast to conventional deep approaches in which the raw image data are fed as input to the deep neural network, in this paper the height information, derived from the application of a dense image matching algorithm, is exploited as an additional feature. As test sites, several complex urban regions with various types of buildings, pixel resolutions and types of data are used, located in Vaihingen in Germany and in Perissa in Greece. Our method is evaluated using the rates of completeness, correctness, and quality and compared with conventional and other "shallow" learning paradigms such as support vector machines. Experimental results indicate that combining raw image data with height information, fed as input to a deep CNN model, shows potential for building detection in terms of robustness, flexibility, and efficiency.

  4. Sensor-Based Auto-Focusing System Using Multi-Scale Feature Extraction and Phase Correlation Matching

    Directory of Open Access Journals (Sweden)

    Jinbeum Jang

    2015-03-01

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto-focusing than existing systems.
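    Step (iv), estimating the shift between the two feature images, amounts to standard phase correlation; a minimal NumPy version is sketched below, assuming equally sized single-channel feature images. The names are illustrative and not taken from the paper.

```python
import numpy as np

def phase_correlation_shift(left_feat, right_feat, eps=1e-9):
    """Estimate the translation between two feature images by phase correlation."""
    F1 = np.fft.fft2(left_feat)
    F2 = np.fft.fft2(right_feat)

    # Normalised cross-power spectrum; its inverse FFT peaks at the shift.
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + eps
    correlation = np.fft.ifft2(cross_power).real

    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap-around: shifts larger than half the image size are negative.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape)]
    return tuple(shifts)  # (row_shift, col_shift)
```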

  5. Processing and evaluation of image matching tools in radiotherapy; Mise en oeuvre et evaluation d'outils de fusion d'image en radiotherapie

    Energy Technology Data Exchange (ETDEWEB)

    Bondiau, P Y

    2004-11-15

    Cancer is a major public health problem. Treatment can be systemic or loco-regional; in the latter case medical images are important because they specify the localization of the tumour. The objective of radiotherapy is to deliver a curative dose of radiation to the target volume while sparing the organs at risk (OAR). Accurate localization of the target volumes as well as the OAR makes it possible to define the ballistics of the irradiation beams. After describing the principles of radiotherapy and cancer treatment, we specify the clinical stakes of ocular, cerebral and prostatic tumours. We present a state of the art of image matching, reviewing the various techniques in a didactic manner for the medical community. The results of matching are presented within the framework of cerebral and prostatic radiotherapy planning in order to specify the types of matching applicable in oncology and more particularly in radiotherapy. Then, we present the prospects for this type of application in various anatomical areas. Applications of automatic segmentation and the evaluation of the results in the framework of brain tumours are described after a review of the various segmentation methods according to anatomical localization. We then present an original application: the digital simulation of virtual tumour growth and its comparison with the real growth of a cerebral tumour presented by a patient. Lastly, we discuss possible future developments of image processing tools in radiotherapy as well as the research directions to be explored in oncology. (author)

  6. Depth-variant blind restoration with pupil-phase constraints for 3D confocal microscopy

    International Nuclear Information System (INIS)

    Hadj, Saima Ben; Blanc-Féraud, Laure; Engler, Gilbert

    2013-01-01

    Three-dimensional images from confocal laser scanning microscopy suffer from a depth-variant blur due to the refractive index mismatch between the different media composing the system, as well as the specimen, which leads to optical aberrations. Our goal is to develop an image restoration method for 3D confocal microscopy that takes the variation of blur with depth into account. The difficulty is that the optical aberrations depend on the refractive index of the biological specimen, so the depth-variant blur function, or point spread function (PSF), is different for each observation. A blind or semi-blind restoration method therefore needs to be developed for this system. For that purpose, we use a previously developed algorithm for the joint estimation of the specimen function (original image) and the 3D PSF, in which the continuously depth-variant PSF is approximated by a convex combination of a set of space-invariant PSFs taken at different depths. We propose to add to that algorithm a pupil-phase constraint for the PSF estimation, given by the optical instrument geometry. We thus define a blind estimation algorithm by minimizing a regularized criterion into which we integrate the Gerchberg-Saxton algorithm, allowing these physical constraints to be included. We show the efficiency of this method through numerical tests.

  7. EXAMINATION ABOUT INFLUENCE FOR PRECISION OF 3D IMAGE MEASUREMENT FROM THE GROUND CONTROL POINT MEASUREMENT AND SURFACE MATCHING

    Directory of Open Access Journals (Sweden)

    T. Anai

    2015-05-01

    As 3D image measurement software has become widely used with the recent development of computer-vision technology, 3D measurement from images has extended its field of application from desktop objects to topographic surveys of large geographical areas. In particular, the orientation, which used to be a complicated process in image measurement, can now be performed automatically simply by taking many pictures around the object. In the case of a fully textured object, the 3D measurement of surface features is done fully automatically from the oriented images, which greatly facilitates the acquisition of dense 3D point clouds from images with high precision. Against this background, for small and middle-sized objects we now provide all-around 3D measurement with a single commercially available digital camera, and we have also developed technology for topographic measurement with airborne images taken by a small UAV [1~5]. In the present study, for small objects, we examine the accuracy of surface measurement (matching) using experimental data. As for the topographic measurement, we examine the influence of GCP distribution on accuracy using experimental data. In addition, we examine the differences in the analytical results of several 3D image measurement software packages. This document reviews the processing flow of orientation and 3D measurement in each software package and explains the features of each. For the verification of the precision of stereo-matching, we measured a test plane and a test sphere of known form and assessed the results. As to the topographic measurement, we used airborne image data photographed at the test field in Yadorigi of Matsuda City, Kanagawa Prefecture, Japan. We established ground control points measured by RTK-GPS and total station, and we show the results

  8. Examination about Influence for Precision of 3d Image Measurement from the Ground Control Point Measurement and Surface Matching

    Science.gov (United States)

    Anai, T.; Kochi, N.; Yamada, M.; Sasaki, T.; Otani, H.; Sasaki, D.; Nishimura, S.; Kimoto, K.; Yasui, N.

    2015-05-01

    As 3D image measurement software has become widely used with the recent development of computer-vision technology, 3D measurement from images has extended its field of application from desktop objects to topographic surveys of large geographical areas. In particular, the orientation, which used to be a complicated process in image measurement, can now be performed automatically simply by taking many pictures around the object. In the case of a fully textured object, the 3D measurement of surface features is done fully automatically from the oriented images, which greatly facilitates the acquisition of dense 3D point clouds from images with high precision. Against this background, for small and middle-sized objects we now provide all-around 3D measurement with a single commercially available digital camera, and we have also developed technology for topographic measurement with airborne images taken by a small UAV [1~5]. In the present study, for small objects, we examine the accuracy of surface measurement (matching) using experimental data. As for the topographic measurement, we examine the influence of GCP distribution on accuracy using experimental data. In addition, we examine the differences in the analytical results of several 3D image measurement software packages. This document reviews the processing flow of orientation and 3D measurement in each software package and explains the features of each. For the verification of the precision of stereo-matching, we measured a test plane and a test sphere of known form and assessed the results. As to the topographic measurement, we used airborne image data photographed at the test field in Yadorigi of Matsuda City, Kanagawa Prefecture, Japan. We established ground control points measured by RTK-GPS and total station, and we show the results of the analysis made

  9. Coarse-to-fine region selection and matching

    KAUST Repository

    Yang, Yanchao

    2015-10-15

    We present a new approach to wide-baseline matching. We propose to use a hierarchical decomposition of the image domain and coarse-to-fine selection of regions to match. In contrast to interest point matching methods, which sample salient regions to reduce the cost of comparing all regions in two images, our method eliminates regions systematically to achieve efficiency. One advantage of our approach is that it is not restricted to covariant salient regions, a restriction that is too limiting under large viewpoint changes and leads to few corresponding regions. Affine-invariant matching of regions in the hierarchy is achieved efficiently by a coarse-to-fine search of the affine space. Experiments on two benchmark datasets show that our method finds more correct correspondences (with fewer false alarms) than other wide-baseline methods under large viewpoint changes. © 2015 IEEE.

  10. Optical Imaging and Radiometric Modeling and Simulation

    Science.gov (United States)

    Ha, Kong Q.; Fitzmaurice, Michael W.; Moiser, Gary E.; Howard, Joseph M.; Le, Chi M.

    2010-01-01

    OPTOOL software is a general-purpose optical systems analysis tool that was developed to offer a solution to problems associated with computational programs written for the James Webb Space Telescope optical system. It integrates existing routines into coherent processes, and provides a structure with reusable capabilities that allow additional processes to be quickly developed and integrated. It has an extensive graphical user interface, which makes the tool more intuitive and friendly. OPTOOL is implemented using MATLAB with a Fourier optics-based approach for point spread function (PSF) calculations. It features parametric and Monte Carlo simulation capabilities, and uses a direct integration calculation to permit high spatial sampling of the PSF. Exit pupil optical path difference (OPD) maps can be generated using combinations of Zernike polynomials or shaped power spectral densities. The graphical user interface allows rapid creation of arbitrary pupil geometries, and entry of all other modeling parameters to support basic imaging and radiometric analyses. OPTOOL provides the capability to generate wavefront-error (WFE) maps for arbitrary grid sizes. These maps are 2D arrays containing digitally sampled versions of functions ranging from Zernike polynomials, to combinations of sinusoidal wave functions in 2D, to functions generated from a spatial frequency power spectral distribution (PSD). It also can generate optical transfer functions (OTFs), which are incorporated into the PSF calculation. The user can specify radiometrics for the target and sky background, and key performance parameters for the instrument's focal plane array (FPA). This radiometric and detector model setup is fairly extensive, and includes parameters such as zodiacal background, thermal emission noise, read noise, and dark current. The setup also includes target spectral energy distribution as a function of wavelength for polychromatic sources, detector pixel size, and the FPA's charge
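    The Fourier-optics PSF calculation described above boils down to propagating a complex pupil function built from the OPD map. A minimal monochromatic sketch, assuming a precomputed aperture mask and hypothetical sampling choices, is given below; it is an illustration of the general technique, not OPTOOL's implementation.

```python
import numpy as np

def psf_from_opd(opd, aperture, wavelength, pad_factor=4):
    """Compute a monochromatic PSF from an exit-pupil OPD map.

    `opd` is the optical path difference map (same units as `wavelength`),
    `aperture` a 0/1 mask of the pupil, and `pad_factor` controls how finely
    the PSF is sampled in the focal plane.
    """
    pupil = aperture * np.exp(1j * 2.0 * np.pi * opd / wavelength)

    # Zero-pad the pupil so the focal-plane field is finely sampled.
    n = pad_factor * max(pupil.shape)
    padded = np.zeros((n, n), dtype=complex)
    padded[:pupil.shape[0], :pupil.shape[1]] = pupil

    field = np.fft.fftshift(np.fft.fft2(padded))
    psf = np.abs(field) ** 2
    return psf / psf.sum()  # normalise to unit volume
```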

  11. Computer face-matching technology using two-dimensional photographs accurately matches the facial gestalt of unrelated individuals with the same syndromic form of intellectual disability.

    Science.gov (United States)

    Dudding-Byth, Tracy; Baxter, Anne; Holliday, Elizabeth G; Hackett, Anna; O'Donnell, Sheridan; White, Susan M; Attia, John; Brunner, Han; de Vries, Bert; Koolen, David; Kleefstra, Tjitske; Ratwatte, Seshika; Riveros, Carlos; Brain, Steve; Lovell, Brian C

    2017-12-19

    Massively parallel genetic sequencing allows rapid testing of known intellectual disability (ID) genes. However, the discovery of novel syndromic ID genes requires molecular confirmation in at least a second individual or a cluster of individuals with an overlapping phenotype or similar facial gestalt. Using computer face-matching technology we report an automated approach to matching the faces of non-identical individuals with the same genetic syndrome within a database of 3681 images [1600 images of one of 10 genetic syndrome subgroups together with 2081 control images]. Using the leave-one-out method, two research questions were specified: 1) Using two-dimensional (2D) photographs of individuals with one of 10 genetic syndromes within a database of images, did the technology correctly identify more than expected by chance: i) a top match? ii) at least one match within the top five matches? or iii) at least one in the top 10 with an individual from the same syndrome subgroup? 2) Was there concordance between correct technology-based matches and whether two out of three clinical geneticists would have considered the diagnosis based on the image alone? The computer face-matching technology correctly identified a top match, at least one correct match in the top five, and at least one in the top 10 more often than expected by chance for all syndromes except Kabuki syndrome. Although the accuracy of the computer face-matching technology was tested on images of individuals with known syndromic forms of intellectual disability, the results of this pilot study illustrate the potential utility of face-matching technology within deep phenotyping platforms to facilitate the interpretation of DNA sequencing data for individuals who remain undiagnosed despite testing of the known developmental disorder genes.

  12. Radargrammetric DSM generation in mountainous areas through adaptive-window least squares matching constrained by enhanced epipolar geometry

    Science.gov (United States)

    Dong, Yuting; Zhang, Lu; Balz, Timo; Luo, Heng; Liao, Mingsheng

    2018-03-01

    Radargrammetry is a powerful tool to construct digital surface models (DSMs) especially in heavily vegetated and mountainous areas where SAR interferometry (InSAR) technology suffers from decorrelation problems. In radargrammetry, the most challenging step is to produce an accurate disparity map through massive image matching, from which terrain height information can be derived using a rigorous sensor orientation model. However, precise stereoscopic SAR (StereoSAR) image matching is a very difficult task in mountainous areas due to the presence of speckle noise and dissimilar geometric/radiometric distortions. In this article, an adaptive-window least squares matching (AW-LSM) approach with an enhanced epipolar geometric constraint is proposed to robustly identify homologous points after compensation for radiometric discrepancies and geometric distortions. The matching procedure consists of two stages. In the first stage, the right image is re-projected into the left image space to generate epipolar images using rigorous imaging geometries enhanced with elevation information extracted from the prior DEM data e.g. SRTM DEM instead of the mean height of the mapped area. Consequently, the dissimilarities in geometric distortions between the left and right images are largely reduced, and the residual disparity corresponds to the height difference between true ground surface and the prior DEM. In the second stage, massive per-pixel matching between StereoSAR epipolar images identifies the residual disparity. To ensure the reliability and accuracy of the matching results, we develop an iterative matching scheme in which the classic cross correlation matching is used to obtain initial results, followed by the least squares matching (LSM) to refine the matching results. An adaptively resizing search window strategy is adopted during the dense matching step to help find right matching points. The feasibility and effectiveness of the proposed approach is demonstrated using

  13. Performance of the PSF in the South and Northeast of Brazil: institutional and epidemiological evaluation of Primary Health Care; Desempenho do PSF no Sul e no Nordeste do Brasil: avaliação institucional e epidemiológica da Atenção Básica à Saúde

    Directory of Open Access Journals (Sweden)

    Luiz Augusto Facchini

    The study, conducted within the Proesf Baseline Studies, analysed the performance of the Family Health Program (PSF) in 41 municipalities of the states of Alagoas, Paraíba, Pernambuco, Piauí, Rio Grande do Norte, Rio Grande do Sul and Santa Catarina. It used a cross-sectional design with an external comparison group (traditional primary care). Interviews were conducted with 41 presidents of Municipal Health Councils, 29 municipal health secretaries and 32 primary care coordinators. The structure and work processes were characterized in 234 Basic Health Units (UBS), including 4,749 health workers, 4,079 children, 3,945 women, 4,060 adults and 4,006 elderly people. Quality control covered 6% of the sampled households. From 1999 to 2004, PSF coverage grew more in the Northeast than in the South. Fewer than half of the workers were hired through public competitive examination, and precarious employment was more common in the PSF than in traditional UBS. The findings suggest that the performance of Primary Health Care (ABS) is still far from what the SUS prescribes. Less than half of the potential demand used the UBS of their coverage area. The supply of health actions, their utilization and contact through programmatic actions were more adequate in the PSF.

  14. Development of automatic navigation measuring system using template-matching software in image guided neurosurgery

    International Nuclear Information System (INIS)

    Watanabe, Yohei; Hayashi, Yuichiro; Fujii, Masazumi; Wakabayashi, Toshihiko; Kimura, Miyuki; Tsuzaka, Masatoshi; Sugiura, Akihiro

    2010-01-01

    An image-guided neurosurgery and neuronavigation system based on magnetic resonance imaging has been used as an indispensable tool for resection of brain tumors. Therefore, accuracy of the neuronavigation system, provided by periodic quality assurance (QA), is essential for image-guided neurosurgery. Two types of accuracy index, fiducial registration error (FRE) and target registration error (TRE), have been used to evaluate navigation accuracy. FRE shows navigation accuracy on points that have been registered. On the other hand, TRE shows navigation accuracy on points such as tumor, skin, and fiducial markers. This study shows that TRE is more reliable than FRE. However, calculation of TRE is a time-consuming, subjective task. Software for QA was developed to compute TRE. This software calculates TRE automatically by an image processing technique, such as automatic template matching. TRE was calculated by the software and compared with the results obtained by manual calculation. Using the software made it possible to achieve a reliable QA system. (author)
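    Conceptually, once the marker or target positions have been located automatically (for example by template matching), the TRE is simply the distance between each navigated position and its physically measured counterpart. The schematic computation below uses hypothetical names and is not the authors' QA software.

```python
import numpy as np

def target_registration_error(navigated_points, reference_points):
    """Return per-target and root-mean-square TRE in the units of the inputs.

    `navigated_points` are target coordinates reported by the navigation
    system, `reference_points` the corresponding physically measured
    coordinates; both are (N, 3) arrays.
    """
    diffs = np.asarray(navigated_points) - np.asarray(reference_points)
    per_target = np.linalg.norm(diffs, axis=1)
    rms = float(np.sqrt(np.mean(per_target ** 2)))
    return per_target, rms
```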

  15. Template match using local feature with view invariance

    Science.gov (United States)

    Lu, Cen; Zhou, Gang

    2013-10-01

    Matching a template image to a target image is a fundamental task in the field of computer vision. Aiming at the deficiencies of traditional image matching methods and their inaccurate matching in scene images with rotation, illumination and view changes, a novel matching algorithm using local features is proposed in this paper. The local histograms of the edge pixels (LHoE) are extracted as an invariant feature to resist view and brightness changes. The merit of the LHoE is that edge points are little affected by view changes, and the LHoE can resist not only illumination variance but also noise contamination. Because matching is executed only on the edge points, the computational burden is greatly reduced. Additionally, our approach is conceptually simple, easy to implement and does not need a training phase. The view change can be considered as a combination of rotation, illumination change and shear transformation. Experimental results on simulated and real data demonstrate that the proposed approach is superior to NCC (normalized cross-correlation) and histogram-based methods under view changes.

  16. Image degradation characteristics and restoration based on regularization for diffractive imaging

    Science.gov (United States)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods have received comparatively little study. In this paper, the model of image quality degradation for the diffractive imaging system is first deduced mathematically based on diffraction theory, and then the degradation characteristics are analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, an approach for solving the equation with coexisting multiple norms and multiple regularization (prior) parameters is presented. Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block-wise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and preservation of image detail, and produces satisfactory visual quality. This can provide a scientific basis for applications and has potential application prospects for future space applications of diffractive membrane imaging technology.

  17. Identity-level representations affect unfamiliar face matching performance in sequential but not simultaneous tasks.

    Science.gov (United States)

    Menon, Nadia; White, David; Kemp, Richard I

    2015-01-01

    According to cognitive and neurological models of the face-processing system, faces are represented at two levels of abstraction. First, image-based pictorial representations code a particular instance of a face and include information that is unrelated to identity-such as lighting, pose, and expression. Second, at a more abstract level, identity-specific representations combine information from various encounters with a single face. Here we tested whether identity-level representations mediate unfamiliar face matching performance. Across three experiments we manipulated identity attributions to pairs of target images and measured the effect on subsequent identification decisions. Participants were instructed that target images were either two photos of the same person (1ID condition) or photos of two different people (2ID condition). This manipulation consistently affected performance in sequential matching: 1ID instructions improved accuracy on "match" trials and caused participants to adopt a more liberal response bias than the 2ID condition. However, this manipulation did not affect performance in simultaneous matching. We conclude that identity-level representations, generated in working memory, influence the amount of variation tolerated between images, when making identity judgements in sequential face matching.

  18. Face recognition using elastic grid matching through photoshop: A new approach

    Directory of Open Access Journals (Sweden)

    Manavpreet Kaur

    2015-12-01

    Computing grids promise to be a very efficacious, economical and scalable way of performing image identification. In this paper, we propose a grid-based face recognition approach employing a general template matching method to solve the time-consuming face recognition problem. A new approach has been employed in which the grid was prepared for a specific individual over his photograph using Adobe Photoshop CS5 software. The background was later removed, and the grid prepared by merging layers was used as a template for image matching or comparison. This approach is computationally efficient, has high recognition rates and is able to identify a person with minimal effort and in a short time, even from photographs taken at different magnifications and from different distances.

  19. Memory retrieval of smoking-related images induce greater insula activation as revealed by an fMRI-based delayed matching to sample task.

    Science.gov (United States)

    Janes, Amy C; Ross, Robert S; Farmer, Stacey; Frederick, Blaise B; Nickerson, Lisa D; Lukas, Scott E; Stern, Chantal E

    2015-03-01

    Nicotine dependence is a chronic and difficult to treat disorder. While environmental stimuli associated with smoking precipitate craving and relapse, it is unknown whether smoking cues are cognitively processed differently than neutral stimuli. To evaluate working memory differences between smoking-related and neutral stimuli, we conducted a delay-match-to-sample (DMS) task concurrently with functional magnetic resonance imaging (fMRI) in nicotine-dependent participants. The DMS task evaluates brain activation during the encoding, maintenance and retrieval phases of working memory. Smoking images induced significantly more subjective craving, and greater midline cortical activation during encoding in comparison to neutral stimuli that were similar in content yet lacked a smoking component. The insula, which is involved in maintaining nicotine dependence, was active during the successful retrieval of previously viewed smoking versus neutral images. In contrast, neutral images required more prefrontal cortex-mediated active maintenance during the maintenance period. These findings indicate that distinct brain regions are involved in the different phases of working memory for smoking-related versus neutral images. Importantly, the results implicate the insula in the retrieval of smoking-related stimuli, which is relevant given the insula's emerging role in addiction. © 2013 Society for the Study of Addiction.

  20. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Directory of Open Access Journals (Sweden)

    E. Dall'Asta

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of several stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) algorithm, which performs a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g., Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.

  1. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Science.gov (United States)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of several stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) algorithm, which performs a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g., Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
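    For reference, the OpenCV semi-global block matcher mentioned above can be run on a rectified stereo pair roughly as follows. The parameter values are illustrative defaults, not those used in the paper.

```python
import cv2

def sgm_disparity(left_gray, right_gray, max_disparity=128, block_size=5):
    """Compute a disparity map with OpenCV's semi-global (block) matcher.

    Inputs are 8-bit rectified grayscale images of the same size.
    """
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=max_disparity,      # must be a multiple of 16
        blockSize=block_size,
        P1=8 * block_size * block_size,    # smoothness penalties in the form
        P2=32 * block_size * block_size,   # suggested by the OpenCV docs
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # Disparities are returned in 16.4 fixed point; divide by 16 for pixels.
    return matcher.compute(left_gray, right_gray).astype(float) / 16.0
```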

  2. Efficient Topological Localization Using Global and Local Feature Matching

    Directory of Open Access Journals (Sweden)

    Junqiu Wang

    2013-03-01

    We present an efficient vision-based global topological localization approach in which different image features are used in a coarse-to-fine matching framework. The Orientation Adjacency Coherence Histogram (OACH), a novel image feature, is proposed to improve the coarse localization. The coarse localization results are taken as inputs for the fine localization, which is carried out by matching Harris-Laplace interest points characterized by the SIFT descriptor. The computation of OACHs and interest points is efficient because these features are computed in an integrated process. The matching of local features is improved by using an approximate nearest neighbor searching technique. We have implemented and tested the localization system in real environments. The experimental results demonstrate that our approach is efficient and reliable in both indoor and outdoor environments. This work has also been compared with previous works. The comparison results show that our approach achieves better performance, with a higher correct ratio and lower computational complexity.

  3. SEGMENTATION OF MICROSCOPIC IMAGES OF BACTERIA IN BULGARIAN YOGHURT BY TEMPLATE MATCHING

    Directory of Open Access Journals (Sweden)

    Zlatin Zlatev

    2016-12-01

    The diagnosis of deviations in yogurt quality is performed by approved methods set out in the Bulgarian national standard (BNS) and its adjacent regulations. The basic method for evaluating the microbiological quality of the product is microscopy. The method is subjective and requires significant sample processing time, and the precision of diagnosis is not high and depends on the qualifications of the expert. Pattern recognition systems interpret this specific expert activity in the most natural way. The aim of this report is to assess the applicability of an image processing and analysis method for determining the microbiological quality of yogurt. The selected method is template matching. A comparative analysis of the available template matching algorithms is made; it shows that the known algorithms have certain disadvantages related to their speed, the use of simplified procedures, and sensitivity to rotation of the object relative to the template. An algorithm is developed that complements the known ones and overcomes some of their disadvantages.
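    A basic template matching pass of the kind discussed here can be expressed with OpenCV in a few lines. The threshold and the handling of multiple detections are deliberately simplified and would need the rotation-robust extensions the report argues for; the function and variable names are hypothetical.

```python
import cv2
import numpy as np

def find_cells(micrograph, template, threshold=0.7):
    """Locate template-like bacteria in a grey-scale micrograph.

    Returns the top-left (x, y) corners of all windows whose normalised
    correlation with `template` exceeds `threshold`.
    """
    response = cv2.matchTemplate(micrograph, template, cv2.TM_CCOEFF_NORMED)
    rows, cols = np.where(response >= threshold)
    return list(zip(cols.tolist(), rows.tolist()))
```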

  4. A multiscale approach to mutual information matching

    NARCIS (Netherlands)

    Pluim, J.P.W.; Maintz, J.B.A.; Viergever, M.A.; Hanson, K.M.

    1998-01-01

    Methods based on mutual information have shown promising results for matching of multimodal brain images. This paper discusses a multiscale approach to mutual information matching, aiming for an acceleration of the matching process while considering the accuracy and robustness of the method. Scaling
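    The matching criterion itself, the mutual information between the grey-value distributions of the two images at the current alignment, can be written compactly. The sketch below estimates it from a joint histogram; the bin count is an arbitrary choice and the code is a generic illustration rather than the authors' multiscale implementation.

```python
import numpy as np

def mutual_information(image_a, image_b, bins=64):
    """Estimate the mutual information (in nats) between two aligned images."""
    joint_hist, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
    pxy = joint_hist / joint_hist.sum()          # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of image_a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of image_b

    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
```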

  5. The effect of 18F-FDG-PET image reconstruction algorithms on the expression of characteristic metabolic brain network in Parkinson's disease.

    Science.gov (United States)

    Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja

    2017-09-01

    To evaluate the reproducibility of the expression of the Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). The Slovenian cohort (20 PD patients, 20 NC) was scanned with a Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. The American cohort (20 PD patients, 7 NC) was scanned with a GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously validated PDRP patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of the PDRP to discriminate PD patients from NC, the differences and correlation between the corresponding subject scores, and the ROC analysis results across the different reconstruction algorithms. The expression of the PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC for all studied reconstruction algorithms. PDRP expression strongly correlated between all studied algorithms and the reference algorithm (r⩾0.993), and the corresponding subject scores varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of the PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. The PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  6. A MATCHING METHOD TO REDUCE THE INFLUENCE OF SAR GEOMETRIC DEFORMATION

    Directory of Open Access Journals (Sweden)

    C. Gao

    2018-04-01

    There are large geometric deformations in SAR images, including foreshortening, layover and shadow, which lead to low SAR image matching accuracy. In complex terrain areas in particular, control points are difficult to obtain and matching is difficult to achieve. Considering the impact of geometric distortions in SAR image pairs, a matching algorithm combining speeded-up robust features (SURF) and summed normalized cross-correlation (SNCC) is proposed, which can avoid the influence of SAR geometric deformation. Firstly, the SURF algorithm is utilized to predict the search area. Then the matching point pairs are selected based on the summed normalized cross-correlation. Finally, false match points are eliminated by a bidirectional consistency check. The SURF algorithm constrains the range of matching points, matching points extracted from deformed areas are eliminated, and matching points with a stable and even distribution are obtained. The experimental results demonstrate that the proposed algorithm has high precision, can effectively avoid the effect of geometric distortion on SAR image matching, and meets the accuracy requirements of block adjustment with sparse control points.

  7. A dual-adaptive support-based stereo matching algorithm

    Science.gov (United States)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., the Cross method), which, however, does not work well for different images. To address this issue, this paper proposes a novel dual adaptive support (DAS)-based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization and disparity refinement to develop a stereo matching system. The performance of the DAS method is also evaluated on the Middlebury benchmark and by comparison with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, with fewer parameters, and suitable for parallel computing.
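    The census-transform component of the matching cost mentioned above compares local bit patterns between the two views. A small NumPy sketch of a 3x3 census transform and a crude per-pixel Hamming-distance cost is given below; the window size, the single-disparity interface and the wrap-around shift are simplifications made for illustration.

```python
import numpy as np

def census_3x3(image):
    """3x3 census transform: one 8-bit code per interior pixel, one bit per neighbour."""
    img = image.astype(np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:h-1, 1:w-1]
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = img[1+dy:h-1+dy, 1+dx:w-1+dx]
            codes |= (neighbour < centre).astype(np.uint8) << bit
            bit += 1
    return codes

def census_cost(left, right, disparity):
    """Hamming distance between left census codes and right codes shifted by `disparity`."""
    cl, cr = census_3x3(left), census_3x3(right)
    shifted = np.roll(cr, disparity, axis=1)   # crude horizontal shift for illustration
    xor = np.bitwise_xor(cl, shifted)
    # Popcount of each 8-bit code via unpackbits.
    return np.unpackbits(xor[..., None], axis=-1).sum(axis=-1)
```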

  8. Comparative Performance Analysis of Different Fingerprint Biometric Scanners for Patient Matching.

    Science.gov (United States)

    Kasiiti, Noah; Wawira, Judy; Purkayastha, Saptarshi; Were, Martin C

    2017-01-01

    Unique patient identification within health services is an operational challenge in healthcare settings. The use of key identifiers, such as patient names, hospital identification numbers, national ID, and birth date, is often inadequate for ensuring unique patient identification. In addition, approximate string comparator algorithms, such as distance-based algorithms, have proven suboptimal for improving patient matching, especially in low-resource settings. Biometric approaches may improve unique patient identification. However, before implementing the technology in a given setting, such as health care, the right scanners should be rigorously tested to identify an optimal package for the implementation. This study aimed to investigate the effects of factors such as resolution, template size, and scan capture area on the matching performance of different fingerprint scanners for use within health care settings. The performance of eight different scanners was tested using the demo application distributed as part of the Neurotech Verifinger SDK 6.0.

  9. Matching and correlation computations in stereoscopic depth perception.

    Science.gov (United States)

    Doi, Takahiro; Tanabe, Seiji; Fujita, Ichiro

    2011-03-02

    A fundamental task of the visual system is to infer depth by using binocular disparity. To encode binocular disparity, the visual cortex performs two distinct computations: one detects matched patterns in paired images (matching computation); the other constructs the cross-correlation between the images (correlation computation). How the two computations are used in stereoscopic perception is unclear. We dissociated their contributions in near/far discrimination by varying the magnitude of the disparity across separate sessions. For small disparity (0.03°), subjects performed at chance level to a binocularly opposite-contrast (anti-correlated) random-dot stereogram (RDS) but improved their performance with the proportion of contrast-matched (correlated) dots. For large disparity (0.48°), the direction of perceived depth reversed with an anti-correlated RDS relative to that for a correlated one. Neither reversed nor normal depth was perceived when anti-correlation was applied to half of the dots. We explain the decision process as a weighted average of the two computations, with the relative weight of the correlation computation increasing with the disparity magnitude. We conclude that matching computation dominates fine depth perception, while both computations contribute to coarser depth perception. Thus, stereoscopic depth perception recruits different computations depending on the disparity magnitude.

  10. Histogram-based normalization technique on human brain magnetic resonance images from different acquisitions.

    Science.gov (United States)

    Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng

    2015-07-28

    Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at different times, which can result in large intensity variations. This intensity variation greatly undermines the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we propose a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. The histogram of the low-quality image was then normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image also lies between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that a brain template built with normalization preprocessing is of higher quality than a template built with no normalization processing. We have proposed
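    The two-step scheme (intensity scaling to the [LIR, HIR] range, then histogram matching of the low-quality image onto the reference) can be sketched as follows. The percentile-based choice of LIR/HIR and the quantile-mapping formulation are assumptions made for this illustration, not details taken from the paper.

```python
import numpy as np

def intensity_scale(image, lir, hir):
    """Step IS: linearly rescale image intensities to the range [lir, hir]."""
    lo, hi = float(image.min()), float(image.max())
    return (image - lo) / (hi - lo) * (hir - lir) + lir

def histogram_normalize(low_quality, reference, bins=1024):
    """Step HN: map the histogram of `low_quality` onto that of `reference`."""
    quantiles = np.linspace(0.0, 1.0, num=bins)
    src_q = np.quantile(low_quality, quantiles)
    ref_q = np.quantile(reference, quantiles)
    return np.interp(low_quality, src_q, ref_q)

# Hypothetical usage: scale the reference first, then normalize the other scan.
# ref_scaled = intensity_scale(reference_mri,
#                              lir=np.percentile(reference_mri, 1),
#                              hir=np.percentile(reference_mri, 99))
# normalized = histogram_normalize(low_quality_mri, ref_scaled)
```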

  11. High performance embedded system for real-time pattern matching

    Energy Technology Data Exchange (ETDEWEB)

    Sotiropoulou, C.-L., E-mail: c.sotiropoulou@cern.ch [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Luciano, P. [University of Cassino and Southern Lazio, Gaetano di Biasio 43, Cassino 03043 (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Gkaitatzis, S. [Aristotle University of Thessaloniki, 54124 Thessaloniki (Greece); Citraro, S. [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Giannetti, P. [INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Dell' Orso, M. [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy)

    2017-02-11

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton–proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device. - Highlights: • A high performance embedded system for real-time pattern matching is proposed. • It is based on a system developed for High Energy Physics experiment triggers. • It mimics the operation of the human brain (cognitive image processing). • The process can be executed on 2D and 3D, black and white or grayscale images. • The implementation uses FPGAs and custom designed associative memory (AM) chips.

  12. High performance embedded system for real-time pattern matching

    International Nuclear Information System (INIS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-01-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton–proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device. - Highlights: • A high performance embedded system for real-time pattern matching is proposed. • It is based on a system developed for High Energy Physics experiment triggers. • It mimics the operation of the human brain (cognitive image processing). • The process can be executed on 2D and 3D, black and white or grayscale images. • The implementation uses FPGAs and custom designed associative memory (AM) chips.

  13. Camera pose refinement by matching uncertain 3D building models with thermal infrared image sequences for high quality texture extraction

    Science.gov (United States)

    Iwaszczuk, Dorota; Stilla, Uwe

    2017-10-01

    Thermal infrared (TIR) images are often used to picture damaged and weak spots in the insulation of the building hull, which is widely used in thermal inspections of buildings. Such inspection in large-scale areas can be carried out by combining TIR imagery and 3D building models. This combination can be achieved via texture mapping. Automation of texture mapping avoids time consuming imaging and manually analyzing each face independently. It also provides a spatial reference for façade structures extracted in the thermal textures. In order to capture all faces, including the roofs, façades, and façades in the inner courtyard, an oblique looking camera mounted on a flying platform is used. Direct geo-referencing is usually not sufficient for precise texture extraction. In addition, 3D building models have also uncertain geometry. In this paper, therefore, methodology for co-registration of uncertain 3D building models with airborne oblique view images is presented. For this purpose, a line-based model-to-image matching is developed, in which the uncertainties of the 3D building model, as well as of the image features are considered. Matched linear features are used for the refinement of the exterior orientation parameters of the camera in order to ensure optimal co-registration. Moreover, this study investigates whether line tracking through the image sequence supports the matching. The accuracy of the extraction and the quality of the textures are assessed. For this purpose, appropriate quality measures are developed. The tests showed good results on co-registration, particularly in cases where tracking between the neighboring frames had been applied.

  14. Karlsruhe Research Center, Nuclear Safety Research Project (PSF). Annual report 1994; Forschungszentrum Karlsruhe, Projekt Nukleare Sicherheitsforschung. Jahrsbericht 1994

    Energy Technology Data Exchange (ETDEWEB)

    Hueper, R. [ed.

    1995-08-01

    The reactor safety R and D work of the Karlsruhe Research Centre (FZKA) has been part of the Nuclear Safety Research Project (PSF) since 1990. The present annual report 1994 summarizes the R and D results. The research tasks are coordinated in agreement with internal and external working groups. The contributions to this report correspond to the status of early 1995. An abstract in English precedes each of them whenever the respective article is written in German. (orig.) [German original] Since the beginning of 1990, the R and D work of the Karlsruhe Research Centre (FZKA) on reactor safety has been brought together in the Nuclear Safety Research Project (PSF). The present annual report 1994 contains contributions on current questions concerning the safety of light water reactors and innovative systems as well as the transmutation of minor actinides. The concrete research topics and projects are continuously coordinated with internal and external expert committees. The following institutes and departments of FZKA are involved in the work described: Institut für Materialforschung IMF I, II, III; Institut für Neutronenphysik und Reaktortechnik INR; Institut für Angewandte Thermo- und Fluiddynamik IATF; Institut für Reaktorsicherheit IRS; Hauptabteilung Ingenieurtechnik HIT; Hauptabteilung Versuchstechnik HVT; as well as external institutions commissioned by KfK. The individual contributions reflect the status of the work as of spring 1995 and are numbered according to the 1994 R and D programme. Contributions written in German are preceded by short abstracts in English. (orig.)

  15. The Effectiveness Evaluation among Different Player-Matching Mechanisms in a Multi-Player Quiz Game

    Science.gov (United States)

    Tsai, Fu-Hsing

    2016-01-01

    This study aims to investigate whether different player-matching mechanisms in educational multi-player online games (MOGs) can affect students' learning performance, enjoyment perception and gaming behaviors. Based on the multi-player quiz game, TRIS-Q, developed by Tsai, Tsai and Lin (2015) using a free player-matching (FPM) mechanism, the same…

  16. Fingerprint Recognition Using Minutia Score Matching

    OpenAIRE

    Ravi, J.; Raja, K. B.; Venugopal, K. R.

    2010-01-01

    The fingerprint is a popular biometric used to authenticate a person, being unique and permanent throughout a person's life. Minutia matching is widely used for fingerprint recognition; minutiae can be classified as ridge endings and ridge bifurcations. In this paper we propose Fingerprint Recognition using the Minutia Score Matching method (FRMSM). For fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserve image quality and extract the minutiae ...

  17. Automatic registration of remote sensing images based on SIFT and fuzzy block matching for change detection

    Directory of Open Access Journals (Sweden)

    Cai Guo-Rong

    2011-10-01

    Full Text Available This paper presents an automated image registration approach to detecting changes in multi-temporal remote sensing images. The proposed algorithm is based on the scale invariant feature transform (SIFT and has two phases. The first phase focuses on SIFT feature extraction and on estimation of image transformation. In the second phase, Structured Local Binary Haar Pattern (SLBHP combined with a fuzzy similarity measure is then used to build a new and effective block similarity measure for change detection. Experimental results obtained on multi-temporal data sets show that compared with three mainstream block matching algorithms, the proposed algorithm is more effective in dealing with scale, rotation and illumination changes.
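
    As an illustration of the first, SIFT-based phase described above, the following is a minimal Python sketch of feature extraction and robust transform estimation using OpenCV. The SLBHP/fuzzy block-similarity stage of the second phase is not reproduced, and the function name, ratio-test threshold and RANSAC tolerance are illustrative assumptions, not the authors' settings.

    import cv2
    import numpy as np

    def register_sift(reference, moving):
        """Estimate a homography mapping `moving` onto `reference` (grayscale uint8 images)."""
        sift = cv2.SIFT_create()
        kp_ref, des_ref = sift.detectAndCompute(reference, None)
        kp_mov, des_mov = sift.detectAndCompute(moving, None)

        # Match descriptors and keep only unambiguous matches (Lowe's ratio test).
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        raw = matcher.knnMatch(des_mov, des_ref, k=2)
        good = [m[0] for m in raw if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

        src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

        # Robust estimation of the image transformation; RANSAC rejects remaining outliers.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        warped = cv2.warpPerspective(moving, H, reference.shape[::-1])
        return H, warped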

  18. A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2016-12-01

    Full Text Available Fractal compression is a lossy compression technique in the field of gray/color image and video compression. It gives a high compression ratio and good image quality with fast decoding, but improving the encoding time remains a challenge. This review presents an analysis of the most significant existing approaches in the field of fractal-based gray/color image and video compression: different block-matching motion-estimation approaches for finding the motion vectors in a frame based on inter-frame and intra-frame coding (i.e. individual frame coding), and automata-theory-based coding approaches to represent an image or a sequence of images. Though different review papers exist on fractal coding, this paper differs in many respects. One can develop new shape patterns for motion estimation and combine existing block-matching motion estimation with automata coding to explore the fractal compression technique, with a specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.
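
    For readers new to the block-matching motion estimation surveyed above, the following is a minimal full-search SAD sketch in Python/NumPy; the block size and search range are illustrative assumptions, not values taken from the reviewed papers.

    import numpy as np

    def block_match(prev, curr, block=16, search=7):
        """Full-search SAD block matching: one motion vector (dy, dx) per block of
        `curr`, searched within +/- `search` pixels in the previous frame `prev`."""
        h, w = curr.shape
        vectors = np.zeros((h // block, w // block, 2), dtype=int)
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                target = curr[by:by + block, bx:bx + block].astype(np.int32)
                best, best_mv = None, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > h or x + block > w:
                            continue
                        cand = prev[y:y + block, x:x + block].astype(np.int32)
                        sad = np.abs(target - cand).sum()
                        if best is None or sad < best:
                            best, best_mv = sad, (dy, dx)
                vectors[by // block, bx // block] = best_mv
        return vectors

    Faster search patterns (three-step, diamond, or custom shape patterns as suggested above) reduce the number of candidate offsets visited inside the two inner loops.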

  19. As várias abordagens da família no cenário do programa/estratégia de saúde da família (PSF Las diversas abordajes de la familia en el escenario del programa/estrategia de salud de la familia (PSF Different approaches to the family in the context of the family health program/strategy

    Directory of Open Access Journals (Sweden)

    Edilza Maria Ribeiro

    2004-08-01

    such as: family/individual; family/home; family/individual/home; family/community; family/social risk; family/family. Because these approaches do not dialogue with each other, they end up forming an insufficiently identified picture, which makes care more difficult. An examination of the conditions indicated is suggested as a way of effectively giving the family an opportunity. This study presents the scenario that favored the inclusion of the family as a care focus in public policies. The strategies to interrupt the impoverishment and vulnerability of families in the XXth century occurred in different forms, according to the different "welfare states" in capitalist societies. However, in view of the welfare state crisis and the increasing costs of public and private services, at least a partial family solution is required in terms of reducing its dependency. The Family Health Program (PSF) put the family on the Brazilian social policy agenda in 1994, reflecting interests from the neoliberal model as well as from solidary social forces. This inclusion generated different approaches, such as: family/individual; family/home; family/individual/home; family/community; family/social risk; family/family. These approaches, due to the lack of a mutual dialogue, end up composing an insufficiently identified picture, thus making care more difficult. The conditions indicated here should be examined as a way of giving a true chance to the family.

  20. Measurements of incoherent light and background structure at exo-Earth detection levels in the High Contrast Imaging Testbed

    Science.gov (United States)

    Cady, Eric; Shaklan, Stuart

    2014-08-01

    A major component of the estimation and correction of starlight at very high contrasts is the creation of a dark hole: a region in the vicinity of the core of the stellar point spread function (PSF) where speckles in the PSF wings have been greatly attenuated, by up to a factor of 10^10 for the imaging of terrestrial exoplanets. At these very high contrasts, removing these speckles requires distinguishing light from the stellar PSF scattered by instrument imperfections, which may be partially corrected across a broad band using deformable mirrors in the system, from light from other sources, which generally may not. These other sources may be external or internal to the instrument (e.g. planets, exozodiacal light), but in either case their distinguishing characteristic is their inability to interfere coherently with the PSF. In the following we discuss the estimation, structure, and expected origin of this "incoherent" signal, primarily in the context of a series of experiments made with a linear band-limited mask in Jan-Mar 2013. We find that the "incoherent" signal at moderate contrasts is largely estimation error of the coherent signal, while at very high contrasts it represents a true floor which is stable over week-long timescales.

  1. Hybrid-Based Dense Stereo Matching

    Science.gov (United States)

    Chuang, T. Y.; Ting, H. W.; Jaw, J. J.

    2016-06-01

    Stereo matching that generates accurate and dense disparity maps is an indispensable technique for 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still lead to problematic issues and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation to the penalty parameters, a formal way to provide proper penalty estimates is proposed. To this end, the study employs shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are extracted by the edge-drawing algorithm to ensure that the local support regions do not cover significant disparity changes. Besides, an additional penalty parameter P_e is imposed on the energy function of the SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting the values derived from the SGM cost aggregation and from U-SURF matching, providing more reliable estimates in disparity-discontinuity areas. Evaluations on the Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potency of the hybrid dense stereo matching method.

  2. Improved Stereo Matching With Boosting Method

    Directory of Open Access Journals (Sweden)

    Shiny B

    2015-06-01

    Full Text Available Abstract: This paper presents an approach based on classification for improving the accuracy of stereo matching methods; we propose this method for occlusion handling. The work employs classification of pixels to find erroneous disparity values. Due to the wide application of disparity maps in 3D television, medical imaging, etc., the accuracy of the disparity map is highly significant. An initial disparity map is obtained using local or global stereo matching methods from the input stereo image pair. The various features for classification are computed from the input stereo image pair and the obtained disparity map. The computed feature vector is then used for classification of pixels, with GentleBoost as the classification method. The erroneous disparity values in the disparity map found by classification are corrected through a completion or filling stage. A performance evaluation of stereo matching using AdaBoostM1, RUSBoost, neural networks and GentleBoost is performed.

  3. Predicting Visible Image Degradation by Colour Image Difference Formulae

    Institute of Scientific and Technical Information of China (English)

    Eriko Bando; Jon Y. Hardeberg; David Connah; Ivar Farup

    2004-01-01

    A CRT-monitor-based psychophysical experiment was carried out to investigate the quality of three colour image difference metrics: the CIELAB ΔE*ab equation, iCAM and S-CIELAB. Six original images were reproduced through six gamut mapping algorithms for the observer experiment. The results indicate that the colour image difference calculated by each metric does not directly relate to perceived image difference.
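
    The per-pixel colour difference underlying the simplest of the three metrics, the CIELAB ΔE*ab equation, can be sketched as follows. This assumes scikit-image is available for the RGB-to-Lab conversion and makes no attempt to reproduce the spatial filtering of S-CIELAB or the appearance model of iCAM; the function name is illustrative.

    import numpy as np
    from skimage.color import rgb2lab  # assumed available for the RGB -> Lab step

    def mean_delta_e_ab(rgb_original, rgb_reproduction):
        """Mean CIE Delta-E*ab between an original and a gamut-mapped reproduction.
        Inputs are float RGB arrays in [0, 1] of identical shape (H, W, 3)."""
        lab1 = rgb2lab(rgb_original)
        lab2 = rgb2lab(rgb_reproduction)
        # Euclidean distance in CIELAB, computed per pixel, then averaged.
        diff = np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))
        return diff.mean()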

  4. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    Science.gov (United States)

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component, and Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The recovered CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with low noise, whereas the GoDec model was the best for heavily noisy CT images. PMID:24023764
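
    A minimal sketch of the Wiener filtering step with a Gaussian PSF is given below; the noise-to-signal ratio and PSF width are illustrative parameters, and the sparse/low-rank decomposition itself (RPCA, LADMAP or GoDec) is not reproduced here.

    import numpy as np

    def gaussian_psf(shape, sigma):
        """Centered, normalized 2-D Gaussian PSF with the given array shape."""
        y, x = np.indices(shape)
        cy, cx = shape[0] // 2, shape[1] // 2
        g = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
        return g / g.sum()

    def wiener_deconvolve(blurred, psf, nsr=0.01):
        """Frequency-domain Wiener filter; `nsr` is the assumed noise-to-signal ratio."""
        H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
        G = np.fft.fft2(blurred)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener transfer function
        return np.real(np.fft.ifft2(W * G))

    A larger nsr suppresses noise amplification at the cost of residual blur; a smaller value sharpens more aggressively.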

  5. Active filtering applied to radiographic images unfolded by the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Almeida, Gevaldo L. de; Silvani, Maria Ines; Lopes, Ricardo T.

    2011-01-01

    Degradation of images caused by systematic uncertainties can be reduced when one knows the features of the spoiling agent. Typical uncertainties of this kind arise in radiographic images due to the non-zero resolution of the detector used to acquire them, and from the non-punctual character of the source employed in the acquisition, or from beam divergence when extended sources are used. Both features blur the image, which, instead of a single point, exhibits a spot with a vanishing edge, hence reproducing the point spread function (PSF) of the system. Once this spoiling function is known, an inverse-problem approach, involving inversion of matrices, can be used to retrieve the original image. As these matrices are generally ill-conditioned, due to statistical fluctuations and truncation errors, iterative procedures should be applied, such as the Richardson-Lucy algorithm. This algorithm has been applied in this work to unfold radiographic images acquired by transmission of thermal neutrons and gamma rays. After this procedure, the resulting images undergo an active filtering which fairly improves their final quality at a negligible cost in terms of processing time. The filter ruling the process is based on the matrix of the correction factors for the last iteration of the deconvolution procedure. Synthetic images degraded with a known PSF, and subjected to the same treatment, have been used as benchmarks to evaluate the soundness of the developed active filtering procedure. The deconvolution and filtering algorithms have been incorporated into a Fortran program, written to deal with real images, generate the synthetic ones and display both. (author)
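
    A minimal sketch of the Richardson-Lucy iteration used for the unfolding is given below (FFT-based convolution, Python/NumPy/SciPy); the active filtering built from the last-iteration correction factors is not reproduced, and the iteration count is an illustrative assumption.

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, iterations=30):
        """Richardson-Lucy deconvolution of `image` with a known, normalized `psf`."""
        estimate = np.full(image.shape, image.mean(), dtype=float)
        psf_flipped = psf[::-1, ::-1]
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)           # correction factors
            estimate *= fftconvolve(ratio, psf_flipped, mode="same")
        return estimate

    In practice the iteration count trades resolution recovery against noise amplification, which is exactly what the active filtering described above is meant to mitigate.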

  6. Processing and evaluation of image matching tools in radiotherapy; Mise en oeuvre et evaluation d'outils de fusion d'image en radiotherapie

    Energy Technology Data Exchange (ETDEWEB)

    Bondiau, P.Y

    2004-11-15

    Cancer is a major public health problem. Treatment can be general (systemic) or loco-regional; in the latter case medical images are important, as they specify the localization of the tumour. The objective of radiotherapy is to deliver a curative dose of radiation to the target volume while sparing the organs at risk (OAR). Accurate localization of the target volumes as well as of the OAR makes it possible to define the ballistics of the irradiation beams. After describing the principles of radiotherapy and cancer treatment, we specify the clinical stakes of ocular, cerebral and prostatic tumours. We present a state of the art of image matching, the various techniques being reviewed with the aim of being didactic for the medical community. The results of matching are presented within the framework of cerebral and prostatic radiotherapy planning in order to specify the types of matching applicable in oncology and more particularly in radiotherapy. We then present the prospects for this type of application in various anatomical areas. Applications of automatic segmentation, and the evaluation of their results in the framework of brain tumours, are described after a review of the various segmentation methods according to anatomical localization. An original application is also presented: digital simulation of virtual tumour growth and its comparison with the real growth of a cerebral tumour presented by a patient. Lastly, we discuss possible future developments of image processing tools in radiotherapy as well as research tracks to be explored in oncology. (author)

  7. Speed-up Template Matching through Integral Image based Weak Classifiers

    NARCIS (Netherlands)

    Wu, t.; Toet, A.

    2014-01-01

    Template matching is a widely used pattern recognition method, especially in industrial inspection. However, the computational cost of traditional template matching increases dramatically with both template and scene image size. This makes traditional template matching less useful for many (e.g.
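
    The integral-image trick that makes such weak classifiers cheap, retrieving any rectangular sum with four look-ups regardless of template size, can be sketched as follows; this is a generic illustration, not the authors' implementation.

    import numpy as np

    def integral_image(img):
        """Summed-area table with a zero row/column prepended for easy indexing."""
        ii = np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)
        return np.pad(ii, ((1, 0), (1, 0)))

    def rect_sum(ii, top, left, height, width):
        """Sum of img[top:top+height, left:left+width] in O(1) using 4 look-ups."""
        b, r = top + height, left + width
        return ii[b, r] - ii[top, r] - ii[b, left] + ii[top, left]

    The cost of rect_sum is independent of the rectangle size, which is what decouples the classifier evaluation cost from the template size.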

  8. A medium resolution fingerprint matching system

    Directory of Open Access Journals (Sweden)

    Ayman Mohammad Bahaa-Eldin

    2013-09-01

    Full Text Available In this paper, a novel minutiae-based fingerprint matching system is proposed. The system is suitable for medium-resolution fingerprint images obtained by low-cost commercial sensors. The paper presents a new thinning algorithm, a new feature extraction and representation, and a novel feature-distance matching algorithm. The proposed system is rotation and translation invariant and is suitable for complete or partial fingerprint matching. The proposed algorithms are optimized to be executed in low-resource environments, both in CPU power and in memory space. The system was evaluated using a standard fingerprint dataset, and good performance and accuracy were achieved under certain image quality requirements. In addition, the proposed system compared favorably with state-of-the-art systems.

  9. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc [German Cancer Research Center, Heidelberg (Germany)]

    2017-10-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. the point spread function (PSF) or the modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR, which used a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function of the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters, which can be narrowed down to a relatively simple framework to compute high-quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6 while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity. A simple set of parameters for the algorithm is discussed that provides
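
    The linear blending step that gives AIR its well-defined image quality metrics can be sketched as below. The iterative estimation of the weighting images α (the OSSART-like data-fidelity update alternated with gradient-descent regularization) is not reproduced, and the clipping and per-voxel normalization shown are assumptions made for this sketch.

    import numpy as np

    def air_blend(basis_images, alphas):
        """Voxel-wise blend of basis images (e.g. a sharp and a smooth reconstruction)
        with weighting images `alphas`; weights are normalized to sum to one per voxel."""
        basis = np.stack(basis_images)              # shape (K, ...), K basis images
        a = np.clip(np.stack(alphas), 0, None)
        a = a / np.maximum(a.sum(axis=0), 1e-12)    # enforce non-negative, sum-to-one weights
        return np.sum(a * basis, axis=0)

    Because the output is linear in the basis images for fixed, smooth α, local metrics such as the PSF or MTF of the blend follow directly from those of the basis reconstructions.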

  10. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    International Nuclear Information System (INIS)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc

    2017-01-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. the point spread function (PSF) or the modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR, which used a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function of the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters, which can be narrowed down to a relatively simple framework to compute high-quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6 while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity. A simple set of parameters for the algorithm is discussed that provides

  11. Stereo matching based on SIFT descriptor with illumination and camera invariance

    Science.gov (United States)

    Niu, Haitao; Zhao, Xunjie; Li, Chengjin; Peng, Xiang

    2010-10-01

    Stereo matching is the process of finding corresponding points in two or more images. The description of interest points is a critical aspect of point correspondence, which is vital in stereo matching. The SIFT descriptor has been proven better in distinctiveness and robustness than other local descriptors. However, the SIFT descriptor does not involve colour information of the feature point, which provides a powerfully distinguishing feature in matching tasks. Furthermore, in a real scene, image colours are affected by various geometric and radiometric factors, such as gamma correction and exposure. These situations are very common in stereo images. For this reason, the colour recorded by a camera is not a reliable cue, and the colour-consistency assumption is no longer valid between stereo images of real scenes. Hence the performance of other SIFT-based stereo matching algorithms can be severely degraded under radiometric variations. In this paper, we present a new improved SIFT stereo matching algorithm that is invariant to various radiometric variations between left and right images. Unlike other improved SIFT stereo matching algorithms, we explicitly employ a colour formation model with parameters for lighting geometry, illuminant colour and camera gamma in the SIFT descriptor. Firstly, we transform the input colour images to log-chromaticity colour space, in which a linear relationship can be established. Then, we use a log-polar histogram to build three colour invariance components for the SIFT descriptor, so that our improved SIFT descriptor is invariant to lighting geometry, illuminant colour and camera gamma changes between left and right images. We can then match feature points between the two images and use the SIFT descriptor Euclidean distance as a geometric measure in our data sets to make matching more accurate and robust. Experimental results show that our method is superior to other SIFT-based algorithms, including conventional stereo matching algorithms, under various
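
    A minimal sketch of the log-chromaticity transform mentioned above is given below. The choice of the green channel as normalizer and the clipping constant are assumptions made for illustration, not necessarily the authors' exact formulation; the log-polar histogram and descriptor construction are not reproduced.

    import numpy as np

    def log_chromaticity(rgb):
        """Map an RGB image (float, strictly positive) to log-chromaticity space, in
        which illuminant-colour and camera-gamma changes become approximately
        linear (additive/multiplicative) offsets rather than arbitrary distortions."""
        rgb = np.clip(rgb.astype(float), 1e-6, None)
        log_rg = np.log(rgb[..., 0] / rgb[..., 1])   # log(R/G)
        log_bg = np.log(rgb[..., 2] / rgb[..., 1])   # log(B/G)
        return np.stack([log_rg, log_bg], axis=-1)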

  12. Matching Two-dimensional Gel Electrophoresis' Spots

    DEFF Research Database (Denmark)

    Dos Anjos, António; AL-Tam, Faroq; Shahbazkia, Hamid Reza

    2012-01-01

    This paper describes an approach for matching Two-Dimensional Electrophoresis (2-DE) gels' spots, involving the use of image registration. The number of false positive matches produced by the proposed approach is small, when compared to academic and commercial state-of-the-art approaches. This ar...

  13. Annealing optimization in the process of making membrane PSF19%DMFEVA2 for wastewater treatment of palm oil mill effluent

    Science.gov (United States)

    Said, A. A.; Mustafa

    2018-02-01

    Only a small proportion of palm oil mill effluent (POME) treatment converts the wastewater to methane gas, which can then be converted into electrical energy. However, for palm oil mills whose wastewater has a chemical oxygen demand below 60,000 mg/L this is not feasible, so the purpose of wastewater treatment is only to reach a standard at which the effluent is safe to discharge into the environment. The wastewater treatment systems generally applied by palm oil mills, especially in North Sumatera, are aerobic and anaerobic; these methods take a relatively long time because they are highly dependent on microbial activity. An alternative wastewater treatment method offered here is membrane technology, because the process is much more effective, the treatment time is relatively short, and it is expected to give more optimal results. The optimum membrane obtained is the PSF19%DMFEVA2T75 membrane, and the permeate produced in the treatment of POME wastewater with the PSF19%DMFEVA2T75 membrane had pH = 7.0, TSS = 148 mg/L, BOD = 149 mg/L, and COD = 252 mg/L. The results obtained are in accordance with the quality standard for POME.

  14. Clinical evaluation of whole-body oncologic PET with time-of-flight and point-spread function for the hybrid PET/MR system.

    Science.gov (United States)

    Shang, Kun; Cui, Bixiao; Ma, Jie; Shuai, Dongmei; Liang, Zhigang; Jansen, Floris; Zhou, Yun; Lu, Jie; Zhao, Guoguang

    2017-08-01

    Hybrid positron emission tomography/magnetic resonance (PET/MR) imaging is a new multimodality imaging technology that can provide structural and functional information simultaneously. The aim of this study was to investigate the effects of time-of-flight (TOF) and point-spread function (PSF) modelling on small lesions observed in PET/MR images from clinical patient image sets. This study evaluated 54 small lesions in 14 patients who had undergone 18F-fluorodeoxyglucose (FDG) PET/MR. Lesions up to 30 mm in diameter were included. The PET data were reconstructed with a baseline ordered-subsets expectation-maximization (OSEM) algorithm, OSEM+PSF, OSEM+TOF and OSEM+TOF+PSF. PET image quality and small lesions were visually evaluated and scored on a 3-point scale. A quantitative analysis was then performed using the mean and maximum standardized uptake values (SUVmean and SUVmax) of the small lesions. The lesions were divided into two groups according to long-axis diameter and to location, respectively, and evaluated with each reconstruction algorithm. We also evaluated the background signal by analyzing the SUVliver. OSEM+TOF+PSF provided the highest values, and OSEM+TOF or OSEM+PSF showed higher values than OSEM, in both the visual assessment and the quantitative analysis. The combination of TOF and PSF increased the SUVmean by 26.6% and the SUVmax by 30.0%. The SUVliver was not influenced by PSF or TOF. For the OSEM+TOF+PSF model, the change in SUVmean and SUVmax for lesions ... PET/MR images, potentially improving small lesion detectability. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    Science.gov (United States)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-06-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.

  16. High Performance Embedded System for Real-Time Pattern Matching

    CERN Document Server

    Sotiropoulou, Calliope Louisa; The ATLAS collaboration; Gkaitatzis, Stamatios; Citraro, Saverio; Giannetti, Paola; Dell'Orso, Mauro

    2016-01-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics (HEP) and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturised version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory (AM) chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering...

  17. A new template matching method based on contour information

    Science.gov (United States)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time-consuming that they cannot be used in many real-time applications. The closed-contour matching method is a popular kind of template matching. This paper presents a new closed-contour template matching method which is suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve the matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the process of model construction, triples and a distance image are obtained from the template image. A certain number of triples, each composed of three points, are created from the contour information extracted from the template image; the rule for selecting the three points is that they divide the template contour into three equal parts. The distance image is obtained by a distance transform: each point in the distance image represents the distance from that point to the nearest point on the template contour. During matching, triples of the searching image are created with the same rule as the triples of the model. Through triangle similarity, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. We can then obtain the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. In order to speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple operations of addition and multiplication. In the fine searching process
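
    The distance-image verification step can be sketched as follows, using SciPy's Euclidean distance transform; the contour representation, the (x, y) point ordering and the function names are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def build_distance_image(template_contour_mask):
        """Distance image: each pixel holds the distance to the nearest template
        contour pixel (contour pixels are True in the boolean input mask)."""
        return distance_transform_edt(~template_contour_mask)

    def mean_contour_distance(distance_image, contour_points):
        """Score an RST hypothesis by projecting the (already transformed) searching
        contour, given as (x, y) pairs, into the distance image and averaging the
        looked-up distances; small values indicate a good match."""
        rows = np.clip(np.round(contour_points[:, 1]).astype(int), 0, distance_image.shape[0] - 1)
        cols = np.clip(np.round(contour_points[:, 0]).astype(int), 0, distance_image.shape[1] - 1)
        return distance_image[rows, cols].mean()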

  18. SAD-Based Stereo Matching Using FPGAs

    Science.gov (United States)

    Ambrosch, Kristian; Humenberger, Martin; Kubinger, Wilfried; Steininger, Andreas

    In this chapter we present a field-programmable gate array (FPGA) based stereo matching architecture. This architecture uses the sum of absolute differences (SAD) algorithm and is targeted at automotive and robotics applications. The disparity maps are calculated using 450×375 input images and a disparity range of up to 150 pixels. We discuss two different implementation approaches for the SAD and analyze their resource usage. Furthermore, block sizes ranging from 3×3 up to 11×11 and their impact on the consumed logic elements as well as on the disparity map quality are discussed. The stereo matching architecture enables a frame rate of up to 600 fps by calculating the data in a highly parallel and pipelined fashion. This way, a software solution optimized by using Intel's Open Source Computer Vision Library running on an Intel Pentium 4 with 3 GHz clock frequency is outperformed by a factor of 400.
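
    For comparison with the FPGA architecture, a naive software counterpart of SAD winner-takes-all matching on a rectified pair might look as follows. It is purely illustrative and far slower than both the FPGA design and the optimized OpenCV implementation mentioned above; the disparity range matches the chapter, while the block size is simply one value within the range it discusses.

    import numpy as np

    def sad_disparity(left, right, max_disp=150, block=9):
        """Dense disparity by SAD block matching on a rectified stereo pair
        (winner-takes-all). `left` and `right` are grayscale arrays of equal shape."""
        half = block // 2
        h, w = left.shape
        left = left.astype(np.int32)
        right = right.astype(np.int32)
        disparity = np.zeros((h, w), dtype=np.int32)
        for y in range(half, h - half):
            for x in range(half, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1]
                best, best_d = None, 0
                for d in range(min(max_disp, x - half) + 1):
                    cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                    sad = np.abs(patch - cand).sum()
                    if best is None or sad < best:
                        best, best_d = sad, d
                disparity[y, x] = best_d
        return disparity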

  19. History Matching: Towards Geologically Reasonable Models

    DEFF Research Database (Denmark)

    Melnikova, Yulia; Cordua, Knud Skou; Mosegaard, Klaus

    This work focuses on the development of a new method for the history matching problem that, through a deterministic search, finds a geologically feasible solution. Complex geology is taken into account by evaluating multiple-point statistics from earth model prototypes - training images. Further, a function... that measures similarity between the statistics of a training image and the statistics of any smooth model is introduced and its analytical gradient is computed. This allows us to apply any gradient-based method to the history matching problem and guide a solution until it satisfies both production data and complexity...

  20. The match-to-match variation of match-running in elite female soccer.

    Science.gov (United States)

    Trewin, Joshua; Meylan, César; Varley, Matthew C; Cronin, John

    2018-02-01

    The purpose of this study was to examine the match-to-match variation of match-running in elite female soccer players utilising GPS, using full-match and rolling period analyses. Longitudinal study. Elite female soccer players (n=45) from the same national team were observed during 55 international fixtures across 5 years (2012-2016). Data were analysed using a custom-built MS Excel spreadsheet as full matches and using a rolling 5-min analysis period, for all players who played 90-min matches (files=172). Variation was examined using the coefficient of variation and 90% confidence limits, calculated following log transformation. Total distance per minute exhibited the smallest variation when both the full-match and peak 5-min running periods were examined (CV=6.8-7.2%). Sprint efforts were the most variable during a full match (CV=53%), whilst high-speed running per minute exhibited the greatest variation in the post-peak 5-min period (CV=143%). Peak running periods were observed to be slightly more variable than full-match analyses, with the post-peak period very highly variable. Variability of accelerations (CV=17%) and Player Load (CV=14%) was lower than that of high-speed actions. Positional differences were also present, with centre backs exhibiting the greatest variation in high-speed movements (CV=41-65%). Practitioners and researchers should account for within-player variability when examining match performances. Identification of peak running periods should be used to assist worst-case scenarios. Micro-sensor technology should be further examined as to its viable use within match analyses. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  1. Pengenalan Angka Pada Sistem Operasi Android Dengan Menggunakan Metode Template Matching (Number Recognition on the Android Operating System Using the Template Matching Method)

    Directory of Open Access Journals (Sweden)

    Abdi Pandu Kusuma

    2016-07-01

    input image with the image of the template. Template matching results are calculated from the number of points in the input image corresponding to the image of the template. Templates are provided in the database to give an example of how a digit pattern is written. Tests were performed on the application 40 times with different patterns. From the test results, the success rate of the application reached 75.75%. Keywords: early age, playing, study, template matching.

  2. High Performance Embedded System for Real-Time Pattern Matching

    CERN Document Server

    Sotiropoulou, Calliope Louisa; The ATLAS collaboration; Gkaitatzis, Stamatios; Citraro, Saverio; Giannetti, Paola; Dell'Orso, Mauro

    2016-01-01

    We present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics (HEP) and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The design uses the flexibility of Field Programmable Gate Arrays (FPGAs) and the powerful Associative Memory Chip (ASIC) to achieve real-time performance. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain.

  3. Lenses matching of compound eye for target positioning

    Science.gov (United States)

    Guo, Fang; Zheng, Yan Pei; Wang, Keyi

    2012-10-01

    The compound eye, as a new imaging method with multiple lenses and a large field of view, can complete target positioning and detection quickly, especially at close range. It can therefore be applied in military, medical and aviation fields, with vast market potential and development prospects. The compound eye imaging system designed here uses a three-layer construction: a multiple-lens array arranged on a curved surface, a refractive lens, and a CMOS imaging sensor. In order to simplify the structure and increase the imaging area of every sub-eye, the imaging area of each eye covers the whole CMOS sensor. Therefore, for the several image points of one target, the lens corresponding to each image point is unknown and must be identified. An algorithm is put forward for this purpose. Firstly, according to the regular geometric relationship of several adjacent lenses, a data organization of seven lenses around a main lens is built. Subsequently, using this data organization, when one target is captured by several unknown lenses, every combination of the receiving lenses is searched. For every combination, two lenses are selected and used to calculate a three-dimensional (3D) coordinate of the target. If the 3D coordinates agree for some combination of lens assignments, then in theory those lenses and image points are matched. Thus, according to the error of the 3D coordinates calculated for the different seven-lens combinations, the unknown lenses can be distinguished. The experimental results show that the presented algorithm is feasible and can complete the matching task for image points and their corresponding lenses.

  4. Dense Matching Comparison Between Census and a Convolutional Neural Network Algorithm for Plant Reconstruction

    Science.gov (United States)

    Xia, Y.; Tian, J.; d'Angelo, P.; Reinartz, P.

    2018-05-01

    3D reconstruction of plants is hard to implement, as the complex leaf distribution greatly increases the difficulty of dense matching. Semi-Global Matching (SGM) has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper two matching cost computation algorithms, the Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High-resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated with the two selected matching cost methods are comparable, with acceptable quality, which shows the good performance of Census and the potential of neural networks to improve dense matching.
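
    A minimal sketch of the Census matching cost is given below; the window size is an illustrative assumption, and the CNN-based cost and the SGM aggregation itself are not reproduced here.

    import numpy as np

    def census_transform(img, window=5):
        """Census transform: each pixel becomes a bit string encoding whether each
        neighbour in a window x window patch is darker than the centre pixel."""
        half = window // 2
        h, w = img.shape
        census = np.zeros((h, w), dtype=np.uint32)   # 5x5 window -> 24 bits, fits in uint32
        centre = img[half:h - half, half:w - half]
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = img[half + dy:h - half + dy, half + dx:w - half + dx]
                census[half:h - half, half:w - half] = (
                    census[half:h - half, half:w - half] << 1
                ) | (shifted < centre)
        return census

    def hamming_cost(c1, c2):
        """Matching cost between census codes: number of differing bits (Hamming distance)."""
        x = np.bitwise_xor(c1, c2)
        count = np.zeros(x.shape, dtype=np.uint32)
        while np.any(x):
            count += (x & 1).astype(np.uint32)
            x = x >> 1
        return count

    Because the Census code depends only on local intensity orderings, the resulting cost is robust to the illumination changes that make raw intensity differences unreliable.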

  5. A convergent blind deconvolution method for post-adaptive-optics astronomical imaging

    International Nuclear Information System (INIS)

    Prato, M; Camera, A La; Bertero, M; Bonettini, S

    2013-01-01

    In this paper, we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback–Leibler (KL) divergence, depending on both the unknown object and unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is non-convex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has been recently proved in a general setting. The method is iterative and each iteration, also called outer iteration, consists of alternating an update of the object and the PSF by means of a fixed number of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. Therefore, the method is similar to other proposed methods based on the Richardson–Lucy (RL) algorithm, with SGP replacing RL. The use of SGP has two advantages: first, it allows one to prove global convergence of the blind method; secondly, it allows the introduction of different constraints on the object and the PSF. The specific constraint on the PSF, besides non-negativity and normalization, is an upper bound derived from the so-called Strehl ratio (SR), which is the ratio between the peak value of an aberrated versus a perfect wavefront. Therefore, a typical application, but not a unique one, is to the imaging of modern telescopes equipped with adaptive optics systems for the partial correction of the aberrations due to atmospheric turbulence. In the paper, we describe in detail the algorithm and we recall the results leading to its convergence. Moreover, we illustrate its effectiveness by means of numerical experiments whose results indicate that the method, pushed to convergence, is very promising in the reconstruction of non-dense stellar clusters. The case of more complex astronomical targets

  6. Impact of point spread function correction in standardized uptake value quantitation for positron emission tomography images. A study based on phantom experiments and clinical images

    International Nuclear Information System (INIS)

    Nakamura, Akihiro; Tanizaki, Yasuo; Takeuchi, Miho

    2014-01-01

    While point spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; and central single-peak overestimation up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV. (author)

  7. [Impact of point spread function correction in standardized uptake value quantitation for positron emission tomography images: a study based on phantom experiments and clinical images].

    Science.gov (United States)

    Nakamura, Akihiro; Tanizaki, Yasuo; Takeuchi, Miho; Ito, Shigeru; Sano, Yoshitaka; Sato, Mayumi; Kanno, Toshihiko; Okada, Hiroyuki; Torizuka, Tatsuo; Nishizawa, Sadahiko

    2014-06-01

    While point spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; and central single-peak overestimation up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV.

  8. A comparison of substantia nigra T1 hyperintensity in Parkinson's disease dementia, Alzheimer's disease and age-matched controls: Volumetric analysis of neuromelanin imaging

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Won Jin; Park, Ju Yeon; Yun, Won Sung; Jeon, Ji Yeong; Moon, Yeon Sil; Kim, Hee Jin; Han, Seol Heui [Konkuk University School of Medicine, Seoul (Korea, Republic of); Kwak, Ki Chang; Lee, Jong Min [Dept. of Biomedical Engineering, Hanyang University, Seoul (Korea, Republic of)

    2016-09-15

    Neuromelanin loss in the substantia nigra (SN) can be visualized as a T1 signal reduction on high-resolution T1-weighted imaging. We investigated whether volumetric analysis of the T1 hyperintensity of the SN could be used to differentiate between Parkinson's disease dementia (PDD), Alzheimer's disease (AD) and age-matched controls. This retrospective study enrolled 10 patients with PDD, 18 patients with AD, and 13 age-matched healthy elderly controls. MR imaging was performed at 3 T. To measure the T1-hyperintense area of the SN, we obtained an axial thin-section high-resolution T1-weighted fast spin echo (FSE) sequence. The volumes of interest for the T1-hyperintense SN were drawn on heavily T1-weighted FSE images through the midbrain level, using the MIPAV software. The measurement differences were tested using the Kruskal-Wallis test followed by post hoc comparisons. A comparison of the three groups showed significant differences in the volume of T1 hyperintensity (p < 0.001, Bonferroni corrected). The volume of T1 hyperintensity was significantly lower in PDD than in AD and normal controls (p < 0.005, Bonferroni corrected). However, the volume of T1 hyperintensity was not different between AD and normal controls (p = 0.136, Bonferroni corrected). Volumetric measurement of the T1 hyperintensity of the SN can be an imaging marker for evaluating neuromelanin loss in neurodegenerative diseases and can help differentiate PDD from AD.

  9. Body composition differences between adults with multiple sclerosis and BMI-matched controls without MS.

    Science.gov (United States)

    Wingo, Brooks C; Young, Hui-Ju; Motl, Robert W

    2018-04-01

    Persons with multiple sclerosis (MS) have many health conditions related to overweight and obesity, but little is known about how body composition among those with MS compares to those without MS at the same weight. To compare differences in whole body and regional body composition between persons with and without MS matched for sex and body mass index (BMI). Persons with MS (n = 51) and non-MS controls (n = 51) matched for sex and BMI. Total mass, lean mass, fat mass, and percent body fat (%BF) of total body and arm, leg, and trunk segments were assessed using dual-energy X-ray absorptiometry (DXA). Men with MS had significantly less whole body lean mass (mean difference: 9933.5 ± 3123.1 g, p MS counterparts. Further, men with MS had significantly lower lean mass in the arm (p = 0.02) and leg (p MS. Men with MS had significantly higher %BF in all three regions (p MS. There were no differences between women with and without MS. We observed significant differences in whole body and regional body composition between BMI-matched men with and without MS. Additional research is needed to further explore differences in body composition, adipose distribution, and the impact of these differences on the health and function of men with MS. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. ABISM: an interactive image quality assessment tool for adaptive optics instruments

    Science.gov (United States)

    Girard, Julien H.; Tourneboeuf, Martin

    2016-07-01

    ABISM (Automatic Background Interactive Strehl Meter) is an interactive tool to evaluate the image quality of astronomical images. It works on seeing-limited point spread functions (PSF) but was developed in particular for the diffraction-limited PSFs produced by adaptive optics (AO) systems. In the VLT service mode (SM) operations framework, ABISM is designed to help support astronomers or telescope and instrument operators (TIOs) quickly measure the Strehl ratio (SR) during or right after an observing block (OB), to evaluate whether it meets the requirements/predictions or whether it has to be repeated and will remain in the SM queue. It is a Python-based tool with a graphical user interface (GUI) that can be used with little AO knowledge. The night astronomer (NA) or Telescope and Instrument Operator (TIO) can launch ABISM in one click, and the program is able to read keywords from the FITS header to avoid mistakes. A significant effort was also put into making ABISM robust (and forgiving), with a high rate of repeatability. As a matter of fact, ABISM is able to automatically correct for bad pixels, eliminate stellar neighbours, estimate/fit the background properly, etc.
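
    The core Strehl ratio computation can be sketched as follows: the peak of the flux-normalized measured PSF divided by the peak of a diffraction-limited PSF synthesized from an ideal circular pupil. The pupil sampling parameters below are placeholders and must be matched to the instrument's wavelength and pixel scale for a quantitative result; ABISM's actual background fitting, bad-pixel handling and PSF fitting are not reproduced.

    import numpy as np

    def strehl_ratio(measured_psf, pupil_diameter_px=64, grid=256):
        """Rough Strehl estimate assuming `measured_psf` and the synthetic ideal PSF
        share the same pixel scale (an assumption that must hold in practice)."""
        # Ideal PSF: |FFT of a circular aperture|^2, normalized to unit total flux.
        y, x = np.indices((grid, grid)) - grid / 2.0
        pupil = (np.hypot(x, y) <= pupil_diameter_px / 2.0).astype(float)
        ideal = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
        ideal /= ideal.sum()
        # Measured PSF: crude background removal, then normalization to unit flux.
        measured = measured_psf.astype(float)
        measured = (measured - np.median(measured)).clip(min=0)
        measured /= measured.sum()
        return measured.max() / ideal.max()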

  11. Performance analysis of different PSF shapes for the quad-HIDAC PET submillimetre resolution recovery

    Energy Technology Data Exchange (ETDEWEB)

    Ortega Maynez, Leticia, E-mail: lortega@uacj.mx [Departamento de Ingenieria Electrica y Computacion, Universidad Autonoma de Ciudad Juarez, Avenida del Charro 450 Norte, C.P. 32310 Ciudad Juarez, Chihuahua (Mexico)]; Ochoa Dominguez, Humberto de Jesus; Vergara Villegas, Osslan Osiris; Gordillo, Nelly; Cruz Sanchez, Vianey Guadalupe; Gutierrez Casas, Efren David [Departamento de Ingenieria Electrica y Computacion, Universidad Autonoma de Ciudad Juarez, Avenida del Charro 450 Norte, C.P. 32310 Ciudad Juarez, Chihuahua (Mexico)]

    2011-10-01

    In pre-clinical applications it is quite important to preserve the image resolution, because it is necessary to show the details of the structures of small animals. Therefore, small-animal PET scanners require high spatial resolution and good sensitivity. For the quad-HIDAC PET scanner, which has virtually continuous spatial sampling, improvements in resolution, noise and contrast are obtained as a result of avoiding the artifacts introduced by binning the data into sampled projections during the reconstruction process. In order to reconstruct high-resolution images in 3D PET, background correction and resolution recovery are included within the maximum-likelihood list-mode expectation maximization reconstruction model. This paper introduces a performance analysis of the Gaussian, Laplacian and triangular kernels. The full width at half maximum (FWHM) used for each kernel was varied from 0.8 to 1.6 mm. For each quality compartment within the phantom, transaxial middle slices from the 3D reconstructed images are shown. The results show that, according to the quantitative measures, the triangular kernel has the best performance.
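
    The three kernel shapes compared above can be sketched, for a given FWHM, as follows; the 1-D profiles, the sampling step and the kernel radius are illustrative assumptions (in the study the kernels act as resolution models within the list-mode ML-EM reconstruction, which is not reproduced here).

    import numpy as np

    def psf_kernels(fwhm_mm=1.2, pixel_mm=0.1, radius_px=20):
        """1-D Gaussian, Laplacian (double-exponential) and triangular kernels sharing
        the same FWHM, each normalized to unit area; sampling step is `pixel_mm`."""
        r = np.arange(-radius_px, radius_px + 1) * pixel_mm
        sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))     # Gaussian: FWHM = 2*sqrt(2 ln 2)*sigma
        b = fwhm_mm / (2.0 * np.log(2.0))                        # Laplacian: FWHM = 2*b*ln 2
        gauss = np.exp(-r ** 2 / (2.0 * sigma ** 2))
        laplace = np.exp(-np.abs(r) / b)
        tri = np.clip(1.0 - np.abs(r) / fwhm_mm, 0.0, None)      # triangle: half-max at FWHM/2
        return {name: k / k.sum() for name, k in
                (("gaussian", gauss), ("laplacian", laplace), ("triangular", tri))}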

  12. WFC3/UVIS image skew

    Science.gov (United States)

    Petro, Larry

    2009-07-01

    This proposal will provide an independent check of the skew in the ACS astrometric catalog of Omega Cen stars, using exposures taken over a 45-deg range of telescope roll. The roll sequence will also provide a test for orbital variation of the skew and for field-angle-dependent PSF variations. The astrometric catalog of Omega Cen, corrected for skew, will be used to derive the geometric distortion for all UVIS filters, which has preliminarily been determined from F606W images and an astrometric catalog of 47 Tuc.

  13. Matching by Monotonic Tone Mapping.

    Science.gov (United States)

    Kovacs, Gyorgy

    2018-06-01

    In this paper, a novel dissimilarity measure called Matching by Monotonic Tone Mapping (MMTM) is proposed. The MMTM technique allows matching under non-linear monotonic tone mappings and can be computed efficiently when the tone mappings are approximated by piecewise constant or piecewise linear functions. The proposed method is evaluated in various template matching scenarios involving simulated and real images, and compared to other measures developed to be invariant to monotonic intensity transformations. The results show that the MMTM technique is a highly competitive alternative of conventional measures in problems where possible tone mappings are close to monotonic.

  14. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto

    2009-01-01

    The detection and recognition of objects in images is a key research topic in the computer vision community. Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching; presents basic and advanced template matching techniques, targeting grey-level images, shapes and point sets; discusses recent pattern classification paradigms from a template matching perspective; illustrates the development of a real fac...

  15. Long T2 suppression in native lung 3-D imaging using k-space reordered inversion recovery dual-echo ultrashort echo time MRI.

    Science.gov (United States)

    Gai, Neville D; Malayeri, Ashkan A; Bluemke, David A

    2017-08-01

    Long T2 species can interfere with visualization of short T2 tissues. For example, visualization of lung parenchyma can be hindered by breathing artifacts arising primarily from fat in the chest wall. The purpose of this work was to design and evaluate a scheme for long T2 suppression in lung parenchyma imaging using 3-D inversion recovery double-echo ultrashort echo time (IR-DUTE) imaging with a k-space reordering scheme for artifact suppression. A hyperbolic secant (HS) inversion pulse was evaluated for different tissues (T1/T2). Bloch simulations were performed for the inversion pulse followed by segmented UTE acquisition. The point spread function (PSF) was simulated for a standard interleaved acquisition order and for a modulo-2 forward-reverse acquisition order. Phantom and in vivo images (eight volunteers) were acquired with both acquisition orders. Contrast-to-noise ratio (CNR) was evaluated in the in vivo images before and after introduction of the long T2 suppression scheme. The PSF as well as the phantom and in vivo images demonstrated a reduction in artifacts arising from k-space modulation when the reordering scheme was used. The CNRs between lung and fat and between lung and muscle increased from -114 and -148.5 to +12.5 and +2.8, respectively, after use of the IR-DUTE sequence. Paired t tests between the CNRs obtained with UTE and IR-DUTE showed a significant positive change for both the lung-fat and lung-muscle CNRs (p = 0.03 for the lung-muscle CNR). Full 3-D lung parenchyma imaging with improved positive contrast between lung and other long T2 tissue types can be achieved robustly in a clinically feasible time using IR-DUTE with image subtraction when segmented radial acquisition with k-space reordering is employed.

  16. 3D range-gated super-resolution imaging based on stereo matching for moving platforms and targets

    Science.gov (United States)

    Sun, Liang; Wang, Xinwei; Zhou, Yan

    2017-11-01

    3D range-gated superresolution imaging is a novel 3D reconstruction technique for target detection and recognition with good real-time performance. However, for moving targets or moving platforms such as airborne, shipborne, remotely operated and autonomous vehicles, 3D reconstruction suffers large errors or fails entirely. To overcome this drawback, we propose a stereo-matching method for the 3D range-gated superresolution reconstruction algorithm. In the experiment, the target is a 38 cm tall Mario doll located 34 m from the sensor, and two successive frames of the target are acquired. To confirm that the method is effective, we transform the original images with translation, rotation, scale and perspective changes, respectively. The experimental results show that our method gives good 3D reconstructions for moving targets or platforms.

  17. Measurement of the presampled two-dimensional modulation transfer function of digital imaging systems

    International Nuclear Information System (INIS)

    Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Schueler, Beth A.; Ritenour, E. Russell

    2002-01-01

    The purpose of this work was to develop methods to measure the presampled two-dimensional modulation transfer function (2D MTF) of digital imaging systems. A custom x-ray 'point source' phantom was created by machining 256 holes with diameter 0.107 mm through a 0.5-mm-thick copper plate. The phantom was imaged several times, resulting in many images of individual x-ray 'spots'. The center of each spot (with respect to the pixel matrix) was determined to subpixel accuracy by fitting each spot to a 2D Gaussian function. The subpixel spot center locations were used to create a 5x oversampled system point spread function (PSF), which characterizes the optical and electrical properties of the system and is independent of the pixel sampling of the original image. The modulus of the Fourier transform of the PSF was calculated. Next, the Fourier function was normalized to the zero frequency value. Finally, the Fourier transform function was divided by the first-order Bessel function that defined the frequency content of the holes, resulting in the presampled 2D MTF. The presampled 2D MTFs of a 0.1 mm pixel pitch computed radiography system and of a 0.2 mm pixel pitch flat-panel digital imaging system that utilized a cesium iodide scintillator were measured. Comparison of the axial components of the 2D MTF to one-dimensional MTF measurements acquired using an edge device method demonstrated that the two methods produced consistent results
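
    The computation described (Fourier transform of the oversampled PSF, normalization at zero frequency, division by the spectrum of the circular hole) can be sketched as below; the sampling-pitch argument, the small-argument handling and the restriction to frequencies below the first zero of the hole spectrum are illustrative assumptions rather than the authors' code.

```python
import numpy as np
from scipy.special import j1

def presampled_mtf(psf, pitch_mm, hole_diameter_mm=0.107):
    """Presampled 2-D MTF from an oversampled PSF, corrected for the finite hole size.

    psf      : 2-D oversampled point spread function
    pitch_mm : sample spacing of the oversampled PSF grid (detector pitch / oversampling)
    """
    otf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
    mtf = otf / otf[psf.shape[0] // 2, psf.shape[1] // 2]    # normalize at zero frequency
    fy = np.fft.fftshift(np.fft.fftfreq(psf.shape[0], d=pitch_mm))
    fx = np.fft.fftshift(np.fft.fftfreq(psf.shape[1], d=pitch_mm))
    f = np.hypot(*np.meshgrid(fx, fy))                       # radial frequency, cycles/mm
    # Spectrum of a circular hole of diameter d: 2*J1(pi*f*d)/(pi*f*d);
    # the correction is only meaningful below the first zero of this function.
    arg = np.pi * f * hole_diameter_mm
    hole = np.where(arg > 1e-9, 2.0 * j1(arg) / np.where(arg > 1e-9, arg, 1.0), 1.0)
    return mtf / hole
```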

  18. Conversion of mammographic images to appear with the noise and sharpness characteristics of a different detector and x-ray system

    Energy Technology Data Exchange (ETDEWEB)

    Mackenzie, Alistair; Dance, David R.; Workman, Adam; Yip, Mary; Wells, Kevin; Young, Kenneth C. [National Coordinating Centre for the Physics of Mammography, Royal Surrey County Hospital, Guildford, GU2 7XX, United Kingdom and Department of Physics, University of Surrey, Guildford, GU2 7XH (United Kingdom); Northern Ireland Regional Medical Physics Service, Forster Green Hospital, Belfast, BT8 4HD (United Kingdom); Department of Physics, University of Surrey, Guildford, GU2 7XH (United Kingdom); Centre for Vision, Speech and Signal Processing, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, GU2 7XH (United Kingdom); National Coordinating Centre for the Physics of Mammography, Royal Surrey County Hospital, Guildford, GU2 7XX, United Kingdom and Department of Physics, University of Surrey, Guildford, GU2 7XH (United Kingdom)

    2012-05-15

    Purpose: Undertaking observer studies to compare imaging technology using clinical radiological images is challenging due to patient variability. To achieve a significant result, a large number of patients would be required to compare cancer detection rates for different image detectors and systems. The aim of this work was to create a methodology where only one set of images is collected on one particular imaging system. These images are then converted to appear as if they had been acquired on a different detector and x-ray system. Therefore, the effect of a wide range of digital detectors on cancer detection or diagnosis can be examined without the need for multiple patient exposures. Methods: Three detectors and x-ray systems [Hologic Selenia (ASE), GE Essential (CSI), Carestream CR (CR)] were characterized in terms of signal transfer properties, noise power spectra (NPS), modulation transfer function, and grid properties. The contributions of the three noise sources (electronic, quantum, and structure noise) to the NPS were calculated by fitting a quadratic polynomial at each spatial frequency of the NPS against air kerma. A methodology was developed to degrade the images to have the characteristics of a different (target) imaging system. The simulated images were created by first linearizing the original images such that the pixel values were equivalent to the air kerma incident at the detector. The linearized image was then blurred to match the sharpness characteristics of the target detector. Noise was then added to the blurred image to correct for differences between the detectors and any required change in dose. The electronic, quantum, and structure noise were added appropriate to the air kerma selected for the simulated image and thus ensuring that the noise in the simulated image had the same magnitude and correlation as the target image. A correction was also made for differences in primary grid transmission, scatter, and veiling glare. The method was
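
    A schematic sketch of the conversion pipeline described (linearize, blur by the ratio of target to source MTF, add shaped noise) is given below; the function names, the MTF-ratio filtering and the simple rescaling of the shaped noise are illustrative assumptions and omit the paper's detailed noise decomposition and the grid, scatter and veiling-glare corrections.

```python
import numpy as np

def convert_detector_image(img_src, linearize, delinearize, mtf_src, mtf_tgt,
                           noise_shape_nps, noise_std, rng=None):
    """Schematic detector-to-detector image conversion (not the authors' implementation).

    linearize/delinearize : callables mapping pixel values <-> air kerma
    mtf_src, mtf_tgt      : 2-D MTFs sampled on the image's (unshifted) Fourier grid
    noise_shape_nps       : 2-D spectrum used only to shape (colour) the added noise
    noise_std             : standard deviation of the added noise, in kerma units
    """
    rng = rng or np.random.default_rng()
    kerma = linearize(img_src)                                # 1. linearize to air kerma
    ratio = np.where(mtf_src > 1e-3, mtf_tgt / mtf_src, 0.0)  # 2. blur to target sharpness
    blurred = np.real(np.fft.ifft2(np.fft.fft2(kerma) * ratio))
    shaped = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal(kerma.shape))
                                  * np.sqrt(np.maximum(noise_shape_nps, 0.0))))
    shaped *= noise_std / shaped.std()                        # 3. add correlated noise
    return delinearize(blurred + shaped)
```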

  19. Modelling relationships between match events and match outcome in elite football.

    Science.gov (United States)

    Liu, Hongyou; Hopkins, Will G; Gómez, Miguel-Angel

    2016-08-01

    Identifying match events that are related to match outcome is an important task in football match analysis. Here we have used generalised mixed linear modelling to determine relationships of 16 football match events and 1 contextual variable (game location: home/away) with the match outcome. Statistics of 320 close matches (goal difference ≤ 2) of season 2012-2013 in the Spanish First Division Professional Football League were analysed. Relationships were evaluated with magnitude-based inferences and were expressed as extra matches won or lost per 10 close matches for an increase of two within-team or between-team standard deviations (SD) of the match event (representing effects of changes in team values from match to match and of differences between average team values, respectively). There was a moderate positive within-team effect from shots on target (3.4 extra wins per 10 matches; 99% confidence limits ±1.0), and a small positive within-team effect from total shots (1.7 extra wins; ±1.0). Effects of most other match events were related to ball possession, which had a small negative within-team effect (1.2 extra losses; ±1.0) but a small positive between-team effect (1.7 extra wins; ±1.4). Game location showed a small positive within-team effect (1.9 extra wins; ±0.9). In analyses of nine combinations of team and opposition end-of-season rank (classified as high, medium, low), almost all between-team effects were unclear, while within-team effects varied depending on the strength of team and opposition. Some of these findings will be useful to coaches and performance analysts when planning training sessions and match tactics.

  20. DENSE MATCHING COMPARISON BETWEEN CENSUS AND A CONVOLUTIONAL NEURAL NETWORK ALGORITHM FOR PLANT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Y. Xia

    2018-05-01

    Full Text Available 3D reconstruction of plants is difficult, as the complex distribution of leaves greatly increases the difficulty of dense matching. Semi-Global Matching has been applied successfully to recover the depth information of a scene, but its performance varies with the matching cost algorithm used. In this paper two matching cost computation algorithms, the Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High-resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated with the two matching costs are of comparable, acceptable quality, which shows the good performance of Census and the potential of neural networks to improve dense matching.
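
    For reference, a small sketch of the Census transform and its Hamming-distance matching cost (the first of the two cost functions compared) follows; the window size and the simple horizontal-shift disparity handling are illustrative choices, not the paper's settings.

```python
import numpy as np

def census_transform(img, w=5):
    """Census transform: each pixel becomes a binary vector of comparisons with its window."""
    h = w // 2
    pad = np.pad(img, h, mode="edge")
    bits = []
    for dy in range(-h, h + 1):
        for dx in range(-h, h + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[h + dy:h + dy + img.shape[0], h + dx:h + dx + img.shape[1]]
            bits.append((shifted < img).astype(np.uint8))
    return np.stack(bits, axis=-1)                  # H x W x (w*w - 1)

def census_cost(left, right, disparity, w=5):
    """Per-pixel Hamming-distance cost for one candidate disparity (border wrap ignored)."""
    cl = census_transform(left, w)
    cr = census_transform(np.roll(right, disparity, axis=1), w)
    return np.count_nonzero(cl != cr, axis=-1)
```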

  1. Restoration of Thickness, Density, and Volume for Highly Blurred Thin Cortical Bones in Clinical CT Images.

    Science.gov (United States)

    Pakdel, Amirreza; Hardisty, Michael; Fialkov, Jeffrey; Whyne, Cari

    2016-11-01

    In clinical CT images containing thin osseous structures, accurate definition of the geometry and density is limited by the scanner's resolution and radiation dose. This study presents and validates a practical methodology for restoring information about thin bone structure by volumetric deblurring of images. The methodology involves two steps: a phantom-free, post-reconstruction estimation of the 3D point spread function (PSF) from CT data sets, followed by iterative deconvolution using the PSF estimate. The performance of five iterative deconvolution algorithms (blind, Richardson-Lucy in its standard and Total Variation forms, modified residual norm steepest descent (MRNSD), and conjugate gradient least squares) was evaluated using CT scans of synthetic cortical bone phantoms. The MRNSD algorithm gave the best deblurring performance as assessed by cortical bone thickness error (0.18 mm) and intensity error (150 HU), and was subsequently applied to a CT image of a cadaveric skull. Performance was compared against micro-CT images of the excised thin cortical bone samples from the skull (average thickness 1.08 ± 0.77 mm). Error in quantitative measurements made from the deblurred images was reduced by 82% (p < 0.01) for cortical thickness and by 55% (p < 0.01) for bone mineral mass. These results demonstrate a significant restoration of geometric and radiological density information for thin osseous features.

  2. A method for partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    International Nuclear Information System (INIS)

    Barbee, David L; Holden, James E; Nickles, Robert J; Jeraj, Robert; Flynn, Ryan T

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated
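
    The idea of fitting a continuous expression for the Gaussian kernel width as a function of radial distance can be sketched as below; the radii, FWHM values and the quadratic form are hypothetical placeholders for illustration, not the paper's measured fit.

```python
import numpy as np

# Hypothetical Gaussian FWHMs (mm) measured from point-source scans at several radial
# distances (cm) from the scanner centre; the study fits its own measured values.
radii_cm = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
fwhm_mm = np.array([6.0, 6.3, 6.9, 7.8, 9.0])

# Continuous expression for kernel width versus radial distance (quadratic assumed),
# so the system PSF can be generated at any position in the field of view.
coeffs = np.polyfit(radii_cm, fwhm_mm, deg=2)

def gaussian_sigma_at(r_cm):
    """Gaussian sigma (mm) of the local PSF at radial distance r_cm."""
    return np.polyval(coeffs, r_cm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
```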

  3. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    Science.gov (United States)

    He, Xingwu; You, Junchen

    2018-03-01

    Ultrasound image restoration is an important topic in medical ultrasound imaging. However, without sufficient and precise knowledge of the system, traditional image restoration methods based on prior system knowledge often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of blind deconvolution for ultrasound image restoration. Experimental results demonstrate that, compared with traditional restoration methods, blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge. With an inaccurate, small initial PSF, the results show that blind deconvolution improves the overall image quality (better SNR and resolution). The time consumption of the methods is also reported; it does not increase significantly on a GPU platform.

  4. Real-time image restoration for iris recognition systems.

    Science.gov (United States)

    Kang, Byung Jun; Park, Kang Ryoung

    2007-12-01

    In the field of biometrics, it has been reported that iris recognition techniques have shown high levels of accuracy because unique patterns of the human iris, which has very many degrees of freedom, are used. However, because conventional iris cameras have small depth-of-field (DOF) areas, input iris images can easily be blurred, which can lead to lower recognition performance, since iris patterns are transformed by the blurring caused by optical defocusing. To overcome these problems, an autofocusing camera can be used. However, this inevitably increases the cost, size, and complexity of the system. Therefore, we propose a new real-time iris image-restoration method, which can increase the camera's DOF without requiring any additional hardware. This paper presents five novelties as compared to previous works: 1) by excluding eyelash and eyelid regions, it is possible to obtain more accurate focus scores from input iris images; 2) the parameter of the point spread function (PSF) can be estimated in terms of camera optics and measured focus scores; therefore, parameter estimation is more accurate than it has been in previous research; 3) because the PSF parameter can be obtained by using a predetermined equation, iris image restoration can be done in real-time; 4) by using a constrained least square (CLS) restoration filter that considers noise, performance can be greatly enhanced; and 5) restoration accuracy can also be enhanced by estimating the weight value of the noise-regularization term of the CLS filter according to the amount of image blurring. Experimental results showed that iris recognition errors when using the proposed restoration method were greatly reduced as compared to those results achieved without restoration or those achieved using previous iris-restoration methods.
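
    A minimal frequency-domain constrained least squares (CLS) restoration filter of the kind referred to is sketched below; the Laplacian regularizer and a fixed gamma follow the textbook form, whereas the paper estimates the regularization weight from the measured amount of blurring.

```python
import numpy as np

def cls_restore(blurred, psf, gamma=0.01):
    """Constrained least-squares restoration in the frequency domain.

    blurred : degraded image
    psf     : blur kernel, assumed registered with its centre at index (0, 0)
              (e.g. via np.fft.ifftshift of a centred kernel)
    gamma   : noise-regularization weight (estimated from the blur amount in the paper)
    """
    shape = blurred.shape
    H = np.fft.fft2(psf, shape)
    # Laplacian smoothness operator penalizing noise amplification at high frequencies
    lap = np.zeros(shape)
    lap[0, 0], lap[0, 1], lap[1, 0], lap[0, -1], lap[-1, 0] = -4.0, 1.0, 1.0, 1.0, 1.0
    P = np.fft.fft2(lap)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))
```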

  5. Pupil filter design by using a Bessel functions basis at the image plane.

    Science.gov (United States)

    Canales, Vidal F; Cagigal, Manuel P

    2006-10-30

    Many applications can benefit from the use of pupil filters for controlling the light intensity distribution near the focus of an optical system. Most of the design methods for such filters are based on a second-order expansion of the Point Spread Function (PSF). Here, we present a new procedure for designing radially-symmetric pupil filters. It is more precise than previous procedures as it considers the exact expression of the PSF, expanded as a function of first-order Bessel functions. Furthermore, this new method presents other advantages: the height of the side lobes can be easily controlled, it allows the design of amplitude-only, phase-only or hybrid filters, and the coefficients of the PSF expansion can be directly related to filter parameters. Finally, our procedure allows the design of filters with very different behaviours and optimal performance.

  6. Local Stereo Matching Using Adaptive Local Segmentation

    NARCIS (Netherlands)

    Damjanovic, S.; van der Heijden, Ferdinand; Spreeuwers, Lieuwe Jan

    We propose a new dense local stereo matching framework for gray-level images based on an adaptive local segmentation using a dynamic threshold. We define a new validity domain of the fronto-parallel assumption based on the local intensity variations in the 4-neighborhood of the matching pixel. The

  7. Fluorescence microscopy point spread function model accounting for aberrations due to refractive index variability within a specimen.

    Science.gov (United States)

    Ghosh, Sreya; Preza, Chrysanthe

    2015-07-01

    A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model was accomplished using images from 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6  μm in diameter) mounted in a sample with RI variation. In the investigated study, improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used over restoration using PSFs from an existing model. The new PSF model was further validated by showing that its prediction compares to an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with a 42% improved accuracy over the current PSF model prediction.

  8. Image deblurring with Poisson data: from cells to galaxies

    International Nuclear Information System (INIS)

    Bertero, M; Boccacci, P; Desiderà, G; Vicidomini, G

    2009-01-01

    Image deblurring is an important topic in imaging science. In this review, we consider together fluorescence microscopy and optical/infrared astronomy because of two common features: in both cases the imaging system can be described, with a sufficiently good approximation, by a convolution operator, whose kernel is the so-called point-spread function (PSF); moreover, the data are affected by photon noise, described by a Poisson process. This statistical property of the noise, that is common also to emission tomography, is the basis of maximum likelihood and Bayesian approaches introduced in the mid eighties. From then on, a huge amount of literature has been produced on these topics. This review is a tutorial and a review of a relevant part of this literature, including some of our previous contributions. We discuss the mathematical modeling of the process of image formation and detection, and we introduce the so-called Bayesian paradigm that provides the basis of the statistical treatment of the problem. Next, we describe and discuss the most frequently used algorithms as well as other approaches based on a different description of the Poisson noise. We conclude with a review of other topics related to image deblurring such as boundary effect correction, space-variant PSFs, super-resolution, blind deconvolution and multiple-image deconvolution. (topical review)
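
    As a concrete example of the maximum-likelihood approach for Poisson data discussed here, the sketch below blurs a toy image, adds Poisson noise and deconvolves it with the Richardson-Lucy routine from scikit-image; the toy object, PSF and iteration count are arbitrary illustrative choices.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import restoration

# Toy example: blur a synthetic object, add Poisson noise, then apply Richardson-Lucy,
# the classical maximum-likelihood EM algorithm for Poisson-distributed data.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[20:28, 30:44] = 100.0
psf = np.ones((5, 5)) / 25.0
blurred = convolve2d(truth, psf, mode="same", boundary="symm")
noisy = rng.poisson(blurred).astype(float)

restored = restoration.richardson_lucy(noisy, psf, 30, clip=False)
```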

  9. Optimal usage of cone beam computed tomography system with different field of views in image guided radiotherapy (IGRT

    Directory of Open Access Journals (Sweden)

    Narayana Venkata Naga Madhusudhana Sresty

    2015-09-01

    Full Text Available Purpose: To find methods for optimal usage of the XVI (X-ray volume imaging) system on an Elekta Synergy linear accelerator with different fields of view for the same lesion, in order to minimize patient dose due to imaging. Methods: 20 scans of 2 individual patients, with carcinoma of the sigmoid colon and carcinoma of the lung, were used in this study. Kilovoltage collimators with a medium field of view were used as per the preset information. Images were reconstructed for another collimator with a small field of view. The setup errors were evaluated with the XVI software, and the shift results of the two methods were compared. Results: Variation in treatment setup errors between the M20 and S20 collimators was ≤ 0.2 mm in translational and 0.3° in rotational shifts. The two fields of view gave almost equal translational and rotational shifts in all scans. Visualization of the target and surrounding structures was sufficient for XVI auto-matching. Conclusion: Imaging with a small field of view results in less patient dose than a medium or large field of view, so it is advisable to use collimators with a small field of view wherever possible. In this study, collimators with a small field of view were sufficient for both patients even though the preset information indicated a medium field of view. However, the choice always depends on the area required for matching, so individual selection is more important than the preset information in the XVI system.

  10. High-resolution imaging of cellular processes across textured surfaces using an index-matched elastomer.

    Science.gov (United States)

    Ravasio, Andrea; Vaishnavi, Sree; Ladoux, Benoit; Viasnoff, Virgile

    2015-03-01

    Understanding and controlling how cells interact with the microenvironment has emerged as a prominent field in bioengineering, stem cell research and in the development of the next generation of in vitro assays as well as organs on a chip. Changing the local rheology or the nanotextured surface of substrates has proved an efficient approach to improve cell lineage differentiation, to control cell migration properties and to understand environmental sensing processes. However, introducing substrate surface textures often alters the ability to image cells with high precision, compromising our understanding of molecular mechanisms at stake in environmental sensing. In this paper, we demonstrate how nano/microstructured surfaces can be molded from an elastomeric material with a refractive index matched to the cell culture medium. Once made biocompatible, contrast imaging (differential interference contrast, phase contrast) and high-resolution fluorescence imaging of subcellular structures can be implemented through the textured surface using an inverted microscope. Simultaneous traction force measurements by micropost deflection were also performed, demonstrating the potential of our approach to study cell-environment interactions, sensing processes and cellular force generation with unprecedented resolution. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  11. Hierarchical Stereo Matching in Two-Scale Space for Cyber-Physical System

    Directory of Open Access Journals (Sweden)

    Eunah Choi

    2017-07-01

    Full Text Available Dense disparity map estimation from a high-resolution stereo image is a very difficult problem in terms of both matching accuracy and computation efficiency. Thus, an exhaustive disparity search at full resolution is required. In general, examining more pixels in the stereo view results in more ambiguous correspondences. When a high-resolution image is down-sampled, the high-frequency components of the fine-scaled image are at risk of disappearing in the coarse-resolution image. Furthermore, if erroneous disparity estimates caused by missing high-frequency components are propagated across scale space, ultimately, false disparity estimates are obtained. To solve these problems, we introduce an efficient hierarchical stereo matching method in two-scale space. This method applies disparity estimation to the reduced-resolution image, and the disparity result is then up-sampled to the original resolution. The disparity estimation values of the high-frequency (or edge component) regions of the full-resolution image are combined with the up-sampled disparity results. In this study, we extracted the high-frequency areas from the scale-space representation by using difference of Gaussians (DoG) or by finding edge components using a Canny operator. Then, edge-aware disparity propagation was used to refine the disparity map. The experimental results show that the proposed algorithm outperforms previous methods.

  12. Hierarchical Stereo Matching in Two-Scale Space for Cyber-Physical System.

    Science.gov (United States)

    Choi, Eunah; Lee, Sangyoon; Hong, Hyunki

    2017-07-21

    Dense disparity map estimation from a high-resolution stereo image is a very difficult problem in terms of both matching accuracy and computation efficiency. Thus, an exhaustive disparity search at full resolution is required. In general, examining more pixels in the stereo view results in more ambiguous correspondences. When a high-resolution image is down-sampled, the high-frequency components of the fine-scaled image are at risk of disappearing in the coarse-resolution image. Furthermore, if erroneous disparity estimates caused by missing high-frequency components are propagated across scale space, ultimately, false disparity estimates are obtained. To solve these problems, we introduce an efficient hierarchical stereo matching method in two-scale space. This method applies disparity estimation to the reduced-resolution image, and the disparity result is then up-sampled to the original resolution. The disparity estimation values of the high-frequency (or edge component) regions of the full-resolution image are combined with the up-sampled disparity results. In this study, we extracted the high-frequency areas from the scale-space representation by using difference of Gaussians (DoG) or by finding edge components using a Canny operator. Then, edge-aware disparity propagation was used to refine the disparity map. The experimental results show that the proposed algorithm outperforms previous methods.
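
    A small sketch of the two-scale idea, combining an up-sampled coarse disparity map with a high-frequency mask obtained from a difference of Gaussians, is given below; the sigmas, threshold and bilinear up-sampling are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def combine_two_scale_disparity(full_img, coarse_disp, scale=2,
                                dog_sigmas=(1.0, 2.0), thresh=0.02):
    """Up-sample a coarse disparity map and flag high-frequency regions for re-estimation.

    full_img    : full-resolution grayscale image, values in [0, 1]
    coarse_disp : disparity map estimated on the down-sampled image
    """
    # Difference of Gaussians marks edges / fine detail that the coarse scale may have lost
    dog = gaussian_filter(full_img, dog_sigmas[0]) - gaussian_filter(full_img, dog_sigmas[1])
    high_freq = np.abs(dog) > thresh
    # Up-sample the coarse disparities; disparity values scale with image width
    disp_up = zoom(coarse_disp, scale, order=1)[:full_img.shape[0], :full_img.shape[1]] * scale
    # Disparities inside high_freq would be replaced by full-resolution estimates and refined
    return disp_up, high_freq
```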

  13. The application of computer color matching techniques to the matching of target colors in a food substrate: a first step in the development of foods with customized appearance.

    Science.gov (United States)

    Kim, Sandra; Golding, Matt; Archer, Richard H

    2012-06-01

    A predictive color matching model based on the colorimetric technique was developed and used to calculate the concentrations of primary food dyes needed in a model food substrate to match a set of standard tile colors. This research is the first stage in the development of novel three-dimensional (3D) foods in which color images or designs can be rapidly reproduced in 3D form. Absorption coefficients were derived for each dye, from a concentration series in the model substrate, a microwave-baked cake. When used in a linear, additive blending model these coefficients were able to predict cake color from selected dye blends to within 3 ΔE*(ab,10) color difference units, or within the limit of a visually acceptable match. Absorption coefficients were converted to pseudo X₁₀, Y₁₀, and Z₁₀ tri-stimulus values (X₁₀(P), Y₁₀(P), Z₁₀(P)) for colorimetric matching. The Allen algorithm was used to calculate dye concentrations to match the X₁₀(P), Y₁₀(P), and Z₁₀(P) values of each tile color. Several recipes for each color were computed with the tile specular component included or excluded, and tested in the cake. Some tile colors proved out-of-gamut, limited by legal dye concentrations; these were scaled to within legal range. Actual differences suggest reasonable visual matches could be achieved for within-gamut tile colors. The Allen algorithm, with appropriate adjustments of concentration outputs, could provide a sufficiently rapid and accurate calculation tool for 3D color food printing. The predictive color matching approach shows potential for use in a novel embodiment of 3D food printing in which a color image or design could be rendered within a food matrix through the selective blending of primary dyes to reproduce each color element. The on-demand nature of this food application requires rapid color outputs which could be provided by the color matching technique, currently used in nonfood industries, rather than by empirical food
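
    A generic least-squares stand-in for the concentration calculation is sketched below; it assumes the linear additive blending of per-dye coefficients described above, but it is not the Allen algorithm itself, and the out-of-gamut handling is a simple proportional scaling.

```python
import numpy as np

def match_dye_concentrations(target_xyz, unit_xyz_per_dye, max_conc):
    """Least-squares estimate of dye concentrations reproducing a target colour.

    target_xyz       : pseudo-tristimulus values of the target colour, shape (3,)
    unit_xyz_per_dye : 3 x n matrix of per-unit-concentration dye contributions
    max_conc         : legal maximum concentration of each dye, shape (n,)
    """
    conc, *_ = np.linalg.lstsq(np.asarray(unit_xyz_per_dye, float),
                               np.asarray(target_xyz, float), rcond=None)
    conc = np.clip(conc, 0.0, None)
    over = conc / np.asarray(max_conc, float)
    if over.max() > 1.0:                 # out-of-gamut: scale back into the legal range
        conc = conc / over.max()
    return conc
```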

  14. Modified Three-Step Search Block Matching Motion Estimation and Weighted Finite Automata based Fractal Video Compression

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2017-08-01

    Full Text Available The major challenge with the fractal image/video coding technique is that it requires long encoding times; how to reduce the encoding time therefore remains the key research question in fractal coding. Block matching motion estimation algorithms are used to reduce the computation performed during encoding. The objective of the proposed work is to develop an approach for video coding using a modified three-step search (MTSS) block matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block matching algorithm is used to compute motion vectors between two frames, i.e. the displacement of pixels, and WFA is used for coding because it behaves like fractal coding (FC). WFA represents an image (a frame or the motion-compensated prediction error) based on the fractal idea that the image has self-similarity within itself. The self-similarity is sought from the symmetry of the image, so the encoding algorithm divides the image into multiple levels of quad-tree segmentation and creates an automaton from the sub-images. The proposed MTSS block matching algorithm is based on a combination of rectangular and hexagonal search patterns and is compared with the existing new three-step search (NTSS), three-step search (TSS), and efficient three-step search (ETSS) block matching estimation algorithms. The performance of the proposed MTSS algorithm is evaluated in terms of mean absolute difference (MAD) and the average number of search points required per frame, with MAD used as the block distortion measure (BDM). Finally, the developed approaches, namely MTSS and WFA, MTSS and FC, and plain FC (applied to every frame), are compared with each other. The experiments are carried out on standard uncompressed video sequences, namely akiyo, bus, mobile, suzie, traffic, football, soccer, ice etc. Developed
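
    For orientation, the classic three-step search that the proposed MTSS modifies is sketched below (the modified rectangular/hexagonal pattern itself is not reproduced); the block size, SAD cost and a ±7 search range via step sizes 4, 2, 1 are the usual textbook choices, not necessarily the paper's parameters.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two blocks."""
    return np.abs(a.astype(float) - b.astype(float)).sum()

def three_step_search(ref, cur, bx, by, block=16, step=4):
    """Classic three-step search for the motion vector of one block.

    ref, cur : previous and current frames (grayscale arrays of equal size)
    bx, by   : top-left corner of the block in the current frame
    """
    target = cur[by:by + block, bx:bx + block]
    best = (0, 0)
    while step >= 1:
        cx, cy = best
        candidates = [(cx + dx, cy + dy) for dx in (-step, 0, step) for dy in (-step, 0, step)]
        costs = []
        for mx, my in candidates:
            x, y = bx + mx, by + my
            if 0 <= x <= ref.shape[1] - block and 0 <= y <= ref.shape[0] - block:
                costs.append((sad(target, ref[y:y + block, x:x + block]), (mx, my)))
        best = min(costs)[1]     # recentre on the best candidate of this step
        step //= 2
    return best                  # motion vector (dx, dy)
```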

  15. APPLICATION OF A DAMPED LOCALLY OPTIMIZED COMBINATION OF IMAGES METHOD TO THE SPECTRAL CHARACTERIZATION OF FAINT COMPANIONS USING AN INTEGRAL FIELD SPECTROGRAPH

    International Nuclear Information System (INIS)

    Pueyo, Laurent; Crepp, Justin R.; Hinkley, Sasha; Hillenbrand, Lynne; Dekany, Richard; Bouchez, Antonin; Roberts, Jenny; Vasisht, Gautam; Roberts, Lewis C.; Shao, Mike; Burruss, Rick; Brenner, Douglas; Oppenheimer, Ben R.; Zimmerman, Neil; Parry, Ian; Beichman, Charles; Soummer, Rémi

    2012-01-01

    High-contrast imaging instruments are now being equipped with integral field spectrographs (IFSs) to facilitate the detection and characterization of faint substellar companions. Algorithms currently envisioned to handle IFS data, such as the Locally Optimized Combination of Images (LOCI) algorithm, rely on aggressive point-spread function (PSF) subtraction, which is ideal for initially identifying companions but results in significantly biased photometry and spectroscopy owing to unwanted mixing with residual starlight. This spectrophotometric issue is further complicated by the fact that algorithmic color response is a function of the companion's spectrum, making it difficult to calibrate the effects of the reduction without using iterations involving a series of injected synthetic companions. In this paper, we introduce a new PSF calibration method, which we call 'damped LOCI', that seeks to alleviate these concerns. By modifying the cost function that determines the weighting coefficients used to construct PSF reference images, and also forcing those coefficients to be positive, it is possible to extract companion spectra with a precision that is set by calibration of the instrument response and transmission of the atmosphere, and not by post-processing. We demonstrate the utility of this approach using on-sky data obtained with the Project 1640 IFS at Palomar. Damped LOCI does not require any iterations on the underlying spectral type of the companion, nor does it rely on priors involving the chromatic and statistical properties of speckles. It is a general technique that can readily be applied to other current and planned instruments that employ IFSs.

  16. Improved LSB matching steganography with histogram characters reserved

    Science.gov (United States)

    Chen, Zhihong; Liu, Wenyao

    2008-03-01

    This letter builds on research into the LSB (least significant bit, i.e. the last bit of a binary pixel value) matching steganographic method and on a steganalytic method that targets the histograms of cover images, and proposes a modification to LSB matching. In LSB matching, if the LSB of the next cover pixel matches the next bit of secret data, nothing is done; otherwise, one is added to or subtracted from the cover pixel value at random. In our improved method, a steganographic information table is defined that records the changes introduced by the embedded secret bits. Using this table, whether to add or subtract one at the next pixel with the same value is decided dynamically so that the change to the cover image's histogram is minimized. The modified method therefore embeds the same payload as LSB matching but with improved steganographic security and less vulnerability to attacks. Experimental results show that with the new method the histograms retain their attributes, such as peak values and overall trends, to an acceptable degree, and that it outperforms LSB matching in terms of histogram distortion and resistance to existing steganalysis.
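
    Plain LSB matching, the baseline this letter improves on, can be sketched as follows; the improved method would replace the random ±1 choice with one driven by the steganographic information table so that the histogram change is minimized. The array handling and boundary treatment here are illustrative.

```python
import numpy as np

def lsb_match_embed(cover, bits, rng=None):
    """Plain LSB matching: keep the pixel if its LSB already equals the secret bit,
    otherwise add or subtract one at random (either change flips the LSB, so the
    secret bit is embedded in both cases)."""
    rng = rng or np.random.default_rng()
    stego = cover.astype(np.int16).ravel().copy()
    for i, bit in enumerate(bits):
        if (stego[i] & 1) != bit:
            delta = int(rng.choice((-1, 1)))
            if stego[i] == 0:            # stay inside the valid 8-bit pixel range
                delta = 1
            elif stego[i] == 255:
                delta = -1
            stego[i] += delta
    return stego.reshape(cover.shape).astype(np.uint8)
```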

  17. An investigation into CT radiation dose variations for head examinations on matched equipment

    International Nuclear Information System (INIS)

    Zarb, Francis; Foley, Shane; Toomey, Rachel; Rainford, Louise; Holm, Susanne; Evanoff, Michael G.

    2016-01-01

    This study investigated radiation dose and image quality differences for computed tomography (CT) head examinations across centres with matched CT equipment. Radiation dose records and imaging protocols currently employed across three European university teaching hospitals with specification-matched CT equipment models were collated, compared and coded as Centres A, B and C. Patient scans (n = 40) obtained from Centres A and C were evaluated for image quality, based on the visualisation of Commission of European Community (CEC) image quality criteria using visual grading characteristic (VGC) analysis, where American Board of Radiology examiners (n = 11) stated their confidence in identifying anatomical criteria. Mean doses in terms of CT dose index (CTDIvol, mGy) and dose-length product (DLP, mGy cm) were as follows: Centre A, 33.12 mGy and 461.45 mGy cm; Centre B, 101 mGy (base)/32 mGy (cerebrum) and 762 mGy cm; and Centre C, 71.98 mGy and 1047.26 mGy cm, showing a significant difference (p ≤ 0.05) in DLP across centres. VGC analysis indicated better visualisation of the CEC criteria on Centre C images (VGC AUC 0.225). All three imaging protocols are routinely used clinically, and image quality is acceptable in each centre. Clinical centres with identical-model CT scanners have customised their protocols in various ways, achieving a range of dose savings while still producing clinically acceptable image quality. (authors)

  18. Practical method for appearance match between soft copy and hard copy

    Science.gov (United States)

    Katoh, Naoya

    1994-04-01

    CRT monitors are often used as soft-proofing devices for hard-copy image output. However, what the user sees on the monitor does not match the output, even if the monitor and the output device are calibrated with CIE/XYZ or CIE/Lab. This is especially obvious when the correlated color temperature (CCT) of the CRT monitor's white point differs significantly from that of the ambient light. In a typical office environment, one uses a computer graphics monitor with a CCT of 9300 K in a room lit by white fluorescent light with a CCT of 4150 K. In such a case, the human visual system is partially adapted to the CRT monitor's white point and partially to the ambient light. Visual experiments were performed on the effect of the ambient lighting. A practical method for soft-copy color reproduction that matches the hard-copy image in appearance is presented in this paper. The method is fundamentally based on a simple von Kries adaptation model and takes into account the human visual system's partial adaptation and contrast matching.
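
    A minimal von Kries chromatic adaptation step of the kind this method builds on is sketched below, assuming the Hunt-Pointer-Estevez cone matrix and full adaptation; the paper's partial-adaptation model would instead blend the monitor and ambient white points before forming the diagonal scaling.

```python
import numpy as np

# Hunt-Pointer-Estevez matrix mapping CIE XYZ to cone-like LMS responses
M_HPE = np.array([[ 0.38971, 0.68898, -0.07868],
                  [-0.22981, 1.18340,  0.04641],
                  [ 0.00000, 0.00000,  1.00000]])

def von_kries_adapt(xyz, white_src, white_dst):
    """Adapt XYZ values from a source white point to a destination white point by
    diagonal scaling in cone space (full adaptation)."""
    lms_src = M_HPE @ np.asarray(white_src, float)
    lms_dst = M_HPE @ np.asarray(white_dst, float)
    M = np.linalg.inv(M_HPE) @ np.diag(lms_dst / lms_src) @ M_HPE
    return np.asarray(xyz, float) @ M.T
```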

  19. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    Science.gov (United States)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the field of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. In this paper, we used the Kinect camera developed by Microsoft, which captured color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed earlier, and this paper proposes an improved feature matching between successive video frames with the use of neural network methodology in order to reduce the computation time of feature matching. The features extracted are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned distance based on the Kinect technology that can be used by the robot in order to determine the path of navigation, along with obstacle detection applications.

  20. A novel iris transillumination grading scale allowing flexible assessment with quantitative image analysis and visual matching.

    Science.gov (United States)

    Wang, Chen; Brancusi, Flavia; Valivullah, Zaheer M; Anderson, Michael G; Cunningham, Denise; Hedberg-Buenz, Adam; Power, Bradley; Simeonov, Dimitre; Gahl, William A; Zein, Wadih M; Adams, David R; Brooks, Brian

    2018-01-01

    To develop a sensitive scale of iris transillumination suitable for clinical and research use, with the capability of either quantitative analysis or visual matching of images. Iris transillumination photographic images were used from 70 study subjects with ocular or oculocutaneous albinism. Subjects represented a broad range of ocular pigmentation. A subset of images was subjected to image analysis and ranking by both expert and nonexpert reviewers. Quantitative ordering of images was compared with ordering by visual inspection. Images were binned to establish an 8-point scale. Ranking consistency was evaluated using the Kendall rank correlation coefficient (Kendall's tau). Visual ranking results were assessed using Kendall's coefficient of concordance (Kendall's W) analysis. There was a high degree of correlation among the image analysis, expert-based and non-expert-based image rankings. Pairwise comparisons of the quantitative ranking with each reviewer generated an average Kendall's tau of 0.83 ± 0.04 (SD). Inter-rater correlation was also high with Kendall's W of 0.96, 0.95, and 0.95 for nonexpert, expert, and all reviewers, respectively. The current standard for assessing iris transillumination is expert assessment of clinical exam findings. We adapted an image-analysis technique to generate quantitative transillumination values. Quantitative ranking was shown to be highly similar to a ranking produced by both expert and nonexpert reviewers. This finding suggests that the image characteristics used to quantify iris transillumination do not require expert interpretation. Inter-rater rankings were also highly similar, suggesting that varied methods of transillumination ranking are robust in terms of producing reproducible results.

  1. Object-Based Dense Matching Method for Maintaining Structure Characteristics of Linear Buildings.

    Science.gov (United States)

    Su, Nan; Yan, Yiming; Qiu, Mingjie; Zhao, Chunhui; Wang, Liguo

    2018-03-29

    In this paper, we propose a novel object-based dense matching method designed specifically for high-precision disparity maps of building objects in urban areas, which can maintain accurate object structure characteristics. The proposed framework mainly includes three stages. Firstly, an improved edge line extraction method is proposed so that the edge segments fit closely to building outlines. Secondly, a fusion method is proposed for the outlines under the constraint of straight lines, which preserves the building structural attribute of parallel or vertical edges and is very useful for dense matching. Finally, we propose an edge constraint and outline compensation (ECAOC) dense matching method to maintain building structural characteristics in the disparity map. In the proposed method, the improved edge lines are used to optimize the matching search scope and the matching template window, and the high-precision building outlines are used to compensate the shape features of building objects. Our method greatly increases the matching accuracy of building objects in urban areas, especially at building edges. For the outline extraction experiments, our fusion method shows superiority and robustness on panchromatic images from different satellites and of different resolutions. For the dense matching experiments, our ECAOC method shows great advantages in matching accuracy for building objects in urban areas compared with three other methods.

  2. Object-Based Dense Matching Method for Maintaining Structure Characteristics of Linear Buildings

    Directory of Open Access Journals (Sweden)

    Nan Su

    2018-03-01

    Full Text Available In this paper, we propose a novel object-based dense matching method designed specifically for high-precision disparity maps of building objects in urban areas, which can maintain accurate object structure characteristics. The proposed framework mainly includes three stages. Firstly, an improved edge line extraction method is proposed so that the edge segments fit closely to building outlines. Secondly, a fusion method is proposed for the outlines under the constraint of straight lines, which preserves the building structural attribute of parallel or vertical edges and is very useful for dense matching. Finally, we propose an edge constraint and outline compensation (ECAOC) dense matching method to maintain building structural characteristics in the disparity map. In the proposed method, the improved edge lines are used to optimize the matching search scope and the matching template window, and the high-precision building outlines are used to compensate the shape features of building objects. Our method greatly increases the matching accuracy of building objects in urban areas, especially at building edges. For the outline extraction experiments, our fusion method shows superiority and robustness on panchromatic images from different satellites and of different resolutions. For the dense matching experiments, our ECAOC method shows great advantages in matching accuracy for building objects in urban areas compared with three other methods.

  3. Wide baseline stereo matching based on double topological relationship consistency

    Science.gov (United States)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo matching based on a novel scheme called double topological relationship consistency (DCTR). The double topological configuration combines the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only establishes a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, and it overcomes many problems of traditional methods that depend on strong invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown in which the two cameras are located in very different orientations. Epipolar geometry can then be recovered using RANSAC, by far the most widely adopted method. With this method, we obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.

  4. Multispectral Image Feature Points

    Directory of Open Access Journals (Sweden)

    Cristhian Aguilera

    2012-09-01

    Full Text Available This paper presents a novel feature point descriptor for the multispectral image case: far-infrared and visible spectrum images. It allows matching of interest points in images of the same scene acquired in different spectral bands. Initially, points of interest are detected in both images through a SIFT-like scale-space representation. Then, these points are characterized using an Edge Oriented Histogram (EOH) descriptor. Finally, points of interest from the multispectral images are matched by finding nearest couples using the information from the descriptor. The experimental results and comparisons with similar methods show both the validity of the proposed approach and the improvements it offers with respect to the current state of the art.

  5. A PSF photometry tool for NASA's Kepler, K2, and TESS missions

    Science.gov (United States)

    Cardoso, Jose Vinicius De Miranda; Barentsen, Geert; Hedges, Christina L.; Gully-Santiago, Michael A.; Cody, Ann Marie; Montet, Ben

    2018-01-01

    NASA's Kepler and K2 missions have impacted all areas of astrophysics in unique and important ways by delivering high-precision time series data on asteroids, stars, and galaxies. For example, both the official Kepler pipeline and the various community-owned pipelines have been successful at discovering a myriad of transiting exoplanets around a wide range of stellar types. However, the existing pipelines tend to focus on studying isolated stars using simple aperture photometry, and often perform sub-optimally in crowded fields where objects are blended. To address this issue, we present a Point Spread Function (PSF) photometry toolkit for Kepler and K2 which is able to extract light curves from crowded regions, such as the Beehive Cluster, the Lagoon Nebula, and the open cluster M67, which were all recently observed by Kepler. We present a detailed discussion of the theory and the practical use, and demonstrate our tool at various levels of crowding. Finally, we discuss the future use of the tool on data from the TESS mission. The code is open source and available on GitHub as part of the PyKE toolkit for Kepler/K2 data analysis.

  6. Matching Students to Schools

    Directory of Open Access Journals (Sweden)

    Dejan Trifunovic

    2017-08-01

    Full Text Available In this paper, we present the problem of matching students to schools by using different matching mechanisms. This market is specific since public schools are free and the price mechanism cannot be used to determine the optimal allocation of children in schools. Therefore, it is necessary to use different matching algorithms that mimic the market mechanism and enable us to determine the core of the cooperative game. In this paper, we will determine that it is possible to apply cooperative game theory in matching problems. This review paper is based on illustrative examples aiming to compare matching algorithms in terms of the incentive compatibility, stability and efficiency of the matching. In this paper we will present some specific problems that may occur in matching, such as improving the quality of schools, favoring minority students, the limited length of the list of preferences and generating strict priorities from weak priorities.
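
    One canonical mechanism of the kind compared in such reviews is student-proposing deferred acceptance (Gale-Shapley), sketched below with made-up example data; it is not necessarily one of the specific algorithms analysed in the paper, and the tie-free school priorities are a simplifying assumption.

```python
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing deferred acceptance: stable and strategy-proof for students.

    student_prefs : dict student -> list of schools in order of preference
    school_prefs  : dict school  -> list of students in strict priority order
    capacities    : dict school  -> number of seats
    """
    rank = {s: {st: i for i, st in enumerate(p)} for s, p in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}
    held = {s: [] for s in school_prefs}      # tentatively admitted students per school
    free = list(student_prefs)
    while free:
        st = free.pop()
        prefs = student_prefs[st]
        if next_choice[st] >= len(prefs):
            continue                          # list exhausted: student stays unmatched
        school = prefs[next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        held[school].sort(key=lambda x: rank[school][x])
        if len(held[school]) > capacities[school]:
            free.append(held[school].pop())   # lowest-priority tentative admit is rejected
    return held

# Hypothetical example: two schools with one seat each
match = deferred_acceptance(
    {"ann": ["A", "B"], "bob": ["A", "B"]},
    {"A": ["bob", "ann"], "B": ["ann", "bob"]},
    {"A": 1, "B": 1},
)
```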

  7. Drop size distribution measured by imaging: determination of the measurement volume by the calibration of the point spread function

    International Nuclear Information System (INIS)

    Fdida, Nicolas; Blaisot, Jean-Bernard

    2010-01-01

    Measurement of drop size distributions in a spray depends on the definition of the control volume for drop counting. For image-based techniques, this implies the definition of a depth-of-field (DOF) criterion. A sizing procedure based on an imaging model and associated with a calibration procedure is presented. Relations between image parameters and object properties are used to provide a measure of droplet size regardless of the distance from the in-focus plane. A DOF criterion independent of drop size, based on the determination of the width of the point spread function (PSF), is proposed. It allows the measurement volume to be extended to defocused droplets and, thanks to the calibration of the PSF, clearly defines the depth of the measurement volume. Calibrated opaque discs, calibrated pinholes and an optical edge are used for this calibration. The technique is compared with a phase Doppler particle analyser and a laser diffraction granulometer in an application to an industrial spray. Good agreement is found between the techniques when particular care is given to the sampling of droplets. The determination of the measurement volume is used to determine the drop concentration in the spray and the maximum drop concentration that imaging can support

  8. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    Science.gov (United States)

    Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni

    2006-10-01

    In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.

  9. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    International Nuclear Information System (INIS)

    Ratnam, Challa; Rao, Vadlamudi Lakshmana; Goud, Sivagouni Lachaa

    2006-01-01

    In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper

  10. Reducing depth induced spherical aberration in 3D widefield fluorescence microscopy by wavefront coding using the SQUBIC phase mask

    Science.gov (United States)

    Patwary, Nurmohammed; Doblas, Ana; King, Sharon V.; Preza, Chrysanthe

    2014-03-01

    Imaging thick biological samples introduces spherical aberration (SA) due to refractive index (RI) mismatch between the specimen and the imaging lens immersion medium. SA increases with either depth or RI mismatch. Therefore, it is difficult to find a static compensator for SA [1]. Different wavefront coding methods [2, 3] have been studied to find an optimal static wavefront correction to reduce depth-induced SA. Inspired by a recent design of a radially symmetric squared cubic (SQUBIC) phase mask that was tested for scanning confocal microscopy [1], we have modified the pupil using the SQUBIC mask to engineer the point spread function (PSF) of a wide-field fluorescence microscope. In this study, simulated images of a thick test object were generated using a wavefront-encoded engineered PSF (WFE-PSF) and were restored using space-invariant (SI) and depth-variant (DV) expectation maximization (EM) algorithms implemented in the COSMOS software [4]. Quantitative comparisons between restorations obtained with the conventional and WFE PSFs are presented. Simulations show that, in the presence of SA, the use of the SI-EM algorithm and a single SQUBIC-encoded WFE-PSF can yield adequate image restoration. In addition, in the presence of a large amount of SA, it is possible to obtain adequate results using the DV-EM with fewer DV-PSFs than would typically be required for processing images acquired with a clear circular aperture (CCA) PSF. This result implies that modification of a wide-field system with the SQUBIC mask renders the system less sensitive to depth-induced SA and suitable for imaging samples at larger optical depths.

  11. Quality assessment and enhancement for cone-beam computed tomography in dental imaging

    International Nuclear Information System (INIS)

    Jeon, Sung Chae

    2006-02-01

    Cone-beam CT will become an increasingly important diagnostic imaging modality in dental practice over the next decade. For dental diagnostic imaging, a cone-beam computed tomography (CBCT) system based on a large-area flat-panel imager has been designed and developed for three-dimensional volumetric imaging. The new CBCT system can provide a 3-D volumetric image from a single circular scan with relatively short scan times (20-30 seconds) and requires a lower radiation dose than conventional CT. To reconstruct the volumetric image from 2-D projection images, the FDK algorithm was employed. The prototype CBCT system gives promising results for efficient diagnosis. This dissertation deals with assessment, enhancement, and optimization of high-performance dental cone-beam computed tomography. A new blur estimation method, a model-based estimation algorithm, was proposed. Based on the empirical model of the PSF, image restoration is applied to radiological images. The accuracy of PSF estimation under Poisson noise and readout electronic noise is significantly better for the R-L estimator than for the Wiener estimator. In the image restoration experiment, the result showed much better improvement in the low and middle range of spatial frequencies. The proposed algorithm is a simpler and more effective method of determining the 2-D PSF of an x-ray imaging system than traditional methods. An image-based scatter correction scheme to reduce scatter effects was also proposed. This algorithm corrects scatter on projection images based on convolution, scatter fraction, and angular interpolation. The scatter signal was estimated by convolving a projection image with a scatter point spread function (SPSF) followed by multiplication with the scatter fraction. The scatter fraction was estimated using a collimator, similar to the SPECS method. This method does not require extra x-ray dose or any additional phantom. The maximum estimated error for interpolation was less than 7
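
    A minimal sketch of the convolution-based scatter estimate described above; the Gaussian kernel standing in for the measured SPSF, the scatter fraction, and the toy projection are illustrative assumptions, not the author's calibrated values.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def estimate_scatter(projection, spsf_sigma, scatter_fraction):
            """Scatter ~ (projection convolved with SPSF) x scatter fraction.
            A broad Gaussian stands in for the measured scatter PSF."""
            blurred = gaussian_filter(projection, sigma=spsf_sigma)
            return scatter_fraction * blurred

        def scatter_corrected(projection, spsf_sigma=40.0, scatter_fraction=0.3):
            scatter = estimate_scatter(projection, spsf_sigma, scatter_fraction)
            return np.clip(projection - scatter, 0, None)   # subtract estimated scatter

        proj = np.random.poisson(1000, size=(256, 256)).astype(float)  # toy projection
        corrected = scatter_corrected(proj)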

  12. Green's function matching method for adjoining regions having different masses

    International Nuclear Information System (INIS)

    Morgenstern Horing, Norman J

    2006-01-01

    We present a primer on the method of Green's function matching for the determination of the global Schroedinger Green's function for all space subject to joining conditions at an interface between two (or more) separate parts of the region having different masses. The object of this technique is to determine the full space Schroedinger Green's function in terms of the individual Green's functions of the constituent parts taken as if they were themselves extended to all space. This analytical method has had successful applications in the theory of surface states, and remains of interest for nanostructures
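
    As a hedged illustration of the joining conditions implied above (standard position-dependent-mass, BenDaniel-Duke-type matching in one dimension at an interface x = 0; not a reproduction of the paper's full-space construction), the Green's function and its mass-weighted normal derivative are taken continuous across the interface:

        G_1(0^-, x') = G_2(0^+, x'), \qquad
        \left.\frac{1}{m_1}\,\frac{\partial G_1(x, x')}{\partial x}\right|_{x=0^-}
          = \left.\frac{1}{m_2}\,\frac{\partial G_2(x, x')}{\partial x}\right|_{x=0^+}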

  13. Latent palmprint matching.

    Science.gov (United States)

    Jain, Anil K; Feng, Jianjiang

    2009-06-01

    The evidential value of palmprints in forensic applications is clear as about 30 percent of the latents recovered from crime scenes are from palms. While biometric systems for palmprint-based personal authentication in access control type of applications have been developed, they mostly deal with low-resolution (about 100 ppi) palmprints and only perform full-to-full palmprint matching. We propose a latent-to-full palmprint matching system that is needed in forensic applications. Our system deals with palmprints captured at 500 ppi (the current standard in forensic applications) or higher resolution and uses minutiae as features to be compatible with the methodology used by latent experts. Latent palmprint matching is a challenging problem because latent prints lifted at crime scenes are of poor image quality, cover only a small area of the palm, and have a complex background. Other difficulties include a large number of minutiae in full prints (about 10 times as many as fingerprints), and the presence of many creases in latents and full prints. A robust algorithm to reliably estimate the local ridge direction and frequency in palmprints is developed. This facilitates the extraction of ridge and minutiae features even in poor quality palmprints. A fixed-length minutia descriptor, MinutiaCode, is utilized to capture distinctive information around each minutia and an alignment-based minutiae matching algorithm is used to match two palmprints. Two sets of partial palmprints (150 live-scan partial palmprints and 100 latent palmprints) are matched to a background database of 10,200 full palmprints to test the proposed system. Despite the inherent difficulty of latent-to-full palmprint matching, rank-1 recognition rates of 78.7 and 69 percent, respectively, were achieved in searching live-scan partial palmprints and latent palmprints against the background database.
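
    A toy alignment-based minutiae matcher (brute-force reference-pair alignment followed by greedy pairing) to make the final matching stage concrete; it does not implement the MinutiaCode descriptor described above, and the distance and angle tolerances are illustrative.

        import numpy as np

        def align(minutiae, ref):
            """Translate and rotate a minutiae set (x, y, theta) so `ref` sits at the origin."""
            x0, y0, t0 = ref
            c, s = np.cos(-t0), np.sin(-t0)
            out = []
            for x, y, t in minutiae:
                dx, dy = x - x0, y - y0
                out.append((c * dx - s * dy, s * dx + c * dy, (t - t0) % (2 * np.pi)))
            return out

        def match_score(set_a, set_b, d_tol=12.0, a_tol=np.pi / 12):
            """Try every reference pair, count minutiae paired within the tolerances."""
            best = 0
            for ra in set_a:
                for rb in set_b:
                    a, b = align(set_a, ra), align(set_b, rb)
                    used, count = set(), 0
                    for xa, ya, ta in a:
                        for j, (xb, yb, tb) in enumerate(b):
                            ang = abs((ta - tb + np.pi) % (2 * np.pi) - np.pi)
                            if (j not in used and np.hypot(xa - xb, ya - yb) < d_tol
                                    and ang < a_tol):
                                used.add(j)
                                count += 1
                                break
                    best = max(best, count)
            return best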

  14. A New FPGA Architecture of FAST and BRIEF Algorithm for On-Board Corner Detection and Matching.

    Science.gov (United States)

    Huang, Jingjin; Zhou, Guoqing; Zhou, Xiang; Zhang, Rongting

    2018-03-28

    Although some researchers have proposed Field Programmable Gate Array (FPGA) architectures for the Features From Accelerated Segment Test (FAST) and Binary Robust Independent Elementary Features (BRIEF) algorithms, these traditional architectures do not consider image data storage, so no image data can be reused by follow-up algorithms. This paper proposes a new FPGA architecture that considers the reuse of sub-image data. In the proposed architecture, a remainder-based method is first designed for reading the sub-image, and a FAST detector and a BRIEF descriptor are combined for corner detection and matching. Six pairs of satellite images with different textures, located in the Mentougou district, Beijing, China, are used to evaluate the performance of the proposed architecture. The Modelsim simulation results show that: (i) the proposed architecture is effective for sub-image reading from DDR3 at a minimum cost; (ii) the FPGA implementation is correct and efficient for corner detection and matching; for example, the average matching rates for natural and artificial areas are approximately 67% and 83%, respectively, which are close to the PC results, and the processing speed of the FPGA is approximately 31 and 2.5 times faster than PC processing and GPU processing, respectively.
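
    A hedged software analogue of the FAST-plus-BRIEF pipeline, using OpenCV's ORB (a FAST corner detector combined with a BRIEF-style binary descriptor) and Hamming-distance matching; the file names are placeholders, and the remainder-based sub-image reading is specific to the FPGA design, so it is not reproduced here.

        import cv2

        img1 = cv2.imread("tile_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
        img2 = cv2.imread("tile_b.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=2000)     # FAST corners + BRIEF-like descriptors
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # Hamming distance is the natural metric for binary descriptors
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

        match_rate = len(matches) / max(len(kp1), 1)
        print(f"{len(matches)} matches, matching rate ~ {match_rate:.2f}")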

  15. Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing.

    Science.gov (United States)

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2008-08-01

    This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

  16. Stinging Insect Matching Game

    Science.gov (United States)

    Stinging insects can ruin summer fun for those who are ... the difference between the different kinds of stinging insects in order to keep your summer safe and ...

  17. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Science.gov (United States)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
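
    A minimal correlation-matching sketch (plain normalized cross-correlation on fixed patches, not the rotation- and scale-invariant MOCC measure described above); the patch size, threshold, and point lists are illustrative, and interest points are assumed to lie away from the image borders.

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation of two equally sized patches."""
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
            return float((a * b).sum() / denom)

        def match_points(img1, pts1, img2, pts2, half=7, threshold=0.8):
            """Greedy best-NCC matching of interest points between two images."""
            def patch(img, p):
                y, x = p                      # points assumed >= `half` pixels from borders
                return img[y - half:y + half + 1, x - half:x + half + 1]
            matches = []
            for i, p in enumerate(pts1):
                scores = [ncc(patch(img1, p), patch(img2, q)) for q in pts2]
                j = int(np.argmax(scores))
                if scores[j] > threshold:
                    matches.append((i, j, scores[j]))
            return matches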

  18. Fingerprint recognition system by use of graph matching

    Science.gov (United States)

    Shen, Wei; Shen, Jun; Zheng, Huicheng

    2001-09-01

    Fingerprint recognition is an important subject in biometrics for identifying or verifying persons by physiological characteristics, and has found wide applications in different domains. In the present paper, we present a fingerprint recognition system that combines singular points and structures. The principal processing steps of our system are: preprocessing and ridge segmentation, singular point extraction and selection, graph representation, and fingerprint recognition by graph matching. Our fingerprint recognition system is implemented and tested on many fingerprint images and the experimental results are satisfactory. Different techniques are used in our system, such as fast calculation of the orientation field, local fuzzy dynamical thresholding, algebraic analysis of connections, and fingerprint representation and matching by graphs. We find that for a fingerprint database that is not very large, the recognition rate is very high even without using a prior coarse category classification. This system works well for both one-to-few and one-to-many problems.

  19. An Image Processing Approach to Pre-compensation for Higher-Order Aberrations in the Eye

    Directory of Open Access Journals (Sweden)

    Miguel Alonso Jr

    2004-06-01

    Human beings rely heavily on vision for almost all of the tasks that are required in daily life. Because of this dependence on vision, humans with visual limitations, caused by genetic inheritance, disease, or age, will have difficulty in completing many of the tasks required of them. Some individuals with severe visual impairments, known as high-order aberrations, may have difficulty in interacting with computers, even when using a traditional means of visual correction (e.g., spectacles, contact lenses). This is, in part, because these correction mechanisms can only compensate for the most regular (low-order) distortions or aberrations of the image in the eye. This paper presents an image processing approach that pre-compensates the images displayed on the computer screen, so as to counter the effect of the eye's aberrations on the image. The characterization of the eye required to perform this customized pre-compensation is the eye's Point Spread Function (PSF). Ophthalmic instruments generically called "Wavefront Analyzers" can now measure this description of the eye's optical properties. The characterization provided by these instruments also includes the "higher-order aberration components" and could, therefore, lead to more comprehensive vision correction than traditional mechanisms. This paper explains the theoretical foundation of the methods proposed and illustrates them with experiments involving the emulation of a known and constant PSF by interposing a lens in the field of view of normally sighted test subjects.
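
    A minimal sketch of PSF-based pre-compensation under the idea stated above: the displayed image is inverse-filtered (Wiener-style) with the measured eye PSF, so that the eye's own blurring approximately restores it. The Gaussian PSF, regularization constant, and image are illustrative assumptions, not the paper's measured wavefront data.

        import numpy as np

        def psf_otf(psf, shape):
            """Embed a small, centred PSF into a full-size array and take its FFT."""
            pad = np.zeros(shape)
            pad[:psf.shape[0], :psf.shape[1]] = psf
            pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
            return np.fft.fft2(pad)

        def precompensate(image, psf, k=1e-2):
            """Wiener-style inverse filter applied before display."""
            H = psf_otf(psf, image.shape)
            W = np.conj(H) / (np.abs(H) ** 2 + k)          # regularized inverse
            pre = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
            return np.clip(pre, 0.0, 1.0)                  # display range is limited

        # toy example: Gaussian blur standing in for a measured eye PSF
        y, x = np.mgrid[-16:17, -16:17]
        psf = np.exp(-(x**2 + y**2) / (2 * 4.0**2))
        psf /= psf.sum()
        img = np.random.rand(256, 256)
        display = precompensate(img, psf)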

  20. Image Feature Detection and Matching in Underwater Conditions

    Science.gov (United States)

    2010-04-01

    (a) SIFT with Nearest Neighbor Matching; (b) GLOH.

  1. Perfectly Matched Layer for the Wave Equation Finite Difference Time Domain Method

    Science.gov (United States)

    Miyazaki, Yutaka; Tsuchiya, Takao

    2012-07-01

    The perfectly matched layer (PML) is introduced into the wave equation finite difference time domain (WE-FDTD) method. The WE-FDTD method is a finite difference method in which the wave equation is directly discretized on the basis of the central differences. The required memory of the WE-FDTD method is less than that of the standard FDTD method because no particle velocity is stored in the memory. In this study, the WE-FDTD method is first combined with the standard FDTD method. Then, Berenger's PML is combined with the WE-FDTD method. Some numerical demonstrations are given for the two- and three-dimensional sound fields.
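
    A hedged one-dimensional sketch of a wave-equation FDTD update with a simple graded damping layer at the boundaries; this is a stand-in for, not an implementation of, Berenger's split-field PML, and the grid size, damping profile, and source are illustrative.

        import numpy as np

        c, dx = 340.0, 1e-3
        dt = 0.9 * dx / c                       # CFL-stable time step
        n, pml = 400, 40                        # grid points, absorbing-layer width

        # graded damping coefficient: zero in the interior, rising inside the layer
        sigma = np.zeros(n)
        ramp = np.linspace(0.0, 1.0, pml) ** 2
        sigma[:pml], sigma[-pml:] = ramp[::-1], ramp
        sigma *= 0.1 * 2.0 / dt

        p_prev = np.zeros(n)
        p = np.zeros(n)
        p[n // 2] = 1.0                         # initial pressure pulse

        for _ in range(1000):
            lap = np.zeros(n)
            lap[1:-1] = p[:-2] - 2 * p[1:-1] + p[2:]
            # damped second-order update of p_tt + sigma p_t = c^2 p_xx
            p_next = (2 * p - (1 - sigma * dt / 2) * p_prev
                      + (c * dt / dx) ** 2 * lap) / (1 + sigma * dt / 2)
            p_prev, p = p, p_next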

  2. Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.

    Science.gov (United States)

    Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi

    2018-03-24

    In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature that was extracted from aerial images is constructed to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification and the digital surface models of two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.

  3. Different patterns of auditory cortex activation revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Formisano, E.; Pepino, A.; Bracale, M.; Di Salle, F.; Lanfermann, H.; Zanella, F.E.

    1998-01-01

    In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors)

  4. Matching based on biological categories in Orangutans (Pongo abelii) and a Gorilla (Gorilla gorilla gorilla)

    Directory of Open Access Journals (Sweden)

    Jennifer Vonk

    2013-09-01

    Following a series of experiments in which six orangutans and one gorilla discriminated photographs of different animal species in a two-choice touch screen procedure, Vonk & MacDonald (2002) and Vonk & MacDonald (2004) concluded that orangutans, but not the gorilla, seemed to learn intermediate-level category discriminations, such as primates versus non-primates, more rapidly than they learned concrete-level discriminations, such as orangutans versus humans. In the current experiments, four of the same orangutans and the gorilla were presented with delayed matching-to-sample tasks in which they were rewarded for matching photos of different members of the same primate species (golden lion tamarins, Japanese macaques, and proboscis monkeys) or family (gibbons, lemurs) (Experiment 1), and subsequently for matching photos of different species within the following classes: birds, reptiles, insects, mammals, and fish (Experiment 2). Members of both great ape species were rapidly able to match the photos at levels above chance. Orangutans matched images from both category levels spontaneously, whereas the gorilla showed effects of learning to match intermediate-level categories. The results show that biological knowledge is not necessary to form natural categories at both concrete and intermediate levels.

  5. Task-based statistical image reconstruction for high-quality cone-beam CT

    Science.gov (United States)

    Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-11-01

    Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms to encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a

  6. Does functional vision behave differently in low-vision patients with diabetic retinopathy?--A case-matched study.

    Science.gov (United States)

    Ahmadian, Lohrasb; Massof, Robert

    2008-09-01

    A retrospective case-matched study designed to compare patients with diabetic retinopathy (DR) and patients with other ocular diseases, managed in a low-vision clinic, across four different types of functional vision. Reading, mobility, visual motor, and visual information processing were measured in the DR patients (n = 114) and compared with those in patients with other ocular diseases (n = 114) matched in sex, visual acuity (VA), general health status, and age, using the Activity Inventory as a Rasch-scaled measurement tool. Binocular distance visual acuity was categorized as normal (20/12.5-20/25), near normal (20/32-20/63), moderate (20/80-20/160), severe (20/200-20/400), profound (20/500-20/1000), and total blindness (20/1250 to no light perception). Both the Wilcoxon matched-pairs signed rank test and the sign test of matched pairs were used to compare estimated functional vision measures between DR cases and controls. Cases ranged in age from 19 to 90 years (mean age, 67.5), and 59% were women. The mean visual acuity (logMAR scale) was 0.7. Based on the Wilcoxon signed rank test analyses and after adjusting the probability for multiple comparisons, there was no statistically significant difference (P > 0.05) between patients with DR and control subjects in any of the four types of functional vision. Furthermore, diabetic retinopathy patients did not differ (P > 0.05) from their matched counterparts in goal-level vision-related functional ability and total visual ability. Visual impairment in patients with DR appears to be a generic and non-disease-specific outcome that can be explained mainly by the end impact of the disease on the patients' daily lives and not by the unique disease process that results in the visual impairment.

  7. Neural Network Blind Equalization Algorithm Applied in Medical CT Image Restoration

    Directory of Open Access Journals (Sweden)

    Yunshan Sun

    2013-01-01

    A new algorithm for iterative blind image restoration is presented in this paper. The method extends blind equalization from the one-dimensional signal case to images. A neural network blind equalization algorithm is derived and used in conjunction with zigzag coding to restore the original image. As a result, the effect of the PSF can be removed using the proposed algorithm, which helps eliminate intersymbol interference (ISI). To obtain an estimate of the original image, the method optimizes a constant-modulus blind equalization cost function applied to the grayscale CT image using the conjugate gradient method. Analysis of the convergence performance of the algorithm verifies the feasibility of this method theoretically; meanwhile, simulation results and performance evaluations with recent image quality metrics are provided to assess the effectiveness of the proposed method.
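
    A minimal constant-modulus (CM) cost sketch to make the idea concrete: a 1-D LMS-style gradient update on a zigzag-scanned row, rather than the paper's neural-network, conjugate-gradient formulation; the tap count, step size, and input signal are illustrative.

        import numpy as np

        def cma_equalize(x, n_taps=11, mu=1e-4, n_iter=5):
            """Constant-modulus blind equalization of a 1-D (zigzag-scanned) signal.
            Minimizes J = E[(|y|^2 - R2)^2] by stochastic gradient descent."""
            w = np.zeros(n_taps)
            w[n_taps // 2] = 1.0                          # centre-spike initialization
            R2 = np.mean(x ** 4) / np.mean(x ** 2)        # CM dispersion constant
            for _ in range(n_iter):
                for k in range(n_taps, len(x)):
                    xk = x[k - n_taps:k][::-1]            # regressor, most recent sample first
                    y = w @ xk
                    e = (y ** 2 - R2) * y                 # CM error term
                    w -= mu * e * xk                      # gradient step
            return w

        row = np.random.rand(2048)                        # stand-in for a scanned CT row
        w = cma_equalize(row)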

  8. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Directory of Open Access Journals (Sweden)

    Wang Hao

    2010-01-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.

  9. The Interaction Between Schema Matching and Record Matching in Data Integration

    KAUST Repository

    Gu, Binbin; Li, Zhixu; Zhang, Xiangliang; Liu, An; Liu, Guanfeng; Zheng, Kai; Zhao, Lei; Zhou, Xiaofang

    2016-01-01

    Schema Matching (SM) and Record Matching (RM) are two necessary steps in integrating multiple relational tables of different schemas, where SM unifies the schemas and RM detects records referring to the same real-world entity. The two processes have

  10. Influence of different types of compression garments on exercise-induced muscle damage markers after a soccer match.

    Science.gov (United States)

    Marqués-Jiménez, Diego; Calleja-González, Julio; Arratibel-Imaz, Iñaki; Delextrat, Anne; Uriarte, Fernando; Terrados, Nicolás

    2018-01-01

    There is not enough evidence of positive effects of compression therapy on the recovery of soccer players after matches. Therefore, the objective was to evaluate the influence of different types of compression garments in reducing exercise-induced muscle damage (EIMD) during recovery after a friendly soccer match. Eighteen semi-professional soccer players (24 ± 4.07 years, 177 ± 5 cm, 71.8 ± 6.28 kg and 22.73 ± 1.81 BMI) participated in this study. A two-stage crossover design was chosen. Participants acted as controls in one match and were assigned to an experimental group (compression stockings group, full-leg compression group, shorts group) in the other match. Participants in the experimental groups played the match wearing the assigned compression garments, which were also worn for 7 h on each of the 3 days post-match. Results showed a positive, but not significant, effect of compression garments on attenuating the EIMD biomarker response; the inflammatory and perceptual responses suggest that compression may improve physiological and psychological recovery.

  11. Using modern imaging techniques to old HST data: a summary of the ALICE program.

    Science.gov (United States)

    Choquet, Elodie; Soummer, Remi; Perrin, Marshall; Pueyo, Laurent; Hagan, James Brendan; Zimmerman, Neil; Debes, John Henry; Schneider, Glenn; Ren, Bin; Milli, Julien; Wolff, Schuyler; Stark, Chris; Mawet, Dimitri; Golimowski, David A.; Hines, Dean C.; Roberge, Aki; Serabyn, Eugene

    2018-01-01

    Direct imaging of extrasolar systems is a powerful technique to study the physical properties of exoplanetary systems and understand their formation and evolution mechanisms. The detection and characterization of these objects are challenged by their high contrast with their host star. Several observing strategies and post-processing algorithms have been developed for ground-based high-contrast imaging instruments, enabling the discovery of directly imaged and spectrally characterized exoplanets. The Hubble Space Telescope (HST), a pioneer in directly imaging extrasolar systems, has nevertheless often been limited to the detection of bright debris disk systems, with sensitivity limited by the difficulty of implementing an optimal PSF subtraction strategy, which is readily available on ground-based telescopes in pupil-tracking mode. The Archival Legacy Investigations of Circumstellar Environments (ALICE) program is a consistent re-analysis of the 10-year-old coronagraphic archive of HST's NICMOS infrared imager. Using post-processing methods developed for ground-based observations, we used the whole archive to calibrate PSF temporal variations and improve NICMOS's detection limits. We have now delivered ALICE-reprocessed science products for the whole NICMOS archival data back to the community. These science products, as well as the ALICE pipeline, were used to prototype the JWST coronagraphic data reduction pipeline. The ALICE program has enabled the detection of 10 faint debris disk systems never before imaged in the near-infrared and several substellar companion candidates, all of which we are in the process of characterizing through follow-up observations with both ground-based facilities and HST-STIS coronagraphy. In this publication, we provide a summary of the results of the ALICE program, advertise its science products and discuss the prospects of the program.

  12. Visual Localization across Seasons Using Sequence Matching Based on Multi-Feature Combination.

    Science.gov (United States)

    Qiao, Yongliang

    2017-10-25

    Visual localization is widely used in autonomous navigation systems and Advanced Driver Assistance Systems (ADAS). However, vision-based localization under seasonal changes is one of the most challenging topics in computer vision and the intelligent vehicle community. The difficulty of this task is related to the strong appearance changes that occur in scenes due to weather or season changes. In this paper, a place-recognition-based visual localization method is proposed, which realizes localization by identifying previously visited places using a sequence matching method. It operates by matching query image sequences to an image database acquired previously (video acquired during an earlier traverse). In this method, in order to improve matching accuracy, a multi-feature representation is constructed by combining a global GIST descriptor and the local binary feature CSLBP (center-symmetric local binary patterns) to represent the image sequence. Then, a similarity measure based on the chi-square distance is used for effective sequence matching. For experimental evaluation, the relationship between image sequence length and sequence matching performance is studied. To show its effectiveness, the proposed method is tested and evaluated in outdoor environments across four seasons. The results show improved precision-recall performance against the state-of-the-art SeqSLAM algorithm.
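
    A minimal sketch of the matching stage only: chi-square distance between concatenated descriptor vectors plus a straight-line sequence score. The GIST/CSLBP extraction itself is not reproduced; the descriptors below are assumed to be precomputed, and their dimensions are illustrative.

        import numpy as np

        def chi_square(h1, h2, eps=1e-10):
            """Chi-square distance between two non-negative descriptor histograms."""
            return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

        def sequence_score(query_seq, db_seq, offset):
            """Sum of frame-to-frame distances for a candidate alignment offset."""
            return sum(chi_square(q, db_seq[offset + i]) for i, q in enumerate(query_seq))

        def localize(query_seq, db_seq):
            """Return the database offset whose aligned sequence matches best."""
            n = len(db_seq) - len(query_seq) + 1
            scores = [sequence_score(query_seq, db_seq, o) for o in range(n)]
            return int(np.argmin(scores)), min(scores)

        # toy descriptors: 512-D GIST + 256-D CSLBP concatenated (dimensions assumed)
        db = [np.random.rand(768) for _ in range(200)]
        query = db[50:60]                                 # a revisited sub-sequence
        print(localize(query, db))                        # expected: (50, 0.0)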

  13. A difference tracking algorithm based on discrete sine transform

    Science.gov (United States)

    Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun

    2018-04-01

    Target tracking is an important field of computer vision. Template matching tracking algorithms based on squared difference (SSD) matching and normalized correlation coefficient (NCC) matching are very sensitive to gray-level changes in the image. When the brightness or gray level changes, the tracking algorithm is affected by high-frequency information, tracking accuracy is reduced, and the tracked target may be lost. In this paper, a difference tracking algorithm based on the discrete sine transform is proposed to reduce the influence of image gray-level or brightness changes. The algorithm, which combines the discrete sine transform and a difference algorithm, maps the target image into a digital sequence. A Kalman filter predicts the target position. The Hamming distance determines the degree of similarity between the target and the template, and the window closest to the template is taken as the target to be tracked. The tracked target then updates the template. Target tracking is achieved on this basis. The algorithm is tested in this paper. Compared with the SSD and NCC template matching algorithms, the proposed algorithm tracks the target stably when the image gray level or brightness changes, and the tracking speed can meet real-time requirements.
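
    A compact sketch of the transform-and-compare step: a 2-D discrete sine transform of template and candidate window, a sign-binarized coefficient signature, and Hamming distance between signatures. The Kalman prediction and template update described above are omitted, and the window search, patch size, and number of retained coefficients are illustrative.

        import numpy as np
        from scipy.fft import dstn

        def dst_signature(patch, keep=16):
            """Low-frequency 2-D DST coefficients, sign-binarized into a bit vector."""
            coeffs = dstn(patch.astype(float), type=2, norm="ortho")[:keep, :keep]
            return (coeffs > 0).ravel()

        def hamming(a, b):
            return int(np.count_nonzero(a != b))

        def best_window(frame, template, step=2):
            """Slide a template-sized window over the frame; return the most similar one."""
            th, tw = template.shape
            sig_t = dst_signature(template)
            best = (None, np.inf)
            for y in range(0, frame.shape[0] - th, step):
                for x in range(0, frame.shape[1] - tw, step):
                    d = hamming(dst_signature(frame[y:y + th, x:x + tw]), sig_t)
                    if d < best[1]:
                        best = ((y, x), d)
            return best

        frame = np.random.rand(120, 160)
        template = frame[40:72, 60:92].copy()     # 32x32 patch to re-locate
        print(best_window(frame, template))        # expected: ((40, 60), 0)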

  14. [Symbiotic matching between soybean cultivar Luhuang No. 1 and different rhizobia].

    Science.gov (United States)

    Ji, Zhao-jun; Wang, Fei-meng; Wang, Su-ge; Yang, Sheng-hui; Guo, Rui; Tang, Ru-you; Chen, Wen-xin; Chen, Wen-feng

    2014-12-01

    Soybean plants can establish symbiosis and fix nitrogen with different rhizobial species in the genera Sinorhizobium and Bradyrhizobium. Studies on the symbiotic matching between soybean cultivars and different rhizobial species are theoretically and practically important for selecting effective strains used to inoculate the plants and improve soybean production and quality. A total of 27 strains were isolated and purified from a soil sample of the Huanghuaihai area by using the soybean cultivar Luhuang No. 1, a protein-rich cultivar grown in that area, as the trapping plant. These strains were identified as members of Sinorhizobium (18 strains) and Bradyrhizobium (9 strains) based on sequence analysis of the housekeeping gene recA. Two representative strains (Sinorhizobium fredii S6 and Bradyrhizobium sp. S10) were used to inoculate the seeds of Luhuang No. 1, alone or mixed, in pots filled with vermiculite or soil, and in a field trial to investigate their effects on soybean growth, nodulation, nitrogen fixation activity, yield, and the contents of protein and oil in seeds. The results demonstrated that strain S6 showed better effects on growth promotion, seed yield, and seed quality than strain S10. Thus strain S6 was regarded as the effective rhizobium matching soybean Luhuang No. 1, and could be a candidate for a good inoculant for planting Luhuang No. 1 at a large scale in the Huanghuaihai area.

  15. Oral health assessment of pregnant women seen at the PSF Adirbal Corralo in the city of Passo Fundo-RS

    OpenAIRE

    Carlos Alberto Rech; Patrícia Manfio

    2016-01-01

    The present study aims to analyze the oral health conditions and perceptions of the pregnant women who attend the pregnant women's group of the PSF Adirbal Corralo in the city of Passo Fundo-RS. It is a quantitative study with a descriptive approach. For data collection, questionnaires about the pregnant women's oral health were used, seeking to observe how often and how tooth brushing is performed, the dental care received, prenatal dental guidance, and also a clinical examination ...

  16. Using wavefront coding technique as an optical encryption system: reliability analysis and vulnerabilities assessment

    Science.gov (United States)

    Konnik, Mikhail V.

    2012-04-01

    The wavefront coding paradigm can be used not only for compensation of aberrations and depth-of-field improvement but also for optical encryption. When a diffractive optical element (DOE) with a known point spread function (PSF) is placed in the optical path, an optical convolution of the image with the PSF occurs, and an optically encoded image is registered instead of the true image. Decoding of the registered image can be performed using standard digital deconvolution methods. In this class of optical-digital systems, the PSF of the DOE is used as the encryption key. Therefore, the reliability and cryptographic resistance of such an encryption method depend on the size and complexity of the PSF used for optical encoding. This paper gives a preliminary analysis of the reliability and possible vulnerabilities of such an encryption method. Experimental results on a brute-force attack on the optically encrypted images are presented. The reliability of optical coding based on the wavefront coding paradigm is evaluated, and an analysis of possible vulnerabilities is provided.
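
    A minimal sketch of the optical-digital scheme described above: encoding as convolution with the DOE's PSF, decoding as regularized (Wiener-style) deconvolution with the same PSF acting as the key. The random PSF here is a toy stand-in for a real DOE, and the regularization constant is illustrative.

        import numpy as np

        def fft_convolve(img, psf):
            return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))

        def wiener_decode(encoded, psf, k=1e-4):
            H = np.fft.fft2(psf, s=encoded.shape)
            return np.real(np.fft.ifft2(np.fft.fft2(encoded) * np.conj(H)
                                        / (np.abs(H) ** 2 + k)))

        rng = np.random.default_rng(0)
        key_psf = rng.random((32, 32)); key_psf /= key_psf.sum()      # toy DOE PSF (the key)
        wrong_psf = rng.random((32, 32)); wrong_psf /= wrong_psf.sum()

        image = rng.random((256, 256))
        encoded = fft_convolve(image, key_psf)                # "optically" encoded image

        ok = wiener_decode(encoded, key_psf)                  # correct key
        bad = wiener_decode(encoded, wrong_psf)               # mismatched key stays scrambled
        print(np.corrcoef(image.ravel(), ok.ravel())[0, 1],
              np.corrcoef(image.ravel(), bad.ravel())[0, 1])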

  17. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Usually, diagnosis is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting ROMP for OMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to high-resolution output of high quality. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than that of other state-of-the-art schemes.
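
    A hedged sketch of the sparse-coding step, using scikit-learn's plain OMP as a stand-in for the regularized OMP (ROMP) described above; the dictionary is random rather than K-SVD-trained, and the patch size, atom count, and sparsity level are illustrative.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        patch_dim, n_atoms, sparsity = 64, 256, 8          # 8x8 patches, illustrative

        # stand-in dictionary (in the paper this would be K-SVD-trained), column-normalized
        D = rng.standard_normal((patch_dim, n_atoms))
        D /= np.linalg.norm(D, axis=0)

        lr_patch = rng.standard_normal(patch_dim)          # toy low-resolution patch, flattened

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
        omp.fit(D, lr_patch)                                # sparse code of the patch
        alpha = omp.coef_

        reconstruction = D @ alpha                          # patch rebuilt from the sparse code
        print(np.count_nonzero(alpha), np.linalg.norm(lr_patch - reconstruction))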

  18. Imaging characteristics of Zernike and annular polynomial aberrations.

    Science.gov (United States)

    Mahajan, Virendra N; Díaz, José Antonio

    2013-04-01

    The general equations for the point-spread function (PSF) and optical transfer function (OTF) are given for any pupil shape, and they are applied to optical imaging systems with circular and annular pupils. The symmetry properties of the PSF, the real and imaginary parts of the OTF, and the modulation transfer function (MTF) of a system with a circular pupil aberrated by a Zernike circle polynomial aberration are derived. The interferograms and PSFs are illustrated for some typical polynomial aberrations with a sigma value of one wave, and 3D PSFs and MTFs are shown for 0.1 wave. The Strehl ratio is also calculated for polynomial aberrations with a sigma value of 0.1 wave, and shown to be well estimated from the sigma value. The numerical results are compared with the corresponding results in the literature. Because of the same angular dependence of the corresponding annular and circle polynomial aberrations, the symmetry properties of systems with annular pupils aberrated by an annular polynomial aberration are the same as those for a circular pupil aberrated by a corresponding circle polynomial aberration. They are also illustrated with numerical examples.
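
    A minimal numerical companion to the analytic treatment above: the PSF as the squared modulus of the Fourier transform of the aberrated pupil, and the OTF as the Fourier transform of the PSF, normalized to unity at zero frequency. The example uses a primary-coma-like term with a coefficient of one wave (amplitude, not normalized to unit sigma), purely for illustration.

        import numpy as np

        n = 256
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        r, theta = np.hypot(x, y), np.arctan2(y, x)
        pupil = (r <= 1.0).astype(float)                   # clear circular pupil

        # primary-coma-like aberration, one wave of amplitude (illustrative)
        W = 1.0 * (3 * r**3 - 2 * r) * np.cos(theta)

        field = pupil * np.exp(2j * np.pi * W)
        psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
        psf /= psf.sum()

        otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
        otf /= otf[n // 2, n // 2]                         # unity at zero frequency
        mtf = np.abs(otf)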

  19. Applying Image Matching to Video Analysis

    Science.gov (United States)

    2010-09-01

    ... image groups, classified by the background scene, are the flag, the kitchen, the telephone, the bookshelf, the title screen, and the maps. Image counts per group: Kitchen 136, Telephone 3, Bookshelf 81, Title Screen 10, Map 1 24, Map 2 16. ... command line. This implementation of a Bloom filter uses two arbitrary ... with the Bookshelf images. This scene is a much closer shot than the Kitchen scene, so the host occupies much of the background. Algorithms for face ...

  20. EVALUATION OF PENALTY FUNCTIONS FOR SEMI-GLOBAL MATCHING COST AGGREGATION

    Directory of Open Access Journals (Sweden)

    C. Banz

    2012-07-01

    The stereo matching method semi-global matching (SGM) relies on consistency constraints during the cost aggregation, which are enforced by so-called penalty terms. This paper proposes new penalty functions for SGM and evaluates four of them. Due to mutual dependencies, two types of matching cost calculation, census and rank transform, are considered. Performance is measured using original and degraded images exhibiting radiometric changes and noise from the Middlebury benchmark. The two best performing penalty functions are inversely proportional and negatively linear to the intensity gradient and perform equally well, with 6.05% and 5.91% average error, respectively. The experiments also show that adaptive penalty terms are mandatory when dealing with difficult imaging conditions. Consequently, for the highest algorithmic performance in real-world systems, selection of a suitable penalty function and thorough parametrization with respect to the expected image quality are essential.
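
    A compact sketch of the SGM cost-aggregation recurrence along a single (horizontal) scan direction with constant penalties P1 and P2; the adaptive, gradient-dependent penalty functions evaluated in the paper would replace the fixed P2 below, and the cost volume here is a random placeholder rather than census or rank-transform costs.

        import numpy as np

        def aggregate_path(cost, P1=10.0, P2=120.0):
            """SGM aggregation along one left-to-right path.
            cost: (H, W, D) matching-cost volume, e.g. census/rank-transform Hamming costs."""
            H, W, D = cost.shape
            L = np.empty_like(cost)
            L[:, 0] = cost[:, 0]
            for x in range(1, W):
                prev = L[:, x - 1]                              # (H, D)
                prev_min = prev.min(axis=1, keepdims=True)      # min over disparities
                same = prev                                     # same disparity, no penalty
                minus = np.pad(prev[:, :-1], ((0, 0), (1, 0)),
                               constant_values=np.inf) + P1     # disparity - 1
                plus = np.pad(prev[:, 1:], ((0, 0), (0, 1)),
                              constant_values=np.inf) + P1      # disparity + 1
                jump = prev_min + P2                            # larger disparity jumps
                L[:, x] = cost[:, x] + np.minimum(np.minimum(same, minus),
                                                  np.minimum(plus, jump)) - prev_min
            return L

        # toy cost volume: 64x64 image, 16 disparities
        vol = np.random.rand(64, 64, 16).astype(np.float32)
        disparity = aggregate_path(vol).argmin(axis=2)          # winner-takes-all, one path only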