WorldWideScience

Sample records for images detects regular

  1. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
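
    The following toy sketch illustrates the kind of quadratic program described above, under simplifying assumptions: constraints are "detected" by a plain distance threshold on one coordinate axis and enforced as soft quadratic penalties, not the paper's exact formulation. All names and the tolerance are illustrative.

```python
import numpy as np

def regularize_coords(x0, tol=3.0, lam=10.0):
    """Snap nearly equal 1-D coordinates (e.g. left edges of bounding
    boxes) together by minimizing
        sum_i (x_i - x0_i)^2 + lam * sum_{(i,j) detected} (x_i - x_j)^2,
    whose normal equations are (I + lam * L) x = x0, with L the Laplacian
    of the detected-constraint graph."""
    n = len(x0)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
             if abs(x0[i] - x0[j]) < tol]          # naive constraint detection
    L = np.zeros((n, n))
    for i, j in pairs:                             # build the graph Laplacian
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return np.linalg.solve(np.eye(n) + lam * L, np.asarray(x0, float))

print(regularize_coords([10.0, 11.2, 50.0, 50.8]))
# coordinates in each near-aligned pair move toward a common value
```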

  2. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for improving user-created content, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework on a variety of input layouts from different applications, and the results demonstrate that our method has superior performance to the state of the art.

  3. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong; Nan, Liangliang; Yan, Dongming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2015-01-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for improving user-created content, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements.

  4. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
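
    As a rough illustration of the two selection strategies, the sketch below scans a grid of regularization parameters on a generic sparse-recovery stand-in for the damage detection problem (using scikit-learn's Lasso, an assumption), printing the residual norm and solution norm (strategy 1 keeps both small) and the ratio of residual variance to noise variance (the discrepancy principle wants this near 1). The problem sizes, noise level, and grid are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))                      # stand-in sensitivity matrix
x_true = np.zeros(40); x_true[[3, 17]] = [1.0, -0.8]    # sparse "damage" vector
sigma = 0.05
y = A @ x_true + sigma * rng.standard_normal(100)

for alpha in np.logspace(-3, 0, 10):
    x = Lasso(alpha=alpha, fit_intercept=False, max_iter=50000).fit(A, y).coef_
    r = y - A @ x
    print(f"alpha={alpha:.4f}  ||r||={np.linalg.norm(r):.3f}  "
          f"||x||_1={np.abs(x).sum():.3f}  var(r)/sigma^2={r.var() / sigma**2:.2f}")
# strategy 1: keep alphas where residual and solution norms are both small;
# strategy 2: keep alphas where var(r)/sigma^2 is close to 1
```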

  5. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we developed algorithms that also work
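
    A minimal sketch of the regularized least-squares idea on non-blind deblurring, assuming a periodic (circular) blur model so the RLS solution factors per frequency; the fixed lambda stands in for the near-optimal parameter search the thesis describes.

```python
import numpy as np

def rls_deblur(blurred, psf, lam=1e-3):
    """x = argmin ||h * x - y||^2 + lam*||x||^2, solved per frequency."""
    H = np.fft.fft2(psf, s=blurred.shape)
    X = np.conj(H) * np.fft.fft2(blurred) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
psf = np.zeros((64, 64)); psf[:5, :5] = 1.0 / 25.0      # 5x5 box blur kernel
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
print(np.abs(rls_deblur(blurred, psf) - img).mean())    # small restoration error
```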

  6. Phantom experiments using soft-prior regularization EIT for breast cancer imaging.

    Science.gov (United States)

    Murphy, Ethan K; Mahara, Aditya; Wu, Xiaotian; Halter, Ryan J

    2017-06-01

    A soft-prior regularization (SR) electrical impedance tomography (EIT) technique for breast cancer imaging is described, which shows an ability to accurately reconstruct tumor/inclusion conductivity values within a dense breast model investigated using a cylindrical and a breast-shaped tank. The SR-EIT method relies on knowing the spatial location of a suspicious lesion initially detected from a second imaging modality. Standard approaches (using Laplace smoothing and total variation regularization) without prior structural information are unable to accurately reconstruct or detect the tumors. The soft-prior approach represents a very significant improvement over these standard approaches and has the potential to improve conventional imaging techniques, such as automated whole breast ultrasound (AWB-US), by providing electrical property information of suspicious lesions to improve AWB-US's ability to discriminate benign from cancerous lesions. Specifically, the best soft-regularization technique found average absolute tumor/inclusion errors of 0.015 S/m for the cylindrical test and 0.055 S/m and 0.080 S/m for the breast-shaped tank for 1.8 cm and 2.5 cm inclusions, respectively. The standard approaches were statistically unable to distinguish the tumor from the mammary gland tissue. An analysis of false tumors (benign suspicious lesions) provides extra insight into the potential and challenges EIT has for providing clinically relevant information. The ability to obtain accurate conductivity values of a suspicious lesion (>1.8 cm) detected from another modality (e.g. AWB-US) could significantly reduce false positives and result in a clinically important technology.
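
    The soft-prior idea can be shown on a generic linear inverse problem: the quadratic penalty is relaxed inside the region flagged by the second modality, so the reconstruction is free to deviate there. The sketch below is not an EIT forward model; the matrix, weights, and region are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((30, n))               # underdetermined toy forward model
x_true = np.ones(n); x_true[20:25] = 3.0       # "inclusion" at indices 20..24
y = A @ x_true + 0.01 * rng.standard_normal(30)

def reconstruct(w, lam=1.0):
    W = np.diag(w)                             # spatially varying penalty weights
    return np.linalg.solve(A.T @ A + lam * W.T @ W, A.T @ y)

w_uniform = np.ones(n)
w_soft = np.ones(n); w_soft[20:25] = 0.1       # relax the penalty in the prior region
print("uniform:", reconstruct(w_uniform)[20:25].round(2))
print("soft   :", reconstruct(w_soft)[20:25].round(2))
```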

  7. Fractional Regularization Term for Variational Image Registration

    Directory of Open Access Journals (Sweden)

    Rafael Verdú-Monedero

    2009-01-01

    Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, being applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration, which is better suited to some applications and is not possible in the spatial domain. Results with actual 3D images show the validity of this approach.
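
    A sketch of a fractional-order regularizer implemented in the frequency domain, applied here to simple denoising rather than full registration: the order alpha sweeps continuously between diffusion-like (alpha = 1) and curvature-like (alpha = 2) smoothing, which is the gradual transition the paper highlights. Parameters are assumptions.

```python
import numpy as np

def fractional_smooth(img, alpha=1.5, lam=5.0):
    """argmin ||x - y||^2 + lam*||D^alpha x||^2, solved per frequency with
    |omega|^(2*alpha) as the regularizer's transfer function."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    omega2 = (2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    X = np.fft.fft2(img) / (1.0 + lam * omega2 ** alpha)
    return np.real(np.fft.ifft2(X))

noisy = np.random.default_rng(3).random((64, 64))
# alpha interpolates between diffusion-like and curvature-like smoothing
print(fractional_smooth(noisy, alpha=1.0).std(), fractional_smooth(noisy, alpha=2.0).std())
```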

  8. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman

    2017-11-02

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.

  9. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman; Ballal, Tarig; Masood, Mudassir; Al-Naffouri, Tareq Y.

    2017-01-01

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.
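
    A toy rendering of the core idea under stated assumptions: lift the small singular values of the ill-conditioned model matrix by a bounded amount, then solve least squares. The flooring rule below is an illustrative stand-in for the paper's bounded-norm perturbation construction.

```python
import numpy as np

def perturbed_ls(A, y, eps=0.1):
    """Lift small singular values by a bounded amount, then solve least squares;
    equivalent to replacing A with A + dA where ||dA||_2 <= eps * ||A||_2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_lifted = np.maximum(s, eps * s.max())
    return Vt.T @ ((U.T @ y) / s_lifted)

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 40)); A[:, -1] = A[:, 0] + 1e-8   # near-singular matrix
x_true = rng.standard_normal(40)
y = A @ x_true + 1e-3 * rng.standard_normal(40)
print(np.linalg.norm(perturbed_ls(A, y) - x_true))   # bounded, unlike plain inversion
```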

  10. Poisson image reconstruction with Hessian Schatten-norm regularization.

    Science.gov (United States)

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l_p norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
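
    The link between the two proximal maps can be made concrete for p = 1: the prox of the Schatten norm applies the l_1 prox (soft-thresholding) to the singular values. A minimal sketch:

```python
import numpy as np

def prox_schatten_1(M, tau):
    """Prox of tau*||M||_S1 (nuclear norm): soft-threshold the singular values,
    i.e. the l_1 prox applied in the spectral domain."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

M = np.random.default_rng(5).standard_normal((4, 4))
print(np.linalg.svd(prox_schatten_1(M, 0.5), compute_uv=False).round(3))
```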

  11. EIT image reconstruction with four dimensional regularization.

    Science.gov (United States)

    Dai, Tao; Soleimani, Manuchehr; Adler, Andy

    2008-09-01

    Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated especially in high speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance, in comparison to simpler image models.

  12. SAR image regularization with fast approximate discrete minimization.

    Science.gov (United States)

    Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc

    2009-07-01

    Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modelization provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to the joint regularization of the amplitude and interferometric phase in urban area SAR images.

  13. Iterative Method of Regularization with Application of Advanced Technique for Detection of Contours

    International Nuclear Information System (INIS)

    Niedziela, T.; Stankiewicz, A.

    2000-01-01

    This paper proposes a novel iterative method of regularization with application of an advanced technique for detection of contours. To eliminate noise, the properties of convolution of functions are utilized. The method can be accomplished in a simple neural cellular network, which creates the possibility of extracting contours with automatic image recognition equipment.

  14. Image degradation characteristics and restoration based on regularization for diffractive imaging

    Science.gov (United States)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture and lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods are less studied. In this paper, the model of image quality degradation for the diffraction imaging system is first deduced mathematically based on diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, the solving approach for the equation with multi-norm coexistence and multiple regularization parameters (the priors' parameters) is presented. Subsequently, a space-variant PSF image restoration method for the large-aperture diffractive imaging system is proposed, combined with a blocking scheme based on isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and image detail preservation, and produces satisfactory visual quality. This provides a scientific basis for applications and holds promising prospects for future space applications of diffractive membrane imaging technology.

  15. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. As iterations increase, IST usually yields an oversmoothed solution and terminates prematurely. To add back more details, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous one. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. Also, BAIST adopts a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
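
    For reference, here is a bare-bones implementation of the plain IST (ISTA) baseline that BAIST extends; the backtracking step and the adaptive nonlocal regularizer are not included. Problem sizes and parameter values are assumptions.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=300):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage step
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200); x_true[rng.choice(200, 8, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])      # recovered support
```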

  16. Graph Regularized Auto-Encoders for Image Representation.

    Science.gov (United States)

    Yiyi Liao; Yue Wang; Yong Liu

    2017-06-01

    Image representation has been intensively explored in the domain of computer vision for its significant influence on related tasks such as image clustering and classification. It is valuable to learn a low-dimensional representation of an image which preserves its inherent information from the original image space. From the perspective of manifold learning, this is implemented with the local invariance idea to capture the intrinsic low-dimensional manifold embedded in the high-dimensional input space. Inspired by the recent successes of deep architectures, we propose a local invariant deep nonlinear mapping algorithm, called the graph regularized auto-encoder (GAE). With the graph regularization, the proposed method preserves the local connectivity from the original image space to the representation space, while the stacked auto-encoders provide an explicit encoding model for fast inference and powerful expressive capacity for complex modeling. Theoretical analysis shows that the graph regularizer penalizes the weighted Frobenius norm of the Jacobian matrix of the encoder mapping, where the weight matrix captures the local property in the input space. Furthermore, the underlying effects on the hidden representation space are revealed, providing an insightful explanation of the advantage of the proposed method. Finally, the experimental results on both clustering and classification tasks demonstrate the effectiveness of our GAE as well as the correctness of the proposed theoretical analysis, and suggest that GAE is superior to current deep representation learning techniques, including auto-encoder variants and existing local invariance methods.

  17. Iterative reconstruction for x-ray computed tomography using prior-image induced nonlocal regularization.

    Science.gov (United States)

    Zhang, Hua; Huang, Jing; Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2014-09-01

    Repeated X-ray computed tomography (CT) scans are often required in several specific applications, such as perfusion imaging, image-guided needle biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose increases significantly in comparison with that of a conventional CT scan, which has raised major concerns for patients. In this study, to realize radiation dose reduction by reducing the X-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as "PWLS-PINL". Specifically, the PINL regularization utilizes the redundant information in the prior image, and the weighted least-squares term considers a data-dependent variance estimation, aiming to improve current low-dose image quality. Subsequently, a modified iterative successive overrelaxation algorithm is adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method can achieve promising gains over other existing methods in terms of noise reduction, low-contrast object detection, and edge detail preservation.

  18. Wavelet domain image restoration with adaptive edge-preserving regularization.

    Science.gov (United States)

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
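
    A minimal wavelet-domain restoration sketch, assuming the PyWavelets package is available: a single global soft threshold stands in for the paper's adaptive, scale- and orientation-varying regularization.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=3, thresh=0.1):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    rec = pywt.waverec2(new_coeffs, wavelet)
    return rec[:img.shape[0], :img.shape[1]]   # guard against padding growth

rng = np.random.default_rng(7)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0   # blocky, edge-rich image
noisy = clean + 0.2 * rng.standard_normal((64, 64))
print(np.abs(wavelet_denoise(noisy) - clean).mean())
```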

  19. Multiview Hessian regularization for image annotation.

    Science.gov (United States)

    Liu, Weifeng; Tao, Dacheng

    2013-07-01

    The rapid development of computer hardware and Internet technology makes large-scale data-dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smoothes the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, LR biases the classification function toward a constant function, which possibly results in poor generalization. In addition, LR is developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR terms, each of which is obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.

  20. Multilinear Graph Embedding: Representation and Regularization for Images.

    Science.gov (United States)

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  1. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features; manifold regularization alone is then insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments on the PASCAL VOC'07 dataset (20 classes) and the MIR dataset (38 classes), comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.

  2. Manifold regularization for sparse unmixing of hyperspectral images.

    Science.gov (United States)

    Liu, Junmin; Zhang, Chunxia; Zhang, Jiangshe; Li, Huirong; Gao, Yuelin

    2016-01-01

    Recently, sparse unmixing has been successfully applied to spectral mixture analysis of remotely sensed hyperspectral images. Based on the assumption that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance, unmixing of each mixed pixel in the scene amounts to finding an optimal subset of signatures in a very large spectral library, which is cast into the framework of sparse regression. However, traditional sparse regression models, such as collaborative sparse regression, ignore the intrinsic geometric structure in the hyperspectral data. In this paper, we propose a novel model, called manifold regularized collaborative sparse regression, by introducing a manifold regularization to the collaborative sparse regression model. The manifold regularization utilizes a graph Laplacian to incorporate the locally geometrical structure of the hyperspectral data. An algorithm based on the alternating direction method of multipliers has been developed for the manifold regularized collaborative sparse regression model. Experimental results on both simulated and real hyperspectral data sets have demonstrated the effectiveness of our proposed model.

  3. HIERARCHICAL REGULARIZATION OF POLYGONS FOR PHOTOGRAMMETRIC POINT CLOUDS OF OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    L. Xie

    2017-05-01

    Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are topologically defect-laden, free of semantic information, and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is still a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level consolidates the original points by refining their orientation and position using linear priors. The points are then grouped into local segments by forward searching. At the global level, regularities are enforced through a labeling process, which encourages segments to share the same label, where a shared label indicates that the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.

  4. A visibility-based approach using regularization for imaging-spectroscopy in solar X-ray astronomy

    Energy Technology Data Exchange (ETDEWEB)

    Prato, M; Massone, A M; Piana, M [CNR - INFM LAMIA, Via Dodecaneso 33 1-16146 Genova (Italy); Emslie, A G [Department of Physics, Oklahoma State University, Stillwater, OK 74078 (United States); Hurford, G J [Space Sciences Laboratory, University of California at Berkeley, 8 Gauss Way, Berkeley, CA 94720-7450 (United States); Kontar, E P [Department of Physics and Astronomy, The University, Glasgow G12 8QQ, Scotland (United Kingdom); Schwartz, R A [CUA - Catholic University and LSSP at NASA Goddard Space Flight Center, code 671.1 Greenbelt, MD 20771 (United States)], E-mail: massone@ge.infm.it

    2008-11-01

    The Reuven Ramaty High-Energy Solar Spectroscopic Imager (RHESSI) is a nine-collimator satellite detecting X-rays and γ-rays emitted by the Sun during flares. As the spacecraft rotates, imaging information is encoded as rapid time-variations of the detected flux. We recently proposed a method for the construction of electron flux maps at different electron energies from sets of count visibilities (i.e., direct, calibrated measurements of specific Fourier components of the source spatial structure) measured by RHESSI. The method requires the application of regularized inversion for the synthesis of electron visibility spectra and of imaging techniques for the reconstruction of two-dimensional electron flux maps. The method, already tested on real events recorded by RHESSI, is validated in this paper by means of simulated realistic data.

  5. EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.

    Science.gov (United States)

    Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos

    2015-01-01

    Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi and Single Objective optimization methods. A significant performance enhancement from traditional techniques can be inferred from the results.

  6. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.

  7. Image super-resolution reconstruction based on regularization technique and guided filter

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Gu, Pei-ting; Liu, Pei-zhong; Luo, Yan-min

    2017-06-01

    In order to improve the accuracy of sparse representation coefficients and the quality of reconstructed images, an improved image super-resolution algorithm based on sparse representation is presented. In the sparse coding stage, the autoregressive (AR) regularization and the non-local (NL) similarity regularization are introduced to improve the sparse coding objective function. A group of AR models which describe the image local structures are pre-learned from the training samples, and one or several suitable AR models can be adaptively selected for each image patch to regularize the solution space. Then, the image non-local redundancy is obtained by the NL similarity regularization to preserve edges. In the process of computing the sparse representation coefficients, the feature-sign search algorithm is utilized instead of the conventional orthogonal matching pursuit algorithm to improve the accuracy of the sparse coefficients. To restore image details further, a global error compensation model based on a weighted guided filter is proposed to realize error compensation for the reconstructed images. Experimental results demonstrate that compared with Bicubic, L1SR, SISR, GR, ANR, NE + LS, NE + NNLS, NE + LLE and A + (16 atoms) methods, the proposed approach achieves remarkable improvements in peak signal-to-noise ratio, structural similarity and subjective visual perception.

  8. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

    The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering both total variation penalty terms on the image and on an idealized sinogram to be reconstructed from a given Poisson distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better on reconstructing images with thin structures.
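
    To make the TV penalty concrete, here is a small smoothed-TV denoiser driven by plain gradient descent; the paper's actual method couples TV terms on both the image and the sinogram and solves the problem with the split Bregman algorithm, which this sketch does not attempt. All parameters are assumptions.

```python
import numpy as np

def tv_denoise(y, lam=0.1, eps=1e-2, step=0.1, n_iter=200):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(|grad x|^2 + eps)."""
    x = y.copy()
    for _ in range(n_iter):
        dx = np.diff(x, axis=1, append=x[:, -1:])      # forward differences
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x -= step * ((x - y) - lam * div)              # energy gradient step
    return x

rng = np.random.default_rng(8)
clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0
noisy = clean + 0.3 * rng.standard_normal((64, 64))
print(np.abs(tv_denoise(noisy) - clean).mean())        # below the noise level
```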

  9. Analysis of sea-surface radar signatures by means of wavelet-based edge detection and detection of regularities

    Energy Technology Data Exchange (ETDEWEB)

    Wolff, U. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Gewaesserphysik]

    2000-07-01

    The derivation and implementation of an algorithm for edge detection in images and for the detection of the Lipschitz regularity in edge points are described. The method is based on the use of the wavelet transform for edge detection at different resolutions. The Lipschitz regularity is a measure that characterizes the edges. The derivation is first presented in one dimension: the approach of Mallat is formulated consistently and proved. Subsequently, the two-dimensional case is addressed, for which the derivation, as well as the description of the algorithm, is analogous. The algorithm is applied to detect edges in nautical radar images collected at the island of Sylt. The edges discernible in the images and the Lipschitz values provide information about the position and nature of spatial variations in the depth of the seafloor. By comparing images from different periods of measurement, temporal changes in the bottom structures can be localized at different resolutions and interpreted. The method is suited to the monitoring of coastal areas and is an inexpensive way to observe long-term changes in the seafloor character. Thus, the results of this technique may be used by the authorities responsible for coastal protection to decide whether measures should be taken.

  10. Bayesian regularization of diffusion tensor images

    DEFF Research Database (Denmark)

    Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif

    2007-01-01

    Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing…

  11. Handheld microwave bomb-detecting imaging system

    Science.gov (United States)

    Gorwara, Ashok; Molchanov, Pavlo

    2017-05-01

    The proposed novel imaging technique will provide all-weather, high-resolution imaging and recognition capability for RF/microwave signals, with good penetration through highly scattering media: fog, snow, dust, smoke, even foliage, camouflage, walls and ground. Image resolution in the proposed imaging system is not limited by diffraction and is determined by the processor and sampling frequency. The proposed imaging system can simultaneously cover a wide field of view, detect multiple targets, and can be multi-frequency and multi-function. The directional antennas in the imaging system can be closely positioned and installed in a cell-phone-sized handheld device, on small aircraft, or distributed around a protected border or object. The non-scanning monopulse system allows a dramatic decrease in transmitted power and at the same time provides increased imaging range by integrating 2-3 orders of magnitude more signals than regular scanning imaging systems.

  12. An algorithm for total variation regularized photoacoustic imaging

    DEFF Research Database (Denmark)

    Dong, Yiqiu; Görner, Torsten; Kunis, Stefan

    2014-01-01

    Recovery of image data from photoacoustic measurements asks for the inversion of the spherical mean value operator. In contrast to direct inversion methods for specific geometries, we consider a semismooth Newton scheme to solve a total variation regularized least squares problem. During the iteration, each matrix vector multiplication is realized in an efficient way using a recently proposed spectral discretization of the spherical mean value operator. All theoretical results are illustrated by numerical experiments.

  13. 4D PET iterative deconvolution with spatiotemporal regularization for quantitative dynamic PET imaging.

    Science.gov (United States)

    Reilhac, Anthonin; Charil, Arnaud; Wimberley, Catriona; Angelis, Georgios; Hamze, Hasar; Callaghan, Paul; Garcia, Marie-Paule; Boisson, Frederic; Ryder, Will; Meikle, Steven R; Gregoire, Marie-Claude

    2015-09-01

    Quantitative measurements in dynamic PET imaging are usually limited by poor counting statistics, particularly in short dynamic frames, and by the low spatial resolution of the detection system, resulting in partial volume effects (PVEs). In this work, we present a fast and easy-to-implement method for the restoration of dynamic PET images that have suffered from both PVE and noise degradation. It is based on a weighted least squares iterative deconvolution approach of the dynamic PET image with spatial and temporal regularization. Using simulated dynamic [11C]raclopride PET data with controlled biological variations in the striata between scans, we showed that the restoration method provides images which exhibit less noise and better contrast between emitting structures than the original images. In addition, the method is able to recover the true time activity curve in the striata region with an error below 3%, while it was underestimated by more than 20% without correction. As a result, the method improves the accuracy and reduces the variability of the kinetic parameter estimates calculated from the corrected images. More importantly, it increases the accuracy (from less than 66% to more than 95%) of measured biological variations as well as their statistical detectability.
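
    A toy 1-D version of the approach's ingredients: a least-squares deconvolution term per frame plus a quadratic temporal-smoothness penalty across frames, solved by gradient descent. The PSF, weights, and synthetic series are assumptions; the paper's spatial regularization and noise weighting are omitted.

```python
import numpy as np

def blur(x, psf):
    return np.convolve(x, psf, mode="same")

def deconvolve_dynamic(frames, psf, lam_t=0.5, step=0.2, n_iter=300):
    """Gradient descent on sum_t 0.5*||K x_t - y_t||^2
    + 0.5*lam_t*sum_t ||x_t - x_(t-1)||^2 (psf assumed symmetric)."""
    x = frames.copy()
    for _ in range(n_iter):
        grad = np.array([blur(blur(xt, psf) - yt, psf)
                         for xt, yt in zip(x, frames)])
        grad[1:] += lam_t * (x[1:] - x[:-1])           # temporal smoothness term
        grad[:-1] += lam_t * (x[:-1] - x[1:])
        x -= step * grad
    return x

rng = np.random.default_rng(9)
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])          # symmetric 1-D PSF
amp = np.linspace(0.5, 1.5, 8)                         # slowly varying activity
truth = np.array([a * np.exp(-0.5 * ((np.arange(64) - 32) / 3.0) ** 2) for a in amp])
noisy = np.array([blur(f, psf) + 0.01 * rng.standard_normal(64) for f in truth])
print(np.abs(deconvolve_dynamic(noisy, psf) - truth).mean())
```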

  14. Regularization design for high-quality cone-beam CT of intracranial hemorrhage using statistical reconstruction

    Science.gov (United States)

    Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.

    2016-03-01

    Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squared (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed by traditional image regularization exhibits nonuniform spatial resolution and noise due to interaction between the statistical weights and regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges. The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test-bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity in spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and best overall image quality. The proposed regularization provides a valuable means to achieve uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging.

  15. Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing.

    Science.gov (United States)

    Elmoataz, Abderrahim; Lezoray, Olivier; Bougleux, Sébastien

    2008-07-01

    We introduce a nonlocal discrete regularization framework on weighted graphs of arbitrary topology for image and manifold processing. The approach considers the problem as a variational one, which consists of minimizing a weighted sum of two energy terms: a regularization term that uses a discrete weighted p-Dirichlet energy, and an approximation term. This is the discrete analogue of recent continuous Euclidean nonlocal regularization functionals. The proposed formulation leads to a family of simple and fast nonlinear processing methods based on the weighted p-Laplace operator, parameterized by the degree p of regularity, the graph structure and the graph weight function. These discrete processing methods provide a graph-based version of recently proposed semi-local or nonlocal processing methods used in image and mesh processing, such as the bilateral filter, the TV digital filter or the nonlocal means filter. It works with equal ease on regular 2-D and 3-D images, manifolds or any data. We illustrate the abilities of the approach by applying it to various types of images, meshes, manifolds, and data represented as graphs.
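
    A tiny instance of this framework for p = 2 on a 4-neighbor image grid, with intensity-based edge weights loosely imitating a bilateral-type weight function; the graph construction and all parameters are assumptions.

```python
import numpy as np

def graph_denoise(y, lam=0.5, sigma=0.2, step=0.05, n_iter=200):
    """Gradient descent on (1/(2*lam))*||x - y||^2
    + 0.5 * sum_edges w_ij (x_i - x_j)^2, with edge weights from the noisy image."""
    x = y.copy()
    wh = np.exp(-((y[:, 1:] - y[:, :-1]) / sigma) ** 2)   # horizontal edge weights
    wv = np.exp(-((y[1:, :] - y[:-1, :]) / sigma) ** 2)   # vertical edge weights
    for _ in range(n_iter):
        g = (x - y) / lam
        dh = wh * (x[:, 1:] - x[:, :-1])
        dv = wv * (x[1:, :] - x[:-1, :])
        g[:, 1:] += dh; g[:, :-1] -= dh                   # weighted Laplacian terms
        g[1:, :] += dv; g[:-1, :] -= dv
        x -= step * g
    return x

rng = np.random.default_rng(10)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
print(np.abs(graph_denoise(clean + 0.1 * rng.standard_normal((32, 32))) - clean).mean())
```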

  16. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    Science.gov (United States)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moments in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registration model to match these feature points. The feature registration scheme can not only search the isotropic feature points to link the image sequences but also reduce erroneous matching pairs. After that, the target centroid is detected by regular moments. Consequently, a cost function based on the correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.

  17. Brain MRI Tumor Detection using Active Contour Model and Local Image Fitting Energy

    Science.gov (United States)

    Nabizadeh, Nooshin; John, Nigel

    2014-03-01

    Automatic abnormality detection in Magnetic Resonance Imaging (MRI) is an important issue in many diagnostic and therapeutic applications. Here an automatic brain tumor detection method is introduced that uses T1-weighted images and K. Zhang et al.'s active contour model driven by local image fitting (LIF) energy. Local image fitting energy obtains the local image information, which enables the algorithm to segment images with intensity inhomogeneities. The advantage of this method is that the LIF energy functional has less computational complexity than the local binary fitting (LBF) energy functional; moreover, it maintains the sub-pixel accuracy and boundary regularization properties. In Zhang's algorithm, a new level set method based on Gaussian filtering is used to implement the variational formulation, which is not only robust in preventing the energy functional from being trapped in a local minimum, but also effective in keeping the level set function regular. Experiments show that the proposed method achieves highly accurate brain tumor segmentation results.

  18. Regularized image denoising based on spectral gradient optimization

    International Nuclear Information System (INIS)

    Lukić, Tibor; Lindblad, Joakim; Sladoje, Nataša

    2011-01-01

    Image restoration methods, such as denoising, deblurring, inpainting, etc, are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising which combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by the potential function used. Many potential functions are suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods for efficient minimization of regularized energy functions exist. Most are only applicable to particular choices of potential functions, however. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach; spectral gradient methods put very weak restrictions on the used potential function. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance.
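
    A bare-bones spectral gradient (Barzilai-Borwein step) minimizer for a Huber-regularized denoising energy, illustrating why the approach places so few restrictions on the potential: only its derivative is needed. The parameters below are assumptions.

```python
import numpy as np

def huber_prime(t, delta=0.05):
    return np.clip(t / delta, -1.0, 1.0)       # scaled derivative of the Huber potential

def denoise_bb(y, lam=0.3, n_iter=100):
    def grad(x):                               # gradient of the full energy
        g = x - y
        dx = np.diff(x, axis=1); dy = np.diff(x, axis=0)
        gx, gy = huber_prime(dx), huber_prime(dy)
        g[:, :-1] -= lam * gx; g[:, 1:] += lam * gx
        g[:-1, :] -= lam * gy; g[1:, :] += lam * gy
        return g
    x = y.copy(); g = grad(x); step = 1e-2
    for _ in range(n_iter):
        x_new = x - step * g
        g_new = grad(x_new)
        s, z = (x_new - x).ravel(), (g_new - g).ravel()
        step = s @ s / max(s @ z, 1e-12)       # BB1 spectral step length
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(11)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
print(np.abs(denoise_bb(clean + 0.1 * rng.standard_normal((32, 32))) - clean).mean())
```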

  19. Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhao, Debin; Zhai, Guangtao; Gao, Wen

    2014-04-01

    Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by a Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by an implicit kernel is learned, which simultaneously minimizes the least square error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not be recovered at previous scales. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms.

  20. Gravitational lensing and ghost images in the regular Bardeen no-horizon spacetimes

    International Nuclear Information System (INIS)

    Schee, Jan; Stuchlík, Zdeněk

    2015-01-01

    We study deflection of light rays and gravitational lensing in the regular Bardeen no-horizon spacetimes. Flatness of these spacetimes in the central region implies the existence of interesting optical effects related to photons crossing the gravitational field of the no-horizon spacetimes with low impact parameters. These effects occur due to the existence of a critical impact parameter giving maximal deflection of light rays in the Bardeen no-horizon spacetimes. We give the critical impact parameter in dependence on the specific charge of the spacetimes, and discuss 'ghost' direct and indirect images of Keplerian discs, generated by photons with low impact parameters. The ghost direct images can occur only for large inclination angles of distant observers, while ghost indirect images can occur also for small inclination angles. We determine the range of the frequency shift of photons generating the ghost images and determine the distribution of the frequency shift across these images. We compare them to those of the standard direct images of the Keplerian discs. The difference of the ranges of the frequency shift on the ghost and direct images could serve as a quantitative measure of the Bardeen no-horizon spacetimes. The regions of the Keplerian discs giving the ghost images are determined in dependence on the specific charge of the no-horizon spacetimes. For comparison, we construct direct and indirect (ordinary and ghost) images of Keplerian discs around Reissner-Nordström naked singularities, demonstrating a clear qualitative difference to the ghost direct images in the regular Bardeen no-horizon spacetimes. The optical effects related to the low impact parameter photons thus give a clear signature of the regular Bardeen no-horizon spacetimes, as no similar phenomena could occur in black hole or naked singularity spacetimes.

  1. Multiview vector-valued manifold regularization for multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Xu, Chang; Xu, Chao; Liu, Hong; Wen, Yonggang

    2013-05-01

    In computer vision, image datasets used for classification are naturally associated with multiple labels and composed of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently available tools ignore either the label relationship or the view complementarity. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV(3)MR) to integrate multiple features. MV(3)MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging, but popular, datasets, PASCAL VOC'07 and MIR Flickr, and validate the effectiveness of the proposed MV(3)MR for image classification.

  2. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.

    Science.gov (United States)

    Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong

    2015-12-01

    Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which increases the data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on this cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that, during learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to demonstrate the efficiency of the new algorithms. Finally, the clustering accuracies of different algorithms are investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
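
    For reference, the classical graph-regularized NMF multiplicative updates that this family of algorithms extends can be sketched as follows. This is the standard GNMF baseline, not the paper's improved cost function; the regularization weight lam and the sample affinity matrix A are assumed inputs.

        import numpy as np

        def gnmf(X, A, k, lam=1.0, n_iter=200, eps=1e-9):
            """Graph-regularized NMF via multiplicative updates.

            Factorizes nonnegative data X (features x samples) as X ~ U @ V.T
            while penalizing lam * tr(V.T @ L @ V), where L = D - A is the
            graph Laplacian of the symmetric nonnegative affinity matrix A.
            """
            m, n = X.shape
            rng = np.random.default_rng(0)
            U = rng.random((m, k))
            V = rng.random((n, k))
            D = np.diag(A.sum(axis=1))
            for _ in range(n_iter):
                # Multiplicative updates keep U, V nonnegative throughout.
                U *= (X @ V) / (U @ (V.T @ V) + eps)
                V *= (X.T @ U + lam * (A @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
            return U, V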

  3. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    Science.gov (United States)

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreement between the theoretical predictions and the Monte Carlo results is observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with conventional static PET reconstruction.
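
    The pixel-wise Patlak step of the indirect route admits a compact sketch: after an assumed equilibration time t*, the transformed TAC is fit with a straight line whose slope and intercept are the Patlak parameters. Variable names and the cutoff t_star are illustrative, not taken from the paper.

        import numpy as np

        def patlak_fit(tac, cp, t, t_star=20.0):
            """Pixel-wise Patlak analysis of a time activity curve (TAC).

            After an equilibration time t*, the Patlak plot
                C(t)/Cp(t)  vs.  (integral_0^t Cp dtau) / Cp(t)
            is linear; its slope Ki is the net influx rate and its intercept
            V0 the blood volume fraction. `tac` is the pixel TAC and `cp` the
            plasma input function, both sampled at times `t`.
            """
            # Cumulative integral of the input function (trapezoid rule).
            cum_cp = np.concatenate(
                ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
            keep = (t >= t_star) & (cp > 0)
            x = cum_cp[keep] / cp[keep]
            y = tac[keep] / cp[keep]
            Ki, V0 = np.polyfit(x, y, 1)      # slope, intercept
            return Ki, V0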

  4. Sparse regularization for EIT reconstruction incorporating structural information derived from medical imaging.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-06-01

    Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. In this article, a regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint promoting sparsity at the group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and that the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure-based regularization has the potential to balance structural a priori information with data-driven reconstruction. It is robust to noise, reduces artifacts and produces images that reflect anatomy and are thus easier for physicians to interpret.
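
    Group-level sparsity of this kind is commonly enforced through a group-lasso penalty; a minimal sketch of its proximal operator is given below, with groups assumed to come from the clustering step. This is a generic building block, not the authors' solver.

        import numpy as np

        def prox_group_lasso(x, labels, lam):
            """Proximal operator of lam * sum_g ||x_g||_2 (group soft-threshold).

            `labels[i]` assigns element i of the conductivity update to a
            group, e.g. a cluster from the modified k-means step; whole groups
            are shrunk toward zero, giving sparsity at the group level.
            """
            out = np.zeros_like(x)
            for g in np.unique(labels):
                sel = labels == g
                norm = np.linalg.norm(x[sel])
                if norm > lam:                       # group survives, shrunk
                    out[sel] = (1.0 - lam / norm) * x[sel]
            return out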

  5. Bayesian image reconstruction for improving detection performance of muon tomography.

    Science.gov (United States)

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
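
    The overall structure of such a scheme can be sketched generically: each iteration applies an unregularized ML update and then a shrinkage function determined by the prior. The soft threshold below is a familiar stand-in for the paper's inverse quadratic and inverse cubic shrinkage functions, and ml_update is an assumed callable.

        import numpy as np

        def iterative_shrinkage(x0, ml_update, lam, n_iter=50):
            """Generic MAP-by-shrinkage loop.

            `ml_update(x)` performs one unregularized maximum-likelihood
            update of the scattering-density image; the result is then shrunk.
            Soft thresholding stands in for the paper's prior-specific
            shrinkage functions (an illustrative substitution).
            """
            x = x0.copy()
            for _ in range(n_iter):
                x_ml = ml_update(x)                                       # ML step
                x = np.sign(x_ml) * np.maximum(np.abs(x_ml) - lam, 0.0)   # shrink
            return x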

  6. A general framework for regularized, similarity-based image restoration.

    Science.gov (United States)

    Kheradmand, Amin; Milanfar, Peyman

    2014-12-01

    Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in the kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function which consists of a new data fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry-preserving matrix balancing. This results in desirable spectral properties for the normalized Laplacian, such as being symmetric, positive semidefinite, and returning the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations: in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.

  7. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    Science.gov (United States)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more flexible sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified that the proposed reconstruction method shows promising capabilities compared with conventional regularization.

  8. Functional dissociation between regularity encoding and deviance detection along the auditory hierarchy.

    Science.gov (United States)

    Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles

    2016-02-01

    Auditory deviance detection based on regularity encoding appears to be one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation from the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not to the degree of deviation, whereas the MMN amplitude increased as a function of deviance magnitude. These findings indicate that acoustic regularity can be encoded at the level of the middle-latency response but that it takes a higher step in the auditory hierarchy for deviance magnitude to be encoded, thus providing a functional dissociation between regularity encoding and deviance detection along the auditory hierarchy. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. Image segmentation with a novel regularized composite shape prior based on surrogate study

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: Incorporating training into image segmentation is a good approach to achieving additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement in image segmentation accuracy when compared with the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization which achieves superior segmentation performance compared with typical benchmark schemes.

  11. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    Science.gov (United States)

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques that do not use synchrotron radiation face a common problem: the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of phase contrast fringes while reducing the noise amplified during Fourier regularization.
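
    Of the three methods compared, Wiener filtering is the simplest to sketch; a minimal frequency-domain implementation is shown below, assuming a centered blur kernel psf and an illustrative scalar SNR.

        import numpy as np

        def wiener_deconvolve(blurred, psf, snr=100.0):
            """Frequency-domain Wiener deconvolution.

            X_hat = conj(H) * Y / (|H|^2 + 1/SNR), where H is the transfer
            function of the source-size blur kernel `psf` (assumed centered);
            the constant 1/SNR stands in for the noise-to-signal power ratio.
            """
            H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
            Y = np.fft.fft2(blurred)
            X = np.conj(H) * Y / (np.abs(H) ** 2 + 1.0 / snr)
            return np.real(np.fft.ifft2(X))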

  12. An algorithmic framework for Mumford–Shah regularization of inverse problems in imaging

    International Nuclear Information System (INIS)

    Hohm, Kilian; Weinmann, Andreas; Storath, Martin

    2015-01-01

    The Mumford–Shah model is a very powerful variational approach for edge-preserving regularization of image reconstruction processes. However, it is algorithmically challenging because one has to deal with a non-smooth and non-convex functional. In this paper, we propose a new efficient algorithmic framework for Mumford–Shah regularization of inverse problems in imaging. It is based on a splitting into specific subproblems that can be solved exactly. We derive fast solvers for the subproblems which are key to an efficient overall algorithm. Our method requires a priori knowledge of neither the gray or color levels nor the shape of the discontinuity set. We demonstrate the wide applicability of the method for different modalities. In particular, we consider reconstruction from Radon data, inpainting, and deconvolution. Our method can be easily adapted to many further imaging setups. The relevant condition is that the proximal mapping of the data fidelity can be evaluated within a reasonable time. In other words, it can be used whenever classical Tikhonov regularization is possible. (paper)

  13. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    Science.gov (United States)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or 'global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches, and a dictionary corresponds to a collection of functions or 'atoms' describing the slowness in each patch. These functions could, for example, be wavelets. The patch regularization is incorporated into the global slowness image: the global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high-resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm which is repeated until convergence is achieved: 1) from travel times, find the global slowness image with a minimum-energy constraint on the pixel variance relative to a reference; 2) find the patch-level solutions that fit the global estimate as sparse linear combinations of dictionary atoms; 3) update the reference as the weighted average of the patch-level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous work in seismics, where dictionaries of wavelet functions regularized inversion. We further exploit the redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely but irregularly sampled synthetic seismic images.
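
    Step 2, sparse coding of patches over a learned dictionary, resembles standard patch-based dictionary learning; a minimal sketch using scikit-learn (recent versions; parameters illustrative, not the authors' seismic code) is given below, with overlapping-patch averaging standing in for the weighted reference update.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import (extract_patches_2d,
                                                      reconstruct_from_patches_2d)

        def patch_regularize(slowness, patch_size=(8, 8), n_atoms=64, alpha=1.0):
            """Fit a dictionary to patches of the global slowness image and
            rebuild the image from their sparse approximations (the patch-level
            step; averaging overlapping patches plays the role of the weighted
            reference update)."""
            patches = extract_patches_2d(slowness, patch_size)
            shape = patches.shape
            flat = patches.reshape(shape[0], -1)
            mean = flat.mean(axis=1, keepdims=True)          # remove patch means
            dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                               max_iter=100).fit(flat - mean)
            codes = dico.transform(flat - mean)              # sparse coefficients
            approx = codes @ dico.components_ + mean         # patch-level solutions
            return reconstruct_from_patches_2d(approx.reshape(shape), slowness.shape)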

  14. Improving Conductivity Image Quality Using Block Matrix-based Multiple Regularization (BMMR) Technique in EIT: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-06-01

    A Block Matrix based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (JTJ) has been partitioned into several sub-block matrices, and the highest eigenvalue of each sub-block matrix has been chosen as the regularization parameter for the nodes contained in that sub-block. Simulated boundary data are generated for a circular domain with circular inhomogeneity, and the conductivity images are reconstructed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images are reconstructed with the BMMR technique and the results are compared with the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and solution error and improves the conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170, J Electr Bioimp, vol. 2, pp. 33-47, 2011.
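
    The block-wise choice of regularization parameter can be sketched directly from the description above; a minimal one-step regularized solve is shown below, where block_size and the Gauss-Newton framing are assumptions made for illustration.

        import numpy as np

        def bmmr_solve(J, delta_v, block_size):
            """One regularized update step with block-wise Tikhonov parameters
            in the spirit of BMMR.

            The response matrix JtJ is partitioned into square diagonal
            sub-blocks; the largest eigenvalue of each sub-block is used as
            the regularization parameter for the nodes in that block.
            """
            JtJ = J.T @ J
            n = JtJ.shape[0]
            lam = np.empty(n)
            for start in range(0, n, block_size):
                stop = min(start + block_size, n)
                block = JtJ[start:stop, start:stop]
                lam[start:stop] = np.linalg.eigvalsh(block).max()  # per-block parameter
            return np.linalg.solve(JtJ + np.diag(lam), J.T @ delta_v)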

  15. A regularized relaxed ordered subset list-mode reconstruction algorithm and its preliminary application to undersampling PET imaging

    International Nuclear Information System (INIS)

    Cao, Xiaoqing; Xie, Qingguo; Xiao, Peng

    2015-01-01

    List mode format is commonly used in modern positron emission tomography (PET) for image reconstruction due to certain special advantages. In this work, we propose a list mode based regularized relaxed ordered subset (LMROS) algorithm for static PET imaging. LMROS is able to work with regularization terms that can be formulated as twice differentiable convex functions. Such versatility makes LMROS a convenient and general framework for realizing different regularized list mode reconstruction methods. LMROS was applied to two simulated undersampling PET imaging scenarios to verify its effectiveness. Convex quadratic function, total variation constraint, non-local means and dictionary learning based regularization methods were successfully realized for different cases. The results showed that the LMROS algorithm was effective and that some regularization methods greatly reduced the distortions and artifacts caused by undersampling. (paper)

  16. SU-E-I-93: Improved Imaging Quality for Multislice Helical CT Via Sparsity Regularized Iterative Image Reconstruction Method Based On Tensor Framelet

    International Nuclear Information System (INIS)

    Nam, H; Guo, M; Lee, K; Li, R; Xing, L; Gao, H

    2014-01-01

    Purpose: Inspired by compressive sensing, sparsity regularized iterative reconstruction methods have been extensively studied. However, their utility for multislice helical 4D CT for radiotherapy with respect to imaging quality, dose, and time has not been thoroughly addressed. As the beginning of such an investigation, this work carries out an initial comparison of reconstructed imaging quality between a sparsity regularized iterative method and analytic methods through static phantom studies using a state-of-the-art 128-channel multi-slice Siemens helical CT scanner. Methods: In our iterative method, the tensor framelet (TF) is chosen as the regularization for its superior performance over total variation regularization in terms of reduced piecewise-constant artifacts and improved imaging quality, as demonstrated in our prior work. On the other hand, X-ray transforms and their adjoints are computed on-the-fly through a GPU implementation using our previously developed fast parallel algorithms with O(1) complexity per computing thread. For comparison, both FDK (an approximate analytic method) and the Katsevich algorithm (an exact analytic method) are used for multislice helical CT image reconstruction. Results: The phantom experimental data with different imaging doses were acquired using a state-of-the-art 128-channel multi-slice Siemens helical CT scanner. The reconstructed image quality was compared between the TF-based iterative method, FDK and the Katsevich algorithm, with quantitative analysis characterizing the signal-to-noise ratio, image contrast, and spatial resolution of high-contrast and low-contrast objects. Conclusion: The experimental results suggest that our tensor framelet regularized iterative reconstruction algorithm improves the helical CT imaging quality over the FDK and Katsevich algorithms for the static phantom studies performed.

  17. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Microarray studies enable us to obtain hundreds of thousands of expressions of genes or genotypes at once, making them an indispensable technology for genome research. The first step is the analysis of scanned microarray images, which is the most important procedure for obtaining biologically reliable data. Currently, most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software has become important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing of a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than commercial tools.

  18. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of these methods use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to positron emission tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use the preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and that the convergence is insensitive to the values of the regularization and reconstruction parameters.

  19. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    Science.gov (United States)

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has remained a chronic problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of this paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.

  20. Real time QRS complex detection using DFA and regular grammar.

    Science.gov (United States)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed Hedi

    2017-02-28

    Detection of the QRS complex (the sequence of Q, R, and S peaks) is a crucial procedure in electrocardiogram (ECG) processing and analysis. We propose a novel approach for QRS complex detection based on deterministic finite automata with the addition of some constraints. This paper confirms that regular grammar is useful for extracting QRS complexes and interpreting normalized ECG signals. A QRS is assimilated to a pair of adjacent peaks which meet certain criteria of standard deviation and duration. The proposed method was applied to several kinds of ECG signals from the standard MIT-BIH arrhythmia database. A total of 48 signals were used. For an input signal, several parameters were determined, such as QRS durations, RR distances, and the peaks' amplitudes. The σRR and σQRS parameters were added to quantify the regularity of RR distances and QRS durations, respectively. The sensitivity rate of the suggested method was 99.74% and the specificity rate was 99.86%. Moreover, variations in the sensitivity and specificity rates as a function of the signal-to-noise ratio were evaluated. Regular grammar with the addition of some constraints and deterministic automata proved functional for ECG signal diagnosis. Compared to statistical methods, the use of grammar provides satisfactory and competitive results and indices that are comparable to or even better than those cited in the literature.
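
    A stripped-down version of the peak-pair idea (without the automaton or grammar machinery) can be sketched as follows; the thresholds are illustrative choices, not the paper's tuned criteria.

        import numpy as np
        from scipy.signal import find_peaks

        def detect_qrs(ecg, fs, max_qrs_ms=120, min_prominence=0.4):
            """Assimilate a QRS to a pair of adjacent peaks meeting duration
            and amplitude-deviation criteria on a normalized ECG.

            Returns sample indices of candidate R peaks; `fs` is the sampling
            rate in Hz.
            """
            x = (ecg - np.mean(ecg)) / np.std(ecg)            # normalize signal
            peaks, _ = find_peaks(np.abs(x), prominence=min_prominence)
            max_gap = int(max_qrs_ms / 1000.0 * fs)           # duration criterion
            r_peaks = []
            for a, b in zip(peaks[:-1], peaks[1:]):
                if b - a <= max_gap and np.std(x[a:b + 1]) > min_prominence:
                    r_peaks.append(a if abs(x[a]) >= abs(x[b]) else b)
            return np.array(sorted(set(r_peaks)))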

  1. Application of regularization technique in image super-resolution algorithm via sparse representation

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Huang, Hui; Zheng, Li-xin

    2017-11-01

    To make use of prior knowledge of the image more effectively and restore more details of the edges and structures, a novel sparse coding objective function is proposed by applying the principles of non-local similarity and manifold learning on the basis of the super-resolution algorithm via sparse representation. First, the non-local similarity regularization term is constructed by using similar image patches to preserve the edge information. Then, the manifold learning regularization term is constructed by utilizing the locally linear embedding approach to enhance the structural information. The experimental results validate that the proposed algorithm achieves a significant improvement over several super-resolution algorithms in terms of both subjective visual effect and objective evaluation indices.

  2. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with intensity inhomogeneity. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well handled. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracy as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Manifold Based Low-rank Regularization for Image Restoration and Semi-supervised Learning

    OpenAIRE

    Lai, Rongjie; Li, Jia

    2017-01-01

    Low-rank structures play an important role in recent advances on many problems in image science and data science. As a natural extension of low-rank structures for data with nonlinear structures, the concept of the low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restricted than the global low-rank regu...

  4. Assessment of prior image induced nonlocal means regularization for low-dose CT reconstruction: Change in anatomy.

    Science.gov (United States)

    Zhang, Hao; Ma, Jianhua; Wang, Jing; Moore, William; Liang, Zhengrong

    2017-09-01

    Repeated computed tomography (CT) scans are prescribed for some clinical applications such as lung nodule surveillance. Several studies have demonstrated that incorporating a high-quality prior image into the reconstruction of subsequent low-dose CT (LDCT) acquisitions can either improve image quality or reduce data fidelity requirements. Our previously proposed normal-dose image induced nonlocal means (ndiNLM) regularization method for LDCT is one example of such a method. However, one major concern with prior image based methods is that they might produce false information when the prior image and the current LDCT image show different structures (for example, if a lung nodule emerges, grows, shrinks, or disappears over time). This study aims to assess the performance of the ndiNLM regularization method in situations with change in anatomy. We incorporated the ndiNLM regularization into the statistical image reconstruction (SIR) framework for reconstruction of subsequent LDCT images. Because of its patch-based search mechanism, a rough registration between the prior image and the current LDCT image is adequate for the SIR-ndiNLM method. We assessed the performance of the SIR-ndiNLM method in lung nodule surveillance for two different scenarios: (a) the nodule was not found in a baseline exam but appears in a follow-up LDCT scan; (b) the nodule was present in a baseline exam but disappears in a follow-up LDCT scan. We further investigated the effect of nodule size on the performance of the SIR-ndiNLM method. We found that a relatively large search-window (e.g., 33 × 33) should be used for the SIR-ndiNLM method to account for misalignment between the prior image and the current LDCT image, and to ensure that enough similar patches can be found in the prior image. With proper selection of other parameters, experimental results with two patient datasets demonstrated that the SIR-ndiNLM method did not miss true nodules nor introduce false nodules in the lung nodule surveillance scenarios.

  5. Videokymography. Imaging and quantification of regular and irregular vocal fold vibrations

    NARCIS (Netherlands)

    Schutte, HK; Svec, JG; Sram, F; McCafferty, G; Coman, W; Carroll, R

    1996-01-01

    A newly developed imaging technique makes it possible to observe the vocal fold vibration pattern even under unstable conditions. In contrast to stroboscopy, which relies strongly on the regularity of the vibration under study, videokymography enables the study of irregular patterns as well.

  6. Detection and recognition of road markings in panoramic images

    Science.gov (United States)

    Li, Cheng; Creusen, Ivo; Hazelhoff, Lykele; de With, Peter H. N.

    2015-03-01

    Detection of road lane markings is attractive for practical applications such as advanced driver assistance systems and road maintenance. This paper proposes a system to detect and recognize road lane markings in panoramic images. The system can be divided into four stages. First, an inverse perspective mapping is applied to the original panoramic image to generate a top-view road image, in which the potential road markings are segmented based on their intensity difference relative to the surrounding pixels. Second, a feature vector for each potential road marking segment is extracted by calculating the Euclidean distance between the center and the boundary at regular angular steps. Third, the shape of each segment is classified using a Support Vector Machine (SVM). Finally, by modeling the lane markings, previously falsely detected segments can be rejected based on their orientation and position relative to the lane markings. Our experiments show that the system is promising and is capable of recognizing 93%, 95% and 91% of striped line segments, blocks and arrows, respectively, as well as 94% of the lane markings.
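
    The second-stage feature vector can be sketched directly: sample the distance from the segment centroid to its boundary at regular angular steps. The implementation below assumes a boolean segment mask and an illustrative number of angle bins.

        import numpy as np

        def radial_shape_features(mask, n_angles=36):
            """Feature vector for a candidate road-marking segment: Euclidean
            distance from the segment's centroid to its boundary, sampled at
            regular angular steps."""
            mask = mask.astype(bool)
            ys, xs = np.nonzero(mask)
            cy, cx = ys.mean(), xs.mean()
            # Boundary pixels: foreground pixels with a background 4-neighbor.
            padded = np.pad(mask, 1)
            interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                        padded[1:-1, :-2] & padded[1:-1, 2:])
            by, bx = np.nonzero(mask & ~interior)
            angles = np.arctan2(by - cy, bx - cx)
            dists = np.hypot(by - cy, bx - cx)
            feat = np.zeros(n_angles)
            bins = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
            for b, d in zip(bins, dists):
                feat[b] = max(feat[b], d)         # farthest boundary point per bin
            return feat / (feat.max() + 1e-9)     # normalize for scale invariance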

  7. A Novel Kernel-Based Regularization Technique for PET Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Abdelwahhab Boudjelal

    2017-06-01

    Positron emission tomography (PET) is an imaging technique that generates 3D detail of physiological processes at the cellular level. The technique requires a radioactive tracer, which decays and releases a positron that collides with an electron; consequently, annihilation photons are emitted, which can be measured. The purpose of PET is to use the measurement of photons to reconstruct the distribution of radioisotopes in the body. Currently, PET is undergoing a revamp, with advancements in data measurement instruments and in the computing methods used to create the images. These computational methods are required to solve the inverse problem of "image reconstruction from projection". This paper proposes a novel kernel-based regularization technique for maximum-likelihood expectation-maximization (κ-MLEM) to reconstruct the image. Compared to standard MLEM, the proposed algorithm is more robust and more effective in removing background noise while preserving the edges; this suppresses image artifacts, such as out-of-focus slice blur.
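
    The unregularized MLEM baseline that κ-MLEM modifies has a well-known update, sketched below for a dense system matrix; the kernel-based regularization itself is not reproduced here.

        import numpy as np

        def mlem(A, y, n_iter=50):
            """Standard (unregularized) MLEM reconstruction.

            A : (n_bins, n_voxels) system matrix; y : measured sinogram counts.
            Update: x <- x / (A^T 1) * A^T (y / (A x)).
            """
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])              # sensitivity image
            for _ in range(n_iter):
                proj = A @ x                              # forward projection
                ratio = np.divide(y, proj, out=np.zeros_like(y, dtype=float),
                                  where=proj > 0)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x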

  8. ℓ1/2-norm regularized nonnegative low-rank and sparse affinity graph for remote sensing image segmentation

    Science.gov (United States)

    Tian, Shu; Zhang, Ye; Yan, Yiming; Su, Nan

    2016-10-01

    Segmentation of real-world remote sensing images is a challenge due to the complex texture information with high heterogeneity. Thus, graph-based image segmentation methods have been attracting great attention in the field of remote sensing. However, most of the traditional graph-based approaches fail to capture the intrinsic structure of the feature space and are sensitive to noise. An ℓ1/2-norm regularization-based graph segmentation method is proposed to segment remote sensing images. First, we use the occlusion of the random texture model (ORTM) to extract the local histogram features. Then, an ℓ1/2-norm regularized low-rank and sparse representation (LNNLRS) is implemented to construct an ℓ1/2-regularized nonnegative low-rank and sparse graph (LNNLRS-graph) by the union of feature subspaces. Moreover, the LNNLRS-graph has a high ability to discriminate the manifold intrinsic structure of highly homogeneous texture information. Meanwhile, the LNNLRS representation takes advantage of the low-rank and sparse characteristics to remove noise and corrupted data. Last, we introduce the LNNLRS-graph into graph regularization nonnegative matrix factorization to enhance the segmentation accuracy. The experimental results using remote sensing images show that, when compared to five state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  9. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    Science.gov (United States)

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Onto Convex Sets (POCS) method has been used to solve the formulated regularized SPIRiT problem. However, the quality of the reconstructed image still needs to be improved. Though methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, these methods demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai and Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels. In particular, our proposal is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Temporal regularization of ultrasound-based liver motion estimation for image-guided radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J. [Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS foundation Trust, Sutton, London SM2 5PT (United Kingdom)

    2016-01-15

    Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer leads to improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking.
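
    The α–β filter at the heart of the ABST method is a classic two-state predictor-corrector; the sketch below adds a simple gating rule in place of the paper's similarity-threshold machinery, with gains and gate values chosen purely for illustration.

        import numpy as np

        def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.1, gate=3.0):
            """alpha-beta filtering of 1D template-matching positions.

            Estimates that jump implausibly far from the prediction are
            rejected and replaced by the prediction, a crude stand-in for the
            ABST similarity threshold.
            """
            x, v = float(measurements[0]), 0.0
            out = [x]
            for z in measurements[1:]:
                x_pred = x + v * dt                  # predict position
                r = z - x_pred                       # innovation (residual)
                if abs(r) > gate:                    # reject implausible match
                    r = 0.0
                x = x_pred + alpha * r               # correct position
                v = v + (beta / dt) * r              # correct velocity
                out.append(x)
            return np.array(out)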

  11. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    Science.gov (United States)

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8%, but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.

  13. From regular text to artistic writing and artworks: Fourier statistics of images with low and high aesthetic appeal

    Science.gov (United States)

    Melmer, Tamara; Amirshahi, Seyed A.; Koch, Michael; Denzler, Joachim; Redies, Christoph

    2013-01-01

    The spatial characteristics of letters and their influence on readability and letter identification have been intensely studied during the last decades. There have been few studies, however, on statistical image properties that reflect more global aspects of text, for example properties that may relate to its aesthetic appeal. It has been shown that natural scenes and a large variety of visual artworks possess a scale-invariant Fourier power spectrum that falls off linearly with increasing frequency in log-log plots. We asked whether images of text share this property. As expected, the Fourier spectrum of images of regular typed or handwritten text is highly anisotropic, i.e., the spectral image properties in vertical, horizontal, and oblique orientations differ. Moreover, the spatial frequency spectra of text images are not scale-invariant in any direction. The decline is shallower in the low-frequency part of the spectrum for text than for aesthetic artworks, whereas, in the high-frequency part, it is steeper. These results indicate that, in general, images of regular text contain less global structure (low spatial frequencies) relative to fine detail (high spatial frequencies) than images of aesthetic artworks. Moreover, we studied images of text with artistic claim (ornate print and calligraphy) and ornamental art. For some measures, these images assume average values intermediate between regular text and aesthetic artworks. Finally, to answer the question of whether the statistical properties measured by us are universal amongst humans or are subject to intercultural differences, we compared images from three different cultural backgrounds (Western, East Asian, and Arabic). Results for different categories (regular text, aesthetic writing, ornamental art, and fine art) were similar across cultures. PMID:23554592

  14. Task-based detectability in CT image reconstruction by filtered backprojection and penalized likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Gang, Grace J. [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada and Department of Biomedical Engineering, Johns Hopkins University, Baltimore Maryland 21205 (Canada); Stayman, J. Webster; Zbijewski, Wojciech [Department of Biomedical Engineering, Johns Hopkins University, Baltimore Maryland 21205 (United States); Siewerdsen, Jeffrey H., E-mail: jeff.siewerdsen@jhu.edu [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada and Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States)

    2014-08-15

    Purpose: Nonstationarity is an important aspect of imaging performance in CT and cone-beam CT (CBCT), especially for systems employing iterative reconstruction. This work presents a theoretical framework for both filtered-backprojection (FBP) and penalized-likelihood (PL) reconstruction that includes explicit descriptions of nonstationary noise, spatial resolution, and task-based detectability index. Potential utility of the model was demonstrated in the optimal selection of regularization parameters in PL reconstruction. Methods: Analytical models for the local modulation transfer function (MTF) and noise-power spectrum (NPS) were investigated for both FBP and PL reconstruction, including explicit dependence on the object and spatial location. For FBP, a cascaded systems analysis framework was adapted to account for nonstationarity by separately calculating fluence and system gains for each ray passing through any given voxel. For PL, the point-spread function and covariance were derived using the implicit function theorem and first-order Taylor expansion according to Fessler [“Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): Applications to tomography,” IEEE Trans. Image Process. 5(3), 493–506 (1996)]. Detectability index was calculated for a variety of simple tasks. The model for PL was used in selecting the regularization strength parameter to optimize task-based performance, with both a constant and a spatially varying regularization map. Results: Theoretical models of FBP and PL were validated in 2D simulated fan-beam data and found to yield accurate predictions of the local MTF and NPS as a function of the object and the spatial location. The NPS for both FBP and PL exhibits a similar anisotropic nature depending on the pathlength (and therefore the object and spatial location within the object) traversed by each ray, with the PL NPS experiencing greater smoothing along directions with higher noise.
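
    Given local MTF and NPS estimates, a detectability index can be computed by numerical integration; the prewhitening-observer form below is one common instance (an illustrative choice, not necessarily the observer model used in the paper).

        import numpy as np

        def detectability_index(task_fn, mtf, nps, df):
            """Prewhitening-observer detectability index from local MTF and NPS:

                d'^2 = sum_f |W_task(f)|^2 * MTF(f)^2 / NPS(f) * df

            `task_fn`, `mtf`, and `nps` are arrays sampled on the same
            spatial-frequency grid with bin area `df`.
            """
            integrand = (np.abs(task_fn) ** 2) * (mtf ** 2) / np.maximum(nps, 1e-12)
            return np.sqrt(np.sum(integrand) * df)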

  15. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast-changing objects. Known SSI devices exhibit large total track length (TTL), weight, and production costs and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this SSI camera is capable of performing color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  16. Prospective regularization design in prior-image-based reconstruction

    International Nuclear Information System (INIS)

    Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster

    2015-01-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in the image.

  17. Cross-Modality Image Synthesis via Weakly Coupled and Geometry Co-Regularized Joint Dictionary Learning.

    Science.gov (United States)

    Huang, Yawen; Shao, Ling; Frangi, Alejandro F

    2018-03-01

    Multi-modality medical imaging is increasingly used for comprehensive assessment of complex diseases in either diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors, such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis while considering that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. Then, we propose a unified model by integrating such a criterion into the joint dictionary learning and the observed common feature space for associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate superior performance of the proposed model over state-of-the-art methods.

  18. Bias correction for magnetic resonance images via joint entropy regularization.

    Science.gov (United States)

    Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang

    2014-01-01

    Due to the imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms are proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruption and also tested on real MR data. The test results show that the two proposed methods can effectively remove the bias field and perform comparably to state-of-the-art methods.

  19. Detecting regular sound changes in linguistics as events of concerted evolution.

    Science.gov (United States)

    Hruschka, Daniel J; Branford, Simon; Smith, Eric D; Wilkins, Jon; Meade, Andrew; Pagel, Mark; Bhattacharya, Tanmoy

    2015-01-05

    Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Automatic metal parts inspection: Use of thermographic images and anomaly detection algorithms

    Science.gov (United States)

    Benmoussat, M. S.; Guillaume, M.; Caulier, Y.; Spinnler, K.

    2013-11-01

    A fully-automatic approach based on the use of induction thermography and detection algorithms is proposed to inspect industrial metallic parts containing different surface and sub-surface anomalies such as open cracks and open and closed notches with different sizes and depths. A practical experimental setup is developed, where lock-in and pulsed thermography (LT and PT, respectively) techniques are used to establish a dataset of thermal images for three different mockups. Data cubes are constructed by stacking up the temporal sequence of thermogram images. After the reduction of the data space dimension by means of denoising and dimensionality reduction methods, anomaly detection algorithms are applied to the reduced data cubes. The dimensions of the reduced data spaces are automatically calculated with an arbitrary criterion. The results show that, when reduced data cubes are used, the anomaly detection algorithms originally developed for hyperspectral data, the well-known Reed and Xiaoli Yu detector (RX) and the regularized adaptive RX (RARX), give good detection performance for both surface and sub-surface defects in a non-supervised way.
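
    The RX detector named above reduces to a Mahalanobis distance against the global background statistics. The sketch below is a minimal global-RX implementation on a synthetic reduced data cube (pixels x components); it is not the authors' exact pipeline.

```python
# Sketch: global RX anomaly scores on a reduced data cube flattened to
# (n_pixels, n_components); synthetic data, not the authors' pipeline.
import numpy as np

def rx_scores(data):
    mu = data.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(data, rowvar=False))  # pinv guards singularity
    centered = data - mu
    # Row-wise Mahalanobis distance: x^T C^{-1} x for each pixel.
    return np.einsum('ij,jk,ik->i', centered, cov_inv, centered)

rng = np.random.default_rng(0)
data = rng.normal(size=(10000, 12))     # 10000 pixels, 12 reduced components
data[42] += 8.0                         # implant one anomalous pixel
print("most anomalous pixel:", rx_scores(data).argmax())   # expected: 42
```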

  1. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

    We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of Total Variation image deconvolution. This approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The Total Variation image deconvolution is performed with the Alternating Direction Method of Multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section.
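
    For readers unfamiliar with the discrepancy principle, the sketch below selects a regularization parameter so that the residual matches the noise level. For brevity it uses a closed-form Tikhonov deconvolution and a bisection search in place of the paper's TV/ADMM solver and Newton refinement; all signals are synthetic.

```python
# Sketch: Morozov discrepancy principle with closed-form Tikhonov deconvolution
# (stand-in for the paper's TV/ADMM solver) and bisection (stand-in for Newton).
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 256, 0.05
x_true = np.repeat(rng.normal(size=16), n // 16)          # blocky 1D signal
h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)   # Gaussian blur kernel
h /= h.sum()
H = np.fft.fft(np.fft.ifftshift(h))
y = np.real(np.fft.ifft(H * np.fft.fft(x_true))) + sigma * rng.normal(size=n)
Y = np.fft.fft(y)

def residual_norm2(lam):
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)           # Tikhonov solution
    r = np.real(np.fft.ifft(H * X)) - y
    return r @ r

target = n * sigma ** 2                 # Morozov level: ||residual||^2 ~ n*sigma^2
lo, hi = 1e-8, 1e2                      # residual grows monotonically with lambda
for _ in range(60):
    mid = np.sqrt(lo * hi)              # geometric bisection
    lo, hi = (mid, hi) if residual_norm2(mid) < target else (lo, mid)
print("selected lambda ~ %.3g" % np.sqrt(lo * hi))
```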

  2. A combined use of multispectral and SAR images for ship detection and characterization through object based image analysis

    Science.gov (United States)

    Aiello, Martina; Gianinetto, Marco

    2017-10-01

    Marine routes represent a huge portion of commercial and human trades, therefore surveillance, security and environmental protection themes are gaining increasing importance. Being able to overcome the limits imposed by terrestrial means of monitoring, ship detection from satellite has recently prompted a renewed interest for a continuous monitoring of illegal activities. This paper describes an automatic Object Based Image Analysis (OBIA) approach to detect vessels made of different materials in various sea environments. The combined use of multispectral and SAR images allows for a regular observation unrestricted by lighting and atmospheric conditions and complementarity in terms of geographic coverage and geometric detail. The method developed adopts a region growing algorithm to segment the image into homogeneous objects, which are then classified through a decision tree algorithm based on spectral and geometrical properties. Then, a spatial analysis retrieves the vessels' position, length and heading parameters and a speed range is associated. Optimization of the image processing chain is performed by selecting image tiles through a statistical index. Vessel candidates are detected over amplitude SAR images using an adaptive threshold Constant False Alarm Rate (CFAR) algorithm prior to the object-based analysis. Validation is carried out by comparing the retrieved parameters with the information provided by the Automatic Identification System (AIS), when available, or with manual measurement when AIS data are not available. The estimation of length shows R2 = 0.85 and the estimation of heading R2 = 0.92, computed as the average of R2 values obtained for both optical and radar images.
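
    The adaptive-threshold CFAR step can be illustrated compactly. The sketch below is a generic cell-averaging CFAR on a synthetic amplitude image; the window sizes and false-alarm rate are assumptions, and the paper's exact CFAR variant may differ.

```python
# Sketch: cell-averaging CFAR with an adaptive threshold on a synthetic
# amplitude image; window sizes and false-alarm rate are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(img, guard=4, train=12, pfa=1e-6):
    big, small = 2 * (guard + train) + 1, 2 * guard + 1
    # Training-band mean = (sum over big window - sum over guard window) / count.
    sum_big = uniform_filter(img, big) * big ** 2
    sum_small = uniform_filter(img, small) * small ** 2
    n_train = big ** 2 - small ** 2
    clutter_mean = (sum_big - sum_small) / n_train
    # Threshold factor for exponentially distributed clutter.
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    return img > alpha * clutter_mean

rng = np.random.default_rng(2)
sea = rng.exponential(scale=1.0, size=(300, 300))   # speckle-like sea clutter
sea[150:153, 150:156] += 40.0                       # one bright "vessel"
print("detected pixels:", int(ca_cfar(sea).sum()))  # expected: ~18 target pixels
```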

  3. Buried object detection in GPR images

    Science.gov (United States)

    Paglieroni, David W; Chambers, David H; Bond, Steven W; Beer, W. Reginald

    2014-04-29

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.

  4. Semi-supervised manifold learning with affinity regularization for Alzheimer's disease identification using positron emission tomography imaging.

    Science.gov (United States)

    Lu, Shen; Xia, Yong; Cai, Tom Weidong; Feng, David Dagan

    2015-01-01

    Dementia, and Alzheimer's disease (AD) in particular, is a global problem and a major threat to the aging population. An image-based computer-aided dementia diagnosis method is needed to provide doctors with help during medical image examination. Many machine learning based dementia classification methods using medical imaging have been proposed and most of them achieve accurate results. However, most of these methods make use of supervised learning, requiring a fully labeled image dataset, which usually is not practical in a real clinical environment. Using a large amount of unlabeled images can improve the dementia classification performance. In this study we propose a new semi-supervised dementia classification method based on random manifold learning with affinity regularization. Three groups of spatial features are extracted from positron emission tomography (PET) images to construct an unsupervised random forest, which is then used to regularize the manifold learning objective function. The proposed method, the state-of-the-art Laplacian support vector machine (LapSVM), and a supervised SVM are applied to classify AD and normal controls (NC). The experimental results show that learning with unlabeled images indeed improves the classification performance, and our method outperforms LapSVM on the same dataset.

  5. Beamforming Through Regularized Inverse Problems in Ultrasound Medical Imaging.

    Science.gov (United States)

    Szasz, Teodora; Basarab, Adrian; Kouame, Denis

    2016-12-01

    Beamforming (BF) in ultrasound (US) imaging has significant impact on the quality of the final image, controlling its resolution and contrast. Despite its low spatial resolution and contrast, delay-and-sum (DAS) is still extensively used nowadays in clinical applications, due to its real-time capabilities. The most common alternatives are the minimum variance (MV) method and its variants, which overcome the drawbacks of DAS at the cost of higher computational complexity that limits their utilization in real-time applications. In this paper, we propose to perform BF in US imaging through a regularized inverse problem based on a linear model relating the reflected echoes to the signal to be recovered. Our approach presents two major advantages: 1) its flexibility in the choice of statistical assumptions on the signal to be beamformed (Laplacian and Gaussian statistics are tested herein) and 2) its robustness to a reduced number of pulse emissions. The proposed framework is flexible and allows for choosing the right tradeoff between noise suppression and sharpness of the resulting image. We illustrate the performance of our approach on both simulated and experimental data, with in vivo examples of carotid and thyroid. Compared with DAS, MV, and two other recently published BF techniques, our method offers better spatial resolution and contrast when using Laplacian and Gaussian priors, respectively.
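
    For context, the delay-and-sum baseline that the proposed regularized beamformer is compared against can be written in a few lines. The array geometry, sampling rate, and synthetic point-scatterer echoes below are illustrative assumptions.

```python
# Sketch: delay-and-sum beamforming of synthetic single-scatterer channel data;
# array geometry and sampling parameters are illustrative assumptions.
import numpy as np

c, fs = 1540.0, 40e6                        # sound speed (m/s), sampling (Hz)
n_el, pitch = 64, 0.3e-3                    # elements and spacing (m)
elem_x = (np.arange(n_el) - (n_el - 1) / 2) * pitch
rf = np.zeros((n_el, 2048))                 # channel data: elements x samples

scat_x, scat_z = 0.5e-3, 15e-3              # one point scatterer
for i, ex in enumerate(elem_x):             # plane-wave transmit + per-element return
    t = (scat_z + np.hypot(scat_x - ex, scat_z)) / c
    rf[i, int(np.round(t * fs))] = 1.0

def das_pixel(rf, x, z):
    """Sum channels after compensating each round-trip delay to pixel (x, z)."""
    idx = np.round((z + np.hypot(x - elem_x, z)) / c * fs).astype(int)
    valid = idx < rf.shape[1]
    return rf[np.flatnonzero(valid), idx[valid]].sum()

print("on-target:", das_pixel(rf, scat_x, scat_z))   # coherent sum ~ n_el
print("off-target:", das_pixel(rf, -3e-3, 10e-3))    # ~ 0
```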

  6. Imaging, object detection, and change detection with a polarized multistatic GPR array

    Science.gov (United States)

    Beer, N. Reginald; Paglieroni, David W.

    2015-07-21

    A polarized detection system performs imaging, object detection, and change detection factoring in the orientation of an object relative to the orientation of transceivers. The polarized detection system may operate in one of several modes based on whether the imaging, object detection, or change detection is performed separately for each transceiver orientation. In combined change mode, the polarized detection system performs imaging, object detection, and change detection separately for each transceiver orientation, and then combines changes across polarizations. In combined object mode, the polarized detection system performs imaging and object detection separately for each transceiver orientation, and then combines objects across polarizations and performs change detection on the result. In combined image mode, the polarized detection system performs imaging separately for each transceiver orientation, and then combines images across polarizations and performs object detection followed by change detection on the result.

  7. A soft double regularization approach to parametric blind image deconvolution.

    Science.gov (United States)

    Chen, Li; Yap, Kim-Hui

    2005-05-01

    This paper proposes a blind image deconvolution scheme based on soft integration of parametric blur structures. Conventional blind image deconvolution methods encounter a difficult dilemma of either imposing stringent and inflexible preconditions on the problem formulation or experiencing poor restoration results due to lack of information. This paper attempts to address this issue by assessing the relevance of parametric blur information, and incorporating the knowledge into the parametric double regularization (PDR) scheme. The PDR method assumes that the actual blur satisfies up to a certain degree of parametric structure, as there are many well-known parametric blurs in practical applications. Further, it can be tailored flexibly to include other blur types if some prior parametric knowledge of the blur is available. A manifold soft parametric modeling technique is proposed to generate the blur manifolds, and estimate the fuzzy blur structure. The PDR scheme involves the development of the meaningful cost function, the estimation of blur support and structure, and the optimization of the cost function. Experimental results show that it is effective in restoring degraded images under different environments.

  8. Regularized Fractional Power Parameters for Image Denoising Based on Convex Solution of Fractional Heat Equation

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2014-01-01

    The interest in using fractional mask operators based on fractional calculus has grown for image denoising. Denoising is one of the most fundamental image restoration problems in computer vision and image processing. This paper proposes an image denoising algorithm based on a convex solution of the fractional heat equation with regularized fractional power parameters. The performance of the proposed algorithm was evaluated by computing the PSNR on different types of images. Experiments based on visual perception and peak signal-to-noise ratio values show that the improvements in the denoising process are competitive with the standard Gaussian filter and the Wiener filter.

  9. Image processing based detection of lung cancer on CT scan images

    Science.gov (United States)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze an image processing method for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation. Image segmentation is one of the intermediate levels in image processing. Marker-controlled watershed and region-growing approaches are used to segment the CT scan images. The detection phase comprises image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results demonstrate the effectiveness of our approach. The results show that the best approach for main feature detection is the watershed-with-masking method, which has high accuracy and robustness.
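
    A minimal sketch of the marker-controlled watershed step follows, using scikit-image on a synthetic image rather than real CT slices; the Gabor enhancement and feature-extraction stages of the pipeline are omitted, and the marker thresholds are assumptions.

```python
# Sketch: marker-controlled watershed on a synthetic two-blob image
# (scikit-image); intensity thresholds for the markers are assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

img = np.zeros((128, 128))
img[30:50, 30:50] = 1.0                     # two bright regions of interest
img[80:110, 70:100] = 1.0
img += 0.1 * np.random.default_rng(3).normal(size=img.shape)

markers = np.zeros_like(img, dtype=np.int32)
markers[img < 0.2] = 1                      # confident background seeds
markers[img > 0.8] = 2                      # confident foreground seeds

labels = watershed(sobel(img), markers)     # flood the gradient image from seeds
print("foreground regions:", ndi.label(labels == 2)[1])   # expected: 2
```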

  10. An image-segmentation-based framework to detect oil slicks from moving vessels in the Southern African oceans using SAR imagery

    CSIR Research Space (South Africa)

    Mdakane, Lizwe W

    2017-06-01

    Oil slick events caused by bilge leakage/dumps from ships and from other anthropogenic sources pose a threat to the aquatic ecosystem and need to be monitored on a regular basis. An automatic image-segmentation-based framework to detect oil...

  11. Medical Image Tamper Detection Based on Passive Image Authentication.

    Science.gov (United States)

    Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; V Nabiyev, Vasif; Ulutas, Mustafa

    2017-12-01

    Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable telediagnosis between medical staff and to make the patient's history accessible to medical staff from anywhere. Therefore, integrity protection of the medical image is a serious concern due to the broadcast nature of the Internet. Some watermarking techniques have been proposed to control the integrity of medical images. However, they require embedding extra information (a watermark) into the image before transmission, which decreases the visual quality of the medical image and can cause false diagnosis. The proposed method uses a passive image authentication mechanism to detect tampered regions on medical images. Structural texture information is obtained from the medical image by using the rotation-invariant local binary pattern (LBPROT) to make the keypoint extraction techniques more successful. Keypoints on the texture image are obtained with the scale invariant feature transform (SIFT). The method detects tampered regions by matching the keypoints. The method improves keypoint-based passive image authentication mechanisms (which fail to detect tampering when a smooth region is used to cover an object) by applying LBPROT before keypoint extraction, because smooth regions also carry texture information. Experimental results show that the method detects tampered regions on medical images even if the forged image has undergone attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions are scaled/rotated before pasting.
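
    The two building blocks named above, rotation-invariant LBP followed by SIFT keypoint matching, can be sketched as follows for flagging duplicated (copy-moved) regions. This assumes OpenCV and scikit-image are available; the thresholds and the toy image are illustrative, not the paper's tuned pipeline.

```python
# Sketch: rotation-invariant LBP texture image followed by SIFT keypoint
# matching to flag duplicated (copy-moved) regions; thresholds are assumptions.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(4)
img = cv2.GaussianBlur(rng.uniform(0, 255, (256, 256)).astype(np.uint8), (5, 5), 0)
img[150:200, 150:200] = img[20:70, 20:70]   # simulate a copy-move forgery

lbp = local_binary_pattern(img, P=8, R=1, method='ror').astype(np.uint8)

sift = cv2.SIFT_create()
kps, desc = sift.detectAndCompute(lbp, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc, desc, k=2)

suspicious = 0
for m in matches:
    if len(m) < 2:
        continue
    a, b = m[0], m[1]                       # m[0] is the trivial self-match
    dx, dy = np.subtract(kps[a.queryIdx].pt, kps[b.trainIdx].pt)
    if b.distance < 40 and np.hypot(dx, dy) > 30:   # similar but far apart
        suspicious += 1
print("suspicious keypoint matches:", suspicious)
```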

  12. Bayesian estimation of regularization and atlas building in diffeomorphic image registration.

    Science.gov (United States)

    Zhang, Miaomiao; Singh, Nikhil; Fletcher, P Thomas

    2013-01-01

    This paper presents a generative Bayesian model for diffeomorphic image registration and atlas building. We develop an atlas estimation procedure that simultaneously estimates the parameters controlling the smoothness of the diffeomorphic transformations. To achieve this, we introduce a Monte Carlo Expectation Maximization algorithm, where the expectation step is approximated via Hamiltonian Monte Carlo sampling on the manifold of diffeomorphisms. An added benefit of this stochastic approach is that it can successfully solve difficult registration problems involving large deformations, where direct geodesic optimization fails. Using synthetic data generated from the forward model with known parameters, we demonstrate the ability of our model to successfully recover the atlas and regularization parameters. We also demonstrate the effectiveness of the proposed method in the atlas estimation problem for 3D brain images.

  13. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located along selected vertical lines. For each of the reconstructed images, as well as the ground truth image, conductivity changes along the selected left and right vertical lines are plotted; in these plots, GT stands for ground truth, TV for the total variation method, and TGV for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also shown.

  14. Digital breast tomosynthesis: computer-aided detection of clustered microcalcifications on planar projection images

    International Nuclear Information System (INIS)

    Samala, Ravi K; Chan, Heang-Ping; Lu, Yao; Hadjiiski, Lubomir M; Wei, Jun; Helvie, Mark A

    2014-01-01

    This paper describes a new approach to detect microcalcification clusters (MCs) in digital breast tomosynthesis (DBT) via its planar projection (PPJ) image. With IRB approval, two-view (cranio-caudal and mediolateral oblique views) DBTs of human subject breasts were obtained with a GE GEN2 prototype DBT system that acquires 21 projection angles spanning 60° in 3° increments. A data set of 307 volumes (154 human subjects) was divided by case into independent training (127 with MCs) and test sets (104 with MCs and 76 free of MCs). A simultaneous algebraic reconstruction technique with multiscale bilateral filtering (MSBF) regularization was used to enhance microcalcifications and suppress noise. During the MSBF regularized reconstruction, the DBT volume was separated into high frequency (HF) and low frequency components representing microcalcifications and larger structures. At the final iteration, maximum intensity projection was applied to the regularized HF volume to generate a PPJ image that contained MCs with increased contrast-to-noise ratio (CNR) and reduced search space. High CNR objects in the PPJ image were extracted and labeled as microcalcification candidates. A convolution neural network trained to recognize the image pattern of microcalcifications was used to classify the candidates into true calcifications and tissue structures and artifacts. The remaining microcalcification candidates were grouped into MCs by dynamic conditional clustering based on adaptive CNR threshold and radial distance criteria. False positive (FP) clusters were further reduced using the number of candidates in a cluster, CNR and size of microcalcification candidates. At 85% sensitivity an FP rate of 0.71 and 0.54 was achieved for view- and case-based sensitivity, respectively, compared to 2.16 and 0.85 achieved in DBT. The improvement was significant (p-value = 0.003) by JAFROC analysis.

  15. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization

    Science.gov (United States)

    Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud

    2017-11-01

    Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-square data-fidelity term and the total variation and Besov seminorm for the regularization term. To fully comprehend the Besov seminorm, an implementation using the curvelet transform and the L1 norm enforcing sparsity is proposed. It allows our model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.

  16. Interactive facades: analysis and synthesis of semi-regular facades

    KAUST Repository

    AlHalawani, Sawsan; Yang, Yongliang; Liu, Han; Mitra, Niloy J.

    2013-01-01

    Urban facades regularly contain interesting variations due to allowed deformations of repeated elements (e.g., windows in different open or close positions) posing challenges to state-of-the-art facade analysis algorithms. We propose a semi-automatic framework to recover both repetition patterns of the elements and their individual deformation parameters to produce a factored facade representation. Such a representation enables a range of applications including interactive facade images, improved multi-view stereo reconstruction, facade-level change detection, and novel image editing possibilities. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  18. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    International Nuclear Information System (INIS)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-01-01

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.

  19. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either of the patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Novelty detection in dermatological images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel

    2003-01-01

    The problem of novelty detection is considered for a set of dermatological image data. Different points of view are analyzed in detail. First, novelty detection is treated as a contextual classification problem. Different learning phases can be detected when the sample size is increased...

  1. Regular Discrete Cosine Transform and its Application to Digital Images Representation

    Directory of Open Access Journals (Sweden)

    Yuri A. Gadzhiev

    2011-11-01

    The discrete cosine transform dct-i, unlike dct-ii, does not concentrate the energy of a transformed vector sufficiently well, so it is not used in practice for digital image compression. By performing a regular normalization of the basic cosine transform matrix, we obtain a discrete cosine transform which has the same cosine basis as dct-i and, like dct-i, coincides with its own inverse transform, but which, unlike dct-i, does not diminish the inherent energy-concentration ability of the cosine transform. In this paper we briefly consider the properties of this transform, a possible integer implementation for the case of an 8x8 matrix, and its application both to the image itself and to preliminary RGB colour space transformations; furthermore, we investigate some models of quantization and perform an experiment estimating the level of digital image compression and the quality achieved by use of this transform. This experiment shows that the transform can be sufficiently effective for practical use, but the question of its effectiveness relative to dct-ii remains open.
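
    The central claim, that the regularly normalized DCT-I matrix coincides with its own inverse, is easy to verify numerically. The sketch below builds the standard orthonormalized DCT-I matrix for the 8x8 case discussed in the paper and checks that it is symmetric and involutory.

```python
# Sketch: the orthonormalized ("regularly normalized") DCT-I matrix is symmetric
# and orthogonal, hence equal to its own inverse; verified for the 8x8 case.
import numpy as np

def regular_dct1_matrix(n):
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    c = np.cos(np.pi * j * k / (n - 1))     # DCT-I cosine basis
    w = np.ones(n)
    w[0] = w[-1] = 1.0 / np.sqrt(2.0)       # half-weight endpoints
    return np.sqrt(2.0 / (n - 1)) * np.outer(w, w) * c

C = regular_dct1_matrix(8)
print("symmetric:   ", np.allclose(C, C.T))
print("self-inverse:", np.allclose(C @ C, np.eye(8)))
```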

  2. The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.

    Science.gov (United States)

    Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin

    2015-11-01

    We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of true intensity and bias field and a simple thresholding is used in the second stage for segmentation. Different from most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with state-of-the-art approaches and well-known brain software tools, our model is fast, accurate, and robust to initialization.

  3. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology

    Science.gov (United States)

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-01

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on images reconstructed with the proposed methods resulted in 30% and 40% bias reduction in the tumor and lung regions, respectively, for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a

  4. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    International Nuclear Information System (INIS)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A.; Yang, Deshan; Tan, Jun

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
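
    For reference, the plain FISTA iteration that the proposed variants build on can be sketched for an l1-regularized least-squares problem; the OS-SART subproblem that replaces the gradient step in the paper is omitted, and the data below are synthetic.

```python
# Sketch: plain FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 on synthetic
# data; the paper replaces the gradient step with an OS-SART subproblem.
import numpy as np

def fista(A, b, lam, n_iter=300):
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        w = z - A.T @ (A @ z - b) / L       # gradient step at extrapolated point
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [3.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.normal(size=80)
print("recovered support:", np.flatnonzero(np.abs(fista(A, b, 0.05)) > 0.5))
```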
  6. Automatic detection of blurred images in UAV image sets

    Science.gov (United States)

    Sieberth, Till; Wackrow, Rene; Chandler, Jim H.

    2016-12-01

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitude combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur. Humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing. Creating internally a comparable image makes the method independent of
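
    A minimal sketch of scoring sharpness to rank a UAV image set follows. It uses the common variance-of-Laplacian proxy rather than the paper's comparison-image method; the Gaussian blur stands in for motion blur, and the flagging threshold is an illustrative assumption.

```python
# Sketch: variance-of-Laplacian sharpness score (a simpler proxy, not the
# paper's comparison-image method); Gaussian blur stands in for motion blur.
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def blur_score(img):
    """Higher = sharper: variance of the Laplacian response."""
    return float(np.var(laplace(img.astype(float))))

rng = np.random.default_rng(6)
sharp = rng.uniform(size=(256, 256))
blurred = gaussian_filter(sharp, sigma=3.0)
s_sharp, s_blur = blur_score(sharp), blur_score(blurred)
print("scores: sharp %.3f, blurred %.5f" % (s_sharp, s_blur))
print("flag blurred:", s_blur < 0.01 * s_sharp)     # assumed relative threshold
```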

  7. Marker Detection in Aerial Images

    KAUST Repository

    Alharbi, Yazeed

    2017-04-09

    The problem that the thesis is trying to solve is the detection of small markers in high-resolution aerial images. Given a high-resolution image, the goal is to return the pixel coordinates corresponding to the center of the marker in the image. The marker has the shape of two triangles sharing a vertex in the middle, and it occupies no more than 0.01% of the image size. An improvement on the Histogram of Oriented Gradients (HOG) is proposed, eliminating the majority of baseline HOG false positives for marker detection. The improvement is guided by the observation that standard HOG description struggles to separate markers from negative patches containing an X shape. The proposed method alters intensities with the aim of altering gradients. The intensity-dependent gradient alteration leads to more separation between filled and unfilled shapes. The improvement is used in a two-stage algorithm to achieve high recall and high precision in detection of markers in aerial images. In the first stage, two classifiers are used: one to quickly eliminate most of the uninteresting parts of the image, and one to carefully select the marker among the remaining interesting regions. Interesting regions are selected by scanning the image with a fast classifier trained on the HOG features of markers in all rotations and scales. The next classifier is more precise and uses our method to eliminate the majority of the false positives of standard HOG. In the second stage, detected markers are tracked forward and backward in time. Tracking is needed to detect extremely blurred or distorted markers that are missed by the previous stage. The algorithm achieves 94% recall with minimal user guidance. An average of 30 guesses are given per image; the user verifies for each whether it is a marker or not. The brute force approach would return 100,000 guesses per image.
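
    The baseline first-stage idea, HOG description of small patches plus a learned classifier, can be sketched as follows (assuming scikit-image and scikit-learn). The thesis' intensity alteration and the tracking stage are omitted; the bow-tie patches below are synthetic.

```python
# Sketch: HOG features of small patches plus a linear SVM (scikit-image and
# scikit-learn assumed); synthetic bow-tie patches stand in for real markers.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)

def marker_patch():
    """Toy hourglass: two triangles sharing the mid-image vertex, plus noise."""
    p = np.zeros((24, 24))
    for i in range(12):
        p[i, i:24 - i] = 1.0
        p[23 - i, i:24 - i] = 1.0
    return p + 0.1 * rng.normal(size=p.shape)

pos = [marker_patch() for _ in range(100)]
neg = [rng.normal(size=(24, 24)) for _ in range(100)]
X = np.array([hog(p, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
              for p in pos + neg])
y = np.array([1] * 100 + [0] * 100)
clf = LinearSVC().fit(X, y)
print("training accuracy:", clf.score(X, y))
```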

  8. Breast ultrasound tomography with total-variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Laboratory]; Li, Cuiping [Karmanos Cancer Institute]; Duric, Neb [Karmanos Cancer Institute]

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate the applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.

  9. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing

    2014-06-02

    Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.

  10. Improved image quality and detectability of hypovascular liver metastases on DECT with different adjusted window settings

    Energy Technology Data Exchange (ETDEWEB)

    Altenbernd, Jens; Forsting, Michael; Lauenstein, Thomas; Wetter, Axel [Duisburg-Essen Univ., Essen (Germany). Dept. of Diagnostic and Interventional Radiology and Neuroradiology

    2017-03-15

    To investigate dual-energy CT of hypovascular liver metastases (LMs) with special focus on window settings (WSs). The aim of the study is to investigate the extent to which adapted WSs and the low-energy images of DECT improve the visibility especially of smaller LMs. 30 patients with LMs of colorectal cancer were investigated with DECT of the liver. In each patient contrast-enhanced DECT imaging with portal-venous delay was performed. The total number, mean number and conspicuity (1 = excellent, 5 = poor) of LMs were documented on 80 kVp images and virtual 120 kVp images with different WSs (25/200 HU, 50/200 HU, 75/200 HU, 25/350 HU, 50/350 HU, 75/350 HU, 25/500 HU, 50/500 HU, 75/500 HU). The attenuation (HU) of LMs and several anatomic regions and the background noise on 80 kVp images and virtual 120 kVp images were documented. The signal (liver)/noise and liver/LM ratios (SNR/LLMR) were calculated. The total number of LMs depending on size (<1 cm, 1-2 cm, >2 cm) on 80 kVp images and virtual 120 kVp images with the previously investigated best and regular WSs was documented. The highest total number, mean number per patient and total number of LMs <1 cm were detected with the WS 25/350 HU on 80 kVp images (7.0, p = 0.02; 218, p = 0.01; 64, p < 0.001) compared to the WS 75/200 HU on virtual 120 kVp images and the regular WS 50/350 HU on 80 kVp images and virtual 120 kVp images. The best conspicuity of LMs on 80 kVp images was documented with the WS 25/350 HU compared to the best WS on virtual 120 kVp images of 75/200 HU (1.2 vs. 2.5, p = 0.01). HU of normal liver, aorta, SNR and LLMR differed significantly between 80 kVp images and virtual 120 kVp images (128.1 vs. 93.6, p < 0.05; 192.8 vs. 131.4, p < 0.05; 10.3 vs. 8.1, p < 0.05; 2.8 vs. 2.1, p < 0.05). Low kVp images of DECT datasets are more precise in detecting hypovascular liver metastases than virtual 120 kVp images. Dedicated window settings have a relevant influence on conspicuity.

  11. Ghost imaging with bucket detection and point detection

    Science.gov (United States)

    Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao

    2018-04-01

    We experimentally investigate ghost imaging with bucket detection and point detection in which three types of illuminating sources are applied: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the usage of a bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light in the bucket detector case is better than that in the point detector case. Our theoretical analysis shows that this is due to the first-order transverse coherence of the illuminating source.
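
    Computational ghost-image reconstruction reduces to correlating bucket values with the known illumination patterns, G(x) = <(B - <B>) I(x)>. The sketch below demonstrates this on synthetic speckle-like patterns; it models neither of the detector types studied in the experiment.

```python
# Sketch: computational ghost-image reconstruction by correlating bucket values
# with known illumination patterns; synthetic data, neither detector modeled.
import numpy as np

rng = np.random.default_rng(8)
obj = np.zeros((32, 32))
obj[10:22, 14:18] = 1.0                     # transmissive bright bar
n_shots = 5000
patterns = rng.uniform(size=(n_shots, 32, 32))               # speckle-like patterns
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))  # total light per shot

# G(x) = <(B - <B>) I(x)>, estimated over all shots.
ghost = np.tensordot(bucket - bucket.mean(), patterns, axes=(0, 0)) / n_shots
print("correlation with object: %.2f"
      % np.corrcoef(ghost.ravel(), obj.ravel())[0, 1])
```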

  12. Task-based statistical image reconstruction for high-quality cone-beam CT

    Science.gov (United States)

    Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-11-01

    Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms to encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR; viz., penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a

  13. Photoacoustic Imaging in Oxygen Detection

    Directory of Open Access Journals (Sweden)

    Fei Cao

    2017-12-01

    Oxygen levels, including blood oxygen saturation (sO2) and tissue oxygen partial pressure (pO2), are crucial physiological parameters in life science. This paper reviews the importance of these two parameters and the methods for detecting them, focusing on the application of photoacoustic imaging in this scenario. sO2 is traditionally detected with optical spectra-based methods, and photoacoustic methods have recently proven uniquely effective for this task. pO2, on the other hand, is typically detected by PET, MRI, or purely optical approaches, yet with limited spatial resolution, imaging frame rate, or penetration depth. Photoacoustic imaging has also demonstrated great potential to overcome the existing limitations of the aforementioned techniques.
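
    The basic principle behind photoacoustic sO2 estimation, two-wavelength linear unmixing of oxy- and deoxyhemoglobin absorption, fits in a few lines. The extinction coefficients below are rough textbook values used only for illustration.

```python
# Sketch: two-wavelength linear unmixing behind sO2 estimation; extinction
# coefficients are rough textbook values (cm^-1/M), for illustration only.
import numpy as np

E = np.array([[518.0, 1405.0],      # 750 nm: [HbO2, Hb]
              [1058.0, 691.0]])     # 850 nm: [HbO2, Hb]

def estimate_so2(mu_a):
    """mu_a: absorption at the two wavelengths; sO2 = C_HbO2/(C_HbO2 + C_Hb)."""
    c_hbo2, c_hb = np.linalg.solve(E, mu_a)
    return c_hbo2 / (c_hbo2 + c_hb)

conc = np.array([0.97, 0.03])       # arterial-like oxy/deoxy concentrations
print("recovered sO2: %.2f" % estimate_so2(E @ conc))   # expected: 0.97
```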

  14. Salient Object Detection via Structured Matrix Decomposition.

    Science.gov (United States)

    Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J

    2016-05-04

    Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.

  15. Optimal Scale Edge Detection Utilizing Noise within Images

    Directory of Open Access Journals (Sweden)

    Adnan Khashman

    2003-04-01

    Edge detection techniques have common problems that include poor edge detection in low-contrast images, speed of recognition and high computational cost. An efficient solution to the edge detection of objects in low- to high-contrast images is scale space analysis. However, this approach is time consuming and computationally expensive. These expenses can be marginally reduced if an optimal scale is found for scale space edge detection. This paper presents a new approach to detecting objects within images using the noise within the images. The novel idea is based on selecting one optimal scale for the entire image at which scale space edge detection can be applied. The selection of an ideal scale is based on the hypothesis that "the optimal edge detection scale (ideal scale) depends on the noise within an image". This paper aims at providing experimental evidence on the relationship between the optimal scale and the noise within images.
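
    The paper's hypothesis suggests a simple recipe: estimate the noise level, map it to a detection scale, then run a scale-space edge detector. The sketch below does exactly that with scikit-image; the linear noise-to-scale mapping is an illustrative assumption, not the paper's calibrated relationship.

```python
# Sketch: noise-adaptive scale selection for edge detection (scikit-image);
# the linear noise-to-scale mapping is an assumption, not the paper's law.
import numpy as np
from skimage import feature, restoration

rng = np.random.default_rng(9)
img = np.zeros((128, 128))
img[:, 64:] = 1.0                                   # one vertical step edge
noisy = img + 0.15 * rng.normal(size=img.shape)

noise_sigma = restoration.estimate_sigma(noisy)     # wavelet-based noise estimate
scale = 1.0 + 10.0 * noise_sigma                    # assumed noise -> scale map
edges = feature.canny(noisy, sigma=scale)
print("noise %.3f -> scale %.2f, edge pixels: %d"
      % (noise_sigma, scale, int(edges.sum())))
```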

  16. Crack detection using image processing

    International Nuclear Information System (INIS)

    Moustafa, M.A.A

    2010-01-01

This thesis contains five main subjects in eight chapters and two appendices. The first subject discusses the Wiener filter for filtering images. In the second subject, we examine different methods, such as the Steepest Descent Algorithm (SDA) and the wavelet transform, to detect and fill cracks, and their applications in areas such as nanotechnology and biotechnology. In the third subject, we attempt to construct 3-D images from 1-D or 2-D images using texture mapping with OpenGL under Visual C++. The fourth subject covers the use of image warping methods for finding the depth of 2-D images using affine transformation, bilinear transformation, projective mapping, mosaic warping and similarity transformation; more details about this subject are discussed below. The fifth subject, Bezier curves and surfaces, is discussed in detail, including methods for creating Bezier curves and surfaces with unknown distribution using only control points. At the end of our discussion we obtain the solid form using the so-called NURBS (Non-Uniform Rational B-Spline), which depends on the degree of freedom, control points, knots, and an evaluation rule, and is defined as a mathematical representation of 3-D geometry that can accurately describe any shape, from a simple 2-D line, circle, arc, or curve to the most complex 3-D organic free-form surface or solid; this depends on finding the Bezier curve and creating a family of curves (a surface), then filling in between to obtain the solid form. Another part of this subject concerns building 3-D geometric models from physical objects using image-based techniques, whose advantage is that they require no expensive equipment; we use NURBS, subdivision surfaces and meshes for finding the depth of any image from one still view or a 2-D image. The quality of filtering depends on the way the data is incorporated into the model. The data should be treated with
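
The Wiener filtering step the thesis opens with is available off the shelf; a minimal sketch with SciPy's local adaptive Wiener filter, with a window size chosen purely for illustration:

```python
import numpy as np
from scipy.signal import wiener

def denoise_for_crack_detection(image, window=5):
    """Local adaptive Wiener filter: SciPy estimates the mean and
    variance in each window and attenuates noise where the local
    variance is close to the noise floor."""
    return wiener(image.astype(float), mysize=window)
```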

  17. Experiment Design Regularization-Based Hardware/Software Codesign for Real-Time Enhanced Imaging in Uncertain Remote Sensing Environment

    Directory of Open Access Journals (Sweden)

    Castillo Atoche A

    2010-01-01

    Full Text Available A new aggregated Hardware/Software (HW/SW codesign approach to optimization of the digital signal processing techniques for enhanced imaging with real-world uncertain remote sensing (RS data based on the concept of descriptive experiment design regularization (DEDR is addressed. We consider the applications of the developed approach to typical single-look synthetic aperture radar (SAR imaging systems operating in the real-world uncertain RS scenarios. The software design is aimed at the algorithmic-level decrease of the computational load of the large-scale SAR image enhancement tasks. The innovative algorithmic idea is to incorporate into the DEDR-optimized fixed-point iterative reconstruction/enhancement procedure the convex convergence enforcement regularization via constructing the proper multilevel projections onto convex sets (POCS in the solution domain. The hardware design is performed via systolic array computing based on a Xilinx Field Programmable Gate Array (FPGA XC4VSX35-10ff668 and is aimed at implementing the unified DEDR-POCS image enhancement/reconstruction procedures in a computationally efficient multi-level parallel fashion that meets the (near real-time image processing requirements. Finally, we comment on the simulation results indicative of the significantly increased performance efficiency both in resolution enhancement and in computational complexity reduction metrics gained with the proposed aggregated HW/SW co-design approach.

  18. Bistatic Forward Scattering Radar Detection and Imaging

    Directory of Open Access Journals (Sweden)

    Hu Cheng

    2016-06-01

Full Text Available Forward Scattering Radar (FSR) is a special type of bistatic radar that can implement target detection, imaging, and identification using the forward scattering signals produced by moving targets that cross the baseline between the transmitter and receiver. Because the forward scattering effect greatly increases a target's Radar Cross Section (RCS), FSR is quite advantageous for use in counter-stealth detection. This paper first introduces the state of the art in forward scattering RCS, FSR detection, and Shadow Inverse Synthetic Aperture Radar (SISAR) imaging, and then analyzes key problems such as the statistical characteristics of forward scattering clutter, accurate parameter estimation, and multitarget discrimination. Subsequently, the current research progress in FSR detection and SISAR imaging is described in detail, covering both theory and experiments. In addition, with reference to the BeiDou navigation satellite, the results of forward scattering experiments on civil aircraft detection are shown. Finally, this paper considers future developments in FSR target detection and imaging and presents a new, promising technique for stealth target detection.

  19. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation

    International Nuclear Information System (INIS)

    Bardsley, Johnathan M; Goldes, John

    2009-01-01

In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We present three statistically motivated methods for choosing the regularization parameter, and numerical examples are given to illustrate their effectiveness.
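
The paper's three methods are not reproduced here, but a generic discrepancy-principle check for Poisson data conveys the idea: pick the regularization parameter whose reconstruction makes a chi-square-like residual statistic close to one. The `solve` callable (returning the forward-projected reconstruction for a given parameter) is an assumed ingredient.

```python
import numpy as np

def poisson_discrepancy(z, Au):
    """Approximate Poisson discrepancy: weighted squared residual per
    data point; near 1 when the residual is consistent with noise."""
    return np.sum((Au - z) ** 2 / np.maximum(Au, 1e-8)) / z.size

def select_alpha(z, solve, alphas):
    """Choose the regularization parameter alpha for which the
    discrepancy statistic of solve(alpha) is closest to 1."""
    return min(alphas, key=lambda a: abs(poisson_discrepancy(z, solve(a)) - 1.0))
```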

  20. l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction.

    Science.gov (United States)

    Zhang, Lingli; Zeng, Li; Guo, Yumeng

    2018-03-15

Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem in which image quality suffers from slope artifacts. The objective of this study is to first investigate the distorted domains of reconstructed images which encounter slope artifacts, and then present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image, aiming to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method includes the following four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified non-local means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of wavelet coefficients of the transformed image by using iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of some features, as compared to commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize the distorted edges in the reconstructed images. Quantitative assessments also showed that the new method obtained the highest image quality compared to the existing algorithms. This study demonstrated that the presented l0W-PNLM yielded higher image quality due to a number of unique characteristics, which include that (1) it utilizes
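
Step (3) of the pipeline, the l0-type hard thresholding of wavelet coefficients, can be sketched with PyWavelets. An orthogonal wavelet stands in for the paper's tight framelets, and the threshold value is illustrative.

```python
import numpy as np
import pywt

def hard_threshold_wavelet(image, thresh=0.1, wavelet='db4', level=3):
    """One l0W-style step: transform, zero small detail coefficients
    (hard thresholding), and invert the transform."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    kept = [coeffs[0]]  # keep the approximation band untouched
    for bands in coeffs[1:]:
        kept.append(tuple(np.where(np.abs(b) > thresh, b, 0.0) for b in bands))
    return pywt.waverec2(kept, wavelet)
```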

  1. A Variational Approach to the Denoising of Images Based on Different Variants of the TV-Regularization

    International Nuclear Information System (INIS)

    Bildhauer, Michael; Fuchs, Martin

    2012-01-01

    We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.
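
For the classical TV model these variants generalize, an off-the-shelf solver suffices to illustrate the denoising behavior; no public implementation of the nearly-linear-growth variants is assumed here.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                           # piecewise-constant test image
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = denoise_tv_chambolle(noisy, weight=0.1)  # larger weight => smoother result
```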

  2. Application of image editing software for forensic detection of image ...

    African Journals Online (AJOL)

Application of image editing software for forensic detection of image ... The image editing software available today is apt for creating visually compelling and sophisticated fake images, ...

  3. Regularized non-stationary morphological reconstruction algorithm for weak signal detection in microseismic monitoring: methodology

    Science.gov (United States)

    Huang, Weilin; Wang, Runqiu; Chen, Yangkang

    2018-05-01

Microseismic signals are typically weak compared with the strong background noise. In order to effectively detect weak signals in microseismic data, we propose a mathematical morphology based approach. We decompose the initial data into several morphological multiscale components. For detection of the weak signal, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from its morphological multiscale components. The non-stationary weighting operator can be obtained by solving an inversion problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis. The detailed algorithm framework, parameter selection and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through a comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the proposed technique is applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The results demonstrate that RNMR can improve the detectability of weak microseismic signals. Using the processed data, the short-term-average over long-term-average (STA/LTA) picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1-D to 2-D or higher-dimensional versions.
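
The morphological multiscale decomposition that RNMR reweights can be sketched in a few lines; the inversion-derived non-stationary weights themselves are omitted, so this shows only the decomposition stage.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_multiscale(trace, scales=(3, 5, 9, 15)):
    """Decompose a 1-D trace into multiscale morphological components:
    at each scale, the average of a grey opening and closing acts as a
    morphological smoother, and the removed detail is one component."""
    components, residual = [], np.asarray(trace, dtype=float)
    for s in scales:
        smooth = 0.5 * (grey_opening(residual, size=s) +
                        grey_closing(residual, size=s))
        components.append(residual - smooth)  # detail at this scale
        residual = smooth
    components.append(residual)               # coarsest remainder
    return components
```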

  4. TU-CD-BRA-12: Coupling PET Image Restoration and Segmentation Using Variational Method with Multiple Regularizations

    Energy Technology Data Exchange (ETDEWEB)

    Li, L; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Lu, W [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

Purpose: To propose a new variational method which couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: The partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. Existing segmentation methods usually need prior calibrations to compensate for PVE, and they are highly system-dependent. Taking into account that image restoration and segmentation can promote each other and are tightly coupled, we proposed a variational method to solve the two problems together. Our method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm was used on edges to protect the edge information, and the L2 norm was used to avoid the staircase effect in non-edge areas. The blur kernel was constrained to a Gaussian model parameterized by its variance, and we assumed that the variances in the X-Y and Z directions are different. The energy functional was iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin's lymphoma, and evaluated by the Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of TV and L2 regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 compared with using either the TV or the L2 norm alone. The proposed method was clearly superior to the other tested methods: it has an average DSI and CE of 0.80 and 0.41, while the FCM method, the second best one, has only an average DSI and CE of 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L2 regularizations can further improve the performance of the algorithm. This work was supported in part by National Natural

  5. From Matched Spatial Filtering towards the Fused Statistical Descriptive Regularization Method for Enhanced Radar Imaging

    Directory of Open Access Journals (Sweden)

    Shkvarko Yuriy

    2006-01-01

Full Text Available We address a new approach to solve the ill-posed nonlinear inverse problem of high-resolution numerical reconstruction of the spatial spectrum pattern (SSP) of the backscattered wavefield sources distributed over the remotely sensed scene. An array or synthesized array radar (SAR) that employs digital data signal processing is considered. By exploiting the idea of combining the statistical minimum risk estimation paradigm with numerical descriptive regularization techniques, we address a new fused statistical descriptive regularization (SDR) strategy for enhanced radar imaging. Pursuing such an approach, we establish a family of SDR-related SSP estimators that encompass a manifold of existing beamforming techniques, ranging from the traditional matched filter to robust and adaptive spatial filtering and minimum variance methods.

  6. Smart CMOS image sensor for lightning detection and imaging.

    Science.gov (United States)

    Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor

    2013-03-01

We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potential of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed within the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing allowing efficient localization of a faint lightning pulse on the entire large-format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.

  7. Bypassing the Limits of ℓ1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms; the regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm as a sparsity-inducing regularizer leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to underestimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (the sum of the data-fidelity term and the non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
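
The thesis's first idea, a non-convex penalty whose parameter range still yields a convex overall cost, is exemplified by firm thresholding, the proximal operator of the minimax-concave (MC) penalty. The condition a < 1 below is the illustrative convexity-preserving range for this scalar denoising setup, not a result quoted from the thesis.

```python
import numpy as np

def firm_threshold(y, lam, a):
    """Firm thresholding: prox of the MC penalty. Unlike soft
    thresholding it does not shrink large values, so non-zero
    amplitudes are estimated more accurately; keeping a < 1 keeps
    the scalar cost 0.5*(y-x)^2 + lam*phi(x; a) convex."""
    assert 0.0 < a < 1.0, "non-convexity parameter must preserve convexity"
    y = np.asarray(y, dtype=float)
    mag = np.abs(y)
    x = np.where(mag > lam / a, y, 0.0)  # pass large values through unchanged
    mid = (mag > lam) & (mag <= lam / a)
    x = np.where(mid, np.sign(y) * (mag - lam) / (1.0 - a), x)
    return x
```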

  8. INTEGRATION OF IMAGE-DERIVED AND POS-DERIVED FEATURES FOR IMAGE BLUR DETECTION

    Directory of Open Access Journals (Sweden)

    T.-A. Teo

    2016-06-01

Full Text Available Image quality plays an important role in Unmanned Aerial Vehicle (UAV) applications. Small fixed-wing UAVs suffer from image blur due to crosswind and turbulence. A Position and Orientation System (POS), which provides position and orientation information, is installed on a UAV to enable acquisition of the UAV trajectory. It can be used to calculate the positional and angular velocities when the camera shutter is open. This study proposes a POS-assisted method to detect blurred images. The major steps include feature extraction, blur image detection and verification. In feature extraction, this study extracts different features from the images and POS. The image-derived features include the mean and standard deviation of the image gradient. For POS-derived features, we modify the traditional degree-of-linear-blur (b_linear) method to a degree-of-motion-blur (b_motion) based on the collinearity condition equations and POS parameters. Besides, POS parameters such as positional and angular velocities are also adopted as POS-derived features. In blur detection, this study uses a Support Vector Machine (SVM) classifier and the extracted features (i.e. image information, POS data, b_linear and b_motion) to separate blurred and sharp UAV images. The experiment utilizes the SenseFly eBee UAV system. The number of images is 129. In blur image detection, we use the proposed degree-of-motion-blur and other image features to classify blurred and sharp images. The classification result shows that the overall accuracy using image features alone is only 56%. The integration of image-derived and POS-derived features improved the overall accuracy from 56% to 76% in blur detection. Besides, this study indicates that the performance of the proposed degree-of-motion-blur is better than that of the traditional degree-of-linear-blur.

  9. Sky Detection in Hazy Image.

    Science.gov (United States)

    Song, Yingchao; Luo, Haibo; Ma, Junkai; Hui, Bin; Chang, Zheng

    2018-04-01

Sky detection plays an essential role in various computer vision applications. Most existing sky detection approaches, trained on ideal datasets, may lose efficacy when facing unfavorable conditions like the effects of weather and lighting. In this paper, a novel algorithm for sky detection in hazy images is proposed from the perspective of probing the density of haze. We address the problem by an image segmentation and a region-level classification. To characterize the sky of hazy scenes, we introduce several haze-relevant features that reflect the perceptual haze density and the scene depth. Based on these features, the sky is separated by two imbalanced SVM classifiers and a similarity measurement. Moreover, a sky dataset (named HazySky) with 500 annotated hazy images is built for model training and performance evaluation. To evaluate the performance of our method, we conducted extensive experiments both on our HazySky dataset and on the SkyFinder dataset. The results demonstrate that our method performs better in detection accuracy than previous methods, not only under hazy scenes, but also under other weather conditions.

  10. Early skin tumor detection from microscopic images through image processing

    International Nuclear Information System (INIS)

    Siddiqi, A.A.; Narejo, G.B.; Khan, A.M.

    2017-01-01

This research provides an appropriate detection technique for skin tumors. The work is done using the image processing toolbox of MATLAB. Skin tumors are unwanted skin growths with different causes and varying extents of malignant cells; they represent a condition in which skin cells lose the ability to divide and grow normally. Early detection of a tumor is the most important factor affecting the survival of a patient. Studying the pattern of the skin cells is a fundamental problem in medical image analysis, and the study of skin tumors has been of great interest to researchers. DIP (Digital Image Processing) allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. The literature shows that little work has been done at the cellular scale for images of skin. This research introduces several checks for the early detection of skin tumors using microscopic images, after testing and observing various algorithms. Analytical evaluation shows that the proposed checks are time-efficient techniques appropriate for tumor detection. The algorithm applied provides promising results in less time and with good accuracy. The GUI (Graphical User Interface) generated for the algorithm makes the system user-friendly. (author)

  11. A survey of landmine detection using hyperspectral imaging

    Science.gov (United States)

    Makki, Ihab; Younes, Rafic; Francis, Clovis; Bianchi, Tiziano; Zucchetti, Massimo

    2017-02-01

Hyperspectral imaging is a trending technique in remote sensing that finds application in many different areas, such as agriculture, mapping, target detection, and food quality monitoring. This technique gives the ability to remotely identify the composition of each pixel of the image. Therefore, it is a natural candidate for the purpose of landmine detection, thanks to its inherent safety and fast response time. In this paper, we present the results of several studies that employed hyperspectral imaging for landmine detection, discussing the different signal processing techniques used in this framework for hyperspectral image processing and target detection. Our purpose is to highlight the progress attained in the detection of landmines using hyperspectral imaging and to identify possible perspectives for future work, in order to achieve better detection in real-time operation mode.
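
Among the detectors such surveys typically cover, the classical spectral matched filter is compact enough to sketch: whiten the pixels by the background covariance and correlate with the known target signature. Variable names and the regularization epsilon are illustrative.

```python
import numpy as np

def spectral_matched_filter(cube, target):
    """Classical matched filter for hyperspectral target detection:
    cube is (rows x cols x bands), target is the (bands,) signature.
    Returns a per-pixel detection score map."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    C = np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(b)  # regularized covariance
    w_mf = np.linalg.solve(C, target - mu)               # whitened signature
    scores = (X - mu) @ w_mf / ((target - mu) @ w_mf)    # normalized correlation
    return scores.reshape(h, w)
```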

  12. Mixture models with entropy regularization for community detection in networks

    Science.gov (United States)

    Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang

    2018-04-01

Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures, including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we propose an entropy regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying the network structure contained in a network simultaneously. In the model, by minimizing the entropy of the mixing coefficients of NMM using an EM (expectation-maximization) solution, small clusters containing little information can be discarded step by step. The empirical study on both synthetic and real networks has shown that the proposed EMM model is superior to the state-of-the-art methods.

  13. Detection of Dendritic Spines Using Wavelet-Based Conditional Symmetric Analysis and Regularized Morphological Shared-Weight Neural Networks

    Directory of Open Access Journals (Sweden)

    Shuihua Wang

    2015-01-01

    Full Text Available Identification and detection of dendritic spines in neuron images are of high interest in diagnosis and treatment of neurological and psychiatric disorders (e.g., Alzheimer’s disease, Parkinson’s diseases, and autism. In this paper, we have proposed a novel automatic approach using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks (RMSNN for dendritic spine identification involving the following steps: backbone extraction, localization of dendritic spines, and classification. First, a new algorithm based on wavelet transform and conditional symmetric analysis has been developed to extract backbone and locate the dendrite boundary. Then, the RMSNN has been proposed to classify the spines into three predefined categories (mushroom, thin, and stubby. We have compared our proposed approach against the existing methods. The experimental result demonstrates that the proposed approach can accurately locate the dendrite and accurately classify the spines into three categories with the accuracy of 99.1% for “mushroom” spines, 97.6% for “stubby” spines, and 98.6% for “thin” spines.

  14. Employing image processing techniques for cancer detection using microarray images.

    Science.gov (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes) and extracting raw data from images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, from the extracted data, cancerous cells are recognized. To evaluate the performance of the proposed system, a microarray database is employed which includes breast cancer, myeloid leukemia and lymphoma cases from the Stanford Microarray Database. The results indicate that the proposed system is able to identify the type of cancer from the data set with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Simultaneous macula detection and optic disc boundary segmentation in retinal fundus images

    Science.gov (United States)

    Girard, Fantin; Kavalec, Conrad; Grenier, Sébastien; Ben Tahar, Houssem; Cheriet, Farida

    2016-03-01

    The optic disc (OD) and the macula are important structures in automatic diagnosis of most retinal diseases inducing vision defects such as glaucoma, diabetic or hypertensive retinopathy and age-related macular degeneration. We propose a new method to detect simultaneously the macula and the OD boundary. First, the color fundus images are processed to compute several maps highlighting the different anatomical structures such as vessels, the macula and the OD. Then, macula candidates and OD candidates are found simultaneously and independently using seed detectors identified on the corresponding maps. After selecting a set of macula/OD pairs, the top candidates are sent to the OD segmentation method. The segmentation method is based on local K-means applied to color coordinates in polar space followed by a polynomial fitting regularization step. Pair scores are updated, resulting in the final best macula/OD pair. The method was evaluated on two public image databases: ONHSD and MESSIDOR. The results show an overlapping area of 0.84 on ONHSD and 0.90 on MESSIDOR, which is better than recent state of the art methods. Our segmentation method is robust to contrast and illumination problems and outputs the exact boundary of the OD, not just a circular or elliptical model. The macula detection has an accuracy of 94%, which again outperforms other macula detection methods. This shows that combining the OD and macula detections improves the overall accuracy. The computation time for the whole process is 6.4 seconds, which is faster than other methods in the literature.

  16. Similarity regularized sparse group lasso for cup to disc ratio computation.

    Science.gov (United States)

    Cheng, Jun; Zhang, Zhuo; Tao, Dacheng; Wong, Damon Wing Kee; Liu, Jiang; Baskaran, Mani; Aung, Tin; Wong, Tien Yin

    2017-08-01

Automatic cup-to-disc ratio (CDR) computation from color fundus images has been shown to be promising for glaucoma detection. Over the past decade, many algorithms have been proposed. In this paper, we first review the recent work in the area and then present a novel similarity-regularized sparse group lasso method for automated CDR estimation. The proposed method reconstructs the testing disc image based on a set of reference disc images by integrating the similarity between the testing and reference disc images with the sparse group lasso constraints. The reconstruction coefficients are then used to estimate the CDR of the testing image. The proposed method has been validated using 650 images with manually annotated CDRs. Experimental results show an average CDR error of 0.0616 and a correlation coefficient of 0.7, outperforming other methods. The areas under the curve in the diagnostic test reach 0.843 and 0.837 when manually and automatically segmented discs are used, respectively, again better than other methods.

  17. An investigation on the effects of brand equity, trust, image and customer satisfaction on regular insurance firm customers’ loyalty

    Directory of Open Access Journals (Sweden)

    Hamid Reza Saeednia

    2014-03-01

Full Text Available Brand plays an essential role in the success of most organizations, and it has been considered an organizational asset; therefore, brand management is important in today's organizations. A good brand helps gain new customers and future preferences, which leads to customer retention. Brand loyalty is one of the most important components of brand management: it can raise a firm's market share, and it is closely related to a firm's return on investment and profits. This research investigates the relationships between customer satisfaction, trust, brand equity, brand image and customer loyalty. The study uses a sample of 384 regular customers who use insurance services in Iran. Using the Pearson correlation coefficient as well as structural equation modeling, the study detects positive and meaningful relationships between brand equity and other factors such as customer satisfaction and trust.

  18. Automatic detection and classification of damage zone(s) for incorporating in digital image correlation technique

    Science.gov (United States)

    Bhattacharjee, Sudipta; Deb, Debasis

    2016-07-01

Digital image correlation (DIC) is a technique developed for monitoring surface deformation/displacement of an object under loading conditions. This method is further refined to make it capable of handling discontinuities on the surface of the sample. A damage zone refers to a surface area fractured and opened in the course of loading. In this study, an algorithm is presented to automatically detect multiple damage zones in a deformed image. The algorithm identifies the pixels located inside these zones and eliminates them from the FEM-DIC processes. The proposed algorithm is successfully implemented on several damaged samples to estimate the displacement fields of an object under loading conditions. This study shows that the displacement fields represent the damage conditions reasonably well compared to the regular FEM-DIC technique without considering the damage zones.

  19. Mean magnetic susceptibility regularized susceptibility tensor imaging (MMSR-STI) for estimating orientations of white matter fibers in human brain.

    Science.gov (United States)

    Li, Xu; van Zijl, Peter C M

    2014-09-01

    An increasing number of studies show that magnetic susceptibility in white matter fibers is anisotropic and may be described by a tensor. However, the limited head rotation possible for in vivo human studies leads to an ill-conditioned inverse problem in susceptibility tensor imaging (STI). Here we suggest the combined use of limiting the susceptibility anisotropy to white matter and imposing morphology constraints on the mean magnetic susceptibility (MMS) for regularizing the STI inverse problem. The proposed MMS regularized STI (MMSR-STI) method was tested using computer simulations and in vivo human data collected at 3T. The fiber orientation estimated from both the STI and MMSR-STI methods was compared to that from diffusion tensor imaging (DTI). Computer simulations show that the MMSR-STI method provides a more accurate estimation of the susceptibility tensor than the conventional STI approach. Similarly, in vivo data show that use of the MMSR-STI method leads to a smaller difference between the fiber orientation estimated from STI and DTI for most selected white matter fibers. The proposed regularization strategy for STI can improve estimation of the susceptibility tensor in white matter. © 2014 Wiley Periodicals, Inc.

  20. Surface-based prostate registration with biomechanical regularization

    Science.gov (United States)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected onto ultrasound by using MR-US registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as the boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR-guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when compared to regular surface-based registration.

  1. The Regularized Iteratively Reweighted MAD Method for Change Detection in Multi- and Hyperspectral Data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2007-01-01

This paper describes new extensions to the previously published multivariate alteration detection (MAD) method for change detection in bi-temporal, multi- and hypervariate data such as remote sensing imagery. Much like boosting methods often applied in data mining work, the iteratively reweighted extension assigns high weights to observations that show little change, i.e., for which the sum of squared, standardized MAD variates is small, and small weights are assigned to observations for which the sum is large. Like the original MAD method, the iterative extension is invariant to linear (affine) transformations of the original variables. Examples covering an agricultural region in Kenya, and hyperspectral airborne HyMap data from a small rural area in southeastern Germany, are given. The latter case demonstrates the need for regularization.
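
The core (single-pass) MAD transformation is canonical correlation analysis between the two acquisitions, with change measured by differences of paired canonical variates. The sketch below is unnormalized and omits both the iterated reweighting and the regularization the abstract calls for.

```python
import numpy as np

def mad_variates(X, Y):
    """Single-pass MAD: X and Y are (bands x pixels) matrices from two
    dates. Canonical directions come from the two-sided CCA
    eigenproblem; the MAD variates are differences of paired
    canonical variates."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    n = X.shape[1]
    Sxx, Syy, Sxy = X @ X.T / n, Y @ Y.T / n, X @ Y.T / n
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    vals, vecs = np.linalg.eig(M)
    a = vecs[:, np.argsort(vals.real)].real  # weakest correlation first
    b = np.linalg.solve(Syy, Sxy.T) @ a      # paired directions for Y
    return a.T @ X - b.T @ Y                 # MAD variates (change images)
```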

  2. Detect Image Tamper by Semi-Fragile Digital Watermarking

    Institute of Scientific and Technical Information of China (English)

LIU Feilong; WANG Yangsheng

    2004-01-01

To authenticate the integrity of an image while resisting valid image processing such as JPEG compression, a semi-fragile image watermarking scheme is described. The image name, one of the image features, is used as the key of a pseudo-random function to generate watermarks specific to each image. Watermarks are embedded by changing the relationship between blocks' DCT DC coefficients, and image tampering is detected from the relationship of these DCT DC coefficients. Experimental results show that the proposed technique can resist JPEG compression while still detecting image tampering.
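
A schematic of relationship-based embedding in the spirit of the abstract: encode one bit by forcing an order relation between the DC coefficients of a pair of 8x8 blocks. The paper keys the block pairing and watermark to the image name; the margin below is an arbitrary illustrative value.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block_a, block_b, bit, margin=2.0):
    """Force DC(a) > DC(b) for bit 1 and DC(b) > DC(a) for bit 0 by
    reassigning the two DC values; detection later just compares them."""
    Da = dctn(block_a.astype(float), norm='ortho')
    Db = dctn(block_b.astype(float), norm='ortho')
    lo, hi = sorted((Da[0, 0], Db[0, 0]))
    if bit:
        Da[0, 0], Db[0, 0] = hi + margin, lo
    else:
        Da[0, 0], Db[0, 0] = lo, hi + margin
    return idctn(Da, norm='ortho'), idctn(Db, norm='ortho')
```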

  3. Application and Analysis of Wavelet Transform in Image Edge Detection

    Institute of Scientific and Technical Information of China (English)

Jianfang Gao

    2016-01-01

For image processing, technicians have long sought convenient and simple detection methods, especially innovations in image edge detection technology. Much of the original information of an image is concentrated at its edges, so the real image data can be recovered there during data acquisition. Edges are often used in the case of irregular geometric objects, and the contour of the image is determined by combining them with the transmitted signal data. At the present stage there are different algorithms for image edge detection; however, each type of algorithm has its own disadvantages, so it is difficult to detect image changes within a reasonable range. We apply the wavelet transform to image edge detection, making full use of its high-resolution characteristics and combining multiple images, in order to improve the accuracy of image edge detection.
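
A compact version of the idea: the wavelet detail bands behave like smoothed partial derivatives, so thresholding their modulus marks edges. The percentile threshold is an illustrative choice, not a value from the paper.

```python
import numpy as np
import pywt

def wavelet_edges(image, wavelet='haar', q=95):
    """Mark edge pixels where the modulus of the horizontal/vertical
    wavelet detail coefficients exceeds a percentile threshold."""
    _, (cH, cV, _) = pywt.dwt2(image.astype(float), wavelet)
    modulus = np.hypot(cH, cV)  # gradient-like edge strength
    return modulus > np.percentile(modulus, q)
```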

  4. Minimum detectable gas concentration performance evaluation method for gas leak infrared imaging detection systems.

    Science.gov (United States)

    Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo

    2017-04-01

Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluating the performance of gas leak infrared imaging detection systems due to several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of the MDTD. We propose direct and equivalent methods of calculating the MDGC based on the MDTD measurement system. We built an experimental MDGC measurement system, which indicates that the MDGC model can describe the detection performance of a thermal imaging system for typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and the minimum resolvable gas concentration (MRGC) models can effectively describe the "detection" and "spatial detail resolution" performance of thermal imaging systems for gas leaks, respectively, and constitute the main performance indicators of gas leak detection systems.

  5. Detection of Defects of BGA by Tomography Imaging

    Directory of Open Access Journals (Sweden)

    Tetsuhiro SUMIMOTO

    2005-08-01

Full Text Available To improve the cost performance and reliability of PC boards, inspection of BGA is required in the surface mount process. Types of defects at BGA solder joints are solder bridges, missing connections, solder voids, open connections and mis-registrations of parts. As solder bridges are the most common of these defects, we focus on detecting solder bridges in a production line. The problems of image analysis for the detection of defects at BGA solder joints are the detection accuracy and an image processing time matching the line speed of production. To obtain design data for the development of an inspection system that can be used easily in the surface mount process, it is important to develop image analysis techniques based on X-ray image data. We attempt to detect the characteristics of BGA defects based on image analysis. Using X-ray penetration equipment, we have captured images of an IC package to search for abnormal BGAs. In addition, in order to obtain detailed information on an abnormal BGA, we captured tomographic images utilizing the latest imaging techniques.

  6. Image Denoising and Segmentation Approach to Detect Tumors from Brain MRI Images

    Directory of Open Access Journals (Sweden)

    Shanta Rangaswamy

    2018-04-01

Full Text Available The detection of brain tumors is a challenging problem, due to the structure of the tumor cells in the brain. This project presents a systematic method that enhances the detection of brain tumor cells and analyzes functional structures by training and classification of the samples with an SVM and tumor cell segmentation of the sample using a DWT algorithm. From the input MRI images collected, noise is first removed by applying a Wiener filtering technique. In the image enhancement phase, all color components of the MRI images are converted into a grayscale image and the edges in the image are made clear to obtain better identification and improved image quality. In the segmentation phase, a DWT is performed on the MRI image to segment the grayscale image. During post-processing, classification of the tumor is performed using an SVM classifier. Wiener filtering, DWT and SVM segmentation strategies were used to find and group the tumor position in the filtered MRI picture, respectively. An essential observation in this work is that the multi-stage approach utilizes a hierarchical classification strategy, which improves performance significantly. This technique reduces the computational complexity in both time and memory. The classification strategy works accurately on all images and has achieved an accuracy of 93%.

  7. Strong lensing of a regular black hole with an electrodynamics source

    Science.gov (United States)

    Manna, Tuhina; Rahaman, Farook; Molla, Sabiruddin; Bhadra, Jhumpa; Shah, Hasrat Hussain

    2018-05-01

In this paper we have investigated the gravitational lensing phenomenon in the strong field regime for a regular, charged, static black hole with a non-linear electrodynamics source. We have obtained the angle of deflection and compared it to those of a Schwarzschild black hole and a Reissner-Nordström black hole with similar properties. We have also carried out a graphical study of the relativistic image positions and magnifications. We hope that this method may be useful in the detection of non-luminous bodies like the present black hole.

  8. Natural-pose hand detection in low-resolution images

    Directory of Open Access Journals (Sweden)

Nyan Bo Bo

    2009-07-01

Full Text Available Robust real-time hand detection and tracking in video sequences would enable many applications in areas as diverse as human-computer interaction, robotics, security and surveillance, and sign language-based systems. In this paper, we introduce a new approach for detecting human hands that works on single, cluttered, low-resolution images. Our prototype system, which is primarily intended for security applications in which the images are noisy and low-resolution, is able to detect hands as small as 24×24 pixels in cluttered scenes. The system uses grayscale appearance information to classify image sub-windows as either containing or not containing a human hand very rapidly, at the cost of a high false positive rate. To improve on the false positive rate of the main classifier without affecting its detection rate, we introduce a post-processor system that utilizes the geometric properties of skin color blobs. When we test our detector on a test image set containing 106 hands, 92 of those hands are detected (an 86.8% detection rate), with an average false positive rate of 1.19 false positive detections per image. The rapid detection speed, the high detection rate of 86.8%, and the low false positive rate together ensure that our system is usable as the main detector in a diverse variety of applications requiring robust hand detection and tracking in low-resolution, cluttered scenes.

  9. An improved computing method for the image edge detection

    Institute of Scientific and Technical Information of China (English)

    Gang Wang; Liang Xiao; Anzhi He

    2007-01-01

The framework for detecting image edges based on the sub-pixel multi-fractal measure (SPMM) is presented. The measure, which gives the sub-pixel local distribution of the image gradient, is defined. A more precise singularity exponent for every pixel can be obtained by performing the SPMM analysis on the image. Using the singularity exponents and the multi-fractal spectrum of the image, the image can be segmented into a series of sets with different singularity exponents; thus the image edges can be detected automatically and easily. The simulation results show that the SPMM has a higher quality factor in image edge detection.

  10. Image denoising based on noise detection

    Science.gov (United States)

    Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen

    2018-03-01

Because of noise points in images, any denoising operation changes the original information of non-noise pixels. A noise detection algorithm based on fractional calculus is proposed for denoising in this paper. First, the image is convolved to obtain directional gradient masks. Then, the mean gray level is calculated to obtain the gradient detection maps. Next, a logical product is taken to acquire a noise position image. Comparing the visual effect and evaluation parameters after processing, the experimental results show that denoising algorithms based on noise detection are better than traditional methods in both subjective and objective aspects.

  11. Image Fusion-Based Land Cover Change Detection Using Multi-Temporal High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Biao Wang

    2017-08-01

    Full Text Available Change detection is usually treated as a problem of explicitly detecting land cover transitions in satellite images obtained at different times, and helps with emergency response and government management. This study presents an unsupervised change detection method based on the image fusion of multi-temporal images. The main objective of this study is to improve the accuracy of unsupervised change detection from high-resolution multi-temporal images. Our method effectively reduces change detection errors, since spatial displacement and spectral differences between multi-temporal images are evaluated. To this end, a total of four cross-fused images are generated with multi-temporal images, and the iteratively reweighted multivariate alteration detection (IR-MAD method—a measure for the spectral distortion of change information—is applied to the fused images. In this experiment, the land cover change maps were extracted using multi-temporal IKONOS-2, WorldView-3, and GF-1 satellite images. The effectiveness of the proposed method compared with other unsupervised change detection methods is demonstrated through experimentation. The proposed method achieved an overall accuracy of 80.51% and 97.87% for cases 1 and 2, respectively. Moreover, the proposed method performed better when differentiating the water area from the vegetation area compared to the existing change detection methods. Although the water area beneath moderate and sparse vegetation canopy was captured, vegetation cover and paved regions of the water body were the main sources of omission error, and commission errors occurred primarily in pixels of mixed land use and along the water body edge. Nevertheless, the proposed method, in conjunction with high-resolution satellite imagery, offers a robust and flexible approach to land cover change mapping that requires no ancillary data for rapid implementation.

  12. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

A method that incorporates an edge detection technique, Markov Random Field (MRF), watershed segmentation and merging techniques is presented for performing image segmentation and edge detection tasks. It first applies an edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmentation result is obtained based on K-means clustering and the minimum distance. Then the region process is modeled by an MRF to obtain an image that contains different intensity regions. The gradient values are calculated and the watershed technique is applied. The DIS calculation is carried out for each pixel to define all the edges (weak or strong) in the image, yielding the DIS map. This serves as prior knowledge of the likelihood of region segmentation for the next step (MRF), which gives an image that has all the edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved by using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The edge map is obtained using a merge process based on averaged intensity mean values. Common edge detectors that work on the (MRF) segmented image are used and the results are compared. The segmentation and edge detection result is one closed boundary per actual region in the image.

  13. Image Processing Methods Usable for Object Detection on the Chessboard

    Directory of Open Access Journals (Sweden)

    Beran Ladislav

    2016-01-01

Full Text Available Image segmentation and object detection are challenging problems in many areas of research. Although many algorithms for image segmentation have been invented, there is no single simple algorithm for image segmentation and object detection. Our research is based on a combination of several methods for object detection. The first method suitable for image segmentation and object detection is color detection. This method is very simple, but there are problems with varying colors: the color of the segmented object must be precisely determined before all calculations, and in many cases it must be determined manually. An alternative simple method is based on background removal, i.e., the difference between a reference image and the detected image, as sketched below. In this paper several methods suitable for object detection are described. This research is focused on colored object detection on a chessboard. The results of this research, fused with neural networks, will be applied to a user-computer game of checkers.
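
The background-removal method mentioned above reduces to a difference-and-threshold against an empty-board reference; a toy sketch in which the threshold and image names are illustrative:

```python
import numpy as np
from scipy.ndimage import label

def detect_pieces(reference, current, thresh=30):
    """Difference the empty-board reference against the current frame;
    pixels whose absolute change exceeds the threshold form the mask."""
    diff = np.abs(current.astype(int) - reference.astype(int))
    return diff > thresh

# toy usage with synthetic 8-bit frames
ref = np.full((480, 480), 120, dtype=np.uint8)
cur = ref.copy()
cur[100:140, 100:140] = 30                           # a dark "piece"
regions, n_objects = label(detect_pieces(ref, cur))  # candidate objects
```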

  14. Object detection from images obtained through underwater turbulence medium

    Science.gov (United States)

    Furhad, Md. Hasan; Tahtali, Murat; Lambert, Andrew

    2017-09-01

Underwater imaging experiences severe distortions due to random fluctuations of temperature and salinity in the water, which produce underwater turbulence and diffraction-limited blur. Light reflecting from objects is perturbed and its contrast attenuated, making the recognition of objects of interest difficult. Thus, detecting underwater objects of interest becomes a challenging task, as there is inherent confusion among the background, foreground and other image properties. In this paper, a saliency-based approach is proposed to detect objects acquired through an underwater turbulent medium. Saliency has drawn attention in a wide range of computer vision applications, such as image retrieval, artificial intelligence, neuro-imaging and object detection. The image is first processed through a deblurring filter. Next, a saliency technique is used on the image for object detection: a saliency map that highlights the target regions is generated, and then a graph-based model is proposed to extract these target regions.
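
As a stand-in for the saliency stage (the paper's own graph-based model is not reproduced), the classical spectral-residual detector of Hou and Zhang illustrates how a saliency map highlighting target regions can be computed:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_residual_saliency(image):
    """Spectral-residual saliency: suppress the locally averaged
    log-amplitude spectrum, keep the phase, and invert the FFT; salient
    regions stand out in the squared magnitude of the result."""
    F = np.fft.fft2(image.astype(float))
    log_amp = np.log(np.abs(F) + 1e-9)
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(F)))) ** 2
    return sal / sal.max()
```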

  15. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
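
Stripped to its essentials, the approach alternates a descent step on the data term with a projection onto the convex set of regularized candidates. The toy below uses projected gradient descent on a small least-squares problem with a nonnegativity set; it is a simplification under stated assumptions, and all names and numbers are illustrative.

```python
import numpy as np

def incremental_projection_descent(grad, project, x0, step=0.1, n_iter=200):
    """Each iterate takes a gradient step on the data term only, then is
    projected onto the convex subspace of regularized candidates."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = project(x - step * grad(x))
    return x

# toy problem: least squares with a nonnegativity constraint set
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A.T @ (A @ x - b)
project = lambda x: np.clip(x, 0.0, None)
x_hat = incremental_projection_descent(grad, project, np.zeros(2))
```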

  16. Human Body Image Edge Detection Based on Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    李勇; 付小莉

    2003-01-01

    Human dresses are different in thousands way.Human body image signals have big noise, a poor light and shade contrast and a narrow range of gray gradation distribution. The application of a traditional grads method or gray method to detect human body image edges can't obtain satisfactory results because of false detections and missed detections. According to tte peculiarity of human body image, dyadic wavelet transform of cubic spline is successfully applied to detect the face and profile edges of human body image and Mallat algorithm is used in the wavelet decomposition in this paper.

  17. Segmentation of Brain Tissues from Magnetic Resonance Images Using Adaptively Regularized Kernel-Based Fuzzy C-Means Clustering

    Directory of Open Access Journals (Sweden)

    Ahmed Elazab

    2015-01-01

    Full Text Available An adaptively regularized kernel-based fuzzy C-means clustering framework is proposed for the segmentation of brain magnetic resonance images. The framework takes the form of three algorithms, in which the local average grayscale is replaced by the grayscale of the average-filtered, median-filtered, or devised weighted image, respectively. The algorithms employ the heterogeneity of grayscales in the neighborhood, exploit this measure for local contextual information, and replace the standard Euclidean distance with Gaussian radial basis kernel functions. The main advantages are adaptiveness to local context, enhanced robustness that preserves image details, independence from clustering parameters, and decreased computational costs. The algorithms have been validated against both synthetic and clinical magnetic resonance images with different types and levels of noise and compared with 6 recent soft clustering algorithms. Experimental results show that the proposed algorithms are superior in preserving image details and segmentation accuracy while maintaining a low computational complexity.
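
    For orientation, a plain kernel fuzzy C-means sketch on a 1-D grayscale vector is given below; it omits the paper's adaptive spatial regularization and local-average replacement, which is where the three proposed algorithms differ.

        import numpy as np

        def kernel_fcm(x, c=3, m=2.0, sigma=50.0, iters=50, seed=0):
            """Cluster gray values x (1-D array) with a Gaussian RBF kernel."""
            rng = np.random.default_rng(seed)
            v = rng.choice(x, size=c, replace=False)  # initial cluster centers
            for _ in range(iters):
                k = np.exp(-((x[None, :] - v[:, None]) ** 2) / sigma**2)  # (c, n)
                d = np.clip(1.0 - k, 1e-12, None)  # kernel-induced distance
                u = d ** (-1.0 / (m - 1.0))
                u /= u.sum(axis=0, keepdims=True)  # fuzzy memberships per pixel
                w = (u ** m) * k
                v = (w @ x) / w.sum(axis=1)        # kernel-weighted center update
            return u, v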

  18. Three regularities of recognition memory: the role of bias.

    Science.gov (United States)

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, and (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) that alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.

  19. Detecting content adaptive scaling of images for forensic applications

    Science.gov (United States)

    Fillion, Claude; Sharma, Gaurav

    2010-01-01

    Content-aware resizing methods have recently been developed, among which seam-carving has achieved the most widespread use. Seam-carving's versatility enables both deliberate object removal and benign image resizing in which perceptually important content is preserved. Both types of modification compromise the utility and validity of the modified images as evidence in legal and journalistic applications. It is therefore desirable that image forensic techniques detect the presence of seam-carving. In this paper we address the detection of seam-carving for forensic purposes. As in other forensic applications, we pose the problem of seam-carving detection as the problem of classifying a test image into one of two classes: a) seam-carved or b) non-seam-carved. We adopt a pattern recognition approach in which a set of features is extracted from the test image and a Support Vector Machine based classifier, trained over a set of images, is used to determine which of the two classes the test image belongs to. Based on our study of the seam-carving algorithm, we propose a set of intuitively motivated features for the detection of seam-carving. Our methodology is then evaluated over a test database of images. We demonstrate that the proposed method provides the capability of detecting seam-carving with high accuracy: for images which have been reduced by 30% through benign seam-carving, our method provides a classification accuracy of 91%.
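
    A hedged sketch of the pattern-recognition pipeline follows. The feature extractor (simple gradient-energy statistics) is an illustrative assumption; the paper derives its own intuitively motivated feature set from the seam-carving algorithm.

        import numpy as np
        from sklearn.svm import SVC

        def energy_features(gray):
            """A few global statistics of the gradient-energy map."""
            gy, gx = np.gradient(gray.astype(float))
            e = np.abs(gx) + np.abs(gy)
            cheapest_column = e.sum(axis=0).min()  # crude vertical-seam proxy
            return [e.mean(), e.std(), cheapest_column, np.percentile(e, 95)]

        def train_detector(images, labels):  # labels: 1 = seam-carved, 0 = original
            X = np.array([energy_features(im) for im in images])
            return SVC(kernel="rbf").fit(X, labels)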

  20. Lung Segmentation in Thorax X-Ray Images Using Distance Regularized Level Set Evolution (DRLSE)

    Directory of Open Access Journals (Sweden)

    M Amin Hariyadi

    2017-03-01

    Full Text Available The lungs control the circulation of air (oxygen) in the human body, so detection of disorders of the human respiratory system is urgently needed. X-ray beams are used to detect disturbances in the lungs; the resulting thorax X-ray image contains information that is used to analyze and determine the shape of the lungs, and a segmentation process is needed to obtain this information. This study uses the Distance Regularized Level Set Evolution (DRLSE) method, a region-based model that improves on edge-based models. The purpose of this study is to implement DRLSE segmentation of the lungs from thorax X-ray images. Trials of the DRLSE-based system on 20 thorax X-ray images obtained an average accuracy of 87.90%, a sensitivity of 76.27% and a specificity of 93.98%.

  1. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
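
    A small numerical illustration (not from the paper) of the likelihood-ratio decision axis in the simplest equal-variance Gaussian case, exhibiting the mirror effect: strengthening the "old" distribution raises hits and simultaneously lowers false alarms.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)

        def hit_fa_rates(d_prime, n=200_000, criterion=1.0):
            old = rng.normal(d_prime, 1.0, n)  # strengths of studied items
            new = rng.normal(0.0, 1.0, n)      # strengths of unstudied items
            lr = lambda x: norm.pdf(x, d_prime, 1.0) / norm.pdf(x, 0.0, 1.0)
            return (lr(old) > criterion).mean(), (lr(new) > criterion).mean()

        hit_weak, fa_weak = hit_fa_rates(d_prime=1.0)
        hit_strong, fa_strong = hit_fa_rates(d_prime=2.0)
        print(hit_strong > hit_weak, fa_strong < fa_weak)  # mirror pattern: True True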

  2. Adapting Local Features for Face Detection in Thermal Image

    Directory of Open Access Journals (Sweden)

    Chao Ma

    2017-11-01

    Full Text Available A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because the facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to get cascade classifiers with multiple types of local features. These feature types have different advantages, and in this way we enhance the description power of the local features. We did a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained on different sets of the features. The experiment results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images, and give a discussion based on the results.
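
    A hedged sketch of the first idea, a Multi-Block LBP code with a comparison margin: a neighboring block must differ from the central block mean by more than a margin t before it flips a bit, which damps image noise. Block size, margin and the wrap-around neighbor lookup are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def mblbp_code(gray, block=3, t=4.0):
            """Per-pixel 8-bit MB-LBP code using block means and a noise margin."""
            m = uniform_filter(gray.astype(float), size=block)  # block-mean image
            offsets = [(-block, -block), (-block, 0), (-block, block), (0, block),
                       (block, block), (block, 0), (block, -block), (0, -block)]
            code = np.zeros(m.shape, dtype=np.uint8)
            for bit, (dy, dx) in enumerate(offsets):
                neighbor = np.roll(m, shift=(dy, dx), axis=(0, 1))
                code |= ((neighbor - m) > t).astype(np.uint8) << bit
            return code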

  3. The regularized monotonicity method: detecting irregular indefinite inclusions

    DEFF Research Database (Denmark)

    Garde, Henrik; Staboulis, Stratos

    2018-01-01

    inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions...

  4. Global regularizing flows with topology preservation for active contours and polygons.

    Science.gov (United States)

    Sundaramoorthi, Ganesh; Yezzi, Anthony

    2007-03-01

    Active contour and active polygon models have been used widely for image segmentation. In some applications, the topology of the object(s) to be detected from an image is known a priori, despite a complex unknown geometry, and it is important that the active contour or polygon maintain the desired topology. In this work, we construct a novel geometric flow that can be added to image-based evolutions of active contours and polygons in order to preserve the topology of the initial contour or polygon. We emphasize that, unlike other methods for topology preservation, the proposed geometric flow continually adjusts the geometry of the original evolution in a gradual and graceful manner, preventing a topology change long before the curve or polygon comes close to one. The flow also serves as a global regularity term for the evolving contour, and has smoothness properties similar to curvature flow. These properties of gradually adjusting the original flow and global regularization prevent the geometrical inaccuracies common with simple discrete topology preservation schemes. The proposed topology-preserving geometric flow is the gradient flow arising from an energy that is based on electrostatic principles. The evolution of a single point on the contour depends on all other points of the contour, which is different from traditional curve evolutions in the computer vision literature.

  5. Hemorrhage detection in MRI brain images using images features

    Science.gov (United States)

    Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela

    2013-11-01

    The abnormalities appear frequently on Magnetic Resonance Images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain using a selection of the most valuable texture features in order to discriminate between normal and affected areas of the brain. Due to the textural similarity between normal and affected areas in brain MR images, this operation is very challenging. A trauma may cause microstructural changes which are not necessarily perceptible by visual inspection but can be detected by texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, the de-noising operation is performed using Daubechies wavelets; ii) the original images are transformed into image features using first-order descriptors; iii) the regions of interest (ROIs) are cropped from the feature images following the axial symmetry properties with respect to the mid-sagittal plane; iv) the variation in the measured features is quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the meaningfulness of the image features is analyzed using the t-test method; p-values are applied to pairs of features in order to measure their efficacy.
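
    A minimal sketch of step iv is given below, assuming scikit-image >= 0.19 (older releases spell the functions greycomatrix/greycoprops): co-occurrence energy and homogeneity computed per region of interest.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(roi_u8):
            """Energy and homogeneity of an 8-bit ROI's co-occurrence matrix."""
            glcm = graycomatrix(roi_u8, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            return (graycoprops(glcm, "energy").mean(),
                    graycoprops(glcm, "homogeneity").mean())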

  6. Landuse change detection in a surface coal mine area using multi-temporal high resolution satellite images

    Energy Technology Data Exchange (ETDEWEB)

    Demirel, N.; Duzgun, S.; Kemal Emil, M. [Middle East Technical Univ., Ankara (Turkey). Dept. of Mining Engineering

    2010-07-01

    Changes in the landcover and landuse of a mine area can be caused by surface mining activities, exploitation of ore, and stripping and dumping of overburden. In order to identify the long-term impacts of mining on the environment and land cover, these changes must be continuously monitored. A facility to regularly observe the progress of surface mining and reclamation is important for effective enforcement of mining and environmental regulations. Remote sensing provides a powerful tool to obtain rigorous data and reduce the need for time-consuming and expensive field measurements. The purpose of this study was to conduct post-classification change detection for identifying, quantifying, and analyzing the spatial response of the landscape to surface lignite coal mining activities in Goynuk, Bolu, Turkey, from 2004 to 2008. The paper presented the research algorithm, which involved acquiring multi-temporal high resolution satellite data; preprocessing the data; performing image classification using the maximum likelihood classification algorithm and assessing the accuracy of the classification results; applying a post-classification change detection algorithm; and analyzing the results. Specifically, the paper discussed the study area, data and methodology, and image preprocessing using radiometric correction. Image classification and change detection were also discussed. It was concluded that the mine and dump area decreased by 192.5 ha from 2004 to 2008, caused by the diminishing reserves in the area and a decline in the required production. 5 refs., 2 tabs., 4 figs.
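
    A toy sketch of the post-classification comparison step (illustrative, not the authors' GIS workflow): the two classified maps are cross-tabulated into a from-to change matrix.

        import numpy as np

        def change_matrix(labels_2004, labels_2008, n_classes):
            """Pixel counts moving from class i (2004) to class j (2008)."""
            idx = labels_2004.ravel() * n_classes + labels_2008.ravel()
            counts = np.bincount(idx, minlength=n_classes**2)
            return counts.reshape(n_classes, n_classes)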

  7. Gamma regularization based reconstruction for low dose CT

    International Nuclear Information System (INIS)

    Zhang, Junfeng; Chen, Yang; Hu, Yining; Luo, Limin; Shu, Huazhong; Li, Bicao; Liu, Jin; Coatrieux, Jean-Louis

    2015-01-01

    Reducing the radiation in computerized tomography is today a major concern in radiology. Low dose computerized tomography (LDCT) offers a sound way to deal with this problem. However, more severe noise in the reconstructed CT images is observed under low dose scan protocols (e.g. lowered tube current or voltage values). In this paper we propose a Gamma regularization based algorithm for LDCT image reconstruction. This solution is flexible and provides a good balance between the regularizations based on the l0-norm and l1-norm. We evaluate the proposed approach using the projection data from simulated phantoms and scanned Catphan phantoms. Qualitative and quantitative results show that the Gamma regularization based reconstruction can perform better in both edge-preserving and noise suppression when compared with other norms. (paper)

  8. Fuzzy Logic Based Edge Detection in Smooth and Noisy Clinical Images.

    Directory of Open Access Journals (Sweden)

    Izhar Haq

    Full Text Available Edge detection has beneficial applications in fields such as machine vision, pattern recognition and biomedical imaging. Edge detection highlights the high-frequency components in an image and is a challenging task, which becomes more arduous for noisy images. This study focuses on fuzzy logic based edge detection in smooth and noisy clinical images. The proposed method (in noisy images) employs a 3 × 3 mask guided by a fuzzy rule set. Moreover, in the case of smooth clinical images, an extra contrast-adjustment mask is integrated with the edge detection mask to intensify the smooth images. The developed method was tested on noise-free, smooth and noisy images, and the results were compared with other established edge detection techniques like Sobel, Prewitt, Laplacian of Gaussian (LOG), Roberts and Canny. When the developed edge detection technique was applied to a smooth clinical image of size 270 × 290 pixels with 24 dB 'salt and pepper' noise, it detected very few (22) false edge pixels, compared to Sobel (1931), Prewitt (2741), LOG (3102), Roberts (1451) and Canny (1045). Therefore it is evident that the developed method offers an improved solution to the edge detection problem in smooth and noisy clinical images.
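
    A hedged sketch of fuzzy-rule edge detection on a 3 × 3 neighborhood follows; the Sobel-like masks, sigmoid memberships and single OR-rule are illustrative, and the paper's actual rule set is richer.

        import numpy as np
        from scipy.ndimage import convolve

        SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

        def fuzzy_edges(gray, spread=20.0, cut=0.5):
            g = gray.astype(float)
            gx = convolve(g, SOBEL_X / 4.0)
            gy = convolve(g, SOBEL_X.T / 4.0)
            # Fuzzy membership "gradient is high", via a smooth sigmoid.
            mu_x = 1.0 / (1.0 + np.exp(-(np.abs(gx) - spread) / (spread / 4)))
            mu_y = 1.0 / (1.0 + np.exp(-(np.abs(gy) - spread) / (spread / 4)))
            mu_edge = np.maximum(mu_x, mu_y)  # rule: edge IF gx high OR gy high
            return mu_edge > cut              # defuzzify with a fixed cut level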

  9. Defect detection and sizing in ultrasonic imaging

    International Nuclear Information System (INIS)

    Moysan, J.; Benoist, P.; Chapuis, N.; Magnin, I.

    1991-01-01

    This paper introduces image processing developed with the SPARTACUS system in the field of ultrasonic testing. The aim of the image processing is to detect and separate defect echoes from background noise. Image segmentation and the particularities of ultrasonic images form the basis of the studied methods. 4 figs.; 6 refs [fr]

  10. Advanced concepts in multi-dimensional radiation detection and imaging

    International Nuclear Information System (INIS)

    Vetter, Kai; Barnowski, Ross; Pavlovsky, Ryan; Haefner, Andy; Torii, Tatsuo; Shikaze, Yoshiaki; Sanada, Yukihisa

    2016-01-01

    Recent developments in detector fabrication, signal readout, and data processing enable new concepts in radiation detection that are relevant for applications ranging from fundamental physics to medicine as well as nuclear security and safety. We present recent progress in multi-dimensional radiation detection and imaging in the Berkeley Applied Nuclear Physics program. It is based on the ability to reconstruct scenes in three dimensions and fuse them with gamma-ray image information. We are using the High-Efficiency Multimode Imager HEMI in its Compton imaging mode and combining it with contextual sensors such as the Microsoft Kinect or visual cameras. This new concept of volumetric imaging, or scene data fusion, provides unprecedented capabilities in radiation detection and imaging relevant to the detection and mapping of radiological and nuclear materials. This concept brings us one step closer to seeing the world with gamma-ray eyes. (author)

  11. THE EFFECT OF IMAGE ENHANCEMENT METHODS DURING FEATURE DETECTION AND MATCHING OF THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    O. Akcay

    2017-05-01

    Full Text Available Successful image matching is essential for an accurate automatic photogrammetric process. Feature detection, extraction and matching algorithms perform well on high-resolution images. However, images from cameras equipped with low-resolution thermal sensors are problematic for the current algorithms. In this paper, some digital image processing techniques were applied to low-resolution images taken with an Optris PI 450 lightweight thermal camera (382 x 288 pixel optical resolution) to increase extraction and matching performance. Image enhancement methods that adjust low-quality digital thermal images were used to produce images more suitable for detection and extraction. Three main digital image processing techniques, histogram equalization, high-pass and low-pass filters, were considered to increase the signal-to-noise ratio, sharpen the image, and remove noise, respectively. The pre-processed images were then evaluated using the current image detection and feature extraction methods, the Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF) algorithms. The results showed that some enhancement methods increased the number of extracted features and decreased blunder errors during image matching. Consequently, the effects of the different pre-processing techniques are compared in the paper.
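
    A minimal enhance-then-detect sketch, assuming OpenCV: histogram equalization stands in for the enhancement step and MSER for the detector (SURF requires the opencv-contrib build and is omitted).

        import cv2

        def enhance_and_detect(thermal_gray_u8):
            equalized = cv2.equalizeHist(thermal_gray_u8)      # stretch low contrast
            smoothed = cv2.GaussianBlur(equalized, (3, 3), 0)  # low-pass: denoise
            mser = cv2.MSER_create()
            regions, _ = mser.detectRegions(smoothed)
            return equalized, regions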

  12. Effect of image quality on calcification detection in digital mammography

    Energy Technology Data Exchange (ETDEWEB)

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C. [National Co-ordinating Centre for the Physics of Mammography, Royal Surrey County Hospital NHS Foundation Trust, Guildford GU2 7XX, United Kingdom and Department of Physics, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, GU2 7XH (United Kingdom); Jarvis Breast Screening and Diagnostic Centre, Guildford GU1 1LJ (United Kingdom); Department of Radiology, St. George's Healthcare NHS Trust, Tooting, London SW17 0QT (United Kingdom); Cambridge Breast Unit, Cambridge University Hospitals NHS Foundation Trust, Cambridge CB2 0QQ, United Kingdom and NIHR Cambridge Biomedical Research Centre, Cambridge CB2 0QQ (United Kingdom); Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15210 (United States); National Co-ordinating Centre for the Physics of Mammography, Royal Surrey County Hospital NHS Foundation Trust, Guildford GU2 7XX, United Kingdom and Department of Physics, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH (United Kingdom); University Hospitals Leuven, Herestraat 49, 3000 Leuven (Belgium); National Co-ordinating Centre for the Physics of Mammography, Royal Surrey County Hospital NHS Foundation Trust, Guildford GU2 7XX, United Kingdom and Department of Physics, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford GU2 7XH (United Kingdom)

    2012-06-15

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC

  13. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

    Full Text Available This paper proposes a presentation of the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The Platonic and Archimedean polyhedra are modeled and unfolded using the 3ds Max program. This paper is intended as an example of descriptive geometry applications.

  14. Coarse to fine aircraft detection from front-looking infrared images

    Science.gov (United States)

    Lin, Jin; Tan, Yihua; Tian, Jinwen

    2018-03-01

    Due to the weak features and wide viewing angle of long-distance aircraft targets on the parking apron in front-looking infrared images, false alarms are common in aircraft target detection, which leads to relatively poor reliability of the detection results. In this paper, we present a scene-driven coarse-to-fine aircraft target detection method. First, we preprocess the image by combining its sharpened and enhanced versions. Second, the region of interest (ROI) is segmented using the local mean and variance of the image and a series of subsequent processing steps. Then, target candidate areas are located using features of the local marginal distributions. Lastly, aircraft are detected accurately by a novel aircraft shape filter. Experiments on three infrared image sequences have shown that the presented method is effective and robust in detecting long-distance aircraft from front-looking infrared images and can also improve the reliability of the detection results.

  15. Effect of image quality on calcification detection in digital mammography

    International Nuclear Information System (INIS)

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-01-01

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC

  16. Effect of image quality on calcification detection in digital mammography.

    Science.gov (United States)

    Warren, Lucy M; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M; Wallis, Matthew G; Chakraborty, Dev P; Dance, David R; Bosmans, Hilde; Young, Kenneth C

    2012-06-01

    This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC (AFROC) area decreased from

  17. Can missed breast cancer be recognized by regular peer auditing on screening mammography?

    Science.gov (United States)

    Pan, Huay-Ben; Yang, Tsung-Lung; Hsu, Giu-Cheng; Chiang, Chia-Ling; Huang, Jer-Shyung; Chou, Chen-Pin; Wang, Yen-Chi; Liang, Huei-Lung; Lee, San-Kan; Chou, Yi-Hong; Wong, Kam-Fai

    2012-09-01

    This study was conducted to investigate whether detectable missed breast cancers could be distinguished from truly false negative images in a mammographic screening by regular peer auditing. Between 2004 and 2007, a total of 311,193 free nationwide biennial mammographic screenings were performed for 50- to 69-year-old women in Taiwan. Retrospectively comparing the records in Taiwan's Cancer Registry, 1283 cancers were detected (4.1 per 1000). Of the total, 176 (0.6 per 1000) initial mammographic negative assessments were reported to have cancers (128 traditional films and 48 laser-printed digital images). We selected 186 true negative films (138 traditional films and 48 laser-printed ones) as a control group. These were seeded into the 4815 films of 2008 images to be audited in 2009. Thirty-four auditors interpreted all the films in a single-blind, randomized, paired-control study. The performance of the 34 auditors was analyzed by chi-square test. Regular auditing should reduce the probability of missed cancers.

  18. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging

    Science.gov (United States)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.
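
    The core trick can be sketched with SciPy, under stated assumptions: right prior-conditioning solves min ||A R y - b|| with a Krylov method and maps back x = R y, so the prior R (here a simple diagonal of per-pixel weights, e.g. taken from a beamformed "dirty" image) shapes the solution while early stopping regularizes.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, lsqr

        def prior_conditioned_solve(A, b, r_diag, iters=50):
            """A: (m, n) measurement matrix; r_diag: positive prior weights."""
            m, n = A.shape
            AR = LinearOperator((m, n),
                                matvec=lambda y: A @ (r_diag * y),
                                rmatvec=lambda z: r_diag * (A.T @ z))
            y = lsqr(AR, b, iter_lim=iters)[0]  # early stopping regularizes
            return r_diag * y                   # map back: x = R y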

  19. Total variation regularization for fMRI-based prediction of behavior

    Science.gov (United States)

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-01-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional MRI (fMRI) data, that provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioural variables from brain activation patterns better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the ℓ1 norm of the image gradient, a.k.a. its Total Variation (TV), as regularization. We apply for the first time this method to fMRI data, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification. PMID:21317080

  20. Surface Distresses Detection of Pavement Based on Digital Image Processing

    OpenAIRE

    Ouyang, Aiguo; Luo, Chagen; Zhou, Chao

    2010-01-01

    International audience; Pavement cracks are the main form of early pavement distress. The use of digital photography to record pavement images and subsequent crack detection and classification has undergone continuous improvement over the past decade. Digital image processing has been applied to pavement crack detection for its advantages of large information content and automatic detection. The applications of digital image processing in pavement crack detection, distresses classificati...

  1. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2013-09-22

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: 1) LSM with a reflectivity model common to all the plane-wave gathers provides the best image when the migration velocity model is accurate, but is more sensitive to velocity errors; 2) the regularized plane-wave LSM is more robust in the presence of velocity errors; and 3) LSM achieves both computational and I/O savings by plane-wave encoding compared to shot-domain LSM for the models tested.

  2. Directional Total Generalized Variation Regularization for Impulse Noise Removal

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas; Dong, Yiqiu

    2017-01-01

    this regularizer for directional images is highly advantageous. In order to estimate directions in impulse noise corrupted images, which is much more challenging compared to Gaussian noise corrupted images, we introduce a new Fourier transform-based method. Numerical experiments show that this method is more...

  3. Roi Detection and Vessel Segmentation in Retinal Image

    Science.gov (United States)

    Sabaz, F.; Atila, U.

    2017-11-01

    Diabetes disrupts vision by affecting the structure of the eye and eventually leads to loss of sight. Depending on the stage of the disease, called diabetic retinopathy, sudden vision loss and blurred vision may occur. Automated detection of vessels in retinal images is useful for diagnosing eye diseases, disease classification and other clinical studies. The shape and structure of the vessels give information about the severity and stage of the disease. Automatic, fast vessel detection allows a quick diagnosis and an early start of the treatment process. ROI detection and vessel extraction methods for retinal images are presented in this study. It is shown that the Frangi filter used in image processing can be applied successfully to the detection and extraction of vessels.
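
    A short vesselness sketch, assuming scikit-image's Frangi filter as in the record; the choice of the green channel and the crude threshold are common conventions assumed here.

        import numpy as np
        from skimage.filters import frangi

        def vessel_mask(rgb_fundus):
            green = rgb_fundus[..., 1].astype(float)    # vessels contrast best in green
            vesselness = frangi(green / green.max())    # ridge/vessel enhancement
            return vesselness > 10 * vesselness.mean()  # crude binary vessel map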

  4. Handbook of particle detection and imaging

    CERN Document Server

    Buvat, Irène

    2012-01-01

    The handbook centers on detection techniques in the field of particle physics, medical imaging and related subjects. It is structured into three parts. The first one is dealing with basic ideas of particle detectors, followed by applications of these devices in high energy physics and other fields. In the last part the large field of medical imaging using similar detection techniques is described. The different chapters of the book are written by world experts in their field. Clear instructions on the detection techniques and principles in terms of relevant operation parameters for scientists and graduate students are given. Detailed tables and diagrams will make this a very useful handbook for the application of these techniques in many different fields like physics, medicine, biology and other areas of natural science.

  5. Handbook of particle detection and imaging

    Energy Technology Data Exchange (ETDEWEB)

    Grupen, Claus [Siegen Univ. (Germany). Fachbereich 7 - Physik; Buvat, Irene (eds.) [Paris 7 et 11 Univ., Orsay (France). IMNC-UMR 8165 CNRS

    2012-07-01

    The handbook centers on detection techniques in the field of particle physics, medical imaging and related subjects. It is structured into three parts. The first one is dealing with basic ideas of particle detectors, followed by applications of these devices in high energy physics and other fields. In the last part the large field of medical imaging using similar detection techniques is described. The different chapters of the book are written by world experts in their field. Clear instructions on the detection techniques and principles in terms of relevant operation parameters for scientists and graduate students are given. Detailed tables and diagrams will make this a very useful handbook for the application of these techniques in many different fields like physics, medicine, biology and other areas of natural science. (orig.)

  6. Analysis of the development of missile-borne IR imaging detecting technologies

    Science.gov (United States)

    Fan, Jinxiang; Wang, Feng

    2017-10-01

    Today's infrared imaging guided missiles are facing many challenges. With the development of target stealth, new-style IR countermeasures and penetration technologies, as well as the complexity of operational environments, infrared imaging guided missiles must meet higher requirements: efficient target detection, resistance to interference and jamming, and operational adaptability in complex, dynamic operating environments. Missile-borne infrared imaging detection systems are constrained by practical considerations like cost, size, weight and power (SWaP), and lifecycle requirements. Future-generation infrared imaging guided missiles need to be resilient to changing operating environments and capable of doing more with fewer resources. Advanced IR imaging detection and information exploitation technologies are the key technologies that determine the future direction of IR imaging guided missiles; research on them will support the development of more robust and efficient missile-borne infrared imaging detection systems. Novel IR imaging technologies, such as infrared adaptive spectral imaging, are the key to effectively detecting, recognizing and tracking targets under complicated operating and countermeasure environments. Innovative techniques for exploiting the information on targets, backgrounds and countermeasures provided by the detection system are the basis for a missile to recognize targets and counter interference, jamming and countermeasures. Modular hardware and software development is the enabler for implementing multi-purpose, multi-function solutions. Uncooled IRFPA detectors and high-operating-temperature IRFPA detectors, as well as commercial-off-the-shelf (COTS) technology, will support the implementation of low-cost infrared imaging guided missiles. In this paper, the current status and features of missile-borne IR imaging detection technologies are summarized. The key

  7. Image edges detection through B-Spline filters

    International Nuclear Information System (INIS)

    Mastropiero, D.G.

    1997-01-01

    B-Spline signal processing was used to detect the edges of a digital image. This technique is based upon processing the image in the Spline transform domain instead of the space domain (classical processing). Transformation to the Spline domain means finding the real coefficients that make it possible to interpolate the gray levels of the original image with a B-Spline polynomial. There are basically two methods of carrying out this interpolation, which gives rise to two different Spline transforms: an exact interpolation of the gray values (direct Spline transform) and an approximate interpolation (smoothing Spline transform). The latter results in a smoother gray distribution function defined by the Spline transform coefficients, and is carried out with the aim of obtaining an edge detection algorithm with higher immunity to noise. Finally, the transformed image was processed to detect the edges of the original image (the gradient method was used), and the results of the three methods (classical, direct Spline transform and smoothing Spline transform) were compared. As expected, the smoothing Spline transform technique produced a detection algorithm more immune to external noise. On the other hand, the direct Spline transform technique emphasizes the edges even more than the classical method. As far as computing time is concerned, the classical method is clearly the fastest one, and may be applied whenever the presence of noise is not important and edges with high detail are not required in the final image. (author). 9 refs., 17 figs., 1 tab
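
    A hedged SciPy sketch of the two variants: exact cubic B-spline coefficients for the direct transform, and Gaussian pre-smoothing as a rough stand-in for the smoothing transform, followed by a gradient-magnitude edge map.

        import numpy as np
        from scipy import ndimage

        def spline_edges(gray, smoothing_sigma=0.0):
            g = gray.astype(float)
            if smoothing_sigma > 0:  # approximate the smoothing Spline transform
                g = ndimage.gaussian_filter(g, smoothing_sigma)
            coeffs = ndimage.spline_filter(g, order=3)  # cubic B-spline coefficients
            gy, gx = np.gradient(coeffs)
            return np.hypot(gx, gy)  # edge strength via the gradient method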

  8. Animal Detection in Natural Images: Effects of Color and Image Database

    Science.gov (United States)

    Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.

    2013-01-01

    The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used. PMID:24130744

  9. Animal detection in natural images: effects of color and image database.

    Directory of Open Access Journals (Sweden)

    Weina Zhu

    Full Text Available The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a bigger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other databases commonly used.

  10. Automatic Solitary Lung Nodule Detection in Computed Tomography Images Slices

    Science.gov (United States)

    Sentana, I. W. B.; Jawas, N.; Asri, S. A.

    2018-01-01

    Lung nodule is an early indicator of some lung diseases, including lung cancer. In Computed Tomography (CT) images, a nodule appears as a shape that is brighter than the surrounding lung tissue. This research aims to develop an application that automatically detects lung nodules in CT images. The algorithm comprises several steps: image acquisition and conversion, image binarization, lung segmentation, blob detection, and classification. Image acquisition takes the volume slice by slice from the original *.dicom format, and each slice is converted into the *.tif image format. Binarization, using the Otsu algorithm, then separates the background and foreground parts of each slice. After removing the background part, the next step is to segment the lung region only, so that nodules can be localized more easily. The Otsu algorithm is used once again to detect nodule blobs in the localized lung area. The final step applies a Support Vector Machine (SVM) to classify the nodules. The application succeeded in detecting near-round nodules above a certain size threshold. The results show drawbacks regarding the size threshold and the shape of detectable nodules that need to be addressed in the next part of the research. The algorithm also cannot detect nodules attached to the lung wall or airways, since the search depends only on intensity differences.
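
    A compact sketch of the binarization and blob-detection steps is given below, assuming scikit-image and a lung mask already produced by the segmentation step; the area and eccentricity thresholds are illustrative and the SVM stage is omitted.

        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops

        def nodule_candidates(ct_slice, lung_mask, min_area=10, max_area=500):
            t = threshold_otsu(ct_slice[lung_mask])  # Otsu inside the lung field
            bright = (ct_slice > t) & lung_mask      # nodules appear brighter
            blobs = regionprops(label(bright))
            return [b for b in blobs
                    if min_area <= b.area <= max_area
                    and b.eccentricity < 0.9]        # keep near-round blobs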

  11. Salient man-made structure detection in infrared images

    Science.gov (United States)

    Li, Dong-jie; Zhou, Fu-gen; Jin, Ting

    2013-09-01

    Target detection, segmentation and recognition are hot research topics in the field of image processing and pattern recognition nowadays, among which salient area or object detection is one of the core technologies of precision-guided weapons. In this paper, we detect salient objects in a series of input infrared images by using the classical feature integration theory and Itti's visual attention system. In order to find the salient object in an image accurately, we present a new method that solves the edge blur problem by calculating and using an edge mask. We also greatly improve the computing speed by improving the center-surround differences method: unlike the traditional algorithm, we calculate the center-surround differences through rows and columns separately. Experimental results show that our method is effective in detecting salient objects accurately and rapidly.
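
    A hedged sketch of the separable speed-up: the surround mean is computed with two 1-D box filters (rows, then columns) rather than one full 2-D filter, and the saliency cue is the center-surround difference.

        import numpy as np
        from scipy.ndimage import uniform_filter1d

        def center_surround(gray, center=3, surround=21):
            g = gray.astype(float)
            def box(img, size):  # separable box mean: rows first, then columns
                rows = uniform_filter1d(img, size, axis=0)
                return uniform_filter1d(rows, size, axis=1)
            return np.abs(box(g, center) - box(g, surround))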

  12. ROI DETECTION AND VESSEL SEGMENTATION IN RETINAL IMAGE

    Directory of Open Access Journals (Sweden)

    F. Sabaz

    2017-11-01

    Full Text Available Diabetes disrupts vision by affecting the structure of the eye and eventually leads to loss of sight. Depending on the stage of the disease, called diabetic retinopathy, sudden vision loss and blurred vision may occur. Automated detection of vessels in retinal images is useful for diagnosing eye diseases, disease classification and other clinical studies. The shape and structure of the vessels give information about the severity and stage of the disease. Automatic, fast vessel detection allows a quick diagnosis and an early start of the treatment process. ROI detection and vessel extraction methods for retinal images are presented in this study. It is shown that the Frangi filter used in image processing can be applied successfully to the detection and extraction of vessels.

  13. Cascaded image analysis for dynamic crack detection in material testing

    Science.gov (United States)

    Hampel, U.; Maas, H.-G.

    Concrete probes in civil engineering material testing often show fissures or hairline-cracks. These cracks develop dynamically. Starting at a width of a few microns, they usually cannot be detected visually or in an image of a camera imaging the whole probe. Conventional image analysis techniques will detect fissures only if they show a width in the order of one pixel. To be able to detect and measure fissures with a width of a fraction of a pixel at an early stage of their development, a cascaded image analysis approach has been developed, implemented and tested. The basic idea of the approach is to detect discontinuities in dense surface deformation vector fields. These deformation vector fields between consecutive stereo image pairs, which are generated by cross correlation or least squares matching, show a precision in the order of 1/50 pixel. Hairline-cracks can be detected and measured by applying edge detection techniques such as a Sobel operator to the results of the image matching process. Cracks will show up as linear discontinuities in the deformation vector field and can be vectorized by edge chaining. In practical tests of the method, cracks with a width of 1/20 pixel could be detected, and their width could be determined at a precision of 1/50 pixel.
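
    A hedged sketch of the discontinuity-detection step: given a dense deformation field between two load states (e.g., from block-wise cross correlation or least squares matching), cracks appear as edges in the displacement components.

        import numpy as np
        from scipy.ndimage import sobel

        def crack_map(u, v, k=3.0):
            """u, v: per-pixel displacement components between two states."""
            disc = (np.hypot(sobel(u, axis=0), sobel(u, axis=1))
                    + np.hypot(sobel(v, axis=0), sobel(v, axis=1)))
            return disc > disc.mean() + k * disc.std()  # discontinuities = cracks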

  14. Evidential analysis of difference images for change detection of multitemporal remote sensing images

    Science.gov (United States)

    Chen, Yin; Peng, Lijuan; Cremers, Armin B.

    2018-03-01

    In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer's theory of evidence (DST). In most unsupervised change detection methods, the probability of difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we firstly develop an evidence theory based EM method (EEM) which incorporates spatial contextual information in EM by iteratively fusing the belief assignments of neighboring pixels to the central pixel. Secondly, an evidential labeling method in the sense of maximizing a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
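
    For reference, Dempster's rule of combination over the two-class frame {change, no_change}, the operation used when fusing neighbors' belief assignments into the central pixel, can be sketched as follows (a generic implementation, not the paper's full EEM iteration).

        from itertools import product

        def dempster_combine(m1, m2):
            """Combine two mass functions given as {frozenset: mass} dicts."""
            combined, conflict = {}, 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb  # mass on contradictory hypotheses
            return {s: v / (1.0 - conflict) for s, v in combined.items()}

        C, N = frozenset({"change"}), frozenset({"no_change"})
        pixel = {C: 0.6, N: 0.3, C | N: 0.1}     # belief at the central pixel
        neighbor = {C: 0.5, N: 0.2, C | N: 0.3}  # belief fused from a neighbor
        print(dempster_combine(pixel, neighbor))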

  15. Peach Flower Monitoring Using Aerial Multispectral Imaging

    Directory of Open Access Journals (Sweden)

    Ryan Horton

    2017-01-01

    Full Text Available One of the tools for optimal crop production is regular monitoring and assessment of crops. During the growing season of fruit trees, the bloom period has increased photosynthetic rates that correlate with the fruiting process. This paper presents the development of an image processing algorithm to detect peach blossoms on trees. Aerial images of peach (Prunus persica) trees were acquired from both experimental and commercial peach orchards in the southwestern part of Idaho using an off-the-shelf unmanned aerial system (UAS) equipped with a multispectral camera (near-infrared, green, blue). The image processing algorithm included contrast stretching of the three bands to enhance the image and a thresholding segmentation method to detect the peach blossoms. Initial results showed that the image processing algorithm could detect peach blossoms with an average detection rate of 84.3% and demonstrated good potential as a monitoring tool for orchard management.
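
    A hedged sketch of the two processing steps named above, per-band contrast stretching followed by a global threshold; the percentile limits and the brightness cut-off are illustrative assumptions.

        import numpy as np

        def blossom_mask(nir_g_b):  # (H, W, 3) multispectral image
            img = nir_g_b.astype(float)
            for c in range(img.shape[2]):  # stretch each band to [0, 1]
                lo, hi = np.percentile(img[..., c], (2, 98))
                img[..., c] = np.clip((img[..., c] - lo) / (hi - lo + 1e-9), 0, 1)
            brightness = img.mean(axis=2)
            return brightness > 0.8  # blossoms assumed bright in all bands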

  16. Robust simultaneous detection of coronary borders in complex images

    International Nuclear Information System (INIS)

    Sonka, M.; Winniford, M.D.; Collins, S.M.

    1995-01-01

    Visual estimation of coronary obstruction severity from angiograms suffers from poor inter- and intraobserver reproducibility and is often inaccurate. In spite of the widely recognized limitations of visual analysis, automated methods have not found widespread clinical use, in part because they too frequently fail to accurately identify vessel borders. The authors have developed a robust method for simultaneous detection of left and right coronary borders that is suitable for analysis of complex images with poor contrast, nearby or overlapping structures, or branching vessels. The reliability of the simultaneous border detection method and that of their previously reported conventional border detection method were tested in 130 complex images, selected because conventional automated border detection might be expected to fail. Conventional analysis failed to yield acceptable borders in 65/130 or 50% of images. Simultaneous border detection was much more robust (p < .001) and failed in only 15/130 or 12% of complex images. Simultaneous border detection identified stenosis diameters that correlated significantly better with observer-derived stenosis diameters than did diameters obtained with conventional border detection (p < 0.001). Simultaneous detection of left and right coronary borders is highly robust and has substantial promise for enhancing the utility of quantitative coronary angiography in the clinical setting

  17. Land cover change detection in West Jilin using ETM+ images

    Institute of Scientific and Technical Information of China (English)

    Edward M. Osei, Jr.; ZHOU Yun-xuan

    2004-01-01

    In order to assess the information content and accuracy of Landsat ETM+ digital images in land cover change detection, the change-detection techniques of image differencing, normalized difference vegetation index, principal components analysis and tasseled-cap transformation were applied to yield 13 images. These images were thresholded into change and no-change areas. The thresholded images were then checked in terms of various accuracies. The experimental results show that kappa coefficients of the 13 images range from 48.05 to 78.09. Different images do detect different types of changes. Images associated with changes in near-infrared reflectance or greenness detect crop-type changes and changes between vegetative and non-vegetative features. A unique means of using only Landsat imagery without reference data for the assessment of change in arid land is presented. Images of 12 June 2000 and 2 June 2002 are used to validate this means. Analyses of standard accuracy and spatial agreement are performed to compare the new images (hereafter called "change images") representing the change between the two dates. Spatial agreement evaluates the conformity of the classified "change pixels" and "no-change pixels" at the same location on different change images and comprehensively examines the different techniques. This method would enable authorities to monitor land degradation efficiently and accurately.
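
    A minimal numpy sketch of one of the techniques named above, NDVI differencing with a symmetric threshold; the band arrays and the threshold constant k are assumptions standing in for the ETM+ data and the paper's tuned thresholds.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

# two co-registered scenes: near-infrared and red reflectance (synthetic)
nir_t1, red_t1, nir_t2, red_t2 = (np.random.rand(512, 512) for _ in range(4))

d_ndvi = ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1)

# flag pixels whose NDVI change exceeds k standard deviations from the mean
k = 2.0
change = np.abs(d_ndvi - d_ndvi.mean()) > k * d_ndvi.std()
print("changed fraction:", change.mean())
```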

  18. Generative adversarial networks for anomaly detection in images

    OpenAIRE

    Batiste Ros, Guillem

    2018-01-01

    Anomaly detection is used to identify abnormal observations that don't follow a normal pattern. In this work, we use the power of Generative Adversarial Networks in sampling from image distributions to perform anomaly detection with images and to identify local anomalous segments within these images. We also explore a potential application of this method to support the pathological analysis of biological tissues.
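
    One widely used recipe for GAN-based anomaly detection (an AnoGAN-style reconstruction search; an assumption here, since the abstract does not name its exact scheme) is to search the latent space of a trained generator for the closest synthetic image and score the test image by its residual. A self-contained toy sketch with an untrained stand-in generator:

```python
import torch
import torch.nn as nn

# stand-in for a trained GAN generator mapping 16-d latents to 28x28 images
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 28 * 28), nn.Tanh())

def anomaly_score(x, steps=200, lr=0.05):
    """Optimize a latent code so G(z) matches x; the remaining
    reconstruction error serves as the anomaly score."""
    z = torch.zeros(1, 16, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((G(z) - x) ** 2)
        loss.backward()
        opt.step()
    return loss.item()

x = torch.rand(1, 28 * 28) * 2 - 1    # a flattened test image in [-1, 1]
print(anomaly_score(x))               # larger value -> more anomalous
```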

  19. PCB Fault Detection Using Image Processing

    Science.gov (United States)

    Nayak, Jithendra P. R.; Anitha, K.; Parameshachari, B. D., Dr.; Banu, Reshma, Dr.; Rashmi, P.

    2017-08-01

    The importance of the printed circuit board (PCB) inspection process has been magnified by the requirements of the modern manufacturing environment, where delivery of 100% defect-free PCBs is the expectation. To meet such expectations, identifying the various defects and their types becomes the first step. In this PCB inspection system, the inspection algorithm mainly focuses on defect detection using natural images. Many practical issues, such as tilt of the images, bad lighting conditions, and the height at which images are taken, must be considered to ensure an image quality good enough for defect detection. PCB fabrication is a multidisciplinary process, and etching is the most critical part of the PCB manufacturing process. The main objective of the etching process is to remove the exposed unwanted copper other than the required circuit pattern. In order to minimize scrap caused by wrongly etched PCB panels, inspection has to be done at an early stage. However, all inspections are currently done after the etching process, where any defective PCB found is no longer useful and is simply thrown away. Since the etching process accounts for a significant portion of the cost of the entire PCB fabrication, it is uneconomical to simply discard the defective PCBs; the defects should therefore be identified before the etching process so that the PCB can be reprocessed. In this paper, a method to identify the defects in natural PCB images is presented and the associated practical issues are addressed using software tools; some of the major types of single-layer PCB defects considered are pattern cut, pin hole, pattern short, and nick. The present approach is expected to improve the efficiency of the system in detecting defects even in low-quality images.
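
    The abstract does not spell out the detection algorithm itself; a common baseline for PCB inspection, sketched below under that assumption, is subtraction of a registered defect-free reference image from the test image followed by thresholding and blob counting. The file names and threshold are placeholders.

```python
import cv2
import numpy as np

# hypothetical file names; the two images must be registered to each other
ref = cv2.imread("pcb_reference.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("pcb_test.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(ref, test)                     # per-pixel difference
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# morphological opening removes isolated noise pixels before blob analysis
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
n_labels, _ = cv2.connectedComponents(mask)
print(f"{n_labels - 1} candidate defect regions")
```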

  20. Factors influencing detail detectability in radiologic imaging

    International Nuclear Information System (INIS)

    Gurvich, A.M.

    1985-01-01

    The detectability of various details is estimated quantitatively from the essential technical parameters of the imaging system and additional influencing factors, including viewing of the image. The analysis covers the formation of the input radiation distribution (contrast formation, influence of kVp). Noise, image contrast (gamma), the modulation transfer function and the contrast threshold of the observer influence details of different sizes to different degrees. This facilitates further optimization of imaging systems and their adaptation to specific imaging tasks.

  1. Near-infrared Mueller matrix imaging for colonic cancer detection

    Science.gov (United States)

    Wang, Jianfeng; Zheng, Wei; Lin, Kan; Huang, Zhiwei

    2016-03-01

    Mueller matrix imaging along with a polar decomposition method was employed for colonic cancer detection by polarized light in the near-infrared spectral range (700-1100 nm). The 16 Mueller matrix images of colonic tissues (i.e., normal and cancerous) were acquired at high speed. Polar decomposition was further implemented on the 16 images to derive the diattenuation, depolarization and retardance images. The decomposed images showed a clear margin between the normal and cancerous colon tissue samples. The work shows the potential of near-infrared Mueller matrix imaging for the early diagnosis and detection of malignant lesions in the colon.
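
    As a small illustration of the scalar measures that can be pulled from a measured Mueller matrix, the sketch below computes the depolarization index, a standard polarimetry quantity (not necessarily one of the exact decomposition products used in the paper); the example matrix is made up.

```python
import numpy as np

def depolarization_index(M):
    """Depolarization index of a 4x4 Mueller matrix:
    1 = non-depolarizing, 0 = ideal depolarizer."""
    m00 = M[0, 0]
    return np.sqrt((np.sum(M ** 2) - m00 ** 2) / 3.0) / m00

M = np.eye(4)                      # ideal non-depolarizing element
print(depolarization_index(M))     # -> 1.0
```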

  2. Incorporating Spatial Information for Microaneurysm Detection in Retinal Images

    Directory of Open Access Journals (Sweden)

    Mohamed M. Habib

    2017-06-01

    Full Text Available The presence of microaneurysms (MAs) in retinal images is a pathognomonic sign of Diabetic Retinopathy (DR). This is one of the leading causes of blindness in the working population worldwide. This paper introduces a novel algorithm that combines information from spatial views of the retina for the purpose of MA detection. Most published research in the literature has addressed the problem of detecting MAs from single retinal images. This work proposes the incorporation of information from two spatial views during the detection process. The algorithm is evaluated using 160 images from 40 patients, seen as part of a UK diabetic eye screening programme, which contained 207 MAs. An improvement in performance compared to detection from an algorithm that relies on a single image is shown as a 2% increase in ROC score, hence demonstrating the potential of this method.

  3. Motion Detection in Ultrasound Image-Sequences Using Tensor Voting

    Science.gov (United States)

    Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka

    2008-05-01

    Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system adopting a combination of coded excitation and synthetic aperture focusing techniques. With our method, the frame rate of the system at a distance of 150 mm reaches 5000 frames/s. A sparse array and short-duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce the artifacts, we have examined the application of tensor voting to the imaging method that adopts both coded excitation and synthetic aperture techniques. In this study, the basis for applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated three-dimensional ultrasound image sequences.

  4. Sonar Image Enhancements for Improved Detection of Sea Mines

    DEFF Research Database (Denmark)

    Jespersen, Karl; Sørensen, Helge Bjarup Dissing; Zerr, Benoit

    1999-01-01

    In this paper, five methods for enhancing sonar images prior to automatic detection of sea mines are investigated. Two of the methods have previously been published in connection with detection systems and serve as reference. The three new enhancement approaches are a variance-stabilizing log transform, nonlinear filtering, and pixel averaging for speckle reduction. The effect of the enhancement step is tested by using the full processing chain, i.e. enhancement, detection and thresholding, to determine the number of detections and false alarms. Substituting different enhancement algorithms in the processing chain gives a precise measure of the performance of the enhancement stage. The test is performed using a sonar image database with images ranging from very simple to very complex. The result of the comparison indicates that the new enhancement approaches improve the detection performance.

  5. Automated image based prominent nucleoli detection.

    Science.gov (United States)

    Yap, Choon K; Kalaw, Emarene M; Singh, Malay; Chong, Kian T; Giron, Danilo M; Huang, Chao-Hui; Cheng, Li; Law, Yan N; Lee, Hwee Kuan

    2015-01-01

    Nucleolar changes in cancer cells are one of the cytologic features important to the tumor pathologist in cancer assessments of tissue biopsies. However, inter-observer variability and the manual approach to this work hamper the accuracy of the assessment by pathologists. In this paper, we propose a computational method for prominent nucleoli pattern detection. Thirty-five hematoxylin and eosin stained images were acquired from prostate cancer, breast cancer, renal clear cell cancer and renal papillary cell cancer tissues. Prostate cancer images were used for the development of a computer-based automated prominent nucleoli pattern detector built on a cascade farm. An ensemble of approximately 1000 cascades was constructed by permuting different combinations of classifiers such as support vector machines, eXclusive component analysis, boosting, and logistic regression. The output of the cascades was then combined using the RankBoost algorithm. The output of our prominent nucleoli pattern detector is a ranked set of detected image patches of patterns of prominent nucleoli. The mean number of detected prominent nucleoli patterns in the top 100 ranked detected objects was 58 in the prostate cancer dataset, 68 in the breast cancer dataset, 86 in the renal clear cell cancer dataset, and 76 in the renal papillary cell cancer dataset. The proposed cascade farm performs twice as well as the single cascade proposed in the seminal paper by Viola and Jones. For comparison, a naive algorithm that randomly chooses a pixel as a nucleoli pattern would detect five correct patterns in the first 100 ranked objects. Detection of sparse nucleoli patterns in a large background of highly variable tissue patterns is a difficult challenge that our method has overcome. This study developed an accurate prominent nucleoli pattern detector with the potential to be used in clinical settings.

  6. Automated image based prominent nucleoli detection

    Directory of Open Access Journals (Sweden)

    Choon K Yap

    2015-01-01

    Full Text Available Introduction: Nucleolar changes in cancer cells are one of the cytologic features important to the tumor pathologist in cancer assessments of tissue biopsies. However, inter-observer variability and the manual approach to this work hamper the accuracy of the assessment by pathologists. In this paper, we propose a computational method for prominent nucleoli pattern detection. Materials and Methods: Thirty-five hematoxylin and eosin stained images were acquired from prostate cancer, breast cancer, renal clear cell cancer and renal papillary cell cancer tissues. Prostate cancer images were used for the development of a computer-based automated prominent nucleoli pattern detector built on a cascade farm. An ensemble of approximately 1000 cascades was constructed by permuting different combinations of classifiers such as support vector machines, eXclusive component analysis, boosting, and logistic regression. The output of the cascades was then combined using the RankBoost algorithm. The output of our prominent nucleoli pattern detector is a ranked set of detected image patches of patterns of prominent nucleoli. Results: The mean number of detected prominent nucleoli patterns in the top 100 ranked detected objects was 58 in the prostate cancer dataset, 68 in the breast cancer dataset, 86 in the renal clear cell cancer dataset, and 76 in the renal papillary cell cancer dataset. The proposed cascade farm performs twice as well as the single cascade proposed in the seminal paper by Viola and Jones. For comparison, a naive algorithm that randomly chooses a pixel as a nucleoli pattern would detect five correct patterns in the first 100 ranked objects. Conclusions: Detection of sparse nucleoli patterns in a large background of highly variable tissue patterns is a difficult challenge that our method has overcome. This study developed an accurate prominent nucleoli pattern detector with the potential to be used in clinical settings.

  7. Surface regions of illusory images are detected with a slower processing speed than those of luminance-defined images.

    Science.gov (United States)

    Mihaylova, Milena; Manahilov, Velitchko

    2010-11-24

    Research has shown that the processing time for discriminating illusory contours is longer than for real contours. We know little, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at a slower processing speed than the real images, while the points in time when accuracy departed from chance were not significantly different for the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed for detecting illusory images as compared to luminance-defined images could be attributed to slower processes of filling-in of regions of illusory images within the dorsal pathway.

  8. Prostate cancer detection from model-free T1-weighted time series and diffusion imaging

    Science.gov (United States)

    Haq, Nandinee F.; Kozlowski, Piotr; Jones, Edward C.; Chang, Silvia D.; Goldenberg, S. Larry; Moradi, Mehdi

    2015-03-01

    The combination of Dynamic Contrast Enhanced (DCE) images with diffusion MRI has shown great potential in prostate cancer detection. The parameterization of DCE images to generate cancer markers is traditionally performed based on pharmacokinetic modeling. However, pharmacokinetic models make simplistic assumptions about the tissue perfusion process, require knowledge of the contrast agent concentration in a major artery, and the modeling process is sensitive to noise and fitting instabilities. We address this issue by extracting features directly from the DCE T1-weighted time course without modeling. In this work, we employed a set of data-driven features generated by mapping the DCE T1 time course to its principal component space, along with diffusion MRI features, to detect prostate cancer. The optimal set of DCE features is extracted with sparse regularized regression through a Least Absolute Shrinkage and Selection Operator (LASSO) model. We show that when our proposed features are used within the multiparametric MRI protocol to replace the pharmacokinetic parameters, the area under the ROC curve is 0.91 for peripheral zone classification and 0.87 for whole gland classification. We were able to correctly classify 32 out of 35 peripheral tumor areas identified in the data when the proposed features were used with support vector machine classification. The proposed feature set was used to generate cancer likelihood maps for the prostate gland.
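
    A minimal scikit-learn sketch of the feature-selection-plus-classification pipeline the abstract describes; the synthetic matrix of per-voxel features is an assumption standing in for the real DCE/diffusion data.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))          # rows: voxels/regions, cols: features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200)) > 0

# LASSO (L1) regression selects a sparse subset of informative features
lasso = Lasso(alpha=0.05).fit(X, y.astype(float))
selected = np.flatnonzero(lasso.coef_)

# classify cancer vs. benign using only the selected features
clf = SVC(kernel="rbf").fit(X[:, selected], y)
print("kept features:", selected, "train acc:", clf.score(X[:, selected], y))
```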

  9. Detection of brain metastasis. Comparison of Turbo-FLAIR imaging, T2-weighted imaging and double-dose gadolinium-enhanced MR imaging

    International Nuclear Information System (INIS)

    Okubo, Toshiyuki; Hayashi, Naoto; Shirouzu, Ichiro; Abe, Osamu; Ohtomo, Kuni; Sasaki, Yasuhito; Aoki, Shigeki; Wada, Akihiko

    1998-01-01

    The purpose of this study was to compare Turbo-FLAIR imaging, T2-weighted imaging, and double-dose gadolinium-enhanced MR imaging in the detection of brain metastasis. Using the three sequences, 20 consecutive patients with brain metastases were prospectively studied with a 1.5-Tesla system. Three independent, blinded readers assessed the images for the presence, size, number, and location of metastatic lesions. In the detection of large lesions (>0.5 cm), Turbo-FLAIR imaging (38/48, 79%) was not significantly different from gadolinium-enhanced imaging (42/48, 88%) (p=0.273). T2-weighted imaging (31/48, 65%), however, was inferior to gadolinium-enhanced imaging (p<0.05). There was no difference between Turbo-FLAIR imaging and gadolinium-enhanced imaging in the accuracy of detecting solitary brain metastasis (4/4, 100%). In conclusion, Turbo-FLAIR imaging is a useful, noninvasive screening modality for brain metastasis. Its use may lead to cost savings in the diagnosis of brain metastases and may positively impact the cost-effectiveness of treatment. (author)

  10. A-Track: Detecting Moving Objects in FITS images

    Science.gov (United States)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2017-04-01

    A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.

  11. A phantom design for assessment of detectability in PET imaging

    International Nuclear Information System (INIS)

    Wollenweber, Scott D.; Alessio, Adam M.; Kinahan, Paul E.

    2016-01-01

    Purpose: The primary clinical role of positron emission tomography (PET) imaging is the detection of anomalous regions of 18F-FDG uptake, which are often indicative of malignant lesions. The goal of this work was to create a task-configurable fillable phantom for realistic measurements of detectability in PET imaging. Design goals included simplicity, adjustable feature size, realistic size and contrast levels, and inclusion of a lumpy (i.e., heterogeneous) background. Methods: The detection targets were hollow 3D-printed dodecahedral nylon features. The exostructure sphere-like features created voids in a background of small, solid non-porous plastic (acrylic) spheres inside a fillable tank. The features filled at full concentration while the background concentration was reduced due to filling only between the solid spheres. Results: Multiple iterations of feature size and phantom construction were used to determine a configuration at the limit of detectability for a PET/CT system. A full-scale design used a 20 cm uniform cylinder (head-size) filled with a fixed pattern of features at a contrast of approximately 3:1. Known signal-present and signal-absent PET sub-images were extracted from multiple scans of the same phantom and with detectability in a challenging (i.e., useful) range. These images enabled calculation and comparison of quantitative observer detectability metrics between scanner designs and image reconstruction methods. The phantom design has several advantages including filling simplicity, wall-less contrast features, the control of the detectability range via feature size, and a clinically realistic lumpy background. Conclusions: This phantom provides a practical method for testing and comparison of lesion detectability as a function of imaging system, acquisition parameters, and image reconstruction methods and parameters.

  12. Blind Methods for Detecting Image Fakery

    Czech Academy of Sciences Publication Activity Database

    Mahdian, Babak; Saic, Stanislav

    2010-01-01

    Roč. 25, č. 4 (2010), s. 18-24 ISSN 0885-8985 R&D Projects: GA ČR GA102/08/0470 Institutional research plan: CEZ:AV0Z10750506 Keywords : Image forensics * Image Fakery * Forgery detection * Authentication Subject RIV: BD - Theory of Information Impact factor: 0.179, year: 2010 http://library.utia.cas.cz/separaty/2010/ZOI/saic-0343316.pdf

  13. Camouflage target detection via hyperspectral imaging plus information divergence measurement

    Science.gov (United States)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Ji, Yiqun; Shen, Weimin

    2016-01-01

    Target detection is one of the most important applications in remote sensing. Nowadays, accurate camouflage target discrimination often resorts to spectral imaging techniques due to their high-resolution spectral/spatial information acquisition ability as well as the plenty of available data processing methods. In this paper, hyperspectral imaging together with the spectral information divergence measure is used to solve the camouflage target detection problem. A self-developed visible-band hyperspectral imaging device is adopted to collect data cubes of a certain experimental scene before spectral information divergences are worked out so as to discriminate target camouflage and anomaly. Full-band information divergences are measured to evaluate the target detection effect visually and quantitatively. Information divergence measurement proves to be a low-cost and effective tool for the target detection task and can be further developed for other target detection applications beyond spectral imaging.
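
    For reference, the spectral information divergence between two pixel spectra is the symmetric sum of two relative entropies of the normalized spectra; the sketch below is a straightforward numpy version with made-up spectra.

```python
import numpy as np

def sid(x, y, eps=1e-12):
    """Spectral information divergence between two spectra."""
    p = x / (x.sum() + eps)
    q = y / (y.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps)))
                 + np.sum(q * np.log((q + eps) / (p + eps))))

target = np.array([0.10, 0.40, 0.30, 0.20])   # reference target spectrum
pixel = np.array([0.12, 0.38, 0.31, 0.19])
print(sid(target, pixel))   # small value -> spectrally similar
```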

  14. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Science.gov (United States)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
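
    For orientation, TV-regularized reconstruction of the kind benchmarked here is typically posed as the following optimization, with A the projection operator, b the measured sinogram and lambda the regularization weight (a standard formulation in generic notation, not necessarily the paper's):

```latex
\min_{x}\; \tfrac{1}{2}\,\lVert A x - b \rVert_2^2 \;+\; \lambda\,\mathrm{TV}(x),
\qquad
\mathrm{TV}(x) = \sum_{i} \lVert (\nabla x)_i \rVert_2
```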

  15. On the Use of Normalized Compression Distances for Image Similarity Detection

    Directory of Open Access Journals (Sweden)

    Dinu Coltuc

    2018-01-01

    Full Text Available This paper investigates the usefulness of the normalized compression distance (NCD) for image similarity detection. Instead of the direct NCD between images, the paper considers the correlation between NCD-based feature vectors extracted for each image. The vectors are derived by computing the NCD between the original image and sequences of translated (rotated) versions. Feature vectors for simple transforms (circular translations in the horizontal, vertical and diagonal directions and rotations around the image center) and several standard compressors are generated and tested in a very simple experiment of similarity detection between the original image and two filtered versions (median and moving average). The promising vector configurations (geometric transform, lossless compressor) are further tested for similarity detection on the 24 images of the Kodak set subjected to some common image processing. While the direct computation of NCD fails to detect image similarity even in the case of simple median and moving average filtering in 3 × 3 windows, for certain transforms and compressors the proposed approach appears to provide robustness in similarity detection against smoothing, lossy compression, contrast enhancement and noise addition, and some robustness against geometric transforms (scaling, cropping and rotation).
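
    The underlying distance is simple to compute with any off-the-shelf lossless compressor; here is a zlib-based sketch (zlib standing in for the several compressors the paper compares, and byte strings standing in for serialized image data):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance approximated with zlib."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"abcabcabcabc" * 10
b2 = b"abcabcabcabd" * 10
print(ncd(a, a))    # near 0: identical data compress well together
print(ncd(a, b2))   # small: highly similar data
```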

  16. Chimeric mitochondrial peptides from contiguous regular and swinger RNA.

    Science.gov (United States)

    Seligmann, Hervé

    2016-01-01

    Previous mass spectrometry analyses described human mitochondrial peptides entirely translated from swinger RNAs, RNAs where polymerization systematically exchanged nucleotides. Exchanges follow one among 23 bijective transformation rules, nine symmetric exchanges (X ↔ Y, e.g. A ↔ C) and fourteen asymmetric exchanges (X → Y → Z → X, e.g. A → C → G → A), multiplying DNA's protein coding potential by 24. Abrupt switches from regular to swinger polymerization produce chimeric RNAs. Here, human mitochondrial proteomic analyses assuming abrupt switches between regular and swinger transcription detect chimeric peptides, encoded by part regular, part swinger RNA. Contiguous regular- and swinger-encoded residues within single peptides are stronger evidence for translation of swinger RNA than the previously detected, entirely swinger-encoded peptides: regular parts are positive controls matched with contiguous swinger parts, increasing confidence in the results. Chimeric peptides are 200 × rarer than swinger peptides (3/100,000 versus 6/1000). Among 186 peptides with > 8 residues for each regular and swinger part, the regular parts of eleven chimeric peptides correspond to six among the thirteen recognized mitochondrial protein-coding genes. Chimeric peptides matching partly regular proteins are rarer and less expressed than chimeric peptides matching non-coding sequences, suggesting targeted degradation of misfolded proteins. The present results strengthen the hypothesis that the short mitogenome encodes far more proteins than hitherto assumed. Entirely swinger-encoded proteins could exist.

  17. Detection of rheumatoid arthritis using infrared imaging

    Science.gov (United States)

    Frize, Monique; Adéa, Cynthia; Payeur, Pierre; Di Primio, Gina; Karsh, Jacob; Ogungbemile, Abiola

    2011-03-01

    Rheumatoid arthritis (RA) is an inflammatory disease causing pain, swelling, stiffness, and loss of function in joints; it is difficult to diagnose in early stages. An early diagnosis and treatment can delay the onset of severe disability. Infrared (IR) imaging offers a potential approach to detect changes in the degree of inflammation. In 18 normal subjects and 13 patients diagnosed with rheumatoid arthritis, thermal images were collected from joints of the hands, wrists, palms, and knees. Regions of interest (ROIs) were manually selected from all subjects and all parts imaged. For each subject, the following values were calculated from the temperature measurements: Mode/Max, Median/Max, Min/Max, Variance, Max-Min, (Mode-Mean), and Mean/Min. The data sets did not have a normal distribution, therefore non-parametric tests (Kruskal-Wallis and rank-sum) were applied to assess whether the data from the control group and the patient group were significantly different. Results indicate that: (i) thermal differences can be detected in patients with the disease; (ii) the best joints to image are the metacarpophalangeal joints of the 2nd and 3rd fingers and the knees, where the difference between the two groups was significant at the 0.05 level; (iii) the best calculations to differentiate between normal subjects and patients with RA are Mode/Max, Variance, and Max-Min. We conclude that it is possible to reliably detect RA in patients using IR imaging. Future work will include a prospective study of normal subjects and patients that will compare IR results with Magnetic Resonance (MR) analysis.
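
    A small sketch of the group comparison described above, using scipy's non-parametric tests; the synthetic ROI temperature maps stand in for the IR measurements, and Max-Min is one of the paper's discriminative features.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def max_min(roi):
    """One of the reported features: Max - Min of the ROI temperatures."""
    return roi.max() - roi.min()

# synthetic ROI temperature maps, one per subject (stand-ins for IR data)
controls = [max_min(rng.normal(30.0, 0.5, size=(20, 20))) for _ in range(18)]
patients = [max_min(rng.normal(31.0, 1.0, size=(20, 20))) for _ in range(13)]

# non-parametric tests, since the measurements are not normally distributed
print(stats.kruskal(controls, patients))
print(stats.ranksums(controls, patients))
```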

  18. How to COAAD Images. II. A Coaddition Image that is Optimal for Any Purpose in the Background-dominated Noise Limit

    Energy Technology Data Exchange (ETDEWEB)

    Zackay, Barak; Ofek, Eran O. [Department of Particle Physics and Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel)

    2017-02-20

    Image coaddition is one of the most basic operations that astronomers perform. In Paper I, we presented the optimal ways to coadd images in order to detect faint sources and to perform flux measurements under the assumption that the noise is approximately Gaussian. Here, we build on these results and derive from first principles a coaddition technique that is optimal for any hypothesis testing and measurement (e.g., source detection, flux or shape measurements, and star/galaxy separation), in the background-noise-dominated case. This method has several important properties. The pixels of the resulting coadded image are uncorrelated. This image preserves all the information (from the original individual images) on all spatial frequencies. Any hypothesis testing or measurement that can be done on all the individual images simultaneously can be done on the coadded image without any loss of information. The PSF of this image is typically as narrow as, or narrower than, the PSF of the best image in the ensemble. Moreover, this image is practically indistinguishable from a regular single image, meaning that any code that measures any property on a regular astronomical image can be applied to it unchanged. In particular, the optimal source detection statistic derived in Paper I is reproduced by matched filtering this image with its own PSF. This coaddition process, which we call proper coaddition, can be understood as the maximum signal-to-noise ratio measurement of the Fourier transform of the image, weighted in such a way that the noise in the entire Fourier domain is of equal variance. This method has important implications for multi-epoch seeing-limited deep surveys, weak lensing galaxy shape measurements, and diffraction-limited imaging via speckle observations. The last topic will be covered in depth in future papers. We provide an implementation of this algorithm in MATLAB.
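
    A numpy sketch of Fourier-domain coaddition in the spirit described above, weighting each image by its transparency and noise so that the coadd noise is flat across frequencies; the exact weighting below follows the proper-coaddition expression from this series of papers as best understood here, so treat it as an assumption, and the inputs are synthetic.

```python
import numpy as np

def proper_coadd(images, psfs, sigmas, fluxes):
    """Coadd registered images in the Fourier domain.

    images, psfs: same-size 2-D arrays; sigmas: per-image background noise
    std; fluxes: per-image flux-based transparency factors.
    """
    num, den = 0.0, 0.0
    for M, P, s, F in zip(images, psfs, sigmas, fluxes):
        Mh = np.fft.fft2(M)
        Ph = np.fft.fft2(np.fft.ifftshift(P))   # PSF centered at the origin
        num = num + (F / s ** 2) * np.conj(Ph) * Mh
        den = den + (F ** 2 / s ** 2) * np.abs(Ph) ** 2
    return np.real(np.fft.ifft2(num / np.sqrt(den + 1e-12)))

n = 64
psf = np.zeros((n, n)); psf[n // 2, n // 2] = 1.0          # toy delta PSF
imgs = [np.random.normal(0, s, (n, n)) for s in (1.0, 2.0)]
coadd = proper_coadd(imgs, [psf, psf], sigmas=[1.0, 2.0], fluxes=[1.0, 0.8])
print(coadd.std())
```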

  19. PET regularization by envelope guided conjugate gradients

    International Nuclear Information System (INIS)

    Kaufman, L.; Neumaier, A.

    1996-01-01

    The authors propose a new way to iteratively solve large-scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows the regularization parameter to be adjusted adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations.
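
    A minimal numpy illustration of the Tikhonov L-curve this method tracks: for each regularization weight, solve the damped normal equations and record residual norm versus solution norm; the corner of the resulting curve marks a good trade-off. The toy linear system is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50))
x_true = rng.normal(size=50)
b = A @ x_true + rng.normal(scale=0.5, size=100)

points = []
for lam in np.logspace(-4, 2, 25):
    # Tikhonov solution: (A^T A + lam * I) x = A^T b
    x = np.linalg.solve(A.T @ A + lam * np.eye(50), A.T @ b)
    points.append((np.linalg.norm(A @ x - b), np.linalg.norm(x), lam))

# the L-curve is ||Ax-b|| vs ||x|| on log-log axes; its corner balances
# data fidelity against solution size
for r, s, lam in points[::6]:
    print(f"lam={lam:9.2e}  residual={r:7.3f}  |x|={s:7.3f}")
```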

  20. Automatic detection of regions of interest in mammographic images

    Science.gov (United States)

    Cheng, Erkang; Ling, Haibin; Bakic, Predrag R.; Maidment, Andrew D. A.; Megalooikonomou, Vasileios

    2011-03-01

    This work is part of our ongoing study aimed at comparing the topology of anatomical branching structures with the underlying image texture. Detection of regions of interest (ROIs) in clinical breast images serves as the first step in the development of an automated system for image analysis and breast cancer diagnosis. In this paper, we have investigated machine learning approaches for the task of identifying ROIs with visible breast ductal trees in a given galactographic image. Specifically, we have developed a boosting-based framework using the AdaBoost algorithm in combination with Haar wavelet features for ROI detection. Twenty-eight clinical galactograms with expert-annotated ROIs were used for training. Positive samples were generated by resampling near the annotated ROIs, and negative samples were generated randomly by image decomposition. Each detected ROI candidate was given a confidence score. Candidate ROIs with spatial overlap were merged and their confidence scores combined. We have compared three strategies for the elimination of false positives, which differed in their approach to combining confidence scores: by summation, averaging, or selecting the maximum score. The strategies were compared based upon the spatial overlap with annotated ROIs. Using a 4-fold cross-validation with the annotated clinical galactographic images, the summation strategy showed the best performance with a 75% detection rate. When combining the top two candidates, the selection of the maximum score showed the best performance with a 96% detection rate.
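
    A compact sketch of the Haar-feature-plus-AdaBoost ingredients named above, on synthetic patches; the single two-rectangle feature and the patch construction are illustrative assumptions (a real detector would pool thousands of Haar features).

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def integral(img):
    """Integral image: ii[r, c] = sum of img[:r+1, :c+1]."""
    return img.cumsum(0).cumsum(1)

def haar_vertical(ii, r, c, h, w):
    """Two-rectangle Haar feature (top half minus bottom half of a window),
    evaluated in O(1) from the integral image ii."""
    def rect(r0, c0, r1, c1):        # sum over rows r0..r1-1, cols c0..c1-1
        s = ii[r1 - 1, c1 - 1]
        if r0 > 0:
            s -= ii[r0 - 1, c1 - 1]
        if c0 > 0:
            s -= ii[r1 - 1, c0 - 1]
        if r0 > 0 and c0 > 0:
            s += ii[r0 - 1, c0 - 1]
        return s
    return rect(r, c, r + h // 2, c + w) - rect(r + h // 2, c, r + h, c + w)

rng = np.random.default_rng(0)
# synthetic 16x16 patches: positives have a brighter top half
pos = [rng.random((16, 16)) + np.vstack([np.ones((8, 16)), np.zeros((8, 16))])
       for _ in range(50)]
neg = [rng.random((16, 16)) for _ in range(50)]
X = np.array([[haar_vertical(integral(p), 0, 0, 16, 16)] for p in pos + neg])
y = np.array([1] * 50 + [0] * 50)

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print("training accuracy:", clf.score(X, y))
```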

  1. Algorithms for boundary detection in radiographic images

    International Nuclear Information System (INIS)

    Gonzaga, Adilson; Franca, Celso Aparecido de

    1996-01-01

    Edge detection techniques applied to radiographic digital images are discussed. Some algorithms have been implemented and the results are displayed to enhance boundaries or hide details. An algorithm applied to a preprocessed image with enhanced contrast is proposed and the results are discussed.

  2. Digital Correlation based on Wavelet Transform for Image Detection

    International Nuclear Information System (INIS)

    Barba, L; Vargas, L; Torres, C; Mattos, L

    2011-01-01

    In this work, a method is presented for the optimization of digital correlators to improve feature detection in images, using the wavelet transform as well as subband filtering. An approach to wavelet-based image contrast enhancement is proposed in order to increase the performance of digital correlators. The multiresolution representation is employed to improve the high-frequency content of images, taking into account the input contrast measured for the original image. The energy of correlation peaks and the discrimination level of several objects are improved with this technique. To demonstrate the potential of extracting features using the wavelet transform, small objects inside reference images are detected successfully.

  3. Nonlocal Regularized Algebraic Reconstruction Techniques for MRI: An Experimental Study

    Directory of Open Access Journals (Sweden)

    Xin Li

    2013-01-01

    Full Text Available We attempt to revitalize researchers' interest in algebraic reconstruction techniques (ART) by expanding their capabilities and demonstrating their potential in speeding up the process of MRI acquisition. Using a continuous-to-discrete model, we experimentally study the application of ART to MRI reconstruction, which unifies previous nonuniform-fast-Fourier-transform (NUFFT)-based and gridding-based approaches. Under the framework of ART, we advocate the use of nonlocal regularization techniques which are leveraged from our previous research on modeling photographic images. It is experimentally shown that nonlocal regularization ART (NR-ART) can often outperform its local counterparts in terms of both subjective and objective quality of reconstructed images. On one real-world k-space data set, we find that nonlocal regularization can achieve satisfactory reconstruction from as few as one-third of the samples. We also address an issue related to image reconstruction from real-world k-space data that is overlooked in the open literature: the consistency of reconstructed images across different resolutions. A resolution-consistent extension of NR-ART is developed and shown to effectively suppress the artifacts arising from frequency extrapolation. Both source code and experimental results of this work are made fully reproducible.
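
    The classical ART update underlying such methods is the Kaczmarz sweep: project the current estimate onto the hyperplane of one measurement at a time. A minimal numpy sketch on a toy system (not the paper's MRI operator):

```python
import numpy as np

def art(A, b, iters=50, relax=1.0):
    """Kaczmarz / ART: cycle through the rows, projecting x onto each
    hyperplane a_i . x = b_i with relaxation parameter `relax`."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A ** 2).sum(axis=1)
    for _ in range(iters):
        for i in range(A.shape[0]):
            r = b[i] - A[i] @ x
            x += relax * r / row_norm2[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 40))
x_true = rng.normal(size=40)
print(np.linalg.norm(art(A, A @ x_true) - x_true))  # converges toward x_true
```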

  4. Scalable Track Detection in SAR CCD Images

    Energy Technology Data Exchange (ETDEWEB)

    Chow, James G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Quach, Tu-Thach [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
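
    A minimal PyTorch sketch of a plain stack of 3-by-3 convolutions for per-pixel track labeling, in the spirit of the architecture described; the channel width and depth are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TrackNet(nn.Module):
    """A plain series of 3x3 convolutions producing a per-pixel logit."""
    def __init__(self, width=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

ccd = torch.randn(1, 1, 128, 128)      # a CCD image batch (synthetic)
logits = TrackNet()(ccd)               # same spatial size as the input
print(logits.shape)                    # torch.Size([1, 1, 128, 128])
```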

  5. An image overall complexity evaluation method based on LSD line detection

    Science.gov (United States)

    Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo

    2017-04-01

    In the artificial world, both city traffic roads and engineered buildings contain many linear features. Therefore, research on image complexity based on linear information has become an important direction in the digital image processing field. This paper detects the straight-line information in an image and uses the straight lines as parameter indices to establish a quantitative and accurate mathematical relationship. We use the LSD line detection algorithm, which has a good straight-line detection effect, to detect the lines, and classify the detected lines according to an expert consultation strategy. We then use a neural network to carry out weight training and obtain the weight coefficients of the indices. The image complexity is calculated by the complexity calculation model. The experimental results show that the proposed method is effective. The number of straight lines in the image and their degree of dispersion, uniformity, and so on all affect the complexity of the image.
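
    A sketch of extracting line-based statistics with OpenCV; note that cv2.createLineSegmentDetector has been absent from some OpenCV builds for licensing reasons, so its availability is an assumption (cv2.HoughLinesP is a common fallback), the file name is a placeholder, and the indices printed are illustrative rather than the paper's.

```python
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path

lsd = cv2.createLineSegmentDetector()   # may be unavailable in some builds
lines = lsd.detect(img)[0]              # N x 1 x 4 array: x1, y1, x2, y2
seg = lines.reshape(-1, 4)              # assumes at least one line was found

lengths = np.hypot(seg[:, 2] - seg[:, 0], seg[:, 3] - seg[:, 1])
angles = np.arctan2(seg[:, 3] - seg[:, 1], seg[:, 2] - seg[:, 0])

# simple line-based indices that could feed a complexity model
print("line count:", len(seg))
print("mean length:", lengths.mean())
print("angle dispersion:", angles.std())
```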

  6. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Diagnosis is usually difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of subsequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting OMP with ROMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than other state-of-the-art schemes.
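
    scikit-learn ships plain OMP but not ROMP, so the sketch below uses OrthogonalMatchingPursuit as a stand-in to show the sparse-coding step against a fixed dictionary; the random dictionary and synthetic patch are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))        # dictionary: 64-dim atoms, 256 of them
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms

# a patch that is truly 5-sparse in the dictionary
coef_true = np.zeros(256)
coef_true[rng.choice(256, 5, replace=False)] = 1.0
patch = D @ coef_true

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(D, patch)
print("recovered support:", np.flatnonzero(omp.coef_))
print("reconstruction error:", np.linalg.norm(D @ omp.coef_ - patch))
```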

  7. Ultrasonic Detection Using Correlation Images (Preprint)

    National Research Council Canada - National Science Library

    Cepel, Raini; Ho, K. C; Rinker, Brett A; Palmer, Donald D; Neal, Steven P

    2006-01-01

    In this paper, we describe an amplitude-independent approach for imaging and detection based on the similarity of adjacent signals, quantified by the correlation coefficient calculated between A-scans.
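
    The core quantity is just the Pearson correlation between neighboring A-scans; a numpy sketch with synthetic scan lines (the array shapes and the injected flaw echo are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# B-scan: 50 adjacent A-scans of 400 samples each (synthetic)
bscan = rng.normal(size=(50, 400))
bscan[20:30] += 2 * np.sin(np.linspace(0, 20, 400))   # a coherent flaw echo

# correlation image: similarity of each A-scan with its next neighbor
corr = np.array([np.corrcoef(bscan[i], bscan[i + 1])[0, 1]
                 for i in range(bscan.shape[0] - 1)])
print("near the flaw:", corr[20:29].mean(), " elsewhere:", corr[:19].mean())
```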

  8. Spoofing detection on facial images recognition using LBP and GLCM combination

    Science.gov (United States)

    Sthevanie, F.; Ramadhani, K. N.

    2018-03-01

    A challenge for facial-image-based security systems is how to detect facial image falsification such as face spoofing. Spoofing occurs when someone tries to pretend to be a registered user to obtain illegal access and gain advantage from the protected system. This research implements a facial image spoofing detection method by analyzing image texture. The proposed method for texture analysis combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a high detection rate compared to that of using only the LBP feature or the GLCM feature.
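
    A sketch of building a combined LBP + GLCM feature vector with scikit-image (the function names follow current releases, e.g. graycomatrix, which older releases spell greycomatrix); the input image is synthetic, and the bin/offset choices are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

img = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in face crop

# LBP histogram (uniform patterns, 8 neighbors, radius 1 -> values 0..9)
lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

# GLCM statistics at distance 1, horizontal offset
glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
glcm_feats = np.array([graycoprops(glcm, p)[0, 0]
                       for p in ("contrast", "homogeneity",
                                 "energy", "correlation")])

feature_vector = np.concatenate([lbp_hist, glcm_feats])  # feeds a classifier
print(feature_vector.shape)
```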

  9. Automatic prostate MR image segmentation with sparse label propagation and domain-specific manifold regularization.

    Science.gov (United States)

    Liao, Shu; Gao, Yaozong; Shi, Yinghuan; Yousuf, Ambereen; Karademir, Ibrahim; Oto, Aytekin; Shen, Dinggang

    2013-01-01

    Automatic prostate segmentation in MR images plays an important role in prostate cancer diagnosis. However, there are two main challenges: (1) large inter-subject prostate shape variations; (2) inhomogeneous prostate appearance. To address these challenges, we propose a new hierarchical prostate MR segmentation method, with the main contributions lying in the following aspects: First, the most salient features are learnt from atlases based on a subclass discriminant analysis (SDA) method, which aims to find a discriminant feature subspace by simultaneously maximizing the inter-class distance and minimizing the intra-class variations. The projected features, instead of only voxel-wise intensity, serve as the anatomical signature of each voxel. Second, based on the projected features, a new multi-atlas sparse label fusion framework is proposed to estimate the prostate likelihood of each voxel in the target image at the coarse level. Third, a domain-specific semi-supervised manifold regularization method is proposed to incorporate the most reliable patient-specific information, identified by the prostate likelihood map, to refine the segmentation result at the fine level. Our method is evaluated on a T2-weighted prostate MR image dataset consisting of 66 patients and compared with two state-of-the-art segmentation methods. Experimental results show that our method consistently achieves higher segmentation accuracy than the other methods under comparison.

  10. Low contrast detectability for color patterns variation of display images

    International Nuclear Information System (INIS)

    Ogura, Akio

    1998-01-01

    In recent years, radionuclide images have been acquired in digital form and displayed with false colors for signal intensity. These color scales for signal intensity have various patterns. In this study, low contrast detectability was compared between gray-scale coding and three color scales: the hot color scale, the prism color scale and the stripe color scale. SPECT images of a brain phantom were displayed using the four color patterns. The printed images and display images were evaluated with ROC analysis. Display images showed higher detectability than printed images. The hot scale and gray scale images yielded better ROC Az values than the prism scale images because the prism scale images showed a higher false positive rate. (author)

  11. Optic disc detection and boundary extraction in retinal images.

    Science.gov (United States)

    Basit, A; Fraz, Muhammad Moazam

    2015-04-10

    With the development of digital image processing, analysis and modeling techniques, automatic retinal image analysis is emerging as an important screening tool for early detection of ophthalmologic disorders such as diabetic retinopathy and glaucoma. In this paper, a robust method for optic disc detection and extraction of the optic disc boundary is proposed to help in the development of computer-assisted diagnosis and treatment of such ophthalmic diseases. The proposed method is based on morphological operations, smoothing filters, and the marker-controlled watershed transform. Internal and external markers are used to first modify the gradient magnitude image, and then the watershed transformation is applied to this modified gradient magnitude image for boundary extraction. This method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc. The proposed method has an optic disc detection success rate of 100%, 100%, 100% and 98.9% for the DRIVE, Shifa, CHASE_DB1, and DIARETDB1 databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 61.88%, 70.96%, 45.61%, and 54.69% for these databases, respectively, which is higher than current methods.
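
    A generic marker-controlled watershed in scikit-image, as a sketch of the boundary-extraction machinery the paper builds on; the simple intensity-threshold marker rules and the synthetic disc image are assumptions, not the paper's morphological construction.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

yy, xx = np.mgrid[:128, :128]
img = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 400)    # bright "disc"
img += np.random.default_rng(0).normal(0, 0.02, img.shape)

gradient = sobel(img)                   # edge-strength map to flood over

# internal marker: confidently inside the disc; external: background
markers = np.zeros_like(img, dtype=int)
markers[img > 0.8] = 2                  # inside
markers[img < 0.1] = 1                  # outside

labels = watershed(gradient, markers)   # floods outward from the markers
disc_mask = labels == 2
print("disc pixels:", int(disc_mask.sum()))
```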

  12. Comparative study of 201Tl reinjection mycoardial imaging and late imaging after reinjection for detecting myocardial viability

    International Nuclear Information System (INIS)

    Lin Jinghui; Chai Xiaofeng; Zhu Mei

    1997-01-01

    PURPOSE: To compare 201Tl reinjection imaging with late imaging in detecting myocardial viability. METHODS: 62 patients with myocardial infarction underwent 201Tl exercise, 3-5 hour redistribution, 16-35 minute post-reinjection and 12-19 hour post-reinjection myocardial tomography imaging. After imaging, percutaneous transluminal coronary angioplasty (PTCA) was performed in 15 patients, and exercise-redistribution myocardial imaging was then repeated. RESULTS: The 62 patients had 126 segments with irreversible defects on stress-redistribution imaging; 48 segments showed radioactive filling at 16-35 minutes post-reinjection, a myocardial viability detection rate of 38.1% (48/126). 51 segments presented redistribution on the 12-19 hour late imaging, a detection rate of 40.5% (51/126). There was no significant difference in the detection rate between the two (χ² = 0.16, P > 0.05). When both methods were combined, however, 62 segments showed refilling, and the detection rate was enhanced to 49.2% (62/126). In the 15 patients who had PTCA, 17 segments were found to be viable before PTCA; after PTCA, 12 of these segments had improved 201Tl perfusion, a positive predictive accuracy of 70.6%. 11 segments were found to be infarcted; 9 of them had non-improved 201Tl perfusion after PTCA, a negative predictive accuracy of 81.8%. CONCLUSION: There was no significant difference in the myocardial viability detection rate between 201Tl reinjection imaging and late imaging; combining both methods enhances the detection rate.

  13. Multispectral image feature fusion for detecting land mines

    Energy Technology Data Exchange (ETDEWEB)

    Clark, G.A.; Fields, D.J.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1994-11-15

    Our system fuses information contained in registered images from multiple sensors to reduce the effect of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six visible wavelength bands, dual-band infrared (5 micron and 10 micron), and ground penetrating radar. Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite of sensors detects a variety of physical properties that are better separated in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, holes made by animals and natural processes, etc.) and some artifacts.

  14. Image recognition on raw and processed potato detection: a review

    Science.gov (United States)

    Qi, Yan-nan; Lü, Cheng-xu; Zhang, Jun-ning; Li, Ya-shuo; Zeng, Zhen; Mao, Wen-hua; Jiang, Han-lu; Yang, Bing-nan

    2018-02-01

    Objective: China's potato staple food strategy clearly points out the need to improve potato processing, while the bottleneck of this strategy is the technology and equipment for selecting appropriate raw and processed potatoes. The purpose of this paper is to summarize advanced detection methods for raw and processed potatoes. Method: Research literature in the field of image-recognition-based potato quality detection, covering shape, weight, mechanical damage, germination, greening, black heart, scab, etc., was reviewed, and the development and direction of this field were summarized. Result: In order to obtain whole-potato surface information, hardware is built by synchronizing an image sensor with a conveyor belt to achieve multi-angle images of a single potato. Research on image recognition of potato shape is popular and mature, including qualitative discrimination between abnormal and sound potatoes, and even between round and oval potatoes, with recognition accuracies of more than 83%. Weight is an important indicator for potato grading, and the image classification accuracy reaches more than 93%. Image recognition of potato mechanical damage focuses on qualitative identification, with the main affecting factors being damage shape and damage time. Image recognition of potato germination usually uses the potato surface image and edge germination points. Both qualitative and quantitative detection of green potatoes have been researched; currently, scab and black heart image recognition need to be operated in a stable detection environment or with a specific device. Image recognition of processed potato mainly focuses on potato chips, slices, fries, etc. Conclusion: Image recognition as a rapid food detection tool has been widely researched for raw and processed potato quality analysis, and its techniques and equipment have the potential for commercialization in the short term, to meet the strategic demand of developing the potato as a staple food.

  15. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    Science.gov (United States)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features from remote sensing images directly and then use them to classify images into cloud or other classes. But when the cloud is thin and small, these methods become inaccurate. In this paper, a linear combination model of cloud images is proposed; by using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. First, the automatic cloud detection program uses the linear combination model to split the cloud information and surface information in transparent cloud images, then uses different image features to recognize the cloud parts. In consideration of computational efficiency, an AdaBoost classifier was introduced to combine the different features into a cloud classifier. AdaBoost can select the most effective features from many normal features, so the calculation time is largely reduced. Finally, we selected a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier to compare with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.

  16. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin; Dai, Wei; Schuster, Gerard T.

    2013-01-01

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity model.

  17. Color image fusion for concealed weapon detection

    NARCIS (Netherlands)

    Toet, A.

    2003-01-01

    Recent advances in passive and active imaging sensor technology offer the potential to detect weapons that are concealed underneath a person's clothing or carried along in bags. Although the concealed weapons can sometimes easily be detected, it can be difficult to perceive their context.

  18. Canny Edge Detection in Cross-Spectral Fused Images

    Directory of Open Access Journals (Sweden)

    Patricia Suárez

    2017-02-01

    Full Text Available Images of different spectra provide ample information that helps greatly in the process of identification and distinction of objects that have unique spectral signatures. In this paper, the use of cross-spectral images in the process of edge detection is evaluated. This study aims to assess the Canny edge detector with two variants: the first relates to the use of merged cross-spectral images and the second to the inclusion of morphological filters. To ensure the quality of the data used in this study, the GQM (Goal-Question-Metric) framework was applied to reduce noise and increase the entropy of the images. The metrics obtained in the experiments confirm that the quantity and quality of the detected edges increase significantly after the inclusion of a morphological filter and a channel of the near-infrared spectrum in the merged images.
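
    A sketch of the fused-image edge detection pipeline with OpenCV; the averaging fusion, the morphological opening and the hysteresis thresholds are illustrative assumptions rather than the paper's exact settings, and the file names are placeholders.

```python
import cv2
import numpy as np

# hypothetical registered visible and near-infrared frames
vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE)

# naive fusion by averaging the two channels
fused = cv2.addWeighted(vis, 0.5, nir, 0.5, 0)

# morphological opening suppresses speckle before edge detection
clean = cv2.morphologyEx(fused, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

edges = cv2.Canny(clean, 50, 150)   # hysteresis thresholds are assumptions
print("edge pixels:", int((edges > 0).sum()))
```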

  19. High resolution PET breast imager with improved detection efficiency

    Science.gov (United States)

    Majewski, Stanislaw

    2010-06-08

    A highly efficient PET breast imager for detecting lesions in the entire breast, including those located close to the patient's chest wall. The breast imager includes a ring of imaging modules surrounding the imaged breast. Each imaging module includes a slant imaging light guide inserted between a gamma radiation sensor and a photodetector. The slant light guide permits the gamma radiation sensors to be placed in close proximity to the skin of the chest wall, thereby extending the sensitive region of the imager to the base of the breast. Several types of photodetectors are proposed for use in the detector modules, with compact silicon photomultipliers as the preferred choice due to their compactness. The geometry of the detector heads and the arrangement of the detector ring significantly reduce dead regions, thereby improving detection efficiency for lesions located close to the chest wall.

  20. DEEP LEARNING AND IMAGE PROCESSING FOR AUTOMATED CRACK DETECTION AND DEFECT MEASUREMENT IN UNDERGROUND STRUCTURES

    Directory of Open Access Journals (Sweden)

    F. Panella

    2018-05-01

    Full Text Available This work presents the combination of Deep Learning (DL) and image processing to produce an automated crack recognition and defect measurement tool for civil structures. The authors focus on tunnel structures and surveys, and have developed an end-to-end tool for asset management of underground structures. In order to maintain the serviceability of tunnels, regular inspection is needed to assess their structural status. The traditional method of carrying out the survey is visual inspection: simple, but slow and relatively expensive, and the quality of the output depends on the ability and experience of the engineer as well as on the total workload (stress and tiredness may influence the ability to observe and record information). As a result of these issues, in the last decade there has been a desire to automate monitoring using new methods of inspection. The present paper has the goal of combining DL with traditional image processing to create a tool able to detect, locate and measure structural defects.

  1. Analytic 3D image reconstruction using all detected events

    International Nuclear Information System (INIS)

    Kinahan, P.E.; Rogers, J.G.

    1988-11-01

    We present the results of testing a previously presented algorithm for three-dimensional image reconstruction that uses all gamma-ray coincidence events detected by a PET volume-imaging scanner. By using two iterations of an analytic filter-backprojection method, the algorithm is not constrained by the requirement of a spatially invariant detector point spread function, which limits normal analytic techniques. Removing this constraint allows the incorporation of all detected events, regardless of orientation, which improves the statistical quality of the final reconstructed image.

  2. NEAR REAL-TIME AUTOMATIC MARINE VESSEL DETECTION ON OPTICAL SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    G. Máttyus

    2013-05-01

    Full Text Available Vessel monitoring and surveillance is important for maritime safety and security, environmental protection and border control. Ship monitoring systems based on Synthetic Aperture Radar (SAR) satellite images are operational. On SAR images, ships made of metal with sharp edges appear as bright dots and edges, so they can be well distinguished from the water. Since radar is independent of sunlight and can acquire images even in cloudy weather and rain, it provides a reliable service. Vessel detection from spaceborne optical images (VDSOI) can extend the SAR-based systems by providing more frequent revisit times and overcoming some drawbacks of SAR images (e.g. lower spatial resolution, difficult human interpretation). Optical satellite images (OSI) can have a higher spatial resolution, enabling the detection of smaller vessels and enhancing vessel type classification. The human interpretation of an optical image is also easier than that of a SAR image. In this paper I present a rapid automatic vessel detection method which uses pattern recognition methods originally developed in the computer vision field. In the first step I train a binary classifier from image samples of vessels and background. The classifier uses simple features which can be calculated very fast. For the detection, the classifier is slid across the image at various orientations and scales. The detector has a cascade structure which rejects most of the background in the early stages, leading to faster execution. The detections are grouped together to avoid multiple detections. Finally, the position, size (i.e. length and width) and heading of each vessel are extracted from the contours of the vessel. The presented method is parallelized, so it runs fast (within minutes for a 16000 × 16000 pixel image) on a multicore computer, enabling near real-time applications, e.g. one hour from image acquisition to end user.
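
    A minimal sketch of the multi-scale sliding-window stage described above is shown below; `classify` is a placeholder for the trained cascade classifier, and the window size, stride and scales are assumptions (the paper's features, cascade stages and detection grouping are not reproduced).

```python
# Sketch: multi-scale sliding-window detection with a placeholder classifier.
import numpy as np

def classify(window):
    # Placeholder for the trained cascade: True if a vessel is believed present.
    return window.mean() > 0.8  # hypothetical fast feature test

def sliding_window_detect(image, win=24, step=8, scales=(1.0, 0.5, 0.25)):
    detections = []
    for s in scales:
        h, w = int(image.shape[0] * s), int(image.shape[1] * s)
        # Nearest-neighbour rescaling keeps the sketch dependency-free.
        ys = (np.arange(h) / s).astype(int)
        xs = (np.arange(w) / s).astype(int)
        scaled = image[np.ix_(ys, xs)]
        for y in range(0, h - win + 1, step):
            for x in range(0, w - win + 1, step):
                if classify(scaled[y:y + win, x:x + win]):
                    # Map the hit back to full-resolution coordinates.
                    detections.append((int(y / s), int(x / s), int(win / s)))
    return detections
```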

  3. Near Real-Time Automatic Marine Vessel Detection on Optical Satellite Images

    Science.gov (United States)

    Máttyus, G.

    2013-05-01

    Vessel monitoring and surveillance is important for maritime safety and security, environmental protection and border control. Ship monitoring systems based on Synthetic Aperture Radar (SAR) satellite images are operational. On SAR images, ships made of metal with sharp edges appear as bright dots and edges, so they can be well distinguished from the water. Since radar is independent of sunlight and can acquire images even in cloudy weather and rain, it provides a reliable service. Vessel detection from spaceborne optical images (VDSOI) can extend the SAR-based systems by providing more frequent revisit times and overcoming some drawbacks of SAR images (e.g. lower spatial resolution, difficult human interpretation). Optical satellite images (OSI) can have a higher spatial resolution, enabling the detection of smaller vessels and enhancing vessel type classification. The human interpretation of an optical image is also easier than that of a SAR image. In this paper I present a rapid automatic vessel detection method which uses pattern recognition methods originally developed in the computer vision field. In the first step I train a binary classifier from image samples of vessels and background. The classifier uses simple features which can be calculated very fast. For the detection, the classifier is slid across the image at various orientations and scales. The detector has a cascade structure which rejects most of the background in the early stages, leading to faster execution. The detections are grouped together to avoid multiple detections. Finally, the position, size (i.e. length and width) and heading of each vessel are extracted from the contours of the vessel. The presented method is parallelized, so it runs fast (within minutes for a 16000 × 16000 pixel image) on a multicore computer, enabling near real-time applications, e.g. one hour from image acquisition to end user.

  4. A wavelet-based regularized reconstruction algorithm for SENSE parallel MRI with applications to neuroimaging

    International Nuclear Information System (INIS)

    Chaari, L.; Pesquet, J.Ch.; Chaari, L.; Ciuciu, Ph.; Benazza-Benyahia, A.

    2011-01-01

    To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques using multiple receiver coils have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full-FOV image has to be reconstructed from the acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used Sensitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria, including more general penalties than a classical l1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors. (authors)
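
    The sparsity-promoting wavelet regularization at the heart of such a method can be illustrated in its simplest setting: for an identity forward operator, the l1-regularized solution reduces to soft-thresholding of the wavelet coefficients. The sketch below (using PyWavelets) shows only this prior; the cited algorithm handles the full SENSE operator, complex wavelets and more general penalties.

```python
# Sketch: l1 wavelet regularization in the denoising special case, solved
# exactly by soft-thresholding of the detail coefficients.
import pywt

def wavelet_l1_denoise(y, lam=0.1, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(y, wavelet, level=level)
    out = [coeffs[0]]  # leave the coarse approximation unpenalized
    for detail_level in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, lam, mode="soft")
                         for d in detail_level))
    return pywt.waverec2(out, wavelet)
```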

  5. Inpainting for Fringe Projection Profilometry Based on Geometrically Guided Iterative Regularization.

    Science.gov (United States)

    Budianto; Lun, Daniel P K

    2015-12-01

    Conventional fringe projection profilometry methods often have difficulty in reconstructing the 3D model of objects when the fringe images have the so-called highlight regions due to strong illumination from nearby light sources. Within a highlight region, the fringe pattern is often overwhelmed by the strong reflected light. Thus, the 3D information of the object, which is originally embedded in the fringe pattern, can no longer be retrieved. In this paper, a novel inpainting algorithm is proposed to restore the fringe images in the presence of highlights. The proposed method first detects the highlight regions based on a Gaussian mixture model. Then, a geometric sketch of the missing fringes is made and used as the initial guess of an iterative regularization procedure for regenerating the missing fringes. The simulation and experimental results show that the proposed algorithm can accurately reconstruct the 3D model of objects even when their fringe images have large highlight regions. It significantly outperforms the traditional approaches in both quantitative and qualitative evaluations.
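
    The highlight-detection step lends itself to a short sketch: fit a two-component Gaussian mixture to the pixel intensities and take the brighter component as the highlight mask. This is an assumption-laden simplification; the GMM formulation in the paper may differ.

```python
# Sketch: two-component GMM on intensities; brighter component = highlights.
import numpy as np
from sklearn.mixture import GaussianMixture

def detect_highlights(fringe_image):
    x = fringe_image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    bright = int(np.argmax(gmm.means_.ravel()))      # component with higher mean
    labels = gmm.predict(x).reshape(fringe_image.shape)
    return labels == bright                          # boolean highlight mask
```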

  6. Colorectal cancer detection by hyperspectral imaging using fluorescence excitation scanning

    Science.gov (United States)

    Leavesley, Silas J.; Deal, Joshua; Hill, Shante; Martin, Will A.; Lall, Malvika; Lopez, Carmen; Rider, Paul F.; Rich, Thomas C.; Boudreaux, Carole W.

    2018-02-01

    Hyperspectral imaging technologies have shown great promise for biomedical applications. These techniques have been especially useful for detection of molecular events and characterization of cell, tissue, and biomaterial composition. Unfortunately, hyperspectral imaging technologies have been slow to translate to clinical devices - likely due to increased cost and complexity of the technology as well as long acquisition times often required to sample a spectral image. We have demonstrated that hyperspectral imaging approaches which scan the fluorescence excitation spectrum can provide increased signal strength and faster imaging, compared to traditional emission-scanning approaches. We have also demonstrated that excitation-scanning approaches may be able to detect spectral differences between colonic adenomas and adenocarcinomas and normal mucosa in flash-frozen tissues. Here, we report feasibility results from using excitation-scanning hyperspectral imaging to screen pairs of fresh tumoral and nontumoral colorectal tissues. Tissues were imaged using a novel hyperspectral imaging fluorescence excitation scanning microscope, sampling a wavelength range of 360-550 nm, at 5 nm increments. Image data were corrected to achieve a NIST-traceable flat spectral response. Image data were then analyzed using a range of supervised and unsupervised classification approaches within ENVI software (Harris Geospatial Solutions). Supervised classification resulted in >99% accuracy for single-patient image data, but only 64% accuracy for multi-patient classification (n=9 to date), with the drop in accuracy due to increased false-positive detection rates. Hence, initial data indicate that this approach may be a viable detection approach, but that larger patient sample sizes need to be evaluated and the effects of inter-patient variability studied.

  7. A New Method Based on Two-Stage Detection Mechanism for Detecting Ships in High-Resolution SAR Images

    Directory of Open Access Journals (Sweden)

    Xu Yongli

    2017-01-01

    Full Text Available Ship detection in synthetic aperture radar (SAR) remote sensing images, a fundamental but challenging problem in satellite image analysis, plays an important role in a wide range of applications and has received significant attention in recent years. To meet the requirements of ship detection in high-resolution SAR images regarding accuracy, automation, real-time operation and processing efficiency, we analyzed the characteristics of the ocean background and of ship targets in high-resolution SAR images and propose a ship detection algorithm consisting of two stages. The first stage applies a pre-trained classifier based on an improved spectral residual visual model to quickly obtain the visually salient regions containing ship targets, achieving a preliminary detection of ships. In the second stage, following the Bayesian theory of binary hypothesis detection, a local maximum a posteriori (MAP) classifier is designed for the classification of pixels. After parameter estimation and application of the decision criterion, the pixels within the salient target regions are classified into the two classes. Several types of satellite image data, such as TerraSAR-X (TS-X) and Radarsat-2, are used to evaluate the performance of the detection method. Compared with classical CFAR detection algorithms, experimental results show that the algorithm better suppresses false alarms caused by speckle noise and the inhomogeneity of the ocean clutter background. At the same time, the detection speed is increased by 25% to 45%.
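
    The first stage builds on the spectral residual saliency model; the sketch below implements the classical version (the paper uses an improved variant), with the filter sizes as assumptions.

```python
# Sketch: classical spectral-residual saliency map for coarse target search.
import numpy as np
from scipy import ndimage

def spectral_residual_saliency(image):
    f = np.fft.fft2(image.astype(float))
    log_amp = np.log(np.abs(f) + 1e-9)
    phase = np.angle(f)
    # The spectral residual is the log-amplitude minus its local average.
    residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return ndimage.gaussian_filter(saliency, sigma=2.5)  # smooth before thresholding
```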

  8. Approaches for improving image quality in magnetic induction tomography

    International Nuclear Information System (INIS)

    Maimaitijiang, Y; Roula, M A; Kahlert, J

    2010-01-01

    Magnetic induction tomography (MIT) is a contactless and non-invasive method for imaging the passive electrical properties of objects. Measuring the weak signal produced by eddy currents within biological soft tissues can be challenging in the presence of noise and the large signals resulting from the direct excitation–detection coil coupling. To detect haemorrhagic stroke in the brain, for instance, high measurement accuracy is required to enable images with enough contrast to differentiate between normal and haemorrhaged brain tissues. The reconstructed images are often very sensitive to inevitable measurement noise from the environment, system instabilities and patient-related artefacts such as movement and sweating. We propose methods for mitigating signal noise and improving image reconstruction. We evaluated and compared the use of a range of wavelet transforms for signal denoising. Adaptive regularization methods, including the L-curve, generalized cross validation (GCV) and noise estimation, were also compared. We evaluated all the described methods with measurements of in vitro tissues resembling a peripheral haemorrhagic cerebral stroke, created by placing a bio-membrane package filled with 10 ml of blood in a swine brain of 100 ml. We show that wavelet packet denoising combined with adaptive regularization can improve the quality of the reconstructed images.
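
    One of the adaptive regularization strategies compared above, L-curve selection, can be sketched as follows: solve the Tikhonov problem over a grid of parameters, trace the residual norm against the solution norm in log-log space, and pick the point of maximum curvature. Here `A` and `b` stand in for the MIT sensitivity matrix and measurement vector; the curvature estimate is a simple finite-difference approximation.

```python
# Sketch: L-curve corner selection of the Tikhonov regularization parameter.
import numpy as np

def tikhonov_solve(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def l_curve_corner(A, b, lams):
    # Residual and solution norms traced over the candidate parameters.
    pts = []
    for lam in lams:
        x = tikhonov_solve(A, b, lam)
        pts.append((np.log(np.linalg.norm(A @ x - b)), np.log(np.linalg.norm(x))))
    rho, eta = np.array(pts).T
    # Pick the point of maximum curvature of the (rho, eta) curve.
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    curvature = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    return lams[int(np.argmax(curvature))]
```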

  9. DWT-SATS Based Detection of Image Region Cloning

    OpenAIRE

    Michael Zimba

    2014-01-01

    A duplicated image region may be subjected to a number of attacks such as noise addition, compression, reflection, rotation, and scaling with the intention of either merely mating it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions inclusive of those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm firstly performs discrete wavelet transform, ...

  10. Gamma-ray detection and Compton camera image reconstruction with application to hadron therapy

    International Nuclear Information System (INIS)

    Frandes, M.

    2010-09-01

    detection chain from Monte Carlo simulations to reconstruction of individual events, and finally to image reconstruction. A list-mode Maximum-Likelihood Expectation-Maximization (MLEM) algorithm was adopted to perform image reconstruction in conjunction with the imaging response, which has to depict the complex behavior of the detector. Modeling the imaging response requires complex calculations, considering the incident angle, all measured energies, the Compton scatter angle in the first interaction, and the direction of the scattered electron (when measured). In the simplest form, each event response is described by Compton cone profiles. The shapes of the profiles are approximated by 1D Gaussian distributions. A strong correlation was observed between the pattern of the reconstructed high-energy gamma events and the location of the Bragg peak. The performance of the imaging technique illustrated by the HTI is a function of the detector performance in terms of detection efficiency, spatial and energy resolution, and acquisition time, and of the algorithms used to reconstruct the gamma-ray activity. Thus, besides optimizations of the imaging system, the applied imaging algorithm has a strong influence on the final reconstructed images. The HTI reconstructed images are corrupted by noise due to the low photon counts recorded, the uncertainties induced by finite energy resolution, Doppler broadening, the limited model used to estimate the imaging response, and the artifacts generated when iterating the MLEM algorithm. This noise is spatially varying and signal-dependent, representing a major obstacle for information extraction. Thus, image de-noising techniques were investigated. A wavelet-based multi-resolution strategy of list-mode MLEM regularization (WREM) was developed to reconstruct Compton images. At each iteration, a threshold-based processing step was integrated. The noise variance was estimated at each scale of the wavelet decomposition as the median value of the coefficients from the high

  11. Detection of Glaucoma Using Image Processing Techniques: A Critique.

    Science.gov (United States)

    Kumar, B Naveen; Chauhan, R P; Dahiya, Nidhi

    2018-01-01

    The primary objective of this article is to present a summary of different types of image processing methods employed for the detection of glaucoma, a serious eye disease. Glaucoma affects the optic nerve in which retinal ganglion cells become dead, and this leads to loss of vision. The principal cause is the increase in intraocular pressure, which occurs in open-angle and angle-closure glaucoma, the two major types affecting the optic nerve. In the early stages of glaucoma, no perceptible symptoms appear. As the disease progresses, vision starts to become hazy, leading to blindness. Therefore, early detection of glaucoma is needed for prevention. Manual analysis of ophthalmic images is fairly time-consuming and accuracy depends on the expertise of the professionals. Automatic analysis of retinal images is an important tool. Automation aids in the detection, diagnosis, and prevention of risks associated with the disease. Fundus images obtained from a fundus camera have been used for the analysis. Requisite pre-processing techniques have been applied to the image and, depending upon the technique, various classifiers have been used to detect glaucoma. The techniques mentioned in the present review have certain advantages and disadvantages. Based on this study, one can determine which technique provides an optimum result.

  12. Computerized detection of lacunar infarcts in brain MR images

    International Nuclear Information System (INIS)

    Uchiyama, Yoshikazu; Matsui, Atsushi; Yokoyama, Ryujiro

    2007-01-01

    Asymptomatic lacunar infarcts are often found in the Brain Dock. The presence of asymptomatic lacunar infarcts increases the risk of serious cerebral infarction. Thus, it is an important task for radiologists and/or neurosurgeons to detect asymptomatic lacunar infarcts in MR images. However, it is difficult for radiologists and/or neurosurgeons to identify lacunar infarcts correctly in MR images, because it is hard to distinguish between lacunar infarcts and enlarged Virchow-Robin spaces. Therefore, the purpose of our study was to develop a computer-aided diagnosis scheme for the detection of lacunar infarcts in order to assist radiologists' and/or neurosurgeons' interpretation as a "second opinion." Our database consisted of 1143 T2-weighted MR images and 1143 T1-weighted MR images, which were selected from 132 patients. First, we segmented the cerebral parenchyma region by use of a region growing technique. The white top-hat transformation was then applied for enhancement of lacunar infarcts. Multiple-phase binarization was used for identifying initial candidates of lacunar infarcts. For the removal of false positives (FPs), 12 features were determined for each of the initial candidates in the T2- and T1-weighted MR images. Rule-based schemes and an artificial neural network with these features were used for distinguishing between lacunar infarcts and FPs. The sensitivity of detection of lacunar infarcts was 96.8% (90/93) with 0.69 (737/1063) FPs per image. This computerized method may be useful for radiologists and/or neurosurgeons in detecting lacunar infarcts in MR images. (author)
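
    The enhancement step named above, the white top-hat transformation, subtracts a morphological opening from the image so that bright structures smaller than the structuring element stand out. A minimal OpenCV sketch follows, with the element size as an assumption.

```python
# Sketch: white top-hat enhancement of small bright lesions on a T2 slice.
import cv2

def enhance_lacunes(t2_slice, radius=5):
    """t2_slice: single-channel uint8 image; radius is illustrative."""
    selem = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                      (2 * radius + 1, 2 * radius + 1))
    # White top-hat = image minus its morphological opening.
    return cv2.morphologyEx(t2_slice, cv2.MORPH_TOPHAT, selem)
```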

  13. The value of diffusion-weighted imaging in combination with T2-weighted imaging for rectal cancer detection

    International Nuclear Information System (INIS)

    Rao Shengxiang; Zeng Mengsu; Chen Caizhong; Li Renchen; Zhang Shujie; Xu Jianming; Hou Yingyong

    2008-01-01

    Objective: To evaluate the clinical value of diffusion-weighted imaging (DWI) in combination with T2-weighted imaging (T2WI) for the detection of rectal cancer, as compared with T2WI alone. Materials and methods: Forty-five patients with rectal cancer and 20 without rectal cancer underwent DWI with parallel imaging and T2WI on a 1.5 T scanner. Images were independently reviewed by two readers, blinded to the results, to determine the detectability of rectal cancer. The detectability of T2W imaging without and with DW imaging was assessed by means of receiver operating characteristic (ROC) analysis. The interobserver agreement between the two readers was calculated with kappa statistics. Results: The ROC analysis showed that each of the two readers achieved significantly more accurate results with T2W imaging combined with DW imaging than with T2W imaging alone. The Az values for the two readers for T2WI versus T2WI combined with DWI were 0.918 versus 0.991 (p = 0.0494) and 0.934 versus 0.997 (p = 0.0475), respectively. The values of kappa were 0.934 for T2WI and 0.948 for T2WI combined with DWI between the two readers. Conclusion: The addition of DW imaging to conventional T2W imaging provides better detection of rectal cancer.

  14. Automatic correspondence detection in mammogram and breast tomosynthesis images

    Science.gov (United States)

    Ehrhardt, Jan; Krüger, Julia; Bischof, Arpad; Barkhausen, Jörg; Handels, Heinz

    2012-02-01

    Two-dimensional mammography is the major imaging modality in breast cancer detection. A disadvantage of mammography is the projective nature of this imaging technique. Tomosynthesis is an attractive modality with the potential to combine the high contrast and high resolution of digital mammography with the advantages of 3D imaging. In order to facilitate diagnostics and treatment in the current clinical work-flow, correspondences between tomosynthesis images and previous mammographic exams of the same women have to be determined. In this paper, we propose a method to detect correspondences in 2D mammograms and 3D tomosynthesis images automatically. In general, this 2D/3D correspondence problem is ill-posed, because a point in the 2D mammogram corresponds to a line in the 3D tomosynthesis image. The goal of our method is to detect the "most probable" 3D position in the tomosynthesis images corresponding to a selected point in the 2D mammogram. We present two alternative approaches to solve this 2D/3D correspondence problem: a 2D/3D registration method and a 2D/2D mapping between mammogram and tomosynthesis projection images with a following back projection. The advantages and limitations of both approaches are discussed and the performance of the methods is evaluated qualitatively and quantitatively using a software phantom and clinical breast image data. Although the proposed 2D/3D registration method can compensate for moderate breast deformations caused by different breast compressions, this approach is not suitable for clinical tomosynthesis data due to the limited resolution and blurring effects perpendicular to the direction of projection. The quantitative results show that the proposed 2D/2D mapping method is capable of detecting corresponding positions in mammograms and tomosynthesis images automatically for 61 out of 65 landmarks. The proposed method can facilitate diagnosis, visual inspection and comparison of 2D mammograms and 3D tomosynthesis images for

  15. Gamma-ray detection and Compton camera image reconstruction with application to hadron therapy; Detection des rayons gamma et reconstruction d'images pour la camera Compton: Application a l'hadrontherapie

    Energy Technology Data Exchange (ETDEWEB)

    Frandes, M.

    2010-09-15

    detection chain from Monte Carlo simulations to reconstruction of individual events, and finally to image reconstruction. A list-mode Maximum-Likelihood Expectation-Maximization (MLEM) algorithm was adopted to perform image reconstruction in conjunction with the imaging response, which has to depict the complex behavior of the detector. Modeling the imaging response requires complex calculations, considering the incident angle, all measured energies, the Compton scatter angle in the first interaction, and the direction of the scattered electron (when measured). In the simplest form, each event response is described by Compton cone profiles. The shapes of the profiles are approximated by 1D Gaussian distributions. A strong correlation was observed between the pattern of the reconstructed high-energy gamma events and the location of the Bragg peak. The performance of the imaging technique illustrated by the HTI is a function of the detector performance in terms of detection efficiency, spatial and energy resolution, and acquisition time, and of the algorithms used to reconstruct the gamma-ray activity. Thus, besides optimizations of the imaging system, the applied imaging algorithm has a strong influence on the final reconstructed images. The HTI reconstructed images are corrupted by noise due to the low photon counts recorded, the uncertainties induced by finite energy resolution, Doppler broadening, the limited model used to estimate the imaging response, and the artifacts generated when iterating the MLEM algorithm. This noise is spatially varying and signal-dependent, representing a major obstacle for information extraction. Thus, image de-noising techniques were investigated. A wavelet-based multi-resolution strategy of list-mode MLEM regularization (WREM) was developed to reconstruct Compton images. At each iteration, a threshold-based processing step was integrated. The noise variance was estimated at each scale of the wavelet decomposition as the median value of the coefficients from the high

  16. A Review of Imaging Methods for Prostate Cancer Detection

    Directory of Open Access Journals (Sweden)

    Saradwata Sarkar

    2016-01-01

    Full Text Available Imaging is playing an increasingly important role in the detection of prostate cancer (PCa). This review summarizes the key imaging modalities used in the diagnosis and localization of PCa: multiparametric ultrasound (US), multiparametric magnetic resonance imaging (MRI), MRI-US fusion imaging, and positron emission tomography (PET) imaging. Emphasis is laid on the biological and functional characteristics of tumors that rationalize the use of a specific imaging technique. Changes to the anatomical architecture of tissue can be detected by anatomical grayscale US and T2-weighted MRI. Tumors are known to progress through angiogenesis, a fact exploited by Doppler and contrast-enhanced US and dynamic contrast-enhanced MRI. The increased cellular density of tumors is targeted by elastography and diffusion-weighted MRI. PET imaging employs several different radionuclides to target the metabolic and cellular activities during tumor growth. Results from studies using these various imaging techniques are discussed and compared.

  17. Pedestrian detection in infrared image using HOG and Autoencoder

    Science.gov (United States)

    Chen, Tianbiao; Zhang, Hao; Shi, Wenjie; Zhang, Yu

    2017-11-01

    In order to guarantee the safety of driving at night, a vehicle-mounted night vision system was used to detect pedestrians in front of cars and raise an alarm to prevent potential danger. To decrease the false positive rate (FPR) and increase the true positive rate (TPR), a pedestrian detection method based on HOG and an Autoencoder (HOG+Autoencoder) is presented. Firstly, the HOG features of the input images are computed and encoded by the Autoencoder. The encoded features are then classified by a Softmax layer. During training, the Autoencoder is trained without supervision and the Softmax layer with supervision; the two are then stacked into one model and fine-tuned on labeled images. An experiment was conducted to compare the detection performance of HOG alone and HOG+Autoencoder, using images collected by a vehicle-mounted infrared camera. There were 80000 images in the training set and 20000 in the testing set, with a ratio of 1:3 between positive and negative images. The results show that at a TPR of 95%, the FPR of HOG+Autoencoder is 0.4%, while the FPR of HOG alone is 5%.
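
    A compact sketch of the HOG+Autoencoder idea is given below (PyTorch and scikit-image stand in for whatever framework the authors used): HOG vectors are first reconstructed by an autoencoder trained without supervision, then the encoder output feeds a softmax head that is fine-tuned with labels. Layer sizes, window geometry and optimizer settings are illustrative; `feats` is an (N, 3780) float tensor and `labels` an (N,) long tensor.

```python
# Sketch: HOG features -> autoencoder encoding -> softmax classification.
import torch
import torch.nn as nn
from skimage.feature import hog

def hog_features(image):
    # A 64x128 window with these settings yields a 3780-dimensional vector.
    return torch.tensor(hog(image, orientations=9, pixels_per_cell=(8, 8),
                            cells_per_block=(2, 2)), dtype=torch.float32)

encoder = nn.Sequential(nn.Linear(3780, 256), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(256, 3780), nn.Sigmoid())
head = nn.Linear(256, 2)  # pedestrian vs. background (softmax via cross-entropy)

def pretrain_autoencoder(feats, epochs=20):
    # Unsupervised stage: the autoencoder learns to reconstruct HOG vectors.
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(encoder(feats)), feats)
        loss.backward()
        opt.step()

def finetune(feats, labels, epochs=20):
    # Supervised stage: encoder and softmax head are fine-tuned jointly.
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(head(encoder(feats)), labels)
        loss.backward()
        opt.step()
```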

  18. Application of image recognition-based automatic hyphae detection in fungal keratitis.

    Science.gov (United States)

    Wu, Xuelian; Tao, Yuan; Qiu, Qingchen; Wu, Xinyi

    2018-03-01

    The purpose of this study is to evaluate the accuracy of two methods in the diagnosis of fungal keratitis, whereby one method is automatic hyphae detection based on image recognition and the other is the corneal smear. We evaluate the sensitivity and specificity of automatic hyphae detection based on image recognition in the diagnosis of fungal keratitis. We analyze the consistency between clinical symptoms and the density of hyphae, performing quantification using the method of automatic hyphae detection based on image recognition. In our study, 56 cases with fungal keratitis (single eye) and 23 cases with bacterial keratitis were included. All cases underwent the routine inspection of slit lamp biomicroscopy, corneal smear examination, microorganism culture and the assessment of in vivo confocal microscopy images before starting medical treatment. We then recognize the hyphae in the in vivo confocal microscopy images using automatic hyphae detection based on image recognition, evaluate its sensitivity and specificity, and compare them with the method of corneal smear. The next step is to use the density index to assess the severity of infection, find the correlation with the patients' clinical symptoms and evaluate the consistency between them. The accuracy of this technology was superior to corneal smear examination (p < 0.05). The sensitivity of automatic hyphae detection based on image recognition was 89.29%, and the specificity was 95.65%. The area under the ROC curve was 0.946. The correlation coefficient between the grading of severity in fungal keratitis by automatic hyphae detection based on image recognition and the clinical grading is 0.87. The technology of automatic hyphae detection based on image recognition had high sensitivity and specificity and was able to identify fungal keratitis better than the method of corneal smear examination. This technology has advantages when compared with the conventional artificial identification of confocal

  19. A heuristic approach to edge detection in on-line portal imaging

    International Nuclear Information System (INIS)

    McGee, Kiaran P.; Schultheiss, Timothy E.; Martin, Eric E.

    1995-01-01

    Purpose: Portal field edge detection is an essential component of several postprocessing techniques used in on-line portal imaging, including field shape verification, selective contrast enhancement, and treatment setup error detection. Currently, edge detection of successive fractions in a multifraction portal image series involves the repetitive application of the same algorithm. As the number of changes in the field is small compared to the total number of fractions, standard edge detection algorithms essentially recalculate the same field shape numerous times. A heuristic approach to portal edge detection has been developed that takes advantage of the relatively few changes in the portal field shape throughout a fractionation series. Methods and Materials: The routine applies a standard edge detection routine to calculate an initial field edge and saves the edge information. Subsequent fractions are processed by applying an edge detection operator over a small region about each point of the previously defined contour, to determine any shifts in the field shape in the new image. Failure of this edge check indicates that a significant change in the field edge has occurred, and the original edge detection routine is applied to the image. Otherwise the modified edge contour is used to define the new edge. Results: Two hundred and eighty-one portal images collected from an electronic portal imaging device were processed by the edge detection routine. The algorithm accurately calculated each portal field edge, as well as reducing processing time in subsequent fractions of an individual portal field by a factor of up to 14. Conclusions: The heuristic edge detection routine is an accurate and fast method for calculating portal field edges and determining field edge setup errors.

  20. Power spectrum weighted edge analysis for straight edge detection in images

    Science.gov (United States)

    Karvir, Hrishikesh V.; Skipper, Julie A.

    2007-04-01

    Most man-made objects provide characteristic straight line edges and, therefore, edge extraction is a commonly used target detection tool. However, noisy images often yield broken edges that lead to missed detections, and extraneous edges that may contribute to false target detections. We present a sliding-block approach for target detection using weighted power spectral analysis. In general, straight line edges appearing at a given frequency are represented as a peak in the Fourier domain at a radius corresponding to that frequency, and a direction corresponding to the orientation of the edges in the spatial domain. Knowing the edge width and spacing between the edges, a band-pass filter is designed to extract the Fourier peaks corresponding to the target edges and suppress image noise. These peaks are then detected by amplitude thresholding. The frequency band width and the subsequent spatial filter mask size are variable parameters to facilitate detection of target objects of different sizes under known imaging geometries. Many military objects, such as trucks, tanks and missile launchers, produce definite signatures with parallel lines and the algorithm proves to be ideal for detecting such objects. Moreover, shadow-casting objects generally provide sharp edges and are readily detected. The block operation procedure offers advantages of significant reduction in noise influence, improved edge detection, faster processing speed and versatility to detect diverse objects of different sizes in the image. With Scud missile launcher replicas as target objects, the method has been successfully tested on terrain board test images under different backgrounds, illumination and imaging geometries with cameras of differing spatial resolution and bit-depth.
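
    The Fourier-domain core of the approach can be sketched as follows: within each sliding block, an annular band-pass mask isolates the radii corresponding to the expected edge spacing, and the surviving power-spectrum peaks are thresholded; the peak direction gives the edge orientation in the spatial domain. The band limits and threshold below are assumptions.

```python
# Sketch: band-passed power-spectrum peak detection for straight edges.
import numpy as np

def straight_edge_peaks(block, r_lo, r_hi, amp_thresh):
    f = np.fft.fftshift(np.fft.fft2(block))
    power = np.abs(f) ** 2
    cy, cx = np.array(power.shape) // 2
    y, x = np.ogrid[:power.shape[0], :power.shape[1]]
    radius = np.hypot(y - cy, x - cx)
    band = (radius >= r_lo) & (radius <= r_hi)    # annular band-pass mask
    peaks = np.argwhere(band & (power > amp_thresh))
    # Peak direction in the Fourier plane gives the edge orientation in space.
    angles = np.arctan2(peaks[:, 0] - cy, peaks[:, 1] - cx)
    return peaks, angles
```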

  1. Spatio-temporal Hotelling observer for signal detection from image sequences.

    Science.gov (United States)

    Caucci, Luca; Barrett, Harrison H; Rodriguez, Jeffrey J

    2009-06-22

    Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection.
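
    For the purely spatial case, the Hotelling observer reduces to a covariance-whitened matched filter; a minimal sketch, assuming training images are available for both hypotheses, is shown below. The spatio-temporal observer of the paper applies the same construction to stacked space-time data vectors.

```python
# Sketch: Hotelling template w = K^{-1} (mean_present - mean_absent) and
# the scalar test statistic t = w . g for an image g.
import numpy as np

def hotelling_template(present, absent):
    """present, absent: (n_images, n_pixels) arrays of training data."""
    delta_s = present.mean(axis=0) - absent.mean(axis=0)
    residuals = np.vstack([present - present.mean(axis=0),
                           absent - absent.mean(axis=0)])
    cov = residuals.T @ residuals / (residuals.shape[0] - 1)
    # Small ridge: the sample covariance is singular when pixels outnumber
    # training images, which is the usual situation in practice.
    cov += 1e-6 * np.trace(cov) / cov.shape[0] * np.eye(cov.shape[0])
    return np.linalg.solve(cov, delta_s)

def hotelling_statistic(template, image):
    # Compare against a threshold to decide signal-present vs. signal-absent.
    return float(template @ image.ravel())
```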

  2. Fast and objective detection and analysis of structures in downhole images

    Science.gov (United States)

    Wedge, Daniel; Holden, Eun-Jung; Dentith, Mike; Spadaccini, Nick

    2017-09-01

    Downhole acoustic and optical televiewer images, and formation microimager (FMI) logs, are important datasets for structural and geotechnical analyses in the mineral and petroleum industries. Within these data, dipping planar structures appear as sinusoids, often in incomplete form and in abundance. Their detection is a labour-intensive and hence expensive task, and as such is a significant bottleneck in data processing, as companies may have hundreds of kilometres of logs to process each year. We present an image analysis system that harnesses the power of automated image analysis and provides an interactive user interface to support the analysis of televiewer images by users with different objectives. Our algorithm rapidly produces repeatable, objective results. We have embedded it in an interactive workflow to complement geologists' intuition and experience in interpreting data, to improve efficiency and to assist, rather than replace, the geologist. The main contributions include a new image quality assessment technique for highlighting the image areas most suited to automated structure detection and for detecting the boundaries of geological zones, and a novel sinusoid detection algorithm for detecting and selecting sinusoids with given confidence levels. Further tools are provided to perform rapid analysis and further detection of structures, e.g. limited to specific orientations.

  3. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  4. Image enhancement using thermal-visible fusion for human detection

    Science.gov (United States)

    Zaihidee, Ezrinda Mohd; Hawari Ghazali, Kamarul; Zuki Saleh, Mohd

    2017-09-01

    An increased interest in detecting human beings in video surveillance systems has emerged in recent years. Multisensory image fusion deserves more research attention due to its capability to improve the visual interpretability of an image. This study proposes a fusion technique for human detection based on a multiscale transform, using grayscale visible-light and infrared images. The samples for this study were taken from an online dataset. The images captured by the two sensors were decomposed into high- and low-frequency coefficients using the Stationary Wavelet Transform (SWT). An appropriate fusion rule was then used to merge the coefficients, and the final fused image was obtained by applying the inverse SWT. The qualitative and quantitative results show that the proposed method is superior to the two other methods in terms of enhancement of the target region and preservation of the detail information of the image.
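
    A minimal sketch of SWT-based fusion, using PyWavelets, is given below: the approximation coefficients of the two inputs are averaged, the larger-magnitude detail coefficients are kept, and the inverse transform recovers the fused image. The image sides must be divisible by 2**level for the SWT, and the max-abs fusion rule is a common default that may differ from the rule used in the study.

```python
# Sketch: thermal-visible fusion in the stationary wavelet domain.
import numpy as np
import pywt

def fuse_swt(visible, infrared, wavelet="db2", level=2):
    c_vis = pywt.swt2(visible.astype(float), wavelet, level=level)
    c_ir = pywt.swt2(infrared.astype(float), wavelet, level=level)
    fused = []
    for (a1, d1), (a2, d2) in zip(c_vis, c_ir):
        approx = (a1 + a2) / 2.0                       # average approximations
        details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)  # max-abs rule
                        for x, y in zip(d1, d2))
        fused.append((approx, details))
    return pywt.iswt2(fused, wavelet)
```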

  5. Remote sensing image ship target detection method based on visual attention model

    Science.gov (United States)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    The traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can allocate computing resources selectively according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. In view of this, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, and improves the detection efficiency of ship targets in remote sensing images.

  6. ARCOCT: Automatic detection of lumen border in intravascular OCT images.

    Science.gov (United States)

    Cheimariotis, Grigorios-Aris; Chatzizisis, Yiannis S; Koutkias, Vassilis G; Toutouzas, Konstantinos; Giannopoulos, Andreas; Riga, Maria; Chouvarda, Ioanna; Antoniadis, Antonios P; Doulaverakis, Charalambos; Tsamboulatidis, Ioannis; Kompatsiaris, Ioannis; Giannoglou, George D; Maglaveras, Nicos

    2017-11-01

    Intravascular optical coherence tomography (OCT) is an invaluable tool for the detection of pathological features on the arterial wall and the investigation of post-stenting complications. Computational lumen border detection in OCT images is highly advantageous, since it may support rapid morphometric analysis. However, automatic detection is very challenging, since OCT images typically include various artifacts that impact image clarity, including features such as side branches and intraluminal blood presence. This paper presents ARCOCT, a segmentation method for fully-automatic detection of the lumen border in OCT images. ARCOCT relies on multiple, consecutive processing steps, accounting for image preparation, contour extraction and refinement. In particular, for contour extraction ARCOCT employs a transformation of OCT images based on physical characteristics such as reflectivity and absorption of the tissue and, for contour refinement, local regression using weighted linear least squares and a 2nd-degree polynomial model is employed to achieve artifact and small-branch correction as well as smoothness of the artery mesh. Our major focus was to achieve accurate contour delineation in the various types of OCT images, i.e., even in challenging cases with branches and artifacts. ARCOCT has been assessed on a dataset of 1812 images (308 from stented and 1504 from native segments) obtained from 20 patients. ARCOCT was compared against ground-truth manual segmentation performed by experts on the basis of various geometric features (e.g. area, perimeter, radius, diameter, centroid, etc.) and closed-contour matching indicators (the Dice index, the Hausdorff distance and the undirected average distance), using standard statistical analysis methods. The proposed method was proven very efficient and close to the ground truth, exhibiting no statistically significant differences for most of the examined metrics. ARCOCT allows accurate and fully-automated lumen border

  7. Edge-detect interpolation for direct digital periapical images

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    The purpose of this study was to aid the use of digital images by applying edge-detect interpolation to direct digital periapical images. This study was performed by image processing of 20 digital periapical images using pixel replication, linear non-interpolation, linear interpolation, and edge-sensitive interpolation. The obtained results were as follows: 1. Pixel replication showed blocking artifacts and serious image distortion. 2. Linear interpolation showed a smoothing effect on the edges. 3. Edge-sensitive interpolation overcame the smoothing effect on the edges and produced a better image.

  8. Detection of pulmonary nodules on lung X-ray images. Studies on multi-resolutional filter and energy subtraction images

    International Nuclear Information System (INIS)

    Sawada, Akira; Sato, Yoshinobu; Kido, Shoji; Tamura, Shinichi

    1999-01-01

    The purpose of this work is to demonstrate the effectiveness of energy subtraction images for the detection of pulmonary nodules, and the effectiveness of a multi-resolutional filter applied to an energy subtraction image to detect pulmonary nodules. We also study factors influencing the accuracy of detection of pulmonary nodules from the viewpoints of types of images, types of digital filters and types of evaluation methods. As one type of image, we select the energy subtraction image, which removes bones such as ribs from the conventional X-ray image by utilizing the difference in X-ray absorption ratios between bone and soft tissue at different energies. Ribs and vessels are major causes of CAD errors in the detection of pulmonary nodules, and many studies have tried to solve this problem. We therefore select conventional X-ray images and energy subtraction X-ray images as the types of images, and select the ∇2G (Laplacian of Gaussian) filter, the Min-DD (minimum directional difference) filter and our multi-resolutional filter as the types of digital filters. We also select two evaluation methods and demonstrate the effectiveness of the energy subtraction image, the effectiveness of the Min-DD filter on a conventional X-ray image, and the effectiveness of the multi-resolutional filter on an energy subtraction image. (author)

  9. IMAGE DESCRIPTIONS FOR SKETCH BASED IMAGE RETRIEVAL

    OpenAIRE

    SAAVEDRA RONDO, JOSE MANUEL; SAAVEDRA RONDO, JOSE MANUEL

    2008-01-01

    Due to the massive use of the Internet together with the proliferation of media devices, content-based image retrieval has become an active discipline in computer science. A common content-based image retrieval approach requires that the user gives a regular image (e.g. a photo) as a query. However, having a regular image as the query may be a serious problem. Indeed, people commonly use an image retrieval system because they do not have the desired image at hand. An easy alternative way t...

  10. Application of image processing technology in yarn hairiness detection

    Directory of Open Access Journals (Sweden)

    Guohong ZHANG

    2016-02-01

    Full Text Available Digital image processing technology is one of the new methods for yarn detection; it can realize the digital characterization and objective evaluation of yarn appearance. This paper reviews the current status of development and application of digital image processing technology for yarn hairiness evaluation, and analyzes and compares the traditional detection methods with this newly developed method. Compared with the traditional methods, the image-processing-based method is more objective, fast and accurate, and it represents the main development trend of yarn appearance evaluation.

  11. Brain Stroke Detection by Microwaves Using Prior Information from Clinical Databases

    Directory of Open Access Journals (Sweden)

    Natalia Irishina

    2013-01-01

    Full Text Available Microwave tomographic imaging is an inexpensive, noninvasive modality for reconstructing the dielectric properties of media, which can be utilized as a screening method in clinical applications such as breast cancer and brain stroke detection. For breast cancer detection, the iterative algorithm of structural inversion with level sets provides well-defined boundaries and incorporates an intrinsic regularization, which makes it possible to discover small lesions. In the case of brain lesions, however, the inverse problem is much more difficult due to the skull, which causes low microwave penetration and highly noisy data. In addition, cerebral fluid has dielectric properties similar to those of blood, which makes the inversion more complicated. Nevertheless, the contrast in conductivity and permittivity values in this situation is significant, due to the high dielectric values of blood compared to those of the surrounding grey and white matter tissues. We show that using brain MRI images as prior information about the brain's configuration, along with known brain dielectric properties and the intrinsic regularization of structural inversion, allows successful and rapid stroke detection even in difficult cases. The method has been applied to 2D slices created from a database of 3D real MRI phantom images to effectively detect lesions larger than 2.5 × 10^-2 m in diameter.

  12. Spectral autofluorescence imaging of the retina for drusen detection

    Science.gov (United States)

    Foubister, James J.; Gorman, Alistair; Harvey, Andy; Hemert, Jano van

    2018-02-01

    The presence and characteristics of drusen in retinal images, namely their size, location, and distribution, can be used to aid in the diagnosis and monitoring of Age Related Macular Degeneration (AMD); one of the leading causes for blindness in the elderly population. Current imaging techniques are effective at determining the presence and number of drusen, but fail when it comes to classifying their size and form. These distinctions are important for correctly characterising the disease, especially in the early stages where the development of just one larger drusen can indicate progression. Another challenge for automated detection is in distinguishing them from other retinal features, such as cotton wool spots. We describe the development of a multi-spectral scanning-laser ophthalmoscope that records images of retinal autofluorescence (AF) in four spectral bands. This will offer the potential to detect drusen with improved contrast based on spectral discrimination for automated classification. The resulting improved specificity and sensitivity for their detection offers more reliable characterisation of AMD. We present proof of principle images prior to further system optimisation and clinical trials for assessment of enhanced detection of drusen.

  13. Automatic Detection of Vehicles Using Intensity Laser and Anaglyph Image

    Directory of Open Access Journals (Sweden)

    Hideo Araki

    2006-12-01

    Full Text Available In this work a methodology is presented for the automatic detection of moving cars in digital aerial images of urban areas, using intensity, anaglyph and subtraction images. The anaglyph image is used to identify moving cars in the exposure, because moving cars appear in red due to the lack of homology between the objects in the image pair. An implicit model was developed to provide a digital pixel value having this specific property, using the ratio between the RGB colors of the car objects in the anaglyph image. The intensity image is used to decrease false positives and to restrict processing to roads and streets. The subtraction image is applied to decrease the false positives caused by road markings. The goal of this paper is to automatically detect moving cars present in digital aerial images of urban areas. The implemented algorithm normalizes the left and right images and then forms the anaglyph using translation. The results show the applicability of the proposed method and its potential for automatic car detection, and demonstrate the performance of the proposed methodology.

  14. NQR: From imaging to explosives and drugs detection

    International Nuclear Information System (INIS)

    Osan, Tristan M.; Cerioni, Lucas M.C.; Forguez, Jose; Olle, Juan M.; Pusiol, Daniel J.

    2007-01-01

    The main aim of this work is to present an overview of the nuclear quadrupole resonance (NQR) spectroscopy capabilities for solid state imaging and detection of illegal substances, such as explosives and drugs. We briefly discuss the evolution of different NQR imaging techniques, in particular those involving spatial encoding which permit conservation of spectroscopic information. It has been shown that plastic explosives and other forbidden substances cannot be easily detected by means of conventional inspection techniques, such as those based on conventional X-ray technology. For this kind of applications, the experimental results show that the information inferred from NQR spectroscopy provides excellent means to perform volumetric and surface detection of dangerous explosive and drug compounds

  15. A Modified Harris Corner Detection for Breast IR Image

    Directory of Open Access Journals (Sweden)

    Chia-Yen Lee

    2014-01-01

    Full Text Available Harris corner detectors, which depend on strong invariance and a local autocorrelation function, display poor detection performance for infrared (IR) images with low contrast and non-obvious edges. In addition, feature points detected by Harris corner detectors are clustered due to the numerous non-local maxima. This paper proposes a modified Harris corner detector that includes two unique steps for processing IR images in order to overcome the aforementioned problems. Image contrast enhancement based on a generalized form of histogram equalization (HE), combined with adjusting the intensity resolution, causes false contours on IR images to acquire obvious edges. Adaptive non-maximal suppression based on eliminating neighboring pixels avoids clustered features. Preliminary results show that the proposed method can solve the clustering problem and successfully identify the representative feature points of IR breast images.
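
    The two added steps can be sketched with OpenCV as follows: plain histogram equalization stands in for the generalized HE with intensity-resolution adjustment, and adaptive non-maximal suppression (ANMS) keeps only well-separated corners. Parameters are assumptions, and the quadratic ANMS loop is fine for a sketch but not optimized.

```python
# Sketch: histogram equalization + Harris response + ANMS.
import cv2
import numpy as np

def modified_harris(ir_image, n_keep=100):
    """ir_image: single-channel uint8 IR image."""
    enhanced = cv2.equalizeHist(ir_image)
    response = cv2.cornerHarris(np.float32(enhanced), 2, 3, 0.04)
    ys, xs = np.nonzero(response > 0.01 * response.max())
    pts = sorted(zip(response[ys, xs], ys, xs), reverse=True)  # strongest first
    # ANMS: each point's radius is the distance to the nearest stronger point.
    radii = []
    for i, (_, y, x) in enumerate(pts):
        d = min((np.hypot(y - py, x - px) for _, py, px in pts[:i]),
                default=np.inf)
        radii.append((d, y, x))
    radii.sort(reverse=True)              # keep the most isolated points
    return [(y, x) for _, y, x in radii[:n_keep]]
```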

  16. Fast iterative censoring CFAR algorithm for ship detection from SAR images

    Science.gov (United States)

    Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng

    2017-11-01

    Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently, where parallel detection is available and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be quickly estimated using an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
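
    The integral-image trick that makes the clutter estimation fast can be sketched as follows; a Gaussian two-parameter CFAR stands in here for the paper's G0 fit, and window sizes and the threshold multiplier are assumptions. Each background mean and variance costs O(1) regardless of window size.

```python
# Sketch: CFAR detection with O(1) window statistics via integral images.
import numpy as np

def integral_cfar(image, guard=8, bg=16, k=3.0):
    img = image.astype(float)
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)       # integral image
    ii2 = np.pad(img ** 2, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # of squares

    def box(s, y0, x0, y1, x1):   # sum of img[y0:y1, x0:x1] in O(1)
        return s[y1, x1] - s[y0, x1] - s[y1, x0] + s[y0, x0]

    pad = guard + bg
    n = (2 * pad) ** 2 - (2 * guard) ** 2     # background ring pixel count
    det = np.zeros(img.shape, dtype=bool)
    for y in range(pad, img.shape[0] - pad):
        for x in range(pad, img.shape[1] - pad):
            # Ring statistics: outer box minus guard box around the test cell.
            s = box(ii, y - pad, x - pad, y + pad, x + pad) \
                - box(ii, y - guard, x - guard, y + guard, x + guard)
            s2 = box(ii2, y - pad, x - pad, y + pad, x + pad) \
                - box(ii2, y - guard, x - guard, y + guard, x + guard)
            mean = s / n
            std = np.sqrt(max(s2 / n - mean ** 2, 0.0))
            det[y, x] = img[y, x] > mean + k * std
    return det
```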

  17. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, for tasks such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used in regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance under rotation and parameterization.

  18. Optical motion detection using image partitioning

    International Nuclear Information System (INIS)

    Hessel, K.R.; Stalker, K.T.; McCarthy, A.E.

    1976-08-01

    An optical system for surveillance or intrusion detection, based upon image partitioning, is proposed. The scene of interest is imaged onto a checkerboard pattern of transmissive and reflective areas, and the transmitted and reflected light components are measured by detectors. Changes in the scene disturb the light balance and can cause an alarm indication. Several system configurations are proposed. Measurements and computer simulations are used to determine the operating characteristics of the several configurations. Depth-of-focus problems at the patterned reflector are the primary concern. Noise considerations determine the theoretical limitation of system performance and are analyzed in some detail. Indications are that, under good scene radiance conditions, a change in the scene of approximately one part in 10^3 is detectable with a signal-to-noise ratio sufficient for a false alarm rate of one every few months.

  19. Multivariate image analysis of laser-induced photothermal imaging used for detection of caries tooth

    Science.gov (United States)

    El-Sherif, Ashraf F.; Abdel Aziz, Wessam M.; El-Sharkawy, Yasser H.

    2010-08-01

    Time-resolved photothermal imaging has been investigated to characterize teeth for the purpose of discriminating between normal and carious areas of the hard tissue using a thermal camera. Ultrasonic thermoelastic waves were generated in hard tissue by the absorption of fiber-coupled Q-switched Nd:YAG laser pulses operating at 1064 nm, in conjunction with a laser-induced photothermal technique used to detect the thermal radiation waves for diagnosis of the human tooth. The concepts behind the use of photothermal techniques for off-line detection of carious tooth features were presented by our group in earlier work. This paper illustrates the application of multivariate image analysis (MIA) techniques to detect the presence of tooth caries. MIA is used to rapidly detect the presence and quantity of common caries features as the teeth are scanned by high-resolution color (RGB) thermal cameras. Multivariate principal component analysis is used to decompose the acquired three-channel tooth images into a two-dimensional principal component (PC) space. Masking score point clusters in the score space and highlighting the corresponding pixels in the image space of the two dominant PCs enables isolation of caries defect pixels based on contrast and color information. The technique provides a qualitative result that can be used for early-stage caries detection. The proposed technique can potentially be used on-line or in real time to prescreen for the existence of caries through vision-based systems such as a real-time thermal camera. Experimental results on a large number of extracted teeth, as well as a thermal image panorama of the teeth of a human volunteer, are investigated and presented.

  20. Imaging inflammatory acne: lesion detection and tracking

    Science.gov (United States)

    Cula, Gabriela O.; Bargo, Paulo R.; Kollias, Nikiforos

    2010-02-01

    It is known that the effectiveness of acne treatment increases when lesions are detected earlier, before they can progress into mature wound-like lesions, which lead to scarring and discoloration. However, little is known about the evolution of acne from early signs until after the lesion heals. In this work we computationally characterize the evolution of inflammatory acne lesions, based on analyzing cross-polarized images that document acne-prone facial skin over time. Taking skin images over time, and following skin features in these images, present serious challenges due to changes in the appearance of the skin, difficulty in repositioning the subject, and involuntary movements such as breathing. A computational technique for automatic detection of lesions, which separates the background normal skin from the acne lesions by fitting Gaussian distributions to the intensity histograms, is presented. In order to track and quantify the evolution of lesions, in terms of the degree of progress or regress, we designed a study to capture facial skin images from an acne-prone young individual followed over the course of three time points. Based on the behavior of the lesions between two consecutive time points, the automatically detected lesions are classified into four categories: new lesions, resolved lesions (i.e., lesions that disappear completely), lesions that are progressing, and lesions that are regressing (i.e., lesions in the process of healing). The classification our method achieves correlates well with visual inspection by a trained human grader.
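
    The background/lesion separation step lends itself to a short sketch: fit a two-component Gaussian mixture to the intensity histogram and label the brighter component as lesion. The synthetic image and component count follow the abstract loosely; all names and values are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic intensity image: normal skin near 0.5, a brighter lesion patch.
skin = rng.normal(0.5, 0.04, (128, 128))
skin[40:72, 60:92] = rng.normal(0.8, 0.03, (32, 32))

# Fit two Gaussians to the intensity distribution, as in the abstract; the
# component with the larger mean is taken to be lesion tissue.
gmm = GaussianMixture(n_components=2, random_state=0).fit(skin.reshape(-1, 1))
lesion_comp = int(np.argmax(gmm.means_.ravel()))
labels = gmm.predict(skin.reshape(-1, 1)).reshape(skin.shape)
print("lesion pixels detected:", int((labels == lesion_comp).sum()))  # ~1024
```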

  1. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory]; Huang, Lianjie [Los Alamos National Laboratory]; Zhang, Zhigang [Los Alamos National Laboratory]

    2011-01-01

    Double-difference waveform inversion is a potential tool for quantitative monitoring of geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Because of the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. Regularization techniques can be utilized to address the issue of ill-posedness. The regularization parameter controls the smoothness of the inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal regularization parameter has to be selected. The resulting inversion is then a trade-off among regions with different smoothness or noise levels, so the images are over-regularized in some regions while under-regularized in others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve the inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, with a spatially-variant regularization scheme, the target regions are well reconstructed while noise is reduced in the other regions. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on a priori information without increasing the computational cost or the computer memory requirement.
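
    A toy illustration of the difference between a constant and a spatially-variant Tikhonov weight on a small linear inverse problem (not the authors' waveform-inversion code; the operator, weights, and region are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x_true = np.zeros(n)
x_true[40:60] = 1.0                          # a "reservoir change" target region

A = rng.normal(size=(80, n)) / np.sqrt(n)    # toy ill-posed forward operator
b = A @ x_true + 0.01 * rng.normal(size=80)

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + ||diag(lam) x||^2 in closed form."""
    L2 = np.diag((np.atleast_1d(lam) * np.ones(A.shape[1])) ** 2)
    return np.linalg.solve(A.T @ A + L2, A.T @ b)

x_const = tikhonov(A, b, 0.5)                # one global regularization weight

lam = np.full(n, 0.5)
lam[40:60] = 0.05                            # regularize the target region weakly
x_var = tikhonov(A, b, lam)

for name, x in [("constant", x_const), ("variant", x_var)]:
    err = np.linalg.norm(x[40:60] - x_true[40:60])
    print(f"{name} lambda, target-region error: {err:.3f}")
```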

  2. Novel Fingertip Image-Based Heart Rate Detection Methods for a Smartphone

    Directory of Open Access Journals (Sweden)

    Rifat Zaman

    2017-02-01

    Full Text Available We hypothesize that our fingertip image-based heart rate detection methods using a smartphone reliably detect the heart rhythm and rate of subjects. We propose fingertip curve-line-movement-based and fingertip image-intensity-based detection methods, both of which use the movement of successive fingertip images obtained from smartphone cameras. To investigate the performance of the proposed methods, their heart rhythm and rate estimates are compared to those of the conventional method, which is based on average image pixel intensity. Using a smartphone, we collected 120 s of pulsatile time-series data from each recruited subject. The results show that the proposed fingertip curve-line-movement-based method detects heart rate with a maximum deviation of 0.0832 Hz and 0.124 Hz using time- and frequency-domain estimation, respectively, compared to the conventional method. Moreover, the proposed fingertip image-intensity-based method detects heart rate with a maximum deviation of 0.125 Hz and 0.03 Hz using time- and frequency-domain estimation, respectively.
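
    For reference, the conventional average-intensity approach that the proposed methods are compared against can be sketched in a few lines: take the mean pixel intensity of successive frames and read the heart rate off the spectral peak. The frame rate and signal model below are assumptions, not the study's data.

```python
import numpy as np

fs = 30.0                                   # assumed camera frame rate (fps)
t = np.arange(0, 120, 1 / fs)               # 120 s recording, as in the study
hr_hz = 1.2                                 # ground-truth heart rate (72 bpm)

# Mean pixel intensity of successive frames: a pulsatile wave riding on slow
# drift and noise (a synthetic stand-in for real fingertip frames).
rng = np.random.default_rng(4)
intensity = (np.sin(2 * np.pi * hr_hz * t)
             + 0.5 * np.sin(2 * np.pi * 0.05 * t)
             + 0.2 * rng.normal(size=t.size))

# Frequency-domain estimate: the spectral peak inside a plausible heart band.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
band = (freqs > 0.7) & (freqs < 3.5)        # ~42-210 bpm search band
estimate = freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {estimate:.3f} Hz ({estimate * 60:.1f} bpm)")
```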

  3. Detection of ossicular chain abnormalities using CT imaging. Comparison of axial and virtual middle ear endoscopic imaging

    International Nuclear Information System (INIS)

    Sakata, Motomichi; Kamagata, Masaki; Harada, Kuniaki; Shirase, Ryuji; Oomoto, Hidechika; Himi, Tetsuo

    2000-01-01

    The purpose of this study was to evaluate the usefulness of axial and three-dimensional imaging (virtual endoscopy) with helical CT for the detection of ossicular chain abnormalities. In 15 patients who had traumatic ossicular dislocation, disruption, or congenital ossicular defect and anomaly, axial helical CT scanning of the temporal bone was performed with a GE HSA scanner. Axial and three-dimensional imaging was carried out in normal ears (15 ears) and abnormal ears (10 ears) for the detection of ossicular chain abnormalities. Diagnostic accuracy was evaluated by receiver operating characteristic (ROC) curve analysis using a continuous reporting scale. Furthermore, ROC testing was done to determine the sensitivity, specificity, and accuracy of the detection of ossicular chain abnormalities. Diagnostic accuracy in the detection of ossicular chain abnormalities with three-dimensional imaging (Az = 0.967, SD = 0.022) was not significantly better than that of axial imaging (Az = 0.930, SD = 0.046); however, the interobserver standard deviation was better for three-dimensional imaging. Three-dimensional imaging resulted in an increase in true positives and a decrease in false negatives, and also showed higher sensitivity and accuracy. In the evaluation of ossicular chain abnormalities, three-dimensional imaging (virtual endoscopy) is useful and provides additional information, and may have an important role in diagnostic procedures and/or preoperative evaluation in otology. (author)

  4. Detection of latent prints by Raman imaging

    Science.gov (United States)

    Lewis, Linda Anne [Andersonville, TN]; Connatser, Raynella Magdalene [Knoxville, TN]; Lewis, Samuel Arthur, Sr.

    2011-01-11

    The present invention relates to a method for detecting a print on a surface, the method comprising: (a) contacting the print with a Raman surface-enhancing agent to produce a Raman-enhanced print; and (b) detecting the Raman-enhanced print using a Raman spectroscopic method. The invention is particularly directed to the imaging of latent fingerprints.

  5. Software Analysis of Mining Images for Objects Detection

    Directory of Open Access Journals (Sweden)

    Jan Tomecek

    2013-11-01

    Full Text Available The contribution deals with the development of a new module of the robust FOTOMNG system for editing images from a video, or mining images from measurements, for subsequent improvement of the detection of required objects in the 2D image. The generated module allows creating a final high-quality picture by combining multiple images containing the searched objects. Input data can be combined according to parameters or based on reference frames. Correction of detected 2D objects is also part of this module. The solution is implemented into the FOTOMNG system, and the finished work has been tested on appropriate frames, which validated its core functionality and usability. Tests confirmed the function of each part of the module, its accuracy, and the implications of its integration.

  6. Acousto-Mechanical Imaging for Breast Cancer Detection

    National Research Council Canada - National Science Library

    Emelianov, Stanislav Y

    2002-01-01

    The underlying hypothesis of our study is that quantitative breast elasticity imaging is possible and provides unique information, which could increase the detection, characterization and monitoring...

  7. Acousto-Mechanical Imaging for Breast Cancer Detection

    National Research Council Canada - National Science Library

    Emelianov, Stanislav Y

    2003-01-01

    The underlying hypothesis of our study is that quantitative breast elasticity imaging is possible and provides unique information, which could increase the detection, characterization and monitoring...

  8. Early Detection of Diabetic Retinopathy in Fluorescent Angiography Retinal Images Using Image Processing Methods

    Directory of Open Access Journals (Sweden)

    Meysam Tavakoli

    2010-12-01

    Full Text Available Introduction: Diabetic retinopathy (DR) is the single largest cause of sight loss and blindness in the working-age population of Western countries; it is the most common cause of blindness in adults between 20 and 60 years of age. Early diagnosis of DR is critical for preventing vision loss, so early detection of microaneurysms (MAs) as the first signs of DR is important. This paper addresses the automatic detection of MAs in fluorescein angiography fundus images, which plays a key role in computer-assisted diagnosis of DR, a serious and frequent eye disease. Material and Methods: The algorithm can be divided into three main steps. The first step, pre-processing, performs background normalization and contrast enhancement of the image. The second step detects landmarks, i.e., all patterns possibly corresponding to vessels and the optic nerve head, using a local Radon transform. Then, MA candidates are extracted and, in the final step, automatically classified as real MAs or other objects. A database of 120 fluorescein angiography fundus images was used to train and test the algorithm, and the algorithm was compared to manually obtained gradings of those images. Results: Sensitivity of diagnosis for DR was 94%, with specificity of 75%, and sensitivity of precise microaneurysm localization was 92% at an average of 8 false positives per image. Discussion and Conclusion: The sensitivity and specificity of this algorithm make it one of the best methods in this field. Using the local Radon transform in this algorithm reduces noise sensitivity in microaneurysm detection in retinal image analysis.

  9. Change detection of medical images using dictionary learning techniques and PCA

    Science.gov (United States)

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-03-01

    Automatic change detection methods for identifying changes in serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and in brain imaging in particular, include many preprocessing steps and rely mostly on statistical analysis of MRI scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques have been used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. In this paper we present the Eigen-Block Change Detection algorithm (EigenBlockCD). It performs local registration and identifies changes between consecutive MR images of the brain. Blocks of pixels from the baseline scan are used to train local dictionaries, which are then used to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of the data. Choosing an appropriate distance measure significantly affects the performance of our algorithm, so we examine the differences between the L1 and L2 norms as two possible similarity measures in the EigenBlockCD, and show the advantages of the L2 norm over the L1 norm both theoretically and numerically. We also demonstrate the performance of the EigenBlockCD algorithm for detecting changes in MR images and compare our results with those in the recent literature. Experimental results with both simulated and real MRI scans show that the EigenBlockCD outperforms previous methods: it detects clinical changes while ignoring changes due to the patient's position and other acquisition artifacts.
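
    A rough sketch in the spirit of the EigenBlockCD idea (not the authors' implementation): learn a PCA-reduced dictionary from baseline blocks and score follow-up blocks by their L2 reconstruction residual. Block size, dimensionality, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
base = rng.normal(0.5, 0.05, (64, 64))            # baseline scan
follow = base + 0.005 * rng.normal(size=base.shape)
follow[30:38, 30:38] += 0.4                       # a new "lesion" at follow-up

def blocks(img, k=8):
    """Collect non-overlapping k x k pixel blocks as flat vectors."""
    h, w = img.shape
    return np.array([img[i:i + k, j:j + k].ravel()
                     for i in range(0, h, k) for j in range(0, w, k)])

# Learn a PCA-reduced local dictionary from the baseline blocks.
pca = PCA(n_components=10).fit(blocks(base))

# Change score: L2 residual of each follow-up block after projection onto
# the baseline subspace; large residuals indicate genuine change.
F = blocks(follow)
resid = np.linalg.norm(F - pca.inverse_transform(pca.transform(F)), axis=1)
print("most-changed block index:", int(np.argmax(resid)))
```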

  10. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    Science.gov (United States)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of Deep Learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. Algorithms based on deep convolutional networks were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and direct detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.

  11. Comic image understanding based on polygon detection

    Science.gov (United States)

    Li, Luyuan; Wang, Yongtao; Tang, Zhi; Liu, Dong

    2013-01-01

    Comic image understanding aims to automatically decompose scanned comic page images into storyboards and then identify their reading order, which is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic image understanding method based on polygon detection. First, we segment a comic page image into storyboards by finding the polygonal enclosing box of each storyboard. Then, each storyboard can be represented by a polygon, and the reading order is determined by analyzing the relative geometric relationship between each pair of polygons. The proposed method is tested on 2,000 comic images from ten printed comic series, and the experimental results demonstrate that it works well on different types of comic images.

  12. Automatic crop row detection from UAV images

    DEFF Research Database (Denmark)

    Midtiby, Henrik; Rasmussen, Jesper

    Images from Unmanned Aerial Vehicles can provide information about the weed distribution in fields. A direct way is to quantify the amount of vegetation present in different areas of the field. The limitation of this approach is that it includes both crops and weeds in the reported numbers. To get ... are considered weeds. We have used a sugar beet field as a case for evaluating the proposed crop detection method. The suggested image processing consists of: 1) locating vegetation regions in the image by thresholding the excess green image derived from the original image, 2) calculating the Hough transform of the segmented image, 3) determining the dominating crop row direction by analysing the output of the Hough transform, and 4) using the found crop row direction to locate crop rows.
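
    Steps 1-3 of the pipeline can be sketched as follows; the synthetic field, excess-green threshold, and row spacing are assumptions for illustration, not the sugar beet data.

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

rng = np.random.default_rng(6)
# Synthetic stand-in for a UAV image: random soil background with green rows.
H, W = 200, 200
img = rng.uniform(0.2, 0.4, (H, W, 3))
for col in range(20, W, 40):               # crop rows every 40 px (assumed)
    img[:, col:col + 4, 1] += 0.4          # boost the green channel on rows

# Step 1: excess-green index 2g - r - b, thresholded to segment vegetation.
r, g, b = img[..., 0], img[..., 1], img[..., 2]
veg = (2 * g - r - b) > 0.2                # hypothetical threshold

# Steps 2-3: Hough transform of the segmented image; the dominating angle
# among the strongest peaks is taken as the crop row direction.
hspace, angles, dists = hough_line(veg)
_, peak_angles, _ = hough_line_peaks(hspace, angles, dists)
print(f"dominating row direction: {np.rad2deg(np.median(peak_angles)):.1f} deg")
```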

  13. From Pixels to Region: A Salient Region Detection Algorithm for Location-Quantification Image

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available Image saliency detection has become increasingly important with the development of intelligent identification and machine vision technology. This process is essential for many image processing algorithms, such as image retrieval, image segmentation, image recognition, and adaptive image compression. We propose a salient region detection algorithm for full-resolution images. The algorithm analyzes the randomness and correlation of image pixels and uses a pixel-to-region saliency computation mechanism. It first obtains the points with the highest saliency probability using an improved smallest univalue segment assimilating nucleus (SUSAN) operator. It then reconstructs the entire salient region by taking these points as references and combining them with the spatial color distribution of the image, as well as regional and global contrasts. The results for subjective and objective image saliency detection show that the proposed algorithm exhibits outstanding performance in terms of indices such as precision and recall.

  14. Improved detection probability of low level light and infrared image fusion system

    Science.gov (United States)

    Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang

    2018-02-01

    Low-level-light (LLL) images contain rich information on environment details but are easily affected by the weather; in the case of smoke, rain, cloud, or fog, much target information is lost. An infrared image, which comes from the radiation produced by the object itself, can "actively" obtain target information in the scene. However, its contrast and resolution are poor, its ability to acquire target details is very limited, and the imaging mode does not conform to human visual habits. The fusion of LLL and infrared images can make up for the deficiencies of each sensor while exploiting the advantages of each. We first present the hardware design of the fusion circuit. Then, through recognition probability calculations for a target (one person) and the background (trees), we find that the tree detection probability of the LLL image is higher than that of the infrared image, while the person detection probability of the infrared image is clearly higher than that of the LLL image. The detection probability of the fused image for both the person and the trees is higher than that of either single detector. Therefore, image fusion can significantly increase recognition probability and improve detection efficiency.

  15. Detection of low-contrast images in film-grain noise.

    Science.gov (United States)

    Naderi, F; Sawchuk, A A

    1978-09-15

    When low-contrast photographic images are digitized with a very small aperture, extreme film-grain noise almost completely obliterates the image information, while using a large aperture to average out the noise destroys the fine details of the image. In these situations conventional statistical restoration techniques have little effect, and well-chosen heuristic algorithms have yielded better results. In this paper we analyze the noise-cheating algorithm of Zweig et al. [J. Opt. Soc. Am. 65, 1347 (1975)] and show that it can be justified by classical maximum-likelihood detection theory. A more general algorithm applicable to a broader class of images is then developed by considering the signal-dependent nature of film-grain noise. Finally, a Bayesian detection algorithm with improved performance is presented.

  16. Target Detection Using an AOTF Hyperspectral Imager

    Science.gov (United States)

    Cheng, L-J.; Mahoney, J.; Reyes, F.; Suiter, H.

    1994-01-01

    This paper reports results of a recent field experiment using a prototype system to evaluate the acousto-optic tunable filter polarimetric hyperspectral imaging technology for target detection applications.

  17. Shakeout: A New Approach to Regularized Deep Neural Network Training.

    Science.gov (United States)

    Kang, Guoliang; Li, Jun; Tao, Dacheng

    2018-05-01

    Recent years have witnessed the success of deep neural networks in dealing with many practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization in model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a notable statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1, and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10, and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings, and that it induces a grouping effect on the input units of a layer. Since the weights reflect the importance of connections, Shakeout is superior to Dropout for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of deep architectures.
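
    The abstract's description of the operation suggests a sketch like the following; the enhance/reverse factors below are illustrative guesses based only on that description, not the exact formulation from the paper.

```python
import numpy as np

def shakeout_forward(x, W, tau=0.5, c=0.1, rng=None):
    """Illustrative Shakeout-style forward pass based on the abstract's
    description (not the paper's exact formulation): each input unit's
    contribution to the next layer is randomly either enhanced or reversed,
    rather than kept/zeroed as in Dropout. tau and c are made-up constants."""
    rng = rng or np.random.default_rng()
    keep = rng.random(x.shape) < tau
    x_mod = np.where(keep, x / tau, -c * x)   # enhance kept units, reverse others
    return x_mod @ W

rng = np.random.default_rng(7)
x = rng.normal(size=(4, 16))                  # a mini-batch of activations
W = 0.1 * rng.normal(size=(16, 8))            # weights to the next layer
print(shakeout_forward(x, W, rng=rng).shape)  # (4, 8)
```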

  18. Total variation regularization for a backward time-fractional diffusion problem

    International Nuclear Information System (INIS)

    Wang, Liyan; Liu, Jijun

    2013-01-01

    Consider a two-dimensional backward problem for a time-fractional diffusion process, which can be viewed as image de-blurring where the blurring process is assumed to be slow diffusion. In order to avoid the over-smoothing effect on object images with edges, and to construct a fast reconstruction scheme, a total variation regularizing term and the data residual error in the frequency domain are coupled to construct the cost functional. The well-posedness of this optimization problem is studied. The minimizer is sought approximately using an iteration process for a series of optimization problems with the Bregman distance as a penalty term. This iterative reconstruction scheme is essentially a new regularizing scheme with the coupling parameter in the cost functional and the iteration stopping time as two regularizing parameters. We give a choice strategy for the regularizing parameters in terms of the noise level of the measurement data, which yields an optimal error estimate on the iterative solution. The series of optimization problems is solved by alternating iteration with an explicit exact solution, and therefore the amount of computation is much reduced. Numerical implementations are given to support our theoretical analysis of the convergence rate and to show significant reconstruction improvements. (paper)

  19. The use of image morphing to improve the detection of tumors in emission imaging

    International Nuclear Information System (INIS)

    Dykstra, C.; Greer, K.; Jaszczak, R.; Celler, A.

    1999-01-01

    Two of the limitations on the utility of SPECT and planar scintigraphy for the non-invasive detection of carcinoma are the small sizes of many tumors and the possible low contrast between tumor uptake and background. This is particularly true for breast imaging. Use of some form of image processing can improve the visibility of tumors which are at the limit of hardware resolution. Smoothing, by some form of image averaging, either during or post-reconstruction, is widely used to reduce noise and thereby improve the detectability of regions of elevated activity. However, smoothing degrades resolution and, by averaging together closely spaced noise, may make noise look like a valid region of increased uptake. Image morphing by erosion and dilation does not average together image values; it instead selectively removes small features and irregularities from an image without changing the larger features. Application of morphing to emission images has shown that it does not, therefore, degrade resolution and does not always degrade contrast. For these reasons it may be a better method of image processing for noise removal in some images. In this paper the authors present a comparison of the effects of smoothing and morphing using breast and liver studies.
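
    The contrast between averaging-based smoothing and morphing by erosion/dilation can be seen in a few lines of code; the Poisson noise model and structuring-element size below are assumptions, not the study's acquisition parameters.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(8)
img = rng.poisson(20, (128, 128)).astype(float)   # noisy emission-like image
img[60:66, 60:66] += 30                           # a small hot "tumor"

# Smoothing averages values together: noise drops, but the hot spot blurs.
smoothed = ndimage.gaussian_filter(img, sigma=2)

# Morphing by erosion then dilation (grey opening) removes small, isolated
# irregularities without averaging, so larger features tend to keep contrast.
morphed = ndimage.grey_dilation(ndimage.grey_erosion(img, size=(3, 3)),
                                size=(3, 3))

for name, out in [("smoothed", smoothed), ("morphed", morphed)]:
    print(name, "tumor-to-background contrast:",
          round(out[60:66, 60:66].mean() / out.mean(), 2))
```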

  20. Spine labeling in MRI via regularized distribution matching.

    Science.gov (United States)

    Hojjat, Seyed-Parsa; Ayed, Ismail; Garvin, Gregory J; Punithakumar, Kumaradevan

    2017-11-01

    This study investigates an efficient (nearly real-time) two-stage spine labeling algorithm that removes the need for external training while being applicable to different types of MRI data and acquisition protocols. Based solely on the image being labeled (i.e., we do not use training data), the first stage aims at detecting potential vertebra candidates following the optimization of a functional containing two terms: (i) a distribution-matching term that encodes contextual information about the vertebrae via a density model learned from very simple user input, which amounts to a point (mouse click) on a predefined vertebra; and (ii) a regularization constraint, which penalizes isolated candidates in the solution. The second stage removes false positives and identifies all vertebrae and discs by optimizing a geometric constraint, which embeds generic anatomical information on the interconnections between neighboring structures. Because it is based on generic knowledge, our geometric constraint does not require external training. We performed quantitative evaluations of the algorithm over a data set of 90 mid-sagittal MRI images of the lumbar spine acquired from 45 different subjects. To assess the flexibility of the algorithm, we used both T1- and T2-weighted images for each subject. A total of 990 structures were automatically detected/labeled and compared to ground-truth annotations by an expert. On the T2-weighted data, we obtained an accuracy of 91.6% for the vertebrae and 89.2% for the discs. On the T1-weighted data, we obtained an accuracy of 90.7% for the vertebrae and 88.1% for the discs. Our algorithm removes the need for external training while being applicable to different types of MRI data and acquisition protocols. Based on the current testing data, a subject-specific density model, and generic anatomical information, our method can achieve competitive performance when applied to T1- and T2-weighted MRI images.

  1. End-to-End Airplane Detection Using Transfer Learning in Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Zhong Chen

    2018-01-01

    Full Text Available Airplane detection in remote sensing images remains a challenging problem due to the complexity of backgrounds. In recent years, with the development of deep learning, object detection has also achieved great breakthroughs. For object detection tasks in natural images, such as the PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning) VOC (Visual Object Classes) Challenge, the major trend of current development is to use a large amount of labeled classification data to pre-train a deep neural network as a base network, and then use a small amount of annotated detection data to fine-tune the network for detection. In this paper, we use object detection technology based on deep learning for airplane detection in remote sensing images. In addition to using some characteristics of remote sensing images, some new data augmentation techniques have been proposed. We also use transfer learning and adopt a single deep convolutional neural network and limited training samples to implement end-to-end trainable airplane detection. Classification and localization are no longer divided into multistage tasks; end-to-end detection combines them for joint optimization, which ensures an optimal solution for the final stage. In our experiment, we use remote sensing images of airports collected from Google Earth. The experimental results show that the proposed algorithm is highly accurate and meaningful for remote sensing object detection.
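
    A minimal sketch of the pre-train-then-fine-tune recipe described above, using a stock torchvision backbone (an assumption requiring torchvision >= 0.13; the paper's actual network, head, and data pipeline are not specified here):

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained base network (standard torchvision usage, not the authors'
# exact architecture).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                  # freeze the pre-trained base

# Replace the classification head for the small annotated airplane dataset.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # airplane / background

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)              # stand-in for image patches
y = torch.tensor([1, 0, 1, 0])               # stand-in labels
loss = criterion(backbone(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```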

  2. Improving image quality in Electrical Impedance Tomography (EIT) using Projection Error Propagation-based Regularization (PEPR) technique: A simulation study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-03-01

    Full Text Available A Projection Error Propagation-based Regularization (PEPR) method is proposed to improve the quality of reconstructed images in Electrical Impedance Tomography (EIT). A projection error is produced by the misfit of the calculated and measured data in the reconstruction process. The variation of the projection error is integrated with the response matrix at each iteration, and the reconstruction is carried out in EIDORS. The PEPR method is studied with simulated boundary data for different inhomogeneity geometries. Simulated results demonstrate that the PEPR technique improves image reconstruction precision in EIDORS and hence can be successfully implemented to increase the reconstruction accuracy in EIT. doi:10.5617/jeb.158. J Electr Bioimp, vol. 2, pp. 2-12, 2011.

  3. HDR IMAGING FOR FEATURE DETECTION ON DETAILED ARCHITECTURAL SCENES

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2015-02-01

    Full Text Available 3D reconstruction relies on accurate detection, extraction, description, and matching of image features. This is even more true for complex architectural scenes, which require high-quality 3D models without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture, and in this way increase the amount of detail contained in the image. Experimental results of this study support this assumption, examining state-of-the-art feature detectors applied both to standard dynamic range and HDR images.
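
    For context, a standard Debevec-style HDR merge of bracketed SDR exposures (a common OpenCV pipeline, not necessarily the processing chain used in the study; images and exposure times below are synthetic stand-ins):

```python
import cv2
import numpy as np

rng = np.random.default_rng(12)
base = rng.uniform(0.05, 1.0, (120, 160, 3)).astype(np.float32)  # scene radiance
times = np.array([1 / 30, 1 / 8, 1 / 2], dtype=np.float32)       # exposure times
# Bracketed SDR exposures of the same scene (synthetic, with clipping).
exposures = [np.clip(base * t * 600, 0, 255).astype(np.uint8) for t in times]

calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(exposures, times)    # recover camera response
merge = cv2.createMergeDebevec()
hdr = merge.process(exposures, times, response)   # float32 radiance map
print(hdr.dtype, hdr.shape)                       # float32 (120, 160, 3)
```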

  4. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim

    2017-01-01

    This article addresses improvements in the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which, by construction, force the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high-dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noise while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under a constant asymptotic false alarm rate. The provided simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.

  5. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla

    2017-10-25

    This article addresses improvements in the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which, by construction, force the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable for high-dimensional problems with a limited number of secondary data samples than traditional sample covariance estimates. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noise while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under a constant asymptotic false alarm rate. The provided simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.

  6. Edge Detection on Images of Pseudoimpedance Section Supported by Context and Adaptive Transformation Model Images

    Directory of Open Access Journals (Sweden)

    Kawalec-Latała Ewa

    2014-03-01

    Full Text Available Most underground hydrocarbon storage facilities are located in depleted natural gas reservoirs. Seismic surveying is the most economical source of detailed subsurface information, and the inversion of a seismic section into a pseudoacoustic impedance section makes it possible to extract detailed subsurface information. The seismic wavelet parameters and noise influence the resolution: low signal parameters, especially a long signal duration, and the presence of noise decrease the pseudoimpedance resolution. Approximating the distribution of acoustic pseudoimpedance from measured or modelled seismic data leads to visualisations and images useful for identifying stratum homogeneity. In this paper, minimum entropy deconvolution is applied before inversion to improve the resolution of the geologic section image. The author proposes context and adaptive transformations of the images, together with edge detection methods, as a way to increase the effectiveness of correct interpretation of the simulated images. Edge detection algorithms using the Sobel, Prewitt, Roberts, and Canny operators, as well as the Laplacian of Gaussian method, are emphasised. Wiener filtering of the transformed images improves the interpretation of the rock section structure by mapping the pseudoimpedance matrix onto the proper acoustic pseudoimpedance values corresponding to a selected geologic stratum. The goal of the study is to develop applications of image transformation tools for inhomogeneity detection in salt deposits.
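
    The listed edge operators are all available in standard image-processing libraries; a minimal comparison on a synthetic two-stratum "section" (an assumption, not the paper's data) might look like this:

```python
import numpy as np
from scipy import ndimage
from skimage import filters, feature

# Synthetic stand-in for a pseudoimpedance section: two "strata" with a step
# in acoustic pseudoimpedance at row 64, plus noise.
rng = np.random.default_rng(9)
section = np.ones((128, 128))
section[64:, :] = 2.0
section += 0.05 * rng.normal(size=section.shape)

edges = {
    "sobel":   filters.sobel(section),
    "prewitt": filters.prewitt(section),
    "roberts": filters.roberts(section),
    "log":     ndimage.gaussian_laplace(section, sigma=2),  # Laplacian of Gaussian
    "canny":   feature.canny(section, sigma=2),
}
for name, e in edges.items():
    # Each operator should respond most strongly near the stratum boundary.
    print(name, "strongest-response row:", int(np.argmax(np.abs(e).sum(axis=1))))
```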

  7. Virus Particle Detection by Convolutional Neural Network in Transmission Electron Microscopy Images.

    Science.gov (United States)

    Ito, Eisuke; Sato, Takaaki; Sano, Daisuke; Utagawa, Etsuko; Kato, Tsuyoshi

    2018-06-01

    A new computational method for the detection of virus particles in transmission electron microscopy (TEM) images is presented. Our approach uses a convolutional neural network that transforms a TEM image into a probabilistic map indicating where virus particles exist in the image. The proposed approach automatically and simultaneously learns both the discriminative features and the classifier for virus particle detection by machine learning, in contrast to existing methods that are based on handcrafted features, which yield many false positives and require several postprocessing steps. The detection performance of the proposed method was assessed against a dataset of TEM images containing feline calicivirus particles and compared with several existing detection methods, and the state-of-the-art performance of the developed method for detecting virus particles was demonstrated. Since our method is based on supervised learning, which requires both input images and their corresponding annotations, it is primarily suited to the detection of already-known viruses. However, the method is highly flexible, and the convolutional networks can adapt themselves to any virus particles by learning automatically from an annotated dataset.

  8. Automatic Microaneurysm Detection and Characterization Through Digital Color Fundus Images

    Energy Technology Data Exchange (ETDEWEB)

    Martins, Charles; Veras, Rodrigo; Ramalho, Geraldo; Medeiros, Fatima; Ushizima, Daniela

    2008-08-29

    Ocular fundus images can provide information about retinal, ophthalmic, and even systemic diseases such as diabetes. Microaneurysms (MAs) are the earliest sign of Diabetic Retinopathy, a frequently observed complication in both type 1 and type 2 diabetes. Robust detection of MAs in digital color fundus images is critical in the development of automated screening systems for this kind of disease. Automatic grading of these images is being considered by health boards so that the human grading task is reduced. In this paper we describe the segmentation and feature extraction methods used for candidate MA detection. We show that the candidate MAs detected with this methodology were successfully classified by an MLP neural network (correct classification of 84%).

  9. Feature Detection of Curve Traffic Sign Image on The Bandung - Jakarta Highway

    Science.gov (United States)

    Naseer, M.; Supriadi, I.; Supangkat, S. H.

    2018-03-01

    Unsealed roadsides and problems with the road surface are common causes of road crashes, particularly when combined with curves. Curve traffic signs are an important component for giving early warning to drivers, especially in high-speed traffic such as on a highway. Traffic sign detection has become a very interesting research topic, and this paper discusses the detection of curve traffic signs. Two types of curve signs are considered, namely curves turning to the left and curves turning to the right, and all data samples used are curves recorded from signs on the Bandung - Jakarta Highway. Feature detection of the curve signs uses the Speeded-Up Robust Features (SURF) method, where the detected scene image is 800x450 pixels. From 45 curve-turn-to-the-right images, the system detects the features correctly in 35 images, a success rate of 77.78%, while from the 45 curve-turn-to-the-left images, the system detects the features correctly in 34 images, a success rate of 75.56%; the average detection accuracy is therefore 76.67%. The average time for the detection process is 0.411 seconds.
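
    Since SURF is patented and often absent from stock OpenCV builds, the sketch below substitutes ORB for the same detect/describe/match pipeline; images, sizes, and parameters are stand-ins, not the Bandung - Jakarta data.

```python
import cv2
import numpy as np

rng = np.random.default_rng(11)
template = rng.uniform(0, 255, (200, 200)).astype(np.uint8)   # sign template
scene = rng.uniform(0, 255, (450, 800)).astype(np.uint8)      # 800x450 scene
scene[125:325, 300:500] = template                            # embed the sign

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(scene, None)

# Brute-force Hamming matching with cross-checking, as is usual for ORB.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance:.0f}")
```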

  10. Reconstruction of signal in plastic scintillator of PET using Tikhonov regularization.

    Science.gov (United States)

    Raczynski, Lech

    2015-08-01

    A new concept for a Time-of-Flight Positron Emission Tomography (TOF-PET) detection system, which allows single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The Jagiellonian PET (J-PET) detector improves the TOF resolution through the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovering the waveform of the signal, based on ideas from the Tikhonov regularization method, is presented. From Bayesian theory, the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial for introducing and proving the formula used to calculate the signal recovery error. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long plastic scintillator strip. It is shown that using the recovered waveform of the signals, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction from 1.05 cm to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal; the spatial resolution calculated under these conditions is equal to 0.93 cm.
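
    The closed-form Tikhonov step at the heart of such a recovery can be sketched on a toy pulse; the sampling operator, smoothness prior, and lambda below are assumptions, not the J-PET front-end model.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 200
t = np.linspace(0, 20, n)                        # time axis (illustrative)
signal = np.exp(-0.5 * ((t - 8) / 1.5) ** 2)     # stand-in scintillator pulse

# Only a few samples are registered; model this as a sparse sampling
# operator A (a hypothetical stand-in for the four-threshold front-end).
idx = np.sort(rng.choice(n, 8, replace=False))
A = np.zeros((8, n))
A[np.arange(8), idx] = 1.0
b = A @ signal + 0.01 * rng.normal(size=8)

# Tikhonov-regularized recovery with a first-difference smoothness prior:
#   min ||Ax - b||^2 + lam ||Dx||^2, solved in closed form.
D = (np.eye(n) - np.eye(n, k=1))[:-1]
lam = 0.5
x_rec = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)
print("relative recovery error:",
      round(np.linalg.norm(x_rec - signal) / np.linalg.norm(signal), 3))
```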

  11. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    International Nuclear Information System (INIS)

    Sidky, Emil Y.; Pan, Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-01-01

    Purpose: The authors develop a practical, iterative algorithm for image-reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) Reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.
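
    For concreteness, one common way to write the total p-variation consistent with the abstract, reducing to TV at p = 1 and to a roughness measure at p = 2; the exact discretization used by the authors is an assumption not given here:

```latex
% One common definition of the total p-variation of a discrete image u
% (an assumption; the authors' exact discretization is not specified):
\mathrm{TpV}(u) = \sum_{j} \bigl\lVert (\nabla u)_j \bigr\rVert_2^{\,p},
\qquad
\mathrm{TpV}\big|_{p=1} = \mathrm{TV}(u),
\qquad
\mathrm{TpV}\big|_{p=2} = \text{image roughness}.
```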

  12. Knot detection in X-ray images of wood planks using dictionary learning

    DEFF Research Database (Denmark)

    Hansson, Nils Mattias; Enescu, Alexandru; Brandt, Sami Sebastian

    2015-01-01

    This paper considers a novel application of x-ray imaging of planks, for the purpose of detecting knots in high-quality furniture wood. X-ray imaging allows the detection of knots that are invisible from the surface to conventional cameras. Our approach is based on texture analysis, or more specifically, discriminative dictionary learning. Experiments show that knot detection and segmentation can be accurately performed by our approach. This is a promising result that can be directly applied in industrial processing of furniture wood.

  13. Stamp Detection in Color Document Images

    DEFF Research Database (Denmark)

    Micenkova, Barbora; van Beusekom, Joost

    2011-01-01

    ... moreover, it can be imprinted with variable quality and rotation. Previous methods were restricted to the detection of stamps of particular shapes or colors. The method presented in the paper includes segmentation of the image by color clustering and subsequent classification of candidate solutions by geometrical and color-related features. The approach allows for differentiation of stamps from other color objects in the document, such as logos or texts. For the purpose of evaluation, a data set of 400 document images has been collected, annotated, and made public. With the proposed method, recall of 83...

  14. Time integration and statistical regulation applied to mobile objects detection in a sequence of images

    International Nuclear Information System (INIS)

    Letang, Jean-Michel

    1993-01-01

    This PhD thesis deals with the detection of moving objects in monocular image sequences. The first section presents the problems inherent in motion analysis in real applications, and we propose a method robust to the perturbations frequently encountered during acquisition of outdoor scenes. Three main directions of investigation emerge, all of them pointing out the importance of the temporal axis, a specific dimension for motion analysis. In the first part, the image sequence is considered as a set of temporal signals, and a temporal multi-scale decomposition enables the characterization of the various dynamical behaviors of the objects present in the scene at a given instant. A second module integrates motion information: this elementary trajectography of moving objects provides a temporal prediction map giving a confidence level for the presence of motion. Interactions between both sets of data are expressed within a statistical regularization: Markov random field models supply a formal framework for conveying a priori knowledge of the primitives to be evaluated. A calibration method with qualitative boxes is presented to estimate the model parameters. Our approach requires only simple computations and leads to a rather fast algorithm, which we evaluate in the last section over various typical sequences. (author)

  15. Trainable Cataloging for Digital Image Libraries with Applications to Volcano Detection

    Science.gov (United States)

    Burl, M. C.; Fayyad, U. M.; Perona, P.; Smyth, P.

    1995-01-01

    Users of digital image libraries are often not interested in image data per se but in derived products such as catalogs of objects of interest. Converting an image database into a usable catalog is typically carried out manually at present. For many larger image databases the purely manual approach is completely impractical. In this paper we describe the development of a trainable cataloging system: the user indicates the location of the objects of interest for a number of training images and the system learns to detect and catalog these objects in the rest of the database. In particular we describe the application of this system to the cataloging of small volcanoes in radar images of Venus. The volcano problem is of interest because of the scale (30,000 images, order of 1 million detectable volcanoes), technical difficulty (the variability of the volcanoes in appearance) and the scientific importance of the problem. The problem of uncertain or subjective ground truth is of fundamental importance in cataloging problems of this nature and is discussed in some detail. Experimental results are presented which quantify and compare the detection performance of the system relative to human detection performance. The paper concludes by discussing the limitations of the proposed system and the lessons learned of general relevance to the development of digital image libraries.

  16. Observer detection of image degradation caused by irreversible data compression processes

    Science.gov (United States)

    Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David

    1991-05-01

    Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error. A paired-film observer study was designed to test whether induced errors were detected. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions for which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm for compression ratios of 10:1 (1.2 bits/pixel) or higher.

  17. Performance Analysis of Ship Wake Detection on Sentinel-1 SAR Images

    Directory of Open Access Journals (Sweden)

    Maria Daniela Graziano

    2017-10-01

    Full Text Available A novel technique for ship wake detection has recently been proposed and applied to X-band Synthetic Aperture Radar images provided by COSMO/SkyMed and TerraSAR-X. The approach shows that the vast majority of wake features are correctly detected and validated in critical situations. In this paper, the algorithm was applied to 28 wakes imaged by the Sentinel-1 mission with different polarizations and incidence angles, with the aim of testing the method's robustness with respect to radar frequency and resolution. The detection process was modified accordingly. The results show that the features were correctly classified in 78.5% of cases, whereas false confirmations occur mainly on Kelvin cusps. Finally, the results were compared with the algorithm's performance on X-band images, showing that no significant difference arises: the total false confirmation rate was 15.8% on X-band images and 18.5% on C-band images. Moreover, since the main criticality again concerns the false confirmation of Kelvin cusps, the same empirical criterion suggested for the X-band SAR images yielded a negligible 1.5% false detection rate.

  18. FLAIR MR imaging in the detection of subarachnoid hemorrhage: comparison with CT and T1-weighted MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Min, Soo Hyun; Kim, Soo Youn; Lee, Ghi Jai; Shim, Jae Chan; Oh, Tae Kyung; Kim, Ho Kyun [College of Medicine, Inje University, Seoul (Korea, Republic of)]

    2000-03-01

    To compare the findings of fluid-attenuated inversion recovery (FLAIR) MR imaging in the detection of subarachnoid hemorrhage (SAH) with those of precontrast CT and T1-weighted MR imaging. In 13 patients (14 cases) with SAH, FLAIR MR images were retrospectively analyzed and compared with CT (10 patients, 11 cases) and T1-weighted MR images (9 cases). SAH was confirmed on the basis of high density along the subarachnoid space, as seen on precontrast CT, or lumbar puncture. MR imaging was performed on a 1.0T unit. FLAIR MR and CT images were obtained during the acute stage (less than 3 days after ictus) in 10 and 9 cases, respectively, during the subacute stage (4-14 days after ictus) in two cases and one case, respectively, and during the chronic stage (more than 15 days after ictus) in two cases and one case, respectively. CT was performed before FLAIR MR imaging, and the interval between CT and FLAIR ranged from 24 hours (6 cases) to 2-3 days (2 cases) or 4-7 days (3 cases). In each study, the conspicuity of visualization of SAH was graded as excellent, good, fair, or negative at five locations (sylvian fissure, cortical sulci, anterior basal cistern, posterior basal cistern, and perimesencephalic cistern). In all cases, subarachnoid hemorrhages were demonstrated as high-signal-intensity areas on FLAIR images. The detection rates for SAH on CT and T1-weighted MR images were 100% (11/11) and 89% (8/9), respectively. FLAIR was superior to T1-weighted imaging in the detection of SAH at all sites except the anterior basal cistern (p < 0.05), and superior to CT in the detection of SAH at the cortical sulci (p < 0.05). On FLAIR MR images, subarachnoid hemorrhages at all stages are demonstrated as high-signal-intensity areas; the FLAIR MR sequence is thus considered useful in the detection of SAH. In particular, FLAIR is more sensitive than CT for the detection of SAH in the cortical sulci. (author)

  19. Automated detection of acute haemorrhagic stroke in non-contrasted CT images

    International Nuclear Information System (INIS)

    Meetz, K.; Buelow, T.

    2007-01-01

    Efficient treatment of stroke patients implies a profound differential diagnosis that includes the detection of acute haematoma. The proposed approach provides automated detection of acute haematoma, assisting the non-stroke expert in interpreting non-contrasted CT images. It consists of two steps: first, haematoma candidates are detected by applying a multilevel region-growing approach based on a typical grey-value characteristic; second, true haematomas are differentiated from partial volume artefacts, relying on spatial features derived from distance-based histograms. This approach achieves a specificity of 77% and a sensitivity of 89.7% in detecting acute haematoma when applied to a set of 25 non-contrasted CT images. (orig.)

  20. Image change detection systems, methods, and articles of manufacture

    Science.gov (United States)

    Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.

    2010-01-05

    Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into the memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, and switching the display between the source image and the target image on the display device to enable identification of differences between the source image and the target image.

  1. Detecting pits in tart cherries by hyperspectral transmission imaging

    Science.gov (United States)

    Qin, Jianwei; Lu, Renfu

    2004-11-01

    The presence of pits in processed cherry products causes safety concerns for consumers and imposes potential liability for the food industry. The objective of this research was to investigate a hyperspectral transmission imaging technique for detecting the pit in tart cherries. A hyperspectral imaging system was used to acquire transmission images from individual cherry fruit for four orientations before and after pits were removed over the spectral region between 450 nm and 1,000 nm. Cherries of three size groups (small, intermediate, and large), each with two color classes (light red and dark red) were used for determining the effect of fruit orientation, size, and color on the pit detection accuracy. Additional cherries were studied for the effect of defect (i.e., bruises) on the pit detection. Computer algorithms were developed using the neural network (NN) method to classify the cherries with and without the pit. Two types of data inputs, i.e., single spectra and selected regions of interest (ROIs), were compared. The spectral region between 690 nm and 850 nm was most appropriate for cherry pit detection. The NN with inputs of ROIs achieved higher pit detection rates ranging from 90.6% to 100%, with the average correct rate of 98.4%. Fruit orientation and color had a small effect (less than 1%) on pit detection. Fruit size and defect affected pit detection and their effect could be minimized by training the NN with properly selected cherry samples.

  2. Detection limits of intraoperative near infrared imaging for tumor resection.

    Science.gov (United States)

    Thurber, Greg M; Figueiredo, Jose-Luiz; Weissleder, Ralph

    2010-12-01

    The application of fluorescent molecular imaging to surgical oncology is a developing field with the potential to reduce morbidity and mortality. However, the detection thresholds and other requirements for successful intervention remain poorly understood. Here we modeled and experimentally validated the depth and size of detection of tumor deposits, trade-offs in coverage and resolution of areas of interest, and the required pharmacokinetics of probes based on differing levels of tumor target presentation. Three orthotopic tumor models were imaged by widefield epifluorescence and confocal microscopes, and the experimental results were compared with pharmacokinetic models and light scattering simulations to determine detection thresholds. Widefield epifluorescence imaging can provide sufficient contrast to visualize tumor margins and detect tumor deposits 3-5 mm deep based on labeled monoclonal antibodies at low objective magnification. At higher magnification, surface tumor deposits at cellular resolution are detectable at tumor-to-background ratios (TBRs) achieved with highly expressed antigens. A widefield illumination system with the capability for macroscopic surveying and microscopic imaging provides the greatest utility for varying surgical goals. These results have implications for system and agent designs, which ultimately should aid complete resection in most surgical beds and provide real-time feedback to obtain clean margins. © 2010 Wiley-Liss, Inc.

  3. Supervised detection of exoplanets in high-contrast imaging sequences

    Science.gov (United States)

    Gomez Gonzalez, C. A.; Absil, O.; Van Droogenbroeck, M.

    2018-06-01

    Context. Post-processing algorithms play a key role in pushing the detection limits of high-contrast imaging (HCI) instruments. State-of-the-art image processing approaches for HCI enable the production of science-ready images relying on unsupervised learning techniques, such as low-rank approximations, for generating a model point spread function (PSF) and subtracting the residual starlight and speckle noise. Aims: In order to maximize the detection rate of HCI instruments and survey campaigns, advanced algorithms with higher sensitivities to faint companions are needed, especially for the speckle-dominated innermost region of the images. Methods: We propose a reformulation of the exoplanet detection task (for ADI sequences) that builds on well-established machine learning techniques to take HCI post-processing from an unsupervised to a supervised learning context. In this new framework, we present algorithmic solutions using two different discriminative models: SODIRF (random forests) and SODINN (neural networks). We test these algorithms on real ADI datasets from VLT/NACO and VLT/SPHERE HCI instruments. We then assess their performances by injecting fake companions and using receiver operating characteristic analysis. This is done in comparison with state-of-the-art ADI algorithms, such as ADI principal component analysis (ADI-PCA). Results: This study shows the improved sensitivity versus specificity trade-off of the proposed supervised detection approach. At the diffraction limit, SODINN improves the true positive rate by a factor ranging from 2 to 10 (depending on the dataset and angular separation) with respect to ADI-PCA when working at the same false-positive level. Conclusions: The proposed supervised detection framework outperforms state-of-the-art techniques in the task of discriminating planet signal from speckles. In addition, it offers the possibility of re-processing existing HCI databases to maximize their scientific return and potentially improve
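
    As a loose illustration of the supervised reformulation, the sketch below trains a random forest (the model family behind SODIRF) to separate planet-bearing patches from speckle-only ones. The synthetic patch generator, patch size, and signal amplitude are all assumptions of ours; the authors build their labeled samples from processed ADI residual cubes with injected fake companions.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # Hypothetical training data: flattened 11x11 patches; label 1 means a
        # faint companion (a Gaussian PSF bump) was injected into the noise.
        yy, xx = np.mgrid[-5:6, -5:6]
        psf = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2)).ravel()
        X = rng.normal(size=(4000, psf.size))
        y = np.zeros(4000, dtype=int)
        y[:2000] = 1
        X[:2000] += 0.8 * psf                      # injected companions
        order = rng.permutation(4000)              # shuffle before splitting
        X, y = X[order], y[order]

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[:3000], y[:3000])
        scores = clf.predict_proba(X[3000:])[:, 1]  # per-patch detection scores
        print("held-out accuracy:", clf.score(X[3000:], y[3000:]))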

  4. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  5. Tensor Fukunaga-Koontz transform for small target detection in infrared images

    Science.gov (United States)

    Liu, Ruiming; Wang, Jingzhuo; Yang, Huizhen; Gong, Chenglong; Zhou, Yuanshen; Liu, Lipeng; Zhang, Zhen; Shen, Shuli

    2016-09-01

    Infrared small-target detection plays a crucial role in warning and tracking systems. Some novel methods based on pattern recognition technology have attracted much attention from researchers. However, those classic methods must reshape images into high-dimensional vectors. Moreover, vectorizing breaks the natural structure and correlations in the image data. Image representation based on tensors treats images as matrices and can retain the natural structure and correlation information, so tensor algorithms have better classification performance than vector algorithms. The Fukunaga-Koontz transform is a classification algorithm, but it is a vector-based method and shares the disadvantages of all vector algorithms. In this paper, we first extended the Fukunaga-Koontz transform into its tensor version, the tensor Fukunaga-Koontz transform. Then we designed a method based on the tensor Fukunaga-Koontz transform for detecting targets and used it to detect small targets in infrared images. The experimental results, compared in terms of signal-to-clutter ratio, signal-to-clutter gain, and background suppression factor, validate the advantage of target detection based on the tensor Fukunaga-Koontz transform over that based on the Fukunaga-Koontz transform.
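
    For reference, the classical (vector) Fukunaga-Koontz transform that the paper generalizes fits in a few lines: whiten with respect to the sum of the two class autocorrelation matrices, then eigendecompose one whitened class matrix. Because the whitened matrices of the two classes sum to the identity, eigenvalues near 1 mark directions that best represent class one and eigenvalues near 0 mark directions that best represent class two. This is a generic sketch, not the paper's tensor version, which applies the same machinery mode-wise to image matrices.

        import numpy as np
        from scipy.linalg import eigh

        def fkt_basis(X1, X2):
            """Classical Fukunaga-Koontz transform.
            X1, X2: (n_samples, n_features) arrays for the two classes.
            Returns projection directions W (columns) and eigenvalues lam;
            features of a sample x are W.T @ x."""
            R1 = X1.T @ X1 / len(X1)        # class autocorrelation matrices
            R2 = X2.T @ X2 / len(X2)
            w, U = eigh(R1 + R2)            # inverse square root of R1 + R2
            P = U @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ U.T
            lam, V = eigh(P @ R1 @ P)       # shared eigenvectors; R2 gets 1-lam
            return P @ V, lam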

  6. Regularization Techniques for ECG Imaging during Atrial Fibrillation: a Computational Study

    Directory of Open Access Journals (Sweden)

    Carlos Figuera

    2016-10-01

    Full Text Available The inverse problem of electrocardiography is usually analyzed during stationary rhythms. However, the performance of the regularization methods under fibrillatory conditions has not been fully studied. In this work, we assessed different regularization techniques during atrial fibrillation (AF) for estimating four target parameters, namely, epicardial potentials, dominant frequency (DF), phase maps, and singularity point (SP) location. We used a realistic mathematical model of the atria and torso anatomy with three different electrical activity patterns (i.e., sinus rhythm, simple AF, and complex AF). Body surface potentials (BSP) were simulated using the Boundary Element Method and corrupted with white Gaussian noise of different powers. Noisy BSPs were used to obtain the epicardial potentials on the atrial surface, using fourteen different regularization techniques. DF, phase maps, and SP location were computed from the estimated epicardial potentials. Inverse solutions were evaluated using a set of performance metrics adapted to each clinical target. For the case of SP location, an assessment methodology based on the spatial mass function of the SP location and four spatial error metrics was proposed. The role of the regularization parameter for Tikhonov-based methods, and the effect of noise level and imperfections in the knowledge of the transfer matrix, were also addressed. Results showed that the Bayes maximum-a-posteriori method clearly outperforms the rest of the techniques but requires a priori information about the epicardial potentials. Among the purely non-invasive techniques, Tikhonov-based methods performed as well as more complex techniques in realistic fibrillatory conditions, with a slight gain between 0.02 and 0.2 in terms of the correlation coefficient. Also, the use of a constant regularization parameter may be advisable since the performance was similar to that obtained with a variable parameter (indeed there was no difference for the zero
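
    As a concrete anchor for the Tikhonov-based family evaluated here: zero-order Tikhonov reduces the BSP-to-epicardium inversion to a closed-form linear solve. The sketch below is generic rather than the authors' code; `A` stands for the BEM transfer matrix, `b` for one time instant of body surface potentials, and `lam` would be tuned, e.g., by an L-curve criterion or, per the paper's finding, simply held constant.

        import numpy as np

        def tikhonov(A, b, lam):
            """Zero-order Tikhonov estimate of epicardial potentials:
            x = argmin ||A x - b||^2 + lam^2 ||x||^2,
            computed via the regularized normal equations."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)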

  7. Detection of mechanical injury on pickling cucumbers using near-infrared hyperspectral imaging

    Science.gov (United States)

    Ariana, D.; Lu, R.; Guyer, D.

    2005-11-01

    Automated detection of defects on freshly harvested pickling cucumbers will help the pickle industry provide higher quality pickle products and reduce potential economic losses. Research was conducted on using a hyperspectral imaging system for detecting defects on pickling cucumbers caused by mechanical stress. A near-infrared hyperspectral imaging system was used to capture both spatial and spectral information from cucumbers in the spectral region of 900-1700 nm. The system consisted of an imaging spectrograph attached to an InGaAs camera with line-light fiber bundles as an illumination source. Cucumber samples were subjected to two forms of mechanical loading, dropping and rolling, to simulate stress caused by mechanical harvesting. Hyperspectral images were acquired from the cucumbers over time periods of 0, 1, 2, 3, and 6 days after mechanical stress. Hyperspectral image processing methods, including principal component analysis and wavelength selection, were developed to separate normal and mechanically injured cucumbers. Results showed that reflectance from normal or non-bruised cucumbers was consistently higher than that from bruised cucumbers. The spectral region between 950 and 1350 nm was found to be most effective for bruise detection. The hyperspectral imaging system detected all mechanically injured cucumbers immediately after they were bruised. The overall detection accuracy was 97% within two hours of bruising and decreased as time progressed. The lower detection accuracies at prolonged times after bruising were attributed to the self-healing of the bruised tissue after mechanical injury. This research demonstrated that hyperspectral imaging is useful for detecting mechanical injury on pickling cucumbers.

  8. Fluorescence hyperspectral imaging technique for foreign substance detection on fresh-cut lettuce.

    Science.gov (United States)

    Mo, Changyeun; Kim, Giyoung; Kim, Moon S; Lim, Jongguk; Cho, Hyunjeong; Barnaby, Jinyoung Yang; Cho, Byoung-Kwan

    2017-09-01

    Non-destructive methods based on fluorescence hyperspectral imaging (HSI) techniques were developed to detect worms on fresh-cut lettuce. The optimal wavebands for detecting the worms were investigated using one-way ANOVA and correlation analyses. The worm detection imaging algorithm, RSI-I(492-626)/492, provided a prediction accuracy of 99.0%. The fluorescence HSI techniques indicated that the spectral images with a pixel size of 1 × 1 mm had the best classification accuracy for worms. The overall results demonstrate that fluorescence HSI techniques have the potential to detect worms on fresh-cut lettuce. In the future, we will focus on developing a multi-spectral imaging system to detect foreign substances such as worms, slugs and earthworms on fresh-cut lettuce. © 2017 Society of Chemical Industry.
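
    The detection step reduces to per-pixel band arithmetic once the hyperspectral cube is in memory. The sketch below implements one plausible reading of RSI-I(492-626)/492, namely the 492 nm minus 626 nm difference normalized by the 492 nm band; the exact formula, wavelengths, and threshold are assumptions to be checked against the paper.

        import numpy as np

        def ratio_subtraction_image(cube, wavelengths, w1=492.0, w2=626.0):
            """cube: (rows, cols, bands) fluorescence cube;
            wavelengths: (bands,) band-center wavelengths in nm."""
            b1 = int(np.argmin(np.abs(wavelengths - w1)))
            b2 = int(np.argmin(np.abs(wavelengths - w2)))
            i1 = cube[..., b1].astype(float)
            i2 = cube[..., b2].astype(float)
            return (i1 - i2) / np.maximum(i1, 1e-6)   # divide-by-zero guard

        # worm_mask = ratio_subtraction_image(cube, wl) > threshold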

  9. Automatic Detection of Optic Disc in Retinal Image by Using Keypoint Detection, Texture Analysis, and Visual Dictionary Techniques

    Directory of Open Access Journals (Sweden)

    Kemal Akyol

    2016-01-01

    Full Text Available With the advances in the computer field, methods and techniques in automatic image processing and analysis provide the opportunity to detect automatically the change and degeneration in retinal images. Localization of the optic disc is extremely important for determining hard exudate lesions or neovascularization, which is a later phase of diabetic retinopathy, in computer-aided eye disease diagnosis systems. Whereas optic disc detection is a fairly easy process in normal retinal images, detecting this region in retinal images affected by diabetic retinopathy may be difficult. Sometimes optic disc information and hard exudate information may look the same to a machine learning system. We presented a novel approach for efficient and accurate localization of the optic disc in retinal images having noise and other lesions. This approach comprises five main steps: image processing, keypoint extraction, texture analysis, visual dictionary, and classifier techniques. We tested our proposed technique on 3 public datasets and obtained quantitative results. Experimental results show that an average optic disc detection accuracy of 94.38%, 95.00%, and 90.00% is achieved, respectively, on the following public datasets: DIARETDB1, DRIVE, and ROC.

  10. Detection of nuclei in 4D Nomarski DIC microscope images of early Caenorhabditis elegans embryos using local image entropy and object tracking

    Directory of Open Access Journals (Sweden)

    Hamahashi Shugo

    2005-05-01

    Full Text Available Abstract Background The ability to detect nuclei in embryos is essential for studying the development of multicellular organisms. A system of automated nuclear detection has already been tested on a set of four-dimensional (4D Nomarski differential interference contrast (DIC microscope images of Caenorhabditis elegans embryos. However, the system needed laborious hand-tuning of its parameters every time a new image set was used. It could not detect nuclei in the process of cell division, and could detect nuclei only from the two- to eight-cell stages. Results We developed a system that automates the detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. Local image entropy is used to produce regions of the images that have the image texture of the nucleus. From these regions, those that actually detect nuclei are manually selected at the first and last time points of the image set, and an object-tracking algorithm then selects regions that detect nuclei in between the first and last time points. The use of local image entropy makes the system applicable to multiple image sets without the need to change its parameter values. The use of an object-tracking algorithm enables the system to detect nuclei in the process of cell division. The system detected nuclei with high sensitivity and specificity from the one- to 24-cell stages. Conclusion A combination of local image entropy and an object-tracking algorithm enabled highly objective and productive detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. The system will facilitate genomic and computational analyses of C. elegans embryos.
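
    The local-entropy cue is straightforward to reproduce. For every pixel, the sketch below computes the Shannon entropy of the grey-level histogram in a small window; in DIC images the granular texture of nuclei yields high local entropy, so thresholding the map gives candidate nucleus regions to feed the tracker. The window size and quantization depth are our assumptions.

        import numpy as np
        from scipy import ndimage

        def local_entropy(img, size=9, bins=16):
            """Shannon entropy of the local grey-level histogram."""
            edges = np.linspace(img.min(), img.max(), bins + 1)[1:-1]
            q = np.digitize(img, edges)           # quantize to `bins` levels
            ent = np.zeros(img.shape, dtype=float)
            for level in range(bins):
                # local fraction of pixels at this grey level
                p = ndimage.uniform_filter((q == level).astype(float), size=size)
                ent -= np.where(p > 0, p * np.log2(p), 0.0)
            return ent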

  11. A simple method for detecting tumor in T2-weighted MRI brain images. An image-based analysis

    International Nuclear Information System (INIS)

    Lau, Phooi-Yee; Ozawa, Shinji

    2006-01-01

    The objective of this paper is to present a decision support system which uses a computer-based procedure to detect tumor blocks or lesions in digitized medical images. The authors developed a simple method with a low computation effort to detect tumors on T2-weighted Magnetic Resonance Imaging (MRI) brain images, focusing on the connection between the spatial pixel value and tumor properties from four different perspectives: cases having minuscule differences between two images using a fixed block-based method, tumor shape and size using the edge and binary images, tumor properties based on texture values using the spatial pixel intensity distribution controlled by a global discriminate value, and the occurrence of content-specific tumor pixels for threshold images. Measurements were performed on the following medical datasets: images taken at different time intervals, and images of different brain diseases on single and multiple slices. Experimental results revealed that the proposed technique incurred an overall error smaller than that of other proposed methods. In particular, the proposed method reduced both false-alarm and missed-alarm errors, which demonstrates the effectiveness of the proposed technique. In this paper, we also present a prototype system, known as PCB, to evaluate the performance of the proposed methods by actual experiments, comparing the detection accuracy and system performance. (author)

  12. Automated detection of diabetic retinopathy lesions on ultrawidefield pseudocolour images.

    Science.gov (United States)

    Wang, Kang; Jayadev, Chaitra; Nittala, Muneeswar G; Velaga, Swetha B; Ramachandra, Chaithanya A; Bhaskaranand, Malavika; Bhat, Sandeep; Solanki, Kaushal; Sadda, SriniVas R

    2018-03-01

    We examined the sensitivity and specificity of an automated algorithm for detecting referral-warranted diabetic retinopathy (DR) on Optos ultrawidefield (UWF) pseudocolour images. Patients with diabetes were recruited for UWF imaging. A total of 383 subjects (754 eyes) were enrolled. Nonproliferative DR graded to be moderate or higher on the 5-level International Clinical Diabetic Retinopathy (ICDR) severity scale was considered as grounds for referral. The software automatically detected DR lesions using the previously trained classifiers and classified each image in the test set as referral-warranted or not warranted. Sensitivity, specificity and the area under the receiver operating curve (AUROC) of the algorithm were computed. The automated algorithm achieved a 91.7%/90.3% sensitivity (95% CI 90.1-93.9/80.4-89.4) with a 50.0%/53.6% specificity (95% CI 31.7-72.8/36.5-71.4) for detecting referral-warranted retinopathy at the patient/eye levels, respectively; the AUROC was 0.873/0.851 (95% CI 0.819-0.922/0.804-0.894). Diabetic retinopathy (DR) lesions were detected from Optos pseudocolour UWF images using an automated algorithm. Images were classified as referral-warranted DR with a high degree of sensitivity and moderate specificity. Automated analysis of UWF images could be of value in DR screening programmes and could allow for more complete and accurate disease staging. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  13. Lesion detection in ultra-wide field retinal images for diabetic retinopathy diagnosis

    Science.gov (United States)

    Levenkova, Anastasia; Sowmya, Arcot; Kalloniatis, Michael; Ly, Angelica; Ho, Arthur

    2018-02-01

    Diabetic retinopathy (DR) leads to irreversible vision loss. Diagnosis and staging of DR is usually based on the presence, number, location and type of retinal lesions. Ultra-wide field (UWF) digital scanning laser technology provides an opportunity for computer-aided DR lesion detection. High-resolution UWF images (3078×2702 pixels) may allow detection of more clinically relevant retinopathy in comparison with conventional retinal images, as UWF imaging covers a 200° retinal area, versus 45° for conventional cameras. Current approaches to DR diagnosis that analyze 7-field Early Treatment Diabetic Retinopathy Study (ETDRS) retinal images provide similar results to UWF imaging. However, in 40% of cases, more retinopathy was found outside the 7-field ETDRS fields by UWF and in 10% of cases, retinopathy was reclassified as more severe. The reason is that UWF images examine both the central retina and more peripheral regions. We propose an algorithm for automatic detection and classification of DR lesions such as cotton wool spots, exudates, microaneurysms and haemorrhages in UWF images. The algorithm uses a convolutional neural network (CNN) as a feature extractor and classifies the feature vectors extracted from colour-composite UWF images using a support vector machine (SVM). The main contribution includes detection of four types of DR lesions in the peripheral retina for diagnostic purposes. The evaluation dataset contains 146 UWF images. The proposed method for detection of DR lesion subtypes in UWF images using two scenarios for transfer learning achieved AUC ≈ 80%. Data was split at the patient level to validate the proposed algorithm.

  14. Detecting ship targets in spaceborne infrared image based on modeling radiation anomalies

    Science.gov (United States)

    Wang, Haibo; Zou, Zhengxia; Shi, Zhenwei; Li, Bo

    2017-09-01

    Using infrared imaging sensors to detect ship targets in the ocean environment has many advantages compared to other sensor modalities, such as better thermal sensitivity and all-weather detection capability. We propose a new ship detection method for spaceborne infrared images based on modeling radiation anomalies. The proposed method can be decomposed into two stages: in the first stage, a test infrared image is densely divided into a set of image patches and the radiation anomaly of each patch is estimated by a Gaussian Mixture Model (GMM), and thereby target candidates are obtained from anomalous image patches. In the second stage, target candidates are further checked by a more discriminative criterion to obtain the final detection result. The main innovation of the proposed method is inspired by the biological mechanism that human eyes are sensitive to unusual and anomalous patches among a complex background. Experimental results on the short-wavelength infrared band (1.560-2.300 μm) and the long-wavelength infrared band (10.30-12.50 μm) of the Landsat-8 satellite show that the proposed method achieves the desired ship detection accuracy with higher recall than other classical ship detection methods.
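
    The first stage can be imitated with off-the-shelf tools: fit a Gaussian mixture to densely sampled patches and flag the patches the model finds least likely. Raw pixel intensities as patch features, non-overlapping sampling, the patch size, and the number of mixture components are all our simplifying assumptions; the paper's exact anomaly measure may differ.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def patch_anomaly_scores(image, patch=8, n_components=3):
            """Score patches by negative log-likelihood under a GMM fitted to
            all patches of the image; a high score marks a radiation anomaly."""
            h, w = image.shape
            rows = range(0, h - patch + 1, patch)
            cols = range(0, w - patch + 1, patch)
            feats = np.array([image[r:r + patch, c:c + patch].ravel()
                              for r in rows for c in cols])
            gmm = GaussianMixture(n_components=n_components,
                                  random_state=0).fit(feats)
            scores = -gmm.score_samples(feats)
            return scores.reshape(len(rows), len(cols))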

  15. The values of myocardial tomography imaging and gated cardiac blood pool imaging in detecting left ventricular aneurysm

    International Nuclear Information System (INIS)

    Zhu Mei; Pan Zhongyun; Li Jinhui

    1992-01-01

    The sensitivity and specificity of myocardial tomography imaging and gated cardiac blood-pool imaging in detecting LVA were studied in 36 normal subjects and 68 patients with myocardial infarction. The sensitivities of exercise and rest myocardial imaging in detecting LVA were 85% and 77.3%, respectively; the specificity of both was 95.5%. The sensitivities of cinema display, phase analysis, and left ventricular phase shift in evaluating LVA were 86.7%, 86.7%, and 100%, respectively; their specificities were all 100%. It is concluded that blood-pool imaging is the method of choice for the diagnosis of LVA, and that myocardial imaging can also demonstrate LVA during the diagnosis of myocardial infarction

  16. A survey on object detection in optical remote sensing images

    Science.gov (United States)

    Cheng, Gong; Han, Junwei

    2016-07-01

    Object detection in optical remote sensing images, being a fundamental but challenging problem in the field of aerial and satellite image analysis, plays an important role for a wide range of applications and has been receiving significant attention in recent years. While numerous methods exist, a deep review of the literature concerning generic object detection is still lacking. This paper aims to provide a review of the recent progress in this field. Different from several previously published surveys that focus on a specific object class such as buildings and roads, we concentrate on more generic object categories including, but not limited to, road, building, tree, vehicle, ship, airport, and urban area. Covering about 270 publications, we survey (1) template matching-based object detection methods, (2) knowledge-based object detection methods, (3) object-based image analysis (OBIA)-based object detection methods, (4) machine learning-based object detection methods, and (5) five publicly available datasets and three standard evaluation metrics. We also discuss the challenges of current studies and propose two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection. It is our hope that this survey will help researchers gain a better understanding of this research field.

  17. Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series

    Science.gov (United States)

    Champion, Nicolas

    2016-06-01

    Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance orthoimages are used. They were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are firstly extracted. For that purpose, and for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it. The pixels of the input ortho-image are secondly labelled seeds if the difference of reflectance (in the blue channel) with overlapping ortho-images is greater than a given threshold. Clouds are eventually delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding the shadow detection, our method is based on the idea that a shadow pixel is darker when compared to the other images of the time series. The detection is basically composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area. Its pixels have a value corresponding to the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled shadow if the difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Eventually, an optional region-growing step may be used to refine the results. Note that pixels labelled clouds during the cloud detection are not used for computing the median value in the first step; additionally, the NIR input data channel is used to perform the shadow detection because it appeared to better discriminate shadow pixels. The method was tested on time series of Landsat 8 and Pl
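
    The shadow rule in particular is nearly a one-liner over a co-registered stack. The sketch assumes NIR reflectance ortho-images resampled to a common grid, with cloud pixels already set to NaN so that the median excludes them, as the text prescribes; the darkness threshold is a placeholder value of ours.

        import numpy as np

        def shadow_masks(nir_stack, threshold=0.08):
            """nir_stack: (n_images, rows, cols) NIR reflectance ortho-images,
            cloud pixels set to NaN. Returns one boolean shadow mask per image:
            shadow where an image is darker than the per-pixel temporal median
            by more than `threshold` (an assumed value)."""
            median = np.nanmedian(nir_stack, axis=0)   # synthetic ortho-image
            return (nir_stack - median[None]) < -threshold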

  18. AUTOMATIC DETECTION OF CLOUDS AND SHADOWS USING HIGH RESOLUTION SATELLITE IMAGE TIME SERIES

    Directory of Open Access Journals (Sweden)

    N. Champion

    2016-06-01

    Full Text Available Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance orthoimages are used. They were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are firstly extracted. For that purpose, and for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it. The pixels of the input ortho-image are secondly labelled seeds if the difference of reflectance (in the blue channel) with overlapping ortho-images is greater than a given threshold. Clouds are eventually delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding the shadow detection, our method is based on the idea that a shadow pixel is darker when compared to the other images of the time series. The detection is basically composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area. Its pixels have a value corresponding to the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled shadow if the difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Eventually, an optional region-growing step may be used to refine the results. Note that pixels labelled clouds during the cloud detection are not used for computing the median value in the first step; additionally, the NIR input data channel is used to perform the shadow detection, because it appeared to better discriminate shadow pixels. The method was tested on time series of Landsat 8

  19. Ship Detection and Classification on Optical Remote Sensing Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Liu Ying

    2017-01-01

    Full Text Available Ship detection and classification is critical for national maritime security and national defense. Although some SAR (Synthetic Aperture Radar) image-based ship detection approaches have been proposed and used, they are not able to satisfy the requirements of real-world applications, as the number of SAR sensors is limited, the resolution is low, and the revisit cycle is long. As massive optical remote sensing images of high resolution are available, ship detection and classification on these images is becoming a promising technique, and has attracted great attention in applications including maritime security and traffic control. Some digital image processing methods have been proposed to detect ships in optical remote sensing images, but most of them face difficulties in terms of accuracy, performance and complexity. Recently, an autoencoder-based deep neural network with an extreme learning machine was proposed, but it cannot meet the requirements of real-world applications as it only works with simple and small-scale data sets. Therefore, in this paper, we propose a novel ship detection and classification approach which utilizes a deep convolutional neural network (CNN) as the ship classifier. The performance of our proposed ship detection and classification approach was evaluated on a set of images downloaded from Google Earth at a resolution of 0.5 m. 99% detection accuracy and 95% classification accuracy were achieved. In model training, a 75× speedup was achieved on one Nvidia Titan X GPU.

  20. Automatic detection of radioactive fixations in oncology PET images

    International Nuclear Information System (INIS)

    Tomei-Le-Digarcher, Sandrine

    2009-01-01

    Therapeutic follow-up of patients with cancer is nowadays of major interest in research. Positron Emission Tomography (PET) is becoming a reference examination for monitoring the treatment of cancers, particularly lymphoma. This PhD thesis thus deals with the development of a computer-aided detection (CAD) tool focused on barely visible tumors in whole-body 3D PET images. To achieve such a goal, we proposed an approach based on the combination of two classifiers, Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM), associated with wavelet image features. Each classifier gives a 3D score map quantifying the probability that each voxel corresponds to a tumor. We proposed a 3D evaluation strategy based on the use of simulated images providing a gold standard with the targeted tumor characteristics. Such a database was developed in this PhD from one hundred Monte Carlo simulations of the Zubal phantom. It includes one hundred images presenting 375 spherical tumors of calibrated contrasts. Results of the CAD obtained from the binary detection maps are promising. They open the prospect of enriching the binary information generally given to the clinician with parametric indices quantifying the pertinence of each detected tumor. (author)

  1. Detection of hepatic metastasis: Manganese- and ferucarbotran-enhanced MR imaging

    International Nuclear Information System (INIS)

    Choi, Jin-Young; Kim, Myeong-Jin; Kim, Joo Hee; Kim, Seung Hyoung; Ko, Heung-Kyu; Lim, Joon Seok; Oh, Young Taik; Chung, Jae-Joon; Yoo, Hyung Sik; Lee, Jong Tae; Kim, Ki Whang

    2006-01-01

    Purpose: To compare mangafodipir trisodium (MnDPDP)-enhanced and ferucarbotran-enhanced magnetic resonance imaging (MRI) for the detection of hepatic metastases. Material and methods: Twenty patients with known hepatic metastasis underwent MR imaging using mangafodipir trisodium and ferucarbotran at intervals of at least one day. Thirty-eight metastases were confirmed either histologically or clinically. Two radiologists independently reviewed the MnDPDP-enhanced and ferucarbotran-enhanced sets in a random order. The sensitivity and accuracy of lesion detection and the ability to distinguish a benign lesion from a malignant lesion were compared by the areas (Az) under the receiver operating characteristic (ROC) curve. The lesion-liver contrast-to-noise ratios (CNR) were compared by paired t-test. Results: The overall accuracy for detecting metastases was not significantly different between the MnDPDP set (Az = 0.912 and 0.913 for readers 1 and 2, respectively) and the SPIO set (Az = 0.920 and 0.950). The CNRs of the MnDPDP-enhanced images and the SPIO-enhanced images were not significantly different (P = 0.146). Conclusion: Both MnDPDP- and ferucarbotran-enhanced MRI have a comparable accuracy in detecting hepatic metastasis

  2. Copy-Move Forgery Detection Technique for Forensic Analysis in Digital Images

    Directory of Open Access Journals (Sweden)

    Toqeer Mahmood

    2016-01-01

    Full Text Available Due to powerful image editing tools, images are open to several manipulations; therefore, their authenticity is becoming questionable, especially when images have influential power, for example, in a court of law, news reports, and insurance claims. Image forensic techniques determine the integrity of images by applying various high-tech mechanisms developed in the literature. In this paper, images are analyzed for a particular type of forgery where a region of an image is copied and pasted onto the same image to create a duplication or to conceal some existing objects. To detect the copy-move forgery attack, images are first divided into overlapping square blocks and DCT components are adopted as the block representations. Due to the high-dimensional nature of the feature space, Gaussian RBF kernel PCA is applied to achieve a reduced-dimensional feature vector representation, which also improves efficiency during feature matching. Extensive experiments are performed to evaluate the proposed method in comparison to the state of the art. The experimental results reveal that the proposed technique precisely detects copy-move forgery even when the images are contaminated with blurring, noise, and compression, and can effectively detect multiple copy-move forgeries. Hence, the proposed technique provides a computationally efficient and reliable way of copy-move forgery detection that increases the credibility of images in evidence-centered applications.
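
    The front end of such a detector, overlapping-block DCT features followed by lexicographic sorting, can be sketched directly. The kernel-PCA reduction and the shift-vector verification stage are omitted, and the block size and coefficient count are conventional choices, not necessarily the paper's.

        import numpy as np
        from scipy.fft import dctn

        def block_dct_features(img, bsize=8, n_coeff=16):
            """Low-frequency DCT coefficients (diagonal order) of every
            overlapping bsize x bsize block of a grayscale image.
            Written for clarity, not speed."""
            h, w = img.shape
            idx = sorted(((u, v) for u in range(bsize) for v in range(bsize)),
                         key=lambda t: (t[0] + t[1], t[0]))[:n_coeff]
            feats, locs = [], []
            for r in range(h - bsize + 1):
                for c in range(w - bsize + 1):
                    d = dctn(img[r:r + bsize, c:c + bsize], norm='ortho')
                    feats.append([d[u, v] for u, v in idx])
                    locs.append((r, c))
            return np.array(feats), np.array(locs)

        # Matching step: sort feature rows lexicographically; adjacent rows
        # with near-identical features are duplication candidates, and a
        # dominant offset between their block positions indicates copy-move.
        # order = np.lexsort(feats.T[::-1])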

  3. Detecting aircrafts from satellite images using saliency and conical ...

    Indian Academy of Sciences (India)

    Samik Banerjee

    … automatically detect all kinds of interesting targets in satellite images. … which is used for text and image categorization, has also been introduced for object … 3.4 GHz processor, 32 GB RAM and Windows 7 (64-bit) operating system.

  4. Edge detection based on computational ghost imaging with structured illuminations

    Science.gov (United States)

    Yuan, Sheng; Xiang, Dong; Liu, Xuemei; Zhou, Xin; Bing, Pibin

    2018-03-01

    Edge detection is one of the most important tools for recognizing the features of an object. In this paper, we propose an optical edge detection method based on computational ghost imaging (CGI) with structured illuminations, which are generated by an interference system. The structured intensity patterns are designed so that the edge of an object is directly imaged from the detected data in CGI. This edge detection method can extract the boundaries of both binary and grayscale objects in any direction at one time. We also numerically test the influence of distance deviations in the interference system on edge extraction, i.e., the tolerance of the optical edge detection system to distance deviation. Hopefully, it may provide a guideline for scholars to build an experimental system.
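
    The correlation step at the heart of CGI edge extraction can be imitated numerically. In the toy below, the interferometrically generated patterns of the paper are replaced by random speckle patterns differenced with a one-pixel-shifted copy of themselves (our substitution): correlating the single-pixel bucket values with these difference patterns reconstructs a discrete second difference of the object along the shift direction, which is nonzero only at edges. Repeating with a vertical shift recovers edges in the other direction.

        import numpy as np

        rng = np.random.default_rng(0)
        N, n_pat = 64, 8000

        # Hypothetical binary object (a square) whose edges we want.
        obj = np.zeros((N, N))
        obj[20:44, 20:44] = 1.0

        edge_img = np.zeros((N, N))
        for _ in range(n_pat):
            p = rng.random((N, N))
            p_edge = p - np.roll(p, 1, axis=1)   # structured pattern
            bucket = (p_edge * obj).sum()        # single-pixel detector
            edge_img += bucket * p_edge          # correlation accumulation
        edge_img /= n_pat                        # peaks at vertical boundaries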

  5. Airplane detection in remote sensing images using convolutional neural networks

    Science.gov (United States)

    Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei

    2018-03-01

    Airplane detection in remote sensing images remains a challenging problem and has attracted great interest from researchers. In this paper, we propose an effective method to detect airplanes in remote sensing images using convolutional neural networks. With the rise of deep neural networks in target detection, deep learning methods show greater advantages than traditional methods, and we give an explanation of why this happens. To improve the performance of airplane detection, we combine a region proposal algorithm with convolutional neural networks. In the training phase, we divide the background into multiple classes rather than one, which can reduce false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.

  6. Streak detection and analysis pipeline for optical images

    Science.gov (United States)

    Virtanen, J.; Granvik, M.; Torppa, J.; Muinonen, K.; Poikonen, J.; Lehti, J.; Säntti, T.; Komulainen, T.; Flohrer, T.

    2014-07-01

    We describe a novel data processing and analysis pipeline for optical observations of moving objects, either of natural (asteroids, meteors) or artificial origin (satellites, space debris). The monitoring of the space object populations requires reliable acquisition of observational data to support the development and validation of population models, and to build and maintain catalogues of orbital elements. The orbital catalogues are, in turn, needed for the assessment of close approaches (for asteroids, with the Earth; for satellites, with each other) and for the support of contingency situations or launches. For both types of populations, there is also increasing interest in detecting fainter objects corresponding to the small end of the size distribution. We focus on the low signal-to-noise (SNR) detection of objects with high angular velocities, resulting in long and faint object trails, or streaks, in the optical images. The currently available, mature image processing algorithms for detection and astrometric reduction of optical data cover objects that cross the sensor field-of-view comparatively slowly, and, particularly for satellites, within a rather narrow, predefined range of angular velocities. By applying specific tracking techniques, the objects appear point-like or as short trails in the exposures. However, the general survey scenario is always a 'track-before-detect' problem, resulting in streaks of arbitrary lengths. Although some considerations for low-SNR processing of streak-like features are available in the current image processing and computer vision literature, algorithms are not readily available yet. In the ESA-funded StreakDet (Streak detection and astrometric reduction) project, we develop and evaluate an automated processing pipeline applicable to single images (as compared to consecutive frames of the same field) obtained with any observing scenario, including space-based surveys and both low- and high-altitude populations. The algorithmic

  7. Paediatric CT: the effects of increasing image noise on pulmonary nodule detection

    International Nuclear Information System (INIS)

    Punwani, Shonit; Davies, Warren; Greenhalgh, Rebecca; Humphries, Paul; Zhang, Jie

    2008-01-01

    A radiation dose of any magnitude can produce a detrimental effect manifesting as an increased risk of cancer. Cancer development may be delayed for many years following radiation exposure. Minimizing radiation dose in children is particularly important. However, reducing the dose can reduce image quality and may, therefore, hinder lesion detection. We investigated the effects of reducing the image signal-to-noise ratio (SNR) on CT lung nodule detection for a range of nodule sizes. A simulated nodule was placed at the periphery of the lung on an axial CT slice using image editing software. Multiple copies of the manipulated image were saved with various levels of superimposed noise. The image creation process was repeated for a range of nodule sizes. For a given nodule size, output images were read independently by four Fellows of The Royal College of Radiologists. The overall sensitivities in detecting nodules for the SNR ranges 0.8-0.99, 1-1.49, and 1.5-2.35 were 40.5%, 77.3% and 90.3%, respectively, and the specificities were 47.9%, 73.3% and 75%, respectively. The sensitivity for detecting lung nodules increased with nodule size and increasing SNR. There was 100% sensitivity for the detection of nodules of 4-10 mm in diameter at SNRs greater than 1.5. Reducing medical radiation doses in children is of paramount importance. For chest CT examinations this may be counterbalanced by reduced sensitivity and specificity combined with an increased uncertainty of pulmonary nodule detection. This study demonstrates that pulmonary nodules of 4 mm and greater in diameter can be detected with 100% sensitivity provided that the perceived image SNR is greater than 1.5. (orig.)

  8. Streak detection and analysis pipeline for space-debris optical images

    Science.gov (United States)

    Virtanen, Jenni; Poikonen, Jonne; Säntti, Tero; Komulainen, Tuomo; Torppa, Johanna; Granvik, Mikael; Muinonen, Karri; Pentikäinen, Hanna; Martikainen, Julia; Näränen, Jyri; Lehti, Jussi; Flohrer, Tim

    2016-04-01

    We describe a novel data-processing and analysis pipeline for optical observations of moving objects, either of natural (asteroids, meteors) or artificial origin (satellites, space debris). The monitoring of the space object populations requires reliable acquisition of observational data, to support the development and validation of population models and to build and maintain catalogues of orbital elements. The orbital catalogues are, in turn, needed for the assessment of close approaches (for asteroids, with the Earth; for satellites, with each other) and for the support of contingency situations or launches. For both types of populations, there is also increasing interest in detecting fainter objects corresponding to the small end of the size distribution. The ESA-funded StreakDet (streak detection and astrometric reduction) activity has aimed at formulating and discussing suitable approaches for the detection and astrometric reduction of object trails, or streaks, in optical observations. Our two main focuses are objects in lower altitudes and space-based observations (i.e., high angular velocities), resulting in long (potentially curved) and faint streaks in the optical images. In particular, we concentrate on single-image (as compared to consecutive frames of the same field) and low-SNR detection of objects. Particular attention has been paid to the process of extraction of all necessary information from one image (segmentation), and subsequently, to efficient reduction of the extracted data (classification). We have developed an automated streak detection and processing pipeline and demonstrated its performance with an extensive database of semisynthetic images simulating streak observations both from ground-based and space-based observing platforms. The average processing time per image is about 13 s for a typical 2k-by-2k image. For long streaks (length >100 pixels), primary targets of the pipeline, the detection sensitivity (true positives) is about 90% for

  9. Dynamic fluorescence imaging with molecular agents for cancer detection

    Science.gov (United States)

    Kwon, Sun Kuk

    Non-invasive dynamic optical imaging of small animals requires the development of a novel fluorescence imaging modality. Herein, fluorescence imaging is demonstrated with sub-second camera integration times using agents specifically targeted to disease markers, enabling rapid detection of cancerous regions. The continuous-wave fluorescence imaging acquires data with an intensified or an electron-multiplying charge-coupled device. The work presented in this dissertation (i) assessed dose-dependent uptake using dynamic fluorescence imaging and pharmacokinetic (PK) models, (ii) evaluated disease marker availability in two different xenograft tumors, (iii) compared the impact of autofluorescence in fluorescence imaging of near-infrared (NIR) vs. red light excitable fluorescent contrast agents, (iv) demonstrated dual-wavelength fluorescence imaging of angiogenic vessels and lymphatics associated with a xenograft tumor model, and (v) examined dynamic multi-wavelength, whole-body fluorescence imaging with two different fluorescent contrast agents. PK analysis showed that the uptake of Cy5.5-c(KRGDf) in xenograft tumor regions linearly increased with doses of Cy5.5-c(KRGDf) up to 1.5 nmol/mouse. Above 1.5 nmol/mouse, the uptake did not increase with doses, suggesting receptor saturation. Target to background ratio (TBR) and PK analysis for two different tumor cell lines showed that while Kaposi's sarcoma (KS1767) exhibited early and rapid uptake of Cy5.5-c(KRGDf), human melanoma tumors (M21) had non-significant TBR differences and early uptake rates similar to the contralateral normal tissue regions. The differences may be due to different compartment location of the target. A comparison of fluorescence imaging with NIR vs. red light excitable fluorescent dyes demonstrates that NIR dyes are associated with less background signal, enabling rapid tumor detection. In contrast, animals injected with red light excitable fluorescent dyes showed high autofluorescence. Dual

  10. Seven-Tesla Magnetization Transfer Imaging to Detect Multiple Sclerosis White Matter Lesions.

    Science.gov (United States)

    Chou, I-Jun; Lim, Su-Yin; Tanasescu, Radu; Al-Radaideh, Ali; Mougin, Olivier E; Tench, Christopher R; Whitehouse, William P; Gowland, Penny A; Constantinescu, Cris S

    2018-03-01

    Fluid-attenuated inversion recovery (FLAIR) imaging at 3 Tesla (T) field strength is the most sensitive modality for detecting white matter lesions in multiple sclerosis. While 7T FLAIR is effective in detecting cortical lesions, it has not been fully optimized for visualization of white matter lesions and thus has not been used for delineating lesions in quantitative magnetic resonance imaging (MRI) studies of the normal appearing white matter in multiple sclerosis. Therefore, we aimed to evaluate the sensitivity of 7T magnetization-transfer-weighted (MTw) images in the detection of white matter lesions compared with 3T-FLAIR. Fifteen patients with clinically isolated syndrome, 6 with multiple sclerosis, and 10 healthy participants were scanned with 7T 3-dimensional (3D) MTw and 3T-2D-FLAIR sequences on the same day. White matter lesions visible on either sequence were delineated. Of 662 lesions identified on 3T-2D-FLAIR images, 652 were detected on 7T-3D-MTw images (sensitivity, 98%; 95% confidence interval, 97% to 99%). The Spearman correlation coefficient between lesion loads estimated by the two sequences was .910. The intrarater and interrater reliability for 7T-3D-MTw images was good with an intraclass correlation coefficient (ICC) of 98.4% and 81.8%, which is similar to that for 3T-2D-FLAIR images (ICC 96.1% and 96.7%). Seven-Tesla MTw sequences detected most of the white matter lesions identified by FLAIR at 3T. This suggests that 7T-MTw imaging is a robust alternative for detecting demyelinating lesions in addition to 3T-FLAIR. Future studies need to compare the roles of optimized 7T-FLAIR and of 7T-MTw imaging. © 2017 The Authors. Journal of Neuroimaging published by Wiley Periodicals, Inc. on behalf of American Society of Neuroimaging.

  11. Multi-layer cube sampling for liver boundary detection in PET-CT images.

    Science.gov (United States)

    Liu, Xinxin; Yang, Jian; Song, Shuang; Song, Hong; Ai, Danni; Zhu, Jianjun; Jiang, Yurong; Wang, Yongtian

    2018-06-01

    Liver metabolic information is considered a crucial diagnostic marker for the diagnosis of fever of unknown origin, and liver recognition is the basis for automatic extraction of metabolic information. However, the poor quality of PET and CT images is a challenge for information extraction and target recognition in PET-CT images. Existing detection methods cannot meet the requirements of liver recognition in PET-CT images, which is the key problem in the big-data analysis of PET-CT images. A novel texture feature descriptor called multi-layer cube sampling (MLCS) is developed for liver boundary detection in low-dose CT and PET images. The cube sampling feature is proposed for extracting more texture information, using a bi-centric voxel strategy. Neighbour voxels are divided into three regions by the centre voxel and the reference voxel in the histogram, and the voxel distribution information is statistically classified as a texture feature. Multi-layer texture features are also used to improve the ability and adaptability of target recognition in volume data. The proposed feature is tested on PET and CT images for liver boundary detection. For the liver in the volume data, the mean detection rate (DR) and mean error rate (ER) reached 95.15% and 7.81% in low-quality PET images, and 83.10% and 21.08% in low-contrast CT images. The experimental results demonstrate that the proposed method is effective and robust for liver boundary detection.

  12. Visual detectability of elastic contrast in real-time ultrasound images

    Science.gov (United States)

    Miller, Naomi R.; Bamber, Jeffery C.; Doyley, Marvin M.; Leach, Martin O.

    1997-04-01

    Elasticity imaging (EI) has recently been proposed as a technique for imaging the mechanical properties of soft tissue. However, dynamic features, known as compressibility and mobility, are already employed to distinguish between different tissue types in ultrasound breast examination. This method, which involves the subjective interpretation of tissue motion seen in real-time B-mode images during palpation, is hereafter referred to as differential motion imaging (DMI). The purpose of this study was to develop the methodology required to perform a series of perception experiments to measure elastic lesion detectability by means of DMI and to obtain preliminary results for elastic contrast thresholds for different lesion sizes. Simulated sequences of real-time B-scans of tissue moving in response to an applied force were generated. A two-alternative forced choice (2-AFC) experiment was conducted and the measured contrast thresholds were compared with published results for lesions detected by EI. Although the trained observer was found to be quite skilled at the task of differential motion perception, it would appear that lesion detectability is improved when motion information is detected by computer processing and converted to gray scale before presentation to the observer. In particular, for lesions containing fewer than eight speckle cells, a signal detection rate of 100% could not be achieved even when the elastic contrast was very high.

  13. Edge detection of solid motor' CT image based on gravitation model

    International Nuclear Information System (INIS)

    Yu Guanghui; Lu Hongyi; Zhu Min; Liu Xudong; Hou Zhiqiang

    2012-01-01

    In order to better detect the edges of solid motor CT images, a new edge detection operator based on a gravitation model is put forward. The edge of the CT image is obtained by the new operator, and its superiority is demonstrated by comparison with the edges obtained by ordinary operators. The comparison among operators of different sizes shows that higher-quality CT images need a smaller operator, while lower-quality images need a larger one. (authors)
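
    A minimal sketch of the gravitation idea, under our reading of it: each neighbour q attracts pixel p with a force proportional to I(p)·I(q)/r² along the direction from p to q, and the magnitude of the net force is taken as edge strength (large where intensity is unbalanced around p). The neighbourhood radius plays the role of the operator size discussed in the abstract.

        import numpy as np
        from scipy import ndimage

        def gravitation_edges(img, radius=2):
            """Edge strength = magnitude of the net 'gravitational force'
            exerted on each pixel by its neighbours within `radius`."""
            yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            r2 = (yy**2 + xx**2).astype(float)
            r2[radius, radius] = np.inf                # no self-force
            ky = yy / r2**1.5                          # (dy/r) / r^2
            kx = xx / r2**1.5
            img = img.astype(float)
            fy = img * ndimage.convolve(img, ky)       # I(p) * sum I(q) dy/r^3
            fx = img * ndimage.convolve(img, kx)
            return np.hypot(fy, fx)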

  14. EDGE DETECTION OF THE SCOLIOTIC VERTEBRAE USING X-RAY IMAGES

    Directory of Open Access Journals (Sweden)

    P. MOHANKUMAR

    2016-02-01

    Full Text Available Bones act as a mineral storage reservoir for calcium and phosphorus. Properly grown bones give a perfect posture to the human body; improper bone growth, in contrast, may lead to an abnormal or awkward posture. Scoliosis is a condition in which the scoliotic vertebrae are wedge-shaped and differ from the shape of normal vertebrae. Treatment for scoliosis depends on the Cobb angle, which can be measured using spine X-rays. Recent developments in medical imaging techniques have opened a new research area in image processing, which includes medical image enhancement, detailed visualization of internal organs and tissues, and edge detection. Bone edges are an important feature in an X-ray image. The purpose of applying segmentation in medical imaging is to develop a detailed framework of human anatomy, whose primary objective is to outline the anatomical structures, whereas edge detection is a technique which extracts vital features like corners, lines, angles and curves from an image. In this study, we deal with the edge detection technique on scoliotic vertebrae. The objective of this paper is to compare the performance of edge detectors using filters and operators.

  15. COMPARISON OF BACKGROUND SUBTRACTION, SOBEL, ADAPTIVE MOTION DETECTION, FRAME DIFFERENCES, AND ACCUMULATIVE DIFFERENCES IMAGES ON MOTION DETECTION

    Directory of Open Access Journals (Sweden)

    Dara Incam Ramadhan

    2018-02-01

    Full Text Available Nowadays, digital image processing is used not only to recognize motionless objects, but also to recognize moving objects in video. One use of moving-object recognition in video is motion detection, which can be implemented in security cameras. Various methods for detecting motion have been developed, so this research compares several of them, namely Background Subtraction, Adaptive Motion Detection, Sobel, Frame Differences, and Accumulative Difference Images (ADI). Each method has a different level of accuracy. The background subtraction method obtained an accuracy of 86.1% indoors and 88.3% outdoors. In the Sobel method, the result of motion detection depends on the lighting conditions of the room being supervised: when the room is bright, the accuracy of the system decreases, and when the room is dark, the accuracy increases, reaching 80%. In the adaptive motion detection method, motion can be detected on the condition that no easily moved object is in the camera's field of view. In the frame difference method, testing on RGB images using average computation with a threshold of 35 gives the best value. In the ADI method, the accuracy of motion detection reached 95.12%.
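
    Of the compared methods, the accumulative difference image is the simplest to state in code: count, per pixel, how often a frame differs from a reference frame by more than a threshold. The sketch below reuses the threshold of 35 that the abstract reports for the frame-difference experiments, which is an assumption on our part for the ADI case.

        import numpy as np

        def accumulative_difference_image(frames, threshold=35):
            """frames: iterable of equally sized 8-bit grayscale frames.
            Returns per-pixel counts of significant change versus frame 0;
            persistent motion accumulates high counts."""
            frames = [np.asarray(f, dtype=int) for f in frames]
            ref = frames[0]
            adi = np.zeros(ref.shape, dtype=int)
            for frame in frames[1:]:
                adi += np.abs(frame - ref) > threshold
            return adi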

  16. Geospatial Image Mining For Nuclear Proliferation Detection: Challenges and New Opportunities

    Energy Technology Data Exchange (ETDEWEB)

    Vatsavai, Raju [ORNL; Bhaduri, Budhendra L [ORNL; Cheriyadat, Anil M [ORNL; Arrowood, Lloyd [Y-12 National Security Complex; Bright, Eddie A [ORNL; Gleason, Shaun Scott [ORNL; Diegert, Carl [Sandia National Laboratories (SNL); Katsaggelos, Aggelos K [ORNL; Pappas, Thrasos N [ORNL; Porter, Reid [Los Alamos National Laboratory (LANL); Bollinger, Jim [Savannah River National Laboratory (SRNL); Chen, Barry [Lawrence Livermore National Laboratory (LLNL); Hohimer, Ryan [Pacific Northwest National Laboratory (PNNL)

    2010-01-01

    With the increasing understanding and availability of nuclear technologies, and the increasing pursuit of nuclear technologies by several new countries, it is becoming increasingly important to monitor nuclear proliferation activities. There is a great need to develop technologies to automatically or semi-automatically detect nuclear proliferation activities using remote sensing. Images acquired from earth observation satellites are an important source of information for detecting proliferation activities. High-resolution remote sensing images are highly useful in verifying the correctness, as well as the completeness, of any nuclear program. DOE national laboratories are interested in detecting nuclear proliferation by developing advanced geospatial image mining algorithms. In this paper we describe the current understanding of geospatial image mining techniques, enumerate key gaps, and identify future research needs in the context of nuclear proliferation.

  17. Multivariate Alteration Detection (MAD) and MAF Postprocessing in Multispectral, Bitemporal Image Data: New Approaches to Change Detection Studies

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Simpson, James J.

    1998-01-01

    type analyses of simple difference images. Case studies with AVHRR and Landsat MSS data using simple linear stretching and masking of the change images show the usefulness of the new MAD and MAF/MAD change detection schemes. Ground truth observations confirm the detected changes. A simple simulation

  18. Chagas Parasite Detection in Blood Images Using AdaBoost

    Directory of Open Access Journals (Sweden)

    Víctor Uc-Cetina

    2015-01-01

    Chagas disease is a potentially life-threatening illness caused by the protozoan parasite Trypanosoma cruzi. Visual detection of the parasite through microscopic inspection is a tedious and time-consuming task. In this paper, we provide an AdaBoost learning solution to the task of Chagas parasite detection in blood images. We give details of the algorithm and our experimental setup. With this method, we obtain a sensitivity of 100% and a specificity of 93.25%. A ROC comparison with the method most commonly used for the detection of malaria parasites, based on support vector machines (SVM), is also provided. Our experimental work shows mainly two things: (1) Chagas parasites can be detected automatically using machine learning methods with high accuracy, and (2) AdaBoost + SVM provides better overall detection performance than AdaBoost or SVMs alone. These results are the best known so far for the problem of automatic detection of Chagas parasites through the use of machine learning, computer vision, and image processing methods.
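
    A hedged sketch of the kind of comparison reported above, using scikit-learn's AdaBoost and SVM classifiers on synthetic stand-in features; the real system trains on descriptors extracted from blood-image patches, which are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic features standing in for patch descriptors (assumption):
# label 1 = parasite-like patch, label 0 = background patch.
rng = np.random.default_rng(6)
X = rng.normal(size=(400, 30))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

ada = AdaBoostClassifier(n_estimators=100, random_state=0)
svm = SVC(kernel="rbf", gamma="scale")
print("AdaBoost CV accuracy:", cross_val_score(ada, X, y, cv=5).mean())
print("SVM      CV accuracy:", cross_val_score(svm, X, y, cv=5).mean())
```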

  19. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc [German Cancer Research Center, Heidelberg (Germany).

    2017-10-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. the point spread function (PSF) or the modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm alternates between optimizing raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR, which used a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time than the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters, which can be narrowed down to a relatively simple framework for computing high-quality images. The AIR images, for instance, can have at least a 50% lower noise level than the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6 while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity. A simple set of parameters for the algorithm is discussed that provides...

  20. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    International Nuclear Information System (INIS)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc

    2017-01-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. the point spread function (PSF) or the modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm alternates between optimizing raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR, which used a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time than the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters, which can be narrowed down to a relatively simple framework for computing high-quality images. The AIR images, for instance, can have at least a 50% lower noise level than the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6 while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity. A simple set of parameters for the algorithm is discussed that provides...
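
    The core blending step that both entries describe can be sketched in a few lines: a voxel-wise weighting image α mixes a sharp but noisy basis image with a smooth, low-noise one, and linearity in the bases is what keeps PSF/MTF-type metrics well defined. The gradient-based choice of α below is an illustrative assumption, not the paper's optimized OSSART/gradient-descent scheme.

```python
import numpy as np

def air_blend(sharp, smooth, alpha):
    """alpha in [0, 1] per voxel; 1 -> sharp basis, 0 -> smooth basis."""
    return alpha * sharp + (1.0 - alpha) * smooth

rng = np.random.default_rng(1)
smooth = rng.normal(loc=100.0, scale=1.0, size=(128, 128))   # low-noise basis
sharp = smooth + rng.normal(scale=10.0, size=(128, 128))     # noisy sharp basis
# Hypothetical weighting: favor the sharp basis only where local
# contrast suggests an edge (a stand-in for the optimized alpha image).
grad = np.hypot(*np.gradient(smooth))
alpha = np.clip(grad / (grad.max() + 1e-12), 0.0, 1.0)
blended = air_blend(sharp, smooth, alpha)
print(f"noise proxy: sharp={sharp.std():.1f}, blended={blended.std():.1f}")
```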

  1. Correlation Filters for Detection of Cellular Nuclei in Histopathology Images.

    Science.gov (United States)

    Ahmad, Asif; Asif, Amina; Rajpoot, Nasir; Arif, Muhammad; Minhas, Fayyaz Ul Amir Afsar

    2017-11-21

    Nuclei detection in histology images is an essential part of computer-aided diagnosis of cancers and tumors. It is a challenging task due to the diverse and complicated structures of cells. In this work, we present an automated technique for detection of cellular nuclei in hematoxylin and eosin stained histopathology images. Our proposed approach is based on kernelized correlation filters. Correlation filters have been widely used in object detection and tracking applications, but their strength has not been explored in the medical imaging domain until now. Our experimental results show that the proposed scheme gives state-of-the-art accuracy and can learn complex nuclear morphologies. Like deep learning approaches, the proposed filters do not require engineering of image features, as they can operate directly on histopathology images without significant preprocessing. However, unlike deep learning methods, the large-margin correlation filters developed in this work are interpretable, computationally efficient, and do not require specialized or expensive computing hardware. A cloud-based web server of the proposed method and its Python implementation can be accessed at the following URL: http://faculty.pieas.edu.pk/fayyaz/software.html#corehist.
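
    As a rough, hedged stand-in for the paper's kernelized large-margin filters, the sketch below performs plain matched-template correlation in the Fourier domain and thresholds the response map to locate blob-like nuclei; the synthetic blob and the threshold value are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def detect_peaks(image, template, thresh=0.6):
    # Zero-mean, unit-variance template; flipping turns convolution
    # into cross-correlation.
    t = (template - template.mean()) / (template.std() + 1e-12)
    response = fftconvolve(image - image.mean(), t[::-1, ::-1], mode="same")
    response /= np.abs(response).max() + 1e-12
    ys, xs = np.where(response > thresh)
    return list(zip(ys, xs)), response

# Synthetic example: a Gaussian blob standing in for a nucleus.
img = np.zeros((64, 64))
yy, xx = np.mgrid[-5:6, -5:6]
blob = np.exp(-(yy**2 + xx**2) / 8.0)
img[20:31, 40:51] += blob
hits, resp = detect_peaks(img, blob)
best = hits[int(np.argmax([resp[y, x] for y, x in hits]))]
print("strongest detection near:", best)   # expected around (25, 45)
```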

  2. Signature detection and matching for document image retrieval.

    Science.gov (United States)

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from cluttered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.

  3. Automatic detection of retinal exudates in fundus images of diabetic retinopathy patients

    Directory of Open Access Journals (Sweden)

    Mahsa Partovi

    2016-05-01

    Introduction: Diabetic retinopathy (DR) is the most frequent microvascular complication of diabetes and can lead to several retinal abnormalities, including microaneurysms, exudates, dot and blot hemorrhages, and cotton wool spots. Automated early detection of these abnormalities could limit the severity of the disease and assist ophthalmologists in investigating and treating it more efficiently. Segmentation of retinal image features provides the basis for automated assessment. In this study, exudate lesions in retinal images of retinopathy patients were segmented using different image processing techniques. The objective of this study is the detection of exudate regions in retinal images of retinopathy patients. Methods: A total of 30 color images from retinopathy patients were selected for this study. The images were taken by a Topcon TRC-50 IX mydriatic camera and saved in TIFF format with a resolution of 500 × 752 pixels. A morphological function was applied to the intensity component of the hue-saturation-intensity (HSI) space. To detect the exudate regions, thresholding was performed on all images and the exudate regions were segmented. To optimize the detection efficiency, binary morphological functions were applied. Finally, the exudate regions were quantified and evaluated for further statistical purposes. Results: An average sensitivity of 76%, specificity of 98%, and accuracy of 97% was obtained. Conclusion: The results showed that our approach can identify the exudate regions in retinopathy images.
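
    A simplified version of the described pipeline (intensity channel, morphological processing, thresholding, binary cleanup) might look as follows; the HSV stand-in for HSI, the Otsu threshold rule, and the structuring-element size are all assumptions rather than the paper's exact settings.

```python
import numpy as np
from skimage import color, morphology, filters

def segment_exudates(rgb, selem_radius=5, min_size=20):
    hsv = color.rgb2hsv(rgb)            # stand-in for the paper's HSI space
    intensity = hsv[..., 2]
    # Grayscale closing suppresses thin dark vessels before thresholding.
    closed = morphology.closing(intensity, morphology.disk(selem_radius))
    thresh = filters.threshold_otsu(closed)   # assumed threshold rule
    mask = closed > thresh
    return morphology.remove_small_objects(mask, min_size=min_size)

rgb = np.random.rand(128, 128, 3)       # placeholder for a fundus image
print(segment_exudates(rgb).sum(), "candidate exudate pixels")
```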

  4. Infrared photothermal imaging spectroscopy for detection of trace explosives on surfaces.

    Science.gov (United States)

    Kendziora, Christopher A; Furstenberg, Robert; Papantonakis, Michael; Nguyen, Viet; Byers, Jeff; Andrew McGill, R

    2015-11-01

    We are developing a technique for the standoff detection of trace explosives on relevant substrate surfaces using photothermal infrared (IR) imaging spectroscopy (PT-IRIS). This approach leverages one or more compact IR quantum cascade lasers, which are tuned to strong absorption bands in the analytes and directed to illuminate an area on a surface of interest. An IR focal plane array is used to image the surface and detect increases in thermal emission upon laser illumination. The PT-IRIS signal is processed as a hyperspectral image cube comprised of spatial, spectral, and temporal dimensions as vectors within a detection algorithm. The ability to detect trace analytes at standoff on relevant substrates is critical for security applications but is complicated by the optical and thermal analyte/substrate interactions. This manuscript describes a series of PT-IRIS experimental results and analysis for traces of RDX, TNT, ammonium nitrate, and sucrose on steel, polyethylene, glass, and painted steel panels. We demonstrate detection at surface mass loadings comparable with fingerprint depositions (10 μg/cm² to 100 μg/cm²) from an area corresponding to a single pixel within the thermal image.

  5. Image edge detection based tool condition monitoring with morphological component analysis.

    Science.gov (United States)

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

    The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored image. Image edge detection is a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis. By decomposing the original tool wear image, the approach reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to well-known algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Contour Propagation With Riemannian Elasticity Regularization

    DEFF Research Database (Denmark)

    Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.

    2011-01-01

    Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of the fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image...... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined...... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...

  7. Effective Waterline Detection of Unmanned Surface Vehicles Based on Optical Images

    Directory of Open Access Journals (Sweden)

    Yangjie Wei

    2016-09-01

    Real-time and accurate detection of the sailing or water area will help realize unmanned surface vehicle (USV) systems. Although there are some methods for using optical images in USV-oriented environmental modeling, both the robustness and the precision of the published waterline detection methods are comparatively low for a real USV system moving in a complicated environment. This paper proposes an efficient waterline detection method based on structure extraction and texture analysis of optical images and presents a practical application to a USV system for validation. First, the basic principles of local binary patterns (LBPs) and the gray level co-occurrence matrix (GLCM) were analyzed, and their advantages were integrated to calculate the texture information of river images. Then, structure extraction was introduced to preprocess the original river images so that the textures resulting from USV motion, wind, and illumination are removed. In the practical application, the waterlines of many images captured by the USV system moving along an inland river were detected with the proposed method, and the results were compared with those of edge detection and superpixel segmentation. The experimental results showed that the proposed algorithm is effective and robust. The average error of the proposed method was 1.84 pixels, and the mean square deviation was 4.57 pixels.
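
    The texture step combining LBP and GLCM can be sketched with scikit-image as below; the neighborhood parameters, GLCM distances and angles, and the chosen statistics are assumptions rather than the paper's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def texture_features(patch_u8, P=8, R=1.0):
    # LBP histogram over the "uniform" patterns of the patch.
    lbp = local_binary_pattern(patch_u8, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    # GLCM statistics at distance 1, horizontal and vertical directions.
    glcm = graycomatrix(patch_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    stats = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy")]
    return np.concatenate([hist, stats])

patch = (np.random.default_rng(2).random((32, 32)) * 255).astype(np.uint8)
print(texture_features(patch).shape)  # LBP histogram + 3 GLCM statistics
```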

  8. Fast Image Edge Detection based on Faber Schauder Wavelet and Otsu Threshold

    Directory of Open Access Journals (Sweden)

    Assma Azeroual

    2017-12-01

    Edge detection is a critical stage in many computer vision systems, such as image segmentation and object detection. As it is difficult to detect image edges with precision and low complexity, it is worthwhile to find new methods for edge detection. In this paper, we take advantage of the Faber Schauder Wavelet (FSW) and the Otsu threshold to detect edges in a multi-scale way with low complexity, since the extrema coefficients of this wavelet are located on edge points and their computation involves only arithmetic operations. First, the image is smoothed using a bilateral filter depending on noise estimation. Second, the FSW extrema coefficients are selected based on the Otsu threshold. Finally, the edge points are linked using a predictive edge linking algorithm to obtain the image edges. The effectiveness of the proposed method is supported by the experimental results, which show that our method is faster than many competing state-of-the-art approaches and can be used in real-time applications.

  9. Wear Detection of Drill Bit by Image-based Technique

    Science.gov (United States)

    Sukeri, Maziyah; Zulhilmi Paiz Ismadi, Mohd; Rahim Othman, Abdul; Kamaruddin, Shahrul

    2018-03-01

    Image processing for computer vision plays an essential role in the manufacturing industries for tool condition monitoring. This study proposes a dependable direct measurement method for tool wear using image-based analysis. Segmentation and thresholding techniques were used to filter the image and convert the color image to binary datasets. Then, an edge detection method was applied to characterize the edge of the drill bit. Using the cross-correlation method, the edges of the original and worn drill bits were correlated with each other. The cross-correlation graphs were able to reveal the worn edge even when the differences between the graphs were small. Future development will focus on quantifying the worn profile as well as enhancing the sensitivity of the technique.
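
    A minimal sketch of the comparison step, under the assumption that each binarized drill-bit image is reduced to a 1-D edge height profile and that the normalized cross-correlation of the new and worn profiles serves as the wear indicator (lower value suggests more wear):

```python
import numpy as np

def edge_profile(binary):
    # Row index of the first foreground pixel in each column.
    return np.argmax(binary, axis=0).astype(float)

def profile_similarity(p1, p2):
    a = (p1 - p1.mean()) / (p1.std() + 1e-12)
    b = (p2 - p2.mean()) / (p2.std() + 1e-12)
    return np.correlate(a, b, mode="valid")[0] / len(a)

# Synthetic stand-in profiles for a new and a worn cutting edge.
rng = np.random.default_rng(3)
new_edge = np.sin(np.linspace(0.0, np.pi, 200)) * 40.0 + 60.0
worn_edge = new_edge + rng.normal(scale=3.0, size=200)  # simulated wear
print(f"normalized correlation: {profile_similarity(new_edge, worn_edge):.3f}")
```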

  10. Neutron imaging integrated circuit and method for detecting neutrons

    Science.gov (United States)

    Nagarkar, Vivek V.; More, Mitali J.

    2017-12-05

    The present disclosure provides a neutron imaging detector and a method for detecting neutrons. In one example, a method includes providing a neutron imaging detector comprising a plurality of memory cells and a conversion layer on the memory cells, setting one or more of the memory cells to a first charge state, positioning the neutron imaging detector in a neutron environment for a predetermined time period, and measuring a charge state change at one of the plurality of memory cells from the first charge state to a second charge state less than the first charge state, where the charge state change indicates detection of neutrons at said one of the memory cells.

  11. Body diffusion-weighted MR imaging of uterine endometrial cancer: Is it helpful in the detection of cancer in nonenhanced MR imaging?

    Energy Technology Data Exchange (ETDEWEB)

    Inada, Yuki [Department of Radiology, Osaka Medical College, 2-7 Daigaku-machi, Takatsuki City, Osaka 569-8686 (Japan)], E-mail: rad068@poh.osaka-med.ac.jp; Matsuki, Mitsuru; Nakai, Go; Tatsugami, Fuminari; Tanikake, Masato; Narabayashi, Isamu [Department of Radiology, Osaka Medical College, 2-7 Daigaku-machi, Takatsuki City, Osaka 569-8686 (Japan); Yamada, Takashi; Tsuji, Motomu [Department of Pathology, Osaka Medical College, 2-7 Daigaku-machi, Takatsuki City, Osaka 569-8686 (Japan)

    2009-04-15

    Objective: In this study, the authors discussed the feasibility and value of diffusion-weighted (DW) MR imaging in the detection of uterine endometrial cancer in addition to conventional nonenhanced MR images. Methods and materials: DW images of endometrial cancer in 23 patients were examined by using a 1.5-T MR scanner. This study investigated whether or not DW images offer additional incremental value to conventional nonenhanced MR imaging in comparison with histopathological results. Moreover, the apparent diffusion coefficient (ADC) values were measured in the regions of interest within the endometrial cancer and compared with those of normal endometrium and myometrium in 31 volunteers, leiomyoma in 14 patients and adenomyosis in 10 patients. The Wilcoxon rank sum test was used, with a p < 0.05 considered statistically significant. Results: In 19 of 23 patients, endometrial cancers were detected only on T2-weighted images. In the remaining 4 patients, of whom two had coexisting leiomyoma, no cancer was detected on T2-weighted images. This corresponds to an 83% detection sensitivity for the carcinomas. When DW images and fused DW images/T2-weighted images were used in addition to the T2-weighted images, cancers were identified in 3 of the remaining 4 patients in addition to the 19 patients (overall detection sensitivity of 96%). The mean ADC value of endometrial cancer (n = 22) was (0.97 ± 0.19) × 10⁻³ mm²/s, which was significantly lower than those of the normal endometrium, myometrium, leiomyoma and adenomyosis (p < 0.05). Conclusion: DW imaging can be helpful in the detection of uterine endometrial cancer in nonenhanced MR imaging.

  12. Image re-sampling detection through a novel interpolation kernel.

    Science.gov (United States)

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block in typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes the minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Combining Image and Non-Image Data for Automatic Detection of Retina Disease in a Telemedicine Network

    Energy Technology Data Exchange (ETDEWEB)

    Aykac, Deniz [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK); Fox, Karen [Delta Health Alliance; Garg, Seema [University of North Carolina; Giancardo, Luca [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Nichols, Trent L [ORNL; Tobin Jr, Kenneth William [ORNL

    2011-01-01

    A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening for diabetic retinopathy (DR) and other eye diseases. In the process of a routine eye-screening examination, other non-image data is often available which may be useful in automated diagnosis of disease. In this work, we report on the results of combining this non-image data with image data, using the protocol and processing steps of a prototype system for automated disease diagnosis of retina examinations from a telemedicine network. The system includes quality assessments, automated physiology detection, and automated lesion detection to create an archive of known cases. Non-image data such as diabetes onset date and hemoglobin A1c (HgA1c) for each patient examination are included as well, and the system is used to create a content-based image retrieval engine capable of automated diagnosis of disease into 'normal' and 'abnormal' categories. The system achieves a sensitivity and specificity of 91.2% and 71.6% using hold-one-out validation testing.

  14. Multi- and hyperspectral remote sensing change detection with generalized difference images by the IR-MAD method

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2005-01-01

    -based method for determining thresholds for differentiating between change and no-change in the difference images, and for estimating the variance of the no-change observations. This variance is used to establish a single change/no-change image based on the general multivariate difference image. The resulting change/no-change image can be used both to establish change regions and to extract observations from which a fully automated orthogonal regression analysis based normalization of the multivariate data between the two points in time can be developed. Also, regularization issues typically important in connection...

  15. Gravitational lensing by a regular black hole

    International Nuclear Information System (INIS)

    Eiroa, Ernesto F; Sendra, Carlos M

    2011-01-01

    In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.

  16. Gravitational lensing by a regular black hole

    Energy Technology Data Exchange (ETDEWEB)

    Eiroa, Ernesto F; Sendra, Carlos M, E-mail: eiroa@iafe.uba.ar, E-mail: cmsendra@iafe.uba.ar [Instituto de Astronomia y Fisica del Espacio, CC 67, Suc. 28, 1428, Buenos Aires (Argentina)

    2011-04-21

    In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.

  17. Spectral differential imaging detection of planets about nearby stars

    International Nuclear Information System (INIS)

    Smith, W.H.

    1987-01-01

    Direct ground-based optical imaging of planets in orbit about nearby stars may be accomplished by spectral differential imaging using multiple-passband acousto-optic filters with a CCD. This technique provides two essential results. First, it provides a means to modulate the stellar flux reflected from a planet while leaving the flux from the star and other sources in the same field of view unmodulated. Second, spectral differential imaging enables the CCD detector to achieve a sufficiently high dynamic range to locate planets near a star in spite of an integrated brightness differential of 5 × 10⁸. Spectral differential imaging at near-diffraction-limited imaging conditions with telescope apodization can reduce the time to conduct a sensitive planetary search to a few hours in some cases. The feasibility of this idea is discussed here and shown to provide, in principle, the discrimination and sensitivity to detect a Jovian-class planet about stars at distances of about 10 parsecs. The detection of brown dwarfs is shown to be feasible as well. 31 references

  18. Mechanical Damage Detection of Indonesia Local Citrus Based on Fluorescence Imaging

    Science.gov (United States)

    Siregar, T. H.; Ahmad, U.; Sutrisno; Maddu, A.

    2018-05-01

    Citrus that has experienced physical damage to its peel produces essential oils that contain polymethoxylated flavones. Polymethoxylated flavone is a fluorescent substance and thus can be detected by fluorescence imaging. This study aims to characterize the fluorescence spectra and to determine the damaged region of citrus peel based on fluorescence images. Pulung citrus from Batu district, East Java, a well-known citrus production area in Indonesia, was used in the experiment. It was observed that the image processing could detect the mechanically damaged region. Fluorescence imaging can be used to classify citrus into two categories, sound and defective.

  19. Lymphoscintigraphy for sentinel lymph node detection in breast cancer: usefulness of image truncation

    International Nuclear Information System (INIS)

    Carrier, P.; Remp, H.J.; Chaborel, J.P.; Lallement, M.; Bussiere, F.; Darcourt, J.; Lallement, M.; Leblanc-Talent, P.; Machiavello, J.C.; Ettore, F.

    2004-01-01

    Sentinel lymph node (SNL) detection in breast cancer has recently been validated. It allows a reduction in the number of axillary dissections and their corresponding side effects. We tested a simple method of image truncation in order to improve the sensitivity of lymphoscintigraphy. This approach is justified by the magnitude of the uptake difference between the injection site and the SNL. We prospectively investigated SNL detection using a triple method (lymphoscintigraphy, blue dye and surgical radio detection) in 130 patients. The SNL was identified in 104 of the 130 patients (80%) using the standard images and in 126 of them (96.9%) using the truncated images. Blue dye detection and surgical radio detection had sensitivities of 76.9% and 98.5%, respectively. The false negative rate was 10.3%. In total, 288 SNL were dissected, of which 31 were metastatic. Among the 19 patients with metastatic SNL and more than one SNL detected, the metastatic SNL was not the hottest in 9 of them. 28 metastatic SNL were detected on truncated images versus only 19 on standard images. Truncation, which dramatically increases the sensitivity of lymphoscintigraphy, allows an increase in the number of dissected SNL and probably reduces the false negative rate. (author)

  20. Extended image differencing for change detection in UAV video mosaics

    Science.gov (United States)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes on a short time scale, i.e. the observations are taken at time intervals from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames into a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input to the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. We therefore extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to the larger scene coverage.
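
    The change-mask generation described above (an adaptive threshold applied to a linear combination of intensity and gradient-magnitude difference images) can be sketched as follows; the weights, the Gaussian scale, and the mean-plus-k-sigma threshold rule are assumptions.

```python
import numpy as np
from scipy import ndimage

def change_mask(mosaic_a, mosaic_b, w_int=1.0, w_grad=0.5, k=3.0):
    # Intensity difference and gradient-magnitude difference images.
    d_int = np.abs(mosaic_a - mosaic_b)
    d_grad = np.abs(ndimage.gaussian_gradient_magnitude(mosaic_a, 1.0) -
                    ndimage.gaussian_gradient_magnitude(mosaic_b, 1.0))
    score = w_int * d_int + w_grad * d_grad
    # Adaptive threshold: mean + k standard deviations of the score map.
    return score > score.mean() + k * score.std()

a = np.random.rand(100, 100)
b = a.copy()
b[40:50, 40:50] += 1.0                  # simulated newly parked vehicle
print(change_mask(a, b).sum(), "changed pixels flagged")
```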

  1. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, a huge research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) sidestep this problem by learning hierarchical representations in an unsupervised manner directly from data, without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel deep convolutional neural network (CNN) features based HR satellite image change detection method is proposed. The main idea is to produce a change detection map directly from two images using a pretrained CNN, avoiding the limited performance of hand-crafted features. First, CNN features are extracted through different convolutional layers. Then, a concatenation step is performed after a normalization step, resulting in a unique higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
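
    A hedged sketch of this pipeline with a pretrained torchvision VGG16 standing in for the unspecified pretrained CNN: features from the two dates are L2-normalized and compared by pixel-wise Euclidean distance, then upsampled into a change map. The layer cut and the interpolation step are assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Mid-level feature extractor (layer choice is an assumption).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()

@torch.no_grad()
def change_map(img_a, img_b):
    """img_a, img_b: float tensors of shape (1, 3, H, W), coregistered."""
    fa = F.normalize(vgg(img_a), dim=1)
    fb = F.normalize(vgg(img_b), dim=1)
    dist = torch.linalg.vector_norm(fa - fb, dim=1, keepdim=True)
    # Upsample the coarse distance map back to the input resolution.
    return F.interpolate(dist, size=img_a.shape[-2:], mode="bilinear")

a = torch.rand(1, 3, 224, 224)
b = torch.rand(1, 3, 224, 224)
print(change_map(a, b).shape)  # torch.Size([1, 1, 224, 224])
```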

  2. Learning Rich Features from RGB-D Images for Object Detection and Segmentation

    OpenAIRE

    Gupta, Saurabh; Girshick, Ross; Arbeláez, Pablo; Malik, Jitendra

    2014-01-01

    In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an av...

  3. Standoff alpha radiation detection for hot cell imaging and crime scene investigation

    Science.gov (United States)

    Kerst, Thomas; Sand, Johan; Ihantola, Sakari; Peräjärvi, Kari; Nicholl, Adrian; Hrnecek, Erich; Toivonen, Harri; Toivonen, Juha

    2018-02-01

    This paper presents the remote detection of alpha contamination in a nuclear facility. Alpha-active material in a shielded nuclear radiation containment chamber has been localized by optical means. Furthermore, sources of radiation danger have been identified in a staged crime scene setting. For this purpose, an electron-multiplying charge-coupled device camera was used to capture photons generated by alpha-induced air scintillation (radioluminescence). The detected radioluminescence was superimposed on a regular photograph to reveal the origin of the light and thereby the alpha-radioactive material. The experimental results show that standoff detection of alpha contamination is a viable tool in radiation threat detection. Furthermore, the radioluminescence spectrum in air is analyzed spectrally. Possibilities of camera-based alpha threat detection under various background lighting conditions are discussed.

  4. Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.

    Science.gov (United States)

    Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi

    2017-06-08

    In this paper, a new method is proposed for extracting the dots from the reflected-light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected-light image to obtain the contour edges of the array cells in the image. Second, using the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells are segmented into dots that are as large as possible, with equal distances between the dots in the corrected image. Experimental results show that the pretreatment part of this method can effectively avoid the influence of a complex background and complete the binarization of the image. The tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected-light images. The segmentation algorithm arranges the dots regularly and excludes the edges and bright spots. This method can be used for fast, accurate and automatic dot extraction from PSi microarray reflected-light images.

  5. Image Processing of Porous Silicon Microarray in Refractive Index Change Detection

    Directory of Open Access Journals (Sweden)

    Zhiqing Guo

    2017-06-01

    In this paper, a new method is proposed for extracting the dots from the reflected-light image of a porous silicon (PSi) microarray. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of the different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected-light image to obtain the contour edges of the array cells in the image. Second, using the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of reflected-light image segmentation, the array cells are segmented into dots that are as large as possible, with equal distances between the dots in the corrected image. Experimental results show that the pretreatment part of this method can effectively avoid the influence of a complex background and complete the binarization of the image. The tilt correction algorithm has a short computation time, which makes it highly suitable for tilt correction of reflected-light images. The segmentation algorithm arranges the dots regularly and excludes the edges and bright spots. This method can be used for fast, accurate and automatic dot extraction from PSi microarray reflected-light images.
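
    The MBR-based tilt correction both entries describe can be sketched with OpenCV: fit a minimum-area rectangle to the foreground and rotate the image by its angle. The contour aggregation and the angle normalization are assumptions, since minAreaRect angle conventions vary between OpenCV versions.

```python
import cv2
import numpy as np

def correct_tilt(gray, binary):
    """gray: grayscale image; binary: uint8 mask of the array cells."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    pts = np.vstack([c.reshape(-1, 2) for c in contours])
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)  # minimum bounding rect
    if w < h:                 # normalize the rectangle's angle convention
        angle -= 90.0
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    return cv2.warpAffine(gray, M, gray.shape[::-1])
```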

  6. Transmission environmental scanning electron microscope with scintillation gaseous detection device.

    Science.gov (United States)

    Danilatos, Gerasimos; Kollia, Mary; Dracopoulos, Vassileios

    2015-03-01

    A transmission environmental scanning electron microscope using a scintillation gaseous detection device has been implemented. This corresponds to a transmission scanning electron microscope, but with the addition of a gaseous environment acting both as the environmental and the detection medium. A commercial low-vacuum machine was employed together with appropriate modifications to the detection configuration. These involve controlled screening of various emitted signals in conjunction with a scintillation gaseous detection device already provided with the machine for regular surface imaging. Dark-field and bright-field imaging have been obtained along with other detection conditions. Through a progressive series of modifications and tests, the theory and practice of a novel type of microscopy are briefly demonstrated, ushering in further significant improvements and developments in electron microscopy as a whole. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise, so regularization methods are commonly used to find a regularized solution. The quality of the reconstructed bioluminescent source obtained with these methods depends crucially on the choice of the regularization parameters, and to date their selection remains challenging. To address this problem, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation to model bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and the multiview, multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l₂ data fidelity term plus a general regularization term. For choosing the regularization parameters, an efficient model function approach is proposed that does not require knowledge of the noise level; it only requires the computation of the residual norm and of the regularized solution norm. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, a micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used...

  8. Local region power spectrum-based unfocused ship detection method in synthetic aperture radar images

    Science.gov (United States)

    Wei, Xiangfei; Wang, Xiaoqing; Chong, Jinsong

    2018-01-01

    Ships in synthetic aperture radar (SAR) images are severely defocused under long SAR integration times, and their energy disperses into numerous resolution cells. The image intensity of ships is therefore weak and sometimes even overwhelmed by sea clutter, making it hard to detect ships in SAR intensity images. A ship detection method based on the local region power spectrum of the SAR complex image is proposed. Although the energy of a ship is dispersed in the SAR intensity image, its spectral energy is rather concentrated and causes the power spectra of local areas of the SAR image to deviate from that of the sea surface background. Therefore, the key idea of the proposed method is to detect ships via the power spectrum distortion of local areas of SAR images. The local region power spectrum of a moving target in a SAR image is analyzed, and the way to obtain the detection threshold from the probability density function (pdf) of the power spectrum is illustrated. Numerical P- and L-band airborne SAR ocean data are utilized, and the detection results are illustrated. Results show that the proposed method can detect unfocused ships well, with a detection rate of 93.6% and a false-alarm rate of 8.6%. Moreover, comparison with other algorithms indicates that the proposed method performs better under long SAR integration times. Finally, the applicability of the proposed method and the selection of its parameters are discussed.
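
    A rough sketch of the local-spectrum idea: slide a window over the complex image, compute each window's normalized power spectrum, and score its deviation from the average background spectrum. Window size, step, and the scoring rule are assumptions, and the abstract's pdf-based threshold is replaced here by a simple deviation score to be thresholded at a high percentile.

```python
import numpy as np

def spectral_anomaly_map(slc, win=32, step=16):
    """slc: complex-valued single-look SAR image (2-D array)."""
    rows, cols = slc.shape
    spectra, coords = [], []
    for r in range(0, rows - win, step):
        for c in range(0, cols - win, step):
            p = np.abs(np.fft.fft2(slc[r:r + win, c:c + win])) ** 2
            spectra.append(np.log1p(p / p.sum()))   # normalized log spectrum
            coords.append((r, c))
    ref = np.mean(spectra, axis=0)                  # background reference
    scores = np.array([np.abs(s - ref).sum() for s in spectra])
    return coords, scores

rng = np.random.default_rng(4)
sea = rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128))
# Inject a chirp-like target whose spectral energy is concentrated.
sea[60:68, 60:68] += 3 * np.exp(1j * np.linspace(0, 40, 64)).reshape(8, 8)
coords, scores = spectral_anomaly_map(sea)
print("most anomalous window at", coords[int(np.argmax(scores))])
```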

  9. SparseBeads data: benchmarking sparsity-regularized computed tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer; Coban, Sophia B.; Lionheart, William R. B.

    2017-01-01

    -regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels...

  10. Potential uses of terahertz pulse imaging in dentistry: caries and erosion detection

    Science.gov (United States)

    Longbottom, Christopher; Crawley, David A.; Cole, Bryan E.; Arnone, Donald D.; Wallace, Vincent P.; Pepper, Michael

    2002-06-01

    TeraHertz Pulse Imaging (TPI) is a relatively new imaging modality for medical and dental imaging. The aim of the present study was to make a preliminary assessment of the potential uses of TPI in clinical dentistry, particularly in relation to caries detection and the detection and monitoring of erosion. Images were obtained in vitro using a new TPI system developed by TeraView Ltd. We present data showing that TPI in vitro images of approximal surfaces of whole teeth demonstrate a distinctive shadowing in the presence of natural carious lesions in enamel. The thickness of this enamel shadowing appears to be related to lesion depth. The use of non-ionizing radiation to image such lesions non-destructively in vitro represents a significant step towards such measurements in vivo. In addition, data is presented which indicates that TPI may have a potential role in the detection and monitoring of enamel erosion. In vitro experiments on whole incisor teeth show that TPI is capable of detecting relatively small artificially induced changes in the buccal or palatal surface of the enamel of these teeth. Imaging of enamel thickness at such a resolution without ionizing radiation would represent a significant breakthrough if applicable in vivo.

  11. Pedestrian detection from thermal images: A sparse representation based approach

    Science.gov (United States)

    Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi

    2016-05-01

    Pedestrian detection, a key technology in computer vision, plays a paramount role in advanced driver assistance systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex background, pedestrian detection is a challenging task for visual perception. Unlike visible images, thermal images are captured as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cool background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopt the histogram of sparse codes to represent image features and then detect pedestrians with the extracted features in a unimodal and a multimodal framework, respectively. In the unimodal framework, two types of dictionaries, i.e. a joint dictionary and individual dictionaries, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions of features with higher separability. To validate the proposed approach, experiments were conducted to compare it with three widely used features: Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, i.e. AdaBoost and support vector machines (SVM). Experimental results on a publicly available data set demonstrate the superiority of the proposed approach.

  12. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows the inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame, applied to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high-contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  13. Cephalometric landmark detection in dental x-ray images using convolutional neural networks

    Science.gov (United States)

    Lee, Hansang; Park, Minseok; Kim, Junmo

    2017-03-01

    In dental X-ray images, an accurate detection of cephalometric landmarks plays an important role in clinical diagnosis, treatment and surgical decisions for dental problems. In this work, we propose an end-to-end deep learning system for cephalometric landmark detection in dental X-ray images, using convolutional neural networks (CNN). For detecting 19 cephalometric landmarks in dental X-ray images, we develop a detection system using CNN-based coordinate-wise regression systems. By viewing x- and y-coordinates of all landmarks as 38 independent variables, multiple CNN-based regression systems are constructed to predict the coordinate variables from input X-ray images. First, each coordinate variable is normalized by the length of either height or width of an image. For each normalized coordinate variable, a CNN-based regression system is trained on training images and corresponding coordinate variable, which is a variable to be regressed. We train 38 regression systems with the same CNN structure on coordinate variables, respectively. Finally, we compute 38 coordinate variables with these trained systems from unseen images and extract 19 landmarks by pairing the regressed coordinates. In experiments, the public database from the Grand Challenges in Dental X-ray Image Analysis in ISBI 2015 was used and the proposed system showed promising performance by successfully locating the cephalometric landmarks within considerable margins from the ground truths.
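
    One of the 38 coordinate-wise regressors described above might look like the following minimal PyTorch sketch, mapping a grayscale X-ray to a single normalized coordinate in [0, 1]; the architecture and loss are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class CoordRegressor(nn.Module):
    """Predicts one normalized landmark coordinate from a grayscale image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid(),   # coordinate normalized to [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = CoordRegressor()
xray = torch.rand(4, 1, 256, 256)           # batch of resized images
coords = model(xray)                        # shape (4, 1), in [0, 1]
loss = nn.functional.mse_loss(coords, torch.rand(4, 1))
loss.backward()                             # standard regression training step
```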

  14. Molecular Ultrasound Imaging for the Detection of Neural Inflammation

    Science.gov (United States)

    Volz, Kevin R.

    Molecular imaging is a form of nanotechnology that enables the noninvasive examination of biological processes in vivo. Radiopharmaceutical agents are used to selectively target biochemical markers, which permits their detection and evaluation. Early visualization of molecular variations indicative of pathophysiological processes can aid in patient diagnosis and management decisions. Molecular imaging is performed by introducing molecular probes into the body. Molecular probes are often contrast agents that have been nanoengineered to selectively target and tether to molecules, enabling their radiologic identification. Ultrasound contrast agents have been demonstrated to be an effective method of detecting perfusion at the tissue level. Through a nanoengineering process, ultrasound contrast agents can be targeted to specific molecules, thereby extending ultrasound's capabilities from the tissue to the molecular level. Molecular ultrasound, or targeted contrast-enhanced ultrasound (TCEUS), has recently emerged as a popular molecular imaging technique due to its ability to provide real-time anatomical and functional information in the absence of ionizing radiation. However, molecular ultrasound represents a novel form of molecular imaging and consequently remains largely preclinical. A review of the TCEUS literature revealed multiple preclinical studies demonstrating its success in detecting inflammation in a variety of tissues. However, a gap was identified in the existing evidence, as no studies of TCEUS effectiveness for detecting neural inflammation in the spinal cord could be found. This gap in knowledge, coupled with the profound impact that this TCEUS application could have clinically, provided the rationale for its exploration and its use as contributory evidence for the molecular ultrasound body of literature. An animal model that underwent a contusive spinal cord injury was used to establish preclinical evidence of TCEUS to detect neural inflammation. Imaging was...

  15. Novelty detection of foreign objects in food using multi-modal X-ray imaging

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur; Emerson, Monica Jane; Clemmensen, Line Katrine Harder

    2016-01-01

    In this paper we demonstrate a method for novelty detection of foreign objects in food products using grating-based multimodal X-ray imaging. With this imaging technique three modalities are available with pixel correspondence, enhancing organic materials such as wood chips, insects and soft plastics that are not detectable by conventional X-ray absorption radiography. We conduct experiments where several food products are imaged with common foreign objects typically found in the food processing industry. To evaluate the benefit of using this multi-contrast X-ray technique over conventional X-ray absorption imaging, a novelty detection scheme based on well-known image- and statistical-analysis techniques is proposed. The results show that the presented method gives superior recognition results and highlight the advantage of grating-based imaging.

  16. Regularization destriping of remote sensing imagery

    Science.gov (United States)

    Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle

    2017-07-01

    We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, assigning spatial weights to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set representing the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
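
    A minimal sketch of the explicit finite-difference minimization: gradient descent on a functional with a data-fidelity term and a smoothness penalty across the stripe direction. Vertical stripes, uniform weights, and the step size are assumptions; the paper's spatially varying weights and inpainting are omitted.

```python
import numpy as np

# Gradient descent on E(u) = smoothness across stripes + lam * ||u - f||^2,
# assuming vertical (column-wise) stripes so the penalty acts along rows.
def destripe(f, lam=1.0, dt=0.2, n_iter=300):
    u = f.astype(float).copy()
    for _ in range(n_iter):
        d2 = np.zeros_like(u)
        # discrete second derivative across the stripe direction
        d2[:, 1:-1] = u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]
        u += dt * (d2 + lam * (f - u))   # smooth + stay close to the data
    return u

rng = np.random.default_rng(7)
clean = np.tile(np.linspace(0, 1, 100), (100, 1)).T     # smooth field
striped = clean + 0.2 * rng.normal(size=100)[None, :]    # column stripes
print(f"stripe energy before: {np.abs(np.diff(striped, axis=1)).mean():.3f}, "
      f"after: {np.abs(np.diff(destripe(striped), axis=1)).mean():.3f}")
```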

  17. Multi-sensor radiation detection, imaging, and fusion

    Energy Technology Data Exchange (ETDEWEB)

    Vetter, Kai [Department of Nuclear Engineering, University of California, Berkeley, CA 94720 (United States); Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-01-01

Glenn Knoll was one of the leaders in the field of radiation detection and measurements and shaped this field through his outstanding scientific and technical contributions, as a teacher, his personality, and his textbook. His Radiation Detection and Measurement book guided me in my studies and is now the textbook in my classes in the Department of Nuclear Engineering at UC Berkeley. In the spirit of Glenn, I will provide an overview of our activities at the Berkeley Applied Nuclear Physics program, reflecting some of the breadth of radiation detection technologies and their applications ranging from fundamental studies in physics to biomedical imaging and to nuclear security. I will conclude with a discussion of our Berkeley Radwatch and Resilient Communities activities as a result of the events at the Dai-ichi nuclear power plant in Fukushima, Japan more than 4 years ago. - Highlights: • Electron-tracking based gamma-ray momentum reconstruction. • 3D volumetric and 3D scene fusion gamma-ray imaging. • Nuclear Street View integrates and associates nuclear radiation features with specific objects in the environment. • Institute for Resilient Communities combines science, education, and communities to minimize impact of disastrous events.

  18. Regularization iteration imaging algorithm for electrical capacitance tomography

    Science.gov (United States)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, converting the image reconstruction task into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, and the fast iterative shrinkage-thresholding algorithm (FISTA) is introduced to accelerate convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving reconstruction precision and robustness.
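
    For readers unfamiliar with FISTA, here is a generic sketch of the acceleration applied to the sparsity part of such a cost function, min_x 0.5*||Ax - b||^2 + lam*||x||_1. It is illustrative only (the paper's cost also carries a low-rank term inside split-Bregman sub-tasks), and A, b and lam are placeholders.

    ```python
    import numpy as np

    def fista(A, b, lam=0.1, iters=200):
        """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        y, t = x.copy(), 1.0
        for _ in range(iters):
            z = y - A.T @ (A @ y - b) / L        # gradient step on smooth part
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
            x, t = x_new, t_new
        return x
    ```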

  19. Detecting culprit vessel of coronary artery disease with SPECT 99Tcm-MIBI myocardial imaging

    International Nuclear Information System (INIS)

    Luan Zhaosheng; Zhou Wen; Peng Yong; Su Yuwen; Tian Jianhe; Gai lue; Sun Zhijun

    2002-01-01

Objective: To assess the value of detecting the culprit vessel of coronary artery disease (CAD) with SPECT 99Tcm-MIBI myocardial imaging. Methods: Forty-six patients with CAD were studied. Every patient had multiple-vessel lesions shown by coronary arteriography and was treated by revascularization, i.e. percutaneous transluminal coronary angioplasty (PTCA), coronary artery bypass graft (CABG) or laser holing. Exercise (EX), rest (RE) and intravenous-nitroglycerine (NTG) SPECT 99Tcm-MIBI myocardial imaging was performed before revascularization. Exercise and rest images revealed myocardial ischemia; NTG images revealed myocardial viability. Culprit vessels were detected according to the defects shown by the above-mentioned images, and the accuracy of the detected culprit vessels was tested against the outcome of the reperfusion therapy. Results: In this group, coronary arteriography revealed 107 lesioned coronary arteries. Myocardial imaging detected 46 culprit vessels, including 23 left anterior descending (LAD), 19 left circumflex (LCX) and 4 right coronary arteries (RCA). All 46 culprit vessels underwent revascularization with good outcomes. The accuracy of 99Tcm-MIBI myocardial imaging in detecting culprit vessels was high according to the patients' outcomes. Conclusion: Exercise, rest and NTG 99Tcm-MIBI myocardial imaging is an effective method for detecting culprit vessels in multivessel coronary disease

  20. Detection of jet contrails from satellite images

    Science.gov (United States)

    Meinert, Dieter

    1994-02-01

    In order to investigate the influence of modern technology on the world climate it is important to have automatic detection methods for man-induced parameters. In this case the influence of jet contrails on the greenhouse effect shall be investigated by means of images from polar orbiting satellites. Current methods of line recognition and amplification cannot distinguish between contrails and rather sharp edges of natural cirrus or noise. They still rely on human control. Through the combination of different methods from cloud physics, image comparison, pattern recognition, and artificial intelligence we try to overcome this handicap. Here we will present the basic methods applied to each image frame, and list preliminary results derived this way.

  1. Reinventing Image Detective: An Evidence-Based Approach to Citizen Science Online

    Science.gov (United States)

    Romano, C.; Graff, P. V.; Runco, S.

    2017-12-01

Usability studies demonstrate that web users are notoriously impatient, spending as little as 15 seconds on a home page. How do you get users to stay long enough to understand a citizen science project? How do you get users to complete complex citizen science tasks online? Image Detective, a citizen science project originally developed by scientists and science engagement specialists at the NASA Johnson Space Center, engages the public in the analysis of images taken from space by astronauts to help enhance NASA's online database of astronaut imagery. The project partnered with the CosmoQuest citizen science platform to modernize, offering new and improved options for participation in Image Detective. The challenge: to create a web interface that builds users' skills and knowledge, creating engagement while users learn complex concepts essential to the accurate completion of tasks. The project team turned to usability testing for an objective understanding of how users perceived Image Detective and the steps required to complete its tasks. A group of six users was recruited online for initial, unmoderated testing. The users followed a think-aloud protocol while attempting tasks, and were recorded on video and audio. The usability test examined users' perception of four broad areas: the purpose of and context for Image Detective; the steps required to successfully complete the analysis (differentiating images of Earth's surface from those showing outer space and identifying common surface features); locating the image center point on a map of Earth; and finally, naming geographic locations or natural events seen in the image. Usability test findings demonstrated that the following best practices can increase participation in Image Detective and can be applied to the successful implementation of any citizen science project:
• Concise explanation of the project, its context, and its purpose;
• Including a mention of the funding agency (in this case, NASA);
• A preview of

  2. MR imaging for detection of trampoline injuries in children.

    Science.gov (United States)

    Hauth, E; Jaeger, H; Luckey, P; Beer, M

    2017-01-18

The recreational use of trampolines is an increasingly popular activity among children and adolescents, and several studies have reported radiological findings in trampoline-related injuries in children. The following publication presents our experience with MRI for the detection of trampoline injuries in children. Twenty children (mean age 9.2 years, range: 4-15 years) who had undergone an MRI study for detection of suspected trampoline injuries within one year were included. 9/20 (45%) children had a radiograph as the first imaging modality in conjunction with primary care; in 11/20 (55%) children MR imaging was performed as the first modality. MR imaging was performed on two 1.5 T scanners (60 and 70 cm bore design, respectively) without sedation. In 9/20 (45%) children the injury mechanism was a collision with another child, 7/20 (35%) children experienced leg pain several hours to one day after using the trampoline without an acute accident, and 4/20 (20%) children described a fall from the trampoline to the ground. All plain radiographs were performed in facilities outside the study centre and all were classified as having no pathological findings. In contrast, MR imaging detected injuries in 15/20 (75%) children. Lower extremity injuries were the most common findings, observed in 12/15 (80%) children. Amongst these, injuries of the ankle and foot were diagnosed in 7/15 (47%) patients. Fractures of the proximal tibial metaphysis were observed in 3/15 children. One child had developed a thoracic vertebral fracture. The two remaining children experienced injuries to the sacrum and a soft tissue injury of the thumb, respectively. Seven children described clinical symptoms without an overt accident; here, fractures of the proximal tibia were observed in 2 children, a hip joint effusion in another 2, and an injury of the ankle and foot in 1 child. There were no associated spinal cord injuries, no fracture dislocations, no vascular injuries and no head and neck injuries. In the

  3. Computer-assisted image processing to detect spores from the fungus Pandora neoaphidis.

    Science.gov (United States)

    Korsnes, Reinert; Westrum, Karin; Fløistad, Erling; Klingen, Ingeborg

    2016-01-01

This contribution demonstrates an example of experimental automatic image analysis to detect spores prepared on microscope slides derived from trapping. The application is to monitor aerial spore counts of the entomopathogenic fungus Pandora neoaphidis, which may serve as a biological control agent for aphids. Automatic detection of such spores can therefore play a role in plant protection. The present approach for such detection is a modification of traditional manual microscopy of prepared slides, where autonomous image recording precedes computerised image analysis. The purpose of the present image analysis is to support human visual inspection of imagery data - not to replace it. The workflow has three components:
• Preparation of slides for microscopy.
• Image recording.
• Computerised image processing, where the initial part is, as usual, segmentation depending on the actual data product. Then comes identification of blobs, calculation of principal axes of blobs, symmetry operations and projection on a three-parameter egg-shape space.
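
    As a rough sketch of the blob-identification and principal-axis steps in the third component above (not the authors' pipeline; the toy mask is an assumption), scikit-image's region properties expose exactly these quantities for a segmented binary image:

    ```python
    import numpy as np
    from skimage import measure

    def blob_axes(mask):
        """Label connected blobs and report centroid, orientation and
        principal (major/minor) axis lengths for each blob."""
        labels = measure.label(mask)
        return [(r.centroid, r.orientation,
                 r.major_axis_length, r.minor_axis_length)
                for r in measure.regionprops(labels)]

    # toy example: one elongated blob
    mask = np.zeros((50, 50), dtype=bool)
    mask[20:30, 10:40] = True
    print(blob_axes(mask))
    ```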

  4. Hot spot detection for breast cancer in Ki-67 stained slides: image dependent filtering approach

    Science.gov (United States)

    Niazi, M. Khalid Khan; Downs-Kelly, Erinn; Gurcan, Metin N.

    2014-03-01

We present a new method to detect hot spots from breast cancer slides stained for Ki-67 expression. It is common practice to use the centroid of a nucleus as a surrogate representation of a cell, which often requires the detection of individual nuclei; once all the nuclei are detected, the hot spots are found by clustering the centroids. For large images, nuclei detection is computationally demanding. Instead of detecting individual nuclei and treating hot spot detection as a clustering problem, we considered hot spot detection as an image filtering problem in which positively stained pixels are used to detect hot spots in breast cancer images. The method first segments the Ki-67-positive pixels using the visually meaningful segmentation (VMS) method that we developed earlier. Then, it automatically generates an image-dependent filter to produce a density map from the segmented image. The smoothness of the density image simplifies the detection of local maxima, and the number of local maxima directly corresponds to the number of hot spots in the breast cancer image. The method was tested on 23 regions of interest extracted from 10 breast cancer slides stained with Ki-67. To determine the intra-reader variability, each image was annotated twice for hot spots by a board-certified pathologist with a two-week interval between her two readings. A computer-generated hot spot region was considered a true positive if it agreed with either one of the two annotation sets provided by the pathologist. While the intra-reader variability was 57%, our proposed method correctly detects hot spots with 81% precision.
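
    A minimal sketch of the filtering idea described above, assuming a pre-segmented binary mask of Ki-67-positive pixels; the fixed Gaussian width stands in for the paper's image-dependent filter:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.feature import peak_local_max

    def hot_spots(positive_mask, sigma=25, min_distance=50):
        """Smooth the stained-pixel mask into a density map and return
        its local maxima as candidate hot spot centers."""
        density = gaussian_filter(positive_mask.astype(float), sigma=sigma)
        return peak_local_max(density, min_distance=min_distance)
    ```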

  5. Improved Ordinary Measure and Image Entropy Theory based intelligent Copy Detection Method

    Directory of Open Access Journals (Sweden)

    Dengpan Ye

    2011-10-01

Nowadays, more and more multimedia websites appear in social networks. This brings security problems such as privacy, piracy and disclosure of sensitive contents. Aiming at copyright protection, the copy detection technology of multimedia contents has become a hot topic. In our previous work, a new computer-based copyright control system used to detect the media was proposed. Based on this system, this paper proposes an improved media feature matching measure and an entropy-based copy detection method. The Levenshtein distance is used to enhance the matching degree of the feature-matching measure in copy detection. For entropy-based copy detection, we fuse two features of the entropy matrix we extracted. First, we extract the entropy matrix of the image and normalize it. Then, we fuse the eigenvalue feature and the transfer-matrix feature of the entropy matrix. The fused features are used for image copy detection. The experiments show that, compared with using either of these two kinds of features singly, the feature-fusion matching method is noticeably more robust and effective. The fused feature achieves a high detection rate for copy images that have undergone attacks such as noise, compression, zooming and rotation. Compared with the referenced methods, the proposed method is more intelligent and achieves good performance.
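
    The Levenshtein distance used above is the classic edit-distance dynamic program; a self-contained generic sketch (not the paper's feature-matching code):

    ```python
    def levenshtein(a, b):
        """Minimum number of insertions, deletions and substitutions
        required to turn sequence a into sequence b."""
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[m][n]

    assert levenshtein("kitten", "sitting") == 3
    ```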

  6. Polarized near-infrared autofluorescence imaging combined with near-infrared diffuse reflectance imaging for improving colonic cancer detection.

    Science.gov (United States)

    Shao, Xiaozhuo; Zheng, Wei; Huang, Zhiwei

    2010-11-08

We evaluate the diagnostic feasibility of an integrated polarized near-infrared (NIR) autofluorescence (AF) and NIR diffuse reflectance (DR) imaging technique developed for colonic cancer detection. A total of 48 paired colonic tissue specimens (normal vs. cancer) were measured using integrated NIR DR (850-1100 nm) and NIR AF imaging at 785 nm laser excitation. The results showed that the NIR AF intensities of cancer tissues are significantly lower than those of normal tissues, and that NIR AF imaging under polarization conditions gives a higher diagnostic accuracy (of ~92-94%) compared to non-polarized NIR AF imaging or NIR DR imaging. Further, ratio imaging of NIR DR to NIR AF with polarization provides the best diagnostic accuracy (of ~96%) among the NIR AF and NIR DR imaging techniques. This work suggests that integrated NIR AF/DR imaging under polarization conditions has the potential to improve the early diagnosis and detection of malignant lesions in the colon.

  7. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    Science.gov (United States)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer-vision-based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for the detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to independently moving objects are segmented from the background by applying constraint filtering to the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model-based tracking algorithm. The position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified, and possible solutions to build a practical working system are investigated.
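
    The position/velocity estimation step can be illustrated with a textbook constant-velocity Kalman filter (a 1D sketch, not the project's extended Kalman filter; the noise covariances q and r are placeholders):

    ```python
    import numpy as np

    def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
        """Track 1D position/velocity from noisy position measurements
        with a constant-velocity Kalman filter."""
        F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        H = np.array([[1.0, 0.0]])              # we observe position only
        Q = q * np.eye(2)                       # process noise covariance
        R = np.array([[r]])                     # measurement noise covariance
        x = np.zeros(2)                         # state: [position, velocity]
        P = np.eye(2)
        estimates = []
        for z in measurements:
            x = F @ x                           # predict
            P = F @ P @ F.T + Q
            y = z - H @ x                       # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            x = x + (K @ y).ravel()             # update
            P = (np.eye(2) - K @ H) @ P
            estimates.append(x.copy())
        return np.array(estimates)
    ```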

  8. Regularization based on steering parameterized Gaussian filters and a Bhattacharyya distance functional

    Science.gov (United States)

    Lopes, Emerson P.

    2001-08-01

Template regularization embeds the problem of class separability. From the machine vision perspective, this problem is critical when a textural classification procedure is applied to non-stationary pattern mosaic images. These applications often show low accuracy due to disturbance of the classifiers produced by exogenous or endogenous signal-regularity perturbations. Natural scene imaging, where images present a certain degree of homogeneity in terms of texture element size or shape (primitives), shows a variety of behaviors, especially varying the preferential spatial directionality. The space-time image pattern characterization is only solved if classification procedures are designed considering the most robust tools within a parallel and hardware perspective. The results compared in this paper are obtained using a framework based on a multi-resolution, frame and hypothesis approach. Two strategies for applying the bank of Gabor filters are considered: an adaptive strategy using the KL transform and a fixed-configuration strategy. The regularization under discussion is accomplished in the pyramid-building system instance. The filterings are steering Gaussians controlled by free parameters, which are adjusted in accordance with a feedback process driven by hints obtained from sequence-of-frames interaction functionals post-processed in the training process, including classification of training-set samples as examples. Besides these adjustments there is continuous input-data-sensitive adaptiveness. The experimental assessments focus on two basic issues: the Bhattacharyya distance as a pattern characterization feature, and the combination of the KL transform as feature selection and adaptive criterion with the regularization of the pattern Bhattacharyya distance functional (BDF) behavior, using the BDF state separability and symmetry as the main indicators of an optimum framework parameter configuration.
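
    For concreteness, a fixed-configuration Gabor filter bank of the kind contrasted above can be assembled as follows (an illustrative sketch with assumed frequencies and orientations, unrelated to the paper's steering parameterization; the input is a 2D grayscale array):

    ```python
    import numpy as np
    from skimage.filters import gabor

    def gabor_bank(image, frequencies=(0.1, 0.2, 0.4), n_orient=4):
        """Magnitude responses of a small Gabor filter bank over a grid
        of spatial frequencies and orientations."""
        out = {}
        for f in frequencies:
            for k in range(n_orient):
                theta = k * np.pi / n_orient
                real, imag = gabor(image, frequency=f, theta=theta)
                out[(f, theta)] = np.hypot(real, imag)  # magnitude response
        return out
    ```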

  9. Various imaging methods in the detection of small hepatomas

    International Nuclear Information System (INIS)

    Nakatsuka, Haruki; Kaminou, Toshio; Takemoto, Kazumasa; Takashima, Sumio; Kobayashi, Nobuyuki; Nakamura, Kenji; Onoyama, Yasuto; Kurioka, Naruto

    1985-01-01

Fifty-one patients with small hepatomas under 5 cm in diameter were studied to compare the detectability of various imaging methods. Positive findings were obtained in 50% of the patients by scintigraphy, in 74% by ultrasonography and in 79% by CT during screening tests. Rates of detection in retrospective analysis, after the site of the tumor was known, were 73%, 93% and 87%, respectively. The rate of detection was 92% by celiac arteriography and 98% by selective hepatic arteriography. In 21 patients whose tumors were under 3 cm, the rate was 32% for scintigraphy, 74% for ultrasonography and 65% for CT during screening, whereas it was 58%, 84% and 75% retrospectively; it was 85% by celiac arteriography and 95% by hepatic arteriography. The rate of detection of small hepatomas in screening tests differed remarkably from that in retrospective analysis. No single imaging method can reliably disclose the presence of a small hepatoma; therefore more than one method should be used in screening. (author)

  10. In vivo tumor detection with combined MR–Photoacoustic-Thermoacoustic imaging

    Directory of Open Access Journals (Sweden)

    Lin Huang

    2016-09-01

Here, we report a new method using combined magnetic resonance (MR), photoacoustic (PA) and thermoacoustic (TA) imaging techniques, and demonstrate its unique ability for in vivo cancer detection using tumor-bearing mice. Circular-scanning TA and PA imaging systems were used to recover the dielectric and optical property distributions of three colon-carcinoma-bearing mice, while a 7.0-T magnetic resonance imaging (MRI) unit with a mouse body volume coil was utilized for high-resolution structural imaging of the same mice. Three plastic tubes filled with soybean sauce were used as fiducial markers for the co-registration of MR, PA and TA images. The resulting fused images provided both enhanced tumor margin and contrast relative to the surrounding normal tissues. In particular, some finger-like protrusions extending into the surrounding tissues were revealed in the MR/TA fused images. These results show that combining the tissue functional optical and dielectric properties provided by PA and TA images with the anatomical structure from MRI in one picture makes accurate tumor identification easier. This combined MR–PA–TA imaging strategy has the potential to offer a clinically useful triple-modality tool for accurate cancer detection and for intraoperative surgical navigation.

  11. Automatic food detection in egocentric images using artificial intelligence technology.

    Science.gov (United States)

    Jia, Wenyan; Li, Yuecheng; Qu, Ruowei; Baranowski, Thomas; Burke, Lora E; Zhang, Hong; Bai, Yicheng; Mancino, Juliet M; Xu, Guizhi; Mao, Zhi-Hong; Sun, Mingui

    2018-03-26

To develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment. To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable device, called eButton, from free-living individuals. Three thousand nine hundred images containing real-world activities, which formed eButton data set 1, were manually selected from thirty subjects. eButton data set 2 contained 29 515 images acquired from a research participant in a week-long unrestricted recording. They included both food- and non-food-related real-life activities, such as dining at both home and restaurants, cooking, shopping, gardening, housekeeping chores, taking classes, gym exercise, etc. All images in these data sets were classified as food/non-food images based on their tags generated by a convolutional neural network. A cross data-set test was conducted on eButton data set 1. The overall accuracy of food detection was 91·5% and 86·4%, respectively, for the two directions of the cross test, in which each half of data set 1 was used for training and the other half for testing. For eButton data set 2, 74·0% sensitivity and 87·0% specificity were obtained if both 'food' and 'drink' were considered as food images. Alternatively, if only 'food' items were considered, the sensitivity and specificity reached 85·0% and 85·8%, respectively. The AI technology can automatically detect foods from low-quality, wearable camera-acquired real-world egocentric images with reasonable accuracy, reducing both the burden of data processing and privacy concerns.

  12. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminate representations to make final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods can be available at http://www.yongxu.org/lunwen.html.
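
    As background, the elastic-net penalty at the heart of the ENLR framework combines L1 (sparsity) and L2 (shrinkage) regularization; a generic scikit-learn illustration on a toy problem (not the authors' closed-form solver over singular values):

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNet

    # toy regression problem: y depends on only 3 of 20 features
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.5, 1.0]
    y = X @ w_true + 0.1 * rng.normal(size=100)

    # alpha scales the total penalty; l1_ratio mixes L1 vs L2 terms
    model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
    print(model.coef_)   # most coefficients are driven toward zero
    ```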

  13. Two-stage Keypoint Detection Scheme for Region Duplication Forgery Detection in Digital Images.

    Science.gov (United States)

    Emam, Mahmoud; Han, Qi; Zhang, Hongli

    2018-01-01

In digital image forensics, copy-move or region-duplication forgery detection has recently become a vital research topic. Most existing keypoint-based forgery detection methods, despite their robustness to geometric changes, fail to detect forgery in smooth regions. To solve these problems and detect keypoints that cover all regions, we proposed a two-step keypoint detection scheme. First, we employed the scale-invariant feature operator to detect spatially distributed keypoints in textured regions. Second, keypoints in the remaining regions are detected using the Harris corner detector with non-maximal suppression, so that the detected keypoints are evenly distributed. To improve matching performance, local feature points are described using the Multi-support Region Order-based Gradient Histogram descriptor. A comprehensive performance evaluation is performed based on precision-recall rates and a commonly tested dataset. The results demonstrated that the proposed scheme has better detection performance and robustness against some geometric transformation attacks compared with state-of-the-art methods. © 2017 American Academy of Forensic Sciences.
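
    The second-stage detector corresponds, in generic form, to Harris corner detection with non-maximal suppression; a sketch using scikit-image with illustrative parameters (not the paper's configuration):

    ```python
    from skimage.color import rgb2gray
    from skimage.feature import corner_harris, corner_peaks

    def harris_keypoints(rgb_image, min_distance=5, threshold_rel=0.01):
        """Harris corners; corner_peaks performs the non-maximal
        suppression, keeping peaks at least min_distance apart."""
        response = corner_harris(rgb2gray(rgb_image))
        return corner_peaks(response, min_distance=min_distance,
                            threshold_rel=threshold_rel)
    ```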

  14. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David; Gereige, Issam; Gourgon, Cé cile

    2013-01-01

Nanoimprint Lithography (NIL) is a promising technology for low-cost and large-scale nanostructure fabrication. This technique is based on a contact molding-demolding process that can produce a number of defects such as incomplete filling, negative patterns, and sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to numerically detect defects in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented.

  15. Community detection for fluorescent lifetime microscopy image segmentation

    Science.gov (United States)

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Achilefu, Samuel; Nussinov, Zohar

    2014-03-01

A multiresolution community detection (CD) method was suggested in a recent work as an efficient method for performing unsupervised segmentation of fluorescence lifetime (FLT) images of live cells containing fluorescent molecular probes [1]. In the current paper, we further explore this method on FLT images of ex vivo tissue slices. The image processing problem is framed as identifying clusters with respective average FLTs against a background, or "solvent", in FLT imaging microscopy (FLIM) images derived using NIR fluorescent dyes. We have identified significant multiresolution structures using replica correlations in these images, where such correlations are manifested by information-theoretic overlaps of the independent solutions ("replicas") attained using the multiresolution CD method from different starting points. In this paper, our method is found to be more efficient than a current state-of-the-art image segmentation method based on a mixture of Gaussian distributions. It offers more than 1.25 times the diversity, based on the Shannon index, of the latter method in selecting clusters with distinct average FLTs in NIR FLIM images.

  16. Intelligent Image Segment for Material Composition Detection

    Directory of Open Access Journals (Sweden)

    Liang Xiaodan

    2017-01-01

In the process of material composition detection, image analysis is an inevitable problem. Multilevel thresholding based on the OTSU method is one of the most popular image segmentation techniques. However, as the number of thresholds increases, the computing time increases exponentially. To overcome this problem, this paper proposes an artificial bee colony algorithm with a two-level topology. The improved artificial bee colony algorithm can quickly find suitable thresholds and is rarely trapped in local optima. The test results confirm its good performance.
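
    For reference, scikit-image ships a multilevel Otsu implementation illustrating the thresholding step whose cost the bee-colony search reduces (a generic usage sketch; the sample image is a stand-in):

    ```python
    import numpy as np
    from skimage import data
    from skimage.filters import threshold_multiotsu

    image = data.camera()                          # sample grayscale image
    thresholds = threshold_multiotsu(image, classes=3)
    regions = np.digitize(image, bins=thresholds)  # segment labels 0..2
    print(thresholds, np.unique(regions))
    ```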

  17. Joint Segmentation and Shape Regularization with a Generalized Forward Backward Algorithm.

    Science.gov (United States)

    Stefanoiu, Anca; Weinmann, Andreas; Storath, Martin; Navab, Nassir; Baust, Maximilian

    2016-05-11

    This paper presents a method for the simultaneous segmentation and regularization of a series of shapes from a corresponding sequence of images. Such series arise as time series of 2D images when considering video data, or as stacks of 2D images obtained by slicewise tomographic reconstruction. We first derive a model where the regularization of the shape signal is achieved by a total variation prior on the shape manifold. The method employs a modified Kendall shape space to facilitate explicit computations together with the concept of Sobolev gradients. For the proposed model, we derive an efficient and computationally accessible splitting scheme. Using a generalized forward-backward approach, our algorithm treats the total variation atoms of the splitting via proximal mappings, whereas the data terms are dealt with by gradient descent. The potential of the proposed method is demonstrated on various application examples dealing with 3D data. We explain how to extend the proposed combined approach to shape fields which, for instance, arise in the context of 3D+t imaging modalities, and show an application in this setup as well.
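
    As a loose analogue of the total-variation prior on the shape signal (acting here on a plain 1D signal rather than on the Kendall shape manifold; purely illustrative):

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    # a piecewise-constant signal corrupted by noise, standing in for a
    # time series of shape parameters
    rng = np.random.default_rng(1)
    signal = np.concatenate([np.zeros(50), np.ones(50), 0.5 * np.ones(50)])
    noisy = signal + 0.1 * rng.normal(size=signal.size)

    # TV regularization flattens the noise while preserving the jumps
    smoothed = denoise_tv_chambolle(noisy, weight=0.2)
    ```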

  18. Close-range hyperspectral image analysis for the early detection of stress responses in individual plants in a high-throughput phenotyping platform

    Science.gov (United States)

    Mohd Asaari, Mohd Shahrimie; Mishra, Puneet; Mertens, Stien; Dhondt, Stijn; Inzé, Dirk; Wuyts, Nathalie; Scheunders, Paul

    2018-04-01

The potential of close-range hyperspectral imaging (HSI) as a tool for detecting early drought stress responses in plants grown in a high-throughput plant phenotyping platform (HTPPP) was explored. Reflectance spectra from leaves in close-range imaging are highly influenced by plant geometry and its specific alignment towards the imaging system. This induces high uninformative variability in the recorded signals, whereas the spectral signature informing on plant biological traits remains undisclosed. A linear reflectance model that describes the effect of the distance and orientation of each pixel of a plant with respect to the imaging system was applied. By solving this model for the linear coefficients, the spectra were corrected for the uninformative illumination effects. This approach, however, was constrained by the requirement of a reference spectrum, which was difficult to obtain. As an alternative, the standard normal variate (SNV) normalisation method was applied to reduce this uninformative variability. Once the envisioned illumination effects were eliminated, the remaining differences in plant spectra were assumed to be related to changes in plant traits. To distinguish stress-related phenomena from regular growth dynamics, a spectral analysis procedure was developed based on clustering, a supervised band selection, and a direct calculation of a spectral similarity measure against a reference. To test the significance of the discrimination between healthy and stressed plants, a statistical test was conducted using a one-way analysis of variance (ANOVA) technique. The proposed analysis technique was validated with HSI data of maize plants (Zea mays L.) acquired in a HTPPP for early detection of drought stress. Results showed that pre-processing of the reflectance spectra with SNV effectively reduces the variability due to the expected illumination effects. The proposed spectral analysis method on the normalized spectra successfully
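
    The SNV normalisation referred to above is simply per-spectrum standardisation; a minimal sketch for a hyperspectral cube, assuming axes ordered (rows, cols, bands):

    ```python
    import numpy as np

    def snv(cube):
        """Standard normal variate: center and scale each pixel spectrum
        independently, suppressing multiplicative illumination effects."""
        mean = cube.mean(axis=-1, keepdims=True)
        std = cube.std(axis=-1, keepdims=True)
        return (cube - mean) / (std + 1e-12)   # epsilon guards flat spectra

    # usage: normalized = snv(hsi_cube) with hsi_cube of shape (rows, cols, bands)
    ```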

  19. Comparative study of protoporphyrin IX fluorescence image enhancement methods to improve an optical imaging system for oral cancer detection

    Science.gov (United States)

    Jiang, Ching-Fen; Wang, Chih-Yu; Chiang, Chun-Ping

    2011-07-01

Optoelectronic techniques to induce protoporphyrin IX (PpIX) fluorescence with topically applied 5-aminolevulinic acid on the oral mucosa have been developed to noninvasively detect oral cancer. Fluorescence imaging enables wide-area screening for oral premalignancy, but the lack of an adequate fluorescence enhancement method restricts the clinical imaging application of these techniques. This study aimed to develop a reliable fluorescence enhancement method to improve PpIX fluorescence imaging systems for oral cancer detection. Three contrast features, the red-green-blue reflectance difference, the R/B ratio, and the R/G ratio, were first developed based on the optical properties of the fluorescence images. A comparative study was then carried out with one negative control and four biopsy-confirmed clinical cases to validate the optimal image processing method for detecting the distribution of malignancy. The results showed the superiority of the R/G ratio in yielding a better contrast between normal and neoplastic tissue, and this method was less prone to detection errors. Quantitative comparison with the clinical diagnoses in the four neoplastic cases showed that the regions of premalignancy obtained using the proposed method accorded with the expert's determination, suggesting the potential clinical application of this method for the detection of oral cancer.
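
    The R/G ratio feature reduces to a per-pixel division of color channels; a minimal sketch (the channel order and the decision threshold are assumptions):

    ```python
    import numpy as np

    def rg_ratio_map(rgb):
        """Per-pixel red/green ratio; higher values flag PpIX-like red
        fluorescence against the green-reflecting normal mucosa."""
        rgb = rgb.astype(float)
        return rgb[..., 0] / (rgb[..., 1] + 1e-6)  # epsilon avoids divide-by-zero

    # usage: suspicious = rg_ratio_map(image) > 1.5  # threshold illustrative
    ```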

  20. Retrieval-based Face Annotation by Weak Label Regularized Local Coordinate Coding.

    Science.gov (United States)

    Wang, Dayong; Hoi, Steven C H; He, Ying; Zhu, Jianke; Mei, Tao; Luo, Jiebo

    2013-08-02

Retrieval-based face annotation is a promising paradigm of mining massive web facial images for automated face annotation. This paper addresses a critical problem of such paradigm, i.e., how to effectively perform annotation by exploiting the similar facial images and their weak labels which are often noisy and incomplete. In particular, we propose an effective Weak Label Regularized Local Coordinate Coding (WLRLCC) technique, which exploits the principle of local coordinate coding in learning sparse features, and employs the idea of graph-based weak label regularization to enhance the weak labels of the similar facial images. We present an efficient optimization algorithm to solve the WLRLCC task. We conduct extensive empirical studies on two large-scale web facial image databases: (i) a Western celebrity database with a total of 6,025 persons and 714,454 web facial images, and (ii) an Asian celebrity database with 1,200 persons and 126,070 web facial images. The encouraging results validate the efficacy of the proposed WLRLCC algorithm. To further improve the efficiency and scalability, we also propose a PCA-based approximation scheme and an offline approximation scheme (AWLRLCC), which generally maintains comparable results but significantly saves much time cost. Finally, we show that WLRLCC can also tackle two existing face annotation tasks with promising performance.

  1. Automated detection of a prostate Ni-Ti stent in electronic portal images.

    Science.gov (United States)

    Carl, Jesper; Nielsen, Henning; Nielsen, Jane; Lund, Bente; Larsen, Erik Hoejkjaer

    2006-12-01

    Planning target volumes (PTV) in fractionated radiotherapy still have to be outlined with wide margins to the clinical target volume due to uncertainties arising from daily shift of the prostate position. A recently proposed new method of visualization of the prostate is based on insertion of a thermo-expandable Ni-Ti stent. The current study proposes a new detection algorithm for automated detection of the Ni-Ti stent in electronic portal images. The algorithm is based on the Ni-Ti stent having a cylindrical shape with a fixed diameter, which was used as the basis for an automated detection algorithm. The automated method uses enhancement of lines combined with a grayscale morphology operation that looks for enhanced pixels separated with a distance similar to the diameter of the stent. The images in this study are all from prostate cancer patients treated with radiotherapy in a previous study. Images of a stent inserted in a humanoid phantom demonstrated a localization accuracy of 0.4-0.7 mm which equals the pixel size in the image. The automated detection of the stent was compared to manual detection in 71 pairs of orthogonal images taken in nine patients. The algorithm was successful in 67 of 71 pairs of images. The method is fast, has a high success rate, good accuracy, and has a potential for unsupervised localization of the prostate before radiotherapy, which would enable automated repositioning before treatment and allow for the use of very tight PTV margins.

  2. Automated detection of a prostate Ni-Ti stent in electronic portal images

    International Nuclear Information System (INIS)

    Carl, Jesper; Nielsen, Henning; Nielsen, Jane; Lund, Bente; Larsen, Erik Hoejkjaer

    2006-01-01

    Planning target volumes (PTV) in fractionated radiotherapy still have to be outlined with wide margins to the clinical target volume due to uncertainties arising from daily shift of the prostate position. A recently proposed new method of visualization of the prostate is based on insertion of a thermo-expandable Ni-Ti stent. The current study proposes a new detection algorithm for automated detection of the Ni-Ti stent in electronic portal images. The algorithm is based on the Ni-Ti stent having a cylindrical shape with a fixed diameter, which was used as the basis for an automated detection algorithm. The automated method uses enhancement of lines combined with a grayscale morphology operation that looks for enhanced pixels separated with a distance similar to the diameter of the stent. The images in this study are all from prostate cancer patients treated with radiotherapy in a previous study. Images of a stent inserted in a humanoid phantom demonstrated a localization accuracy of 0.4-0.7 mm which equals the pixel size in the image. The automated detection of the stent was compared to manual detection in 71 pairs of orthogonal images taken in nine patients. The algorithm was successful in 67 of 71 pairs of images. The method is fast, has a high success rate, good accuracy, and has a potential for unsupervised localization of the prostate before radiotherapy, which would enable automated repositioning before treatment and allow for the use of very tight PTV margins

  3. Ultrasound Imaging Methods for Breast Cancer Detection

    NARCIS (Netherlands)

    Ozmen, N.

    2014-01-01

    The main focus of this thesis is on modeling acoustic wavefield propagation and implementing imaging algorithms for breast cancer detection using ultrasound. As a starting point, we use an integral equation formulation, which can be used to solve both the forward and inverse problems. This thesis

  4. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David

    2013-12-01

Nanoimprint Lithography (NIL) is a promising technology for low-cost and large-scale nanostructure fabrication. This technique is based on a contact molding-demolding process that can produce a number of defects such as incomplete filling, negative patterns, and sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to numerically detect defects in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented. The results are independent of the device that captures the image (optical, confocal or electron microscope). The use of numerical images makes it possible to automate the detection and to compute a statistical analysis of defects. This method provides a fast analysis of printed gratings and could be used to monitor the production of such structures. © 2013 Elsevier B.V. All rights reserved.

  5. Near field ice detection using infrared based optical imaging technology

    Science.gov (United States)

    Abdel-Moati, Hazem; Morris, Jonathan; Zeng, Yousheng; Corie, Martin Wesley; Yanni, Victor Garas

    2018-02-01

    If not detected and characterized, icebergs can potentially pose a hazard to oil and gas exploration, development and production operations in arctic environments as well as commercial shipping channels. In general, very large bergs are tracked and predicted using models or satellite imagery. Small and medium bergs are detectable using conventional marine radar. As icebergs decay they shed bergy bits and growlers, which are much smaller and more difficult to detect. Their low profile above the water surface, in addition to occasional relatively high seas, makes them invisible to conventional marine radar. Visual inspection is the most common method used to detect bergy bits and growlers, but the effectiveness of visual inspections is reduced by operator fatigue and low light conditions. The potential hazard from bergy bits and growlers is further increased by short detection range (<1 km). As such, there is a need for robust and autonomous near-field detection of such smaller icebergs. This paper presents a review of iceberg detection technology and explores applications for infrared imagers in the field. Preliminary experiments are performed and recommendations are made for future work, including a proposed imager design which would be suited for near field ice detection.

  6. SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images

    International Nuclear Information System (INIS)

    Qiu, J; Yang, D

    2015-01-01

Purpose: In the course of radiation therapy, the complex information processing workflow can result in potential errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, so as to allow automatic checks of positions and orientations for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days' worth of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. Detection was correct in 182 of 200 kV portal images, a correct rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It only requires the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce the potential errors in the workflow of radiation therapy and so improve patient safety. In addition, the auto-detection results, such as the patient treatment site position and patient orientation, could be useful to guide subsequent image processing procedures, e.g. verification of patient daily setup accuracy. This work was partially supported by research grant from

  7. SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, J [Washington University in St Louis, Taian, Shandong (China); Yang, D [Washington University School of Medicine, St Louis, MO (United States)

    2015-06-15

Purpose: In the course of radiation therapy, the complex information processing workflow can result in potential errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, so as to allow automatic checks of positions and orientations for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days' worth of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. Detection was correct in 182 of 200 kV portal images, a correct rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It only requires the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce the potential errors in the workflow of radiation therapy and so improve patient safety. In addition, the auto-detection results, such as the patient treatment site position and patient orientation, could be useful to guide subsequent image processing procedures, e.g. verification of patient daily setup accuracy. This work was partially supported by research grant from

  8. The impact of different imaging modalities of 67Ga scintigraphy on the image quality and the ability in detection of lesions

    International Nuclear Information System (INIS)

    Liu Xiuqin; Li Wenchan; Zhang Jianfei; Yao Zhiming

    2009-01-01

Objective: 67Ga scintigraphy is an important method for detecting active sarcoidosis. The aim of this research was to study the influence of planar and tomographic 67Ga imaging, with and without CT attenuation correction (AC and NAC), on image quality and on the ability to detect lesions. Methods: Thirty-one patients (13 male, 18 female, age range: 33-87 years) with sarcoidosis underwent 67Ga planar and tomographic scans, with and without AC. The image quality and the ability to detect hyper-radioactive lymph nodes in the lung hilum and mediastinum (lesions) were compared among the planar, AC and NAC images. The paired t-test and chi-square test were used for data analysis with SPSS 10.0 software. Results: From planar to NAC to AC, the image quality improved significantly in that order (χ2 = 25.88). Conclusion: 67Ga tomographic scintigraphy can improve the ability to detect hyper-radioactive lung hilar and mediastinal lymph nodes compared with planar imaging, and CT AC for 67Ga tomography can improve the tomographic image quality. (authors)

  9. CLOUD DETECTION OF OPTICAL SATELLITE IMAGES USING SUPPORT VECTOR MACHINE

    Directory of Open Access Journals (Sweden)

    K.-Y. Lee

    2016-06-01

Cloud covers are generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding was a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Besides, there are many exceptions to control, and the environment changes dynamically; using the same threshold value on various data is not effective. In this study, a threshold-free method based on a Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. A statistical model is adopted to detect clouds instead of a subjective thresholding-based method, which is the main idea of this study. The features used in a classifier are the key to a successful classification. The Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is used to distinguish clouds from other objects. In the same way, the algorithm called Fmask (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. Therefore, the feature extraction is based on the ACCA algorithm and Fmask. Spatial and temporal information are also important for satellite images. Consequently, the co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud and others. In experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and images containing landscapes of agriculture, snow areas, and islands are tested. Experiment results demonstrate

  10. Cloud Detection of Optical Satellite Images Using Support Vector Machine

    Science.gov (United States)

    Lee, Kuan-Yi; Lin, Chao-Hung

    2016-06-01

Cloud covers are generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding was a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Besides, there are many exceptions to control, and the environment changes dynamically; using the same threshold value on various data is not effective. In this study, a threshold-free method based on a Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. A statistical model is adopted to detect clouds instead of a subjective thresholding-based method, which is the main idea of this study. The features used in a classifier are the key to a successful classification. The Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is used to distinguish clouds from other objects. In the same way, the algorithm called Fmask (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. Therefore, the feature extraction is based on the ACCA algorithm and Fmask. Spatial and temporal information are also important for satellite images. Consequently, the co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud and others. In experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and images containing landscapes of agriculture, snow areas, and islands are tested. Experiment results demonstrate the detection
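
    A skeletal version of the classification core (a generic scikit-learn SVM over per-pixel feature vectors; the ACCA/Fmask-style feature construction is abstracted into a ready-made feature matrix):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: per-pixel features (band statistics, co-occurrence texture,
    # temporal variance); y: 0 = non-cloud, 1 = cloud, 2 = others
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 6))               # stand-in features
    y = rng.integers(0, 3, size=1000)            # stand-in labels

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, y)
    labels = clf.predict(X[:5])                  # cloud-mask labels
    ```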

  11. Fast Automatic Airport Detection in Remote Sensing Images Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Fen Chen

    2018-03-01

Fast and automatic detection of airports from remote sensing images is useful for many military and civilian applications. In this paper, a fast automatic detection method is proposed to detect airports from remote sensing images based on convolutional neural networks, using the Faster R-CNN algorithm. The method first applies a convolutional neural network to generate candidate airport regions. Based on the features extracted from these proposals, it then uses another convolutional neural network to perform the airport detection. By taking the typically elongated, linear geometric shape of airports into consideration, some specific improvements to the method are proposed. These approaches successfully improve the quality of positive samples and achieve better accuracy in the final detection results. Experimental results on an airport dataset, Landsat 8 images, and a Gaofen-1 satellite scene demonstrate the effectiveness and efficiency of the proposed method.

  12. A fiducial detection algorithm for real-time image guided IMRT based on simultaneous MV and kV imaging.

    Science.gov (United States)

    Mao, Weihua; Riaz, Nadeem; Lee, Louis; Wiersma, Rodney; Xing, Lei

    2008-08-01

    The advantage of highly conformal dose techniques such as 3DCRT and IMRT is limited by intrafraction organ motion. A new approach to gain near real-time 3D positions of internally implanted fiducial markers is to analyze simultaneous onboard kV beam and treatment MV beam images (from fluoroscopic or electronic portal image devices). Before we can use this real-time image guidance for clinical 3DCRT and IMRT treatments, four outstanding issues need to be addressed. (1) How will fiducial motion blur the image and hinder tracking fiducials? kV and MV images are acquired while the tumor is moving at various speeds. We find that a fiducial can be successfully detected at a maximum linear speed of 1.6 cm/s. (2) How does MV beam scattering affect kV imaging? We investigate this by varying MV field size and kV source to imager distance, and find that common treatment MV beams do not hinder fiducial detection in simultaneous kV images. (3) How can one detect fiducials on images from 3DCRT and IMRT treatment beams when the MV fields are modified by a multileaf collimator (MLC)? The presented analysis is capable of segmenting a MV field from the blocking MLC and detecting visible fiducials. This enables the calculation of nearly real-time 3D positions of markers during a real treatment. (4) Is the analysis fast enough to track fiducials in nearly real time? Multiple methods are adopted to predict marker positions and reduce search regions. The average detection time per frame for three markers in a 1024 x 768 image was reduced to 0.1 s or less. Solving these four issues paves the way to tracking moving fiducial markers throughout a 3DCRT or IMRT treatment. Altogether, these four studies demonstrate that our algorithm can track fiducials in real time, on degraded kV images (MV scatter), in rapidly moving tumors (fiducial blurring), and even provide useful information in the case when some fiducials are blocked from view by the MLC. This technique can provide a gating signal or

  13. The patterning of retinal horizontal cells: normalizing the regularity index enhances the detection of genomic linkage

    Directory of Open Access Journals (Sweden)

    Patrick W. Keeley

    2014-10-01

    Full Text Available Retinal neurons are often arranged as non-random distributions called mosaics, as their somata minimize proximity to neighboring cells of the same type. The horizontal cells serve as an example of such a mosaic, but little is known about the developmental mechanisms that underlie their patterning. To identify genes involved in this process, we have used three different spatial statistics to assess the patterning of the horizontal cell mosaic across a panel of genetically distinct recombinant inbred strains. To avoid the confounding effect of cell density, which varies two-fold across these strains, we computed the real/random regularity ratio, expressing the regularity of a mosaic relative to a randomly distributed simulation of similarly sized cells. To test whether this latter statistic better reflects the variation in biological processes that contribute to horizontal cell spacing, we subsequently compared the genetic linkage for each of these two traits, the regularity index and the real/random regularity ratio, each computed from the distribution of nearest neighbor (NN) distances and from the Voronoi domain (VD) areas. Finally, we compared each of these analyses with another index of patterning, the packing factor. Variation in the regularity indexes, as well as their real/random regularity ratios and the packing factor, mapped quantitative trait loci (QTL) to the distal ends of Chromosomes 1 and 14. For the NN and VD analyses, we found that the degree of linkage was greater when using the real/random regularity ratio rather than the respective regularity index. Using informatic resources, we narrowed the list of prospective genes positioned at these two intervals to a small collection of six genes that warrant further investigation to determine their potential role in shaping the patterning of the horizontal cell mosaic.
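
    The two statistics being compared can be sketched in a few lines. The snippet below is a simplification that ignores soma size and retinal boundary effects: it computes the NN regularity index (mean nearest-neighbour distance divided by its standard deviation) and its real/random ratio against density-matched uniform random fields.

```python
# Minimal sketch of the NN regularity index and the real/random ratio;
# the field extent and simulation count are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def regularity_index(pts):
    d, _ = cKDTree(pts).query(pts, k=2)   # first neighbour is the point itself
    nn = d[:, 1]
    return nn.mean() / nn.std()

def real_random_ratio(pts, field=(1.0, 1.0), n_sims=200, seed=None):
    rng = np.random.default_rng(seed)
    # density-matched uniform random fields of the same cell count
    sims = [regularity_index(rng.uniform((0, 0), field, size=pts.shape))
            for _ in range(n_sims)]
    return regularity_index(pts) / np.mean(sims)
```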

  14. Bright Retinal Lesions Detection using Colour Fundus Images Containing Reflective Features

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Karnowski, Thomas Paul [ORNL; Chaum, Edward [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK)

    2009-01-01

    In recent years the research community has developed many techniques to detect and diagnose diabetic retinopathy with retinal fundus images. This is a necessary step for the implementation of a large-scale screening effort in rural areas where ophthalmologists are not available. In the United States of America, the incidence of diabetes is worryingly increasing among the young population. Retinal fundus images of patients younger than 20 years old present a high amount of reflection due to the Nerve Fibre Layer (NFL); the younger the patient, the more visible these reflections are. To our knowledge, no algorithms exist that explicitly deal with this type of reflection artefact. This paper presents a technique to detect bright lesions even in patients with a high degree of reflective NFL. First, the candidate bright lesions are detected using image equalization and relatively simple histogram analysis. Then, a classifier is trained using a texture descriptor (multi-scale Local Binary Patterns) and other features in order to remove false positives in the lesion detection. Finally, the area of the lesions is used to diagnose diabetic retinopathy. Our database consists of 33 images from a telemedicine network currently under development. When determining moderate to high diabetic retinopathy using the detected bright lesions, the algorithm achieves a sensitivity of 100% at a specificity of 100% using hold-one-out testing.

  15. Edge detection of optical subaperture image based on improved differential box-counting method

    Science.gov (United States)

    Li, Yi; Hui, Mei; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-01-01

    Optical synthetic aperture imaging technology is an effective approach to improving imaging resolution. Compared with a monolithic mirror system, the image of an optical synthetic aperture system is often more complex at the edges because of the gaps between segments, which makes stitching a difficult problem. It is therefore necessary to extract the edges of subaperture images to achieve effective stitching. Fractal dimension, as a measurable feature, can describe image surface texture characteristics, which provides a new approach for edge detection. In our research, an improved differential box-counting method is used to calculate the fractal dimension of the image, and the obtained fractal dimension is then mapped to a grayscale image to detect edges. Compared with the original differential box-counting method, this method has two improvements. First, by modifying the box-counting mechanism, a box with a fixed height is replaced by a box with adaptive height, which solves the problem of over-counting the number of boxes covering the image intensity surface. Second, an image reconstruction method based on a super-resolution convolutional neural network is used to enlarge small images, which solves the problem that the fractal dimension cannot be calculated accurately for small images, while maintaining the scale invariance of the fractal dimension. The experimental results show that the proposed algorithm can effectively eliminate noise and has a lower false detection rate than traditional edge detection algorithms. In addition, the algorithm maintains the integrity and continuity of image edges while retaining important edge information.
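
    For reference, here is a minimal sketch of the baseline differential box-counting estimator (in the Sarkar-Chaudhuri style) that the paper improves on; the adaptive box height and the super-resolution enlargement are not reproduced, and the grid sizes are illustrative.

```python
# Baseline differential box-counting fractal dimension (sketch only);
# `img` is assumed to be a square 8-bit grayscale array.
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    img = img.astype(np.float64)
    G, M = 256.0, img.shape[0]
    log_n, log_inv_r = [], []
    for s in sizes:
        h = G * s / M                       # fixed box height for grid size s
        n_boxes = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                block = img[i:i + s, j:j + s]
                # boxes of height h needed to cover the intensity surface
                n_boxes += int(np.floor(block.max() / h) -
                               np.floor(block.min() / h)) + 1
        log_n.append(np.log(n_boxes))
        log_inv_r.append(np.log(M / s))
    slope, _ = np.polyfit(log_inv_r, log_n, 1)   # FD is the log-log slope
    return slope
```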

  16. Automated radial basis function neural network based image classification system for diabetic retinopathy detection in retinal images

    Science.gov (United States)

    Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude

    2010-02-01

    Diabetic retinopathy (DR) is a chronic eye disease for which early detection is highly essential to avoid severe outcomes. Image processing of retinal images emerges as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, which is a significant technique for detecting abnormalities in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are the widely preferred artificial intelligence technique, since they yield superior results in terms of classification accuracy. In this work, a Radial Basis Function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis is performed with the results of a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results show promise for the neural classifier in terms of the performance measures.
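
    A minimal sketch of an RBF classifier of this general kind, assuming features have already been extracted from the retinal images: k-means picks the hidden-unit centres, Gaussian activations form the hidden layer, and the output weights are fit by least squares. The centre count and width are illustrative, not the paper's settings.

```python
# Minimal RBF-network sketch (assumed features, illustrative settings).
import numpy as np
from sklearn.cluster import KMeans

class RBFNet:
    def __init__(self, n_centers=20, gamma=1.0):
        self.n_centers, self.gamma = n_centers, gamma

    def _phi(self, X):
        # Gaussian activations between samples and learned centres
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):                    # y in {0, 1}: normal vs. DR
        km = KMeans(n_clusters=self.n_centers, n_init=10).fit(X)
        self.centers = km.cluster_centers_
        # least-squares fit of the linear output layer
        self.w, *_ = np.linalg.lstsq(self._phi(X), y.astype(float), rcond=None)
        return self

    def predict(self, X):
        return (self._phi(X) @ self.w > 0.5).astype(int)
```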

  17. Gastric cancer target detection using near-infrared hyperspectral imaging with chemometrics

    Science.gov (United States)

    Yi, Weisong; Zhang, Jian; Jiang, Houmin; Zhang, Niya

    2014-09-01

    Gastric cancer is one of the leading causes of cancer death in the world owing to its high morbidity and mortality. Hyperspectral imaging (HSI) is an emerging, non-destructive, cutting-edge analytical technology that combines conventional imaging and spectroscopy in a single system. This manuscript investigates the application of near-infrared hyperspectral imaging (NIR-HSI, 900-1700 nm) for gastric cancer detection with chemometric algorithms. Major spectral differences were observed in three regions (950-1050, 1150-1250, and 1400-1500 nm). By inspecting the cancerous mean spectrum, three major absorption bands were observed around 975, 1215 and 1450 nm. Furthermore, the cancer target detection results are consistent with and conform to histopathological examination results. These results suggest that NIR-HSI is a simple, feasible and sensitive optical diagnostic technology for gastric cancer target detection with chemometrics.

  18. Evaluation of radiographic imaging techniques in lung nodule detection

    International Nuclear Information System (INIS)

    Ho, J.T.; Kruger, R.A.

    1989-01-01

    Dual-energy radiography appears to be the most effective technique to address the bone superposition that compromises conventional chest radiography. A dual-energy, single-exposure, film-based technique was compared with a dual-energy, dual-exposure technique and with conventional chest radiography in a simulated lung nodule detection study. Observers detected more nodules on images produced by the dual-energy techniques than on images produced by conventional chest radiography. The difference between dual-energy and conventional chest radiography is statistically significant, whereas the difference between the dual-energy dual-exposure and single-exposure techniques is not. The single-exposure technique thus has the potential to replace the dual-exposure technique in future clinical applications.

  19. Low-resolution ship detection from high-altitude aerial images

    Science.gov (United States)

    Qi, Shengxiang; Wu, Jianmin; Zhou, Qing; Kang, Minyang

    2018-02-01

    Ship detection from optical images taken by high-altitude aircraft, such as unmanned long-endurance airships and unmanned aerial vehicles, has broad applications in marine fishery management, ship monitoring and vessel salvage. However, the major challenge is the limited information processing capability on unmanned high-altitude platforms. Furthermore, in order to guarantee a wide detection range, unmanned aircraft generally cruise at high altitudes, resulting in imagery with low-resolution targets and strong clutter from heavy clouds. In this paper, we propose a low-resolution ship detection method to extract ships from these high-altitude optical images. Inspired by recent research on visual saliency detection indicating that small salient signals can be well detected by a gradient enhancement operation combined with Gaussian smoothing, we propose facet kernel filtering to rapidly suppress cluttered backgrounds and delineate candidate target regions on the sea surface. Then, principal component analysis (PCA) is used to compute the orientation of the target axis, followed by a simplified histogram of oriented gradients (HOG) descriptor to characterize the ship shape. Finally, a support vector machine (SVM) is applied to discriminate real targets from false alarms. Experimental results show that the proposed method is highly efficient in low-resolution ship detection.
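
    The PCA orientation step is compact enough to sketch directly: treating the pixel coordinates of a binary candidate region as samples, the eigenvector of their covariance matrix with the largest eigenvalue gives the target's major axis, along which the simplified HOG descriptor can then be computed. `mask` is an assumed binary region, not the authors' data structure.

```python
# Sketch of the PCA orientation step on an assumed binary candidate region.
import numpy as np

def principal_axis_angle(mask):
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    pts -= pts.mean(axis=0)                   # centre the coordinates
    cov = pts.T @ pts / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    major = eigvecs[:, -1]                    # eigenvector of largest eigenvalue
    return np.degrees(np.arctan2(major[1], major[0]))
```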

  20. Spectral imaging for contamination detection in food

    DEFF Research Database (Denmark)

    Carstensen, Jens Michael

    application of the technique is finding anomalies in supposedly homogeneous matter or homogeneous mixtures. This application occurs frequently in the food industry when different types of contamination are to be detected. Contaminants could be, e.g., foreign matter, process-induced toxins, and microbiological spoilage. Many of these contaminants may be detected in the wavelength range visible to normal silicon-based camera sensors, i.e. 350-1050 nm, with proper care during sample preparation, sample presentation, image acquisition and analysis. This presentation will give an introduction to the techniques behind…

  1. System and method for automated object detection in an image

    Science.gov (United States)

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated according to their measured contextual support, and prominent edge features may be extracted based on that support. The object may then be identified based on the extracted prominent edge features.

  2. Sensitive elemental detection using microwave-assisted laser-induced breakdown imaging

    Science.gov (United States)

    Iqbal, Adeel; Sun, Zhiwei; Wall, Matthew; Alwahabi, Zeyad T.

    2017-10-01

    This study reports a sensitive spectroscopic method for quantitative elemental detection that manipulates the temporal and spatial parameters of laser-induced plasma. The method was tested for indium detection in solid samples, in which laser ablation was used to generate a tiny plasma. The lifetime of the laser-induced plasma can be extended to hundreds of microseconds by using microwave injection to remobilize the electrons. With this novel method, the temporally integrated signal of the indium emission was significantly enhanced. Meanwhile, the projected detectable area of the excited indium atoms was also significantly enlarged using an interference-based, rather than diffraction-based, technique, achieved by directly imaging the microwave-enhanced plasma through a novel narrow-bandpass filter centered exactly at the indium emission line. Quantitative laser-induced breakdown spectroscopy was recorded simultaneously with the new imaging method. The intensities recorded by the two methods exhibit a very good mutual linear relationship. The detection intensity was improved 14-fold owing to the combined improvements in plasma lifetime and detection area.

  3. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    International Nuclear Information System (INIS)

    Shieh, Chun-Chien; Kipritidis, John; O'Brien, Ricky T; Cooper, Benjamin J; Keall, Paul J; Kuncic, Zdenka

    2015-01-01

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp–Davis–Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating general knowledge of the thoracic anatomy, via anatomy segmentation, into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is incorporated into the reconstruction using a heuristic approach that adaptively suppresses over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and on the segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan and was compared to FDK, ASD-POCS and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate, as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS and
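
    As a rough illustration of the TV-minimization ingredient only (not AAIR's anatomy-adaptive weighting), the sketch below performs one gradient-descent step that reduces the isotropic total variation of a 2D slice; ASD-POCS alternates steps of this kind with data-fidelity projections. Boundary handling is simplified to periodic wrapping for brevity.

```python
# One isotropic-TV gradient-descent step on a 2D slice (sketch only).
import numpy as np

def tv_gradient_step(img, step=0.1, eps=1e-8):
    dx = np.roll(img, -1, axis=1) - img           # forward differences
    dy = np.roll(img, -1, axis=0) - img
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)        # smoothed gradient magnitude
    px, py = dx / mag, dy / mag                   # normalized gradient field
    # the TV gradient is minus the divergence of (px, py); descend along it
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return img + step * div
```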

  4. Image Registration-Based Bolt Loosening Detection of Steel Joints

    Science.gov (United States)

    2018-01-01

    Self-loosening of bolts caused by repetitive loads and vibrations is one of the common defects that can weaken the structural integrity of bolted steel joints in civil structures. Many existing approaches for detecting loosened bolts are based on physical sensors and hence require extensive sensor deployment, which limits their ability to cost-effectively detect loosened bolts across a large number of steel joints. Recently, computer vision-based structural health monitoring (SHM) technologies have demonstrated great potential for damage detection owing to being low cost, easy to deploy, and contactless. In this study, we propose a vision-based, non-contact bolt loosening detection method that uses a consumer-grade digital camera. Two images of the monitored steel joint are first collected during different inspection periods and then aligned through two image registration processes. If a bolt rotates between inspections, it introduces differential features in the registration errors, which serve as a good indicator of bolt loosening. The performance and robustness of this approach have been validated through a series of experimental investigations using three laboratory setups: a gusset plate on a cross frame, a column flange, and a girder web. The bolt loosening detection results are presented for easy interpretation so that informed decisions can be made about the detected loosened bolts. PMID:29597264

  5. Image Registration-Based Bolt Loosening Detection of Steel Joints.

    Science.gov (United States)

    Kong, Xiangxiong; Li, Jian

    2018-03-28

    Self-loosening of bolts caused by repetitive loads and vibrations is one of the common defects that can weaken the structural integrity of bolted steel joints in civil structures. Many existing approaches for detecting loosened bolts are based on physical sensors and hence require extensive sensor deployment, which limits their ability to cost-effectively detect loosened bolts across a large number of steel joints. Recently, computer vision-based structural health monitoring (SHM) technologies have demonstrated great potential for damage detection owing to being low cost, easy to deploy, and contactless. In this study, we propose a vision-based, non-contact bolt loosening detection method that uses a consumer-grade digital camera. Two images of the monitored steel joint are first collected during different inspection periods and then aligned through two image registration processes. If a bolt rotates between inspections, it introduces differential features in the registration errors, which serve as a good indicator of bolt loosening. The performance and robustness of this approach have been validated through a series of experimental investigations using three laboratory setups: a gusset plate on a cross frame, a column flange, and a girder web. The bolt loosening detection results are presented for easy interpretation so that informed decisions can be made about the detected loosened bolts.
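
    A minimal sketch of the registration-then-difference idea using ORB features and a RANSAC homography in OpenCV; the paper's own two-stage registration and decision logic are not reproduced, and the file names are placeholders.

```python
# Sketch: align the "after" image onto the "before" image, then inspect
# the residual difference around each bolt (file names are placeholders).
import cv2
import numpy as np

before = cv2.imread("joint_before.jpg", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("joint_after.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(4000)
k1, d1 = orb.detectAndCompute(before, None)
k2, d2 = orb.detectAndCompute(after, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust alignment

aligned = cv2.warpPerspective(after, H, before.shape[::-1])
residual = cv2.absdiff(before, aligned)   # rotated bolts leave bright residue
```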

  6. Regularized Adaptive Notch Filters for Acoustic Howling Suppression

    DEFF Research Database (Denmark)

    Gil-Cacho, Pepe; van Waterschoot, Toon; Moonen, Marc

    2009-01-01

    In this paper, a method for the suppression of acoustic howling is developed, based on adaptive notch filters (ANF) with regularization (RANF). The method features three RANFs working in parallel to achieve frequency tracking, howling detection and suppression. The ANF-based approach to howling...

  7. A Plane Target Detection Algorithm in Remote Sensing Images based on Deep Learning Network Technology

    Science.gov (United States)

    Shuxin, Li; Zhilong, Zhang; Biao, Li

    2018-01-01

    The plane is an important target category among remote sensing targets, and it is of great value to detect plane targets automatically. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, providing more detailed information for detecting remote sensing targets automatically. Deep learning is the most advanced technology in image target detection and recognition and has brought great performance improvements for target detection and recognition in everyday scenes. We apply this technology to remote sensing target detection and propose an end-to-end deep network algorithm that can learn from remote sensing images to detect targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of plane targets and performs better in target detection than older methods.

  8. Novelty Detection Classifiers in Weed Mapping: Silybum marianum Detection on UAV Multispectral Images.

    Science.gov (United States)

    Alexandridis, Thomas K; Tamouridou, Afroditi Alexandra; Pantazi, Xanthoula Eirini; Lagopodi, Anastasia L; Kashefi, Javid; Ovakoglou, Georgios; Polychronos, Vassilios; Moshou, Dimitrios

    2017-09-01

    In the present study, the detection and mapping of the weed Silybum marianum (L.) Gaertn. using novelty detection classifiers is reported. A multispectral camera (green-red-NIR) on board a fixed-wing unmanned aerial vehicle (UAV) was employed to obtain high-resolution images. Four novelty detection classifiers were used to identify S. marianum among other vegetation in a field: One-Class Support Vector Machine (OC-SVM), One-Class Self-Organizing Maps (OC-SOM), Autoencoders and One-Class Principal Component Analysis (OC-PCA). The three spectral bands and texture were used as input features to the novelty detection classifiers. S. marianum identification using OC-SVM reached an overall accuracy of 96%. The results show the feasibility of effective S. marianum mapping by means of novelty detection classifiers acting on multispectral UAV imagery.
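
    A minimal OC-SVM sketch in scikit-learn, assuming `X_target` (feature vectors of labelled S. marianum pixels: three bands plus texture), `X_scene`, and the scene dimensions are prepared elsewhere; the `nu` and `gamma` values are illustrative, not the study's.

```python
# Sketch: train a one-class SVM on target samples only, then flag
# everything dissimilar in the scene as "other vegetation".
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

scaler = StandardScaler().fit(X_target)            # X_target: assumed samples
oc_svm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
oc_svm.fit(scaler.transform(X_target))

pred = oc_svm.predict(scaler.transform(X_scene))   # +1 = target, -1 = novelty
weed_mask = (pred == 1).reshape(scene_height, scene_width)  # assumed dims
```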

  9. A method for real-time memory efficient implementation of blob detection in large images

    Directory of Open Access Journals (Sweden)

    Petrović Vladimir L.

    2017-01-01

    Full Text Available In this paper we propose a method for real-time blob detection in large images with low memory cost. The method is suitable for implementation on specialized parallel hardware such as multi-core platforms, FPGAs and ASICs. It uses parallelism to speed up blob detection. The input image is divided into blocks of equal size, to which the maximally stable extremal regions (MSER) blob detector is applied in parallel. We propose the use of multiresolution analysis for the detection of large blobs that are not detected by processing the small blocks. This method can find its place in many applications such as medical imaging, text recognition, video surveillance and wide area motion imagery (WAMI). We also explored the possibility of using the detected blobs for feature-based image alignment. When large images are processed, our approach is 10 to over 20 times more memory efficient than the state-of-the-art hardware implementation of the MSER.
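
    A serial sketch of the block-wise scheme with OpenCV's MSER, assuming an 8-bit grayscale input; the real design runs the tiles in parallel on specialized hardware and adds a multiresolution pass for blobs larger than a tile, which is omitted here.

```python
# Tile-wise MSER blob detection (serial sketch; tile size is illustrative).
import cv2
import numpy as np

def blockwise_mser(img, block=512):
    mser = cv2.MSER_create()
    blobs = []
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = img[i:i + block, j:j + block]
            regions, _ = mser.detectRegions(tile)
            # shift each region's (x, y) points back to global coordinates
            blobs += [r + np.array([j, i]) for r in regions]
    return blobs
```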

  10. Detection of microaneurysms in retinal images using an ensemble classifier

    Directory of Open Access Journals (Sweden)

    M.M. Habib

    2017-01-01

    Full Text Available This paper introduces, and reports on the performance of, a novel combination of algorithms for automated microaneurysm (MA) detection in retinal images. The presence of MAs in retinal images is a pathognomonic sign of Diabetic Retinopathy (DR), one of the leading causes of blindness among the working-age population. An extensive survey of the literature is presented and current techniques in the field are summarised. The proposed technique first detects an initial set of candidates using a Gaussian matched filter and then classifies this set to reduce the number of false positives. A Tree Ensemble classifier is used with a set of 70 features (the most common features in the literature). A new set of 32 MA ground-truth images (with a total of 256 labelled MAs), based on images from the MESSIDOR dataset, is introduced as a public dataset for benchmarking MA detection algorithms. We evaluate our algorithm on this dataset as well as on another public dataset (DIARETDB1 v2.1) and compare it against the best available alternative. Results show that the proposed classifier is superior at eliminating false positive MA detections from the initial set of candidates. The proposed method achieves an ROC score of 0.415, compared to 0.2636 achieved by the best available technique. Furthermore, results show that the classifier model maintains consistent performance across datasets, illustrating its generalisability and that overfitting does not occur.
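
    A minimal sketch of the candidate-detection stage, assuming the green channel as input: the channel is inverted (MAs are dark), correlated with a zero-mean 2D Gaussian matched filter, and the response is thresholded. The sigma, kernel size, and threshold are illustrative; the 70-feature Tree Ensemble stage is not shown.

```python
# Sketch of the Gaussian matched-filter candidate stage (parameters assumed).
import numpy as np
from scipy.ndimage import correlate

def gaussian_matched_kernel(sigma=1.5, size=11):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k - k.mean()            # zero mean: flat background responds ~0

def ma_candidates(green, thresh=0.5):
    inv = green.max() - green.astype(float)     # make dark MAs bright
    resp = correlate(inv, gaussian_matched_kernel())
    resp = resp / (resp.max() + 1e-9)           # normalize the response
    return resp > thresh                        # boolean candidate mask
```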

  11. S-CNN-BASED SHIP DETECTION FROM HIGH-RESOLUTION REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    R. Zhang

    2016-06-01

    Full Text Available Reliable ship detection plays an important role in both military and civil fields. However, the task is difficult in high-resolution remote sensing images with complex backgrounds and various types of ships with different poses, shapes and scales. Related works mostly used gray-level and shape features to detect ships, which yields results with poor robustness and efficiency. To detect ships more automatically and robustly, we propose a novel ship detection method based on convolutional neural networks (CNNs), called S-CNN, fed with specifically designed proposals extracted from a ship model combined with an improved saliency detection method. First, we propose two ship models, the “V” ship head model and the “||” ship body model, to localize ship proposals from the line segments extracted from a test image. Next, for offshore ships with relatively small sizes, which cannot be efficiently picked out by the ship models due to the lack of reliable line segments, we propose an improved saliency detection method to find these proposals. These two kinds of ship proposals are then fed to the trained CNN for robust and efficient detection. Experimental results on a large number of representative remote sensing images containing ships with varied poses, shapes and scales demonstrate the efficiency and robustness of our proposed S-CNN-based ship detector.

  12. Near-infrared imaging spectroscopy for counterfeit drug detection

    Science.gov (United States)

    Arnold, Thomas; De Biasio, Martin; Leitner, Raimund

    2011-06-01

    Pharmaceutical counterfeiting is a significant issue in the healthcare community as well as for the pharmaceutical industry worldwide. The use of counterfeit medicines can result in treatment failure or even death. A rapid screening technique such as near-infrared (NIR) spectroscopy could aid in the search for and identification of counterfeit drugs. This work presents a comparison of two laboratory NIR imaging systems and the chemometric analysis of the acquired spectroscopic image data. The first imaging system utilizes a NIR liquid crystal tuneable filter and is designed for the investigation of stationary objects. The second imaging system utilizes a NIR imaging spectrograph and is designed for the fast analysis of moving objects on a conveyor belt. Several drugs in the form of tablets and capsules were analyzed. Spectral unmixing techniques were applied to the mixed reflectance spectra to identify the constituent parts of the investigated drugs. The results show that NIR spectroscopic imaging can be used for contactless detection and identification of a variety of counterfeit drugs.

  13. Challenges in the Design of Microwave Imaging Systems for Breast Cancer Detection

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy

    2011-01-01

    Among the various breast imaging modalities for breast cancer detection, microwave imaging is attractive due to the high contrast in dielectric properties between cancerous and normal tissue. For this reason, the modality has received significant interest and attention from the microwave community. This paper presents a survey of the ongoing research in the field of microwave imaging of biological tissues, with a major focus on the breast tumor detection application. The existing microwave imaging systems are categorized on the basis of the employed measurement concepts. The advantages and disadvantages of the implemented imaging techniques are discussed. The fundamental tradeoffs between the various system requirements are indicated. Some strategies to overcome these limitations are outlined.

  14. Aerial Images and Convolutional Neural Network for Cotton Bloom Detection.

    Science.gov (United States)

    Xu, Rui; Li, Changying; Paterson, Andrew H; Jiang, Yu; Sun, Shangpeng; Robertson, Jon S

    2017-01-01

    Monitoring flower development can provide useful information for production management, yield estimation and the selection of specific crop genotypes. The main goal of this study was to develop a methodology to detect and count cotton flowers, or blooms, using color images acquired by an unmanned aerial system. The aerial images were collected from two test fields over four days. A convolutional neural network (CNN) was designed and trained to detect cotton blooms in raw images, and their 3D locations were calculated using the dense point cloud constructed from the aerial images with the structure-from-motion method. The quality of the dense point cloud was analyzed, and plots with poor quality were excluded from data analysis. A constrained clustering algorithm was developed to register the same bloom detected in different images based on the 3D location of the bloom. The accuracy and incompleteness of the dense point cloud were analyzed because they affect the accuracy of the 3D bloom locations and thus the accuracy of the bloom registration. The constrained clustering algorithm was validated using simulated data, showing good efficiency and accuracy. The bloom count from the proposed method was comparable with the number counted manually, with an error of -4 to 3 blooms for the field with a single plant per plot. However, more plots were underestimated in the field with multiple plants per plot due to hidden blooms that were not captured by the aerial images. The proposed methodology provides a high-throughput method to continuously monitor the flowering progress of cotton.

  15. Face detection on distorted images using perceptual quality-aware features

    Science.gov (United States)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

    We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system associated with face detection tasks. A new face detector based on QualHOG features is also proposed, which augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.

  16. Chondroitin sulfate iron colloid-enhanced MR imaging of hepatocellular carcinoma; Correlation between histologic grade and detectability

    Energy Technology Data Exchange (ETDEWEB)

    Kamba, Masayuki; Suto, Yuji; Kodama, Fumiko; Kato, Terumi; Ohta, Yoshio; Horie, Yasushi; Hamazoe, Ryuichi; Kawasaki, Hironaka (Tottori Univ., Yonago (Japan). School of Medicine)

    1994-03-01

    We applied chondroitin sulfate iron colloid (CSIC) as an MR contrast agent to detect hepatocellular carcinoma (HCC). The MR and pathologic findings of 25 HCCs in 21 patients were analyzed. MR imaging was performed with a superconducting system operating at 1.5 T. Proton density-weighted (PDW), T2-weighted (T2W) and T1-weighted (T1W) images were obtained before and after an intravenous injection of 23.6 µmol Fe/kg of CSIC. In moderately to poorly differentiated and moderately differentiated HCCs (n=15), all the lesions except a 5-mm satellite nodule were detectable with unenhanced T2W images as well as CSIC-enhanced PDW, T2W and T1W images. In well to moderately differentiated HCCs (n=6), two to four lesions were detectable with unenhanced images. All the lesions except a 3-mm satellite nodule were detectable with CSIC-enhanced PDW, T2W and T1W images. In well differentiated HCCs (n=4), one or two lesions were detectable with unenhanced images. All the lesions were detectable with CSIC-enhanced T1W images, while only two lesions were detectable with CSIC-enhanced PDW or T2W images. CSIC administration improves detection rates and is especially useful in detecting small foci of well to moderately or well differentiated HCC. (author).

  17. Detection of colorectal hepatic metastases using MnDPDP MR imaging and diffusion-weighted imaging (DWI) alone and in combination

    International Nuclear Information System (INIS)

    Koh, D.M.; Brown, G.; Riddell, A.M.; Scurr, E.; Allen, S.D.; Husband, J.E.; Collins, D.J.; Souza, N.M. de; Leach, M.O.; Chau, I.; Cunningham, D.

    2008-01-01

    To compare the diagnostic accuracy of MnDPDP MR imaging and diffusion-weighted imaging (DWI), alone and in combination, for detecting colorectal liver metastases in patients with suspected metastatic disease. Thirty-three consecutive patients with suspected colorectal liver metastases underwent MR imaging. Three image sets (MnDPDP, DWI and combined MnDPDP and DWI) were reviewed independently by two observers. Lesions were scored on a five-point scale for malignancy and the areas (Az) under the receiver operating characteristic curves were calculated for each observer and image set. The sensitivity and specificity for lesion detection were calculated for each image set and compared. There were 83 metastases, 49 cysts and 1 haemangioma. Using the combined set resulted in the highest diagnostic accuracy for both observers (Az = 0.94 and 0.96), with improved averaged sensitivity of lesion detection compared with the DWI set (p = 0.01), and a trend towards improved sensitivity compared with the MnDPDP set (p = 0.06). There was no difference in the averaged specificity using any of the three image sets (p > 0.5). Combination of MnDPDP MR imaging and DWI resulted in the highest diagnostic accuracy and can increase sensitivity without loss in specificity. (orig.)

  18. Automatic food detection in egocentric images using artificial intelligence technology

    Science.gov (United States)

    Our objective was to develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment. To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable devic...

  19. A bio-image sensor for simultaneous detection of multi-neurotransmitters.

    Science.gov (United States)

    Lee, You-Na; Okumura, Koichi; Horio, Tomoko; Iwata, Tatsuya; Takahashi, Kazuhiro; Hattori, Toshiaki; Sawada, Kazuaki

    2018-03-01

    We report here a new bio-image sensor for the simultaneous detection of the spatial and temporal distribution of multiple neurotransmitters. It consists of multiple enzyme-immobilized membranes on a 128 × 128 pixel array with read-out circuitry. Apyrase and acetylcholinesterase (AChE) are used as selective elements to recognize adenosine 5'-triphosphate (ATP) and acetylcholine (ACh), respectively. To enhance the spatial resolution, hydrogen ion (H+) diffusion barrier layers are deposited on top of the bio-image sensor, and their prevention capability is demonstrated. The results are used to design the spacing between the enzyme-immobilized pixels and the null H+ sensor so as to minimize undesired signal overlap caused by H+ diffusion. Using this bio-image sensor, we can obtain H+-diffusion-independent imaging of concentration gradients of ATP and ACh in real time. The sensing characteristics, such as sensitivity and limit of detection, are determined experimentally. With the proposed bio-image sensor it is possible to customize the monitoring of the activities of various neurochemicals by using different kinds of proton-consuming or proton-generating enzymes. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Transmission environmental scanning electron microscope with scintillation gaseous detection device

    International Nuclear Information System (INIS)

    Danilatos, Gerasimos; Kollia, Mary; Dracopoulos, Vassileios

    2015-01-01

    A transmission environmental scanning electron microscope using a scintillation gaseous detection device has been implemented. This corresponds to a transmission scanning electron microscope, but with the addition of a gaseous environment acting both as environmental and detection medium. A commercial low-vacuum machine has been employed together with appropriate modifications to the detection configuration. This involves controlled screening of various emitted signals in conjunction with a scintillation gaseous detection device already provided with the machine for regular surface imaging. Dark-field and bright-field imaging have been obtained, along with other detection conditions. Through a progressive series of modifications and tests, the theory and practice of a novel type of microscopy is briefly shown, ushering in further significant improvements and developments in electron microscopy as a whole. - Highlights: • Novel scanning transmission electron microscopy (STEM) with an environmental scanning electron microscope (ESEM), called TESEM. • Use of the gaseous detection device (GDD) in scintillation mode allows high-resolution bright- and dark-field imaging in the TESEM. • Novel approach towards a unification of vacuum and environmental conditions in both bulk/surface and transmission modes of electron microscopy.

  1. Automatic detection of the macula in retinal fundus images using seeded mode tracking approach.

    Science.gov (United States)

    Wong, Damon W K; Liu, Jiang; Tan, Ngan-Meng; Yin, Fengshou; Cheng, Xiangang; Cheng, Ching-Yu; Cheung, Gemmy C M; Wong, Tien Yin

    2012-01-01

    The macula is the part of the eye responsible for central, high-acuity vision. Detection of the macula is an important task in retinal image processing, as it is a landmark for subsequent disease assessment, such as for age-related macular degeneration. In this paper, we present an approach to automatically determine the macula centre in retinal fundus images. First, contextual information in the image is combined with a statistical model to obtain an approximate localization of the macula region of interest. Subsequently, we propose the use of a seeded mode tracking technique to locate the macula centre. The proposed approach is tested on a large dataset composed of 482 normal images and 162 glaucoma images from the ORIGA database and an additional 96 AMD images. The results show a ROI detection rate of 97.5% and 90.5% correct detection of the macula within 1/3 DD of a manual reference, which outperforms other current methods. These results are promising for the use of the proposed approach to locate the macula for the detection of macular diseases from retinal images.

  2. Off-site evaluation of liver lesion detection by Gd-BOPTA-enhanced MR imaging

    International Nuclear Information System (INIS)

    Gehl, H.B.; Bourne, M.; Grazioli, L.; Moeller, A.; Lodemann, K.P.

    2001-01-01

    The aim of this study was to determine the efficacy of Gd-BOPTA-enhanced MRI in liver lesion detection in comparison with unenhanced MRI and dynamic CT. The image sets of 148 of 151 patients enrolled in a multicenter German phase-III trial were evaluated by two independent radiologists unaffiliated with the investigating centers. Patients underwent a routine MRI protocol comprising T2- and T1-weighted spin-echo and T1-weighted gradient-echo (GE) sequences pre and 1 h post 0.1 mmol/kg Gd-BOPTA (Bracco-Byk Gulden, Konstanz, Germany). Additionally, a serial T1-weighted GE scan was performed after administration of the first half of the dose. All patients underwent dynamic contrast-enhanced CT. The evaluation was performed with regard to the number and size of lesions detected per patient by each modality or sequence. Furthermore, all pre CM and pre + post CM image sets were analyzed for number of lesions per patient. Both readers detected significantly more lesions in the contrast-enhanced image set compared with the unenhanced image set (32 and 39 %, respectively; p < 0.0001). While contrast-enhanced CT detected a similar number of lesions to unenhanced MRI, it was clearly inferior to contrast-enhanced MRI (reader 1: p = 0.0117; reader 2: p = 0.0225). Of the T1-weighted scans performed, the dynamic and late T1-weighted GE exams contributed most to the increased lesion detection rate (reader 1: p = 0.0007; reader 2: p = 0.0037). The size of the smallest lesion detected by means of MRI was significantly larger in the pre-CM image sets than in the pre + post CM image sets (reader 1: p = 0.001; reader 2: p < 0.0001). Gd-BOPTA-enhanced MRI detected significantly smaller lesions than contrast-enhanced CT (reader 1: p = 0.0117; reader 2: p = 0.0925). Gd-BOPTA-enhanced MR imaging improves liver lesion detection significantly over unenhanced MRI and dynamic CT. (orig.)

  3. Off-site evaluation of liver lesion detection by Gd-BOPTA-enhanced MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Gehl, H.B. [Inst. of Diagnostic Radiology, Medical Univ. of Luebeck (Germany); Bourne, M. [Dept. of Radiology, Univ. Hospital of Wales, Cardiff (United Kingdom); Grazioli, L. [Dept. of Radiology, Univ. of Brescia (Italy); Moeller, A. [MEDIDATA GmbH, Konstanz (Germany); Lodemann, K.P. [BRACCO-BYK GULDEN GmbH, Konstanz (Germany)

    2001-02-01

    The aim of this study was to determine the efficacy of Gd-BOPTA-enhanced MRI in liver lesion detection in comparison with unenhanced MRI and dynamic CT. The image sets of 148 of 151 patients enrolled in a multicenter German phase-III trial were evaluated by two independent radiologists unaffiliated with the investigating centers. Patients underwent a routine MRI protocol comprising T2- and T1-weighted spin-echo and T1-weighted gradient-echo (GE) sequences pre and 1 h post 0.1 mmol/kg Gd-BOPTA (Bracco-Byk Gulden, Konstanz, Germany). Additionally, a serial T1-weighted GE scan was performed after administration of the first half of the dose. All patients underwent dynamic contrast-enhanced CT. The evaluation was performed with regard to the number and size of lesions detected per patient by each modality or sequence. Furthermore, all pre CM and pre + post CM image sets were analyzed for number of lesions per patient. Both readers detected significantly more lesions in the contrast-enhanced image set compared with the unenhanced image set (32 and 39 %, respectively; p < 0.0001). While contrast-enhanced CT detected a similar number of lesions to unenhanced MRI, it was clearly inferior to contrast-enhanced MRI (reader 1: p = 0.0117; reader 2: p = 0.0225). Of the T1-weighted scans performed, the dynamic and late T1-weighted GE exams contributed most to the increased lesion detection rate (reader 1: p = 0.0007; reader 2: p = 0.0037). The size of the smallest lesion detected by means of MRI was significantly larger in the pre-CM image sets than in the pre + post CM image sets (reader 1: p = 0.001; reader 2: p < 0.0001). Gd-BOPTA-enhanced MRI detected significantly smaller lesions than contrast-enhanced CT (reader 1: p = 0.0117; reader 2: p = 0.0925). Gd-BOPTA-enhanced MR imaging improves liver lesion detection significantly over unenhanced MRI and dynamic CT. (orig.)

  4. Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks.

    Science.gov (United States)

    Zhong, Jiandan; Lei, Tao; Yao, Guangle

    2017-11-24

    Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on the sliding-window approach were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied to aerial image data because existing CNN-based models struggle with small-object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantages of deep and shallow convolutional layers, the first network performs well at locating small targets in aerial image data. The generated candidate regions are then fed into the second network for feature extraction and decision making. Comprehensive experiments were conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and the Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed.

  5. Cardiac tumours: non invasive detection and assessment by gated cardiac blood pool radionuclide imaging

    International Nuclear Information System (INIS)

    Pitcher, D.; Wainwright, R.; Brennand-Roper, D.; Deverall, P.; Sowton, E.; Maisey, M.

    1980-01-01

    Four patients with cardiac tumours were investigated by gated cardiac blood pool radionuclide imaging and echocardiography. Contrast angiocardiography was performed in three of the cases. Two left atrial tumours were detected by all three techniques. In one of these cases echocardiography alone showed additional mitral valve stenosis, but isotope imaging indicated tumour size more accurately. A large septal mass was detected by all three methods. In this patient echocardiography showed evidence of left ventricular outflow obstruction, confirmed at cardiac catheterisation, but gated isotope imaging provided a more detailed assessment of the abnormal cardiac anatomy. In the fourth case gated isotope imaging detected a large right ventricular tumour which had not been identified by echocardiography. Gated cardiac blood pool isotope imaging is a complementary technique to echocardiography for the non-invasive detection and assessment of cardiac tumours. (author)

  6. Detection of electrophysiology catheters in noisy fluoroscopy images.

    Science.gov (United States)

    Franken, Erik; Rongen, Peter; van Almsick, Markus; ter Haar Romeny, Bart

    2006-01-01

    Cardiac catheter ablation is a minimally invasive medical procedure to treat patients with heart rhythm disorders. It is useful to know the positions of the catheters and electrodes during the intervention, e.g. for the automation of cardiac mapping. Our goal is therefore to develop a robust image analysis method that can detect the catheters in X-ray fluoroscopy images. Our method uses steerable tensor voting in combination with a catheter-specific multi-step extraction algorithm. The evaluation on clinical fluoroscopy images shows that the extraction of the catheter tip in particular is successful and that the use of tensor voting accounts for a large increase in performance.

  7. Enhancement and denoising of mammographic images for breast disease detection

    International Nuclear Information System (INIS)

    Yazdani, S.; Yusof, R.; Karimian, A.; Hematian, A.; Yousefi, M.

    2012-01-01

    Over the past two decades, breast cancer has been one of the leading causes of death among women. In breast cancer research, mammographic imaging is being assessed as a potential tool for detecting breast disease and investigating response to chemotherapy. In the first stage of breast disease discovery, density measurement of the breast in mammographic images provides very useful information. Because of the important role of mammographic images, the need for accurate and robust automated image enhancement techniques is becoming clear. Mammographic images have some disadvantages, such as the high dependence of contrast on the way the image is acquired, weak distinction between cysts and tumors, intensity non-uniformity, and the presence of noise. These limitations make it difficult to detect typical signs such as masses and microcalcifications. For this reason, denoising and enhancing the quality of mammographic images is very important. The method used in this paper operates in the spatial domain; its input includes high-, intermediate- and even very low-contrast mammographic images, as judged by specialist physicians, while its output is processed images showing the input with higher quality, more contrast and more detail. In this research, 38 mammographic images were used. The results of the proposed method show details of abnormal zones and areas with defects, so that specialists can explore these zones more accurately, and they can serve as an index for cancer diagnosis. In this study, mammographic images are first converted into digital images; then, to increase spatial resolving power, their noise is reduced and their contrast consequently improved. The results demonstrate the effectiveness and efficiency of the proposed methods. (authors)

  8. Detection of High-Density Crowds in Aerial Images Using Texture Classification

    Directory of Open Access Journals (Sweden)

    Oliver Meynberg

    2016-06-01

    Full Text Available Automatic crowd detection in aerial images is certainly a useful source of information for preventing crowd disasters in large, complex scenarios of mass events. A number of publications employ regression-based methods for crowd counting and crowd density estimation. However, these methods work only when a correct manual count is available to serve as a reference. It is therefore the objective of this paper to detect high-density crowds in aerial images, where counting- or regression-based approaches would fail. We compare two texture-classification methodologies on a dataset of aerial image patches grouped into ranges of different crowd density. These methodologies are: (1) a Bag-of-Words (BoW) model with two alternative local features encoded as Improved Fisher Vectors, and (2) features based on a Gabor filter bank. Our results show that a classifier using either BoW or Gabor features can detect crowded image regions with 97% classification accuracy. In our tests of four classes of different crowd-density ranges, BoW-based features have 5%-12% better accuracy than Gabor features.
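
    A minimal sketch of the Gabor branch, assuming grayscale aerial patches: each patch is filtered with a small bank of Gabor filters and summarized by per-filter energy statistics, which can then feed any off-the-shelf classifier. The frequencies and orientation count are illustrative, not the paper's bank.

```python
# Sketch: Gabor filter-bank energy features for a grayscale patch.
import numpy as np
from skimage.filters import gabor

def gabor_features(patch, freqs=(0.1, 0.2, 0.3), n_theta=4):
    feats = []
    for f in freqs:
        for k in range(n_theta):
            real, imag = gabor(patch, frequency=f, theta=k * np.pi / n_theta)
            mag = np.hypot(real, imag)         # filter response magnitude
            feats += [mag.mean(), mag.std()]   # energy statistics per filter
    return np.asarray(feats)
```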

  9. Recursive estimation techniques for detection of small objects in infrared image data

    Science.gov (United States)

    Zeidler, J. R.; Soni, T.; Ku, W. H.

    1992-04-01

    This paper describes a recursive detection scheme for point targets in infrared (IR) images. Estimation of the background noise is done using a weighted autocorrelation matrix update method and the detection statistic is calculated using a recursive technique. A weighting factor allows the algorithm to have finite memory and deal with nonstationary noise characteristics. The detection statistic is created by using a matched filter for colored noise, using the estimated noise autocorrelation matrix. The relationship between the weighting factor, the nonstationarity of the noise and the probability of detection is described. Some results on one- and two-dimensional infrared images are presented.

  10. Lung Nodule Detection in CT Images using Neuro Fuzzy Classifier

    Directory of Open Access Journals (Sweden)

    M. Usman Akram

    2013-07-01

    Full Text Available Automated lung cancer detection using computer-aided diagnosis (CAD) is an important area in clinical applications. As manual nodule detection is very time-consuming and costly, computerized systems can be helpful for this purpose. In this paper, we propose a computerized system for lung nodule detection in CT scan images. The automated system consists of two stages: (i) lung segmentation and enhancement, and (ii) feature extraction and classification. The segmentation process separates the lung tissue from the rest of the image, and only the lung tissues under examination are considered candidate regions for detecting malignant nodules. A feature vector for possible abnormal regions is calculated, and the regions are classified using a neuro-fuzzy classifier. It is a fully automatic system that does not require any manual intervention, and experimental results show the validity of our system.

  11. Segmentation of the Clustered Cells with Optimized Boundary Detection in Negative Phase Contrast Images.

    Science.gov (United States)

    Wang, Yuliang; Zhang, Zaicheng; Wang, Huimin; Bi, Shusheng

    2015-01-01

    Cell image segmentation plays a central role in numerous biology studies and clinical applications. As a result, the development of cell image segmentation algorithms with high robustness and accuracy is attracting more and more attention. In this study, an automated cell image segmentation algorithm is developed to improve cell boundary detection and the segmentation of clustered cells for all cells in the field of view in negative phase contrast images. A new method combining thresholding with an edge-based active contour method is proposed to optimize cell boundary detection. To segment clustered cells, the geographic peaks of cell light intensity are utilized to detect the number and locations of the clustered cells. In this paper, the working principles of the algorithm are described. The influence of the parameters in cell boundary detection and of the selection of the threshold value on the final segmentation results is investigated. Finally, the proposed algorithm is applied to negative phase contrast images from different experiments, and its performance is evaluated. Results show that the proposed method achieves optimized cell boundary detection and highly accurate segmentation for clustered cells.

  12. Pornographic image detection with Gabor filters

    Science.gov (United States)

    Durrell, Kevan; Murray, Daniel J. C.

    2002-04-01

    across on the Internet. Identifying this type of pornographic picture of low image quality poses particular challenges for any detection software. This paper will address some of the challenges and hurdles we faced in designing and carrying out our experiments. It will also discuss the main results of our experiments, as well as some confounds that, at present, limit the effectiveness of our approach to identifying pornographic images, and some directions that may be taken in future research.

  13. Scintillator Based Coded-Aperture Imaging for Neutron Detection

    International Nuclear Information System (INIS)

    Hayes, Sean-C.; Gamage, Kelum-A-A.

    2013-06-01

    In this paper we assess the variations of neutron images using a series of Monte Carlo simulations. We study neutron images of the same neutron source at different source locations, using a scintillator-based coded-aperture system. The Monte Carlo simulations were conducted using the EJ-426 neutron scintillator detector. This type of detector has a low sensitivity to gamma rays and is therefore of particular use in a system with a source that emits a mixed radiation field. From the different source locations, several neutron images were produced and compared both qualitatively and quantitatively. This allows conclusions to be drawn on how well suited the scintillator-based coded-aperture neutron imaging system is to detecting various neutron source locations. This type of neutron imaging system can easily be used to identify and locate nuclear materials precisely. (authors)

  14. Detection of Fusarium in single wheat kernels using spectral Imaging

    NARCIS (Netherlands)

    Polder, G.; Heijden, van der G.W.A.M.; Waalwijk, C.; Young, I.T.

    2005-01-01

    Fusarium head blight (FHB) is a harmful fungal disease that occurs in small grains. Non-destructive detection of this disease is traditionally done using spectroscopy or image processing. In this paper the combination of these two in the form of spectral imaging is evaluated. Transmission spectral

  15. The ship edge feature detection based on high and low threshold for remote sensing image

    Science.gov (United States)

    Li, Xuan; Li, Shengyang

    2018-05-01

    In this paper, a method based on high and low thresholds is proposed to detect ship edge features, addressing the low accuracy caused by noise. The relationship between the human visual system and target features is analysed, and the ship target is determined by detecting its edge features. Firstly, a second-order differential method is used to enhance image quality. Secondly, the edge operator is improved: a high/low threshold contrast is introduced to separate edge from non-edge points, the edge points are treated as foreground and non-edge points as background, and image segmentation is used to achieve edge detection and remove false edges. Finally, the edge features are described based on the detection result, and the ship target is determined. The experimental results show that the proposed method effectively reduces the number of false edges and achieves high accuracy in remote-sensing ship edge detection.
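    The high/low-threshold scheme is closely related to Canny hysteresis thresholding; a minimal OpenCV sketch under that reading, with illustrative thresholds and a Laplacian stand-in for the second-order enhancement:

        import cv2

        img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
        # second-order (Laplacian) sharpening approximates the enhancement step
        sharp = cv2.addWeighted(img, 1.5, cv2.Laplacian(img, cv2.CV_8U), -0.5, 0)
        edges = cv2.Canny(sharp, threshold1=50, threshold2=150)  # low, high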

  16. Dual-energy CT of the brain: Comparison between DECT angiography-derived virtual unenhanced images and true unenhanced images in the detection of intracranial haemorrhage.

    Science.gov (United States)

    Bonatti, Matteo; Lombardo, Fabio; Zamboni, Giulia A; Pernter, Patrizia; Pozzi Mucelli, Roberto; Bonatti, Giampietro

    2017-07-01

    To evaluate the diagnostic performance of virtual non-contrast (VNC) images in detecting intracranial haemorrhages (ICHs), 67 consecutive patients with and 67 without ICH who underwent unenhanced brain CT and DECT angiography were included. Two radiologists independently evaluated VNC and true non-contrast (TNC) images for ICH presence and type. Inter-observer agreement for VNC and TNC image evaluation was calculated. Sensitivity and specificity of VNC images for ICH detection were calculated using Fisher's exact test. VNC and TNC images were compared for ICH extent (qualitatively and quantitatively) and conspicuity. On TNC images, 116 different haemorrhages were detected in 67 patients. Inter-observer agreement ranged from 0.98 to 1.00 for TNC images and from 0.86 to 1.00 for VNC images. VNC sensitivity ranged from 0.90 to 1.00, according to the different ICH types, and specificity from 0.97 to 1.00. Qualitatively, ICH extent was underestimated on VNC images in 11.9% of cases. Haemorrhage volume did not show statistically significant differences between VNC and TNC images. Mean haemorrhage conspicuity was significantly lower on VNC images than on TNC images for both readers (p < 0.001). VNC images are accurate for ICH detection; haemorrhages are less conspicuous on VNC images and their extent may be underestimated. • VNC images represent a reproducible tool for detecting ICH. • ICH can be identified on VNC images with high sensitivity and specificity. • Intracranial haemorrhages are less conspicuous on VNC images than on TNC images. • Intracranial haemorrhage extent may be underestimated on VNC images.

  17. Analysis of image heterogeneity using 2D Minkowski functionals detects tumor responses to treatment.

    Science.gov (United States)

    Larkin, Timothy J; Canuto, Holly C; Kettunen, Mikko I; Booth, Thomas C; Hu, De-En; Krishnan, Anant S; Bohndiek, Sarah E; Neves, André A; McLachlan, Charles; Hobson, Michael P; Brindle, Kevin M

    2014-01-01

    The acquisition of ever-increasing volumes of high resolution magnetic resonance imaging (MRI) data has created an urgent need to develop automated and objective image analysis algorithms that can assist in determining tumor margins, diagnosing tumor stage, and detecting treatment response. We have shown previously that Minkowski functionals, which are precise morphological and structural descriptors of image heterogeneity, can be used to enhance the detection, in T1-weighted images, of a targeted Gd(3+)-chelate-based contrast agent for detecting tumor cell death. We have used Minkowski functionals here to characterize heterogeneity in T2-weighted images acquired before and after drug treatment, and obtained without contrast agent administration. We show that Minkowski functionals can be used to characterize the changes in image heterogeneity that accompany treatment of tumors with a vascular disrupting agent, combretastatin A4-phosphate, and with a cytotoxic drug, etoposide. Parameterizing changes in the heterogeneity of T2-weighted images can be used to detect early responses of tumors to drug treatment, even when there is no change in tumor size. The approach provides a quantitative and therefore objective assessment of treatment response that could be used with other types of MR image and also with other imaging modalities. Copyright © 2013 Wiley Periodicals, Inc.
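    For intuition, the three 2D Minkowski functionals of a thresholded image (area, perimeter, Euler characteristic) can be computed with scikit-image; sweeping the threshold to build heterogeneity curves is an assumed usage, not the authors' exact parameterization.

        from skimage import measure

        def minkowski_2d(image, threshold):
            binary = image > threshold
            area = binary.sum()
            perimeter = measure.perimeter(binary)
            euler = measure.euler_number(binary, connectivity=2)
            return area, perimeter, euler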

  18. Moving object detection using dynamic motion modelling from UAV aerial images.

    Science.gov (United States)

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2014-01-01

    Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue because proper motion estimation is often not considered. Existing approaches do not use motion-based pixel intensity measurement to detect moving objects robustly, and current research mostly depends on either frame differencing or a segmentation approach alone. This research has two main purposes: first, to develop a new motion model called DMM (dynamic motion model); and second, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation), which embeds frame differencing, together with the DMM. The DMM provides effective search windows based on the highest pixel intensity, so that SUED segments only specific areas for moving objects rather than searching the whole frame. Experimental fusion of DMM and SUED extracts moving objects faithfully at each stage of the proposed scheme, and the experimental results demonstrate the validity of the proposed methodology.
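    A sketch of the frame-difference-plus-edge-dilation core that SUED builds on; the DMM search-window logic is omitted and all parameters are illustrative.

        import cv2
        import numpy as np

        def moving_object_mask(frame_a, frame_b, diff_thresh=25):
            diff = cv2.absdiff(frame_a, frame_b)          # grayscale frames
            _, binary = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            edges = cv2.Canny(binary, 50, 150)
            kernel = np.ones((5, 5), np.uint8)
            return cv2.dilate(edges, kernel, iterations=1)  # close object contours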

  19. DETECTION OF BARCHAN DUNES IN HIGH RESOLUTION SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    M. A. Azzaoui

    2016-06-01

    Full Text Available Barchan dunes are the fastest-moving sand dunes in the desert. We developed a process to detect barchan dunes in high-resolution satellite images. It consists of three steps. First, we enhance the image using histogram equalization and noise-reduction filters. The second step eliminates the parts of the image whose texture differs from that of barchan dunes: using supervised learning, we tested a coarse-to-fine textural analysis based on the Kolmogorov-Smirnov test and Youden's J-statistic on the co-occurrence matrix. The output is a mask used in the next step to reduce the search area. In the third step we slide a window over the mask and check SURF features with an SVM to obtain barchan dune candidates; detected barchan dunes are the fusion of overlapping candidates. The results of this approach were very satisfying in both processing time and precision.
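    A hedged sketch of the texture-screening idea: a two-sample Kolmogorov-Smirnov test compares a window against a reference dune patch. Comparing raw intensities is a simplification of the paper's co-occurrence-based test, and alpha is an assumption.

        from scipy.stats import ks_2samp

        def dune_like(window, reference_patch, alpha=0.05):
            _, p_value = ks_2samp(window.ravel(), reference_patch.ravel())
            return p_value > alpha  # not significantly different from dune texture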

  20. Recent development of fluorescent imaging for specific detection of tumors

    International Nuclear Information System (INIS)

    Nakata, Eiji; Morii, Takashi; Uto, Yoshihiro; Hori, Hitoshi

    2011-01-01

    Recent studies on fluorescent imaging for the specific detection of tumors are reviewed here, covering strategies based on molecular targeting, metabolic specificity, and hypoxia. One instance is a conjugate of an antibody and a pH-activatable fluorescent ligand, which binds specifically to tumor cells, is internalized into cellular lysosomes where the pH is low, and becomes fluorescent only in viable tumor cells. In the case of metabolic specificity, excessive loading with 5-aminolevulinic acid, the precursor of protoporphyrin IX (ppIX), makes tumors observable in red because of their low activity in converting ppIX to heme B: ppIX emits red fluorescence (585 nm) when excited by blue light at 410 nm. Similarly, imaging with indocyanine green, which accumulates in hepatoma cells, has been reported to succeed in detecting small lesions and metastases when the dye is administered during surgery. Reductive reactions predominate under tumor hypoxia, a feature usable for imaging: conjugates of nitroimidazole and fluorescent dye have been reported to successfully image tumors via nitro reduction. The authors' UTX-12 is a non-fluorescent nitroaromatic derivative of the pH-sensitive fluorescent dye seminaphtharhodafluor (SNARF), designed so that the nitro group, the hypoxia-responding sensor, is reduced under tumor hypoxia and the aromatic moiety is then cleaved to release free SNARF. The use of hypoxia-inducible factor-1 (HIF-1) for imaging has also been widely reported. Overall, studies on fluorescent imaging for the specific detection of tumors are mostly at a fundamental stage, but their future is promising alongside advances in related technologies such as fluorescence endoscopy and multimodal imaging. (author)

  1. Change detection of medical images using dictionary learning techniques and principal component analysis.

    Science.gov (United States)

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-07-01

    Automatic change detection methods for identifying changes in serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and in brain imaging in particular, include many preprocessing steps and rely mostly on statistical analysis of magnetic resonance imaging (MRI) scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques have been used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. We present an improved version of the EigenBlockCD algorithm, named EigenBlockCD-2. The EigenBlockCD-2 algorithm performs an initial global registration and identifies the changes between serial MR images of the brain. Blocks of pixels from a baseline scan are used to train local dictionaries to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of the data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between the [Formula: see text] and [Formula: see text] norms as two possible similarity measures in the improved EigenBlockCD-2 algorithm, and show the advantages of the [Formula: see text] norm over the [Formula: see text] norm both theoretically and numerically. We also demonstrate the performance of the new EigenBlockCD-2 algorithm for detecting changes in MR images and compare our results with those in the recent literature. Experimental results with both simulated and real MRI scans show that our improved EigenBlockCD-2 algorithm outperforms previous methods, detecting clinical changes while ignoring changes due to the patient's position and other acquisition artifacts.
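    A sketch of the PCA-reduced local-dictionary idea: blocks from the baseline scan around a location form a dictionary, and a follow-up block with a large residual after projection onto the leading principal components is flagged as change. Block handling and the component count are illustrative assumptions.

        import numpy as np

        def block_change_score(baseline_blocks, followup_block, n_components=8):
            # baseline_blocks: (n_blocks, pixels); followup_block: (pixels,)
            mean = baseline_blocks.mean(axis=0)
            _, _, vt = np.linalg.svd(baseline_blocks - mean, full_matrices=False)
            basis = vt[:n_components]                  # local PCA dictionary
            d = followup_block - mean
            residual = d - basis.T @ (basis @ d)       # part the dictionary misses
            return np.linalg.norm(residual)            # large norm => likely change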

  2. Change detection in multitemporal synthetic aperture radar images using dual-channel convolutional neural network

    Science.gov (United States)

    Liu, Tao; Li, Ying; Cao, Ying; Shen, Qiang

    2017-10-01

    This paper proposes a dual-channel convolutional neural network (CNN) model designed for change detection in SAR images, aiming at higher detection accuracy and a lower misclassification rate. The network contains two parallel CNN channels, which extract deep features from two multitemporal SAR images. For comparison and validation, the proposed method is tested along with other change detection algorithms on both simulated SAR images and real-world SAR images captured by different sensors. The experimental results demonstrate that the presented method outperforms state-of-the-art techniques by a considerable margin.
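    A minimal PyTorch sketch of a dual-channel CNN of this flavour: two parallel convolutional branches process the two multitemporal SAR patches and a small head classifies changed versus unchanged. Layer sizes are illustrative, not the authors' architecture.

        import torch
        import torch.nn as nn

        class DualChannelCNN(nn.Module):
            def __init__(self):
                super().__init__()
                def branch():
                    return nn.Sequential(
                        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.MaxPool2d(2),
                        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1))
                self.branch_t1, self.branch_t2 = branch(), branch()
                self.head = nn.Linear(64, 2)  # changed / unchanged

            def forward(self, x1, x2):
                f1 = self.branch_t1(x1).flatten(1)
                f2 = self.branch_t2(x2).flatten(1)
                return self.head(torch.cat([f1, f2], dim=1))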

  3. Foreign Object Detection by Sub-Terahertz Quasi-Bessel Beam Imaging

    Directory of Open Access Journals (Sweden)

    Hyang Sook Chun

    2012-12-01

    Full Text Available Food quality monitoring, particularly foreign object detection, has recently become a critical issue for the food industry. In contrast to X-ray imaging, terahertz imaging can provide a safe, ionizing-radiation-free, nondestructive inspection method for foreign object sensing. In this work, a quasi-Bessel beam (QBB), known to be nondiffracting, was generated by a conical dielectric lens to detect foreign objects in food samples. Using numerical evaluation via the finite-difference time-domain (FDTD) method, the beam profiles of a QBB were evaluated and compared with the results obtained via analytical calculation and experimental characterization (knife-edge method, point-scanning method). The FDTD method enables a more precise estimation of the beam profile. Foreign objects in food samples, namely crickets, were then detected with the QBB, which had a deep focus and a high spatial resolution at 210 GHz. Transmitted images obtained experimentally at sub-terahertz frequencies using a Gaussian beam from a conventional lens were compared with those obtained using a QBB generated by an axicon.

  4. Mixed Total Variation and L1 Regularization Method for Optical Tomography Based on Radiative Transfer Equation

    Directory of Open Access Journals (Sweden)

    Jinping Tang

    2017-01-01

    Full Text Available Optical tomography is an emerging and important molecular imaging modality. The aim of optical tomography is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct optical coefficients, such as total variation (TV) regularization and L1 regularization. In order to better reconstruct piecewise-constant and sparse coefficient distributions, the TV and L1 norms are combined as the regularization. The forward problem is discretized with the discontinuous Galerkin method in the spatial variable and the finite element method in the angular variable. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt-type method equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computational efficiency. By comparison with other imaging reconstruction methods based on TV and L1 regularization alone, the simulation results show the validity and efficiency of the proposed method.
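    For intuition, the split Bregman treatment of the L1 term reduces to soft thresholding; below are that prox operator and an anisotropic TV norm, both standard building blocks rather than the paper's full RTE solver.

        import numpy as np

        def soft_threshold(x, t):
            """prox of t*||x||_1: shrink each entry toward zero by t."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def tv_norm(img):
            """Anisotropic total variation of a 2D image."""
            return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()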

  5. In Vivo Dual Fluorescence Imaging to Detect Joint Destruction.

    Science.gov (United States)

    Cho, Hongsik; Bhatti, Fazal-Ur-Rehman; Lee, Sangmin; Brand, David D; Yi, Ae-Kyung; Hasty, Karen A

    2016-10-01

    Diagnosis of cartilage damage in the early stages of arthritis is vital to impede the progression of disease. In this regard, considerable progress has been made in near-infrared fluorescence (NIRF) optical imaging techniques. Arthritis can develop through various mechanisms, but one of the main contributors is the production of matrix metalloproteinases (MMPs), enzymes that can degrade components of the extracellular matrix. In particular, MMP-1 and MMP-13 play major roles in rheumatoid arthritis and osteoarthritis because they enhance collagen degradation in the arthritic process. We present here a novel NIRF imaging strategy that can determine MMP activity and cartilage damage simultaneously by detecting exposed type II collagen in cartilage tissue. In this study, a retro-orbital injection of mixed fluorescent dyes, MMPSense 750 FAST (MMP750) dye and Alexa Fluor 680-conjugated monoclonal mouse antibody immunoreactive to type II collagen, was administered to arthritic mice. The two dyes were detected with intensities that varied with the degree of joint destruction in the animal. Thus, our dual fluorescence imaging method can detect cartilage damage as well as MMP activity simultaneously in early-stage arthritis. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  6. CT Image Reconstruction in a Low Dimensional Manifold

    OpenAIRE

    Cong, Wenxiang; Wang, Ge; Yang, Qingsong; Hsieh, Jiang; Li, Jia; Lai, Rongjie

    2017-01-01

    Regularization methods are commonly used in X-ray CT image reconstruction. Different regularization methods reflect the characterization of different prior knowledge of images. In a recent work, a new regularization method called a low-dimensional manifold model (LDMM) is investigated to characterize the low-dimensional patch manifold structure of natural images, where the manifold dimensionality characterizes structural information of an image. In this paper, we propose a CT image reconstruc...

  7. A novel ship CFAR detection algorithm based on adaptive parameter enhancement and wake-aided detection in SAR images

    Science.gov (United States)

    Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun

    2018-03-01

    Synthetic aperture radar (SAR) is an indispensable and useful method for marine monitoring. With the growing number of SAR sensors, high-resolution images can be acquired that contain more target structure information, such as spatial detail. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detector to highlight targets; the whole method operates on APT-domain values. Firstly, the image is mapped into the new transform domain. Secondly, false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Thirdly, the ship pixels are replaced by homogeneous sea pixels, and the enhanced image is then processed by the Niblack algorithm to obtain a binary wake image. Finally, the normalized Hough transform (NHT) is used to detect wakes in the binary image as verification of the presence of ships. Experiments on real SAR images validate that the proposed transform enhances the target structure and improves the contrast of the image. The algorithm performs well in ship and ship-wake detection.
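    A sketch of plain cell-averaging CFAR, the building block behind such detectors: local clutter is estimated from a ring of training cells around each pixel (excluding the guard region) and pixels exceeding a scaled estimate are flagged. The paper's APT-domain mapping and wake stages are not reproduced; window sizes and scale are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def ca_cfar(image, train=16, guard=4, scale=3.0):
            image = image.astype(float)
            n_big = (2 * train + 1) ** 2
            n_small = (2 * guard + 1) ** 2
            sum_big = uniform_filter(image, size=2 * train + 1) * n_big
            sum_small = uniform_filter(image, size=2 * guard + 1) * n_small
            clutter = (sum_big - sum_small) / (n_big - n_small)  # training-ring mean
            return image > scale * clutter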

  8. Dynamical scene analysis with a moving camera: mobile targets detection system

    International Nuclear Information System (INIS)

    Hennebert, Christine

    1996-01-01

    This thesis deals with the detection of moving objects in monocular image sequences acquired with a mobile camera. We propose a method able to detect small moving objects in visible or infrared images of real outdoor scenes. In order to detect objects of very low apparent motion, we consider analysis over a large temporal interval. We compensate for the dominant motion due to the camera displacement over several consecutive images, so as to form a sub-sequence of images for which the camera appears virtually static. We have also developed a new approach for extracting the different layers of a real scene, to deal with cases where the 2D motion due to the camera displacement cannot be globally compensated for. To this end, we use a hierarchical model with two levels: a local merging step and a global one. An appropriate temporal filtering is then applied to the registered image sub-sequence to enhance signals corresponding to moving objects. The detection issue is stated as a labeling problem within a statistical regularization framework based on Markov random fields. Our method has been validated on numerous real image sequences depicting complex outdoor scenes. Finally, the feasibility of an integrated circuit for moving object detection has been demonstrated; this circuit could lead to an ASIC. (author) [fr]

  9. Optical tomographic imaging for breast cancer detection

    Science.gov (United States)

    Cong, Wenxiang; Intes, Xavier; Wang, Ge

    2017-09-01

    Diffuse optical breast imaging utilizes near-infrared (NIR) light propagation through tissues to assess their optical properties and identify abnormal tissue. This optical imaging approach is sensitive, cost-effective, and does not involve any ionizing radiation. However, image reconstruction in diffuse optical tomography (DOT) is a nonlinear inverse problem and suffers from severe ill-posedness due to data noise, NIR light scattering, and measurement incompleteness. An image reconstruction method is proposed for the detection of breast cancer which splits the problem into the localization of abnormal tissues and the quantification of absorption variations. The localization of abnormal tissues is performed with a well-posed optimization model, solved via a differential evolution method to achieve a stable reconstruction. The quantification of abnormal absorption is then determined in localized regions of relatively small extent, in which a potential tumor might be. Consequently, the number of unknown absorption variables can be greatly reduced to overcome the underdetermined nature of DOT. Numerical simulation experiments verify the merits of the proposed method, and the results show that the reconstruction is stable and accurate for the identification of abnormal tissues, and robust against measurement noise.
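    The localization stage can be sketched with SciPy's differential evolution fitting a small parameter vector (inclusion centre and radius); the Gaussian-blob forward model below is a toy stand-in for the diffuse-optics forward problem, so every detail here is an assumption.

        import numpy as np
        from scipy.optimize import differential_evolution

        grid = np.stack(np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32)), -1)

        def forward(params):
            """Toy stand-in for the DOT forward model: a Gaussian absorption blob."""
            cx, cy, r = params
            return np.exp(-((grid[..., 0] - cx) ** 2 + (grid[..., 1] - cy) ** 2) / (2 * r ** 2))

        measured = forward([0.6, 0.4, 0.1])          # synthetic measurements
        result = differential_evolution(
            lambda p: np.linalg.norm(forward(p) - measured),
            bounds=[(0, 1), (0, 1), (0.02, 0.3)], seed=0)
        print(result.x)                              # recovered centre and radius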

  10. Deep learning for the detection of barchan dunes in satellite images

    Science.gov (United States)

    Azzaoui, A. M.; Adnani, M.; Elbelrhiti, H.; Chaouki, B. E. K.; Masmoudi, L.

    2017-12-01

    Barchan dunes are known to be the fastest-moving sand dunes in deserts, as they form under unidirectional winds and limited sand supply over a firm coherent basement (Elbelrhiti and Hargitai, 2015). They have been studied in the context of natural hazard monitoring, as they can threaten human activities and infrastructure, and as a natural phenomenon occurring on other planetary landforms such as Mars or Venus (Bourke et al., 2010). Our region of interest was located in a desert region in the south of Morocco, in a barchan dune corridor next to the town of Tarfaya. This region, part of the Sahara desert, contains thousands of barchans, which limits the number of dunes that can be studied during field missions. We therefore chose to monitor barchan dunes with satellite imagery, a complementary approach to field missions. We collected data from the Sentinel platform (https://scihub.copernicus.eu/dhus/) and used a machine learning method as the basis for detecting barchan dune positions in the satellite image. We trained a deep learning model on a mid-sized dataset containing image blocks of barchan dunes and of other desert features, collected by cropping and annotating the source image. During testing, we browsed the satellite image with a sliding window that evaluated each block and produced a probability map; a threshold on this map exposed the locations of barchan dunes. We used a subsample of the data to train the model and gradually incremented the size of the training set to refine results and avoid overfitting. The positions of barchan dunes were successfully detected, and deep learning proved an effective method for this application. Sentinel-2 images were chosen for their availability and good temporal resolution, which will allow the tracking of barchan dunes in future work. While Sentinel images had sufficient spatial resolution for the

  11. CEST ANALYSIS: AUTOMATED CHANGE DETECTION FROM VERY-HIGH-RESOLUTION REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    M. Ehlers

    2012-08-01

    Full Text Available Fast detection, visualization and assessment of change in areas of crisis or catastrophe are important requirements for the coordination and planning of relief. With the availability of new satellite and/or airborne sensors with very high spatial resolution (e.g., WorldView, GeoEye), new remote sensing data are available for better detection, delineation and visualization of change. For automated change detection, a large number of algorithms have been proposed and developed. From previous studies, however, it is evident that to date no single algorithm has the potential to be a reliable change detector for all possible scenarios. This paper introduces the Combined Edge Segment Texture (CEST) analysis, a decision-tree-based cooperative suite of algorithms for automated change detection that is especially designed for the new generation of satellites with very high spatial resolution. The method incorporates frequency-based filtering, texture analysis, and image segmentation techniques. For the frequency analysis, different band-pass filters can be applied to identify the relevant frequency information for change detection. After transforming the multitemporal images via a fast Fourier transform (FFT) and applying the most suitable band-pass filter, different methods are available to extract changed structures: differencing and correlation in the frequency domain, and correlation and edge detection in the spatial domain. The best results are obtained using edge extraction. For the texture analysis, different 'Haralick' parameters can be calculated (e.g., energy, correlation, contrast, inverse distance moment), with 'energy' so far providing the most accurate results. These algorithms are combined with a prior segmentation of the image data as well as with morphological operations for a final binary change result. A rule-based combination (CEST) of the change algorithms is applied to calculate the probability of change for a particular location. CEST
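    A minimal sketch of the frequency-domain step: transform an image with the FFT, keep an annular band of spatial frequencies, and invert. Band limits are illustrative, and CEST's decision-tree fusion is not reproduced.

        import numpy as np

        def band_pass(image, low, high):
            f = np.fft.fftshift(np.fft.fft2(image))
            rows, cols = image.shape
            y, x = np.ogrid[:rows, :cols]
            r = np.hypot(y - rows / 2, x - cols / 2)   # radial frequency
            f[(r < low) | (r > high)] = 0              # keep the annular band
            return np.real(np.fft.ifft2(np.fft.ifftshift(f)))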

  12. Low-dose 4D cone-beam CT via joint spatiotemporal regularization of tensor framelet and nonlocal total variation

    Science.gov (United States)

    Han, Hao; Gao, Hao; Xing, Lei

    2017-08-01

    Excessive radiation exposure is still a major concern in 4D cone-beam computed tomography (4D-CBCT) due to its prolonged scanning duration. Radiation dose can be effectively reduced by either under-sampling the x-ray projections or reducing the x-ray flux. However, 4D-CBCT reconstruction under such low-dose protocols is prone to image artifacts and noise. In this work, we propose a novel joint regularization-based iterative reconstruction method for low-dose 4D-CBCT. To tackle the under-sampling problem, we employ spatiotemporal tensor framelet (STF) regularization to take advantage of the spatiotemporal coherence of the patient anatomy in 4D images. To simultaneously suppress the image noise caused by photon starvation, we also incorporate spatiotemporal nonlocal total variation (SNTV) regularization to make use of the nonlocal self-recursiveness of anatomical structures in the spatial and temporal domains. Under the joint STF-SNTV regularization, the proposed iterative reconstruction approach is evaluated first using two digital phantoms and then using physical experiment data in the low-dose context of both under-sampled and noisy projections. Compared with existing approaches via either STF or SNTV regularization alone, the presented hybrid approach achieves improved image quality, and is particularly effective for the reconstruction of low-dose 4D-CBCT data that are not only sparse but noisy.

  13. An effective detection algorithm for region duplication forgery in digital images

    Science.gov (United States)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are very common and easy to use these days. This situation enables forgeries in which information is added to or removed from digital images. In order to detect such forgeries, in particular region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and the wavelet transform is applied for dimension reduction. Each block is processed by the Fourier transform and represented by circular regions, and four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to the comparison metric results. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circular block architecture.
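    A sketch of the block-matching core: fixed-size blocks are reduced with a single-level DWT (PyWavelets), the feature vectors are lexicographically sorted, and near-identical sorted neighbours are reported as duplication candidates. The paper's Fourier/circle-region features are simplified away; block size and tolerance are illustrative.

        import numpy as np
        import pywt

        def duplicate_candidates(image, block=16, tol=1e-3):
            feats, positions = [], []
            for i in range(0, image.shape[0] - block + 1, block):
                for j in range(0, image.shape[1] - block + 1, block):
                    approx, _ = pywt.dwt2(image[i:i + block, j:j + block], "haar")
                    feats.append(approx.ravel())
                    positions.append((i, j))
            feats = np.array(feats)
            order = np.lexsort(feats.T[::-1])          # lexicographic row sort
            pairs = []
            for a, b in zip(order[:-1], order[1:]):    # compare sorted neighbours
                if np.linalg.norm(feats[a] - feats[b]) < tol:
                    pairs.append((positions[a], positions[b]))
            return pairs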

  14. Ship Detection in Optical Satellite Image Based on RX Method and PCAnet

    Science.gov (United States)

    Shao, Xiu; Li, Huali; Lin, Hui; Kang, Xudong; Lu, Ting

    2017-12-01

    In this paper, we present a novel method for ship detection in optical satellite images based on the Reed-Xiaoli (RX) method and the principal component analysis network (PCAnet). The proposed method consists of three steps. First, the spatially adjacent pixels in the optical image are arranged into a vector, transforming the optical image into a 3D cube. This integrates the contextual information of spatially adjacent pixels to magnify the discrimination between ships and background. Second, the RX anomaly detection method is adopted to preliminarily extract ship candidates from the 3D cube. Finally, real ships are confirmed among the candidates by applying the PCAnet and a support vector machine (SVM). Specifically, the PCAnet is a simple deep learning network exploited to perform feature extraction, while the SVM carries out feature pooling and decision making. Experimental results demonstrate that our approach is effective in discriminating between ships and false alarms and achieves good ship detection performance.
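    The RX step has a classic closed form: the Mahalanobis distance of each pixel vector from the global background statistics. A numpy sketch, assuming the 3D-cube construction (stacking spatial neighbours into each pixel vector) has already been done:

        import numpy as np

        def rx_scores(cube):
            # cube: (rows, cols, bands), bands = stacked neighbour intensities
            x = cube.reshape(-1, cube.shape[-1]).astype(float)
            d = x - x.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(x, rowvar=False))
            scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)  # Mahalanobis^2
            return scores.reshape(cube.shape[:2])    # high score => anomaly/ship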

  15. Grid-texture mechanisms in human vision: Contrast detection of regular sparse micro-patterns requires specialist templates.

    Science.gov (United States)

    Baker, Daniel H; Meese, Tim S

    2016-07-27

    Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50-100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.

  16. SQUID-detected magnetic resonance imaging in microtesla magnetic fields

    International Nuclear Information System (INIS)

    McDermott, Robert; Kelso, Nathan; Lee, SeungKyun; Moessle, Michael; Mueck, Michael; Myers, Whittier; Haken, Bernard ten; Seton, H.C.; Trabesinger, Andreas H.; Pines, Alex; Clarke, John

    2003-01-01

    We describe studies of nuclear magnetic resonance (NMR) spectroscopy and magnetic resonance imaging (MRI) of liquid samples at room temperature in microtesla magnetic fields. The nuclear spins are prepolarized in a strong transient field. The magnetic signals generated by the precessing spins, which range in frequency from tens of Hz to several kHz, are detected by a low-transition temperature dc SQUID (Superconducting QUantum Interference Device) coupled to an untuned, superconducting flux transformer configured as an axial gradiometer. The combination of prepolarization and frequency-independent detector sensitivity results in a high signal-to-noise ratio and high spectral resolution (∼1 Hz) even in grossly inhomogeneous magnetic fields. In the NMR experiments, the high spectral resolution enables us to detect the 10-Hz splitting of the spectrum of protons due to their scalar coupling to a 31P nucleus. Furthermore, the broadband detection scheme combined with a non-resonant field-reversal spin echo allows the simultaneous observation of signals from protons and 31P nuclei, even though their NMR resonance frequencies differ by a factor of 2.5. We extend our methodology to MRI in microtesla fields, where the high spectral resolution translates into high spatial resolution. We demonstrate two-dimensional images of a mineral oil phantom and slices of peppers, with a spatial resolution of about 1 mm. We also image an intact pepper using slice selection, again with 1-mm resolution. In further experiments we demonstrate T1-contrast imaging of a water phantom, some parts of which were doped with a paramagnetic salt to reduce the longitudinal relaxation time T1. Possible applications of this MRI technique include screening for tumors and integration with existing multichannel SQUID systems for brain imaging

  17. Toward robust high resolution fluorescence tomography: a hybrid row-action edge preserving regularization

    Science.gov (United States)

    Behrooz, Ali; Zhou, Hao-Min; Eftekhar, Ali A.; Adibi, Ali

    2011-02-01

    Depth-resolved localization and quantification of fluorescence distribution in tissue, called fluorescence molecular tomography (FMT), is highly ill-conditioned, as depth information must be extracted from a limited number of surface measurements. Inverse solvers resort to regularization algorithms that penalize the Euclidean norm of the solution to overcome ill-posedness. While these regularization algorithms offer good accuracy, their smoothing effects result in continuous distributions that lack the high-frequency edge-type features of the actual fluorescence distribution and hence limit the resolution offered by FMT. We propose an algorithm that penalizes the total variation (TV) norm of the solution to preserve sharp transitions and high-frequency components in the reconstructed fluorescence map while overcoming ill-posedness. The hybrid algorithm is composed of two levels: (1) an algebraic reconstruction technique (ART), performed on FMT data for fast recovery of a smooth solution that serves as an initial guess for the iterative TV regularization; (2) a time-marching TV regularization algorithm, inspired by Rudin-Osher-Fatemi TV image restoration, performed on the initial guess to further enhance the resolution and accuracy of the reconstruction. The performance of the proposed method in resolving fluorescent tubes inserted in a liquid tissue phantom imaged by a non-contact CW trans-illumination FMT system is studied and compared to conventional regularization schemes. The proposed method is observed to perform better in resolving fluorescence inclusions at greater depths.
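    A sketch of the time-marching TV flavour used in the second level, written as explicit Rudin-Osher-Fatemi-style denoising; the step size, iteration count, and fidelity weight are illustrative, and the FMT forward model is not included.

        import numpy as np

        def rof_denoise(f, lam=0.1, dt=0.1, iters=100, eps=1e-8):
            u = f.astype(float).copy()
            for _ in range(iters):
                ux = np.gradient(u, axis=1)
                uy = np.gradient(u, axis=0)
                mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
                div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
                u += dt * (div - lam * (u - f))   # TV flow plus data fidelity
            return u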

  18. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Dong Seop Kim

    2018-03-01

    Full Text Available Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras can mitigate this problem, but such instruments require a separate NIR illuminator or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors, such as illumination change and background brightness, make detection difficult. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results on a database built from various outdoor surveillance camera environments, and on the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  19. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

    This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....

  20. Detection of Blood Vessels in Color Fundus Images using a Local Radon Transform

    Directory of Open Access Journals (Sweden)

    Reza Pourreza

    2010-09-01

    Full Text Available Introduction: This paper describes a method for the automatic detection of blood vessels in color fundus images which utilizes two main tools: image partitioning and the local Radon transform. Material and Methods: The input images are first divided into overlapping windows and the Radon transform is applied to each. The maximum of the Radon transform in each window corresponds to a probable sub-vessel; to verify it, the maximum is compared with a predefined threshold. Verified sub-vessels are reconstructed using the Radon transform information, and all detected and reconstructed sub-vessels are finally combined to form the vessel tree. Results: The algorithm's performance was evaluated numerically by applying it to the 40 images of the DRIVE database, a standard retinal image database in which the vessels were extracted manually by two physicians; it is used to test and compare available vessel detection algorithms for color fundus images. By comparing the output of the algorithm with the manual results, the TPR and FPR were calculated for each image, and their averages were used to plot the ROC curve. Discussion and Conclusion: Comparison of this algorithm's ROC curve with those of other algorithms demonstrates its high accuracy. Besides accuracy, the integral-based nature of the Radon transform makes the algorithm robust against noise.
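    A sketch of the windowed Radon idea: slide a window across the image, take its Radon transform with scikit-image, and keep windows whose peak response exceeds a threshold as probable sub-vessels. Window size, stride, angle set, and threshold are illustrative.

        import numpy as np
        from skimage.transform import radon

        def detect_subvessels(image, win=15, stride=8, thresh=50.0):
            angles = np.arange(0.0, 180.0, 5.0)
            hits = []
            for i in range(0, image.shape[0] - win, stride):
                for j in range(0, image.shape[1] - win, stride):
                    sino = radon(image[i:i + win, j:j + win], theta=angles, circle=False)
                    if sino.max() > thresh:
                        k = np.unravel_index(sino.argmax(), sino.shape)
                        hits.append((i, j, angles[k[1]]))  # window and line angle
            return hits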