WorldWideScience

Sample records for regularization stabilizes pre-images

  1. Input Space Regularization Stabilizes Pre-images for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2009-01-01

    Solution of the pre-image problem is key to efficient nonlinear de-noising using kernel Principal Component Analysis. Pre-image estimation is inherently ill-posed for typical kernels used in applications and consequently the most widely used estimation schemes lack stability. For de...
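
    The input-space regularization idea can be illustrated with the classic fixed-point pre-image iteration for an RBF kernel (Mika et al.), augmented with a penalty pulling the estimate toward the noisy observation. The following is a minimal numpy sketch under our own simplifying assumptions; the penalty weight lam and the anchor point x0 are illustrative, not the authors' exact formulation:

      import numpy as np

      def regularized_preimage(X, gamma, x0, sigma=1.0, lam=0.1, n_iter=100):
          """Pre-image z of a de-noised feature-space point
          Psi = sum_i gamma_i * phi(x_i), for an RBF kernel of width sigma.
          The lam-term regularizes in input space by pulling z toward the
          noisy input x0, stabilizing the otherwise ill-posed iteration."""
          z = x0.copy()
          for _ in range(n_iter):
              k = gamma * np.exp(-np.sum((X - z) ** 2, axis=1) / (2 * sigma ** 2))
              z_new = (k @ X + lam * x0) / (k.sum() + lam)
              if np.linalg.norm(z_new - z) < 1e-8:
                  return z_new
              z = z_new
          return z

    For lam = 0 this reduces to the standard (unstable) fixed-point scheme; a positive lam keeps the denominator bounded away from zero and the iterates near the observation.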

  2. Regularized Pre-image Estimation for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    The main challenge in de-noising by kernel Principal Component Analysis (PCA) is the mapping of de-noised feature space points back into input space, also referred to as “the pre-image problem”. Since the feature space mapping is typically not bijective, pre-image estimation is inherently ill-posed...

  3. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

    © 2014 IOP Publishing Ltd. The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering both total variation penalty terms on the image and on an idealized sinogram to be reconstructed from a given Poisson distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better at reconstructing images with thin structures.
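
    In symbols, the joint model has roughly the following shape (our paraphrase: K denotes the projection operator, f the noisy sinogram, and the Kullback-Leibler divergence stands in for the Poisson log-likelihood):

      \min_{u \ge 0} \; \mathrm{KL}(Ku, f) \;+\; \alpha \,\mathrm{TV}(u) \;+\; \beta \,\mathrm{TV}(Ku)

    Setting beta = 0 recovers standard image-space TV reconstruction; the extra beta-term is the additional penalty on the sinogram.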

  4. Image super-resolution reconstruction based on regularization technique and guided filter

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Gu, Pei-ting; Liu, Pei-zhong; Luo, Yan-min

    2017-06-01

    In order to improve the accuracy of sparse representation coefficients and the quality of reconstructed images, an improved image super-resolution algorithm based on sparse representation is presented. In the sparse coding stage, the autoregressive (AR) regularization and the non-local (NL) similarity regularization are introduced to improve the sparse coding objective function. A group of AR models which describe the local image structures are pre-learned from the training samples, and one or several suitable AR models can be adaptively selected for each image patch to regularize the solution space. Then, the image non-local redundancy is obtained by the NL similarity regularization to preserve edges. In the process of computing the sparse representation coefficients, the feature-sign search algorithm is utilized instead of the conventional orthogonal matching pursuit algorithm to improve the accuracy of the sparse coefficients. To further restore image details, a global error compensation model based on a weighted guided filter is proposed to realize error compensation for the reconstructed images. Experimental results demonstrate that, compared with the Bicubic, L1SR, SISR, GR, ANR, NE + LS, NE + NNLS, NE + LLE and A+ (16 atoms) methods, the proposed approach achieves remarkable improvements in peak signal-to-noise ratio, structural similarity, and subjective visual quality.
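
    Schematically, the improved sparse coding stage minimizes an objective of roughly this form (our notation, not the paper's exact symbols: y is the observed low-resolution patch, H the blur/downsampling operator, D the dictionary, and x_AR, x_NL the autoregressive and non-local predictions of the reconstructed patch x = D*alpha):

      \min_{\alpha} \; \|y - H D \alpha\|_2^2 \;+\; \lambda \|\alpha\|_1 \;+\; \gamma_1 \|D\alpha - x_{\mathrm{AR}}\|_2^2 \;+\; \gamma_2 \|D\alpha - x_{\mathrm{NL}}\|_2^2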

  5. The pre-image problem for Laplacian Eigenmaps utilizing L1 regularization with applications to data fusion

    International Nuclear Information System (INIS)

    Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy

    2017-01-01

    As the popularity of non-linear manifold learning techniques such as kernel PCA and Laplacian Eigenmaps grows, vast improvements have been seen in many areas of data processing, including heterogeneous data fusion and integration. One problem with the non-linear techniques, however, is the lack of an easily calculable pre-image. Existence of such a pre-image would allow visualization of the fused data not only in the embedded space, but also in the original data space. The ability to make such comparisons can be crucial for data analysts and other subject matter experts who are the end users of novel mathematical algorithms. In this paper, we propose a pre-image algorithm for Laplacian Eigenmaps. Our method offers major improvements over existing techniques, allowing us to address the problem of noisy inputs and the issue of how to calculate the pre-image of a point outside the convex hull of the training samples, both of which have been overlooked in previous studies in this field. We conclude by showing that our pre-image algorithm, combined with feature space rotations, allows us to recover occluded pixels of an imaging modality based on knowledge of that image as measured by heterogeneous modalities. We demonstrate this data recovery on heterogeneous hyperspectral (HS) cameras, as well as by recovering LIDAR measurements from HS data. (paper)
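
    One way to picture the role of the L1 penalty is as a sparse reconstruction of the embedded point from the training embeddings, with the weights transferred back to input space. The following is a simplified numpy/scikit-learn sketch of that idea only; it omits the paper's feature space rotations and full algorithm:

      import numpy as np
      from sklearn.linear_model import Lasso

      def l1_preimage(E, X, e_new, alpha=1e-3):
          """E : (n, m) Laplacian Eigenmaps embedding of n training points
             X : (n, d) training points in the original data space
             e_new : (m,) embedded point whose pre-image is sought."""
          # Sparse weights w such that E.T @ w ~ e_new; the L1 penalty keeps
          # only a few neighbors, which also yields usable answers for points
          # outside the convex hull of the training embeddings.
          lasso = Lasso(alpha=alpha, fit_intercept=False)
          lasso.fit(E.T, e_new)
          w = lasso.coef_
          if abs(w.sum()) > 1e-12:
              w = w / w.sum()       # optional: normalize to an affine combination
          return X.T @ w            # pull the weights back to input space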

  6. The pre-image problem for Laplacian Eigenmaps utilizing L1 regularization with applications to data fusion

    Science.gov (United States)

    Cloninger, Alexander; Czaja, Wojciech; Doster, Timothy

    2017-07-01

    As the popularity of non-linear manifold learning techniques such as kernel PCA and Laplacian Eigenmaps grows, vast improvements have been seen in many areas of data processing, including heterogeneous data fusion and integration. One problem with the non-linear techniques, however, is the lack of an easily calculable pre-image. Existence of such a pre-image would allow visualization of the fused data not only in the embedded space, but also in the original data space. The ability to make such comparisons can be crucial for data analysts and other subject matter experts who are the end users of novel mathematical algorithms. In this paper, we propose a pre-image algorithm for Laplacian Eigenmaps. Our method offers major improvements over existing techniques, allowing us to address the problem of noisy inputs and the issue of how to calculate the pre-image of a point outside the convex hull of the training samples, both of which have been overlooked in previous studies in this field. We conclude by showing that our pre-image algorithm, combined with feature space rotations, allows us to recover occluded pixels of an imaging modality based on knowledge of that image as measured by heterogeneous modalities. We demonstrate this data recovery on heterogeneous hyperspectral (HS) cameras, as well as by recovering LIDAR measurements from HS data.

  7. Stabilization, pole placement, and regular implementability

    NARCIS (Netherlands)

    Belur, MN; Trentelman, HL

    In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a given linear differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable,

  8. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low- and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we developed algorithms that also work
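
    The RLS building block itself is standard Tikhonov deconvolution; the thesis' contribution lies in how the regularization parameter is chosen (perturbation- and bootstrap-based), which is not reproduced here. A compact numpy sketch of the solve, assuming periodic boundary conditions:

      import numpy as np

      def rls_deblur(y, psf, lam):
          """Regularized least-squares deconvolution:
          x_hat = argmin_x ||h * x - y||^2 + lam * ||x||^2,
          solved exactly in the Fourier domain (periodic boundaries)."""
          H = np.fft.fft2(psf, s=y.shape)   # transfer function of the blur
          Y = np.fft.fft2(y)
          X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
          return np.real(np.fft.ifft2(X))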

  9. Fractional Regularization Term for Variational Image Registration

    Directory of Open Access Journals (Sweden)

    Rafael Verdú-Monedero

    2009-01-01

    Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, and is applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration, which is better suited to some applications and is not possible in the spatial domain. Results with actual 3D images show the validity of this approach.
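
    A toy numpy sketch of the core mechanism (our simplification: one semi-implicit update of a displacement field u driven by a force field f, where the regularizer's Euler-Lagrange operator becomes the Fourier multiplier |omega|^(2*alpha); alpha = 1 gives diffusion-like and alpha = 2 curvature-like behaviour, with a gradual transition in between):

      import numpy as np

      def fractional_update(u, f, alpha=1.5, tau=0.1):
          """One semi-implicit registration step,
          (I + tau * (-Laplacian)^alpha) u_new = u + tau * f,
          solved exactly in the frequency domain."""
          ny, nx = u.shape
          wy = 2 * np.pi * np.fft.fftfreq(ny)
          wx = 2 * np.pi * np.fft.fftfreq(nx)
          w2 = wy[:, None] ** 2 + wx[None, :] ** 2    # |omega|^2 on the grid
          symbol = 1.0 + tau * w2 ** alpha            # fractional-order penalty
          U = np.fft.fft2(u + tau * f) / symbol
          return np.real(np.fft.ifft2(U))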

  10. Pre-, intra- and post-operative imaging of cochlear implants

    Energy Technology Data Exchange (ETDEWEB)

    Vogl, T.J.; Naguib, N.N.N.; Burck, I. [University Hospital Frankfurt (Germany). Inst. of Diagnostic and Interventional Radiology; Tawfik, A. [Mansoura Univ. (Egypt). Dept. of Diagnostic and Interventional Radiology; Emam, A. [University Hospital Alexandria (Egypt). Dept. of Diagnostic and Interventional Radiology; Nour-Eldin, A. [University Hospital Cairo (Egypt). Dept. of Radiology; Stoever, T. [University Hospital of Frankfurt (Germany). Dept. of Otolaryngology

    2015-11-15

    The purpose of this review is to present essential imaging aspects in patients who are candidates for a possible cochlear implant as well as in postsurgical follow-up. Imaging plays a major role in providing information on preinterventional topography, variations and possible infections. Preoperative imaging using DVT, CT, MRI or CT and MRI together is essential for candidate selection, planning of surgical approach and exclusion of contraindications like the complete absence of the cochlea or cochlear nerve, or infection. Relative contraindications are variations of the cochlea and vestibulum. Intraoperative imaging can be performed by fluoroscopy, mobile radiography or DVT. Postoperative imaging is regularly performed by conventional X-ray, DVT, or CT. In summary, radiological imaging has its essential role in the pre- and post-interventional period for patients who are candidates for cochlear implants.

  11. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman

    2017-11-02

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.

  12. Image deblurring using a perturbation-based regularization approach

    KAUST Repository

    Alanazi, Abdulrahman; Ballal, Tarig; Masood, Mudassir; Al-Naffouri, Tareq Y.

    2017-01-01

    The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.

  13. Poisson image reconstruction with Hessian Schatten-norm regularization.

    Science.gov (United States)

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l_p norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.

  14. Design of structural elements in the event of the pre-set reliability, regular load and bearing capacity distribution

    Directory of Open Access Journals (Sweden)

    Tamrazyan Ashot Georgievich

    2012-10-01

    Accurate and adequate description of external influences and of the bearing capacity of the structural material requires the employment of probability theory methods. In this regard, a characteristic that describes the probability of failure-free operation is required. The characteristic of reliability means that the maximum stress caused by the action of the load will not exceed the bearing capacity. In this paper, the author presents a solution to the problem of calculation of structures, namely, the identification of the reliability of pre-set design parameters, in particular, cross-sectional dimensions. If the load distribution pattern is available, employment of the regularities of distributed functions makes it possible to find the pattern of distribution of maximum stresses over the structure. Similarly, we can proceed to the design of structures of pre-set rigidity, reliability and stability in the case of regular load distribution. We consider a design element (a monolithic concrete slab) whose maximum stress S depends linearly on the load q. Within a pre-set period of time, the probability that the maximum stress will not exceed a given value is described by the Poisson law. The analysis demonstrates that the variability of the bearing capacity produces a stronger effect on the relative sizes of cross sections of a slab than the variability of loads. It is therefore particularly important to reduce the coefficient of variation of the load capacity. One of the methods contemplates truncating the bearing capacity distribution by pre-culling the construction material.

  15. Stability, causality, and hyperbolicity in Carter's "regular" theory of relativistic heat-conducting fluids

    International Nuclear Information System (INIS)

    Olson, T.S.; Hiscock, W.A.

    1990-01-01

    Stability and causality are studied for linear perturbations about equilibrium in Carter's "regular" theory of relativistic heat-conducting fluids. The "regular" theory, when linearized around an equilibrium state having vanishing expansion and shear, is shown to be equivalent to the inviscid limit of the linearized Israel-Stewart theory of relativistic dissipative fluids for a particular choice of the second-order coefficients β1 and γ2. A set of stability conditions is determined for linear perturbations of a general inviscid Israel-Stewart fluid using a monotonically decreasing energy functional. It is shown that, as in the viscous case, stability implies that the characteristic velocities are subluminal and that perturbations obey hyperbolic equations. The converse theorem is also true. We then apply this analysis to a nonrelativistic Boltzmann gas and to a strongly degenerate free Fermi gas in the "regular" theory. Carter's "regular" theory is shown to be incapable of correctly describing the nonrelativistic Boltzmann gas and the degenerate Fermi gas (at all temperatures).

  16. EIT image reconstruction with four dimensional regularization.

    Science.gov (United States)

    Dai, Tao; Soleimani, Manuchehr; Adler, Andy

    2008-09-01

    Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated, especially in high-speed EIT systems. This paper proposes a 4-D EIT image reconstruction method for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and the 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector, which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on the temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance in comparison to simpler image models.
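
    The augmented formulation can be pictured as a single Tikhonov solve over stacked frames. The sketch below is illustrative only; J, R_space, the correlation factor gamma and the weight lam are placeholders rather than the paper's calibrated quantities:

      import numpy as np

      def eit_4d_reconstruct(J, Y, R_space, gamma, lam):
          """J       : (m, e) Jacobian (measurements x image elements)
             R_space : (e, e) spatial regularization matrix
             Y       : (m, T) measurement frames stacked over time
             gamma   : inter-frame correlation strength in (0, 1)
             Returns an (e, T) array of jointly reconstructed images."""
          m, e = J.shape
          T = Y.shape[1]
          # First-order temporal smoothness prior between adjacent frames.
          R_time = np.eye(T) - gamma * (np.eye(T, k=1) + np.eye(T, k=-1)) / 2
          J_aug = np.kron(np.eye(T), J)        # block-diagonal Jacobian
          R_aug = np.kron(R_time, R_space)     # space-time regularizer
          y_aug = Y.flatten(order="F")         # concatenated frames
          x_aug = np.linalg.solve(J_aug.T @ J_aug + lam * R_aug, J_aug.T @ y_aug)
          return x_aug.reshape(e, T, order="F")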

  17. SAR image regularization with fast approximate discrete minimization.

    Science.gov (United States)

    Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc

    2009-07-01

    Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modeling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to the joint regularization of the amplitude and interferometric phase in urban area SAR images.

  18. Image degradation characteristics and restoration based on regularization for diffractive imaging

    Science.gov (United States)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based investigations of diffractive imaging degradation characteristics and the corresponding image restoration methods remain scarce. In this paper, the model of image quality degradation for the diffractive imaging system is first deduced mathematically based on diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, a solution approach for the equation with coexisting multiple norms and multiple regularization parameters (the priors' parameters) is presented. Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block-wise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvements, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, and produces satisfactory visual quality. This can provide a scientific basis for applications and holds promise for future space applications of diffractive membrane imaging technology.

  19. New image-stabilizing system

    Science.gov (United States)

    Zhao, Yuejin

    1996-06-01

    In this paper, a new method for image stabilization with a three-axis image-stabilizing reflecting prism assembly is presented, and the principle of image stabilization in this prism assembly, formulae for image stabilization, and working formulae with an approximation up to the third power are given in detail. In this image-stabilizing system, a single-chip microcomputer is used to calculate the values of the compensating angles and thus to control the prism assembly. Two gyroscopes act as sensors from which information on angular perturbation is obtained, and three stepping motors drive the prism assembly to compensate for the image movement produced by angular perturbation. The image-stabilizing device so established is a multifold system which involves optics, mechanics, electronics and computing.

  20. Graph Regularized Auto-Encoders for Image Representation.

    Science.gov (United States)

    Yiyi Liao; Yue Wang; Yong Liu

    2017-06-01

    Image representation has been intensively explored in the domain of computer vision for its significant influence on related tasks such as image clustering and classification. It is valuable to learn a low-dimensional representation of an image which preserves its inherent information from the original image space. From the perspective of manifold learning, this is implemented with the local invariance idea to capture the intrinsic low-dimensional manifold embedded in the high-dimensional input space. Inspired by the recent successes of deep architectures, we propose a local invariant deep nonlinear mapping algorithm, called the graph regularized auto-encoder (GAE). With the graph regularization, the proposed method preserves the local connectivity from the original image space to the representation space, while the stacked auto-encoders provide an explicit encoding model for fast inference and powerful expressive capacity for complex modeling. Theoretical analysis shows that the graph regularizer penalizes the weighted Frobenius norm of the Jacobian matrix of the encoder mapping, where the weight matrix captures the local property in the input space. Furthermore, the underlying effects on the hidden representation space are revealed, providing an insightful explanation of the advantage of the proposed method. Finally, experimental results on both clustering and classification tasks demonstrate the effectiveness of our GAE as well as the correctness of the proposed theoretical analysis, and suggest that GAE is superior to current deep representation learning techniques such as variants of auto-encoders and existing local invariant methods.
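
    A numpy sketch of the graph-regularized loss for a single encoder layer (illustrative only; the paper stacks several auto-encoder layers and trains by backpropagation, which is omitted here):

      import numpy as np

      def gae_loss(X, W1, b1, W2, b2, L, lam):
          """X : (n, d) inputs (rows are images); L : (n, n) graph Laplacian
          built from input-space neighborhoods. The trace term pulls
          neighboring inputs toward neighboring codes H."""
          H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # sigmoid encoder
          X_rec = H @ W2 + b2                        # linear decoder
          rec = np.sum((X - X_rec) ** 2) / X.shape[0]
          graph = np.trace(H.T @ L @ H) / X.shape[0]
          return rec + lam * graph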

  1. Wavelet domain image restoration with adaptive edge-preserving regularization.

    Science.gov (United States)

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet-based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.

  2. In Vivo Stabilized SB3, an Attractive GRPR Antagonist, for Pre- and Intra-Operative Imaging for Prostate Cancer.

    Science.gov (United States)

    Bakker, Ingrid L; van Tiel, Sandra T; Haeck, Joost; Doeswijk, Gabriela N; de Blois, Erik; Segbers, Marcel; Maina, Theodosia; Nock, Berthold A; de Jong, Marion; Dalm, Simone U

    2018-03-19

    The gastrin-releasing peptide receptor (GRPR), overexpressed on various tumor types, is an attractive target for receptor-mediated imaging and therapy. Another interesting approach would be the use of GRPR radioligands for pre-operative imaging and subsequent radio-guided surgery, with the goal of improving surgical outcome. GRPR radioligands have been successfully implemented in clinical studies; in particular, Sarabesin 3 (SB3) is an appealing GRPR antagonist with high receptor affinity. Gallium-68 labeled SB3 has good in vivo stability; after labeling with Indium-111, however, the molecule shows poor in vivo stability, which negatively impacts tumor-targeting capacity. A novel approach to increase the in vivo stability of radiopeptides is co-administration of the neutral endopeptidase (NEP) inhibitor phosphoramidon (PA). We studied the in vivo stability and biodistribution of [111In]SB3 without/with (-/+) PA in mice. Furthermore, SPECT/MRI on a novel, state-of-the-art platform was performed. GRPR affinity of SB3 was determined on PC295 xenograft sections using [125I]Tyr4-bombesin with tracer only or with increasing concentrations of SB3. For in vivo stability, mice were injected with 200/2000 pmol [111In]SB3 -/+ 300 μg PA. Blood was collected and analyzed. Biodistribution and SPECT/MRI studies were performed at 1, 4, and 24 h postinjection (p.i.) of 2.5 MBq/200 pmol or 25 MBq/200 pmol [111In]SB3 -/+ 300 μg PA in PC-3-xenografted mice. SB3 showed high affinity for GRPR (IC50 3.5 nM). Co-administration of PA resulted in twice as much intact peptide in vivo versus [111In]SB3 alone. Biodistribution studies at 1, 4, and 24 h p.i. show higher tumor uptake values with PA co-administration (19.7 ± 3.5 vs 10.2 ± 1.5, 17.6 ± 5.1 vs 8.3 ± 1.1, and 6.5 ± 3.3 vs 3.1 ± 1.9 %ID/g tissue, respectively; P < 0.0001). Tumor imaging with SPECT/MRI clearly improved after co-injection of PA. Co-administration of PA increased the in vivo tumor targeting capacity of

  3. Multiview Hessian regularization for image annotation.

    Science.gov (United States)

    Liu, Weifeng; Tao, Dacheng

    2013-07-01

    The rapid development of computer hardware and Internet technology makes large-scale data-dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semi-supervised learning (SSL) has therefore received intensive attention in recent years and has been successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smooths the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it is observed that LR biases the classification function toward a constant function, which possibly results in poor generalization. In addition, LR was developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR terms, each of which is obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.

  4. Multilinear Graph Embedding: Representation and Regularization for Images.

    Science.gov (United States)

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  5. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.

  6. Manifold regularization for sparse unmixing of hyperspectral images.

    Science.gov (United States)

    Liu, Junmin; Zhang, Chunxia; Zhang, Jiangshe; Li, Huirong; Gao, Yuelin

    2016-01-01

    Recently, sparse unmixing has been successfully applied to spectral mixture analysis of remotely sensed hyperspectral images. Based on the assumption that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance, unmixing of each mixed pixel in the scene is to find an optimal subset of signatures in a very large spectral library, which is cast into the framework of sparse regression. However, traditional sparse regression models, such as collaborative sparse regression, ignore the intrinsic geometric structure in the hyperspectral data. In this paper, we propose a novel model, called manifold regularized collaborative sparse regression, by introducing a manifold regularization to the collaborative sparse regression model. The manifold regularization utilizes a graph Laplacian to incorporate the locally geometrical structure of the hyperspectral data. An algorithm based on the alternating direction method of multipliers has been developed for the manifold regularized collaborative sparse regression model. Experimental results on both simulated and real hyperspectral data sets have demonstrated the effectiveness of our proposed model.

  7. EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.

    Science.gov (United States)

    Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos

    2015-01-01

    Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other multi- and single-objective optimization methods. The results indicate a significant performance enhancement over traditional techniques.

  8. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.

  9. Stability of the Tonks–Langmuir discharge pre-sheath

    Energy Technology Data Exchange (ETDEWEB)

    Tskhakaya, D. D. [Fusion@ÖAW, Institute of Applied Physics, TU Wien, Wiedner Hauptstraße 8-10, 1040 Vienna (Austria); Kos, L. [LECAD Laboratory, Faculty of Mechanical Engineering, University of Ljubljana, SI-1000 Ljubljana (Slovenia); Tskhakaya, D. [Fusion@ÖAW, Institute of Applied Physics, TU Wien, Wiedner Hauptstraße 8-10, 1040 Vienna (Austria); Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck (Austria)

    2016-03-15

    The article formulates the stability problem of the plasma sheath in the Tonks–Langmuir discharge. Using a kinetic description of the ion gas, the stability of the potential shape in the quasi-neutral pre-sheath is investigated with respect to high- and low-frequency perturbations. The electrons are assumed to be Maxwell–Boltzmann distributed. With respect to high-frequency perturbations, the pre-sheath is shown to be stable. The stability problem for low-frequency perturbations can be reduced to the analysis of a “diffusion-like” equation, which results in the instability of the potential distribution in the pre-sheath. By means of Particle-in-Cell simulations, the nonlinear stage of the low-frequency oscillations is also investigated. Comparing the resulting figure with that for the linear stage, one finds an obvious similarity in the spatial-temporal behavior of the potential.

  10. Stability of the Regular Hayward Thin-Shell Wormholes

    Directory of Open Access Journals (Sweden)

    M. Sharif

    2016-01-01

    The aim of this paper is to construct regular Hayward thin-shell wormholes and analyze their stability. We adopt the Israel formalism to calculate the surface stresses of the shell and check the null and weak energy conditions for the constructed wormholes. It is found that the stress-energy tensor components violate the null and weak energy conditions, leading to the presence of exotic matter at the throat. We analyze the attractive and repulsive characteristics of wormholes corresponding to ar>0 and ar<0, respectively. We also explore stability conditions for the existence of traversable thin-shell wormholes with an arbitrarily small amount of fluid describing cosmic expansion. We find that the space-time has nonphysical regions which give rise to an event horizon and affect the stability of thin-shell wormholes.

  11. Bayesian regularization of diffusion tensor images

    DEFF Research Database (Denmark)

    Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif

    2007-01-01

    Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...

  12. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher-order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for the total variation method, and TGV stands for the total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
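
    For reference, the second-order TGV functional that replaces plain TV can be written as (the standard definition of Bredies, Kunisch and Pock, with E the symmetrized gradient):

      \mathrm{TGV}_{\alpha}^{2}(u) \;=\; \min_{w} \; \alpha_{1} \int_{\Omega} |\nabla u - w| \, dx \;+\; \alpha_{0} \int_{\Omega} |\mathcal{E}(w)| \, dx

    Because the auxiliary field w can absorb smooth gradients, affine regions are not penalized, which is what suppresses the staircase artifacts of plain TV.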

  13. An algorithm for total variation regularized photoacoustic imaging

    DEFF Research Database (Denmark)

    Dong, Yiqiu; Görner, Torsten; Kunis, Stefan

    2014-01-01

    Recovery of image data from photoacoustic measurements asks for the inversion of the spherical mean value operator. In contrast to direct inversion methods for specific geometries, we consider a semismooth Newton scheme to solve a total variation regularized least squares problem. During the iteration, each matrix vector multiplication is realized in an efficient way using a recently proposed spectral discretization of the spherical mean value operator. All theoretical results are illustrated by numerical experiments.

  14. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    Science.gov (United States)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems, developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and for the reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and the error in the measurements.

  15. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) a normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation, using C interfacing, and an option for parallel computing, and (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from CRAN.

  16. Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing.

    Science.gov (United States)

    Elmoataz, Abderrahim; Lezoray, Olivier; Bougleux, Sébastien

    2008-07-01

    We introduce a nonlocal discrete regularization framework on weighted graphs of arbitrary topology for image and manifold processing. The approach considers the problem as a variational one, which consists of minimizing a weighted sum of two energy terms: a regularization term that uses a discrete weighted p-Dirichlet energy, and an approximation term. This is the discrete analogue of recent continuous Euclidean nonlocal regularization functionals. The proposed formulation leads to a family of simple and fast nonlinear processing methods based on the weighted p-Laplace operator, parameterized by the degree p of regularity, the graph structure and the graph weight function. These discrete processing methods provide a graph-based version of recently proposed semi-local or nonlocal processing methods used in image and mesh processing, such as the bilateral filter, the TV digital filter or the nonlocal means filter. It works with equal ease on regular 2-D and 3-D images, manifolds or any data. We illustrate the abilities of the approach by applying it to various types of images, meshes, manifolds, and data represented as graphs.
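
    For p = 2 the minimization admits a simple Gauss-Jacobi iteration in which every vertex is repeatedly averaged with its neighbors (a sketch in the spirit of the paper's general scheme; for other values of p the edge weights would be re-linearized at each step):

      import numpy as np

      def graph_regularize(f0, W, lam, n_iter=50):
          """Discrete regularization of a graph signal (p = 2 case):
          approximately minimize the weighted Dirichlet energy
          sum_uv w_uv (f(u) - f(v))^2 plus a fidelity term lam * ||f - f0||^2.
          W : (n, n) symmetric weight matrix, f0 : (n,) noisy signal."""
          f = f0.copy()
          deg = W.sum(axis=1)
          for _ in range(n_iter):
              f = (lam * f0 + W @ f) / (lam + deg)
          return f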

  17. Benefits of regular walking exercise in advanced pre-dialysis chronic kidney disease.

    Science.gov (United States)

    Kosmadakis, George C; John, Stephen G; Clapp, Emma L; Viana, Joao L; Smith, Alice C; Bishop, Nicolette C; Bevington, Alan; Owen, Paul J; McIntyre, Christopher W; Feehally, John

    2012-03-01

    There is increasing evidence of the benefit of regular physical exercise in a number of long-term conditions including chronic kidney disease (CKD). In CKD, this evidence has mostly come from studies in end-stage patients receiving regular dialysis. There is little evidence in pre-dialysis patients with CKD Stages 4 and 5. A prospective study compared the benefits of 6 months of regular walking in 40 pre-dialysis patients with CKD Stages 4 and 5. Twenty of them formed the exercise group and were compared to 20 patients who continued with their usual physical activity. In addition, the 40 patients were randomized to receive additional oral sodium bicarbonate (target venous bicarbonate 29 mmol/L) or continue with previous sodium bicarbonate treatment (target 24 mmol/L). Improvements noted after 1 month were sustained to 6 months in the 18 of 20 who completed the exercise study. These included improvements in exercise tolerance (reduced exertion to achieve the same activity), weight loss, improved cardiovascular reactivity, avoidance of an increase in blood pressure medication, and improvements in quality of health and life and uraemic symptom scores assessed by questionnaire. Sodium bicarbonate supplementation did not produce any significant alterations. This study provides further support for the broad benefits of aerobic physical exercise in CKD. More studies are needed to understand the mechanisms of these benefits, to study whether resistance exercise will add to the benefit, and to evaluate strategies to promote sustained lifestyle changes that could ensure a continued increase in habitual daily physical activity levels.

  18. Regularized image denoising based on spectral gradient optimization

    International Nuclear Information System (INIS)

    Lukić, Tibor; Lindblad, Joakim; Sladoje, Nataša

    2011-01-01

    Image restoration methods, such as denoising, deblurring, inpainting, etc, are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising which combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by the chosen potential function. Many potential functions have been suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods for efficient minimization of regularized energy functions exist. Most are only applicable to particular choices of potential functions, however. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach; spectral gradient methods put very weak restrictions on the used potential function. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance.

  19. Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhao, Debin; Zhai, Guangtao; Gao, Wen

    2014-04-01

    Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by a Laplacian pyramid, then progressively recover the degraded image in scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On the one hand, within each scale, a graph Laplacian regularization model represented by an implicit kernel is learned, which simultaneously minimizes the least squares error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not be recovered at previous scales. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms.

  20. Gravitational lensing and ghost images in the regular Bardeen no-horizon spacetimes

    International Nuclear Information System (INIS)

    Schee, Jan; Stuchlík, Zdeněk

    2015-01-01

    We study deflection of light rays and gravitational lensing in the regular Bardeen no-horizon spacetimes. Flatness of these spacetimes in the central region implies the existence of interesting optical effects related to photons crossing the gravitational field of the no-horizon spacetimes with low impact parameters. These effects occur due to the existence of a critical impact parameter giving maximal deflection of light rays in the Bardeen no-horizon spacetimes. We give the critical impact parameter as a function of the specific charge of the spacetimes, and discuss 'ghost' direct and indirect images of Keplerian discs, generated by photons with low impact parameters. The ghost direct images can occur only for large inclination angles of distant observers, while ghost indirect images can occur also for small inclination angles. We determine the range of the frequency shift of photons generating the ghost images and determine the distribution of the frequency shift across these images. We compare them to those of the standard direct images of the Keplerian discs. The difference of the ranges of the frequency shift on the ghost and direct images could serve as a quantitative measure of the Bardeen no-horizon spacetimes. The regions of the Keplerian discs giving the ghost images are determined in dependence on the specific charge of the no-horizon spacetimes. For comparison we construct direct and indirect (ordinary and ghost) images of Keplerian discs around Reissner-Nordström naked singularities, demonstrating a clear qualitative difference to the ghost direct images in the regular Bardeen no-horizon spacetimes. The optical effects related to low-impact-parameter photons thus give a clear signature of the regular Bardeen no-horizon spacetimes, as no similar phenomena can occur in black hole or naked singularity spacetimes. Similar direct ghost images have to occur in any regular no-horizon spacetime having a nearly flat central region.

  1. Multiview vector-valued manifold regularization for multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Xu, Chang; Xu, Chao; Liu, Hong; Wen, Yonggang

    2013-05-01

    In computer vision, image datasets used for classification are naturally associated with multiple labels and composed of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently available tools ignore either the label relationship or the view complementarity. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV3MR) to integrate multiple features. MV3MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging, but popular, datasets, PASCAL VOC'07 and MIR Flickr, and validate the effectiveness of the proposed MV3MR for image classification.

  2. Influence of whitening and regular dentifrices on orthodontic clear ligature color stability.

    Science.gov (United States)

    Oliveira, Adauê S; Kaizer, Marina R; Salgado, Vinícius E; Soldati, Dener C; Silva, Roberta C; Moraes, Rafael R

    2015-01-01

    This study evaluated the effect of brushing orthodontic clear ligatures with a whitening dentifrice containing a blue pigment (Close Up White Now, Unilever, London, UK) on their color stability when exposed to a staining agent. Ligatures from 3M Unitek (Monrovia, CA, USA) and Morelli (Sorocaba, SP, Brazil) were tested. Baseline color measurements were performed and nonstained groups (control) were stored in distilled water, whereas test groups were exposed for 1 hour daily to red wine. Specimens were brushed daily using a regular or a whitening dentifrice. Color measurements were repeated after 7, 14, 21, and 28 days using a spectrophotometer based on the CIE L*a*b* system. Decreased luminosity (CIE L*), increased red discoloration (CIE a* axis), and increased yellow discoloration (CIE b* axis) were generally observed for ligatures exposed to the staining agent. Color variation was generally lower in specimens brushed with the regular dentifrice, but ligatures brushed with the whitening dentifrice were generally less red and less yellow than those brushed with the regular dentifrice. The whitening dentifrice led to a blue discoloration trend, with visually detectable differences particularly apparent according to storage condition and ligature brand. The whitening dentifrice containing blue pigment did not improve the ligature color stability, but it decreased yellow discoloration and increased blue coloration. The use of a whitening dentifrice containing blue pigment during orthodontic treatment might decrease the yellow discoloration of elastic ligatures. © 2015 Wiley Periodicals, Inc.

  3. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.

    Science.gov (United States)

    Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong

    2015-12-01

    Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which increases the data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on this cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that during learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Finally, the clustering accuracies of different algorithms are investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
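For orientation, the sketch below shows the classical graph-regularized NMF multiplicative updates that this family of algorithms extends; it is a minimal illustration under our own naming, not the paper's improved cost function.

```python
import numpy as np

def gnmf(X, W, k, lam=0.1, iters=200, eps=1e-9):
    """Classical graph-regularized NMF (GNMF) multiplicative updates:
    min ||X - U V^T||_F^2 + lam * tr(V^T L V),  U, V >= 0,
    where W is a sample affinity matrix and L = D - W."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(W.sum(axis=1))
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V
```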

  4. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
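To make the shrinkage idea concrete, here is a toy NumPy sketch of a variance-shrunk t-like statistic; the pooling weight and function name are our own illustrative choices and do not reproduce the MVR procedure.

```python
import numpy as np

def shrunk_t_statistics(x, y, w=0.5):
    """Variable-wise two-sample t-like statistics with variances shrunk
    toward their grand mean; x, y have shape (n_variables, n_samples)."""
    nx, ny = x.shape[1], y.shape[1]
    vx = x.var(axis=1, ddof=1)
    vy = y.var(axis=1, ddof=1)
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    shrunk = w * pooled.mean() + (1 - w) * pooled   # borrow strength
    se = np.sqrt(shrunk * (1.0 / nx + 1.0 / ny))
    return (x.mean(axis=1) - y.mean(axis=1)) / se
```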

  5. Sparse regularization for EIT reconstruction incorporating structural information derived from medical imaging.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-06-01

    Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. In this article, a special regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint preferring sparsity at the group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure-based regularization has the potential to balance structural a priori information with data-driven reconstruction. It is robust to noise, reduces artifacts and produces images that reflect anatomy and are thus easier to interpret for physicians.
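The "sparsity at the group level" soft constraint is in the spirit of a group-lasso penalty; as a hedged illustration (our construction, not the paper's exact solver), its proximal operator shrinks whole structural groups of pixels together:

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of a group-lasso penalty: all conductivity
    pixels assigned to the same structural group (e.g. from CT or
    k-means clustering) are shrunk together, or zeroed as a block."""
    out = np.zeros_like(x, dtype=float)
    for g in np.unique(groups):
        idx = groups == g
        norm = np.linalg.norm(x[idx])
        if norm > lam:
            out[idx] = (1.0 - lam / norm) * x[idx]
    return out

# Example: two groups; the weak group is suppressed entirely.
x = np.array([3.0, 4.0, 0.1, -0.2])
groups = np.array([0, 0, 1, 1])
print(group_soft_threshold(x, groups, lam=1.0))
```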

  6. A general framework for regularized, similarity-based image restoration.

    Science.gov (United States)

    Kheradmand, Amin; Milanfar, Peyman

    2014-12-01

    Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function consisting of a new data fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. This yields desirable spectral properties for the normalized Laplacian: it is symmetric and positive semidefinite, and it returns the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations: in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. Moreover, the specific form of the cost function allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. The proposed approach is also general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
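A minimal sketch of the kind of symmetry-preserving balancing and normalized Laplacian the record describes (a Sinkhorn-style iteration of our own choosing; the paper's exact balancing algorithm may differ):

```python
import numpy as np

def balanced_laplacian(W, iters=100):
    """Symmetric (Sinkhorn-style) balancing of a similarity matrix W,
    then L = I - C W C with C = diag(c). At the fixed point each row of
    C W C sums to one, so L is symmetric, positive semidefinite, and
    maps a constant image to (approximately) the zero vector."""
    c = np.ones(W.shape[0])
    for _ in range(iters):
        c = np.sqrt(c / (W @ c))   # damped fixed-point update
    Wn = (c[:, None] * W) * c[None, :]
    return np.eye(W.shape[0]) - Wn
```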

  7. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    Science.gov (United States)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, verify that the proposed reconstruction method offers promising capabilities over conventional regularization.

  8. Image segmentation with a novel regularized composite shape prior based on surrogate study

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: Incorporating training data into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement in image segmentation accuracy compared to the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance compared to typical benchmark schemes.

  9. Image segmentation with a novel regularized composite shape prior based on surrogate study

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2016-01-01

    Purpose: Incorporating training data into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement in image segmentation accuracy compared to the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance compared to typical benchmark schemes.

  10. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    Science.gov (United States)

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques that do not use synchrotron radiation confront a common problem from the negative effects of finite source size and limited spatial resolution. These negative effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the above-mentioned methods; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
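Of the three deconvolution baselines named above, Tikhonov regularization has the simplest closed form in the Fourier domain; a generic sketch (assuming a periodic boundary and an origin-placed PSF, not the paper's implementation):

```python
import numpy as np

def tikhonov_deconv(blurred, psf, lam=1e-2):
    """Closed-form Tikhonov deconvolution: X = conj(H) Y / (|H|^2 + lam),
    computed with FFTs under a periodic-boundary assumption; the PSF is
    assumed to be placed at the array origin."""
    H = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```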

  11. An algorithmic framework for Mumford–Shah regularization of inverse problems in imaging

    International Nuclear Information System (INIS)

    Hohm, Kilian; Weinmann, Andreas; Storath, Martin

    2015-01-01

    The Mumford–Shah model is a very powerful variational approach for edge-preserving regularization of image reconstruction processes. However, it is algorithmically challenging because one has to deal with a non-smooth and non-convex functional. In this paper, we propose a new efficient algorithmic framework for Mumford–Shah regularization of inverse problems in imaging. It is based on a splitting into specific subproblems that can be solved exactly. We derive fast solvers for the subproblems which are key to an efficient overall algorithm. Our method requires a priori knowledge of neither the gray or color levels nor the shape of the discontinuity set. We demonstrate the wide applicability of the method for different modalities. In particular, we consider reconstruction from Radon data, inpainting, and deconvolution. Our method can be easily adapted to many further imaging setups. The relevant condition is that the proximal mapping of the data fidelity can be evaluated within a reasonable time. In other words, it can be used whenever classical Tikhonov regularization is possible. (paper)

  12. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    Science.gov (United States)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or 'global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches, and a dictionary corresponds to a collection of functions or 'atoms' describing the slowness in each patch. These functions could, for example, be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) from travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference; 2) find the patch-level solutions that fit the global estimate as a sparse linear combination of dictionary atoms; 3) update the reference as the weighted average of the patch-level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics, where dictionaries of wavelet functions regularized the inversion. We further exploit the redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely but irregularly sampled synthetic seismic images.
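As a reduced illustration of the local, patch-level step (not the authors' inversion code), one can learn a dictionary from image patches and sparse-code them with off-the-shelf tools; the image, patch size and atom count below are placeholders:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_patch_dictionary(image, patch_size=(8, 8), n_atoms=64):
    """Learn a dictionary from image patches and sparse-code them;
    the dictionary atoms play the role of the learned slowness
    patterns, and the codes are the sparse patch-level solutions."""
    patches = extract_patches_2d(image, patch_size, max_patches=2000,
                                 random_state=0)
    X = patches.reshape(len(patches), -1)
    X = X - X.mean(axis=1, keepdims=True)     # remove per-patch mean
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       random_state=0)
    codes = dico.fit_transform(X)             # sparse coefficients
    return dico.components_, codes
```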

  13. Stability study of pre-stack seismic inversion based on the full Zoeppritz equation

    Science.gov (United States)

    Liang, Lifeng; Zhang, Hongbing; Guo, Qiang; Saeed, Wasif; Shang, Zuoping; Huang, Guojiao

    2017-10-01

    Pre-stack seismic inversion is highly important and complicated. Its result is non-unique, and the process is unstable because pre-stack seismic inversion is an ill-posed problem that simultaneously obtains the results of multiple parameters. Combining the full Zoeppritz equation and additional assumptions with edge-preserving regularization (EPR) can help mitigate the problem. To achieve this combination, we developed an inversion method by constructing a new objective function, which includes the EPR and the Markov random field. The method directly obtains the reflectivity R_PP from the full Zoeppritz equation instead of its approximations and effectively controls the stability of simultaneous inversion by two additional assumptions: the sectionally constant V_S/V_P ratio and the generalized Gardner equation. Thus, the simultaneous inversion of multiple parameters is directed toward V_P, ΔL_S (the fitting deviation of V_S) and density, and the generalized Gardner equation is regarded as a constraint from which the fitting relationship is derived. We applied the fast simulated annealing algorithm to solve the nonlinear optimization problem. The test results on 2D synthetic data indicated that the stability of simultaneous inversion for V_P, ΔL_S and density is better than that for V_P, V_S, and density. The inverted result for density gradually worsens as the deviation ΔL_D (the fitting deviation of the density) increases. Moreover, the inverted results were acceptable when using fitting relationships with error, although they showed varying degrees of influence. We constructed time-varying and space-varying fitting relationships using the logging data in pre-stack inversion of the field seismic data. This improved the inverted results of the simultaneous inversion for complex geological models. Finally, the inverted results of the field data distinctly revealed more detailed information about the layers and matched well with the logging data along the wells over most

  14. Improving Conductivity Image Quality Using Block Matrix-based Multiple Regularization (BMMR) Technique in EIT: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-06-01

    A Block Matrix based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (J^T J) is partitioned into several sub-block matrices, and the highest eigenvalue of each sub-block matrix is chosen as the regularization parameter for the nodes contained in that sub-block. Simulated boundary data are generated for a circular domain with circular inhomogeneity, and the conductivity images are reconstructed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images are reconstructed with the BMMR technique and the results are compared with the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and solution error and improves the conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170. J Electr Bioimp, vol. 2, pp. 33-47, 2011
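The core BMMR step, choosing one regularization parameter per sub-block of the response matrix J^T J from that block's largest eigenvalue, can be sketched as follows (our own minimal reading of the method, with illustrative names):

```python
import numpy as np

def bmmr_parameters(J, block_size):
    """One regularization parameter per diagonal sub-block of J^T J:
    each block's largest eigenvalue becomes the lambda for the nodes in
    that block, later usable in (J^T J + diag(lam)) dx = J^T r."""
    JtJ = J.T @ J
    n = JtJ.shape[0]
    lam = np.empty(n)
    for start in range(0, n, block_size):
        stop = min(start + block_size, n)
        lam[start:stop] = np.linalg.eigvalsh(JtJ[start:stop, start:stop]).max()
    return lam
```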

  15. A regularized relaxed ordered subset list-mode reconstruction algorithm and its preliminary application to undersampling PET imaging

    International Nuclear Information System (INIS)

    Cao, Xiaoqing; Xie, Qingguo; Xiao, Peng

    2015-01-01

    The list-mode format is commonly used in modern positron emission tomography (PET) for image reconstruction due to certain special advantages. In this work, we propose a list-mode based regularized relaxed ordered subset (LMROS) algorithm for static PET imaging. LMROS is able to work with regularization terms which can be formulated as twice-differentiable convex functions. Such versatility makes LMROS a convenient and general framework for realizing different regularized list-mode reconstruction methods. LMROS was applied to two simulated undersampling PET imaging scenarios to verify its effectiveness. Convex quadratic function, total variation constraint, non-local means and dictionary learning based regularization methods were successfully realized for different cases. The results showed that the LMROS algorithm was effective and some regularization methods greatly reduced the distortions and artifacts caused by undersampling. (paper)

  16. Pre-Service Teachers' Concept Images on Fractal Dimension

    Science.gov (United States)

    Karakus, Fatih

    2016-01-01

    The analysis of pre-service teachers' concept images can provide information about their mental schema of fractal dimension. There is limited research on students' understanding of fractals and fractal dimension. Therefore, this study aimed to investigate pre-service teachers' understandings of fractal dimension based on concept image. The…

  17. SU-E-I-93: Improved Imaging Quality for Multislice Helical CT Via Sparsity Regularized Iterative Image Reconstruction Method Based On Tensor Framelet

    International Nuclear Information System (INIS)

    Nam, H; Guo, M; Lee, K; Li, R; Xing, L; Gao, H

    2014-01-01

    Purpose: Inspired by compressive sensing, sparsity regularized iterative reconstruction methods have been extensively studied. However, their utility for multislice helical 4D CT for radiotherapy, with respect to imaging quality, dose, and time, has not been thoroughly addressed. As the beginning of such an investigation, this work carries out an initial comparison of reconstructed imaging quality between a sparsity regularized iterative method and analytic methods through static phantom studies using a state-of-the-art 128-channel multi-slice Siemens helical CT scanner. Methods: In our iterative method, tensor framelet (TF) is chosen as the regularizer for its performance advantages over total variation regularization, in terms of reduced piecewise-constant artifacts and improved imaging quality, demonstrated in our prior work. On the other hand, X-ray transforms and their adjoints are computed on-the-fly through a GPU implementation using our previously developed fast parallel algorithms with O(1) complexity per computing thread. For comparison, both FDK (an approximate analytic method) and the Katsevich algorithm (an exact analytic method) are used for multislice helical CT image reconstruction. Results: The phantom experimental data with different imaging doses were acquired using a state-of-the-art 128-channel multi-slice Siemens helical CT scanner. The reconstructed image quality was compared between the TF-based iterative method, FDK and the Katsevich algorithm, with quantitative analysis characterizing signal-to-noise ratio, image contrast, and spatial resolution of high-contrast and low-contrast objects. Conclusion: The experimental results suggest that our tensor framelet regularized iterative reconstruction algorithm improves helical CT imaging quality over FDK and the Katsevich algorithm for the static phantom studies performed.

  18. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Microarray studies enable us to obtain hundreds of thousands of expressions of genes or genotypes at once, and microarrays are an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  19. State fusion entropy for continuous and site-specific analysis of landslide stability changing regularities

    Science.gov (United States)

    Liu, Yong; Qin, Zhimeng; Hu, Baodan; Feng, Shuai

    2018-04-01

    Stability analysis is of great significance to landslide hazard prevention, especially analysis of dynamic stability. However, with many existing stability analysis methods it is difficult to analyse continuous landslide stability and its changing regularities under a uniform criterion, owing to the unique geological conditions of each landslide. Based on the relationship between displacement monitoring data, deformation states and landslide stability, a state fusion entropy method is herein proposed to derive landslide instability through a comprehensive multi-attribute entropy analysis of deformation states, which are defined by a proposed joint clustering method combining K-means and a cloud model. Taking the Xintan landslide as the detailed case study, cumulative state fusion entropy presents an obvious increasing trend after the landslide entered the accelerative deformation stage, and historical maxima match closely with macroscopic landslide deformation behaviours at key time nodes. Reasonable results are also obtained in its application to several other landslides in the Three Gorges Reservoir in China. Combined with field survey, state fusion entropy may serve for assessing landslide stability and judging landslide evolutionary stages.

  20. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many approaches use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use the preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and that the convergence is insensitive to the values of the regularization and reconstruction parameters.

  1. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    Science.gov (United States)

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) is a persistent problem that has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.

  2. A new stabilized least-squares imaging condition

    International Nuclear Information System (INIS)

    Vivas, Flor A; Pestana, Reynam C; Ursin, Bjørn

    2009-01-01

    The classical deconvolution imaging condition consists of dividing the upgoing wave field by the downgoing wave field and summing over all frequencies and sources. The least-squares imaging condition consists of summing the cross-correlation of the upgoing and downgoing wave fields over all frequencies and sources, and dividing the result by the total energy of the downgoing wave field. This procedure is more stable than using the classical imaging condition, but it still requires stabilization in zones where the energy of the downgoing wave field is small. To stabilize the least-squares imaging condition, the energy of the downgoing wave field is replaced by its average value, computed in a horizontal plane, in poorly illuminated regions. Applications to the Marmousi and Sigsbee2A data sets show that the stabilized least-squares imaging condition produces better images than the least-squares and cross-correlation imaging conditions.
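A compact sketch of the imaging condition being stabilized; here the downgoing energy is simply floored at a fraction of its lateral average, a simplification of the horizontal-plane averaging described above (array shapes and the flooring rule are our own assumptions):

```python
import numpy as np

def stabilized_ls_image(U, D, frac=1e-2):
    """Least-squares imaging condition at one depth level.
    U, D: complex upgoing/downgoing wavefields, shape (n_freq, n_src, nx).
    Where the downgoing energy is weak it is replaced by a floor tied to
    its lateral average, mimicking the stabilization described above."""
    num = np.real(np.sum(U * np.conj(D), axis=(0, 1)))
    den = np.sum(np.abs(D) ** 2, axis=(0, 1))
    return num / np.maximum(den, frac * den.mean())
```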

  3. Pre- and postoperative MR imaging of craniopharyngiomas

    Energy Technology Data Exchange (ETDEWEB)

    Hald, J.K. [Rijkshospitalet, Oslo (Norway). Dept. of Radiology; Eldevik, O.P. [Rijkshospitalet, Oslo (Norway). Dept. of Neurosurgery; Quint, D.J. [Rijkshospitalet, Oslo (Norway). Dept. of Neurosurgery; Chandler, W.F. [Univ. of Michigan Hospital, Ann Arbor, MI (United States). Dept. of Radiology; Kollevold, T. [Univ. of Michigan Hospital, Ann Arbor, MI (United States). Dept. of Neurosurgery

    1996-09-01

    Purpose: To compare the pre- and postoperative MR appearance of craniopharyngiomas with respect to lesion size, tumour morphology and identification of surrounding normal structures. Material and Methods: MR images obtained prior to and following craniopharyngioma resection were evaluated retrospectively in 10 patients. Tumour signal characteristics, size and extension, with particular reference to the optic chiasm, the pituitary gland, the pituitary stalk and the third ventricle, were evaluated. Results: Following surgery, tumour volume was reduced in all patients. In 6 patients there was further tumour volume reduction between the first and second postoperative images. Two of these patients received radiation therapy between the 2 postoperative studies, while 4 had no adjuvant treatment beyond the surgical intervention. There was improved visualization of the optic chiasm in 3, of the pituitary stalk in 1, and of the third ventricle in 9 of the 10 patients. The pituitary gland was identified preoperatively only in one patient, postoperatively only in another, both pre- and postoperatively in 5, and neither pre- nor postoperatively in 3 patients. In 3 patients MR imaging 0-7 days postoperatively identified tumour remnants not seen at the end of the surgical procedure. The signal intensities of solid and cystic tumour components were stable from the pre- to the first postoperative MR images. Increased optic tract signal present prior to surgery was gone 28 days postoperatively in one patient, but persisted on the left side for 197 days after surgery in another. Conclusion: Postoperative MR imaging of craniopharyngiomas demonstrated tumour volume reduction and tumour remnants not seen at surgery. Early postoperative MR imaging of craniopharyngiomas may overestimate the size of residual tumour. Improved visualization of peritumoral structures may be achieved. (orig.)

  4. Accuracy of pre-contrast imaging in abdominal magnetic resonance imaging of pediatric oncology patients

    International Nuclear Information System (INIS)

    Mohd Zaki, Faizah; Moineddin, Rahim; Grant, Ronald; Chavhan, Govind B.

    2016-01-01

    Safety concerns are increasingly raised regarding the use of gadolinium-based contrast media for MR imaging. To determine the accuracy of pre-contrast abdominal MR imaging for lesion detection and characterization in pediatric oncology patients. We included 120 children (37 boys and 83 girls; mean age 8.94 years) referred by oncology services. Twenty-five had MRI for the first time and 95 were follow-up scans. Two authors independently reviewed pre-contrast MR images to note the following information about the lesions: location, number, solid vs. cystic and likely nature. Pre- and post-contrast imaging reviewed together served as the reference standard. The overall sensitivity was 88% for the first reader and 90% for the second; specificity was 94% and 91%; positive predictive value was 96% and 94%; negative predictive value was 82% and 84%; accuracy of pre-contrast imaging for lesion detection as compared to the reference standard was 90% for both readers. The difference between mean number of lesions detected on pre-contrast imaging and reference standard was not significant for either reader (reader 1, P = 0.072; reader 2, P = 0.071). There was substantial agreement (kappa values of 0.76 and 0.72 for readers 1 and 2) between pre-contrast imaging and reference standard for determining solid vs. cystic lesion and likely nature of the lesion. The addition of post-contrast imaging increased confidence of both readers significantly (P < 0.0001), but the interobserver agreement for the change in confidence was poor (kappa 0.12). Pre-contrast abdominal MR imaging has high accuracy in lesion detection in pediatric oncology patients and shows substantial agreement with the reference standard for characterization of lesions. Gadolinium-based contrast media administration cannot be completely eliminated but can be avoided in many cases, with the decision made on a case-by-case basis, taking into consideration location and type of tumor. (orig.)

  5. Accuracy of pre-contrast imaging in abdominal magnetic resonance imaging of pediatric oncology patients

    Energy Technology Data Exchange (ETDEWEB)

    Mohd Zaki, Faizah [University of Toronto, Department of Diagnostic Imaging, The Hospital for Sick Children and Medical Imaging, Toronto, ON (Canada); Universiti Kebangsaan Malaysia Medical Center, Department of Radiology, Kuala Lumpur (Malaysia); Moineddin, Rahim [University of Toronto, Department of Family and Community Medicine, Toronto, ON (Canada); Grant, Ronald [University of Toronto, Department of Hematology and Oncology, The Hospital for Sick Children and Medical Imaging, Toronto, ON (Canada); Chavhan, Govind B. [University of Toronto, Department of Diagnostic Imaging, The Hospital for Sick Children and Medical Imaging, Toronto, ON (Canada)

    2016-11-15

    Safety concerns are increasingly raised regarding the use of gadolinium-based contrast media for MR imaging. To determine the accuracy of pre-contrast abdominal MR imaging for lesion detection and characterization in pediatric oncology patients. We included 120 children (37 boys and 83 girls; mean age 8.94 years) referred by oncology services. Twenty-five had MRI for the first time and 95 were follow-up scans. Two authors independently reviewed pre-contrast MR images to note the following information about the lesions: location, number, solid vs. cystic and likely nature. Pre- and post-contrast imaging reviewed together served as the reference standard. The overall sensitivity was 88% for the first reader and 90% for the second; specificity was 94% and 91%; positive predictive value was 96% and 94%; negative predictive value was 82% and 84%; accuracy of pre-contrast imaging for lesion detection as compared to the reference standard was 90% for both readers. The difference between mean number of lesions detected on pre-contrast imaging and reference standard was not significant for either reader (reader 1, P = 0.072; reader 2, P = 0.071). There was substantial agreement (kappa values of 0.76 and 0.72 for readers 1 and 2) between pre-contrast imaging and reference standard for determining solid vs. cystic lesion and likely nature of the lesion. The addition of post-contrast imaging increased confidence of both readers significantly (P < 0.0001), but the interobserver agreement for the change in confidence was poor (kappa 0.12). Pre-contrast abdominal MR imaging has high accuracy in lesion detection in pediatric oncology patients and shows substantial agreement with the reference standard for characterization of lesions. Gadolinium-based contrast media administration cannot be completely eliminated but can be avoided in many cases, with the decision made on a case-by-case basis, taking into consideration location and type of tumor. (orig.)

  6. Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis

    Science.gov (United States)

    Sakata, Ayaka; Xu, Yingying

    2018-03-01

    We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm considering nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida-Thouless condition in the spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the replica symmetric and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of the RS region for a nonconvex penalty is a significant advantage that indicates the region of smooth landscape of the optimization problem. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of ℓ1-based methods.

  7. An Image Processing Approach to Pre-compensation for Higher-Order Aberrations in the Eye

    Directory of Open Access Journals (Sweden)

    Miguel Alonso Jr

    2004-06-01

    Human beings rely heavily on vision for almost all of the tasks required in daily life. Because of this dependence on vision, humans with visual limitations, caused by genetic inheritance, disease, or age, will have difficulty completing many of the tasks required of them. Some individuals with severe visual impairments, known as higher-order aberrations, may have difficulty interacting with computers, even when using a traditional means of visual correction (e.g., spectacles, contact lenses). This is, in part, because these correction mechanisms can only compensate for the most regular (low-order) distortions or aberrations of the image in the eye. This paper presents an image processing approach that pre-compensates the images displayed on the computer screen, so as to counter the effect of the eye's aberrations on the image. The characterization of the eye required to perform this customized pre-compensation is the eye's Point Spread Function (PSF). Ophthalmic instruments generically called "Wavefront Analyzers" can now measure this description of the eye's optical properties. The characterization provided by these instruments also includes the higher-order aberration components and could, therefore, lead to a more comprehensive vision correction than traditional mechanisms. This paper explains the theoretical foundation of the methods proposed and illustrates them with experiments involving the emulation of a known and constant PSF by interposing a lens in the field of view of normally sighted test subjects.

  8. Application of regularization technique in image super-resolution algorithm via sparse representation

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Huang, Hui; Zheng, Li-xin

    2017-11-01

    To make use of the prior knowledge of the image more effectively and restore more details of the edges and structures, a novel sparse coding objective function is proposed by applying the principles of non-local similarity and manifold learning on the basis of the super-resolution algorithm via sparse representation. Firstly, the non-local similarity regularization term is constructed by using similar image patches to preserve the edge information. Then, the manifold learning regularization term is constructed by utilizing the locally linear embedding approach to enhance the structural information. The experimental results validate that the proposed algorithm achieves a significant improvement over several super-resolution algorithms in terms of both subjective visual effect and objective evaluation indices.

  9. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images affected by intensity inhomogeneity. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well handled. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracy as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Manifold Based Low-rank Regularization for Image Restoration and Semi-supervised Learning

    OpenAIRE

    Lai, Rongjie; Li, Jia

    2017-01-01

    Low-rank structures play an important role in recent advances on many problems in image science and data science. As a natural extension of low-rank structures to data with nonlinear structure, the concept of the low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restricted than the global low-rank regularization…

  11. Hydration Effects on the Stability of Calcium Carbonate Pre-Nucleation Species

    Directory of Open Access Journals (Sweden)

    Alejandro Burgos-Cara

    2017-07-01

    Recent experimental evidence and computer modeling have shown that the crystallization of a range of minerals does not necessarily follow classical models and theories. In several systems, liquid precursors, stable pre-nucleation clusters and amorphous phases precede the nucleation and growth of stable mineral phases. However, little is known about the effect of background ionic species on the formation and stability of pre-nucleation species formed in aqueous solutions. Here, we present a systematic study of the effect of a range of background ions on the crystallization of solid phases in the CaCO3-H2O system, which has been thoroughly studied due to its technical and mineralogical importance, and is known to undergo non-classical crystallization pathways. The induction time for the onset of calcium carbonate nucleation and the effective critical supersaturation are systematically higher in the presence of background ions with decreasing ionic radii. We propose that the stabilization of water molecules in the pre-nucleation clusters by background ions can explain these results. The stabilization of solvation water hinders cluster dehydration, which is an essential step for precipitation. This hypothesis is corroborated by the observed correlation between parameters such as the macroscopic equilibrium constant for the formation of calcium/carbonate ion associates, the induction time, and the ionic radius of the background ions in solution. Overall, these results provide new evidence supporting the hypothesis that pre-nucleation cluster dehydration is the rate-controlling step for calcium carbonate precipitation.

  12. Phantom experiments using soft-prior regularization EIT for breast cancer imaging.

    Science.gov (United States)

    Murphy, Ethan K; Mahara, Aditya; Wu, Xiaotian; Halter, Ryan J

    2017-06-01

    A soft-prior regularization (SR) electrical impedance tomography (EIT) technique for breast cancer imaging is described, which shows an ability to accurately reconstruct tumor/inclusion conductivity values within a dense breast model investigated using a cylindrical and a breast-shaped tank. The SR-EIT method relies on knowing the spatial location of a suspicious lesion initially detected with a second imaging modality. Standard approaches (using Laplace smoothing and total variation regularization) without prior structural information are unable to accurately reconstruct or detect the tumors. The soft-prior approach represents a very significant improvement over these standard approaches, and has the potential to improve conventional imaging techniques, such as automated whole breast ultrasound (AWB-US), by providing electrical property information about suspicious lesions to improve AWB-US's ability to discriminate benign from cancerous lesions. Specifically, the best soft-prior regularization technique found average absolute tumor/inclusion errors of 0.015 S m⁻¹ for the cylindrical test, and 0.055 S m⁻¹ and 0.080 S m⁻¹ for the breast-shaped tank with 1.8 cm and 2.5 cm inclusions, respectively. The standard approaches were statistically unable to distinguish the tumor from the mammary gland tissue. An analysis of false tumors (benign suspicious lesions) provides extra insight into the potential and challenges EIT faces in providing clinically relevant information. The ability to obtain accurate conductivity values for a suspicious lesion (>1.8 cm) detected with another modality (e.g. AWB-US) could significantly reduce false positives and result in a clinically important technology.

  13. Iterative reconstruction for x-ray computed tomography using prior-image induced nonlocal regularization.

    Science.gov (United States)

    Zhang, Hua; Huang, Jing; Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2014-09-01

    Repeated X-ray computed tomography (CT) scans are often required in several specific applications, such as perfusion imaging, image-guided needle biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose significantly increases in comparison with that used in a conventional CT scan, which has raised major concerns for patients. In this study, to realize radiation dose reduction by reducing the X-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as "PWLS-PINL". Specifically, the PINL regularization utilizes the redundant information in the prior image, while the weighted least-squares term considers a data-dependent variance estimation, aiming to improve current low-dose image quality. Subsequently, a modified iterative successive overrelaxation algorithm is adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method can achieve promising gains over other existing methods in terms of noise reduction, low-contrast object detection, and edge detail preservation.

  14. HIERARCHICAL REGULARIZATION OF POLYGONS FOR PHOTOGRAMMETRIC POINT CLOUDS OF OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    L. Xie

    2017-05-01

    Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines, which are topologically defect-laden, free of semantic information, and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is still a non-trivial task, given the fact that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for the photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level is used to consolidate the original points by refining the orientation and position of the points using linear priors. The points are then grouped into local segments by forward searching. At the global level, regularities are enforced through a labeling process, which encourages segments to share the same label, where a shared label indicates that the segments are parallel or orthogonal. This is formulated as a Markov random field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, which reveals that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.

  15. Videokymography. Imaging and quantification of regular and irregular vocal fold vibrations

    NARCIS (Netherlands)

    Schutte, HK; Svec, JG; Sram, F; McCafferty, G; Coman, W; Carroll, R

    1996-01-01

    A newly developed imaging technique makes it possible to observe the vocal fold vibration pattern also under unstable conditions. In contrast to stroboscopy, which strongly relies on the regularity of the vibration under study, videokymography enables the study of irregular patterns as well. The

  16. Evaluation on ecological stability and biodegradation of dyeing wastewater pre-treated by electron beam

    International Nuclear Information System (INIS)

    Lee, M.J.; Park, C.K.; Yoo, D.H.; Lee, J.K.; Lee, B.J.; Han, B.S.; Kim, J.K.; Kim, Y.R.

    2005-01-01

    Biological treatment of dye wastewater pre-treated by electron beam has been performed in order to evaluate the biodegradation and ecological stability of the effluent. Electron-beam treatment of wastewater exploits chemical transformations of pollutants induced by ionizing radiation. Partial decomposition of pollutants takes place, as well as transformations of pollutant molecules that improve subsequent purification stages such as biological processing. Dyeing wastewater contains many kinds of pollutants which are difficult to decompose completely by microorganisms. In this study, biodegradation of dyeing wastewater pre-treated by electron beams was observed. In addition, consideration was given to public acceptance in terms of the ecological stability of the biological effluent pre-treated by electron beams. The results of laboratory investigations on biodegradation and ecological stability of the effluent showed that biodegradation of dye wastewater pre-treated by electron beam was enhanced compared to the unirradiated wastewater. In the initial stage of biological oxidation, regardless of different HRT, dye wastewater pre-treated by electron beam could be oxidized more easily than untreated wastewater. A greater number of Daphnia magna survived in the biological effluent pre-treated by electron beam, which suggests that the effluent is safe for the ecological system.

  17. A Novel Kernel-Based Regularization Technique for PET Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Abdelwahhab Boudjelal

    2017-06-01

    Positron emission tomography (PET) is an imaging technique that generates 3D detail of physiological processes at the cellular level. The technique requires a radioactive tracer, which decays and releases a positron that collides with an electron; consequently, annihilation photons are emitted, which can be measured. The purpose of PET is to use the measurement of photons to reconstruct the distribution of radioisotopes in the body. Currently, PET is undergoing a revamp, with advancements in data measurement instruments and in the computing methods used to create the images. These computational methods are required to solve the inverse problem of "image reconstruction from projection". This paper proposes a novel kernel-based regularization technique for maximum-likelihood expectation-maximization (κ-MLEM) to reconstruct the image. Compared to standard MLEM, the proposed algorithm is more robust and more effective in removing background noise whilst preserving edges; this suppresses image artifacts, such as out-of-focus slice blur.
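For reference, the standard MLEM update that the κ-MLEM variant regularizes looks as follows (a generic sketch with a dense system matrix; real PET code uses on-the-fly projectors and sparse storage):

```python
import numpy as np

def mlem(A, y, iters=50, eps=1e-12):
    """Unregularized MLEM: x <- x / (A^T 1) * A^T( y / (A x) ).
    A: nonnegative system matrix (n_bins, n_voxels); y: measured counts."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, eps)      # data / forward projection
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x
```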

  18. ℓ1/2-norm regularized nonnegative low-rank and sparse affinity graph for remote sensing image segmentation

    Science.gov (United States)

    Tian, Shu; Zhang, Ye; Yan, Yiming; Su, Nan

    2016-10-01

    Segmentation of real-world remote sensing images is a challenge due to complex texture information with high heterogeneity. Thus, graph-based image segmentation methods have been attracting great attention in the field of remote sensing. However, most of the traditional graph-based approaches fail to capture the intrinsic structure of the feature space and are sensitive to noise. An ℓ1/2-norm regularization-based graph segmentation method is proposed to segment remote sensing images. First, we use the occlusion of the random texture model (ORTM) to extract local histogram features. Then, an ℓ1/2-norm regularized low-rank and sparse representation (LNNLRS) is implemented to construct an ℓ1/2-regularized nonnegative low-rank and sparse graph (LNNLRS-graph) by the union of feature subspaces. Moreover, the LNNLRS-graph has a high ability to discriminate the manifold intrinsic structure of highly homogeneous texture information. Meanwhile, the LNNLRS representation takes advantage of the low-rank and sparse characteristics to remove noise and corrupted data. Finally, we introduce the LNNLRS-graph into graph regularization nonnegative matrix factorization to enhance the segmentation accuracy. Experimental results using remote sensing images show that, compared to five state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.

  19. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    Science.gov (United States)

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Onto Convex Sets (POCS) method has been used to solve the formulated regularized SPIRiT problem, but the quality of the reconstructed image still needs to be improved. Although methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, they demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms; the latter is solved by our proposed split Bregman based denoising algorithm. The algorithm adopts the Barzilai-Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels, and 2 times faster for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.
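
    The Barzilai-Borwein step-size rule mentioned above is compact enough to sketch; here is a hedged numpy toy of the BB1 rule on a simple least-squares problem (the paper applies the same rule inside its operator-splitting SPIRiT solver, which is not reproduced here, and all names below are illustrative).

      import numpy as np

      # Barzilai-Borwein (BB1) step size for gradient descent:
      #   alpha_k = (s^T s) / (s^T y), s = x_k - x_{k-1}, y = g_k - g_{k-1}
      rng = np.random.default_rng(1)
      A = rng.standard_normal((40, 20))
      b = rng.standard_normal(40)
      grad = lambda x: A.T @ (A @ x - b)             # gradient of 0.5||Ax - b||^2

      x = np.zeros(20)
      g = grad(x)
      alpha = 1e-3                                   # initial step before BB info exists
      for _ in range(50):
          x_new = x - alpha * g
          g_new = grad(x_new)
          s, yv = x_new - x, g_new - g               # iterate / gradient differences
          alpha = (s @ s) / max(s @ yv, 1e-12)       # BB1 step size
          x, g = x_new, g_new

      print("residual:", np.linalg.norm(A @ x - b))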

  20. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    Full Text Available This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. As the iterations increase, IST usually over-smooths the solution and converges prematurely. To restore more detail, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous one. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. In addition, BAIST adopts a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
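
    For orientation, here is a minimal numpy sketch of the plain IST iteration that BAIST modifies: a gradient step on the data term followed by soft-thresholding. The sensing matrix, sparsity level, and threshold below are illustrative assumptions, not values from the paper.

      import numpy as np

      # Plain IST for min 0.5||y - Phi x||^2 + lam ||x||_1:
      #   x <- soft(x + Phi^T(y - Phi x)/L, lam/L)
      rng = np.random.default_rng(2)
      n, m, k = 200, 80, 8
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # toy sensing matrix
      y = Phi @ x_true

      soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
      L = np.linalg.norm(Phi, 2) ** 2                  # Lipschitz constant of the gradient
      lam = 0.01
      x = np.zeros(n)
      for _ in range(500):
          x = soft(x + Phi.T @ (y - Phi @ x) / L, lam / L)

      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))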

  1. Pre-analytical factors influencing the stability of cerebrospinal fluid proteins

    DEFF Research Database (Denmark)

    Simonsen, Anja H; Bahl, Justyna M C; Danborg, Pia B

    2013-01-01

    Cerebrospinal fluid (CSF) is a potential source for new biomarkers due to its proximity to the brain. This study aimed to clarify the stability of the CSF proteome when subjected to pre-analytical factors. We investigated the effects of repeated freeze/thaw cycles, protease inhibitors and delayed s...

  2. From regular text to artistic writing and artworks: Fourier statistics of images with low and high aesthetic appeal

    Directory of Open Access Journals (Sweden)

    Tamara eMelmer

    2013-04-01

    Full Text Available The spatial characteristics of letters and their influence on readability and letter identification have been intensely studied during the last decades. There have been few studies, however, on statistical image properties that reflect more global aspects of text, for example properties that may relate to its aesthetic appeal. It has been shown that natural scenes and a large variety of visual artworks possess a scale-invariant Fourier power spectrum that falls off linearly with increasing frequency in log-log plots. We asked whether images of text share this property. As expected, the Fourier spectrum of images of regular typed or handwritten text is highly anisotropic, i.e., the spectral image properties in vertical, horizontal and oblique orientations differ. Moreover, the spatial frequency spectra of text images are not scale-invariant in any direction. The decline is shallower in the low-frequency part of the spectrum for text than for aesthetic artworks, whereas, in the high-frequency part, it is steeper. These results indicate that, in general, images of regular text contain less global structure (low spatial frequencies) relative to fine detail (high spatial frequencies) than images of aesthetic artworks. Moreover, we studied images of text with artistic claim (ornate print and calligraphy) and ornamental art. For some measures, these images assume average values intermediate between regular text and aesthetic artworks. Finally, to answer the question of whether the statistical properties measured by us are universal amongst humans or are subject to intercultural differences, we compared images from three different cultural backgrounds (Western, East Asian and Arabic). Results for different categories (regular text, aesthetic writing, ornamental art and fine art) were similar across cultures.

  3. From regular text to artistic writing and artworks: Fourier statistics of images with low and high aesthetic appeal

    Science.gov (United States)

    Melmer, Tamara; Amirshahi, Seyed A.; Koch, Michael; Denzler, Joachim; Redies, Christoph

    2013-01-01

    The spatial characteristics of letters and their influence on readability and letter identification have been intensely studied during the last decades. There have been few studies, however, on statistical image properties that reflect more global aspects of text, for example properties that may relate to its aesthetic appeal. It has been shown that natural scenes and a large variety of visual artworks possess a scale-invariant Fourier power spectrum that falls off linearly with increasing frequency in log-log plots. We asked whether images of text share this property. As expected, the Fourier spectrum of images of regular typed or handwritten text is highly anisotropic, i.e., the spectral image properties in vertical, horizontal, and oblique orientations differ. Moreover, the spatial frequency spectra of text images are not scale-invariant in any direction. The decline is shallower in the low-frequency part of the spectrum for text than for aesthetic artworks, whereas, in the high-frequency part, it is steeper. These results indicate that, in general, images of regular text contain less global structure (low spatial frequencies) relative to fine detail (high spatial frequencies) than images of aesthetic artworks. Moreover, we studied images of text with artistic claim (ornate print and calligraphy) and ornamental art. For some measures, these images assume average values intermediate between regular text and aesthetic artworks. Finally, to answer the question of whether the statistical properties measured by us are universal amongst humans or are subject to intercultural differences, we compared images from three different cultural backgrounds (Western, East Asian, and Arabic). Results for different categories (regular text, aesthetic writing, ornamental art, and fine art) were similar across cultures. PMID:23554592

  4. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Full Text Available Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast-changing objects. Known SSI devices exhibit large total track length (TTL), weight and production costs and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this SSI camera is capable of performing color imaging using a monochromatic or gray-scale image sensor without color filter arrays.

  5. Prospective regularization design in prior-image-based reconstruction

    International Nuclear Information System (INIS)

    Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster

    2015-01-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in

  6. Functional imaging in pre-motor Parkinson’s disease

    International Nuclear Information System (INIS)

    Arnaldi, D.; Picco, A.; Ferrara, M.; Nobili, F.; Famà, F.; Buschiazzo, A.; Morbelli, S.; De Carli, F.

    2014-01-01

    Several non-motor symptoms (NMS) can precede the onset of the classical motor Parkinson's disease (PD) syndrome. The existence of pre-motor and even pre-clinical PD stages has been proposed, but the best target population to be screened to disclose PD patients in a pre-clinical, thus asymptomatic, stage is still a matter of debate. REM sleep behavior disorder (RBD) often affects PD patients at different stages of the disease and can precede the onset of motor symptoms by several years. However, RBD can also precede other synucleinopathies (namely, dementia with Lewy bodies and multiple system atrophy), can less frequently be related to other neurological conditions, or can remain idiopathic. Moreover, not all PD patients exhibit RBD. Despite these caveats, RBD probably represents the best feature for disclosing pre-motor PD patients, given the high risk of developing a full motor syndrome. Other clinical clues in the pre-motor stages of PD undergoing active investigation include hyposmia, depression, and autonomic dysfunction. Effective biomarkers are needed in order to improve diagnostic accuracy in the pre-motor stage of PD, to monitor disease progression and to plan both pharmacological and non-pharmacological interventions. Functional imaging, in particular radionuclide methodologies, has often been used to investigate dopaminergic and non-dopaminergic features as well as cortical functioning in patients with RBD in its idiopathic form (iRBD) and/or associated with PD. Recently, new tracers to image α-synuclein pathologies are under development. Functional imaging in pre-motor PD, and in particular in iRBD, could improve our knowledge about the underlying mechanisms and the neurodegenerative progression of PD.

  7. Imaging stability in force-feedback high-speed atomic force microscopy

    International Nuclear Information System (INIS)

    Kim, Byung I.; Boehm, Ryan D.

    2013-01-01

    We studied the stability of force-feedback high-speed atomic force microscopy (HSAFM) by imaging soft, hard, and biological sample surfaces at various applied forces. The HSAFM images showed sudden topographic variations of streaky fringes with a negative applied force when collected on a soft hydrocarbon film grown on a grating sample, whereas they showed stable topographic features with positive applied forces. The instability of HSAFM images with negative applied force is explained by the transition between contact and noncontact regimes in the force–distance curve. When the grating surface was cleaned, and thus made hydrophilic by removing the hydrocarbon film, enhanced imaging stability was observed at both positive and negative applied forces. The higher adhesive interaction between the tip and the surface explains the improved imaging stability. The effect of imaging rate on imaging stability was tested on an even softer, adhesive Escherichia coli biofilm deposited onto the grating structure. The biofilm and planktonic cell structures in HSAFM images were reproducible within a force deviation of less than ∼0.5 nN at imaging rates up to 0.2 s per frame, suggesting that force-feedback HSAFM is stable at various imaging speeds when imaging softer adhesive biological samples. - Highlights: We investigated the imaging stability of force-feedback HSAFM. Stable–unstable imaging transitions depend on the applied force and sample hydrophilicity. The stable–unstable transitions are found to be independent of imaging rate.

  8. Comparative performance evaluation of transform coding in image pre-processing

    Science.gov (United States)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation which drives the development, as well as the dissemination, of pioneering communication systems with ever-increasing fidelity and resolution. Considerable research effort has been invested in image processing techniques, driven by a growing demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers intend to throw light on techniques which can be used at the transmitter end to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, their comparison and effectiveness, the necessary and sufficient conditions, and their properties and implementation complexity. Motivated by prior advancements in image processing techniques, the researchers compare the performance of several contemporary image pre-processing frameworks: Compressed Sensing, Singular Value Decomposition, and the Integer Wavelet Transform. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.

  9. Cross-Modality Image Synthesis via Weakly Coupled and Geometry Co-Regularized Joint Dictionary Learning.

    Science.gov (United States)

    Huang, Yawen; Shao, Ling; Frangi, Alejandro F

    2018-03-01

    Multi-modality medical imaging is increasingly used for the comprehensive assessment of complex diseases, in either diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors, such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis, while considering the fact that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To exploit both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. Then, we propose a unified model by integrating such a criterion into the joint dictionary learning and the observed common feature space for associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate superior performance of the proposed model over state-of-the-art methods.

  10. Bias correction for magnetic resonance images via joint entropy regularization.

    Science.gov (United States)

    Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang

    2014-01-01

    Due to the imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe problems for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms have been proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruptions and also tested on real MR data. The test results show that the two proposed methods can effectively remove the bias field and present performance comparable to state-of-the-art methods.

  11. Improvement of Sidestream Dark Field Imaging with an Image Acquisition Stabilizer

    Directory of Open Access Journals (Sweden)

    Sjauw Krishan D

    2010-07-01

    Full Text Available Abstract Background In the present study we developed, evaluated in volunteers, and clinically validated an image acquisition stabilizer (IAS) for Sidestream Dark Field (SDF) imaging. Methods The IAS is a stainless steel sterilizable ring which fits around the SDF probe tip. The IAS creates adhesion to the imaged tissue by application of negative pressure. The effects of the IAS on the sublingual microcirculatory flow velocities, the force required to induce pressure artifacts (PA), the time to acquire a stable image, and the duration of stable imaging were assessed in healthy volunteers. To demonstrate the clinical applicability of the SDF setup in combination with the IAS, simultaneous bilateral sublingual imaging of the microcirculation was performed during a lung recruitment maneuver (LRM) in mechanically ventilated critically ill patients. One SDF device was operated handheld; the second was fitted with the IAS and held in position by a mechanic arm. Lateral drift, number of losses of image stability and duration of stable imaging of the two methods were compared. Results Five healthy volunteers were studied. The IAS did not affect microcirculatory flow velocities. A significantly greater force had to be applied onto the tissue to induce PA with, compared to without, the IAS (0.25 ± 0.15 N without vs. 0.62 ± 0.05 N with the IAS, p < 0.001). The IAS ensured an increased duration of a stable image sequence (8 ± 2 s without vs. 42 ± 8 s with the IAS, p < 0.001). The time required to obtain a stable image sequence was similar with and without the IAS. In eight mechanically ventilated patients undergoing a LRM the use of the IAS resulted in significantly reduced image drifting and enabled the acquisition of significantly longer stable image sequences (24 ± 5 s without vs. 67 ± 14 s with the IAS, p = 0.006). Conclusions The present study has validated the use of an IAS for improvement of SDF imaging by demonstrating that the IAS did not affect microcirculatory perfusion in the microscopic field of view. The IAS improved both axial and lateral SDF image stability and thereby increased the critical force required to induce pressure artifacts. The IAS ensured a significantly increased duration of maintaining a stable image sequence.

  12. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. The paper also presents the proposed application and a sample result of hyperspectral image analysis. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Improvement of sidestream dark field imaging with an image acquisition stabilizer.

    Science.gov (United States)

    Balestra, Gianmarco M; Bezemer, Rick; Boerma, E Christiaan; Yong, Ze-Yie; Sjauw, Krishan D; Engstrom, Annemarie E; Koopmans, Matty; Ince, Can

    2010-07-13

    In the present study we developed, evaluated in volunteers, and clinically validated an image acquisition stabilizer (IAS) for Sidestream Dark Field (SDF) imaging. The IAS is a stainless steel sterilizable ring which fits around the SDF probe tip. The IAS creates adhesion to the imaged tissue by application of negative pressure. The effects of the IAS on the sublingual microcirculatory flow velocities, the force required to induce pressure artifacts (PA), the time to acquire a stable image, and the duration of stable imaging were assessed in healthy volunteers. To demonstrate the clinical applicability of the SDF setup in combination with the IAS, simultaneous bilateral sublingual imaging of the microcirculation was performed during a lung recruitment maneuver (LRM) in mechanically ventilated critically ill patients. One SDF device was operated handheld; the second was fitted with the IAS and held in position by a mechanic arm. Lateral drift, number of losses of image stability and duration of stable imaging of the two methods were compared. Five healthy volunteers were studied. The IAS did not affect microcirculatory flow velocities. A significantly greater force had to be applied onto the tissue to induce PA with, compared to without, the IAS (0.25 +/- 0.15 N without vs. 0.62 +/- 0.05 N with the IAS, p < 0.001). The IAS ensured an increased duration of a stable image sequence (8 +/- 2 s without vs. 42 +/- 8 s with the IAS, p < 0.001). The time required to obtain a stable image sequence was similar with and without the IAS. In eight mechanically ventilated patients undergoing a LRM the use of the IAS resulted in significantly reduced image drifting and enabled the acquisition of significantly longer stable image sequences (24 +/- 5 s without vs. 67 +/- 14 s with the IAS, p = 0.006). The present study has validated the use of an IAS for improvement of SDF imaging by demonstrating that the IAS did not affect microcirculatory perfusion in the microscopic field of view. The IAS improved both axial and lateral SDF image stability and thereby increased the critical force required to induce pressure artifacts.

  14. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

    We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of Total Variation image deconvolution. This approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The Total Variation image deconvolution is performed with the Alternating Direction Method of Multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section.
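
    To make the discrepancy principle concrete, here is a hedged numpy sketch on a toy Tikhonov (rather than TV) problem: the regularization weight is chosen so that the data residual matches the expected noise level. The paper refines the choice with a model function and a Newton step; simple log-space bisection is used below purely for illustration, and all values are assumptions.

      import numpy as np

      # Morozov discrepancy principle for Tikhonov regularization:
      # find lam such that ||A x(lam) - y|| ~ expected noise norm.
      rng = np.random.default_rng(3)
      n = 60
      A = rng.standard_normal((n, n))
      x_true = rng.standard_normal(n)
      sigma = 0.5
      y = A @ x_true + sigma * rng.standard_normal(n)
      target = sigma * np.sqrt(n)                    # expected residual norm

      def residual(lam):
          x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
          return np.linalg.norm(A @ x - y)

      lo, hi = 1e-6, 1e3                             # residual grows with lam
      for _ in range(60):
          mid = np.sqrt(lo * hi)                     # bisection in log space
          lo, hi = (mid, hi) if residual(mid) < target else (lo, mid)

      print("chosen lambda:", lo, "residual:", residual(lo), "target:", target)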

  15. Semi-supervised manifold learning with affinity regularization for Alzheimer's disease identification using positron emission tomography imaging.

    Science.gov (United States)

    Lu, Shen; Xia, Yong; Cai, Tom Weidong; Feng, David Dagan

    2015-01-01

    Dementia, and Alzheimer's disease (AD) in particular, is a global problem and a major threat to the aging population. Image-based computer-aided dementia diagnosis methods are needed to assist doctors during medical image examination. Many machine learning based dementia classification methods using medical imaging have been proposed, and most of them achieve accurate results. However, most of these methods rely on supervised learning, which requires a fully labeled image dataset that is usually impractical to obtain in a real clinical environment. Using large amounts of unlabeled images can improve dementia classification performance. In this study we propose a new semi-supervised dementia classification method based on random manifold learning with affinity regularization. Three groups of spatial features are extracted from positron emission tomography (PET) images to construct an unsupervised random forest, which is then used to regularize the manifold learning objective function. The proposed method, the state-of-the-art Laplacian support vector machine (LapSVM), and a supervised SVM are applied to classify AD and normal controls (NC). The experimental results show that learning with unlabeled images indeed improves classification performance, and our method outperforms LapSVM on the same dataset.

  16. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of a correct match is raised greatly. The effect is especially remarkable for image pairs with a low S/N.

  17. Improvement of Sidestream Dark Field Imaging with an Image Acquisition Stabilizer

    International Nuclear Information System (INIS)

    Balestra, Gianmarco M; Bezemer, Rick; Boerma, E Christiaan; Yong, Ze-Yie; Sjauw, Krishan D; Engstrom, Annemarie E; Koopmans, Matty; Ince, Can

    2010-01-01

    In the present study we developed, evaluated in volunteers, and clinically validated an image acquisition stabilizer (IAS) for Sidestream Dark Field (SDF) imaging. The IAS is a stainless steel sterilizable ring which fits around the SDF probe tip. The IAS creates adhesion to the imaged tissue by application of negative pressure. The effects of the IAS on the sublingual microcirculatory flow velocities, the force required to induce pressure artifacts (PA), the time to acquire a stable image, and the duration of stable imaging were assessed in healthy volunteers. To demonstrate the clinical applicability of the SDF setup in combination with the IAS, simultaneous bilateral sublingual imaging of the microcirculation was performed during a lung recruitment maneuver (LRM) in mechanically ventilated critically ill patients. One SDF device was operated handheld; the second was fitted with the IAS and held in position by a mechanic arm. Lateral drift, number of losses of image stability and duration of stable imaging of the two methods were compared. Five healthy volunteers were studied. The IAS did not affect microcirculatory flow velocities. A significantly greater force had to be applied onto the tissue to induce PA with, compared to without, the IAS (0.25 ± 0.15 N without vs. 0.62 ± 0.05 N with the IAS, p < 0.001). The IAS ensured an increased duration of a stable image sequence (8 ± 2 s without vs. 42 ± 8 s with the IAS, p < 0.001). The time required to obtain a stable image sequence was similar with and without the IAS. In eight mechanically ventilated patients undergoing a LRM the use of the IAS resulted in significantly reduced image drifting and enabled the acquisition of significantly longer stable image sequences (24 ± 5 s without vs. 67 ± 14 s with the IAS, p = 0.006). The present study has validated the use of an IAS for improvement of SDF imaging by demonstrating that the IAS did not affect microcirculatory perfusion in the microscopic field of view. The IAS

  18. Dry eye evaluation and correlation analysis between tear film stability and corneal surface regularity after small incision lenticule extraction.

    Science.gov (United States)

    Zhang, Hui; Wang, Yan

    2017-09-22

    To investigate dry eye after small incision lenticule extraction (SMILE) and explore the correlations between changes in tear film stability, tear secretion and corneal surface regularity. Sixty-two eyes of 22 men and 13 women who underwent SMILE were included in this study. Corneal topography was measured to assess the index of surface variance (ISV) and the index of vertical asymmetry (IVA). Dry eye tests including a subjective symptom questionnaire, tear breakup time (TBUT), corneal fluorescein staining and Schirmer's test (ST) were evaluated before and at 1 and 6 months postoperatively. TBUT was found to be significantly decreased from 9.8 ± 3.4 s preoperatively to 7.4 ± 3.8 s at 1 month and 6.5 ± 3.6 s at 6 months (both P < 0.05), suggesting a short-TBUT type of dry eye. Corneal surface regularity indices might be helpful in the assessment of tear film stability following the SMILE procedure.

  19. Beamforming Through Regularized Inverse Problems in Ultrasound Medical Imaging.

    Science.gov (United States)

    Szasz, Teodora; Basarab, Adrian; Kouame, Denis

    2016-12-01

    Beamforming (BF) in ultrasound (US) imaging has a significant impact on the quality of the final image, controlling its resolution and contrast. Despite its low spatial resolution and contrast, delay-and-sum (DAS) is still extensively used nowadays in clinical applications, due to its real-time capabilities. The most common alternatives are the minimum variance (MV) method and its variants, which overcome the drawbacks of DAS at the cost of higher computational complexity, which limits their utilization in real-time applications. In this paper, we propose to perform BF in US imaging through a regularized inverse problem based on a linear model relating the reflected echoes to the signal to be recovered. Our approach presents two major advantages: 1) its flexibility in the choice of statistical assumptions on the signal to be beamformed (Laplacian and Gaussian statistics are tested herein) and 2) its robustness to a reduced number of pulse emissions. The proposed framework is flexible and allows for choosing the right tradeoff between noise suppression and sharpness of the resulting image. We illustrate the performance of our approach on both simulated and experimental data, with in vivo examples of carotid and thyroid. Compared with DAS, MV, and two other recently published BF techniques, our method offers better spatial resolution and contrast, respectively, when using Laplacian and Gaussian priors.
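
    Since DAS is the baseline against which the regularized inverse-problem beamformer is compared, a minimal numpy sketch of plane-wave DAS may help; the geometry, pulse model, and scatterer position are illustrative assumptions, and the synthetic data use the same time-of-flight model as the beamformer so the example is self-consistent.

      import numpy as np

      # Delay-and-sum: for each image point, delay each channel by its
      # time of flight (plane-wave transmit + receive path) and sum.
      c, fs, f0 = 1540.0, 40e6, 5e6             # sound speed, sampling rate, pulse freq
      n_elem, pitch = 32, 0.3e-3
      elem_x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch
      t = np.arange(2048) / fs

      # Synthetic RF data from one point scatterer at (0, 20 mm)
      xs, zs = 0.0, 20e-3
      rf = np.zeros((n_elem, t.size))
      for i in range(n_elem):
          tof = (zs + np.hypot(xs - elem_x[i], zs)) / c
          rf[i] = np.exp(-((t - tof) * f0) ** 2) * np.cos(2 * np.pi * f0 * (t - tof))

      def das_pixel(x, z):
          tof = (z + np.hypot(x - elem_x, z)) / c
          idx = np.clip((tof * fs).astype(int), 0, t.size - 1)
          return rf[np.arange(n_elem), idx].sum()   # coherent sum across channels

      print("on scatterer :", das_pixel(0.0, 20e-3))
      print("off scatterer:", das_pixel(2e-3, 15e-3))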

  20. Research for correction pre-operative MRI images of brain during operation using particle method simulation

    International Nuclear Information System (INIS)

    Shino, Ryosaku; Koshizuka, Seiichi; Sakai, Mikio; Ito, Hirotaka; Iseki, Hiroshi; Muragaki, Yoshihiro

    2010-01-01

    In neurosurgical procedures, the surgeon formulates a surgery plan based on pre-operative images such as MRI. However, the brain is deformed by removal of the affected area. In this paper, we propose a method for reconstructing pre-operative images so that they reflect this deformation, using physical simulation. First, the domain of the brain is identified in the pre-operative images. Second, we create particles for the physical simulation. Then, we carry out a linear elastic simulation taking gravity into account. Finally, we reconstruct the pre-operative images with deformation according to the movement of the particles. We show the effectiveness of this method by reconstructing a pre-operative image actually taken before surgery. (author)

  1. Improvement of Sidestream Dark Field Imaging with an Image Acquisition Stabilizer

    NARCIS (Netherlands)

    Balestra, Gianmarco M.; Bezemer, Rick; Boerma, E. Christiaan; Yong, Ze-Yie; Sjauw, Krishan D.; Engstrom, Annemarie E.; Koopmans, Matty; Ince, Can

    2010-01-01

    ABSTRACT: BACKGROUND: In the present study we developed, evaluated in volunteers, and clinically validated an image acquisition stabilizer (IAS) for Sidestream Dark Field (SDF) imaging. METHODS: The IAS is a stainless steel sterilizable ring which fits around the SDF probe tip. The IAS creates

  2. Improvement of Sidestream Dark Field Imaging with an Image Acquisition Stabilizer

    NARCIS (Netherlands)

    G. Balestra (Gianmarco); R. Bezemer (Rick); E.C. Boerma (Christiaan); Z-Y. Yong (Ze-Yie); K.D. Sjauw (Krishan); A.E. Engstrom (Annemarie); M. Koopmans (Matty); C. Ince (Can)

    2010-01-01

    Background: In the present study we developed, evaluated in volunteers, and clinically validated an image acquisition stabilizer (IAS) for Sidestream Dark Field (SDF) imaging. Methods: The IAS is a stainless steel sterilizable ring which fits around the SDF probe tip. The IAS creates

  3. A soft double regularization approach to parametric blind image deconvolution.

    Science.gov (United States)

    Chen, Li; Yap, Kim-Hui

    2005-05-01

    This paper proposes a blind image deconvolution scheme based on soft integration of parametric blur structures. Conventional blind image deconvolution methods encounter a difficult dilemma of either imposing stringent and inflexible preconditions on the problem formulation or experiencing poor restoration results due to lack of information. This paper attempts to address this issue by assessing the relevance of parametric blur information, and incorporating the knowledge into the parametric double regularization (PDR) scheme. The PDR method assumes that the actual blur satisfies up to a certain degree of parametric structure, as there are many well-known parametric blurs in practical applications. Further, it can be tailored flexibly to include other blur types if some prior parametric knowledge of the blur is available. A manifold soft parametric modeling technique is proposed to generate the blur manifolds, and estimate the fuzzy blur structure. The PDR scheme involves the development of the meaningful cost function, the estimation of blur support and structure, and the optimization of the cost function. Experimental results show that it is effective in restoring degraded images under different environments.

  4. An innovative pre-targeting strategy for tumor cell specific imaging and therapy.

    Science.gov (United States)

    Qin, Si-Yong; Peng, Meng-Yun; Rong, Lei; Jia, Hui-Zhen; Chen, Si; Cheng, Si-Xue; Feng, Jun; Zhang, Xian-Zheng

    2015-09-21

    A programmed pre-targeting system for tumor cell imaging and targeting therapy was established based on the "biotin-avidin" interaction. In this programmed functional system, transferrin-biotin can be actively captured by tumor cells with the overexpression of transferrin receptors, thus achieving the pre-targeting modality. Depending upon avidin-biotin recognition, the attachment of multivalent FITC-avidin to biotinylated tumor cells not only offered the rapid fluorescence labelling, but also endowed the pre-targeted cells with targeting sites for the specifically designed biotinylated peptide nano-drug. Owing to the successful pre-targeting, tumorous HepG2 and HeLa cells were effectively distinguished from the normal 3T3 cells via fluorescence imaging. In addition, the self-assembled peptide nano-drug resulted in enhanced cell apoptosis in the observed HepG2 cells. The tumor cell specific pre-targeting strategy is applicable for a variety of different imaging and therapeutic agents for tumor treatments.

  5. Regularized Fractional Power Parameters for Image Denoising Based on Convex Solution of Fractional Heat Equation

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2014-01-01

    Full Text Available Interest in fractional mask operators based on fractional calculus has grown for image denoising. Denoising is one of the most fundamental image restoration problems in computer vision and image processing. This paper proposes an image denoising algorithm based on the convex solution of a fractional heat equation with regularized fractional power parameters. The performance of the proposed algorithm was evaluated by computing the PSNR for different types of images. Experiments based on visual perception and on peak signal-to-noise ratio values show that the improvements in the denoising process are competitive with the standard Gaussian filter and the Wiener filter.
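
    As a classical point of comparison for the fractional model, a few explicit Euler steps of the ordinary (integer-order) heat equation already act as a denoiser; the sketch below assumes unit grid spacing, periodic boundaries, and illustrative parameters, and does not implement the paper's fractional operators.

      import numpy as np

      # Linear heat-equation denoising: u_t = Laplacian(u), explicit Euler,
      # with dt <= 0.25 for 2D stability at unit grid spacing.
      rng = np.random.default_rng(4)
      clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0
      noisy = clean + 0.3 * rng.standard_normal(clean.shape)

      u, dt = noisy.copy(), 0.2
      for _ in range(10):
          lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)   # 5-point Laplacian
          u += dt * lap

      print("error before:", np.linalg.norm(noisy - clean))
      print("error after :", np.linalg.norm(u - clean))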

  6. Improvement of Sidestream Dark Field Imaging with an Image Acquisition Stabilizer

    OpenAIRE

    Balestra, Gianmarco M; Bezemer, Rick; Boerma, E Christiaan; Yong, Ze-Yie; Sjauw, Krishan D; Engstrom, Annemarie E; Koopmans, Matty; Ince, Can

    2010-01-01

    Abstract Background In the present study we developed, evaluated in volunteers, and clinically validated an image acquisition stabilizer (IAS) for Sidestream Dark Field (SDF) imaging. Methods The IAS is a stainless steel sterilizable ring which fits around the SDF probe tip. The IAS creates adhesion to the imaged tissue by application of negative pressure. The effects of the IAS on the sublingual microcirculatory flow velocities, the force required to induce pressure artifacts (PA), the time ...

  7. Contour extraction of echocardiographic images based on pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana [Department of Multimedia, Faculty of Computer Science and Information Technology, Department of Computer and Communication Systems Engineering, Faculty of Engineering University Putra Malaysia 43400 Serdang, Selangor (Malaysia); Zamrin, D M [Department of Surgery, Faculty of Medicine, National University of Malaysia, 56000 Cheras, Kuala Lumpur (Malaysia); Saripan, M Iqbal

    2011-02-15

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To achieve this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiograph images; after applying these techniques we obtain reliable detection of heart boundaries and valve movement with traditional edge detection methods.

  8. Contour extraction of echocardiographic images based on pre-processing

    International Nuclear Information System (INIS)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana; Zamrin, D M; Saripan, M Iqbal

    2011-01-01

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To achieve this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiograph images; after applying these techniques we obtain reliable detection of heart boundaries and valve movement with traditional edge detection methods.

  9. Energy functions for regularization algorithms

    Science.gov (United States)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used in regularization algorithms measure how smooth a curve or surface is, and to yield acceptable solutions these energies must satisfy certain properties, such as invariance under Euclidean transformations or invariance under reparameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that, to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance under rotation and parameterization.

  10. A Gimbal-Stabilized Compact Hyperspectral Imaging System, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The Gimbal-stabilized Compact Hyperspectral Imaging System (GCHIS) fully integrates multi-sensor spectral imaging, stereovision, GPS and inertial measurement,...

  11. Bayesian estimation of regularization and atlas building in diffeomorphic image registration.

    Science.gov (United States)

    Zhang, Miaomiao; Singh, Nikhil; Fletcher, P Thomas

    2013-01-01

    This paper presents a generative Bayesian model for diffeomorphic image registration and atlas building. We develop an atlas estimation procedure that simultaneously estimates the parameters controlling the smoothness of the diffeomorphic transformations. To achieve this, we introduce a Monte Carlo Expectation Maximization algorithm, where the expectation step is approximated via Hamiltonian Monte Carlo sampling on the manifold of diffeomorphisms. An added benefit of this stochastic approach is that it can successfully solve difficult registration problems involving large deformations, where direct geodesic optimization fails. Using synthetic data generated from the forward model with known parameters, we demonstrate the ability of our model to successfully recover the atlas and regularization parameters. We also demonstrate the effectiveness of the proposed method in the atlas estimation problem for 3D brain images.

  12. Stability of the Minimizers of Least Squares with a Non-Convex Regularization. Part I: Local Behavior

    International Nuclear Information System (INIS)

    Durand, S.; Nikolova, M.

    2006-01-01

    Many estimation problems amount to minimizing a piecewise C^m objective function, with m ≥ 2, composed of a quadratic data-fidelity term and a general regularization term. It is widely accepted that the minimizers obtained using non-convex and possibly non-smooth regularization terms are frequently good estimates. However, few facts are known on the ways to control properties of these minimizers. This work is dedicated to the stability of the minimizers of such objective functions with respect to variations of the data. It consists of two parts: first we consider all local minimizers, whereas in a second part we derive results on global minimizers. In this part we focus on data points such that every local minimizer is isolated and results from a C^(m-1) local minimizer function, defined on some neighborhood. We demonstrate that all data points for which this fails form a set whose closure is negligible.

  13. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard

    2012-01-01

    We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes. We

  14. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization

    Science.gov (United States)

    Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud

    2017-11-01

    Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in the material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-squares data-fidelity term and on the total variation and a Besov seminorm for the regularization term. To handle the Besov seminorm, an implementation using the curvelet transform and the L1 norm enforcing sparsity is proposed. This allows the model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of the non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.
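
    For intuition about the total variation term alone, here is a hedged numpy sketch of smoothed-TV (ROF-type) denoising by plain gradient descent; the paper itself solves the exact TV-plus-curvelet model with the Chambolle-Pock primal-dual algorithm, which this sketch does not reproduce. The smoothing parameter eps, step size, and regularization weight are illustrative assumptions.

      import numpy as np

      # Gradient descent on E(u) = 0.5||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps)
      rng = np.random.default_rng(5)
      clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0   # blocky phantom
      f = clean + 0.2 * rng.standard_normal(clean.shape)

      lam, eps, tau = 0.15, 1e-2, 0.05
      u = f.copy()
      for _ in range(400):
          ux = np.diff(u, axis=1, append=u[:, -1:])           # forward differences
          uy = np.diff(u, axis=0, append=u[-1:, :])
          mag = np.sqrt(ux ** 2 + uy ** 2 + eps)              # smoothed gradient norm
          px, py = ux / mag, uy / mag
          div = (px - np.roll(px, 1, axis=1) +
                 py - np.roll(py, 1, axis=0))                 # approximate divergence
          u -= tau * ((u - f) - lam * div)                    # gradient step

      print("noisy error   :", np.linalg.norm(f - clean))
      print("denoised error:", np.linalg.norm(u - clean))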

  15. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    International Nuclear Information System (INIS)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-01-01

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise

  16. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to algorithms which use either the patchwise transform learning or the global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Pre-analytic process control: projecting a quality image.

    Science.gov (United States)

    Serafin, Mark D

    2006-09-26

    Within the health-care system, the term "ancillary department" often describes the laboratory. Thus, laboratories may find it difficult to define their image and, with it, customer perception of department quality. Regulatory requirements give laboratories that so desire an elegant way to address image and perception issues--a comprehensive pre-analytic system solution. Since large laboratories use such systems--laboratory service manuals--I describe and illustrate the process for the benefit of smaller facilities. There exist resources to help even small laboratories produce a professional service manual--an elegant solution to image and customer perception of quality.

  18. Influence of pre-strain on thermal stability of non-equilibrium microstructures in a low alloy steel

    International Nuclear Information System (INIS)

    Sun, Chao; Yang, Shanwu; Wang, Xian; Zhang, Rui; He, Xinlai

    2013-01-01

    Highlights: High pre-strain and low pre-strain influence the thermal stability of non-equilibrium microstructures differently. High pre-strain, in which dislocation sources can be activated and the dislocation density is increased excessively, markedly promotes recrystallization. Low pre-strain, in which dislocations are induced to redistribute into a low-energy structure, can slow down microstructure evolution. -- Abstract: Non-equilibrium microstructures in steels, including martensite and bainite, which are the main phases in current high strength steels, possess high strength and hardness. However, these microstructures are metastable due to their high density of crystal defects. In the present investigation, hardness testing, optical microscopy and electron microscopy have been carried out to follow microstructure evolution in a low alloy steel reheated and held isothermally at 550 °C. Special emphasis was put on the influence of pre-strain on the thermal stability of non-equilibrium microstructures. It is found that high pre-strain, in which dislocation sources can be activated and the dislocation density is increased excessively, markedly promotes recrystallization of non-equilibrium microstructures at 550 °C, while low pre-strain, in which only mono-glide of dislocations can operate in each grain and dislocations are induced to redistribute into a low-energy structure, can slow down microstructure evolution.

  19. Regular Discrete Cosine Transform and its Application to Digital Images Representation

    Directory of Open Access Journals (Sweden)

    Yuri A. Gadzhiev

    2011-11-01

    Full Text Available The discrete cosine transform DCT-I, unlike DCT-II, does not concentrate the energy of a transformed vector sufficiently well, so it is not used in practice for digital image compression. By performing a regular normalization of the basic cosine transform matrix, we obtain a discrete cosine transform which has the same cosine basis as DCT-I and, like DCT-I, coincides with its own inverse transform, but which, unlike DCT-I, does not diminish the inherent energy-concentration ability of the cosine transform. In this paper we briefly consider the properties of this transform, a possible integer implementation for the case of an 8x8 matrix, and its application both to the image itself and to preliminary RGB colour space transformations; furthermore, we investigate some models of quantization and perform an experiment to estimate the level of digital image compression and the quality achieved by use of this transform. This experiment shows that the transform can be sufficiently effective for practical use, but the question of its effectiveness relative to DCT-II remains open.
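
    The self-inverse property claimed in the abstract can be checked numerically. The sketch below constructs the symmetrically weighted (orthonormal) 8x8 DCT-I matrix, which is one standard way to normalize the cosine basis; whether this coincides exactly with the authors' "regular normalization" is an assumption here.

      import numpy as np

      # Orthonormal DCT-I: C[j,k] = sqrt(2/(N-1)) * w_j w_k cos(pi j k / (N-1)),
      # with w = 1/sqrt(2) at the endpoints and 1 elsewhere. The matrix is
      # symmetric and orthogonal, hence equal to its own inverse.
      N = 8
      j = np.arange(N)
      w = np.where((j == 0) | (j == N - 1), 1 / np.sqrt(2), 1.0)
      C = np.sqrt(2 / (N - 1)) * np.outer(w, w) * np.cos(np.pi * np.outer(j, j) / (N - 1))

      print("symmetric   :", np.allclose(C, C.T))
      print("orthogonal  :", np.allclose(C @ C.T, np.eye(N)))
      print("self-inverse:", np.allclose(C @ C, np.eye(N)))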

  20. The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.

    Science.gov (United States)

    Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin

    2015-11-01

    We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of the true intensity and bias field, and a simple thresholding is used in the second stage for segmentation. Unlike most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with state-of-the-art approaches and well-known brain software tools, our model is fast, accurate, and robust to initialization.

  1. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology

    Science.gov (United States)

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-01

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a

  2. First-principles study of structural stability and elastic property of pre-perovskite PbTiO3

    International Nuclear Information System (INIS)

    Liu Yong; Ni Li-Hong; Ren Zhao-Hui; Xu Gang; Li Xiang; Song Chen-Lu; Han Gao-Rong

    2012-01-01

    The structural stability and the elastic properties of a novel structure of lead titanate, named pre-perovskite PbTiO3 (PP-PTO) and constructed from TiO6 octahedral columns arranged in a one-dimensional manner, are investigated using first-principles calculations. PP-PTO is energetically unstable compared with the conventional perovskite phases, but it is mechanically stable. The equilibrium transition pressures for changing from pre-perovskite to cubic and tetragonal phases are −0.5 GPa and −1.4 GPa, respectively, with first-order characteristics. Furthermore, the differences in elastic properties between the pre-perovskite and conventional perovskite phases are discussed in terms of the covalent bonding network, which shows a highly anisotropic character in PP-PTO. This study provides crucial insight into the structural stabilities of pre-perovskite and conventional perovskite phases. (condensed matter: structural, mechanical, and thermal properties)

  3. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    International Nuclear Information System (INIS)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A.; Yang, Deshan; Tan, Jun

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
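
    For orientation, here is a minimal sketch of the textbook FISTA iteration that these variants build on (plain Python/NumPy; the paper replaces the gradient step with an OS-SART subproblem and adds GPU-parallel weighted proximal solvers, none of which is reproduced here):

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iter=100):
    """Textbook FISTA for min_x f(x) + g(x), f smooth, g with an easy prox."""
    x = x0
    y = x0
    t = 1.0
    for _ in range(n_iter):
        x_prev = x
        x = prox_g(y - step * grad_f(y), step)         # forward-backward step
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)    # momentum extrapolation
        t = t_next
    return x

# Toy usage: sparsity-regularized least squares, min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
b = rng.standard_normal(60)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2                 # 1/L with L = ||A^T A||_2
soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
x_hat = fista(lambda x: A.T @ (A @ x - b), soft, np.zeros(100), step)
```

    The extrapolation sequence t_k is what lifts the O(1/k) objective decay of plain forward-backward splitting to O(1/k²).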

  4. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    Science.gov (United States)

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated

  5. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    Science.gov (United States)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
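
    As a small illustration of the centroid-detection step: a regular (raw) geometric moment of order (p, q) is m_pq = Σ_x Σ_y x^p y^q I(x, y), and the intensity-weighted centroid follows from the first-order moments. A generic Python/NumPy sketch follows (the SURF extraction and bilateral registering stages require a feature library and are not reproduced):

```python
import numpy as np

def centroid_from_moments(img):
    """Intensity-weighted target centroid from raw image moments:
    (x_bar, y_bar) = (m10 / m00, m01 / m00)."""
    ys, xs = np.indices(img.shape)
    m00 = img.sum()
    m10 = (xs * img).sum()
    m01 = (ys * img).sum()
    return m10 / m00, m01 / m00

# Toy usage: a bright scatterer region dominates the centroid estimate
img = 0.05 * np.random.rand(64, 64)      # background clutter
img[40:45, 20:25] += 1.0                 # target support
x_bar, y_bar = centroid_from_moments(img)
```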

  6. Regularity, variability and bi-stability in the activity of cerebellar Purkinje cells.

    Science.gov (United States)

    Rokni, Dan; Tal, Zohar; Byk, Hananel; Yarom, Yosef

    2009-01-01

    Recent studies have demonstrated that the membrane potential of Purkinje cells is bi-stable and that this phenomenon underlies bi-modal simple spike firing. Membrane potential alternates between a depolarized state, that is associated with spontaneous simple spike firing (up state), and a quiescent hyperpolarized state (down state). A controversy has emerged regarding the relevance of bi-stability to the awake animal, yet recordings made from behaving cat Purkinje cells have demonstrated that at least 50% of the cells exhibit bi-modal firing. The robustness of the phenomenon in vitro or in anaesthetized systems on the one hand, and the controversy regarding its expression in behaving animals on the other hand suggest that state transitions are under neuronal control. Indeed, we have recently demonstrated that synaptic inputs can induce transitions between the states and suggested that the role of granule cell input is to control the states of Purkinje cells rather than increase or decrease firing rate gradually. We have also shown that the state of a Purkinje cell does not only affect its firing but also the waveform of climbing fiber-driven complex spikes and the associated calcium influx. These findings call for a reconsideration of the role of Purkinje cells in cerebellar function. In this manuscript we review the recent findings on Purkinje cell bi-stability and add some analyses of its effect on the regularity and variability of Purkinje cell activity.

  7. Regularity, variability and bi-stability in the activity of cerebellar Purkinje cells

    Directory of Open Access Journals (Sweden)

    Dan Rokni

    2009-11-01

    Recent studies have demonstrated that the membrane potential of Purkinje cells is bi-stable and that this phenomenon underlies bi-modal simple spike firing. Membrane potential alternates between a depolarized state, that is associated with spontaneous simple spike firing (up state), and a quiescent hyperpolarized state (down state). A controversy has emerged regarding the relevance of bi-stability to the awake animal, yet recordings made from behaving cat Purkinje cells have demonstrated that at least 50% of the cells exhibit bi-modal firing. The robustness of the phenomenon in vitro or in anaesthetized systems on the one hand, and the controversy regarding its expression in behaving animals on the other hand suggest that state transitions are under neuronal control. Indeed, we have recently demonstrated that synaptic inputs can induce transitions between the states and suggested that the role of granule cell input is to control the states of Purkinje cells rather than increase or decrease firing rate gradually. We have also shown that the state of a Purkinje cell does not only affect its firing but also the waveform of climbing fiber-driven complex spikes and the associated calcium influx. These findings call for a reconsideration of the role of Purkinje cells in cerebellar function. In this manuscript we review the recent findings on Purkinje cell bi-stability and add some analyses of its effect on the regularity and variability of Purkinje cell activity.

  8. Breast ultrasound tomography with total-variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Laboratory; Li, Cuiping [KARMANOS CANCER INSTIT.; Duric, Neb [KARMANOS CANCER INSTIT

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.
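
    To illustrate the difference in penalty terms, the sketch below minimizes a least-squares data term plus a smoothed isotropic TV penalty by plain gradient descent (our own toy implementation in Python/NumPy; the paper's sound-speed solver and the system matrix for CURE's ring geometry are far more elaborate). Swapping the TV gradient for 2*lam*x recovers the Tikhonov (identity) case being compared against:

```python
import numpy as np

def tv_grad(x, eps=1e-6):
    """Gradient of a smoothed isotropic TV, sum_ij sqrt(|grad x|_ij^2 + eps)."""
    dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences with
    dy = np.diff(x, axis=0, append=x[-1:, :])   # replicated boundary
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    div = (np.diff(px, axis=1, prepend=px[:, :1])     # discrete divergence
           + np.diff(py, axis=0, prepend=py[:1, :]))
    return -div                                 # grad TV = -div(grad x / |grad x|)

def reconstruct_tv(A, b, shape, lam, step=1e-3, n_iter=500):
    """min_x 0.5*||Ax - b||^2 + lam * TV_eps(x); x is flattened to A.shape[1].
    The step size is a placeholder and must be small enough for stability."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b) + lam * tv_grad(x.reshape(shape)).ravel()
        x -= step * g
    return x.reshape(shape)
```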

  9. Assessment of prior image induced nonlocal means regularization for low-dose CT reconstruction: Change in anatomy.

    Science.gov (United States)

    Zhang, Hao; Ma, Jianhua; Wang, Jing; Moore, William; Liang, Zhengrong

    2017-09-01

    Repeated computed tomography (CT) scans are prescribed for some clinical applications such as lung nodule surveillance. Several studies have demonstrated that incorporating a high-quality prior image into the reconstruction of subsequent low-dose CT (LDCT) acquisitions can either improve image quality or reduce data fidelity requirements. Our proposed previous normal-dose image induced nonlocal means (ndiNLM) regularization method for LDCT is an example of such a method. However, one major concern with prior image based methods is that they might produce false information when the prior image and the current LDCT image show different structures (for example, if a lung nodule emerges, grows, shrinks, or disappears over time). This study aims to assess the performance of the ndiNLM regularization method in situations with change in anatomy. We incorporated the ndiNLM regularization into the statistical image reconstruction (SIR) framework for reconstruction of subsequent LDCT images. Because of its patch-based search mechanism, a rough registration between the prior image and the current LDCT image is adequate for the SIR-ndiNLM method. We assessed the performance of the SIR-ndiNLM method in lung nodule surveillance for two different scenarios: (a) the nodule was not found in a baseline exam but appears in a follow-up LDCT scan; (b) the nodule was present in a baseline exam but disappears in a follow-up LDCT scan. We further investigated the effect of nodule size on the performance of the SIR-ndiNLM method. We found that a relatively large search-window (e.g., 33 × 33) should be used for the SIR-ndiNLM method to account for misalignment between the prior image and the current LDCT image, and to ensure that enough similar patches can be found in the prior image. With proper selection of other parameters, experimental results with two patient datasets demonstrated that the SIR-ndiNLM method did not miss true nodules nor introduce false nodules in the lung nodule

  10. Statistical Quality Assessment of Pre-fried Carrots Using Multispectral Imaging

    DEFF Research Database (Denmark)

    Sharifzadeh, Sara; Clemmensen, Line Katrine Harder; Løje, Hanne

    2013-01-01

    Multispectral imaging is increasingly being used for quality assessment of food items due to its non-invasive benefits. In this paper, we investigate the use of multispectral images of pre-fried carrots, to detect changes over a period of 14 days. The idea is to distinguish changes in quality from...

  11. The interpretation of remote sensing image on the stability of fault zone at HLW repository site

    International Nuclear Information System (INIS)

    Liu Linqing; Yu Yunxiang

    1994-01-01

    It is attempted to interpret the buried fault at the preselected HLW repository site in western Gansu province with a remote sensing image. The authors discuss the features of neotectonism of the Shule River buried fault zone and its two sides in light of the remote sensing image, geomorphology, stream patterns, the type and thickness differences of Quaternary sediments, and structural basins. The stability of the Shule River fault zone is mainly dominated by the neotectonic movement pattern and strength on its two sides. Although normal and differential vertical movements exist along it, their strengths are small; therefore, this is a weakly active, passive fault zone. The eastern Beishan area north of the Shule River fault zone is the least active and is considered the target for further pre-selection of the HLW repository site.

  12. Planning JWST NIRSpec MSA spectroscopy using NIRCam pre-images

    Science.gov (United States)

    Beck, Tracy L.; Ubeda, Leonardo; Kassin, Susan A.; Gilbert, Karoline; Karakla, Diane M.; Reid, I. N.; Blair, William P.; Keyes, Charles D.; Soderblom, D. R.; Peña-Guerrero, Maria A.

    2016-07-01

    The Near-Infrared Spectrograph (NIRSpec) is the work-horse spectrograph at 1-5 microns for the James Webb Space Telescope (JWST). A showcase observing mode of NIRSpec is the multi-object spectroscopy with the Micro-Shutter Arrays (MSAs), which consist of a quarter million tiny configurable shutters that are 0.20″ × 0.46″ in size. The NIRSpec MSA shutters can be opened in adjacent rows to create flexible and positionable spectroscopy slits on prime science targets of interest. Because of the very small shutter width, the NIRSpec MSA spectral data quality will benefit significantly from accurate astrometric knowledge of the positions of planned science sources. Images acquired with the Hubble Space Telescope (HST) have the optimal relative astrometric accuracy for planning NIRSpec observations of 5-10 milli-arcseconds (mas). However, some science fields of interest might have no HST images, galactic fields can have moderate proper motions at the 5 mas level or greater, and extragalactic images with HST may have inadequate source information at NIRSpec wavelengths beyond 2 microns. Thus, optimal NIRSpec spectroscopy planning may require pre-imaging observations with the Near-Infrared Camera (NIRCam) on JWST to accurately establish source positions for alignment with the NIRSpec MSAs. We describe operational philosophies and programmatic considerations for acquiring JWST NIRCam pre-image observations for NIRSpec MSA spectroscopic planning within the same JWST observing Cycle.

  13. Pre-clinical functional magnetic resonance imaging. Pt. I. The kidney

    Energy Technology Data Exchange (ETDEWEB)

    Zoellner, Frank G.; Kalayciyan, Raffi; Chacon-Caldera, Jorge; Zimmer, Fabian; Schad, Lothar R. [Heidelberg Univ., Mannheim (Germany). Computer Assisted Clinical Medicine

    2014-07-01

    The prevalence of chronic kidney disease (CKD) is increasing worldwide. In Europe alone, at least 8% of the population currently has some degree of CKD. CKD is associated with serious comorbidity, reduced life expectancy, and high economic costs; hence, the early detection and adequate treatment of kidney disease are important. Pre-clinical research can not only give insights into the mechanisms of the various kidney diseases but also allows the outcome of new drugs developed to treat kidney disease to be investigated. Functional magnetic resonance imaging provides non-invasive access to tissue and organ function in animal models. The advantages over classical animal research approaches are numerous: the same animal can be repeatedly imaged to investigate disease progression or treatment over time. This also has a direct impact on animal welfare and the refinement of classical animal experiments, as the number of animals in the studies can be reduced. In this paper, we review the current state of the art in functional magnetic resonance imaging with a focus on pre-clinical kidney imaging.

  14. Is pre-operative imaging essential prior to ureteric stone surgery?

    Science.gov (United States)

    Youssef, F R; Wilkinson, B A; Hastie, K J; Hall, J

    2012-09-01

    The aim of this study was to identify patients not requiring ureteric stone surgery based on pre-operative imaging (within 24 hours) prior to embarking on semirigid ureteroscopy (R-URS) for urolithiasis. The imaging of all consecutive patients on whom R-URS for urolithiasis was performed over a 12-month period was reviewed. All patients had undergone a plain x-ray of the kidneys, ureters and bladder (KUB), abdominal non-contrast computed tomography (NCCT-KUB) or both on the day of surgery. A total of 96 patients were identified for the study. Stone sizes ranged from 3 mm to 20 mm. Thirteen patients (14%) were cancelled as no stone(s) were identified on pre-operative imaging. Of the patients cancelled, 8 (62%) required NCCT-KUB to confirm spontaneous stone passage. One in seven patients was stone free on the day of surgery; identifying these cases avoids unnecessary anaesthesia and instrumentation of the urinary tract, with the associated morbidity. Up-to-date imaging prior to embarking on elective ureteric stone surgery is highly recommended.

  15. Pre-clinical functional magnetic resonance imaging. Pt. I. The kidney

    International Nuclear Information System (INIS)

    Zoellner, Frank G.; Kalayciyan, Raffi; Chacon-Caldera, Jorge; Zimmer, Fabian; Schad, Lothar R.

    2014-01-01

    The prevalence of chronic kidney disease (CKD) is increasing worldwide. In Europe alone, at least 8% of the population currently has some degree of CKD. CKD is associated with serious comorbidity, reduced life expectancy, and high economic costs; hence, the early detection and adequate treatment of kidney disease are important. Pre-clinical research can not only give insights into the mechanisms of the various kidney diseases but also allows the outcome of new drugs developed to treat kidney disease to be investigated. Functional magnetic resonance imaging provides non-invasive access to tissue and organ function in animal models. The advantages over classical animal research approaches are numerous: the same animal can be repeatedly imaged to investigate disease progression or treatment over time. This also has a direct impact on animal welfare and the refinement of classical animal experiments, as the number of animals in the studies can be reduced. In this paper, we review the current state of the art in functional magnetic resonance imaging with a focus on pre-clinical kidney imaging.

  16. Sports activities are reflected in the local stability and regularity of body sway : Older ice-skaters have better postural control than inactive elderly

    NARCIS (Netherlands)

    Lamoth, Claudine J. C.; van Heuvelen, Marieke J. G.

    With age, postural control deteriorates and the risk of falls increases. Recent research has suggested that, in contrast to persons with superior balance control (dancers, athletes), with pathology and aging the predictability and regularity of sway patterns increase and stability decreases, implying a

  17. Stability of negative ionization fronts: Regularization by electric screening?

    International Nuclear Information System (INIS)

    Arrayas, Manuel; Ebert, Ute

    2004-01-01

    We recently have proposed that a reduced interfacial model for streamer propagation is able to explain spontaneous branching. Such models require regularization. In the present paper we investigate how transversal Fourier modes of a planar ionization front are regularized by the electric screening length. For a fixed value of the electric field ahead of the front we calculate the dispersion relation numerically. These results guide the derivation of analytical asymptotes for arbitrary fields: for small wave-vector k, the growth rate s(k) grows linearly with k, for large k, it saturates at some positive plateau value. We give a physical interpretation of these results

  18. Poster — Thur Eve — 15: Improvements in the stability of the tomotherapy imaging beam

    Energy Technology Data Exchange (ETDEWEB)

    Belec, J [The Ottawa Hospital Cancer Center, Ontario (Canada)

    2014-08-15

    Use of helical TomoTherapy based MVCT imaging for adaptive planning requires the image values (HU) to remain stable over the course of treatment. In the past, the image value stability was suboptimal, which required frequent changes to the image-value-to-density calibration curve to avoid dose errors on the order of 2-4%. The stability of the image values at our center was recently improved by stabilizing the dose rate of the machine (dose control servo) and performing daily MVCT calibration corrections. In this work, we quantify the stability of the image values over treatment time by comparing patient treatment image densities derived using MVCT and KVCT. The analysis includes 1) an MVCT - KVCT density difference histogram, 2) an MVCT vs KVCT density spectrum, 3) multiple average profile density comparisons and 4) density differences in homogeneous locations. Over two months, the imaging beam stability was compromised several times due to a combination of target wobbling, spectral calibration, target change and magnetron issues. The stability of the image values was analyzed over the same period. Results show that the impact on the patient dose calculation is 0.7% ± 0.6%.

  19. Sparse reconstruction by means of the standard Tikhonov regularization

    International Nuclear Information System (INIS)

    Lu Shuai; Pereverzev, Sergei V

    2008-01-01

    It is a common belief that the Tikhonov scheme with an ||·||_{L2} penalty fails in sparse reconstruction. We show, however, that this standard regularization can help if the stability, measured in the L1 norm, is properly taken into account in the choice of the regularization parameter. The crucial point is that a stability bound may now depend on the basis with respect to which the solution of the problem is assumed to be sparse. We discuss how such stability can be estimated numerically and present the results of computational experiments giving evidence of the reliability of our approach.
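
    A cartoon of the idea in Python/NumPy (our own illustration: the rule below merely screens candidate parameters against an L1 budget, whereas the paper's actual choice rule is more refined and basis-dependent):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Standard Tikhonov solution: argmin_x ||Ax - b||^2 + lam * ||x||_2^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def choose_lambda_l1(A, b, lams, l1_budget):
    """Among candidate parameters, keep solutions whose L1 norm stays inside
    a stability budget, then return the one with the smallest residual.
    Falls back to the strongest regularization if none qualifies."""
    feasible = []
    for lam in lams:
        x = tikhonov(A, b, lam)
        if np.linalg.norm(x, 1) <= l1_budget:
            feasible.append((np.linalg.norm(A @ x - b), lam, x))
    if not feasible:
        lam = max(lams)
        return lam, tikhonov(A, b, lam)
    _, lam, x = min(feasible, key=lambda t: t[0])
    return lam, x
```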

  20. Performance and stability of low-cost dye-sensitized solar cell based crude and pre-concentrated anthocyanins: Combined experimental and DFT/TDDFT study

    Science.gov (United States)

    Chaiamornnugool, Phrompak; Tontapha, Sarawut; Phatchana, Ratchanee; Ratchapolthavisin, Nattawat; Kanokmedhakul, Somdej; Sang-aroon, Wichien; Amornkitbamrung, Vittaya

    2017-01-01

    Low-cost DSSCs sensitized by crude and pre-concentrated anthocyanins extracted from six anthocyanin-rich samples (mangosteen pericarp, roselle, red cabbage, Thai berry, black rice and blue pea) were fabricated, and their photo-to-current conversion efficiencies and stability were examined. Pre-concentrated extracts were obtained by solid phase extraction (SPE) using a C18 cartridge. The results clearly showed that all pre-concentrated extracts gave better photovoltaic performance in DSSCs than the crude extracts, except for mangosteen pericarp. The DSSCs sensitized by pre-concentrated anthocyanin from roselle and red cabbage showed a maximum efficiency of η = 0.71%, while the DSSC sensitized by crude anthocyanin from mangosteen pericarp reached a maximum efficiency of η = 0.97%. In addition, the pre-concentrated extract based cells were more stable than the crude extract based cells, indicating that pre-concentration of anthocyanin via SPE is very effective for DSSCs in terms of both photovoltaic performance and stability. DFT/TDDFT calculations of the electronic and photoelectrochemical properties of the major anthocyanins found in the samples support the experimental results.

  1. A visibility-based approach using regularization for imaging-spectroscopy in solar X-ray astronomy

    Energy Technology Data Exchange (ETDEWEB)

    Prato, M; Massone, A M; Piana, M [CNR - INFM LAMIA, Via Dodecaneso 33 1-16146 Genova (Italy); Emslie, A G [Department of Physics, Oklahoma State University, Stillwater, OK 74078 (United States); Hurford, G J [Space Sciences Laboratory, University of California at Berkeley, 8 Gauss Way, Berkeley, CA 94720-7450 (United States); Kontar, E P [Department of Physics and Astronomy, The University, Glasgow G12 8QQ, Scotland (United Kingdom); Schwartz, R A [CUA - Catholic University and LSSP at NASA Goddard Space Flight Center, code 671.1 Greenbelt, MD 20771 (United States)], E-mail: massone@ge.infm.it

    2008-11-01

    The Reuven Ramaty High-Energy Solar Spectroscopic Imager (RHESSI) is a nine-collimator satellite detecting X-rays and γ-rays emitted by the Sun during flares. As the spacecraft rotates, imaging information is encoded as rapid time-variations of the detected flux. We recently proposed a method for the construction of electron flux maps at different electron energies from sets of count visibilities (i.e., direct, calibrated measurements of specific Fourier components of the source spatial structure) measured by RHESSI. The method requires the application of regularized inversion for the synthesis of electron visibility spectra and of imaging techniques for the reconstruction of two-dimensional electron flux maps. The method, already tested on real events registered by RHESSI, is validated in this paper by means of simulated realistic data.

  2. Evaluation of Laser Stabilization and Imaging Systems for LCLS-II - Final Paper

    Energy Technology Data Exchange (ETDEWEB)

    Barry, Matthew [Auburn Univ., AL (United States)

    2015-08-20

    By combining the top performing commercial laser beam stabilization system with the most suitable optical imaging configuration, the beamline for the Linac Coherent Light Source II (LCLS-II) will deliver the highest quality and most stable beam to the cathode. To determine the optimal combination, LCLS-II beamline conditions were replicated and the systems tested with a He-Ne laser. The Guidestar-II and MRC active laser beam stabilization systems were evaluated for their ideal positioning and stability. Two- and four-lens optical imaging configurations were then evaluated for beam imaging quality, magnification properties, and natural stability. In their best performances when tested over fifteen hours, the Guidestar-II kept the beam stable within approximately 70-110 um, while the MRC system kept it stable within approximately 90-100 um. Over short periods of time, the Guidestar-II kept the beam stable within 10-20 um but was more susceptible to drift over time, while the MRC system maintained the beam within 30-50 um with less overall drift. The best optical imaging configuration proved to be a four-lens system that images to the iris located in the cathode room and, from there, images to the cathode. The magnification from the iris to the cathode was 2:1, within an acceptable tolerance of the expected 2.1:1 magnification. The two-lens configuration was slightly more stable over short periods (less than 10 minutes) without the assistance of a stability system, approximately 55 um compared to approximately 70 um, but the four-lens configuration's beam image had a significantly flatter intensity distribution than the two-lens configuration, which had a Gaussian distribution. A final test still needs to be run with both stability systems running at the same time through the four-lens system. With these data, the optimal laser beam stabilization system can be determined for the beamline of LCLS-II.

  3. Rhythmic regularity revisited : Is beat induction indeed pre-attentive?

    NARCIS (Netherlands)

    Bouwer, F.; Honing, H.; Cambouropoulos, E.; Tsougras, C.; Mavromatis, P.; Pastiadis, K.

    2012-01-01

    When listening to musical rhythm, regularity in time is often perceived in the form of a beat or pulse. External rhythmic events can give rise to the perception of a beat, through a process known as beat induction. In addition, internal processes, like long-term memory, working memory and automatic

  4. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    Science.gov (United States)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
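
    The role of the regularization matrix can be shown in a few lines (a generic Python/NumPy sketch; the paper's kernel-matrix optimization over incident wavelengths is a separate ingredient not reproduced here). Replacing the identity with a second-order differential operator penalizes curvature of the retrieved distribution rather than its amplitude:

```python
import numpy as np

def second_diff_matrix(n):
    """Second-order differential operator L of shape (n-2, n):
    each row applies the stencil [1, -2, 1]."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = (1.0, -2.0, 1.0)
    return L

def tikhonov_psd(A, b, lam, L=None):
    """Generalized Tikhonov retrieval: argmin_x ||Ax - b||^2 + lam * ||Lx||^2,
    solved via the normal equations (A^T A + lam L^T L) x = A^T b.
    L=None gives the standard unit-matrix regularization."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# Usage sketch: PSDs are non-negative, so clip the linear solution
# x = np.clip(tikhonov_psd(A, b, 1e-3, second_diff_matrix(A.shape[1])), 0, None)
```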

  5. Experiment Design Regularization-Based Hardware/Software Codesign for Real-Time Enhanced Imaging in Uncertain Remote Sensing Environment

    Directory of Open Access Journals (Sweden)

    Castillo Atoche A

    2010-01-01

    A new aggregated Hardware/Software (HW/SW) codesign approach to optimization of the digital signal processing techniques for enhanced imaging with real-world uncertain remote sensing (RS) data, based on the concept of descriptive experiment design regularization (DEDR), is addressed. We consider applications of the developed approach to typical single-look synthetic aperture radar (SAR) imaging systems operating in real-world uncertain RS scenarios. The software design is aimed at the algorithmic-level decrease of the computational load of the large-scale SAR image enhancement tasks. The innovative algorithmic idea is to incorporate into the DEDR-optimized fixed-point iterative reconstruction/enhancement procedure the convex convergence enforcement regularization via constructing the proper multilevel projections onto convex sets (POCS) in the solution domain. The hardware design is performed via systolic array computing based on a Xilinx Field Programmable Gate Array (FPGA) XC4VSX35-10ff668 and is aimed at implementing the unified DEDR-POCS image enhancement/reconstruction procedures in a computationally efficient multi-level parallel fashion that meets the (near) real-time image processing requirements. Finally, we comment on the simulation results, which indicate significantly increased performance efficiency, both in resolution enhancement and in computational complexity reduction metrics, gained with the proposed aggregated HW/SW codesign approach.

  6. 4D PET iterative deconvolution with spatiotemporal regularization for quantitative dynamic PET imaging.

    Science.gov (United States)

    Reilhac, Anthonin; Charil, Arnaud; Wimberley, Catriona; Angelis, Georgios; Hamze, Hasar; Callaghan, Paul; Garcia, Marie-Paule; Boisson, Frederic; Ryder, Will; Meikle, Steven R; Gregoire, Marie-Claude

    2015-09-01

    Quantitative measurements in dynamic PET imaging are usually limited by the poor counting statistics, particularly in short dynamic frames, and by the low spatial resolution of the detection system, resulting in partial volume effects (PVEs). In this work, we present a fast and easy to implement method for the restoration of dynamic PET images that have suffered from both PVE and noise degradation. It is based on a weighted least squares iterative deconvolution approach of the dynamic PET image with spatial and temporal regularization. Using simulated dynamic [11C]raclopride PET data with controlled biological variations in the striata between scans, we showed that the restoration method provides images which exhibit less noise and better contrast between emitting structures than the original images. In addition, the method is able to recover the true time activity curve in the striata region with an error below 3%, while it was underestimated by more than 20% without correction. As a result, the method improves the accuracy and reduces the variability of the kinetic parameter estimates calculated from the corrected images. More importantly, it increases the accuracy (from less than 66% to more than 95%) of measured biological variations as well as their statistical detectivity. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
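
    A toy version of the idea in Python (our own stand-in: a Van Cittert-type additive update with an assumed Gaussian PSF model and crude Gaussian smoothing in space and time, rather than the paper's weighted least squares scheme with its specific regularizers):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def deconvolve_dynamic(frames, psf_sigma=2.0, n_iter=30, alpha=0.2,
                       sigma_s=0.5, sigma_t=0.5):
    """Iterative deconvolution of a dynamic series, frames shaped (t, x, y).

    The Van Cittert update est += alpha * (data - blur(est)) undoes part of
    the partial volume effect; the per-iteration spatiotemporal smoothing
    stands in for the regularization that keeps noise from being amplified."""
    est = frames.astype(float).copy()
    for _ in range(n_iter):
        blurred = gaussian_filter(est, sigma=(0.0, psf_sigma, psf_sigma))
        est = est + alpha * (frames - blurred)
        est = gaussian_filter(est, sigma=(sigma_t, sigma_s, sigma_s))
        est = np.clip(est, 0.0, None)   # activity concentrations are >= 0
    return est
```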

  7. T2 values of femoral cartilage of the knee joint: Comparison between pre-contrast and post-contrast images

    International Nuclear Information System (INIS)

    Yoon, Hyun Jung; Yoon, Young Cheol; Choe, Bong Keun

    2014-01-01

    To retrospectively evaluate the relationship between T2 values of pre- and post-contrast magnetic resonance (MR) images of femoral cartilage in patients with varying degrees of osteoarthritis. A total of 19 patients underwent delayed gadolinium-enhanced MRI of cartilage. Six regions of interest for T2 value measurement were obtained from pre- and post-contrast T2-weighted, sagittal, multi-slice, multi-echo, source images in each subject. Regions with modified Noyes classification grade 2B and 3 were excluded. Comparison of T2 values between pre- and post-contrast images and T2 values among regions with the grade 0, 1 and 2A groups were statistically analyzed. Of a total of 114 regions, 79 regions showing grade 0 (n = 46), 1 (n = 18), or 2A (n = 15) were analyzed. The overall and individual T2 values of post-contrast images were significantly lower than those of pre-contrast images (overall, 35.3 ± 9.2 [mean ± SD] vs. 29.9 ± 8.2, p < 0.01; range of individual, 28.9-37.6 vs. 27.1-36.4, p < 0.01). Pearson correlation coefficients showed a strong positive correlation between pre- and post-contrast images (rho-Pearson = 0.712-0.905). T2 values of pre- and post-contrast images of the grade 0 group were significantly lower than those of the grade 1/2A group (pre T2, p = 0.003; post T2, p = 0.006). T2 values of the femoral cartilage of the knee joint are significantly lower on post-contrast images than on pre-contrast images. Furthermore, these T2 values have a strong positive correlation between pre- and post-contrast images.

  8. T2 values of femoral cartilage of the knee joint: Comparison between pre-contrast and post-contrast images

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hyun Jung; Yoon, Young Cheol [Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Choe, Bong Keun [Department of Preventive Medicine, Kyung Hee University School of Medicine, Seoul (Korea, Republic of)

    2014-02-15

    To retrospectively evaluate the relationship between T2 values of pre- and post-contrast magnetic resonance (MR) images of femoral cartilage in patients with varying degrees of osteoarthritis. A total of 19 patients underwent delayed gadolinium-enhanced MRI of cartilage. Six regions of interest for T2 value measurement were obtained from pre- and post-contrast T2-weighted, sagittal, multi-slice, multi-echo, source images in each subject. Regions with modified Noyes classification grade 2B and 3 were excluded. Comparison of T2 values between pre- and post-contrast images and T2 values among regions with the grade 0, 1 and 2A groups were statistically analyzed. Of a total of 114 regions, 79 regions showing grade 0 (n = 46), 1 (n = 18), or 2A (n = 15) were analyzed. The overall and individual T2 values of post-contrast images were significantly lower than those of pre-contrast images (overall, 35.3 ± 9.2 [mean ± SD] vs. 29.9 ± 8.2, p < 0.01; range of individual, 28.9-37.6 vs. 27.1-36.4, p < 0.01). Pearson correlation coefficients showed a strong positive correlation between pre- and post-contrast images (rho-Pearson = 0.712-0.905). T2 values of pre- and post-contrast images of the grade 0 group were significantly lower than those of the grade 1/2A group (pre T2, p = 0.003; post T2, p = 0.006). T2 values of the femoral cartilage of the knee joint are significantly lower on post-contrast images than on pre-contrast images. Furthermore, these T2 values have a strong positive correlation between pre- and post-contrast images.

  9. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    For the fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and shown to prepare the crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form. Compared to the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image to a system of thick fibres. An objective criterion for the threshold brightness value was found as the value resulting in the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.

  10. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation

    International Nuclear Information System (INIS)

    Bardsley, Johnathan M; Goldes, John

    2009-01-01

    In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness
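
    One widely used, statistically motivated rule for Poisson data is a discrepancy principle based on the Kullback-Leibler divergence, sketched below in Python/NumPy (a generic illustration; we do not claim it is one of the paper's three methods, and `solver` stands for any regularized Poisson likelihood solver):

```python
import numpy as np

def poisson_discrepancy(Ax, b):
    """2 * KL(b || Ax) for Poisson counts b; for the true intensity its
    expectation is roughly the number of measurements, so lam is tuned
    until the discrepancy matches b.size."""
    Ax = np.maximum(Ax, 1e-12)
    log_term = np.where(b > 0, b * np.log(np.maximum(b, 1e-12) / Ax), 0.0)
    return 2.0 * np.sum(log_term - b + Ax)

def select_lambda(solver, A, b, lams):
    """Scan candidate parameters and return the one whose reconstruction
    brings the discrepancy closest to the data size."""
    best = None
    for lam in lams:
        x = solver(A, b, lam)
        gap = abs(poisson_discrepancy(A @ x, b) - b.size)
        if best is None or gap < best[0]:
            best = (gap, lam, x)
    return best[1], best[2]
```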

  11. l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction.

    Science.gov (United States)

    Zhang, Lingli; Zeng, Li; Guo, Yumeng

    2018-03-15

    Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem; image quality then typically suffers from slope artifacts. The objective of this study is to first investigate the distorted regions of reconstructed images affected by slope artifacts and then present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method comprises the following four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified non-local means (PNLM); (3) use l0 regularization to suppress the slope artifacts and pursue the sparsity of the wavelet coefficients of the transformed image via iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of some features, as compared to commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize the distorted edges in the reconstructed images. Quantitative assessments also showed that the new method achieved the highest image quality compared to the existing algorithms. This study demonstrated that the presented l0W-PNLM yielded higher image quality due to a number of unique characteristics, which include that (1) it utilizes
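
    Step (3) above is the l0-flavoured ingredient; below is a minimal sketch of hard thresholding of wavelet detail coefficients using PyWavelets (our simplification: a plain separable DWT rather than the paper's wavelet tight framelets, and uncoupled from the SART and PNLM steps):

```python
import numpy as np
import pywt

def wavelet_hard_threshold(img, thresh, wavelet="db4", level=3):
    """Keep-or-kill (hard) thresholding of detail coefficients, the basic
    operation behind iterative hard thresholding for an l0 penalty."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]                                    # approximation band
    for bands in coeffs[1:]:                             # (cH, cV, cD) per level
        out.append(tuple(np.where(np.abs(c) > thresh, c, 0.0) for c in bands))
    return pywt.waverec2(out, wavelet)
```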

  12. A Variational Approach to the Denoising of Images Based on Different Variants of the TV-Regularization

    International Nuclear Information System (INIS)

    Bildhauer, Michael; Fuchs, Martin

    2012-01-01

    We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.

  13. L1/2 regularization based numerical method for effective reconstruction of bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi'an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)

    2014-05-14

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in a related area. The ill-posedness of inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse reconstruction cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was constrained into an l1/2 regularization problem, and then the weighted interior-point algorithm (WIPA) was applied to solve the problem through transforming it into obtaining the solution of a series of l1 regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.

  14. TU-CD-BRA-12: Coupling PET Image Restoration and Segmentation Using Variational Method with Multiple Regularizations

    Energy Technology Data Exchange (ETDEWEB)

    Li, L; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Lu, W [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: To propose a new variational method which couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: Partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. Existing segmentation methods usually need prior calibrations to compensate for PVE, and they are highly system-dependent. Taking into account that image restoration and segmentation can promote each other and are tightly coupled, we proposed a variational method to solve the two problems together. Our method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm was used on edges to protect the edge information, and the L2 norm was used to avoid the staircase effect in no-edge areas. The blur kernel was constrained to the Gaussian model parameterized by its variance, and we assumed that the variances in the X-Y and Z directions are different. The energy functional was iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin’s lymphoma and evaluated by the Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of TV and L2 regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 compared with using either the TV or the L2 norm alone. The proposed method was clearly superior to the other tested methods: it has an average DSI and CE of 0.80 and 0.41, while the FCM method (the second best one) has only an average DSI and CE of 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L2 regularizations can further improve the performance of the algorithm. This work was supported in part by National Natural

  15. From Matched Spatial Filtering towards the Fused Statistical Descriptive Regularization Method for Enhanced Radar Imaging

    Directory of Open Access Journals (Sweden)

    Shkvarko Yuriy

    2006-01-01

    We address a new approach to solve the ill-posed nonlinear inverse problem of high-resolution numerical reconstruction of the spatial spectrum pattern (SSP) of the backscattered wavefield sources distributed over the remotely sensed scene. An array or synthesized array radar (SAR) that employs digital data signal processing is considered. By exploiting the idea of combining the statistical minimum risk estimation paradigm with numerical descriptive regularization techniques, we address a new fused statistical descriptive regularization (SDR) strategy for enhanced radar imaging. Pursuing such an approach, we establish a family of the SDR-related SSP estimators, that encompass a manifold of existing beamforming techniques ranging from traditional matched filter to robust and adaptive spatial filtering, and minimum variance methods.
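
    The two classical endpoints of that family can be written compactly. The sketch below (generic Python/NumPy, assuming a sensor covariance matrix R and steering vectors a(θ)) contrasts the matched-filter spatial spectrum with the minimum-variance (Capon) one, between which fused estimators of this kind interpolate:

```python
import numpy as np

def matched_filter_ssp(R, steering):
    """Matched-filter spatial spectrum: P(theta) = a^H R a per angle.
    steering has shape (n_sensors, n_angles)."""
    return np.real(np.einsum("ij,ik,kj->j", steering.conj(), R, steering))

def capon_ssp(R, steering):
    """Minimum-variance (Capon) spectrum: P(theta) = 1 / (a^H R^{-1} a)."""
    Rinv = np.linalg.inv(R)
    denom = np.einsum("ij,ik,kj->j", steering.conj(), Rinv, steering)
    return 1.0 / np.real(denom)
```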

  16. Characteristic of sausages as influenced by partial replacement of pork back-fat using pre-emulsified soybean oil stabilized by fish proteins isolate

    Directory of Open Access Journals (Sweden)

    Nopparat Cheetangdee

    2017-08-01

    Substitution of animal fat with oils rich in n-3 fatty acids is a feasible way to improve the nutritive value of comminuted meat products. The effect of partially replacing porcine back-fat with soybean oil (SBO) using a pre-emulsification technique on the characteristics of sausages was investigated. Fish protein isolate (FPI) produced from yellow stripe trevally (Selaroides leptolepis) was used as an emulsifier to prepare pre-emulsified SBO (preSBO), and the effect of its concentration (1%, 2% and 3%, w/v) was compared with that of soy protein isolate (SPI). Substitution of porcine fat using preSBO enhanced the product stability. SPI exhibited better emulsifying ability than FPI; however, FPI was more effective at reinforcing the protein matrix of the sausages, as suggested by lower cooking loss and the restored textural attributes of sausages formulated with FPI-stabilized preSBO. The effective FPI concentration for improving product stability was 2%. This work suggests that FPI is promising in the preparation of emulsified meat products.

  17. Scaled lattice fermion fields, stability bounds, and regularity

    Science.gov (United States)

    O'Carroll, Michael; Faria da Veiga, Paulo A.

    2018-02-01

    We consider locally gauge-invariant lattice quantum field theory models with locally scaled Wilson-Fermi fields in d = 1, 2, 3, 4 spacetime dimensions. The use of scaled fermions preserves Osterwalder-Seiler positivity and the spectral content of the models (the decay rates of correlations are unchanged in the infinite lattice). In addition, it also results in less singular, more regular behavior in the continuum limit. Precisely, we treat general fermionic gauge and purely fermionic lattice models in an imaginary-time functional integral formulation. Starting with a hypercubic finite lattice Λ ⊂ (aZ)^d, a ∈ (0, 1], and considering the partition function of non-Abelian and Abelian gauge models (the free fermion case is included) neglecting the pure gauge interactions, we obtain stability bounds uniformly in the lattice spacing a ∈ (0, 1]. These bounds imply, at least in the subsequential sense, the existence of the thermodynamic (Λ ↗ (aZ)^d) and the continuum (a ↘ 0) limits. Specializing to the U(1) gauge group, the known non-intersecting loop expansion for the d = 2 partition function is extended to d = 3, and the thermodynamic limit of the free energy is shown to exist with a bound independent of a ∈ (0, 1]. In the case of scaled free Fermi fields (corresponding to a trivial gauge group with only the identity element), spectral representations are obtained for the partition function, free energy, and correlations. The thermodynamic and continuum limits of the free fermion free energy are shown to exist. The thermodynamic limits of n-point correlations also exist, with bounds independent of the point locations and of a ∈ (0, 1], and with no n! dependence. Also, a time-zero Hilbert-Fock space is constructed, as well as time-zero, spatially pointwise scaled fermion creation operators, which are shown to be norm bounded uniformly in a ∈ (0, 1]. The use of our scaled fields since the beginning allows us to extract and isolate the singularities of the free

  18. Applying Enhancement Filters in the Pre-processing of Images of Lymphoma

    International Nuclear Information System (INIS)

    Silva, Sérgio Henrique; Do Nascimento, Marcelo Zanchetta; Neves, Leandro Alves; Batista, Valério Ramos

    2015-01-01

    Lymphoma is a type of cancer that affects the immune system and is classified as Hodgkin or non-Hodgkin. It is one of the ten most common cancers worldwide, accounting for three to four percent of all malignant neoplasms diagnosed. Our work presents a study of filters devoted to enhancing images of lymphoma at the pre-processing step, where enhancement is useful for removing noise from the digital images. We analysed noise caused by different sources, such as room vibration, scraps and defocusing, in the following classes of lymphoma: follicular, mantle cell and B-cell chronic lymphocytic leukemia. The Gaussian, Median and Mean-Shift filters were applied to different colour models (RGB, Lab and HSV). Afterwards, we performed a quantitative analysis of the images by means of the Structural Similarity Index, in order to evaluate the similarity between the images. In all cases we obtained a certainty of at least 75%, which rises to 99% if one considers only HSV. We conclude that HSV is an important choice of colour model for pre-processing histological images of lymphoma, because in this case the resulting image receives the best enhancement.

  19. The effects of aerobic- versus strength-training on body image among young women with pre-existing body image concerns.

    Science.gov (United States)

    Martin Ginis, Kathleen A; Strong, Heather A; Arent, Shawn M; Bray, Steven R; Bassett-Gunter, Rebecca L

    2014-06-01

    This experiment compared the effects of aerobic-training (AT) versus strength-training (ST) on body image among young women with pre-existing body image concerns. Theory-based correlates of body image change were also examined. Participants were 46 women (M age = 21.5 years), randomly assigned to an 8-week AT or ST intervention consisting of supervised exercise 3 days/week. Multidimensional measures of body image were administered pre- and post-intervention, along with measures of physical fitness, perceived fitness, and exercise self-efficacy. Women in the AT condition reported greater reductions in social physique anxiety (p=.001) and tended to report greater improvements in appearance evaluation (p=.06) than women in the ST condition. Changes in perceived fatness, perceived aerobic endurance and aerobic self-efficacy were significantly correlated with body image change (ps < .05), with implications for prescribing exercise to improve body image and for advancing theory to account for the effects of exercise. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Early Childhood Pre-Service Teachers' Self-Images of Science Teaching in Constructivism Science Education Courses

    Science.gov (United States)

    Go, Youngmi; Kang, Jinju

    2015-01-01

    The purpose of this study is two-fold. First, it investigates the self-images of science teaching held by early childhood pre-service teachers who took constructivism early childhood science education courses. Second, it analyzes what aspects of those courses influenced these images. The participants were eight pre-service teachers who took these…

  1. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward high-frequency component of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All the four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error and Butterworth filter produces the lowest random error among them. By using Wiener filter with over-estimated noise power, the random error can be reduced but the resultant systematic error is higher than that of low-pass filters. In general, Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. Binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. While used together with pre-filtering, B-spline interpolator produces lower systematic error than bicubic interpolator and similar level of the random
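
    As an illustration of the recommended frequency-domain pre-filtering, here is a minimal radially symmetric Butterworth low-pass in Python/NumPy (the cutoff and order values are placeholders, not the paper's tested settings):

```python
import numpy as np

def butterworth_lowpass(img, cutoff=0.25, order=4):
    """Frequency-domain Butterworth pre-filter for speckle images.

    cutoff is the -3 dB radius as a fraction of the Nyquist frequency;
    suppressing high frequencies lowers the interpolation-induced
    systematic error at the cost of some signal content."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2) / 0.5        # radius in Nyquist units
    H = 1.0 / (1.0 + (r / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))
```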

  2. Inverted light-sheet microscope for imaging mouse pre-implantation development.

    Science.gov (United States)

    Strnad, Petr; Gunther, Stefan; Reichmann, Judith; Krzic, Uros; Balazs, Balint; de Medeiros, Gustavo; Norlin, Nils; Hiiragi, Takashi; Hufnagel, Lars; Ellenberg, Jan

    2016-02-01

    Despite its importance for understanding human infertility and congenital diseases, early mammalian development has remained inaccessible to in toto imaging. We developed an inverted light-sheet microscope that enabled us to image mouse embryos from zygote to blastocyst, computationally track all cells and reconstruct a complete lineage tree of mouse pre-implantation development. We used this unique data set to show that the first cell fate specification occurs at the 16-cell stage.

  3. The pre-existing population of 5S rRNA effects p53 stabilization during ribosome biogenesis inhibition.

    Science.gov (United States)

    Onofrillo, Carmine; Galbiati, Alice; Montanaro, Lorenzo; Derenzini, Massimo

    2017-01-17

    The pre-ribosomal complex RPL5/RPL11/5S rRNA (5S RNP) is considered the central MDM2-inhibitory complex that controls p53 stabilization during ribosome biogenesis inhibition. Although its role is well defined, the dynamics of 5S RNP assembly still require further characterization. In the present work, we report that MDM2 inhibition depends on a pre-existing population of 5S rRNA.

  4. Analysis of rocket flight stability based on optical image measurement

    Science.gov (United States)

    Cui, Shuhua; Liu, Junhu; Shen, Si; Wang, Min; Liu, Jun

    2018-02-01

    Based on the abundant image data available from optical measurement, this paper puts forward a method for evaluating rocket flight stability using measurements of the carrier rocket's imaged characteristics. The attitude parameters of the rocket body in the coordinate system are calculated from the measurement data of multiple high-speed television cameras and then converted into the rocket body's angle of attack, and it is assessed whether the rocket has good flight stability by flying with a small angle of attack. The measurement method and the mathematical algorithm were verified through data processing tests, in which the rocket's flight stability state can be observed intuitively and guidance system faults can be identified visually for failure analysis.

  5. Pairing renormalization and regularization within the local density approximation

    International Nuclear Information System (INIS)

    Borycki, P.J.; Dobaczewski, J.; Nazarewicz, W.; Stoitsov, M.V.

    2006-01-01

    We discuss methods used in mean-field theories to treat pairing correlations within the local density approximation. Pairing renormalization and regularization procedures are compared in spherical and deformed nuclei. Both prescriptions give fairly similar results, although the theoretical motivation, simplicity, and stability of the regularization procedure make it a method of choice for future applications.

  6. Mean magnetic susceptibility regularized susceptibility tensor imaging (MMSR-STI) for estimating orientations of white matter fibers in human brain.

    Science.gov (United States)

    Li, Xu; van Zijl, Peter C M

    2014-09-01

    An increasing number of studies show that magnetic susceptibility in white matter fibers is anisotropic and may be described by a tensor. However, the limited head rotation possible for in vivo human studies leads to an ill-conditioned inverse problem in susceptibility tensor imaging (STI). Here we suggest the combined use of limiting the susceptibility anisotropy to white matter and imposing morphology constraints on the mean magnetic susceptibility (MMS) for regularizing the STI inverse problem. The proposed MMS regularized STI (MMSR-STI) method was tested using computer simulations and in vivo human data collected at 3T. The fiber orientation estimated from both the STI and MMSR-STI methods was compared to that from diffusion tensor imaging (DTI). Computer simulations show that the MMSR-STI method provides a more accurate estimation of the susceptibility tensor than the conventional STI approach. Similarly, in vivo data show that use of the MMSR-STI method leads to a smaller difference between the fiber orientation estimated from STI and DTI for most selected white matter fibers. The proposed regularization strategy for STI can improve estimation of the susceptibility tensor in white matter. © 2014 Wiley Periodicals, Inc.

  7. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    Science.gov (United States)

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  8. Pre-processing, registration and selection of adaptive optics corrected retinal images.

    Science.gov (United States)

    Ramaswamy, Gomathy; Devaney, Nicholas

    2013-07-01

    In this paper, the aim is to demonstrate enhanced processing of sequences of fundus images obtained using a commercial AO flood illumination system. The purpose of the work is to (1) correct for uneven illumination at the retina, (2) automatically select the best quality images and (3) precisely register the best images. Adaptive optics corrected retinal images are pre-processed to correct uneven illumination using different methods: subtracting or dividing by the average filtered image, homomorphic filtering and a wavelet based approach. These images are evaluated to measure the image quality using various parameters, including sharpness, variance, power spectrum kurtosis and contrast. We have carried out the registration in two stages: a coarse stage using cross-correlation, followed by fine registration using two approaches: parabolic interpolation on the peak of the cross-correlation and maximum-likelihood estimation. The angle of rotation of the images is measured using a combination of peak tracking and Procrustes transformation. We have found that a wavelet approach (Daubechies 4 wavelet at 6th level decomposition) provides good illumination correction with clear improvement in image sharpness and contrast. The assessment of image quality using a 'Designer metric' works well when compared to visual evaluation, although it is highly correlated with other metrics. In image registration, sub-pixel translation measured using parabolic interpolation on the peak of the cross-correlation function and maximum-likelihood estimation are found to give very similar results (RMS difference 0.047 pixels). We have confirmed that correcting rotation of the images provides a significant improvement, especially at the edges of the image. We observed that selecting the better quality frames (e.g. best 75% images) for image registration gives improved resolution, at the expense of poorer signal-to-noise. The sharpness map of the registered and de-rotated images shows increased…
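
    The two-stage registration described above (integer alignment from the cross-correlation peak, refined by parabolic interpolation) can be sketched in a few lines of numpy; this is a generic illustration under the stated assumptions, not the authors' implementation.

      import numpy as np

      def subpixel_shift(ref, img):
          # Coarse stage: integer shift from the cyclic cross-correlation peak
          xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
          py, px = np.unravel_index(np.argmax(xc), xc.shape)

          # Fine stage: parabolic interpolation through the peak and its neighbours
          def vertex(cm, c0, cp):
              d = cm - 2.0 * c0 + cp
              return 0.0 if d == 0 else 0.5 * (cm - cp) / d

          ny, nx = xc.shape
          dy = vertex(xc[py - 1, px], xc[py, px], xc[(py + 1) % ny, px])
          dx = vertex(xc[py, px - 1], xc[py, px], xc[py, (px + 1) % nx])
          # Convert peak indices to signed shifts
          sy = py - ny if py > ny // 2 else py
          sx = px - nx if px > nx // 2 else px
          return sy + dy, sx + dx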

  9. Surface-based prostate registration with biomechanical regularization

    Science.gov (United States)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

    Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected on ultrasound by using MRUS registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when comparing it to regular surface-based registration.

  10. The trochlear pre-ossification center: a normal developmental stage and potential pitfall on MR images

    Energy Technology Data Exchange (ETDEWEB)

    Jaimes, Camilo [The Children' s Hospital of Philadelphia, Department of Radiology, Philadelphia, PA (United States); Jimenez, Mauricio [Hospital of the University of Pennsylvania, Department of Radiology, Philadelphia, PA (United States); Marin, Diana [Miami Children' s Hospital, Department of Radiology, Miami, FL (United States); Ho-Fung, Victor; Jaramillo, Diego [The Children' s Hospital of Philadelphia, Department of Radiology, Philadelphia, PA (United States); University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA (United States)

    2012-11-15

    The hypertrophic changes that occur in the cartilage of an epiphysis prior to the onset of ossification are known as the pre-ossification center. Awareness of the appearance of the pre-ossification center on MR images is important to avoid confusing normal developmental changes with pathology. The purpose of this study was to determine the characteristics of the trochlear pre-ossification center on MR imaging and examine age and gender differences. We retrospectively analyzed MR images from 61 children. The trochleas were categorized into three types on the basis of signal intensity (SI). Trochlear types were compared to age and gender. There was no significant difference between the ages of boys and girls. Type 1 trochleas showed homogeneous SI on all pulse sequences. Type 2 trochleas demonstrated a focus of high SI in the epiphyseal cartilage on fat-suppressed water-sensitive sequences, with high or intermediate SI on gradient-echo images (pre-ossification center). Type 3 trochleas showed low SI on fat-suppressed water-sensitive sequences and gradient-echo images. Thirty-seven trochleas were described as type 1, sixteen as type 2 and eight as type 3. ANOVAs confirmed a statistically significant difference in the age of children with type 3 trochleas and those with types 1 and 2 (P < 0.001). Spearman rank correlations determined a positive relationship between trochlear type and age of the children (r = 0.53). Development-related changes in the trochlea follow a predictable pattern. The signal characteristics of the pre-ossification center likely reflect normal chondrocyte hypertrophy and an increase in free water in the matrix. (orig.)

  11. SU-E-J-50: An Evaluation of the Stability of Image Quality Parameters of the Elekta XVI and IView Imaging Systems

    International Nuclear Information System (INIS)

    Stanley, D; Papanikolaou, N; Gutierrez, A

    2015-01-01

    Introduction: Quality assurance of the image quality for image guided localization systems is crucial to ensure accurate visualization and localization of target volumes. In this study, the long-term stability of selected image quality parameters was assessed and evaluated for CBCT mode, planar radiographic kV mode and MV mode. Methods and Materials: The CATPHAN, QckV-1 and QC-3 phantoms were used to evaluate the image quality parameters. The planar radiographic images were analyzed in PIPSpro™ with spatial resolution (f30, f40, f50) being recorded. For XVI CBCT, Head and Neck Small20 (S20) and Pelvis Medium20 (M20) standard acquisition modes were evaluated for uniformity, noise, spatial resolution and HU constancy. Dose and kVp for the XVI were recorded using the Unfors RaySafe Xi system with the R/F Low Detector for the kV planar radiographic mode. Results: A total of 20 and 10 measurements were acquired for the planar radiographic and CBCT systems, respectively, over a two-month period. Values were normalized to the mean and the standard deviations (STD) were recorded. For the planar radiographic spatial resolution, the STDs for f30, f40, f50 were 0.004, 0.002, 0.002 and 0.005, 0.007, 0.008 for kV and MV, respectively. The average recorded dose for kV was 38.7±2.7 μGy. The STDs of the evaluated metrics for the S20 acquisition were: 0.444 (f30), 0.067 (f40), 0.062 (f50), 0.018 (water/poly HU constancy), 0.028 (uniformity) and 0.106 (noise). The STDs for the M20 acquisition were: 0.108 (f30), 0.073 (f40), 0.091 (f50), 0.008 (water/poly HU constancy), 0.005 (uniformity) and 0.005 (noise). Using these, warning and action thresholds can be reported at 1σ and 2σ. Conclusion: A study was performed to assess the stability of the basic image quality parameters recommended by TG-142 for the Elekta XVI and iView imaging systems. Consistent imaging and dosimetric properties over the evaluated time frame were noted. This work was funded in part by the Cancer…

  12. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
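
    As a toy illustration of the detect-then-enforce idea, the sketch below detects a 1-D alignment constraint (left edges closer than a tolerance) and enforces it by snapping each detected group to its mean, which is the least-squares solution under the equality constraint. A full implementation would pose all alignment, size, and distance constraints jointly as a quadratic program; the tolerance here is an illustrative assumption.

      import numpy as np

      def snap_alignments(xs, tol=4.0):
          # Detect groups of near-equal coordinates, then enforce alignment
          # by snapping each group to its mean (least-squares optimal).
          xs = np.asarray(xs, dtype=float)
          order = np.argsort(xs)
          out = xs.copy()
          group = [order[0]]
          for i, j in zip(order, order[1:]):
              if xs[j] - xs[i] <= tol:
                  group.append(j)
              else:
                  out[group] = xs[group].mean()
                  group = [j]
          out[group] = xs[group].mean()
          return out

      # snap_alignments([10, 12, 11, 40, 41]) -> [11., 11., 11., 40.5, 40.5]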

  13. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. As layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate the layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. In our results, we evaluate the proposed framework on a variety of input layouts from different applications, which demonstrates our method has superior performance to the state of the art.

  14. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    Science.gov (United States)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, and advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets, and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for single-camera analysis, and 0.07 to 0.19 mm for dual-camera analysis. To the best of the authors' knowledge, this work is the first to address the topic of DF stability analysis.

  15. Temporal regularization of ultrasound-based liver motion estimation for image-guided radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J. [Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS foundation Trust, Sutton, London SM2 5PT (United Kingdom)

    2016-01-15

    Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer leads to improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking…
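
    An α–β filter with a similarity gate can be illustrated with a small generic sketch: coast on a constant-velocity prediction whenever the template-matching score falls below a threshold, otherwise blend in the measurement. The gains, frame rate and threshold below are illustrative assumptions, not the paper's tuned values.

      import numpy as np

      def abst_track(measurements, similarities, dt=1.0/20,
                     alpha=0.5, beta=0.1, sim_threshold=0.8):
          # measurements: template-matching positions (mm), one per frame
          # similarities: matching scores (e.g., normalized cross-correlation)
          x, v = measurements[0], 0.0
          out = [x]
          for z, s in zip(measurements[1:], similarities[1:]):
              x_pred = x + v * dt            # constant-velocity prediction
              if s >= sim_threshold:         # trust the match only if specific enough
                  r = z - x_pred             # innovation
                  x = x_pred + alpha * r
                  v = v + (beta / dt) * r
              else:
                  x = x_pred                 # coast on the prediction
              out.append(x)
          return np.array(out)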

  16. 'Regular' and 'emergency' repair

    International Nuclear Information System (INIS)

    Luchnik, N.V.

    1975-01-01

    Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)

  17. Pre- and Postoperative Imaging of the Aortic Root

    Science.gov (United States)

    Chan, Frandics P.; Mitchell, R. Scott; Miller, D. Craig; Fleischmann, Dominik

    2016-01-01

    Three-dimensional datasets acquired using computed tomography and magnetic resonance imaging are ideally suited for characterization of the aortic root. These modalities offer different advantages and limitations, which must be weighed according to the clinical context. This article provides an overview of current aortic root imaging, highlighting normal anatomy, pathologic conditions, imaging techniques, measurement thresholds, relevant surgical procedures, postoperative complications and potential imaging pitfalls. Patients with a range of clinical conditions are predisposed to aortic root disease, including Marfan syndrome, bicuspid aortic valve, vascular Ehlers-Danlos syndrome, and Loeys-Dietz syndrome. Various surgical techniques may be used to repair the aortic root, including placement of a composite valve graft, such as the Bentall and Cabrol procedures; placement of an aortic root graft with preservation of the native valve, such as the Yacoub and David techniques; and implantation of a biologic graft, such as a homograft, autograft, or xenograft. Potential imaging pitfalls in the postoperative period include mimickers of pathologic processes such as felt pledgets, graft folds, and nonabsorbable hemostatic agents. Postoperative complications that may be encountered include pseudoaneurysms, infection, and dehiscence. Radiologists should be familiar with normal aortic root anatomy, surgical procedures, and postoperative complications, to accurately interpret pre- and postoperative imaging performed for evaluation of the aortic root. Online supplemental material is available for this article. ©RSNA, 2015 PMID:26761529

  18. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
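
    A minimal sketch of the idea, assuming scipy and the Rockafellar–Uryasev form of expected shortfall; the function name and parameter values are illustrative, and this is not the authors' support-vector-regression formulation.

      import numpy as np
      from scipy.optimize import minimize

      def regularized_es_portfolio(returns, q=0.95, lam=1.0):
          # Minimize expected shortfall at level q plus an L2 penalty on the
          # weights, under a budget constraint sum(w) = 1.
          n_obs, n_assets = returns.shape
          def objective(z):
              w, t = z[:-1], z[-1]
              losses = -returns @ w
              es = t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - q)
              return es + lam * (w @ w)      # L2 term: diversification pressure
          cons = ({"type": "eq", "fun": lambda z: np.sum(z[:-1]) - 1.0},)
          z0 = np.append(np.ones(n_assets) / n_assets, 0.0)
          res = minimize(objective, z0, constraints=cons, method="SLSQP")
          return res.x[:-1]

    Larger lam spreads weight across assets at some cost in in-sample risk, which is exactly the stability-versus-fit trade-off the abstract describes.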

  20. Heterogeneity in stabilization phenomena in FLT PET images of canines

    International Nuclear Information System (INIS)

    Simoncic, Urban; Jeraj, Robert

    2014-01-01

    3'-(18F)Fluoro-3'-deoxy-L-thymidine (FLT) is a PET marker of cellular proliferation. Its tissue uptake rate is often quantified with a Standardized Uptake Value (SUV), although kinetic analysis provides a more accurate quantification. The purpose of this study is to investigate the heterogeneity in FLT stabilization phenomena. The study was done on 15 canines with spontaneously occurring sinonasal tumours. They were imaged dynamically for 90 min with FLT PET/CT twice: before and during radiotherapy. Images were analyzed for kinetics on a voxel basis through compartmental analysis. Stabilization curves were calculated as a time-dependent correlation between the time-dependent SUV and the kinetic parameters (voxel values within the tumour were correlated). Stabilization curves were analyzed for stabilization speed, maximal correlation and the correlation decrease following the maximal correlation. These stabilization parameters were correlated with the region-averaged kinetic parameters. The FLT SUV was highly correlated with the vasculature fraction immediately post-injection, followed by a maximum in correlation with perfusion/permeability. At later times post-injection the FLT SUV was highly correlated (Pearson correlation coefficient above 0.95) with the FLT influx parameter for cases with tumour-averaged SUV at 30–50 min above 2, while others were indeterminate (correlation coefficients from 0.1 to 0.97). All cases with highly correlated SUV and FLT influx parameter had a correlation coefficient within 0.5% of its maximum in the period of 30–50 min post-injection. Stabilization time was inversely proportional to the FLT influx rate. The correlation between the FLT SUV and the FLT influx parameter dropped at later times post-injection, with the drop being proportional to the dephosphorylation rate. FLT was found to be metabolically stable in canines. The FLT PET imaging protocol should define minimal and maximal FLT uptake periods, which would be 30–50 min post-injection.

  1. Bioluminescent imaging: a critical tool in pre-clinical oncology research.

    LENUS (Irish Health Repository)

    O'Neill, Karen

    2010-02-01

    Bioluminescent imaging (BLI) is a non-invasive imaging modality widely used in the field of pre-clinical oncology research. Imaging of small animal tumour models using BLI involves the generation of light by luciferase-expressing cells in the animal following administration of substrate. This light may be imaged using an external detector. The technique allows a variety of tumour-associated properties to be visualized dynamically in living models. The increasing use of BLI as a small-animal imaging modality has led to advances in the development of xenogeneic, orthotopic, and genetically engineered animal models expressing luciferase genes. This review aims to provide insight into the principles of BLI and its applications in cancer research. Many studies to assess tumour growth and development, as well as efficacy of candidate therapeutics, have been performed using BLI. More recently, advances have also been made using bioluminescent imaging in studies of protein-protein interactions, genetic screening, cell-cycle regulators, and spontaneous cancer development. Such novel studies highlight the versatility and potential of bioluminescent imaging in future oncological research.

  2. Enhancement of nerve structure segmentation by a correntropy-based pre-image approach

    Directory of Open Access Journals (Sweden)

    J. Gil-González

    2017-05-01

    Peripheral Nerve Blocking (PNB) is a commonly used technique for performing regional anesthesia and managing pain. PNB comprises the administration of anesthetics in the proximity of a nerve. In this sense, the success of PNB procedures depends on an accurate location of the target nerve. Recently, ultrasound images (UI) have been widely used to locate nerve structures for PNB, since they enable a noninvasive visualization of the target nerve and the anatomical structures around it. However, UI are affected by speckle noise, which makes it difficult to accurately locate a given nerve. Thus, it is necessary to perform a filtering step to attenuate the speckle noise without eliminating relevant anatomical details that are required for high-level tasks, such as segmentation of nerve structures. In this paper, we propose a UI improvement strategy based on a pre-image filter. In particular, we map the input images by a nonlinear function (kernel). Specifically, we employ a correntropy-based mapping as the kernel functional to code higher-order statistics of the input data under both nonlinear and non-Gaussian conditions. We validate our approach against a UI dataset focused on nerve segmentation for PNB. Likewise, our Correntropy-based Pre-Image Filtering (CPIF) is applied as a pre-processing stage to segment nerve structures in UI. The segmentation performance is measured in terms of the Dice coefficient. According to the results, we observe that CPIF finds a suitable approximation for UI by highlighting discriminative nerve patterns.
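
    As a rough sketch of correntropy-style processing (not the authors' pre-image method), the filter below averages neighbours weighted by a Gaussian kernel on intensity differences, the similarity measure that underlies correntropy; the window size and kernel width are illustrative assumptions.

      import numpy as np

      def correntropy_filter(img, half_window=2, sigma=0.1):
          # Average neighbours weighted by a Gaussian kernel on intensity
          # differences, so dissimilar (speckle) pixels contribute little.
          H, W = img.shape
          pad = np.pad(img, half_window, mode="reflect")
          num = np.zeros((H, W)); den = np.zeros((H, W))
          for dy in range(-half_window, half_window + 1):
              for dx in range(-half_window, half_window + 1):
                  shifted = pad[half_window + dy: half_window + dy + H,
                                half_window + dx: half_window + dx + W]
                  w = np.exp(-(img - shifted) ** 2 / (2 * sigma ** 2))
                  num += w * shifted
                  den += w
          return num / den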

  3. SmartScan: a robust pushbroom imaging concept for moderate spacecraft attitude stability

    Science.gov (United States)

    Janschek, K.; Tchernykh, V.; Dyblenko, S.; Harnisch, B.

    2017-11-01

    Pushbroom scan cameras with linear image sensors, commonly used for Earth observation from satellites, require high attitude stability during the image acquisition. Especially noticeable are the effects of high frequency attitude variations originating from micro shocks and vibrations, produced by momentum and reaction wheels, mechanically activated coolers, steering and deployment mechanics and other reasons. The SMARTSCAN imaging concept offers high quality imaging even with moderate satellite attitude stability on a sole opto-electronic basis without any moving parts. It uses real-time recording of the actual image motion in the focal plane of the remote sensing camera during the frame acquisition and a posteriori correction of the obtained image distortions on base of the image motion record. Exceptional real-time performances with subpixel accuracy image motion measurement are provided by an innovative high-speed onboard optoelectronic correlation processor. SMARTSCAN allows therefore using smart pushbroom cameras for hyper-spectral imagers on satellites and platforms which are not specially intended for imaging missions, e.g. micro satellites. The paper gives an overview on the system concept and main technologies used (advanced optical correlator for ultra high-speed image motion tracking), it discusses the conceptual design for a smart compact space camera and it reports on airborne test results of a functional breadboard model.

  4. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least-squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method over regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.

  5. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    Science.gov (United States)

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image quality sharper by preserving the edges or boundaries more accurately. In this work the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
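
    To make the TV objective concrete, the sketch below minimizes 0.5‖x − y‖² + λ·TV(x) on a denoising problem by plain gradient descent on a smoothed TV term. This is a simple stand-in for the ADAL/split-Bregman solvers discussed here, with illustrative parameter values.

      import numpy as np

      def tv_denoise(y, lam=0.1, eps=1e-3, step=0.2, iters=200):
          # Smoothed TV: sum of sqrt(|grad x|^2 + eps^2), differentiable everywhere
          x = y.astype(float).copy()
          for _ in range(iters):
              gx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
              gy = np.diff(x, axis=0, append=x[-1:, :])
              mag = np.sqrt(gx**2 + gy**2 + eps**2)
              px, py = gx / mag, gy / mag
              # Divergence of the normalized gradient field (backward differences)
              div = (np.diff(px, axis=1, prepend=px[:, :1])
                     + np.diff(py, axis=0, prepend=py[:, :1]))
              x -= step * ((x - y) - lam * div)
          return x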

  6. Design, fabrication and actuation of a MEMS-based image stabilizer for photographic cell phone applications

    International Nuclear Information System (INIS)

    Chiou, Jin-Chern; Hung, Chen-Chun; Lin, Chun-Ying

    2010-01-01

    This work presents a MEMS-based image stabilizer applied for the anti-shaking function in photographic cell phones. The proposed stabilizer is designed as a two-axis decoupling XY stage, 1.4 × 1.4 × 0.1 mm³ in size, and is adequately strong to suspend an image sensor for the anti-shaking photographic function. This stabilizer is fabricated by complex fabrication processes, including inductively coupled plasma (ICP) processes and a flip-chip bonding technique. Based on the special designs of a hollow handle layer and a corresponding wire-bonding assisted holder, electrical signals of the suspended image sensor can be successfully sent out through 32 signal springs without incurring damage during wire-bonding packaging. The longest calculated traveling distance of the stabilizer is 25 µm, which is sufficient to resolve the anti-shaking problem in a three-megapixel image sensor. Accordingly, the applied voltage for the 25 µm moving distance is 38 V. Moreover, the resonant frequency of the actuating device with the image sensor is 1.123 kHz.

  7. Segmentation of Brain Tissues from Magnetic Resonance Images Using Adaptively Regularized Kernel-Based Fuzzy C-Means Clustering

    Directory of Open Access Journals (Sweden)

    Ahmed Elazab

    2015-01-01

    An adaptively regularized kernel-based fuzzy C-means clustering framework is proposed for segmentation of brain magnetic resonance images. The framework takes the form of three algorithms, in which the local average grayscale is replaced by the grayscale of the average-filtered, median-filtered, and devised weighted images, respectively. The algorithms employ the heterogeneity of grayscales in the neighborhood, exploit this measure for local contextual information, and replace the standard Euclidean distance with Gaussian radial basis kernel functions. The main advantages are adaptiveness to local context, enhanced robustness to preserve image details, independence of clustering parameters, and decreased computational costs. The algorithms have been validated against both synthetic and clinical magnetic resonance images with different types and levels of noise and compared with 6 recent soft clustering algorithms. Experimental results show that the proposed algorithms are superior in preserving image details and segmentation accuracy while maintaining a low computational complexity.
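
    The kernel-distance core of such a method can be sketched as plain kernel fuzzy C-means on 1-D grayscale values, without the adaptive spatial regularization term described above; the kernel width, fuzzifier and cluster count are illustrative assumptions.

      import numpy as np

      def kernel_fcm(x, c=3, m=2.0, sigma=1.0, iters=50, seed=0):
          # x: 1-D array of grayscale values; returns memberships and centers
          rng = np.random.default_rng(seed)
          v = rng.choice(x, size=c)                         # initial centers
          K = lambda a, b: np.exp(-(a[:, None] - b[None, :])**2 / sigma**2)
          for _ in range(iters):
              d = np.maximum(1.0 - K(x, v), 1e-12)          # kernel-induced distance
              u = d ** (-1.0 / (m - 1.0))
              u /= u.sum(axis=1, keepdims=True)             # membership update
              w = (u ** m) * K(x, v)
              v = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)  # center update
          return u, v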

  8. A Recursive Fuzzy System for Efficient Digital Image Stabilization

    Directory of Open Access Journals (Sweden)

    Nikolaos Kyriakoulis

    2008-01-01

    A novel digital image stabilization technique is proposed in this paper. It is based on a fuzzy Kalman compensation of the global motion vector (GMV), which is estimated in the log-polar plane. The GMV is extracted using four local motion vectors (LMVs) computed on respective subimages in the log-polar plane. The fuzzy Kalman system consists of a fuzzy system with the Kalman filter's discrete time-invariant definition. Due to this inherited recursiveness, the output results in smoothed image sequences. The proposed stabilization system aims to compensate any oscillations of the frame absolute positions, based on the motion estimation in the log-polar domain, filtered by the fuzzy Kalman system; thus the advantages of both the fuzzy Kalman system and the log-polar transformation are exploited. The described technique produces optimal results in terms of the output quality and the level of compensation.

  9. Terahertz Pulsed Imaging and Magnetic Resonance Imaging as Tools to Probe Formulation Stability

    Science.gov (United States)

    Zhang, Qilei; Gladden, Lynn F.; Avalle, Paolo; Zeitler, J. Axel; Mantle, Michael D.

    2013-01-01

    Dissolution stability over the entire shelf life duration is of critical importance to ensure the quality of solid dosage forms. Changes in the drug release profile during storage may affect the bioavailability of drug products. This study investigated the stability of a commercial tablet (Lescol® XL) when stored under accelerated conditions (40 °C/75% r.h.). Terahertz pulsed imaging (TPI) was used to investigate the structure of the tablet coating before and after the accelerated aging process. The results indicate that the coating was reduced in thickness and exhibited a higher density after being stored under accelerated conditions for four weeks. In situ magnetic resonance imaging (MRI) of the water penetration processes during tablet dissolution in a USP-IV dissolution cell equipped with an in-line UV-vis analyzer was carried out to study local differences in water uptake into the tablet matrix between the stressed and unstressed state. The drug release profiles of the Lescol® XL tablet before and after the accelerated storage stability testing were compared using a “difference” factor f1 and a “similarity” factor f2. The results reveal that even though the physical properties of the coating layers changed significantly during the stress testing, the coating protected the tablet matrix and the densification of the coating polymer had no adverse effect on the drug release performance. PMID:24300564

  10. Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy

    Directory of Open Access Journals (Sweden)

    Duo Hao

    2017-11-01

    Cameras mounted on vehicles frequently suffer from image shake due to the vehicles' motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which exhibit the difference of probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas the subtraction of the resulting sum from the GMV represents the intentional motions. The proposed stabilization method is compared with several known methods, namely, the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
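
    Relative entropy between two decomposed modes can be computed as a Kullback–Leibler divergence between their empirical distributions; a minimal sketch, with the bin count as an illustrative assumption.

      import numpy as np

      def relative_entropy(p, q, bins=64):
          # KL divergence between histograms of two mode time series,
          # used to tell jitter modes apart from intentional-motion modes.
          lo, hi = min(p.min(), q.min()), max(p.max(), q.max())
          hp, _ = np.histogram(p, bins=bins, range=(lo, hi))
          hq, _ = np.histogram(q, bins=bins, range=(lo, hi))
          hp = hp + 1e-12; hq = hq + 1e-12       # avoid log(0)
          hp = hp / hp.sum(); hq = hq / hq.sum()
          return float(np.sum(hp * np.log(hp / hq)))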

  11. Gamma regularization based reconstruction for low dose CT

    International Nuclear Information System (INIS)

    Zhang, Junfeng; Chen, Yang; Hu, Yining; Luo, Limin; Shu, Huazhong; Li, Bicao; Liu, Jin; Coatrieux, Jean-Louis

    2015-01-01

    Reducing the radiation in computerized tomography is today a major concern in radiology. Low dose computerized tomography (LDCT) offers a sound way to deal with this problem. However, more severe noise in the reconstructed CT images is observed under low dose scan protocols (e.g. lowered tube current or voltage values). In this paper we propose a Gamma regularization based algorithm for LDCT image reconstruction. This solution is flexible and provides a good balance between the regularizations based on the l0-norm and the l1-norm. We evaluate the proposed approach using the projection data from simulated phantoms and scanned Catphan phantoms. Qualitative and quantitative results show that the Gamma regularization based reconstruction can perform better in both edge preservation and noise suppression when compared with other norms. (paper)

  12. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

    This paper proposes a presentation of unfolding regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The modeling and unfolding of Platonic and Archimedean polyhedra are done using the 3ds Max program. This paper is intended as an example of descriptive geometry applications.

  13. Lavrentiev regularization method for nonlinear ill-posed problems

    International Nuclear Information System (INIS)

    Kinh, Nguyen Van

    2002-10-01

    In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 noisy data y_δ ∈ X with ‖y_δ − y_0‖ ≤ δ are given and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method, solutions x_α,δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x − x*) = y_δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x* − x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
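
    A minimal 1-D illustration of the scheme under stated assumptions: F(x) = x³ is monotone, hence accretive on the real line, and the bracket and parameter values are arbitrary choices for the demo.

      import numpy as np
      from scipy.optimize import brentq

      F = lambda x: x**3                    # monotone, so the equation has a unique root
      x_true = 1.0
      y0 = F(x_true)
      delta = 1e-2
      y_delta = y0 + delta                  # noisy data with |y_delta - y0| <= delta
      x_star = 0.5                          # initial guess / reference element

      def lavrentiev(alpha):
          # Solve F(x) + alpha*(x - x_star) = y_delta; LHS is strictly increasing
          g = lambda x: F(x) + alpha * (x - x_star) - y_delta
          return brentq(g, -10.0, 10.0)

      for alpha in (1e-1, 1e-2, 1e-3):
          x_ad = lavrentiev(alpha)
          print(f"alpha={alpha:g}  x={x_ad:.5f}  error={abs(x_ad - x_true):.2e}")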

  14. Thin-shell wormholes from the regular Hayward black hole

    Energy Technology Data Exchange (ETDEWEB)

    Halilsoy, M.; Ovgun, A.; Mazharimousavi, S.H. [Eastern Mediterranean University, Department of Physics, Mersin 10 (Turkey)

    2014-03-15

    We revisit the regular black hole found by Hayward in 4-dimensional static, spherically symmetric spacetime. To find a possible source for such a spacetime we resort to nonlinear electrodynamics in general relativity. It is found that a magnetic field within this context gives rise to the regular Hayward black hole. By employing such a regular black hole we construct a thin-shell wormhole for the case of various equations of state on the shell. We abbreviate a general equation of state by p = ψ(σ), where p is the surface pressure, which is a function of the mass density σ. In particular, linear, logarithmic, Chaplygin, etc. forms of equations of state are considered. In each case we study the stability of the thin shell against linear perturbations. We plot the stability regions by tuning the parameters of the theory. It is observed that the role of the Hayward parameter is to make the TSW more stable. Perturbations of the throat under a small-velocity condition are also studied. The matter of our TSWs, however, remains exotic. (orig.)

  15. The Effect of Each Other Perceived Service Quality and Institutional Image in Pre-school Education

    Directory of Open Access Journals (Sweden)

    Ebru Sönmez Karapınar

    2015-12-01

    The main purpose of this study is to examine the effect of perceived service quality on perceived institutional image, and the effect of perceived institutional image on perceived service quality, in pre-school education facilities. Two models were developed for that purpose. Perceived service quality was evaluated in five dimensions (empathy, reliability, responsiveness, assurance and tangibles) and perceived institutional image was evaluated in four dimensions (quality image, institutional communication, social image and institutional perspective). The influence of the independent variable on the dependent variable was examined in both models. The sample of the study consists of 250 families who use services provided by pre-schools in Kayseri. Data were collected by means of a questionnaire formed on the basis of two scales, the "SERVPERF scale" and the "institutional image scale". Factor analysis, the KMO test and regression analysis were used to analyse the data. Findings indicate that perceived service quality and perceived institutional image positively affect each other.

  16. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong; Nan, Liangliang; Yan, Dongming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2015-01-01

    Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements…

  17. The stability of liquid-filled matrix ionization chamber electronic portal imaging devices for dosimetry purposes

    International Nuclear Information System (INIS)

    Louwe, R.J.W.; Tielenburg, R.; Ingen, K.M. van; Mijnheer, B.J.; Herk, M.B. van

    2004-01-01

    This study was performed to determine the stability of liquid-filled matrix ionization chamber (LiFi-type) electronic portal imaging devices (EPID) for dosimetric purposes. The short- and long-term stability of the response was investigated, as well as the importance of factors influencing the response (e.g., temperature fluctuations, radiation damage, and the performance of the electronic hardware). It was shown that testing the performance of the electronic hardware as well as the short-term stability of the imagers may reveal the cause of a poor long-term stability of the imager response. In addition, the short-term stability was measured to verify the validity of the fitted dose-response curve immediately after beam startup. The long-term stability of these imagers could be considerably improved by correcting for room temperature fluctuations and gradual changes in response due to radiation damage. As a result, the reproducibility was better than 1% (1 SD) over a period of two years. The results of this study were used to formulate recommendations for a quality control program for portal dosimetry. The effect of such a program was assessed by comparing the results of portal dosimetry and in vivo dosimetry using diodes during the treatment of 31 prostate patients. The improvement of the results for portal dosimetry was consistent with the deviations observed with the reproducibility tests in that particular period. After a correction for the variation in response of the imager, the average difference between the measured and prescribed dose during the treatment of prostate patients was -0.7%±1.5% (1 SD), and -0.6%±1.1% (1 SD) for EPID and diode in vivo dosimetry, respectively. It can be concluded that a high stability of the response can be achieved for this type of EPID by applying a rigorous quality control program

  18. A comparative study of MR imaging scores and MR perfusion imaging in pre-operative grading of intracranial gliomas

    International Nuclear Information System (INIS)

    Wu Honglin; Chen Junkun; Zhang Zongjun; Lu Guangming; Chen Ziqian; Wang Wei; Ji Xueman; Tang Xiaojun; Li Lin

    2005-01-01

    Objective: To compare the accuracy of MR imaging scores with MR perfusion imaging in the pre-operative grading of intracranial gliomas. Methods: Thirty patients with intracranial gliomas (8 low-grade and 22 high-grade, according to WHO criteria) were examined with MR perfusion imaging pre-operatively. The lesions were evaluated by using an MR imaging score based on nine criteria. rCBV ratios of the lesions were calculated by comparing the CBV of the lesion with that of contralateral normal white matter. The scores and ratios in high-grade and low-grade tumours were compared. Results: The MR imaging score of low grade (grades I and II) gliomas (0.67±0.29) was significantly lower than that of grade III (1.32±0.47) (t=-3.48, P=0.003) and grade IV (1.56±0.20) (t=-7.36, P=0.000) gliomas. There was no statistical difference between the MR imaging scores of grade III and grade IV gliomas (t=-1.39, P=0.182). The maximum rCBV ratio of low grade (grades I and II) gliomas (2.38±0.66) was significantly lower than that of grade III (5.81±3.20) (t=-3.57, P=0.003) and grade IV (6.99±2.47) (t=-5.09, P=0.001) gliomas. There was no statistical difference between the rCBV ratios of grade III and grade IV gliomas (t=-0.93, P=0.365). The accuracy of MR imaging scores in the noninvasive grading of untreated gliomas was almost the same as that of MR perfusion imaging (90.00% vs 89.29%). Conclusion: MR imaging scores and MR perfusion imaging are two very useful tools in the evaluation of the histopathologic grade of cerebral gliomas. The overall accuracy in the noninvasive grading of gliomas may be improved if MR imaging scores and MR perfusion imaging are combined. (authors)

  19. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging

    Science.gov (United States)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.

  20. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    Science.gov (United States)

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
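
    For the Tikhonov case, the GCV score can be evaluated cheaply through an SVD; the Lanczos/Gauss-quadrature machinery in the paper approximates exactly this quantity for large problems. A minimal dense-matrix sketch, with the search grid as an illustrative assumption:

      import numpy as np

      def gcv_lambda(A, y, lambdas):
          # GCV(lam) = n * ||(I - A_lam) y||^2 / tr(I - A_lam)^2,
          # where A_lam = A (A^T A + lam I)^{-1} A^T, computed via the SVD.
          n = A.shape[0]
          U, s, _ = np.linalg.svd(A, full_matrices=False)
          uy = U.T @ y
          resid_perp = y - U @ uy               # component outside the range of A
          best, best_score = None, np.inf
          for lam in lambdas:
              f = s**2 / (s**2 + lam)           # Tikhonov filter factors
              resid = np.sum(((1 - f) * uy) ** 2) + resid_perp @ resid_perp
              score = n * resid / (n - np.sum(f)) ** 2
              if score < best_score:
                  best, best_score = lam, score
          return best

      # Example: lam = gcv_lambda(A, y, np.logspace(-8, 2, 60))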

  1. Total variation regularization for fMRI-based prediction of behavior

    Science.gov (United States)

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-01-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional MRI (fMRI) data, that provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioural variables from brain activation patterns better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the ℓ1 norm of the image gradient, a.k.a. its Total Variation (TV), as regularization. We apply for the first time this method to fMRI data, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification. PMID:21317080

  2. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2013-09-22

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: 1) LSM with a reflectivity model common for all the plane-wave gathers provides the best image when the migration velocity model is accurate, but it is more sensitive to velocity errors; 2) the regularized plane-wave LSM is more robust in the presence of velocity errors; and 3) LSM achieves both computational and I/O savings by plane-wave encoding compared to shot-domain LSM for the models tested.

  3. Directional Total Generalized Variation Regularization for Impulse Noise Removal

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas; Dong, Yiqiu

    2017-01-01

    this regularizer for directional images is highly advantageous. In order to estimate directions in impulse noise corrupted images, which is much more challenging compared to Gaussian noise corrupted images, we introduce a new Fourier transform-based method. Numerical experiments show that this method is more...

  4. Rice Husk Ash to Stabilize Heavy Metals Contained in Municipal Solid Waste Incineration Fly Ash: First Results by Applying New Pre-treatment Technology

    Directory of Open Access Journals (Sweden)

    Laura Benassi

    2015-10-01

    Full Text Available A new technology was recently developed for municipal solid waste incineration (MSWI) fly ash stabilization, based on the employment of all waste and byproduct materials. In particular, the proposed method is based on the use of amorphous silica contained in rice husk ash (RHA), an agricultural byproduct material (COSMOS-RICE project). The obtained final inert can be applied in several applications to produce “green composites”. In this work, for the first time, a process for pre-treatment of rice husk, before its use in the stabilization of heavy metals, based on the employment of Instant Pressure Drop technology (DIC), was tested. The aim of this work is to verify the influence of the pre-treatment on the efficiency of heavy metals stabilization in the COSMOS-RICE technology. The DIC technique is based on a thermomechanical effect induced by an abrupt transition from high steam pressure to a vacuum, to produce changes in the material. Two different DIC pre-treatments were selected and thermal annealing at different temperatures was performed on rice husk. The resulting RHAs were employed to obtain COSMOS-RICE samples, and the stabilization procedure was tested on the MSWI fly ash. In the frame of this work, some thermal treatments were also realized in O2-limiting conditions, to test the effect of charcoal obtained from RHA on the stabilization procedure. The results of this work show that the application of DIC technology into existing treatment cycles of some waste materials should be investigated in more detail to offer the possibility to stabilize and reuse waste.

  5. Rice Husk Ash to Stabilize Heavy Metals Contained in Municipal Solid Waste Incineration Fly Ash: First Results by Applying New Pre-treatment Technology

    Science.gov (United States)

    Benassi, Laura; Franchi, Federica; Catina, Daniele; Cioffi, Flavio; Rodella, Nicola; Borgese, Laura; Pasquali, Michela; Depero, Laura E.; Bontempi, Elza

    2015-01-01

    A new technology was recently developed for municipal solid waste incineration (MSWI) fly ash stabilization, based on the employment of all waste and byproduct materials. In particular, the proposed method is based on the use of amorphous silica contained in rice husk ash (RHA), an agricultural byproduct material (COSMOS-RICE project). The obtained final inert can be applied in several applications to produce “green composites”. In this work, for the first time, a process for pre-treatment of rice husk, before its use in the stabilization of heavy metals, based on the employment of Instant Pressure Drop technology (DIC), was tested. The aim of this work is to verify the influence of the pre-treatment on the efficiency of heavy metals stabilization in the COSMOS-RICE technology. The DIC technique is based on a thermomechanical effect induced by an abrupt transition from high steam pressure to a vacuum, to produce changes in the material. Two different DIC pre-treatments were selected and thermal annealing at different temperatures was performed on rice husk. The resulting RHAs were employed to obtain COSMOS-RICE samples, and the stabilization procedure was tested on the MSWI fly ash. In the frame of this work, some thermal treatments were also realized in O2-limiting conditions, to test the effect of charcoal obtained from RHA on the stabilization procedure. The results of this work show that the application of DIC technology into existing treatment cycles of some waste materials should be investigated in more detail to offer the possibility to stabilize and reuse waste. PMID:28793605

  6. Differentiating invasive and pre-invasive lung cancer by quantitative analysis of histopathologic images

    Science.gov (United States)

    Zhou, Chuan; Sun, Hongliu; Chan, Heang-Ping; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir; Kazerooni, Ella

    2018-02-01

    We are developing an automated radiopathomics method for diagnosis of lung nodule subtypes. In this study, we investigated the feasibility of using quantitative methods to analyze the tumor nuclei and cytoplasm in pathologic whole-slide images for the classification of pathologic subtypes of invasive nodules and pre-invasive nodules. We developed a multiscale blob detection method with watershed transform (MBD-WT) to segment the tumor cells. Pathomic features were extracted to characterize the size, morphology, sharpness, and gray level variation in each segmented nucleus and the heterogeneity patterns of tumor nuclei and cytoplasm. With permission of the National Lung Screening Trial (NLST) project, a data set containing 90 digital haematoxylin and eosin (HE) whole-slide images from 48 cases was used in this study. The 48 cases contain 77 regions of invasive subtypes and 43 regions of pre-invasive subtypes outlined by a pathologist on the HE images, using the pathological tumor region description provided by NLST as reference. A logistic regression model (LRM) was built using leave-one-case-out resampling and receiver operating characteristic (ROC) analysis for classification of invasive and pre-invasive subtypes. With 11 selected features, the LRM achieved a test area under the ROC curve (AUC) value of 0.91 ± 0.03. The results demonstrated that the pathologic invasiveness of lung adenocarcinomas could be categorized with high accuracy using pathomics analysis.

  7. CHARACTERIZATIONS OF FUZZY SOFT PRE SEPARATION AXIOMS

    OpenAIRE

    El-Latif, Alaa Mohamed Abd

    2015-01-01

    The notions of fuzzy pre open soft sets and fuzzy pre closed soft sets were introduced by Abd El-latif et al. [2]. In this paper, we continue the study on fuzzy soft topological spaces and investigate the properties of fuzzy pre open soft sets and fuzzy pre closed soft sets, and study various properties and notions related to these structures. In particular, we study the relationship between fuzzy pre soft interior and fuzzy pre soft closure. Moreover, we study the properties of fuzzy soft pre regulars...

  8. How does confirmation of motivations influence on the pre- and post-visit change of image of a destination?

    Directory of Open Access Journals (Sweden)

    Asunción Beerli-Palacio

    2017-07-01

    Full Text Available Purpose - The purpose of this paper is to analyse how the confirmation of tourists' motivations influences the change in the image of a destination between pre- and post-visit. That is, once the tourist has made the trip, the image gap will be larger or smaller depending on whether expectations have been met and motivations confirmed. Design/methodology/approach - The authors conducted an empirical study with a representative sample of leisure tourists to Tenerife (Canary Islands, Spain) of both sexes, 16 or more years of age, visiting the island of Tenerife for the first time from abroad or from the rest of Spain. The final sample was 411 participants. Findings - The results verify that the confirmation of the intellectual and escape motivations directly and positively influences the change in cognitive image between pre- and post-visit. The fact that the affiliation motivations do not influence the cognitive image gap may be because tourists who stay with friends or family interact less with the destination, so that the cognitive image pre- and post-visit does not vary. Originality/value - This research seeks to contribute towards a better understanding of the image of destinations and, more specifically, of how the image changes after a visit to the destination. In this sense, and given the lack of empirical evidence about how the confirmation of motivations influences the destination image gap, this research aims to improve knowledge about the personal factors that influence the change of the pre- and post-visit destination image.

  9. Designing a stable feedback control system for blind image deconvolution.

    Science.gov (United States)

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to the undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, so that the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective on image propagation, and can perform favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images.

  10. On image pre-processing for PIV of single- and two-phase flows over reflecting objects

    NARCIS (Netherlands)

    Deen, N.G.; Willems, P.; van Sint Annaland, M.; Kuipers, J.A.M.; Lammertink, Rob G.H.; Kemperman, Antonius J.B.; Wessling, Matthias; van der Meer, Walterus Gijsbertus Joseph

    2010-01-01

    A novel image pre-processing scheme for PIV of single- and two-phase flows over reflecting objects which does not require the use of additional hardware is discussed. The approach for single-phase flow consists of image normalization and intensity stretching followed by background subtraction. For

  11. Enhancing the thermal stability of natural rubber/recycled ethylene–propylene–diene rubber blends by means of introducing pre-vulcanised ethylene–propylene–diene rubber and electron beam irradiation

    International Nuclear Information System (INIS)

    Nabil, H.; Ismail, H.

    2014-01-01

    Highlights: • A new processing route was introduced to optimise the thermal stability of NR/R-EPDM blends. • Pre-vulcanised EPDM and EB irradiation were introduced into NR/R-EPDM blends. • Thermal stability is clearly enhanced by applying these two techniques. • The new processing route is demonstrably successful for NR/R-EPDM blends. - Abstract: Most rubber materials are subject to oxidation. The rate of oxidation depends on the type of rubber, the processing method, and end-use conditions. The oxidation of rubber can result in the loss of physical properties, such as tensile strength, elongation, and flexibility; hence, the service life is determined by oxidation stability. Thermal properties are relevant to the potential use of polymeric materials in many consumer-oriented applications. Thermo-oxidative ageing and thermogravimetric analysis (TGA) have proven to be successful techniques for determining the thermal stability of polymers and polymer blends. In this article, the preparation of a series of natural rubber/recycled ethylene–propylene–diene rubber (NR/R-EPDM) blends is described. Processing of the blends, by means of introducing pre-vulcanised EPDM and electron beam (EB) irradiation, was carried out. Two thermal analysis methods, namely thermo-oxidative ageing and thermogravimetric analysis, were conducted. The results indicated that pre-vulcanising EPDM for 1.45 min (ts − 2) is sufficient to obtain the optimum retained tensile strength and elongation at break. It was simultaneously observed that the introduction of pre-vulcanised EPDM increased the decomposition temperature and activation energy, showing optimum values at a pre-vulcanising time of 3.45 min (ts). In the latter study, the retained properties increased after EB irradiation. These results can be verified by the thermal decomposition temperatures and their activation energies. The obtained TG profiles and the calculated kinetic parameters indicated that introducing EB irradiation

  12. Biochemical Stability Analysis of Nano Scaled Contrast Agents Used in Biomolecular Imaging Detection of Tumor Cells

    Science.gov (United States)

    Kim, Jennifer; Kyung, Richard

    Imaging contrast agents are materials used to improve the visibility of internal body structures in the imaging process. Many agents used for contrast enhancement are now studied empirically and computationally by researchers. Among various imaging techniques, magnetic resonance imaging (MRI) has become a major diagnostic tool in many clinical specialties due to its non-invasive character and its safety with regard to ionizing radiation exposure. Recently, researchers have prepared aqueous fullerene nanoparticles using electrochemical methods. In this paper, computational simulations of the thermodynamic stabilities of nano-scaled contrast agents that can be used in biomolecular imaging detection of tumor cells are presented, using nanomaterials such as fluorescent functionalized fullerenes. In addition, the stability and safety of different types of contrast agents composed of metal oxide a, b, and c are tested in the imaging process. Through analysis of the computational simulations, the stabilities of the contrast agents, determined by the optimized energies of the conformations, are presented, and the resulting numerical data are compared. In addition, Density Functional Theory (DFT) is used to model the electronic properties of the compound.

  13. Regularization design for high-quality cone-beam CT of intracranial hemorrhage using statistical reconstruction

    Science.gov (United States)

    Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.

    2016-03-01

    Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost, potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squares (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed by traditional image regularization exhibits nonuniform spatial resolution and noise due to interaction between the statistical weights and regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges. The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test-bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity in spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and the best overall image quality. The proposed regularization provides a valuable means to achieve uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging.

  14. Diagnostic value of radiological imaging pre- and post-drainage of pleural effusions.

    Science.gov (United States)

    Corcoran, John P; Acton, Louise; Ahmed, Asia; Hallifax, Robert J; Psallidas, Ioannis; Wrightson, John M; Rahman, Najib M; Gleeson, Fergus V

    2016-02-01

    Patients with an unexplained pleural effusion often require urgent investigation. Clinical practice varies due to uncertainty as to whether an effusion should be drained completely before diagnostic imaging. We performed a retrospective study of patients undergoing medical thoracoscopy for an unexplained effusion. In 110 patients with paired (pre- and post-drainage) chest X-rays and 32 patients with paired computed tomography scans, post-drainage imaging did not provide additional information that would have influenced the clinical decision-making process. © 2015 Asian Pacific Society of Respirology.

  15. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Science.gov (United States)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections and noise levels to allow the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials and foams, as well as in non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
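
    The quantity driving the reported relation is straightforward to compute. The sketch below measures gradient sparsity as the number of non-negligible gradient magnitudes; the proportionality constant in the projection-count rule of thumb is a hypothetical placeholder to be calibrated per setup, not a value from the paper.

```python
import numpy as np

def gradient_sparsity(img, tol=1e-6):
    # number of pixels with a non-negligible spatial gradient magnitude
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return int(np.count_nonzero(np.hypot(gx, gy) > tol))

# The near-linear relation suggests: n_proj ~ c * gradient_sparsity(img),
# with c a calibration constant fitted on reference scans (assumed here).
```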

  16. PET regularization by envelope guided conjugate gradients

    International Nuclear Information System (INIS)

    Kaufman, L.; Neumaier, A.

    1996-01-01

    The authors propose a new way to iteratively solve large-scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to iteratively obtain approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows the regularization parameter to be adjusted adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations.
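
    As a point of comparison, the L-curve itself is just the curve of (residual norm, solution norm) pairs over the regularization parameter. A naive dense sketch follows; the paper's point is that these points can be approximated iteratively inside a preconditioned CG run rather than by solving each system exactly, as done here.

```python
import numpy as np

def l_curve_points(A, b, lambdas):
    # (||A x - b||, ||x||) for Tikhonov solutions; plotting the pairs on
    # log-log axes traces the L-curve whose corner picks the parameter
    pts = []
    n = A.shape[1]
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return pts
```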

  17. Pre-operative imaging of rectal cancer and its impact on surgical performance and treatment outcome.

    Science.gov (United States)

    Beets-Tan, R G H; Lettinga, T; Beets, G L

    2005-08-01

    To discuss the ability of pre-operative MRI to have a beneficial effect on surgical performance and treatment outcome in patients with rectal cancer. A description of how MRI can be used as a tool to select patients for differentiated neoadjuvant treatment, how it can be used as an anatomical road map for the resection of locally advanced cases, and how it can serve as a tool for quality assurance of both the surgical procedure and overall patient management. As an illustration, the proportion of microscopically complete resections in the period 1993-1997, when there was no routine pre-operative imaging, is compared to that of the period 1998-2002, when pre-operative MR imaging was standardized. The proportion of R0 resections increased from 92.5 to 97% (p=0.08) and the proportion of resections with a lateral tumour-free margin of >1 mm increased from 84.4 to 92.1% (p=0.03). The incomplete resections in the first period were mainly due to inadequate surgical management of unsuspected advanced or bulky tumours, whereas in the second period insufficient consideration was given to extensive neoadjuvant treatment when the tumour was close to or invading the mesorectal fascia on MR. There are good indications that in our setting pre-operative MR imaging, along with other improvements in rectal cancer management, had a beneficial effect on patient outcome. Audit and discussion of the incomplete resections can lead to improved operative and perioperative management.

  18. Nonlocal Regularized Algebraic Reconstruction Techniques for MRI: An Experimental Study

    Directory of Open Access Journals (Sweden)

    Xin Li

    2013-01-01

    Full Text Available We attempt to revitalize researchers' interest in algebraic reconstruction techniques (ART) by expanding their capabilities and demonstrating their potential in speeding up the process of MRI acquisition. Using a continuous-to-discrete model, we experimentally study the application of ART to MRI reconstruction, which unifies previous nonuniform-fast-Fourier-transform (NUFFT)-based and gridding-based approaches. Under the framework of ART, we advocate the use of nonlocal regularization techniques which are leveraged from our previous research on modeling photographic images. It is experimentally shown that nonlocal regularization ART (NR-ART) can often outperform its local counterparts in terms of both subjective and objective qualities of reconstructed images. On one real-world k-space data set, we find that nonlocal regularization can achieve satisfactory reconstruction from as few as one-third of samples. We also address an issue related to image reconstruction from real-world k-space data but overlooked in the open literature: the consistency of reconstructed images across different resolutions. A resolution-consistent extension of NR-ART is developed and shown to effectively suppress the artifacts arising from frequency extrapolation. Both source codes and experimental results of this work are made fully reproducible.

  19. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Usually, diagnosis is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting ROMP for OMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally an optimization leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than that of other state-of-the-art schemes.
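
    The ROMP selection rule can be sketched compactly. The following is a minimal, assumption-laden Python version in the spirit of Needell and Vershynin's algorithm (candidate selection, a comparable-magnitude regularization step, then least squares on the accumulated support); it is an illustration, not the authors' implementation.

```python
import numpy as np

def romp(Phi, b, sparsity, n_iter=10):
    m, d = Phi.shape
    support, r = [], b.copy()
    for _ in range(n_iter):
        u = np.abs(Phi.T @ r)
        J = np.argsort(u)[::-1][:sparsity]       # largest correlations
        J = J[u[J] > 0]
        if J.size == 0:
            break
        # regularization step: among contiguous groups of comparable
        # magnitude (largest <= 2x smallest), keep the max-energy group
        best, best_e, i = J[:1], -1.0, 0
        while i < J.size:
            j = i
            while j < J.size and 2 * u[J[j]] >= u[J[i]]:
                j += 1
            e = float(np.sum(u[J[i:j]]**2))
            if e > best_e:
                best, best_e = J[i:j], e
            i = j
        support = sorted(set(support) | set(best.tolist()))
        x_s, *_ = np.linalg.lstsq(Phi[:, support], b, rcond=None)
        r = b - Phi[:, support] @ x_s            # update the residual
        if np.linalg.norm(r) < 1e-8:
            break
    x = np.zeros(d)
    if support:
        x_s, *_ = np.linalg.lstsq(Phi[:, support], b, rcond=None)
        x[support] = x_s
    return x
```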

  20. Automatic prostate MR image segmentation with sparse label propagation and domain-specific manifold regularization.

    Science.gov (United States)

    Liao, Shu; Gao, Yaozong; Shi, Yinghuan; Yousuf, Ambereen; Karademir, Ibrahim; Oto, Aytekin; Shen, Dinggang

    2013-01-01

    Automatic prostate segmentation in MR images plays an important role in prostate cancer diagnosis. However, there are two main challenges: (1) large inter-subject prostate shape variations; (2) inhomogeneous prostate appearance. To address these challenges, we propose a new hierarchical prostate MR segmentation method, with the main contributions lying in the following aspects: First, the most salient features are learnt from atlases based on a subclass discriminant analysis (SDA) method, which aims to find a discriminant feature subspace by simultaneously maximizing the inter-class distance and minimizing the intra-class variations. The projected features, instead of only voxel-wise intensity, serve as the anatomical signature of each voxel. Second, based on the projected features, a new multi-atlas sparse label fusion framework is proposed to estimate the prostate likelihood of each voxel in the target image at the coarse level. Third, a domain-specific semi-supervised manifold regularization method is proposed to incorporate the most reliable patient-specific information identified by the prostate likelihood map to refine the segmentation result at the fine level. Our method is evaluated on a T2-weighted prostate MR image dataset consisting of 66 patients and compared with two state-of-the-art segmentation methods. Experimental results show that our method consistently achieves higher segmentation accuracy than the other methods under comparison.

  1. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    Science.gov (United States)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    The image block matching algorithm based on motion vectors of correlative pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique which obtains the relative motion among frames of dynamic image sequences by means of digital image processing. In this method the matching parameters are calculated from the vectors projected in the oblique direction. These matching parameters simultaneously contain the information of the vectors in the transverse and vertical directions within the image blocks, so better matching information can be obtained after performing the correlation operation in the oblique direction. An iterative weighted least squares method is used to eliminate the error of block matching; the weights are related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image are obtained by weighted least squares from the estimates of blocks chosen evenly from the image. The shaking image can then be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by using simulated annealing in the block-matching search. An image processing system based on a DSP was used to test this algorithm. The core processor in the DSP system is a TMS320C6416 from TI, and a CCD camera with a resolution of 720×576 pixels was chosen as the input video signal. Experimental results show that the algorithm can be performed on the real-time processing system with accurate matching precision.
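
    The oblique-projection idea can be illustrated with a small sketch: collapse each block onto its two diagonal directions, so that each 1-D profile mixes horizontal and vertical motion information, then search displacements by comparing profiles. This is a simplified toy version under assumed details (sum-of-absolute-differences cost, exhaustive search), not the paper's full method.

```python
import numpy as np

def oblique_profiles(block):
    # sums along the two diagonal (oblique) directions of the block
    offs = range(-block.shape[0] + 1, block.shape[1])
    d1 = np.array([np.trace(block, offset=k) for k in offs])
    d2 = np.array([np.trace(block[::-1], offset=k) for k in offs])
    return d1, d2

def match_block(ref_block, cur, top, left, radius=8):
    # displacement minimizing the distance between oblique profiles
    h, w = ref_block.shape
    r1, r2 = oblique_profiles(ref_block)
    best, best_cost = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > cur.shape[0] or x + w > cur.shape[1]:
                continue
            c1, c2 = oblique_profiles(cur[y:y + h, x:x + w])
            cost = np.abs(c1 - r1).sum() + np.abs(c2 - r2).sum()
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```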

  2. Recursive regularization step for high-order lattice Boltzmann methods

    Science.gov (United States)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with a Reynolds number ranging from 10⁴ to 10⁶, and where a thorough analysis of the case at Re = 3×10⁴ is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase in the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.

  3. A reference sample for investigating the stability of the imaging system of x-ray computed tomography

    International Nuclear Information System (INIS)

    Sun, Wenjuan; Brown, Stephen; Flay, Nadia; McCarthy, Michael; McBride, John

    2016-01-01

    The use of x-ray computed tomography for dimensional measurements associated with engineering applications has flourished in recent years. However, error sources associated with the technology are not well understood. In this paper, a novel two-sphere reference sample has been developed and used to investigate the stability of the imaging system that consists of an x-ray tube and a detector. In contrast with other research work reported, this work considered relative positional variation along the x-, y- and z-axes. This sample is a significant improvement over the one-sphere sample proposed previously, which can only be used to observe the stability of the imaging system along the x- and y-axes. Temperature variations of different parts of the system have been monitored and the relationship between temperature variations and x-ray image stability has been studied. Other effects that may also influence the stability of the imaging system have been discussed. The proposed reference sample and testing method are transferable to other types of x-ray computed tomography systems, for example, systems with transmission targets and systems with sub-micrometre focal spots. (paper)

  4. Regular classroom assessment as a means of enhancing Teacher ...

    African Journals Online (AJOL)

    This study was an action research which employed regular classroom tests to help students learn and understand some concepts in electricity and magnetism. The participants of the study were 35 Level 200 B.Ed. (Basic Education, JSS Option) pre-service science teachers of the University of Education, Winneba in the ...

  5. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin; Dai, Wei; Schuster, Gerard T.

    2013-01-01

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity

  6. Haralick texture features from apparent diffusion coefficient (ADC) MRI images depend on imaging and pre-processing parameters.

    Science.gov (United States)

    Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders

    2017-06-22

    In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
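
    The reported sensitivity is easy to reproduce in outline. The sketch below, assuming scikit-image >= 0.19 and a synthetic stand-in for an ADC map, quantizes at two different gray-level counts and compares one Haralick feature; the percentile windowing is an assumed quantization choice, not necessarily the study's.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast(adc, n_levels):
    lo, hi = np.percentile(adc, (1, 99))           # assumed windowing
    q = np.clip((adc - lo) / (hi - lo), 0, 1 - 1e-9)
    q = (q * n_levels).astype(np.uint8)            # uniform quantization
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=n_levels, symmetric=True, normed=True)
    return float(graycoprops(glcm, "contrast")[0, 0])

adc = np.random.default_rng(2).normal(1200, 200, (64, 64))  # synthetic map
print(glcm_contrast(adc, 16), glcm_contrast(adc, 64))  # values differ markedly
```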

  7. Limited-angle multi-energy CT using joint clustering prior and sparsity regularization

    Science.gov (United States)

    Zhang, Huayu; Xing, Yuxiang

    2016-03-01

    In this article, we present an easy-to-implement multi-energy CT scanning strategy and a corresponding reconstruction method, which facilitate spectral CT imaging by improving the data efficiency by a factor of the number of energy channels without introducing the visible limited-angle artifacts caused by reducing projection views. Leveraging the structural coherence at different energies, we first pre-reconstruct a prior structure-information image using projection data from all energy channels. Then, we perform k-means clustering on the prior image to generate a sparse dictionary representation for the image, which serves as a structure-information constraint. We combine this constraint with a conventional compressed sensing method and propose a new model which we refer to as Joint Clustering Prior and Sparsity Regularization (CPSR). CPSR is a convex problem and we solve it by the Alternating Direction Method of Multipliers (ADMM). We verify our CPSR reconstruction method with a numerical simulation experiment. A dental phantom with complicated structures of teeth and soft tissues is used. X-ray beams from three spectra of different peak energies (120 kVp, 90 kVp, 60 kVp) irradiate the phantom to form tri-energy projections. Projection data covering only 75° from each energy spectrum are collected for reconstruction. Independent reconstruction for each energy would cause severe limited-angle artifacts even with the help of compressed sensing approaches. Our CPSR provides images free of limited-angle artifacts, with all edge details well preserved in our experimental study.

  8. Hydrologic Process Regularization for Improved Geoelectrical Monitoring of a Lab-Scale Saline Tracer Experiment

    Science.gov (United States)

    Oware, E. K.; Moysey, S. M.

    2016-12-01

    Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. While mass recovery generally deteriorated with increasing number of time
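
    The POD construction at the heart of this framework is a snapshot SVD. A minimal sketch follows, with random arrays standing in for the transport-simulation training images and an assumed 95% energy criterion for choosing the number of basis vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_ti = 1024, 200
TI = rng.random((n_pix, n_ti))        # stand-in training images (columns)

mean_ti = TI.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(TI - mean_ti, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.95)) + 1
basis = U[:, :k]                      # retained POD basis vectors

# The inversion then estimates only the k coefficients c in
# m = mean_ti[:, 0] + basis @ c, conditioned on the ER measurements.
```

    Because only k coefficients are estimated rather than one value per pixel, the inversion is both stabilized and reduced in dimension, which is the physics-based counterpart to a smoothness constraint.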

  9. A wavelet-based regularized reconstruction algorithm for SENSE parallel MRI with applications to neuroimaging

    International Nuclear Information System (INIS)

    Chaari, L.; Pesquet, J.Ch.; Chaari, L.; Ciuciu, Ph.; Benazza-Benyahia, A.

    2011-01-01

    To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques with multiple-coil acquisition have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full-FOV image has to be reconstructed from the resulting acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed, such as the widely used Sensitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor Tikhonov regularization in the image domain gives convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical ℓ1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5 T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors. (authors)

  10. An Lq–Lp optimization framework for image reconstruction of electrical resistance tomography

    International Nuclear Information System (INIS)

    Zhao, Jia; Xu, Yanbin; Dong, Feng

    2014-01-01

    Image reconstruction in electrical resistance tomography (ERT) is an ill-posed and nonlinear problem, which is easily affected by measurement noise. Regularization methods with an L2 or L1 constraint term are often used to solve the inverse problem of ERT. Reconstruction with L2 regularization imposes smoothness to obtain stability in the image reconstruction process, which blurs the interfaces between different conductivities. Regularization with the L1 norm is powerful at dealing with over-smoothing effects, which is beneficial for obtaining a sharp transition in the conductivity distribution. To find the reason for these effects, an Lq–Lp optimization framework (1 ⩽ q ⩽ 2, 1 ⩽ p ⩽ 2) for the image reconstruction of ERT is presented in this paper. The Lq–Lp optimization framework is solved based on an approximation handled with a Gauss–Newton iteration algorithm. The optimization framework is tested for image reconstruction of ERT with different models, and the effects of the Lp regularization term on the quality of the reconstructed images are discussed with both simulation and experiment. By comparing the reconstructed results for different p in the regularization term, it is found that a large penalty is placed on small entries of the solution when p is small, and a lesser penalty when p is larger; larger p also makes the reconstructed images smoother and more easily affected by noise. (paper)
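
    One common way to handle such Lq–Lp objectives numerically is iteratively reweighted least squares (IRLS), which replaces both terms by weighted quadratics at each step. The sketch below is a generic IRLS solver for ||Ax − b||_q^q + λ||x||_p^p, offered only as an illustration of the objective; the paper itself uses a Gauss–Newton scheme.

```python
import numpy as np

def irls_lq_lp(A, b, lam, q=2.0, p=1.0, n_iter=50, eps=1e-6):
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # least-squares init
    for _ in range(n_iter):
        r = A @ x - b
        wr = (np.abs(r) + eps) ** (q - 2)          # data-term weights
        wx = (np.abs(x) + eps) ** (p - 2)          # penalty-term weights
        H = A.T @ (wr[:, None] * A) + lam * np.diag(wx)
        x = np.linalg.solve(H, A.T @ (wr * b))
    return x
```

    Smaller p drives the weights (|x| + eps)^(p-2) up on small entries, which mirrors the "large penalty on small data" behaviour the abstract describes.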

  11. [Pre-analytical stability before centrifugation of 7 biochemical analytes in whole blood].

    Science.gov (United States)

    Perrier-Cornet, Andreas; Moineau, Marie-Pierre; Narbonne, Valérie; Plee-Gautier, Emmanuelle; Le Saos, Fabienne; Carre, Jean-Luc

    2015-01-01

    The pre-analytical stability of 7 biochemical parameters (parathyroid hormone -PTH-, vitamins A, C, E and D, 1,25-dihydroxyvitamin D and insulin) at +4°C was studied on whole blood samples before centrifugation. The impact of freezing at -20°C was also analyzed for PTH and vitamin D. Differences in the assay results for whole blood samples from 9 healthy adults, kept for different lengths of time between sampling and analysis, were compared using a Student t test. The 7 analytes investigated remained stable for up to 4 hours at +4°C in whole blood. This study showed that it is possible to accept uncentrifuged whole blood specimens kept at +4°C before analysis. PTH is affected by freezing whereas vitamin D is not.

  12. Intra-fraction prostate displacement in radiotherapy estimated from pre- and post-treatment imaging of patients with implanted fiducial markers

    International Nuclear Information System (INIS)

    Kron, Tomas; Thomas, Jessica; Fox, Chris; Thompson, Ann; Owen, Rebecca; Herschtal, Alan; Haworth, Annette; Tai, Keen-Hun; Foroudi, Farshad

    2010-01-01

    Purpose: To determine the intra-fraction displacement of the prostate gland from imaging pre- and post-radiotherapy delivery in prostate cancer patients with three implanted fiducial markers. Methods and materials: Data were collected from 184 patients who had two orthogonal X-rays pre- and post-delivery on at least 20 occasions using a Varian On Board kV Imaging system. A total of 5778 image pairs covering time intervals between 3 and 30 min between pre- and post-imaging were evaluated for intra-fraction prostate displacement. Results: The mean three-dimensional vector shift between images was 1.7 mm, ranging from 0 to 25 mm. No preferential direction of displacement was found; however, there was an increase of prostate displacement with time between images. There was a large variation in typical shifts between patients (range 1 ± 1 to 6 ± 2 mm) with no apparent trends throughout the treatment course. Images acquired in the first five fractions of treatment could be used to predict displacement patterns for individual patients. Conclusion: Intra-fraction motion of the prostate gland appears to be a limiting factor when considering margins for radiotherapy. Given the variation between patients, a uniform set of margins for all patients may not be satisfactory when high target doses are to be delivered.

  13. Methods for prostate stabilization during transperineal LDR brachytherapy.

    Science.gov (United States)

    Podder, Tarun; Sherman, Jason; Rubens, Deborah; Messing, Edward; Strang, John; Ng, Wan-Sing; Yu, Yan

    2008-03-21

    In traditional prostate brachytherapy procedures for a low-dose-rate (LDR) radiation seed implant, stabilizing needles are first inserted to provide some rigidity and support to the prostate. Ideally this will provide better seed placement and an overall improved treatment. However, there is much speculation regarding the effectiveness of using regular brachytherapy needles as stabilizers. In this study, we explored the efficacy of two types of needle geometries (regular brachytherapy needle and hooked needle) and several clinically feasible configurations of the stabilization needles. To understand and assess the prostate movement during seed implantation, we collected in vivo data from patients during actual brachytherapy procedures. In vitro experimentation with tissue-equivalent phantoms allowed us to further understand the mechanics behind prostate stabilization. We observed superior stabilization with the hooked needles compared to the regular brachytherapy needles (more than 40% in the bilateral parallel needle configuration). Prostate movement was also reduced significantly when regular brachytherapy needles were in an angulated configuration as compared to the parallel configuration (more than 60%). When the hooked needles were angulated for stabilization, a further reduction in prostate displacement was observed. In general, for convenience of dosimetric planning and to avoid needle collision, all needles are desired to be in a parallel configuration. In this configuration, hooked needles provide improved stabilization of the prostate. On the other hand, both regular and hooked needles appear to be equally effective in reducing prostate movement when they are in angulated configurations, which will be useful in seed implantation using a robotic system. We have developed a nonlinear spring-damper model for the prostate movement which can be used for adapting dosimetric planning during brachytherapy as well as for developing more realistic haptic devices and

  14. Methods for prostate stabilization during transperineal LDR brachytherapy

    International Nuclear Information System (INIS)

    Podder, Tarun; Yu Yan; Sherman, Jason; Rubens, Deborah; Strang, John; Messing, Edward; Ng, Wan-Sing

    2008-01-01

    In traditional prostate brachytherapy procedures for a low-dose-rate (LDR) radiation seed implant, stabilizing needles are first inserted to provide some rigidity and support to the prostate. Ideally this will provide better seed placement and an overall improved treatment. However, there is much speculation regarding the effectiveness of using regular brachytherapy needles as stabilizers. In this study, we explored the efficacy of two types of needle geometries (regular brachytherapy needle and hooked needle) and several clinically feasible configurations of the stabilization needles. To understand and assess the prostate movement during seed implantation, we collected in vivo data from patients during actual brachytherapy procedures. In vitro experimentation with tissue-equivalent phantoms allowed us to further understand the mechanics behind prostate stabilization. We observed superior stabilization with the hooked needles compared to the regular brachytherapy needles (more than 40% in the bilateral parallel needle configuration). Prostate movement was also reduced significantly when regular brachytherapy needles were in an angulated configuration as compared to the parallel configuration (more than 60%). When the hooked needles were angulated for stabilization, a further reduction in prostate displacement was observed. In general, for convenience of dosimetric planning and to avoid needle collision, all needles are desired to be in a parallel configuration. In this configuration, hooked needles provide improved stabilization of the prostate. On the other hand, both regular and hooked needles appear to be equally effective in reducing prostate movement when they are in angulated configurations, which will be useful in seed implantation using a robotic system. We have developed a nonlinear spring-damper model for the prostate movement which can be used for adapting dosimetric planning during brachytherapy as well as for developing more realistic haptic devices and

  15. Fast Superpixel Segmentation Algorithm for PolSAR Images

    Directory of Open Access Journals (Sweden)

    Zhang Yue

    2017-10-01

    Full Text Available As a pre-processing technique, superpixel segmentation algorithms should be of high computational efficiency, accurate boundary adherence and regular shape in homogeneous regions. A fast superpixel segmentation algorithm based on Iterative Edge Refinement (IER) has been shown to be applicable to optical images. However, it is difficult to obtain ideal results when IER is applied directly to PolSAR images due to speckle noise and small or slim regions in PolSAR images. To address these problems, in this study, the unstable pixel set is initialized as all the pixels in the PolSAR image instead of the initial grid edge pixels. In the local relabeling of the unstable pixels, the fast revised Wishart distance is utilized instead of the Euclidean distance in CIELAB color space. Then, a post-processing procedure based on a dissimilarity measure is employed to remove isolated small superpixels as well as to retain strong point targets. Finally, extensive experiments based on a simulated image and a real-world PolSAR image from Airborne Synthetic Aperture Radar (AirSAR) are conducted, showing that the proposed algorithm, compared with three state-of-the-art methods, performs better in terms of several commonly used evaluation criteria, with high computational efficiency, accurate boundary adherence, and homogeneous regularity.

  16. Hypoxia-Targeting Fluorescent Nanobodies for Optical Molecular Imaging of Pre-Invasive Breast Cancer

    NARCIS (Netherlands)

    van Brussel, Aram S A; Adams, Arthur; Oliveira, Sabrina; Dorresteijn, Bram; El Khattabi, Mohamed; Vermeulen, J. F.; van der Wall, Elsken; Mali, Willem P Th M; Derksen, Patrick W B; van Diest, Paul J; van Bergen En Henegouwen, Paul M P

    PURPOSE: The aim of this work was to develop a CAIX-specific nanobody conjugated to IRDye800CW for molecular imaging of pre-invasive breast cancer. PROCEDURES: CAIX-specific nanobodies were selected using a modified phage display technology, conjugated site-specifically to IRDye800CW and evaluated

  17. Hypoxia-Targeting Fluorescent Nanobodies for Optical Molecular Imaging of Pre-Invasive Breast Cancer

    NARCIS (Netherlands)

    van Brussel, Aram S A; Adams, Arthur; Oliveira, Sabrina; Dorresteijn, Bram; El Khattabi, Mohamed; Vermeulen, Jeroen F.; van der Wall, Elsken; Mali, W.P.T.M.; Derksen, Patrick W B; van Diest, Paul J.; van Bergen En Henegouwen, Paul M P

    Purpose: The aim of this work was to develop a CAIX-specific nanobody conjugated to IRDye800CW for molecular imaging of pre-invasive breast cancer. Procedures: CAIX-specific nanobodies were selected using a modified phage display technology, conjugated site-specifically to IRDye800CW and evaluated

  18. Variational analysis of regular mappings theory and applications

    CERN Document Server

    Ioffe, Alexander D

    2017-01-01

    This monograph offers the first systematic account of (metric) regularity theory in variational analysis. It presents new developments alongside classical results and demonstrates the power of the theory through applications to various problems in analysis and optimization theory. The origins of metric regularity theory can be traced back to a series of fundamental ideas and results of nonlinear functional analysis and global analysis centered around problems of existence and stability of solutions of nonlinear equations. In variational analysis, regularity theory goes far beyond the classical setting and is also concerned with non-differentiable and multi-valued operators. The present volume explores all basic aspects of the theory, from the most general problems for mappings between metric spaces to those connected with fairly concrete and important classes of operators acting in Banach and finite dimensional spaces. Written by a leading expert in the field, the book covers new and powerful techniques, whic...

  19. Regularizing properties of complex Monge-Ampère flows

    OpenAIRE

    Tô, Tat Dat

    2016-01-01

    We study the regularizing properties of complex Monge-Ampère flows on a Kähler manifold (X, ω) when the initial data are ω-psh functions with zero Lelong number at all points. We prove that the general Monge-Ampère flow has a solution which is immediately smooth. We also prove the uniqueness and stability of the solution.

  20. Regularized lattice Bhatnagar-Gross-Krook model for two- and three-dimensional cavity flow simulations.

    Science.gov (United States)

    Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S

    2014-05-01

    We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation time, at a moderate computational overhead.

  1. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  2. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation

    International Nuclear Information System (INIS)

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-01-01

    Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, whereby the cross-dependencies of image voxels are reduced, which in turn results in an improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated the practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in the pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that, for the same computation time, the proposed subsetization yields further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical
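
    The interleaving itself is a one-liner. A minimal sketch of how TOF bins might be partitioned into interleaved subsets (the bin counts are illustrative, not the paper's):

```python
import numpy as np

def interleaved_tof_subsets(n_tof_bins, n_subsets):
    # subset s takes bins s, s + n_subsets, s + 2*n_subsets, ...
    return [np.arange(s, n_tof_bins, n_subsets) for s in range(n_subsets)]

# four interleaved subsets of 13 bins: {0,4,8,12}, {1,5,9}, {2,6,10}, {3,7,11}
print(interleaved_tof_subsets(13, 4))
```

    Interleaving spreads each subset across the TOF range, so neighbouring (spatially coupled) bins fall into different subsets, which is what drives the reported convergence gain.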

  3. IMAGE DESCRIPTIONS FOR SKETCH BASED IMAGE RETRIEVAL

    OpenAIRE

    SAAVEDRA RONDO, JOSE MANUEL

    2008-01-01

    Due to the massive use of the Internet together with the proliferation of media devices, content-based image retrieval has become an active discipline in computer science. A common content-based image retrieval approach requires that the user give a regular image (e.g., a photo) as a query. However, having a regular image as the query may be a serious problem. Indeed, people commonly use an image retrieval system precisely because they do not have the desired image. An easy alternative way t...

  4. Prewarping techniques in imaging: applications in nanotechnology and biotechnology

    Science.gov (United States)

    Poonawala, Amyn; Milanfar, Peyman

    2005-03-01

    In all imaging systems, the underlying process introduces undesirable distortions that cause the output signal to be a warped version of the input. When the input to such systems can be controlled, pre-warping techniques can be employed which consist of systematically modifying the input such that it cancels out (or compensates for) the process losses. In this paper, we focus on the mask (reticle) design problem for 'optical micro-lithography', a process similar to photographic printing used for transferring binary circuit patterns onto silicon wafers. We use a pixel-based mask representation and model the above process as a cascade of convolution (aerial image formation) and thresholding (high-contrast recording) operations. The pre-distorted mask is obtained by minimizing the norm of the difference between the 'desired' output image and the 'reproduced' output image. We employ the regularization framework to ensure that the resulting masks are close-to-binary as well as simple and easy to fabricate. Finally, we provide insight into two additional applications of pre-warping techniques. First is 'e-beam lithography', used for fabricating nano-scale structures, and second is 'electronic visual prosthesis' which aims at providing limited vision to the blind by using a prosthetic retinally implanted chip capable of electrically stimulating the retinal neuron cells.
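
    A minimal sketch of the pre-warping idea for the lithography case (Gaussian blur standing in for aerial image formation, a logistic function for the high-contrast thresholding, and all parameter values assumed rather than taken from the paper):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def prewarp_mask(desired, sigma=2.0, k=10.0, thr=0.5, beta=0.1, lr=0.5, iters=200):
          # Gradient descent on ||desired - sigmoid(k*(blur(m) - thr))||^2
          # plus a close-to-binary penalty beta * sum m*(1-m).
          m = desired.astype(float).copy()          # initialize with the target
          for _ in range(iters):
              aerial = gaussian_filter(m, sigma)    # imaging blur
              out = sigmoid(k * (aerial - thr))     # high-contrast recording
              err = out - desired
              # Chain rule; Gaussian blur is self-adjoint, so its adjoint is itself.
              grad = gaussian_filter(err * out * (1 - out) * k, sigma)
              grad += beta * (1 - 2*m)              # d/dm of the m*(1-m) term
              m = np.clip(m - lr * grad, 0.0, 1.0)
          return m

      desired = np.zeros((64, 64)); desired[20:44, 20:44] = 1.0
      mask = prewarp_mask(desired)

    The m*(1-m) term is one simple way to push mask pixels toward {0, 1}; the paper's regularization framework for easy-to-fabricate masks is more elaborate.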

  5. A new approach to pre-processing digital image for wavelet-based watermark

    Science.gov (United States)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategically important to identify and develop stable, low-cost methods and numerical algorithms that address these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that resizes the image as required by the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked images according to the Neyman-Pearson statistical criterion. Experiments on a large set of different images show the watermark to be resistant against geometric, filtering, and StirMark attacks, with a low false-alarm rate.
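
    A bare-bones sketch of additive detail-band embedding in the DWT domain (the Haar wavelet, uniform strength alpha, and the crude resize step are assumptions; the paper computes the watermark from image features and HVS-guided statistics):

      import numpy as np
      import pywt

      def embed_watermark(image, watermark, alpha=0.05):
          cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
          wm = np.resize(watermark, cH.shape)       # crude stand-in for the
                                                    # paper's resize pre-processing
          cH_marked = cH + alpha * np.abs(cH) * wm  # scale by |cH|: stronger marks
                                                    # where the image can hide them
          return pywt.idwt2((cA, (cH_marked, cV, cD)), 'haar')

      img = np.random.rand(64, 64)
      wm = np.sign(np.random.randn(32 * 32))        # +/-1 watermark sequence
      marked = embed_watermark(img, wm)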

  6. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Huang, Lianjie [Los Alamos National Laboratory; Zhang, Zhigang [Los Alamos National Laboratory

    2011-01-01

    Double-difference waveform inversion is a potential tool for quantitative monitoring of geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Due to the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. Regularization techniques can be utilized to address the issue of ill-posedness. The regularization parameter controls the smoothness of inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal regularization parameter has to be selected. The resulting inversion results are a trade-off among regions with different smoothness or noise levels; the images are therefore over-regularized in some regions and under-regularized in others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve the inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, utilizing a spatially-variant regularization scheme, the target regions are well reconstructed while the noise is reduced in the other regions. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on a priori information without increasing the computational cost or the computer memory requirement.
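
    In matrix form, the idea reduces to giving each model parameter its own regularization weight. A small sketch under toy assumptions (dense random matrices; in waveform inversion, A would be the linearized forward operator and lam a map derived from a priori information):

      import numpy as np

      def spatially_variant_tikhonov(A, b, lam):
          # Solve min ||A x - b||^2 + ||diag(lam) x||^2 with a per-parameter
          # (spatially varying) regularization weight lam >= 0.
          L = np.diag(lam)
          return np.linalg.solve(A.T @ A + L.T @ L, A.T @ b)

      rng = np.random.default_rng(1)
      A = rng.standard_normal((80, 60))
      x_true = np.zeros(60); x_true[20:30] = 1.0      # "target region"
      b = A @ x_true + 0.05 * rng.standard_normal(80)
      lam = np.full(60, 5.0); lam[20:30] = 0.5        # regularize the target lightly
      x_hat = spatially_variant_tikhonov(A, b, lam)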

  7. PET Imaging Stability Measurements During Simultaneous Pulsing of Aggressive MR Sequences on the SIGNA PET/MR System.

    Science.gov (United States)

    Deller, Timothy W; Khalighi, Mohammad Mehdi; Jansen, Floris P; Glover, Gary H

    2018-01-01

    The recent introduction of simultaneous whole-body PET/MR scanners has enabled new research taking advantage of the complementary information obtainable with PET and MRI. One such application is kinetic modeling, which requires high levels of PET quantitative stability. To accomplish the required PET stability levels, the PET subsystem must be sufficiently isolated from the effects of MR activity. Performance measurements have previously been published, demonstrating sufficient PET stability in the presence of MR pulsing for typical clinical use; however, PET stability during radiofrequency (RF)-intensive and gradient-intensive sequences has not previously been evaluated for a clinical whole-body scanner. In this work, PET stability of the GE SIGNA PET/MR was examined during simultaneous scanning of aggressive MR pulse sequences. Methods: PET performance tests were acquired with MR idle and during simultaneous MR pulsing. Recent system improvements mitigating RF interference and gain variation were used. A fast recovery fast spin echo MR sequence was selected for high RF power, and an echo planar imaging sequence was selected for its high heat-inducing gradients. Measurements were performed to determine PET stability under varying MR conditions using the following metrics: sensitivity, scatter fraction, contrast recovery, uniformity, count rate performance, and image quantitation. A final PET quantitative stability assessment for simultaneous PET scanning during functional MRI studies was performed with a spiral in-and-out gradient echo sequence. Results: Quantitation stability of a 68Ge flood phantom was demonstrated within 0.34%. Normalized sensitivity was stable during simultaneous scanning within 0.3%. Scatter fraction measured with a 68Ge line source in the scatter phantom was stable within the range of 40.4%-40.6%. Contrast recovery and uniformity were comparable for PET images acquired simultaneously with multiple MR conditions. Peak noise equivalent count

  8. Use of spectral pre-processing methods to compensate for the presence of packaging film in visible–near infrared hyperspectral images of food products

    Directory of Open Access Journals (Sweden)

    A.A. Gowen

    2010-10-01

    The presence of polymeric packaging film in images of food products may modify spectra obtained in hyperspectral imaging (HSI) experiments, leading to undesirable image artefacts which may impede image classification. Some pre-processing of the image is typically required to reduce the presence of such artefacts. The objective of this research was to investigate the use of spectral pre-processing techniques to compensate for the presence of packaging film in hyperspectral images obtained in the visible–near infrared wavelength range (445–945 nm), with application in food quality assessment. A selection of commonly used pre-processing methods, used individually and in combination, were applied to hyperspectral images of flat homogeneous samples, imaged in the presence and absence of different packaging films (polyvinyl chloride and polyethylene terephthalate). Effects of the selected pre-treatments on variation due to the film’s presence were examined in principal components score space. The results show that the combination of first derivative Savitzky–Golay followed by standard normal variate transformation was useful in reducing variations in spectral response caused by the presence of packaging film. Compared to other methods examined, this combination has the benefits of being computationally fast and not requiring a priori knowledge about the sample or film used.
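
    The winning combination is easy to reproduce; a short sketch with SciPy (the window length and polynomial order are typical choices, not the values used in the paper):

      import numpy as np
      from scipy.signal import savgol_filter

      def preprocess(spectra, window=11, polyorder=2):
          # First-derivative Savitzky-Golay followed by Standard Normal Variate,
          # applied per spectrum (rows = pixels, columns = wavelengths).
          d1 = savgol_filter(spectra, window_length=window, polyorder=polyorder,
                             deriv=1, axis=1)
          return (d1 - d1.mean(axis=1, keepdims=True)) / d1.std(axis=1, keepdims=True)

      spectra = np.random.rand(100, 101)   # 100 pixel spectra, 101 bands (445-945 nm)
      pp = preprocess(spectra)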

  9. An investigation of the general regularity of size dependence of reaction kinetics of nanoparticles

    International Nuclear Information System (INIS)

    Cui, Zixiang; Duan, Huijuan; Xue, Yongqiang; Li, Ping

    2015-01-01

    In the processes of preparation and application of nanomaterials, chemical reactions of nanoparticles are often involved, and the size of the nanoparticles has a dramatic influence on the reaction kinetics. Nevertheless, there are many conflicting reports on the regularities of the size dependence of reaction kinetic parameters, and these conflicts have not been explained so far. In this paper, taking the reaction of nano-ZnO (average diameter from 20.96 to 53.31 nm) with acrylic acid solution as a model system, the influence of particle size on the kinetic parameters was investigated. The observed regularities were consistent with those in most of the literature but inconsistent with a few reports; the reasons for the conflicts were interpreted. The reasons can be attributed to two factors: one is improper data processing with too few data points, and the other is the difference between solid particles and porous particles. A general regularity of the size dependence of reaction kinetics for solid particles was obtained. The regularity shows that as the size of the nanoparticles decreases, the rate constant and the reaction order increase, while the apparent activation energy and the pre-exponential factor decrease; and the logarithm of the rate constant, the logarithm of the pre-exponential factor, and the apparent activation energy each depend linearly on the reciprocal of the particle size.
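
    Written out, the reported regularity amounts to linear laws in the reciprocal diameter d (the intercepts and slopes below are generic symbols, not fitted values):

      \ln k = a_k + \frac{b_k}{d}, \qquad
      \ln A = a_A - \frac{b_A}{d}, \qquad
      E_a = a_E - \frac{b_E}{d}, \qquad b_k,\ b_A,\ b_E > 0.

    In an Arrhenius picture, \ln k = \ln A - E_a/(RT), so the three slopes are tied together by b_k = b_E/(RT) - b_A, and the intercepts recover the bulk values as d \to \infty.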

  10. Stability Analysis of Buffer Storage Large Basket and Temporary Storage Pre-packaging Basket Used in the Type B Radwaste Process Area

    International Nuclear Information System (INIS)

    Kim, Sung Kyun; Lee, Kune Woo; Moon, Jei Kwon

    2011-01-01

    The ITER radioactive waste (radwaste) treatment and storage systems are currently being designed to manage Type B, Type A and dust radwastes generated during ITER machine operation. The Type B management system is to be located in the hot cell building basement, with temporary storage there and modular storages outside the hot cell building for the pre-packed Type B radwaste during the 20 years of ITER operation. In order to store Type B radwaste components in onsite storage, the following waste treatment chain was developed. First, full Type B components in a large basket are imported from the Tokamak to the hot cell basement and stored in the buffer storage before treatment. Second, they are cut with a laser cutting machine or band saw machine, and the cut pieces are placed in a pre-packaging basket. Third, sampling of the Type B components is performed, tritium is removed from the waste surface in an oven, and the sampling is repeated. Fourth, characterization is performed using gamma spectrometry. Fifth, the pre-packaging operation is done to ensure the final packaging of the radwaste. Sixth, the pre-packaging baskets are stored in the temporary storage for 6 months, then sent to the extension storage and stored until export to the host country. One of the issues in this waste treatment scheme is the stacking stability of stacks of large baskets and pre-packaging baskets in the storage system. The baseline plan is to stack the large baskets in two layers in the buffer storage and the pre-packaging baskets in three layers in the temporary and extension storages. In this study, the stacking stability analysis for the buffer storage large basket and the temporary storage pre-packaging basket was performed for various stack failure modes.

  11. Shakeout: A New Approach to Regularized Deep Neural Network Training.

    Science.gov (United States)

    Kang, Guoliang; Li, Jun; Tao, Dacheng

    2018-05-01

    Recent years have witnessed the success of deep neural networks in dealing with plenty of practical problems. Dropout has played an essential role in many successful deep neural networks by inducing regularization in model training. In this paper, we present a new regularized training approach: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, Shakeout randomly chooses to enhance or reverse each unit's contribution to the next layer. This minor modification of Dropout has a statistical trait: the regularizer induced by Shakeout adaptively combines L0, L1 and L2 regularization terms. Our classification experiments with representative deep architectures on the image datasets MNIST, CIFAR-10 and ImageNet show that Shakeout deals with over-fitting effectively and outperforms Dropout. We empirically demonstrate that Shakeout leads to sparser weights under both unsupervised and supervised settings. Shakeout also leads to a grouping effect of the input units in a layer. Considering that the weights reflect the importance of connections, Shakeout is superior to Dropout, which is valuable for deep model compression. Moreover, we demonstrate that Shakeout can effectively reduce the instability of the training process of deep architectures.
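
    A sketch of one random draw of Shakeout-modified weights (the enhance/reverse parameterization below is a reconstruction chosen so that the modified weights are unbiased and reduce to inverted Dropout at c = 0; the paper's exact constants should be checked against the original):

      import numpy as np

      def shakeout_weights(W, tau=0.5, c=0.1, rng=np.random.default_rng()):
          # W: (n_out, n_in); one Bernoulli draw per input unit.
          s = np.sign(W)
          keep = rng.random(W.shape[1]) < (1.0 - tau)
          enhanced = (W + c * tau * s) / (1.0 - tau)   # kept: enhance contribution
          reversed_ = -c * s                           # "dropped": reverse it
          return np.where(keep[None, :], enhanced, reversed_)

      W = np.random.randn(4, 6)
      W_train = shakeout_weights(W)

    With these choices, E[W_train] = (1 - tau) * enhanced + tau * reversed = W, so inference can use the unmodified weights, exactly as with inverted Dropout.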

  12. Total variation regularization for a backward time-fractional diffusion problem

    International Nuclear Information System (INIS)

    Wang, Liyan; Liu, Jijun

    2013-01-01

    Consider a two-dimensional backward problem for a time-fractional diffusion process, which can be considered as image de-blurring where the blurring process is assumed to be slow diffusion. In order to avoid the over-smoothing effect for object images with edges and to construct a fast reconstruction scheme, the total variation regularizing term and the data residual error in the frequency domain are coupled to construct the cost functional. The well-posedness of this optimization problem is studied. The minimizer is sought approximately using an iteration process for a series of optimization problems with the Bregman distance as a penalty term. This iterative reconstruction scheme is essentially a new regularizing scheme with the coupling parameter in the cost functional and the iteration stopping time as two regularizing parameters. We give the choice strategy for the regularizing parameters in terms of the noise level of the measurement data, which yields the optimal error estimate on the iterative solution. The series of optimization problems is solved by alternating iteration with an explicit exact solution, and therefore the amount of computation is much reduced. Numerical implementations are given to support our theoretical analysis on the convergence rate and to show the significant reconstruction improvements. (paper)

  13. Stability of cascade search

    Energy Technology Data Exchange (ETDEWEB)

    Fomenko, Tatiana N [M. V. Lomonosov Moscow State University, Faculty of Computational Mathematics and Cybernetics, Moscow (Russian Federation)

    2010-10-22

    We find sufficient conditions on a searching multi-cascade for a modification of the set of limit points of the cascade that satisfy an assessing inequality for the distance from each of these points to the initial point to be small, provided that the modifications of the initial point and the initial set-valued functionals or maps used to construct the multi-cascade are small. Using this result, we prove the stability (in the above sense) of the cascade search for the set of common pre-images of a closed subspace under the action of n set-valued maps, n ≥ 1 (in particular, for the set of common roots of these maps and for the set of their coincidences). For n = 2 we obtain generalizations of some results of A. V. Arutyunov; the very statement of the problem comes from a recent paper of his devoted to the study of the stability of the subset of coincidences of a Lipschitz map and a covering map.

  14. Improving image quality in Electrical Impedance Tomography (EIT) using Projection Error Propagation-based Regularization (PEPR) technique: A simulation study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-03-01

    A Projection Error Propagation-based Regularization (PEPR) method is proposed to improve reconstructed image quality in Electrical Impedance Tomography (EIT). A projection error is produced by the misfit between the calculated and measured data in the reconstruction process. The variation of the projection error is integrated with the response matrix in each iteration, and the reconstruction is carried out in EIDORS. The PEPR method is studied with simulated boundary data for different inhomogeneity geometries. Simulation results demonstrate that the PEPR technique improves image reconstruction precision in EIDORS and hence can be successfully implemented to increase reconstruction accuracy in EIT. doi:10.5617/jeb.158. J Electr Bioimp, vol. 2, pp. 2-12, 2011.

  15. Stability Measurements for Alignment of the NIF Neutron Imaging System Pinhole Array

    International Nuclear Information System (INIS)

    Fittinghoff, D.N.; Bower, D.E.; Drury, O.B.; Dzenitis, J.M.; Frank, M.; Buckles, R.A.; Munson, C.; Wilde, C.H.

    2011-01-01

    The alignment system for the National Ignition Facility's neutron imaging system has been commissioned, and measurements of the relative stability of the 90-315 DIM, the front and the back of the neutron imaging pinhole array and an exploding pusher target have been made using the 90-135 and the 90-258 opposite port alignment systems. Additionally, a laser beam shot from the neutron-imaging Annex and reflected from a mirror at the back of the pinhole array was used to monitor the pointing of the pinhole. Over a twelve hour period, the relative stability of these parts was found to be within ∼±18 µm rms even when using manual methods for tracking the position of the objects. For highly visible features, use of basic particle tracking techniques found that the front of the pinhole array was stable relative to the 90-135 opposite port alignment camera to within ±3.4 µm rms. Reregistration, however, of the opposite port alignment systems themselves using the target alignment sensor was found to change the expected position of target chamber center by up to 194 µm.

  16. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    International Nuclear Information System (INIS)

    Sidky, Emil Y.; Pan Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-01-01

    Purpose: The authors develop a practical, iterative algorithm for image reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p-variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.
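
    The sketch below shows the two ingredients on a 1D toy problem (a 2D/3D implementation applies the same steps to the gradient-magnitude image; the matrices, step sizes and smoothing constant are assumptions, and the data step moves toward rather than exactly onto the tolerance set):

      import numpy as np

      def tpv_pocs(A, b, p=1.0, eps=1.0, iters=100, grad_steps=10, alpha=1e-3):
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              # ART-like step toward the data-tolerance set ||Ax - b|| <= eps
              r = b - A @ x
              if np.linalg.norm(r) > eps:
                  x = x + A.T @ r / np.linalg.norm(A, 2)**2
              x = np.maximum(x, 0.0)                   # positivity constraint
              # Descend TpV(x) = sum_i |x_{i+1} - x_i|^p (smoothed near zero)
              for _ in range(grad_steps):
                  d = np.diff(x)
                  g = p * d * (d**2 + 1e-8)**((p - 2) / 2)
                  x = x - alpha * np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
          return x

      rng = np.random.default_rng(6)
      A = rng.standard_normal((40, 100))               # undersampled: 40 < 100
      x_true = np.zeros(100); x_true[30:50] = 1.0
      b = A @ x_true
      x_hat = tpv_pocs(A, b)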

  17. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    Science.gov (United States)

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d') throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum d' by at least 17.8%, but yielded higher d' over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed d'. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.

  18. Efficiency of Dinucleosides as the Backbone to Pre-Organize Multi-Porphyrins and Enhance Their Stability as Sandwich Type Complexes with DABCO

    Directory of Open Access Journals (Sweden)

    Sonja Merkaš

    2017-07-01

    Flexible linkers such as uridine or 2′-deoxyuridine pre-organize bis-porphyrins in a face-to-face conformation, thus forming stable sandwich complexes with a bidentate base such as 1,4-diazabicyclo[2.2.2]octane (DABCO). The stability increase is even greater when a dinucleotide linker is used. Such pre-organization increases the association constant by one to two orders of magnitude compared with the association constant of DABCO with a reference porphyrin. Comparison with rigid tweezers shows a better efficiency of the nucleosidic dimers. Thus, the choice of rigid spacers is not the only way to pre-organize bis-porphyrins, and well-chosen nucleosidic linkers offer an interesting option for the synthesis of such devices.

  19. Stream Processing Using Grammars and Regular Expressions

    DEFF Research Database (Denmark)

    Rasmussen, Ulrik Terp

    disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present Kleenex, a language for expressing high-performance streaming string processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...

  20. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed.

  1. Stability of large-area molecular junctions

    NARCIS (Netherlands)

    Akkerman, Hylke B.; Kronemeijer, Auke J.; Harkema, Jan; van Hal, Paul A.; Smits, Edsger C. P.; de Leeuw, Dago M.; Blom, Paul W. M.

    The stability of molecular junctions is crucial for any application of molecular electronics. Degradation of molecular junctions when exposed to ambient conditions is regularly observed. In this report, the stability of large-area molecular junctions under ambient conditions for more than two years is demonstrated.

  2. MEMS-based thermally-actuated image stabilizer for cellular phone camera

    International Nuclear Information System (INIS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-01-01

    This work develops an image stabilizer (IS) that is fabricated using micro-electro-mechanical system (MEMS) technology and is designed to counteract hand-shake vibrations when using cellular phone cameras. The proposed IS has dimensions of 8.8 × 8.8 × 0.3 mm³ and is strong enough to suspend an image sensor. The process used to fabricate the IS includes inductively coupled plasma (ICP) etching, reactive ion etching (RIE) and flip-chip bonding. The IS is designed so that the electrical signals from the suspended image sensor are routed out through signal output beams, and the maximum actuating distance of the stage exceeds 24.835 µm when the driving current is 155 mA. With the MEMS device integrated with the designed controller, the proposed IS can decrease hand tremor by 72.5%. (paper)

  3. Regularization Techniques for ECG Imaging during Atrial Fibrillation: a Computational Study

    Directory of Open Access Journals (Sweden)

    Carlos Figuera

    2016-10-01

    The inverse problem of electrocardiography is usually analyzed during stationary rhythms. However, the performance of regularization methods under fibrillatory conditions has not been fully studied. In this work, we assessed different regularization techniques during atrial fibrillation (AF) for estimating four target parameters, namely, epicardial potentials, dominant frequency (DF), phase maps, and singularity point (SP) location. We used a realistic mathematical model of atria and torso anatomy with three different electrical activity patterns (i.e., sinus rhythm, simple AF and complex AF). Body surface potentials (BSP) were simulated using the Boundary Element Method and corrupted with white Gaussian noise of different powers. Noisy BSPs were used to obtain the epicardial potentials on the atrial surface, using fourteen different regularization techniques. DF, phase maps and SP location were computed from the estimated epicardial potentials. Inverse solutions were evaluated using a set of performance metrics adapted to each clinical target. For the case of SP location, an assessment methodology based on the spatial mass function of the SP location and four spatial error metrics was proposed. The role of the regularization parameter for Tikhonov-based methods, and the effect of noise level and imperfections in the knowledge of the transfer matrix, were also addressed. Results showed that the Bayes maximum-a-posteriori method clearly outperforms the rest of the techniques but requires a priori information about the epicardial potentials. Among the purely non-invasive techniques, Tikhonov-based methods performed as well as more complex techniques in realistic fibrillatory conditions, with a slight gain between 0.02 and 0.2 in terms of the correlation coefficient. Also, the use of a constant regularization parameter may be advisable since the performance was similar to that obtained with a variable parameter (indeed there was no difference for the zero
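
    For reference, the Tikhonov estimate that serves as the baseline in such comparisons has a closed form; a toy sketch (a random stand-in for the torso transfer matrix, with the regularization parameter fixed by hand rather than by an automatic criterion):

      import numpy as np

      def tikhonov(A, b, lam, L=None):
          # Zero-order Tikhonov when L is None (penalize ||x||); pass a
          # discrete gradient/Laplacian as L for higher-order variants.
          n = A.shape[1]
          L = np.eye(n) if L is None else L
          return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

      rng = np.random.default_rng(2)
      A = rng.standard_normal((120, 60))      # BSP = A @ (epicardial potentials)
      x_epi = np.sin(np.linspace(0, 4*np.pi, 60))
      bsp = A @ x_epi + 0.1 * rng.standard_normal(120)
      x_hat = tikhonov(A, bsp, lam=0.5)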

  4. Using 3 Tesla magnetic resonance imaging in the pre-operative evaluation of tongue carcinoma.

    Science.gov (United States)

    Moreno, K F; Cornelius, R S; Lucas, F V; Meinzen-Derr, J; Patil, Y J

    2017-09-01

    This study aimed to evaluate the role of 3 Tesla magnetic resonance imaging in predicting tongue tumour thickness via direct and reconstructed measures, and their correlations with corresponding histological measures, nodal metastasis and extracapsular spread. A prospective study was conducted of 25 patients with histologically proven squamous cell carcinoma of the tongue and pre-operative 3 Tesla magnetic resonance imaging from 2009 to 2012. Correlations between 3 Tesla magnetic resonance imaging and histological measures of tongue tumour thickness were assessed using the Pearson correlation coefficient; r values were 0.84 (p < 0.001). 3 Tesla magnetic resonance imaging had 83 per cent sensitivity, 82 per cent specificity, 82 per cent accuracy and a 90 per cent negative predictive value for detecting cervical lymph node metastasis. In this cohort, 3 Tesla magnetic resonance imaging measures of tumour thickness correlated highly with the corresponding histological measures. Further, 3 Tesla magnetic resonance imaging was an effective method of detecting malignant adenopathy with extracapsular spread.

  5. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    Science.gov (United States)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors in the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstructed images that meet quality requirements becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm shows better performance in artifact suppression and detail preservation than algorithms using a spatial-domain regularization model alone. Quantitative evaluations of the results also indicate that the proposed algorithm, with its learning strategy, performs better than dual-domain algorithms without a learned regularization model.

  6. Dynamics of Stability of Orientation Maps Recorded with Optical Imaging.

    Science.gov (United States)

    Shumikhina, S I; Bondar, I V; Svinov, M M

    2018-03-15

    Orientation selectivity is an important feature of visual cortical neurons. Optical imaging of the visual cortex allows for the generation of maps of orientation selectivity that reflect the activity of large populations of neurons. To estimate the statistical significance of effects of experimental manipulations, evaluation of the stability of cortical maps over time is required. Here, we performed optical imaging recordings of the visual cortex of anesthetized adult cats. Monocular stimulation with moving clockwise square-wave gratings that continuously changed orientation and direction was used as the mapping stimulus. Recordings were repeated at various time intervals, from 15 min to 16 h. Quantification of map stability was performed on a pixel-by-pixel basis using several techniques. Map reproducibility showed clear dynamics over time. The highest degree of stability was seen in maps recorded 15-45 min apart. Averaging across all time intervals and all stimulus orientations revealed a mean shift of 2.2 ± 0.1°. There was a significant tendency for larger shifts to occur at longer time intervals. Shifts between 2.8° (mean ± 2SD) and 5° were observed more frequently at oblique orientations, while shifts greater than 5° appeared more frequently at cardinal orientations. Shifts greater than 5° occurred rarely overall (5.4% of cases) and never exceeded 11°. Shifts of 10-10.6° (0.7%) were seen occasionally at time intervals of more than 4 h. Our findings should be considered when evaluating the potential effect of experimental manipulations on orientation selectivity mapping studies.

  7. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring

    Directory of Open Access Journals (Sweden)

    Naixue Xiong

    2017-01-01

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel can be considered not only spatially sparse, but also piecewise smooth, with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once an accurate estimate of the blur kernel is obtained, the original blind deblurring simplifies to direct deconvolution of the blurred image. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on an L1-norm data-fidelity term and a second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled using alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons illustrate the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.

  8. Real-time feedback for spatiotemporal field stabilization in MR systems.

    Science.gov (United States)

    Duerst, Yolanda; Wilm, Bertram J; Dietrich, Benjamin E; Vannesjo, S Johanna; Barmet, Christoph; Schmid, Thomas; Brunner, David O; Pruessmann, Klaas P

    2015-02-01

    MR imaging and spectroscopy require a highly stable, uniform background field. The field stability is typically limited by hardware imperfections, external perturbations, or field fluctuations of physiological origin. The purpose of the present work is to address these issues by introducing spatiotemporal field stabilization based on real-time sensing and feedback control. An array of NMR field probes is used to sense the field evolution in a whole-body MR system concurrently with regular system operation. The field observations serve as inputs to a proportional-integral controller that governs correction currents in gradient and higher-order shim coils such as to keep the field stable in a volume of interest. The feedback system was successfully set up, currently reaching a minimum latency of 20 ms. Its utility is first demonstrated by countering thermal field drift during an EPI protocol. It is then used to address respiratory field fluctuations in a T2*-weighted brain exam, resulting in substantially improved image quality. Feedback field control is an effective means of eliminating dynamic field distortions in MR systems. Third-order spatial control at an update time of 100 ms has proven sufficient to largely eliminate thermal and breathing effects in brain imaging at 7 Tesla.
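
    The control law itself is a textbook discrete proportional-integral loop; a toy sketch (the gains, update interval and drift model are invented, and the real system actuates many spatial shim channels rather than one scalar):

      import numpy as np

      kp, ki, dt = 0.5, 2.0, 0.02            # gains and 20 ms update are assumptions
      integral, current = 0.0, 0.0
      log = []
      for step in range(500):
          drift = 1e-3 * np.sin(2*np.pi*0.3*step*dt)   # breathing-like perturbation
          field = drift + current                      # net field = drift + correction
          error = -field                               # deviation from the target (0)
          integral += error * dt
          current = kp * error + ki * integral         # PI law for the shim current
          log.append(field)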

  9. Technical Note: Regularization performances with the error consistency method in the case of retrieved atmospheric profiles

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2007-01-01

    The retrieval of concentration vertical profiles of atmospheric constituents from spectroscopic measurements is often an ill-conditioned problem, and regularization methods are frequently used to improve its stability. Recently a new method, which provides a good compromise between precision and vertical resolution, was proposed to determine the value of the regularization parameter analytically. This method is applied for the first time to real measurements, through its implementation in the operational retrieval code for the satellite limb-emission measurements of the MIPAS instrument, and its performance is quantitatively analyzed. The adopted regularization improves the stability of the retrieval, providing smooth profiles without major degradation of the vertical resolution. In the analyzed measurements the retrieval procedure provides a vertical resolution that, in the troposphere and low stratosphere, is smaller than the vertical field of view of the instrument.

  10. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
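
    A sketch of the discrepancy-principle strategy on a toy sparse-damage problem (ISTA as the l1 solver; the sizes, noise level and lambda grid are assumptions):

      import numpy as np

      def ista(A, b, lam, iters=500):
          # Plain ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 (soft thresholding).
          step = 1.0 / np.linalg.norm(A, 2)**2
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              z = x - step * A.T @ (A @ x - b)
              x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
          return x

      rng = np.random.default_rng(3)
      A = rng.standard_normal((100, 200))
      x_true = np.zeros(200); x_true[[10, 50, 120]] = [1.0, -0.8, 0.5]  # sparse damage
      sigma = 0.05
      b = A @ x_true + sigma * rng.standard_normal(100)

      # Discrepancy principle: decrease lambda until the residual variance
      # matches the (known or estimated) measurement-noise variance.
      for lam in np.logspace(1, -4, 30):
          x = ista(A, b, lam)
          if np.var(A @ x - b) <= sigma**2:
              break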

  11. A Superresolution Image Reconstruction Algorithm Based on Landweber in Electrical Capacitance Tomography

    Directory of Open Access Journals (Sweden)

    Chen Deyun

    2013-01-01

    To address the limits on image reconstruction accuracy imposed by the "soft field" nature and ill-conditioned problems of electrical capacitance tomography, a superresolution image reconstruction algorithm based on Landweber is proposed in this paper, building on the working principle of the electrical capacitance tomography system. The method regularizes the solution and derives a closed-form solution via the fast Fourier transform of the convolution kernel. This ensures the well-definedness of the solution and improves the stability and quality of the image reconstruction results. Simulation results show that the imaging precision and real-time performance of the algorithm are better than those of the Landweber algorithm, providing a new method for electrical capacitance tomography image reconstruction.
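
    For orientation, the core Landweber update that the paper builds on is a gradient step on the data fidelity; a minimal sketch (in ECT, A would be the linearized sensitivity matrix and b the normalized capacitance vector):

      import numpy as np

      def landweber(A, b, iters=200, omega=None):
          # x <- x + omega * A^T (b - A x); converges for 0 < omega < 2/sigma_max^2.
          if omega is None:
              omega = 1.0 / np.linalg.norm(A, 2)**2
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              x = x + omega * A.T @ (b - A @ x)
          return x

      rng = np.random.default_rng(7)
      A = rng.standard_normal((30, 100))
      b = A @ rng.random(100)
      x_hat = landweber(A, b)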

  12. A magnetic nanoparticle stabilized gas containing emulsion for multimodal imaging and triggered drug release.

    Science.gov (United States)

    Guo, Wei; Li, Diancheng; Zhu, Jia-an; Wei, Xiaohui; Men, Weiwei; Yin, Dazhi; Fan, Mingxia; Xu, Yuhong

    2014-06-01

    To develop a multimodal imaging guided and triggered drug delivery system based on a novel emulsion formulation composed of iron oxide nanoparticles, nanoscopic bubbles, and oil containing drugs. Iron oxide paramagnetic nanoparticles were synthesized and modified with surface conjugation of polyethylenimine (PEI) or Bovine Serum Albumin (BSA). Both particles were used to disperse and stabilize oil-in-water emulsions containing coumarin-6 as the model drug. Sulfur hexafluoride was introduced into the oil phase to form nanoscopic bubbles inside the emulsions. The resulting gas-containing emulsions were evaluated for their magnetic resonance (MR) and ultrasound (US) imaging properties. The drug release profile triggered by ultrasound was also examined. We successfully prepared the highly integrated multi-component emulsion system using the surface-modified iron oxide nanoparticles to stabilize the interfaces. The resulting structure had distinctive MR and US imaging properties. Upon application of ultrasound waves, the gas-containing emulsion bursts and the encapsulated drug is released. The integrated emulsion formulation was multifunctional, with paramagnetic, sono-responsive and drug-carrying characteristics, which may have potential applications for disease diagnosis and imaging-guided drug release.

  13. Problems in the optimum display of SPECT images

    International Nuclear Information System (INIS)

    Fielding, S.L.

    1988-01-01

    The instrumentation, computer hardware and software, and the image display system are all very important in the production of diagnostically useful SPECT images. Acquisition and processing parameters are discussed which can affect the quality of SPECT images. Regular quality control of the gamma camera and computer is important to keep the artifacts due to instrumentation to a minimum. The choice of reconstruction method will depend on the statistics in the study. The paper has shown that for high count rate studies, a high pass filter can be used to enhance the reconstructions. For lower count rate studies, pre-filtering is useful and the data can be reconstructed into thicker slices to reduce the effect of image noise. Finally, the optimum display for the images must be chosen, so that the information contained in the SPECT data can be easily perceived by the clinician. (orig.) [de

  14. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc [German Cancer Research Center, Heidelberg (Germany).

    2017-10-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between the optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR using a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high-quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity. A simple set of parameters for the algorithm is discussed that provides

  15. Optimization of the alpha image reconstruction. An iterative CT-image reconstruction with well-defined image quality metrics

    International Nuclear Information System (INIS)

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelriess, Marc

    2017-01-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between the optimization of raw-data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR using a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high-quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their non-linearity. A simple set of parameters for the algorithm is discussed that provides
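
    The voxel-wise blending at the heart of AIR can be sketched in a few lines (the alpha map below comes from a crude local-variance heuristic purely for illustration; AIR itself optimizes alpha against raw-data fidelity with regularization):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      # Two basis images with complementary properties for the same object:
      rng = np.random.default_rng(4)
      truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
      noisy = truth + 0.3 * rng.standard_normal(truth.shape)  # sharp but noisy basis
      smooth = gaussian_filter(noisy, 2.0)                    # low-noise but blurred

      # AIR-style result: voxel-wise convex blend x = alpha*b1 + (1-alpha)*b2,
      # favoring the sharp basis where local structure (variance) is high.
      local_var = gaussian_filter(noisy**2, 3.0) - gaussian_filter(noisy, 3.0)**2
      alpha = np.clip(local_var / local_var.max(), 0.0, 1.0)
      x = alpha * noisy + (1.0 - alpha) * smooth

    Because the result is linear in the basis images once alpha is fixed, voxel-wise PSF/MTF-type metrics remain well defined, which is the property the abstract emphasizes.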

  16. Contribution to restoration of degraded images by a space-variant system: use of an a priori model of the image

    International Nuclear Information System (INIS)

    Barakat, Valerie

    1998-01-01

    Imaging systems often present shift-variant point spread functions which are usually approximated by shift-invariant ones in order to simplify the restoration problem. The aim of this thesis is to show that taking this shift-variant degradation into account can strongly increase restoration quality. The imaging system is a pinhole, used to acquire images of high-energy beams. Three restoration methods have been studied and compared: Tikhonov-Miller regularization, Markov fields and Maximum Entropy methods. These methods are based on the incorporation of a priori knowledge into the restoration process to achieve stability of the solution. An improved restoration method is proposed: this approach is based on Tikhonov-Miller regularization combined with an a priori model of the solution. The idea of such a model is to express the local characteristics to be reconstructed. Parametric models described by a set of parameters (shape of the object, amplitude values, ...) are used, and a parametric optimization finds the estimate of the parameters that best matches the a priori information about the expected solution. Several criteria have been proposed to measure restoration quality. (author) [fr]

  17. Contour Propagation With Riemannian Elasticity Regularization

    DEFF Research Database (Denmark)

    Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.

    2011-01-01

    Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of the fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image ... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined ... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...

  18. A complex of Cas proteins 5, 6, and 7 is required for the biogenesis and stability of clustered regularly interspaced short palindromic repeats (crispr)-derived rnas (crrnas) in Haloferax volcanii.

    Science.gov (United States)

    Brendel, Jutta; Stoll, Britta; Lange, Sita J; Sharma, Kundan; Lenz, Christof; Stachler, Aris-Edda; Maier, Lisa-Katharina; Richter, Hagen; Nickel, Lisa; Schmitz, Ruth A; Randau, Lennart; Allers, Thorsten; Urlaub, Henning; Backofen, Rolf; Marchfelder, Anita

    2014-03-07

    The clustered regularly interspaced short palindromic repeats/CRISPR-associated (CRISPR-Cas) system is a prokaryotic defense mechanism against foreign genetic elements. A plethora of CRISPR-Cas versions exist, with more than 40 different Cas protein families and several different molecular approaches to fight the invading DNA. One of the key players in the system is the CRISPR-derived RNA (crRNA), which directs the invader-degrading Cas protein complex to the invader. The CRISPR-Cas types I and III use the Cas6 protein to generate mature crRNAs. Here, we show that the Cas6 protein is necessary for crRNA production but that additional Cas proteins that form a CRISPR-associated complex for antiviral defense (Cascade)-like complex are needed for crRNA stability in the CRISPR-Cas type I-B system in Haloferax volcanii in vivo. Deletion of the cas6 gene results in the loss of mature crRNAs and interference. However, cells that have the complete cas gene cluster (cas1-8b) removed and are transformed with the cas6 gene are not able to produce and stably maintain mature crRNAs. crRNA production and stability are rescued only if cas5, -6, and -7 are present. Mutational analysis of the cas6 gene reveals three amino acids (His-41, Gly-256, and Gly-258) that are essential for pre-crRNA cleavage, whereas the mutation of two amino acids (Ser-115 and Ser-224) leads to an increase in crRNA amounts. This is the first systematic in vivo analysis of Cas6 protein variants. In addition, we show that the H. volcanii I-B system contains a Cascade-like complex with a Cas7, Cas5, and Cas6 core that protects the crRNA.

  19. Stability of regularly prescribed oral liquids formulated with SyrSpend® SF.

    Science.gov (United States)

    Uriel, M; Gómez-Rincón, C; Marro, D

    2018-04-02

    The purpose of this research was to evaluate the stability of 12 oral liquid formulations frequently compounded in hospital and community settings when formulated in a specific vehicle, SyrSpend® SF. The stability of melatonin, glycopyrrolate, ciclosporin, chloral hydrate, flecainide acetate, tiagabine HCl, labetalol HCl, ciprofloxacin HCl, spironolactone/hydrochlorothiazide, hydrocortisone, itraconazole and celecoxib in SyrSpend SF PH4 (liquid) was investigated at 0, 30, 60 and 90 days, stored both at controlled room temperature and refrigerated. Itraconazole samples were also investigated at 15 and 45 days. No change in odor, color or appearance was observed in the formulations during the test period. Based on the results, a beyond-use date of 30 days can be assigned to tiagabine HCl 1.0 mg/ml in SyrSpend SF when stored at controlled room temperature, and 90 days under refrigeration, improving on stability data previously published using other vehicles. A beyond-use date of 60 days can be assigned to chloral hydrate 100.0 mg/ml; in this case, stability is not enhanced by refrigeration. With the rest of the formulations, less than 10% API loss occurred over 90 days at either controlled room temperature or under refrigeration, including, for example, itraconazole 20.0 mg/ml, thus providing extended stability compared with simple syrup and other oral liquid vehicles. The findings of this study show that SyrSpend SF is an appropriate suspending vehicle for personalized formulations of the APIs studied here.

  20. Gravitational lensing by a regular black hole

    International Nuclear Information System (INIS)

    Eiroa, Ernesto F; Sendra, Carlos M

    2011-01-01

    In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.

  1. Gravitational lensing by a regular black hole

    Energy Technology Data Exchange (ETDEWEB)

    Eiroa, Ernesto F; Sendra, Carlos M, E-mail: eiroa@iafe.uba.ar, E-mail: cmsendra@iafe.uba.ar [Instituto de Astronomia y Fisica del Espacio, CC 67, Suc. 28, 1428, Buenos Aires (Argentina)

    2011-04-21

    In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
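
    As a rough illustration of how the strong deflection limit is used, the sketch below evaluates the logarithmic expansion of the bending angle near the photon sphere and inverts it for the impact parameters of the first relativistic images. The coefficients c1, c2 and the critical impact parameter b_c are made-up placeholders, not the Bardeen-black-hole values derived in the paper.

    ```python
    import numpy as np

    # Strong-deflection-limit expansion of the bending angle near the photon
    # sphere: alpha(b) ~ -c1 * ln(b/b_c - 1) + c2, for impact parameter b just
    # above the critical value b_c. All coefficients are illustrative only.
    c1, c2, b_c = 1.0, -0.40, 2.6   # hypothetical values (units of GM/c^2)

    def deflection(b):
        """Strong-field deflection angle for b > b_c."""
        return -c1 * np.log(b / b_c - 1.0) + c2

    # The n-th relativistic image corresponds to a total winding of ~2*pi*n;
    # inverting the expansion gives its impact parameter.
    def image_impact_parameter(n, delta_alpha=0.0):
        return b_c * (1.0 + np.exp((c2 - 2.0 * np.pi * n - delta_alpha) / c1))

    for n in (1, 2, 3):
        print(f"n={n}: b_n = {image_impact_parameter(n):.6f}")
    ```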

  2. Breast imaging using an amorphous silicon-based full-field digital mammographic system: stability of a clinical prototype.

    Science.gov (United States)

    Vedantham, S; Karellas, A; Suryanarayanan, S; D'Orsi, C J; Hendrick, R E

    2000-11-01

    An amorphous silicon-based full-breast imager for digital mammography was evaluated for detector stability over a period of 1 year. This imager uses a structured CsI:Tl scintillator coupled to an amorphous silicon layer with a 100-micron pixel pitch and read out by special purpose electronics. The stability of the system was characterized using the following quantifiable metrics: conversion factor (mean number of electrons generated per incident x-ray), presampling modulation transfer function (MTF), detector linearity and sensitivity, detector signal-to-noise ratio (SNR), and American College of Radiology (ACR) accreditation phantom scores. Qualitative metrics such as flat field uniformity, geometric distortion, and Society of Motion Picture and Television Engineers (SMPTE) test pattern image quality were also used to study the stability of the system. Observations made over this 1-year period indicated that the maximum variation from the average of the measurements was less than 0.5% for conversion factor, 3% for presampling MTF over all spatial frequencies, 5% for signal response, linearity and sensitivity, 12% for SNR over seven locations for all 3 target-filter combinations, and 0% for ACR accreditation phantom scores. ACR mammographic accreditation phantom images indicated the ability to resolve 5 fibers, 4 speck groups, and 5 masses at a mean glandular dose of 1.23 mGy. The SMPTE pattern image quality test for the display monitors used for image viewing indicated the ability to discern all contrast steps and to distinguish line-pair images at the center and corners of the image. No bleeding effects were observed in the image. Flat field uniformity for all 3 target-filter combinations displayed no artifacts such as gridlines, bad detector rows or columns, horizontal or vertical streaks, or bad pixels. Wire mesh screen images indicated uniform resolution and no geometric distortion.

  3. On a continuation approach in Tikhonov regularization and its application in piecewise-constant parameter identification

    International Nuclear Information System (INIS)

    Melicher, V; Vrábel’, V

    2013-01-01

    We present a new approach to the convexification of the Tikhonov regularization using a continuation method strategy. We embed the original minimization problem into a one-parameter family of minimization problems. Both the penalty term and the minimizer of the Tikhonov functional become dependent on a continuation parameter. In this way we can independently treat two main roles of the regularization term, which are the stabilization of the ill-posed problem and the introduction of a priori knowledge. For zero continuation parameter we solve a relaxed regularization problem, which stabilizes the ill-posed problem in a weaker sense. The problem is recast into the original minimization by the continuation method, and so the a priori knowledge is enforced. We apply this approach in the context of topology-to-shape geometry identification, where it allows us to avoid the convergence of gradient-based methods to local minima. We present illustrative results for magnetic induction tomography, which is an example of a PDE-constrained inverse problem. (paper)
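
    The following is a minimal numerical sketch of the continuation idea under our own assumptions: the penalty interpolates from a quadratic (relaxed, convex) term at t = 0 to a smoothed total-variation term at t = 1 that encodes a piecewise-constant prior, and each subproblem is warm-started with the previous minimizer. The paper's actual embedding and its magnetic induction tomography forward operator are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Ill-posed linear problem: Gaussian-blur forward operator, blocky truth.
    n = 100
    A = np.array([[np.exp(-0.5 * ((i - j) / 4.0) ** 2) for j in range(n)]
                  for i in range(n)])
    x_true = np.zeros(n); x_true[30:60] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(n)

    D = np.diff(np.eye(n), axis=0)          # first-difference operator

    def solve(t, x0, alpha=1e-2, steps=500, lr=2e-3, eps=1e-3):
        """One continuation step: gradient descent on a t-dependent Tikhonov
        functional. At t=0 the penalty is quadratic; as t->1 it approaches a
        smoothed TV term. This interpolation is our own illustrative choice,
        not the paper's exact embedding."""
        x = x0.copy()
        for _ in range(steps):
            g = D @ x
            # blend of grad ||Dx||^2 and grad of smoothed TV sum(sqrt(g^2+eps))
            pen_grad = D.T @ ((1 - t) * 2 * g + t * g / np.sqrt(g ** 2 + eps))
            x -= lr * (2 * A.T @ (A @ x - b) + alpha * pen_grad)
        return x

    # Continuation loop: warm-start each problem with the previous minimizer.
    x = np.zeros(n)
    for t in np.linspace(0.0, 1.0, 6):
        x = solve(t, x)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```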

  4. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach requires only the computation of the residual and the regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used

  5. A Large Dimensional Analysis of Regularized Discriminant Analysis Classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-11-01

    This article carries out a large dimensional analysis of standard regularized discriminant analysis classifiers designed on the assumption that data arise from a Gaussian mixture model with different means and covariances. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized discriminant analysis in practical, large but finite, dimensions, and can be used to determine and pre-estimate the optimal regularization parameter that minimizes the misclassification error probability. Despite being theoretically valid only for Gaussian data, our findings are shown to yield high accuracy in predicting the performance achieved with real data sets drawn from the popular USPS database, thereby making an interesting connection between theory and practice.
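
    A hedged sketch of the classifier under study: regularized discriminant analysis with the pooled covariance shrunk toward the identity, the regularization parameter swept, and the misclassification rate estimated empirically on synthetic Gaussian data. The paper replaces exactly this Monte Carlo step with a deterministic RMT prediction; the dimensions and class means below are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two Gaussian classes in p dimensions with n samples each; p/n is not
    # small, which is the regime the RMT analysis targets.
    p, n = 80, 100
    mu0 = np.zeros(p)
    mu1 = 2.5 * np.ones(p) / np.sqrt(p)
    X0 = rng.standard_normal((n, p)) + mu0
    X1 = rng.standard_normal((n, p)) + mu1

    def rda_classifier(X0, X1, gamma):
        """RDA: pooled covariance shrunk toward the identity,
        Sigma_reg = (1 - gamma) * S + gamma * I."""
        S = 0.5 * (np.cov(X0.T) + np.cov(X1.T))
        Sig = (1 - gamma) * S + gamma * np.eye(p)
        W = np.linalg.solve(Sig, X1.mean(0) - X0.mean(0))
        bias = -0.5 * W @ (X0.mean(0) + X1.mean(0))
        return lambda x: x @ W + bias > 0

    # Sweep gamma and estimate the test error by Monte Carlo.
    Xt0 = rng.standard_normal((2000, p)) + mu0
    Xt1 = rng.standard_normal((2000, p)) + mu1
    for gamma in (0.01, 0.1, 0.5, 0.9):
        clf = rda_classifier(X0, X1, gamma)
        err = 0.5 * (clf(Xt0).mean() + (~clf(Xt1)).mean())
        print(f"gamma={gamma:4.2f}  test error ~ {err:.3f}")
    ```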

  6. An analysis of electrical impedance tomography with applications to Tikhonov regularization

    KAUST Repository

    Jin, Bangti

    2012-01-16

    This paper analyzes the continuum model/complete electrode model in the electrical impedance tomography inverse problem of determining the conductivity parameter from boundary measurements. The continuity and differentiability of the forward operator with respect to the conductivity parameter in L p-norms are proved. These analytical results are applied to several popular regularization formulations, which incorporate a priori information of smoothness/sparsity on the inhomogeneity through Tikhonov regularization, for both linearized and nonlinear models. Some important properties, e.g., existence, stability, consistency and convergence rates, are established. This provides some theoretical justifications of their practical usage. © EDP Sciences, SMAI, 2012.

  7. An analysis of electrical impedance tomography with applications to Tikhonov regularization

    KAUST Repository

    Jin, Bangti; Maass, Peter

    2012-01-01

    This paper analyzes the continuum model/complete electrode model in the electrical impedance tomography inverse problem of determining the conductivity parameter from boundary measurements. The continuity and differentiability of the forward operator with respect to the conductivity parameter in L p-norms are proved. These analytical results are applied to several popular regularization formulations, which incorporate a priori information of smoothness/sparsity on the inhomogeneity through Tikhonov regularization, for both linearized and nonlinear models. Some important properties, e.g., existence, stability, consistency and convergence rates, are established. This provides some theoretical justifications of their practical usage. © EDP Sciences, SMAI, 2012.

  8. SparseBeads data: benchmarking sparsity-regularized computed tomography

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Sauer; Coban, Sophia B.; Lionheart, William R. B.

    2017-01-01

    A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking sparsity-regularized (SR) reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels...

  9. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
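
    The ADMM structure described above can be sketched in a few lines. For a self-contained toy we substitute an orthonormal DCT for the curvelet frame and a 1D blur for the CT system matrix; the variable split, the quadratic x-update and the soft-thresholding z-update are the generic ingredients, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.fft import dct, idct

    rng = np.random.default_rng(2)

    # Toy problem: blurred, noisy measurements of a piecewise-smooth signal.
    n = 128
    A = np.array([[np.exp(-0.5 * ((i - j) / 3.0) ** 2) for j in range(n)]
                  for i in range(n)])
    x_true = np.sin(np.linspace(0, 3 * np.pi, n)); x_true[60:] += 1.0
    b = A @ x_true + 0.05 * rng.standard_normal(n)

    W = lambda x: dct(x, norm='ortho')     # analysis transform (frame stand-in)
    Wt = lambda c: idct(c, norm='ortho')   # its inverse = adjoint (orthonormal)

    # ADMM for min 0.5||Ax-b||^2 + lam*||z||_1  s.t.  Wx = z
    lam, rho = 0.05, 1.0
    x = np.zeros(n); z = W(x); u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)          # x-update system matrix, fixed
    for _ in range(200):
        # x-update: quadratic subproblem (W orthonormal, so W^T W = I).
        x = np.linalg.solve(M, A.T @ b + rho * Wt(z - u))
        # z-update: soft thresholding of transform coefficients.
        c = W(x) + u
        z = np.sign(c) * np.maximum(np.abs(c) - lam / rho, 0.0)
        # dual ascent.
        u += W(x) - z
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```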

  10. Regularization destriping of remote sensing imagery

    Science.gov (United States)

    Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle

    2017-07-01

    We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method by giving weights spatially to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set which represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
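
    A minimal stand-in for the variational destriping step, assuming vertical stripes (a constant offset per column): explicit gradient descent on a simplified quadratic functional whose Euler-Lagrange equation penalizes variation across the stripe direction while keeping data fidelity. The paper's spatial weighting and its exact penalty are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic scene plus vertical stripes: each column gets a constant
    # offset, so the artifact varies only in the across-track (x) direction.
    ny, nx = 64, 64
    yy, xx = np.mgrid[0:ny, 0:nx]
    scene = np.sin(2 * np.pi * xx / 32.0) + 0.5 * np.sin(2 * np.pi * yy / 24.0)
    f = scene + np.tile(0.4 * rng.standard_normal(nx), (ny, 1))

    def destripe(f, lam=2.0, tau=0.1, iters=3000):
        """Descent on E(u) = 0.5||u - f||^2 + 0.5*lam*||d_x u||^2, a
        stripped-down, quadratic stand-in for the weighted functional in
        the paper; the Euler-Lagrange equation is then linear."""
        u = f.copy()
        for _ in range(iters):
            dxx = np.zeros_like(u)
            dxx[:, 1:-1] = u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2]  # d^2/dx^2
            u -= tau * ((u - f) - lam * dxx)
        return u

    u = destripe(f)
    # Column-mean deviation from the true scene measures residual striping.
    print("stripe energy before:", np.var(f.mean(0) - scene.mean(0)))
    print("stripe energy after: ", np.var(u.mean(0) - scene.mean(0)))
    ```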

  11. Biopharmaceutical formulations for pre-filled delivery devices.

    Science.gov (United States)

    Jezek, Jan; Darton, Nicholas J; Derham, Barry K; Royle, Nikki; Simpson, Iain

    2013-06-01

    Pre-filled syringes are becoming an increasingly popular format for delivering biotherapeutics conveniently and cost effectively. The device design and stable liquid formulations required to enable this pre-filled syringe format are technically challenging. In choosing the materials and process conditions to fabricate the syringe unit, their compatibility with the biotherapeutic needs to be carefully assessed. The biotherapeutic stability demanded for the production of syringe-compatible low-viscosity liquid solutions requires critical excipient choices to be made. The purpose of this review is to discuss key issues related to the stability aspects of biotherapeutics in pre-filled devices. This includes effects on both physical and chemical stability due to a number of stress conditions the product is subjected to, as well as interactions with the packaging system. Particular attention is paid to the control of stability by formulation. We anticipate that there will be a significant move towards polymer primary packaging for most drugs in the longer term. The timescales for this will depend on a number of factors and hence will be hard to predict. Formulation will play a critical role in developing successful products in the pre-filled syringe format, particularly with the trend towards concentrated biotherapeutics. Development of novel, smart formulation technologies will, therefore, be increasingly important.

  12. Regularization iteration imaging algorithm for electrical capacitance tomography

    Science.gov (United States)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

    The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstruction images, in which the image reconstruction task is converted into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, in which the fast iterative shrinkage-thresholding algorithm is introduced to accelerate the convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving the reconstruction precision and robustness.
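
    As an illustration of the acceleration ingredient named above, here is a generic FISTA loop on a sparse-recovery subproblem of the kind a split Bregman scheme produces; the ECT sensitivity matrix is replaced by a random matrix and the low-rank term is dropped for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Subproblem: min 0.5||A x - b||^2 + lam * ||x||_1
    m, n, k = 60, 120, 8
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true + 0.01 * rng.standard_normal(m)

    lam = 0.02
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient

    def soft(v, t):
        """Soft-thresholding: proximal map of t * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = np.zeros(n); y = x.copy(); t = 1.0
    for _ in range(300):
        x_prev = x
        x = soft(y - A.T @ (A @ y - b) / L, lam / L)   # gradient + shrinkage
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)    # momentum step
        t = t_next

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```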

  13. 3D first-arrival traveltime tomography with modified total variation regularization

    Science.gov (United States)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
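
    The decoupling can be sketched as an alternation between a Tikhonov-type tomography step solved by conjugate gradients and an L2-TV denoising step. The toy below uses a random linear operator in place of the traveltime forward modeling and a smoothed-TV descent in place of split Bregman; only the alternating structure follows the paper.

    ```python
    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator

    rng = np.random.default_rng(5)

    # Linear toy tomography G m = d with a blocky slowness model m.
    n = 80
    G = rng.standard_normal((120, n)) / np.sqrt(120)
    m_true = np.zeros(n); m_true[20:50] = 1.0
    d = G @ m_true + 0.01 * rng.standard_normal(120)

    def tv_denoise(v, mu=0.1, iters=800, tau=0.05, eps=1e-3):
        """Subproblem min 0.5||r - v||^2 + mu*TV(r), solved here by descent
        on a smoothed TV (the paper uses split Bregman instead)."""
        r = v.copy()
        for _ in range(iters):
            g = np.diff(r)
            w = g / np.sqrt(g * g + eps)
            grad_tv = np.zeros_like(r)
            grad_tv[:-1] -= w; grad_tv[1:] += w
            r -= tau * ((r - v) + mu * grad_tv)
        return r

    # Alternation: Tikhonov-type step coupled to the TV reference r via CG,
    # then TV denoising of the update.
    alpha = 0.5
    r = np.zeros(n)
    normal = LinearOperator((n, n), matvec=lambda x: G.T @ (G @ x) + alpha * x,
                            dtype=float)
    for _ in range(10):
        m, _ = cg(normal, G.T @ d + alpha * r, atol=1e-8)
        r = tv_denoise(m)
    print("model error:", np.linalg.norm(r - m_true) / np.linalg.norm(m_true))
    ```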

  14. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2016-01-01

    Estimating the values of unknown parameters from corrupted measured data is challenging in ill-posed problems, where many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator, which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.

  15. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-11-29

    Estimating the values of unknown parameters from corrupted measured data is challenging in ill-posed problems, where many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator, which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.
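
    For orientation, the sketch below shows the SVD view of regularized least squares that such singular-value-modifying approaches build on: the regularizer acts through filter factors that damp the contributions of the small singular values. The paper's MSE-based perturbation and parameter selection rules are not reproduced; the spectrum and noise level are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Discrete ill-posed problem: rapidly decaying singular values.
    n = 64
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = 10.0 ** np.linspace(0, -8, n)          # ill-conditioned spectrum
    A = U @ np.diag(s) @ V.T
    x_true = V @ (1.0 / (1 + np.arange(n)))    # decaying SVD coefficients
    b = A @ x_true + 1e-4 * rng.standard_normal(n)

    def regularized_solution(lam):
        """Tikhonov solution in the SVD basis via filter factors
        f_i = s_i^2 / (s_i^2 + lam), which lift the small singular values."""
        f = s ** 2 / (s ** 2 + lam)
        return V @ (f * (U.T @ b) / s)

    # Compare a few parameters by (oracle) solution error, for illustration.
    for lam in (1e-10, 1e-6, 1e-3, 1e-1):
        x = regularized_solution(lam)
        print(f"lam={lam:8.0e}  error={np.linalg.norm(x - x_true):.3e}")
    ```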

  16. A Novel, Aqueous Surface Treatment To Thermally Stabilize High Resolution Positive Photoresist Images*

    Science.gov (United States)

    Grunwald, John J.; Spencer, Allen C.

    1986-07-01

    The paper describes a new approach to thermally stabilize the already imaged profile of high resolution positive photoresists such as ULTRAMAC™ PR-914. XD-4000, an aqueous emulsion of a blend of fluorine-bearing compounds, is spun on top of the developed, positive photoresist-imaged wafer, and baked. This allows the photoresist to withstand temperatures up to at least 175 deg. C while essentially maintaining vertical edge profiles. Also, adverse effects of "outgassing" in harsh environments, i.e., plasma and ion implant, are greatly minimized by allowing the high resolution imaged photoresist to be post-baked at elevated temperatures. Another type of product that accomplishes the same effect is XD-4005, an aqueous emulsion of a high temperature-resistant polymer. While the exact mechanism is yet to be identified, it is postulated that absorption of the polymeric species into the "skin" of the imaged resist forms a temperature-resistant envelope, thereby allowing high resolution photoresists to also serve in a high-temperature mode, without reticulation or other adverse effects due to thermal degradation. SEMs are presented showing imaged ULTRAMAC™ PR-914 and ULTRAMAC™ EPA-914 geometries coated with XD-4000 or XD-4005 and followed by plasma etched oxide, polysilicon and aluminum. Selectivity ratios are compared with and without the novel treatment and are shown to be significantly better with the treatment. The surface-treated photoresist remains easily strippable in solvent-based or plasma media, unlike photoresists that have undergone "PRIST" or other gaseous thermal stabilization methods.

  17. Describing chaotic attractors: Regular and perpetual points

    Science.gov (United States)

    Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz

    2018-03-01

    We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have been recently introduced to theoretical investigations, is thoroughly discussed and extended into new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points to locate attractors is indicated, along with its potential cause. The location of chaotic trajectories and of the sets of considered points is investigated, and a study of the stability of the systems is presented. A statistical analysis of observing the desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.
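
    For a flow ẋ = F(x), fixed points satisfy F(x) = 0, while perpetual points are the states where the acceleration ẍ = J_F(x)F(x) vanishes although the velocity does not. The sketch below locates both numerically for a Duffing-like planar system of our own choosing, not one of the paper's models.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    # Example planar flow (illustrative choice): x' = y, y' = -0.5y - x + x^3.
    def F(p):
        x, y = p
        return np.array([y, -0.5 * y - x + x ** 3])

    def J(p):
        x, y = p
        return np.array([[0.0, 1.0],
                         [-1.0 + 3 * x ** 2, -0.5]])

    def acceleration(p):
        # xddot = J_F(x) F(x): zero at fixed points AND at perpetual points.
        return J(p) @ F(p)

    # Fixed points: F = 0.
    for guess in ([0.1, 0.1], [1.1, 0.0], [-1.1, 0.0]):
        fp = fsolve(F, guess)
        print("fixed point:", np.round(fp, 6))

    # Perpetual points: acceleration zero but velocity nonzero.
    for guess in ([0.6, -0.8], [-0.6, 0.8], [0.5, 0.5]):
        pp = fsolve(acceleration, guess)
        if np.linalg.norm(F(pp)) > 1e-6:    # exclude ordinary fixed points
            print("perpetual point:", np.round(pp, 6),
                  " |velocity| =", round(float(np.linalg.norm(F(pp))), 4))
    ```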

  18. Regularization based on steering parameterized Gaussian filters and a Bhattacharyya distance functional

    Science.gov (United States)

    Lopes, Emerson P.

    2001-08-01

    Template regularization embeds the problem of class separability. From the machine vision perspective, this problem is critical when a textural classification procedure is applied to non-stationary pattern mosaic images. These applications often show low accuracy performance due to disturbance of the classifiers produced by exogenous or endogenous signal regularity perturbations. Natural scene imaging, where the images present a certain degree of homogeneity in terms of texture element size or shape (primitives), shows a variety of behaviors, especially varying the preferential spatial directionality. The space-time image pattern characterization is only solved if classification procedures are designed considering the most robust tools within a parallel and hardware perspective. The results compared in this paper are obtained using a framework based on a multi-resolution, frame and hypothesis approach. Two strategies for applying the bank of Gabor filters are considered: an adaptive strategy using the KL transform and a fixed configuration strategy. The regularization under discussion is accomplished in the pyramid building system instance. The filterings are steering Gaussians controlled by free parameters which are adjusted in accordance with a feedback process driven by hints obtained from sequence-of-frames interaction functionals post-processed in the training process, including classification of training set samples as examples. Besides these adjustments there is continuous input-data-sensitive adaptiveness. The experimental result assessments are focused on two basic issues: the Bhattacharyya distance as a pattern characterization feature, and the combination of the KL transform as feature selection and adaptive criterion with the regularization of the pattern Bhattacharyya distance functional (BDF) behavior, using the BDF state separability and symmetry as the main indicators of an optimum framework parameter configuration.
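
    The separability functional at the core of the framework is the Bhattacharyya distance. Under a Gaussian model for the filter-bank features it has the closed form implemented below; the synthetic "texture" feature statistics are placeholders.

    ```python
    import numpy as np

    def bhattacharyya_gaussian(mu1, S1, mu2, S2):
        """Bhattacharyya distance between two multivariate Gaussians:
        D = (1/8)(mu1-mu2)^T S^-1 (mu1-mu2)
            + (1/2) ln( det S / sqrt(det S1 * det S2) ),  S = (S1 + S2)/2."""
        S = 0.5 * (S1 + S2)
        dmu = mu1 - mu2
        term1 = 0.125 * dmu @ np.linalg.solve(S, dmu)
        _, logdetS = np.linalg.slogdet(S)
        _, logdet1 = np.linalg.slogdet(S1)
        _, logdet2 = np.linalg.slogdet(S2)
        return term1 + 0.5 * (logdetS - 0.5 * (logdet1 + logdet2))

    rng = np.random.default_rng(7)
    # Stand-ins for Gabor-filter-bank feature vectors of two texture classes.
    f1 = rng.standard_normal((500, 4)) @ np.diag([1.0, 0.5, 0.5, 0.2]) + [1, 0, 0, 0]
    f2 = rng.standard_normal((500, 4)) @ np.diag([0.8, 0.7, 0.4, 0.3])
    D = bhattacharyya_gaussian(f1.mean(0), np.cov(f1.T), f2.mean(0), np.cov(f2.T))
    print("Bhattacharyya distance between texture classes:", round(float(D), 4))
    ```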

  19. A Complex of Cas Proteins 5, 6, and 7 Is Required for the Biogenesis and Stability of Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)-derived RNAs (crRNAs) in Haloferax volcanii*

    Science.gov (United States)

    Brendel, Jutta; Stoll, Britta; Lange, Sita J.; Sharma, Kundan; Lenz, Christof; Stachler, Aris-Edda; Maier, Lisa-Katharina; Richter, Hagen; Nickel, Lisa; Schmitz, Ruth A.; Randau, Lennart; Allers, Thorsten; Urlaub, Henning; Backofen, Rolf; Marchfelder, Anita

    2014-01-01

    The clustered regularly interspaced short palindromic repeats/CRISPR-associated (CRISPR-Cas) system is a prokaryotic defense mechanism against foreign genetic elements. A plethora of CRISPR-Cas versions exist, with more than 40 different Cas protein families and several different molecular approaches to fight the invading DNA. One of the key players in the system is the CRISPR-derived RNA (crRNA), which directs the invader-degrading Cas protein complex to the invader. The CRISPR-Cas types I and III use the Cas6 protein to generate mature crRNAs. Here, we show that the Cas6 protein is necessary for crRNA production but that additional Cas proteins that form a CRISPR-associated complex for antiviral defense (Cascade)-like complex are needed for crRNA stability in the CRISPR-Cas type I-B system in Haloferax volcanii in vivo. Deletion of the cas6 gene results in the loss of mature crRNAs and interference. However, cells that have the complete cas gene cluster (cas1–8b) removed and are transformed with the cas6 gene are not able to produce and stably maintain mature crRNAs. crRNA production and stability is rescued only if cas5, -6, and -7 are present. Mutational analysis of the cas6 gene reveals three amino acids (His-41, Gly-256, and Gly-258) that are essential for pre-crRNA cleavage, whereas the mutation of two amino acids (Ser-115 and Ser-224) leads to an increase of crRNA amounts. This is the first systematic in vivo analysis of Cas6 protein variants. In addition, we show that the H. volcanii I-B system contains a Cascade-like complex with a Cas7, Cas5, and Cas6 core that protects the crRNA. PMID:24459147

  20. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminate representations to make final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods can be available at http://www.yongxu.org/lunwen.html.

  1. lop-DWI: A Novel Scheme for Pre-Processing of Diffusion-Weighted Images in the Gradient Direction Domain.

    Science.gov (United States)

    Sepehrband, Farshid; Choupan, Jeiran; Caruyer, Emmanuel; Kurniawan, Nyoman D; Gal, Yaniv; Tieng, Quang M; McMahon, Katie L; Vegh, Viktor; Reutens, David C; Yang, Zhengyi

    2014-01-01

    We describe and evaluate a pre-processing method based on a periodic spiral sampling of diffusion-gradient directions for high angular resolution diffusion magnetic resonance imaging. Our pre-processing method incorporates prior knowledge about the acquired diffusion-weighted signal, facilitating noise reduction. Periodic spiral sampling of gradient direction encodings results in an acquired signal in each voxel that is pseudo-periodic, with characteristics that allow separation of the low-frequency signal from high-frequency noise. Consequently, it enhances local reconstruction of the orientation distribution function used to define fiber tracks in the brain. Denoising with periodic spiral sampling was tested using synthetic data and in vivo human brain images. The signal-to-noise ratio and the accuracy of local reconstruction of fiber tracks were significantly improved using our method.
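
    A minimal sketch of the underlying idea, under our own assumptions about the signal: if the per-voxel diffusion-weighted signal is pseudo-periodic in spiral acquisition order, denoising reduces to keeping a few low "direction-frequency" Fourier components. The synthetic signal and the cutoff are illustrative, not the lop-DWI pipeline.

    ```python
    import numpy as np
    from scipy.fft import fft, ifft

    rng = np.random.default_rng(12)

    # Per-voxel signal over gradient directions in spiral acquisition order:
    # smooth (low-harmonic) in that ordering, plus white noise.
    n_dirs = 128
    order = np.arange(n_dirs)
    signal = (1.0 + 0.3 * np.cos(2 * np.pi * order / n_dirs)
                  + 0.1 * np.cos(4 * np.pi * order / n_dirs))
    noisy = signal + 0.08 * rng.standard_normal(n_dirs)

    # Keep only a handful of low harmonics; zero the high-frequency band.
    spec = fft(noisy)
    cutoff = 6
    spec[cutoff:-cutoff] = 0.0
    denoised = ifft(spec).real

    rms = lambda e: float(np.sqrt(np.mean(e ** 2)))
    print("rms error noisy:   ", round(rms(noisy - signal), 4))
    print("rms error denoised:", round(rms(denoised - signal), 4))
    ```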

  2. Graph-cut based discrete-valued image reconstruction.

    Science.gov (United States)

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañón, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  3. Magnetic resonance imaging pre- and postoperative evaluation of tetralogy of Fallot

    International Nuclear Information System (INIS)

    Bernardes, Renata Junqueira Moll; Simoes, Luiz Carlos

    2004-01-01

    The purpose of this study was to assess the usefulness of magnetic resonance imaging (MRI) in the pre- and postoperative evaluation of patients with tetralogy of Fallot. Twenty patients aged 1 to 29 years were prospectively evaluated with black-blood and contrast-enhanced angiographic techniques, 11 with the classic form of tetralogy of Fallot and 9 with tetralogy of Fallot and pulmonary atresia. MRI studies provided adequate visualization of the aorta that was classified as dilated or not dilated, and definition of its position in all cases. The use of contrast-enhanced MR angiographic techniques provided excellent imaging of the main right and left pulmonary arteries. The results suggest that MRI, including contrast-enhanced angiography techniques, is a useful tool in the evaluation of patients with tetralogy of Fallot before and after cardiac surgery since it provides important anatomical information that is not always obtained with echocardiography. MRI can be considered an alternative to cardiac catheterization, particularly in the evaluation of the pulmonary vascular anatomy. (author)

  4. Pre-operative radiotherapy in soft tissue tumors: Assessment of response by static post-contrast MR imaging compared to histopathology

    International Nuclear Information System (INIS)

    Einarsdottir, H.; Wejde, J.; Bauer, H.C.F.

    2000-01-01

    To evaluate whether static post-contrast MR imaging is adequate to assess tumor viability after pre-operative radiotherapy in soft tissue sarcoma. Post-contrast MR imaging of 36 soft tissue sarcomas, performed 0-54 days (median 13 days) after pre-operative radiotherapy, was retrospectively reviewed and compared to post-operative histopathology reports. The contrast enhancement of the tumor was visually graded as minor, moderate or extensive. From the post-operative histopathology reports, three types of tumor response to radiotherapy were defined: poor, intermediate or good. The size of the tumors before and after radiation was compared. Even if most viable tumors enhanced more than non-viable tumors, there was major overlap, and significant contrast enhancement could be seen in tumors where histopathological examination revealed no viable tumor tissue. Based on histopathology, there were 12 good responders; 8 of these showed minor, 3 moderate and 1 extensive contrast enhancement on MR imaging. Sixteen tumors had an intermediate response; 3 showed minor, 8 moderate and 5 extensive enhancement. Eight tumors had a poor response; none showed minor enhancement, 3 moderate and 5 extensive enhancement. Both increases and decreases in tumor size were seen in lesions with a good therapy response. Static post-contrast MR imaging cannot reliably assess tumor viability after pre-operative radiotherapy in soft tissue sarcoma. In tumors with no viable tumor tissue, moderate and extensive contrast enhancement can be seen.

  5. Joint Segmentation and Shape Regularization with a Generalized Forward Backward Algorithm.

    Science.gov (United States)

    Stefanoiu, Anca; Weinmann, Andreas; Storath, Martin; Navab, Nassir; Baust, Maximilian

    2016-05-11

    This paper presents a method for the simultaneous segmentation and regularization of a series of shapes from a corresponding sequence of images. Such series arise as time series of 2D images when considering video data, or as stacks of 2D images obtained by slicewise tomographic reconstruction. We first derive a model where the regularization of the shape signal is achieved by a total variation prior on the shape manifold. The method employs a modified Kendall shape space to facilitate explicit computations together with the concept of Sobolev gradients. For the proposed model, we derive an efficient and computationally accessible splitting scheme. Using a generalized forward-backward approach, our algorithm treats the total variation atoms of the splitting via proximal mappings, whereas the data terms are dealt with by gradient descent. The potential of the proposed method is demonstrated on various application examples dealing with 3D data. We explain how to extend the proposed combined approach to shape fields which, for instance, arise in the context of 3D+t imaging modalities, and show an application in this setup as well.
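
    The generalized forward-backward iteration itself is easy to exhibit on a vector-space surrogate: the smooth data term enters through its gradient, each nonsmooth atom through its proximal map, and the auxiliary variables are averaged to form the iterate. Below, the shape-manifold TV atom of the paper is replaced by an l1 term plus a nonnegativity constraint purely to keep the proximal maps elementary.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Generalized forward-backward for min_x f(x) + g1(x) + g2(x).
    y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.2 * rng.standard_normal(100)

    grad_f = lambda x: x - y                        # f(x) = 0.5||x - y||^2
    prox_g1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0)  # t*||.||_1
    prox_g2 = lambda v, t: np.maximum(v, 0)         # indicator of x >= 0

    gamma, mu, w = 0.5, 0.1, 0.5                    # step, l1 weight, atom weights
    z1 = np.zeros_like(y); z2 = np.zeros_like(y); x = np.zeros_like(y)
    for _ in range(300):
        gf = grad_f(x)
        # one auxiliary variable per nonsmooth atom, updated via its prox
        z1 += prox_g1(2 * x - z1 - gamma * gf, gamma * mu / w) - x
        z2 += prox_g2(2 * x - z2 - gamma * gf, gamma / w) - x
        x = w * z1 + w * z2                          # weighted average
    print("solution range:", round(float(x.min()), 3), "to", round(float(x.max()), 3))
    ```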

  6. Assessments of whole body scan images (PCI) obtained in patients undergoing treatment of radioiodine (pre and post-treatment)

    International Nuclear Information System (INIS)

    Costa, Fernanda Karolina Mendonca da; Lopes Filho, Ferdinand de Jesus; Vieira, Jose Wilson; Souza, Milena Thays Barbosa de

    2014-01-01

    Nuclear medicine is a medical specialty used for diagnosis and therapy of some diseases. For the treatment of differentiated thyroid carcinoma (papillary and follicular), radioiodine therapy is employed in order to eliminate the remaining thyroid tissue after removal of the thyroid (thyroidectomy). In radioiodine therapy, the radioisotope iodine-131 (131I) is used as sodium iodide (NaI). The amount of activity (dose) of 131I administered is generally the responsibility of the nuclear medicine physician, who bases the decision on a whole body scan image of the patient (pre-therapeutic-dose PCI). PCI is also used after treatment (post-therapeutic-dose PCI) to evaluate possible metastases. The purpose of this study was to investigate the biokinetic distribution of 131I over the whole body and in selected organs of the patient, in order to note any similarity. Pre-dose and post-dose PCI examinations of ten patients were analyzed in anterior and posterior projections. Regions of interest (ROIs) were contoured over the whole body and over areas with high 131I uptake in these images. The total counts were used to calculate the percentage distribution of 131I in the organs of the patient. The results showed similarity in the biodistribution of 131I between pre-dose and post-dose PCI. It was therefore found that pre-therapeutic-dose PCI images are valuable in assisting the nuclear medicine physician in choosing the best activity to be administered to the patient, in order to minimize the dose to adjacent organs. (author)
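
    The ROI-based percentage computation described above can be sketched as follows; combining the anterior and posterior projections by their geometric mean (the conjugate-view convention) is our assumption, and all counts are invented.

    ```python
    import numpy as np

    # Invented ROI counts (anterior, posterior) from a pre-dose PCI study.
    rois = {"thyroid remnant": (5200.0, 4700.0),
            "bladder": (21000.0, 23500.0),
            "whole body": (410000.0, 398000.0)}

    def conjugate_view(ant, post):
        """Combine anterior/posterior counts by their geometric mean
        (assumed conjugate-view convention)."""
        return np.sqrt(ant * post)

    wb = conjugate_view(*rois["whole body"])
    for organ in ("thyroid remnant", "bladder"):
        frac = 100.0 * conjugate_view(*rois[organ]) / wb
        print(f"{organ}: {frac:.1f}% of whole-body 131I counts")
    ```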

  7. A strategy for Local Surface Stability Monitoring Using SAR Imagery

    Science.gov (United States)

    Kim, J.; Lan, C. W.; Lin, S. Y.; vanGasselt, S.; Yun, H.

    2017-12-01

    In order to provide sufficient facilities for a growing number of residents, much construction and maintenance of infrastructure and buildings is nowadays under way above and below the surface of urban areas. In some cases we have learned that disasters may happen if developments are conducted on unknown or geologically unstable ground or in over-developed areas. To avoid damage caused by such settings, it is essential to operate a regular monitoring scheme to understand ground stability over the whole urban area. Through long-term monitoring, we firstly aim to observe surface stability over construction sites. Secondly, we propose to implement automatic extraction and tracking of suspicious unstable areas. To achieve this, we used 12-day-interval C-band Sentinel-1A Synthetic Aperture Radar (SAR) images as the main source for regular monitoring. The Differential Interferometric SAR (D-InSAR) technique was applied to generate interferograms. Together with the accumulation of updated Sentinel-1A SAR images, time series interferograms were formed accordingly. For the purpose of observing surface stability over known construction sites, the interferograms and the unwrapped products could be used to identify surface displacement occurring before and after specific events. In addition, Small Baseline Subset (SBAS) and Permanent Scatterers (PS) approaches combining a set of unwrapped D-InSAR interferograms were also applied to derive displacement velocities over long-term periods. For some cases, we conducted ascending- and descending-mode time series analysis to decompose three surface migration vectors and to precisely identify the risk pattern. Regarding the extraction of suspicious unstable areas, we propose to develop an automatic pattern recognition algorithm for the identification of specific fringe patterns involving various potential risks. The detected fringes were tracked in the time series interferograms and
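
    The core D-InSAR step referred to above is the pixelwise product of one single-look-complex acquisition with the conjugate of the other, whose phase contains the surface displacement as fringes. A synthetic sketch (C-band wavelength, invented subsidence bowl):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Simulated single-look-complex (SLC) pair: the second acquisition carries
    # an extra phase ramp from a localized subsidence bowl (illustrative only).
    ny, nx = 128, 128
    yy, xx = np.mgrid[0:ny, 0:nx]
    wavelength = 0.056                                  # C-band, metres
    subsidence = 0.02 * np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 400.0)
    phase_defo = 4 * np.pi * subsidence / wavelength    # two-way path delay

    backscatter = rng.standard_normal((ny, nx)) + 1j * rng.standard_normal((ny, nx))
    slc1 = backscatter
    slc2 = backscatter * np.exp(-1j * phase_defo)

    # Differential interferogram: phase of s1 * conj(s2) isolates the change.
    interferogram = slc1 * np.conj(slc2)
    wrapped_phase = np.angle(interferogram)
    print("max wrapped phase (rad):", round(float(wrapped_phase.max()), 3))
    print("one fringe cycle =", round(wavelength / 2 * 100, 1), "cm of motion")
    ```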

  8. Retrieval-based Face Annotation by Weak Label Regularized Local Coordinate Coding.

    Science.gov (United States)

    Wang, Dayong; Hoi, Steven C H; He, Ying; Zhu, Jianke; Mei, Tao; Luo, Jiebo

    2013-08-02

    Retrieval-based face annotation is a promising paradigm of mining massive web facial images for automated face annotation. This paper addresses a critical problem of such a paradigm, i.e., how to effectively perform annotation by exploiting similar facial images and their weak labels, which are often noisy and incomplete. In particular, we propose an effective Weak Label Regularized Local Coordinate Coding (WLRLCC) technique, which exploits the principle of local coordinate coding in learning sparse features, and employs the idea of graph-based weak label regularization to enhance the weak labels of the similar facial images. We present an efficient optimization algorithm to solve the WLRLCC task. We conduct extensive empirical studies on two large-scale web facial image databases: (i) a Western celebrity database with a total of 6,025 persons and 714,454 web facial images, and (ii) an Asian celebrity database with 1,200 persons and 126,070 web facial images. The encouraging results validate the efficacy of the proposed WLRLCC algorithm. To further improve the efficiency and scalability, we also propose a PCA-based approximation scheme and an offline approximation scheme (AWLRLCC), which generally maintain comparable results but significantly save time cost. Finally, we show that WLRLCC can also tackle two existing face annotation tasks with promising performance.

  9. Structural properties of templated Ge quantum dot arrays: impact of growth and pre-pattern parameters.

    Science.gov (United States)

    Tempeler, J; Danylyuk, S; Brose, S; Loosen, P; Juschkin, L

    2018-07-06

    In this study we analyze the impact of process and growth parameters on the structural properties of germanium (Ge) quantum dot (QD) arrays. The arrays were deposited by molecular-beam epitaxy on pre-patterned silicon (Si) substrates. Periodic arrays of pits with diameters between 120 and 20 nm and pitches ranging from 200 nm down to 40 nm were etched into the substrate prior to growth. The structural perfection of the two-dimensional QD arrays was evaluated based on SEM images. The impact of two processing steps on the directed self-assembly of Ge QD arrays is investigated. First, a thin Si buffer layer grown on a pre-patterned substrate reshapes the pre-pattern pits and determines the nucleation and initial shape of the QDs. Subsequently, the deposition parameters of the Ge define the overall shape and uniformity of the QDs. In particular, the growth temperature and the deposition rate are relevant and need to be optimized according to the design of the pre-pattern. Applying this knowledge, we are able to fabricate regular arrays of pyramid-shaped QDs with dot densities up to 7.2 × 10^10 cm^-2.
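
    A quick consistency check on the reported density: one dot per pit at the densest 40 nm pitch gives the figures below. The reported 7.2 × 10^10 cm^-2 matches a fully occupied hexagonal arrangement; that inference is ours, not a statement from the paper.

    ```python
    import numpy as np

    pitch_cm = 40e-7                                  # 40 nm pitch in cm
    square = 1.0 / pitch_cm ** 2                      # one dot per pit, square grid
    hexagonal = 2.0 / (np.sqrt(3) * pitch_cm ** 2)    # one dot per pit, hex grid
    print(f"square grid:    {square:.2e} dots/cm^2")  # ~6.3e10
    print(f"hexagonal grid: {hexagonal:.2e} dots/cm^2")  # ~7.2e10, as reported
    ```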

  10. Intraoperative computed tomography with integrated navigation system in spinal stabilizations.

    Science.gov (United States)

    Zausinger, Stefan; Scheder, Ben; Uhl, Eberhard; Heigl, Thomas; Morhard, Dominik; Tonn, Joerg-Christian

    2009-12-15

    STUDY DESIGN.: A prospective interventional case-series study plus a retrospective analysis of historical patients for comparison of data. OBJECTIVE.: To evaluate workflow, feasibility, and clinical outcome of navigated stabilization procedures with data acquisition by intraoperative computed tomography. SUMMARY OF BACKGROUND DATA.: Routine fluoroscopy to assess pedicle screw placement is not consistently reliable. Our hypothesis was that image-guided spinal navigation using an intraoperative CT-scanner can improve the safety and precision of spinal stabilization surgery. METHODS.: CT data of 94 patients (thoracolumbar [n = 66], C1/2 [n = 12], cervicothoracic instability [n = 16]) were acquired after positioning the patient in the final surgical position. A sliding gantry 40-slice CT was used for image acquisition. Data were imported to a frameless infrared-based neuronavigation workstation. Intraoperative CT was obtained to assess the accuracy of instrumentation and, if necessary, the extent of decompression. All patients were clinically evaluated by Odom-criteria after surgery and after 3 months. RESULTS.: Computed accuracy of the navigation system reached <2 mm. Screw misplacement of ≥2 mm, without persistent neurologic or vascular damage, was found in 20/414 screws (4.8%), leading to immediate correction of 10 screws (2.4%). Control-iCT changed the course of surgery in 8 cases (8.5% of all patients). The overall revision rate was 8.5% (4 wound revisions, 2 CSF fistulas, and 2 epidural hematomas). There was no reoperation due to implant malposition. According to Odom-criteria all patients experienced a clinical improvement. A retrospective analysis of 182 patients with navigated thoracolumbar transpedicular stabilizations in the pre-iCT era revealed an overall revision rate of 10.4%, with 4.4% of patients requiring screw revision. CONCLUSION.: Intraoperative CT in combination with neuronavigation provides high accuracy of screw placement and thus safety for patients undergoing spinal stabilization.

  11. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    International Nuclear Information System (INIS)

    Shieh, Chun-Chien; Kipritidis, John; O'Brien, Ricky T; Cooper, Benjamin J; Keall, Paul J; Kuncic, Zdenka

    2015-01-01

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp–Davis–Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan and was compared to FDK, ASD-POCS and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS and
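
    The key AAIR ingredient, re-segmenting at every iteration and locally suppressing the regularization at structures of interest, can be caricatured in 1D: a weighted-TV denoising loop whose edge weights come from a crude intensity-threshold segmentation. Thresholds and weights below are invented; the real method operates inside ASD-POCS on CBCT projection data.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # 1D toy "slice": a high-contrast structure on a smooth background.
    n = 200
    x_true = 0.2 * np.sin(np.linspace(0, 2 * np.pi, n))
    x_true[80:120] += 1.0
    y = x_true + 0.1 * rng.standard_normal(n)

    def weighted_tv_denoise(y, base_lam=0.5, iters=1500, tau=0.05, eps=1e-2):
        x = y.copy()
        for _ in range(iters):
            # crude segmentation by threshold, redone every iteration as in
            # AAIR's per-iteration segmentation step
            seg = x > 0.5
            edge = np.zeros(n - 1)
            edge[np.abs(np.diff(seg.astype(float))) > 0] = 1.0
            lam = base_lam * (1.0 - 0.95 * edge)  # suppress smoothing at edges
            g = np.diff(x)
            w = lam * g / np.sqrt(g * g + eps)
            grad_tv = np.zeros(n)
            grad_tv[:-1] -= w; grad_tv[1:] += w
            x -= tau * ((x - y) + grad_tv)
        return x

    x = weighted_tv_denoise(y)
    print("structure contrast:",
          round(float(x[90:110].mean() - x[40:60].mean()), 3), "(true ~0.8)")
    ```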

  12. Achilles tendon structure improves on UTC imaging over a 5-month pre-season in elite Australian football players.

    Science.gov (United States)

    Docking, S I; Rosengarten, S D; Cook, J

    2016-05-01

    Pre-season injuries are common and may be due to the reintroduction of training loads. Tendons are sensitive to changes in load, making them vulnerable to injury in the pre-season. This study investigated changes in Achilles tendon structure on ultrasound tissue characterization (UTC) over the course of a 5-month pre-season in elite male Australian football players. Eighteen elite male Australian football players with no history of Achilles tendinopathy and normal Achilles tendons were recruited. The left Achilles tendon was scanned with UTC to quantify the stability of the echopattern. Participants were scanned at the start and completion of a 5-month pre-season. Fifteen players remained asymptomatic over the course of the pre-season. All four echo-types were significantly different at the end of the pre-season, with the overall echopattern suggesting an improvement in Achilles tendon structure. Three of the 18 participants developed Achilles tendon pain that coincided with a change in the UTC echopattern. This study demonstrates that the UTC echopattern of the Achilles tendon improves over a 5-month pre-season training period, representing increased fibrillar alignment. However, further investigation is needed to elucidate whether this alteration in the UTC echopattern results in improved tendon resilience and load capacity. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Stability of pre-orthodontic orthognathic surgery depending on mandibular surgical techniques: SSRO vs IVRO.

    Science.gov (United States)

    Choi, Sung-Hwan; Yoo, Ho Jin; Lee, Jang-Yeol; Jung, Young-Soo; Choi, Jong-Woo; Lee, Kee-Joon

    2016-09-01

    The aim of this retrospective cohort study was to evaluate the postoperative stability of sagittal split ramus osteotomy (SSRO) and intraoral vertical ramus osteotomy (IVRO) in pre-orthodontic orthognathic surgery (POGS) for skeletal Class III malocclusion. Thirty-seven patients (SSRO, n = 18; IVRO, n = 19) who underwent bimaxillary surgery were divided into two groups according to the type of surgery. During the postoperative period, there were no significant differences in anterior and superior movements of the mandible at point B between the two groups, but occlusal plane angle of the SSRO group significantly decreased more than that of the IVRO group (P = 0.003). Only the SSRO group showed a linear relationship between the amount of postoperative horizontal and vertical movements of the mandible (R(2) = 0.254; P = 0.033), indicating that the amount of postoperative upward movement of the mandible increased as the amount of postoperative forward movement increased (r = -0.504; P = 0.033). The mandible after SSRO in POGS rotated counterclockwise due to rigid fixation between two segments, whereas the mandible after IVRO without rigid fixation in POGS moved mainly in a superior direction. These differences must be considered before surgery to ensure postsurgical stability for patients with mandibular prognathism. Copyright © 2016. Published by Elsevier Ltd.

  14. Control Design and Digital Implementation of a Fast 2-Degree-of-Freedom Translational Optical Image Stabilizer for Image Sensors in Mobile Camera Phones.

    Science.gov (United States)

    Wang, Jeremy H-S; Qiu, Kang-Fu; Chao, Paul C-P

    2017-10-13

    This study presents the design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. Nowadays, OIS is an important feature of modern commercial mobile camera phones, which aims to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study is able to move the imaging lens by actuating its voice coil motors (VCMs) at the required speed to the position that significantly compensates for imaging blur caused by hand shaking. The compensation proposed is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, which is followed by designing a simple lead-lag controller based on the established nonlinear EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation is conducted to show the favorable performance of the designed OIS; i.e., it is able to stabilize the lens holder to the desired position within 0.02 s, which is much less than previously reported times of around 0.1 s. Also, the resulting residual vibration is less than 2.2-2.5 μm, which is commensurate with the very small pixel size found in most commercial image sensors, thus significantly minimizing image blur caused by hand shaking.
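
    For concreteness, a lead-lag compensator C(s) = K(s + z)/(s + p) can be discretized with the bilinear (Tustin) transform into the two-tap difference equation below, a form that maps directly onto FPGA arithmetic. All gains, corner frequencies and the sample rate are made-up placeholders, not the paper's identified values.

    ```python
    import numpy as np

    # C(s) = K (s + z) / (s + p), Tustin: s <- (2/T)(1 - q^-1)/(1 + q^-1).
    K, z, p = 4.0, 2.0 * np.pi * 20.0, 2.0 * np.pi * 400.0  # placeholders
    T = 1.0 / 10000.0                        # 10 kHz sample rate (assumed)

    a = 2.0 / T
    b0 = K * (a + z) / (a + p)               # numerator taps
    b1 = K * (z - a) / (a + p)
    a1 = (p - a) / (a + p)                   # denominator tap

    def make_controller():
        """Difference equation u[k] = b0*e[k] + b1*e[k-1] - a1*u[k-1]."""
        state = {"e1": 0.0, "u1": 0.0}
        def step(e):
            u = b0 * e + b1 * state["e1"] - a1 * state["u1"]
            state["e1"], state["u1"] = e, u
            return u
        return step

    # Step response of the compensator alone: the initial lead gain decays
    # to the DC gain K*z/p.
    ctrl = make_controller()
    out = [ctrl(1.0) for _ in range(200)]
    print("initial output:", round(out[0], 3), " steady output:", round(out[-1], 3))
    print("expected DC gain:", round(K * z / p, 3))
    ```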

  15. Stabilization of prescribed values and periodic orbits with regular and pulse target oriented control

    International Nuclear Information System (INIS)

    Braverman, E.; Chan, B.

    2014-01-01

    Investigating a method of chaos control for one-dimensional maps, where the intervention is proportional to the difference between a fixed value and the current state, we demonstrate that stabilization is possible in one of the two following cases: (1) for small values, the map is increasing and the slope of the line connecting points on its graph with the origin is decreasing; (2) the chaotic map is locally Lipschitz. Moreover, in the latter case we prove that any point of the map can be stabilized. In addition, we study pulse stabilization, when the intervention occurs every m-th step, and illustrate that stabilization is possible for the first type of maps. In the context of population dynamics, we notice that control with a positive target, even if stabilization is not achieved, leads to persistent solutions and prevents extinction in models which experience the Allee effect.
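
    One natural reading of the scheme, applied to the chaotic logistic map, is x_{n+1} = f(x_n) + c(T − x_n), with the intervention applied every m-th step in the pulse variant. The sketch below uses this reading with invented parameter values; the paper's map class and its stabilization conditions are more general.

    ```python
    import numpy as np

    def f(x, r=4.0):
        return r * x * (1.0 - x)             # chaotic logistic map

    def simulate(c, T, m=1, n_steps=2000, x0=0.3):
        """Controlled iteration x_{n+1} = f(x_n) + c*(T - x_n), with the
        intervention applied only every m-th step (pulse control for m > 1)."""
        x, traj = x0, []
        for n in range(n_steps):
            u = c * (T - x) if n % m == 0 else 0.0
            x = f(x) + u
            traj.append(x)
        return np.array(traj)

    # Uncontrolled (c = 0): chaotic. With c = 3.0, T = 0.05 the controlled
    # map has a fixed point near x* ~ 0.19 with |g'(x*)| < 1, so orbits settle.
    for c, T in ((0.0, 0.05), (3.0, 0.05)):
        tail = simulate(c, T)[-50:]
        print(f"c={c}: tail spread = {np.ptp(tail):.4f}, tail mean = {tail.mean():.4f}")
    ```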

  16. A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography

    Directory of Open Access Journals (Sweden)

    Yue Xiao

    2018-01-01

    Tikhonov regularization is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the source strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source strength is used as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is converted into a univariate optimization problem, which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method can determine the regularization parameter correctly and effectively for reconstruction in NAH.
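
    A toy version of the proposed criterion: augment the transfer matrix with the column of a fictitious point source of true strength zero, and pick the Tikhonov parameter by a one-dimensional search. To keep the toy search well-posed we minimize the fictitious strength relative to the total reconstructed strength, a normalization of our own; the paper's criterion is the fictitious source strength itself.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(10)

    # Toy ESM setup: hologram pressures p from equivalent source strengths q
    # through an ill-conditioned transfer matrix G (a stand-in for the
    # acoustic propagator between source and hologram surfaces).
    m, k = 40, 30
    U, _ = np.linalg.qr(rng.standard_normal((m, m)))
    V, _ = np.linalg.qr(rng.standard_normal((k, k)))
    G = U[:, :k] @ np.diag(10.0 ** np.linspace(0, -6, k)) @ V.T
    q_true = rng.standard_normal(k)
    p = G @ q_true + 1e-4 * rng.standard_normal(m)

    # Augment G with the transfer vector of a fictitious point source whose
    # true strength is zero.
    G_aug = np.column_stack([G, rng.standard_normal(m) / np.sqrt(m)])

    def tikhonov(Gm, pm, lam):
        n = Gm.shape[1]
        return np.linalg.solve(Gm.T @ Gm + lam * np.eye(n), Gm.T @ pm)

    def criterion(log_lam):
        # Fictitious strength relative to total reconstructed strength
        # (our normalization); it should be as close to zero as possible.
        q = tikhonov(G_aug, p, 10.0 ** log_lam)
        return abs(q[-1]) / np.linalg.norm(q)

    res = minimize_scalar(criterion, bounds=(-12.0, 0.0), method='bounded')
    lam_opt = 10.0 ** res.x
    q_hat = tikhonov(G_aug, p, lam_opt)[:-1]
    err = np.linalg.norm(q_hat - q_true) / np.linalg.norm(q_true)
    print(f"chosen lambda = {lam_opt:.2e}, reconstruction error = {err:.3f}")
    ```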

  17. Essential pre-treatment imaging examinations in patients with endoscopically-diagnosed early gastric cancer

    Directory of Open Access Journals (Sweden)

    Tokunaga Mari

    2010-06-01

    Full Text Available Abstract Background There have been no reports discussing which imaging procedures are truly necessary before treatment of endoscopically-diagnosed early gastric cancer (eEGC). The aim of this pilot study was to show which imaging examinations are essential to select the indicated treatment or appropriate strategy in patients with eEGC. Methods In 140 consecutive patients (95 men, 45 women; age, 66.4 +/- 11.3 years [mean +/- standard deviation], range, 33-90) with eEGC diagnosed during a two-year period, the pre-treatment results of ultrasonography (US) and contrast-enhanced computed tomography (CT) of the abdomen, barium enema (BE) and chest radiography (CR) were retrospectively reviewed. Useful findings that might affect indication or strategy were evaluated. Results US demonstrated useful findings in 13 of 140 patients (9.3%): biliary tract stones (n = 11) and other malignant tumors (n = 2). Only one useful finding was demonstrated on CT (pancreatic intraductal papillary mucinous tumor) but not on US (0.7%; 95% confidence interval [CI], 2.1%). BE demonstrated colorectal carcinomas in six patients and polyps in 10 patients, altering treatment strategy (11.4%; 95% CI, 6.1-16.7%). Of these, only two colorectal carcinomas were detected on CT. CR showed three relevant findings (2.1%): pulmonary carcinoma (n = 1) and cardiomegaly (n = 2). Seventy-nine patients (56%) were treated surgically and 56 patients were treated by endoscopic intervention. The remaining five patients received no treatment for various reasons. Conclusions US, BE and CR may be essential as pre-treatment imaging examinations because they occasionally detect findings which affect treatment indication and strategy, whereas abdominal contrast-enhanced CT rarely provides additional information.

  18. Sleep Spindles in the Right Hemisphere Support Awareness of Regularities and Reflect Pre-Sleep Activations.

    Science.gov (United States)

    Yordanova, Juliana; Kolev, Vasil; Bruns, Eike; Kirov, Roumen; Verleger, Rolf

    2017-11-01

    The present study explored the sleep mechanisms which may support awareness of hidden regularities. Before sleep, 53 participants learned implicitly a lateralized variant of the serial response-time task in order to localize sensorimotor encoding either in the left or right hemisphere and induce implicit regularity representations. Electroencephalographic (EEG) activity was recorded at multiple electrodes during both task performance and sleep, searching for lateralized traces of the preceding activity during learning. Sleep EEG analysis focused on region-specific slow (9-12 Hz) and fast (13-16 Hz) sleep spindles during nonrapid eye movement sleep. Fast spindle activity at those motor regions that were activated during learning increased with the amount of postsleep awareness. Independently of side of learning, spindle activity at right frontal and fronto-central regions was involved: there, fast spindles increased with the transformation of sequence knowledge from implicit before sleep to explicit after sleep, and slow spindles correlated with individual abilities of gaining awareness. These local modulations of sleep spindles corresponded to regions with greater presleep activation in participants with postsleep explicit knowledge. Sleep spindle mechanisms are related to explicit awareness (1) by tracing the activation of motor cortical and right-hemisphere regions which had stronger involvement already during learning and (2) by recruitment of individually consolidated processing modules in the right hemisphere. The integration of different sleep spindle mechanisms with functional states during wake collectively supports the gain of awareness of previously experienced regularities, with a special role for the right hemisphere. © Sleep Research Society 2017. Published by Oxford University Press [on behalf of the Sleep Research Society].

  19. Deconstructing Barbie: Using Creative Drama as a Tool for Image Making in Pre-Adolescent Girls.

    Science.gov (United States)

    O'Hara, Elizabeth; Lanoux, Carol

    1999-01-01

    Discusses the dilemma of self-concept in pre-adolescent girls, as they revise their self-images based on information that the culture dictates as the norm. Argues that drama education can offer creative activities to help girls find their voice and bring them into their power. Includes two group drama activities and a short annotated bibliography…

  20. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    International Nuclear Information System (INIS)

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem. Also, the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it can produce higher quality images when compared to the algorithms based on the parallel or series models for the cases tested in this paper. It provides a new algorithm for ECT application.

  1. INCREASE OF STABILITY AT JPEG COMPRESSION OF THE DIGITAL WATERMARKS EMBEDDED IN STILL IMAGES

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-07-01

    Full Text Available Subject of Research. The paper deals with the creation and study of a method for increasing the robustness to JPEG compression of digital watermarks embedded in still images. Method. A new digital watermarking algorithm for still images is presented, which embeds the watermark by modifying frequency coefficients of the discrete Hadamard transform. The frequency coefficients used for embedding are chosen because their values change sharply after modification at maximum JPEG compression. The blocks of pixels used for embedding are chosen according to their entropy. The new algorithm was analyzed for resistance to image compression, noising, filtration, resizing, color change and histogram equalization. The Elham algorithm, which possesses good resistance to JPEG compression, was chosen for comparative analysis. Nine gray-scale images were selected as objects for protection. Imperceptibility of the distortions embedded in them was assessed by the peak signal-to-noise ratio, which should be no lower than 43 dB. Robustness of the embedded watermark was determined by the Pearson correlation coefficient, whose value should not fall below 0.5 for the minimum allowed stability. The computing experiment comprises: embedding the watermark into each test image by the new algorithm and by the Elham algorithm; introducing distortions to the object of protection; and extracting the embedded information and comparing it with the original. Parameters of the algorithms were chosen so as to provide approximately the same level of distortions introduced into the images. Main Results. The method of preliminary processing of the digital watermark presented in the paper makes it possible to significantly reduce the volume of information embedded in the still image. The results of the numerical experiment have shown that the
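
    The two acceptance thresholds quoted in the abstract are straightforward to compute. A minimal sketch, with synthetic stand-ins for the cover image, the watermarked image and the extracted watermark bits:

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB (imperceptibility check)."""
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def pearson(a, b):
    """Pearson correlation between embedded and extracted watermarks."""
    return float(np.corrcoef(np.ravel(a).astype(float),
                             np.ravel(b).astype(float))[0, 1])

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64))
marked = np.clip(cover + rng.integers(-2, 3, cover.shape), 0, 255)  # mild embedding distortion
wm = rng.integers(0, 2, 256)                    # embedded watermark bits
wm_extracted = wm ^ (rng.random(256) < 0.1)     # ~10% bit errors after compression

print(f"imperceptibility: PSNR = {psnr(cover, marked):.1f} dB (threshold 43 dB)")
print(f"robustness: r = {pearson(wm, wm_extracted):.2f} (threshold 0.5)")
```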

  2. Image registration of naval IR images

    Science.gov (United States)

    Rodland, Arne J.

    1996-06-01

    In a real-world application, an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference Inertial Navigation Unit, servo inaccuracies, etc. For a high-resolution imaging sensor this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high-contrast points distributed over the whole image. As long as moving objects in the scene only cover a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is then searched for points whose movement diverges from the estimated stabilization error; these points are assumed to be located on moving objects and are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real-time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the images so that the output from the algorithm could be compared with the artificially added stabilization errors.
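
    A minimal sketch of the core idea, with synthetic point lists (the original point detector and matcher are not reproduced): take the robust consensus of the point displacements as the stabilization error, then flag points that diverge from it as candidate moving objects:

```python
import numpy as np

rng = np.random.default_rng(2)

# High-contrast points tracked from one frame to the next
pts = rng.uniform(0, 512, (40, 2))
true_shift = np.array([3.2, -1.7])                   # unknown stabilization error
pts_next = pts + true_shift + 0.2 * rng.standard_normal(pts.shape)
pts_next[:3] += np.array([8.0, 5.0])                 # three points on a moving object

disp = pts_next - pts
shift = np.median(disp, axis=0)        # robust consensus: most points sit on land
residual = np.linalg.norm(disp - shift, axis=1)
movers = np.where(residual > 3.0)[0]   # diverging points -> assumed moving objects

print(f"estimated stabilization error: {shift.round(2)} (true {true_shift})")
print(f"points flagged as moving objects: {movers}")
```

    The median is used here because most tracked points lie on stationary ground, which is exactly the assumption the abstract makes.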

  3. Recent Developments in Instrumentation for Pre-Clinical Imaging Studies

    International Nuclear Information System (INIS)

    Meikle, S.R.

    2002-01-01

    Full text: Recent advances in imaging instrumentation have led to a variety of tomograph designs for dedicated pre-clinical imaging of laboratory animals. These advances make it possible to image and quantify the kinetics of radiolabelled pharmaceuticals in a wide range of animal models, from rodents to non-human primates. Applications include evaluation of promising new radiopharmaceuticals, study of the molecular origins of human disease and evaluation of new forms of therapy. These applications and advances in instrumentation are equally applicable to positron emitters and single photon emitters. This paper provides an overview of recent advances which have led to the current state-of-the-art in pre-clinical imaging, covering the common inorganic scintillators that have been used for SPECT and PET, including some of the promising materials recently studied. The current crystal of choice for SPECT imaging is NaI(Tl) because of its high light output and density, which make it well suited to imaging photons in the 100-200 keV range. However, NaI(Tl) has the disadvantage that it must be hermetically sealed to prevent absorption of moisture from the environment. Therefore, investigators have explored a number of alternative inorganic crystals, including CsI(Tl) and cerium-doped yttrium aluminium perovskite (YAP), as well as solid state detectors such as cadmium zinc telluride (CZT). Many of the crystals used in SPECT have also been tried for PET, including NaI(Tl) and YAP. However these crystals have lower stopping power than BGO, and NaI(Tl) is also relatively slow. A very promising scintillator for PET is cerium-doped lutetium oxyorthosilicate (LSO) (1), which has similar stopping power to BGO and relatively high light output and fast decay. The first PET scanner to use LSO was the UCLA animal scanner, microPET, which also makes use of a number of other new technologies and unique design features. Recently, improvements in multi-anode and crossed wire position sensitive

  4. A stability comparison of redox-active layers produced by chemical coupling of an osmium redox complex to pre-functionalized gold and carbon electrodes

    International Nuclear Information System (INIS)

    Boland, Susan; Foster, Kevin; Leech, Donal

    2009-01-01

    The production of stable redox-active layers on electrode surfaces is a key factor for the development of practical electronic and electrochemical devices. Here, we report on a comparison of the stability of redox layers formed by covalently coupling an osmium redox complex to pre-functionalized gold and graphite electrode surfaces. Pre-treatment of gold and graphite electrodes to provide surface carboxylic acid groups is achieved via classical thiolate self-assembled monolayer formation on gold surfaces and the electro-reduction of an in situ generated aryldiazonium salt from 4-aminobenzoic acid on gold, glassy carbon and graphite surfaces. These surfaces have been characterized by AFM and electrochemical blocking studies. The surface carboxylate is then used to tether an osmium complex, [Os(2,2'-bipyridyl)2(4-aminomethylpyridine)Cl]PF6, to provide a covalently bound redox-active layer, E°' of 0.29 V (vs. Ag/AgCl in phosphate buffer, pH 7.4), on the pre-treated electrodes. The aryldiazonium salt-treated carbon-based surfaces showed the greatest stability, represented by a decrease of <5% in the peak current for the Os(II/III) redox transition of the immobilized complex over a 3-day period, compared to decreases of 19% and 14% for the aryldiazonium salt-treated and thiolate-treated gold surfaces, respectively, over the same period.

  5. Mixed Total Variation and L1 Regularization Method for Optical Tomography Based on Radiative Transfer Equation

    Directory of Open Access Journals (Sweden)

    Jinping Tang

    2017-01-01

    Full Text Available Optical tomography is an emerging and important molecular imaging modality. The aim of optical tomography is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as total variation (TV) regularization and L1 regularization. In order to better reconstruct piecewise-constant and sparse coefficient distributions, the TV and L1 norms are combined as the regularizer. The forward problem is discretized with the discontinuous Galerkin method on the spatial space and the finite element method on the angular space. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computation efficiency. By comparison with other imaging reconstruction methods based on TV or L1 regularization alone, the simulation results show the validity and efficiency of the proposed method.
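
    Schematically, and in our own notation rather than necessarily the paper's (F the RTE forward operator mapping the absorption coefficient to boundary data, y the measurements, alpha and beta the regularization weights), the combined objective reads:

```latex
\min_{\mu_a}\ \tfrac{1}{2}\bigl\|F(\mu_a) - y\bigr\|_2^2
  \;+\; \alpha\,\mathrm{TV}(\mu_a) \;+\; \beta\,\|\mu_a\|_1
```

    The TV term favors piecewise-constant reconstructions while the L1 term promotes sparsity; the Levenberg-Marquardt step handles the smooth data term and split Bregman handles the nonsmooth penalties.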

  6. Effect of von Karman Vortex Shedding on Regular and Open-slit V-gutter Stabilized Turbulent Premixed Flames

    Science.gov (United States)

    2012-04-01

    The flame structure changes dramatically for both the regular and the open-slit V-gutter: both flame lengths shrink, and large-scale disruptions occur downstream, with vortex shedding carrying reaction zones. ... reduces the flame length. However, qualitatively the open-slit V-gutter appears to be more sensitive than the regular V-gutter. Both flames remain

  7. CT Image Reconstruction in a Low Dimensional Manifold

    OpenAIRE

    Cong, Wenxiang; Wang, Ge; Yang, Qingsong; Hsieh, Jiang; Li, Jia; Lai, Rongjie

    2017-01-01

    Regularization methods are commonly used in X-ray CT image reconstruction. Different regularization methods reflect the characterization of different prior knowledge of images. In a recent work, a new regularization method called a low-dimensional manifold model (LDMM) is investigated to characterize the low-dimensional patch manifold structure of natural images, where the manifold dimensionality characterizes structural information of an image. In this paper, we propose a CT image reconstruc...

  8. Anterior Cruciate Ligament Tear: Reliability of MR Imaging to Predict Stability after Conservative Treatment

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hye Won; Ahn, Jin Hwan; Ahn, Joong Mo; Yoon, Young Cheol; Hong, Hyun Pyo; Yoo, So Young; Kim, Seon Woo [Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2007-06-15

    The aim of this study is to evaluate the reliability of MR imaging to predict the stability of the torn anterior cruciate ligament (ACL) after complete recovery of the ligament's continuity. Twenty patients with 20 knee injuries (13 males and 7 females; age range, 20-54 years) were enrolled in the study. The inclusion criteria were a positive history of acute trauma, diagnosis of the ACL tear by both physical examination and MR imaging at the initial presentation, conservative treatment, complete recovery of the continuity of the ligament on the follow-up (FU) MR images, and availability of KT-2000 measurements. Two radiologists, working in consensus, graded the MR findings using a 3-point system for the signal intensity, sharpness, straightness and thickness of the healed ligament. The insufficiency of the ACL was categorized into three groups according to the KT-2000 measurements. The statistical correlations between the grades of the MR findings and the degrees of ACL insufficiency were analyzed using the Cochran-Mantel-Haenszel test (p < 0.05). The p-values for each category of the MR findings across the different groups of KT-2000 measurements were 0.9180 for MR signal intensity, 1.0000 for sharpness, 0.5038 for straightness and 0.2950 for thickness of the ACL. The MR findings were not significantly different between the different KT-2000 groups. MR imaging itself is not a reliable examination to predict the stability of the ACL rupture outcome, even when the MR images show an intact appearance of the ACL.

  9. Low-dose 4D cone-beam CT via joint spatiotemporal regularization of tensor framelet and nonlocal total variation

    Science.gov (United States)

    Han, Hao; Gao, Hao; Xing, Lei

    2017-08-01

    Excessive radiation exposure is still a major concern in 4D cone-beam computed tomography (4D-CBCT) due to its prolonged scanning duration. Radiation dose can be effectively reduced by either under-sampling the x-ray projections or reducing the x-ray flux. However, 4D-CBCT reconstruction under such low-dose protocols is prone to image artifacts and noise. In this work, we propose a novel joint regularization-based iterative reconstruction method for low-dose 4D-CBCT. To tackle the under-sampling problem, we employ spatiotemporal tensor framelet (STF) regularization to take advantage of the spatiotemporal coherence of the patient anatomy in 4D images. To simultaneously suppress the image noise caused by photon starvation, we also incorporate spatiotemporal nonlocal total variation (SNTV) regularization to make use of the nonlocal self-recursiveness of anatomical structures in the spatial and temporal domains. Under the joint STF-SNTV regularization, the proposed iterative reconstruction approach is evaluated first using two digital phantoms and then using physical experiment data in the low-dose context of both under-sampled and noisy projections. Compared with existing approaches via either STF or SNTV regularization alone, the presented hybrid approach achieves improved image quality, and is particularly effective for the reconstruction of low-dose 4D-CBCT data that are not only sparse but noisy.
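
    In schematic form (our notation, not necessarily the paper's: A_t the projection operator and g_t the measured projections for respiratory phase t, W_STF the spatiotemporal tensor-framelet transform), the jointly regularized reconstruction solves something like:

```latex
\min_{f \ge 0}\ \sum_{t=1}^{T} \tfrac{1}{2}\bigl\|A_t f_t - g_t\bigr\|_2^2
  \;+\; \lambda_1 \bigl\|W_{\mathrm{STF}} f\bigr\|_1
  \;+\; \lambda_2\,\mathrm{SNTV}(f)
```

    The framelet term addresses under-sampling by exploiting spatiotemporal coherence, while the nonlocal TV term suppresses the noise caused by photon starvation.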

  10. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The best known among these is linear least-squares. Although this criterion enjoyed wide popularity in many areas due to its attractive properties, it appeared to suffer from some shortcomings. Alternative optimization criteria have, as a result, been proposed. These new criteria allowed, in one way or another, the incorporation of further prior information into the desired problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that go in search of minimizing the estimated data error, the two newly proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
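
    The RLS estimator at the heart of the thesis has a closed form, and the whole tuning question is which regularization weight to plug into it. A minimal sketch on a synthetic Gaussian problem (our notation; an oracle grid search over the true MSE stands in for what COPRA approximates without access to the true signal):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 80, 50
A = rng.standard_normal((m, n))             # i.i.d. Gaussian model matrix
x = rng.standard_normal(n)
y = A @ x + 0.5 * rng.standard_normal(m)

def rls(gamma):
    """Regularized least squares: argmin ||A x - y||^2 + gamma * ||x||^2."""
    return np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ y)

# Oracle grid search over gamma (COPRA approximates this minimum blindly)
gammas = 10.0 ** np.linspace(-3, 3, 61)
mses = [np.mean((rls(g) - x) ** 2) for g in gammas]
best = gammas[int(np.argmin(mses))]
print(f"MSE-optimal gamma ~ {best:.2f}; MSE {min(mses):.4f} "
      f"vs (nearly) unregularized {np.mean((rls(1e-12) - x) ** 2):.4f}")
```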

  11. Evaluation of Image-Guided Positioning for Frameless Intracranial Radiosurgery

    International Nuclear Information System (INIS)

    Lamba, Michael; Breneman, John C.; Warnick, Ronald E.

    2009-01-01

    Purpose: The standard for target alignment and immobilization in intracranial radiosurgery is frame-based alignment and rigid immobilization using a stereotactic head ring. Recent improvements in image-guidance systems have introduced the possibility of image-guided radiosurgery with nonrigid immobilization. We present data on the alignment accuracy and patient stability of a frameless image-guided system. Methods and Materials: Isocenter alignment errors were measured for in vitro studies in an anthropomorphic phantom for both frame-based stereotactic and frameless image-guided alignment. Subsequently, in vivo studies assessed differences between frame-based and image-guided alignment in patients who underwent frame-based intracranial radiosurgery. Finally, intratreatment target stability was determined by image-guided alignment performed before and after image-guided mask immobilized radiosurgery. Results: In vitro hidden target localization errors were comparable for the framed (0.7 ± 0.5 mm) and image-guided (0.6 ± 0.2 mm) techniques. The in vivo differences in alignment were 0.9 ± 0.5 mm (anteroposterior), -0.2 ± 0.4 mm (superoinferior), and 0.3 ± 0.5 mm (lateral). For in vivo stability tests, the mean distance differed between the pre- and post-treatment positions with mask-immobilized radiosurgery by 0.5 ± 0.3 mm. Conclusion: Frame-based and image-guided alignment accuracy in vitro was comparable for the system tested. In vivo tests showed a consistent trend in the difference of alignment in the anteroposterior direction, possibly due to torque to the ring and mounting system with frame-based localization. The mask system as used appeared adequate for patient immobilization.

  12. Use of the geometric mean of opposing planar projections in pre-reconstruction restoration of SPECT images

    International Nuclear Information System (INIS)

    Boulfelfel, D.; Rangayyan, R.M.; Hahn, L.J.; Kloiber, R.

    1992-01-01

    This paper presents a restoration scheme for single photon emission computed tomography (SPECT) images that performs restoration before reconstruction (pre-reconstruction restoration) from planar (projection) images. In this scheme, the pixel-by-pixel geometric mean of each pair of opposing (conjugate) planar projections is computed prior to the reconstruction process. The averaging process is shown to help in making the degradation phenomenon less dependent on the distance of each point of the object from the camera. The restoration filters investigated are the Wiener and power spectrum equalization filters. (author)
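
    A minimal sketch of the averaging step (synthetic Poisson projections; the restoration filters themselves are omitted). The conjugate view must be mirrored before the pixel-wise geometric mean is taken:

```python
import numpy as np

def conjugate_geometric_mean(proj, proj_opposed):
    """Pixel-wise geometric mean of a planar projection and its
    180-degree conjugate view (mirrored left-right to align)."""
    aligned = proj_opposed[:, ::-1]
    return np.sqrt(proj.astype(float) * aligned.astype(float))

rng = np.random.default_rng(4)
p0 = rng.poisson(100.0, (64, 64))     # anterior view (counts)
p180 = rng.poisson(100.0, (64, 64))   # posterior (conjugate) view
g = conjugate_geometric_mean(p0, p180)
print(g.shape, round(float(g.mean()), 1))
```

    Because the geometric mean of conjugate views makes the effective blur much less depth-dependent, a single stationary Wiener or power-spectrum-equalization filter can then be applied to the combined projection.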

  13. 78 FR 2418 - Privacy Act; Notification of New Privacy Act System of Records, Pre-Purchase Homeownership...

    Science.gov (United States)

    2013-01-11

    ... prior experience completing pre-purchase counseling or education; regular access to a computer and ... Notification of New Privacy Act System of Records, Pre-Purchase Homeownership Counseling Demonstration and Impact Evaluation ... The system of record is the Pre-Purchase Homeownership Counseling Demonstration and Impact Evaluation Random...

  14. Toward robust high resolution fluorescence tomography: a hybrid row-action edge preserving regularization

    Science.gov (United States)

    Behrooz, Ali; Zhou, Hao-Min; Eftekhar, Ali A.; Adibi, Ali

    2011-02-01

    Depth-resolved localization and quantification of fluorescence distribution in tissue, called Fluorescence Molecular Tomography (FMT), is highly ill-conditioned, as depth information must be extracted from a limited number of surface measurements. Inverse solvers resort to regularization algorithms that penalize the Euclidean norm of the solution to overcome ill-posedness. While these regularization algorithms offer good accuracy, their smoothing effects result in continuous distributions which lack the high-frequency edge-type features of the actual fluorescence distribution and hence limit the resolution offered by FMT. We propose an algorithm that penalizes the total variation (TV) norm of the solution to preserve sharp transitions and high-frequency components in the reconstructed fluorescence map while overcoming ill-posedness. The hybrid algorithm is composed of two levels: 1) an Algebraic Reconstruction Technique (ART), performed on FMT data for fast recovery of a smooth solution that serves as an initial guess for the iterative TV regularization; 2) a time-marching TV regularization algorithm, inspired by the Rudin-Osher-Fatemi TV image restoration, performed on the initial guess to further enhance the resolution and accuracy of the reconstruction. The performance of the proposed method in resolving fluorescent tubes inserted in a liquid tissue phantom imaged by a non-contact CW trans-illumination FMT system is studied and compared to conventional regularization schemes. It is observed that the proposed method performs better in resolving fluorescence inclusions at greater depths.
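
    A compact sketch of the two-level idea on a toy problem (a random sensing matrix instead of a photon-propagation model, and, for brevity, the TV stage simply refines the ART output against itself rather than against the FMT data):

```python
import numpy as np

def art(A, b, n_sweeps=20, relax=0.5):
    """Level 1: Algebraic Reconstruction Technique (Kaczmarz row-action sweeps)."""
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

def tv_time_march(u0, f, lam=0.5, dt=0.01, n_steps=200, eps=1e-2):
    """Level 2: ROF-style time marching  du/dt = div(grad u / |grad u|) - lam*(u - f)."""
    u = u0.copy()
    for _ in range(n_steps):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux * ux + uy * uy + eps)
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u += dt * (div - lam * (u - f))
    return u

rng = np.random.default_rng(5)
N = 16
phantom = np.zeros((N, N)); phantom[5:9, 6:11] = 1.0       # sharp inclusion
A = rng.standard_normal((400, N * N))                       # toy sensing matrix
b = A @ phantom.ravel() + 0.5 * rng.standard_normal(400)    # noisy measurements

x_art = art(A, b).reshape(N, N)        # fast, smooth initial guess
x_tv = tv_time_march(x_art, x_art)     # edge-preserving TV refinement
print("ART error:", round(float(np.linalg.norm(x_art - phantom)), 3),
      "| ART+TV error:", round(float(np.linalg.norm(x_tv - phantom)), 3))
```

    The explicit ROF step size must respect a CFL-type bound tied to the smoothing constant eps; the values above are chosen conservatively.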

  15. The Nature of Stability in Replicating Systems

    Directory of Open Access Journals (Sweden)

    Addy Pross

    2011-02-01

    Full Text Available We review the concept of dynamic kinetic stability, a type of stability associated specifically with replicating entities, and show how it differs from the well-known and established (static) kinetic and thermodynamic stabilities associated with regular chemical systems. In the process we demonstrate how the concept can help bridge the conceptual chasm that continues to separate the physical and biological sciences by relating the nature of stability in the animate and inanimate worlds, and by providing additional insights into the physicochemical nature of abiogenesis.

  16. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

    This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus callosum ... and mind. Statistics represents a quintessential part of such investigations as they are preceded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge ... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications.

  17. Product of the SNPP VIIRS SD Screen Transmittance and the SD BRDF (RSB) From Both Yaw Maneuver and Regular On-Orbit Data

    Science.gov (United States)

    Lei, Ning; Xiong, Xiaoxiong

    2016-01-01

    To assure data quality, the Earth-observing Visible Infrared Imaging Radiometer Suite (VIIRS) regularly performs on-orbit radiometric calibrations of its 22 spectral bands. The primary calibration radiance source for the reflective solar bands (RSBs) is a sunlit solar diffuser (SD). During the calibration process, sunlight goes through a perforated plate (the SD screen) and then strikes the SD. The SD-scattered sunlight is used for the calibration, with the spectral radiance proportional to the product of the SD screen transmittance and the SD bidirectional reflectance distribution function (BRDF). The BRDF is decomposed into the product of its value at launch and a numerical factor quantifying its change since launch. Therefore, the RSB calibration requires accurate knowledge of the product of the SD screen transmittance and the launch-time BRDF for the RSB SD view. Previously, we calculated the product with yaw maneuver data and found that the product had improved accuracy over the prelaunch one. With both yaw maneuver and regular on-orbit data, we were able to improve the accuracy of the solar diffuser stability monitor (SDSM) screen transmittance and the product for the SDSM SD view. In this study, we use both yaw maneuver and a small portion of regular on-orbit data to determine the product for the RSB SD view.

  18. MR immuno-imaging study using avidin-biotin pre-targeting system on nude mice grafted with human colorectal carcinoma

    International Nuclear Information System (INIS)

    Chai Qingfen; Huang Qiliu; Xu Yikai; Liu Xian; Wu Yuankui

    2001-01-01

    Objective: To further improve the amount of gadolinium localized on tumor, a gadolinium chelate enhanced magnetic resonance imaging technique pre-targeting with the avidin-biotin system was adopted, and the enhancing characteristics of the difference of signal intensity at various scan times were investigated. Methods: (1) Anti-CEA antibody CL-3 was biotinylated in a mixture of antibody and NHS-LS-biotin at a molar ratio of 1/30-50. (2) After the reaction of GdCl3 and DTPA-B, the unconjugated gadolinium was removed by chromatography on a G-10 column. (3) Steps for pre-targeting the tumor: first, McAb-B was injected intravenously into nude mice on the first day; second, avidin (Av) and streptavidin (SA) were injected intraperitoneally 24 hours later; third, Gd-DTPA-Bt was injected intravenously 48 hours after the first injection. MRI was performed with plain scans and enhanced scans at 20 minutes, 2 hours, 8 hours, and 24 hours after the third step. Signal intensities of tumor and muscle were measured. The pre-targeting effect was compared with those of Gd-DTPA-McAb and Gd-DTPA. Results: (1) Each monoclonal antibody was conjugated with 11-23 biotins, and the immuno-activity of biotinylated antibody with 12 biotins/antibody was 94.9%. (2) The enhancing effect of the pre-targeting approach was tumor specific; that of Gd-DTPA was not. (3) The specific enhancing rate of signal intensity of the pre-targeting approach was 43%, while that of McAb-Gd-DTPA was only 17.9%, an enhancing ratio of 2.4. Conclusion: The pre-targeting approach using the avidin-biotin system improves the amount of gadolinium localized on tumors and yields a specific enhancing effect. It is a promising modality which promotes the ability of Gd-labelled magnetic resonance immuno-imaging to detect colon cancer and its recurrence.

  19. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,

  20. An imaging checklist for pre-FESS CT: framing a surgically relevant report

    Energy Technology Data Exchange (ETDEWEB)

    Vaid, S., E-mail: vaids@vsnl.co [Department of Radiology and Imaging, Grant Medical Foundation, Pune (India); Vaid, N. [Department of Otorhinolaryngology, K.E.M. Hospital, Pune (India); Rawat, S. [Department of Radiology and Imaging, Grant Medical Foundation, Pune (India); Ahuja, A.T. [Department of Diagnostic Radiology and Organ Imaging, The Chinese University of Hong Kong (Hong Kong)

    2011-05-15

    The reference standard for preoperative imaging in functional endoscopic sinus surgery (FESS) is multiplanar high-resolution computed tomography (HRCT). Surgeons require a precise preoperative anatomical road map, and hence it is essential for radiologists to be familiar with the normal three-dimensional sinonasal anatomy and the normal variants encountered in this region. Sagittal imaging has recently emerged as an important tool to visualize additional details in this critical anatomical region. Radiologists also need to report these examinations with special focus on the surgeon's expectations. Constant communication between the radiologist and the surgeon helps to resolve specific issues and improve the overall quality of reports. This results in better preoperative patient counselling and in predicting postoperative improvement in clinical status. This review provides a basic structured format for reporting pre-FESS CT, which can be tailored to meet individual requirements. The CT reporting format follows the order in which the sinonasal structures are approached during surgery.

  1. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a

  2. Influence of combination hemodialysis/hemoperfusion against score of depression in regular hemodialysis patients

    Science.gov (United States)

    Permatasari, T. D.; Thamrin, A.; Hanum, H.

    2018-03-01

    Patients with chronic kidney disease have a higher risk for psychological distress such as anxiety, depression and cognitive decline. A regular combination of hemodialysis (HD) and hemoperfusion (HP) is better able to eliminate uremic toxins of middle-to-large molecular weight. HD/HP can remove metabolites, toxins, and pathogenic factors and regulate the water, electrolyte and acid-base balance to improve the quality of the patient's sleep and appetite and reduce itching of the skin, which in turn improves quality of life and life expectancy. This was a cross-sectional study with a pre-experimental design, conducted from July to September 2015 on a sample of 17 regular hemodialysis patients. Inclusion criteria were regular hemodialysis patients who were willing to participate in the research. The assessment was conducted using the BDI to assess depression. Data were analyzed using a t-test, which showed that the average BDI score fell from 18.59±9 before the HD/HP combination to 8.18±2.83 after it (p<0.001). In conclusion, the HD/HP combination can lower depression scores in patients on regular HD.

  3. Functional imaging of larynx via 256-Slice Multi-Detector Computed Tomography in patients with laryngeal tumors: A faster, better and more reliable pre-therapeutic evaluation

    International Nuclear Information System (INIS)

    Celebi, Irfan; Basak, Muzaffer; Ucgul, Ayhan; Yildirim, Hakan; Oz, Aysel; Vural, Cetin

    2012-01-01

    Objective: To determine the clinical utility of using dynamic maneuvers during imaging of the larynx via 256-slice multi-detector computed tomography in the pre-therapeutic evaluation of laryngeal tumors. Materials and methods: A total of 27 patients (7 women, 20 men; aged 53-76 years) diagnosed with laryngeal squamous cell carcinoma were evaluated pre-therapeutically via contrast-enhanced axial CT scans during consecutive phases of phonation (PP), inspiration (IP) and the Valsalva maneuver (VP). Results: In 2 of 5 patients diagnosed with a T1a glottic tumor, scans obtained during VP and PP were normal, while the CT scans obtained during IP clearly showed a mass. In all patients (27/27), PP provided visualization of the ventricle on coronal plane images and of the pyriform sinus apices on axial plane images. Involvement of the anterior commissure was best assessable on axial plane IP images (sensitivity 93%, specificity 92%). In cases of stage T1-T3 tumors, the use of dynamic maneuvers during laryngeal CT imaging showed the location and extension of the tumor better than the single-phase CT scans did. We did not find a significant improvement in the pre-therapeutic evaluation of stage T4 tumors. Conclusion: Providing markedly clearer and more detailed evaluation of the mucosal surfaces and deep structures of the larynx and of the mobility of the cords than conventional scans do, the use of dynamic laryngeal maneuvers during laryngeal CT imaging seems to be a useful alternative in the pre-therapeutic assessment of laryngeal tumors.

  4. The Danish Registry on Regular Dialysis and Transplantation:completeness and validity of incident patient registration

    DEFF Research Database (Denmark)

    Hommel, Kristine; Rasmussen, Søren; Madsen, Mette

    2010-01-01

    BACKGROUND: The Danish National Registry on Regular Dialysis and Transplantation (NRDT) provides systematic information on the epidemiology and treatment of end-stage chronic kidney disease in Denmark. It is therefore of major importance that the registry is valid and complete. The aim of the pre...

  5. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars, called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  6. On the MSE Performance and Optimization of Regularized Problems

    KAUST Repository

    Alrashdi, Ayed

    2016-11-01

    The amount of data that is measured, transmitted/received, and stored has dramatically increased in recent years. So, today, we are in the world of big data. Fortunately, in many applications, we can take advantage of possible structures and patterns in the data to overcome the curse of dimensionality. The most well-known structures include sparsity, low-rankness and block sparsity. This covers a wide range of applications such as machine learning, medical imaging, signal processing, social networks and computer vision. It has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function. This gives rise to a potential interest in regularized inverse problems, where the process of reconstructing the structured signal can be modeled as a regularized problem. This thesis particularly focuses on finding the optimal regularization parameter for such problems, such as ridge regression, LASSO, square-root LASSO and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT) that has been used recently to precisely predict performance errors.
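
    As a concrete instance of such a regularized recovery problem, here is a minimal LASSO sketch (synthetic data; plain ISTA, a textbook proximal-gradient method, stands in for the thesis's analysis), scanning a few regularization weights to expose the tuning question the thesis studies:

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for the LASSO: min 0.5*||Ax-y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(6)
m, n, k = 40, 100, 5                        # compressed-sensing regime: m < n
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# The tuning question: which lam minimizes the MSE of the estimate?
for lam in (0.001, 0.01, 0.1):
    err = np.mean((ista(A, y, lam) - x_true) ** 2)
    print(f"lam={lam:<6} MSE={err:.5f}")
```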

  7. Are pre-treatment psychological characteristics influenced by pre-surgical orthodontics?

    Science.gov (United States)

    Cunningham, S J; Gilthorpe, M S; Hunt, N P

    2001-12-01

    A number of investigations have looked at psychological changes occurring in association with orthognathic treatment. However, most of these studies have used a pre-surgery questionnaire as the baseline measurement. There is little data relating to the true baseline, i.e. that prior to any active treatment. Until this aspect is investigated, it is not possible to assume that pre-surgery is an acceptable baseline. This questionnaire based study aimed to assess changes in six psychological outcome measures between T1 (prior to any active treatment) and T2 (following pre-surgical orthodontics/prior to surgery). The outcome variables were: state anxiety, trait anxiety, depression, self-esteem, body image, and facial body image. Sixty-two patients (39 females and 23 males) completed both questionnaires. The results showed that intervention, in the form of orthodontic treatment, had a minimal effect on the chosen psychometric outcome variables. There was a significant reduction in satisfaction with body image amongst patients who initially reported mild to moderate dental/facial problems, whilst a moderate increase in satisfaction occurred in those patients reporting severe conditions initially. Also of note were significant increases in state anxiety amongst older patients whilst trait anxiety showed greater increases in females than males.

  8. Novel silica stabilization method for the analysis of fine nanocrystals using coherent X-ray diffraction imaging

    Energy Technology Data Exchange (ETDEWEB)

    Monteforte, Marianne; Estandarte, Ana K.; Chen, Bo; Harder, Ross; Huang, Michael H.; Robinson, Ian K.

    2016-06-23

    High-energy X-ray Bragg coherent diffraction imaging (BCDI) is a well established synchrotron-based technique used to quantitatively reconstruct the three-dimensional morphology and strain distribution in nanocrystals. The BCDI technique has become a powerful analytical tool for quantitative investigations of nanocrystals, nanotubes, nanorods and, more recently, biological systems. BCDI has however typically failed for fine nanocrystals in sub-100 nm size regimes (a size routinely achievable by chemical synthesis) despite the spatial resolution of the BCDI technique being 20-30 nm. The limitations of this technique arise from the movement of nanocrystals under illumination by the highly coherent beam, which prevents full diffraction data sets from being acquired. A solution is provided here to overcome this problem and extend the size limit of the BCDI technique, through the design of a novel stabilization method embedding the fine nanocrystals into a silica matrix. Chemically synthesized FePt nanocrystals of maximum dimension 20 nm and AuPd nanocrystals in the size range 60-65 nm were investigated with BCDI measurement at beamline 34-ID-C of the APS, Argonne National Laboratory. Novel experimental methodologies to elucidate the presence of strain in fine nanocrystals are a necessary prerequisite in order to better understand strain profiles in engineered nanocrystals for novel device development.

  9. A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.

    Science.gov (United States)

    Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas

    2015-12-01

    Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end to end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.
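
    In schematic form (our notation; R is the nonsmooth regularizer encoding the side information and lambda its weight), the regularized GEP the paper considers can be written as trace minimization over the generalized Stiefel manifold:

```latex
\min_{V \in \mathbb{R}^{n \times k}}\ \operatorname{tr}\!\bigl(V^{\top} A V\bigr) + \lambda\, R(V)
\qquad \text{s.t.}\quad V^{\top} B V = I_k
```

    Without the R term this is the classical GEP handled by standard eigensolvers; the nonsmooth R is what motivates the paper's stochastic optimization scheme on the manifold.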

  10. Dynamical Stability of Imaged Planetary Systems in Formation: Application to HL Tau

    OpenAIRE

    Tamayo, Daniel; Triaud, Amaury H. M. J.; Menou, Kristen; Rein, Hanno

    2015-01-01

    A recent ALMA image revealed several concentric gaps in the protoplanetary disk surrounding the young star HL Tau. We consider the hypothesis that these gaps are carved by planets, and present a general framework for understanding the dynamical stability of such systems over typical disk lifetimes, providing estimates for the maximum planetary masses. We collect these easily evaluated constraints into a workflow that can help guide the design and interpretation of new observational campaigns ...

  11. Pre-clinical evaluation of a nanoparticle-based blood-pool contrast agent for MR imaging of the placenta.

    Science.gov (United States)

    Ghaghada, Ketan B; Starosolski, Zbigniew A; Bhayana, Saakshi; Stupin, Igor; Patel, Chandreshkumar V; Bhavane, Rohan C; Gao, Haijun; Bednov, Andrey; Yallampalli, Chandrasekhar; Belfort, Michael; George, Verghese; Annapragada, Ananth V

    2017-09-01

    Non-invasive 3D imaging that enables clear visualization of placental margins is of interest in the accurate diagnosis of placental pathologies. This study investigated whether contrast-enhanced MRI performed using a liposomal gadolinium blood-pool contrast agent (liposomal-Gd) enables clear visualization of the placental margins and the placental-myometrial interface (retroplacental space). Non-contrast MRI and contrast-enhanced MRI using a clinically approved conventional contrast agent were used as comparators. Studies were performed in pregnant rats under an approved protocol. MRI was performed at 1 T using a permanent-magnet small animal scanner. Pre-contrast and post-liposomal-Gd contrast images were acquired using T1-weighted and T2-weighted sequences. Dynamic contrast-enhanced MRI (DCE-MRI) was performed using gadoterate meglumine (Gd-DOTA, Dotarem®). Visualization of the retroplacental clear space, a marker of normal placentation, was judged by a trained radiologist. Signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were calculated for both single and averaged acquisitions. Images were reviewed by a radiologist and scored for the visualization of placental features. Contrast-enhanced CT (CE-CT) imaging using a liposomal CT agent was performed for confirmation of the MR findings. Transplacental transport of liposomal-Gd was evaluated by post-mortem elemental analysis of tissues. Ex-vivo studies in perfused human placentae from normal, GDM, and IUGR pregnancies evaluated the transport of the liposomal agent across the human placental barrier. Post-contrast T1w images acquired with liposomal-Gd demonstrated significantly higher SNR (p = 0.0002) in the placenta compared to pre-contrast images (28.0 ± 4.7 vs. 6.9 ± 1.8). No significant differences (p = 0.39) were noted between SNR in pre-contrast and post-contrast liposomal-Gd images of the amniotic fluid, indicating absence of transplacental passage of the agent. The placental margins were

  12. Noninvasive technique for measurement of heartbeat regularity in zebrafish (Danio rerio) embryos

    Directory of Open Access Journals (Sweden)

    Cheng Shuk

    2009-02-01

    Full Text Available Abstract Background Zebrafish (Danio rerio), due to its optical accessibility and similarity to human, has emerged as a model organism for cardiac research. Although various methods have been developed to assess cardiac function in zebrafish embryos, there has been no method to assess heartbeat regularity in blood vessels. Heartbeat regularity is an important parameter for cardiac function and is associated with cardiotoxicity in human beings. Using a stereomicroscope and digital video camera, we have developed a simple, noninvasive method to measure the heart rate and heartbeat regularity in peripheral blood vessels. Anesthetized embryos were mounted laterally in agarose on a slide, and the caudal blood circulation of the zebrafish embryo was video-recorded under the stereomicroscope; the data were analyzed by custom-made software. The heart rate was determined by digital motion analysis and power spectral analysis through extraction of frequency characteristics of the cardiac rhythm. The heartbeat regularity, defined as the rhythmicity index, was determined by short-time Fourier Transform analysis. Results The heart rate measured by this noninvasive method in zebrafish embryos at 52 hours post-fertilization was similar to that determined by direct visual counting of ventricle beating (p > 0.05). In addition, the method was validated by a known cardiotoxic drug, terfenadine, which affects heartbeat regularity in humans and induces bradycardia and atrioventricular blockage in zebrafish. A significant decrease in heart rate was found by our method in treated embryos (p < 0.05). ... Conclusion The data support and validate this rapid, simple, noninvasive method, which includes video image analysis and frequency analysis. This method is capable of measuring the heart rate and heartbeat regularity simultaneously via the analysis of caudal blood flow in zebrafish embryos. With the advantages of rapid sample preparation procedures, automatic image analysis and data analysis, this
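
    A minimal sketch of the two analysis steps on a synthetic intensity trace (the frame rate, heart rate and the exact rhythmicity-index definition below are stand-ins, not the authors' software):

```python
import numpy as np

fs = 30.0                                  # assumed video frame rate (Hz)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(7)
# Stand-in for the mean pixel intensity of the caudal vessel region
signal = np.sin(2 * np.pi * 2.8 * t) + 0.3 * rng.standard_normal(t.size)

# Heart rate: dominant peak of the power spectrum
spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_peak = freqs[int(np.argmax(spec))]
print(f"estimated heart rate: {f_peak:.2f} Hz = {60 * f_peak:.0f} beats/min")

# Rhythmicity: track the spectral peak over short windows (a crude
# stand-in for the paper's short-time-Fourier-based rhythmicity index)
win, hop = int(4 * fs), int(2 * fs)
peaks = []
for start in range(0, t.size - win + 1, hop):
    seg = signal[start:start + win]
    s = np.abs(np.fft.rfft(seg - seg.mean())) ** 2
    peaks.append(np.fft.rfftfreq(win, 1 / fs)[int(np.argmax(s))])
print(f"regularity (std of windowed peak frequency): {np.std(peaks):.3f} Hz")
```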

  13. Aespoe HRL - Geoscientific evaluation 1997/3. Results from pre-investigation and detailed site characterization. Comparison of predictions and observations. Geology and mechanical stability

    International Nuclear Information System (INIS)

    Stanfors, R.; Olsson, Paer; Stille, H.

    1997-05-01

    Prior to excavation of the laboratory in 1990, predictions were made for the excavation phase. The predictions concern five key issues: geology, groundwater flow, groundwater chemistry, transport of solutes, and mechanical stability. Comparisons between predictions and observations were made during excavation in order to verify the reliability of the pre-investigations. This report presents a comparison between the geological and mechanical stability predictions and observations, and an evaluation of data and investigation methods used for the 700-2874 m section of the tunnel. The report highlights the following conclusions: It is possible to localize major fracture zones during the pre-investigation at shallow (<200 m) depths. A number of minor fracture zones striking NNW-NNE were predicted to be hydraulically important and to penetrate the southern area; a number of narrow fracture zone indications, 0.1-1 m wide, striking WNW-NE were mapped in the tunnel, and pre-grouted sections confirm hydraulic conductors. It has not been possible to confirm the gently dipping zone EW-5, which was predicted as 'possible', with data from the tunnel. Predictions of the amounts of the different rock types were generally reliable as regards the major rock types, but predictions of their distribution in space were poor as regards the minor rock types. The prediction of rock stress orientation corresponds well to the outcome. The prediction of rock quality for the tunnel, applying the RMR system, shows good correspondence to the observations made in the tunnel.

  14. Total-variation regularization with bound constraints

    International Nuclear Information System (INIS)

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.

  15. Effect of high carbon dioxide atmosphere packaging and soluble gas stabilization pre-treatment on the shelf-life and quality of chicken drumsticks.

    Science.gov (United States)

    Al-Nehlawi, A; Saldo, J; Vega, L F; Guri, S

    2013-05-01

    The effects of an aerobic modified atmosphere packaging (MAP) (70% CO2, 15% O2 and 15% N2), with and without a 3-h CO2 soluble gas stabilization (SGS) pre-treatment, of chicken drumsticks were determined for various package and product quality characteristics. The CO2 dissolved in the drumsticks was determined. Equilibrium between the CO2 dissolved in the drumsticks and the CO2 in the headspace was reached within 48 h after packaging, with the highest values of dissolved CO2 in SGS pre-treated samples. This greater availability of CO2 resulted in lower counts of TAB and Pseudomonas in SGS than in MAP drumsticks. Package collapse was significantly reduced in SGS samples. The average CO2 dissolved during the MAP treatment was 567 mg CO2 kg(-1) of chicken in MAP-only samples and 361 mg CO2 kg(-1) of chicken in SGS pre-treated samples; the difference approximates the quantity of CO2 dissolved during the SGS pre-treatment. These results highlight the advantages of using SGS versus traditional MAP for the preservation of chicken products. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Smoothly Clipped Absolute Deviation (SCAD) regularization for compressed sensing MRI Using an augmented Lagrangian scheme

    NARCIS (Netherlands)

    Mehranian, Abolfazl; Rad, Hamidreza Saligheh; Rahmim, Arman; Ay, Mohammad Reza; Zaidi, Habib

    2013-01-01

    Purpose: Compressed sensing (CS) provides a promising framework for MR image reconstruction from highly undersampled data, thus reducing data acquisition time. In this context, sparsity-promoting regularization techniques exploit the prior knowledge that MR images are sparse or compressible in a

  17. Pre- and postmortem imaging of transplanted cells

    Directory of Open Access Journals (Sweden)

    Andrzejewska A

    2015-09-01

    Full Text Available Anna Andrzejewska,1 Adam Nowakowski,1 Miroslaw Janowski,1–4 Jeff WM Bulte,3–7 Assaf A Gilad,3,4 Piotr Walczak,3,4,8 Barbara Lukomska11NeuroRepair Department, 2Department of Neurosurgery, Mossakowski Medical Research Centre, Polish Academy of Sciences, Warsaw, Poland; 3Russell H Morgan Department of Radiology and Radiological Science, Division of MR Research, 4Cellular Imaging Section and Vascular Biology Program, Institute for Cell Engineering, 5Department of Biomedical Engineering, 6Department of Chemical & Biomolecular Engineering, 7Department of Oncology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA; 8Department of Radiology, Faculty of Medical Sciences, University of Warmia and Mazury, Olsztyn, PolandAbstract: Therapeutic interventions based on the transplantation of stem and progenitor cells have garnered increasing interest. This interest is fueled by successful preclinical studies for indications in many diseases, including the cardiovascular, central nervous, and musculoskeletal system. Further progress in this field is contingent upon access to techniques that facilitate an unambiguous identification and characterization of grafted cells. Such methods are invaluable for optimization of cell delivery, improvement of cell survival, and assessment of the functional integration of grafted cells. Following is a focused overview of the currently available cell detection and tracking methodologies that covers the entire spectrum from pre- to postmortem cell identification.Keywords: stem cells, transplantation, SPECT, MRI, bioluminescence, cell labeling

  18. Label-free vascular imaging in a spontaneous hamster cheek pouch carcinogen model for pre-cancer detection (Conference Presentation)

    Science.gov (United States)

    Hu, Fangyao; Morhard, Robert; Liu, Heather; Murphy, Helen; Farsiu, Sina; Ramanujam, Nimmi

    2016-03-01

    Inducing angiogenesis is one hallmark of cancer. Tumor-induced neovasculature is often leaky, tortuous, and chaotic, unlike the highly organized normal vasculature. Additionally, in the course of carcinogenesis, angiogenesis precedes a visible lesion: a tumor cannot grow beyond 1-2 mm in diameter without inducing angiogenesis. Therefore, capturing the event of angiogenesis may aid early detection of pre-cancer, which is important for better treatment prognoses in regions that lack the resources to manage invasive cancer. In this study, we imaged neovascularization in vivo in a spontaneous hamster cheek pouch carcinogen model using a non-invasive, label-free, high-resolution, reflected-light spectral darkfield microscope. Hamsters' cheek pouches were painted with 7,12-dimethylbenz[a]anthracene (DMBA) to induce pre-cancerous to cancerous changes, or with mineral oil as a control. High-resolution spectral darkfield images were obtained over the course of pre-cancer development and in control cheek pouches. The vasculature was segmented with a multi-scale Gabor filter with 85% accuracy compared with manually traced masks. Highly tortuous vasculature was observed only in the DMBA-treated cheek pouches, as early as 6 weeks into treatment. In addition, the highly tortuous vessels could be identified before a visible lesion appeared later in the treatment. The vessel patterns, as quantified by a tortuosity index, were significantly different from those of the control cheek pouches. This preliminary study suggests that high-resolution darkfield microscopy is a promising tool for pre-cancer and early cancer detection in low-resource settings.
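
    The multi-scale Gabor segmentation described above can be sketched with standard tools. The following is a minimal illustration, not the authors' pipeline: the filter frequencies, orientation count, and max-magnitude pooling are all assumptions chosen for demonstration.

    ```python
    import numpy as np
    from skimage.filters import gabor

    def vessel_response(image, frequencies=(0.1, 0.2, 0.4), n_orientations=8):
        """Maximum Gabor magnitude over scales and orientations,
        a crude vesselness map for a grayscale image."""
        response = np.zeros(image.shape, dtype=float)
        for f in frequencies:
            for k in range(n_orientations):
                theta = k * np.pi / n_orientations
                real, imag = gabor(image, frequency=f, theta=theta)
                response = np.maximum(response, np.hypot(real, imag))
        return response
    ```

    Thresholding the returned response would give a crude vessel mask; the paper's 85%-accuracy segmentation involves additional steps not shown here.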

  19. Dual-purpose linker for alpha helix stabilization and imaging agent conjugation to glucagon-like peptide-1 receptor ligands.

    Science.gov (United States)

    Zhang, Liang; Navaratna, Tejas; Liao, Jianshan; Thurber, Greg M

    2015-02-18

    Peptides display many characteristics of efficient imaging agents such as rapid targeting, fast background clearance, and low non-specific cellular uptake. However, poor stability, low affinity, and loss of binding after labeling often preclude their use in vivo. Using glucagon-like peptide-1 receptor (GLP-1R) ligands exendin and GLP-1 as a model system, we designed a novel α-helix-stabilizing linker to simultaneously address these limitations. The stabilized and labeled peptides showed an increase in helicity, improved protease resistance, negligible loss or an improvement in binding affinity, and excellent in vivo targeting. The ease of incorporating azidohomoalanine in peptides and efficient reaction with the dialkyne linker enable this technique to potentially be used as a general method for labeling α helices. This strategy should be useful for imaging beta cells in diabetes research and in developing and testing other peptide targeting agents.

  20. New definitions of pointing stability - ac and dc effects. [constant and time-dependent pointing error effects on image sensor performance]

    Science.gov (United States)

    Lucke, Robert L.; Sirlin, Samuel W.; San Martin, A. M.

    1992-01-01

    For most imaging sensors, a constant (dc) pointing error is unimportant (unless large), but time-dependent (ac) errors degrade performance by either distorting or smearing the image. When properly quantified, the separation of the root-mean-square effects of random line-of-sight motions into dc and ac components can be used to obtain the minimum necessary line-of-sight stability specifications. The relation between stability requirements and sensor resolution is discussed, with a view to improving communication between the data analyst and the control systems engineer.
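
    The dc/ac separation described above can be written schematically. In the notation below (ours, not the authors'), the mean-square line-of-sight error splits into a constant bias and a jitter variance:

    ```latex
    % Schematic decomposition of line-of-sight (LOS) pointing error:
    \sigma_{\mathrm{LOS}}^{2}
      = \underbrace{\mu^{2}}_{\text{dc: constant bias}}
      + \underbrace{\sigma_{\mathrm{ac}}^{2}}_{\text{ac: time-dependent jitter}}
    ```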

  1. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn; Huang, Jing; Zhang, Hua; Lu, Lijun; Lyu, Wenbing; Feng, Qianjin; Chen, Wufan; Ma, Jianhua, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515 (China); Zhang, Jing [Department of Radiology, Tianjin Medical University General Hospital, Tianjin 300052 (China)

    2016-05-15

    Purpose: Cerebral perfusion computed tomography (PCT) imaging is widely used in the clinic as an accurate and fast examination for acute ischemic stroke. However, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach utilizes the higher-order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on a patient with an old infarction were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) demonstrated that the PD-STV approach outperformed other existing approaches in terms of noise-induced artifact reduction and accurate perfusion hemodynamic map (PHM) estimation. In the patient data study, the present PD-STV approach yielded accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation in low-mAs cerebral PCT imaging.
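
    The abstract mentions an iterative algorithm with a shrinkage/thresholding scheme. The PD-STV update itself is not published in this record, so the sketch below shows the generic iterative shrinkage-thresholding (ISTA) template that such schemes follow, with a plain l1 penalty standing in for the STV term:

    ```python
    import numpy as np

    def ista(A, b, lam, step, n_iters=200):
        """Generic ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

        Illustrative stand-in for a shrinkage/thresholding scheme; a safe
        step size is 1 / ||A||_2^2 (largest squared singular value of A).
        """
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            grad = A.T @ (A @ x - b)      # gradient of the data-fidelity term
            z = x - step * grad           # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
        return x
    ```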

  2. Model-based satellite image fusion

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Sveinsson, J. R.; Nielsen, Allan Aasbjerg

    2008-01-01

    A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel neighborhood regularization is presented. This framework enables the formulation of the regularization in a way that corresponds well with our prior assumptions about the image data. The proposed method is validated and compared with other approaches on several data sets. Lastly, the intensity-hue-saturation method is revisited in order to gain additional insight into what implications spectral consistency has for an image fusion method.

  3. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
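
    For context, Tikhonov regularization with a general linear operator L solves min_x ||Ax - b||^2 + alpha^2 ||Lx||^2. A minimal dense-matrix sketch follows; the paper's iterative Golub-Kahan-based method is designed precisely to avoid forming this stacked system for large problems:

    ```python
    import numpy as np

    def tikhonov(A, b, L, alpha):
        """Solve min_x ||Ax - b||^2 + alpha^2 * ||Lx||^2 by stacking the
        fidelity and regularization terms into one least-squares problem."""
        A_aug = np.vstack([A, alpha * L])
        b_aug = np.concatenate([b, np.zeros(L.shape[0])])
        x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
        return x

    # Example general operator: a second-difference (smoothing) regularizer.
    n = 50
    L = np.diff(np.eye(n), n=2, axis=0)
    ```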

  4. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    Science.gov (United States)

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown great performance in low-level vision tasks. The nonlocal prior is extracted from each group, which consists of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in the estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by a distance metric on the manifold structure. With this manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.
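
    The weighted singular-value thresholding step that the abstract reduces the model to can be sketched directly. This is a generic implementation of that operator, not the paper's full algorithm; the choice of weights is problem-specific:

    ```python
    import numpy as np

    def weighted_svt(Y, weights):
        """Weighted singular-value thresholding: shrink each singular value
        by its own weight, then reassemble the matrix."""
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s_shrunk = np.maximum(s - weights, 0.0)   # weights: array of len(s)
        return (U * s_shrunk) @ Vt
    ```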

  5. Cleaning OCR'd text with Regular Expressions

    Directory of Open Access Journals (Sweden)

    Laura Turner O'Hara

    2013-05-01

    Full Text Available Optical Character Recognition (OCR)—the conversion of scanned images to machine-encoded text—has proven a godsend for historical research. This process allows texts to be searchable on one hand and more easily parsed and mined on the other. But we’ve all noticed that the OCR for historic texts is far from perfect. Old type faces and formats make for unique OCR. How might we improve poor quality OCR? The answer is Regular Expressions or “regex.”

  6. Cleaning OCR'd text with Regular Expressions

    OpenAIRE

    Laura Turner O'Hara

    2013-01-01

    Optical Character Recognition (OCR)—the conversion of scanned images to machine-encoded text—has proven a godsend for historical research. This process allows texts to be searchable on one hand and more easily parsed and mined on the other. But we’ve all noticed that the OCR for historic texts is far from perfect. Old type faces and formats make for unique OCR. How might we improve poor quality OCR? The answer is Regular Expressions or “regex.”
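
    As a concrete illustration of the kind of cleanup both records describe, here is a short example; the patterns are common OCR fixes chosen for demonstration, not ones taken from the article:

    ```python
    import re

    def clean_ocr(text):
        """A few common OCR cleanups expressed as regular expressions."""
        text = re.sub(r'-\n(\w)', r'\1', text)          # rejoin end-of-line hyphenation
        text = re.sub(r'[ \t]{2,}', ' ', text)          # collapse runs of spaces/tabs
        text = re.sub(r'(?<=\d)[lI](?=\d)', '1', text)  # l/I misread as 1 between digits
        text = re.sub(r'\bvv', 'w', text)               # "vv" often OCR'd for "w"
        return text

    print(clean_ocr("The gov-\nernment met    in 18l5, vvith..."))
    # -> "The government met in 1815, with..."
    ```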

  7. Borderline personality disorder and regularly drinking alcohol before sex.

    Science.gov (United States)

    Thompson, Ronald G; Eaton, Nicholas R; Hu, Mei-Chen; Hasin, Deborah S

    2017-07-01

    Drinking alcohol before sex increases the likelihood of engaging in unprotected intercourse, having multiple sexual partners and becoming infected with sexually transmitted infections. Borderline personality disorder (BPD), a complex psychiatric disorder characterised by pervasive instability in emotional regulation, self-image, interpersonal relationships and impulse control, is associated with substance use disorders and sexual risk behaviours. However, no study has examined the relationship between BPD and drinking alcohol before sex in the USA. This study examined the association between BPD and regularly drinking before sex in a nationally representative adult sample. Participants were 17 491 sexually active drinkers from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Logistic regression models estimated effects of BPD diagnosis, specific borderline diagnostic criteria and BPD criterion count on the likelihood of regularly (mostly or always) drinking alcohol before sex, adjusted for controls. Borderline personality disorder diagnosis doubled the odds of regularly drinking before sex [adjusted odds ratio (AOR) = 2.26; confidence interval (CI) = 1.63, 3.14]. Of nine diagnostic criteria, impulsivity in areas that are self-damaging remained a significant predictor of regularly drinking before sex (AOR = 1.82; CI = 1.42, 2.35). The odds of regularly drinking before sex increased by 20% for each endorsed criterion (AOR = 1.20; CI = 1.14, 1.27). DISCUSSION AND CONCLUSIONS: This is the first study to examine the relationship between BPD and regularly drinking alcohol before sex in the USA. Substance misuse treatment should assess regularly drinking before sex, particularly among patients with BPD, and BPD treatment should assess risk at the intersection of impulsivity, sexual behaviour and substance use. [Thompson Jr RG, Eaton NR, Hu M-C, Hasin DS Borderline personality disorder and regularly drinking alcohol

  8. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp

  9. Linear deflectometry - Regularization and experimental design [Lineare deflektometrie - Regularisierung und experimentelles design]

    KAUST Repository

    Balzer, Jonathan

    2011-01-01

    Specular surfaces can be measured with deflectometric methods. The solutions form a one-parameter family whose properties are discussed in this paper. We show in theory and experiment that the shape sensitivity of solutions decreases with growing distance from the optical center of the imaging component of the sensor system, and we propose a novel regularization strategy. Recommendations for the construction of a measurement setup aim to benefit this strategy as well as the contrary standard approach of regularization by specular stereo. © Oldenbourg Wissenschaftsverlag.

  10. Classification of normal and abnormal images of lung cancer

    Science.gov (United States)

    Bhatnagar, Divyesh; Tiwari, Amit Kumar; Vijayarajan, V.; Krishnamoorthy, A.

    2017-11-01

    It is difficult to find the exact symptoms of lung cancer because of the formation of the cancerous tissues, in which large structures of tissue intersect in different ways. This problem can be evaluated with the help of digital images. In this strategy, images are examined with the basic operations of the PCA algorithm. In this paper, the GLCM method is used for pre-processing of the images and for the feature extraction step, to test the disease level of a patient in its premature stage and determine whether it is normal or abnormal. From this result, the stage of the cancer is evaluated. With the help of the dataset and the result, the survival rate of a cancer patient can be estimated. The result is based on the correct and incorrect arrangement of the tissue patterns.
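
    The GLCM feature extraction step mentioned above can be illustrated with scikit-image. This is a generic sketch, not the paper's pipeline: the distances, angles, and chosen properties are assumptions, and the spelling `graycomatrix` assumes scikit-image >= 0.19.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(patch):
        """Texture descriptors from a gray-level co-occurrence matrix.

        `patch` is a 2-D uint8 array (e.g., a candidate lung region).
        """
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}
    ```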

  11. Interactive facades analysis and synthesis of semi-regular facades

    KAUST Repository

    AlHalawani, Sawsan; Yang, Yongliang; Liu, Han; Mitra, Niloy J.

    2013-01-01

    Urban facades regularly contain interesting variations due to allowed deformations of repeated elements (e.g., windows in different open or close positions) posing challenges to state-of-the-art facade analysis algorithms. We propose a semi-automatic framework to recover both repetition patterns of the elements and their individual deformation parameters to produce a factored facade representation. Such a representation enables a range of applications including interactive facade images, improved multi-view stereo reconstruction, facade-level change detection, and novel image editing possibilities. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  12. Interactive facades analysis and synthesis of semi-regular facades

    KAUST Repository

    AlHalawani, Sawsan

    2013-05-01

    Urban facades regularly contain interesting variations due to allowed deformations of repeated elements (e.g., windows in different open or close positions) posing challenges to state-of-the-art facade analysis algorithms. We propose a semi-automatic framework to recover both repetition patterns of the elements and their individual deformation parameters to produce a factored facade representation. Such a representation enables a range of applications including interactive facade images, improved multi-view stereo reconstruction, facade-level change detection, and novel image editing possibilities. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  13. A tri-modal molecular imaging agent for sentinel lymph node mapping

    International Nuclear Information System (INIS)

    Qin, Zhengtao; Hoh, Carl K.; Hall, David J.; Vera, David R.

    2015-01-01

    Introduction: We report an “instant kit” method to radiolabel fluorescent-tilmanocept with 68Ga and 99mTc for tri-modal molecular imaging of sentinel lymph nodes (SLNs). Methods: Solutions of sodium acetate, 68GaCl3 and Na99mTcO4 were added successively to a “kit vial” containing lyophilized 800CW-tilmanocept, SnCl2, trehalose and ascorbic acid. After a 30-min incubation, the pH was neutralized with PBS. No purification was required. Radiochemical and fluorescence purity was measured by HPLC and ITLC techniques. In vitro stability was measured by standing gel chromatography (SGC) and ITLC via a 100-fold dilution 0.25 h after radiolabeling. In vivo stability was measured by SGC and ITLC after an 11-h incubation in human plasma. A dose (0.1 nmol, ~1 MBq 68Ga, ~25 MBq 99mTc) was injected into the footpad of 4 mice. Popliteal SLNs were imaged by PET and fluorescence imaging systems at 0.5, 24, 48 and 72 h, then excised and assayed for 99mTc. Results: Radiochemical and fluorescent purity exceeded 98%. The in vitro stability assay demonstrated high irreversibility of both radiolabels and the fluorescent label, and the in vivo stability assay demonstrated high stability of the technetium and fluorescent labels against plasma metabolism. Popliteal SLNs were identified by PET and fluorescence imaging within 0.5 h of injection. SLN fluorescence intensity remained constant for 72 h, at which point ~1% of the injected dose resided in the SLN. Conclusions: Fluorescent-labeled tilmanocept can be radiolabeled with 68Ga and 99mTc by the sequential addition of each generator eluate to a lyophilized kit. The resulting tri-modal agent provides PET images for pre-operative SLN mapping, fluorescence imaging up to 72 hours after injection, and quantitative radiometric measurement of SLN accumulation after excision.

  14. Accelerated whole brain intracranial vessel wall imaging using black blood fast spin echo with compressed sensing (CS-SPACE).

    Science.gov (United States)

    Zhu, Chengcheng; Tian, Bing; Chen, Luguang; Eisenmenger, Laura; Raithel, Esther; Forman, Christoph; Ahn, Sinyeob; Laub, Gerhard; Liu, Qi; Lu, Jianping; Liu, Jing; Hess, Christopher; Saloner, David

    2018-06-01

    Develop and optimize an accelerated, high-resolution (0.5 mm isotropic) 3D black blood MRI technique to reduce scan time for whole-brain intracranial vessel wall imaging. A 3D accelerated T1-weighted fast-spin-echo prototype sequence using compressed sensing (CS-SPACE) was developed at 3T. Both the acquisition parameters [echo train length (ETL), under-sampling factor] and reconstruction parameters (regularization parameter, number of iterations) were first optimized in 5 healthy volunteers. Ten patients with a variety of intracranial vascular disease presentations (aneurysm, atherosclerosis, dissection, vasculitis) were imaged with SPACE and the optimized CS-SPACE, pre- and post-Gd contrast. Lumen/wall area, wall-to-lumen contrast ratio (CR), enhancement ratio (ER), sharpness, and qualitative scores (1-4) by two radiologists were recorded. The optimized CS-SPACE protocol has ETL 60, 20% k-space under-sampling, and a 0.002 regularization factor with 20 iterations. In patient studies, CS-SPACE and conventional SPACE had comparable image scores both pre- (3.35 ± 0.85 vs. 3.54 ± 0.65, p = 0.13) and post-contrast (3.72 ± 0.58 vs. 3.53 ± 0.57, p = 0.15), but the CS-SPACE acquisition was 37% faster (6:48 vs. 10:50). CS-SPACE agreed with SPACE for lumen/wall area, ER measurements and sharpness, but marginally reduced the CR. In the evaluation of intracranial vascular disease, CS-SPACE provides a substantial reduction in scan time compared to conventional T1-weighted SPACE while maintaining good image quality.

  15. Noise suppression for dual-energy CT via penalized weighted least-square optimization with similarity-based regularization

    Energy Technology Data Exchange (ETDEWEB)

    Harms, Joseph; Wang, Tonghe; Petrongolo, Michael; Zhu, Lei, E-mail: leizhu@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Niu, Tianye [Sir Run Run Shaw Hospital, Zhejiang University School of Medicine (China); Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, 310016 (China)

    2016-05-15

    Purpose: Dual-energy CT (DECT) expands the applications of CT imaging through its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits the quantitative use of DECT. The authors' group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate its similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign a high/low similarity value to a neighboring pixel if its CT value is close to/far from the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix by the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient. Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains a spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% reduction in noise
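
    The similarity-matrix construction described in the Methods can be sketched as a Gaussian-weighted neighborhood average. The following toy version illustrates the idea on a single image; the radius and Gaussian width are illustrative, not the paper's values, and the real method applies the averaged high/low-energy similarity matrix inside a PWLS objective:

    ```python
    import numpy as np

    def similarity_smooth(img, radius=2, sigma=20.0):
        """Gaussian-weighted averaging of neighbors with similar CT values,
        i.e., the action of one similarity matrix on one image."""
        img = img.astype(float)
        pad = np.pad(img, radius, mode='reflect')
        out = np.zeros(img.shape, dtype=float)
        norm = np.zeros(img.shape, dtype=float)
        h, w = img.shape
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = pad[radius + dy:radius + dy + h,
                              radius + dx:radius + dx + w]
                wgt = np.exp(-(shifted - img) ** 2 / (2 * sigma ** 2))
                out += wgt * shifted
                norm += wgt
        return out / norm
    ```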

  16. Linear deflectometry - Regularization and experimental design [Lineare deflektometrie - Regularisierung und experimentelles design]

    KAUST Repository

    Balzer, Jonathan; Werling, Stefan; Beyerer, Jürgen

    2011-01-01

    distance from the optical center of the imaging component of the sensor system and propose a novel regularization strategy. Recommendations for the construction of a measurement setup aim for benefiting this strategy as well as the contrarian standard

  17. The images of scientists and science among Hebrew- and Arabic-speaking pre-service teachers in Israel

    Science.gov (United States)

    Rubin, Edna; Cohen, Ariel

    2003-07-01

    This study investigated the image of scientists held by Israeli pre-service teachers, the majority of whom were female. The population consisted of students belonging to two cultures, Hebrew-speaking and Arabic-speaking. The DAST ('Draw-a-Scientist-Test') tool and other tools, some of which were developed specifically for this research, tested the image of the scientist as perceived by the participants. It was found that the scientist is perceived as predominantly male, a physicist or a chemist, working in a laboratory typical of the eighteenth, nineteenth or early-twentieth century. Students did not differentiate between scientists and inventors. Different images were held in the two cultures. Most of the Arabic-speaking students put Classical Islamic scientists near the top of their lists and thought of the scientist as an Arab male, while the Hebrew-speaking students' image was of a typical Western male. Recommendations resulting from the findings are suggested for developing a new learning unit aimed at altering these stereotypes.

  18. Hermite regularization of the lattice Boltzmann method for open source computational aeroacoustics.

    Science.gov (United States)

    Brogi, F; Malaspinas, O; Chopard, B; Bonadonna, C

    2017-10-01

    The lattice Boltzmann method (LBM) is emerging as a powerful engineering tool for aeroacoustic computations. However, the LBM has been shown to present accuracy and stability issues in the medium-low Mach number range, which is of interest for aeroacoustic applications. Several solutions have been proposed but are often too computationally expensive, do not retain the simplicity and the advantages typical of the LBM, or are not described well enough to be usable by the community due to proprietary software policies. An original regularized collision operator is proposed, based on the expansion of Hermite polynomials, that greatly improves the accuracy and stability of the LBM without significantly altering its algorithm. The regularized LBM can be easily coupled with both non-reflective boundary conditions and a multi-level grid strategy, essential ingredients for aeroacoustic simulations. Excellent agreement was found between this approach and both experimental and numerical data on two different benchmarks: the laminar, unsteady flow past a 2D cylinder and the 3D turbulent jet. Finally, most of the aeroacoustic computations with LBM have been done with commercial software, while here the entire theoretical framework is implemented using an open source library (palabos).

  19. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available Super-resolution (SR) reconstruction techniques are capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.
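
    The fixed-point solution of the TV Euler-Lagrange equations can be illustrated in the simpler denoising setting (no motion, blur, or decimation). Below is a minimal lagged-diffusivity sketch with illustrative parameters and none of the paper's preconditioning:

    ```python
    import numpy as np

    def tv_denoise_fixed_point(f, lam=0.1, eps=1e-3, n_iters=50):
        """Lagged-diffusivity fixed-point iteration for TV denoising:
        min_u 0.5*||u - f||^2 + lam*TV(u)."""
        u = f.astype(float).copy()
        for _ in range(n_iters):
            ux = np.gradient(u, axis=1)
            uy = np.gradient(u, axis=0)
            mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)   # smoothed |grad u|
            div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
            u = f + lam * div                             # fixed-point update
        return u
    ```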

  20. Aespoe HRL - Geoscientific evaluation 1997/3. Results from pre-investigation and detailed site characterization. Comparison of predictions and observations. Geology and mechanical stability

    Energy Technology Data Exchange (ETDEWEB)

    Stanfors, R. [RS Consulting, Lund (Sweden); Olsson, Paer [Skanska AB Stockholm (Sweden); Stille, H. [Royal Inst. of Tech., Stockholm (Sweden)

    1997-05-01

    Prior to excavation of the laboratory in 1990, predictions were made for the excavation phase. The predictions concern five key issues: geology, groundwater flow, groundwater chemistry, transport of solutes, and mechanical stability. Comparisons between predictions and observations were made during excavation in order to verify the reliability of the pre-investigations. This report presents a comparison between the geological and mechanical stability predictions and observations, together with an evaluation of the data and investigation methods used for the 700-2874 m section of the tunnel. The report highlights the following conclusions: It is possible to localize major fracture zones during the pre-investigation at shallow (<200 m) depths. A number of minor fracture zones striking NNW-NNE were predicted to be hydraulically important and to penetrate the southern area; a number of narrow fracture zone indications, 0.1-1 m wide, striking WNW-NE were mapped in the tunnel, and pre-grouted sections confirm hydraulic conductors. It has not been possible to confirm the gently dipping zone EW-5, which was predicted as 'possible', with data from the tunnel. Predictions of the amounts of different rock types were generally reliable as regards the major rock types, but predictions of the distribution in space were poor as regards the minor rock types. The prediction of rock stress orientation corresponds well to the outcome. The prediction of rock quality for the tunnel, using the RMR system, shows good correspondence to the observations made in the tunnel. 59 refs, 51 figs, 21 tabs.

  1. Sparsity-regularized HMAX for visual recognition.

    Directory of Open Access Journals (Sweden)

    Xiaolin Hu

    Full Text Available About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address the global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in the human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.

  2. Regularity in the changes of the thermodynamic functions associated with the formation of mononuclear complexes

    International Nuclear Information System (INIS)

    Mihailov, M.H.; Mihailova, V.T.; Strezov, A.S.; Taskaeva, M.I.

    1979-01-01

    Regularities in the changes of the free energy ΔG, enthalpy ΔH and entropy ΔS associated with complex formation in metal-ligand systems have been derived for systems whose stability constants of the consecutive mononuclear complexes ML, ML2, ML3, ML4, ..., MLn satisfy the relation βn = A·a^n/n (n = 1, 2, 3, ..., N), where βn is the overall stability constant of the MLn complex and n is the number of ligands.

  3. Regularization of DT-MRI Using 3D Median Filtering Methods

    Directory of Open Access Journals (Sweden)

    Soondong Kwon

    2014-01-01

    Full Text Available DT-MRI (diffusion tensor magnetic resonance imaging) tractography is a method to determine the architecture of axonal fibers in the central nervous system by computing the direction of the principal eigenvectors obtained from the tensor matrix, which is different from conventional isotropic MRI. Tractography based on DT-MRI is known to require many computations and is highly sensitive to noise. Hence, adequate regularization methods, such as image processing techniques, are in demand. Among the many regularization methods, we are interested in the median filtering method. In this paper, we extended two-dimensional median filters already developed to three-dimensional median filters. We compared four median filtering methods: the two-dimensional simple median method (SM2D), the two-dimensional successive Fermat method (SF2D), the three-dimensional simple median method (SM3D), and the three-dimensional successive Fermat method (SF3D). Three kinds of synthetic data with different altitude angles from axial slices and one kind of human data from an MR scanner are considered for numerical implementation by the four filtering methods.
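
    For the simplest of the variants discussed, a 3D median filter is essentially a one-liner with SciPy. A minimal sketch (the kernel size and the synthetic volume are illustrative; the paper's SM3D/SF3D methods differ in their neighborhood handling):

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    volume = np.random.rand(64, 64, 30)        # stand-in 3D volume
    smoothed = median_filter(volume, size=3)   # 3x3x3 neighborhood median
    ```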

  4. WE-FG-207B-03: Multi-Energy CT Reconstruction with Spatial Spectral Nonlocal Means Regularization

    Energy Technology Data Exchange (ETDEWEB)

    Li, B [University of Texas Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou, Guangdong (China); Shen, C; Ouyang, L; Yang, M; Jiang, S; Jia, X [University of Texas Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou, Guangdong (China)

    2016-06-15

    Purpose: Multi-energy computed tomography (MECT) is an emerging application in medical imaging due to its ability to differentiate materials and its potential for molecular imaging. In MECT, image correlations exist across spatial locations and energy channels. It is desirable to incorporate these correlations in reconstruction to improve image quality. For this purpose, this study proposes a MECT reconstruction technique that employs spatial spectral non-local means (ssNLM) regularization. Methods: We consider a kVp-switching scanning method in which the source energy is rapidly switched during data acquisition. For each energy channel, this yields projection data acquired at a number of angles, whereas projection angles differ among channels. We formulate the reconstruction task as an optimization problem. A least square term enforces data fidelity. An ssNLM term is used as regularization to encourage similarities among image patches at different spatial locations and channels. When comparing image patches at different channels, intensity differences were corrected by a transformation estimated via histogram equalization during the reconstruction process. Results: We tested our method in a simulation study with an NCAT phantom and an experimental study with a Gammex phantom. For comparison, we also performed reconstructions using the conjugate-gradient least square (CGLS) method and a conventional NLM method that only considers spatial correlation in an image. ssNLM is able to better suppress streak artifacts. The streaks are along different projection directions in images at different channels; ssNLM discourages this dissimilarity and hence removes them, while true image structures are preserved. Measurements in regions of interest yield 1.1 to 3.2 and 1.5 to 1.8 times higher contrast-to-noise ratio than the NLM approach. The improvement over CGLS is even more pronounced due to the lack of regularization in the CGLS method and hence amplified noise. Conclusion: The

  5. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    Science.gov (United States)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping (LDDMM) problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle-Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.

  6. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Full Text Available Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, combined with iteratively reweighted algorithms. To further exploit the sparse structure of signals and images, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm, and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but also obtains better recovery performance and faster convergence than the conventional single-dictionary sparse-transform-based Lp case. Moreover, we conduct some applications to sparse image recovery and obtain good results in comparison with related work.

  7. An entropy regularization method applied to the identification of wave distribution function for an ELF hiss event

    Science.gov (United States)

    Prot, Olivier; Santolík, Ondřej; Trotignon, Jean-Gabriel; Deferaudy, Hervé

    2006-06-01

    An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that have already been analyzed using other inversion techniques. The FREJA satellite data used consist of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and requires no prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The generalized cross validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantages of the ERA are that it returns the WDF exhibiting the largest entropy and that it avoids the use of a priori models, which sometimes seem to be more accurate but without any justification.

  8. A Dual-Purpose Linker for Alpha Helix Stabilization and Imaging Agent Conjugation to Glucagon-Like Peptide-1 Receptor Ligands

    Science.gov (United States)

    Zhang, Liang; Navaratna, Tejas; Liao, Jianshan; Thurber, Greg M.

    2016-01-01

    Peptides display many characteristics of efficient imaging agents such as rapid targeting, fast background clearance, and low non-specific cellular uptake. However, poor stability, low affinity, and loss of binding after labeling often preclude their use in vivo. Using the glucagon-like peptide-1 receptor (GLP-1R) ligands exendin and GLP-1 as a model system, we designed a novel alpha helix stabilizing linker to simultaneously address these limitations. The stabilized and labeled peptides showed an increase in helicity, improved protease resistance, negligible loss or an improvement in binding affinity, and excellent in vivo targeting. The ease of incorporating azidohomoalanine in peptides and efficient reaction with the dialkyne linker enables this technique to potentially be used as a general method for labeling alpha helices. This strategy should be useful for imaging beta cells in diabetes research and in developing and testing other peptide targeting agents. PMID:25594741

  9. Regularization of the double period method for experimental data processing

    Science.gov (United States)

    Belov, A. A.; Kalitkin, N. N.

    2017-11-01

    In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, in which success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, the spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a way convenient for gasdynamic codes. These approximations are superior to previously known formulas in the covered temperature range and accuracy.
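
    The core construction, Tikhonov regularization with a squared-second-derivative stabilizer, can be sketched in a few lines. This is a generic smoother in the spirit of the abstract, not the authors' double period method:

    ```python
    import numpy as np

    def smooth_curve(y, lam):
        """Smooth noisy samples y by solving
        min_x ||x - y||^2 + lam * ||D2 x||^2 in closed form."""
        n = len(y)
        D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second differences
        return np.linalg.solve(np.eye(n) + lam * (D2.T @ D2), y)
    ```

    Larger values of lam suppress spurious oscillations more strongly at the cost of flattening genuine features, which is the trade-off the abstract's stabilizer manages.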

  10. Thermodynamical stability of the Bardeen black hole

    Energy Technology Data Exchange (ETDEWEB)

    Bretón, Nora [Dpto. de Física, Centro de Investigación y de Estudios Avanzados del I. P. N., Apdo. 14-740, D.F. (Mexico); Perez Bergliaffa, Santiago E. [Dpto. de Física, U. Estado do Rio de Janeiro (Brazil)

    2014-01-14

    We analyze the stability of the regular magnetic Bardeen black hole both thermodynamically and dynamically. For the thermodynamical analysis we consider a microcanonical ensemble and apply the turning point method. This method makes it possible to detect a change in the stability (or instability) of a system, requiring only the assumption of smoothness of the area functional. The dynamical stability is asserted using criteria based on the signs of the Lagrangian and its derivatives. Our analysis shows that the Bardeen black hole is both thermodynamically and dynamically stable.

  11. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR codes encode many kinds of information because of their advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size and highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper researches pre-processing methods for QR (Quick Response) codes and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive thresholding method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
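
    The adaptive binarization step can be illustrated with the Sauvola thresholding available in scikit-image. This is a generic sketch with typical defaults, not the paper's modified algorithm:

    ```python
    from skimage.filters import threshold_sauvola

    def binarize_qr(image, window_size=25, k=0.2):
        """Adaptive (Sauvola) binarization of a grayscale QR-code image."""
        thresh = threshold_sauvola(image, window_size=window_size, k=k)
        return image > thresh   # True = light background, False = dark modules
    ```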

  12. Magnetic resonance imaging of olfactory neuroblastoma

    International Nuclear Information System (INIS)

    Iio, Mitsuhiro; Homma, Akihiro; Furuta, Yasushi; Fukuda, Satoshi

    2006-01-01

    Olfactory neuroblastoma is an uncommon intranasal tumor originating from the olfactory neuroepithelium. Despite the development of electron microscopy and immunohistochemical testing, the pathological diagnosis of this tumor is still difficult because of its wide range of histological features. The magnetic resonance (MR) imaging appearance of this tumor and its pattern of contrast enhancement have not been well described. The purpose of this report was to analyze the MR characteristics of olfactory neuroblastomas. The MR signal, pattern of contrast enhancement, and correlation with high-resolution computed tomography (CT) imaging were examined. Seventeen patients with olfactory neuroblastoma were treated at Hokkaido University Hospital and a related hospital during the past 25 years. MR images taken in 12 patients and CT images taken in 9 patients with histologically confirmed olfactory neuroblastoma were retrospectively reviewed. Compared with brain gray matter, 11 tumors were hypointense on T1-weighted images, 9 homogeneously and 2 heterogeneously. Eight tumors were hyperintense on T2-weighted images, 3 homogeneously and 5 heterogeneously, although their appearance was less intense than that of sinusitis. Gadolinium enhancement was moderate in one case and marked in 10 of the 11 cases, 9 homogeneously and 2 heterogeneously. Nine of the 11 tumors showed smooth, regularly shaped margins; 2 of these tumors exhibited irregular infiltrating margins on gadolinium-enhanced images compared to the pre-contrast T1-weighted images. Eight of the 11 tumors had clearly demarcated margins, while 3 of the 11 tumors did not exhibit gadolinium enhancement. Six of the 12 cases (50%) exhibited intracranial cysts on the gadolinium-enhanced images. T2-weighted or gadolinium-enhanced images successfully distinguished sinusitis from tumors in 4 cases where the CT images failed. Gadolinium enhancement, particularly in the tangential plane, demonstrated intracranial extension not apparent on the CT images

  13. Regularization parameter estimation for underdetermined problems by the χ2 principle with application to 2D focusing gravity inversion

    International Nuclear Information System (INIS)

    Vatankhah, Saeed; Ardestani, Vahid E; Renaut, Rosemary A

    2014-01-01

    The χ2 principle generalizes the Morozov discrepancy principle to the augmented residual of the Tikhonov regularized least squares problem. For weighting of the data fidelity by a known Gaussian noise distribution on the measured data, when the stabilizing, or regularization, term is considered to be weighted by unknown inverse covariance information on the model parameters, the minimum of the Tikhonov functional becomes a random variable that follows a χ2-distribution with m + p − n degrees of freedom for the model matrix G of size m × n, m ≥ n, and regularizer L of size p × n. Then, a Newton root-finding algorithm, employing the generalized singular value decomposition, or the singular value decomposition when L = I, can be used to find the regularization parameter α. Here the result and algorithm are extended to the underdetermined case, m < n. The χ2 principle and the unbiased predictive risk estimator of the regularization parameter are used for the first time in this context. For a simulated underdetermined data set with noise, these regularization parameter estimation methods, as well as the generalized cross validation method, are contrasted with the use of the L-curve and the Morozov discrepancy principle. Experiments demonstrate the efficiency and robustness of the χ2 principle and the unbiased predictive risk estimator, moreover showing that the L-curve and Morozov discrepancy principle are outperformed in general by the other three techniques. Furthermore, the minimum support stabilizer is of general use for the χ2 principle when implemented without the desirable knowledge of the mean value of the model. (paper)

  14. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations.

  15. WE-AB-202-11: Radiobiological Modeling of Tumor Response During Radiotherapy Based On Pre-Treatment Dynamic PET Imaging Data

    Energy Technology Data Exchange (ETDEWEB)

    Crispin-Ortuzar, M; Grkovski, M; Beattie, B; Lee, N; Riaz, N; Humm, J; Jeong, J; Fontanella, A; Deasy, J [Memorial Sloan Kettering Cancer Center, New York, NY (United States)

    2016-06-15

    Purpose: To evaluate the ability of a multiscale radiobiological model of tumor response to predict mid-treatment hypoxia images, based on pre-treatment imaging of perfusion and hypoxia with [18-F]FMISO dynamic PET and glucose metabolism with [18-F]FDG PET. Methods: A mechanistic tumor control probability (TCP) radiobiological model describing the interplay between tumor cell proliferation and hypoxia (Jeong et al., PMB 2013) was extended to account for intra-tumor nutrient heterogeneity, dynamic cell migration due to nutrient gradients, and stromal cells. This extended model was tested on 10 head and neck cancer patients treated with chemoradiotherapy, randomly drawn from a larger MSKCC protocol involving baseline and mid-therapy dynamic PET scans. For each voxel, initial fractions of proliferative and hypoxic tumor cells were obtained by finding an approximate solution to a system of linear equations relating cell fractions to voxel-level FDG uptake, perfusion (FMISO K1) and hypoxia (FMISO k3). The TCP model then predicted their evolution over time up until the mid-treatment scan. Finally, the linear model was reapplied to predict each lesion's median hypoxia level (k3[med,sim]), which in turn was compared to the FMISO k3[med] measured at mid-therapy. Results: The average k3[med] of the tumors in pre-treatment scans was 0.0035 min^-1, with an inter-tumor standard deviation of σ[pre] = 0.0034 min^-1. The initial simulated k3[med,sim] of each tumor agreed with the corresponding measurements within 0.1σ[pre]. In 7 out of 10 lesions, the mid-treatment k3[med,sim] prediction agreed with the data within 0.3σ[pre]. The remaining cases corresponded to the most extreme relative changes in k3[med]. Conclusion: This work presents a method to personalize the prediction of a TCP model using pre-treatment kinetic imaging data, and validates the modeling of radiotherapy response by predicting changes in median hypoxia

  16. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression...

  17. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules, needed for superconformal anomalies, are discussed. Problems associated with renormalizability and higher-order loops are also discussed.

  18. Long-Term Stability of Pre-Orthodontic Orthognathic Bimaxillary Surgery Using Intraoral Vertical Ramus Osteotomy Versus Conventional Surgery.

    Science.gov (United States)

    Jeong, Jeong-Hwa; Choi, Sung-Hwan; Kim, Kee-Deog; Hwang, Chung-Ju; Lee, Sang-Hwy; Yu, Hyung-Seog

    2018-02-20

    The aim of the present study was to compare the long-term stability of bimaxillary surgery using an intraoral vertical ramus osteotomy (IVRO) with and without presurgical orthodontic treatment. The present retrospective study included 31 consecutive patients with skeletal Class III malocclusions who had undergone bimaxillary surgery (Le Fort I osteotomy and bilateral IVRO). Patients were divided into 2 groups based on treatment type: pre-orthodontic orthognathic surgery (POGS; n = 17) and conventional surgery with presurgical orthodontic treatment (CS; n = 14). Lateral cephalograms were obtained before surgery, 1 day after surgery, 1 month after surgery, 1 year after surgery, and 2 years after surgery to evaluate skeletal and soft tissue changes between the 2 groups. Data were analyzed using χ2 tests, Mann-Whitney U tests, repeated-measures analyses of variance, and independent t tests. There was no significant difference in skeletal or soft tissue measurements between the 2 groups at 2 years after surgery, with the exception of the angle between the sella-nasion plane and the occlusal plane (SN-OP; P < .05). These findings suggest that POGS and CS have similar long-term stability in patients with skeletal Class III malocclusion. Copyright © 2018. Published by Elsevier Inc.

  19. Critical object recognition in millimeter-wave images with robustness to rotation and scale.

    Science.gov (United States)

    Mohammadzade, Hoda; Ghojogh, Benyamin; Faezi, Sina; Shabany, Mahdi

    2017-06-01

    Locating critical objects is crucial in various security applications and industries. In security applications such as airports, these objects might be hidden or covered under shields or secret sheaths. Millimeter-wave images can be utilized to discover and recognize critical objects in such hidden cases without any health risk, owing to their non-ionizing nature. However, millimeter-wave images usually have wave-like artifacts in and around the detected objects, making object recognition difficult. Thus, regular image processing and classification methods cannot be used for these images, and additional pre-processing and classification methods must be introduced. This paper proposes a novel pre-processing method for canceling rotation and scale using principal component analysis. In addition, a two-layer classification method is introduced and utilized for recognition. Moreover, a large dataset of millimeter-wave images was collected and created for the experiments. Experimental results show that a typical classification method such as support vector machines recognizes only 45.5% of one type of critical object at a 34.2% false alarm rate (FAR), which is drastically poor recognition performance. The same method within the proposed recognition framework achieves a 92.9% recognition rate at 0.43% FAR, a highly significant improvement. The significant contribution of this work is to introduce a new method for analyzing millimeter-wave images based on machine vision and learning approaches, which is not yet widely noted in the field of millimeter-wave image analysis.

  20. Object Tracking via 2DPCA and l2-Regularization

    Directory of Open Access Journals (Sweden)

    Haijun Wang

    2016-01-01

    Full Text Available We present a fast and robust object tracking algorithm using 2DPCA and l2-regularization in a Bayesian inference framework. Firstly, we model the challenging appearance of the tracked object using 2DPCA bases, which exploit the strength of subspace representation. Secondly, we adopt l2-regularization to solve the proposed representation model and remove the trivial templates used in sparse tracking methods, which yields faster tracking. Finally, we present a novel likelihood function that considers the reconstruction error computed from the orthogonal left-projection matrix and the orthogonal right-projection matrix. Experimental results on several challenging image sequences demonstrate that the proposed method achieves favorable performance against state-of-the-art tracking algorithms.
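
    For concreteness, the l2-regularized coding step has a closed-form ridge solution, which is the source of the speed advantage over l1-based sparse coding; a minimal sketch (B and y are hypothetical placeholders for a learned basis and a vectorized candidate patch):

      import numpy as np

      def ridge_code(B, y, lam=0.01):
          """Solve min_a ||y - B a||^2 + lam * ||a||^2 in closed form.

          B : (d, k) matrix whose columns span the learned appearance subspace.
          y : (d,) vectorized candidate region.
          """
          k = B.shape[1]
          a = np.linalg.solve(B.T @ B + lam * np.eye(k), B.T @ y)
          residual = y - B @ a      # reconstruction error drives the likelihood
          return a, float(residual @ residual)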

  1. Functional-analytic and numerical issues in splitting methods for total variation-based image reconstruction

    International Nuclear Information System (INIS)

    Hintermüller, Michael; Rautenberg, Carlos N; Hahn, Jooyoung

    2014-01-01

    Variable splitting schemes for the function space version of the image reconstruction problem with total variation regularization (TV-problem) in its primal and pre-dual formulations are considered. For the primal splitting formulation, while existence of a solution cannot be guaranteed, it is shown that quasi-minimizers of the penalized problem are asymptotically related to the solution of the original TV-problem. On the other hand, for the pre-dual formulation, a family of parametrized problems is introduced and a parameter dependent contraction of an associated fixed point iteration is established. Moreover, the theory is validated by numerical tests. Additionally, the augmented Lagrangian approach is studied, details on an implementation on a staggered grid are provided and numerical tests are shown. (paper)
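
    To give a concrete, finite-dimensional sense of the splitting idea (a generic sketch, not the paper's function-space scheme): introduce an auxiliary variable d for the gradient of u so that the TV term decouples into an elementwise shrinkage, and solve the remaining quadratic u-update exactly with FFTs under periodic boundary conditions:

      import numpy as np

      def tv_denoise_split(f, lam=0.1, rho=1.0, iters=100):
          # Splitting for min_u 0.5||u - f||^2 + lam*||grad u||_1 (anisotropic TV).
          grad = lambda u: (np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u)
          # Adjoint of grad (a discrete divergence up to sign).
          gradT = lambda px, py: (np.roll(px, 1, 0) - px) + (np.roll(py, 1, 1) - py)
          shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

          # Fourier eigenvalues of grad^T grad, to invert (I + rho*grad^T grad).
          n0, n1 = f.shape
          w0 = 2.0 * np.cos(2 * np.pi * np.arange(n0) / n0).reshape(-1, 1)
          w1 = 2.0 * np.cos(2 * np.pi * np.arange(n1) / n1).reshape(1, -1)
          denom = 1.0 + rho * (4.0 - w0 - w1)

          u = f.copy()
          dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
          for _ in range(iters):
              # u-update: exact solve of the quadratic subproblem in Fourier space.
              rhs = f + rho * gradT(dx - bx, dy - by)
              u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
              # d-update: elementwise soft shrinkage of the lifted gradients.
              gx, gy = grad(u)
              dx, dy = shrink(gx + bx, lam / rho), shrink(gy + by, lam / rho)
              # Bregman (scaled dual) update enforcing d = grad u.
              bx, by = bx + gx - dx, by + gy - dy
          return u

    The alternation of an exact quadratic solve with a cheap shrinkage is the pattern shared by the penalty and augmented Lagrangian variants discussed in the paper.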

  2. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well-known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization by external variables.

  3. The role of group brain checkups using magnetic resonance imaging in pre-elderly with hypertension

    International Nuclear Information System (INIS)

    Kanai-Iwai, Eri; Ogawa, Fumiaki; Nakagawa, Masanori; Nishimura, Tsunehiko; Matsubara, Hiroaki; Naruse, Shoji

    2006-01-01

    Care of elderly people is an important socio-economic issue for industrial nations. Stroke is the leading cause of elderly care, and its major risk factors are silent brain infarcts and metabolic disorders such as hypertension. Recent progress in brain imaging techniques has enabled early detection of cerebrovascular disease. However, brain imaging of numerous patients is not feasible because the test is time consuming and costly. Furthermore, the epidemiology of silent cerebrovascular disease in hypertensive elderly people is not well known. Thus, the present study aims to establish whether group brain check-ups are effective and to assess the incidence of silent cerebrovascular disease in pre-elderly individuals with hypertension. We randomly enrolled 224 participants, aged 50 to 65 years old, with hypertension detected during routine medical check-ups. All participants were free of neurological symptoms or dementia as determined by the Mini Mental Status Examination. MRI was carried out by a simplified protocol of fast spin echo (FSE) T2-weighted imaging (T2WI) and 3D time-of-flight (TOF) MRA on a Toshiba VISART 1.5T scanner, and images were read by 3 radiologists. Each imaging test required only 10 minutes, and the cost was reduced to about 40-80% of the usual cost of brain MRI. The detection rate of abnormal findings was 77.6% (n=174) and that of cerebrovascular changes was 70.1% (n=159; 102 lacunae, 64 intracranial artery stenoses, and 27 cerebral aneurysms), which was much higher than previously reported in a study of random participants. In addition, follow-up questionnaires after the brain check-ups revealed that 86% of participants improved their awareness of a health-related lifestyle. These findings indicate that the pre-elderly population with hypertension is at high risk for silent cerebrovascular disease, and mass screening of this group using our simplified MRI protocol may be an effective medical health strategy in an aging society. (author)

  4. On convergence rates for iteratively regularized procedures with linear penalty terms

    International Nuclear Information System (INIS)

    Smirnova, Alexandra

    2012-01-01

    The impact of this paper is twofold. First, we study convergence rates of the iteratively regularized Gauss–Newton (IRGN) algorithm with a linear penalty term under a generalized source assumption and show how the regularizing properties of new iterations depend on the solution smoothness. Secondly, we introduce an adaptive IRGN procedure, which is investigated under a relaxed smoothness condition. The introduction and analysis of a more general penalty term are of great importance since, apart from bringing stability to the numerical scheme designed for solving a large class of applied inverse problems, it allows us to incorporate various types of a priori information available on the model. Both a priori and a posteriori stopping rules are investigated. For the a priori stopping rule, optimal convergence rates are derived. A numerical example illustrating convergence rates is considered. (paper)
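
    For orientation, a standard textbook form of the iteratively regularized Gauss-Newton step for F(x) = y with a linear penalty operator L (a generic sketch; the paper's exact scheme and source conditions may differ):

      x_{k+1} = x_k - (F'(x_k)^* F'(x_k) + \alpha_k L^* L)^{-1} [ F'(x_k)^* (F(x_k) - y^\delta) + \alpha_k L^* L (x_k - x_0) ],

    where y^\delta denotes the noisy data, x_0 encodes the a priori information carried by the penalty, and \alpha_k \downarrow 0; a generalized source condition then ties the smoothness of x^\dagger - x_0 to the attainable convergence rate.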

  5. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    Full Text Available In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.

  6. Pediatric CT: implementation of ASIR for substantial radiation dose reduction while maintaining pre-ASIR image noise.

    Science.gov (United States)

    Brady, Samuel L; Moore, Bria M; Yee, Brian S; Kaufman, Robert A

    2014-01-01

    To determine a comprehensive method for the implementation of adaptive statistical iterative reconstruction (ASIR) for maximal radiation dose reduction in pediatric computed tomography (CT) without changing the magnitude of noise in the reconstructed image or the contrast-to-noise ratio (CNR) in the patient. The institutional review board waived the need to obtain informed consent for this HIPAA-compliant quality analysis. Chest and abdominopelvic CT images obtained before ASIR implementation (183 patient examinations; mean patient age, 8.8 years ± 6.2 [standard deviation]; range, 1 month to 27 years) were analyzed for image noise and CNR. These measurements were used in conjunction with noise models derived from anthropomorphic phantoms to establish new beam current-modulated CT parameters to implement 40% ASIR at 120 and 100 kVp without changing noise texture or magnitude. Image noise was assessed in images obtained after ASIR implementation (492 patient examinations; mean patient age, 7.6 years ± 5.4; range, 2 months to 28 years) the same way it was assessed in the pre-ASIR analysis. Dose reduction was determined by comparing size-specific dose estimates in the pre- and post-ASIR patient cohorts. Data were analyzed with paired t tests. With 40% ASIR implementation, the average relative dose reduction for chest CT was 39% (2.7/4.4 mGy), with a maximum reduction of 72% (5.3/18.8 mGy). The average relative dose reduction for abdominopelvic CT was 29% (4.8/6.8 mGy), with a maximum reduction of 64% (7.6/20.9 mGy). Beam current modulation was unnecessary for patients weighing 40 kg or less. The difference between 0% and 40% ASIR noise magnitude was less than 1 HU, with statistically nonsignificant increases in patient CNR at 100 kVp of 8% (15.3/14.2; P = .41) for chest CT and 13% (7.8/6.8; P = .40) for abdominopelvic CT. Radiation dose reduction at pediatric CT was thus achieved when 40% ASIR was implemented as a dose reduction tool only, with no net change to the magnitude of image noise.

  7. Canards in stiction: on solutions of a friction oscillator by regularization

    DEFF Research Database (Denmark)

    Bossolini, Elena; Brøns, Morten; Kristiansen, Kristian Uldall

    2017-01-01

    We study the solutions of a friction oscillator subject to stiction. This discontinuous model is non-Filippov, and the concept of Filippov solution cannot be used. Furthermore, some Carathéodory solutions are unphysical. Therefore we introduce the concept of stiction solutions: these are the Carathéodory solutions that are physically relevant. ... We show that this family has a saddle stability and that it connects, in the rigid body limit, the two regular, slip-stick branches of the discontinuous problem, which were otherwise disconnected.

  8. Color correction optimization with hue regularization

    Science.gov (United States)

    Zhang, Heng; Liu, Huaping; Quan, Shuxue

    2011-01-01

    Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
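
    To make the basic pipeline concrete (an illustrative sketch, not the authors' method; fit_ccm and the patch arrays are hypothetical), the matrix is a least-squares fit between measured and target patch colors, and a hue-oriented regularization can be approximated by up-weighting the memory-color patches whose rendition must not drift:

      import numpy as np

      def fit_ccm(measured, target, weights=None):
          """Fit a 3x3 color correction matrix M minimizing sum_i w_i ||M s_i - t_i||^2.

          measured, target : (N, 3) arrays of linear RGB patch values.
          weights          : (N,) emphasis, e.g. larger for skin/grass/sky
                             patches so their colors are reproduced preferentially.
          """
          w = np.ones(len(measured)) if weights is None else np.asarray(weights, float)
          sw = np.sqrt(w)[:, None]
          # Weighted least squares: each row is scaled by sqrt(w_i).
          M, *_ = np.linalg.lstsq(sw * measured, sw * target, rcond=None)
          return M.T  # so that corrected = M @ s

      # Hypothetical usage: triple the weight on memory-color patches.
      # weights = np.where(is_memory_color, 3.0, 1.0)

    A fuller implementation in the spirit of the paper would penalize hue shifts directly in a perceptual color space rather than through sample weights alone.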

  9. Gravitational Quasinormal Modes of Regular Phantom Black Hole

    Directory of Open Access Journals (Sweden)

    Jin Li

    2017-01-01

    Full Text Available We investigate the gravitational quasinormal modes (QNMs) for a type of regular black hole (BH) known as the phantom BH, which is a static self-gravitating solution of a minimally coupled phantom scalar field with a potential. The studies are carried out for three different spacetimes: asymptotically flat, de Sitter (dS), and anti-de Sitter (AdS). In order to consider the standard odd parity and even parity of gravitational perturbations, the corresponding master equations are derived. The QNMs are discussed by evaluating the temporal evolution of the perturbation field which, in turn, provides direct information on the stability of the BH spacetime. It is found that in asymptotically flat, dS, and AdS spacetimes the gravitational perturbations have similar characteristics for both odd and even parities. The decay rate of the perturbation is strongly dependent on the scale parameter b, which measures the coupling strength between the phantom scalar field and gravity. Furthermore, through the analysis of Hawking radiation, it is shown that the thermodynamics of such a regular phantom BH is also influenced by b. The obtained results might shed some light on the quantum interpretation of QNM perturbation.

  10. Dynamic MRI Using SmooThness Regularization on Manifolds (SToRM).

    Science.gov (United States)

    Poddar, Sunrita; Jacob, Mathews

    2016-04-01

    We introduce a novel algorithm to recover real time dynamic MR images from highly under-sampled k-t space measurements. The proposed scheme models the images in the dynamic dataset as points on a smooth, low dimensional manifold in high dimensional space. We propose to exploit the non-linear and non-local redundancies in the dataset by posing its recovery as a manifold smoothness regularized optimization problem. A navigator acquisition scheme is used to determine the structure of the manifold, or equivalently the associated graph Laplacian matrix. The estimated Laplacian matrix is used to recover the dataset from undersampled measurements. The utility of the proposed scheme is demonstrated by comparisons with state of the art methods in multi-slice real-time cardiac and speech imaging applications.
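
    Schematically, the manifold-smoothness prior enters as a graph-Laplacian quadratic penalty on the matrix X whose columns are the image frames; a toy sketch with a hypothetical forward operator (the functions A and At are placeholders, and the factor conventions are illustrative rather than the authors' implementation):

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      def storm_like_recover(A, At, b, L, shape, lam=1e-2, maxiter=200):
          """min_X ||A(X) - b||^2 + lam * tr(X L X^T), X of size (pixels, frames).

          A, At : forward operator and its adjoint, acting on X as functions.
          L     : (frames, frames) graph Laplacian estimated from navigator data.
          """
          npix, nfr = shape

          def normal_op(xvec):
              X = xvec.reshape(npix, nfr)
              # Normal equations: At(A(X)) plus the Laplacian smoothness term.
              return (At(A(X)) + lam * (X @ L)).ravel()

          op = LinearOperator((npix * nfr, npix * nfr), matvec=normal_op,
                              dtype=np.float64)
          sol, info = cg(op, At(b).ravel(), maxiter=maxiter)
          return sol.reshape(npix, nfr)

    Since L is symmetric positive semidefinite, the regularized normal operator stays symmetric positive definite, so conjugate gradients apply directly.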

  11. Hierarchical pre-segmentation without prior knowledge

    NARCIS (Netherlands)

    Kuijper, A.; Florack, L.M.J.

    2001-01-01

    A new method to pre-segment images by means of a hierarchical description is proposed. This description is obtained from an investigation of the deep structure of a scale space image – the input image and the Gaussian filtered ones simultaneously. We concentrate on scale space critical points –

  12. Universal Regularizers For Robust Sparse Coding and Modeling

    OpenAIRE

    Ramirez, Ignacio; Sapiro, Guillermo

    2010-01-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding...

  13. Novel approaches to tumor imaging in mice: pre targeting with radiolabeled peptide nucleic acid

    International Nuclear Information System (INIS)

    Hnatowich, D.J.; Qu, T.; Chang, F.; Rusckowski, M.

    1997-01-01

    Full text: Since targeting of tumour by conventional methods is not consistently favorable, we have considered pre-targeting with separate administrations of anti-tumour antibody and radiolabel. As an alternative to streptavidin and biotin for this application, we earlier considered single-stranded peptide nucleic acid (PNA) bound to an irrelevant protein administered first and allowed to diffuse non-specifically into tumour. This was followed later by the administration of 99mTc-labeled complementary PNA. We now report on the first studies with a PNA-conjugated anti-tumour antibody to allow specific binding. PNA was conjugated to the NRLU-10 IgG antibody, while the complementary PNA (amine derivatized) was labeled with 99mTc using MAG3. LS174T tumour-bearing nude mice received IV 200 ug of the PNA-antibody conjugate and, 20 h later, received IV 100 ug (130 uCi) of 99mTc-complementary PNA. Animals were imaged and sacrificed 5 h later. Because of rapid clearance, at sacrifice all tissue levels of 99mTc were low, the highest being kidneys at about 4%ID/gm. Tumour uptake was 0.55%ID/gm for the study animals vs. 0.13 for controls, and tumour/muscle ratios were 9.8 vs. 3.6, respectively. These values represent a 2.5-fold improvement in localization over the nonspecific study. The whole-body images also reflected the superior targeting of study vs. control animals. We conclude that single-stranded PNAs should be a useful alternative to streptavidin and biotin for pre-targeting studies.

  14. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  15. Magnetic resonance imaging pre- and postoperative evaluation of tetralogy of Fallot; Avaliacao pre e pos-operatoria da tetralogia de Fallot por ressonancia magnetica

    Energy Technology Data Exchange (ETDEWEB)

    Bernardes, Renata Junqueira Moll; Simoes, Luiz Carlos [Instituto Nacional de Cardiologia (INC), Rio de Janeiro, RJ (Brazil). Servico de Cardiologia da Crianca e do Adolescente; Marchiori, Edson [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Faculdade de Medicina. Dept. de Radiologia]. E-mail: edmarchiori@bol.com.br; Bernardes, Paulo Manuel de Barros; Gonzaga, Maria Beatriz Albano Monzo [Rede Labs/D' Or, Rio de Janeiro, RJ (Brazil)

    2004-08-01

    The purpose of this study was to assess the usefulness of magnetic resonance imaging (MRI) in the pre- and postoperative evaluation of patients with tetralogy of Fallot. Twenty patients aged 1 to 29 years were prospectively evaluated with black-blood and contrast-enhanced angiographic techniques, 11 with the classic form of tetralogy of Fallot and 9 with tetralogy of Fallot and pulmonary atresia. MRI studies provided adequate visualization of the aorta that was classified as dilated or not dilated, and definition of its position in all cases. The use of contrast-enhanced MR angiographic techniques provided excellent imaging of the main right and left pulmonary arteries. The results suggest that MRI, including contrast-enhanced angiography techniques, is a useful tool in the evaluation of patients with tetralogy of Fallot before and after cardiac surgery since it provides important anatomical information that is not always obtained with echocardiography. MRI can be considered an alternative to cardiac catheterization, particularly in the evaluation of the pulmonary vascular anatomy. (author)

  16. Assessment of image display of contrast enhanced T1W images with fat suppression

    International Nuclear Information System (INIS)

    Miyazaki, Isao; Ishizaki, Keiko; Kobayashi, Kuninori; Katou, Masanobu

    2006-01-01

    The effects of imaging conditions and measures for their improvement were examined with regard to the recognition of contrast effects on images when T1-weighted imaging with selective fat suppression was applied. Luminance at the target region was examined before and after contrast imaging using phantoms simulating pre- and post-contrast conditions. A clinical examination was performed on tumors revealed by breast examination, including those surrounded by mammary gland and by fat tissue. When fat suppression was used and imaging contrast was enhanced, the luminance level of fat tumors with the same structure as the prepared phantoms appeared high both before and after contrast imaging, and the effects of contrast were not distinguishable. This observation is attributable to the fact that the imaging conditions before and after contrast imaging were substantially different. To make a comparison between pre- and post-contrast images, it is considered necessary to perform imaging with a fixed receiver gain and to apply the same imaging method for pre- and post-contrast images by adjusting the post-contrast imaging conditions to those of pre-contrast imaging. (author)

  17. Mirror-Imaged Rapid Prototype Skull Model and Pre-Molded Synthetic Scaffold to Achieve Optimal Orbital Cavity Reconstruction.

    Science.gov (United States)

    Park, Sung Woo; Choi, Jong Woo; Koh, Kyung S; Oh, Tae Suk

    2015-08-01

    Reconstruction of traumatic orbital wall defects has evolved to restore the original complex anatomy with the rapidly growing use of computer-aided design and prototyping. This study evaluated a mirror-imaged rapid prototype skull model and a pre-molded synthetic scaffold for traumatic orbital wall reconstruction. A single-center retrospective review was performed of patients who underwent orbital wall reconstruction after trauma from 2012 to 2014. Patients were included by admission through the emergency department after facial trauma or by a tertiary referral for post-traumatic orbital deformity. Three-dimensional (3D) computed tomogram-based mirror-imaged reconstruction images of the orbit and an individually manufactured rapid prototype skull model by a 3D printing technique were obtained for each case. Synthetic scaffolds were anatomically pre-molded using the skull model as a guide and inserted at the individual orbital defect. Postoperative complications were assessed and 3D volumetric measurements of the orbital cavity were performed. Paired samples t test was used for statistical analysis. One hundred four patients with immediate orbital defect reconstructions and 23 post-traumatic orbital deformity reconstructions were included in this study. All reconstructions were successful without immediate postoperative complications, although there were 10 cases with mild enophthalmos and 2 cases with persistent diplopia. Reoperations were performed for 2 cases of persistent diplopia and secondary touch-up procedures were performed to contour soft tissue in 4 cases. Postoperative volumetric measurement of the orbital cavity showed nonsignificant volume differences between the damaged orbit and the reconstructed orbit (21.35 ± 1.93 vs 20.93 ± 2.07 cm(3); P = .98). This protocol was extended to severe cases in which more than 40% of the orbital frame was lost and combined with extensive soft tissue defects. Traumatic orbital reconstruction can be optimized and

  18. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  19. Stochastic dynamic modeling of regular and slow earthquakes

    Science.gov (United States)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by a (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results on the condition. However, even though we discretize the model space with small grids, heterogeneity at smaller scales than the grid size is not considered in the models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve a mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations in the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, external force, and friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce the complicated rupture process of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal

  20. Inverse problems with Poisson data: statistical regularization theory, applications and algorithms

    International Nuclear Information System (INIS)

    Hohage, Thorsten; Werner, Frank

    2016-01-01

    Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As most prominent applications we briefly introduce Positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years. (topical review)
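
    In symbols, a generic form of such estimators (an illustration; the notation is not necessarily the review's): for Poisson data g and a linear forward operator A, the penalized maximum likelihood estimator minimizes a Kullback-Leibler data fidelity plus a regularizer,

      \hat{u} \in \arg\min_{u \ge 0} \int_\Omega ( (Au) - g \log(Au) ) dx + \alpha R(u),

    with R, e.g., a total variation or sparsity penalty and \alpha > 0 the regularization parameter balancing fit against stability.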

  1. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    Science.gov (United States)

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, the L1 regularization can protect the high-frequency information like edges while effectively reduce the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with restarted strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
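
    A minimal sketch of the algorithmic ingredient, with the absolute value smoothed so a gradient exists (A, b, and all parameters are placeholders rather than the paper's FMT system matrix):

      import numpy as np

      def l1_recon_ncg(A, b, lam=1e-3, eps=1e-6, iters=200, restart=20):
          # min_x ||Ax - b||^2 + lam * sum_i sqrt(x_i^2 + eps)   (smoothed L1)
          def f(x):
              r = A @ x - b
              return float(r @ r + lam * np.sum(np.sqrt(x * x + eps)))

          def grad(x):
              return 2 * (A.T @ (A @ x - b)) + lam * x / np.sqrt(x * x + eps)

          x = np.zeros(A.shape[1])
          g = grad(x)
          d = -g
          for k in range(iters):
              if g @ d >= 0:        # safeguard: fall back to steepest descent
                  d = -g
              # Backtracking line search with a sufficient-decrease condition.
              t, f0, slope = 1.0, f(x), g @ d
              while f(x + t * d) > f0 + 1e-4 * t * slope and t > 1e-12:
                  t *= 0.5
              x = x + t * d
              g_new = grad(x)
              if (k + 1) % restart == 0:            # periodic restart
                  d = -g_new
              else:                                  # Fletcher-Reeves update
                  d = -g_new + (g_new @ g_new) / (g @ g) * d
              g = g_new
          return x

    Periodically resetting the search direction to steepest descent is the restart strategy; it discards stale conjugacy information while keeping the memory footprint at a few vectors.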

  2. Analytic regularization of uniform cubic B-spline deformation fields.

    Science.gov (United States)

    Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C

    2012-01-01

    Image registration is inherently ill-posed, and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61 and 1371 times faster than a numerical central differencing solution.
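
    The quantity involved, written in its standard form (hedged as the usual definition rather than the paper's exact notation), is the thin-plate bending energy of the deformation field v(x, y, z),

      S = \int_\Omega \sum_{i,j \in \{x,y,z\}} \| \partial^2 v / (\partial i \, \partial j) \|^2 \, d\Omega,

    and the paper's contribution is that for uniform cubic B-splines this reduces exactly to a quadratic form S = c^T Q c in the spline coefficients c, with Q precomputable, so the energy and its gradient 2Qc each cost a single matrix operation.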

  3. Constraints on the progenitor system of the type Ia supernova 2014J from pre-explosion Hubble space telescope imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Patrick L.; Fox, Ori D.; Filippenko, Alexei V.; Shen, Ken J.; Zheng, WeiKang; Graham, Melissa L.; Tucker, Brad E. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Cenko, S. Bradley [NASA/Goddard Space Flight Center, Code 662, Greenbelt, MD 20771 (United States); Prato, Lisa [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Schaefer, Gail, E-mail: pkelly@astro.berkeley.edu [The CHARA Array of Georgia State University, Mount Wilson Observatory, Mount Wilson, CA 91023 (United States)

    2014-07-20

    We constrain the properties of the progenitor system of the highly reddened Type Ia supernova (SN Ia) 2014J in Messier 82 (M82; d ≈ 3.5 Mpc). We determine the supernova (SN) location using Keck-II K-band adaptive optics images, and we find no evidence for flux from a progenitor system in pre-explosion near-ultraviolet through near-infrared Hubble Space Telescope (HST) images. Our upper limits exclude systems having a bright red giant companion, including symbiotic novae with luminosities comparable to that of RS Ophiuchi. While the flux constraints are also inconsistent with predictions for comparatively cool He-donor systems (T ≲ 35,000 K), we cannot preclude a system similar to V445 Puppis. The progenitor constraints are robust across a wide range of R_V and A_V values, but significantly greater values than those inferred from the SN light curve and spectrum would yield proportionally brighter luminosity limits. The comparatively faint flux expected from a binary progenitor system consisting of white dwarf stars would not have been detected in the pre-explosion HST imaging. Infrared HST exposures yield more stringent constraints on the luminosities of very cool (T < 3000 K) companion stars than was possible in the case of SN Ia 2011fe.

  4. Constraints on the progenitor system of the type Ia supernova 2014J from pre-explosion Hubble space telescope imaging

    International Nuclear Information System (INIS)

    Kelly, Patrick L.; Fox, Ori D.; Filippenko, Alexei V.; Shen, Ken J.; Zheng, WeiKang; Graham, Melissa L.; Tucker, Brad E.; Cenko, S. Bradley; Prato, Lisa; Schaefer, Gail

    2014-01-01

    We constrain the properties of the progenitor system of the highly reddened Type Ia supernova (SN Ia) 2014J in Messier 82 (M82; d ≈ 3.5 Mpc). We determine the supernova (SN) location using Keck-II K-band adaptive optics images, and we find no evidence for flux from a progenitor system in pre-explosion near-ultraviolet through near-infrared Hubble Space Telescope (HST) images. Our upper limits exclude systems having a bright red giant companion, including symbiotic novae with luminosities comparable to that of RS Ophiuchi. While the flux constraints are also inconsistent with predictions for comparatively cool He-donor systems (T ≲ 35,000 K), we cannot preclude a system similar to V445 Puppis. The progenitor constraints are robust across a wide range of R_V and A_V values, but significantly greater values than those inferred from the SN light curve and spectrum would yield proportionally brighter luminosity limits. The comparatively faint flux expected from a binary progenitor system consisting of white dwarf stars would not have been detected in the pre-explosion HST imaging. Infrared HST exposures yield more stringent constraints on the luminosities of very cool (T < 3000 K) companion stars than was possible in the case of SN Ia 2011fe.

  5. Effects of dose reduction on multi-detector computed tomographic images in evaluating the maxilla and mandible for pre-surgical implant planning: a cadaveric study.

    Science.gov (United States)

    Koizumi, Hiroshi; Sur, Jaideep; Seki, Kenji; Nakajima, Koh; Sano, Tsukasa; Okano, Tomohiro

    2010-08-01

    To assess the effects of dose reduction on image quality in evaluating the maxilla and mandible for pre-surgical implant planning, using cadavers. Six cadavers were studied using multi-detector computed tomography (CT) operated at 120 kV and variable tube currents of 80, 40, 20 and 10 mA. A slice thickness of 0.625 mm and a pitch of 1 were used. Multi-planar images perpendicular and parallel to the dentition were created. The images were evaluated by five oral radiologists in terms of the visibility of anatomical landmarks, including the alveolar crest, mandibular canal, floors of the maxillary sinus and nasal cavity, contours/cortical layer of the jaw bones, and the details of trabecular bone. Observers were asked to rate the quality of the images in comparison with the 80 mA images based on the criteria: excellent, good, fair or non-diagnostic. The average scores of all observers were calculated for each specimen under all exposure conditions. The 40 mA images visualized these landmarks and were evaluated to be the same as or almost equivalent in quality to the 80 mA images. Even the 20 mA images could be accepted for implant-planning diagnostic purposes, albeit with substantial deterioration of image quality. The 10 mA images may not be acceptable because of obscured contours caused by image noise. Significant dose reduction by lowering mA can therefore be utilized for pre-surgical implant planning in multi-detector CT.

  6. Very high stability systems: LMJ target alignment system and MTG imager test setup

    Science.gov (United States)

    Compain, Eric; Maquet, Philippe; Kunc, Thierry; Marque, Julien; Lauer-Solelhac, Maxime; Delage, Laurent; Lanternier, Catherine

    2015-09-01

    Most space instruments and research facilities require test equipment with demanding opto-mechanical stability. In some specific cases, when the stability performance directly drives the final performance of the scientific mission and when feasibility is questionable, specific methods must be implemented for the associated technical risk management. In the present paper, we present our heritage in terms of methodology, design, tests and the associated results for two specific systems, the SOPAC-POS and the MOTA, generating new references for future developments. From a performance point of view, we emphasize the following key parameters: design symmetry, thermal load management, and material and structural choices. From a method point of view, the difficulties arise first during design, from the strong coupling between the thermal, mechanical and optical performance models, and then during testing, from the difficulty of conceiving a test setup with an appropriate performance level. We present how these limitations have been overcome. SOPAC-POS is the target alignment system of the LMJ (Laser Megajoule), the French inertial confinement fusion research center. Its stability was demonstrated by tests in 2014, after 10 years of research and development activities, achieving 1 μm stability at 6 m over one-hour periods. MOTA is an optical ground support equipment aiming at qualifying the Flexible Combined Imager (FCI) by test. FCI is an instrument for the meteorological satellite MTG-I, a program of and funded by the European Space Agency and under prime contractorship of Thales Alenia Space. The optimized design achieves better than 0.2 μrad stability over one-hour periods, as required for MTF measurement.

  7. Sparsity regularization for parameter identification problems

    International Nuclear Information System (INIS)

    Jin, Bangti; Maass, Peter

    2012-01-01

    The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
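
    The prototypical functional studied, written out for concreteness (a standard form, hedged as an illustration):

      J_\alpha(u) = \| F(u) - y^\delta \|^2 + \alpha \sum_k | \langle u, \varphi_k \rangle |^p,   1 \le p \le 2,

    where F is the (possibly nonlinear) parameter-to-state map and \{\varphi_k\} is a basis or frame in which the true solution is assumed sparse; iterated soft shrinkage then alternates a gradient step on the fidelity term with the shrinkage operator induced by the penalty.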

  8. Some regularities of Ce(3) and Ce(4) stabilization in their compounds with β-diketones

    International Nuclear Information System (INIS)

    Pechurova, N.I.; Martynenko, L.I.; Snezhko, N.I.; Anufrieva, S.I.

    1985-01-01

    Adduct formation of cerium(3) and cerium(4) β-diketonates (acetylacetonate, benzoylacetonate, dibenzoylmethanate and thenoyltrifluoroacetonate) with oxygen- and nitrogen-donor ligands (Q = α,α'-dipyridyl, o-phenanthroline, trioctylphosphine oxide and triphenylphosphine oxide) is studied. The compounds obtained as a result of the reactions are characterized by IR-spectroscopic, derivatographic and X-ray phase methods. It is concluded that the composition and thermodynamic stability of the adducts of Ce(3) tris-β-diketonates are determined by the relation between the donor properties of the basic and additional ligands and by the stability of the adducts to oxidation, as well as by their solubility. Introduction of the additional ligand into the Ce(4)-β-diketone system, even in the presence of air oxygen, stabilizes Ce(3) and destabilizes Ce(4).

  9. Dendrimer-stabilized bismuth sulfide nanoparticles: synthesis, characterization, and potential computed tomography imaging applications.

    Science.gov (United States)

    Fang, Yi; Peng, Chen; Guo, Rui; Zheng, Linfeng; Qin, Jinbao; Zhou, Benqing; Shen, Mingwu; Lu, Xinwu; Zhang, Guixiang; Shi, Xiangyang

    2013-06-07

    We report here a general approach to synthesizing dendrimer-stabilized bismuth sulfide nanoparticles (Bi2S3 DSNPs) for potential computed tomography (CT) imaging applications. In this study, ethylenediamine core glycidol hydroxyl-terminated generation 4 poly(amidoamine) dendrimers (G4.NGlyOH) were used as stabilizers to first complex the Bi(III) ions, followed by reaction with hydrogen sulfide to generate Bi2S3 DSNPs. By varying the molar ratio of Bi atoms to dendrimer, stable Bi2S3 DSNPs with an average size range of 5.2-5.7 nm were formed. The formed Bi2S3 DSNPs were characterized via different techniques. X-ray absorption coefficient measurements show that the attenuation of Bi2S3 DSNPs is much higher than that of an iodine-based CT contrast agent at the same molar concentration of the active element (Bi versus iodine). A 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) cell viability assay and a hemolysis assay reveal that the formed Bi2S3 DSNPs are noncytotoxic and have a negligible hemolysis effect in the studied concentration range. Furthermore, we show that cells incubated with the Bi2S3 DSNPs can be imaged using CT, that a prominent enhancement can be visualized via CT scanning at the site where a rabbit was injected subcutaneously with the Bi2S3 DSNPs, and that the mouse pulmonary vein can be visualized via CT after intravenous injection of the Bi2S3 DSNPs. With their good biocompatibility, enhanced X-ray attenuation, and tunable dendrimer chemistry, the designed Bi2S3 DSNPs should lend themselves to further functionalization, allowing them to be used as a highly efficient contrast agent for CT imaging of different biological systems.

  10. Convex blind image deconvolution with inverse filtering

    Science.gov (United States)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
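
    As a simplified illustration of regularized inverse filtering (Tikhonov damping in the frequency domain, in place of the star-norm model proposed in the paper; the kernel h is assumed known here, so this is the non-blind building block):

      import numpy as np

      def tikhonov_inverse_filter(g, h, alpha=1e-2):
          """Restore f from g = h * f + noise with a regularized inverse filter.

          g     : degraded image (2D array)
          h     : blur kernel, assumed known for this simplified sketch
          alpha : regularization weight damping the inverse where |H| is small
          """
          H = np.fft.fft2(h, s=g.shape)
          # Regularized inverse: conj(H) / (|H|^2 + alpha) instead of 1/H,
          # which suppresses the oscillatory blow-up at near-zeros of H.
          Hinv = np.conj(H) / (np.abs(H) ** 2 + alpha)
          return np.real(np.fft.ifft2(Hinv * np.fft.fft2(g)))

    The oscillation structure of such inverse filters, visible in the spatial-domain kernel of Hinv, is what motivates the star-norm regularization of the inverse filter in the proposed convex model.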

  11. Rotational inhomogeneities from pre-big bang?

    International Nuclear Information System (INIS)

    Giovannini, Massimo

    2005-01-01

    The evolution of the rotational inhomogeneities is investigated in the specific framework of four-dimensional pre-big bang models. While minimal (dilaton-driven) scenarios do not lead to rotational fluctuations, in the case of non-minimal (string-driven) models, fluid sources are present in the pre-big bang phase. The rotational modes of the geometry, coupled to the divergenceless part of the velocity field, can then be amplified depending upon the value of the barotropic index of the perfect fluids. In the light of a possible production of rotational inhomogeneities, solutions describing the coupled evolution of the dilaton field and of the fluid sources are scrutinized in both the string and Einstein frames. In semi-realistic scenarios, where the curvature divergences are regularized by means of a non-local dilaton potential, the rotational inhomogeneities are amplified during the pre-big bang phase but they decay later on. Similar analyses can also be performed when a contraction occurs directly in the string frame metric

  12. Rotational inhomogeneities from pre-big bang?

    Energy Technology Data Exchange (ETDEWEB)

    Giovannini, Massimo [Department of Physics, Theory Division, CERN, 1211 Geneva 23 (Switzerland)

    2005-01-21

    The evolution of the rotational inhomogeneities is investigated in the specific framework of four-dimensional pre-big bang models. While minimal (dilaton-driven) scenarios do not lead to rotational fluctuations, in the case of non-minimal (string-driven) models, fluid sources are present in the pre-big bang phase. The rotational modes of the geometry, coupled to the divergenceless part of the velocity field, can then be amplified depending upon the value of the barotropic index of the perfect fluids. In the light of a possible production of rotational inhomogeneities, solutions describing the coupled evolution of the dilaton field and of the fluid sources are scrutinized in both the string and Einstein frames. In semi-realistic scenarios, where the curvature divergences are regularized by means of a non-local dilaton potential, the rotational inhomogeneities are amplified during the pre-big bang phase but they decay later on. Similar analyses can also be performed when a contraction occurs directly in the string frame metric.

  13. Initial data release of regular blood drip stain created by varying fall height, angle of impact and source dimension

    Directory of Open Access Journals (Sweden)

    Nabanita Basu

    2016-09-01

    Full Text Available The dataset consists of 108 blood drip stains created with fresh porcine blood and with blood admixed with different dosages of Warfarin and Heparin, respectively. For each blood type (i.e., fresh blood, blood admixed with Warfarin at different dosages, and blood admixed with Heparin at varied dosages), stain patterns were created by passive dripping of blood from a 2.5 cm3 subcutaneous syringe with needle filled to capacity, at 30°, 60° and 90° angles of impact with corresponding fall heights of 20, 40 and 60 cm, respectively. In the other dataset of 162 data points, 81 regular drip stains were formed from blood that had dripped passively from a subcutaneous syringe without needle at the aforementioned angles of impact and fall heights, while the other stains were formed by dripping of blood from a subcutaneous syringe with needle. In order to compare the stains formed, all stains were recorded on the same representative, non-porous, smooth target surface under similar physical conditions. The interpretations relevant to the dataset are available in the article titled '2D Source Area prediction based on physical characteristics of a regular, passive blood drip stain' (Basu and Bandyopadhyay, 2016 [7]). An image pre-processing algorithm for extracting the ROI has also been incorporated in this article. Keywords: Drip stain, Bloodstain Pattern Analysis, Source Dimension prediction

  14. An Island of Stability: Art Images and Natural Scenes - but Not Natural Faces - Show Consistent Esthetic Response in Alzheimer's-Related Dementia.

    Science.gov (United States)

    Graham, Daniel J; Stockinger, Simone; Leder, Helmut

    2013-01-01

    Alzheimer's disease (AD) causes severe impairments in cognitive function but there is evidence that aspects of esthetic perception are somewhat spared, at least in early stages of the disease. People with early Alzheimer's-related dementia have been found to show similar degrees of stability over time in esthetic judgment of paintings compared to controls, despite poor explicit memory for the images. Here we expand on this line of inquiry to investigate the types of perceptual judgments involved, and to test whether people in later stages of the disease also show evidence of preserved esthetic judgment. Our results confirm that, compared to healthy controls, there is similar esthetic stability in early stage AD in the absence of explicit memory, and we report here that people with later stages of the disease also show similar stability compared to controls. However, while we find that stability for portrait paintings, landscape paintings, and landscape photographs is not different compared to control group performance, stability for face photographs - which were matched for identity with the portrait paintings - was significantly impaired in the AD group. We suggest that partially spared face-processing systems interfere with esthetic processing of natural faces in ways that are not found for artistic images and landscape photographs. Thus, our work provides a novel form of evidence regarding face-processing in healthy and diseased aging. Our work also gives insights into general theories of esthetics, since people with AD are not encumbered by many of the semantic and emotional factors that otherwise color esthetic judgment. We conclude that, for people with AD, basic esthetic judgment of artistic images represents an "island of stability" in a condition that in most other respects causes profound cognitive disruption. As such, esthetic response could be a promising route to future therapies.

  15. Can pre- and postoperative magnetic resonance imaging predict recurrence-free survival after whole-gland high-intensity focused ablation for prostate cancer?

    Energy Technology Data Exchange (ETDEWEB)

    Rosset, Remy; Bratan, Flavie [Hopital Edouard Herriot, Hospices Civils de Lyon, Department of Urinary and Vascular Radiology, Lyon (France); Crouzet, Sebastien [Hopital Edouard Herriot, Hospices Civils de Lyon, Department of Urology, Lyon (France); Universite de Lyon, Lyon (France); Faculte de Medecine Lyon Est, Universite Lyon 1, Lyon (France); Inserm, U1032, LabTau, Lyon (France); Tonoli-Catez, Helene [Hopital Edouard Herriot, Hospices Civils de Lyon, Department of Urology, Lyon (France); Mege-Lechevallier, Florence [Hopital Edouard Herriot, Hospices Civils de Lyon, Department of Pathology, Lyon (France); Gelet, Albert [Hopital Edouard Herriot, Hospices Civils de Lyon, Department of Urology, Lyon (France); Inserm, U1032, LabTau, Lyon (France); Rouviere, Olivier [Hopital Edouard Herriot, Hospices Civils de Lyon, Department of Urinary and Vascular Radiology, Lyon (France); Universite de Lyon, Lyon (France); Faculte de Medecine Lyon Est, Universite Lyon 1, Lyon (France); Inserm, U1032, LabTau, Lyon (France)

    2017-04-15

    Our aim was to assess whether magnetic resonance imaging (MRI) features predict recurrence-free survival (RFS) after prostate cancer high-intensity focused ultrasound (HIFU) ablation. We retrospectively selected 81 patients who underwent (i) whole-gland HIFU ablation between 2007 and 2011 as first-line therapy or salvage treatment after radiotherapy or brachytherapy, and (ii) pre- and postoperative MRI. On preoperative imaging, two senior (R1, R2) and one junior (R3) readers assessed the number of sectors invaded by the lesion with the highest Likert score (dominant lesion) using a 27-sector diagram. On postoperative imaging, readers assessed destruction of the dominant lesion using a three-level score. Multivariate analysis included the number of sectors invaded by the dominant lesion, its Likert and destruction scores, the pre-HIFU prostate-specific antigen (PSA) level, Gleason score, and the clinical setting (primary/salvage). The most significant predictor was the number of prostate sectors invaded by the dominant lesion for R2 and R3 (p≤0.001) and the destruction score of the dominant lesion for R1 (p = 0.011). The pre-HIFU PSA level was an independent predictor for R2 (p = 0.014), but with only marginal significance for R1 (p = 0.059) and R3 (p = 0.053). The dominant lesion's size and destruction assessed by MRI provide independent prognostic information compared with usual predictors. (orig.)

  16. Similarity regularized sparse group lasso for cup to disc ratio computation.

    Science.gov (United States)

    Cheng, Jun; Zhang, Zhuo; Tao, Dacheng; Wong, Damon Wing Kee; Liu, Jiang; Baskaran, Mani; Aung, Tin; Wong, Tien Yin

    2017-08-01

    Automatic cup to disc ratio (CDR) computation from color fundus images has shown to be promising for glaucoma detection. Over the past decade, many algorithms have been proposed. In this paper, we first review the recent work in the area and then present a novel similarity-regularized sparse group lasso method for automated CDR estimation. The proposed method reconstructs the testing disc image based on a set of reference disc images by integrating the similarity between testing and the reference disc images with the sparse group lasso constraints. The reconstruction coefficients are then used to estimate the CDR of the testing image. The proposed method has been validated using 650 images with manually annotated CDRs. Experimental results show an average CDR error of 0.0616 and a correlation coefficient of 0.7, outperforming other methods. The areas under curve in the diagnostic test reach 0.843 and 0.837 when manual and automatically segmented discs are used respectively, better than other methods as well.
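
    Schematically (a pared-down sketch that substitutes a similarity-weighted lasso for the full similarity-regularized sparse group lasso; all names are placeholders): the testing disc is coded over the reference discs, and the CDR estimate is a coefficient-weighted average of the reference CDRs:

      import numpy as np

      def code_and_estimate_cdr(D, y, ref_cdrs, sim, lam=0.1, iters=300):
          """D: (d, n) reference disc images as columns, y: (d,) testing disc,
          sim: (n,) similarity of each reference to the test image,
          ref_cdrs: (n,) manually annotated CDRs of the references."""
          w = lam / (sim + 1e-8)         # dissimilar references penalized more
          step = 1.0 / np.linalg.norm(D, 2) ** 2
          c = np.zeros(D.shape[1])
          for _ in range(iters):         # ISTA: gradient step + soft threshold
              c = c - step * (D.T @ (D @ c - y))
              c = np.sign(c) * np.maximum(np.abs(c) - step * w, 0.0)
          c = np.maximum(c, 0.0)         # keep nonnegative reconstruction weights
          return float(ref_cdrs @ c / (c.sum() + 1e-8))

    The group structure omitted here would additionally tie together the coefficients of references sharing similar CDRs, which is the "group" part of the proposed penalty.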

  17. Gradient waveform pre-emphasis based on the gradient system transfer function.

    Science.gov (United States)

    Stich, Manuel; Wech, Tobias; Slawig, Anne; Ringler, Ralf; Dewdney, Andrew; Greiser, Andreas; Ruyters, Gudrun; Bley, Thorsten A; Köstler, Herbert

    2018-02-25

    The gradient system transfer function (GSTF) has been used to describe the distorted k-space trajectory for image reconstruction. The purpose of this work was to use the GSTF to determine the pre-emphasis for an undistorted gradient output and intended k-space trajectory. The GSTF of the MR system was determined using only standard MR hardware without special equipment such as field probes or a field camera. The GSTF was used for trajectory prediction in image reconstruction and for a gradient waveform pre-emphasis. As test sequences, a gradient-echo sequence with phase-encoding gradient modulation and a gradient-echo sequence with a spiral read-out trajectory were implemented and subsequently applied on a structural phantom and in vivo head measurements. Image artifacts were successfully suppressed by applying the GSTF-based pre-emphasis. Equivalent results are achieved with images acquired using GSTF-based post-correction of the trajectory as a part of image reconstruction. In contrast, the pre-emphasis approach allows reconstruction using the initially intended trajectory. The artifact suppression shown for two sequences demonstrates that the GSTF can serve for a novel pre-emphasis. A pre-emphasis based on the GSTF information can be applied to any arbitrary sequence type. © 2018 International Society for Magnetic Resonance in Medicine.
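
    The core operation can be sketched in a few lines (illustrative only; it assumes the GSTF H has already been measured on the FFT frequency grid and that a plain frequency-domain division with a small floor is an adequate model):

      import numpy as np

      def pre_emphasize(g_nominal, H, eps=1e-3):
          """Pre-emphasize a gradient waveform so the system output matches it.

          g_nominal : intended gradient waveform (1D array, uniform sampling)
          H         : complex GSTF samples on np.fft.fftfreq(len(g_nominal), dt)
          eps       : floor preventing division blow-up where |H| is small
          """
          G = np.fft.fft(g_nominal)
          # Output = H * Input  =>  Input = Nominal / H (regularized division).
          G_pre = G * np.conj(H) / np.maximum(np.abs(H) ** 2, eps)
          return np.real(np.fft.ifft(G_pre))

    Playing the pre-emphasized input through the system then reproduces the nominal waveform, which is consistent with the paper's finding that pre-emphasis and GSTF-based post-correction of the trajectory give equivalent images.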

  18. Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia.

    Science.gov (United States)

    Kim, Junghoe; Calhoun, Vince D; Shim, Eunsoo; Lee, Jong-Hwan

    2016-01-01

    Functional connectivity (FC) patterns obtained from resting-state functional magnetic resonance imaging data are commonly employed to study neuropsychiatric conditions by using pattern classifiers such as the support vector machine (SVM). Meanwhile, a deep neural network (DNN) with multiple hidden layers has shown its ability to systematically extract lower-to-higher level information of image and speech data from lower-to-higher hidden layers, markedly enhancing classification accuracy. The objective of this study was to adopt the DNN for whole-brain resting-state FC pattern classification of schizophrenia (SZ) patients vs. healthy controls (HCs) and identification of aberrant FC patterns associated with SZ. We hypothesized that the lower-to-higher level features learned via the DNN would significantly enhance the classification accuracy, and proposed an adaptive learning algorithm to explicitly control the weight sparsity in each hidden layer via L1-norm regularization. Furthermore, the weights were initialized via stacked autoencoder based pre-training to further improve the classification performance. Classification accuracy was systematically evaluated as a function of (1) the number of hidden layers/nodes, (2) the use of L1-norm regularization, (3) the use of the pre-training, (4) the use of framewise displacement (FD) removal, and (5) the use of anatomical/functional parcellation. Using FC patterns from anatomically parcellated regions without FD removal, an error rate of 14.2% was achieved by employing three hidden layers and 50 hidden nodes with both L1-norm regularization and pre-training, which was substantially lower than the error rate from the SVM (22.3%). Moreover, the trained DNN weights (i.e., the learned features) were found to represent the hierarchical organization of aberrant FC patterns in SZ compared with HC. Specifically, pairs of nodes extracted from the lower hidden layer represented sparse FC patterns implicated in SZ, which was
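
    The weight-sparsity control amounts to adding a per-layer L1 term to the classification loss; a minimal PyTorch-style sketch (layer sizes, lambdas, and the input dimension are illustrative, and the adaptive control of per-layer sparsity described in the study is omitted):

      import torch
      import torch.nn as nn

      model = nn.Sequential(                 # 3 hidden layers, 50 nodes each
          nn.Linear(5000, 50), nn.Sigmoid(),
          nn.Linear(50, 50), nn.Sigmoid(),
          nn.Linear(50, 50), nn.Sigmoid(),
          nn.Linear(50, 2),                  # SZ vs. HC
      )

      def loss_with_weight_sparsity(logits, labels,
                                    lambdas=(1e-4, 1e-4, 1e-4, 0.0)):
          loss = nn.functional.cross_entropy(logits, labels)
          linear_layers = [m for m in model if isinstance(m, nn.Linear)]
          for lam, layer in zip(lambdas, linear_layers):
              loss = loss + lam * layer.weight.abs().sum()   # L1 per layer
          return loss

    In the adaptive variant, each layer's lambda would be adjusted during training until the fraction of near-zero weights reaches a target level, and stacked-autoencoder pre-training would supply the initial weights.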

  19. MO-DE-207A-02: A Feature-Preserving Image Reconstruction Method for Improved Pancreatic Lesion Classification in Diagnostic CT Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Xu, J; Tsui, B [Johns Hopkins University, Baltimore, MD (United States); Noo, F [University of Utah, Salt Lake City, UT (United States)

    2016-06-15

    Purpose: To develop a feature-preserving model-based image reconstruction (MBIR) method that improves performance in pancreatic lesion classification at equal or reduced radiation dose. Methods: A set of pancreatic lesion models was created with both benign and premalignant lesion types. These two classes of lesions are distinguished by their fine internal structures; their delineation is therefore crucial to the task of pancreatic lesion classification. To reduce image noise while preserving the features of the lesions, we developed an MBIR method with curvature-based regularization. The novel regularization encourages formation of smooth surfaces that model both the exterior shape and the internal features of pancreatic lesions. Given that the curvature depends on the unknown image, image reconstruction or denoising becomes a non-convex optimization problem; to address this issue, an iterative reweighting scheme was used to calculate and update the curvature using the image from the previous iteration. Evaluation was carried out with insertion of the lesion models into the pancreas of a patient CT image. Results: Visual inspection was used to compare conventional TV regularization with our curvature-based regularization. Several penalty strengths were considered for TV regularization, all of which resulted in erasing portions of the septation (thin partition) in a premalignant lesion. At matched noise variance (50% noise reduction in the patient stomach region), the connectivity of the septation was well preserved using the proposed curvature-based method. Conclusion: The curvature-based regularization is able to reduce image noise while simultaneously preserving the lesion features. This method could potentially improve task performance for pancreatic lesion classification at equal or reduced radiation dose. The result is of high significance for longitudinal surveillance studies of patients with pancreatic cysts, which may develop into pancreatic cancer.
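
    The iterative-reweighting device generalizes beyond curvature: freeze the image-dependent part of the regularizer at the previous iterate so that each inner problem is quadratic. The toy below applies that pattern to a TV-like prior on a 2D image (the paper's curvature term would supply different weights); all parameters are illustrative.

        import numpy as np

        def irls_denoise(y, lam=0.05, n_outer=10, n_inner=50, tau=0.1, eps=1e-2):
            u = y.copy()
            for _ in range(n_outer):
                gx, gy = np.gradient(u)
                w = 1.0 / np.sqrt(gx**2 + gy**2 + eps)   # weights from previous iterate
                for _ in range(n_inner):                 # quadratic inner problem
                    gx, gy = np.gradient(u)
                    div = (np.gradient(w * gx, axis=0)
                           + np.gradient(w * gy, axis=1))
                    u = u - tau * ((u - y) - lam * div)  # gradient-descent step
            return u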

  20. Dealing with ambiguity: Israeli physicians' attitudes and practices regarding pre-exercise certificates: a questionnaire study.

    Science.gov (United States)

    Hoffman, Robert D; Golan, Ron; Vinker, Shlomo

    2016-01-01

    It has become clear in recent years that a healthy lifestyle, including physical exercise, is crucial for health maintenance. Nevertheless, most people do not exercise regularly. Physician intervention is beneficial in increasing patient exercise. In Israel, the 1994 "Sports Law" requires a physician's written authorization for exercising in a gymnasium, but does not direct physicians as to what they should ascertain before issuing the certificate. This pre-exercise certificate has been widely discussed in Israel over the last year, as the law is to be revised to enable using a modification of the PAR-Q+ (Physical Activity Readiness Questionnaire) as a screening tool. This will leave the requirement for a pre-exercise certificate for a less healthy population, yet without clear instructions to the primary care physician on criteria for ascertaining fitness. Our aim was to evaluate how primary care physicians deal with the ambiguity of defining health criteria for issuing an exercise authorization/certificate. We used an anonymous ten-item attitude/knowledge multiple-choice questionnaire with an additional 13 personal/education and employment questions. We assessed each potential predictor of physician attitude and knowledge in univariate models. 135 usable questionnaires were collected. Of these, 43.7% of the doctors would provide the pre-exercise certificate to all their patients; 63% were aware of their HMO/employer's guidelines for issuing certificates; 62% stated they complied with these guidelines, and 16% stated they did not follow them. In addition, 70% of the physicians reported regular exercise themselves, an average of 4.12 h/week. These physicians tended to provide the pre-exercise certificate to all patients unconditionally, as compared with physicians who did not exercise regularly (46% vs. 14.5%). There is wide variation in what physicians check before providing the certificate.

  1. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate for the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios.

  2. Global regularizing flows with topology preservation for active contours and polygons.

    Science.gov (United States)

    Sundaramoorthi, Ganesh; Yezzi, Anthony

    2007-03-01

    Active contour and active polygon models have been used widely for image segmentation. In some applications, the topology of the object(s) to be detected from an image is known a priori, despite a complex unknown geometry, and it is important that the active contour or polygon maintain the desired topology. In this work, we construct a novel geometric flow that can be added to image-based evolutions of active contours and polygons in order to preserve the topology of the initial contour or polygon. We emphasize that, unlike other methods for topology preservation, the proposed geometric flow continually adjusts the geometry of the original evolution in a gradual and graceful manner, preventing a topology change long before the curve or polygon comes close to one. The flow also serves as a global regularity term for the evolving contour, and has smoothness properties similar to curvature flow. These properties of gradually adjusting the original flow and global regularization prevent the geometric inaccuracies common with simple discrete topology preservation schemes. The proposed topology-preserving geometric flow is the gradient flow arising from an energy that is based on electrostatic principles. The evolution of a single point on the contour depends on all other points of the contour, which is different from traditional curve evolutions in the computer vision literature.
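
    The electrostatic idea can be illustrated with a toy repulsion term for a discretized closed contour: every vertex is pushed away from all other vertices by a 2D Coulomb-type force, so separate parts of the curve repel before they can touch. This is a sketch of the principle only, not the paper's exact gradient flow.

        import numpy as np

        def electrostatic_velocity(pts):
            """Repulsive velocity: sum_j (p_i - p_j) / |p_i - p_j|^2."""
            diff = pts[:, None, :] - pts[None, :, :]   # pairwise differences
            d2 = np.sum(diff ** 2, axis=-1)
            np.fill_diagonal(d2, np.inf)               # no self-interaction
            return np.sum(diff / d2[..., None], axis=1)

        theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
        poly = np.c_[np.cos(theta), np.sin(theta)]     # unit-circle polygon
        v = electrostatic_velocity(poly)               # add a small multiple of v
                                                       # to the image-driven flow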

  3. Regularities in Low-Temperature Phosphatization of Silicates

    Science.gov (United States)

    Savenko, A. V.

    2018-01-01

    The regularities in low-temperature phosphatization of silicates are defined from long-term experiments on the interaction between different silicate minerals and phosphate-bearing solutions over a wide range of medium acidity. It is shown that the parameters of the phosphatization reaction for hornblende, orthoclase, and labradorite have the same values as for clayey minerals (kaolinite and montmorillonite). This effect may appear if phosphatization proceeds not on silicate minerals with different structures and compositions, but on a secondary silicate phase that forms upon interaction between silicates and water and is stable in a certain pH range. The variation in the parameters of the phosphatization reaction at pH ≈ 1.8 is due to the stability of a silicate phase different from that at higher pH values.

  4. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.

  6. kCCA Transformation-Based Radiometric Normalization of Multi-Temporal Satellite Images

    Directory of Open Access Journals (Sweden)

    Yang Bai

    2018-03-01

    Radiometric normalization is an essential pre-processing step for generating high-quality satellite sequence images. However, most radiometric normalization methods are linear and cannot eliminate regular nonlinear spectral differences. Here we introduce the well-established kernel canonical correlation analysis (kCCA) into radiometric normalization for the first time to overcome this problem, which leads to a new kernel method. It can maximally reduce the differences among multi-temporal images regardless of the imaging conditions and reflectivity differences, and it effectively eliminates the impact of nonlinear changes caused by seasonal variation of natural objects. Comparisons with multivariate alteration detection (CCA-based) normalization and histogram matching on Gaofen-1 (GF-1) data indicate that kCCA-based normalization can preserve more similarity and better correlation between an image pair and effectively avoid color error propagation. The proposed method not only builds a common scale or reference to establish radiometric consistency among GF-1 image sequences, but also highlights the interesting spectral changes while suppressing less interesting ones. Our method enables the application of GF-1 data for change detection, land-use/land-cover change detection, etc.

  7. Pediatric dentistry clinical education venues evaluation by pre and post-doctoral students.

    Science.gov (United States)

    Bimstein, E; Mayes, A; Mittal, Hc

    2014-01-01

    To evaluate dental students' perspectives about pre- and post-doctoral pediatric dentistry education venues. Surveys with visual analog scales (from 0 to 100) measuring the educational contribution of pediatric dentistry venues were conducted. The pre-doctoral venues included a 3rd-year university twilight clinic (UTC), a 3rd-year urban community-based clinic (CBC) and 4th-year mobile clinics (MCs). The post-doctoral venues included treatment of children under general anesthesia, oral sedations, a regular clinic (no sedations), seminars, journal club, case conferences and studying for the American Board of Pediatric Dentistry. Analyses of variance between the scores indicated that the 3rd-year CBC score (68.2 ± 4.5) was statistically significantly higher (p = .007) than that for the 3rd-year UTC (44.9 ± 6.1). The 4th-year students' MCs score (61.4 ± 4.0) was statistically significantly higher than their retrospective scores for the 3rd-year CBC (56.4 ± 4.4) or UTC (42.2 ± 4.9) (p = .03 and .004, respectively). Among the didactic and clinical post-doctoral venues, the regular clinic and the seminars received the highest scores (84.3 ± 1.7 and 71.6 ± 2.8, respectively). Pre-doctoral community-based clinical education and the post-doctoral regular university-based clinic are considered by students to provide the main contribution to pediatric dental education.

  8. Application of wavelet based MFDFA on Mueller matrix images for cervical pre-cancer detection

    Science.gov (United States)

    Zaffar, Mohammad; Pradhan, Asima

    2018-02-01

    A systematic study has been conducted on the application of wavelet-based multifractal de-trended fluctuation analysis (MFDFA) to Mueller matrix (MM) images of cervical tissue sections for early cancer detection. Changes in multiple scattering and in the orientation of fibers are observed by utilizing a discrete wavelet transform (Daubechies) which identifies fluctuations over polynomial trends. Fluctuation profiles after 9th-level decomposition for all elements of the MM qualitatively establish a demarcation between different grades of cancer and normal tissue. Moreover, applying MFDFA to the MM images, the Hurst exponent profiles are qualitatively seen to display differences. In addition, the values of the Hurst exponent increase for the diagonal elements of the MM with increasing grade of cervical cancer, while the values for the elements which correspond to linear polarizance decrease; for circular polarizance the value increases with increasing grade. These fluctuation profiles reveal the trend of local variation of refractive indices and, along with the Hurst exponent profile, may serve as a useful biological metric in the early detection of cervical cancer. The quantitative measurements of the Hurst exponent for the diagonal and first-column (polarizance-governing) elements, which reflect changes in multiple scattering and structural anisotropy in the stroma, may be sensitive indicators of pre-cancer.
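
    For orientation, a plain (monofractal, polynomial-detrending) DFA estimate of the Hurst exponent fits in a few lines of Python; the wavelet-based MFDFA used in this study generalizes it with wavelet detrending and q-th-order fluctuation functions.

        import numpy as np

        def dfa_hurst(x, scales=(16, 32, 64, 128, 256), order=1):
            profile = np.cumsum(x - np.mean(x))        # integrated series
            F = []
            for s in scales:
                rms = []
                for i in range(len(profile) // s):     # non-overlapping windows
                    seg = profile[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, order), t)
                    rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
                F.append(np.mean(rms))
            slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
            return slope                               # Hurst exponent estimate

        print(dfa_hurst(np.random.randn(4096)))        # white noise: H ~ 0.5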

  9. Biased discriminant euclidean embedding for content-based image retrieval.

    Science.gov (United States)

    Bian, Wei; Tao, Dacheng

    2010-02-01

    With many potential multimedia applications, content-based image retrieval (CBIR) has recently gained more attention for image management and web search. A wide variety of relevance feedback (RF) algorithms have been developed in recent years to improve the performance of CBIR systems. These RF algorithms capture the user's preferences and bridge the semantic gap. However, there is still considerable room to improve RF performance, because popular RF algorithms ignore the manifold structure of image low-level visual features. In this paper, we propose the biased discriminative Euclidean embedding (BDEE), which parameterises samples in the original high-dimensional ambient space to discover the intrinsic coordinates of image low-level visual features. BDEE precisely models both the intraclass geometry and the interclass discrimination, and avoids the undersampled problem. To consider unlabelled samples, a manifold regularization-based term is introduced and combined with BDEE to form the semi-supervised BDEE, or semi-BDEE for short. To justify the effectiveness of the proposed BDEE and semi-BDEE, we compare them against conventional RF algorithms and show a significant improvement in terms of accuracy and stability based on a subset of the Corel image gallery.

  10. Contribution to regularizing iterative method development for attenuation correction in gamma emission tomography

    International Nuclear Information System (INIS)

    Cao, A.

    1981-07-01

    This study is concerned with transverse axial gamma emission tomography. The problem of self-attenuation of radiation in biological tissues is raised. The regularizing iterative method is developed as a method for reconstructing three-dimensional images. The different steps, from acquisition to results, necessary for its application are described, and flow diagrams for each step are explained. The notion of comparison between two reconstruction methods is introduced, and some methods used for such comparison, or to bring out the characteristics of a reconstruction technique, are defined. The studies carried out to test the regularizing iterative method are presented and the results are analyzed. [fr]

  11. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that exhibit small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. The two networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ~2^√(log₂ N²), exhibits 'ballistic' diffusion (d_w = 1), and has a non-trivial ferromagnetic transition, T_c > 0. This suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  12. An investigation on the effects of brand equity, trust, image and customer satisfaction on regular insurance firm customers’ loyalty

    Directory of Open Access Journals (Sweden)

    Hamid Reza Saeednia

    2014-03-01

    Brand plays an essential role in the success of most organizations and has been considered an organizational asset. Therefore, brand management is important in today's organizational structures. A good brand helps gain new customers and future preference, which leads to customer retention. Brand loyalty is one of the most important components of brand management: it can raise a firm's market share and has a close relationship with a firm's return on investment and profits. This research investigates the relationship between customer satisfaction, trust, brand equity, brand image, and customer loyalty. The study uses a sample of 384 regular customers of insurance services in Iran. Using the Pearson correlation ratio as well as structural equation modeling, the study detected a positive and significant relationship between brand equity and the other factors, such as customer satisfaction and trust.

  13. Multi-component pre-stack time-imaging and migration-based velocity analysis in transversely isotropic media; Imagerie sismique multicomposante et analyse de vitesse de migration en milieu transverse isotrope

    Energy Technology Data Exchange (ETDEWEB)

    Gerea, C.V.

    2001-06-01

    Complementary to the recording of compressional (P-) waves, the observation of P-S converted waves has recently been receiving specific attention. This is mainly due to their tremendous potential as a tool for fracture and lithology characterization, imaging sediments in gas-saturated rocks, and imaging shallow sediments with higher resolution than conventional P-P data. In a conventional marine seismic survey, we cannot record P-to-S converted-wave energy since fluids cannot support shear-wave strain. Thus, to capture the converted-wave energy, we need to record it at the water bottom using an ocean-bottom cable (OBC). The S-waves recorded at the seabed are mainly converted from P to S (i.e., PS-waves or C-waves) at the subsurface reflectors. The most accurate way to image seismic data is pre-stack depth migration. In this thesis, I develop a numerically efficient 2.5-D true-amplitude elastic Kirchhoff pre-stack migration algorithm designed to handle OBC data gathered along a single line. All the kinematic and dynamic elastic Green's functions required in the computation of the true-amplitude weight term of the Kirchhoff summation are based on non-hyperbolic explicit approximations of P- and SV-wave travel-times in layered transversely isotropic (VTI) media. Hence, this elastic imaging algorithm is well suited for migration-based velocity analysis techniques, for which fast, robust, iterative pre-stack migration is desired. In this thesis I also address the topic of anisotropic velocity model building for elastic pre-stack time-imaging, and propose an original methodology for joint PP-PS migration-based velocity analysis (MVA) in layered VTI anisotropic media. Tests on elastic synthetic and real OBC seismic data confirm the validity of the pre-stack migration algorithm and the velocity analysis methodology. (author)

  14. Dendrimer-Stabilized Gold Nanostars as a Multifunctional Theranostic Nanoplatform for CT Imaging, Photothermal Therapy, and Gene Silencing of Tumors.

    Science.gov (United States)

    Wei, Ping; Chen, Jingwen; Hu, Yong; Li, Xin; Wang, Han; Shen, Mingwu; Shi, Xiangyang

    2016-12-01

    Development of versatile nanomaterials combining diagnostic and therapeutic functionalities within one single nanoplatform is extremely important for tumor theranostics. In this work, the authors report the synthesis of a gold nanostar (Au NS)-based theranostic platform stabilized with cyclic arginine-glycine-aspartic acid (Arg-Gly-Asp, RGD) peptide-modified amine-terminated generation-3 poly(amidoamine) dendrimers. The formed RGD-modified dendrimer-stabilized Au NSs (RGD-Au DSNSs) are used as a gene delivery vector to complex small interfering RNA (siRNA) for computed tomography (CT) imaging, thermal imaging, photothermal therapy (PTT), and gene therapy of tumors. The results show that the RGD-Au DSNSs are able to compact vascular endothelial growth factor siRNA and specifically deliver siRNA to cancer cells overexpressing αvβ3 integrin. Under near-infrared laser irradiation, the viability of cancer cells is only 20.2% after incubation with the RGD-Au DSNS/siRNA polyplexes, which is much lower than that of cells after single PTT or gene therapy treatment. Furthermore, in vivo results show that the RGD-Au DSNS/siRNA polyplexes enable tumor CT imaging, thermal imaging, PTT, and gene therapy after intratumoral injection. These results indicate that the developed multifunctional nanoconstruct is a promising platform for tumor imaging and combinational PTT and gene therapy. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  16. Quantitative Evaluation of Temporal Regularizers in Compressed Sensing Dynamic Contrast Enhanced MRI of the Breast

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2017-01-01

    Purpose. Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) is used in cancer imaging to probe tumor vascular properties. Compressed sensing (CS) theory makes it possible to recover MR images from randomly undersampled k-space data using nonlinear recovery schemes. The purpose of this paper is to quantitatively evaluate common temporal sparsity-promoting regularizers for CS DCE-MRI of the breast. Methods. We considered five ubiquitous temporal regularizers on 4.5x retrospectively undersampled Cartesian in vivo breast DCE-MRI data: Fourier transform (FT), Haar wavelet transform (WT), total variation (TV), second-order total generalized variation (TGV_α²), and nuclear norm (NN). We measured the signal-to-error ratio (SER) of the reconstructed images, the error in tumor mean, and concordance correlation coefficients (CCCs) of the derived pharmacokinetic parameters K^trans (volume transfer constant) and v_e (extravascular-extracellular volume fraction) across a population of random sampling schemes. Results. NN produced the lowest image error (SER: 29.1), while TV/TGV_α² produced the most accurate K^trans (CCC: 0.974/0.974) and v_e (CCC: 0.916/0.917). WT produced the highest image error (SER: 21.8), while FT produced the least accurate K^trans (CCC: 0.842) and v_e (CCC: 0.799). Conclusion. TV/TGV_α² should be used as temporal constraints for CS DCE-MRI of the breast.
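
    Of these regularizers, the nuclear norm has a particularly compact proximal step: singular-value soft-thresholding of the voxels-by-frames matrix, as in this sketch (shapes and threshold are illustrative).

        import numpy as np

        def nuclear_norm_prox(X, tau):
            """Prox of tau*||X||_*: soft-threshold the singular values."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        # One proximal-gradient step for 0.5*||A(X) - y||^2 + tau*||X||_* is
        # X = nuclear_norm_prox(X - step * grad, step * tau).
        M = np.random.randn(500, 40)        # hypothetical voxels x time frames
        M_low_rank = nuclear_norm_prox(M, tau=5.0)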

  17. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
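
    For reference, the standard definition (textbook material, not specific to this paper): a measurable function f > 0 is regularly varying at infinity with index α if

        \[
          \lim_{t \to \infty} \frac{f(tx)}{f(t)} = x^{\alpha}
          \qquad \text{for all } x > 0,
        \]

    and a random variable X has a regularly varying right tail of index α > 0 if P(X > x) = x^{-α} L(x) with L slowly varying, i.e. L(tx)/L(t) → 1 as t → ∞ for every x > 0.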

  18. Two imaging techniques for 3D quantification of pre-cementation space for CAD/CAM crowns.

    Science.gov (United States)

    Rungruanganunt, Patchanee; Kelly, J Robert; Adams, Douglas J

    2010-12-01

    Internal three-dimensional (3D) "fit" of prostheses to prepared teeth is likely more important clinically than "fit" judged only at the level of the margin (i.e., marginal "opening"). This work evaluates two techniques for quantitatively defining 3D "fit", both using pre-cementation space impressions: X-ray microcomputed tomography (micro-CT) and quantitative optical analysis. Both techniques are of interest for comparison of CAD/CAM system capabilities and for documenting "fit" as part of clinical studies. Pre-cementation space impressions were taken of a single zirconia coping on its die using a low-viscosity poly(vinyl siloxane) impression material. Calibration specimens of this material were fabricated between the measuring platens of a micrometre. Both calibration curves and pre-cementation space impression data sets were obtained by examination using micro-CT and quantitative optical analysis. Regression analysis was used to compare calibration curves with calibration sets. Micro-CT calibration data showed tighter 95% confidence intervals and could measure over a wider thickness range than the optical technique. Regions of interest (e.g., lingual, cervical) were more easily analysed with optical image analysis, and this technique was more suitable for extremely thin impression walls. Each technique has advantages and limitations, but either has the potential for use as part of clinical studies or CAD/CAM protocol optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. The Ratios of Pre-emulsified Duck Skin for Optimized Processing of Restructured Ham.

    Science.gov (United States)

    Shim, Jae-Yun; Kim, Tae-Kyung; Kim, Young-Boong; Jeon, Ki-Hong; Ahn, Kwang-Il; Paik, Hyun-Dong; Choi, Yun-Sang

    2018-02-01

    The purpose of this study was to investigate the quality of duck ham formulated with duck skin processed through pre-emulsification. Experiments were carried out to measure proximate composition, cooking loss, emulsion stability, pH, color, texture profile analysis, apparent viscosity, and sensory characteristics. Duck ham was prepared with various ratios of duck skin to pre-emulsified duck skin as follows: Control (duck skin 30%), T1 (duck skin 20% + pre-emulsified duck skin 10%), T2 (duck skin 15% + pre-emulsified duck skin 15%), T3 (duck skin 10% + pre-emulsified duck skin 20%), and T4 (pre-emulsified duck skin 30%). As the ratio of duck skin to pre-emulsified skin changed, the quality of duck ham in terms of moisture content, fat content, cooking loss, emulsion stability, lightness, textural properties, apparent viscosity, and overall acceptability changed. The moisture content of T2 was the highest, and a 1:1 ratio of duck skin to pre-emulsified skin was the proper ratio to improve the quality characteristics of duck ham.

  20. Atmospheric temporal variations in the pre-landfall environment of typhoon Nangka (2015) observed by the Himawari-8 AHI

    Science.gov (United States)

    Lee, Yong-Keun; Li, Jun; Li, Zhenglong; Schmit, Timothy

    2017-11-01

    The next-generation Geostationary Operational Environmental Satellite-R series (GOES-R) Advanced Baseline Imager (ABI) legacy atmospheric profile (LAP) retrieval algorithm is applied to Advanced Himawari Imager (AHI) radiance measurements from the Himawari-8 satellite. Derived products include atmospheric temperature/moisture profiles, total precipitable water (TPW), and atmospheric stability indices. Since AHI and ABI have 9 similar infrared bands, the GOES-R ABI LAP retrieval algorithm can be applied to the AHI measurements with minimal modifications. With the capability of frequent (10-min interval) full-disk observations over the East Asia and Western Pacific regions, the AHI measurements are used to investigate the atmospheric temporal variation in the pre-landfall environment of typhoon Nangka (2015). Before its landfall over Japan, heavy rainfall from Nangka occurred over the southern region of Honshu Island. During the pre-landfall period, the trends of the AHI LAP products indicated the development of an atmospheric environment favorable for heavy rainfall. Even though the AHI LAP products are generated only in clear skies, the 10-min interval AHI measurements provide detailed information on the pre-landfall environment of typhoon Nangka. This study shows the capability of the AHI radiance measurements, together with the derived products, for depicting the detailed temporal features of the pre-landfall environment of a typhoon, which may also be possible for hurricanes and storms with ABI on the GOES-R satellite.

  1. An island of stability: art images and natural scenes—but not natural faces—show consistent aesthetic response in Alzheimer’s-related dementia.

    Directory of Open Access Journals (Sweden)

    Daniel eGraham

    2013-03-01

    Alzheimer's disease causes severe impairments in cognitive function, but there is evidence that aspects of aesthetic perception are somewhat spared, at least in early stages of the disease. People with early Alzheimer's-related dementia have been found to show degrees of stability over time in aesthetic judgment of paintings similar to controls, despite poor explicit memory for the images. Here we expand on this line of inquiry to investigate the types of perceptual judgments involved, and to test whether people in later stages of the disease also show evidence of preserved aesthetic judgment. Our results confirm that, compared to healthy controls, there is similar aesthetic stability in early-stage Alzheimer's disease (AD) in the absence of explicit memory, and we report here that people in later stages of the disease also show similar stability compared to controls. However, while we find that stability for portrait paintings, landscape paintings, and landscape photographs is not different from control group performance, stability for face photographs—which were matched for identity with the portrait paintings—was significantly impaired in the AD group. We suggest that partially spared face-processing systems interfere with aesthetic processing of natural faces in ways that are not found for artistic images and landscape photographs. Thus, our work provides a novel form of evidence regarding face processing in healthy and diseased ageing. Our work also gives insights into general theories of aesthetics, since people with Alzheimer's disease are not encumbered by many of the semantic and emotional factors that otherwise color aesthetic judgment. We conclude that, for people with Alzheimer's disease, basic aesthetic judgment of artistic images represents an island of stability in a condition that in most other respects causes profound cognitive disruption. As such, aesthetic response could be a promising route to ...

  2. Behaviors study of image registration algorithms in image guided radiation therapy

    International Nuclear Information System (INIS)

    Zou Lian; Hou Qing

    2008-01-01

    Objective: To study the behaviors of image registration algorithms and analyze the elements that influence the performance of image registration. Methods: Pre-known corresponding coordinates were assigned to the reference image and the moving image, and then the influence of region of interest (ROI) selection, transformation function initial parameters, and coupled parameter spaces on registration results was studied with a software platform developed in-house. Results: ROI selection had a clear influence on registration performance; an improperly chosen ROI resulted in a bad registration. Selecting the initial parameters of the transformation function based on pre-known information could improve the accuracy of image registration. Coupled parameter spaces increased the dependence of the image registration algorithm on ROI selection. Conclusions: For clinical IGRT it is necessary to obtain an ROI selection strategy (depending on the specific commercial software) correlated to tumor sites. Three suggestions for image registration technique developers are: automatic selection of the initial parameters of the transformation function based on pre-known information, development of specific image registration algorithms for specific image features, and assembly of real-time image registration algorithms according to the tumor sites selected by the software user. (authors)

  3. Regularities of catalytic reactions of hydrogen, ethane and ethylene with elementary sulfur

    International Nuclear Information System (INIS)

    Zazhigalov, V.A.

    1978-01-01

    The decisive role of metal-sulfur bond stability in determining the activity of metal sulfides (WS2, MoS2, CdS) in the reactions of elementary sulfur with hydrogen, ethane, and ethylene is shown. A regularity in the change of the relative reactivity of the given substances is found, and a conclusion is drawn about the uniformity of the investigated catalytic processes. The results of hydrogen, ethane, and ethylene oxidation by oxygen and by sulfur are compared, and the similarity of these processes is pointed out.

  4. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  5. Mixed Higher Order Variational Model for Image Recovery

    Directory of Open Access Journals (Sweden)

    Pengfei Liu

    2014-01-01

    A novel mixed higher-order regularizer involving the first- and second-degree image derivatives is proposed in this paper. Using spectral decomposition, we reformulate the new regularizer as a weighted L1-L2 mixed norm of image derivatives. Due to the equivalent formulation of the proposed regularizer, an efficient fast projected gradient algorithm combined with monotone fast iterative shrinkage thresholding, called FPG-MFISTA, is designed to solve the resulting variational image recovery problems under a majorization-minimization framework. Finally, we demonstrate the effectiveness of the proposed regularization scheme through experimental comparisons with the total variation (TV) scheme, the nonlocal TV scheme, and current second-degree methods. Specifically, the proposed approach achieves better results than related state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and restoration quality.

  6. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.

  7. Regularization of Instantaneous Frequency Attribute Computations

    Science.gov (United States)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computing a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal; (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979), as modified by Fomel (2007), and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or the Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes, and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. Time-Frequency Analysis: Theory and Applications. USA: Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
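
    A hedged Python sketch of method (1): instantaneous frequency from the analytic signal via scipy.signal.hilbert, stabilized here by simple moving-average smoothing rather than the L-curve/GCV-regularized division described above.

        import numpy as np
        from scipy.signal import hilbert

        def instantaneous_frequency(x, fs, smooth=9):
            phase = np.unwrap(np.angle(hilbert(x)))      # instantaneous phase
            f_inst = np.diff(phase) * fs / (2.0 * np.pi)
            kernel = np.ones(smooth) / smooth            # crude stabilization
            return np.convolve(f_inst, kernel, mode='same')

        fs = 500.0
        t = np.arange(0, 2, 1 / fs)
        chirp = np.sin(2 * np.pi * (10 * t + 5 * t ** 2))  # 10 -> 30 Hz chirp
        f_est = instantaneous_frequency(chirp, fs)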

  8. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  9. Diagnosis of Alzheimer’s Disease Based on Structural MRI Images Using a Regularized Extreme Learning Machine and PCA Features

    Directory of Open Access Journals (Sweden)

    Ramesh Kumar Lama

    2017-01-01

    Alzheimer's disease (AD) is a progressive, neurodegenerative brain disorder that attacks neurotransmitters, brain cells, and nerves, affecting brain function, memory, and behavior, finally causing dementia in elderly people. Despite its significance, there is currently no cure for it; however, medicines available on prescription can help delay the progress of the condition. Thus, early diagnosis of AD is essential for patient care and relevant research. Major challenges in proper diagnosis of AD using existing classification schemes are the availability of a small number of training samples and the large number of possible feature representations. In this paper, we present and compare AD diagnosis approaches using structural magnetic resonance (sMR) images to discriminate AD, mild cognitive impairment (MCI), and healthy control (HC) subjects using a support vector machine (SVM), an import vector machine (IVM), and a regularized extreme learning machine (RELM). A greedy score-based feature selection technique is employed to select important feature vectors. In addition, a kernel-based discriminative approach is adopted to deal with complex data distributions. We compare the performance of these classifiers on volumetric sMR image data from Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets. Experiments on the ADNI datasets showed that RELM with the feature selection approach can significantly improve classification accuracy of AD from MCI and HC subjects.
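
    The RELM core has a compact closed form: random input weights, a nonlinear hidden layer, and a ridge-regularized least-squares solve for the output weights. The sketch below shows only that core (the paper's feature selection and kernel extension are omitted); sizes and the regularization constant are illustrative.

        import numpy as np

        def train_relm(X, Y, n_hidden=500, lam=1.0, seed=0):
            """X: (n_samples, n_features); Y: one-hot labels (n_samples, n_classes)."""
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((X.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            H = np.tanh(X @ W + b)                     # random hidden features
            beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
            return W, b, beta

        def predict_relm(X, W, b, beta):
            return (np.tanh(X @ W + b) @ beta).argmax(axis=1)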

  10. Manifold regularized multi-task feature selection for multi-modality classification in Alzheimer's disease.

    Science.gov (United States)

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2013-01-01

    Accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment, MCI), is very important for possible delay and early treatment of the disease. Recently, multi-modality methods have been used for fusing information from multiple different and complementary imaging and non-imaging modalities. Although there are a number of existing multi-modality methods, few of them have addressed the problem of joint identification of disease-related brain regions from multi-modality data for classification. In this paper, we proposed a manifold regularized multi-task learning framework to jointly select features from multi-modality data. Specifically, we formulate the multi-modality classification as a multi-task learning framework, where each task focuses on the classification based on each modality. In order to capture the intrinsic relatedness among multiple tasks (i.e., modalities), we adopted a group sparsity regularizer, which ensures only a small number of features to be selected jointly. In addition, we introduced a new manifold based Laplacian regularization term to preserve the geometric distribution of original data from each task, which can lead to the selection of more discriminative features. Furthermore, we extend our method to the semi-supervised setting, which is very important since the acquisition of a large set of labeled data (i.e., diagnosis of disease) is usually expensive and time-consuming, while the collection of unlabeled data is relatively much easier. To validate our method, we have performed extensive evaluations on the baseline Magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) data of Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our experimental results demonstrate the effectiveness of the proposed method.
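
    The group-sparsity regularizer referred to here is commonly the L2,1 norm of the feature-by-task weight matrix: penalizing the sum of row norms drives entire rows to zero, deselecting a feature jointly across all modalities. A minimal sketch:

        import numpy as np

        def l21_norm(W):
            """Sum of L2 norms of the rows of W (features x tasks)."""
            return np.sum(np.linalg.norm(W, axis=1))

        W = np.random.randn(100, 3)       # 100 features, 3 modality-specific tasks
        penalty = l21_norm(W)             # added to the joint multi-task loss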

  11. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps, instead of solving a single minimization problem. The simulation and experimental results show that our method achieves high ghost imaging quality in terms of PSNR and visual observation.
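
    A minimal projected Landweber iteration, the reconstruction half of such a decomposition (the paper's guided-filter denoising would be interleaved between iterations); the sensing matrix and box constraints here are assumptions for illustration.

        import numpy as np

        def projected_landweber(A, y, n_iter=200, lo=0.0, hi=1.0):
            step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step from ||A||_2
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + step * A.T @ (y - A @ x)       # Landweber gradient step
                x = np.clip(x, lo, hi)                 # projection onto constraints
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((128, 256))            # hypothetical speckle patterns
        x_true = (rng.random(256) < 0.1) * rng.random(256)
        x_rec = projected_landweber(A, A @ x_true)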

  12. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal

  13. Beta activity in the premotor cortex is increased during stabilized as compared to normal walking

    Directory of Open Access Journals (Sweden)

    Sjoerd M. Bruijn

    2015-10-01

    Walking on two legs is inherently unstable. Still, we humans perform remarkably well at it, mostly without falling. To gain more understanding of the role of the brain in controlling gait stability, we measured brain activity using electro-encephalography (EEG) during stabilized and normal walking. Subjects walked on a treadmill in two conditions, each lasting 10 minutes: normal walking, and walking while being laterally stabilized by elastic cords. Kinematics of the trunk and feet, electromyography (EMG) of neck muscles, as well as 64-channel EEG were recorded. To assess gait stability, the local divergence exponent, step width, and trunk range of motion were calculated from the kinematic data. We used independent component analysis to remove movement, EMG, and eyeblink artifacts from the EEG, after which dynamic imaging of coherent sources (DICS) beamformers were determined to identify cortical sources that showed a significant difference between conditions. Stabilized walking led to a significant increase in gait stability, i.e., lower local divergence exponents. Beamforming analysis of the beta-band activity revealed significant sources in bilateral premotor cortices. Projection of sensor data onto these sources showed a significant difference only in the left premotor area, with higher beta power during stabilized walking, specifically around push-off, although only significant around contralateral push-off. It appears that even during steady gait the cortex is involved in the control of stability.

  14. Stability and bioaccessibility of anthocyanins in bakery products enriched with anthocyanins.

    Science.gov (United States)

    Karakaya, Sibel; Simsek, Sebnem; Eker, Alper Tolga; Pineda-Vadillo, Carlos; Dupont, Didier; Perez, Beatriz; Viadel, Blanca; Sanz-Buenhombre, Marisa; Rodriguez, Alberto Guadarrama; Kertész, Zsófia; Hegyi, Adrienn; Bordoni, Alessandra; El, Sedef Nehir

    2016-08-10

    Anthocyanins, water-soluble polyphenols, have been associated with several beneficial health effects. The aim of this study was to determine how the baking process and food matrix affect anthocyanin stability and bioaccessibility in bakery products, in order to develop functional foods. Three well-known, regularly consumed bakery products (buns, breadsticks, and biscuits) were enriched with anthocyanins (AC) isolated from grape skin, alone or in combination with docosahexaenoic acid (AC + DHA), to provide knowledge on AC as active ingredients in real food systems rather than as pure compounds. The anthocyanin amounts added to the formulations of buns, breadsticks, and biscuits were 34 mg per 100 g, 40 mg per 100 g, and 37 mg per 100 g, respectively. The effect of processing, storage, and the food matrix on AC stability and bioaccessibility was investigated. In addition, the sensory properties of the bakery products were evaluated. Breadsticks enriched with AC and AC + DHA received the lowest scores in the pre-screening sensory test and were therefore excluded from further analysis. AC retention, monitored by determination of malvidin 3-O-glucoside, in the bun and biscuit after baking was 95.9% (13.6 mg per 100 g) and 98.6% (15.2 mg per 100 g), respectively. Biscuits and buns enriched only with AC showed significantly higher anthocyanin bioaccessibilities (57.26% and 57.30%, respectively) than the same products enriched with AC + DHA. AC stability in enriched products stored for 21 days was significantly lower than in products stored for 7 days.

  15. Neutrophil to lymphocyte ratio as the main predictor of peripheral artery disease in regular hemodialysis patients

    Science.gov (United States)

    Siregar, R. H.; Muzasti, R. A.

    2018-03-01

    Cardiovascular disease is the leading cause of morbidity and mortality in chronic kidney disease (CKD) patients who have undergone dialysis. Today, the neutrophil to lymphocyte ratio (NLR) is considered an indicator of the severity and extent of systemic inflammation and atherosclerosis in patients with renal and cardiovascular disorders. To examine the relationship between NLR and PAD in regular hemodialysis patients, a cross-sectional study with Ankle-Brachial Index (ABI) measurement and peripheral blood examination was performed on 72 patients on regular hemodialysis for ≥6 months. An ABI value ≤0.9 was considered PAD, and NLR ≥3.5 was considered abnormal based on pre-existing research. The prevalence of PAD was 29.16%. The Chi-square test showed significant correlations between PAD and NLR (p = 0.0001), the calcium-phosphorus product (p = 0.0001), and type 2 diabetes mellitus (T2DM) (p = 0.039); multivariate analysis showed that NLR was an independent predictor of PAD in regular hemodialysis patients (RR = 2.271, p = 0.027). In conclusion, NLR, a new inflammatory marker from peripheral blood examination, may serve as a marker of PAD in regular hemodialysis patients, in addition to the calcium-phosphorus product and T2DM.

  16. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    Science.gov (United States)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In the process of image restoration, the result can differ greatly from the real image because of the presence of noise. In order to solve this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. It is considered that the information in the gradient domain is better suited for estimating the blur kernel, so the blur kernel is estimated in the gradient domain. This problem can be implemented quickly in the frequency domain by the fast Fourier transform (FFT). In addition, in order to improve the effectiveness of the algorithm, a multi-scale iterative optimization method is added. The proposed blind deconvolution method based on an L1/L2 regularization prior in the gradient space can obtain a unique and stable solution in the process of image restoration, which not only keeps the edges and details of the image but also ensures the accuracy of the results.
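
    The prior itself is simple to state in code: the ratio of the L1 norm to the L2 norm of the gradient magnitudes, which is small for sharp (gradient-sparse) images and grows under blur. A small sketch:

        import numpy as np

        def l1_over_l2_gradient(img, eps=1e-8):
            gx, gy = np.gradient(img.astype(float))
            g = np.sqrt(gx ** 2 + gy ** 2).ravel()      # gradient magnitudes
            return g.sum() / (np.linalg.norm(g) + eps)  # lower = sharper image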

  17. Self-oriented Ag-based polycrystalline cubic nanostructures through polymer stabilization

    Science.gov (United States)

    Alonso, Amanda; Vigués, Núria; Rodríguez-Rodríguez, Rosalía; Borrisé, Xavier; Muñoz, María; Muraviev, Dmitri N.; Mas, Jordi; Muñoz-Berbel, Xavier

    2016-10-01

    This paper presents a study of the dynamics of the formation of polymer-assisted, highly orientated polycrystalline cubic structures (CS) by a fractal-mediated mechanism. This mechanism involves the formation of seed Ag@Co nanoparticles by InterMatrix Synthesis and subsequent overgrowth after incubation at low temperature in chloride and phosphate solutions. These ions promote the dissolution and recrystallization, in an ordered configuration, of pre-synthesized nanoparticles initially embedded in negatively charged polymeric matrices. During recrystallization, silver ions aggregate into AgCl@Co fractal-like structures, which then evolve into regular polycrystalline solid nanostructures (e.g., CS) in a single crystallization step on specific regions of the ion exchange resin (IER), which maintains the integrity of the polycrystalline nanocubes. Here, we study the essential role of the IER in the formation of these CS and in the maintenance of their integrity and stability. This synthesis protocol may easily be extended to other nanoparticle compositions, providing an interesting, cheap, and simple alternative for cubic structure formation and isolation.

  18. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
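
    In its generic zeroth-order form (shown for reference; the CSR processing uses a more elaborate constraint design), Tikhonov regularization solves

        \[
          \hat{x}_{\lambda}
            = \arg\min_{x} \ \|Ax - b\|_2^2 + \lambda^2 \|x\|_2^2
            = \left(A^{\top} A + \lambda^2 I\right)^{-1} A^{\top} b,
        \]

    where a larger λ damps the poorly constrained, stripe-prone components of the solution at the cost of some attenuation of signal.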

  19. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
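
    The correntropy objective is easy to illustrate: a Gaussian kernel of the residuals whose mean is maximized, so a single grossly mislabeled sample barely moves the objective, unlike a squared loss. The kernel width below is illustrative.

        import numpy as np

        def correntropy(y_true, y_pred, sigma=1.0):
            """Empirical correntropy with a Gaussian kernel."""
            r = y_true - y_pred
            return np.mean(np.exp(-r ** 2 / (2.0 * sigma ** 2)))

        clean = correntropy(np.zeros(100), np.zeros(100))                # 1.0
        outlier = correntropy(np.zeros(100), np.r_[np.zeros(99), 10.0])  # ~0.99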

  1. Statistical regularities in art: Relations with visual coding and perception.

    Science.gov (United States)

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. Time-optimized high-resolution readout-segmented diffusion tensor imaging.

    Directory of Open Access Journals (Sweden)

    Gernot Reishofer

    Full Text Available Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an emerging technique enabling the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper its clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly to the entire diffusion tensor. The spatially varying regularization parameter is determined automatically from spatial variations in signal-to-noise ratio, thus avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion weighted images by means of complex independent component analysis. Moreover, the combination of these features enables fully user-independent processing of the diffusion data. Tractography from in vivo data and from a software phantom demonstrates the advantage of the spatially varying regularization over un-regularized data with respect to parameters relevant for fiber-tracking, such as Mean Fiber Length, Track Count, Volume and Voxel Count. Specifically, for in vivo data, findings suggest that tractography from the regularized diffusion tensor based on one measurement (16 min) generates results comparable to those from the un-regularized data with three averages (48 min). This significant reduction in scan time renders high-resolution (1 × 1 × 2.5 mm³) diffusion tensor imaging of the entire brain applicable in a clinical context.
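    The idea of a spatially varying regularization weight can be sketched compactly. The toy below applies smoothed-TV gradient descent to a scalar image with a per-pixel fidelity map, whereas the paper regularizes the full diffusion tensor and derives the map from a complex-ICA noise estimate; the step size and smoothing constant here are illustrative.

    ```python
    import numpy as np

    def tv_denoise_spatial_lambda(f, lam_map, n_iter=300, tau=0.02, eps=0.1):
        """Gradient descent on  sum lam(x)/2*(u-f)^2 + sum sqrt(|grad u|^2 + eps^2).

        lam_map is a per-pixel fidelity weight: large where SNR is high (keep
        the data), small where it is low (smooth more). Wrap-around boundaries
        are used for brevity.
        """
        u = f.astype(float).copy()
        for _ in range(n_iter):
            ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
            uy = np.diff(u, axis=0, append=u[-1:, :])
            mag = np.sqrt(ux**2 + uy**2 + eps**2)
            px, py = ux / mag, uy / mag
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u -= tau * (lam_map * (u - f) - div)
        return u
    ```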

  3. Image denoising by a direct variational minimization

    Directory of Open Access Journals (Sweden)

    Pilipović Stevan

    2011-01-01

    Full Text Available Abstract In this article we introduce a novel method for image de-noising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on the direct minimization of an energy functional containing a minimal-surface regularizer that uses a fractional gradient. The minimization is carried out independently on every predefined patch of the image. By doing so, we avoid the use of an artificial-time PDE model with its inherent problems of finding the optimal stopping time as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on that particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with a couple of PDE-based methods, where we get significantly better denoising results, especially on oscillatory regions.
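    A stripped-down version of the patch-wise strategy, with the fractional minimal-surface regularizer and Ritz minimization replaced by a simple quadratic (H1) energy solved by gradient descent, might look as follows; the edge-based adaptation of the multiplier is a crude stand-in for the paper's pre-processing step.

    ```python
    import numpy as np

    def denoise_patchwise(f, patch=16, base_mu=2.0, iters=100, tau=0.05):
        """On each patch, independently minimize
            E(u) = ||u - p||^2 + mu_P * ||grad u||^2,
        shrinking mu_P on patches with strong discontinuities so edges are
        preserved. A quadratic stand-in for the paper's fractional
        minimal-surface regularizer and Ritz minimization.
        """
        out = f.astype(float).copy()
        H, W = f.shape
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                p = out[i:i+patch, j:j+patch].copy()
                gx = np.diff(p, axis=1, append=p[:, -1:])
                gy = np.diff(p, axis=0, append=p[-1:, :])
                edginess = np.mean(np.hypot(gx, gy))     # crude pre-processing
                mu = base_mu / (1.0 + 10.0 * edginess)   # adaptive multiplier
                u = p.copy()
                for _ in range(iters):                   # explicit gradient flow
                    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
                    u -= tau * ((u - p) - mu * lap)
                out[i:i+patch, j:j+patch] = u
        return out
    ```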

  4. On Landweber–Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces

    International Nuclear Information System (INIS)

    Leitão, A; Alves, M Marques

    2012-01-01

    In this paper, iterative regularization methods of Landweber–Kaczmarz type are considered for solving systems of ill-posed equations modeled by (finitely many) operators acting between Banach spaces. Using assumptions of uniform convexity and smoothness on the parameter space, we are able to prove a monotonicity result for the proposed method, as well as to establish convergence (for exact data) and stability results (in the noisy data case). (paper)
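    In the Hilbert-space special case (where the duality mappings of the Banach-space method reduce to identities), a loping Landweber–Kaczmarz sweep is short to write down. The discrepancy-based skipping rule and parameter names below follow the general literature, not necessarily this paper's exact formulation.

    ```python
    import numpy as np

    def landweber_kaczmarz(ops, ys, x0, omega=0.9, delta=0.0, tau=2.0, sweeps=50):
        """Cyclic Landweber-Kaczmarz for the system A_i x = y_i.

        Hilbert-space simplification: each equation contributes one damped
        gradient step per sweep, and a step is skipped ("loping") when its
        residual is already below the noise level tau * delta.
        """
        x = x0.astype(float).copy()
        for _ in range(sweeps):
            for A, y in zip(ops, ys):
                r = A @ x - y
                if np.linalg.norm(r) > tau * delta:
                    step = omega / np.linalg.norm(A, 2) ** 2  # keeps the step contractive
                    x -= step * (A.T @ r)
        return x
    ```

    Splitting one large ill-posed system into row blocks and cycling through them in this way is what gives Kaczmarz-type methods their low per-iteration cost.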

  5. Pre-targeted tumor imaging with avidin-McAb and 99Tcm-DTPA-Biotin

    International Nuclear Information System (INIS)

    Zhang Jinming; Tian Jiahe; Wang Yuqi; Liu Xi; Sun Xin

    2002-01-01

    Objective: The biotin-avidin system (BAS) is used for pre-targeting in radioimmunoimaging in order to decrease the radiation background and dose associated with directly labelled McAb. The authors tried to substitute 99Tcm for 111In in labelling DTPA-biotin, to evaluate the value of 99Tcm-DTPA-biotin in the BAS. Methods: DTPA-biotin solution was mixed with SnCl2 and then with freshly eluted 99Tcm. The solution was incubated for 10 min at room temperature. Mice bearing lung tumor (LA-795), with and without metastases in the lung, underwent a 3-step pre-targeting test. Briefly, biotin-C50 was injected first; avidin and 99Tcm-DTPA-biotin were then given 1 day and 2 days later, respectively. C50 directly labelled with 99Tcm was used as the control agent. Results: The labelling yield of 99Tcm-DTPA-biotin was over 90%. The amount of SnCl2 was the key factor in labelling. In the 3-step groups, the tumor could be seen with a γ camera at 2 h after injection of 99Tcm-DTPA-biotin. The tracer uptake in tumor was (1.35 ± 0.45)% ID/g at 2 h after injection; Tumor/Blood (T/B) was 5.86 and T/Muscle (T/M) was 8.43. In the control group, which received 99Tcm-DTPA-biotin only, the T/B was 0.85 and the T/M was 1.1. For the directly labelled C50, the T/B was 1.65 and the T/M was 2.0 at 8 h after injection. Conclusion: The avidin-biotin pre-targeting system can be labelled with 99Tcm, and the BAS can image the tumor as early as 2 h after injection.

  6. Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.

    Science.gov (United States)

    Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu

    2018-08-01

    The purpose of this work was to improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because the series of echo images has a low signal-to-noise ratio. In this study, we proposed to exploit the neighboring pixels as regularization terms and to adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal-means-filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal-means-filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the other two algorithms in accuracy and precision, in terms of the mean and standard deviation of R2* measurements in selected regions of interest, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
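    The flavor of such a fit can be sketched for a single pixel: a mono-exponential decay model with an extra penalty pulling R2* toward its neighbors' estimates, weighted by inter-pixel signal similarity. The weighting kernel, bandwidth h and penalty strength alpha below are illustrative choices, not the published PCANR formulation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fit_r2star_regularized(sig, te, nbr_sigs, nbr_r2, alpha=0.05, h=20.0):
        """Fit S(TE) = S0 * exp(-R2* TE) for one pixel, with a penalty pulling
        R2* toward neighboring estimates, weighted by signal similarity.
        sig: echo magnitudes; te: echo times (s); R2* in 1/s.
        """
        # similarity weights: near-identical neighbor decays get weight ~1
        w = np.array([np.exp(-np.sum((sig - s) ** 2) / h**2) for s in nbr_sigs])

        def residuals(p):
            s0, r2 = p
            data_term = s0 * np.exp(-r2 * te) - sig
            reg_term = np.sqrt(alpha * w) * (r2 - nbr_r2)
            return np.concatenate([data_term, reg_term])

        p0 = np.array([sig.max(), 50.0])
        res = least_squares(residuals, p0, bounds=([0, 0], [np.inf, 2000]))
        return res.x  # (S0, R2*)
    ```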

  7. Variability in Regularity: Mining Temporal Mobility Patterns in London, Singapore and Beijing Using Smart-Card Data.

    Science.gov (United States)

    Zhong, Chen; Batty, Michael; Manley, Ed; Wang, Jiaqiu; Wang, Zijia; Chen, Feng; Schmitt, Gerhard

    2016-01-01

    To discover regularities in human mobility is of fundamental importance to our understanding of urban dynamics, and essential to city and transport planning, urban management and policymaking. Previous research has revealed universal regularities at mainly aggregated spatio-temporal scales, but when we zoom into finer scales, considerable heterogeneity and diversity are observed instead. The fundamental question we address in this paper is at what scales the regularities we detect are stable, explicable, and sustainable. This paper thus proposes a basic measure of variability to assess the stability of such regularities, focusing mainly on changes over a range of temporal scales. We demonstrate this by comparing regularities in the urban mobility patterns of three world cities, namely London, Singapore and Beijing, using one week of smart-card data. The results show that variations in regularity scale as non-linear functions of the temporal resolution, which we measure over a scale from 1 minute to 24 hours, thus reflecting the diurnal cycle of human mobility. A particularly dramatic increase in variability occurs up to the temporal scale of about 15 minutes in all three cities, and this implies that limits exist when we look forward or backward with respect to making short-term predictions. The degree of regularity in fact varies from city to city, with Beijing and Singapore showing higher regularity in comparison to London across all temporal scales. A detailed discussion is provided, which relates the analysis to various characteristics of the three cities. In summary, this work contributes to a deeper understanding of regularities in patterns of transit use from variations in volumes of travellers entering subway stations, it establishes a generic analytical framework for comparative studies using urban mobility data, and it provides key points for the management of variability by policy-makers intent on making the travel experience more amenable.
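    The records here do not spell out the variability measure, but the dependence on temporal resolution can be illustrated by binning tap-in events at several resolutions and comparing day-to-day variation of the resulting profiles; the coefficient of variation used below is an assumption, not necessarily the paper's statistic.

    ```python
    import numpy as np

    def regularity_vs_resolution(event_minutes, days, resolutions=(1, 5, 15, 60)):
        """event_minutes: minute-of-day (0-1439) per tap-in; days: day index
        per event. For each resolution, build one entry-volume profile per
        day and report the mean across-day coefficient of variation per bin.
        """
        out = {}
        n_days = int(days.max()) + 1
        for res in resolutions:
            bins = 1440 // res
            profiles = np.zeros((n_days, bins))
            for d in range(n_days):
                m = event_minutes[days == d]
                profiles[d] = np.histogram(m // res, bins=bins, range=(0, bins))[0]
            mean, std = profiles.mean(axis=0), profiles.std(axis=0)
            cv = np.divide(std, mean, out=np.full_like(std, np.nan), where=mean > 0)
            out[res] = np.nanmean(cv)
        return out
    ```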

  8. Variability in Regularity: Mining Temporal Mobility Patterns in London, Singapore and Beijing Using Smart-Card Data

    Science.gov (United States)

    Zhong, Chen; Batty, Michael; Manley, Ed; Wang, Jiaqiu; Wang, Zijia; Chen, Feng; Schmitt, Gerhard

    2016-01-01

    To discover regularities in human mobility is of fundamental importance to our understanding of urban dynamics, and essential to city and transport planning, urban management and policymaking. Previous research has revealed universal regularities at mainly aggregated spatio-temporal scales, but when we zoom into finer scales, considerable heterogeneity and diversity are observed instead. The fundamental question we address in this paper is at what scales the regularities we detect are stable, explicable, and sustainable. This paper thus proposes a basic measure of variability to assess the stability of such regularities, focusing mainly on changes over a range of temporal scales. We demonstrate this by comparing regularities in the urban mobility patterns of three world cities, namely London, Singapore and Beijing, using one week of smart-card data. The results show that variations in regularity scale as non-linear functions of the temporal resolution, which we measure over a scale from 1 minute to 24 hours, thus reflecting the diurnal cycle of human mobility. A particularly dramatic increase in variability occurs up to the temporal scale of about 15 minutes in all three cities, and this implies that limits exist when we look forward or backward with respect to making short-term predictions. The degree of regularity in fact varies from city to city, with Beijing and Singapore showing higher regularity in comparison to London across all temporal scales. A detailed discussion is provided, which relates the analysis to various characteristics of the three cities. In summary, this work contributes to a deeper understanding of regularities in patterns of transit use from variations in volumes of travellers entering subway stations, it establishes a generic analytical framework for comparative studies using urban mobility data, and it provides key points for the management of variability by policy-makers intent on making the travel experience more amenable. PMID:26872333

  9. Variability in Regularity: Mining Temporal Mobility Patterns in London, Singapore and Beijing Using Smart-Card Data.

    Directory of Open Access Journals (Sweden)

    Chen Zhong

    Full Text Available To discover regularities in human mobility is of fundamental importance to our understanding of urban dynamics, and essential to city and transport planning, urban management and policymaking. Previous research has revealed universal regularities at mainly aggregated spatio-temporal scales, but when we zoom into finer scales, considerable heterogeneity and diversity are observed instead. The fundamental question we address in this paper is at what scales the regularities we detect are stable, explicable, and sustainable. This paper thus proposes a basic measure of variability to assess the stability of such regularities, focusing mainly on changes over a range of temporal scales. We demonstrate this by comparing regularities in the urban mobility patterns of three world cities, namely London, Singapore and Beijing, using one week of smart-card data. The results show that variations in regularity scale as non-linear functions of the temporal resolution, which we measure over a scale from 1 minute to 24 hours, thus reflecting the diurnal cycle of human mobility. A particularly dramatic increase in variability occurs up to the temporal scale of about 15 minutes in all three cities, and this implies that limits exist when we look forward or backward with respect to making short-term predictions. The degree of regularity in fact varies from city to city, with Beijing and Singapore showing higher regularity in comparison to London across all temporal scales. A detailed discussion is provided, which relates the analysis to various characteristics of the three cities. In summary, this work contributes to a deeper understanding of regularities in patterns of transit use from variations in volumes of travellers entering subway stations, it establishes a generic analytical framework for comparative studies using urban mobility data, and it provides key points for the management of variability by policy-makers intent on making the travel experience more amenable.

  10. Replication protein A, the laxative that keeps DNA regular: The importance of RPA phosphorylation in maintaining genome stability.

    Science.gov (United States)

    Byrne, Brendan M; Oakley, Gregory G

    2018-04-20

    The eukaryotic ssDNA-binding protein, Replication protein A (RPA), was first discovered almost three decades ago. Since then, much progress has been made to elucidate the critical roles of RPA in DNA metabolic pathways that help promote genomic stability. The canonical RPA heterotrimer (RPA1-3) is an essential coordinator of DNA metabolism that interacts with ssDNA and numerous protein partners to coordinate its roles in DNA replication, repair, recombination and telomere maintenance. An alternative form of RPA, termed aRPA, is formed by a complex of RPA4 with RPA1 and RPA3. aRPA is expressed differentially in cells compared to canonical RPA and has been shown to inhibit canonical RPA function while allowing for regular maintenance of cell viability. Interestingly, while aRPA is defective in DNA replication and cell cycle progression, it was shown to play a supporting role in nucleotide excision repair and recombination. The binding domains of canonical RPA interact with a growing number of partners involved in numerous genome maintenance processes. The protein interactions of the RPA-ssDNA complex are governed not only by competition between the binding proteins but also by post-translational modifications such as phosphorylation. Phosphorylation of RPA2 is an important post-translational modification of the RPA complex, and is essential for directing context-specific functions of the RPA complex in the DNA damage response. Due to the importance of RPA in cellular metabolism, it has been identified as an appealing target for chemotherapeutic drug development that could be used in future cancer treatment regimens. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Blind image fusion for hyperspectral imaging with the directional total variation

    Science.gov (United States)

    Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane

    2018-04-01

    Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsically low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution that was obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem where both a fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggest that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.
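    The regularizer itself is compact to state: gradient components of the reconstruction that align with the side image's edges are discounted, and the rest are penalized as in ordinary TV. The sketch below only evaluates such a directional TV functional; in the paper it sits inside a blind super-resolution model with a kernel estimate. The weighting parameter gamma is illustrative.

    ```python
    import numpy as np

    def directional_tv(u, guide, gamma=0.95, eps=1e-6):
        """Evaluate dTV(u) = sum_x |(I - v v^T) grad u(x)|, with the vector
        field v taken from the guide image's normalized gradients (scaled by
        gamma < 1). Gradients of u parallel to the guide's edges are cheap;
        perpendicular ones cost as in ordinary TV.
        """
        def grad(a):
            ax = np.diff(a, axis=1, append=a[:, -1:])
            ay = np.diff(a, axis=0, append=a[-1:, :])
            return ax, ay

        gx, gy = grad(guide.astype(float))
        norm = np.sqrt(gx**2 + gy**2) + eps
        vx, vy = gamma * gx / norm, gamma * gy / norm   # edge direction field

        ux, uy = grad(u.astype(float))
        dot = vx * ux + vy * uy
        px, py = ux - dot * vx, uy - dot * vy           # damp the guided direction
        return np.sum(np.sqrt(px**2 + py**2))
    ```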

  12. Pediatric CT: Implementation of ASIR for Substantial Radiation Dose Reduction While Maintaining Pre-ASIR Image Noise

    Science.gov (United States)

    Brady, Samuel L.; Moore, Bria M.; Yee, Brian S.; Kaufman, Robert A.

    2015-01-01

    Purpose: To determine a comprehensive method for the implementation of adaptive statistical iterative reconstruction (ASIR) for maximal radiation dose reduction in pediatric computed tomography (CT) without changing the magnitude of noise in the reconstructed image or the contrast-to-noise ratio (CNR) in the patient. Materials and Methods: The institutional review board waived the need to obtain informed consent for this HIPAA-compliant quality analysis. Chest and abdominopelvic CT images obtained before ASIR implementation (183 patient examinations; mean patient age, 8.8 years ± 6.2 [standard deviation]; range, 1 month to 27 years) were analyzed for image noise and CNR. These measurements were used in conjunction with noise models derived from anthropomorphic phantoms to establish new beam current-modulated CT parameters to implement 40% ASIR at 120 and 100 kVp without changing noise texture or magnitude. Image noise was assessed in images obtained after ASIR implementation (492 patient examinations; mean patient age, 7.6 years ± 5.4; range, 2 months to 28 years) the same way it was assessed in the pre-ASIR analysis. Dose reduction was determined by comparing size-specific dose estimates in the pre- and post-ASIR patient cohorts. Data were analyzed with paired t tests. Results: With 40% ASIR implementation, the average relative dose reduction for chest CT was 39% (2.7/4.4 mGy), with a maximum reduction of 72% (5.3/18.8 mGy). The average relative dose reduction for abdominopelvic CT was 29% (4.8/6.8 mGy), with a maximum reduction of 64% (7.6/20.9 mGy). Beam current modulation was unnecessary for patients weighing 40 kg or less. The difference between 0% and 40% ASIR noise magnitude was less than 1 HU, with statistically nonsignificant increases in patient CNR at 100 kVp of 8% (15.3/14.2; P = .41) for chest CT and 13% (7.8/6.8; P = .40) for abdominopelvic CT. Conclusion: Substantial radiation dose reduction at pediatric CT was achieved with 40% ASIR implementation while image noise and CNR were maintained at pre-ASIR levels.

  13. Full-field measurement of micromotion around a cementless femoral stem using micro-CT imaging and radiopaque markers.

    Science.gov (United States)

    Malfroy Camine, V; Rüdiger, H A; Pioletti, D P; Terrier, A

    2016-12-08

    A good primary stability of cementless femoral stems is essential for the long-term success of total hip arthroplasty. Experimental measurement of implant micromotion with linear variable differential transformers (LVDTs) is commonly used to assess implant primary stability in pre-clinical testing, but these measurements are often limited to a few distinct points at the interface. New techniques based on micro-computed tomography (micro-CT) have recently been introduced, such as Digital Volume Correlation (DVC) or marker-based approaches. DVC is, however, limited to measurements around non-metallic implants due to metal-induced imaging artifacts, and marker-based techniques have been confined to a small portion of the implant. In this paper, we present a technique based on micro-CT imaging and radiopaque markers that provides the first full-field micromotion measurement at the entire bone-implant interface of a cementless femoral stem implanted in a cadaveric femur. Micromotion was measured during compression and torsion. Over 300 simultaneous measurement points were obtained. Micromotion amplitude ranged from 0 to 24 µm in compression and from 0 to 49 µm in torsion. Peak micromotion was distal in compression and proximal in torsion. The technique bias was 5.1 µm and its repeatability standard deviation was 4 µm. The method was thus highly reliable and compared well with LVDT results reported in the literature. These results indicate that this micro-CT-based technique is well suited to observing local variations in primary stability around metallic implants. Possible applications include pre-clinical testing of implants and validation of patient-specific models for pre-operative planning. Copyright © 2016 Elsevier Ltd. All rights reserved.
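    Marker-based micromotion measurement ultimately reduces to rigid-body registration between load states. A minimal sketch, under the assumption that bone markers define the common reference frame and that implant-marker residuals are read as local micromotion (the paper's actual pipeline includes further corrections):

    ```python
    import numpy as np

    def rigid_fit(P, Q):
        """Kabsch: least-squares R, t with R @ P_i + t ~= Q_i (P, Q: (N, 3))."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp

    def micromotion(bone0, bone1, implant0, implant1):
        """Express the loaded scan in the unloaded bone frame, then read the
        residual displacement of each implant marker as local micromotion."""
        R, t = rigid_fit(bone1, bone0)        # loaded -> unloaded bone frame
        implant1_in0 = implant1 @ R.T + t
        return np.linalg.norm(implant1_in0 - implant0, axis=1)
    ```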

  14. Visual attention and stability

    NARCIS (Netherlands)

    Mathot, Sebastiaan; Theeuwes, Jan

    2011-01-01

    In the present review, we address the relationship between attention and visual stability. Even though the retinal image changes dramatically with each eye, head and body movement, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as

  15. Multi-modality molecular imaging: pre-clinical laboratory configuration

    Science.gov (United States)

    Wu, Yanjun; Wellen, Jeremy W.; Sarkar, Susanta K.

    2006-02-01

    In recent years, the prevalence of in vivo molecular imaging applications has rapidly increased. Here we report on the construction of a multi-modality imaging facility in a pharmaceutical setting that is expected to further advance existing capabilities for in vivo imaging of drug distribution and of drugs' interaction with their targets. The imaging instrumentation in our facility includes a microPET scanner, a four-wavelength time-domain optical imaging scanner, a 9.4 T/30 cm MRI scanner and a SPECT/X-ray CT scanner. An electronics shop and a computer room dedicated to image analysis are additional features of the facility. The layout of the facility was designed with a central animal preparation room surrounded by separate laboratory rooms for each of the major imaging modalities, to accommodate the workflow of simultaneous in vivo imaging experiments. This report will focus on the design of, and anticipated applications for, our microPET and optical imaging laboratory spaces. Additionally, we will discuss efforts to maximize the daily throughput of animal scans through the development of efficient experimental workflows and the use of multiple animals in a single scanning session.

  16. Dynamical Stability of Imaged Planetary Systems in Formation: Application to HL Tau

    Science.gov (United States)

    Tamayo, D.; Triaud, A. H. M. J.; Menou, K.; Rein, H.

    2015-06-01

    A recent Atacama Large Millimeter/Submillimeter Array image revealed several concentric gaps in the protoplanetary disk surrounding the young star HL Tau. We consider the hypothesis that these gaps are carved by planets, and present a general framework for understanding the dynamical stability of such systems over typical disk lifetimes, providing estimates for the maximum planetary masses. We collect these easily evaluated constraints into a workflow that can help guide the design and interpretation of new observational campaigns and numerical simulations of gap opening in such systems. We argue that the locations of resonances should be significantly shifted in massive disks like HL Tau, and that theoretical uncertainties in the exact offset, together with observational errors, imply a large uncertainty in the dynamical state and stability in such disks. This presents an important barrier to using systems like HL Tau as a proxy for the initial conditions following planet formation. An important observational avenue to breaking this degeneracy is to search for eccentric gaps, which could implicate resonantly interacting planets. Unfortunately, massive disks like HL Tau should induce swift pericenter precession that would smear out any such eccentric features of planetary origin. This motivates pushing toward more typical, less massive disks. For a nominal non-resonant model of the HL Tau system with five planets, we find a maximum mass for the outer three bodies of approximately 2 Neptune masses. In a resonant configuration, these planets can reach at least the mass of Saturn. The inner two planets’ masses are unconstrained by dynamical stability arguments.
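    A crude version of such an easily evaluated constraint is the mutual-Hill-radius spacing criterion. The sketch below uses entirely hypothetical planet masses, semimajor axes and stellar mass; it is a back-of-envelope check, not the paper's resonance-aware analysis.

    ```python
    import numpy as np

    def mutual_hill_separations(a, m_planet, m_star=1.0):
        """Spacing of adjacent planets in mutual Hill radii. Masses in solar
        masses, semimajor axes in AU. Spacings below roughly 10 mutual Hill
        radii are often fragile over Myr timescales in multi-planet systems.
        """
        a = np.asarray(a, float)
        m = np.asarray(m_planet, float)
        r_hill = ((m[:-1] + m[1:]) / (3.0 * m_star)) ** (1.0 / 3.0) \
                 * (a[:-1] + a[1:]) / 2.0
        return np.diff(a) / r_hill

    # Entirely hypothetical five-planet configuration (Neptune-mass bodies):
    a = [13.0, 32.0, 65.0, 77.0, 93.0]     # AU, stand-ins for disk gap radii
    m = [5e-5] * 5                         # ~1 Neptune mass in M_sun each
    print(mutual_hill_separations(a, m))
    ```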

  17. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate for the non-rigid deformations caused by atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman iteration and an operator splitting method. The algorithm is simple and efficient, and can be easily generalized to different scenarios involving non-rigid deformations.
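    The overall loop is easy to prototype with off-the-shelf components: estimate a flow from each frame to a running reference, warp, and refresh the reference. The sketch below substitutes scikit-image's TV-L1 optical flow for the paper's Bregman/operator-splitting solver, so it illustrates the structure rather than the exact algorithm.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates
    from skimage.registration import optical_flow_tvl1

    def stabilize(frames, n_outer=3):
        """Estimate the static scene behind non-rigid deformation.

        frames: float array of shape (T, H, W). Each outer pass warps every
        frame onto the current reference with TV-L1 optical flow and then
        refreshes the reference as the temporal mean of the warped frames.
        """
        ref = frames.mean(axis=0)
        rows, cols = np.indices(ref.shape)
        for _ in range(n_outer):
            warped = []
            for f in frames:
                v, u = optical_flow_tvl1(ref, f)          # flow mapping ref <- f
                coords = np.array([rows + v, cols + u])
                warped.append(map_coordinates(f, coords, order=1, mode='nearest'))
            ref = np.mean(warped, axis=0)
        return ref
    ```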

  18. Pre-trained convolutional neural networks as feature extractors toward improved malaria parasite detection in thin blood smear images

    Directory of Open Access Journals (Sweden)

    Sivaramakrishnan Rajaraman

    2018-04-01

    Full Text Available Malaria is a blood disease caused by Plasmodium parasites transmitted through the bite of the female Anopheles mosquito. Microscopists commonly examine thick and thin blood smears to diagnose disease and compute parasitemia. However, their accuracy depends on smear quality and on expertise in classifying and counting parasitized and uninfected cells. Such an examination can be arduous at large scale, resulting in poor-quality diagnoses. State-of-the-art image-analysis-based computer-aided diagnosis (CADx) methods using machine learning (ML) techniques, applied to microscopic images of the smears using hand-engineered features, demand expertise in analyzing morphological, textural, and positional variations of the region of interest (ROI). In contrast, Convolutional Neural Networks (CNNs), a class of deep learning (DL) models, promise highly scalable and superior results with end-to-end feature extraction and classification. Automated malaria screening using DL techniques could therefore serve as an effective diagnostic aid. In this study, we evaluate the performance of pre-trained CNN-based DL models as feature extractors toward classifying parasitized and uninfected cells to aid improved disease screening. We experimentally determine the optimal model layers for feature extraction from the underlying data. Statistical validation of the results demonstrates the use of pre-trained CNNs as a promising tool for feature extraction for this purpose.
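    Using a pre-trained CNN as a frozen feature extractor takes only a few lines with standard tooling. The sketch below (PyTorch/torchvision with an ImageNet ResNet-50; the study evaluated several architectures and cut layers, and the dataset variables here are hypothetical) feeds the extracted descriptors to a simple classifier:

    ```python
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.linear_model import LogisticRegression

    # Frozen ImageNet backbone as a fixed feature extractor; which layer to
    # cut at is exactly the kind of choice the study tunes empirically.
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    backbone.fc = torch.nn.Identity()           # drop the classification head
    backbone.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def extract_features(pil_images):
        """Return one 2048-d descriptor per input PIL image."""
        batch = torch.stack([preprocess(im) for im in pil_images])
        return backbone(batch).numpy()

    # Hypothetical usage -- cell_images/labels are a user-supplied dataset of
    # parasitized vs. uninfected cell crops, not part of any library:
    # clf = LogisticRegression(max_iter=1000)
    # clf.fit(extract_features(cell_images), labels)
    ```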

  19. Adolescent cortical thickness pre- and post marijuana and alcohol initiation.

    Science.gov (United States)

    Jacobus, Joanna; Castro, Norma; Squeglia, Lindsay M; Meloy, M J; Brumback, Ty; Huestis, Marilyn A; Tapert, Susan F

    Cortical thickness abnormalities have been identified in youth using both alcohol and marijuana. However, limited studies have followed individuals pre- and post initiation of alcohol and marijuana use to help identify to what extent discrepancies in structural brain integrity are pre-existing or substance-related. Adolescents (N=69) were followed from age 13 (pre-initiation of substance use, baseline) to age 19 (post-initiation, follow-up). Three subgroups were identified: participants who initiated alcohol use (ALC, n=23, >20 alcohol use episodes), those who initiated both alcohol and marijuana use (ALC+MJ, n=23, >50 marijuana use episodes), and individuals who did not initiate either substance regularly by follow-up (CON, n=23, marijuana use episodes). All adolescents underwent neurocognitive testing, neuroimaging, and substance use and mental health interviews. Significant group-by-time interactions and main effects on cortical thickness estimates were identified for 18 cortical regions spanning the left and right hemispheres (ps < .05), with less cortical thinning by follow-up for individuals who had not initiated regular substance use, or had initiated alcohol use only, by age 19; modest between-group differences were identified at baseline in several cortical regions (ALC and CON > ALC+MJ). Minimal neurocognitive differences were observed in this sample. Findings suggest that pre-existing neural differences prior to marijuana use may contribute to initiation of use and to the observed neural outcomes. Marijuana use may also interfere with thinning trajectories that contribute to the morphological differences in young adulthood often observed in cross-sectional studies of heavy marijuana users. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Polarimetric Imaging of Large Cavity Structures in the Pre-transitional Protoplanetary Disk Around PDS 70: Observations of the Disk

    Science.gov (United States)

    Hashimoto, J.; Dong, R.; Kudo, T.; Honda, M.; McClure, M. K.; Zhu, Z.; Muto, T.; Wisniewski, J.; Abe, L.; Brandner, W.; et al.

    2012-01-01

    We present high-resolution H-band polarized intensity (FWHM = 0.1 arcsec; 14 AU) and L'-band imaging data (FWHM = 0.11 arcsec; 15 AU) of the circumstellar disk around the weak-lined T Tauri star PDS 70 in Centaurus, at radial distances from 28 AU (0.2 arcsec) out to 210 AU (1.5 arcsec). In both images, a giant inner gap is clearly resolved for the first time, and the radius of the gap is approximately 70 AU. Our data show that the geometric center of the disk is shifted by approximately 6 AU toward the minor axis. We confirm that the brown dwarf companion candidate to the north of PDS 70 is a background star based on its proper motion. As a result of spectral energy distribution fitting by Monte Carlo radiative transfer modeling, we infer the existence of an optically thick inner disk at a few AU. Combining our observations and modeling, we classify the disk of PDS 70 as a pre-transitional disk. Furthermore, based on the analysis of the L'-band imaging data, we put an upper limit of approximately 30 to 50 M_J on the mass of companions within the gap. Taking into account the presence of the large and sharp gap, we suggest that the gap could be formed by dynamical interactions of sub-stellar companions or multiple unseen giant planets in the gap. Key words: planetary systems - polarization - protoplanetary disks - stars: individual (PDS 70) - stars: pre-main sequence.