WorldWideScience

Sample records for interior reconstruction algorithm

  1. A feature refinement approach for statistical interior CT reconstruction

    Science.gov (United States)

    Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong

    2016-07-01

    Interior tomography is clinically desired to reduce the radiation dose delivered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV) minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of the projection data, the objective function is built under the criterion of penalized weighted least-squares with TV regularization (PWLS-TV). In the implementation of the proposed method, an FBP reconstruction based on interior projection extrapolation is first used as the initial guess to mitigate truncation artifacts and provide an extended field-of-view. Moreover, an interior feature refinement step is performed as an important processing operation after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than the other conventional methods in suppressing noise, reducing truncation and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation in interior tomography under truncated projection measurements.
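
    A minimal gradient-descent sketch of a PWLS-TV objective of the kind described above, in Python/NumPy; the dense forward operator A, the statistical weights w, the smoothed-TV gradient and the step size are simplified, hypothetical stand-ins, and the FBP initialization and feature-refinement step of the paper are not reproduced:

        import numpy as np

        def tv_grad(img, eps=1e-8):
            # Gradient of a smoothed isotropic total-variation term.
            gx = np.diff(img, axis=1, append=img[:, -1:])
            gy = np.diff(img, axis=0, append=img[-1:, :])
            mag = np.sqrt(gx**2 + gy**2 + eps)
            div_x = np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
            div_y = np.diff(gy / mag, axis=0, prepend=(gy / mag)[:1, :])
            return -(div_x + div_y)

        def pwls_tv(A, y, w, shape, beta=0.1, step=1e-3, n_iter=200, x0=None):
            # Minimize 0.5*(A x - y)' W (A x - y) + beta * TV(x) by plain
            # gradient descent, a stand-in for the modified steepest descent
            # of the paper; x0 plays the role of the FBP initial guess.
            x = np.zeros(np.prod(shape)) if x0 is None else x0.ravel().astype(float)
            for _ in range(n_iter):
                grad = A.T @ (w * (A @ x - y)) + beta * tv_grad(x.reshape(shape)).ravel()
                x -= step * grad
            return x.reshape(shape)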

  2. Interior point algorithms theory and analysis

    CERN Document Server

    Ye, Yinyu

    2011-01-01

    The first comprehensive review of the theory and practice of one of today's most powerful optimization techniques. The explosive growth of research into and development of interior point algorithms over the past two decades has significantly improved the complexity of linear programming and yielded some of today's most sophisticated computing techniques. This book offers a comprehensive and thorough treatment of the theory, analysis, and implementation of this powerful computational tool. Interior Point Algorithms provides detailed coverage of all basic and advanced aspects of the subject.

  3. Dictionary learning based statistical interior reconstruction without a prior knowledge

    Science.gov (United States)

    Shi, Yongyi; Mou, Xuanqin

    2016-10-01

    Despite the significant practical utility of interior tomography, it still suffers from severe degradation by the direct current (DC) shift artifact. The existing literature suggests introducing prior information, such as an object support (OS) constraint or the zeroth-order image moment (i.e., the DC value), into interior reconstruction to suppress the shift artifact, but this prior information is not always available in practice. Aiming to alleviate the artifacts without prior knowledge, in this paper we report an approach for estimating the object support, which can in turn be employed to estimate the zeroth-order image moment and hence facilitate DC shift artifact removal in interior reconstruction. Firstly, by assuming that most of the reconstructed object consists of soft tissues that are equivalent to water, we reconstruct a virtual OS that is symmetrical about the interior region of interest (ROI), so the DC value can be estimated from the virtual reconstruction. Secondly, a statistical iterative reconstruction incorporating a sparse representation in terms of a learned dictionary and a constraint in terms of the image DC value is adopted to solve the interior tomography problem. Experimental results demonstrate that the relative errors of the estimated zeroth-order image moment are 4.7% and 7.6% for the simulated data of a human thorax and the real data of a sheep lung, respectively. Reconstructed images with the constraint of the estimated DC value exhibit image quality greatly superior to that obtained without the DC value constraint.

  4. Computed laminography and reconstruction algorithm

    Institute of Scientific and Technical Information of China (English)

    QUE Jie-Min; YU Zhong-Qiang; YAN Yong-Lian; CAO Da-Quan; ZHAO Wei; TANG Xiao; SUN Cui-Li; WANG Yan-Fang; WEI Cun-Feng; SHI Rong-Jian; WEI Long

    2012-01-01

    Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with different weighting functions by computer simulation with a digital phantom. The results show that the ART algorithm is a good choice for the CL system.
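
    For reference, the basic ART (Kaczmarz) update used in such simulation studies can be sketched as follows in Python/NumPy, assuming a dense system matrix A and projection data p; the CL scanning geometry and the weighting-function variants compared in the paper are not modeled:

        import numpy as np

        def art(A, p, n_sweeps=10, relax=0.5, x0=None):
            # Algebraic reconstruction technique (Kaczmarz): cycle through the
            # projection equations and relax the estimate toward each hyperplane.
            m, n = A.shape
            x = np.zeros(n) if x0 is None else x0.astype(float)
            row_norms = np.einsum('ij,ij->i', A, A)
            for _ in range(n_sweeps):
                for i in range(m):
                    if row_norms[i] > 0:
                        x += relax * (p[i] - A[i] @ x) / row_norms[i] * A[i]
            return x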

  5. A POSITIVE INTERIOR-POINT ALGORITHM FOR NONLINEAR COMPLEMENTARITY PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    马昌凤; 梁国平; 陈新美

    2003-01-01

    A new iterative method, called the positive interior-point algorithm, is presented for solving nonlinear complementarity problems. This method has the desirable feature of robustness, and convergence theorems for the algorithm are established. In addition, some numerical results are reported.

  6. Arc-Search Infeasible Interior-Point Algorithm for Linear Programming

    OpenAIRE

    Yang, Yaguang

    2014-01-01

    Mehrotra's algorithm has been the most successful infeasible interior-point algorithm for linear programming since 1990. Most popular interior-point software packages for linear programming are based on Mehrotra's algorithm. This paper proposes an alternative algorithm, an arc-search infeasible interior-point algorithm. We will demonstrate, by testing Netlib problems and comparing the test results obtained by the arc-search infeasible interior-point algorithm and Mehrotra's algorithm, that the propo...
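
    For context, a bare-bones infeasible primal-dual path-following iteration for the standard-form LP min c'x subject to Ax = b, x >= 0 might look as follows; this is a sketch with a fixed centering parameter and dense linear algebra, not Mehrotra's predictor-corrector nor the arc-search variant proposed here:

        import numpy as np

        def ipm_lp(A, b, c, tol=1e-8, sigma=0.1, max_iter=100):
            # Infeasible primal-dual path-following sketch for
            #   min c'x  subject to  Ax = b, x >= 0.
            m, n = A.shape
            x, lam, s = np.ones(n), np.zeros(m), np.ones(n)
            for _ in range(max_iter):
                r_p = A @ x - b                        # primal residual
                r_d = A.T @ lam + s - c                # dual residual
                mu = x @ s / n                         # duality measure
                if max(np.linalg.norm(r_p), np.linalg.norm(r_d), mu) < tol:
                    break
                # Newton system for the perturbed KKT conditions
                K = np.block([
                    [np.zeros((n, n)), A.T,              np.eye(n)],
                    [A,                np.zeros((m, m)), np.zeros((m, n))],
                    [np.diag(s),       np.zeros((n, m)), np.diag(x)],
                ])
                rhs = np.concatenate([-r_d, -r_p, sigma * mu - x * s])
                d = np.linalg.solve(K, rhs)
                dx, dlam, ds = d[:n], d[n:n + m], d[n + m:]
                # step length keeping x and s strictly positive
                alpha = 1.0
                for v, dv in ((x, dx), (s, ds)):
                    neg = dv < 0
                    if neg.any():
                        alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
                x, lam, s = x + alpha * dx, lam + alpha * dlam, s + alpha * ds
            return x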

  7. Optimal Power Flow by Interior Point and Non Interior Point Modern Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Marcin Połomski

    2013-03-01

    The idea of optimal power flow (OPF) is to determine the optimal settings of control variables while respecting various constraints; in general it is related to power system operational and planning optimization problems. A vast number of optimization methods have been applied to solve the OPF problem, but their performance is highly dependent on the size of the power system being optimized. Recent development of OPF has tracked significant progress both in numerical optimization techniques and in their computer implementation. In recent years, the application of interior point (IP) methods to the OPF problem has received great attention. This is due to the fact that IP methods are among the fastest algorithms, well suited to solving large-scale nonlinear optimization problems. This paper presents a primal-dual interior point optimal power flow algorithm and a new variant of a non-interior point algorithm, both applied to the optimal power flow problem. The described algorithms were implemented in custom software. The experiments show the usefulness of the computational software and the implemented algorithms for solving the optimal power flow problem, including system models of a size comparable to the National Power System.

  8. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    Raman Khurana

    2012-11-01

    CMS has developed sophisticated tau identification algorithms for hadronic tau decay modes. The production of tau leptons decaying to hadrons is studied at 7 TeV centre-of-mass energy with 2011 collision data collected by the CMS detector, and has been used to measure the performance of the tau identification algorithms through the identification efficiency and the misidentification rates from electrons, muons and hadronic jets. These algorithms enable an extended reach for searches for the MSSM Higgs boson and other exotic particles.

  9. Wind reconstruction algorithm for Viking Lander 1

    Science.gov (United States)

    Kynkäänniemi, Tuomas; Kemppinen, Osku; Harri, Ari-Matti; Schmidt, Walter

    2017-06-01

    The wind measurement sensors of Viking Lander 1 (VL1) were only fully operational for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind measurement sensor failures. The algorithm for wind reconstruction enables the processing of wind data during the complete VL1 mission. The heater element of the quadrant sensor, which provided an auxiliary measurement for wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Regardless of the failures, it was still possible to reconstruct the wind measurement data, because the failed components of the sensors did not prevent the determination of the wind direction and speed, as some of the components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating its operation. The algorithm enables the reconstruction of wind measurements for the complete VL1 mission, extending the number of sols with available wind data from 350 to 2245.

  10. Deconvolution of interferometric data using interior point iterative algorithms

    Science.gov (United States)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that satisfy the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon-counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without any assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
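
    A minimal sketch of the classical Richardson-Lucy scheme mentioned above, in Python; the multiplicative updates keep the estimate non-negative and, for a normalized PSF, approximately flux-preserving. This illustrates the textbook algorithm only, not the scale-invariant-divergence interior-point algorithms of the paper:

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=50):
            # Classical Richardson-Lucy deconvolution with a known PSF.
            image = image.astype(float)
            psf = psf / psf.sum()
            psf_mirror = psf[::-1, ::-1]
            estimate = np.full_like(image, image.mean())
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / np.maximum(blurred, 1e-12)
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate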

  11. A superlinear interior points algorithm for engineering design optimization

    Science.gov (United States)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  12. Algorithms for reconstruction of chromosomal structures.

    Science.gov (United States)

    Lyubetsky, Vassily; Gershgorin, Roman; Seliverstov, Alexander; Gorbunov, Konstantin

    2016-01-19

    One of the main aims of phylogenomics is the reconstruction of objects defined in the leaves along the whole phylogenetic tree to minimize the specified functional, which may also include the phylogenetic tree generation. Such objects can include nucleotide and amino acid sequences, chromosomal structures, etc. The structures can have any set of linear and circular chromosomes, variable gene composition and include any number of paralogs, as well as any weights of individual evolutionary operations to transform a chromosome structure. Many heuristic algorithms were proposed for this purpose, but there are just a few exact algorithms with low (linear, cubic or similar) polynomial computational complexity among them to our knowledge. The algorithms naturally start from the calculation of both the distance between two structures and the shortest sequence of operations transforming one structure into another. Such calculation per se is an NP-hard problem. A general model of chromosomal structure rearrangements is considered. Exact algorithms with almost linear or cubic polynomial complexities have been developed to solve the problems for the case of any chromosomal structure but with certain limitations on operation weights. The computer programs are tested on biological data for the problem of mitochondrial or plastid chromosomal structure reconstruction. To our knowledge, no computer programs are available for this model. Exactness of the proposed algorithms and such low polynomial complexities were proved. The reconstructed evolutionary trees of mitochondrial and plastid chromosomal structures as well as the ancestral states of the structures appear to be reasonable.

  13. Improved wavefront reconstruction algorithm from slope measurements

    Science.gov (United States)

    Phuc, Phan Huy; Manh, Nguyen The; Rhee, Hyug-Gyo; Ghim, Young-Sik; Yang, Ho-Soon; Lee, Yun-Woo

    2017-03-01

    In this paper, we propose a wavefront reconstruction algorithm from slope measurements based on a zonal method. In this algorithm, the slope measurement sampling geometry used is the Southwell geometry, in which the phase values and the slope data are measured at the same nodes. The proposed algorithm estimates the phase value at a node point using the slope measurements of eight points around the node, as doing so is believed to result in better accuracy with regard to the wavefront. For optimization of the processing time, a successive over-relaxation method is applied to iteration loops. We use a trial-and-error method to determine the best relaxation factor for each type of wavefront in order to optimize the iteration time and, thus, the processing time of the algorithm. Specifically, for a circularly symmetric wavefront, the convergence rate of the algorithm can be improved by using the result of a Fourier Transform as an initial value for the iteration. Various simulations are presented to demonstrate the improvements realized when using the proposed algorithm. Several experimental measurements of deflectometry are also processed by using the proposed algorithm.
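
    As an illustration of zonal reconstruction from slope data, the sketch below builds simple forward-difference equations on a square Southwell-style grid and solves the resulting least-squares problem directly in Python/NumPy; the eight-neighbour estimator, the successive over-relaxation iteration and the tuned relaxation factor described above are not reproduced:

        import numpy as np

        def zonal_reconstruct(sx, sy, spacing=1.0):
            # Least-squares wavefront estimate from x/y slope maps on an
            # N x N grid (phase values and slopes at the same nodes).
            n = sx.shape[0]
            idx = lambda r, c: r * n + c
            A_rows, b = [], []
            for r in range(n):
                for c in range(n - 1):          # phase difference along x
                    row = np.zeros(n * n)
                    row[idx(r, c + 1)], row[idx(r, c)] = 1.0, -1.0
                    A_rows.append(row)
                    b.append(0.5 * spacing * (sx[r, c] + sx[r, c + 1]))
            for r in range(n - 1):
                for c in range(n):              # phase difference along y
                    row = np.zeros(n * n)
                    row[idx(r + 1, c)], row[idx(r, c)] = 1.0, -1.0
                    A_rows.append(row)
                    b.append(0.5 * spacing * (sy[r, c] + sy[r + 1, c]))
            phase, *_ = np.linalg.lstsq(np.asarray(A_rows), np.asarray(b), rcond=None)
            phase -= phase.mean()               # piston is unconstrained; remove it
            return phase.reshape(n, n)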

  14. FAST ALGORITHM FOR NON-UNIFORMLY SAMPLED SIGNAL SPECTRUM RECONSTRUCTION

    Institute of Scientific and Technical Information of China (English)

    Zhu Zhenqian; Zhang Zhimin; Wang Yu

    2013-01-01

    In this paper, a fast algorithm to reconstruct the spectrum of non-uniformly sampled signals is proposed. Compared with the original algorithm, the fast algorithm has higher computational efficiency, especially when the sampling sequence is long. In particular, a transformation matrix is built, and the reconstructed spectrum is perfectly synthesized from the spectrum of every sampling channel. The fast algorithm solves the efficiency issues of the spectrum reconstruction algorithm, making its practical application in multi-channel Synthetic Aperture Radar (SAR) possible.

  15. Advanced reconstruction algorithms for electron tomography: From comparison to combination

    Energy Technology Data Exchange (ETDEWEB)

    Goris, B. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Roelandts, T. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Batenburg, K.J. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Heidari Mezerji, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Bals, S., E-mail: sara.bals@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)

    2013-04-15

    In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared and their advantages and disadvantages are discussed. Furthermore, we describe how the result of a three dimensional (3D) reconstruction based on TVM can provide objective information that is needed as the input for a DART reconstruction. This approach results in a tomographic reconstruction of which the segmentation is carried out in an objective manner. - Highlights: ► A comparative study between different reconstruction algorithms for tomography is performed. ► Reconstruction algorithms that use prior knowledge about the specimen give superior results. ► One reconstruction algorithm can provide the prior knowledge for a second algorithm.
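
    A compact sketch of the SIRT update mentioned above, in Python/NumPy, assuming a dense, non-negative system matrix A and projection data b; the TVM and DART stages involve additional machinery not shown here:

        import numpy as np

        def sirt(A, b, n_iter=100):
            # SIRT: simultaneous update of all voxels, weighted by the inverse
            # row and column sums of the projection matrix A.
            row_sum = np.maximum(A.sum(axis=1), 1e-12)
            col_sum = np.maximum(A.sum(axis=0), 1e-12)
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                residual = (b - A @ x) / row_sum
                x += (A.T @ residual) / col_sum
                x = np.maximum(x, 0.0)          # optional non-negativity constraint
            return x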

  16. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    Science.gov (United States)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.

  17. Genetic algorithms for minimal source reconstructions

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, P.S.; Mosher, J.C.

    1993-12-01

    Under-determined linear inverse problems arise in applications in which signals must be estimated from insufficient data. In these problems the number of potentially active sources is greater than the number of observations. In many situations, it is desirable to find a minimal source solution. This can be accomplished by minimizing a cost function that accounts both for the compatibility of the solution with the observations and for its "sparseness". Minimizing functions of this form can be a difficult optimization problem. Genetic algorithms are a relatively new and robust approach to the solution of difficult optimization problems, providing a global framework that is not dependent on local continuity or on explicit starting values. In this paper, the authors describe the use of genetic algorithms to find minimal source solutions, using as an example a simulation inspired by the reconstruction of neural currents in the human brain from magnetoencephalographic (MEG) measurements.
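
    A toy genetic algorithm in the spirit described above, in Python/NumPy: each chromosome is a binary support mask, the fitness combines the data fit (least squares restricted to the active sources) with a sparseness penalty, and standard selection, crossover and mutation operators evolve the population. All parameter values are illustrative assumptions, not those of the paper:

        import numpy as np

        def ga_minimal_source(A, b, pop=40, gens=100, lam=0.1, p_mut=0.02, seed=None):
            rng = np.random.default_rng(seed)
            m, n = A.shape

            def fitness(mask):
                if not mask.any():
                    return -np.inf
                cols = np.flatnonzero(mask)
                coef, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
                resid = b - A[:, cols] @ coef
                return -(resid @ resid + lam * mask.sum())   # fit + sparseness

            population = rng.random((pop, n)) < 0.2          # sparse random supports
            for _ in range(gens):
                scores = np.array([fitness(ind) for ind in population])
                parents = population[np.argsort(scores)[::-1][: pop // 2]]
                children = []
                while len(children) < pop - len(parents):
                    pa, pb = parents[rng.integers(len(parents), size=2)]
                    cut = rng.integers(1, n)                 # one-point crossover
                    child = np.concatenate([pa[:cut], pb[cut:]])
                    child ^= rng.random(n) < p_mut           # bit-flip mutation
                    children.append(child)
                population = np.vstack([parents] + children)
            return population[np.argmax([fitness(ind) for ind in population])]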

  18. Infeasible-interior-point algorithm for a class of nonmonotone complementarity problems and its computational complexity

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents an infeasible-interior-point algorithm for a class of nonmonotone complementarity problems, and analyses its convergence and computational complexity. The results indicate that the proposed algorithm is a polynomial-time one.

  19. SOLVING CONVEX QUADRATIC PROGRAMMING BY POTENTIAL-REDUCTION INTERIOR-POINT ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The solution of quadratic programming problems is an important issue in the field of mathematical programming and industrial applications. In this paper, we solve convex quadratic programming by a potential-reduction interior-point algorithm. It is proved that the potential-reduction interior-point algorithm is globally convergent. Some numerical experiments are also reported.

  1. A new algorithm for 3D reconstruction from support functions

    OpenAIRE

    2009-01-01

    We introduce a new algorithm for reconstructing an unknown shape from a finite number of noisy measurements of its support function. The algorithm, based on a least squares procedure, is very easy to program in standard software such as Matlab and allows, for the first time, good 3D reconstructions to be performed on an ordinary PC. Under mild conditions, theory guarantees that outputs of the algorithm will converge to the input shape as the number of measurements increases. Reconstructions ...

  2. The Homogeneous Interior-Point Algorithm: Nonsymmetric Cones, Warmstarting, and Applications

    DEFF Research Database (Denmark)

    Skajaa, Anders

    ... algorithms for these problems is still limited. The goal of this thesis is to investigate and shed light on two computational aspects of homogeneous interior-point algorithms for convex conic optimization: The first part studies the possibility of devising a homogeneous interior-point method aimed at solving problems involving constraints that require nonsymmetric cones in their formulation. The second part studies the possibility of warmstarting the homogeneous interior-point algorithm for conic problems. The main outcome of the first part is the introduction of a completely new homogeneous interior-point algorithm designed to solve nonsymmetric convex conic optimization problems. The algorithm is presented in detail and then analyzed. We prove its convergence and complexity. From a theoretical viewpoint, it is fully competitive with other algorithms and from a practical viewpoint, we show that it holds lots...

  3. A Primal- Dual Infeasible- Interior- Point Algorithm for Multiple Objective Linear Programming Problems

    Institute of Scientific and Technical Information of China (English)

    HUANG Hui; FEI Pu-sheng; YUAN Yuan

    2005-01-01

    A primal-dual infeasible-interior-point algorithm for multiple objective linear programming (MOLP) problems is presented. In contrast to current MOLP algorithms, moving through the interior of the polytope while not confining the iterates to the feasible region results in a solution approach that is quite different and less sensitive to problem size, providing the potential to dramatically improve practical computational effectiveness.

  4. Dynamic Data Updating Algorithm for Image Superresolution Reconstruction

    Institute of Scientific and Technical Information of China (English)

    TAN Bing; XU Qing; ZHANG Yan; XING Shuai

    2006-01-01

    A dynamic data updating algorithm for image superresolution is proposed. On the basis of Delaunay triangulation and its local updating property, this algorithm can update the changed region directly when only a part of the source images has been changed. Owing to its high efficiency and adaptability, this algorithm can serve as a fast algorithm for image superresolution reconstruction.

  5. Spatially adaptive regularized iterative high-resolution image reconstruction algorithm

    Science.gov (United States)

    Lim, Won Bae; Park, Min K.; Kang, Moon Gi

    2000-12-01

    High resolution images are often required in applications such as remote sensing, frame freeze in video, military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images will be degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required. But it may be very costly or unavailable, thus, many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm, with spatially adaptive regularization operators and parameters. These regularization terms are shown to improve the reconstructed image quality by forcing smoothness, while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is the key of the success of the high resolution image reconstruction algorithm. However, sub-pixel motion registration may have some level of registration error. Therefore, a reconstruction algorithm which is robust against the registration error is required. The registration algorithm uses a gradient based sub-pixel motion estimator which provides shift information for each of the recorded frames. The proposed algorithm is based on a technique of high resolution image reconstruction, and it solves spatially adaptive regularized constrained least square minimization functionals. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information. The
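
    A minimal regularized constrained-least-squares restoration sketch in Python, in the spirit of the CLS formulation described above; for simplicity it handles a single blurred, noisy frame with a global (non-adaptive) Laplacian regularizer, and it omits the down-sampling model and the sub-pixel motion registration that the actual superresolution algorithm requires:

        import numpy as np
        from scipy.ndimage import convolve

        LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

        def cls_restore(y, psf, lam=0.01, step=0.5, n_iter=200):
            # Gradient descent on ||h*x - y||^2 + lam*||c*x||^2, where h is the
            # blur PSF and c the Laplacian smoothness operator.
            psf = psf / psf.sum()
            psf_flip = psf[::-1, ::-1]
            x = y.astype(float)
            for _ in range(n_iter):
                data_grad = convolve(convolve(x, psf) - y, psf_flip)
                reg_grad = convolve(convolve(x, LAPLACIAN), LAPLACIAN)
                x -= step * (data_grad + lam * reg_grad)
            return x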

  6. Interior region-of-interest reconstruction using a small, nearly piecewise constant subregion.

    Science.gov (United States)

    Taguchi, Katsuyuki; Xu, Jingyan; Srivastava, Somesh; Tsui, Benjamin M W; Cammin, Jochen; Tang, Qiulin

    2011-03-01

    To develop a method to reconstruct an interior region-of-interest (ROI) image with sufficient accuracy that uses differentiated backprojection (DBP) projection onto convex sets (POCS) [H. Kudo et al., "Tiny a priori knowledge solves the interior problem in computed tomography," Phys. Med. Biol. 53, 2207-2231 (2008)] and a tiny knowledge that there exists a nearly piecewise constant subregion. The proposed method first employs filtered backprojection to reconstruct an image on which a tiny region P with a small variation in the pixel values is identified inside the ROI. Total variation minimization [H. Yu and G. Wang, "Compressed sensing based interior tomography," Phys. Med. Biol. 54, 2791-2805 (2009); W. Han et al., "A general total variation minimization theorem for compressed sensing based interior tomography," Int. J. Biomed. Imaging 2009, Article 125871 (2009)] is then employed to obtain pixel values in the subregion P, which serve as a priori knowledge in the next step. Finally, DBP-POCS is performed to reconstruct f(x,y) inside the ROI. Clinical data and the reconstructed image obtained by an x-ray computed tomography system (SOMATOM Definition; Siemens Healthcare) were used to validate the proposed method. The detector covers an object with a diameter of approximately 500 mm. The projection data were truncated either moderately to limit the detector coverage to Ø 350 mm of the object or severely to cover Ø199 mm. Images were reconstructed using the proposed method. The proposed method provided ROI images with correct pixel values in all areas except near the edge of the ROI. The coefficient of variation, i.e., the root mean square error divided by the mean pixel values, was less than 2.0% or 4.5% with the moderate or severe truncation cases, respectively, except near the boundary of the ROI. The proposed method allows for reconstructing interior ROI images with sufficient accuracy with a tiny knowledge that there exists a nearly piecewise constant

  7. AN INFEASIBLE-INTERIOR-POINT PREDICTOR-CORRECTOR ALGORITHM FOR THE SECOND-ORDER CONE PROGRAM

    Institute of Scientific and Technical Information of China (English)

    Chi Xiaoni; Liu Sanyang

    2008-01-01

    A globally convergent infeasible-interior-point predictor-corrector algorithm is presented for second-order cone programming (SOCP) using the Alizadeh-Haeberly-Overton (AHO) search direction. This algorithm does not require the feasibility of the initial points and iteration points. Under suitable assumptions, it is shown that the algorithm can find an ε-approximate solution of an SOCP in at most O(√n ln(ε_0/ε)) iterations. The iteration-complexity bound of our algorithm is almost the same as the best known bound for feasible interior point algorithms for the SOCP.

  8. Fusion of sparse reconstruction algorithms for multiple measurement vectors

    Indian Academy of Sciences (India)

    K G DEEPA; SOORAJ K AMBAT; K V S HARI

    2016-11-01

    We consider the recovery of sparse signals that share a common support from multiple measurement vectors. The performance of several algorithms developed for this task depends on parameters such as the dimension of the sparse signal, the dimension of the measurement vectors, the sparsity level, and the measurement noise. We propose a fusion framework, where several multiple measurement vector reconstruction algorithms participate and the final signal estimate is obtained by combining the signal estimates of the participating algorithms. We present the conditions for achieving a better reconstruction performance than the participating algorithms. Numerical simulations demonstrate that the proposed fusion algorithm often performs better than the participating algorithms.
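
    The fusion idea can be sketched as follows in Python/NumPy: take the supports suggested by the participating algorithms and refit the signal by least squares on their union. This is a simplified single-measurement-vector illustration under an assumed sparsity level k, not the authors' exact fusion rule:

        import numpy as np

        def fuse_estimates(A, y, estimates, k):
            # estimates: list of candidate sparse solutions produced by the
            # participating algorithms; k: assumed sparsity level.
            support = set()
            for x_hat in estimates:
                support |= set(np.argsort(np.abs(x_hat))[-k:])   # top-k entries
            support = sorted(support)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            fused = np.zeros(A.shape[1])
            fused[support] = coef
            return fused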

  9. A predictor-corrector interior-point algorithm for monotone variational inequality problems

    Institute of Scientific and Technical Information of China (English)

    梁昔明; 钱积新

    2002-01-01

    Mehrotra's recent suggestion of a predictor-corrector variant of primal-dual interior-point method for linear programming is currently the interior-point method of choice for linear programming. In this work the authors give a predictor-corrector interior-point algorithm for monotone variational inequality problems. The algorithm was proved to be equivalent to a level-1 perturbed composite Newton method. Computations in the algorithm do not require the initial iteration to be feasible. Numerical results of experiments are presented.

  10. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    Science.gov (United States)

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-05-01

    Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, with their reduced sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both the TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.

  11. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework.

    Science.gov (United States)

    Matej, Samuel; Daube-Witherspoon, Margaret E; Karp, Joel S

    2016-05-07

    Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, with their reduced sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both the TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.

  12. Array antenna diagnostics with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Meincke, Peter; Pivnenko, Sergey;

    2012-01-01

    The 3D reconstruction algorithm is applied to a slotted waveguide array measured at the DTU-ESA Spherical Near-Field Antenna Test Facility. One slot of the array is covered by conductive tape and an error is present in the array excitation. Results show the accuracy obtainable by the 3D reconstruction algorithm. Considerations on the measurement sampling, the obtainable spatial resolution, and the possibility of taking full advantage of the reconstruction geometry are provided.

  13. A FAST CONVERGING SPARSE RECONSTRUCTION ALGORITHM IN GHOST IMAGING

    Institute of Scientific and Technical Information of China (English)

    Li Enrong; Chen Mingliang; Gong Wenlin; Wang Hui; Han Shensheng

    2012-01-01

    A fast converging sparse reconstruction algorithm in ghost imaging is presented. It utilizes total variation regularization, and its formulation is based on the Karush-Kuhn-Tucker (KKT) theorem in the theory of convex optimization. Tests using experimental data show that, compared with the Gradient Projection for Sparse Reconstruction (GPSR) algorithm, the proposed algorithm yields better results with less computational work.

  14. A combined reconstruction algorithm for computerized ionospheric tomography

    Science.gov (United States)

    Wen, D. B.; Ou, J. K.; Yuan, Y. B.

    Ionospheric electron density profiles inverted by tomographic reconstruction of GPS-derived total electron content (TEC) measurements have the potential to become a tool to quantify ionospheric variability and investigate ionospheric dynamics. The problem of reconstructing ionospheric electron density from GPS receiver-to-satellite TEC measurements is formulated as an ill-posed discrete linear inverse problem. A combined reconstruction algorithm for computerized ionospheric tomography (CIT) is proposed in this paper. In this algorithm, Tikhonov regularization theory (TRT) is exploited to solve the ill-posed problem, and its estimate from the GPS observation data is input as the initial guess of the simultaneous iterative reconstruction technique (SIRT). The combined algorithm offers a more reasonable way to choose the initial guess of SIRT, and the use of the SIRT algorithm improves the quality of the final reconstructed image. Numerical experiments with actual GPS observation data are used to validate the reliability of the method; the reconstructed results show that the new algorithm works reasonably and effectively for CIT, and the overall reconstruction error is reduced significantly compared to the reconstruction error of SIRT only or TRT only.
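
    A minimal sketch of the combination described above in Python/NumPy: a Tikhonov-regularized solution provides the initial guess, which SIRT iterations then refine. The dense matrices and the regularization parameter are illustrative assumptions, not the actual CIT implementation:

        import numpy as np

        def combined_cit(A, tec, lam=1e-2, n_sirt=50):
            # Step 1: Tikhonov regularization gives a stable initial estimate.
            n = A.shape[1]
            x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ tec)
            # Step 2: SIRT refines the image starting from that estimate.
            row_sum = np.maximum(A.sum(axis=1), 1e-12)
            col_sum = np.maximum(A.sum(axis=0), 1e-12)
            for _ in range(n_sirt):
                x += (A.T @ ((tec - A @ x) / row_sum)) / col_sum
                x = np.maximum(x, 0.0)          # electron density is non-negative
            return x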

  15. Convergence of iterative image reconstruction algorithms for Digital Breast Tomosynthesis

    DEFF Research Database (Denmark)

    Sidky, Emil; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    Most iterative image reconstruction algorithms are based on some form of optimization, such as minimization of a data-fidelity term plus an image regularizing penalty term. While achieving the solution of these optimization problems may not directly be clinically relevant, accurate optimization solutions can aid in iterative image reconstruction algorithm design. This issue is particularly acute for iterative image reconstruction in Digital Breast Tomosynthesis (DBT), where the corresponding data model is particularly poorly conditioned. The impact of this poor conditioning is that iterative ... (... Math. Imag. Vol. 40, pgs 120-145) and apply it to iterative image reconstruction in DBT.

  16. A new algorithm for 3D reconstruction from support functions

    DEFF Research Database (Denmark)

    Gardner, Richard; Kiderlen, Markus

    2009-01-01

    We introduce a new algorithm for reconstructing an unknown shape from a finite number of noisy measurements of its support function. The algorithm, based on a least squares procedure, is very easy to program in standard software such as Matlab and allows, for the first time, good 3D reconstructions to be performed on an ordinary PC. Under mild conditions, theory guarantees that outputs of the algorithm will converge to the input shape as the number of measurements increases. Reconstructions may be obtained without any pre- or post-processing steps and with no restriction on the sets of measurement...

  17. INTERIORITY

    DEFF Research Database (Denmark)

    Hvejsel, Marie Frier

    ... tectonically. Hence, it has been a particular idea of the study to explore the relation between furniture, the spatial envelope itself, and its construct by using furniture as an architectural concept. Consequently, the thesis has specifically investigated whether this notion of interiority, describing an interrelation of the functional and emotional dimensions of furniture and envelope as form, with the necessary economy and logic of construction, can be developed as a critical architectural theory for transforming the technical and economical elements of construction into experiences of interiority within ... of furnishing 'gestures' requiring of the envelope itself to guide, reveal, cover, caress and embrace us. These 'gestures' unite function and emotion by describing at once a physical movement and a feeling which is intrinsic to the spatial envelope itself. The explanatory level has resulted in the development...

  18. A COMPARISON OF EXISTING ALGORITHMS FOR 3D TREE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    E. Bournez

    2017-02-01

    3D models of tree geometry are important for numerous studies, such as urban planning or agricultural studies. In climatology, tree models can be necessary for simulating the cooling effect of trees by estimating their evapotranspiration. The literature shows that the more accurate the 3D structure of a tree is, the more accurate microclimate models are. This is the reason why, since 2013, we have been developing an algorithm for the reconstruction of trees from terrestrial laser scanner (TLS) data, which we call TreeArchitecture. Meanwhile, new promising algorithms dedicated to tree reconstruction have emerged in the literature. In this paper, we assess the capacity of our algorithm and of two others, PlantScan3D and SimpleTree, to reconstruct the 3D structure of trees. The aim of this reconstruction is to be able to characterize the geometric complexity of trees with different heights, sizes and shapes of branches. Based on a specific surveying workflow with a TLS, we have acquired dense point clouds of six different urban trees, with specific architectures, before reconstructing them with each algorithm. Finally, qualitative and quantitative assessments of the models are performed using reference tree reconstructions and field measurements. Based on this assessment, the advantages and the limits of every reconstruction algorithm are highlighted. Overall, very satisfying results can be reached for 3D reconstructions of tree topology as well as of tree volume.

  19. EXPECTED NUMBER OF ITERATIONS OF INTERIOR-POINT ALGORITHMS FOR LINEAR PROGRAMMING

    Institute of Scientific and Technical Information of China (English)

    Si-ming Huang

    2005-01-01

    We study the behavior of some polynomial interior-point algorithms for solving random linear programming (LP) problems. We show that the expected and anticipated number of iterations of these algorithms is bounded above by O(n^1.5). The random LP problem is Todd's probabilistic model with the Cauchy distribution.

  20. Average number of iterations of some polynomial interior-point algorithms for linear programming

    Institute of Scientific and Technical Information of China (English)

    黄思明

    2000-01-01

    We study the behavior of some polynomial interior-point algorithms for solving random linear programming (LP) problems. We show that the average number of iterations of these algorithms, coupled with a finite termination technique, is bounded above by O(n^1.5). The random LP problem is Todd's probabilistic model with the standard Gauss distribution.

  1. A Primal-Dual Interior Point-Linear Programming Algorithm for MPC

    DEFF Research Database (Denmark)

    Edlund, Kristian; Sokoler, Leo Emil; Jørgensen, John Bagterp

    2009-01-01

    Constrained optimal control problems for linear systems with linear constraints and an objective function consisting of linear and l1-norm terms can be expressed as linear programs. We develop an efficient primal-dual interior point algorithm for solution of such linear programs. The algorithm...
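
    The reformulation mentioned above, in which l1-norm terms are expressed as a linear program, can be illustrated with the standard auxiliary-variable trick. The sketch below uses SciPy's general-purpose LP solver rather than the tailored primal-dual interior point algorithm developed in the paper:

        import numpy as np
        from scipy.optimize import linprog

        def l1_fit(A, b):
            # Minimize ||A x - b||_1 by introducing t >= |A x - b| elementwise:
            #   min sum(t)  subject to  A x - t <= b  and  -A x - t <= -b.
            m, n = A.shape
            c = np.concatenate([np.zeros(n), np.ones(m)])
            A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
            b_ub = np.concatenate([b, -b])
            bounds = [(None, None)] * n + [(0, None)] * m
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return res.x[:n]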

  2. Average number of iterations of some polynomial interior-point algorithms for linear programming

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    We study the behavior of some polynomial interior-point algorithms for solving random linear programming (LP) problems. We show that the average number of iterations of these algorithms, coupled with a finite termination technique, is bounded above by O(n^1.5). The random LP problem is Todd's probabilistic model with the standard Gauss distribution.

  3. Performance of the ATLAS primary vertex reconstruction algorithms

    CERN Document Server

    Zhang, Matt

    2017-01-01

    The reconstruction of primary vertices in the busy, high pile-up environment of the LHC is a challenging task. The challenges and novel methods developed by the ATLAS experiment to reconstruct vertices in such environments will be presented. Advances in vertex seeding, including methods taken from medical imaging that allow for the reconstruction of very nearby vertices, will be highlighted. The performance of the current vertexing algorithms using early Run-2 data will be presented and compared to results from simulation.

  4. Photoacoustic image reconstruction based on Bayesian compressive sensing algorithm

    Institute of Scientific and Technical Information of China (English)

    Mingjian Sun; Naizhang Feng; Yi Shen; Jiangang Li; Liyong Ma; Zhenghua Wu

    2011-01-01

    The photoacoustic tomography (PAT) method, based on compressive sensing (CS) theory, requires that, for the CS reconstruction, the desired image should have a sparse representation in a known transform domain. However, the sparsity of photoacoustic signals is destroyed because noise always exists. Therefore, the original sparse signal cannot be effectively recovered using a general reconstruction algorithm. In this study, Bayesian compressive sensing (BCS) is employed to obtain highly sparse representations of photoacoustic images based on a set of noisy CS measurements. Simulation results demonstrate that the BCS-reconstructed image achieves superior performance compared with other state-of-the-art CS reconstruction algorithms.

  5. A POLYNOMIAL PREDICTOR-CORRECTOR INTERIOR-POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMMING

    Institute of Scientific and Technical Information of China (English)

    Yu Qian; Huang Chongchao; Jiang Yan

    2006-01-01

    This article presents a polynomial predictor-corrector interior-point algorithm for convex quadratic programming based on a modified predictor-corrector interior-point algorithm. In this algorithm, there is only one corrector step after each predictor step (Step 2 is the predictor step and Step 4 is the corrector step). The predictor step decreases the duality gap as much as possible in a wider neighborhood of the central path, and the corrector step draws the iteration points back to a narrower neighborhood while further reducing the duality gap. It is shown that the algorithm has O(√nL) iteration complexity, which is the best result for convex quadratic programming so far.

  6. Convergence of Algorithms for Reconstructing Convex Bodies and Directional Measures

    DEFF Research Database (Denmark)

    Gardner, Richard; Kiderlen, Markus; Milanfar, Peyman

    2006-01-01

    We investigate algorithms for reconstructing a convex body K in Rn from noisy measurements of its support function or its brightness function in k directions u1, . . . , uk. The key idea of these algorithms is to construct a convex polytope Pk whose support function (or brightness function) best ...

  7. A new iterative algorithm to reconstruct the refractive index.

    Science.gov (United States)

    Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y

    2007-06-21

    The latest developments in x-ray imaging are associated with techniques based on phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum likelihood expectation maximization technique is presented and discussed in order to reconstruct the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique considered probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, the comparison between an analytical reconstruction algorithm and the iterative method is also discussed, together with the convergence characteristics of the latter. Finally, we show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles of about 300 μm in diameter.
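
    For reference, the classical maximum likelihood expectation maximization update that such approaches build on can be sketched as follows in Python/NumPy, assuming a generic non-negative system matrix A and count data y; the analyzer-based, derivative-sensing geometry and the refinements of the paper are not modeled:

        import numpy as np

        def mlem(A, y, n_iter=100):
            # Maximum-likelihood expectation-maximization for y ~ Poisson(A x):
            # multiplicative updates keep the estimate non-negative.
            sens = np.maximum(A.sum(axis=0), 1e-12)     # sensitivity term A^T 1
            x = np.ones(A.shape[1])
            for _ in range(n_iter):
                proj = np.maximum(A @ x, 1e-12)
                x *= (A.T @ (y / proj)) / sens
            return x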

  8. A new jet reconstruction algorithm for lepton colliders

    CERN Document Server

    Boronat, Marça; Vos, Marcel

    2014-01-01

    We propose a new sequential jet reconstruction algorithm for future lepton colliders at the energy frontier. The Valencia algorithm combines the natural distance criterion for lepton colliders with the greater robustness against backgrounds of algorithms adapted to hadron colliders. Results from a detailed Monte Carlo simulation of $t\bar{t}$ and $ZZ$ production at future linear $e^+e^-$ colliders (ILC and CLIC), with a realistic level of background overlaid, show that it achieves better performance in the presence of background.

  9. New vertex reconstruction algorithms for CMS

    CERN Document Server

    Frühwirth, R; Prokofiev, Kirill; Speer, T.; Vanlaer, P.; Chabanat, E.; Estre, N.

    2003-01-01

    The reconstruction of interaction vertices can be decomposed into a pattern recognition problem (``vertex finding'') and a statistical problem (``vertex fitting''). We briefly review classical methods. We introduce novel approaches and motivate them in the framework of high-luminosity experiments such as those at the LHC. We then show comparisons with the classical methods in relevant physics channels.

  10. Reconstruction Algorithms in Undersampled AFM Imaging

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Oxvig, Christian Schou; Pedersen, Patrick Steffen

    2016-01-01

    This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reason for using undersampling is that it reduces the path length and thereby the s...

  11. Interior point algorithm for linear programming used in transmission network synthesis

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, I.G.; Romero, R.; Mantovani, J.R.S. [Electrical Engineering Department, UNESP, CEP 15385000, Ilha Solteira, SP (Brazil); Garcia, A. [Power Systems Department, University of Campinas, UNICAMP, CEP 13083970, Campinas, SP (Brazil)

    2005-09-15

    This article presents a well-known interior point method (IPM) used to solve problems of linear programming that appear as sub-problems in the solution of the long-term transmission network expansion planning problem. The linear programming problem appears when the transportation model is used, and when there is the intention to solve the planning problem using a constructive heuristic algorithm (CHA) or a branch-and-bound algorithm. This paper shows the application of the IPM in a CHA. Good performance of the IPM was obtained, so it can be used as a tool inside algorithms for solving the planning problem. Illustrative tests are shown, using electrical systems known in the specialized literature. Keywords: Transmission network synthesis; Interior point method; Relaxed optimization models; Network expansion planning; Transportation model; Constructive heuristic algorithms.

  12. Interior-point algorithm based on general kernel function for monotone linear complementarity problem

    Institute of Scientific and Technical Information of China (English)

    LIU Yong; BAI Yan-qin

    2009-01-01

    A polynomial interior-point algorithm is presented for the monotone linear complementarity problem (MLCP) based on a class of kernel functions with a general barrier term, which are called general kernel functions. Under mild conditions on the barrier term, the complexity bound of the algorithm in terms of such a kernel function and its derivatives is obtained. The approach is actually an extension of existing work which used only specific kernel functions for the MLCP.

  13. Tomographic reconstructions using map algorithms - application to the SPIDR mission

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh Roy, D.N.; Wilton, K.; Cook, T.A.; Chakrabarti, S.; Qi, J.; Gullberg, G.T.

    2004-01-21

    The spectral image of an astronomical scene is reconstructed from noisy tomographic projections using maximum a posteriori (MAP) and filtered backprojection (FBP) algorithms. Both maximum entropy (ME) and Gibbs priors are used in the MAP reconstructions. The scene, which is a uniform background with a localized emissive source superimposed on it, is reconstructed for a broad range of source counts. The algorithms are compared regarding their ability to detect the source in the background. Detectability is defined in terms of a contrast-to-noise ratio (CNR), which is a Monte Carlo ensemble average of spatially averaged CNRs for the individual reconstructions. Overall, MAP was found to yield improved CNR relative to FBP. Moreover, as a function of the total source counts, the CNR varies distinctly differently for the source and background regions. This may be important in separating a weak source from the background.

  14. Frequency domain simultaneous algebraic reconstruction techniques: algorithm and convergence

    Science.gov (United States)

    Wang, Jiong; Zheng, Yibin

    2005-03-01

    We propose a simultaneous algebraic reconstruction technique (SART) in the frequency domain for linear imaging problems. This algorithm has the advantage of efficiently incorporating pixel correlations in an a priori image model. First it is shown that the generalized SART algorithm converges to the weighted minimum norm solution of a weighted least square problem. Then an implementation in the frequency domain is described. The performance of the new algorithm is demonstrated with fan beam computed tomography (CT) examples. Compared to the traditional SART and its major alternative ART, the new algorithm offers superior image quality and potential application to other modalities.

  15. Three penalized EM-type algorithms for PET image reconstruction.

    Science.gov (United States)

    Teng, Yueyang; Zhang, Tie

    2012-06-01

    Based on Bayesian theory, Green introduced the maximum a posteriori (MAP) algorithm to obtain a smoothed reconstruction for positron emission tomography. This algorithm is flexible and convenient for most penalties, but it is hard to guarantee its convergence. With a similar goal, Fessler penalized a weighted least squares (WLS) estimator with a quadratic penalty and solved it with the successive over-relaxation (SOR) algorithm; however, that algorithm was time-consuming and difficult to parallelize. Anderson proposed another WLS estimator for faster convergence, for which few regularization methods have been studied. For the three regularized estimators above, we develop three new expectation-maximization (EM) type algorithms. Unlike MAP and SOR, the proposed algorithms yield update rules by minimizing auxiliary functions constructed from the previous iterations, which ensures that the cost functions decrease monotonically. Experimental results demonstrate the robustness and effectiveness of the proposed algorithms.

  16. Flame slice algebraic reconstruction technique reconstruction algorithm based on radial total variation

    Science.gov (United States)

    Zhang, Shufang; Wang, Fuyao; Zhang, Cong; Xie, Hui; Wan, Minggang

    2016-09-01

    The engine flame is an important representation of the combustion process in the cylinder, and the three-dimensional (3-D) shape reconstruction of the flame can provide more information for the quantitative analysis of the flame, contributing to further research on the mechanism of the combustion flame. One important step of 3-D shape reconstruction is to reconstruct the two-dimensional (2-D) projection image of the flame, so the optimization of the flame 2-D slice reconstruction algorithm is studied in this paper. According to the gradient sparsity characteristics in the total variation (TV) domain and the radial diffusion characteristics of the engine combustion flame, a flame 2-D slice algebraic reconstruction technique (ART) reconstruction algorithm based on radial TV (ART-R-TV) is proposed. Numerical simulation results show that the proposed ART-R-TV algorithm reconstructs flame slice images more stably and with better robustness than the two traditional ART algorithms, especially in limited-angle situations.

  17. On Implementing a Homogeneous Interior-Point Algorithm for Nonsymmetric Conic Optimization

    DEFF Research Database (Denmark)

    Skajaa, Anders; Jørgensen, John Bagterp; Hansen, Per Christian

    Based on earlier work by Nesterov, an implementation of a homogeneous infeasible-start interior-point algorithm for solving nonsymmetric conic optimization problems is presented. Starting each iteration from (the vicinity of) the central path, the method computes (nearly) primal-dual symmetric......-cone problem, the facility location problem, entropy problems and geometric programs; all formulated as nonsymmetric conic optimization problems....

  18. Efficient iterative image reconstruction algorithm for dedicated breast CT

    Science.gov (United States)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
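
    The final-image formation step can be pictured as a two-parameter linear blend of the two partial reconstructions; the snippet below is only an assumed illustration of that combination, not the authors' implementation.

        import numpy as np

        def blend(gray_level_image, high_frequency_image, alpha, beta):
            # final image as a linear combination of the two separately
            # reconstructed images; alpha and beta are the two algorithm parameters
            return alpha * gray_level_image + beta * high_frequency_image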

  19. AN INTERIOR TRUST REGION ALGORITHM FOR NONLINEAR MINIMIZATION WITH LINEAR CONSTRAINTS

    Institute of Scientific and Technical Information of China (English)

    Jian-guo Liu

    2002-01-01

    An interior trust-region-based algorithm for linearly constrained minimization problems is proposed and analyzed. This algorithm is similar to trust region algorithms for unconstrained minimization: a trust region subproblem on a subspace is solved in each iteration. We establish that the proposed algorithm has convergence properties analogous to those of the trust region algorithms for unconstrained minimization. Namely, every limit point of the generated sequence satisfies the Karush-Kuhn-Tucker (KKT) conditions and at least one limit point satisfies second order necessary optimality conditions. In addition, if one limit point is a strong local minimizer and the Hessian is Lipschitz continuous in a neighborhood of that point, then the generated sequence converges globally to that point at a rate that is at least 2-step quadratic. We are mainly concerned with the theoretical properties of the algorithm in this paper. Implementation issues and adaptation to large-scale problems will be addressed in a future report.

  20. AN AFFINE SCALING INTERIOR ALGORITHM VIA CONJUGATE GRADIENT PATH FOR SOLVING BOUND-CONSTRAINED NONLINEAR SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Chunxia Jia; Detong Zhu

    2008-01-01

    In this paper we propose an affine scaling interior algorithm via a conjugate gradient path for solving nonlinear equality systems subject to bounds on variables. By employing the affine scaling conjugate gradient path search strategy, we obtain an iterative direction by solving the linearized model. By using the line search technique, we find an acceptable trial step length along this direction which is strictly feasible and makes the objective function nonmonotonically decreasing. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions. Furthermore, numerical results indicate that the proposed algorithm is effective.

  1. Discrete Spectrum Reconstruction Using Integral Approximation Algorithm.

    Science.gov (United States)

    Sizikov, Valery; Sidorov, Denis

    2017-07-01

    An inverse problem in spectroscopy is considered. The objective is to restore the discrete spectrum from observed spectrum data, taking into account the spectrometer's line spread function. The problem is reduced to solution of a system of linear-nonlinear equations (SLNE) with respect to intensities and frequencies of the discrete spectral lines. The SLNE is linear with respect to lines' intensities and nonlinear with respect to the lines' frequencies. The integral approximation algorithm is proposed for the solution of this SLNE. The algorithm combines solution of linear integral equations with solution of a system of linear algebraic equations and avoids nonlinear equations. Numerical examples of the application of the technique, both to synthetic and experimental spectra, demonstrate the efficacy of the proposed approach in enabling an effective enhancement of the spectrometer's resolution.

  2. A fast and accurate algorithm for diploid individual haplotype reconstruction.

    Science.gov (United States)

    Wu, Jingli; Liang, Binbin

    2013-08-01

    Haplotypes can provide significant information in many research fields, including molecular biology and medical therapy. However, haplotyping is much more difficult than genotyping when using only biological techniques. With the development of sequencing technologies, it has become possible to obtain haplotypes by combining sequence fragments. The haplotype reconstruction problem for a diploid individual has received considerable attention in recent years. It assembles the two haplotypes for a chromosome given the collection of fragments coming from the two haplotypes. Fragment errors significantly increase the difficulty of the problem, which has been shown to be NP-hard. In this paper, a fast and accurate algorithm, named FAHR, is proposed for haplotyping a single diploid individual. Algorithm FAHR reconstructs the SNP sites of a pair of haplotypes one after another. The SNP fragments that cover a given SNP site are partitioned into two groups according to their alleles at that site, and the SNP values of the pair of haplotypes are ascertained by using the fragments in the group that contains more SNP fragments. Experimental comparisons were conducted among the FAHR, Fast Hare and DGS algorithms using the haplotypes on chromosome 1 of 60 individuals in CEPH samples, which were released by the International HapMap Project. Experimental results under different parameter settings indicate that the reconstruction rate of the FAHR algorithm is higher than those of the Fast Hare and DGS algorithms, and the running time of the FAHR algorithm is shorter than those of the Fast Hare and DGS algorithms. Moreover, the FAHR algorithm has high efficiency even for the reconstruction of long haplotypes and is very practical for realistic applications.
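
    A minimal sketch of the per-site majority rule described above, under an assumed fragment encoding (each fragment as a dict from SNP index to allele '0' or '1'); it is not the published FAHR code.

        from collections import Counter

        def call_site(fragments, site):
            # partition the fragments covering this site by their allele and
            # take the allele of the larger group for haplotype 1
            alleles = [frag[site] for frag in fragments if site in frag]
            if not alleles:
                return None, None                       # site not covered by any fragment
            major = Counter(alleles).most_common(1)[0][0]
            minor = '1' if major == '0' else '0'
            return major, minor                         # haplotype 1 and its complement

        def reconstruct(fragments, n_sites):
            # reconstruct the SNP sites of the haplotype pair one after another
            pairs = [call_site(fragments, s) for s in range(n_sites)]
            h1 = ''.join(a if a is not None else '-' for a, _ in pairs)
            h2 = ''.join(b if b is not None else '-' for _, b in pairs)
            return h1, h2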

  3. Projected Hessian algorithm with backtracking interior point technique for linear constrained optimization

    Institute of Scientific and Technical Information of China (English)

    ZHU Detong

    2006-01-01

    In this paper, we propose a new trust-region-projected Hessian algorithm with nonmonotonic backtracking interior point technique for linear constrained optimization. By performing the QR decomposition of an affine scaling equality constraint matrix, the conducted subproblem in the algorithm is changed into the general trust-region subproblem defined by minimizing a quadratic function subject only to an ellipsoidal constraint. By using both the trust-region strategy and the line-search technique, each iteration switches to a backtracking interior point step generated by the trust-region subproblem. The global convergence and fast local convergence rates for the proposed algorithm are established under some reasonable assumptions. A nonmonotonic criterion is used to speed up the convergence in some ill-conditioned cases.

  4. A PREDICTOR-CORRECTOR INTERIOR-POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMMING

    Institute of Scientific and Technical Information of China (English)

    梁昔明; 钱积新

    2002-01-01

    The simplified Newton method, at the expense of fast convergence, reduces the work required by the Newton method by reusing the initial Jacobian matrix. The composite Newton method attempts to balance the trade-off between expense and fast convergence by composing one Newton step with one simplified Newton step. Recently, Mehrotra suggested a predictor-corrector variant of the primal-dual interior point method for linear programming. It is currently the interior-point method of choice for linear programming. In this work we propose a predictor-corrector interior-point algorithm for convex quadratic programming. It is proved that the algorithm is equivalent to a level-1 perturbed composite Newton method. Computations in the algorithm do not require that the initial primal and dual points be feasible. Numerical experiments are reported.

  5. Comparison with reconstruction algorithms in magnetic induction tomography.

    Science.gov (United States)

    Han, Min; Cheng, Xiaolin; Xue, Yuyan

    2016-05-01

    Magnetic induction tomography (MIT) is an imaging technology that uses the principle of electromagnetic detection to measure the conductivity distribution. In this research, we aim to improve the quality of image reconstruction through an analysis of MIT imaging, including solving the forward problem and image reconstruction. With respect to the forward problem, the variational finite element method is adopted. We transform the solution of a nonlinear partial differential equation into linear equations by using field subdivision and appropriate interpolation functions, so that the voltage data of the sensing coils can be calculated. With respect to the image reconstruction, a method modifying the iterative Newton-Raphson (NR) algorithm is presented in order to improve the quality of the image. In the iterative NR, a weighting matrix and L1-norm regularization are introduced to overcome the drawbacks of large estimation errors and poor stability of the reconstructed image. On the other hand, within the incomplete-data framework of the expectation maximization (EM) algorithm, the image reconstruction can be converted into an EM problem through the likelihood function, improving the under-determined problem. In the EM, missing data are introduced, and the measurement data and the sensitivity matrix are compensated to overcome the drawback that the number of voltage measurements is far smaller than the number of unknowns. In addition to the two aspects above, image segmentation is also used to make the reconstructed lesion more flexible and adaptive to the patients' real conditions, which provides a theoretical reference for the use of the MIT technique in clinical applications. The results show that solving the forward problem with the variational finite element method can provide the measurement voltage data for image reconstruction, and the improved iterative NR method and EM algorithm can enhance the image
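
    As a hedged illustration of the regularized Newton-Raphson-type update mentioned above, the sketch below uses a generic weighted Gauss-Newton step with an L2 regularization matrix standing in for the paper's L1-norm term; the Jacobian, weighting matrix and data vectors are assumed inputs.

        import numpy as np

        def regularized_nr_step(sigma, J, W, v_measured, v_model, lam, R):
            # one weighted, regularized update of the conductivity image sigma:
            # solve (J^T W J + lam R) d = J^T W (v_measured - v_model), then sigma <- sigma + d
            lhs = J.T @ W @ J + lam * R
            rhs = J.T @ W @ (v_measured - v_model)
            return sigma + np.linalg.solve(lhs, rhs)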

  6. Concluding Report: Quantitative Tomography Simulations and Reconstruction Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Aufderheide, M B; Martz, H E; Slone, D M; Jackson, J A; Schach von Wittenau, A E; Goodman, D M; Logan, C M; Hall, J M

    2002-02-01

    In this report we describe the original goals and final achievements of this Laboratory Directed Research and Development project. The Quantitative Tomography Simulations and Reconstruction Algorithms project (99-ERD-015) was funded as a multi-directorate, three-year effort to advance the state of the art in radiographic simulation and tomographic reconstruction by improving simulation and including this simulation in the tomographic reconstruction process. Goals were to improve the accuracy of radiographic simulation, and to couple advanced radiographic simulation tools with a robust, many-variable optimization algorithm. In this project, we were able to demonstrate accuracy in X-ray simulation at the 2% level, which is an improvement of roughly a factor of 5 in accuracy, and we have successfully coupled our simulation tools with the CCG (Constrained Conjugate Gradient) optimization algorithm, allowing reconstructions that include spectral effects and blurring in the reconstructions. Another result of the project was the assembly of a low-scatter X-ray imaging facility for use in nondestructive evaluation applications. We conclude with a discussion of future work.

  7. Computationally efficient algorithm for multifocus image reconstruction

    Science.gov (United States)

    Eltoukhy, Helmy A.; Kavusi, Sam

    2003-05-01

    A method for synthesizing enhanced depth of field digital still camera pictures using multiple differently focused images is presented. This technique exploits only spatial image gradients in the initial decision process. The image gradient as a focus measure has been shown to be experimentally valid and theoretically sound under weak assumptions with respect to unimodality and monotonicity. Subsequent majority filtering corroborates decisions with those of neighboring pixels, while the use of soft decisions enables smooth transitions across region boundaries. Furthermore, these last two steps add algorithmic robustness for coping with both sensor noise and optics-related effects, such as misregistration or optical flow, and minor intensity fluctuations. The dependence of these optical effects on several optical parameters is analyzed and potential remedies that can allay their impact with regard to the technique's limitations are discussed. Several examples of image synthesis using the algorithm are presented. Finally, leveraging the increasing functionality and emerging processing capabilities of digital still cameras, the method is shown to entail modest hardware requirements and is implementable using a parallel or general purpose processor.
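
    A compact sketch of the hard-decision core of such a fusion scheme (gradient focus measure followed by majority filtering); the soft-decision blending and registration handling discussed above are omitted, and all names are illustrative.

        import numpy as np
        from scipy import ndimage

        def fuse_multifocus(images, window=5):
            # per-pixel focus measure: local image gradient magnitude
            stack = np.stack([img.astype(float) for img in images])
            grads = np.stack([np.hypot(*np.gradient(img)) for img in stack])
            decision = np.argmax(grads, axis=0)              # index of the sharpest input
            # majority filtering corroborates each decision with its neighbours
            def majority(values):
                vals, counts = np.unique(values, return_counts=True)
                return vals[np.argmax(counts)]
            decision = ndimage.generic_filter(decision, majority, size=window).astype(int)
            rows, cols = np.indices(decision.shape)
            return stack[decision, rows, cols]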

  8. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    Full Text Available The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to rapidly grow in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of this data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  9. A CUDA-based reverse gridding algorithm for MR reconstruction.

    Science.gov (United States)

    Yang, Jingzhu; Feng, Chaolu; Zhao, Dazhe

    2013-02-01

    MR raw data collected using non-Cartesian methods can be transformed onto Cartesian grids by the traditional gridding algorithm (GA) and reconstructed by Fourier transform. However, its runtime complexity is O(K×N^2), where the resolution of the raw data is N×N and the size of the convolution window (CW) is K. It also involves a large number of matrix calculations including modulus, addition, multiplication and convolution. Therefore, a Compute Unified Device Architecture (CUDA)-based algorithm is proposed to improve the reconstruction efficiency of PROPELLER (a globally recognized non-Cartesian sampling method). Experiments show a write-write conflict among multiple CUDA threads, which induces inconsistent results when synchronously convolving multiple k-space data onto the same grid. To overcome this problem, a reverse gridding algorithm (RGA) was developed. Different from the method of generating a grid window for each trajectory as in the traditional GA, RGA calculates a trajectory window for each grid point. This is what "reverse" means. For each k-space point in the CW, its contribution is accumulated onto this grid point. Although this algorithm can easily be extended to reconstruct other non-Cartesian sampled raw data, we only implement it for PROPELLER. Experiments illustrate that this CUDA-based RGA has successfully solved the write-write conflict and its reconstruction speed is 7.5 times higher than that of the traditional GA.
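
    The gather-style idea behind RGA can be sketched in plain Python as follows: every Cartesian grid point accumulates the contributions of the non-Cartesian samples inside its own window, so no two outputs are written concurrently. The triangular kernel and coordinate convention are assumptions; the actual implementation runs on CUDA.

        import numpy as np

        def reverse_grid(kx, ky, data, n, half_width=2.0):
            # kx, ky: sample coordinates already scaled to grid units in [0, n)
            grid = np.zeros((n, n), dtype=complex)
            weight = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    # trajectory window for this grid point: nearby samples only
                    d = np.hypot(kx - i, ky - j)
                    inside = d < half_width
                    w = 1.0 - d[inside] / half_width      # simple triangular kernel
                    grid[i, j] = np.sum(w * data[inside])
                    weight[i, j] = np.sum(w)
            return grid / np.maximum(weight, 1e-12)       # density-compensated grid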

  10. A novel dual-axis reconstruction algorithm for electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Tong, Jenna; Midgley, Paul [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge, CB2 3QZ (United Kingdom)

    2006-02-22

    A new algorithm for computing electron microscopy tomograms which combines iterative methods with dual-axis geometry is presented. Initial modelling using test data shows several improvements over both the weighted back-projection (WBP) and Simultaneous Iterative Reconstruction Technique (SIRT) methods, with increased stability and tomogram fidelity under high-noise conditions.

  11. Limited angle C-arm tomosynthesis reconstruction algorithms

    Science.gov (United States)

    Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying

    2015-03-01

    In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three-dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information of the object by reconstructing slices passing through it, based on a series of angular projection views with respect to the object. C-arm tomosynthesis provides two-dimensional (2D) X-ray projection images with rotation (±20° angular range) of both the X-ray source and the detector. In this paper, four representative reconstruction algorithms including point-by-point back projection (BP), filtered back projection (FBP), simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM) were investigated. A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. For the reconstructed images, a 3D mesh plot and a 2D line profile of normalized pixel intensities on the in-focus reconstruction plane crossing the center of the object were studied for each reconstruction algorithm. Results demonstrated the capability to generate 3D information from limited-angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact, portable and can avoid moving patients, it has been investigated for different clinical applications ranging from tumor surgery to interventional radiology, and it is therefore important to evaluate it for such applications.

  12. Application of particle filtering algorithm in image reconstruction of EMT

    Science.gov (United States)

    Wang, Jingwen; Wang, Xu

    2015-07-01

    To improve the image quality of electromagnetic tomography (EMT), a new image reconstruction method of EMT based on a particle filtering algorithm is presented. Firstly, the principle of image reconstruction of EMT is analyzed. Then the search process for the optimal solution for image reconstruction of EMT is described as a system state estimation process, and the state space model is established. Secondly, to obtain the minimum variance estimation of image reconstruction, the optimal weights of random samples obtained from the state space are calculated from the measured information. Finally, simulation experiments with five different flow regimes are performed. The experimental results have shown that the average image error of reconstruction results obtained by the method mentioned in this paper is 42.61%, and the average correlation coefficient with the original image is 0.8706, which are much better than corresponding indicators obtained by LBP, Landweber and Kalman Filter algorithms. So, this EMT image reconstruction method has high efficiency and accuracy, and provides a new method and means for EMT research.
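
    A generic particle-filter update of the kind described above, written as a hedged sketch: candidate images are weighted by how well their simulated measurements match the data, the weighted mean gives the minimum-variance estimate, and simple resampling counters weight degeneracy. The forward model and noise level are assumed inputs, not the authors' implementation.

        import numpy as np

        def particle_filter_step(particles, weights, forward, measurement, noise_std, rng):
            # weight each candidate image by how well its simulated measurements
            # match the measured data, then form the minimum-variance estimate
            residuals = np.array([np.linalg.norm(measurement - forward(p)) for p in particles])
            weights = weights * np.exp(-0.5 * (residuals / noise_std) ** 2)
            weights = weights / weights.sum()
            estimate = np.tensordot(weights, np.stack(particles), axes=1)
            # simple multinomial resampling to avoid weight degeneracy
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = [particles[i] for i in idx]
            weights = np.full(len(particles), 1.0 / len(particles))
            return particles, weights, estimate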

  13. Filtered gradient reconstruction algorithm for compressive spectral imaging

    Science.gov (United States)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure does not usually follow a dense matrix distribution, as in the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality results than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
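
    A hedged sketch of a filtered gradient iteration in the spirit described above: a gradient step on the data term, a soft-threshold as a generic sparsity proxy, and an extra per-iteration filtering step (a median filter stands in for the paper's filter); all parameters are placeholders.

        import numpy as np
        from scipy.ndimage import median_filter

        def filtered_gradient(Phi, y, shape, n_iter=50, step=1e-3, lam=1e-3, size=3):
            # gradient step on ||Phi x - y||^2, soft-threshold for sparsity,
            # then the extra per-iteration filtering step on the image estimate
            x = Phi.T @ y                                   # initial back-projection
            for _ in range(n_iter):
                x = x - step * (Phi.T @ (Phi @ x - y))      # gradient of the data term
                x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)        # sparsity proxy
                x = median_filter(x.reshape(shape), size=size).ravel()   # filtering step
            return x.reshape(shape)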

  14. Image Reconstruction Using a Genetic Algorithm for Electrical Capacitance Tomography

    Institute of Scientific and Technical Information of China (English)

    MOU Changhua; PENG Lihui; YAO Danya; XIAO Deyun

    2005-01-01

    Electrical capacitance tomography (ECT) has been used for more than a decade for imaging dielectric processes. However, because of its ill-posedness and non-linearity, ECT image reconstruction has always been a challenge. A new genetic algorithm (GA) developed for ECT image reconstruction uses initial results from linear back-projection, which is widely used for ECT image reconstruction, to optimize the threshold and the maximum and minimum gray values of the image. The procedure avoids optimizing the gray values pixel by pixel and significantly reduces the dimension of the search space. Both simulations and static experimental results show that the method is efficient and capable of reconstructing high quality images. Evaluation criteria show that the GA-based method has smaller image error and greater correlation coefficients. In addition, the GA-based method converges quickly with a small number of iterations.

  15. An Improved Predictor-Corrector Interior-Point Algorithm for Linear Complementarity Problems with O(√nL)-Iteration Complexity

    OpenAIRE

    Debin Fang; Qian Yu

    2011-01-01

    This paper proposes an improved predictor-corrector interior-point algorithm for the linear complementarity problem (LCP) based on the Mizuno-Todd-Ye algorithm. The modified corrector steps in our algorithm can not only draw the iteration point back to a narrower neighborhood of the center path but also reduce the duality gap. This implies that the improved algorithm can converge faster than the MTY algorithm. The iteration complexity of the improved algorithm is proved to be O(√nL), which is similar to that of the classical Mizuno-Todd-Ye algorithm.

  16. Event Reconstruction Algorithms for the ATLAS Trigger

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca-Martin, T.; /CERN; Abolins, M.; /Michigan State U.; Adragna, P.; /Queen Mary, U. of London; Aleksandrov, E.; /Dubna, JINR; Aleksandrov, I.; /Dubna, JINR; Amorim, A.; /Lisbon, LIFEP; Anderson, K.; /Chicago U., EFI; Anduaga, X.; /La Plata U.; Aracena, I.; /SLAC; Asquith, L.; /University Coll. London; Avolio, G.; /CERN; Backlund, S.; /CERN; Badescu, E.; /Bucharest, IFIN-HH; Baines, J.; /Rutherford; Barria, P.; /Rome U. /INFN, Rome; Bartoldus, R.; /SLAC; Batreanu, S.; /Bucharest, IFIN-HH /CERN; Beck, H.P.; /Bern U.; Bee, C.; /Marseille, CPPM; Bell, P.; /Manchester U.; Bell, W.H.; /Glasgow U. /Pavia U. /INFN, Pavia /Regina U. /CERN /Annecy, LAPP /Paris, IN2P3 /Royal Holloway, U. of London /Napoli Seconda U. /INFN, Naples /Argonne /CERN /UC, Irvine /Barcelona, IFAE /Barcelona, Autonoma U. /CERN /Montreal U. /CERN /Glasgow U. /Michigan State U. /Bucharest, IFIN-HH /Napoli Seconda U. /INFN, Naples /New York U. /Barcelona, IFAE /Barcelona, Autonoma U. /Salento U. /INFN, Lecce /Pisa U. /INFN, Pisa /Bucharest, IFIN-HH /UC, Irvine /CERN /Glasgow U. /INFN, Genoa /Genoa U. /Lisbon, LIFEP /Napoli Seconda U. /INFN, Naples /UC, Irvine /Valencia U. /Rio de Janeiro Federal U. /University Coll. London /New York U.; /more authors..

    2011-11-09

    The ATLAS experiment under construction at CERN is due to begin operation at the end of 2007. The detector will record the results of proton-proton collisions at a center-of-mass energy of 14 TeV. The trigger is a three-tier system designed to identify in real-time potentially interesting events that are then saved for detailed offline analysis. The trigger system will select approximately 200 Hz of potentially interesting events out of the 40 MHz bunch-crossing rate (with 10{sup 9} interactions per second at the nominal luminosity). Algorithms used in the trigger system to identify different event features of interest will be described, as well as their expected performance in terms of selection efficiency, background rejection and computation time per event. The talk will concentrate on recent improvements and on performance studies, using a very detailed simulation of the ATLAS detector and electronics chain that emulates the raw data as it will appear at the input to the trigger system.

  17. Long step homogeneous interior point algorithm for the p* nonlinear complementarity problems

    Directory of Open Access Journals (Sweden)

    Lešaja Goran

    2002-01-01

    Full Text Available A P*-Nonlinear Complementarity Problem as a generalization of the P*-Linear Complementarity Problem is considered. We show that the long-step version of the homogeneous self-dual interior-point algorithm could be used to solve such a problem. The algorithm achieves linear global convergence and quadratic local convergence under the following assumptions: the function satisfies a modified scaled Lipschitz condition, the problem has a strictly complementary solution, and certain submatrix of the Jacobian is nonsingular on some compact set.

  18. Benchmarking procedures for high-throughput context specific reconstruction algorithms

    Directory of Open Access Journals (Sweden)

    Maria ePires Pacheco

    2016-01-01

    Full Text Available Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX (Duarte et al., 2007; Thiele et al., 2013) or HMR (Agren et al., 2013) has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last ten years, only a fraction of them is suitable for model building based on human high-throughput data. Besides other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The former includes methods like cross validation or testing with artificial networks. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms.

  19. An Improved Predictor-Corrector Interior-Point Algorithm for Linear Complementarity Problems with O(√nL)-Iteration Complexity

    Directory of Open Access Journals (Sweden)

    Debin Fang

    2011-01-01

    Full Text Available This paper proposes an improved predictor-corrector interior-point algorithm for the linear complementarity problem (LCP) based on the Mizuno-Todd-Ye algorithm. The modified corrector steps in our algorithm can not only draw the iteration point back to a narrower neighborhood of the center path but also reduce the duality gap. This implies that the improved algorithm can converge faster than the MTY algorithm. The iteration complexity of the improved algorithm is proved to be O(√nL), which is similar to that of the classical Mizuno-Todd-Ye algorithm. Finally, the numerical experiments show that our algorithm improves the performance of the classical MTY algorithm.

  20. Well-posedness of the conductivity reconstruction from an interior current density in terms of Schauder theory

    KAUST Repository

    Kim, Yong-Jung

    2015-06-23

    We show the well-posedness of the conductivity image reconstruction problem with a single set of interior electrical current data and boundary conductivity data. Isotropic conductivity is considered in two space dimensions. Uniqueness for similar conductivity reconstruction problems has been known for several cases. However, the existence and the stability are obtained in this paper for the first time. The main tool of the proof is the method of characteristics of a related curl equation.

  1. Energy reconstruction and calibration algorithms for the ATLAS electromagnetic calorimeter

    CERN Document Server

    Delmastro, M

    2003-01-01

    The work of this thesis is devoted to the study, development and optimization of the algorithms of energy reconstruction and calibration for the electromagnetic calorimeter (EMC) of the ATLAS experiment, presently under installation and commissioning at the CERN Large Hadron Collider in Geneva (Switzerland). A deep study of the electrical characteristics of the detector and of signal formation and propagation is conducted: an electrical model of the detector is developed and analyzed through simulations; a hardware model (mock-up) of a group of the EMC readout cells has been built, allowing the direct collection and study of the properties of the signals emerging from the EMC cells. We analyze the existing multiple-sampled signal reconstruction strategy, showing the need for an improvement in order to reach the advertised performance of the detector. The optimal filtering reconstruction technique is studied and implemented, taking into account the differences between the ionization and calibration waveforms as e...

  2. Superiorization of incremental optimization algorithms for statistical tomographic image reconstruction

    Science.gov (United States)

    Helou, E. S.; Zibetti, M. V. W.; Miqueles, E. X.

    2017-04-01

    We propose the superiorization of incremental algorithms for tomographic image reconstruction. The resulting methods follow a better path on their way to finding the optimal solution for the maximum likelihood problem, in the sense that they are closer to the Pareto optimal curve than the non-superiorized techniques. A new scaled gradient iteration is proposed and three superiorization schemes are evaluated. Theoretical analysis of the methods as well as computational experiments with both synthetic and real data are provided.
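
    For readers unfamiliar with superiorization, the following generic skeleton (not the authors' scaled-gradient scheme) interleaves bounded, criterion-reducing perturbations with a basic gradient iteration; the objective, secondary criterion and step sizes are placeholders.

        import numpy as np

        def superiorized_descent(grad_f, phi, grad_phi, x0, n_iter=100, step=1e-2, shrink=0.5):
            # interleave bounded perturbations that reduce a secondary criterion phi
            # (e.g. total variation) with the basic gradient iteration on the objective f
            x = x0.copy()
            beta = 1.0
            for _ in range(n_iter):
                d = -grad_phi(x)
                norm = np.linalg.norm(d)
                if norm > 0:
                    candidate = x + beta * d / norm
                    if phi(candidate) <= phi(x):     # accept only non-increasing perturbations
                        x = candidate
                beta *= shrink                       # summable perturbation sizes
                x = x - step * grad_f(x)             # basic (non-superiorized) iteration
            return x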

  3. Electromagnetic Model and Image Reconstruction Algorithms Based on EIT System

    Institute of Scientific and Technical Information of China (English)

    CAO Zhang; WANG Huaxiang

    2006-01-01

    An intuitive 2D model of a circular electrical impedance tomography (EIT) sensor with small electrodes is established based on the theory of analytic functions. The validity of the model is proved using the result from the solution of the Laplace equation. Suggestions on electrode optimization and an explanation of the ill-conditioned property of the sensitivity matrix are provided based on the model, which takes electrode distance into account and can be generalized to a sensor with any simply connected region through a conformal transformation. Image reconstruction algorithms based on the model are implemented to show the feasibility of the model, using experimental data collected from the EIT system developed at Tianjin University. In a simulation with a human chest-like configuration, electrical conductivity distributions are reconstructed using equi-potential backprojection (EBP) and Tikhonov regularization (TR) based on a conformal transformation of the model. The algorithms based on the model are suitable for online image reconstruction, and the reconstructed results are good both in size and position.

  4. Infeasible-interior-point algorithm for a class of nonmonotone complementarity problems and its computational complexity

    Institute of Scientific and Technical Information of China (English)

    HE; Shanglu

    2001-01-01

    [1] Andersen, E. D., Ye, Y., On a homogeneous algorithm for the monotone complementarity problem, Mathematical Programming, 1999, 84(2): 375. [2] Wright, S., Ralph, D., A superlinear infeasible-interior-point algorithm for monotone complementarity problems, Mathematics of Operations Research, 1996, 24(4): 815. [3] Kojima, M., Noma, T., Yoshise, A., Global convergence in infeasible-interior-point algorithms, Mathematical Programming, 1994, 65(1): 43. [4] Kojima, M., Megiddo, N., Noma, T., A new continuation method for complementarity problems with uniform P-functions, Mathematical Programming, 1989, 43(1): 107. [5] Kojima, M., Megiddo, N., Mizuno, S., A general framework of continuation methods for complementarity problems, Mathematics of Operations Research, 1993, 18(4): 945. [6] More, J., Rheinboldt, W., On P- and S-functions and related classes of n-dimensional nonlinear mappings, Linear Algebra and Its Applications, 1973, 6(1): 45.

  5. Meaning of Interior Tomography

    CERN Document Server

    Wang, Ge

    2013-01-01

    The classic imaging geometry for computed tomography is for collection of un-truncated projections and reconstruction of a global image, with the Fourier transform as the theoretical foundation, which is intrinsically non-local. Recently, interior tomography research has led to theoretically exact relationships between localities in the projection and image spaces and practically promising reconstruction algorithms. Initially, interior tomography was developed for x-ray computed tomography. Then, it was elevated to a general imaging principle. Finally, a novel framework known as omni-tomography is being developed for the grand fusion of multiple imaging modalities, allowing tomographic synchrony of diversified features.

  6. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    Science.gov (United States)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; hide

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the lightcurves is not preserved.

  7. A hybrid ECT image reconstruction based on Tikhonov regularization theory and SIRT algorithm

    Science.gov (United States)

    Lei, Wang; Xiaotong, Du; Xiaoyin, Shao

    2007-07-01

    Electrical Capacitance Tomography (ECT) image reconstruction is a key problem that is not well solved due to the influence of the soft-field effect in the ECT system. In this paper, a new hybrid ECT image reconstruction algorithm is proposed by combining Tikhonov regularization theory and the Simultaneous Iterative Reconstruction Technique (SIRT) algorithm. Tikhonov regularization theory is used to solve the ill-posed image reconstruction problem and obtain a stable initial reconstructed image in the region of the optimized solution aggregate. Then, the SIRT algorithm is used to improve the quality of the final reconstructed image. In order to satisfy the industrial requirement of real-time computation, the proposed algorithm is further modified to improve the calculation speed. Test results show that the quality of the reconstructed image is better than that of the well-known Filtered Linear Back Projection (FLBP) algorithm and the time consumption of the new algorithm is less than 0.1 s, which satisfies the online requirements.
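
    A minimal dense-matrix sketch of the two-stage idea, assuming normalized sensitivity and capacitance data; the clipping step, parameter values and iteration counts are illustrative, and the speed-oriented modifications from the paper are omitted.

        import numpy as np

        def tikhonov_then_sirt(S, c, lam=1e-2, n_iter=50, relax=0.5):
            # S: normalized sensitivity matrix, c: normalized capacitance data
            n = S.shape[1]
            g = np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ c)   # Tikhonov start
            row = np.maximum(S.sum(axis=1), 1e-12)                    # per-measurement weights
            col = np.maximum(S.sum(axis=0), 1e-12)                    # per-pixel weights
            for _ in range(n_iter):
                g = g + relax * (S.T @ ((c - S @ g) / row)) / col     # SIRT-style update
                g = np.clip(g, 0.0, 1.0)                              # keep gray levels in range
            return g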

  8. A hybrid ECT image reconstruction based on Tikhonov regularization theory and SIRT algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Wang Lei [School of Control Science and Engineering, Shandong University, 250061, Jinan (China); Du Xiaotong [School of Control Science and Engineering, Shandong University, 250061, Jinan (China); Shao Xiaoyin [Department of Manufacture Engineering and Engineering Management, City University of Hong Kong (China)

    2007-07-15

    Electrical Capacitance Tomography (ECT) image reconstruction is a key problem that is not well solved due to the influence of the soft-field effect in the ECT system. In this paper, a new hybrid ECT image reconstruction algorithm is proposed by combining Tikhonov regularization theory and the Simultaneous Iterative Reconstruction Technique (SIRT) algorithm. Tikhonov regularization theory is used to solve the ill-posed image reconstruction problem and obtain a stable initial reconstructed image in the region of the optimized solution aggregate. Then, the SIRT algorithm is used to improve the quality of the final reconstructed image. In order to satisfy the industrial requirement of real-time computation, the proposed algorithm is further modified to improve the calculation speed. Test results show that the quality of the reconstructed image is better than that of the well-known Filtered Linear Back Projection (FLBP) algorithm and the time consumption of the new algorithm is less than 0.1 s, which satisfies the online requirements.

  9. Efficient algorithms for reconstructing gene content by co-evolution

    Directory of Open Access Journals (Sweden)

    Tuller Tamir

    2011-10-01

    Full Text Available Abstract Background In a previous study we demonstrated that co-evolutionary information can be utilized for improving the accuracy of ancestral gene content reconstruction. To this end, we defined a new computational problem, the Ancestral Co-Evolutionary (ACE) problem, and developed algorithms for solving it. Results In the current paper we generalize our previous study in various ways. First, we describe new efficient computational approaches for solving the ACE problem. The new approaches are based on reductions to classical methods such as linear programming relaxation, quadratic programming, and min-cut. Second, we report new computational hardness results related to the ACE problem, including practical cases where it can be solved in polynomial time. Third, we generalize the ACE problem and demonstrate how our approach can be used for inferring parts of the genomes of non-ancestral organisms. To this end, we describe a heuristic for finding the portion of the genome (the 'dominant set') that can be used to reconstruct the rest of the genome with the lowest error rate. This heuristic utilizes both evolutionary information and co-evolutionary information. We implemented these algorithms on a large input of the ACE problem (95 unicellular organisms, 4,873 protein families, and 10,576 co-evolutionary relations), demonstrating that some of these algorithms can outperform the algorithm used in our previous study. In addition, we show that, based on our approach, a 'dominant set' can be used to reconstruct a major fraction of a genome (up to 79%) with a relatively low error rate (e.g., 0.11). We find that the 'dominant set' tends to include metabolic and regulatory genes, with high evolutionary rate, and low protein abundance and number of protein-protein interactions. Conclusions The ACE problem can be efficiently extended for inferring the genomes of organisms that exist today. In addition, it may be solved in polynomial time in many practical cases.

  10. Shape reconstruction from apparent contours theory and algorithms

    CERN Document Server

    Bellettini, Giovanni; Paolini, Maurizio

    2015-01-01

    Motivated by a variational model concerning the depth of the objects in a picture and the problem of hidden and illusory contours, this book investigates one of the central problems of computer vision: the topological and algorithmic reconstruction of a smooth three dimensional scene starting from the visible part of an apparent contour. The authors focus their attention on the manipulation of apparent contours using a finite set of elementary moves, which correspond to diffeomorphic deformations of three dimensional scenes. A large part of the book is devoted to the algorithmic part, with implementations, experiments, and computed examples. The book is intended also as a user's guide to the software code appcontour, written for the manipulation of apparent contours and their invariants. This book is addressed to theoretical and applied scientists working in the field of mathematical models of image segmentation.

  11. Geometric Algorithms for Identifying and Reconstructing Galaxy Systems

    CERN Document Server

    Marinoni, C

    2010-01-01

    The theme of this book chapter is to discuss algorithms for identifying and reconstructing groups and clusters of galaxies out of the general galaxy distribution. I review the progress of detection techniques through time, from the very first visual-like algorithms to the most performant geometrical methods available today. This will allow readers to understand the development of the field as well as the various issues and pitfalls we are confronted with. This essay is drawn from a talk given by the author at the conference "The World a Jigsaw: Tessellations in the Sciences" held at the Lorentz Center in Leiden. It is intended for a broad audience of scientists (and so does not include full academic referencing), but it may be of interest to specialists.

  12. The SRT reconstruction algorithm for semiquantification in PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kastis, George A., E-mail: gkastis@academyofathens.gr [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Samartzis, Alexandros P. [Nuclear Medicine Department, Evangelismos General Hospital, Athens 10676 (Greece); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA, United Kingdom and Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece)

    2015-10-15

    Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of {sup 18}F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computer tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT

  13. An automated algorithm for the generation of dynamically reconstructed trajectories

    Science.gov (United States)

    Komalapriya, C.; Romano, M. C.; Thiel, M.; Marwan, N.; Kurths, J.; Kiss, I. Z.; Hudson, J. L.

    2010-03-01

    The lack of long enough data sets is a major problem in the study of many real world systems. As it has been recently shown [C. Komalapriya, M. Thiel, M. C. Romano, N. Marwan, U. Schwarz, and J. Kurths, Phys. Rev. E 78, 066217 (2008)], this problem can be overcome in the case of ergodic systems if an ensemble of short trajectories is available, from which dynamically reconstructed trajectories can be generated. However, this method has some disadvantages which hinder its applicability, such as the need for estimation of optimal parameters. Here, we propose a substantially improved algorithm that overcomes the problems encountered by the former one, allowing its automatic application. Furthermore, we show that the new algorithm not only reproduces the short term but also the long term dynamics of the system under study, in contrast to the former algorithm. To exemplify the potential of the new algorithm, we apply it to experimental data from electrochemical oscillators and also to analyze the well-known problem of transient chaotic trajectories.

  14. A study of image reconstruction algorithms for hybrid intensity interferometers

    Science.gov (United States)

    Crabtree, Peter N.; Murray-Krezan, Jeremy; Picard, Richard H.

    2011-09-01

    Phase retrieval is explored for image reconstruction using outputs from both a simulated intensity interferometer (II) and a hybrid system that combines the II outputs with partially resolved imagery from a traditional imaging telescope. Partially resolved imagery provides an additional constraint for the iterative phase retrieval process, as well as an improved starting point. The benefits of this additional a priori information are explored and include lower residual phase error for SNR values above 0.01, increased sensitivity, and improved image quality. Results are also presented for image reconstruction from II measurements alone, via current state-of-the-art phase retrieval techniques. These results are based on the standard hybrid input-output (HIO) algorithm, as well as a recent enhancement to HIO that optimizes step lengths in addition to step directions. The additional step length optimization yields a reduction in residual phase error, but only for SNR values greater than about 10. Image quality for all algorithms studied is quite good for SNR>=10, but it should be noted that the studied phase-recovery techniques yield useful information even for SNRs that are much lower.
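
    For reference, the standard hybrid input-output iteration (without the step-length optimization or the partially resolved imagery constraint studied above) can be sketched as follows; the support mask and parameters are assumed inputs.

        import numpy as np

        def hio(fourier_magnitude, support, n_iter=200, beta=0.9, seed=0):
            # basic hybrid input-output phase retrieval from Fourier magnitudes
            # and a boolean object-domain support mask (plus non-negativity)
            rng = np.random.default_rng(seed)
            g = rng.random(fourier_magnitude.shape) * support
            for _ in range(n_iter):
                G = np.fft.fft2(g)
                G = fourier_magnitude * np.exp(1j * np.angle(G))    # impose measured magnitude
                g_prime = np.real(np.fft.ifft2(G))
                violates = (~support) | (g_prime < 0)                # object-domain constraints
                g = np.where(violates, g - beta * g_prime, g_prime)  # HIO update rule
            return g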

  15. An iterative algorithm for soft tissue reconstruction from truncated flat panel projections

    Science.gov (United States)

    Langan, D.; Claus, B.; Edic, P.; Vaillant, R.; De Man, B.; Basu, S.; Iatrou, M.

    2006-03-01

    The capabilities of flat panel interventional x-ray systems continue to expand, enabling a broader array of medical applications to be performed in a minimally invasive manner. Although CT provides pre-operative 3D information, there is a need for 3D imaging of low contrast soft tissue during interventions in a number of areas including neurology, cardiac electro-physiology, and oncology. Unlike CT systems, interventional angiographic x-ray systems provide real-time large field of view 2D imaging, patient access, and flexible gantry positioning enabling interventional procedures. However, relative to CT, these C-arm flat panel systems have additional technical challenges in 3D soft tissue imaging including slower rotation speed, gantry vibration, reduced lateral patient field of view (FOV), and increased scatter. The reduced patient FOV often results in significant data truncation. Reconstruction of truncated (incomplete) data is known as an "interior problem", and it is mathematically impossible to obtain an exact reconstruction. Nevertheless, it is an important problem in 3D imaging on a C-arm to address the need to generate a 3D reconstruction representative of the object being imaged with minimal artifacts. In this work we investigate the application of an iterative Maximum Likelihood Transmission (MLTR) algorithm to truncated data. We also consider truncated data with limited views for cardiac imaging, where the views are gated by the electrocardiogram (ECG) to combat motion artifacts.

  16. Statistical reconstruction algorithms for continuous wave electron spin resonance imaging

    Science.gov (United States)

    Kissos, Imry; Levit, Michael; Feuer, Arie; Blank, Aharon

    2013-06-01

    Electron spin resonance imaging (ESRI) is an important branch of ESR that deals with heterogeneous samples ranging from semiconductor materials to small live animals and even humans. ESRI can produce either spatial images (providing information about the spatially dependent radical concentration) or spectral-spatial images, where an extra dimension is added to describe the absorption spectrum of the sample (which can also be spatially dependent). The mapping of oxygen in biological samples, often referred to as oximetry, is a prime example of an ESRI application. ESRI suffers frequently from a low signal-to-noise ratio (SNR), which results in long acquisition times and poor image quality. A broader use of ESRI is hampered by this slow acquisition, which can also be an obstacle for many biological applications where conditions may change relatively quickly over time. The objective of this work is to develop an image reconstruction scheme for continuous wave (CW) ESRI that would make it possible to reduce the data acquisition time without degrading the reconstruction quality. This is achieved by adapting the so-called "statistical reconstruction" method, recently developed for other medical imaging modalities, to the specific case of CW ESRI. Our new algorithm accounts for unique ESRI aspects such as field modulation, spectral-spatial imaging, and possible limitation on the gradient magnitude (the so-called "limited angle" problem). The reconstruction method shows improved SNR and contrast recovery vs. commonly used back-projection-based methods, for a variety of simulated synthetic samples as well as in actual CW ESRI experiments.

  17. Testing the Landscape Reconstruction Algorithm for spatially explicit reconstruction of vegetation in northern Michigan and Wisconsin

    Science.gov (United States)

    Sugita, Shinya; Parshall, Tim; Calcote, Randy; Walker, Karen

    2010-09-01

    The Landscape Reconstruction Algorithm (LRA) overcomes some of the fundamental problems in pollen analysis for quantitative reconstruction of vegetation. LRA first uses the REVEALS model to estimate regional vegetation using pollen data from large sites and then the LOVE model to estimate vegetation composition within the relevant source area of pollen (RSAP) at small sites by subtracting the background pollen estimated from the regional vegetation composition. This study tests LRA using training data from forest hollows in northern Michigan (35 sites) and northwestern Wisconsin (43 sites). In northern Michigan, surface pollen from 152-ha and 332-ha lakes is used for REVEALS. Because of the lack of pollen data from large lakes in northwestern Wisconsin, we use pollen from 21 hollows randomly selected from the 43 sites for REVEALS. RSAP indirectly estimated by LRA is comparable to the expected value in each region. A regression analysis and permutation test validate that the LRA-based vegetation reconstruction is significantly more accurate than pollen percentages alone in both regions. Even though the site selection in northwestern Wisconsin is not ideal, the results are robust. The LRA is a significant step forward in quantitative reconstruction of vegetation.

  18. Medical image reconstruction algorithm based on the geometric information between sensor detector and ROI

    Science.gov (United States)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk

    2016-05-01

    In this paper, we propose a new image reconstruction algorithm that takes into account the geometric information of the acoustic sources and the sensor detector, and we review the two-step reconstruction algorithm that was previously proposed based on the geometrical information of the ROI (region of interest), considering the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but the software implementation is also very easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm with simulation results obtained using the MATLAB k-Wave toolbox.

  19. Quantitatively assessed CT imaging measures of pulmonary interstitial pneumonia: Effects of reconstruction algorithms on histogram parameters

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Hisanobu [Department of Radiology, Hyogo Kaibara Hospital, 5208-1 Kaibara, Kaibara-cho, Tanba 669-3395 (Japan)], E-mail: hisanobu19760104@yahoo.co.jp; Ohno, Yoshiharu [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: yosirad@kobe-u.ac.jp; Yamazaki, Youichi [Department of Medical Physics and Engineering, Faculty of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita 565-0871 (Japan)], E-mail: y.yamazk@sahs.med.osaka-u.ac.jp; Nogami, Munenobu [Division of PET, Institute of Biomedical Research and Innovation, 2-2 Minamimachi, Minatojima, Chuo-ku, Kobe 650-0047 (Japan)], E-mail: aznogami@fbri.org; Kusaka, Akiko [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: a.kusaka@hosp.kobe-u.ac.jp; Murase, Kenya [Department of Medical Physics and Engineering, Faculty of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita 565-0871 (Japan)], E-mail: murase@sahs.med.osaka-u.ac.jp; Sugimura, Kazuro [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: sugimura@med.kobe-u.ac.jp

    2010-04-15

    This study aimed to assess the influence of the reconstruction algorithm on quantitative assessments in interstitial pneumonia patients. A total of 25 collagen vascular disease patients (nine male and 16 female; mean age, 57.2 years; age range, 32-77 years) underwent thin-section MDCT examinations, and the MDCT data were reconstructed with three reconstruction algorithms (two high-frequency [A and B] and one standard [C]). In reconstruction algorithm B, the effect of low- and middle-frequency space was suppressed compared with reconstruction algorithm A. As quantitative CT parameters, kurtosis, skewness, and mean lung density (MLD) were acquired from a frequency histogram of the whole lung parenchyma for each reconstruction algorithm. To determine how these quantitative CT parameters were affected by the reconstruction algorithms, the parameters were compared statistically. To determine their relationship with disease severity, the parameters were correlated with PFTs. In the results, all histogram parameter values differed significantly from each other (p < 0.0001), and those of reconstruction algorithm C were the highest. All MLDs had fair or moderate correlation with all parameters of PFT (-0.64 < r < -0.45, p < 0.05). Although kurtosis and skewness in high-frequency reconstruction algorithm A had significant correlations with all parameters of PFT (-0.61 < r < -0.45, p < 0.05), there were significant correlations only with diffusing capacity of carbon monoxide (DLco) and total lung capacity (TLC) in reconstruction algorithm C, and with forced expiratory volume in 1 s (FEV1), DLco and TLC in reconstruction algorithm B. In conclusion, the reconstruction algorithm influences quantitative assessments on chest thin-section MDCT examinations in interstitial pneumonia patients.
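
    For reference, the histogram parameters named above (kurtosis, skewness and mean lung density) can be computed from the segmented lung voxel values as in the sketch below; the segmentation and histogram binning used in the study are not reproduced, and the example input is synthetic.

```python
import numpy as np
from scipy import stats

def lung_histogram_parameters(lung_hu):
    """Kurtosis, skewness and mean lung density (MLD) of lung voxel values (HU)."""
    lung_hu = np.asarray(lung_hu, dtype=float)
    return {
        "kurtosis": stats.kurtosis(lung_hu),   # excess kurtosis of the density histogram
        "skewness": stats.skew(lung_hu),
        "MLD": lung_hu.mean(),                 # mean lung density in HU
    }

# Example with synthetic values standing in for segmented lung voxels
print(lung_histogram_parameters(np.random.normal(-820, 60, size=100000)))
```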

  20. Validation of an algorithm for planar surgical resection reconstruction

    Science.gov (United States)

    Milano, Federico E.; Ritacco, Lucas E.; Farfalli, Germán L.; Aponte-Tinao, Luis A.; González Bernaldo de Quirós, Fernán; Risk, Marcelo

    2012-02-01

    Surgical planning followed by computer-assisted intraoperative navigation in orthopaedic oncology for tumor resection has given acceptable results in the last few years. However, the accuracy of preoperative planning and navigation is not yet clear. The aim of this study is to validate a method capable of reconstructing the nearly planar surface generated by the cutting saw in the surgical specimen taken off the patient during the resection procedure. This method estimates an angular and offset deviation that serves as a clinically useful measure of resection accuracy. The validation process targets the degree to which the automatic estimation is true, taking the accuracy of the estimation algorithm as the validation criterion. For this purpose a manually estimated gold standard (a bronze standard) data set was built by an expert surgeon. The results show that the manual and the automatic methods consistently provide similar measures.
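
    A minimal sketch of the kind of computation involved, under the assumption that the cut surface is reconstructed as a least-squares plane through the resection-surface points and compared with the planned cutting plane; the registration and segmentation steps of the actual method are omitted.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud: returns (unit normal, centroid)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid              # normal = direction of least variance

def plane_deviation(cut_points, planned_normal, planned_point):
    """Angular (deg) and offset deviation of the fitted cut plane vs. the planned plane."""
    n_cut, c_cut = fit_plane(cut_points)
    n_plan = np.asarray(planned_normal, float)
    n_plan = n_plan / np.linalg.norm(n_plan)
    cos_a = abs(np.dot(n_cut, n_plan))   # sign-insensitive comparison of normals
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    offset = abs(np.dot(c_cut - np.asarray(planned_point, float), n_plan))
    return angle, offset
```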

  1. Some Nonlinear Reconstruction Algorithms for Electrical Impedance Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Berryman, J G

    2001-03-09

    An impedance camera [Henderson and Webster, 1978; Dines and Lytle, 1981]--or what is now more commonly called electrical impedance tomography--attempts to image the electrical impedance (or just the conductivity) distribution inside a body using electrical measurements on its boundary. The method has been used successfully in both biomedical [Brown, 1983; Barber and Brown, 1986; J. C. Newell, D. G. Gisser, and D. Isaacson, 1988; Webster, 1990] and geophysical applications [Wexler, Fry, and Neuman, 1985; Daily, Lin, and Buscheck, 1987], but the analysis of optimal reconstruction algorithms is still progressing [Murai and Kagawa, 1985; Wexler, Fry, and Neuman, 1985; Kohn and Vogelius, 1987; Yorkey and Webster, 1987; Yorkey, Webster, and Tompkins, 1987; Berryman and Kohn, 1990; Kohn and McKenney, 1990; Santosa and Vogelius, 1990; Yorkey, 1990]. The most common application is monitoring the influx or efflux of a highly conducting fluid (such as brine in a porous rock or blood in the human body) through the volume being imaged. For biomedical applications, this method does not have the resolution of radiological methods, but it is comparatively safe and inexpensive and therefore provides a valuable alternative when continuous monitoring of a patient or process is desired. The following discussion is intended first to summarize the physics of electrical impedance tomography, then to provide a few details of the data analysis and forward modeling requirements, and finally to outline some of the reconstruction algorithms that have proven to be most useful in practice. Pointers to the literature are provided throughout this brief narrative and the reader is encouraged to explore the references for more complete discussions of the various issues raised here.

  2. A parallel stereo reconstruction algorithm with applications in entomology (APSRA)

    Science.gov (United States)

    Bhasin, Rajesh; Jang, Won Jun; Hart, John C.

    2012-03-01

    We propose a fast parallel algorithm for the reconstruction of 3-Dimensional point clouds of insects from binocular stereo image pairs using a hierarchical approach for disparity estimation. Entomologists study various features of insects to classify them, build their distribution maps, and discover genetic links between specimens, among various other essential tasks. This information is important to the pesticide and the pharmaceutical industries, among others. When considering the large collections of insects entomologists analyze, it becomes difficult to physically handle the entire collection and share the data with researchers across the world. With the method presented in our work, entomologists can create an image database for their collections and use the 3D models for studying the shape and structure of the insects, thus making it easier to maintain and share. Initial feedback shows that the reconstructed 3D models preserve the shape and size of the specimen. We further optimize our results to incorporate multiview stereo, which produces better overall structure of the insects. Our main contribution is applying stereoscopic vision techniques to entomology to solve the problems faced by entomologists.

  3. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Recent technological developments have resulted in surveillance video becoming a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; therefore, creating an effective policy and applying useful methods to the retrieval of additional evidence are becoming increasingly important. However, surveillance video has its failings, namely, video footage being captured in low resolution (LR) and with poor visual quality. In this paper, we discuss the characteristics of surveillance video and describe a manual feature registration – maximum a posteriori – projection onto convex sets approach to develop a super-resolution reconstruction method, which improves the quality of surveillance video. With this method, we can make optimal use of the information contained in the LR video image, and we can also control the image edges clearly as well as the convergence of the algorithm. Finally, we make a suggestion on how to adjust the algorithm's adaptability by analyzing the prior information of the target image.

  4. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images obtained with one reconstruction algorithm are also valid for the other reconstruction algorithms.
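
    Two of the indices listed above can be sketched as follows, using commonly published definitions of the center of gravity and of the global inhomogeneity (GI) index of a tidal EIT image; these definitions are assumptions and may differ in detail from the ones used in this study.

```python
import numpy as np

def center_of_gravity(tidal_image):
    """Impedance-weighted center of gravity of a tidal EIT image (row, col in %)."""
    img = np.clip(np.asarray(tidal_image, float), 0, None)
    rows, cols = np.indices(img.shape)
    total = img.sum()
    cog_row = (rows * img).sum() / total
    cog_col = (cols * img).sum() / total
    return 100 * cog_row / (img.shape[0] - 1), 100 * cog_col / (img.shape[1] - 1)

def global_inhomogeneity_index(tidal_image, lung_mask):
    """GI: summed absolute deviation from the median tidal value within the lungs,
    normalized by the total tidal impedance change in the lung region."""
    vals = np.asarray(tidal_image, float)[lung_mask]
    return np.abs(vals - np.median(vals)).sum() / vals.sum()
```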

  5. An Algorithm for Automated Reconstruction of Particle Cascades in High Energy Physics Experiments

    CERN Document Server

    Actis, O; Henrichs, A; Hinzmann, A; Kirsch, M; Müller, G; Steggemann, J

    2008-01-01

    We present an algorithm for reconstructing particle cascades from event data of a high energy physics experiment. For a given physics process, the algorithm reconstructs all possible configurations of the cascade from the final state objects. We describe the procedure as well as examples of physics processes of different complexity studied at hadron-hadron colliders. We estimate the performance of the algorithm at 20 microseconds per reconstructed decay vertex and 0.6 kByte per reconstructed particle in the decay trees.

  6. A novel reconstruction algorithm for bioluminescent tomography based on Bayesian compressive sensing

    Science.gov (United States)

    Wang, Yaqi; Feng, Jinchao; Jia, Kebin; Sun, Zhonghua; Wei, Huijun

    2016-03-01

    Bioluminescence tomography (BLT) is becoming a promising tool because it can resolve the biodistribution of bioluminescent reporters associated with cellular and subcellular function through several millimeters to centimeters of tissue in vivo. However, BLT reconstruction is an ill-posed problem. By incorporating sparse a priori information about the bioluminescent source, sparsity-based reconstruction algorithms obtain enhanced image quality. Therefore, sparsity-based BLT reconstruction algorithms have great potential. Here, we proposed a novel reconstruction method based on Bayesian compressive sensing and investigated its feasibility and effectiveness with a heterogeneous phantom. The results demonstrate the potential and merits of the proposed algorithm.

  7. Source reconstruction technique for slot array antennas using the Gerchberg-Papoulis algorithm

    OpenAIRE

    Sano, Makoto; Sierra Castañer, Manuel; Salmerón Ruiz, Tamara; Hirokawa, Jiro; Ando, Makoto

    2014-01-01

    A source reconstruction technique for slot array antennas is presented. By supplying information about the positions and the polarizations of the slots to the Gerchberg-Papoulis iterative algorithm, the field on the slots is accurately reconstructed. The proposed technique is applied to the source reconstruction of a K-band radial line slot antenna (RLSA), and the simulated and measured results are presented.
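
    As a generic illustration of the Gerchberg-Papoulis iteration underlying such a technique, the 1-D sketch below alternates between re-imposing the known samples and enforcing a band limitation in the Fourier domain; the slot-position and polarization constraints of the actual antenna method are not modeled here.

```python
import numpy as np

def gerchberg_papoulis(known, known_mask, band_mask, n_iter=200):
    """Iteratively extrapolate a band-limited signal from partially known samples.

    known      : measured samples (values outside known_mask are ignored)
    known_mask : boolean array, True where samples are reliable
    band_mask  : boolean array in the FFT domain, True for allowed frequencies
    """
    x = np.where(known_mask, known, 0.0).astype(complex)
    for _ in range(n_iter):
        spectrum = np.fft.fft(x) * band_mask          # enforce the band limitation
        x = np.fft.ifft(spectrum)
        x[known_mask] = known[known_mask]             # re-impose the measured samples
    return x
```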

  8. DART: a robust algorithm for fast reconstruction of three-dimensional grain maps

    DEFF Research Database (Denmark)

    Batenburg, K.J.; Sijbers, J.; Poulsen, Henning Friis;

    2010-01-01

    classical tomography. To test the properties of the algorithm, three-dimensional X-ray diffraction microscopy data are simulated and reconstructed with DART as well as by a conventional iterative technique, namely SIRT (simultaneous iterative reconstruction technique). For 100 × 100 pixel reconstructions...

  10. Ptychographic reconstruction algorithm for frequency resolved optical gating: super-resolution and supreme robustness

    CERN Document Server

    Sidorenko, Pavel; Avnat, Zohar; Cohen, Oren

    2016-01-01

    Frequency-resolved optical gating (FROG) is probably the most popular technique for complete characterization of ultrashort laser pulses. In FROG, a reconstruction algorithm retrieves the pulse from a measured spectrogram, yet current FROG reconstruction algorithms require and exhibit several restricting features that weaken FROG performance. For example, the delay step must correspond to the spectral bandwidth measured with large enough SNR, a condition that limits the temporal resolution of the reconstructed pulse, obscures measurements of weak broadband pulses, and makes measurement of broadband mid-IR pulses hard and slow because the spectrograms become huge. We develop a new approach for FROG reconstruction, based on ptychography (a scanning coherent diffraction imaging technique), that removes many of the algorithmic restrictions. The ptychographic reconstruction algorithm is significantly faster and more robust to noise than current FROG algorithms, which are based on generalized projections (GP). We d...

  11. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Mingjian Sun

    2015-01-01

    Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation based time reversal algorithms, and that it can provide higher imaging quality while using significantly fewer measurement positions or scanning times.

  12. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    Science.gov (United States)

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic 3D watertight surfaces of different irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple, and can be readily used to reconstruct a watertight 3D surface of any object from its digital image sequence. Here, digital images of different objects are used to build sparse, followed by dense, 3D point clouds of the objects. These image-obtained point clouds are then used to generate photo-realistic 3D surfaces, using different surface reconstruction algorithms such as Poisson reconstruction and the ball-pivoting algorithm. Different control parameters of these algorithms are identified which affect the quality and computation time of the reconstructed 3D surface. The effects of these control parameters on the generation of a 3D surface from point clouds of different density are studied. It is shown that the surface quality of Poisson reconstruction depends significantly on the samples-per-node (SN) value, with greater SN values resulting in better quality surfaces. Also, the quality of the 3D surface generated using the ball-pivoting algorithm is found to depend strongly on the clustering radius and angle threshold values. The results obtained from this study give the reader a valuable insight into the effects of the different control parameters on the reconstructed surface quality.

  13. Reconstruction of the interior of the Saint Salvator abbey of Ename around 1290

    Directory of Open Access Journals (Sweden)

    Carlotta Capurro

    2014-10-01

    In this paper we outline the research process behind the reconstruction of the Saint Saviour abbey in Ename (Oudenaarde, Belgium) in the 13th century, covering both its architectural decoration and its furnishings. The reason for the reconstruction is the creation of an educational game for visitors of the Provincial Heritage Centre, built just next to the archaeological site of the abbey.

  14. Primal-dual Interior-point Algorithms for Second-order Cone Optimization Based on a New Parametric Kernel Function

    Institute of Scientific and Technical Information of China (English)

    Yan Qin BAI; Guo Qiang WANG

    2007-01-01

    A class of polynomial primal-dual interior-point algorithms for second-order cone optimization based on a new parametric kernel function, with parameters p and q, is presented. Its growth term is between linear and quadratic. Some new tools for the analysis of the algorithms are proposed. The complexity bounds of O(√N log N log(N/ε)) for large-update methods and O(√N log(N/ε)) for small-update methods match the best known complexity bounds obtained for these methods. Numerical tests demonstrate the behavior of the algorithms for different values of the parameters p and q.

  15. Algorithm study of wavefront reconstruction based on the cyclic radial shear interferometer

    CERN Document Server

    Li Da Hai; Chen Huai Xin; Chen Zhen Pei; Chen Bo Fei; Jing Feng

    2002-01-01

    The authors present a new algorithm for wavefront reconstruction based on the cyclic radial shear interferometer. With this technique, the actual wavefront can be reconstructed directly and accurately from the phase-difference distribution, which is obtained from the radial shearing pattern by Fourier transform. It can help to accurately measure the distorted wavefront of ICF in process. An experiment is presented to test the algorithm.

  16. Advanced reconstruction algorithms for electron tomography: from comparison to combination.

    Science.gov (United States)

    Goris, B; Roelandts, T; Batenburg, K J; Heidari Mezerji, H; Bals, S

    2013-04-01

    In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared and the advantages and disadvantages are discussed. Furthermore, we describe how the result of a three dimensional (3D) reconstruction based on TVM can provide objective information that is needed as the input for a DART reconstruction. This approach results in a tomographic reconstruction of which the segmentation is carried out in an objective manner.

  17. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    Science.gov (United States)

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.

  18. Real Time Equilibrium Reconstruction Algorithm in EAST Tokamak

    Institute of Scientific and Technical Information of China (English)

    王华忠; 罗家融; 黄勤超

    2004-01-01

    The EAST (HT-7U) superconducting tokamak is a national project of China on fusion research, with a capability of long-pulse (~ 1000 s) operation. In order to realize long-duration steady-state operation of EAST, significant real-time control capability is required, and it is crucial to obtain the current profile parameters and the plasma shape in real time through a flexible control system. As these discharge parameters cannot be measured directly, a current profile consistent with the magnetohydrodynamic equilibrium must be evaluated from external magnetic measurements, based on a linearized iterative least-squares method, which can meet the requirements of the measurements. The algorithm, for which the EFIT (equilibrium fitting) code is used as a reference, is given in this paper, and the computational effort is reduced by parametrizing the current profile linearly in terms of a number of physical parameters. In order to introduce this reconstruction algorithm clearly, the main hardware design is also described.

  19. Three-dimensional imaging reconstruction algorithm of gated-viewing laser imaging with compressive sensing.

    Science.gov (United States)

    Li, Li; Xiao, Wei; Jian, Weijian

    2014-11-20

    Three-dimensional (3D) laser imaging combined with compressive sensing (CS) has the advantage of lower power consumption and fewer imaging sensors; however, it places a heavy computational burden on subsequent processing. In this paper we propose a fast 3D imaging reconstruction algorithm to deal with time-slice images sampled by single-pixel detectors. The algorithm performs 3D imaging reconstruction before CS recovery, thus saving much of the runtime of CS recovery. Several experiments are conducted to verify the performance of the algorithm. Simulation results demonstrate that the proposed algorithm performs more efficiently than an existing algorithm.

  20. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application...
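
    As one hedged, illustrative CP instance (not necessarily one of the problems derived in the paper), the sketch below applies the primal–dual iteration to min_x 0.5*||Ax − b||^2 subject to x >= 0, for which both proximal maps are available in closed form; the step sizes satisfy sigma*tau*||A||^2 <= 1.

```python
import numpy as np

def chambolle_pock_nonneg_ls(A, b, n_iter=500):
    """CP iteration for min_x 0.5*||Ax - b||^2 s.t. x >= 0 (illustrative instance)."""
    L = np.linalg.norm(A, 2)            # operator norm of K = A
    tau = sigma = 1.0 / L               # then sigma * tau * L**2 = 1
    theta = 1.0
    x = np.zeros(A.shape[1])
    x_bar = x.copy()
    y = np.zeros(A.shape[0])
    for _ in range(n_iter):
        # dual step: prox of sigma*F*, with F(y) = 0.5*||y - b||^2
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
        # primal step: prox of tau*G = projection onto the nonnegative orthant
        x_new = np.maximum(x - tau * (A.T @ y), 0.0)
        # over-relaxation step
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x
```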

  1. A new iterative algorithm for reconstructing a signal from its dyadic wavelet transform modulus maxima

    Institute of Scientific and Technical Information of China (English)

    张茁生; 刘贵忠; 刘峰

    2003-01-01

    A new algorithm for reconstructing a signal from its wavelet transform modulus maxima is presented based on an iterative method for solutions to monotone operator equations in Hilbert spaces. The algorithm's convergence is proved. Numerical simulations for different types of signals are given. The results indicate that compared with Mallat's alternate projection method, the proposed algorithm is simpler, faster and more effective.

  2. AN OPTIMAL METHOD FOR ADJUSTING THE CENTERING PARAMETER IN THE WIDE-NEIGHBORHOOD PRIMAL-DUAL INTERIOR-POINT ALGORITHM FOR LINEAR PROGRAMMING

    Institute of Scientific and Technical Information of China (English)

    Wen-bao Ai

    2004-01-01

    In this paper we present a dynamic optimal method for adjusting the centering parameter in the wide-neighborhood primal-dual interior-point algorithms for linear programming, while the centering parameter is generally a constant in the classical wide-neighborhood primal-dual interior-point algorithms. The computational results show that the new method is more efficient.

  3. Efficient ω-k-algorithm for circular SAR and cylindrical reconstruction areas

    Directory of Open Access Journals (Sweden)

    A. Dallinger

    2006-01-01

    We present a novel reconstruction algorithm of ω-k type which is suited to wideband circular synthetic aperture data taken in stripmap mode. The proposed algorithm allows an image to be reconstructed on a cylindrical surface. The range trajectory is approximated by a Taylor series expansion using only the quadratic terms, which limits the angular reconstruction range (cross range); in our case this is not a restriction for the application. Wider areas with respect to cross range can be covered by joining several reconstructed images side by side by means of digital spotlighting.

  4. Convergence rate calculation of simultaneous iterative reconstruction technique algorithm for diffuse optical tomography image reconstruction: A feasibility study

    Science.gov (United States)

    Chuang, Ching-Cheng; Tsai, Jui-che; Chen, Chung-Ming; Yu, Zong-Han; Sun, Chia-Wei

    2012-04-01

    Diffuse optical tomography (DOT) is an emerging technique for functional biological imaging, and its imaging quality depends on the reconstruction algorithm. The simultaneous iterative reconstruction technique (SIRT) has been widely used for DOT image reconstruction, but there is no criterion for truncating the iteration based on any residual parameter; the number of iteration loops is usually decided by an empirical rule. This work presents a convergence rate (CR) calculation that can be of great help for SIRT optimization. In this paper, four inhomogeneities with various shapes of absorption distribution are simulated as imaging targets, and the images are reconstructed and analyzed based on the SIRT method. To balance time consumption and imaging accuracy in the reconstruction process, the number of iteration loops needs to be optimized with a criterion in the algorithm, namely that the root mean square error (RMSE) should be minimized within a limited number of iterations. For clinical applications of DOT, the RMSE cannot be obtained because the measured targets are unknown. Thus, the correlation between the RMSE and the CR of the SIRT algorithm is analyzed in this paper. From the simulation results, the CR reveals the corresponding RMSE value of the reconstructed images, so the CR calculation offers an optimized criterion for the iteration process in the SIRT algorithm for DOT imaging. Based on this result, SIRT can be modified with the CR calculation for self-optimization, and the CR serves as an indicator of SIRT image reconstruction in clinical DOT measurements. From the comparison between RMSE and CR, a threshold value of CR (CRT) can provide an optimized number of iteration steps for DOT image reconstruction. This paper presents the feasibility study using the CR criterion for SIRT in simulation; the clinical application to DOT measurement requires further investigation.
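
    For orientation, the sketch below shows a standard SIRT iteration together with a simple convergence-rate measure, here assumed to be the relative change of the projection residual between successive iterations; the exact CR definition and threshold CRT used in the study may differ.

```python
import numpy as np

def sirt_with_cr(A, b, n_iter=100, relax=1.0, cr_threshold=1e-3, eps=1e-12):
    """SIRT reconstruction with a simple convergence-rate (CR) stopping criterion."""
    inv_row = 1.0 / (A.sum(axis=1) + eps)      # R: inverse row sums
    inv_col = 1.0 / (A.sum(axis=0) + eps)      # C: inverse column sums
    x = np.zeros(A.shape[1])
    prev_res = np.linalg.norm(b - A @ x)
    for k in range(n_iter):
        residual = b - A @ x
        x = x + relax * inv_col * (A.T @ (inv_row * residual))
        res = np.linalg.norm(b - A @ x)
        cr = abs(prev_res - res) / (prev_res + eps)   # assumed CR definition
        if cr < cr_threshold:                          # stop when progress stalls
            break
        prev_res = res
    return x, k + 1
```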

  5. Research on Image Reconstruction Algorithms for Tuber Electrical Resistance Tomography System

    Directory of Open Access Journals (Sweden)

    Jiang Zili

    2016-01-01

    The application of electrical resistance tomography (ERT) technology has been expanded to the field of agriculture, and the concept of TERT (Tuber Electrical Resistance Tomography) is proposed. On the basis of research on the forward and inverse problems of the TERT system, a hybrid algorithm based on a genetic algorithm is proposed, which can be used in the TERT system to monitor the growth status of plant tubers. Image reconstruction in a TERT system differs from that in a conventional ERT system for two-phase flow measurement: TERT imaging requires higher measurement precision, whereas conventional ERT is more concerned with image reconstruction speed. A variety of algorithms are analyzed and optimized to make them suitable for the TERT system, for example linear back projection, modified Newton-Raphson and the genetic algorithm. Experimental results show that the novel hybrid algorithm is superior to the other algorithms and can effectively improve the image reconstruction quality.

  6. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low

  7. A Homogeneous and Self-Dual Interior-Point Linear Programming Algorithm for Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Skajaa, Anders

    2015-01-01

    We develop an efficient homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control of constrained linear systems with linear objective functions. The algorithm is based on a Riccati iteration procedure, which is adapted to the linear... The algorithm 1) is significantly faster than several state-of-the-art IPMs based on sparse linear algebra, and 2) warm-start reduces the average number of iterations by 35-40%.

  8. A practical local tomography reconstruction algorithm based on known subregion

    CERN Document Server

    Paleo, Pierre; Mirone, Alessandro

    2016-01-01

    We propose a new method to reconstruct data acquired in a local tomography setup. This method uses an initial reconstruction and refines it by correcting the low-frequency artifacts known as the cupping effect. A basis of Gaussian functions is used to correct the initial reconstruction, and the coefficients of this basis are iteratively optimized under the constraint of a known subregion. Using a coarse basis reduces the degrees of freedom of the problem while still correcting the cupping effect. Simulations show that the known-region constraint yields an unbiased reconstruction, in accordance with the uniqueness theorems stated in local tomography.

  9. Single image super-resolution reconstruction method based on LC-KSVD algorithm

    Science.gov (United States)

    Zhang, Yaolan; Liu, Yijun

    2017-05-01

    A good dictionary has a direct impact on the result of super-resolution image reconstruction. To address the problem that a dictionary learned with the K-SVD algorithm has representation ability but carries no class information, this paper proposes a single-image super-resolution algorithm based on LC-KSVD (label-consistent K-SVD). The algorithm adds classifier parameter constraints to the dictionary learning process, giving the dictionary both good representation and good discrimination ability. The experimental results show that the algorithm achieves good reconstruction quality and robustness.

  10. Comparison study of typical algorithms for reconstructing time series from the recurrence plot of dynamical systems

    Institute of Scientific and Technical Information of China (English)

    Liu Jie; Shi Shu-Ting; Zhao Jun-Chan

    2013-01-01

    The three most widely used methods for reconstructing the underlying time series via the recurrence plots (RPs) of a dynamical system are compared with each other in this paper. We aim to reconstruct a toy series, a periodical series, a random series, and a chaotic series to compare the effectiveness of the most widely used typical methods in terms of signal correlation analysis. The application of the most effective algorithm to the typical chaotic Lorenz system verifies the correctness of such an effective algorithm. It is verified that, based on the unthresholded RPs, one can reconstruct the original attractor by choosing different RP thresholds based on the Hirata algorithm. It is shown that, in real applications, it is possible to reconstruct the underlying dynamics by using quite little information from observations of real dynamical systems. Moreover, rules for choosing the threshold in the algorithm are also suggested.

  11. An Approximate Cone Beam Reconstruction Algorithm for Gantry-Tilted CT Using Tangential Filtering

    Directory of Open Access Journals (Sweden)

    Ming Yan

    2006-01-01

    The FDK algorithm is a well-known 3D (three-dimensional) approximate algorithm for CT (computed tomography) image reconstruction and is also known to suffer from considerable artifacts when the scanning cone angle is large. Recently, it has been improved by performing the ramp filtering along the tangential direction of the X-ray source helix to deal with the large cone angle problem. In this paper, we present an FDK-type approximate reconstruction algorithm for gantry-tilted CT imaging. The proposed method improves the image reconstruction by filtering the projection data along a proper direction which is determined by the CT parameters and the gantry tilt angle. As a result, the proposed algorithm for gantry-tilted CT reconstruction provides more scanning flexibility in clinical CT scanning and is computationally efficient. The performance of the proposed algorithm is evaluated with the Turbell clock phantom and a thorax phantom and compared with the FDK algorithm and a popular 2D (two-dimensional) approximate algorithm. The results show that the proposed algorithm achieves better image quality for gantry-tilted CT image reconstruction.

  12. Filtering of measurement noise with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Pivnenko, Sergey

    2014-01-01

    Two different antenna models are set up in GRASP and CHAMP, and noise is added to the radiated field. The noisy field is then given as input to the 3D reconstruction of DIATOOL and the SWE coefficients and the far-field radiated by the reconstructed currents are compared with the noise-free results...

  13. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images

    OpenAIRE

    Chong Fan; Xushuai Chen; Lei Zhong; Min Zhou; Yun Shi; Yulin Duan

    2017-01-01

    A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve a seamless image mosaicking because of the uneven distribution of brightness and contrast among these sub-blocks. An effectively improved weighted Wallis dodging algorithm is proposed, aiming at the characteristic that SR reconstructed images are gray images with the same size and overlapping region. This ...

  14. Application aspects of advanced antenna diagnostics with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Pivnenko, Sergey

    2015-01-01

    This paper focuses on two important applications of the 3D reconstruction algorithm of the commercial software DIATOOL for antenna diagnostics. The first one is the accurate and detailed identification of array malfunctioning, thanks to the available enhanced spatial resolution of the reconstructed...

  15. An FBP image reconstruction algorithm for x-ray differential phase contrast CT

    Science.gov (United States)

    Qi, Zhihua; Chen, Guang-Hong

    2008-03-01

    Most recently, a novel data acquisition method has been proposed and experimentally implemented for x-ray differential phase contrast computed tomography (DPC-CT), in which a conventional x-ray tube and a Talbot-Lau type interferometer were utilized in data acquisition. The divergent nature of the data acquisition system requires a divergent-beam image reconstruction algorithm for DPC-CT. This paper focuses on addressing this image reconstruction issue. We developed a filtered backprojection algorithm to directly reconstruct the DPC-CT images from acquired projection data. The developed algorithm allows one to directly reconstruct the decrement of the real part of the refractive index from the measured data. In order to accurately reconstruct an image, the data need to be acquired over an angular range of at least 180° plus the fan-angle. Different from the parallel beam data acquisition and reconstruction methods, a 180° rotation angle for data acquisition system does not provide sufficient data for an accurate reconstruction of the entire field of view. Numerical simulations have been conducted to validate the image reconstruction algorithm.

  16. Array diagnostics, spatial resolution, and filtering of undesired radiation with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, C.; Pivnenko, Sergey; Jørgensen, E.

    2013-01-01

    This paper focuses on three important features of the 3D reconstruction algorithm of DIATOOL: the identification of array elements improper functioning and failure, the obtainable spatial resolution of the reconstructed fields and currents, and the filtering of undesired radiation and scattering...

  17. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Super-resolution (SR) reconstruction techniques are capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  18. An Improved Predictor-Corrector Interior-Point Algorithm for Linear Complementarity Problems with -Iteration Complexity

    National Research Council Canada - National Science Library

    Fang, Debin; Yu, Qian

    2011-01-01

    ...) based on the Mizuno-Todd-Ye algorithm. The modified corrector steps in our algorithm can not only draw the iteration point back to a narrower neighborhood of the center path but also reduce the duality gap...

  19. Adaptive infinite impulse response system identification using modified-interior search algorithm with Lèvy flight.

    Science.gov (United States)

    Kumar, Manjeet; Rawat, Tarun Kumar; Aggarwal, Apoorva

    2017-03-01

    In this paper, a new meta-heuristic optimization technique, called the interior search algorithm (ISA) with Lévy flight, is proposed and applied to determine the optimal parameters of an unknown infinite impulse response (IIR) system for the system identification problem. ISA is based on aesthetics, which is commonly used in interior design and decoration processes. In ISA, a composition phase and a mirror phase are applied for addressing nonlinear and multimodal system identification problems. System identification using the modified-ISA (M-ISA) based method involves faster convergence and single parameter tuning, and does not require derivative information because it uses a stochastic random search based on the concepts of Lévy flight. A proper tuning of the control parameter has been performed in order to achieve a balance between the intensification and diversification phases. In order to evaluate the performance of the proposed method, mean square error (MSE), computation time and percentage improvement are considered as the performance measures. To validate the performance of the M-ISA based method, simulations have been carried out for three benchmarked IIR systems using same-order and reduced-order models. Genetic algorithm (GA), particle swarm optimization (PSO), cat swarm optimization (CSO), cuckoo search algorithm (CSA), differential evolution using wavelet mutation (DEWM), firefly algorithm (FFA), craziness based particle swarm optimization (CRPSO), harmony search (HS) algorithm, opposition based harmony search (OHS) algorithm, hybrid particle swarm optimization-gravitational search algorithm (HPSO-GSA) and ISA are also used to model the same examples and the simulation results are compared. The obtained results confirm the efficiency of the proposed method.

  20. Validation of Ionosonde Electron Density Reconstruction Algorithms with IONOLAB-RAY in Central Europe

    Science.gov (United States)

    Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra

    2016-07-01

    Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosondes, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to the transmitted frequency. The critical frequencies of ionospheric layers and the virtual heights, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographic methods, electron density profiles can also be estimated from the ionograms. Although structural pictures of the ionosphere in the vertical direction can be observed from ionosonde measurements, some errors may arise due to inaccuracies in signal propagation, modeling, data processing and tomographic reconstruction algorithms. Recently the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of the electron density profile from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate the parameters of any analytical function that defines electron density with respect to height, using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true height analysis. IONOLAB-RAY is a tool to investigate the propagation path and parameters of HF waves in the ionosphere. The algorithm models wave propagation using ray representation under the geometrical optics approximation. In the algorithm, the structural ionospheric characteristics are represented as realistically as possible, including anisotropy, inhomogeneity and time dependence, in a 3-D voxel structure. The algorithm is also used

  1. Noise Equivalent Counts Based Emission Image Reconstruction Algorithm of Tomographic Gamma Scanning

    CERN Document Server

    Wang, Ke; Feng, Wei; Han, Dong

    2014-01-01

    Tomographic Gamma Scanning (TGS) is a technique used to assay the nuclide distribution and radioactivity in nuclear waste drums. Both transmission and emission scans are performed in TGS and the transmission image is used for the attenuation correction in emission reconstructions. The error of the transmission image, which is not considered by the existing reconstruction algorithms, negatively affects the final results. An emission reconstruction method based on Noise Equivalent Counts (NEC) is presented. Noises from the attenuation image are concentrated to the projection data to apply the NEC Maximum-Likelihood Expectation-Maximization algorithm. Experiments are performed to verify the effectiveness of the proposed method.

  2. Image Reconstruction Algorithm for Electrical Charge Tomography System

    Directory of Open Access Journals (Sweden)

    M. F. Rahmat

    2010-01-01

    Problem statement: Many problems in scientific computing can be formulated as inverse problems, and a vast majority of these are ill-posed. In Electrical Charge Tomography (EChT), the sensitivity matrix generated from forward modeling is normally very ill-conditioned. This poses difficulties for the solution of the inverse problem, especially regarding the accuracy and stability of the reconstructed image. The objective of this study is to reconstruct the cross-sectional image of the material in a pipeline gravity-dropped mode conveyor and to address the ill-conditioning of the sensitivity matrix. Approach: A Least Squares with Regularization (LSR) method is introduced to reconstruct the image, and electrodynamic sensors installed around the pipe are used to capture the data. Results: The images were validated using a digital imaging technique and the Singular Value Decomposition (SVD) method. The results show that the images reconstructed by this method hold good promise in terms of accuracy and stability. Conclusion: This implies that the LSR method provides good and promising results in terms of the accuracy and stability of the reconstructed image. As a result, an efficient method for electrical charge tomography image reconstruction has been introduced.
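
    A minimal sketch of a least-squares-with-regularization (Tikhonov-type) inversion of the kind described, assuming a given sensitivity matrix S and boundary measurement vector m; the regularization parameter and any weighting used in the original work are not specified here.

```python
import numpy as np

def lsr_reconstruct(S, m, lam=1e-2):
    """Regularized least-squares reconstruction: x = (S^T S + lam*I)^-1 S^T m.

    The lam*I term stabilizes the inversion of the ill-conditioned sensitivity matrix.
    """
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ m)
```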

  3. Polynomial-time interior-point algorithm based on a local self-concordant finite barrier function

    Institute of Scientific and Technical Information of China (English)

    JIN Zheng-jing; BAI Yan-qin

    2009-01-01

    The choice of self-concordant functions is the key to efficient algorithms for linear and quadratic convex optimization, which provide a method with polynomial-time iterations to solve linear and quadratic convex optimization problems. The parameters of a self-concordant barrier function can be used to compute the complexity bound of the proposed algorithm. In this paper, it is proved that the finite barrier function is a local self-concordant barrier function. By deriving the local values of the parameters of this barrier function, the desired complexity bound of an interior-point algorithm based on this local self-concordant function for the linear optimization problem is obtained. The bound matches the best known bound for small-update methods.

  4. Algorithms and software for total variation image reconstruction via first-order methods

    DEFF Research Database (Denmark)

    Dahl, Joahim; Hansen, Per Christian; Jensen, Søren Holdt

    2010-01-01

    This paper describes new algorithms and related software for total variation (TV) image reconstruction, more specifically: denoising, inpainting, and deblurring. The algorithms are based on one of Nesterov's first-order methods, tailored to the image processing applications in such a way that...

  5. Evaluation of the Bresenham algorithm for image reconstruction with ultrasound computer tomography

    Science.gov (United States)

    Spieß, Norbert; Zapf, Michael; Ruiter, Nicole V.

    2011-03-01

    At Karlsruhe Institute of Technology a 3D Ultrasound Computer Tomography (USCT) system is under development for early breast cancer detection. With 3.5 million acquired raw data records and up to one billion voxels per image, the reconstruction of breast volumes at the highest possible resolution may last for weeks. The currently applied backprojection algorithm, based on the synthetic aperture focusing technique (SAFT), offers only limited potential for further decreasing the reconstruction time. An alternative reconstruction method could use signal-detected data and rasterize the backprojected ellipsoids directly. A well-known rasterization algorithm is the Bresenham algorithm, which was originally designed to rasterize lines. In this work an existing Bresenham concept for rasterizing circles is extended to comply with the requirements of image reconstruction in USCT: the circle rasterization is adapted to rasterize spheres and extended to a floating-point parameterization. The evaluation of the algorithm showed that the quality of the rasterization is comparable to that of the original algorithm. The achieved performance of the circle and sphere rasterization algorithms was 12 MVoxel/s and 3.5 MVoxel/s, respectively. When the performance increase due to the reduced A-scan data is taken into account, an acceleration by a factor of 28 in comparison to the currently applied algorithm could be reached. For future work the presented rasterization algorithm offers additional potential for further speed-up.
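
    For reference, the classical integer midpoint (Bresenham-type) circle rasterization that such an approach builds on can be sketched as follows; the floating-point parameterization and the extension to spheres and ellipsoids developed in this work are not reproduced.

```python
def rasterize_circle(cx, cy, radius):
    """Integer midpoint (Bresenham-type) circle rasterization: returns pixel coordinates."""
    x, y = radius, 0
    err = 1 - radius                       # decision variable for the midpoint test
    pixels = []
    while x >= y:
        # the eight symmetric octant points
        pixels += [(cx + x, cy + y), (cx - x, cy + y), (cx + x, cy - y), (cx - x, cy - y),
                   (cx + y, cy + x), (cx - y, cy + x), (cx + y, cy - x), (cx - y, cy - x)]
        y += 1
        if err <= 0:
            err += 2 * y + 1               # midpoint inside the circle: keep x
        else:
            x -= 1
            err += 2 * (y - x) + 1         # midpoint outside: step x inward
    return pixels
```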

  6. IPED: Inheritance Path-based Pedigree Reconstruction Algorithm Using Genotype Data

    Science.gov (United States)

    Wang, Zhanyong; Han, Buhm; Parida, Laxmi; Eskin, Eleazar

    2013-01-01

    The problem of inference of family trees, or pedigree reconstruction, for a group of individuals is a fundamental problem in genetics. Various methods have been proposed to automate the process of pedigree reconstruction given the genotypes or haplotypes of a set of individuals. Current methods, unfortunately, are very time-consuming and inaccurate for complicated pedigrees, such as pedigrees with inbreeding. In this work, we propose an efficient algorithm that is able to reconstruct large pedigrees with reasonable accuracy. Our algorithm reconstructs the pedigrees generation by generation, backward in time from the extant generation. We predict the relationships between individuals in the same generation using an inheritance path-based approach implemented with an efficient dynamic programming algorithm. Experiments show that our algorithm runs in linear time with respect to the number of reconstructed generations, and therefore, it can reconstruct pedigrees that have a large number of generations. Indeed it is the first practical method for reconstruction of large pedigrees from genotype data. PMID:24093229

  7. A New Algorithm for Reconstructing Two-Dimensional Temperature Distribution by Ultrasonic Thermometry

    Directory of Open Access Journals (Sweden)

    Xuehua Shen

    2015-01-01

    Temperature, and especially temperature distribution, is one of the most fundamental and vital parameters for the theoretical study and control of various industrial applications. In this paper, ultrasonic thermometry for reconstructing temperature distributions is investigated, exploiting the dependence of ultrasound velocity on temperature. In practical applications of this ultrasonic technique, a reconstruction algorithm based on the least-squares method is commonly used. However, it has the limitation that the number of divided blocks of the measured area cannot exceed the number of effective travel paths, which eventually leads to its inability to offer sufficient temperature information. To make up for this defect, an improved reconstruction algorithm based on the least-squares method and multiquadric interpolation is presented. Its reconstruction performance is validated via numerical studies using four temperature distribution models of different complexity and is compared with that of the algorithm based on the least-squares method alone. The comparison and analysis indicate that the algorithm presented in this paper has better reconstruction performance: the reconstructed temperature distributions do not lose information near the edge of the area, the errors are small, and the mean reconstruction time is short enough to meet real-time demands.
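
    The multiquadric interpolation step can be sketched as follows: block temperatures from the least-squares stage, assumed to be located at block-center coordinates, are interpolated with multiquadric radial basis functions phi(r) = sqrt(r^2 + c^2) to give a smooth two-dimensional field; the shape parameter and block layout here are illustrative.

```python
import numpy as np

def multiquadric_interpolate(centers, values, query_points, c=0.1):
    """Multiquadric RBF interpolation of block temperatures onto arbitrary points.

    centers      : (n, 2) block-center coordinates from the least-squares stage
    values       : (n,)   block temperatures
    query_points : (m, 2) points at which to evaluate the temperature field
    """
    centers = np.asarray(centers, float)
    phi = lambda d2: np.sqrt(d2 + c * c)     # multiquadric basis on squared distances
    d2_cc = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    weights = np.linalg.solve(phi(d2_cc), np.asarray(values, float))
    d2_qc = ((np.asarray(query_points, float)[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return phi(d2_qc) @ weights
```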

  8. Acceleration of the direct reconstruction of linear parametric images using nested algorithms.

    Science.gov (United States)

    Wang, Guobao; Qi, Jinyi

    2010-03-01

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  9. Parallel OSEM Reconstruction Algorithm for Fully 3-D SPECT on a Beowulf Cluster.

    Science.gov (United States)

    Rong, Zhou; Tianyu, Ma; Yongjie, Jin

    2005-01-01

    In order to improve the computation speed of the ordered subset expectation maximization (OSEM) algorithm for fully 3-D single photon emission computed tomography (SPECT) reconstruction, an experimental Beowulf-type cluster was built and several parallel reconstruction schemes were described. We implemented a single-program-multiple-data (SPMD) parallel 3-D OSEM reconstruction algorithm based on the message passing interface (MPI) and tested it with different numbers of processors and different reconstruction voxel grid sizes (64×64×64 and 128×128×128). The performance of the parallelization was evaluated in terms of the speedup factor and parallel efficiency. This parallel implementation methodology is expected to help make fully 3-D OSEM algorithms more feasible in clinical SPECT studies.
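
    For reference, a serial sketch of the OSEM update that such an implementation parallelizes is shown below; the simple strided subset choice and the dense system-matrix layout are assumptions for illustration, and the MPI domain decomposition described in the abstract is not shown.

    ```python
    import numpy as np

    def osem(A, y, n_subsets=8, n_iter=4):
        """Minimal ordered-subset EM sketch for y ~ A x with A, x, y nonnegative.

        A is the (n_bins x n_voxels) system matrix, y the measured projections.
        Subsets are formed here by striding over projection bins; a real SPECT
        code would group bins by projection angle.
        """
        n_bins, n_vox = A.shape
        x = np.ones(n_vox)
        subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
        for _ in range(n_iter):
            for idx in subsets:
                As = A[idx]                                   # rows of this subset
                ratio = y[idx] / np.maximum(As @ x, 1e-12)    # measured / estimated projections
                x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
        return x
    ```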

  10. Optimization of digital breast tomosynthesis (DBT) acquisition parameters for human observers: effect of reconstruction algorithms

    Science.gov (United States)

    Zeng, Rongping; Badano, Aldo; Myers, Kyle J.

    2017-04-01

    We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART reconstructed images than in FBP reconstructed images. In addition, we applied CHOs with three commonly used channel models, Laguerre-Gauss (LG) channels, square (SQR) channels and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.

  11. A Compton scattering image reconstruction algorithm based on total variation minimization

    Institute of Scientific and Technical Information of China (English)

    Li Shou-Peng; Wang Lin-Yuan; Yan Bin; Li Lei; Liu Yong-Jun

    2012-01-01

    Compton scattering imaging is a novel radiation imaging method using scattered photons. Its main characteristic is that the detectors do not have to be on the opposite side of the source, thus avoiding the rotation process. The reconstruction problem in Compton scattering imaging is the inverse problem of solving for electron densities from nonlinear equations, which is ill-posed. This means the solution exhibits instability and sensitivity to noise or erroneous measurements. Using the theory of sparse image reconstruction, a reconstruction algorithm based on total variation minimization is proposed. The reconstruction problem is formulated as an optimization problem with a nonlinear data-consistency constraint. The simulated results show that the proposed algorithm can reduce reconstruction error and improve image quality, especially when there are not enough measurements.
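
    A generic, hedged sketch of TV-penalized reconstruction of this kind is given below; it uses a user-supplied forward/adjoint pair as a stand-in for the nonlinear Compton-scatter model and a smoothed TV gradient, so it illustrates the principle rather than the authors' exact constrained formulation.

    ```python
    import numpy as np

    def tv_grad(u, eps=1e-8):
        """Gradient of a smoothed isotropic total variation of a 2-D image."""
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        return -div

    def tv_reconstruct(forward, adjoint, data, shape, lam=0.05, step=1e-3, n_iter=200):
        """Gradient scheme for 0.5 * ||forward(u) - data||^2 + lam * TV(u).

        `forward`/`adjoint` stand in for the (nonlinear) scatter model and its
        linearization; any user-supplied pair with compatible shapes works.
        """
        u = np.zeros(shape)
        for _ in range(n_iter):
            residual = forward(u) - data
            u -= step * (adjoint(residual) + lam * tv_grad(u))
            u = np.clip(u, 0, None)               # electron densities are nonnegative
        return u
    ```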

  12. Image reconstruction algorithms for electrical capacitance tomography based on ROF model using new numerical techniques

    Science.gov (United States)

    Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi

    2017-03-01

    Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object and tolerate noisy data, a Rudin–Osher–Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, the simplified augmented Lagrangian (SAL) and the accelerated alternating direction method of multipliers (AADMM), are introduced to address the above-mentioned problems in ECT. The effects of the parameters, the number of iterations for the different algorithms, and the noise level in the capacitance data are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms in comparison with the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise, and that the AADMM algorithm outperforms the other algorithms in identifying the object from its background.

  13. Fast algorithms for nonconvex compression sensing: MRI reconstruction from very few data

    Energy Technology Data Exchange (ETDEWEB)

    Chartrand, Rick [Los Alamos National Laboratory

    2009-01-01

    Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.

  14. Reconstruction algorithm medical imaging DRR; Algoritmo de construccion de imagenes medicas DRR

    Energy Technology Data Exchange (ETDEWEB)

    Estrada Espinosa, J. C.

    2013-07-01

    The reconstruction method for digitally reconstructed radiograph (DRR) imaging is based on two orthogonal images, in the dorsal and lateral decubitus positions of the simulation. DRR images are reconstructed with an algorithm that simulates a conventional X-ray exposure in which the emitted beam is not divergent; in this case the rays are considered parallel in the DRR image reconstruction. For this purpose it is necessary to use the Hounsfield unit (HU) values of every voxel in all the axial slices that form the CT study, finally obtaining the reconstructed DRR image by performing a transformation from 3D to 2D. (Author)
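
    A minimal sketch of such a parallel-ray DRR computation from a CT volume in Hounsfield units might look as follows; the conversion constant `mu_water` and the choice of projection axis are illustrative assumptions, not values taken from the record.

    ```python
    import numpy as np

    def drr_from_ct(hu_volume, axis=1, mu_water=0.02):
        """Parallel-ray DRR sketch from a CT volume given in Hounsfield units.

        Each DRR pixel is the line integral of attenuation along one parallel
        ray (here simply along one volume axis).  HU are converted back to
        linear attenuation via mu = mu_water * (1 + HU/1000); mu_water is an
        assumed effective value in 1/mm at the simulated tube voltage.
        """
        mu = mu_water * (1.0 + hu_volume / 1000.0)
        mu = np.clip(mu, 0.0, None)               # air and below contribute no negative attenuation
        path = mu.sum(axis=axis)                  # line integral along the chosen axis
        return np.exp(-path)                      # transmitted intensity (arbitrary units)

    # usage: an anterior-posterior DRR from a (z, y, x) volume is drr_from_ct(vol, axis=1)
    ```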

  15. Practical algorithms for simulation and reconstruction of digital in-line holograms

    CERN Document Server

    Latychevskaia, Tatiana

    2014-01-01

    Here, we present practical methods for simulation and reconstruction of in-line digital holograms recorded with plane and spherical waves. The algorithms described here are applicable to holographic imaging of an object exhibiting absorption as well as phase shifting properties. Optimal parameters, related to distances, sampling rate, and other factors for successful simulation and reconstruction of holograms are evaluated and criteria for the achievable resolution are worked out. Moreover, we show that the numerical procedures for the reconstruction of holograms recorded with plane and spherical waves are identical under certain conditions. Experimental examples of holograms and their reconstructions are also discussed.
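
    For plane-wave illumination, numerical back-propagation of a hologram is commonly done with the angular-spectrum method; a generic sketch (not the authors' code, and assuming a square, background-normalized hologram) is given below.

    ```python
    import numpy as np

    def angular_spectrum_reconstruct(hologram, wavelength, pixel_size, z):
        """Back-propagate an in-line hologram recorded with a plane wave by a
        distance z using the angular-spectrum transfer function."""
        n = hologram.shape[0]
        fx = np.fft.fftfreq(n, d=pixel_size)
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # evanescent part cut
        H = np.exp(-1j * kz * z)                      # back-propagation transfer function
        field = np.fft.ifft2(np.fft.fft2(hologram) * H)
        return np.abs(field) ** 2                     # reconstructed intensity
    ```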

  16. Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath

    2012-01-01

    This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. The multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in applications with multi-band frequency-sparse signals. The performance of the existing compressed sensing reconstruction algorithms has not yet been investigated for discrete multi-coset sampling. We compare the following algorithms – orthogonal matching pursuit, multiple signal classification, subspace-augmented multiple signal classification, focal under...

  17. Technical Note: Proximal Ordered Subsets Algorithms for TV Constrained Optimization in CT Image Reconstruction

    CERN Document Server

    Rose, Sean; Sidky, Emil Y; Pan, Xiaochuan

    2016-01-01

    This article is intended to supplement our 2015 paper in Medical Physics titled "Noise properties of CT images reconstructed by use of constrained total-variation, data-discrepancy minimization", in which ordered subsets methods were employed to perform total-variation constrained data-discrepancy minimization for image reconstruction in X-ray computed tomography. Here we provide details regarding implementation of the ordered subsets algorithms and suggestions for selection of algorithm parameters. Detailed pseudo-code is included for every algorithm implemented in the original manuscript.

  18. Theory and algorithms for image reconstruction on chords and within regions of interest.

    Science.gov (United States)

    Zou, Yu; Pan, Xiaochuan; Sidky, Emil Y

    2005-11-01

    We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments. In this situation, the proposed algorithms become the algorithms that we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, which include the study on image reconstruction on chords of two-circle trajectory, which is nonsmooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.

  19. On a Gradient-Based Algorithm for Sparse Signal Reconstruction in the Signal/Measurements Domain

    Directory of Open Access Journals (Sweden)

    Ljubiša Stanković

    2016-01-01

    Full Text Available Sparse signals can be recovered from a reduced set of samples by using compressive sensing algorithms. In common compressive sensing methods the signal is recovered in the sparsity domain. A method for the reconstruction of sparse signals which reconstructs the missing/unavailable samples/measurements was recently proposed. This method can be efficiently used in signal processing applications where a complete set of signal samples exists. The missing samples are considered as the minimization variables, while the available samples are fixed. Reconstruction of the unavailable signal samples/measurements is performed using a gradient-based algorithm in the time domain, with an adaptive step. The performance of this algorithm with respect to the step size and its convergence are analyzed, and a criterion for step-size adaptation is proposed in this paper. The step adaptation is based on the gradient direction angles. Illustrative examples and a statistical study are presented. The computational efficiency of this algorithm is compared with two other commonly used gradient algorithms that reconstruct the signal in the sparsity domain. Uniqueness of the recovered signal is checked using a recently introduced theorem. The application of the algorithm to the reconstruction of highly corrupted images is presented as well.
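
    A simplified sketch of this idea, with the missing samples as free variables and the gradient of the DFT-domain l1 norm estimated by finite differences, is given below; the simple geometric step decay stands in for the angle-based step adaptation analyzed in the paper and is an assumption made here only for illustration.

    ```python
    import numpy as np

    def reconstruct_missing(x, missing, delta=None, n_iter=100):
        """Gradient-style recovery of missing samples of a signal assumed to be
        sparse in the DFT domain.

        x       : signal array with the available samples (missing positions may hold zeros)
        missing : indices of the unavailable samples
        """
        y = x.astype(float).copy()
        y[missing] = 0.0
        if delta is None:
            delta = np.max(np.abs(y))             # initial perturbation / step scale
        for _ in range(n_iter):
            grad = np.zeros(len(missing))
            for k, m in enumerate(missing):
                yp, ym = y.copy(), y.copy()
                yp[m] += delta
                ym[m] -= delta
                # finite-difference estimate of d||DFT(y)||_1 / dy[m]
                grad[k] = (np.abs(np.fft.fft(yp)).sum()
                           - np.abs(np.fft.fft(ym)).sum()) / (2 * delta)
            y[missing] -= delta * grad / len(y)   # move toward a smaller sparsity measure
            delta *= 0.95                         # simplified step decay (see note above)
        return y
    ```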

  20. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET

    Science.gov (United States)

    Mikhaylova, E.; Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With unprecedented high channel density (450 channels/cm3) image reconstruction is a challenge. Therefore optimization is needed to find the best algorithm in order to exploit correctly the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.

  1. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    Science.gov (United States)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.

  2. A New Track Reconstruction Algorithm suitable for Parallel Processing based on Hit Triplets and Broken Lines

    CERN Document Server

    Schöning, Andre

    2016-01-01

    Track reconstruction in high track multiplicity environments at current and future high rate particle physics experiments is a big challenge and very time consuming. The search for track seeds and the fitting of track candidates are usually the most time consuming steps in the track reconstruction. Here, a new and fast track reconstruction method based on hit triplets is proposed which exploits a three-dimensional fit model including multiple scattering and hit uncertainties from the very start, including the search for track seeds. The hit triplet based reconstruction method assumes a homogeneous magnetic field, which allows an analytical solution to be given for the triplet fit result. This method is highly parallelizable, needs fewer operations than other standard track reconstruction methods and is therefore ideal for implementation on parallel computing architectures. The proposed track reconstruction algorithm has been studied in the context of the Mu3e-experiment and a typical LHC experiment.

  3. A New Track Reconstruction Algorithm suitable for Parallel Processing based on Hit Triplets and Broken Lines

    Science.gov (United States)

    Schöning, André

    2016-11-01

    Track reconstruction in high track multiplicity environments at current and future high rate particle physics experiments is a big challenge and very time consuming. The search for track seeds and the fitting of track candidates are usually the most time consuming steps in the track reconstruction. Here, a new and fast track reconstruction method based on hit triplets is proposed which exploits a three-dimensional fit model including multiple scattering and hit uncertainties from the very start, including the search for track seeds. The hit triplet based reconstruction method assumes a homogeneous magnetic field, which allows an analytical solution to be given for the triplet fit result. This method is highly parallelizable, needs fewer operations than other standard track reconstruction methods and is therefore ideal for implementation on parallel computing architectures. The proposed track reconstruction algorithm has been studied in the context of the Mu3e-experiment and a typical LHC experiment.

  4. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    Science.gov (United States)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The methodology proposed can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, also leading to better quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes with quality equivalent to the standard MART, with the benefit of reduced computational time.
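
    As a point of reference for the algorithms compared above, a bare-bones MART update (of which SMART and BIMART are variants) can be sketched as follows; the dense weighting matrix is an illustrative simplification of the real camera model.

    ```python
    import numpy as np

    def mart(W, p, n_vox, n_iter=10, mu=1.0):
        """Minimal MART sketch for tomographic particle reconstruction.

        W is the (n_pixels x n_vox) weighting matrix mapping voxel intensities E
        to camera pixel intensities, p the recorded pixel values.  Each pixel
        equation multiplicatively corrects the voxels it sees:
            E_j <- E_j * (p_i / (W_i . E)) ** (mu * W_ij)
        """
        E = np.ones(n_vox)
        for _ in range(n_iter):
            for i in range(W.shape[0]):
                wi = W[i]
                proj = wi @ E
                if proj <= 0 or p[i] <= 0:
                    continue                       # skip empty or dark pixels
                E *= (p[i] / proj) ** (mu * wi)
        return E
    ```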

  5. Algorithms For Phylogeny Reconstruction In a New Mathematical Model

    NARCIS (Netherlands)

    Lenzini, Gabriele; Marianelli, Silvia

    1997-01-01

    The evolutionary history of a set of species is represented by a tree called phylogenetic tree or phylogeny. Its structure depends on precise biological assumptions about the evolution of species. Problems related to phylogeny reconstruction (i.e., finding a tree representation of information regard

  6. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    Science.gov (United States)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography), and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors, both in terms of quality and speed performance, in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
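
    For the tomography step, the Kaczmarz method mentioned above amounts to cyclic row projections; a minimal sketch, with an assumed dense system matrix and an explicit relaxation parameter, is given below.

    ```python
    import numpy as np

    def kaczmarz(A, b, n_sweeps=10, relax=1.0):
        """Plain Kaczmarz sketch for the tomography step A x = b.

        Each equation (row a_i of A) projects the current iterate onto its
        hyperplane; one sweep visits every row once.  Relaxation < 1 damps the
        update, which is often needed for noisy wavefront-sensor data.
        """
        x = np.zeros(A.shape[1])
        row_norms = np.einsum('ij,ij->i', A, A)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] == 0:
                    continue
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x
    ```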

  7. An Algorithmic Approach for the Reconstruction of Nasal Skin Defects: Retrospective Analysis of 130 Cases

    Directory of Open Access Journals (Sweden)

    Berrak Akşam

    2016-06-01

    Full Text Available Objective: Most of the malignant cutaneous carcinomas are seen in the nasal region. Reconstruction of nasal defects is challenging because of the unique anatomic properties and complex structure of this region. In this study, we present our algorithm for the nasal skin defects that occurred after malignant skin tumor excisions. Material and Methods: Patients whose nasal skin was reconstructed after malignant skin tumor excision were included in the study. These patients were evaluated by their age, gender, comorbidities, tumor location, tumor size, reconstruction type, histopathological diagnosis, and tumor recurrence. Results: A total of 130 patients (70 female, 60 male) were evaluated. The average age of the patients was 67.8 years. Tumors were located mostly at the dorsum, alar region, and tip of the nose. When reconstruction methods were evaluated, primary closure was preferred in 14.6% of patients, full thickness skin grafts were used in 25.3% of patients, and reconstruction with flaps was the choice in 60% of patients. Different flaps were used according to the subunits. Mostly, dorsal nasal flaps, bilobed flaps, nasolabial flaps, and forehead flaps were used. Conclusion: The defect-only reconstruction principle was accepted in this study. Previously described subunits of the nose, such as the dorsum, tip, alar region, lateral wall, columella, and soft triangles, were further divided into subregions by their anatomical relations. An algorithm was planned with these subregions. In nasal skin reconstruction, this algorithm helps in selecting the methods for the best results and minimizing complications.

  8. An effective, robust and parallel implementation of an interior point algorithm for limit state optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Frier, Christian;

    2014-01-01

    A robust and effective finite element based implementation of lower bound limit state analysis applying an interior point formulation is presented in this paper. The lower bound formulation results in a convex optimization problem consisting of a number of linear constraints from the equilibrium equations and a number of convex non-linear constraints from the yield criteria. The computational robustness has been improved by eliminating a large number of the equilibrium equations a priori, leaving only the statically redundant variables as free optimization variables. The elimination of equilibrium equations is based on an optimized numbering of elements and stress variables following the frontal method approach used in the standard finite element method. The optimized numbering secures sparsity in the formulation. The convex non-linear yield criteria are treated directly in the interior point...

  9. Comparing five different iterative reconstruction algorithms for computed tomography in an ROC study.

    Science.gov (United States)

    Jensen, Kristin; Martinsen, Anne Catrine T; Tingberg, Anders; Aaløkken, Trond Mogens; Fosse, Erik

    2014-12-01

    The purpose of this study was to evaluate lesion conspicuity achieved with five different iterative reconstruction techniques from four CT vendors at three different dose levels. Comparisons were made between the iterative algorithms and filtered back projection (FBP), among and within systems. An anthropomorphic liver phantom was examined with four CT systems, each from a different vendor. CTDIvol levels of 5 mGy, 10 mGy and 15 mGy were chosen. Images were reconstructed with FBP and the iterative algorithm on each system. Images were interpreted independently by four observers, and the areas under the ROC curve (AUCs) were calculated. Noise and contrast-to-noise ratios (CNR) were measured. One iterative algorithm increased AUC (0.79, 0.95, and 0.97) compared to FBP (0.70, 0.86, and 0.93) at all dose levels. Another iterative algorithm increased AUC from 0.78 with FBP to 0.84 (p = 0.007) at 5 mGy; differences at 10 and 15 mGy were not significant (p-values: 0.084-0.883). Three algorithms showed no difference in AUC compared to FBP (p-values: 0.008-1.000). All of the algorithms decreased noise (10-71%) and improved CNR. Only two algorithms improved lesion detection, even though noise reduction was shown with all algorithms. Iterative reconstruction algorithms affected lesion detection differently at different dose levels. One iterative algorithm improved lesion detectability compared to filtered back projection. Three algorithms did not significantly improve lesion detectability. One algorithm improved lesion detectability at the lowest dose level.

  10. A stand-alone track reconstruction algorithm for the scintillating fibre tracker at the LHCb upgrade

    CERN Multimedia

    Quagliani, Renato

    2017-01-01

    The LHCb upgrade detector project foresees the presence of a scintillating fibre tracker (SciFi) to be used during the LHC Run III, starting in 2020. The instantaneous luminosity will be increased up to $2\times10^{33}$ cm$^{-2}$s$^{-1}$, five times larger than in Run II, and a full software event reconstruction will be performed at the full bunch crossing rate by the trigger. The new running conditions, and the tighter timing constraints in the software trigger, represent a big challenge for track reconstruction. This poster presents the design and performance of a novel algorithm that has been developed to reconstruct track segments using solely hits from the SciFi. This algorithm is crucial for the reconstruction of tracks originating from long-lived particles such as $K_{S}^{0}$ and $\Lambda$, and greatly enhances the physics potential and capabilities of the LHCb upgrade when compared to its previous implementation.

  11. PROCEEDINGS ON SYNCHROTRON RADIATION: An ART iterative reconstruction algorithm for computed tomography of diffraction enhanced imaging

    Science.gov (United States)

    Wang, Zhen-Tian; Zhang, Li; Huang, Zhi-Feng; Kang, Ke-Jun; Chen, Zhi-Qiang; Fang, Qiao-Guang; Zhu, Pei-Ping

    2009-11-01

    X-ray diffraction enhanced imaging (DEI) has extremely high sensitivity for weakly absorbing low-Z samples in medical and biological fields. In this paper, we propose an Algebraic Reconstruction Technique (ART) iterative reconstruction algorithm for computed tomography of diffraction enhanced imaging (DEI-CT). An Ordered Subsets (OS) technique is used to accelerate the ART reconstruction. Few-view reconstruction is also studied, and a partial differential equation (PDE) type filter, which has edge-preserving and denoising abilities, is used to improve the image quality and eliminate the artifacts. The proposed algorithm is validated with both numerical simulations and an experiment at the Beijing Synchrotron Radiation Facility (BSRF).

  12. Cosmic Web Reconstruction through Density Ridges: Method and Algorithm

    CERN Document Server

    Chen, Yen-Chi; Freeman, Peter E; Genovese, Christopher R; Wasserman, Larry

    2015-01-01

    The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. While many filament estimators have been proposed, they generally lack estimates of uncertainty, reducing their inferential power. In this paper, we demonstrate how one may apply the Subspace Constrained Mean Shift (SCMS) algorithm (Ozertem and Erdogmus (2011); Genovese et al. (2012)) to uncover filamentary structure in galaxy data. The SCMS algorithm is a gradient ascent method that models filaments as density ridges, one-dimensional smooth curves that trace high-density regions within the point cloud. We also demonstrate how augmenting the SCMS algorithm with bootstrap-based methods of uncertainty estimation allows one to place uncertainty bands around putative filaments. We apply the SCMS method to datasets sampled from the P3M N-body simulation, with galaxy number densities consistent with SDSS and WFIRST-AFTA and to LOWZ and CMASS data fro...

  13. Jet Energy Scale and its Uncertainties using the Heavy Ion Jet Reconstruction Algorithm in pp Collisions

    CERN Document Server

    Puri, Akshat; The ATLAS collaboration

    2017-01-01

    ATLAS uses a jet reconstruction algorithm in heavy ion collisions that takes as input calorimeter towers of size $0.1 \times \pi/32$ in $\Delta\eta \times \Delta\phi$ and iteratively determines the underlying event background. This algorithm, which is different from the standard jet reconstruction used in ATLAS, is also used for the proton-proton collisions that serve as reference data for Pb+Pb and p+Pb. This poster provides details of the heavy ion jet reconstruction algorithm and its performance in pp collisions. The calibration procedure is described in detail and cross-checks using photon-jet balance are shown. The uncertainties on the jet energy scale and the jet energy resolution are described.

  14. A robust jet reconstruction algorithm for high-energy lepton colliders

    Directory of Open Access Journals (Sweden)

    M. Boronat

    2015-11-01

    Full Text Available We propose a new sequential jet reconstruction algorithm for future lepton colliders at the energy frontier. The Valencia algorithm combines the natural distance criterion for lepton colliders with the greater robustness against backgrounds of algorithms adapted to hadron colliders. Results on a detailed Monte Carlo simulation of tt¯ and ZZ production at future linear e+e− colliders (ILC and CLIC), with a realistic level of background overlaid, show that it achieves better performance in the presence of background than the classical algorithms used at previous e+e− colliders.

  15. Reconstruction of strain distribution in fiber Bragg grat-ings with differential evolution algorithm

    Institute of Scientific and Technical Information of China (English)

    WEN Xiao-yan; YU Qoan

    2008-01-01

    A differential evolution algorithm is used to solve the inverse problem of the strain distribution in a fiber Bragg grating (FBG). Linear and nonlinear strain profiles are reconstructed based on the reflection spectra. An approximate solution could be obtained within only 50 rounds of evolution. Numerical examples show good agreement between the target strain profiles and the reconstructed ones. Online performance analysis illustrates the efficiency and practicality of the differential evolution algorithm in solving the inverse problem of the FBG.
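
    A bare-bones DE/rand/1/bin loop of the kind used for such an inverse problem might look as follows; the objective function wrapping an FBG spectral model, and all parameter values, are assumptions made here for illustration.

    ```python
    import numpy as np

    def differential_evolution(objective, bounds, pop_size=30, F=0.6, CR=0.9, n_gen=50):
        """Minimal DE/rand/1/bin sketch.

        `objective(params)` would compute the mismatch between the measured FBG
        reflection spectrum and the spectrum predicted for a parameterized
        strain profile (model not shown); `bounds` is a list of (lo, hi) pairs.
        """
        rng = np.random.default_rng(0)
        lo, hi = np.array(bounds).T
        pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
        cost = np.array([objective(p) for p in pop])
        for _ in range(n_gen):
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)        # mutation
                cross = rng.random(len(bounds)) < CR             # binomial crossover
                trial = np.where(cross, mutant, pop[i])
                c_trial = objective(trial)
                if c_trial < cost[i]:                            # greedy selection
                    pop[i], cost[i] = trial, c_trial
        return pop[np.argmin(cost)]
    ```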

  16. Combinatorial analysis and algorithms for quasispecies reconstruction using next-generation sequencing

    Directory of Open Access Journals (Sweden)

    Vincenti Donatella

    2011-01-01

    Full Text Available Abstract Background Next-generation sequencing (NGS) offers a unique opportunity for high-throughput genomics and has potential to replace Sanger sequencing in many fields, including de-novo sequencing, re-sequencing, meta-genomics, and characterisation of infectious pathogens, such as viral quasispecies. Although methodologies and software for whole genome assembly and genome variation analysis have been developed and refined for NGS data, reconstructing a viral quasispecies using NGS data remains a challenge. This application would be useful for analysing intra-host evolutionary pathways in relation to immune responses and antiretroviral therapy exposures. Here we introduce a set of formulae for the combinatorial analysis of a quasispecies, given an NGS re-sequencing experiment, and an algorithm for quasispecies reconstruction. We require that sequenced fragments are aligned against a reference genome, and that the reference genome is partitioned into a set of sliding windows (amplicons). The reconstruction algorithm is based on combinations of multinomial distributions and is designed to minimise the reconstruction of false variants, called in-silico recombinants. Results The reconstruction algorithm was applied to error-free simulated data and reconstructed a high percentage of true variants, even at a low genetic diversity, where the chance to obtain in-silico recombinants is high. Results on empirical NGS data from patients infected with hepatitis B virus confirmed its ability to characterise different viral variants from distinct patients. Conclusions The combinatorial analysis provided a description of the difficulty of reconstructing a quasispecies, given a determined amplicon partition, and a measure of population diversity. The reconstruction algorithm showed good performance on both simulated and real data, even in the presence of sequencing errors.

  17. A task-based comparison of two reconstruction algorithms for digital breast tomosynthesis

    Science.gov (United States)

    Mahadevan, Ravi; Ikejimba, Lynda C.; Lin, Yuan; Samei, Ehsan; Lo, Joseph Y.

    2014-03-01

    Digital breast tomosynthesis (DBT) generates 3-D reconstructions of the breast by taking X-ray projections at various angles around the breast. DBT improves cancer detection as it minimizes tissue overlap that is present in traditional 2-D mammography. In this work, two methods of reconstruction, filtered backprojection (FBP) and the Newton-Raphson iterative reconstruction, were used to create 3-D reconstructions from phantom images acquired on a breast tomosynthesis system. The task-based image analysis method was used to compare the performance of each reconstruction technique. The task simulated a 10 mm lesion within the breast containing iodine concentrations between 0.0 mg/ml and 8.6 mg/ml. The TTF was calculated using the reconstruction of an edge phantom, and the NPS was measured with a structured breast phantom (CIRS 020) over different exposure levels. The detectability index d' was calculated to assess image quality of the reconstructed phantom images. Image quality was assessed for both conventional, single energy and dual energy subtracted reconstructions. Dose allocation between the high and low energy scans was also examined. Over the full range of dose allocations, the iterative reconstruction yielded a higher detectability index than the FBP for single energy reconstructions. For dual energy subtraction, detectability index was maximized when most of the dose was allocated to the high energy image. With that dose allocation, the performance trend for reconstruction algorithms reversed; FBP performed better than the corresponding iterative reconstruction. However, FBP performance varied very erratically with changing dose allocation. Therefore, iterative reconstruction is preferred for both imaging modalities despite underperforming dual energy FBP, as it provides stable results.

  18. Imaging metallic samples using electrical capacitance tomography: forward modelling and reconstruction algorithms

    Science.gov (United States)

    Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.

    2016-11-01

    Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within the sensing region. So far, ECT has been primarily used to image non-conductive media only, since if the conductivity of the imaged object is high, the capacitance measuring circuit will be almost short-circuited by the conduction path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples using conventional ECT systems by investigating the two main aspects of image reconstruction algorithms, namely the forward problem and the inverse problem. For the forward problem, two different methods to model the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high contrast images are examined. The first two methods are the linear single step Tikhonov method and the iterative total variation regularization method, and use two sets of ECT data to reconstruct the image in time difference mode. The third method, namely the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially using the level set algorithm to find the boundary of the metal.

  19. Research on reconstruction algorithms for 2D temperature field based on TDLAS

    Science.gov (United States)

    Peng, Dong; Jin, Yi; Zhai, Chao

    2015-10-01

    Tunable Diode Laser Absorption Tomography (TDLAT), a promising technique which combines Tunable Diode Laser Absorption Spectroscopy (TDLAS) and computed tomography, has shown the advantage of high spatial resolution for temperature measurement. Given the large number of tomography algorithms, it is necessary to understand the features of the tomography algorithms and find suitable ones for the specific experiment. This paper examines two different algorithms, the algebraic reconstruction technique (ART) and simulated annealing (SA), which are implemented using Matlab. Reconstruction simulations of unimodal and bimodal temperature phantoms were done under different conditions, and the results of the simulations were analyzed. The results show that for the unimodal temperature phantom both algorithms work well: the reconstruction quality is acceptable under suitable conditions and the result of ART is better. For the bimodal temperature phantom, however, the result of SA is much better. More specifically, the reconstruction quality of ART is mainly affected by the ray coverage: the maximum deviation for the unimodal temperature phantom is 5.9%, while for the bimodal temperature field it is up to 25%. The reconstruction quality of SA is mainly affected by the number of transitions: the maximum deviation for the unimodal temperature phantom is 9.2% when 6 transitions are used, which is a little worse than the result of ART; however, the maximum deviation for the bimodal temperature phantom is much better than ART's, at about 5.2% when 6 transitions are used.
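
    For orientation, a generic simulated-annealing loop of the kind evaluated above can be sketched as follows; the cost function wrapping the travel-time model and all control parameters are illustrative assumptions, not values taken from the paper.

    ```python
    import numpy as np

    def simulated_annealing(cost, x0, step=0.05, T0=1.0, alpha=0.995, n_iter=5000):
        """Generic SA loop: `cost(x)` is assumed to return the mismatch between
        measured travel times and those predicted by a parameterized
        temperature-distribution model with parameters x (model not shown).
        """
        rng = np.random.default_rng(1)
        x = np.asarray(x0, dtype=float)
        c = cost(x)
        best_x, best_c, T = x.copy(), c, T0
        for _ in range(n_iter):
            cand = x + step * rng.standard_normal(x.shape)
            cc = cost(cand)
            # Metropolis acceptance: always take improvements, sometimes take worse moves
            if cc < c or rng.random() < np.exp(-(cc - c) / T):
                x, c = cand, cc
                if c < best_c:
                    best_x, best_c = x.copy(), c
            T = max(T * alpha, 1e-6)              # geometric cooling schedule
        return best_x
    ```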

  20. Incorporation of local dependent reliability information into the Prior Image Constrained Compressed Sensing (PICCS) reconstruction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Vaegler, Sven; Sauer, Otto [Wuerzburg Univ. (Germany). Dept. of Radiation Oncology; Stsepankou, Dzmitry; Hesser, Juergen [University Medical Center Mannheim, Mannheim (Germany). Dept. of Experimental Radiation Oncology

    2015-07-01

    The reduction of dose in cone beam computed tomography (CBCT) arises from the decrease of the tube current for each projection as well as from the reduction of the number of projections. In order to maintain good image quality, sophisticated image reconstruction techniques are required. The Prior Image Constrained Compressed Sensing (PICCS) algorithm incorporates prior images into the reconstruction and outperforms the widely used Feldkamp-Davis-Kress (FDK) algorithm when the number of projections is reduced. However, prior images that contain major variations have so far not been appropriately considered in PICCS. We therefore propose the partial-PICCS (pPICCS) algorithm. This framework is a problem-specific extension of PICCS and additionally enables the incorporation of the reliability of the prior images. We assumed that the prior images are composed of areas with large and small deviations. Accordingly, a weighting matrix accounted for the assigned areas in the objective function. We applied our algorithm to the problem of image reconstruction from few views by simulations with a computer phantom as well as on clinical CBCT projections from a head-and-neck case. All prior images contained large local variations. The reconstructed images were compared to the reconstruction results of the FDK algorithm, of Compressed Sensing (CS) and of PICCS. To show the gain in image quality we compared image details with the reference image and used quantitative metrics (root-mean-square error (RMSE), contrast-to-noise ratio (CNR)). The pPICCS reconstruction framework yields images with substantially improved quality even when the number of projections is very small. The images contained less streaking, blurring and fewer inaccurately reconstructed structures compared to the images reconstructed by FDK, CS and conventional PICCS. The increased image quality is also reflected in large RMSE differences. We proposed a modification of the original PICCS algorithm. The pPICCS algorithm

  1. A level set based algorithm to reconstruct the urinary bladder from multiple views.

    Science.gov (United States)

    Ma, Zhen; Jorge, Renato Natal; Mascarenhas, T; Tavares, João Manuel R S

    2013-12-01

    The urinary bladder can be visualized from different views by imaging facilities such as computerized tomography and magnetic resonance imaging. Multi-view imaging can present more details of this pelvic organ and contribute to a more reliable reconstruction. Based on the information from multi-view planes, a level set based algorithm is proposed to reconstruct the 3D shape of the bladder using the cross-sectional boundaries. The algorithm provides a flexible solution to handle the discrepancies from different view planes and can obtain an accurate bladder surface with more geometric details. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.

  2. Study of cluster reconstruction and track fitting algorithms for CGEM-IT at BESIII

    CERN Document Server

    Guo, Yue; Ju, Xu-Dong; Wu, Ling-Hui; Xiu, Qing-Lei; Wang, Hai-Xia; Dong, Ming-Yi; Hu, Jing-Ran; Li, Wei-Dong; Li, Wei-Guo; Liu, Huai-Min; Ou-Yang, Qun; Shen, Xiao-Yan; Yuan, Ye; Zhang, Yao

    2015-01-01

    Considering the aging effects of the existing Inner Drift Chamber (IDC) of BESIII, a GEM-based inner tracker is proposed to be designed and constructed as an upgrade candidate for the IDC. This paper introduces a full simulation package of the CGEM-IT with a simplified digitization model, and describes the development of the software for cluster reconstruction and a track fitting algorithm based on the Kalman filter method for the CGEM-IT. Preliminary results from the reconstruction algorithms are obtained using a Monte Carlo sample of single muon events in the CGEM-IT.

  3. A reconstruction algorithm for compressive quantum tomography using various measurement sets.

    Science.gov (United States)

    Zheng, Kai; Li, Kezhi; Cong, Shuang

    2016-12-14

    It has been verified that compressed sensing (CS) offers a significant performance improvement for large quantum systems compared with the conventional quantum tomography approaches, because it reduces the number of measurements from O(d^2) to O(rd log(d)), in particular for quantum states that are fairly pure. Yet few algorithms have been proposed for quantum state tomography using CS specifically, let alone a basis analysis for various measurement sets in quantum CS. To fill this gap, in this paper an efficient and robust state reconstruction algorithm based on compressive sensing is developed. By leveraging the fixed-point equation approach to avoid the matrix inverse operation, we propose a fixed-point alternating direction method algorithm for compressive quantum state estimation that can handle both normal errors and large outliers in the optimization process. In addition, the properties of five practical measurement bases (including the Pauli basis) are analyzed in terms of their coherences and reconstruction performances, which provides theoretical guidance for the selection of measurement settings in quantum state estimation. The numerical experiments show that the proposed algorithm requires much less computing time, achieves higher reconstruction accuracy and is more robust to outlier noise than many existing state reconstruction algorithms.

  4. A reconstruction algorithm for compressive quantum tomography using various measurement sets

    Science.gov (United States)

    Zheng, Kai; Li, Kezhi; Cong, Shuang

    2016-12-01

    It has been verified that compressed sensing (CS) offers a significant performance improvement for large quantum systems compared with the conventional quantum tomography approaches, because it reduces the number of measurements from O(d^2) to O(rd log(d)), in particular for quantum states that are fairly pure. Yet few algorithms have been proposed for quantum state tomography using CS specifically, let alone a basis analysis for various measurement sets in quantum CS. To fill this gap, in this paper an efficient and robust state reconstruction algorithm based on compressive sensing is developed. By leveraging the fixed-point equation approach to avoid the matrix inverse operation, we propose a fixed-point alternating direction method algorithm for compressive quantum state estimation that can handle both normal errors and large outliers in the optimization process. In addition, the properties of five practical measurement bases (including the Pauli basis) are analyzed in terms of their coherences and reconstruction performances, which provides theoretical guidance for the selection of measurement settings in quantum state estimation. The numerical experiments show that the proposed algorithm requires much less computing time, achieves higher reconstruction accuracy and is more robust to outlier noise than many existing state reconstruction algorithms.

  5. A Hierarchical NeuroBayes-based Algorithm for Full Reconstruction of B Mesons at B Factories

    CERN Document Server

    Feindt, Michael; Kreps, Michal; Kuhr, Thomas; Neubauer, Sebastian; Zander, Daniel; Zupanc, Anze

    2011-01-01

    We describe a new B-meson full reconstruction algorithm designed for the Belle experiment at the B-factory KEKB, an asymmetric e+e- collider. To maximize the number of reconstructed B decay channels, it utilizes a hierarchical reconstruction procedure and probabilistic calculus instead of classical selection cuts. The multivariate analysis package NeuroBayes was used extensively to hold the balance between highest possible efficiency, robustness and acceptable CPU time consumption. In total, 1042 exclusive decay channels were reconstructed, employing 71 neural networks altogether. Overall, we correctly reconstruct one B+/- or B0 candidate in 0.3% or 0.2% of the BBbar events, respectively. This is an improvement in efficiency by roughly a factor of 2, depending on the analysis considered, compared to the cut-based classical reconstruction algorithm used at Belle. The new framework also features the ability to choose the desired purity or efficiency of the fully reconstructed sample. If the same purity as for t...

  6. Reconstruction of Gene Regulatory Networks Based on Two-Stage Bayesian Network Structure Learning Algorithm

    Institute of Scientific and Technical Information of China (English)

    Gui-xia Liu; Wei Feng; Han Wang; Lei Liu; Chun-guang Zhou

    2009-01-01

    In the post-genomic biology era, the reconstruction of gene regulatory networks from microarray gene expression data is very important for understanding the underlying biological system, and it has been a challenging task in bioinformatics. The Bayesian network model has been used in reconstructing the gene regulatory network for its advantages, but how to determine the network structure and parameters remains an important open question. This paper proposes a two-stage structure learning algorithm which integrates an immune evolution algorithm to build a Bayesian network. The new algorithm is evaluated with the use of both simulated and yeast cell cycle data. The experimental results indicate that the proposed algorithm can find many of the known regulatory relationships reported in the literature and predict unknown ones with high validity and accuracy.

  7. SimpleSTORM: a fast, self-calibrating reconstruction algorithm for localization microscopy.

    Science.gov (United States)

    Köthe, Ullrich; Herrmannsdörfer, Frank; Kats, Ilia; Hamprecht, Fred A

    2014-06-01

    Although there are many reconstruction algorithms for localization microscopy, their use is hampered by the difficulty to adjust a possibly large number of parameters correctly. We propose SimpleSTORM, an algorithm that determines appropriate parameter settings directly from the data in an initial self-calibration phase. The algorithm is based on a carefully designed yet simple model of the image acquisition process which allows us to standardize each image such that the background has zero mean and unit variance. This standardization makes it possible to detect spots by a true statistical test (instead of hand-tuned thresholds) and to de-noise the images with an efficient matched filter. By reducing the strength of the matched filter, SimpleSTORM also performs reasonably on data with high-spot density, trading off localization accuracy for improved detection performance. Extensive validation experiments on the ISBI Localization Challenge Dataset, as well as real image reconstructions, demonstrate the good performance of our algorithm.

  8. FastDIRC: a fast Monte Carlo and reconstruction algorithm for DIRC detectors

    CERN Document Server

    Hardin, John

    2016-01-01

    FastDIRC is a novel fast Monte Carlo and reconstruction algorithm for DIRC detectors. A DIRC employs rectangular fused-silica bars both as Cherenkov radiators and as light guides. Cherenkov-photon imaging and time-of-propagation information are utilized by a DIRC to identify charged particles. GEANT-based DIRC Monte Carlo simulations are extremely CPU intensive. The FastDIRC algorithm permits fully simulating a DIRC detector more than 10000 times faster than using GEANT. This facilitates designing a DIRC-reconstruction algorithm that improves the Cherenkov-angle resolution of a DIRC detector by about 30% compared to existing algorithms. FastDIRC also greatly reduces the time required to study competing DIRC-detector designs.

  9. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase with the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
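
    A hedged sketch of such an ER-style iteration, alternating a Fourier-magnitude constraint with a known-pixel constraint, is given below; the magnitude estimate is assumed to be supplied (for example from similar known patches), and the initial fill value is an arbitrary choice made here for illustration.

    ```python
    import numpy as np

    def error_reduction_inpaint(patch, known_mask, target_magnitude, n_iter=200):
        """Recover missing pixels of `patch` given an estimate of its Fourier
        transform magnitude.

        Each iteration enforces the magnitude in the Fourier domain and the
        known pixel values in the image domain.
        """
        est = patch.astype(float).copy()
        est[~known_mask] = patch[known_mask].mean()          # arbitrary initial fill
        for _ in range(n_iter):
            F = np.fft.fft2(est)
            F = target_magnitude * np.exp(1j * np.angle(F))  # magnitude constraint
            est = np.real(np.fft.ifft2(F))
            est[known_mask] = patch[known_mask]              # data (known-pixel) constraint
        return est
    ```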

  10. GENETIC ALGORITHM IN OPTIMIZATION DESIGN OF INTERIOR PERMANENT MAGNET SYNCHRONOUS MOTOR

    Directory of Open Access Journals (Sweden)

    Phuong Le Ngo

    2017-01-01

    Full Text Available Classical methods of designing electric motors help to achieve a functional motor, but do not ensure minimal cost in manufacturing and operation. Recently, optimization has become an important part of the modern electric motor design process. The objective of the optimization process is usually to minimize cost, energy loss, or mass, or to maximize torque and efficiency. Most of the requirements for electrical machine design are in contradiction to each other (reduction in volume or mass, improvement in efficiency, etc.). Optimization in the design of a permanent magnet synchronous motor (PMSM) is a multi-objective optimization problem. There are two approaches for solving this problem; one of them is evolutionary algorithms, which have gained a lot of attention recently. For designing a PMSM, evolutionary algorithms are the more attractive approach, and the genetic algorithm is one of the most common. This paper presents the components and procedures of genetic algorithms and their implementation on a computer. In the optimization process, analytical and finite element methods are used together for better performance and precision. The result of the optimization process is a set of solutions, from which the engineer will choose one. This method was used to design a permanent magnet synchronous motor based on the asynchronous motor type АИР112МВ8.
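
    As a generic illustration (not the authors' implementation), a minimal real-coded genetic algorithm loop for such a design problem might look as follows; the fitness function wrapping the analytical/finite-element motor evaluation and all GA settings are assumptions.

    ```python
    import numpy as np

    def genetic_algorithm(fitness, bounds, pop_size=40, n_gen=60, pm=0.1):
        """Minimal real-coded GA sketch; `fitness(x)` would wrap the evaluation
        of one candidate motor geometry (all parameter names assumed), and
        higher fitness is better."""
        rng = np.random.default_rng(2)
        lo, hi = np.array(bounds).T
        pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
        for _ in range(n_gen):
            fit = np.array([fitness(ind) for ind in pop])
            # tournament selection of parents
            idx = np.array([max(rng.choice(pop_size, 2), key=lambda i: fit[i])
                            for _ in range(pop_size)])
            parents = pop[idx]
            # uniform crossover between randomly paired parents
            partners = parents[rng.permutation(pop_size)]
            mask = rng.random(pop.shape) < 0.5
            children = np.where(mask, parents, partners)
            # Gaussian mutation, clipped to the design bounds
            mutate = rng.random(pop.shape) < pm
            children = np.clip(children + mutate * 0.05 * (hi - lo)
                               * rng.standard_normal(pop.shape), lo, hi)
            pop = children
        fit = np.array([fitness(ind) for ind in pop])
        return pop[np.argmax(fit)]
    ```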

  11. An Effective, Robust And Parallel Implementation Of An Interior Point Algorithm For Limit State Optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars

    2013-01-01

    of the precalculation step, which utilizes the principles of the well-known frontal method. The succeeding optimization algorithm is also significantly improved by applying a parallel implementation, which eliminates the exponential growth in computational time relative to the number of elements....

  12. DELAUNAY-BASED SURFACE RECONSTRUCTION ALGORITHM IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Triangulation of scattered points is the first important step in reverse engineering. New concepts of the dynamic circle and the closed point are put forward based on the current basic method. These new concepts narrow the region that the triangulation process must search and optimize the triangles as they are produced. Updating the searching edges dynamically controls the progress of the triangulation. The intersection judgment between a new triangle and the already produced triangles is changed into an intersection judgment between the new triangle and the searching edges. Examples illustrate the advantages of this new algorithm.

  13. TV-constrained incremental algorithms for low-intensity CT image reconstruction

    DEFF Research Database (Denmark)

    Rose, Sean D.; Andersen, Martin S.; Sidky, Emil Y.

    2015-01-01

    constraint can be guided by an image reconstructed by filtered backprojection (FBP). We apply our algorithm to low-dose synchrotron X-ray CT data from the Advanced Photon Source (APS) at Argonne National Labs (ANL) to demonstrate its potential utility. We find that the algorithm provides a means of edge-preserving regularization with the potential to generate useful images at low iteration number in low-dose CT.

  14. Application of incremental algorithms to CT image reconstruction for sparse-view, noisy data

    OpenAIRE

    Rose, Sean; Andersen, Martin Skovgaard; Sidky, Emil Y.; Pan, Xiaochuan

    2014-01-01

    This conference contribution adapts an incremental framework for solving optimization problems of interest for sparse-view CT. From the incremental framework two algorithms are derived: one that combines a damped form of the algebraic reconstruction technique (ART) with a total-variation (TV) projection, and one that employs a modified damped ART, accounting for a weighted-quadratic data fidelity term, combined with TV projection. The algorithms are demonstrated on simulated, noisy, sparse-view CT data.

  15. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward Steven,

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General-purpose computing on graphics processing units (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  16. Fast direct fourier reconstruction of radial and PROPELLER MRI data using the chirp transform algorithm on graphics hardware.

    Science.gov (United States)

    Feng, Yanqiu; Song, Yanli; Wang, Cong; Xin, Xuegang; Feng, Qianjin; Chen, Wufan

    2013-10-01

    To develop and test a new algorithm for fast direct Fourier transform (DrFT) reconstruction of MR data on non-Cartesian trajectories composed of lines with equally spaced points. The DrFT, which is normally used as a reference in evaluating the accuracy of other reconstruction methods, can reconstruct images directly from non-Cartesian MR data without interpolation. However, DrFT reconstruction involves substantially intensive computation, which makes the DrFT impractical for clinical routine applications. In this article, the Chirp transform algorithm was introduced to accelerate the DrFT reconstruction of radial and Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) MRI data located on the trajectories that are composed of lines with equally spaced points. The performance of the proposed Chirp transform algorithm-DrFT algorithm was evaluated by using simulation and in vivo MRI data. After implementing the algorithm on a graphics processing unit, the proposed Chirp transform algorithm-DrFT algorithm achieved an acceleration of approximately one order of magnitude, and the speed-up factor was further increased to approximately three orders of magnitude compared with the traditional single-thread DrFT reconstruction. Implementation of the Chirp transform algorithm-DrFT algorithm on the graphics processing unit can efficiently calculate the DrFT reconstruction of the radial and PROPELLER MRI data. Copyright © 2012 Wiley Periodicals, Inc.

  17. The anâtaxis phylogenetic reconstruction algorithm

    OpenAIRE

    Sonderegger, Bernhard Pascal

    2007-01-01

    Phylogenetics is a scientific discipline whose aim is to reconstruct the history of evolution from the characters of living organisms or from fossil data. Over the past few decades, the use of molecular characters in phylogenetics has become indispensable. Indeed, phylogenetics now plays an important role in the bioinformatic analysis of all kinds of molecular data. There is a large number of phylogenetic reconstruction techniques...

  18. The anâtaxis phylogenetic reconstruction algorithm

    OpenAIRE

    Sonderegger, Bernhard Pascal; Chopard, Bastien; Bittar, Gabriel

    2008-01-01

    Phylogenetics is a scientific discipline whose aim is to reconstruct the history of evolution from the characters of living organisms or from fossil data. Over the past few decades, the use of molecular characters in phylogenetics has become indispensable. Indeed, phylogenetics now plays an important role in the bioinformatic analysis of all kinds of molecular data. There is a large number of phylogenetic reconstruction techniques...

  19. Comparing five different iterative reconstruction algorithms for computed tomography in an ROC study

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Kristin; Martinsen, Anne Catrine T. [Rikshospitalet, The Intervention Centre, Postboks 4950, Oslo (Norway); University of Oslo, lnstitute of Physics, Oslo (Norway); Tingberg, Anders [Lund University, Skaane University Hospital, Department of Medical Radiation Physics, Malmoe (Sweden); Aaloekken, Trond Mogens [Rikshospitalet, Department of Radiology and Nuclear Medicine, Postboks 4950, Oslo (Norway); Fosse, Erik [Rikshospitalet, The Intervention Centre, Postboks 4950, Oslo (Norway); University of Oslo, lnstitute of Clinical Medicine, Oslo (Norway)

    2014-12-15

    The purpose of this study was to evaluate lesion conspicuity achieved with five different iterative reconstruction techniques from four CT vendors at three different dose levels. Comparisons were made of iterative algorithm and filtered back projection (FBP) among and within systems. An anthropomorphic liver phantom was examined with four CT systems, each from a different vendor. CTDIvol levels of 5 mGy, 10 mGy and 15 mGy were chosen. Images were reconstructed with FBP and the iterative algorithm on the system. Images were interpreted independently by four observers, and the areas under the ROC curve (AUCs) were calculated. Noise and contrast-to-noise ratios (CNR) were measured. One iterative algorithm increased AUC (0.79, 0.95, and 0.97) compared to FBP (0.70, 0.86, and 0.93) at all dose levels (p < 0.001 and p = 0.047). Another algorithm increased AUC from 0.78 with FBP to 0.84 (p = 0.007) at 5 mGy. Differences at 10 and 15 mGy were not significant (p-values: 0.084-0.883). Three algorithms showed no difference in AUC compared to FBP (p-values: 0.008-1.000). All of the algorithms decreased noise (10-71 %) and improved CNR. Only two algorithms improved lesion detection, even though noise reduction was shown with all algorithms. (orig.)
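
    As a point of reference, the contrast-to-noise ratio reported in studies of this kind is typically computed as in the sketch below; exact ROI placement and the noise definition vary between studies and are assumptions here.

    import numpy as np

    def cnr(roi_object, roi_background):
        # object/lesion contrast relative to background, divided by background noise (SD)
        return (np.mean(roi_object) - np.mean(roi_background)) / np.std(roi_background)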

  20. A Hierarchical Optimization Algorithm Based on GPU for Real-Time 3D Reconstruction

    Science.gov (United States)

    Lin, Jin-hua; Wang, Lu; Wang, Yan-jie

    2017-06-01

    In a machine vision sensing system, it is important to realize high-quality real-time 3D reconstruction of large-scale scenes. Recent online approaches perform well, but as the reconstruction is scaled up they suffer from pose-estimation drift and cumulative error, usually requiring a large amount of off-line processing to correct, which reduces reconstruction performance. In order to optimize the traditional volume fusion method and improve the old frame-to-frame pose estimation strategy, this paper presents a real-time CPU-to-GPU (Graphics Processing Unit) reconstruction system. Based on a robust camera pose estimation strategy, the algorithm fuses all the RGB-D input values into an effective hierarchical optimization framework and optimizes each frame according to the global camera pose, eliminating the serious dependence on tracking timeliness by continuously tracking against globally optimized frames. The system estimates globally optimized poses (bundling) in real time, supports robust tracking recovery (re-positioning), and re-estimates large-scale 3D scenes to ensure global consistency. It uses a set of sparse feature correspondences and geometric and ray matching functions within a single parallel optimization system. The experimental results show that the average reconstruction time is 415 ms per frame, and the ICP pose is estimated 20 times in 100.0 ms. For large-scale 3D reconstruction scenes, the system performs well in online reconstruction while maintaining reconstruction accuracy.

  1. Beam hardening correction using iterative total variation (ITV)-based algorithm in CBCT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Sungchae; Huh, Young [Converged Medical Device Research Center, Advanced Medical Device Research Division, KERI, Gyeonggido 426-910 (Korea, Republic of)

    2015-07-01

    Recently, beam hardening reduction has been required to produce high-quality reconstructions in X-ray cone-beam computed tomography (CBCT) systems for medical applications. This paper introduces an iterative total variation (ITV) method for filtered-backprojection reconstructions suffering from serious beam hardening problems. The Feldkamp, Davis, and Kress (FDK) algorithm is a widely used reconstruction technique for CBCT systems. FDK reconstruction is realized by weighting the projection data, filtering the projection images, and back-projecting the filtered projection data into the volume. However, the FDK algorithm suffers from beam hardening artifacts caused by the energy dependence of the X-ray attenuation coefficients. Recently, the total variation (TV) method for compressed sensing (CS) has been particularly useful in exploiting the prior knowledge of minimal variation in the X-ray attenuation characteristics across an object or the human body. But a practical implementation of this method still remains a challenge. The main problem is the iterative nature of solving the TV-based CS formulation, which generally requires multiple iterations of forward and backward projections of a large dataset within a clinically or industrially feasible time frame. In this paper, we propose the ITV method, applied after FDK reconstruction, for reducing beam hardening artifacts. The beam hardening problems are reduced by the ITV method, which promotes the sparsity inherent in the X-ray attenuation characteristics. (authors)
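
    A generic iterative TV smoothing step of the kind referred to above can be sketched as follows; this is plain TV gradient descent applied to a reconstructed slice, not the authors' exact ITV formulation, and the step size and iteration count are illustrative.

    import numpy as np

    def tv_smooth(img, n_iter=50, step=0.1, eps=1e-8):
        u = img.astype(float).copy()
        for _ in range(n_iter):
            gx = np.diff(u, axis=1, append=u[:, -1:])        # forward differences
            gy = np.diff(u, axis=0, append=u[-1:, :])
            norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
            px, py = gx / norm, gy / norm
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u += step * div                                   # descent step on the TV functional
        return u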

  2. An adaptive reconstruction algorithm for spectral CT regularized by a reference image

    Science.gov (United States)

    Wang, Miaoshi; Zhang, Yanbo; Liu, Rui; Guo, Shuxu; Yu, Hengyong

    2016-12-01

    The photon counting detector based spectral CT system is attracting increasing attention in the CT field. However, the spectral CT is still premature in terms of both hardware and software. To reconstruct high quality spectral images from low-dose projections, an adaptive image reconstruction algorithm is proposed that assumes a known reference image (RI). The idea is motivated by the fact that the reconstructed images from different spectral channels are highly correlated. If a high quality image of the same object is known, it can be used to improve the low-dose reconstruction of each individual channel. This is implemented by maximizing the patch-wise correlation between the object image and the RI. Extensive numerical simulations and preclinical mouse study demonstrate the feasibility and merits of the proposed algorithm. It also performs well for truncated local projections, and the surrounding area of the region-of-interest (ROI) can be more accurately reconstructed. Furthermore, a method is introduced to adaptively choose the step length, making the algorithm more feasible and easier for applications.

  3. Reconstruction-plane-dependent weighted FDK algorithm for cone beam volumetric CT

    Science.gov (United States)

    Tang, Xiangyang; Hsieh, Jiang

    2005-04-01

    The original FDK algorithm has been extensively employed in medical and industrial imaging applications. With an increased cone angle, cone beam (CB) artifacts in images reconstructed by the original FDK algorithm deteriorate, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few "circular plus" trajectories have been proposed in the past to reduce CB artifacts by meeting the DSC. However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as cardiac, vascular and perfusion applications. In addition to looking into the DSC, another insight into the CB artifacts of the original FDK algorithm is the inconsistency between conjugate rays that are 180° apart in view angle. The inconsistency between conjugate rays is pixel dependent, i.e., it varies dramatically over pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artifacts that can be avoided if an appropriate view weighting strategy is exercised. In this paper, a modified FDK algorithm is proposed, along with an experimental evaluation and verification, in which the helical body phantom and a humanoid head phantom scanned by a volumetric CT (64 x 0.625 mm) are utilized. Without extra trajectories supplemental to the circular trajectory, the modified FDK algorithm applies reconstruction-plane-dependent view weighting on projection data before 3D backprojection, which reduces the inconsistency between conjugate rays by suppressing the contribution of the conjugate ray with the larger cone angle. Both computer-simulated and real phantom studies show that, up to a moderate cone angle, the CB artifacts can be substantially suppressed by the modified FDK algorithm, while advantages of the original FDK algorithm, such as the filtered backprojection algorithm structure, 1D ramp filtering, and data manipulation efficiency, can be retained.
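
    The key idea, down-weighting whichever member of a conjugate ray pair reaches a given reconstruction pixel with the larger cone angle, can be illustrated with a toy weighting function; the smooth form below is an assumption for illustration, not the weighting actually derived in the paper.

    import numpy as np

    def conjugate_weights(cone_angle_a, cone_angle_b, sharpness=4.0):
        # Normalized pair of weights favoring the ray with the smaller cone angle.
        wa = np.cos(cone_angle_a) ** sharpness
        wb = np.cos(cone_angle_b) ** sharpness
        total = wa + wb
        return wa / total, wb / total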

  4. A modified OSEM algorithm for PET reconstruction using wavelet processing.

    Science.gov (United States)

    Lee, Nam-Yong; Choi, Yong

    2005-12-01

    Ordered subset expectation-maximization (OSEM) method in positron emission tomography (PET) has been very popular recently. It is an iterative algorithm and provides images with superior noise characteristics compared to conventional filtered backprojection (FBP) algorithms. Due to the lack of smoothness in images in OSEM iterations, however, some type of inter-smoothing is required. For this purpose, the smoothing based on the convolution with the Gaussian kernel has been used in clinical PET practices. In this paper, we incorporated a robust wavelet de-noising method into OSEM iterations as an inter-smoothing tool. The proposed wavelet method is based on a hybrid use of the standard wavelet shrinkage and the robust wavelet shrinkage to have edge preserving and robust de-noising simultaneously. The performances of the proposed method were compared with those of the smoothing methods based on the convolution with Gaussian kernel using software phantoms, physical phantoms, and human PET studies. The results demonstrated that the proposed wavelet method provided better spatial resolution characteristic than the smoothing methods based on the Gaussian convolution, while having comparable performance in noise removal.
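
    The inter-iteration wavelet shrinkage step can be sketched as below (plain soft thresholding is shown; the paper's hybrid standard/robust shrinkage and its threshold choice are not reproduced). Inside an OSEM loop one would call this on the current image estimate after each update.

    import numpy as np
    import pywt

    def wavelet_denoise(img, wavelet="db2", level=2, frac=0.05):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        shrunk = [coeffs[0]] + [
            tuple(pywt.threshold(d, frac * np.abs(d).max(), mode="soft") for d in detail)
            for detail in coeffs[1:]
        ]
        out = pywt.waverec2(shrunk, wavelet)
        return out[:img.shape[0], :img.shape[1]]   # crop in case of padding from the transform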

  5. A new algorithm for EEG source reconstruction based on LORETA by contracting the source region

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A new method is presented for EEG source reconstruction based on multichannel surface EEG recordings. From the low-resolution tomography obtained by the low resolution electromagnetic tomography algorithm (LORETA), this method acquires a high-resolution source tomography by contracting the source region. In contrast to the focal underdetermined system solver (FOCUSS), this method can produce more accurate results under certain circumstances.

  6. Computed Tomography Radiation Dose Reduction: Effect of Different Iterative Reconstruction Algorithms on Image Quality

    NARCIS (Netherlands)

    Willemink, M.J.; Takx, R.A.P.; Jong, P.A. de; Budde, R.P.; Bleys, R.L.; Das, M.; Wildberger, J.E.; Prokop, M.; Buls, N.; Mey, J. de; Leiner, T.; Schilham, A.M.

    2014-01-01

    We evaluated the effects of hybrid and model-based iterative reconstruction (IR) algorithms from different vendors at multiple radiation dose levels on image quality of chest phantom scans. A chest phantom was scanned on state-of-the-art computed tomography scanners from 4 vendors at 4 dose levels

  7. New Algorithm for 3D Facial Model Reconstruction and Its Application in Virtual Reality

    Institute of Scientific and Technical Information of China (English)

    Rong-Hua Liang; Zhi-Geng Pan; Chun Chen

    2004-01-01

    3D human face model reconstruction is essential to the generation of facial animations that are widely used in the field of virtual reality (VR). The main issues of 3D facial model reconstruction from images by vision technologies are twofold: one is to select and match the corresponding features of the face from two images with minimal interaction, and the other is to generate a realistic-looking human face model. In this paper, a new algorithm for realistic-looking face reconstruction is presented based on stereo vision. Firstly, a pattern is printed and attached to a planar surface for camera calibration, and corner generation and corner matching between two images are performed by integrating a modified image pyramid Lucas-Kanade (PLK) algorithm and a local adjustment algorithm; then the 3D coordinates of the corners are obtained by 3D reconstruction. An individual face model is generated by deformation of a general 3D model and interpolation of the features. Finally, a realistic-looking human face model is obtained after texture mapping and eye modeling. In addition, some application examples in the field of VR are given. Experimental results show that the proposed algorithm is robust and the 3D model is photo-realistic.

  8. Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS

    Science.gov (United States)

    Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.

    Extremely Large Telescopes are very challenging concerning their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N²) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as the response in the presence of noise and with various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT, with a total of 5402 actuators. Those comparisons, made on a common simulator, allow us to highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.

  9. Algorithmic approach to lower abdominal, perineal, and groin reconstruction using anterolateral thigh flaps.

    Science.gov (United States)

    Zelken, Jonathan A; AlDeek, Nidal F; Hsu, Chung-Chen; Chang, Nai-Jen; Lin, Chih-Hung; Lin, Cheng-Hung

    2016-02-01

    Lower abdominal, perineal, and groin (LAPG) reconstruction may be performed in a single stage. Anterolateral thigh (ALT) flaps are preferred here and taken as fasciocutaneous (ALT-FC), myocutaneous (ALT-MC), or vastus lateralis myocutaneous (VL-MC) flaps. We aim to present the results of reconstruction from a series of patients and guide flap selection with an algorithmic approach to LAPG reconstruction that optimizes outcomes and minimizes morbidity. Lower abdomen, groin, perineum, vulva, vagina, scrotum, and bladder wounds reconstructed in 22 patients using ALT flaps between 2000 and 2013 were retrospectively studied. Five ALT-FC, eight ALT-MC, and nine VL-MC flaps were performed. All flaps survived. Venous congestion occurred in three VL-MC flaps from mechanical cause. Wound infection occurred in six cases. Urinary leak occurred in three cases of bladder reconstruction. One patient died from congestive heart failure. The ALT flap is time tested and dependably addresses most LAPG defects; flap variations are suited for niche defects. We propose a novel algorithm to guide reconstructive decision-making.

  10. Optical correlation algorithm for reconstructing phase skeleton of complex optical fields for solving the phase problem

    DEFF Research Database (Denmark)

    Angelsky, O. V.; Gorsky, M. P.; Hanson, Steen Grüner;

    2014-01-01

    We propose an optical correlation algorithm illustrating a new general method for reconstructing the phase skeleton of complex optical fields from the measured two-dimensional intensity distribution. The core of the algorithm consists in locating the saddle points of the intensity distribution and connecting such points into nets by the lines of intensity gradient that are closely associated with the equi-phase lines of the field. This algorithm provides a new partial solution to the inverse problem in optics commonly referred to as the phase problem.
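
    Locating the saddle points of a two-dimensional intensity distribution can be sketched numerically as below: candidate points are near-stationary points of the intensity whose Hessian determinant is negative. Connecting them along gradient lines, as the abstract describes, is the subsequent step and is not shown; the gradient tolerance is an illustrative assumption.

    import numpy as np

    def saddle_points(intensity, grad_frac=0.02):
        Iy, Ix = np.gradient(intensity)           # derivatives along rows (y) and columns (x)
        Ixy, Ixx = np.gradient(Ix)
        Iyy, Iyx = np.gradient(Iy)
        grad_mag = np.hypot(Ix, Iy)
        near_stationary = grad_mag < grad_frac * grad_mag.max()
        negative_hessian = (Ixx * Iyy - Ixy * Iyx) < 0
        return np.argwhere(near_stationary & negative_hessian)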

  11. Particle Reconstruction at the LHC using Jet Substructure Algorithms

    CERN Document Server

    Rathjens, Denis

    2012-01-01

    The LHC-era with √s = 7 TeV allows for a new energy-regime to be accessed. Heavy mass-resonances up to 3.5 TeV/c² are in reach. If such heavy particles decay hadronically to much lighter Standard Model particles such as top, Z or W, the jets of the decay products have a sizeable probability to be merged into a single jet. The result is a boosted jet with substructure. This diploma thesis deals with the phenomena of boosted jets, algorithms to distinguish substructure in these jets from normal hadronization and methods to further improve searches with boosted jets. The impact of such methods is demonstrated in an example analysis of a Z′ → tt̄ scenario on 2 fb⁻¹ of data.

  12. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    Science.gov (United States)

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two
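
    The FISTA momentum scheme itself is compact; the sketch below shows the standard Beck-Teboulle form with generic gradient and proximal callables, which in the paper are replaced by an OS-SART-based subproblem and weighted proximal operators. The toy l1 problem at the end is only a usage illustration.

    import numpy as np

    def fista(grad, prox, x0, step, n_iter=100):
        x_prev = x0.copy()
        y = x0.copy()
        t = 1.0
        for _ in range(n_iter):
            x = prox(y - step * grad(y), step)                # forward-backward step at the extrapolated point
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x + ((t - 1.0) / t_next) * (x - x_prev)       # momentum / extrapolation
            x_prev, t = x, t_next
        return x_prev

    rng = np.random.default_rng(0)
    A = rng.normal(size=(30, 50))
    b = A @ np.r_[np.ones(5), np.zeros(45)]
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x_hat = fista(lambda z: A.T @ (A @ z - b),
                  lambda z, s: np.sign(z) * np.maximum(np.abs(z) - 0.1 * s, 0.0),
                  np.zeros(50), step)
    print("nonzero coefficients:", np.count_nonzero(np.abs(x_hat) > 1e-3))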

  13. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu [Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri 63130 (United States); Yang, Deshan [Department of Radiation Oncology, School of Medicine, Washington University in St. Louis, St. Louis, Missouri 63110 (United States); Tan, Jun [Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States)

    2016-04-15

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated

  14. N3DFix: an Algorithm for Automatic Removal of Swelling Artifacts in Neuronal Reconstructions.

    Science.gov (United States)

    Conde-Sousa, Eduardo; Szücs, Peter; Peng, Hanchuan; Aguiar, Paulo

    2017-01-01

    It is well established that not only electrophysiology but also morphology plays an important role in shaping the functional properties of neurons. In order to properly quantify morphological features it is first necessary to translate observational histological data into 3-dimensional geometric reconstructions of the neuronal structures. This reconstruction process, independently of being manual or (semi-)automatic, requires several preparation steps (e.g. histological processing) before data acquisition using specialized software. Unfortunately these processing steps likely produce artifacts which are then carried to the reconstruction, such as tissue shrinkage and formation of swellings. If not accounted for and corrected, these artifacts can change significantly the results from morphometric analysis and computer simulations. Here we present N3DFix, an open-source software which uses a correction algorithm to automatically find and fix swelling artifacts in neuronal reconstructions. N3DFix works as a post-processing tool and therefore can be used in either manual or (semi-)automatic reconstructions. The algorithm's internal parameters have been defined using a "ground truth" dataset produced by a neuroanatomist, involving two complementary manual reconstruction procedures: in the first, neuronal topology was faithfully reconstructed, including all swelling artifacts; in the second procedure a meticulous correction of the artifacts was manually performed directly during neuronal tracing. The internal parameters of N3DFix were set to minimize the differences between manual amendments and the algorithm's corrections. It is shown that the performance of N3DFix is comparable to careful manual correction of the swelling artifacts. To promote easy access and wide adoption, N3DFix is available in NEURON, Vaa3D and Py3DN.

  15. A tailored ML-EM algorithm for reconstruction of truncated projection data using few view angles

    Science.gov (United States)

    Mao, Yanfei; Zeng, Gengsheng L.

    2013-06-01

    Dedicated cardiac single photon emission computed tomography (SPECT) systems have the advantage of high speed and sensitivity at no loss, or even a gain, in resolution. The potential drawbacks of these dedicated systems are data truncation by the small field of view (FOV) and the lack of view angles. Serious artifacts, including streaks outside the FOV and distortion in the FOV, are introduced to the reconstruction when using the traditional emission data maximum-likelihood expectation-maximization (ML-EM) algorithm to reconstruct images from the truncated data with a small number of views. In this note, we propose a tailored ML-EM algorithm to suppress the artifacts caused by data truncation and insufficient angular sampling by reducing the image updating step sizes for the pixels outside the FOV. As a consequence, the convergence speed for the pixels outside the FOV is decelerated. We applied the proposed algorithm to truncated analytical data, Monte Carlo simulation data and real emission data with different numbers of views. The computer simulation results show that the tailored ML-EM algorithm outperforms the conventional ML-EM algorithm in terms of streak artifacts and distortion suppression for reconstruction from truncated projection data with a small number of views.
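
    A minimal version of the idea, an ML-EM update whose step is reduced for pixels outside the field of view, is sketched below; damping the multiplicative update by an exponent smaller than one is one common way to shrink the step and may differ in detail from the authors' scheme. A, proj and fov_mask are illustrative placeholders.

    import numpy as np

    def tailored_mlem(A, proj, fov_mask, n_iter=50, damping=0.2):
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])                 # sensitivity image
        power = np.where(fov_mask, 1.0, damping)         # full step inside the FOV, reduced outside
        for _ in range(n_iter):
            ratio = proj / np.maximum(A @ x, 1e-12)
            update = (A.T @ ratio) / np.maximum(sens, 1e-12)
            x *= update ** power                         # damped multiplicative ML-EM update
        return x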

  16. An image reconstruction algorithm for electrical capacitance tomography based on simulated annealing particle swarm optimization

    Directory of Open Access Journals (Sweden)

    P. Wang

    2015-04-01

    Full Text Available In this paper, we introduce a novel image reconstruction algorithm with Least Squares Support Vector Machines (LS-SVM) and Simulated Annealing Particle Swarm Optimization (APSO), named SAP. This algorithm introduces simulated annealing ideas into Particle Swarm Optimization (PSO): it adopts cooling process functions to replace the inertia weight function and constructs a time-variant inertia weight function featuring an annealing mechanism. Meanwhile, it employs the APSO procedure to search for the optimal solution of the Electrical Capacitance Tomography (ECT) image reconstruction problem. In order to overcome the soft field characteristics of the ECT sensitivity field, some image samples with typical flow patterns are chosen for training with LS-SVM. During the training procedure, the capacitance error caused by the soft field characteristics is predicted and then used to construct the fitness function of the particle swarm optimization on the basis of the capacitance error. Experimental results demonstrate that the proposed SAP algorithm has a quick convergence rate. Moreover, the proposed SAP outperforms the classic Landweber algorithm and the Newton-Raphson algorithm in image reconstruction.
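
    The annealing-style inertia weight is the main twist on standard PSO; the sketch below shows a particle swarm whose inertia decays with a cooling schedule. The quadratic objective is only a placeholder for the LS-SVM-predicted capacitance error used as the fitness in the paper, and the cooling constants are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def objective(x):                                    # toy fitness to minimize
        return np.sum(x ** 2, axis=-1)

    n, dim = 30, 8
    pos = rng.uniform(-1, 1, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_val)]

    T0, cooling = 1.0, 0.97
    for k in range(200):
        w = 0.4 + 0.5 * T0 * cooling ** k                # inertia weight decays with the "temperature"
        r1, r2 = rng.uniform(size=(2, n, dim))
        vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
        pos = pos + vel
        val = objective(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)]
    print("best objective:", pbest_val.min())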

  17. Implementation and evaluation of two helical CT reconstruction algorithms in CIVA

    Science.gov (United States)

    Banjak, H.; Costin, M.; Vienne, C.; Kaftandjian, V.

    2016-02-01

    The large majority of industrial CT systems reconstruct the 3D volume by using an acquisition on a circular trajectory. However, when inspecting long objects which are highly anisotropic, this scanning geometry creates severe artifacts in the reconstruction. For this reason, the use of an advanced CT scanning method like helical data acquisition is an efficient way to address this aspect known as the long-object problem. Recently, several analytically exact and quasi-exact inversion formulas for helical cone-beam reconstruction have been proposed. Among them, we identified two algorithms of interest for our case. These algorithms are exact and of filtered back-projection structure. In this work we implemented the filtered-backprojection (FBP) and backprojection-filtration (BPF) algorithms of Zou and Pan (2004). For performance evaluation, we present a numerical comparison of the two selected algorithms with the helical FDK algorithm using both complete (noiseless and noisy) and truncated data generated by CIVA (the simulation platform for non-destructive testing techniques developed at CEA).

  18. An architecture for the efficient implementation of compressive sampling reconstruction algorithms in reconfigurable hardware

    Science.gov (United States)

    Ortiz, Fernando E.; Kelmelis, Eric J.; Arce, Gonzalo R.

    2007-04-01

    According to the Shannon-Nyquist theory, the number of samples required to reconstruct a signal is proportional to its bandwidth. Recently, it has been shown that acceptable reconstructions are possible from a reduced number of random samples, a process known as compressive sampling. Taking advantage of this realization has radical impact on power consumption and communication bandwidth, crucial in applications based on small/mobile/unattended platforms such as UAVs and distributed sensor networks. Although the benefits of these compression techniques are self-evident, the reconstruction process requires the solution of nonlinear signal processing algorithms, which limit applicability in portable and real-time systems. In particular, (1) the power consumption associated with the difficult computations offsets the power savings afforded by compressive sampling, and (2) limited computational power prevents these algorithms from maintaining pace with the data-capturing sensors, resulting in undesirable data loss. FPGA based computers offer low power consumption and high computational capacity, providing a solution to both problems simultaneously. In this paper, we present an architecture that implements the algorithms central to compressive sampling in an FPGA environment. We start by studying the computational profile of the convex optimization algorithms used in compressive sampling. Then we present the design of a pixel pipeline suitable for FPGA implementation, able to compute these algorithms.
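
    The computational core that such an architecture must implement is a short loop of matrix-vector products and a soft-thresholding step; a plain ISTA sketch for an l1-regularized least-squares problem is shown below as a representative example (not the paper's specific solver), with a toy sparse-recovery usage at the end.

    import numpy as np

    def ista(A, y, lam=0.05, n_iter=300):
        L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L                # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(60, 200)) / np.sqrt(60)         # random measurement matrix
    x_true = np.zeros(200)
    x_true[[5, 50, 120]] = [1.0, -0.7, 0.5]
    x_hat = ista(A, A @ x_true)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))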

  19. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark.

    Science.gov (United States)

    Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei

    2012-11-01

    Monte-Carlo simulation of neutron coded imaging based on encoding aperture for Z-pinch of large field-of-view with 5 mm radius has been investigated, and then the coded image has been obtained. Reconstruction method of source image based on genetic algorithms (GA) has been established. "Residual watermark," which emerges unavoidably in reconstructed image, while the peak normalization is employed in GA fitness calculation because of its statistical fluctuation amplification, has been discovered and studied. Residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, while the identification on equivalent radius of aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of actual aperture. The reconstruction result is close to that by using PSF of the actual aperture.

  20. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Tiankui; Hu Huasi; Jia Qinggang; Zhang Fengna; Liu Zhihua; Hu Guang; Guo Wei [School of Energy and Power Engineering, Xi' an Jiaotong University, Xi' an 710049 (China); Chen Da [School of Energy and Power Engineering, Xi' an Jiaotong University, Xi' an 710049 (China); College of Material Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Li Zhenghong [Institute of Nuclear Physics and Chemistry, CAEP, Mianyang, 621900 Sichuan (China); Wu Yuelei [School of Energy and Power Engineering, Xi' an Jiaotong University, Xi' an 710049 (China); Nuclear and Radiation Safety Centre, State Environmental Protection Administration (SEPA), Beijing 100082 (China)

    2012-11-15

    Monte-Carlo simulation of neutron coded imaging based on encoding aperture for Z-pinch of large field-of-view with 5 mm radius has been investigated, and then the coded image has been obtained. Reconstruction method of source image based on genetic algorithms (GA) has been established. 'Residual watermark,' which emerges unavoidably in reconstructed image, while the peak normalization is employed in GA fitness calculation because of its statistical fluctuation amplification, has been discovered and studied. Residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, while the identification on equivalent radius of aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of actual aperture. The reconstruction result is close to that by using PSF of the actual aperture.

  1. Extension of the modal wave-front reconstruction algorithm to non-uniform illumination.

    Science.gov (United States)

    Ma, Xiaoyu; Mu, Jie; Rao, ChangHui; Yang, Jinsheng; Rao, XueJun; Tian, Yu

    2014-06-30

    Attempts are made to eliminate the effects of non-uniform illumination on the precision of wave-front measurement. To achieve this, the relationship between the wave-front slope at a single sub-aperture and the distributions of the phase and light intensity of the wave-front were first analyzed to obtain the relevant theoretical formulae. Then, based on the principle of modal wave-front reconstruction, the influence of the light intensity distribution on the wave-front slope is introduced into the calculation of the reconstruction matrix. Experiments were conducted to prove that the corrected modal wave-front reconstruction algorithm improved the accuracy of wave-front reconstruction. Moreover, the correction is conducive to high-precision wave-front measurement using a Hartmann wave-front sensor in the presence of non-uniform illumination.
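
    The correction amounts to weighting each sub-aperture slope by its relative illumination when building the least-squares modal fit. The sketch below shows that weighted fit in its simplest form; the influence matrix, slope ordering and weighting choice are illustrative assumptions rather than the paper's exact formulation.

    import numpy as np

    def modal_reconstruct(M, slopes, intensity):
        # M: (2 * n_subap, n_modes) mode-to-slope matrix; slopes ordered as [x1, y1, x2, y2, ...]
        w = intensity / intensity.sum()                  # normalized per-sub-aperture illumination
        W = np.diag(np.repeat(w, 2))                     # same weight for the x and y slope of a sub-aperture
        # weighted normal equations: (M^T W M) c = M^T W s
        return np.linalg.solve(M.T @ W @ M, M.T @ W @ slopes)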

  2. Comparing field-based and numerically modelled reconstructions of the last Cordilleran Ice Sheet deglaciation over the Thompson Plateau, southern interior British Columbia, Canada.

    Science.gov (United States)

    Cripps, Jonathan; Brennand, Tracy; Seguinot, Julien; Perkins, Andrew

    2016-04-01

    Palaeoglaciological and palaeoclimate reconstructions of the deglaciation of the last Cordilleran Ice Sheet (CIS) over British Columbia (BC), Canada, are limited by the relative lack of understanding of the late-glacial ice sheet margins and dynamics. Deglaciation of the last CIS over the southern Interior Plateau of BC has been characterised as proceeding via stagnation and downwasting into dead ice lobes in valleys where ice was thickest. This conceptual model explains the apparent lack of moraines, which may otherwise imply active recession, and known palaeo-glacial lakes are explained as being dammed by these dead ice lobes. However, downwasting alone is at odds with coeval ice sheets which receded systematically towards their interiors. Presented here is a comparison between a new field-based reconstruction of the deglaciation of the northern Thompson Plateau, and ice sheet model results of the same area. Glacioisostatic tilts, reconstructed using mapped shoreline elevations, rise to the north-northwest at around 1.8 m/km, implying an ice surface slope, and likely active recession, towards the Coast Mountains. New reconstructions of the stages of glacial Lake Nicola (gLN), utilising field and aerial photographic mapping of shorelines, and sedimentology and geophysical surveys on ice-marginal and glaciolacustrine landforms, largely support this interpretation; the lake expanded and lowered to the north-northwest as progressively lower outlets were opened during ice retreat in this direction. Fields of newly discovered glaciotectonised moraines, grounding-line deposits and overridden glacial lake sediments record ice margin oscillations and minor readvances within gLN; the general alignment of these features further supports recession to the north-northwest. Numerical simulations of deglaciation of the area result in ice retreat to the north-northeast, which is inconsistent with the north-north-westward evolution of gLN. Excess precipitation over the eastern

  3. Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography.

    Science.gov (United States)

    Lim, JooWon; Lee, KyeoReh; Jin, Kyong Hwan; Shin, Seungwoo; Lee, SeoEun; Park, YongKeun; Ye, Jong Chul

    2015-06-29

    In optical tomography, there exist certain spatial frequency components that cannot be measured due to the limited projection angles imposed by the numerical aperture of objective lenses. This limitation, often called the missing cone problem, causes the under-estimation of refractive index (RI) values in tomograms and results in severe elongations of RI distributions along the optical axis. To address this missing cone problem, several iterative reconstruction algorithms have been introduced exploiting prior knowledge such as positivity in RI differences or edges of samples. In this paper, various existing iterative reconstruction algorithms are systematically compared for mitigating the missing cone problem in optical diffraction tomography. In particular, three representative regularization schemes, edge preserving, total variation regularization, and the Gerchberg-Papoulis algorithm, were numerically and experimentally evaluated using spherical beads as well as real biological samples: human red blood cells and hepatocyte cells. Our work will provide important guidelines for choosing the appropriate regularization in ODT.
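
    Among the compared schemes, the Gerchberg-Papoulis iteration is simple enough to sketch: keep the measured spectral components, let the current estimate fill the missing cone, and impose a positivity prior in object space. The snippet below is a generic illustration of that alternation, not the exact implementation evaluated in the paper.

    import numpy as np

    def gerchberg_papoulis(measured_spectrum, measured_mask, n_iter=100):
        f = np.real(np.fft.ifftn(measured_spectrum))
        for _ in range(n_iter):
            f = np.maximum(f, 0.0)                                   # positivity prior on the RI difference
            F = np.fft.fftn(f)
            F[measured_mask] = measured_spectrum[measured_mask]      # data consistency on the measured support
            f = np.real(np.fft.ifftn(F))
        return f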

  4. Application of incremental algorithms to CT image reconstruction for sparse-view, noisy data

    DEFF Research Database (Denmark)

    Rose, Sean; Andersen, Martin Skovgaard; Sidky, Emil Y.;

    2014-01-01

    This conference contribution adapts an incremental framework for solving optimization problems of interest for sparse-view CT. From the incremental framework two algorithms are derived: one that combines a damped form of the algebraic reconstruction technique (ART) with a total-variation (TV) projection, and one that employs a modified damped ART, accounting for a weighted-quadratic data fidelity term, combined with TV projection. The algorithms are demonstrated on simulated, noisy, sparse-view CT data.

  5. BPF-based reconstruction algorithm for multiple rotation-translation scan mode

    Institute of Scientific and Technical Information of China (English)

    Ming Chen; Huitao Zhang; Peng Zhang

    2008-01-01

    In industrial CT, it is often required to inspect large objects using short line-detectors. To acquire the complete CT data for the scanning slice of large objects using short line-detectors, some multi-scan modes have been developed. But the existing methods for reconstructing an image from the data scanned by multi-scan modes have to rebin the data into fan-beam or parallel-beam form via data interpolation. The data rebinning process not only adds substantial computational cost, but also degrades image resolution. In this paper, we propose a backprojection-filtration (BPF)-based reconstruction algorithm for the rotation-translation (RT) multi-scan mode. An important feature of the proposed algorithm is that no data rebinning process is introduced. The simulation results have verified the validity of the proposed algorithm.

  6. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Science.gov (United States)

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    2017-08-12

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < ...). ASIR-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series of ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < ...). ASIR-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.

  7. Fast reconstruction algorithm from modulus maxima of signal wavelet transform and its application in enhancement of medical images

    Science.gov (United States)

    Zhai, Guangtao; Sun, Fengrong; Song, Haohao; Zhang, Mingqiang; Liu, Li; Wang, Changyu

    2003-09-01

    The modulus maxima of a signal's wavelet transform on different levels contain important information about the signal, which can help to reconstruct the wavelet coefficients. A fast algorithm based on the Hermite interpolation polynomial for reconstructing a signal from its wavelet transform maxima is proposed in this paper. An implementation of this algorithm in medical image enhancement is also discussed. Numerical experiments have shown that, compared with the Alternating Projection algorithm proposed by Mallat, this reconstruction algorithm is simpler and more efficient, while maintaining a high reconstruction signal-to-noise ratio. When applied to image contrast enhancement, the computing time of this algorithm is much less than that of the one using Mallat's Alternating Projection, and the results are almost the same, so it is a practical fast reconstruction algorithm.

  8. Optimization of CT image reconstruction algorithms for the lung tissue research consortium (LTRC)

    Science.gov (United States)

    McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian

    2006-03-01

    To create a repository of clinical data, CT images and tissue samples and to more clearly understand the pathogenetic features of pulmonary fibrosis and emphysema, the National Heart, Lung, and Blood Institute (NHLBI) launched a cooperative effort known as the Lung Tissue Resource Consortium (LTRC). The CT images for the LTRC effort must contain accurate CT numbers in order to characterize tissues, and must have high-spatial resolution to show fine anatomic structures. This study was performed to optimize the CT image reconstruction algorithms to achieve these criteria. Quantitative analyses of phantom and clinical images were conducted. The ACR CT accreditation phantom containing five regions of distinct CT attenuations (CT numbers of approximately -1000 HU, -80 HU, 0 HU, 130 HU and 900 HU), and a high-contrast spatial resolution test pattern, was scanned using CT systems from two manufacturers (General Electric (GE) Healthcare and Siemens Medical Solutions). Phantom images were reconstructed using all relevant reconstruction algorithms. Mean CT numbers and image noise (standard deviation) were measured and compared for the five materials. Clinical high-resolution chest CT images acquired on a GE CT system for a patient with diffuse lung disease were reconstructed using BONE and STANDARD algorithms and evaluated by a thoracic radiologist in terms of image quality and disease extent. The clinical BONE images were processed with a 3 x 3 x 3 median filter to simulate a thicker slice reconstructed in smoother algorithms, which have traditionally been proven to provide an accurate estimation of emphysema extent in the lungs. Using a threshold technique, the volume of emphysema (defined as the percentage of lung voxels having a CT number lower than -950 HU) was computed for the STANDARD, BONE, and BONE filtered. The CT numbers measured in the ACR CT Phantom images were accurate for all reconstruction kernels for both manufacturers. As expected, visual evaluation of the

  9. Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results

    Science.gov (United States)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle

    2013-01-01

    The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.

  10. Final Technical Report: Sparse Grid Scenario Generation and Interior Algorithms for Stochastic Optimization in a Parallel Computing Environment

    Energy Technology Data Exchange (ETDEWEB)

    Mehrotra, Sanjay [Northwestern Univ., Evanston, IL (United States)

    2016-09-07

    The support from this grant resulted in seven published papers and a technical report. Two papers are published in SIAM J. on Optimization [87, 88]; two papers are published in IEEE Transactions on Power Systems [77, 78]; one paper is published in Smart Grid [79]; one paper is published in Computational Optimization and Applications [44] and one in INFORMS J. on Computing [67]). The works in [44, 67, 87, 88] were funded primarily by this DOE grant. The applied papers in [77, 78, 79] were also supported through a subcontract from the Argonne National Lab. We start by presenting our main research results on the scenario generation problem in Sections 1–2. We present our algorithmic results on interior point methods for convex optimization problems in Section 3. We describe a new ‘central’ cutting surface algorithm developed for solving large scale convex programming problems (as is the case with our proposed research) with semi-infinite number of constraints in Section 4. In Sections 5–6 we present our work on two application problems of interest to DOE.

  11. Optical tomography reconstruction algorithm with the finite element method: An optimal approach with regularization tools

    Energy Technology Data Exchange (ETDEWEB)

    Balima, O., E-mail: ofbalima@gmail.com [Département des Sciences Appliquées, Université du Québec à Chicoutimi, 555 bd de l’Université, Chicoutimi, QC, Canada G7H 2B1 (Canada); Favennec, Y. [LTN UMR CNRS 6607 – Polytech’ Nantes – La Chantrerie, Rue Christian Pauc, BP 50609 44 306 Nantes Cedex 3 (France); Rousse, D. [Chaire de recherche industrielle en technologies de l’énergie et en efficacité énergétique (t3e), École de technologie supérieure, 201 Boul. Mgr, Bourget Lévis, QC, Canada G6V 6Z3 (Canada)

    2013-10-15

    Highlights: •New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization. •Use of gradient filtering through an alternative inner product within the adjoint method. •An integral form of the cost function is used to make the reconstruction compatible with all finite element formulations, continuous and discontinuous. •Gradient-based algorithm with the adjoint method is used for the reconstruction. -- Abstract: Optical tomography is mathematically treated as a non-linear inverse problem where the optical properties of the probed medium are recovered through the minimization of the errors between the experimental measurements and their predictions with a numerical model at the locations of the detectors. Owing to the ill-posed nature of the inverse problem, some regularization must be applied, and Tikhonov penalization is the most commonly used type in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take into account the surfaces of the detectors and make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Through a gradient-based algorithm where the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed for preconditioning the reconstruction algorithm. Moreover, appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with classical Tikhonov penalization. It is shown that both the re-parameterization and the use of the Sobolev cost function gradient are efficient for solving such an ill-posed inverse problem.

  12. Double-barrel vascularised fibula graft in mandibular reconstruction: a 10-year experience with an algorithm.

    Science.gov (United States)

    Shen, Yi; Guo, Xue-hua; Sun, Jian; Li, Jun; Shi, Jun; Huang, Wei; Ow, Andrew

    2013-03-01

    This retrospective study aims to report an algorithm to assist surgeons in selecting different modes of the double-barrel vascularised fibula graft for mandibular reconstruction. A total of 45 patients who underwent reconstruction of mandibular defects with different modes of the double-barrel vascularised fibula graft were reviewed. Our algorithm for deciding on any one of the different modes for different mandibular defects is influenced by factors including history of radiotherapy, the length of mandibular body defect and the need to preserve the inferior mandibular border. Post-operative functional outcomes included diet type and speech, and aesthetic results were assessed 2 years post-operatively. Patients with implant-borne prosthetic teeth underwent assessment of their masticatory function. There were four modes of mandibular reconstruction according to our algorithm, which included double-barrel vascularised fibula graft (n=21), partial double-barrel fibula graft (n=11), condylar prosthesis in combination with partial/double-barrel fibula graft (n=11), and double-barrel fibula onlay graft (n=2). Flap survival in all patients was 97.78%. Good occlusion, bony unions and wound closures were observed in 44 patients. Eleven patients received dental implantation in the transplanted fibula at 9-18 months post-operatively. One patient wore removable partial dentures. For 11 patients with implant-borne prosthetic teeth, the average post-operative ipsilateral occlusal force was 41.5±17.7% of the contralateral force. Good functional and aesthetic results were achieved in 38 patients with more than 2 years of follow-up, including regular diet, normal speech and excellent or good appearance, especially for patients with dental rehabilitation. Good aesthetic and functional results can be achieved after dental rehabilitation by following our algorithm when choosing the different modes of double-barrel vascularised fibula graft for mandibular reconstruction. Copyright © 2012

  13. An efficient polyenergetic SART (pSART) reconstruction algorithm for quantitative myocardial CT perfusion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yuan, E-mail: yuan.lin@duke.edu; Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, 2424 Erwin Road, Suite 302, Durham, North Carolina 27705 (United States)

    2014-02-15

    Purpose: In quantitative myocardial CT perfusion imaging, beam hardening effect due to dense bone and high concentration iodinated contrast agent can result in visible artifacts and inaccurate CT numbers. In this paper, an efficient polyenergetic Simultaneous Algebraic Reconstruction Technique (pSART) was presented to eliminate the beam hardening artifacts and to improve the CT quantitative imaging ability. Methods: Our algorithm makes three a priori assumptions: (1) the human body is composed of several base materials (e.g., fat, breast, soft tissue, bone, and iodine); (2) images can be coarsely segmented into two types of regions, i.e., nonbone regions and noniodine regions; and (3) each voxel can be decomposed into a mixture of two most suitable base materials according to its attenuation value and its corresponding region type information. Based on the above assumptions, energy-independent accumulated effective lengths of all base materials can be fast computed in the forward ray-tracing process and be used repeatedly to obtain accurate polyenergetic projections, with which a SART-based equation can correctly update each voxel in the backward projecting process to iteratively reconstruct artifact-free images. This approach effectively reduces the influence of polyenergetic x-ray sources and it further enables monoenergetic images to be reconstructed at any arbitrarily preselected target energies. A series of simulation tests were performed on a size-variable cylindrical phantom and a realistic anthropomorphic thorax phantom. In addition, a phantom experiment was also performed on a clinical CT scanner to further quantitatively validate the proposed algorithm. Results: The simulations with the cylindrical phantom and the anthropomorphic thorax phantom showed that the proposed algorithm completely eliminated beam hardening artifacts and enabled quantitative imaging across different materials, phantom sizes, and spectra, as the absolute relative errors were reduced
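
    The core of the forward model described above is to precompute energy-independent effective lengths per base material and reuse them to form polyenergetic projections. A minimal numpy sketch of that projection step is given below; the array shapes and all variable names are illustrative assumptions, not taken from the paper.

        import numpy as np

        def polyenergetic_projection(L, mu, spectrum):
            # L:        (n_rays, n_materials) energy-independent effective lengths per ray
            # mu:       (n_materials, n_energies) attenuation coefficients mu_m(E)
            # spectrum: (n_energies,) normalized source spectrum S(E), summing to 1
            # Detected transmission per ray: sum_E S(E) * exp(-sum_m L_im * mu_m(E))
            trans = np.exp(-L @ mu) @ spectrum
            return -np.log(np.clip(trans, 1e-12, None))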

  14. A full-Newton step feasible interior-point algorithm for P∗(κ)-LCP based on a new search direction

    Directory of Open Access Journals (Sweden)

    Behrouz Kheirfam

    2016-12-01

    Full Text Available In this paper, we present a full-Newton step feasible interior-point algorithm for a P∗(κ) linear complementarity problem based on a new search direction. We apply a vector-valued function generated by a univariate function on nonlinear equations of the system which defines the central path. Furthermore, we derive the iteration bound for the algorithm, which coincides with the best-known iteration bound for these types of algorithms. Numerical results show that the proposed algorithm is competitive and reliable.
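
    To make the central-path idea concrete, the sketch below performs a generic full Newton step toward the mu-center of a monotone linear complementarity problem in numpy. It is only a textbook illustration under assumed data (M, q); the paper's P∗(κ) setting and its new search direction derived from a univariate transform are not reproduced here.

        import numpy as np

        def full_newton_step_lcp(M, q, x, s, mu):
            """One full Newton step toward the mu-center of the LCP
            s = M x + q, x >= 0, s >= 0, x_i * s_i = mu (generic textbook form)."""
            n = len(x)
            # Newton system for (dx, ds):
            #   -M dx + ds = 0            (preserves feasibility s = M x + q)
            #   S dx + X ds = mu*e - X s  (drives the products x_i s_i toward mu)
            A = np.block([[-M, np.eye(n)], [np.diag(s), np.diag(x)]])
            rhs = np.concatenate([np.zeros(n), mu * np.ones(n) - x * s])
            d = np.linalg.solve(A, rhs)
            return x + d[:n], s + d[n:]          # full step, no line search

        # Tiny illustrative run on a randomly generated monotone LCP
        rng = np.random.default_rng(0)
        B = rng.standard_normal((4, 4))
        M = B @ B.T + np.eye(4)                  # positive definite => monotone LCP
        x = np.ones(4); s = np.ones(4)
        q = s - M @ x                            # start strictly feasible
        mu = 1.0
        for _ in range(25):
            mu *= 0.75                           # conservative reduction keeps the full step near-feasible
            x, s = full_newton_step_lcp(M, q, x, s, mu)
        print("complementarity gap:", x @ s)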

  15. A new ionospheric tomographic algorithm – constrained multiplicative algebraic reconstruction technique (CMART)

    Indian Academy of Sciences (India)

    Wen Debao; Liu Sanzhi

    2010-08-01

    To address the limitations of the conventional multiplicative algebraic reconstruction technique (MART), a constrained MART (CMART) is proposed in this paper. In the new tomographic algorithm, a popular two-dimensional multi-point finite difference approximation of the second order Laplacian operator is used to smooth the electron density field. The feasibility and superiority of the new method are demonstrated by numerical simulation experiments. Finally, the CMART is used to reconstruct the regional electron density field using actual GNSS data on geomagnetically quiet and disturbed days. The available ionosonde data from Beijing station further validate the superiority of the new method.

  16. A new ionospheric tomographic algorithm — constrained multiplicative algebraic reconstruction technique (CMART)

    Science.gov (United States)

    Wen, Debao; Liu, Sanzhi

    2010-08-01

    To address the limitations of the conventional multiplicative algebraic reconstruction technique (MART), a constrained MART (CMART) is proposed in this paper. In the new tomographic algorithm, a popular two-dimensional multi-point finite difference approximation of the second order Laplacian operator is used to smooth the electron density field. The feasibility and superiority of the new method are demonstrated by numerical simulation experiments. Finally, the CMART is used to reconstruct the regional electron density field using actual GNSS data on geomagnetically quiet and disturbed days. The available ionosonde data from Beijing station further validate the superiority of the new method.
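
    Both CMART records above describe the same two ingredients: a multiplicative ART update and a Laplacian smoothing constraint on the electron density grid. The following numpy sketch combines them in the simplest possible way; the relaxation factor, the exact form of the constraint, and the toy geometry matrix are assumptions, not the authors' settings.

        import numpy as np

        def cmart_sketch(A, y, shape, n_sweeps=20, lam=0.5, beta=0.1):
            """Multiplicative ART updates followed by a 5-point Laplacian smoothing
            of the 2-D electron-density grid (periodic boundaries, for brevity)."""
            ny, nx = shape
            x = np.ones(ny * nx)                              # strictly positive start
            for _ in range(n_sweeps):
                # --- multiplicative ART sweep over all rays ---
                for i in range(A.shape[0]):
                    est = A[i] @ x
                    if est <= 0:
                        continue
                    x *= (y[i] / est) ** (lam * A[i])         # exponent a_ij per voxel
                # --- smoothing constraint via a discrete Laplacian ---
                img = x.reshape(ny, nx)
                lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                       np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
                x = np.clip((img + beta * lap).ravel(), 1e-12, None)
            return x.reshape(ny, nx)

        # Toy example: 8x8 density grid observed through a random sparse ray matrix
        rng = np.random.default_rng(1)
        truth = np.ones((8, 8)); truth[2:5, 3:6] = 3.0
        A = rng.random((40, 64)) * (rng.random((40, 64)) < 0.2)
        rec = cmart_sketch(A, A @ truth.ravel(), (8, 8))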

  17. The application of MUSIC algorithm in spectrum reconstruction and interferogram processing

    Science.gov (United States)

    Jian, Xiaohua; Zhang, Chunmin; Zhao, Baochang; Zhu, Baohui

    2008-05-01

    Three different methods of spectrum reproduction and interferogram processing are discussed and contrasted in this paper. In particular, the nonparametric MUSIC (multiple signal classification) algorithm is applied to practical spectrum reconstruction for the first time. The experimental results show that this method greatly improves the resolution of the reproduced spectrum and provides a better mathematical model for super-resolution spectrum reconstruction. The usefulness and simplicity of the technique should carry interference imaging spectrometers into almost every field into which spectroscopy has ventured, and into some where it has not gone before.
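
    For readers unfamiliar with MUSIC, the sketch below computes the standard pseudospectrum of a 1-D signal (two cosines standing in for an interferogram): build a sample correlation matrix from overlapping snapshots, split off the noise subspace, and scan steering vectors. This is the textbook algorithm only; the paper's adaptation to interferogram processing is not reproduced, and all parameter values are assumptions.

        import numpy as np

        def music_pseudospectrum(samples, n_sources, m, freqs):
            """Textbook MUSIC pseudospectrum for a uniformly sampled 1-D signal.
            n_sources: assumed number of complex exponentials; m: correlation order."""
            N = len(samples)
            snaps = np.array([samples[i:i + m] for i in range(N - m + 1)])
            R = (snaps.conj().T @ snaps) / snaps.shape[0]     # sample correlation matrix
            w, V = np.linalg.eigh(R)                          # ascending eigenvalues
            En = V[:, :m - n_sources]                         # noise subspace
            ps = []
            for f in freqs:
                a = np.exp(2j * np.pi * f * np.arange(m))     # steering vector
                ps.append(1.0 / max(np.linalg.norm(En.conj().T @ a) ** 2, 1e-12))
            return np.array(ps)

        # Two cosines (four complex exponentials) plus noise; peaks appear near 0.12 and 0.21
        n = np.arange(256)
        sig = np.cos(2 * np.pi * 0.12 * n) + 0.5 * np.cos(2 * np.pi * 0.21 * n)
        sig += 0.05 * np.random.default_rng(2).standard_normal(256)
        freqs = np.linspace(0.0, 0.5, 501)
        ps = music_pseudospectrum(sig, n_sources=4, m=32, freqs=freqs)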

  18. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    Energy Technology Data Exchange (ETDEWEB)

    Guo, J. [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China); Lehrstuhl fuer Radiochemie, Technische Universitaet Muenchen, Garching 80748 (Germany); Buecherl, T. [Lehrstuhl fuer Radiochemie, Technische Universitaet Muenchen, Garching 80748 (Germany); Zou, Y., E-mail: zouyubin@pku.edu.cn [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China); Guo, Z. [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China)

    2011-09-21

    Investigations on the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be made up of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, realized and verified by both simulated and measured projection data. The feasibility for improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.
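
    The iterative algebraic reconstruction mentioned above is, in its simplest form, a Kaczmarz-type sweep over the rays. The sketch below shows that generic update with a relaxation factor and a non-negativity constraint; it is not the NECTAR implementation, and the system matrix A is assumed to encode whichever beam model is chosen.

        import numpy as np

        def art_reconstruct(A, p, n_sweeps=10, relax=0.25):
            """Generic Kaczmarz-type ART: A is the (n_rays x n_voxels) system matrix,
            p the measured projections; non-negativity is enforced after each update."""
            x = np.zeros(A.shape[1])
            row_norms = np.einsum('ij,ij->i', A, A)           # ||a_i||^2 per ray
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    if row_norms[i] == 0:
                        continue
                    resid = p[i] - A[i] @ x
                    x += relax * resid / row_norms[i] * A[i]
                    np.clip(x, 0, None, out=x)                # attenuation cannot be negative
            return x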

  19. Integration of robust filters and phase unwrapping algorithms for image reconstruction of objects containing height discontinuities.

    Science.gov (United States)

    Weng, Jing-Feng; Lo, Yu-Lung

    2012-05-07

    For 3D objects with height discontinuities, the image reconstruction performance of interferometric systems is adversely affected by the presence of noise in the wrapped phase map. Various schemes have been proposed for detecting residual noise, speckle noise and noise at the lateral surfaces of the discontinuities. However, in most schemes, some noisy pixels are missed and noise detection errors occur. Accordingly, this paper proposes two robust filters (designated as Filters A and B, respectively) for improving the performance of the phase unwrapping process for objects with height discontinuities. Filter A comprises a noise and phase jump detection scheme and an adaptive median filter, while Filter B replaces the detected noise with the median phase value of an N × N mask centered on the noisy pixel. Filter A enables most of the noise and detection errors in the wrapped phase map to be removed. Filter B then detects and corrects any remaining noise or detection errors during the phase unwrapping process. Three reconstruction paths are proposed, Path I, Path II and Path III. Path I combines the path-dependent MACY algorithm with Filters A and B, while Paths II and III combine the path-independent cellular automata (CA) algorithm with Filters A and B. In Path II, the CA algorithm operates on the whole wrapped phase map, while in Path III, the CA algorithm operates on multiple sub-maps of the wrapped phase map. The simulation and experimental results confirm that the three reconstruction paths provide a robust and precise reconstruction performance given appropriate values of the parameters used in the detection scheme and filters, respectively. However, the CA algorithm used in Paths II and III is relatively inefficient in identifying the most suitable unwrapping paths. Thus, of the three paths, Path I yields the lowest runtime.
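
    The second filter described above replaces each detected noisy pixel with the median phase of an N x N window. A minimal sketch of that replacement step is shown below; the detection scheme itself is not reproduced, and taking a plain median across a 2*pi wrap is a simplification the authors' filter presumably handles more carefully.

        import numpy as np

        def replace_noisy_with_median(wrapped, noise_mask, n=5):
            """Replace each flagged pixel by the median phase of an n x n window
            centred on it; windows are cropped at the image borders."""
            out = wrapped.copy()
            r = n // 2
            for i, j in zip(*np.nonzero(noise_mask)):
                win = wrapped[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
                out[i, j] = np.median(win)
            return out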

  20. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R; Kim, Jeehyun; Nelson, J Stuart [Beckman Laser Institute and Medical Clinic, University of California, Irvine, CA 92612 (United States)], E-mail: wverkruy@uci.edu

    2008-03-07

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, accuracy reconstructions can be achieved with an automated regularization procedure which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  1. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    Science.gov (United States)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart

    2008-03-01

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, accuracy reconstructions can be achieved with an automated regularization procedure which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  2. Super-resolution reconstruction algorithm based on adaptive convolution kernel size selection

    Science.gov (United States)

    Gao, Hang; Chen, Qian; Sui, Xiubao; Zeng, Junjie; Zhao, Yao

    2016-09-01

    Restricted by the detector technology and the optical diffraction limit, the spatial resolution of infrared imaging systems is difficult to improve significantly. Super-Resolution (SR) reconstruction algorithms are an effective way to address this problem. Among them, the SR algorithm based on multichannel blind deconvolution (MBD) estimates the convolution kernel only from low-resolution observation images, with appropriate regularization constraints introduced by a priori assumptions, to realize high-resolution image restoration. The algorithm has been shown to be effective when the channel kernels are coprime. In this paper, we use the significant edges to estimate the convolution kernel and introduce an adaptive convolution kernel size selection mechanism to account for the uncertainty of the convolution kernel size in MBD processing. To reduce the interference of noise, we refine the convolution kernel in an iterative process and finally restore a clear image. Experimental results show that the algorithm can meet the convergence requirement of the convolution kernel estimation.

  3. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    Science.gov (United States)

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
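
    The two ingredients highlighted above, momentum acceleration and adaptive momentum restarting, can be illustrated on a plain proximal-gradient iteration. The sketch below uses a single scalar Lipschitz constant and real-valued data for simplicity; BARISTA's contribution is precisely to replace that scalar with B1-based majorizing matrices, which is not reproduced here, and all names are illustrative.

        import numpy as np

        def soft_threshold(z, t):
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def fista_with_restart(A, y, lam, L, n_iter=200):
            """Proximal gradient for min 0.5*||A x - y||^2 + lam*||x||_1 with Nesterov
            momentum and gradient-based adaptive restart (scalar Lipschitz constant L)."""
            x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
            for _ in range(n_iter):
                grad = A.T @ (A @ z - y)
                x_new = soft_threshold(z - grad / L, lam / L)
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                z_new = x_new + (t - 1.0) / t_new * (x_new - x)
                if grad @ (x_new - x) > 0:        # momentum points uphill: restart it
                    t_new, z_new = 1.0, x_new.copy()
                x, z, t = x_new, z_new, t_new
            return x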

  4. Open-source algorithm for automatic choroid segmentation of OCT volume reconstructions

    Science.gov (United States)

    Mazzaferri, Javier; Beaton, Luke; Hounye, Gisèle; Sayah, Diane N.; Costantino, Santiago

    2017-02-01

    The use of optical coherence tomography (OCT) to study ocular diseases associated with choroidal physiology is sharply limited by the lack of available automated segmentation tools. Current research largely relies on hand-traced, single B-Scan segmentations because commercially available programs require high quality images, and the existing implementations are closed, scarce and not freely available. We developed and implemented a robust algorithm for segmenting and quantifying the choroidal layer from 3-dimensional OCT reconstructions. Here, we describe the algorithm, validate and benchmark the results, and provide an open-source implementation under the General Public License for any researcher to use (https://www.mathworks.com/matlabcentral/fileexchange/61275-choroidsegmentation).

  5. Applicability of a set of tomographic reconstruction algorithms for quantitative SPECT on irradiated nuclear fuel assemblies

    Science.gov (United States)

    Jacobsson Svärd, Staffan; Holcombe, Scott; Grape, Sophie

    2015-05-01

    A fuel assembly operated in a nuclear power plant typically contains 100-300 fuel rods, depending on fuel type, which become strongly radioactive during irradiation in the reactor core. For operational and security reasons, it is of interest to experimentally deduce rod-wise information from the fuel, preferably by means of non-destructive measurements. The tomographic SPECT technique offers such possibilities through its two-step application: (1) recording the gamma-ray flux distribution around the fuel assembly, and (2) reconstructing the assembly's internal source distribution, based on the recorded radiation field. In this paper, algorithms for performing the latter step and extracting quantitative relative rod-by-rod data are accounted for. As compared to application of SPECT in nuclear medicine, nuclear fuel assemblies present a much more heterogeneous distribution of internal attenuation to gamma radiation than the human body, typically with rods containing pellets of heavy uranium dioxide surrounded by cladding of a zirconium alloy placed in water or air. This inhomogeneity severely complicates the tomographic quantification of the rod-wise relative source content, and the deduction of conclusive data requires detailed modelling of the attenuation to be introduced in the reconstructions. However, as shown in this paper, simplified models may still produce valuable information about the fuel. Here, a set of reconstruction algorithms for SPECT on nuclear fuel assemblies are described and discussed in terms of their quantitative performance for two applications; verification of fuel assemblies' completeness in nuclear safeguards, and rod-wise fuel characterization. It is argued that a requirement not to base the former assessment on any a priori information constrains which reconstruction methods may be used in that case, whereas the use of a priori information on geometry and material content enables highly accurate quantitative assessment, which

  6. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Justin, E-mail: justin.solomon@duke.edu [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 and Departments of Biomedical Engineering and Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were acquired for each background type and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms with edge pixels showing higher noise magnitude compared to pixels in more homogenous regions. For pixels in uniform regions, noise magnitude was

  7. Conductivity and current density image reconstruction using harmonic Bz algorithm in magnetic resonance electrical impedance tomography.

    Science.gov (United States)

    Oh, Suk Hoon; Lee, Byung Il; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyoung; Kwon, Ohin; Seo, Jin Keun

    2003-10-07

    Magnetic resonance electrical impedance tomography (MREIT) aims to provide cross-sectional images of the conductivity distribution σ of a subject. While injecting current into the subject, we measure one component Bz of the induced magnetic flux density B = (Bx, By, Bz) using an MRI scanner. Based on the relation between ∇²Bz and ∇σ, the harmonic Bz algorithm reconstructs an image of σ using the measured Bz data from multiple imaging slices. After we obtain σ, we can reconstruct images of current density distributions for any given current injection method. Following the description of the harmonic Bz algorithm, this paper presents reconstructed conductivity and current density images from computer simulations and phantom experiments using four recessed electrodes injecting six different currents of 26 mA. For experimental results, we used a three-dimensional saline phantom with two polyacrylamide objects inside. We used our 0.3 T (tesla) experimental MRI scanner to measure the induced Bz. Using the harmonic Bz algorithm, we could reconstruct conductivity and current density images with 82 x 82 pixels. The pixel size was 0.6 x 0.6 mm². The relative L2 errors of the reconstructed images were between 13.8 and 21.5% when the signal-to-noise ratio (SNR) of the corresponding MR magnitude images was about 30. The results suggest that in vitro and in vivo experimental studies with animal subjects are feasible. Further studies are required to reduce the amount of injection current down to less than 1 mA for human subjects.

  8. The regularized blind tip reconstruction algorithm as a scanning probe microscopy tip metrology method

    CERN Document Server

    Jozwiak, G; Masalska, A; Gotszalk, T; Ritz, I; Steigmann, H

    2011-01-01

    The problem of an accurate tip radius and shape characterization is very important for the determination of surface mechanical and chemical properties on the basis of scanning probe microscopy measurements. We think that the most favorable methods for this purpose are blind tip reconstruction methods, since they do not need any calibrated characterizers and can be performed on an ordinary SPM setup. As in many other inverse problems, the stability of the solution in the presence of vibrational and electronic noise requires the application of so-called regularization techniques. In this paper a novel regularization technique, Regularized Blind Tip Reconstruction (RBTR), for the blind tip reconstruction algorithm is presented. It improves the quality of the solution in the presence of isotropic and anisotropic noise. The superiority of our approach is proved on the basis of computer simulations and analysis of images of the Budget Sensors TipCheck calibration standard. In case of characterization ...

  9. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    CERN Document Server

    Gagnon, Louis-Guillaume; The ATLAS collaboration

    2017-01-01

    ATLAS track reconstruction software is continuously evolving to match the demands from the increasing instantaneous luminosity of the LHC, as well as the increased center-of-mass energy. These conditions result in a higher abundance of events with dense track environments, such as the cores of jets or boosted tau leptons undergoing three-prong decays. These environments are characterised by charged particle separations on the order of the ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction software to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes lead to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented and physics performance studies are shown, including a measurement of the fraction of lost tracks in jets with high transverse momentum.

  10. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    CERN Document Server

    Gagnon, Louis-Guillaume; The ATLAS collaboration

    2016-01-01

    ATLAS track reconstruction code is continuously evolving to match the demands from the increasing instantaneous luminosity of LHC, as well as the increased centre-of-mass energy. With the increase in energy, events with dense environments, e.g. the cores of jets or boosted tau leptons, become much more abundant. These environments are characterised by charged particle separations on the order of ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction code to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes lead to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented. In addition, physics performance studies are shown, e.g. a measurement of the fraction of lost tracks in jets with high transverse momentum.

  11. Effects of photon noise on speckle image reconstruction with the Knox-Thompson algorithm. [in astronomy

    Science.gov (United States)

    Nisenson, P.; Papaliolios, C.

    1983-01-01

    An analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented. It is shown that the quantities resulting from the speckle average are biased, but that the biases are easily estimated and compensated. Calculations are also made of the convergence rate for the speckle average as a function of the source brightness. An illustration of the effects of photon noise on the image recovery process is included.

  12. DEVELOPMENT OF ALGORITHMS OF NUMERICAL PROJECT OPTIMIZATION FOR THE CONSTRUCTION AND RECONSTRUCTION OF ENGINEERING STRUCTURES

    Directory of Open Access Journals (Sweden)

    MENEJLJUK О. І.

    2016-08-01

    Full Text Available Problem statement. The paper analyzes numerical optimization methods for construction projects and the reconstruction of engineering structures. Purpose. Possible ways of modeling organizational and technological solutions in construction are presented. Based on the analysis, the most effective method of optimization, experimental and statistical modeling with the application of modern computer programs in the fields of project management and mathematical statistics, is selected. Conclusion. An algorithm for solving optimization problems by means of experimental and statistical modeling is developed.

  13. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT

    Energy Technology Data Exchange (ETDEWEB)

    Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Mascolo-Fortin, Julia, E-mail: julia.mascolo-fortin.1@ulaval.ca [Département de physique, de génie physique et d’optique, Université Laval, Québec, Québec G1V 0A6 (Canada); Goussard, Yves, E-mail: yves.goussard@polymtl.ca [Département de génie électrique/Institut de génie biomédical, École Polytechnique de Montréal, C.P. 6079, succ. Centre-ville, Montréal, Québec H3C 3A7 (Canada); Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca [Département de physique, de génie physique et d’optique and Centre de recherche sur le cancer, Université Laval, Québec, Québec G1V 0A6, Canada and Département de radio-oncologie and Centre de recherche du CHU de Québec, Québec, Québec G1R 2J6 (Canada)

    2015-11-15

    Purpose: The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. Methods: This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. Results: The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. Conclusions: The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can

  14. Image quality evaluation of iterative CT reconstruction algorithms: a perspective from spatial domain noise texture measures

    Science.gov (United States)

    Pachon, Jan H.; Yadava, Girijesh; Pal, Debashish; Hsieh, Jiang

    2012-03-01

    Non-linear iterative reconstruction (IR) algorithms have shown promising improvements in image quality at reduced dose levels. However, IR images sometimes may be perceived as having different image noise texture than traditional filtered back projection (FBP) reconstruction. Standard linear-systems-based image quality evaluation metrics are limited in characterizing such textural differences and non-linear image-quality vs. dose trade-off behavior, hence limited in predicting the potential impact of such texture differences in diagnostic tasks. In an attempt to objectively characterize and measure dose dependent image noise texture and statistical properties of IR and FBP images, we have investigated higher order moments and Haralick's Gray Level Co-occurrence Matrix (GLCM) based texture features on phantom images reconstructed by an iterative and a traditional FBP method. In this study, the first 4 central order moments and multiple texture features from the Haralick GLCM in 4 directions at 6 different ROI sizes and four dose levels were computed. For resolution, noise and texture trade-off analysis, spatial frequency domain NPS and contrast-dependent MTF were also computed. Preliminary results of the study indicate that higher order moments, along with spatial domain measures of energy, contrast, correlation, homogeneity, and entropy consistently capture the textural differences between FBP and IR as dose changes. These metrics may be useful in describing the perceptual differences in randomness, coarseness, contrast, and smoothness of images reconstructed by non-linear algorithms.
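
    For reference, a gray-level co-occurrence matrix and a few of the Haralick-type features named above (contrast, homogeneity, energy, entropy) can be computed from an ROI in a few lines of numpy. The quantization level, offset, and the synthetic ROI below are assumptions for illustration, not the study's settings.

        import numpy as np

        def glcm_features(roi, levels=32, dx=1, dy=0):
            """Co-occurrence matrix for a single (dx, dy) offset plus a few Haralick features."""
            q = np.floor((roi - roi.min()) / (np.ptp(roi) + 1e-12) * (levels - 1)).astype(int)
            P = np.zeros((levels, levels))
            h, w = q.shape
            src = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            dst = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
            np.add.at(P, (src.ravel(), dst.ravel()), 1)       # accumulate gray-level pair counts
            P /= P.sum()
            i, j = np.indices(P.shape)
            nz = P > 0
            return {"contrast":    np.sum((i - j) ** 2 * P),
                    "homogeneity": np.sum(P / (1.0 + (i - j) ** 2)),
                    "energy":      np.sum(P ** 2),
                    "entropy":     -np.sum(P[nz] * np.log(P[nz]))}

        # Synthetic stand-in for a noise ROI (values loosely in HU)
        roi = np.random.default_rng(3).normal(50.0, 10.0, (64, 64))
        features = glcm_features(roi)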

  15. Non-Iterative Regularized reconstruction Algorithm for Non-CartesiAn MRI: NIRVANA.

    Science.gov (United States)

    Kashyap, Satyananda; Yang, Zhili; Jacob, Mathews

    2011-02-01

    We introduce a novel noniterative algorithm for the fast and accurate reconstruction of nonuniformly sampled MRI data. The proposed scheme derives the reconstructed image as the nonuniform inverse Fourier transform of a compensated dataset. We derive each sample in the compensated dataset as a weighted linear combination of a few measured k-space samples. The specific k-space samples and the weights involved in the linear combination are derived such that the reconstruction error is minimized. The computational complexity of the proposed scheme is comparable to that of gridding. At the same time, it provides significantly improved accuracy and is considerably more robust to noise and undersampling. The advantages of the proposed scheme makes it ideally suited for the fast reconstruction of large multidimensional datasets, which routinely arise in applications such as f-MRI and MR spectroscopy. The comparisons with state-of-the-art algorithms on numerical phantoms and MRI data clearly demonstrate the performance improvement. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Dose reduction potential of iterative reconstruction algorithms in neck CTA-a simulation study.

    Science.gov (United States)

    Ellmann, Stephan; Kammerer, Ferdinand; Allmendinger, Thomas; Brand, Michael; Janka, Rolf; Hammon, Matthias; Lell, Michael M; Uder, Michael; Kramer, Manuel

    2016-10-01

    This study aimed to determine the degree of radiation dose reduction in neck CT angiography (CTA) achievable with Sinogram-affirmed iterative reconstruction (SAFIRE) algorithms. 10 consecutive patients scheduled for neck CTA were included in this study. CTA images of the external carotid arteries either were reconstructed with filtered back projection (FBP) at full radiation dose level or underwent simulated dose reduction by proprietary reconstruction software. The dose-reduced images were reconstructed using either SAFIRE 3 or SAFIRE 5 and compared with full-dose FBP images in terms of vessel definition. 5 observers performed a total of 3000 pairwise comparisons. SAFIRE allowed substantial radiation dose reductions in neck CTA while maintaining vessel definition. The possible levels of radiation dose reduction ranged from approximately 34 to approximately 90% and depended on the SAFIRE algorithm strength and the size of the vessel of interest. In general, larger vessels permitted higher degrees of radiation dose reduction, especially with higher SAFIRE strength levels. With small vessels, the superiority of SAFIRE 5 over SAFIRE 3 was lost. Neck CTA can be performed with substantially less radiation dose when SAFIRE is applied. The exact degree of radiation dose reduction should be adapted to the clinical question, in particular to the smallest vessel needing excellent definition.

  17. Coral Reef environment reconstruction using small drones, new generation photogrammetry algorithms and satellite imagery

    Science.gov (United States)

    Elisa, Casella; Rovere, Alessio; Harris, Daniel; Parravicini, Valeriano

    2016-04-01

    Surveys based on Remotely Piloted Aircraft Systems (RPAS), together with new-generation Structure from Motion (SfM) and Multi-View Stereo (MVS) reconstruction algorithms, have been employed to reconstruct the shallow bathymetry of the inner lagoon of a coral reef in Moorea, French Polynesia. This technique has already been used with a high rate of success on coastal environments (e.g. sandy beaches and rocky shorelines), reaching accuracies of the final Digital Elevation Model on the order of a few centimeters. The application of such techniques to reconstruct shallow underwater environments is, though, still little reported. We then used the bathymetric dataset obtained from aerial pictures as ground-truth for relative bathymetry obtained from satellite imagery (WorldView-2) of a larger area within the same study site. The first results of our work suggest that RPAS coupled with SfM and MVS algorithms can be used to reconstruct shallow water environments under favorable weather conditions, and can be employed to ground-truth satellite imagery.

  18. IPED2X: a robust pedigree reconstruction algorithm for complicated pedigrees.

    Science.gov (United States)

    He, Dan; Eskin, Eleazar

    2014-12-01

    Reconstruction of family trees, or pedigree reconstruction, for a group of individuals is a fundamental problem in genetics. Some recent methods have been developed to reconstruct pedigrees using genotype data only. These methods are accurate and efficient for simple pedigrees which contain only siblings, where two individuals share the same pair of parents. A more recent method, IPED2, is able to handle complicated pedigrees with half-sibling relationships, where two individuals share only one parent. However, the method is shown to miss many true positive half-sibling relationships as it removes all suspicious half-sibling relationships during the parent construction process. In this work, we propose a novel method, IPED2X, which deploys a more robust algorithm for parent construction in the pedigrees by considering more possible operations than simple deletion. We convert the parent construction problem into a graph labeling problem and propose a more effective labeling algorithm. We show in our experiments that IPED2X is more powerful at capturing true half-sibling relationships, which leads to better reconstruction accuracy.

  19. Relaxed Linearized Algorithms for Faster X-Ray CT Image Reconstruction.

    Science.gov (United States)

    Nien, Hung; Fessler, Jeffrey A

    2016-04-01

    Statistical image reconstruction (SIR) methods are studied extensively for X-ray computed tomography (CT) due to the potential of acquiring CT scans with reduced X-ray dose while maintaining image quality. However, the longer reconstruction time of SIR methods hinders their use in X-ray CT in practice. To accelerate statistical methods, many optimization techniques have been investigated. Over-relaxation is a common technique to speed up convergence of iterative algorithms. For instance, using a relaxation parameter that is close to two in alternating direction method of multipliers (ADMM) has been shown to speed up convergence significantly. This paper proposes a relaxed linearized augmented Lagrangian (AL) method that shows theoretical faster convergence rate with over-relaxation and applies the proposed relaxed linearized AL method to X-ray CT image reconstruction problems. Experimental results with both simulated and real CT scan data show that the proposed relaxed algorithm (with ordered-subsets [OS] acceleration) is about twice as fast as the existing unrelaxed fast algorithms, with negligible computation and memory overhead.
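
    The over-relaxation mentioned above (a relaxation parameter close to two) is easiest to see in a generic ADMM iteration. The lasso-type sketch below marks where the parameter alpha enters; the paper's relaxed linearized AL method for CT with ordered-subsets acceleration is a different, matrix-free variant, so this is only a structural illustration with assumed names.

        import numpy as np

        def admm_lasso_relaxed(A, y, lam=0.1, rho=1.0, alpha=1.9, n_iter=300):
            """ADMM for min 0.5*||A x - y||^2 + lam*||x||_1 with over-relaxation alpha."""
            n = A.shape[1]
            x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
            AtA = A.T @ A + rho * np.eye(n)
            Aty = A.T @ y
            soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
            for _ in range(n_iter):
                x = np.linalg.solve(AtA, Aty + rho * (z - u))
                x_hat = alpha * x + (1.0 - alpha) * z         # over-relaxation step
                z = soft(x_hat + u, lam / rho)
                u = u + x_hat - z
            return z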

  20. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    Science.gov (United States)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ (r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change on the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  1. Fast hybrid CPU- and GPU-based CT reconstruction algorithm using air skipping technique.

    Science.gov (United States)

    Lee, Byeonghun; Lee, Ho; Shin, Yeong Gil

    2010-01-01

    This paper presents a fast hybrid CPU- and GPU-based CT reconstruction algorithm that reduces the amount of back-projection computation using air skipping with polygon clipping. The algorithm easily and rapidly selects air areas, which have significantly higher contrast in each projection image, by applying a K-means clustering method on the CPU, and then generates boundary tables for verifying the valid region using the segmented air areas. Based on these boundary tables for each projection image, a clipped polygon that indicates the active region for the back-projection operation on the GPU is determined on each volume slice. This polygon clipping process makes it possible to back-project a smaller number of voxels, which leads to a faster GPU-based reconstruction method. This approach has been applied to a clinical data set and Shepp-Logan phantom data sets with various ratios of air regions for quantitative and qualitative comparison and analysis of our method and conventional GPU-based reconstruction methods. The algorithm has been shown to halve the computational time without losing any diagnostic information, compared to conventional GPU-based approaches.

  2. Filtered gradient compressive sensing reconstruction algorithm for sparse and structured measurement matrices

    Science.gov (United States)

    Mejia, Yuri H.; Arguello, Henry

    2016-05-01

    Compressive sensing state-of-the-art proposes random Gaussian and Bernoulli as measurement matrices. Nevertheless, often the design of the measurement matrix is subject to physical constraints, and therefore it is frequently not possible that the matrix follows a Gaussian or Bernoulli distribution. Examples of these limitations are the structured and sparse matrices of the compressive X-Ray, and compressive spectral imaging systems. A standard algorithm for recovering sparse signals consists in minimizing an objective function that includes a quadratic error term combined with a sparsity-inducing regularization term. This problem can be solved using the iterative algorithms for solving linear inverse problems. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity. However, current algorithms are slow for getting a high quality image reconstruction because they do not exploit the structured and sparsity characteristics of the compressive measurement matrices. This paper proposes the development of a gradient-based algorithm for compressive sensing reconstruction by including a filtering step that yields improved quality using less iterations. This algorithm modifies the iterative solution such that it forces to converge to a filtered version of the residual A^T y, where y is the measurement vector and A is the compressive measurement matrix. We show that the algorithm including the filtering step converges faster than the unfiltered version. We design various filters that are motivated by the structure of A^T y. Extensive simulation results using various sparse and structured matrices highlight the relative performance gain over the existing iterative process.
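
    The filtering idea described above can be grafted onto a plain iterative shrinkage/thresholding (ISTA) loop: the back-projected residual is passed through a filter before the gradient step, so the iteration converges toward a filtered solution. The sketch below is generic; the paper designs its filters from the structure of A^T y, whereas the moving-average filter here is just a placeholder assumption.

        import numpy as np

        def filtered_ista(A, y, lam, L, filt, n_iter=100):
            """ISTA with an extra filtering of the back-projected residual.
            filt: any linear filter acting on image-domain vectors (user supplied)."""
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)
                x = x - filt(grad) / L                                    # filtered gradient step
                x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)     # soft threshold
            return x

        # Placeholder filter: 3-tap moving average over the vectorized image
        smooth = lambda v: np.convolve(v, np.ones(3) / 3.0, mode='same')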

  3. Evaluation of a new reconstruction algorithm for x-ray phase-contrast imaging

    Science.gov (United States)

    Seifert, Maria; Hauke, Christian; Horn, Florian; Lachner, Sebastian; Ludwig, Veronika; Pelzer, Georg; Rieger, Jens; Schuster, Max; Wandner, Johannes; Wolf, Andreas; Michel, Thilo; Anton, Gisela

    2016-04-01

    X-ray grating-based phase-contrast imaging might open up entirely new opportunities in medical imaging. However, when transferring the interferometer technique from laboratory setups to conventional imaging systems, the necessary rigidity of the system is difficult to achieve. Therefore, vibrations or distortions of the system lead to inaccuracies within the phase-stepping procedure. Given insufficient stability of the phase-step positions, artifacts occur in phase-contrast images, which lower the image quality. This is a problem with regard to the intended use of phase-contrast imaging in clinical routine, as, for example, tiny structures of the human anatomy cannot be observed. In this contribution we evaluate an algorithm proposed by Vargas et al. [1] and applied to X-ray imaging by Pelzer et al., which enables us to reconstruct a differential phase-contrast image without knowledge of the specific phase-step positions. This method was tested in comparison to the standard reconstruction by Fourier analysis. The quality of phase-contrast images remains stable, even if the phase-step positions are completely unknown and not uniformly distributed. To also obtain attenuation and dark-field images, the proposed algorithm has been combined with a further algorithm of Vargas et al. [3]. Using this algorithm, the phase-step positions can be reconstructed. With the help of the proper phase-step positions it is possible to get information about the phase, the amplitude and the offset of the measured data. We evaluated this algorithm concerning the measurement of thick objects which show high absorption.

  4. Interior point algorithm-based power flow optimisation of a combined AC and DC multi-terminal grid

    Directory of Open Access Journals (Sweden)

    Farhan Beg

    2015-01-01

    Full Text Available The high cost of power electronic equipment and the lower reliability and poor power-handling capacity of semiconductor devices had stalled the deployment of systems based on multi-terminal direct current (MTDC) networks. The introduction of voltage source converters (VSCs) for transmission has renewed the interest in the development of large interconnected grids based on both alternating current (AC) and DC transmission networks. Such a grid platform also realises the added advantage of integrating renewable energy sources into the grid. Thus a grid based on an MTDC network is a possible solution to improve energy security and curb the increasing supply-demand gap. An optimal power flow solution for combined AC and DC grids, obtained by the solution of the interior point algorithm, is proposed in this study. Multi-terminal HVDC grids lie at the heart of various suggested transmission capacity increases. A significant difference is observed when MTDC grids are solved for power flows in place of conventional AC grids. This study deals with the power flow problem of a combined MTDC and an AC grid. The AC side is modelled with the full power flow equations and the VSCs are modelled using a connecting line, two generators and an AC node. The VSC and the DC losses are also considered. The optimisation focuses on several different goals. Three different scenarios are presented in an arbitrary grid network with ten AC nodes and five converter stations.

  5. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced. The reconstruction algorithm is illustrated on various test cases including natural and urban terrain data, and enhancement of low-resolution or aliased images. Copyright © by SIAM.

  6. Influence of different path length computation models and iterative reconstruction algorithms on the quality of transmission reconstruction in Tomographic Gamma Scanning

    Science.gov (United States)

    Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua

    2017-07-01

    This paper studies the influence of different path length computation models and iterative reconstruction algorithms on the quality of transmission reconstruction in Tomographic Gamma Scanning. The research purpose is to quantify and localize heterogeneous matrices while investigating the recovery of linear attenuation coefficient (LAC) maps in 200 liter drums. Two different path length computation models, the so-called "point to point" (PP) model and "point to detector" (PD) model, are coupled with two different transmission reconstruction algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint, and Maximum Likelihood Expectation Maximization (MLEM). Thus 4 modes are formed: ART-PP, ART-PD, MLEM-PP, MLEM-PD. The transmission reconstruction quality of these 4 modes is compared for heterogeneous matrices in the radioactive waste drums. Results illustrate that the transmission reconstruction quality of the MLEM algorithm is better than that of the ART algorithm, yielding the most accurate LAC maps in good agreement with reference data simulated by Monte Carlo. Moreover, the PD model can be used to assay higher-density waste drums and has a greater scope of application than the PP model in TGS.
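
    For orientation, MLEM uses the familiar multiplicative update sketched below in its generic (emission-style) form; in this sketch the choice of PP or PD path-length model would only change how the system matrix A is built. The transmission likelihood actually used in TGS differs, so this is a structural illustration only, with assumed names.

        import numpy as np

        def mlem(A, y, n_iter=50):
            """Generic MLEM multiplicative update: x <- x * (A^T (y / (A x))) / (A^T 1)."""
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])                  # sensitivity image A^T 1
            sens[sens == 0] = 1.0                             # guard empty columns
            for _ in range(n_iter):
                proj = A @ x
                proj[proj <= 0] = 1e-12
                x *= (A.T @ (y / proj)) / sens
            return x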

  7. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    Science.gov (United States)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality. It has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the nearest-neighbor selection problem. Building on sparse-representation-based super-resolution image reconstruction, a super-resolution reconstruction algorithm based on multi-class dictionaries is analyzed. This method avoids the redundancy problem of training only a single overcomplete dictionary, makes the sub-dictionaries more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole image reconstruction. In addition, non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.

  8. Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions.

    Science.gov (United States)

    Grootjans, Willem; Meeuwis, Antoi P W; Slump, Cornelis H; de Geus-Oei, Lioe-Fee; Gotthardt, Martin; Visser, Eric P

    2016-12-01

    Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion was characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms in reconstructing clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered back projection (FBP), OSEM, and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7, respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4.2, respectively.
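
    A minimal sketch of the three image-quality metrics named above, under common NEMA-style definitions; the exact formulas and the example numbers below are assumptions, not taken from the study.

```python
import numpy as np

def iq_metrics(sphere_mean, bkg_roi_means, true_contrast):
    bkg_mean = np.mean(bkg_roi_means)
    crc = ((sphere_mean / bkg_mean) - 1.0) / (true_contrast - 1.0)   # contrast recovery
    n_pct = 100.0 * np.std(bkg_roi_means, ddof=1) / bkg_mean          # background variability [%]
    return crc, n_pct, crc / (n_pct / 100.0)                          # CNR as CRC / N%

crc, n_pct, cnr = iq_metrics(sphere_mean=7.2,
                             bkg_roi_means=[1.02, 0.98, 1.05, 0.95, 1.00],
                             true_contrast=8.0)
print(f"CRC={crc:.2f}  N%={n_pct:.1f}  CNR={cnr:.1f}")
```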

  9. New Virtual Cutting Algorithms for 3D Surface Model Reconstructed from Medical Images

    Institute of Scientific and Technical Information of China (English)

    WANG Wei-hong; QIN Xu-Jia

    2006-01-01

    This paper proposes practical algorithms of plane cutting, stereo clipping and arbitrary cutting for 3D surface models reconstructed from medical images. In the plane cutting and stereo clipping algorithms, the 3D model is cut by a plane or polyhedron. Lists of edges and vertices in every cut plane are established. From these lists the boundary contours are created and their containment relationships are ascertained. The region closed by the contours is triangulated using the Delaunay triangulation algorithm. The arbitrary cutting operation creates the cutting curve interactively. The cut model still maintains a correct topology structure. With these operations, interior tissues can be observed easily, which can aid doctors in diagnosis. The methods can also be used in surgery planning for radiotherapy.
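
    A rough sketch of the "fill the cut face" step only: triangulate the planar region bounded by a closed cut contour. Plain Delaunay triangulation of the contour vertices is used and triangles whose centroids fall outside the polygon are discarded, which is a simplification of a true constrained Delaunay triangulation; the star-shaped contour is synthetic.

```python
import numpy as np
from scipy.spatial import Delaunay
from matplotlib.path import Path

theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
r = 1 + 0.3 * np.cos(3 * theta)                          # non-convex cut outline
contour = np.c_[r * np.cos(theta), r * np.sin(theta)]

tri = Delaunay(contour)                                   # Delaunay of boundary points
centroids = contour[tri.simplices].mean(axis=1)
poly = Path(np.vstack([contour, contour[:1]]))            # explicitly closed polygon
inside = poly.contains_points(centroids)
cut_face = tri.simplices[inside]                          # triangles that cap the cut
print(f"{len(cut_face)} triangles fill the cut cross-section")
```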

  10. Realization and Comparison of Several Regression Algorithms for Electron Energy Spectrum Reconstruction

    Institute of Scientific and Technical Information of China (English)

    LI Gui; LIN Hui; WU Ai-Dong; SONG Gang; WU Yi-Can

    2008-01-01

    To determine the electron energy spectra of a medical accelerator effectively, we investigate a nonlinear programming model with several nonlinear regression algorithms, including the Levenberg-Marquardt, Quasi-Newton, Gradient, Conjugate Gradient, Newton, Principal-Axis and NMinimize algorithms. A local relaxation-bound method is also developed to increase the calculation accuracy. The testing results demonstrate that the above methods can reconstruct the electron energy spectra effectively. In particular, when further combined with the local relaxation-bound method, the Levenberg-Marquardt, Newton and NMinimize algorithms can precisely obtain both the electron energy spectra and the photon contamination. Further study shows that ignoring the roughly 4% photon contamination greatly increases the error and causes the reconstructed electron energy spectra to 'drift' towards low energies.
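
    A toy Levenberg-Marquardt fit in the same spirit: recover weights of a few fixed depth-dose components plus a constant "photon contamination" term from a measured curve. The Gaussian components and all numbers are arbitrary stand-ins, not real electron depth-dose kernels.

```python
import numpy as np
from scipy.optimize import least_squares

depth = np.linspace(0, 5, 60)
components = np.stack([np.exp(-((depth - d0) / 0.8) ** 2) for d0 in (1.0, 2.0, 3.0)])
w_true, photon_true = np.array([0.5, 0.3, 0.2]), 0.04
measured = (w_true @ components + photon_true
            + 0.005 * np.random.default_rng(3).standard_normal(depth.size))

def residual(p):
    w, photon = p[:-1], p[-1]
    return w @ components + photon - measured             # model minus data

fit = least_squares(residual, x0=np.array([0.3, 0.3, 0.3, 0.0]), method="lm")
print("weights:", fit.x[:-1].round(3), " photon term:", round(fit.x[-1], 3))
```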

  11. A Fast Algorithm for Muon Track Reconstruction and its Application to the ANTARES Neutrino Telescope

    CERN Document Server

    Aguilar, J A; Albert, A; Andre, M; Anghinolfi, M; Anton, G; Anvar, S; Ardid, M; Jesus, A C Assis; Astraatmadja, T; Aubert, J-J; Auer, R; Baret, B; Basa, S; Bazzotti, M; Bertin, V; Biagi, S; Bigongiari, C; Bogazzi, C; Bou-Cabo, M; Bouwhuis, M C; Brown, A M; Brunner, J; Busto, J; Camarena, F; Capone, A; Carloganu, C; Carminati, G; Carr, J; Cecchini, S; Charvis, Ph; Chiarusi, T; Circella, M; Coniglione, R; Costantini, H; Cottini, N; Coyle, P; Curtil, C; Decowski, M P; Dekeyser, I; Deschamps, A; Distefano, C; Donzaud, C; Dornic, D; Dorosti, Q; Drouhin, D; Eberl, T; Emanuele, U; Ernenwein, J-P; Escoffier, S; Fehr, F; Flaminio, V; Fritsch, U; Fuda, J-L; Galata, S; Gay, P; Giacomelli, G; Gomez-Gonzalez, J P; Graf, K; Guillard, G; Halladjian, G; Hallewell, G; van Haren, H; Heijboer, A J; Hello, Y; Hernandez-Rey, J J; Herold, B; Hößl, J; Hsu, C C; de Jong, M; Kadler, M; Kalantar-Nayestanaki, N; Kalekin, O; Kappes, A; Katz, U; Kooijman, P; Kopper, C; Kouchner, A; Kulikovskiy, V; Lahmann, R; Lamare, P; Larosa, G; Lefevre, D; Lim, G; Presti, D Lo; Loehner, H; Loucatos, S; Lucarelli, F; Mangano, S; Marcelin, M; Margiotta, A; Martinez-Mora, J A; Mazure, A; Meli, A; Montaruli, T; Morganti, M; Moscoso, L; Motz, H; Naumann, C; Neff, M; Palioselitis, D; Pavalas, G E; Payre, P; Petrovic, J; Picot-Clemente, N; Picq, C; Popa, V; Pradier, T; Presani, E; Racca, C; Reed, C; Riccobene, G; Richardt, C; Richter, R; Rostovtsev, A; Rujoiu, M; Russo, G V; Salesa, F; Sapienza, P; Schöck, F; Schuller, J-P; Shanidze, R; Simeone, F; Spiess, A; Spurio, M; Steijger, J J M; Stolarczyk, Th; Taiuti, M; Tamburini, C; Tasca, L; Toscano, S; Vallage, B; Van Elewyck, V; Vannoni, G; Vecchi, M; Vernin, P; Wijnker, G; de Wolf, E; Yepes, H; Zaborov, D; Zornoza, J D; Zuniga, J

    2011-01-01

    An algorithm is presented that provides a fast and robust reconstruction of neutrino induced upward-going muons and a discrimination of these events from downward-going atmospheric muon background in data collected by the ANTARES neutrino telescope. The algorithm consists of a hit merging and hit selection procedure followed by fitting steps for a track hypothesis and a point-like light source. It is particularly well-suited for real time applications such as online monitoring and fast triggering of optical follow-up observations for multi-messenger studies. The performance of the algorithm is evaluated with Monte Carlo simulations and various distributions are compared with those obtained from ANTARES data.

  12. A fast algorithm for muon track reconstruction and its application to the ANTARES neutrino telescope

    Science.gov (United States)

    Aguilar, J. A.; Al Samarai, I.; Albert, A.; André, M.; Anghinolfi, M.; Anton, G.; Anvar, S.; Ardid, M.; Assis Jesus, A. C.; Astraatmadja, T.; Aubert, J.-J.; Auer, R.; Baret, B.; Basa, S.; Bazzotti, M.; Bertin, V.; Biagi, S.; Bigongiari, C.; Bogazzi, C.; Bou-Cabo, M.; Bouwhuis, M. C.; Brown, A. M.; Brunner, J.; Busto, J.; Camarena, F.; Capone, A.; Cârloganu, C.; Carminati, G.; Carr, J.; Cecchini, S.; Charvis, Ph.; Chiarusi, T.; Circella, M.; Coniglione, R.; Costantini, H.; Cottini, N.; Coyle, P.; Curtil, C.; Decowski, M. P.; Dekeyser, I.; Deschamps, A.; Distefano, C.; Donzaud, C.; Dornic, D.; Dorosti, Q.; Drouhin, D.; Eberl, T.; Emanuele, U.; Ernenwein, J.-P.; Escoffier, S.; Fehr, F.; Flaminio, V.; Fritsch, U.; Fuda, J.-L.; Galatà, S.; Gay, P.; Giacomelli, G.; Gómez-González, J. P.; Graf, K.; Guillard, G.; Halladjian, G.; Hallewell, G.; van Haren, H.; Heijboer, A. J.; Hello, Y.; Hernández-Rey, J. J.; Herold, B.; Hößl, J.; Hsu, C. C.; de Jong, M.; Kadler, M.; Kalantar-Nayestanaki, N.; Kalekin, O.; Kappes, A.; Katz, U.; Kooijman, P.; Kopper, C.; Kouchner, A.; Kulikovskiy, V.; Lahmann, R.; Lamare, P.; Larosa, G.; Lefèvre, D.; Lim, G.; Lo Presti, D.; Loehner, H.; Loucatos, S.; Lucarelli, F.; Mangano, S.; Marcelin, M.; Margiotta, A.; Martinez-Mora, J. A.; Mazure, A.; Meli, A.; Montaruli, T.; Morganti, M.; Moscoso, L.; Motz, H.; Naumann, C.; Neff, M.; Palioselitis, D.; Păvălaş, G. E.; Payre, P.; Petrovic, J.; Picot-Clemente, N.; Picq, C.; Popa, V.; Pradier, T.; Presani, E.; Racca, C.; Reed, C.; Riccobene, G.; Richardt, C.; Richter, R.; Rostovtsev, A.; Rujoiu, M.; Russo, G. V.; Salesa, F.; Sapienza, P.; Schöck, F.; Schuller, J.-P.; Shanidze, R.; Simeone, F.; Spiess, A.; Spurio, M.; Steijger, J. J. M.; Stolarczyk, Th.; Taiuti, M.; Tamburini, C.; Tasca, L.; Toscano, S.; Vallage, B.; Van Elewyck, V.; Vannoni, G.; Vecchi, M.; Vernin, P.; Wijnker, G.; de Wolf, E.; Yepes, H.; Zaborov, D.; Zornoza, J. D.; Zúñiga, J.

    2011-04-01

    An algorithm is presented that provides a fast and robust reconstruction of neutrino induced upward-going muons and a discrimination of these events from downward-going atmospheric muon background in data collected by the ANTARES neutrino telescope. The algorithm consists of a hit merging and hit selection procedure followed by fitting steps for a track hypothesis and a point-like light source. It is particularly well-suited for real time applications such as online monitoring and fast triggering of optical follow-up observations for multi-messenger studies. The performance of the algorithm is evaluated with Monte Carlo simulations and various distributions are compared with those obtained from ANTARES data.
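
    A highly simplified sketch of a linear "prefit" step of the kind used in fast track reconstruction: after hit selection, fit hit z-positions against hit times and use the slope sign to separate upward-going from downward-going candidates. Hit merging, the full track model and detector geometry are omitted; all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 500, 30))                     # hit times [ns]
v_true = 0.25                                             # effective slope [m/ns]
z = 100.0 + v_true * t + rng.normal(0, 2.0, t.size)       # hit z positions [m]

A = np.c_[np.ones_like(t), t]
(z0, v), *_ = np.linalg.lstsq(A, z, rcond=None)           # chi^2 straight-line fit
direction = "upward-going" if v > 0 else "downward-going"
print(f"fitted slope along z: {v:.3f} m/ns -> {direction} candidate")
```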

  13. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization

    KAUST Repository

    Jin, Bangti

    2011-08-24

    This paper develops a novel sparse reconstruction algorithm for the electrical impedance tomography problem of determining a conductivity parameter from boundary measurements. The sparsity of the 'inhomogeneity' with respect to a certain basis is assumed a priori. The proposed approach is motivated by a Tikhonov functional incorporating a sparsity-promoting ℓ1-penalty term, and it allows us to obtain quantitative results when the assumption is valid. A novel iterative algorithm of soft shrinkage type is proposed. Numerical results for several two-dimensional problems with both single and multiple convex and nonconvex inclusions are presented to illustrate the features of the proposed algorithm and are compared with a conventional approach based on smoothness regularization. © 2011 John Wiley & Sons, Ltd.
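
    A generic iterative soft-shrinkage (ISTA) sketch for a sparsity-promoting Tikhonov functional min_x ||Ax - b||^2 + lam*||x||_1. The EIT-specific linearized forward operator is replaced here by a random matrix, and the sparse "inhomogeneity" is a synthetic vector.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 60, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[10, 50, 120]] = [1.0, -0.7, 0.5]   # sparse inhomogeneity
b = A @ x_true

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)          # soft shrinkage

def ista(A, b, lam=0.01, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - (A.T @ (A @ x - b)) / L, lam / L)            # gradient + shrinkage
    return x

x_hat = ista(A, b)
print("support found:", np.flatnonzero(np.abs(x_hat) > 0.1))
```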

  14. A Fast local Reconstruction algorithm by selective backprojection for Low-Dose in Dental Computed Tomography

    CERN Document Server

    Bin, Yan; Yu, Han; Feng, Zhang; Chao, Wang Xian; Lei, Li

    2013-01-01

    The high radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer, which has become a major clinical concern. The backprojection-filtration (BPF) algorithm can reduce radiation dose by reconstructing images from truncated data in a short scan. In dental CT, it can reduce the radiation dose to the teeth by using projections acquired in a short scan, and can avoid irradiating other parts by using truncated projections. However, the limit of integration for backprojection varies per PI-line, resulting in low calculation efficiency and poor parallel performance. Recently, a tent BPF (T-BPF) has been proposed to improve calculation efficiency by rearranging the projections, but it includes a memory-consuming data rebinning process. Accordingly, the chose-BPF (C-BPF) algorithm is proposed in this paper. In this algorithm, the derivative of the projection is backprojected to the points whose x coordinate is less than that of the source focal spot to obtain the differentiated backprojection...

  15. An ordered-subsets proximal preconditioned gradient algorithm for edge-preserving PET image reconstruction.

    Science.gov (United States)

    Mehranian, Abolfazl; Rahmim, Arman; Ay, Mohammad Reza; Kotasidis, Fotis; Zaidi, Habib

    2013-05-01

    In iterative positron emission tomography (PET) image reconstruction, the statistical variability of PET data precorrected for random coincidences or acquired at sufficiently high count rates can be properly approximated by a Gaussian distribution, which leads to a penalized weighted least-squares (PWLS) cost function. In this study, the authors propose a proximal preconditioned gradient algorithm accelerated with ordered subsets (PPG-OS) for the optimization of the PWLS cost function and develop a framework to incorporate boundary side information into edge-preserving total variation (TV) and Huber regularizations. The PPG-OS algorithm is proposed to address two issues encountered in the optimization of the PWLS function with edge-preserving regularizers. First, the second derivative of this function (the Hessian matrix) is shift-variant and ill-conditioned due to the weighting matrix (which includes emission data, attenuation, and normalization correction factors) and the regularizer. As a result, the paraboloidal surrogate functions (used in optimization transfer techniques) end up with high curvatures, and gradient-based algorithms take smaller step sizes toward the solution, leading to slow convergence. In addition, preconditioners used to improve the condition number of the problem, and thus to speed up convergence, act poorly on the resulting ill-conditioned Hessian matrix. Second, the PWLS function with a nondifferentiable penalty such as TV is not amenable to optimization using gradient-based algorithms. To deal with these issues, and also to enhance edge preservation of the TV and Huber regularizers by incorporating adaptively or anatomically derived boundary side information, the authors followed a proximal splitting method. Thereby, the optimization of the PWLS function is split into a gradient descent step (upgraded by preconditioning, step size optimization, and ordered subsets) and a proximal mapping associated with boundary weighted TV

  16. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    Science.gov (United States)

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  17. Ghost images and feasibility of reconstructions with the Richardson-Lucy algorithm

    Science.gov (United States)

    Llacer, Jorge; Nunez, Jorge

    1994-09-01

    This paper is the result of a question that was raised at the recent workshop on 'The Restoration of HST Images and Spectra II', which took place at the Space Telescope Science Institute in November 1993, and for which there was no forthcoming answer at that time. The question was: what is the null space (ghost images) of the Richardson-Lucy (RL) algorithm? Another question that came up, for which there is a straightforward answer, was: what does the MLE algorithm really do? In this paper we attempt to answer both questions. The paper begins with a brief description of the null space of an imaging system, with particular emphasis on the Hubble telescope. The imaging conditions under which there is a possibly damaging null space are described in terms of linear methods of reconstruction. For the uncorrected Hubble telescope, it is shown that for a PSF computed by TINYTIM on a 512 x 512 grid, there is no null space. We introduce the concept of a 'nearly null' space, with an unsharp distinction between the 'measurement' and the 'null' components of an image, and generate a reduced-resolution Hubble point spread function (PSF) that has such a nearly null space. We then study the propagation characteristics of null images in the Maximum Likelihood Estimator (MLE), or Richardson-Lucy, algorithm and the nature of its possible effects, but we find in computer simulations that the algorithm is very robust to those effects: if they exist, the effects are local and tend to disappear with increasing iteration numbers. We then demonstrate how a PSF that has small components in the frequency domain results in noise magnification, just as one would expect in linear reconstruction. The answer to the second question is given in terms of the residuals of a reconstruction and the concept of feasibility.
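
    For orientation, the plain Richardson-Lucy (MLE) update discussed above can be written in a few lines. The 1-D toy signal and Gaussian PSF below are illustrative assumptions, not an HST point spread function.

```python
import numpy as np
from scipy.signal import fftconvolve

x = np.zeros(128); x[40], x[70] = 1.0, 0.6                 # two point sources
psf = np.exp(-0.5 * (np.arange(-15, 16) / 3.0) ** 2); psf /= psf.sum()
blurred = fftconvolve(x, psf, mode="same") + 1e-6           # near-noiseless observation

estimate = np.full_like(blurred, blurred.mean())            # flat positive start
for _ in range(200):
    reblurred = fftconvolve(estimate, psf, mode="same")
    ratio = blurred / np.maximum(reblurred, 1e-12)
    estimate *= fftconvolve(ratio, psf[::-1], mode="same")   # RL multiplicative update
print("two brightest sample positions:", np.sort(np.argsort(estimate)[-2:]))
```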

  18. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    Science.gov (United States)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods for the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed first using synthetic data and then real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold when compared to a single iteration of the FEM-based algorithm.

  19. Seamless texture mapping algorithm for image-based three-dimensional reconstruction

    Science.gov (United States)

    Liu, Jiapeng; Liu, Bin; Fang, Tao; Huo, Hong; Zhao, Yuming

    2016-09-01

    Texture information plays an important role in rendering true objects, especially with the wide application of image-based three-dimensional (3-D) reconstruction and 3-D laser scanning. This paper proposes a seamless texture mapping algorithm to achieve a high-quality visual effect for 3-D reconstruction. First, a series of image sets is produced by analyzing the visibility of triangular facets, and the image sets are clustered and segmented into a number of optimal reference texture patches. Second, the generated texture patches are sequenced to create a rough texture map, and a weighting process is then adopted to reduce the color discrepancies between adjacent patches. Finally, a multiresolution decomposition and fusion technique is used to generate the transition section and eliminate the boundary effect. Experiments show that the proposed algorithm is effective and practical for obtaining high-quality 3-D texture mapping for 3-D reconstruction. Compared with traditional methods, it maintains texture clarity while eliminating color seams; in addition, it also supports 3-D texture mapping for big-data applications.

  20. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    Science.gov (United States)

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) and simultaneously reducing the total number of X-ray views (sparse view) is an effective means to achieve a low dose in computed tomography (CT) scans. However, the image quality obtained with conventional filtered back-projection (FBP) then usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under a scanning protocol of reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we propose a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can ensure not only a higher signal-to-noise ratio (SNR) of the reconstructed image but also better resolution compared with the original TV method.
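
    A rough sketch of the median-prior idea: in each outer iteration the current image is pulled toward its own local median, here combined with a simple gradient-descent TV smoothing step. The projection (data-fidelity) updates are omitted, and the step size, blending weight beta = 0.3 and 3x3 window are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import median_filter

def tv_gradient(img, eps=1e-8):
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    norm = np.sqrt(gx**2 + gy**2 + eps)
    div_x = np.diff(gx / norm, axis=0, prepend=np.zeros((1, img.shape[1])))
    div_y = np.diff(gy / norm, axis=1, prepend=np.zeros((img.shape[0], 1)))
    return -(div_x + div_y)                  # gradient of the TV term

img = np.clip(np.random.default_rng(6).normal(0.5, 0.2, (64, 64)), 0, 1)
for _ in range(20):
    img -= 0.05 * tv_gradient(img)                        # TV minimization step
    img = 0.7 * img + 0.3 * median_filter(img, size=3)    # draw voxels toward local median
print("image range after smoothing:", img.min().round(3), img.max().round(3))
```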

  1. Performance assessment of different pulse reconstruction algorithms for the ATHENA X-ray Integral Field Unit

    Science.gov (United States)

    Peille, Philippe; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; den Hartog, Roland; de Plaa, Jelle; Barret, Didier; den Herder, Jan-Willem; Piro, Luigi; Barcons, Xavier; Pointecouteau, Etienne

    2016-07-01

    The X-ray Integral Field Unit (X-IFU) microcalorimeter on board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on on-board digital processing of the current pulses induced by the heat deposited in the TES absorbers, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performance, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might however not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high count rate capabilities.
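
    A minimal sketch of the baseline "optimal filtering" energy estimate that the more advanced methods are benchmarked against: project a measured pulse onto a normalized template. This is a plain time-domain matched filter with made-up pulse shapes and noise; flight-like implementations weight by a measured noise spectrum in the frequency domain.

```python
import numpy as np

t = np.arange(1024)
template = np.exp(-t / 200.0) - np.exp(-t / 20.0)
template /= template.max()                                 # unit-amplitude template
rng = np.random.default_rng(7)
true_energy = 6.4                                          # assumed energy, e.g. keV
pulse = true_energy * template + 0.05 * rng.standard_normal(t.size)  # noisy record

energy = (pulse @ template) / (template @ template)        # least-squares amplitude
print(f"reconstructed energy: {energy:.3f} keV")
```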

  2. An Algorithmic Approach to Total Breast Reconstruction with Free Tissue Transfer

    Directory of Open Access Journals (Sweden)

    Seong Cheol Yu

    2013-05-01

    Full Text Available As microvascular techniques continue to improve, perforator flap free tissue transfer is now the gold standard for autologous breast reconstruction. Various options are available for breast reconstruction with autologous tissue. These include the free transverse rectus abdominis myocutaneous (TRAM flap, deep inferior epigastric perforator flap, superficial inferior epigastric artery flap, superior gluteal artery perforator flap, and transverse/vertical upper gracilis flap. In addition, pedicled flaps can be very successful in the right hands and the right patient, such as the pedicled TRAM flap, latissimus dorsi flap, and thoracodorsal artery perforator. Each flap comes with its own advantages and disadvantages related to tissue properties and donor-site morbidity. Currently, the problem is how to determine the most appropriate flap for a particular patient among those potential candidates. Based on a thorough review of the literature and accumulated experiences in the author’s institution, this article provides a logical approach to autologous breast reconstruction. The algorithms presented here can be helpful to customize breast reconstruction to individual patient needs.

  3. Subjet multiplicity of gluon and quark jets reconstructed with the k⊥ algorithm in ppbar collisions

    Science.gov (United States)

    Abazov, V. M.; Abbott, B.; Abdesselam, A.; Abolins, M.; Abramov, V.; Acharya, B. S.; Adams, D. L.; Adams, M.; Ahmed, S. N.; Alexeev, G. D.; Alton, A.; Alves, G. A.; Amos, N.; Anderson, E. W.; Arnoud, Y.; Avila, C.; Baarmand, M. M.; Babintsev, V. V.; Babukhadia, L.; Bacon, T. C.; Baden, A.; Baldin, B.; Balm, P. W.; Banerjee, S.; Barberis, E.; Baringer, P.; Barreto, J.; Bartlett, J. F.; Bassler, U.; Bauer, D.; Bean, A.; Beaudette, F.; Begel, M.; Belyaev, A.; Beri, S. B.; Bernardi, G.; Bertram, I.; Besson, A.; Beuselinck, R.; Bezzubov, V. A.; Bhat, P. C.; Bhatnagar, V.; Bhattacharjee, M.; Blazey, G.; Blekman, F.; Blessing, S.; Boehnlein, A.; Bojko, N. I.; Borcherding, F.; Bos, K.; Bose, T.; Brandt, A.; Breedon, R.; Briskin, G.; Brock, R.; Brooijmans, G.; Bross, A.; Buchholz, D.; Buehler, M.; Buescher, V.; Burtovoi, V. S.; Butler, J. M.; Canelli, F.; Carvalho, W.; Casey, D.; Casilum, Z.; Castilla-Valdez, H.; Chakraborty, D.; Chan, K. M.; Chekulaev, S. V.; Cho, D. K.; Choi, S.; Chopra, S.; Christenson, J. H.; Chung, M.; Claes, D.; Clark, A. R.; Cochran, J.; Coney, L.; Connolly, B.; Cooper, W. E.; Coppage, D.; Crépé-Renaudin, S.; Cummings, M. A.; Cutts, D.; Davis, G. A.; Davis, K.; de, K.; de Jong, S. J.; del Signore, K.; Demarteau, M.; Demina, R.; Demine, P.; Denisov, D.; Denisov, S. P.; Desai, S.; Diehl, H. T.; Diesburg, M.; Doulas, S.; Ducros, Y.; Dudko, L. V.; Duensing, S.; Duflot, L.; Dugad, S. R.; Duperrin, A.; Dyshkant, A.; Edmunds, D.; Ellison, J.; Elvira, V. D.; Engelmann, R.; Eno, S.; Eppley, G.; Ermolov, P.; Eroshin, O. V.; Estrada, J.; Evans, H.; Evdokimov, V. N.; Fahland, T.; Feher, S.; Fein, D.; Ferbel, T.; Filthaut, F.; Fisk, H. E.; Fisyak, Y.; Flattum, E.; Fleuret, F.; Fortner, M.; Fox, H.; Frame, K. C.; Fu, S.; Fuess, S.; Gallas, E.; Galyaev, A. N.; Gao, M.; Gavrilov, V.; Genik, R. J.; Genser, K.; Gerber, C. E.; Gershtein, Y.; Gilmartin, R.; Ginther, G.; Gómez, B.; Gómez, G.; Goncharov, P. I.; González Solís, J. L.; Gordon, H.; Goss, L. T.; Gounder, K.; Goussiou, A.; Graf, N.; Graham, G.; Grannis, P. D.; Green, J. A.; Greenlee, H.; Greenwood, Z. D.; Grinstein, S.; Groer, L.; Grünendahl, S.; Gupta, A.; Gurzhiev, S. N.; Gutierrez, G.; Gutierrez, P.; Hadley, N. J.; Haggerty, H.; Hagopian, S.; Hagopian, V.; Hall, R. E.; Hanlet, P.; Hansen, S.; Hauptman, J. M.; Hays, C.; Hebert, C.; Hedin, D.; Heinmiller, J. M.; Heinson, A. P.; Heintz, U.; Heuring, T.; Hildreth, M. D.; Hirosky, R.; Hobbs, J. D.; Hoeneisen, B.; Huang, Y.; Illingworth, R.; Ito, A. S.; Jaffré, M.; Jain, S.; Jesik, R.; Johns, K.; Johnson, M.; Jonckheere, A.; Jöstlein, H.; Juste, A.; Kahl, W.; Kahn, S.; Kajfasz, E.; Kalinin, A. M.; Karmanov, D.; Karmgard, D.; Kehoe, R.; Khanov, A.; Kharchilava, A.; Kim, S. K.; Klima, B.; Knuteson, B.; Ko, W.; Kohli, J. M.; Kostritskiy, A. V.; Kotcher, J.; Kothari, B.; Kotwal, A. V.; Kozelov, A. V.; Kozlovsky, E. A.; Krane, J.; Krishnaswamy, M. R.; Krivkova, P.; Krzywdzinski, S.; Kubantsev, M.; Kuleshov, S.; Kulik, Y.; Kunori, S.; Kupco, A.; Kuznetsov, V. E.; Landsberg, G.; Lee, W. M.; Leflat, A.; Leggett, C.; Lehner, F.; Li, J.; Li, Q. Z.; Li, X.; Lima, J. G.; Lincoln, D.; Linn, S. L.; Linnemann, J.; Lipton, R.; Lucotte, A.; Lueking, L.; Lundstedt, C.; Luo, C.; Maciel, A. K.; Madaras, R. J.; Malyshev, V. L.; Manankov, V.; Mao, H. S.; Marshall, T.; Martin, M. I.; Mauritz, K. M.; May, B.; Mayorov, A. A.; McCarthy, R.; McMahon, T.; Melanson, H. L.; Merkin, M.; Merritt, K. W.; Miao, C.; Miettinen, H.; Mihalcea, D.; Mishra, C. S.; Mokhov, N.; Mondal, N. K.; Montgomery, H. E.; Moore, R. 
W.; Mostafa, M.; da Motta, H.; Nagy, E.; Nang, F.; Narain, M.; Narasimham, V. S.; Naumann, N. A.; Neal, H. A.; Negret, J. P.; Negroni, S.; Nunnemann, T.; O'Neil, D.; Oguri, V.; Olivier, B.; Oshima, N.; Padley, P.; Pan, L. J.; Papageorgiou, K.; Para, A.; Parashar, N.; Partridge, R.; Parua, N.; Paterno, M.; Patwa, A.; Pawlik, B.; Perkins, J.; Peters, O.; Pétroff, P.; Piegaia, R.; Pope, B. G.; Popkov, E.; Prosper, H. B.; Protopopescu, S.; Przybycien, M. B.; Qian, J.; Raja, R.; Rajagopalan, S.; Ramberg, E.; Rapidis, P. A.; Reay, N. W.; Reucroft, S.; Ridel, M.; Rijssenbeek, M.; Rizatdinova, F.; Rockwell, T.; Roco, M.; Royon, C.; Rubinov, P.; Ruchti, R.; Rutherfoord, J.; Sabirov, B. M.; Sajot, G.; Santoro, A.; Sawyer, L.; Schamberger, R. D.; Schellman, H.; Schwartzman, A.; Sen, N.; Shabalina, E.; Shivpuri, R. K.; Shpakov, D.; Shupe, M.; Sidwell, R. A.; Simak, V.; Singh, H.; Singh, J. B.; Sirotenko, V.; Slattery, P.; Smith, E.; Smith, R. P.; Snihur, R.; Snow, G. R.; Snow, J.; Snyder, S.; Solomon, J.; Song, Y.; Sorín, V.; Sosebee, M.; Sotnikova, N.; Soustruznik, K.; Souza, M.; Stanton, N. R.; Steinbrück, G.; Stephens, R. W.; Stichelbaut, F.; Stoker, D.; Stolin, V.; Stone, A.; Stoyanova, D. A.; Strang, M. A.; Strauss, M.; Strovink, M.; Stutte, L.; Sznajder, A.; Talby, M.; Taylor, W.; Tentindo-Repond, S.; Tripathi, S. M.; Trippe, T. G.; Turcot, A. S.; Tuts, P. M.; Vaniev, V.; van Kooten, R.; Varelas, N.; Vertogradov, L. S.; Villeneuve-Seguier, F.; Volkov, A. A.; Vorobiev, A. P.; Wahl, H. D.; Wang, H.; Wang, Z.-M.; Warchol, J.; Watts, G.; Wayne, M.; Weerts, H.; White, A.; White, J. T.; Whiteson, D.; Wightman, J. A.; Wijngaarden, D. A.; Willis, S.; Wimpenny, S. J.; Womersley, J.; Wood, D. R.; Xu, Q.; Yamada, R.; Yamin, P.; Yasuda, T.; Yatsunenko, Y.; Yip, K.; Youssef, S.; Yu, J.; Yu, Z.; Zanabria, M.; Zhang, X.; Zheng, H.; Zhou, B.; Zhou, Z.; Zielinski, M.; Zieminska, D.; Zieminski, A.; Zutshi, V.; Zverev, E. G.; Zylberstejn, A.

    2002-03-01

    The DØ Collaboration has studied for the first time the properties of hadron-collider jets reconstructed with a successive-combination algorithm based on relative transverse momenta (k⊥) of energy clusters. Using the standard value D=1.0 of the jet-separation parameter in the k⊥ algorithm, we find that the pT of such jets is higher than the ET of matched jets reconstructed with cones of radius R=0.7, by about 5 (8) GeV at pT~90 (240) GeV. To examine internal jet structure, the k⊥ algorithm is applied within D=0.5 jets to resolve any subjets. The multiplicity of subjets in jet samples at √s = 1800 GeV and 630 GeV is extracted separately for gluons (Mg) and quarks (Mq), and the ratio of average subjet multiplicities in gluon and quark jets is measured as (⟨Mg⟩−1)/(⟨Mq⟩−1) = 1.84 ± 0.15 (stat) +0.22/−0.18 (syst). This ratio is in agreement with the expectations from the HERWIG Monte Carlo event generator and a resummation calculation, and with observations in e+e- annihilations, and is close to the naive prediction for the ratio of color charges of CA/CF = 9/4 = 2.25.

  4. Chest wall infiltration by lung cancer: value of thin-sectional CT with different reconstruction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Uhrmeister, P.; Allmann, K.H.; Altehoefer, C.; Laubenberger, J.; Langer, M. [Department of Diagnostic Radiology, University Hospital Freiburg (Germany); Wertzel, H.; Hasse, J. [Department of Thoracic Surgery, University Hospital Freiburg (Germany)

    1999-09-01

    The aim of this investigation was to evaluate whether thin-sectional CT with different reconstruction algorithms can improve the diagnostic accuracy with regard to chest wall invasion in patients with peripheral bronchogenic carcinoma. Forty-one patients with intrapulmonary lesions and tumor contact to the thoracic wall as seen on CT staging underwent additional 1-mm CT slices with reconstruction in a high-resolution (HR) and an edge blurring, soft detail (SD) algorithm. Five criteria were applied and validated by histological findings. Using the criteria of the intact fat layer, HRCT had a sensitivity of 81 % and a specificity of 79 %, SD CT had a sensitivity of 96 % and a specificity of 78 %, and standard CT technique had a sensitivity of 50 % and a specificity of 71 %, respectively. Regarding changes of intercostal soft tissue, HRCT achieved a sensitivity of 71 % and a specificity of 96 %, SD CT had a sensitivity of 94 % and a specificity of 96 % (standard CT technique: sensitivity 50 % and specificity 96 %). For the other criteria, such as pleural contact area, angle, and osseous destruction, no significant differences were found. Diagnostic accuracy of chest wall infiltration can be improved by using thin sectional CT. Especially the application of an edge-blurring (SD) algorithm increases sensitivity and specificity without additional costs. (orig.) With 4 figs., 1 tab., 26 refs.

  5. An explicit reconstruction algorithm for the transverse ray transform of a second rank tensor field from three axis data

    Science.gov (United States)

    Desai, Naeem M.; Lionheart, William R. B.

    2016-11-01

    We give an explicit plane-by-plane filtered back-projection reconstruction algorithm for the transverse ray transform of symmetric second rank tensor fields on Euclidean three-space, using data from rotation about three orthogonal axes. We show that in the general case two-axis data is insufficient, but we give an explicit reconstruction procedure for the potential case with two-axis data. We describe a numerical implementation of the three-axis algorithm and give reconstruction results for simulated data.

  6. Electron bunch profile reconstruction based on phase-constrained iterative algorithm

    Science.gov (United States)

    Bakkali Taheri, F.; Konoplev, I. V.; Doucas, G.; Baddoo, P.; Bartolini, R.; Cowley, J.; Hooker, S. M.

    2016-03-01

    The phase retrieval problem occurs in a number of areas in physics and is the subject of continuing investigation. The one-dimensional case, e.g., the reconstruction of the temporal profile of a charged particle bunch, is particularly challenging and important for particle accelerators. Accurate knowledge of the longitudinal (time) profile of the bunch is important in the context of linear colliders, wakefield accelerators and for the next generation of light sources, including x-ray SASE FELs. Frequently applied methods, e.g., minimal phase retrieval or other iterative algorithms, are reliable if the Blaschke phase contribution is negligible. This, however, is neither known a priori nor can it be assumed to apply to an arbitrary bunch profile. We present a novel approach which gives reproducible, most-probable and stable reconstructions for bunch profiles (both artificial and experimental) that would otherwise remain unresolved by the existing techniques.

  7. An image reconstruction algorithm of EIT based on pulmonary prior information

    Institute of Scientific and Technical Information of China (English)

    Huaxiang WANG; Li HU; ling WANG; Lu LI

    2009-01-01

    Using a CT scan of pulmonary tissue, a human pulmonary model is established in the software COMSOL, incorporating the structural properties of human lung tissue. Combining the conductivity contributions of the relevant tissues and organs, an image reconstruction method for electrical impedance tomography based on pulmonary prior information is proposed using the conjugate gradient method. Simulation results show that the uniformity index of the sensitivity distribution of the pulmonary model is 15.568, significantly lower than the 34.218 obtained with a circular field. The proposed algorithm improves the uniformity of the sensing field, the image resolution of the conductivity distribution of pulmonary tissue, and the quality of the reconstructed image based on pulmonary prior information.
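
    A generic sketch of a regularized conjugate-gradient update of the kind a linearized EIT reconstruction relies on: solve (JᵀJ + αI) x = Jᵀ v. The Jacobian J here is random and α is arbitrary; in the study they would come from the pulmonary finite-element model and the prior information.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(11)
m, n = 104, 400                         # measurements vs. conductivity pixels
J = rng.standard_normal((m, n))         # stand-in sensitivity (Jacobian) matrix
v = rng.standard_normal(m)              # measured voltage perturbation
alpha = 1.0                             # Tikhonov regularization weight

A = J.T @ J + alpha * np.eye(n)         # normal equations with regularization
x, info = cg(A, J.T @ v, maxiter=500)   # conjugate-gradient solve
print("CG converged:", info == 0, " update norm:", np.linalg.norm(x).round(3))
```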

  8. Algorithmic Study of M-Estimators for Multi-Function Sensor Data Reconstruction

    Institute of Scientific and Technical Information of China (English)

    LIU Dan; SUN Jinwei; WEI Guo

    2007-01-01

    This paper describes a data reconstruction technique for a multi-function sensor based on the M-estimator, which uses least squares and weighted least squares methods. The algorithm is more robust than conventional least squares, which can amplify the errors of inaccurate data. The M-estimator places particular emphasis on reducing the effects of large data errors, which are further overcome by an iterative regression process that gives small weights to large off-group data errors and large weights to small data errors. Simulation results are consistent with the hypothesis, with 81 groups of regression data having an average accuracy of 3.5%, which demonstrates that the M-estimator provides more accurate and reliable data reconstruction.
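
    A generic iteratively reweighted least squares loop with a Huber M-estimator weight function, illustrating the "small weights for large errors" idea. The linear model, outlier pattern and tuning constant are textbook-style assumptions, not the authors' multi-sensor model.

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0, 10, 81)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, x.size)
y[::10] += 5.0                                     # inject gross outliers

A = np.c_[x, np.ones_like(x)]
beta = np.linalg.lstsq(A, y, rcond=None)[0]        # ordinary least squares start
for _ in range(20):
    r = y - A @ beta
    s = 1.4826 * np.median(np.abs(r)) + 1e-12      # robust scale estimate (MAD)
    k = 1.345 * s                                  # Huber tuning constant
    w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))
    sw = np.sqrt(w)                                # weighted least squares refit
    beta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
print("robust slope/intercept:", beta.round(3))
```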

  9. Layout Optimization of Sensor-based Reconstruction of Explosion Overpressure Field Based on the Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Miaomiao Bai

    2014-11-01

    Full Text Available In underwater blasting experiments, the layout of sensors has always been a major concern. From the perspective of reconstructing the explosion overpressure field, this paper presents four indicators which, combined with a genetic algorithm with global search, yield an optimal sensor layout scheme and guide sensor placement in practical experiments. A multi-scale model of every subregion of the underwater blasting field was then established and used in simulation experiments. The variation of these four indicators with different sensor layouts, as well as the reconstruction accuracy, is analyzed and discussed in Matlab. Finally, analysis and comparison of the simulation results show that the proposed scheme produces a better sensor layout, requiring fewer sensors while still achieving high accuracy. The scheme can serve as a reference for laying out sensors in actual test explosions.

  10. Reconstructing optical parameters from double-integrating-sphere measurements using a genetic algorithm

    Science.gov (United States)

    Böcklin, Christoph; Baumann, Dirk; Stuker, Florian; Klohs, Jan; Rudin, Markus; Fröhlich, Jürg

    2013-02-01

    For the reconstruction of physiological changes in specific tissue layers detected by optical techniques, the exact knowledge of the optical parameters μa, μs and g of different tissue types is of paramount importance. One approach to accurately determine these parameters for biological tissue or phantom material is to use a double-integrating-sphere measurement system. It offers a flexible way to measure various kinds of tissues, liquids and artificial phantom materials. Accurate measurements can be achieved by technical adjustments and calibration of the spheres using commercially available reflection and transmission standards. The determination of the optical parameters of a material is based on two separate steps. Firstly, the reflectance ρs, the total transmittance TsT and the unscattered transmittance TsC of the sample s are measured with the double-integrating-sphere setup. Secondly, the optical parameters μa, μs and g are reconstructed with an inverse search algorithm combined with an appropriate solver for the forward problem (calculating ρs, TsT and TsC from μa, μs and g). In this study a genetic algorithm is applied as the search heuristic, since it offers the most flexible and general approach without requiring any foreknowledge of the fitness landscape. Given the challenging
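
    A minimal genetic-algorithm inverse search in the spirit described above: find p = (μa, μs, g) whose forward prediction matches "measured" reflectance and transmittances. The forward model below is a crude analytic toy, not the radiative-transfer solver one would couple to real double-integrating-sphere data; the sample thickness d and all numbers are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 0.1                                                    # assumed thickness [cm]

def forward(p):
    mu_a, mu_s, g = p
    t_c = np.exp(-(mu_a + mu_s) * d)                       # unscattered transmittance
    rho = (1 - g) * mu_s * d / (1 + (mu_a + mu_s) * d)     # toy diffuse reflectance
    t_t = np.exp(-mu_a * d) / (1 + (1 - g) * mu_s * d)     # toy total transmittance
    return np.array([rho, t_t, t_c])

target = forward(np.array([1.0, 10.0, 0.8]))               # synthetic "measurement"
lo, hi = np.array([0.01, 1.0, 0.0]), np.array([5.0, 50.0, 0.99])
err = lambda p: np.sum((forward(p) - target) ** 2)

pop = rng.uniform(lo, hi, size=(60, 3))
for _ in range(150):
    order = np.argsort([err(p) for p in pop])
    parents = pop[order[:20]]                                       # truncation selection
    pa = parents[rng.integers(0, 20, 40)]
    pb = parents[rng.integers(0, 20, 40)]
    children = np.where(rng.random((40, 3)) < 0.5, pa, pb)          # uniform crossover
    children += rng.normal(0.0, 0.02, children.shape) * (hi - lo)   # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]), lo, hi)
best = pop[np.argmin([err(p) for p in pop])]
print("recovered (mu_a, mu_s, g):", np.round(best, 2))
```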

  11. Structural algorithm to reservoir reconstruction using passive seismic data (synthetic example)

    Energy Technology Data Exchange (ETDEWEB)

    Smaglichenko, Tatyana A.; Volodin, Igor A.; Lukyanitsa, Andrei A.; Smaglichenko, Alexander V.; Sayankina, Maria K. [Oil and Gas Research Institute, Russian Academy of Science, Gubkina str.3, 119333, Moscow (Russian Federation); Faculty of Computational Mathematics and Cybernetics, M.V. Lomonosov Moscow State University, Leninskie gory, 1, str.52,Second Teaching Building.119991 Moscow (Russian Federation); Shmidt' s Institute of Physics of the Earth, Russian Academy of Science, Bolshaya Gruzinskaya str. 10, str.1, 123995 Moscow (Russian Federation); Oil and Gas Research Institute, Russian Academy of Science, Gubkina str.3, 119333, Moscow (Russian Federation)

    2012-09-26

    The use of passive seismic observations to detect a reservoir is a new direction in the prospecting and exploration of hydrocarbons. In order to identify a thin reservoir model, we applied a modification of the Gaussian elimination method under conditions of incomplete synthetic data. Because of the singularity of the matrix, the conventional method does not work. Therefore, a structural algorithm has been developed by analyzing the given model as a complex model. Numerical results demonstrate its advantage compared with the usual solution approach. We conclude that the gas reservoir is reconstructed by retrieving the image of the encasing shale beneath it.

  12. Enhanced temporal resolution at cardiac CT with a novel CT image reconstruction algorithm: Initial patient experience

    Energy Technology Data Exchange (ETDEWEB)

    Apfaltrer, Paul, E-mail: paul.apfaltrer@medma.uni-heidelberg.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Institute of Clinical Radiology and Nuclear Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim (Germany); Schoendube, Harald, E-mail: harald.schoendube@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Schoepf, U. Joseph, E-mail: schoepf@musc.edu [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Allmendinger, Thomas, E-mail: thomas.allmendinger@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Tricarico, Francesco, E-mail: francescotricarico82@gmail.com [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Department of Bioimaging and Radiological Sciences, Catholic University of the Sacred Heart, “A. Gemelli” Hospital, Largo A. Gemelli 8, Rome (Italy); Schindler, Andreas, E-mail: andreas.schindler@campus.lmu.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Vogt, Sebastian, E-mail: sebastian.vogt@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Sunnegårdh, Johan, E-mail: johan.sunnegardh@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); and others

    2013-02-15

    Objective: To evaluate the effect of a temporal resolution improvement method (TRIM) for cardiac CT on diagnostic image quality for coronary artery assessment. Materials and methods: The TRIM-algorithm employs an iterative approach to reconstruct images from less than 180° of projections and uses a histogram constraint to prevent the occurrence of limited-angle artifacts. This algorithm was applied in 11 obese patients (7 men, 67.2 ± 9.8 years) who had undergone second generation dual-source cardiac CT with 120 kV, 175–426 mAs, and 500 ms gantry rotation. All data were reconstructed with a temporal resolution of 250 ms using traditional filtered-back projection (FBP) and of 200 ms using the TRIM-algorithm. Contrast attenuation and contrast-to-noise-ratio (CNR) were measured in the ascending aorta. The presence and severity of coronary motion artifacts was rated on a 4-point Likert scale. Results: All scans were considered of diagnostic quality. Mean BMI was 36 ± 3.6 kg/m{sup 2}. Average heart rate was 60 ± 9 bpm. Mean effective dose was 13.5 ± 4.6 mSv. When comparing FBP- and TRIM reconstructed series, the attenuation within the ascending aorta (392 ± 70.7 vs. 396.8 ± 70.1 HU, p > 0.05) and CNR (13.2 ± 3.2 vs. 11.7 ± 3.1, p > 0.05) were not significantly different. A total of 110 coronary segments were evaluated. All studies were deemed diagnostic; however, there was a significant (p < 0.05) difference in the severity score distribution of coronary motion artifacts between FBP (median = 2.5) and TRIM (median = 2.0) reconstructions. Conclusion: The algorithm evaluated here delivers diagnostic imaging quality of the coronary arteries despite 500 ms gantry rotation. Possible applications include improvement of cardiac imaging on slower gantry rotation systems or mitigation of the trade-off between temporal resolution and CNR in obese patients.

  13. Performance of the Mean-Timer algorithm for DT Local Reconstruction and muon time measurement.

    CERN Document Server

    CMS Collaboration

    2014-01-01

    The Mean-Timer algorithm has recently been implemented as the default for local reconstruction within the CMS Drift Tubes (DT) for muons that appear to be out-of-time (OOT) or lack measured hits in one of the two space projections. Besides improving the spatial resolution for OOT muons, this method allows a precise time measurement that can be used to tag OOT muons, in order either to reject them (e.g. as a result of OOT pile-up) or to select them for exotic physics analyses.

  14. An edge-preserving algorithm of joint image restoration and volume reconstruction for rotation-scanning 4D echocardiographic images

    Institute of Scientific and Technical Information of China (English)

    GUO Qiang; YANG Xin

    2006-01-01

    A statistical algorithm for reconstruction from time-sequence echocardiographic images is proposed in this paper. The ability to jointly restore the images and reconstruct the 3D volume without blurring the boundary is the main innovation of this algorithm. First, a Bayesian model based on MAP-MRF is used to reconstruct the 3D volume, and is extended to deal with images acquired by the rotation scanning method. Then, the spatiotemporal nature of ultrasound images is taken into account in the parameters of the energy function, which makes this statistical model anisotropic. Hence, not only can this method reconstruct 3D ultrasound images, it can also remove speckle noise anisotropically. Finally, we illustrate experiments with our method on synthetic and medical images and compare it with the isotropic reconstruction method.

  15. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm

    Directory of Open Access Journals (Sweden)

    Abbasi Mahdi

    2012-06-01

    Full Text Available Abstract Background Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of human organs such as the lungs, heart, brain and breast. Every practical EIT reconstruction algorithm should be sufficiently efficient in terms of convergence rate and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in the D-bar framework. Methods In the first step, synthetic and experimental data were used to compute an intermediate object named the scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via the sinc-convolution algorithm to find the square root of the conductivity for each pixel of the image. For the purpose of comparison, multigrid and NOSER algorithms were implemented under a similar setting. The quality of reconstructions of synthetic models was tested against GREIT-approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Results Evaluation of the synthetic reconstructions shows that the quality of sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the smallest relative errors and the closest agreement with the truth were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by the sinc-convolution algorithm. Conclusions Parametric evaluation demonstrates the efficiency of sinc-convolution in reconstructing accurate conductivity images from experimental data. Excellent results in phantom and clinical

  16. A compressed sensing-based iterative algorithm for CT reconstruction and its possible application to phase contrast imaging

    Directory of Open Access Journals (Sweden)

    Li Xueli

    2011-08-01

    Full Text Available Abstract Background Computed Tomography (CT) is a technology that obtains the tomogram of the observed objects. In real-world applications, especially biomedical applications, lower radiation doses have been constantly pursued. To shorten scanning time and reduce radiation dose, one can decrease the X-ray exposure time at each projection view or decrease the number of projections. Until quite recently, the traditional filtered back projection (FBP) method has been commonly exploited in CT image reconstruction. Applying the FBP method requires a large amount of projection data. Especially when the exposure speed is limited by the mechanical characteristics of the imaging facility, using the FBP method may prolong scanning time and accumulate a high dose of radiation, consequently damaging the biological specimens. Methods In this paper, we present a compressed sensing-based (CS-based) iterative algorithm for CT reconstruction. The algorithm minimizes the l1-norm of the sparse image as the constraint factor for the iteration procedure. With this method, we can reconstruct images from substantially reduced projection data and reduce the impact of artifacts introduced into the CT reconstructed image by insufficient projection information. Results To validate and evaluate the performance of this CS-based iterative algorithm, we carried out quantitative evaluation studies in imaging of both a software Shepp-Logan phantom and a real polystyrene sample. The former is completely absorption based and the latter is imaged in phase contrast. The results show that the CS-based iterative algorithm can yield images with quality comparable to that obtained with existing FBP and traditional algebraic reconstruction technique (ART) algorithms. Discussion Compared with the common reconstruction from 180 projection images, this algorithm completes CT reconstruction from only 60 projection images, cuts the scan time, and maintains the acceptable quality of the

  17. An Algorithm For Training Multilayer Perceptron MLP For Image Reconstruction Using Neural Network Without Overfitting.

    Directory of Open Access Journals (Sweden)

    Mohammad Mahmudul Alam Mia

    2015-02-01

    Full Text Available Abstract Recently, the back-propagation neural network (BPNN) has been applied successfully in many areas with excellent generalization results, for example rule extraction, classification and evaluation. In this paper, the Levenberg-Marquardt back-propagation algorithm is used for training the network and reconstructing the image. It is found that the Marquardt algorithm is significantly more efficient. A practical problem with MLPs is to select the correct complexity for the model, i.e. the right number of hidden units or correct regularization parameters. In this paper, a study is made to determine the number of neurons in every hidden layer and the number of hidden layers needed to obtain high accuracy. We performed regression (R) analysis to measure the correlation between outputs and targets.

  18. Adjoint-optimization algorithm for spatial reconstruction of a scalar source

    Science.gov (United States)

    Wang, Qi; Hasegawa, Yosuke; Meneveau, Charles; Zaki, Tamer

    2016-11-01

    Identifying the location of the source of a passive scalar transported in a turbulent environment based on remote measurements is an ill-posed problem. A conjugate-gradient algorithm is proposed that relies on eddy-resolving simulations of both the forward and adjoint scalar transport equations to reconstruct the spatial distribution of the source. The formulation can naturally accommodate measurements from multiple sensors. The algorithm is evaluated for scalar dispersion in turbulent channel flow (Reτ = 180). As the distance between the source and sensor increases, the accuracy of the source recovery deteriorates due to diffusive effects. Improvement in performance is demonstrated for higher Prandtl numbers and also with an increasing number of sensors. This study is supported by the National Science Foundation (Grant CNS 1461870).

  19. Reconstruction Algorithms for Positron Emission Tomography and Single Photon Emission Computed Tomography and their Numerical Implementation

    CERN Document Server

    Fokas, A S; Marinakis, V

    2004-01-01

    The modern imaging techniques of Positron Emission Tomography and of Single Photon Emission Computed Tomography are not only two of the most important tools for studying the functional characteristics of the brain, but they now also play a vital role in several areas of clinical medicine, including neurology, oncology and cardiology. The basic mathematical problems associated with these techniques are the construction of the inverse of the Radon transform and of the inverse of the so-called attenuated Radon transform, respectively. We first show that, by employing mathematical techniques developed in the theory of nonlinear integrable equations, it is possible to obtain analytic formulas for these two inverse transforms. We then present algorithms for the numerical implementation of these analytic formulas, based on approximating the given data in terms of cubic splines. Several numerical tests are presented which suggest that our algorithms are capable of producing accurate reconstruction for realistic phanto...
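
    As a numerical point of reference for the inverse Radon transform discussed above, the sketch below runs a standard filtered back-projection on a Shepp-Logan phantom with scikit-image. It does not reproduce the paper's analytic, spline-based inversion formulas or the attenuated transform.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)        # small 100x100 phantom for speed
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(image, theta=angles)               # forward Radon transform
recon = iradon(sinogram, theta=angles)              # ramp-filtered back-projection
print("RMSE of FBP reconstruction:", np.sqrt(np.mean((recon - image) ** 2)).round(4))
```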

  20. Performance of Hull-Detection Algorithms For Proton Computed Tomography Reconstruction

    CERN Document Server

    Schultze, Blake; Censor, Yair; Schulte, Reinhard; Schubert, Keith Evan

    2014-01-01

    Proton computed tomography (pCT) is a novel imaging modality developed for patients receiving proton radiation therapy. The purpose of this work was to investigate hull-detection algorithms used for preconditioning of the large and sparse linear system of equations that needs to be solved for pCT image reconstruction. The hull-detection algorithms investigated here included silhouette/space carving (SC), modified silhouette/space carving (MSC), and space modeling (SM). Each was compared to the cone-beam version of filtered backprojection (FBP) used for hull-detection. Data for testing these algorithms included simulated data sets of a digital head phantom and an experimental data set of a pediatric head phantom obtained with a pCT scanner prototype at Loma Linda University Medical Center. SC was the fastest algorithm, exceeding the speed of FBP by more than 100 times. FBP was most sensitive to the presence of noise. Ongoing work will focus on optimizing threshold parameters in order to define a fast and effic...

  1. Robust baseline-independent algorithms for segmentation and reconstruction of Arabic handwritten cursive script

    Science.gov (United States)

    Mostafa, Khaled; Darwish, Ahmed M.

    1999-01-01

    The problem of cursive script segmentation is an essential one for handwritten character recognition. This is especially true for Arabic text, where cursive is the only mode even for typewritten fonts. In this paper, we present a generalized segmentation approach for handwritten Arabic cursive scripts. The proposed approach is based on the analysis of the upper and lower contours of the word. The algorithm searches for local minima points along the upper contour and local maxima points along the lower contour of the word. These points are then marked as potential letter boundaries (PLB). A set of rules, based on the nature of Arabic cursive scripts, is then applied to both upper and lower PLB points to eliminate some of the improper ones. A matching process between upper and lower PLBs is then performed in order to obtain the minimum number of non-overlapping PLBs for each word. The output of the proposed segmentation algorithm is a set of labeled primitives that represent the Arabic word. In order to reconstruct the original word from its corresponding primitives and diacritics, a novel binding and dot assignment algorithm is introduced. The algorithm achieved a correct segmentation rate of 97.7% when tested on samples of loosely constrained handwritten cursive script words consisting of 7922 characters written by 14 different writers.
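
    A minimal, hypothetical sketch of the potential-letter-boundary (PLB) idea described above: local minima are sought along the upper contour and local maxima along the lower contour, and candidates are matched across the two contours. The contour arrays, the extremum order and the matching tolerance are invented stand-ins, not the paper's rule set.

```python
import numpy as np
from scipy.signal import argrelmax, argrelmin

# upper[i] / lower[i] hold the y coordinate of the word contours at column i;
# the arrays below are made-up stand-ins for a real extracted word contour.
upper = np.array([9, 8, 7, 8, 9, 9, 8, 6, 7, 9, 9, 8, 7, 8, 9], dtype=float)
lower = np.array([2, 3, 4, 3, 2, 2, 3, 5, 4, 2, 2, 3, 4, 3, 2], dtype=float)

upper_plb = argrelmin(upper, order=2)[0]         # candidate cuts on upper contour
lower_plb = argrelmax(lower, order=2)[0]         # candidate cuts on lower contour

# A simple matching rule (stand-in for the paper's rule set): keep an upper
# candidate only if some lower candidate lies within a small column tolerance.
tolerance = 1
matched = [int(u) for u in upper_plb
           if np.any(np.abs(lower_plb - u) <= tolerance)]
print("upper PLBs:", upper_plb, "lower PLBs:", lower_plb, "matched:", matched)
```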

  2. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    Energy Technology Data Exchange (ETDEWEB)

    Brady, S. L.; Yee, B. S.; Kaufman, R. A. [Department of Radiological Sciences, St. Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States)

    2012-09-15

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using (1, 5, 10) yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more

  3. Receiver operating characteristic (ROC) analysis of images reconstructed with iterative expectation maximization algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Yasuyuki; Murase, Kenya [Osaka Medical Coll., Takatsuki (Japan). Graduate School]; Higashino, Hiroshi; Sogabe, Ichiro; Sakamoto, Kana

    2001-12-01

    The quality of images reconstructed by means of the maximum likelihood-expectation maximization (ML-EM) and ordered subset (OS)-EM algorithms was examined with parameters such as the number of iterations and subsets, then compared with the quality of images reconstructed by the filtered back projection method. Phantoms showing signals inside signals, which mimicked single-photon emission computed tomography (SPECT) images of cerebral blood flow and myocardial perfusion, and phantoms showing signals around the signals obtained by SPECT of bone and tumor were used for experiments. To determine signals for recognition, SPECT images in which the signals could be appropriately recognized with a combination of fewer iterations and subsets of different sizes and densities were evaluated by receiver operating characteristic (ROC) analysis. The results of ROC analysis were applied to myocardial phantom experiments and scintigraphy of myocardial perfusion. Taking the image processing time into consideration, good SPECT images were obtained by OS-EM with 10 iterations and 5 subsets. This study will be helpful for selecting parameters such as the number of iterations and subsets when using the ML-EM or OS-EM algorithms. (author)
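
    For readers unfamiliar with the reconstruction methods being compared, the following is a minimal OS-EM sketch for a toy Poisson emission model (setting the number of subsets to 1 recovers ML-EM). The system matrix and counts are random stand-ins, not a SPECT system model.

```python
import numpy as np

# Minimal OS-EM sketch for a toy emission problem y ~ Poisson(A @ x).

rng = np.random.default_rng(1)
n_pix, n_rays, n_subsets, n_iter = 64, 256, 8, 10

A = rng.random((n_rays, n_pix))                  # toy system matrix
x_true = rng.random(n_pix) * 10
y = rng.poisson(A @ x_true).astype(float)        # noisy projection data

subsets = np.array_split(np.arange(n_rays), n_subsets)
x = np.ones(n_pix)                               # uniform initial image
for _ in range(n_iter):
    for rows in subsets:                         # one sub-iteration per subset
        Asub, ysub = A[rows], y[rows]
        ratio = ysub / np.maximum(Asub @ x, 1e-12)
        x *= (Asub.T @ ratio) / np.maximum(Asub.T @ np.ones(len(rows)), 1e-12)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```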

  4. Relative Performance Evaluation of Single Chip CFA Color Reconstruction Algorithms Used in Embedded Vision Devices

    Directory of Open Access Journals (Sweden)

    B. Mahesh

    2013-02-01

    Full Text Available Most digital cameras use a color filter array to capture the colors of the scene. Sub-sampled (down-sampled) versions of the red, green, and blue components are acquired by single-sensor embedded vision devices with the help of a color filter array (CFA) [1]. Hence interpolation of the missing color samples is necessary to reconstruct a full color image. This method of interpolation is called demosaicing (demosaicking). The least-squares luma-chroma demultiplexing algorithm (LSLCDA) for Bayer demosaicking [2] is the most effective and efficient demosaicking technique available in the literature. As almost all commercial camera companies make use of this cost-effective way of interpolating the missing colors and reconstructing the original image, the demosaicking arena has become a vital research domain for embedded color vision devices [3]. Hence, in this paper, the authors' aim is to analyze, implement and evaluate the relative performance of the best-known algorithms. Objective empirical values prove that LSLCDA is superior in performance
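
    The LSLCDA method itself is not reproduced here; as a much simpler baseline illustrating what demosaicing does, the sketch below performs bilinear interpolation of an RGGB Bayer mosaic via normalized convolution. The pattern layout and kernel are assumptions for the toy example.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(cfa):
    """cfa: 2-D mosaic with Bayer pattern R G / G B starting at (0, 0)."""
    h, w = cfa.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def interp(mask):
        # Normalized convolution: weighted average of the available samples
        # of one channel within a 3x3 neighbourhood.
        k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
        num = convolve(cfa * mask, k, mode='mirror')
        den = convolve(mask.astype(float), k, mode='mirror')
        return num / np.maximum(den, 1e-12)

    return np.dstack([interp(r_mask), interp(g_mask), interp(b_mask)])

mosaic = np.random.default_rng(0).random((8, 8))     # stand-in CFA data
rgb = bilinear_demosaic(mosaic)
print(rgb.shape)                                     # (8, 8, 3)
```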

  5. First-order convex feasibility algorithms for iterative image reconstruction in limited angular-range X-ray CT

    CERN Document Server

    Sidky, Emil Y; Pan, Xiaochuan

    2012-01-01

    Iterative image reconstruction (IIR) algorithms in Computed Tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates the design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this article, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for efficient algorithms for their solution -- thereby facilitating the IIR algorithm design process. An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex fea...
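
    The accelerated Chambolle-Pock variant developed in the paper is not reproduced here; as a minimal illustration of the convex feasibility idea itself, the sketch below cyclically projects onto a data-consistency set and a non-negativity set for a toy underdetermined system.

```python
import numpy as np

# Projection-onto-convex-sets (POCS) toy: find x with A @ x = b and x >= 0.

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))               # underdetermined toy system
x_true = np.abs(rng.standard_normal(100))
b = A @ x_true

x = np.zeros(100)
for _ in range(500):
    # Project onto the affine set {x : A x = b} (least-norm correction).
    x = x + A.T @ np.linalg.solve(A @ A.T, b - A @ x)
    # Project onto the non-negativity constraint set.
    x = np.maximum(x, 0.0)

print("residual:", np.linalg.norm(A @ x - b), "min(x):", x.min())
```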

  6. Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector

    Energy Technology Data Exchange (ETDEWEB)

    Rescigno, R., E-mail: regina.rescigno@iphc.cnrs.fr [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Finck, Ch.; Juliani, D. [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Spiriti, E. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali di Frascati (Italy); Istituto Nazionale di Fisica Nucleare - Sezione di Roma 3 (Italy); Baudot, J. [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Abou-Haidar, Z. [CNA, Sevilla (Spain); Agodi, C. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Alvarez, M.A.G. [CNA, Sevilla (Spain); Aumann, T. [GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany); Battistoni, G. [Istituto Nazionale di Fisica Nucleare - Sezione di Milano (Italy); Bocci, A. [CNA, Sevilla (Spain); Böhlen, T.T. [European Organization for Nuclear Research CERN, Geneva (Switzerland); Medical Radiation Physics, Karolinska Institutet and Stockholm University, Stockholm (Sweden); Boudard, A. [CEA-Saclay, IRFU/SPhN, Gif sur Yvette Cedex (France); Brunetti, A.; Carpinelli, M. [Istituto Nazionale di Fisica Nucleare - Sezione di Cagliari (Italy); Università di Sassari (Italy); Cirrone, G.A.P. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Cortes-Giraldo, M.A. [Departamento de Fisica Atomica, Molecular y Nuclear, University of Sevilla, 41080-Sevilla (Spain); Cuttone, G.; De Napoli, M. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Durante, M. [GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany); and others

    2014-12-11

    Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumors and the surrounding healthy tissues. Nowadays, only a limited set of double differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on the CMOS technology. The performances achieved using this device for hadrontherapy purposes are discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performances and the accuracy on reconstructed observables are evaluated on the basis of simulated and experimental data.

  7. Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector

    Science.gov (United States)

    Rescigno, R.; Finck, Ch.; Juliani, D.; Spiriti, E.; Baudot, J.; Abou-Haidar, Z.; Agodi, C.; Alvarez, M. A. G.; Aumann, T.; Battistoni, G.; Bocci, A.; Böhlen, T. T.; Boudard, A.; Brunetti, A.; Carpinelli, M.; Cirrone, G. A. P.; Cortes-Giraldo, M. A.; Cuttone, G.; De Napoli, M.; Durante, M.; Gallardo, M. I.; Golosio, B.; Iarocci, E.; Iazzi, F.; Ickert, G.; Introzzi, R.; Krimmer, J.; Kurz, N.; Labalme, M.; Leifels, Y.; Le Fevre, A.; Leray, S.; Marchetto, F.; Monaco, V.; Morone, M. C.; Oliva, P.; Paoloni, A.; Patera, V.; Piersanti, L.; Pleskac, R.; Quesada, J. M.; Randazzo, N.; Romano, F.; Rossi, D.; Rousseau, M.; Sacchi, R.; Sala, P.; Sarti, A.; Scheidenberger, C.; Schuy, C.; Sciubba, A.; Sfienti, C.; Simon, H.; Sipala, V.; Tropea, S.; Vanstalle, M.; Younis, H.

    2014-12-01

    Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumors and the surrounding healthy tissues. Nowadays, only a limited set of double differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on the CMOS technology. The performances achieved using this device for hadrontherapy purposes are discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performances and the accuracy on reconstructed observables are evaluated on the basis of simulated and experimental data.

  8. a Line-Based 3d Roof Model Reconstruction Algorithm: Tin-Merging and Reshaping (tmr)

    Science.gov (United States)

    Rau, J.-Y.

    2012-07-01

    A three-dimensional building model is one of the major components of a cyber-city and is vital for the realization of 3D GIS applications. In the last decade, airborne laser scanning (ALS) data have been widely used for 3D building model reconstruction and object extraction. Instead, based on 3D roof structural lines, this paper presents a novel algorithm for automatic roof model reconstruction. A line-based roof model reconstruction algorithm, called TIN-Merging and Reshaping (TMR), is proposed. The roof structural lines, such as edges, eaves and ridges, can be measured manually from an aerial stereo-pair, derived by feature line matching or inferred from ALS data. The originality of the TMR algorithm for 3D roof modelling is to perform geometric analysis and topology reconstruction among those unstructured lines and then reshape the roof-type using elevation information from the 3D structural lines. For topology reconstruction, a line-constrained Delaunay triangulation algorithm is adopted, where the input structural lines act as constraints and their vertices act as input points. Thus, the constructed TINs will not cross the structural lines. Later, at the Merging stage, the shared edge between two TINs is checked to see whether the original structural line exists. If not, those two TINs will be merged into a polygon. Iterative checking and merging of any two neighbouring TINs/polygons will result in roof polygons on the horizontal plane. Finally, at the Reshaping stage, any two structural lines with fixed height are used to adjust a planar function for the whole roof polygon. In case ALS data exist, the Reshaping stage can be simplified by adjusting the point cloud within the roof polygon. The proposed scheme reduces the complexity of 3D roof modelling and makes the modelling process easier. Five test datasets provided by ISPRS WG III/4 located at downtown Toronto, Canada and Vaihingen, Germany are used for experiment. The test sites cover high rise buildings and residential
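
    As a small, hypothetical illustration of the Reshaping stage described above (the TIN-merging and topology steps are omitted), the sketch below fits a planar roof function z = ax + by + c to the endpoints of the 3D structural lines bounding one roof polygon; the coordinates are invented.

```python
import numpy as np

# Each row is (x, y, z) of a structural-line vertex belonging to one roof face.
pts = np.array([[0.0, 0.0, 10.0],
                [8.0, 0.0, 10.0],
                [0.0, 6.0, 13.0],
                [8.0, 6.0, 13.0]])

# Least-squares plane z = a*x + b*y + c through the vertices.
A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
(a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)

def roof_height(x, y):
    """Elevation of the reshaped roof plane at (x, y)."""
    return a * x + b * y + c

print(roof_height(4.0, 3.0))   # height at the polygon centre
```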

  9. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyungjin, E-mail: khj.snuh@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Park, Chang Min, E-mail: cmpark@radiol.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Song, Yong Sub, E-mail: terasong@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Lee, Sang Min, E-mail: sangmin.lee.md@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Goo, Jin Mo, E-mail: jmgoo@plaza.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of)

    2014-05-15

    Purpose: To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. Materials and methods: CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, −630 and −800 HU) at 120 kVp with tube current–time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose{sup 4} and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Results: Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose{sup 4} at all radiation dose settings (p < 0.05). Conclusion: Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility.

  10. Development of statistical reconstruction algorithms applied to computer-assisted X-ray tomography (Developpement d'algorithmes de reconstruction statistique appliques en tomographie rayons-X assistee par ordinateur)

    Science.gov (United States)

    Thibaudeau, Christian

    Computed tomography (CT) provides, in a non-invasive way, a three-dimensional image of the internal anatomy of a subject. It is the logical evolution of radiography and allows a volume to be observed in different planes (sagittal, coronal, axial or any other plane). CT can advantageously complement positron emission tomography (PET), a tool of choice used in biomedical research and for cancer diagnosis. PET provides functional, physiological and metabolic information, allowing the localization and quantification of radiotracers inside the human body. The latter has an unrivalled sensitivity, but can nevertheless suffer from low spatial resolution and a lack of anatomical landmarks depending on the radiotracer used. The combination, or fusion, of PET and CT images provides this anatomical localization of the radiotracer distribution. The CT image represents a map of the attenuation experienced by the X-rays during their passage through the tissues. It therefore also improves the quantification of the PET image by offering the possibility of correcting for attenuation. The CT image is obtained by transforming attenuation profiles into a Cartesian image that can be interpreted by a human. While the quality of this image is strongly influenced by the performance of the scanner, it also depends greatly on the ability of the reconstruction algorithm to obtain a faithful representation of the imaged medium. Standard reconstruction techniques, based on filtered back-projection (FBP), rely on a mathematically perfect model of the acquisition geometry. An alternative to this reference method is called statistical, or iterative, reconstruction. It yields better results in the presence of noise or of a limited amount of information and can virtually adapt to all forms

  11. A perspective matrix-based seed reconstruction algorithm with applications to C-arm based intra-operative dosimetry

    Science.gov (United States)

    Narayanan, Sreeram; Cho, Paul S.

    2006-03-01

    Currently available seed reconstruction algorithms are based on the assumption that accurate information about the imaging geometry is known. The assumption is valid for isocentric x-ray units such as radiotherapy simulators. However, the large majority of clinics performing prostate brachytherapy today use C-arms, for which imaging parameters such as the source-to-axis distance, the image acquisition angles and the central axis of the image are not accurately known. We propose a seed reconstruction algorithm that requires no such knowledge of the geometry. The new algorithm makes use of a perspective projection matrix, which can be easily derived from a set of known reference points. The perspective matrix maps a point in 3D space to the imaging coordinate system. An accurate representation of the imaging geometry can be derived from the generalized projection matrix (GPM) with eleven degrees of freedom. In this paper we show how the GPM can be derived given a theoretical minimum number of reference points. We propose an algorithm to compute the line equation that defines the backprojection operation given the GPM. The approach can be extended to any ray-tracing based seed reconstruction algorithm. Reconstruction using the GPM does not require calibration of C-arms, and the images can be acquired at arbitrary angles. The reconstruction is performed in near real-time. Our simulations show that reconstruction using the GPM is robust and that its accuracy is independent of the source-to-detector distance and of the location of the reference points used to generate the GPM. Seed reconstruction from C-arm images acquired at unknown geometry provides a useful tool for intra-operative dosimetry in prostate brachytherapy.
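
    A minimal sketch of how a 3x4 generalized projection matrix can be estimated from known reference points with the direct linear transform, and how a backprojection ray can then be formed from it. The synthetic camera and points are assumptions for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
P_true = np.vstack([rng.standard_normal((2, 4)),
                    [0.0, 0.0, 1.0, 5.0]])           # keeps all points in front
X = np.c_[rng.uniform(-1, 1, (8, 3)), np.ones(8)]    # homogeneous 3-D points
x_img = (P_true @ X.T).T
x_img = x_img[:, :2] / x_img[:, 2:3]                 # perspective division

# Build the 2N x 12 DLT system and take its right null vector via the SVD.
rows = []
for Xh, (u, v) in zip(X, x_img):
    rows.append(np.r_[Xh, np.zeros(4), -u * Xh])
    rows.append(np.r_[np.zeros(4), Xh, -v * Xh])
P = np.linalg.svd(np.array(rows))[2][-1].reshape(3, 4)

# Backprojection ray of an image point (u, v): source position and direction.
u, v = x_img[0]
M, p4 = P[:, :3], P[:, 3]
ray_origin = -np.linalg.solve(M, p4)                 # camera (source) centre
ray_direction = np.linalg.solve(M, np.array([u, v, 1.0]))

reproj = (P @ X.T).T
reproj = reproj[:, :2] / reproj[:, 2:3]
print("max reprojection error:", np.abs(reproj - x_img).max())
```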

  12. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and subsequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known reconstruction algorithms. An additional potential benefit of reducing the number of projections would be a reduction in the time during which motion artifacts can occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
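
    The Douglas–Rachford splitting and the total-variation term are not reproduced here; the sketch below only illustrates the randomized Kaczmarz building block named above, applied to a consistent toy linear system with rows sampled in proportion to their squared norms.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
b = A @ x_true

row_norms_sq = (A ** 2).sum(axis=1)
probs = row_norms_sq / row_norms_sq.sum()        # sample rows ~ ||a_i||^2

x = np.zeros(50)
for _ in range(5000):
    i = rng.choice(len(b), p=probs)
    a_i = A[i]
    # Project the current iterate onto the hyperplane {x : a_i . x = b_i}.
    x += (b[i] - a_i @ x) / row_norms_sq[i] * a_i

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```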

  13. The reconstruction algorithm used for [{sup 68}Ga]PSMA-HBED-CC PET/CT reconstruction significantly influences the number of detected lymph node metastases and coeliac ganglia

    Energy Technology Data Exchange (ETDEWEB)

    Krohn, Thomas [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Ulm University, Department of Nuclear Medicine, Ulm (Germany); Birmes, Anita; Winz, Oliver H.; Drude, Natascha I. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Mottaghy, Felix M. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Maastricht UMC+, Department of Nuclear Medicine, Maastricht (Netherlands); Behrendt, Florian F. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Radiology Institute ''Aachen Land'', Wuerselen (Germany); Verburg, Frederik A. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); University Hospital Giessen and Marburg, Department of Nuclear Medicine, Marburg (Germany)

    2017-04-15

    To investigate whether the numbers of lymph node metastases and coeliac ganglia delineated on [{sup 68}Ga]PSMA-HBED-CC PET/CT scans differ among datasets generated using different reconstruction algorithms. Data were constructed using the BLOB-OS-TF, BLOB-OS and 3D-RAMLA algorithms. All reconstructions were assessed by two nuclear medicine physicians for the number of pelvic/paraaortal lymph node metastases as well as the number of coeliac ganglia. Standardized uptake values (SUV) were also calculated in different regions. At least one [{sup 68}Ga]PSMA-HBED-CC PET/CT-positive pelvic or paraaortal lymph node metastasis was found in 49 and 35 patients using the BLOB-OS-TF algorithm, in 42 and 33 patients using the BLOB-OS algorithm, and in 41 and 31 patients using the 3D-RAMLA algorithm, respectively, and a positive ganglion was found in 92, 59 and 24 of 100 patients using the three algorithms, respectively. Quantitatively, the SUVmean and SUVmax were significantly higher with the BLOB-OS algorithm than with either the BLOB-OS-TF or the 3D-RAMLA algorithm in all measured regions (p < 0.001 for all comparisons). The differences between the SUVs with the BLOB-OS-TF and 3D-RAMLA algorithms were not significant in the aorta (SUVmean, p = 0.93; SUVmax, p = 0.97) but were significant in all other regions (p < 0.001 in all cases). The SUVmean ganglion/gluteus ratio was significantly higher with the BLOB-OS-TF algorithm than with either the BLOB-OS or the 3D-RAMLA algorithm and was significantly higher with the BLOB-OS than with the 3D-RAMLA algorithm (p < 0.001 in all cases). The results of [{sup 68}Ga]PSMA-HBED-CC PET/CT are affected by the reconstruction algorithm used. The highest number of lesions and physiological structures will be visualized using a modern algorithm employing time-of-flight information. (orig.)

  14. New reconstruction algorithm allows shortened acquisition time for myocardial perfusion SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Valenta, Ines; Treyer, Valerie; Husmann, Lars; Gaemperli, Oliver; Schindler, Michael J.; Herzog, Bernhard A.; Veit-Heibach, Patrick; Pazhenkottil, Aju P.; Kaufmann, Philipp A. [University Hospital Zurich, Cardiac Imaging, Zurich (Switzerland); University of Zurich, Zurich Center for Integrative Human Physiology, Zurich (Switzerland); Buechel, Ronny R.; Nkoulou, Rene [University Hospital Zurich, Cardiac Imaging, Zurich (Switzerland)

    2010-04-15

    Shortening scan time and/or reducing radiation dose at maintained image quality are the main issues of current research in radionuclide myocardial perfusion imaging (MPI). We aimed to validate a new iterative reconstruction (IR) algorithm for SPECT MPI allowing shortened acquisition time (HALF time) while maintaining image quality vs. standard full time acquisition (FULL time). In this study, 50 patients, referred for evaluation of known or suspected coronary artery disease by SPECT MPI using 99mTc-Tetrofosmin, underwent a 1-day adenosine stress 300 MBq/rest 900 MBq protocol with a standard scan (stress 15 min/rest 15 min FULL time) immediately followed by a short emission scan (stress 9 min/rest 7 min HALF time) on a Ventri SPECT camera (GE Healthcare). FULL time scans were processed with IR; short scans were additionally processed with a recently developed software algorithm for HALF time emission scans. All reconstructions were subsequently analyzed using commercially available software (QPS/QGS, Cedars-Sinai) with/without X-ray based attenuation correction (AC). Uptake values (percent of maximum) were compared by regression and Bland-Altman (BA) analysis in a 20-segment model. HALF scans yielded a 96% readout and 100% clinical diagnosis concordance compared to FULL. Correlation for uptake in each segment (n = 1,000) was r = 0.87 at stress (p < 0.001) and r = 0.89 at rest (p < 0.001), with respective BA limits of agreement of -11% to 10% and -12% to 11%. After AC, similar correlations (r = 0.82, rest; r = 0.80, stress, both p < 0.001) and BA limits were found (-12% to 10%; -13% to 12%). With the new IR algorithm, SPECT MPI can be acquired at half of the scan time without compromising image quality, resulting in an excellent agreement with FULL time scans with regard to uptake and clinical conclusion. (orig.)

  15. Direct reconstruction algorithm of current dipoles for vector magnetoencephalography and electroencephalography

    Energy Technology Data Exchange (ETDEWEB)

    Nara, Takaaki [Graduate School of Information Science and Technology, University of Tokyo, 7-3-1, Hongo, Bunkyo, Tokyo 113-8656 (Japan); Oohama, Junji [Graduate School of Information Science and Technology, University of Tokyo, 7-3-1, Hongo, Bunkyo, Tokyo 113-8656 (Japan); Hashimoto, Masaru [Graduate School of Information Science and Technology, University of Tokyo, 7-3-1, Hongo, Bunkyo, Tokyo 113-8656 (Japan); Takeda, Tsunehiro [Graduate School of Frontier Science, University of Tokyo, 5-1-5 Kashiwa-no-ha, Kashiwa, Chiba 277-8561 (Japan); Ando, Shigeru [Graduate School of Information Science and Technology, University of Tokyo, 7-3-1, Hongo, Bunkyo, Tokyo 113-8656 (Japan)

    2007-07-07

    This paper presents a novel algorithm to reconstruct the parameters of a sufficient number of current dipoles that describe the data (equivalent current dipoles, ECDs, hereafter) from radial/vector magnetoencephalography (MEG) with and without electroencephalography (EEG). We assume a three-compartment head model and arbitrary surfaces on which the MEG sensors and EEG electrodes are placed. Via the multipole expansion of the magnetic field, we obtain algebraic equations relating the dipole parameters to the vector MEG/EEG data. By solving them directly, without providing initial parameter guesses and computing forward solutions iteratively, the dipole positions and moments projected onto the xy-plane (equatorial plane) are reconstructed from a single time snapshot of the data. In addition, when the head layers and the sensor surfaces are spherically symmetric, we show that the required data reduce to radial MEG only. This clarifies the advantage of vector MEG/EEG measurements and algorithms for generally shaped head and sensor surfaces. In the numerical simulations, the centroids of the patch sources are well localized using vector/radial MEG measured on the upper hemisphere. By assuming the model order to be larger than the actual dipole number, the resultant spurious dipole is shown to have a much weaker magnetic moment (about 0.05 times that of the true dipoles when the SNR = 16 dB), so that the number of ECDs is reasonably estimated. We consider that our direct method, with its greatly reduced computational cost, can also be used to provide a good initial guess for conventional dipolar/multipolar fitting algorithms.

  16. A Solution to Reconstruct Cross-Cut Shredded Text Documents Based on Character Recognition and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Hedong Xu

    2014-01-01

    Full Text Available The reconstruction of destroyed paper documents has attracted increasing interest in recent years. This topic is relevant to the fields of forensics, investigative sciences, and archeology. Previous research and analysis on the reconstruction of cross-cut shredded text documents (RCCSTD) are mainly based on likelihood and on traditional heuristic algorithms. In this paper, a feature-matching algorithm based on character recognition, via an established database of letters, is presented; the shredded document is reconstructed by row clustering, intrarow splicing, and interrow splicing. Row clustering is performed by a clustering algorithm according to the clustering vectors of the fragments. Intrarow splicing, regarded as a travelling salesman problem, is solved by an improved genetic algorithm. Finally, the document is reconstructed by interrow splicing according to the line spacing and the proximity of the fragments. Computational experiments suggest that the presented algorithm is of high precision and efficiency, and that it may be useful for cross-cut shredded text documents of different sizes.
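
    As a rough, hypothetical sketch of the intrarow splicing step treated as a travelling-salesman-type problem, the toy genetic algorithm below orders random column strips of a synthetic image by matching their left and right edges; the character-recognition features, clustering vectors and GA settings of the paper are not modelled.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frag, height, width = 12, 20, 6
page = np.cumsum(rng.random((height, n_frag * width)), axis=1)   # smooth rows
fragments = [page[:, i*width:(i+1)*width] for i in rng.permutation(n_frag)]

def cost(order):
    # Sum of edge mismatches between consecutive fragments in this ordering.
    return sum(np.abs(fragments[a][:, -1] - fragments[b][:, 0]).sum()
               for a, b in zip(order[:-1], order[1:]))

def order_crossover(p1, p2):
    # Classic OX crossover for permutation-encoded individuals.
    i, j = sorted(rng.choice(n_frag, 2, replace=False))
    child = [-1] * n_frag
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child]
    child[:i] = rest[:i]
    child[j:] = rest[i:]
    return child

population = [list(rng.permutation(n_frag)) for _ in range(60)]
for _ in range(200):
    population.sort(key=cost)
    parents = population[:20]                       # simple truncation selection
    children = []
    while len(children) < 40:
        a, b = rng.choice(len(parents), 2, replace=False)
        child = order_crossover(parents[a], parents[b])
        if rng.random() < 0.2:                      # swap mutation
            u, v = rng.choice(n_frag, 2, replace=False)
            child[u], child[v] = child[v], child[u]
        children.append(child)
    population = parents + children

best = min(population, key=cost)
print("best splicing order:", best, "cost:", round(cost(best), 2))
```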

  17. A 3D reconstruction algorithm for magneto-acoustic tomography with magnetic induction based on ultrasound transducer characteristics

    Science.gov (United States)

    Ma, Ren; Zhou, Xiaoqing; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-12-01

    In this study we present a three-dimensional (3D) reconstruction algorithm for magneto-acoustic tomography with magnetic induction (MAT-MI) based on the characteristics of the ultrasound transducer. The algorithm is investigated to solve the blur problem of the MAT-MI acoustic source image, which is caused by the ultrasound transducer and the scanning geometry. First, we established a transducer model matrix using measured data from the real transducer. With reference to the S-L model used in the computed tomography algorithm, a 3D phantom model of electrical conductivity is set up. Both sphere scanning and cylinder scanning geometries are adopted in the computer simulation. Then, using finite element analysis, the distribution of the eddy current and the acoustic source as well as the acoustic pressure can be obtained with the transducer model matrix. Next, using singular value decomposition, the inverse transducer model matrix together with the reconstruction algorithm are worked out. The acoustic source and the conductivity images are reconstructed using the proposed algorithm. Comparisons between an ideal point transducer and the realistic transducer are made to evaluate the algorithms. Finally, an experiment is performed using a graphite phantom. We found that images of the acoustic source reconstructed using the proposed algorithm are a better match than those using the previous one, the correlation coefficient of sphere scanning geometry is 98.49% and that of cylinder scanning geometry is 94.96%. Comparison between the ideal point transducer and the realistic transducer shows that the correlation coefficients are 90.2% in sphere scanning geometry and 86.35% in cylinder scanning geometry. The reconstruction of the graphite phantom experiment also shows a higher resolution using the proposed algorithm. We conclude that the proposed reconstruction algorithm, which considers the characteristics of the transducer, can obviously improve the resolution of the

  18. A unified treatment of some iterative algorithms in signal processing and image reconstruction

    Science.gov (United States)

    Byrne, Charles

    2004-02-01

    Let T be a (possibly nonlinear) continuous operator on a Hilbert space \(\mathcal{H}\). If, for some starting vector x, the orbit sequence \(\{T^k x,\ k = 0, 1, \ldots\}\) converges, then the limit z is a fixed point of T; that is, Tz = z. An operator N on a Hilbert space \(\mathcal{H}\) is nonexpansive (ne) if, for each x and y in \(\mathcal{H}\), \(\|Nx - Ny\| \leq \|x - y\|\). Even when N has fixed points, the orbit sequence \(\{N^k x\}\) need not converge; consider the example N = -I, where I denotes the identity operator. However, for any \(\alpha \in (0,1)\) the iterative procedure defined by \(x^{k+1} = (1-\alpha)x^k + \alpha N x^k\) converges (weakly) to a fixed point of N whenever such points exist. This is the Krasnoselskii-Mann (KM) approach to finding fixed points of ne operators. A wide variety of iterative procedures used in signal processing and image reconstruction and elsewhere are special cases of the KM iterative procedure, for particular choices of the ne operator N. These include the Gerchberg-Papoulis method for bandlimited extrapolation, the SART algorithm of Anderson and Kak, the Landweber and projected Landweber algorithms, simultaneous and sequential methods for solving the convex feasibility problem, the ART and Cimmino methods for solving linear systems of equations, the CQ algorithm for solving the split feasibility problem and Dolidze's procedure for the variational inequality problem for monotone operators.
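
    A minimal sketch of the KM iteration itself, using an invented nonexpansive operator (the average of two convex projections) whose fixed point solves the corresponding convex feasibility problem; any of the operators listed above could be substituted for N.

```python
import numpy as np

def km_iterate(N, x0, alpha=0.5, n_iter=200):
    """Krasnoselskii-Mann iteration x_{k+1} = (1-alpha) x_k + alpha N(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = (1.0 - alpha) * x + alpha * N(x)
    return x

# Example nonexpansive operator: the average of projections onto two convex
# sets, the unit ball and the half-space {x : x[0] >= 1}; since the two sets
# intersect, the fixed points are exactly the feasible points.
def project_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def project_halfspace(x):
    y = x.copy()
    y[0] = max(y[0], 1.0)
    return y

N = lambda x: 0.5 * (project_ball(x) + project_halfspace(x))
print(km_iterate(N, np.array([3.0, -2.0])))      # converges near [1, 0]
```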

  19. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Science.gov (United States)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.

  20. Complexity Analysis of an Interior Point Algorithm for the Semidefinite Optimization Based on a Kernel Function with a Double Barrier Term

    Institute of Scientific and Technical Information of China (English)

    Mohamed ACHACHE

    2015-01-01

    In this paper, we establish the polynomial complexity of a primal-dual path-following interior point algorithm for solving semidefinite optimization (SDO) problems. The proposed algorithm is based on a new kernel function which differs from the existing kernel functions in that it has a double barrier term. With this function we define a new search direction and also a new proximity function for analyzing its complexity. We show that if $q_1 > q_2 > 1$, the algorithm has $O\big((q_1+1)\, n^{\frac{q_1+1}{2(q_1-q_2)}}\log\frac{n}{\varepsilon}\big)$ and $O\big((q_1+1)^{\frac{3q_1-2q_2+1}{2(q_1-q_2)}}\sqrt{n}\log\frac{n}{\varepsilon}\big)$ complexity bounds for large- and small-update methods, respectively.

  1. An efficient reconstruction algorithm for differential phase-contrast tomographic images from a limited number of views

    Energy Technology Data Exchange (ETDEWEB)

    Sunaguchi, Naoki [Faculty of Science and Technology, Gunma University, Kiryu, Gunma 376-8515 (Japan); Yuasa, Tetsuya [Graduate School of Engineering and Science, Yamagata University, Yonezawa, Yamagata 992-8510 (Japan); Gupta, Rajiv [Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Ando, Masami [Research Institute for Science and Technology, Tokyo University of Science, Noda, Chiba 278-8510 (Japan)

    2015-12-21

    The main focus of this paper is the reconstruction of a tomographic phase-contrast image from a set of projections. We propose an efficient reconstruction algorithm for differential phase-contrast computed tomography that can considerably reduce the number of projections required for reconstruction. The key result underlying this research is a projection theorem which states that the second derivative of the projection set is linearly related to the Laplacian of the tomographic image. The proposed algorithm first reconstructs the Laplacian image of the phase-shift distribution from the second derivative of the projections using total variation regularization. The second step is to obtain the phase-shift distribution by solving a Poisson equation, under the Dirichlet condition, whose source is the previously reconstructed Laplacian image. We demonstrate the efficacy of this algorithm using both synthetically generated simulation data and projection data acquired experimentally at a synchrotron. The experimental phase data were acquired from a human coronary artery specimen using dark-field-imaging optics pioneered by our group. Our results demonstrate that the proposed algorithm can reduce the number of projections to approximately 33% of that required by the conventional filtered backprojection method, without any detrimental effect on the image quality.
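
    As a hypothetical illustration of the second step described above, the sketch below recovers a toy "phase" image from its discrete Laplacian by solving the Poisson equation with Dirichlet boundary values on a unit-spaced grid; the smooth test image stands in for real phase data.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 64
x = np.linspace(0, 1, n)
phi_true = np.outer(np.sin(2 * np.pi * x), np.cos(np.pi * x))   # toy phase map

# Discrete Laplacian (5-point stencil) of the interior pixels: the "source".
lap = (phi_true[:-2, 1:-1] + phi_true[2:, 1:-1] +
       phi_true[1:-1, :-2] + phi_true[1:-1, 2:] - 4 * phi_true[1:-1, 1:-1])

m = n - 2                                        # interior nodes per axis
T = sp.diags([1, -2, 1], [-1, 0, 1], shape=(m, m))
I = sp.identity(m)
A = sp.kron(I, T) + sp.kron(T, I)                # 2-D Laplacian operator

# Move the known Dirichlet boundary values to the right-hand side.
rhs = lap.copy()
rhs[0, :]  -= phi_true[0, 1:-1]
rhs[-1, :] -= phi_true[-1, 1:-1]
rhs[:, 0]  -= phi_true[1:-1, 0]
rhs[:, -1] -= phi_true[1:-1, -1]

phi_interior = spsolve(A.tocsr(), rhs.ravel()).reshape(m, m)
print("max error:", np.abs(phi_interior - phi_true[1:-1, 1:-1]).max())
```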

  2. Towards clinical application of a Laplace operator-based region of interest reconstruction algorithm in C-arm CT.

    Science.gov (United States)

    Xia, Yan; Hofmann, Hannes; Dennerlein, Frank; Mueller, Kerstin; Schwemmer, Chris; Bauer, Sebastian; Chintalapani, Gouthami; Chinnadurai, Ponraj; Hornegger, Joachim; Maier, Andreas

    2014-03-01

    It is known that a reduction of the field-of-view in 3-D X-ray imaging is proportional to a reduction in radiation dose. The resulting truncation, however, is incompatible with conventional reconstruction algorithms. Recently, a novel method for region of interest reconstruction that uses neither prior knowledge nor extrapolation has been published, named approximated truncation robust algorithm for computed tomography (ATRACT). It is based on a decomposition of the standard ramp filter into a 2-D Laplace filtering and a 2-D Radon-based residual filtering step. In this paper, we present two variants of the original ATRACT. One is based on expressing the residual filter as an efficient 2-D convolution with an analytically derived kernel. The second variant is to apply ATRACT in 1-D to further reduce computational complexity. The proposed algorithms were evaluated by using a reconstruction benchmark, as well as two clinical data sets. The results are encouraging since the proposed algorithms achieve a speed-up factor of up to 245 compared to the 2-D Radon-based ATRACT. Reconstructions of high accuracy are obtained; e.g., even real-data reconstruction in the presence of severe truncation achieves a relative root mean square error of as little as 0.92% with respect to nontruncated data.

  3. Phase-contrast CT: fundamental theorem and fast image reconstruction algorithms

    Science.gov (United States)

    Bronnikov, Andrei V.

    2006-08-01

    Phase-contrast x-ray computed tomography (CT) is an emerging imaging technique that can be implemented at third generation synchrotron radiation sources or by using a microfocus x-ray tube. Promising experimental results have recently been obtained in material science and biological applications. At the same time, the lack of a mathematical theory comparable to that of conventional absorption-based CT limits the progress in this field. We suggest such a theory and prove a fundamental theorem that plays the same role for phase-contrast CT as the Fourier slice theorem does for absorption-based CT. The fundamental theorem allows us to derive fast image reconstruction algorithms in the form of filtered backprojection (FBP).

  4. Statistical and systematic uncertainties in pixel-based source reconstruction algorithms for gravitational lensing

    CERN Document Server

    Tagore, Amitpal

    2014-01-01

    Gravitational lens modeling of spatially resolved sources is a challenging inverse problem with many observational constraints and model parameters. We examine established pixel-based source reconstruction algorithms for de-lensing the source and constraining lens model parameters. Using test data for four canonical lens configurations, we explore statistical and systematic uncertainties associated with gridding, source regularisation, interpolation errors, noise, and telescope pointing. Specifically, we compare two gridding schemes in the source plane: a fully adaptive grid that follows the lens mapping but is irregular, and an adaptive Cartesian grid. We also consider regularisation schemes that minimise derivatives of the source (using two finite difference methods) and introduce a scheme that minimises deviations from an analytic source profile. Careful choice of gridding and regularisation can reduce "discreteness noise" in the $\\chi^2$ surface that is inherent in the pixel-based methodology. With a grid...

  5. Comparison of parametric FBP and OS-EM reconstruction algorithm images for PET dynamic study

    Energy Technology Data Exchange (ETDEWEB)

    Oda, Keiichi; Uemura, Koji; Kimura, Yuichi; Senda, Michio [Tokyo Metropolitan Inst. of Gerontology (Japan). Positron Medical Center; Toyama, Hinako; Ikoma, Yoko

    2001-10-01

    An ordered subsets expectation maximization (OS-EM) algorithm is used for image reconstruction to suppress image noise and to produce non-negative-value images. We have applied OS-EM to a digital brain phantom and to human brain {sup 18}F-FDG PET kinetic studies to generate parametric images. A 45 min dynamic scan was performed starting at the injection of FDG with a 2D PET scanner. The images were reconstructed with OS-EM (6 iterations, 16 subsets) and with filtered backprojection (FBP), and K1, k2 and k3 images were created by the Marquardt non-linear least squares method based on the 3-parameter kinetic model. Although the OS-EM activity images correlated fairly well with those obtained by FBP, the pixel correlations were poor for the k2 and k3 parametric images; nevertheless, the plots were scattered along the line of identity, and the mean values for K1, k2 and k3 obtained by OS-EM were almost equal to those by FBP. The kinetic fitting error for OS-EM was no smaller than that for FBP. The results suggest that OS-EM is not necessarily superior to FBP for creating parametric images. (author)

  6. Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing

    Science.gov (United States)

    Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim

    2011-03-01

    Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But, their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU-clusters, GPUs, FPGAs etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders-of-magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But, communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.

  7. Progress Implementing a Model-Based Iterative Reconstruction Algorithm for Ultrasound Imaging of Thick Concrete

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Johnson, Christi R [ORNL; Clayton, Dwight A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2017-01-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.

  8. Progress implementing a model-based iterative reconstruction algorithm for ultrasound imaging of thick concrete

    Science.gov (United States)

    Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2017-02-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.

  9. A comparative study of interface reconstruction algorithms in molten metal flow

    Directory of Open Access Journals (Sweden)

    Young-Sim Choi

    2013-01-01

    Full Text Available In the present research, two numerical schemes for improving the accuracy of the solution in the flow simulation of molten metal were applied. One method is the Piecewise Linear Interface Calculation (PLIC) method and the other is the Donor-Acceptor (D-A) method. To verify the interface reconstruction modules, simple test problems were solved. After these validations, the accuracy and efficiency of the two methods were compared by simulating various real products. In the numerical simulation of free surface flow, the PLIC method can track the interface between phases very accurately. The PLIC method, however, has the weakness that it requires a large amount of computational time, even though it yields the more accurate interface reconstruction. The Donor-Acceptor method is effective enough for the macro-observation of a mold filling sequence, though it shows inferior accuracy. Therefore, for problems that need an accurate solution, PLIC is more appropriate than D-A. Greater accuracy may come at the cost of efficiency in numerical analysis; which of the D-A and PLIC methods should be chosen depends on the product.

  10. Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms

    Science.gov (United States)

    Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid

    2017-09-01

    The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). First, we discuss the combination of bundle adjustment and GAs that we have proposed in order to improve the efficiency and success of 3D reconstruction. We use GAs to improve the fitness quality of the initial values used in the optimization problem, which increases the convergence rate. Extracted geometric constraints are used first to obtain an estimated value of the focal length, which helps us in the initialization step. Matching of homologous points and constraints is used to estimate the 3D model. In fact, our new method gives us many advantages: reducing the number of estimated parameters in the optimization step, decreasing the number of images used, saving time and stabilizing the good quality of the 3D results. In the end, without any prior information about our 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined before, in an easy way. Various data and examples will be used to highlight the efficiency and competitiveness of our present approach.

  11. Region-of-interest reconstruction on medical C-arms with the ATRACT algorithm

    Science.gov (United States)

    Dennerlein, Frank; Maier, Andreas

    2012-03-01

    Between 2006 and 2008, the business volume of the top 20 orthopedic manufacturers increased by 30% to about 35 Billion. Similar growth rates could be observed in the market of neurological devices, which went up in 2009 by 10.9% to a volume of 2.2 Billion in the US and by 7.0% to 500 Million in Europe.* These remarkable increases are closely connected to the fact that nowadays, many medical procedures, such as implantations in osteosynthesis or the placement of stents in neuroradiology, can be performed using minimally-invasive approaches. Such approaches require elaborate intraoperative imaging technology. C-arm based tomographic X-ray region-of-interest (ROI) imaging can deliver suitable imaging guidance in these circumstances: it can offer 3D information in desired patient regions at reasonably low X-ray dose. Tomographic ROI reconstruction, however, is in general challenging since projection images might be severely truncated. Recently, a novel, truncation-robust algorithm (ATRACT) has been suggested for 3D C-arm ROI imaging. In this paper, we report for the first time on the performance of ATRACT for reconstruction from real, angiographic C-arm data. Our results indicate that the resulting ROI image quality is suitable for intraoperative imaging. We observe only small differences from the images of a non-truncated acquisition, which would necessarily require significantly more X-ray dose.

  12. 3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm

    CERN Document Server

    Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia

    2015-01-01

    Positron emission tomographs (PET) do not measure an image directly. Instead, they measure, at the boundary of the field-of-view (FOV) of the PET tomograph, a sinogram that consists of measurements of the sums of all the counts along the lines connecting pairs of detectors. As a typical PET tomograph is built from a multitude of detectors, there are many possible detector pairs that contribute to the measurement. The problem is how to turn this measurement into an image (the image reconstruction problem). A decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques. This stage was reached already twenty years ago (with the advent of new powerful computing processors). However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
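
    As context for the iterative techniques mentioned above, the classical MLEM update can be written compactly; the dense system matrix below is a didactic simplification (real scanners use on-the-fly projectors), and the variable names are assumptions rather than anything from the paper.

        import numpy as np

        def mlem(A, y, n_iter=50):
            """Basic MLEM: A is the (n_LOR x n_voxel) system matrix, y the measured counts."""
            x = np.ones(A.shape[1])                        # start from a uniform image
            sens = A.sum(axis=0)                           # sensitivity (back-projection of ones)
            for _ in range(n_iter):
                proj = A @ x                               # forward projection of current estimate
                ratio = y / np.clip(proj, 1e-12, None)
                x *= (A.T @ ratio) / np.clip(sens, 1e-12, None)
            return x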

  13. Reconstruction-by-Dilation and Top-Hat Algorithms for Contrast Enhancement and Segmentation of Microcalcifications in Digital Mammograms

    Science.gov (United States)

    Diaz, Claudia C.

    2007-11-01

    I present some results of contrast enhancement and segmentation of microcalcifications in digital mammograms. These mammograms were obtained from the MIAS mini-database and by using a CR system to digitize images. White-top-hat and black-top-hat transformations were used to improve the contrast of the images, while the reconstruction-by-dilation algorithm was used to emphasize the microcalcifications over the surrounding tissue. Segmentation was done using different gradient matrices. These algorithms are intended to reveal details that are not evident in the original images.
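
    A rough sketch of the two operations named above, using scikit-image; the library choice, structuring-element size and threshold offset are assumptions, and depending on the scikit-image version the structuring-element argument may be called selem instead of footprint.

        import numpy as np
        from skimage.morphology import disk, white_tophat, reconstruction

        def enhance_microcalcifications(img, radius=7, h=0.1):
            """Emphasize small bright structures in a grayscale mammogram."""
            img = np.asarray(img, dtype=float)
            # white top-hat keeps bright details smaller than the structuring element
            tophat = white_tophat(img, disk(radius))
            # reconstruction by dilation of (img - h*max) under img isolates regional maxima
            seed = img - h * img.max()
            dome = img - reconstruction(seed, img, method='dilation')
            return tophat, dome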

  14. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm

    OpenAIRE

    Abbasi Mahdi; Naghsh-Nilchi Ahmad-Reza

    2012-01-01

    Abstract Background Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of human organs such as the lungs, heart, brain and breast. Each practical EIT reconstruction algorithm should be efficient enough in terms of convergence rate and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in the D-bar framework. Methods At the first step, sy...

  15. Award DE-FG02-04ER52655 Final Technical Report: Interior Point Algorithms for Optimization Problems

    Energy Technology Data Exchange (ETDEWEB)

    O'Leary, Dianne P. [Univ. of Maryland]; Tits, Andre [Univ. of Maryland]

    2014-04-03

    Over the period of this award we developed an algorithmic framework for constraint reduction in linear programming (LP) and convex quadratic programming (QP), proved convergence of our algorithms, and applied them to a variety of applications, including entropy-based moment closure in gas dynamics.

  16. High-definition computed tomography for coronary artery stents imaging: Initial evaluation of the optimal reconstruction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaoming, E-mail: mmayzy2008@126.com; Li, Tao, E-mail: litaofeivip@163.com; Li, Xin, E-mail: lx0803@sina.com.cn; Zhou, Weihua, E-mail: wangxue0606@gmail.com

    2015-05-15

    Highlights: • High-resolution scan mode is appropriate for imaging coronary stents. • The HD-detail reconstruction algorithm is a stent-dedicated kernel. • The intrastent lumen visibility also depends on stent diameter and material. - Abstract: Objective: The aim of this study was to evaluate the in vivo performance of four image reconstruction algorithms in a high-definition CT (HDCT) scanner with improved spatial resolution for the evaluation of coronary artery stents and intrastent lumina. Materials and methods: Thirty-nine consecutive patients with a total of 71 implanted coronary stents underwent coronary CT angiography (CCTA) on a HDCT (Discovery CT750 HD; GE Healthcare) with the high-resolution scanning mode. Four different reconstruction algorithms (HD-stand, HD-detail, HD-stand-plus, HD-detail-plus) were applied to reconstruct the stented coronary arteries. Image quality for stent characterization was assessed. Image noise and intrastent luminal diameter were measured. The relationships between the measured inner stent diameter (ISD) and the true stent diameter (TSD), and between the ISD measurement and stent type, were analysed. Results: The stent-dedicated kernel (HD-detail) offered the highest percentage (53.5%) of good image quality for stent characterization and the highest ratio (68.0 ± 8.4%) of visible stent lumen/true stent lumen for luminal diameter measurement, at the expense of increased overall image noise. The Pearson correlation coefficient between the ISD and TSD measurements and the Spearman correlation coefficient between the ISD measurement and stent type were 0.83 and 0.48, respectively. Conclusions: Compared with standard reconstruction algorithms, the high-definition CT imaging technique with a dedicated high-resolution reconstruction algorithm provides more accurate stent characterization and intrastent luminal diameter measurement.

  17. Phantom-based evaluations of two binning algorithms for four-dimensional CT reconstruction in lung cancer radiation therapy

    Institute of Scientific and Technical Information of China (English)

    Fuli Zhang; Huayong Jiang; Weidong Xu; Yadi Wang; Qingzhi Liu; Na Lu; Diandian Chen; Bo Yao

    2014-01-01

    Objective: The purpose of this study was to evaluate the performance of the phase-binning algorithm and amplitude-binning algorithm for four-dimensional computed tomography (4DCT) reconstruction in lung cancer radiation therapy. Methods: Quasar phantom data were used for evaluation. A phantom of known geometry was mounted on a four-dimensional (4D) motion platform programmed with twelve respiratory waves (twelve lung patient trajectories) and scanned with a Philips Brilliance Big Bore 16-slice CT simulator. The 4DCT images were reconstructed using both phase- and amplitude-binning algorithms. Internal target volumes (ITVs) of the phase- and amplitude-binned image sets were compared by evaluation of shape and volume distortions. Results: The phantom experiments illustrated that, as expected, maximum inhalation occurred at the 0% amplitude and maximum exhalation occurred at the 50% amplitude of the amplitude-binned 4DCT image sets. The amplitude-binning algorithm rendered smaller ITVs than the phase-binning algorithm. Conclusion: The amplitude-binning algorithm for 4DCT reconstruction may have a potential advantage in reducing the margin and protecting normal lung tissue from unnecessary irradiation.

  18. Influence of reconstruction settings on the performance of adaptive thresholding algorithms for FDG-PET image segmentation in radiotherapy planning.

    Science.gov (United States)

    Matheoud, Roberta; Della Monica, Patrizia; Loi, Gianfranco; Vigna, Luca; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco

    2011-01-30

    The purpose of this study was to analyze the behavior of a contouring algorithm for PET images based on adaptive thresholding depending on lesions size and target-to-background (TB) ratio under different conditions of image reconstruction parameters. Based on this analysis, the image reconstruction scheme able to maximize the goodness of fit of the thresholding algorithm has been selected. A phantom study employing spherical targets was designed to determine slice-specific threshold (TS) levels which produce accurate cross-sectional areas. A wide range of TB ratio was investigated. Multiple regression methods were used to fit the data and to construct algorithms depending both on target cross-sectional area and TB ratio, using various reconstruction schemes employing a wide range of iteration number and amount of postfiltering Gaussian smoothing. Analysis of covariance was used to test the influence of iteration number and smoothing on threshold determination. The degree of convergence of ordered-subset expectation maximization (OSEM) algorithms does not influence TS determination. Among these approaches, the OSEM at two iterations and eight subsets with a 6-8 mm post-reconstruction Gaussian three-dimensional filter provided the best fit with a coefficient of determination R² = 0.90 for cross-sectional areas ≤ 133 mm² and R² = 0.95 for cross-sectional areas > 133 mm². The amount of post-reconstruction smoothing has been directly incorporated in the adaptive thresholding algorithms. The feasibility of the method was tested in two patients with lymph node FDG accumulation and in five patients using the bladder to mimic an anatomical structure of large size and uniform uptake, with satisfactory results. Slice-specific adaptive thresholding algorithms look promising as a reproducible method for delineating PET target volumes with good accuracy.

  19. Computational performance comparison of wavefront reconstruction algorithms for the European Extremely Large Telescope on multi-CPU architecture.

    Science.gov (United States)

    Feng, Lu; Fedrigo, Enrico; Béchet, Clémentine; Brunner, Elisabeth; Pirani, Werther

    2012-06-01

    The European Southern Observatory (ESO) is studying the next generation giant telescope, called the European Extremely Large Telescope (E-ELT). With a 42 m diameter primary mirror, it is a significant step beyond currently existing telescopes. Therefore, the E-ELT with its instruments poses new challenges in terms of cost and computational complexity for the control system, including its adaptive optics (AO). Since the conventional matrix-vector multiplication (MVM) method successfully used so far for AO wavefront reconstruction cannot be efficiently scaled to the size of the AO systems on the E-ELT, faster algorithms are needed. Among the recently developed wavefront reconstruction algorithms, three are studied in this paper from the point of view of design, implementation, and absolute speed on three multicore multi-CPU platforms. We focus on a single-conjugate AO system for the E-ELT. The algorithms are the MVM, the Fourier transform reconstructor (FTR), and the fractal iterative method (FRiM). This study examines how these algorithms scale with an increasing number of CPUs involved in the computation. We discuss implementation strategies, depending on various CPU architecture constraints, and we present the first quantitative execution times so far at the E-ELT scale. MVM suffers from a large computational burden, making the current computing platform undersized to reach timings short enough for AO wavefront reconstruction. In our study, the FTR currently provides the fastest reconstruction. FRiM is a recently developed algorithm, and several strategies are investigated and presented here in order to implement it for real-time AO wavefront reconstruction and to optimize its execution time. The difficulty of parallelizing the algorithm on such an architecture is highlighted. We also show that FRiM can provide interesting scalability using a sparse matrix approach.
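
    For orientation, the MVM reconstructor mentioned above reduces, per frame, to a single matrix-vector product; the sketch below is generic, with a placeholder interaction matrix and regularization value, and is not tied to the E-ELT implementation.

        import numpy as np

        def build_reconstructor(D, rcond=1e-3):
            """Offline step: regularized pseudo-inverse of the interaction matrix D
            (sensor slopes produced by poking each deformable-mirror actuator)."""
            return np.linalg.pinv(D, rcond=rcond)

        def reconstruct(R, slopes):
            """Real-time step: one matrix-vector multiplication per wavefront-sensor frame."""
            return R @ slopes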

  20. 3-Dimensional stereo implementation of photoacoustic imaging based on a new image reconstruction algorithm without using discrete Fourier transform

    Science.gov (United States)

    Ham, Woonchul; Song, Chulgyu

    2017-05-01

    In this paper, we propose a new three-dimensional stereo image reconstruction algorithm for a photoacoustic medical imaging system. We also introduce and discuss a new theoretical algorithm based on the physical concept of the Radon transform. The key concept of the proposed algorithm is to evaluate the possibility that an acoustic source exists within a search region by using the geometric distance between each sensor element of the acoustic detector and the corresponding search region, represented by a grid. We derive the mathematical expression for the magnitude of this existence possibility, which can be used to implement the proposed algorithm. We derive the equations of the proposed algorithm for both one-dimensional and two-dimensional sensing arrays. k-Wave simulation data are used to compare the image quality of the proposed algorithm with that of the conventional algorithm, which necessarily relies on the FFT. The k-Wave MATLAB simulation results demonstrate the effectiveness of the proposed reconstruction algorithm.
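
    For reference, a generic delay-and-sum style back-projection shares the distance-based idea sketched above (accumulate, for each grid point, the sensor samples at the corresponding time of flight); the implementation below is only an illustration and not the authors' weighting scheme, and the speed of sound, sampling rate and array shapes are assumptions.

        import numpy as np

        def backproject(signals, sensor_xy, grid_xy, fs, c=1500.0):
            """Generic delay-and-sum back-projection onto a pixel grid.

            signals   : (n_sensors, n_samples) recorded pressure traces
            sensor_xy : (n_sensors, 2) sensor positions in metres
            grid_xy   : (n_pixels, 2) grid-point positions in metres
            fs        : sampling frequency in Hz; c : speed of sound in m/s
            """
            n_sensors, n_samples = signals.shape
            image = np.zeros(len(grid_xy))
            for s in range(n_sensors):
                d = np.linalg.norm(grid_xy - sensor_xy[s], axis=1)          # distances to grid
                idx = np.clip(np.rint(d / c * fs).astype(int), 0, n_samples - 1)
                image += signals[s, idx]                                    # sample at time of flight
            return image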

  1. Optimization of image reconstruction for yttrium-90 SIRT on a LYSO PET/CT system using a Bayesian penalized likelihood reconstruction algorithm.

    Science.gov (United States)

    Rowley, Lisa M; Bradley, Kevin M; Boardman, Philip; Hallam, Aida; McGowan, Daniel R

    2016-09-29

    Imaging with Yttrium-90 ((90)Y) on a gamma camera following selective internal radiotherapy (SIRT) may allow for verification of treatment delivery but suffers from relatively poor spatial resolution and imprecise dosimetry calculation. (90)Y Positron Emission Tomography (PET) / Computed Tomography (CT) imaging is possible on 3D, time-of-flight machines; however, images are usually poor due to low count statistics and noise. New PET reconstruction software using a Bayesian penalized likelihood (BPL) reconstruction algorithm (termed Q.Clear), released by GE, was investigated using phantom and patient scans to optimize the reconstruction for post-SIRT imaging and to clarify whether this leads to an improvement in clinical image quality using (90)Y.

  2. Reconstructing the history of a WD40 beta-propeller tandem repeat using a phylogenetically informed algorithm

    Directory of Open Access Journals (Sweden)

    Philippe Lavoie-Mongrain

    2015-05-01

    Full Text Available Tandem repeat sequences have been found in great numbers in proteins that are conserved in a wide range of living species. In order to reconstruct the evolutionary history of such sequences, it is necessary to develop algorithms and methods that can work with highly divergent motifs. Here we propose a reconstruction algorithm that uses, in parallel, ortholog tandem repeat sequences from n species whose phylogeny is known, allowing it to distinguish mutations that occurred before and after the first speciation. At each step of the reconstruction, both the boundaries and the length of the duplicated segment are recalculated, making the approach suitable for sequences for which the fixed boundary hypothesis may not hold. We use this algorithm to reconstruct a 4-bladed ancestor of the 7-bladed WD40 beta-propeller, using orthologs of the GNB1 human protein in plants, yeasts, nematodes, insects and fishes. The results obtained for the WD40 repeats are very encouraging, as the noise in the duplication reconstruction is significantly reduced.

  3. A Simple Algorithm for Immediate Postmastectomy Reconstruction of the Small Breast—A Single Surgeon's 10-Year Experience

    Science.gov (United States)

    Kitcat, Magelia; Molina, Alexandra; Meldon, Charlotte; Darhouse, Nagham; Clibbon, Jon; Malata, Charles M.

    2012-01-01

    Introduction: Immediate small breast reconstruction poses challenges including limited potential donor-site tissue, a thinner skin envelope, and limited implant choice. Few patients are suitable for autologous reconstruction, while contralateral symmetrization surgery, which often offsets the problem of obvious asymmetry in thin and small-breasted patients, is often unavailable, too expensive, or declined by the patient. Methods: We reviewed 42 consecutive patients with mastectomy weights of 350 g or less (the lowest quartile of all reconstructions). Indications for the mastectomy, body mass index, bra cup size, comorbidity, reconstruction type, and complications were recorded. Results: A total of 59 immediate reconstructions, including 25 latissimus dorsi flaps, 23 implant-only reconstructions, 9 abdominal flaps, and 2 gluteal flaps, were performed in 42 patients. Of the 42 mastectomies, 4 were prophylactic. Forty-three percent of patients had immediate contralateral balancing surgery. The average mastectomy weight was 231 g (range, 74-350 g). Seven percent of implant-based reconstructions developed capsular contracture requiring further surgery. One free transverse rectus abdominis myocutaneous flap failed because of fulminant methicillin-resistant Staphylococcus aureus septicaemia. Discussion and Conclusion: Balancing contralateral surgery is key to achieving excellent symmetry in reconstructed small-breasted patients. However, many patients wish to avoid contralateral surgery, thus restricting a surgeon's reconstructive options. Traditionally, autologous flaps had not been considered in thinner women because of inadequate donor-site tissue, but in fact, as with larger-breasted patients, they often produce superior cosmetic results. We propose a simple algorithm for the reconstruction of small-breasted women (without resorting to super-complex microsurgery), which is designed to tailor the choice of reconstructive technique to the requirements of the individual

  4. The impact of CT radiation dose reduction and iterative reconstruction algorithms from four different vendors on coronary calcium scoring

    NARCIS (Netherlands)

    Willemink, M.J.; Takx, R.A.P.; Jong, P.A. de; Budde, R.P.; Bleys, R.L.; Das, M.; Wildberger, J.E.; Prokop, M.; Buls, N.; Mey, J. de; Schilham, A.M.; Leiner, T.

    2014-01-01

    To analyse the effects of radiation dose reduction and iterative reconstruction (IR) algorithms on coronary calcium scoring (CCS). Fifteen ex vivo human hearts were examined in an anthropomorphic chest phantom using computed tomography (CT) systems from four vendors and examined at four dose levels us

  5. The use of anatomical information for molecular image reconstruction algorithms: Attenuation/Scatter correction, motion compensation, and noise reduction

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)

    2016-03-15

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples.

  6. Nonlinear multifunctional sensor signal reconstruction based on least squares support vector machines and total least squares algorithm

    Institute of Scientific and Technical Information of China (English)

    Xin LIU; Guo WEI; Jin-wei SUN; Dan LIU

    2009-01-01

    Least squares support vector machines (LS-SVMs) are modified support vector machines (SVMs) that involve equality constraints and work with a least squares cost function, which simplifies the optimization procedure. In this paper, a novel training algorithm based on total least squares (TLS) for an LS-SVM is presented and applied to multifunctional sensor signal reconstruction. For three different nonlinearities of a multifunctional sensor model, the reconstruction accuracies of the input signals are 0.00136%, 0.03184% and 0.50480%, respectively. The experimental results demonstrate the higher reliability and accuracy of the proposed method for multifunctional sensor signal reconstruction compared with the original LS-SVM training algorithm, and verify the feasibility and stability of the proposed method.

  7. Comparison between different tomographic reconstruction algorithms in nuclear medicine imaging; Comparacion entre distintos algoritmos de reconstruccion tomografica en imagenes de medicina nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Llacer Martos, S.; Herraiz Lablanca, M. D.; Puchal Ane, R.

    2011-07-01

    This paper compares the image quality obtained with each of the algorithms and evaluates their running times, in order to optimize the choice of algorithm taking into account both the quality of the reconstructed image and the time spent on the reconstruction.

  8. Contrast improvement of continuous wave diffuse optical tomography reconstruction by hybrid approach using least square and genetic algorithm.

    Science.gov (United States)

    Patra, Rusha; Dutta, Pranab K

    2015-07-01

    Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, mean square error of 0.638×10⁻³, and object centroid error of (0.001 to 0.22) mm. Experimental validation of the proposed method has also been provided with tissue-like phantoms which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with continuous wave setup. A case study of finger joint imaging is illustrated as well to show the prospect of the proposed method in clinical diagnosis. The method can also be applied to the concentration measurement of a region of interest in a turbid medium.

  9. Large-scale wave-front reconstruction for adaptive optics systems by use of a recursive filtering algorithm.

    Science.gov (United States)

    Ren, Hongwu; Dekany, Richard; Britton, Matthew

    2005-05-01

    We propose a new recursive filtering algorithm for wave-front reconstruction in a large-scale adaptive optics system. An embedding step is used in this recursive filtering algorithm to permit fast methods to be used for wave-front reconstruction on an annular aperture. This embedding step can be used alone with a direct residual-error updating procedure or combined with the preconditioned conjugate-gradient method as a preconditioning step. We derive the Hudgin and Fried filters for spectral-domain filtering using the eigenvalue decomposition method. Using Monte Carlo simulations, we compare the performance of discrete Fourier transform domain filtering, discrete cosine transform domain filtering, multigrid, and alternating-direction-implicit methods in the embedding step of the recursive filtering algorithm. We also simulate the performance of this recursive filtering in a closed-loop adaptive optics system.

  10. Performance comparison of two resolution modeling PET reconstruction algorithms in terms of physical figures of merit used in quantitative imaging.

    Science.gov (United States)

    Matheoud, R; Ferrando, O; Valzano, S; Lizio, D; Sacchetti, G; Ciarmiello, A; Foppiano, F; Brambilla, M

    2015-07-01

    Resolution modeling (RM) of PET systems has been introduced in iterative reconstruction algorithms for oncologic PET. RM recovers the loss of resolution and reduces the associated partial volume effect. While these methods improve observer performance, particularly in the detection of small and faint lesions, their impact on quantification accuracy still requires thorough investigation. The aim of this study was to characterize the performance of the RM algorithms under controlled conditions simulating a typical (18)F-FDG oncologic study, using an anthropomorphic phantom and selected physical figures of merit used for image quantification. Measurements were performed on Biograph HiREZ (B_HiREZ) and Discovery 710 (D_710) PET/CT scanners and reconstructions were performed using the standard iterative reconstructions and the RM algorithms associated with each scanner: TrueX and SharpIR, respectively. RM yielded a significant improvement in contrast recovery for small targets (≤17 mm diameter) only for the D_710 scanner. The maximum standardized uptake value (SUVmax) increased when RM was applied on both scanners. The SUVmax of small targets was on average lower with the B_HiREZ than with the D_710. SharpIR improved the accuracy of SUVmax determination, whilst TrueX showed an overestimation of SUVmax for sphere dimensions greater than 22 mm. The goodness of fit of adaptive threshold algorithms worsened significantly when RM algorithms were employed, for both scanners. Differences in general quantitative performance were observed for the PET scanners analyzed. Segmentation of PET images using adaptive threshold algorithms should not be undertaken in conjunction with RM reconstructions. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  11. Reconstructing satellite images to quantify spatially explicit land surface change caused by fires and succession: A demonstration in the Yukon River Basin of interior Alaska

    Science.gov (United States)

    Huang, Shengli; Jin, Suming; Dahal, Devendra; Chen, Xuexia; Young, Claudia; Liu, Heping; Liu, Shuguang

    2013-05-01

    Land surface change caused by fires and succession is confounded by many site-specific factors and requires further study. The objective of this study was to reveal the spatially explicit land surface change by minimizing the confounding factors of weather variability, seasonal offset, topography, land cover, and drainage. In a pilot study of the Yukon River Basin of interior Alaska, we retrieved Normalized Difference Vegetation Index (NDVI), albedo, and land surface temperature (LST) from a postfire Landsat image acquired on August 5th, 2004. With a Landsat reference image acquired on June 26th, 1986, we reconstructed NDVI, albedo, and LST of 1987-2004 fire scars for August 5th, 2004, assuming that these fires had not occurred. The difference between actual postfire and assuming-no-fire scenarios depicted the fires and succession impact. Our results demonstrated the following: (1) NDVI showed an immediate decrease after burning but gradually recovered to prefire levels in the following years, in which burn severity might play an important role during this process; (2) Albedo showed an immediate decrease after burning but then recovered and became higher than prefire levels; and (3) Most fires caused surface warming, but cooler surfaces did exist; time-since-fire affected the prefire and postfire LST difference but no absolute trend could be found. Our approach provided spatially explicit land surface change rather than average condition, enabling a better understanding of fires and succession impact on ecological consequences at the pixel level.
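
    For reference, the NDVI mentioned above is a simple per-pixel band ratio; the sketch below assumes the near-infrared and red reflectance bands are already available as arrays and is not specific to the Landsat processing used in the study.

        import numpy as np

        def ndvi(nir, red, eps=1e-6):
            """Normalized Difference Vegetation Index from NIR and red reflectance bands."""
            nir = np.asarray(nir, dtype=float)
            red = np.asarray(red, dtype=float)
            return (nir - red) / (nir + red + eps)   # eps avoids division by zero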

  12. Fast 4D cone-beam reconstruction using the McKinnon-Bates algorithm with truncation correction and nonlinear filtering

    Science.gov (United States)

    Zheng, Ziyi; Sun, Mingshan; Pavkovich, John; Star-Lack, Josh

    2011-03-01

    A challenge in using on-board cone beam computed tomography (CBCT) to image lung tumor motion prior to radiation therapy treatment is acquiring and reconstructing high quality 4D images in a sufficiently short time for practical use. For the 1 minute rotation times typical of Linacs, severe view aliasing artifacts, including streaks, are created if a conventional phase-correlated FDK reconstruction is performed. The McKinnon-Bates (MKB) algorithm provides an efficient means of reducing streaks from static tissue but can suffer from low SNR and other artifacts due to data truncation and noise. We have added truncation correction and bilateral nonlinear filtering to the MKB algorithm to reduce streaking and improve image quality. The modified MKB algorithm was implemented on a graphical processing unit (GPU) to maximize efficiency. Results show that a nearly 4x improvement in SNR is obtained compared to the conventional FDK phase-correlated reconstruction and that high quality 4D images with 0.4 second temporal resolution and 1 mm3 isotropic spatial resolution can be reconstructed in less than 20 seconds after data acquisition completes.

  13. A reconstruction algorithm based on topological gradient for an inverse problem related to a semilinear elliptic boundary value problem

    Science.gov (United States)

    Beretta, Elena; Manzoni, Andrea; Ratti, Luca

    2017-03-01

    In this paper we develop a reconstruction algorithm for the solution of an inverse boundary value problem dealing with a semilinear elliptic partial differential equation of interest in cardiac electrophysiology. The goal is the detection of small inhomogeneities located inside a domain Ω, where the coefficients of the equation are altered, starting from observations of the solution of the equation on the boundary ∂Ω. Exploiting theoretical results recently achieved in [13], we implement a reconstruction procedure based on the computation of the topological gradient of a suitable cost functional. Numerical results obtained for several test cases finally assess the feasibility and the accuracy of the proposed technique.

  14. Prostate tissue decomposition via DECT using the model based iterative image reconstruction algorithm DIRA

    Science.gov (United States)

    Malusek, Alexandr; Magnusson, Maria; Sandborg, Michael; Westin, Robin; Alm Carlsson, Gudrun

    2014-03-01

    Better knowledge of the elemental composition of patient tissues may improve the accuracy of absorbed dose delivery in brachytherapy. Deficiencies of water-based protocols have been recognized and work is ongoing to implement patient-specific radiation treatment protocols. A model based iterative image reconstruction algorithm, DIRA, has been developed by the authors to automatically decompose patient tissues into two or three base components via dual-energy computed tomography. Performance of an updated version of DIRA was evaluated for the determination of prostate calcification. A computer simulation using an anthropomorphic phantom showed that the mass fraction of calcium in the prostate tissue was determined with accuracy better than 9%. The calculated mass fraction was little affected by the choice of the material triplet for the surrounding soft tissue. Relative differences between true and approximated values of the linear attenuation coefficient and mass energy absorption coefficient for the prostate tissue were less than 6% for photon energies from 1 keV to 2 MeV. The results indicate that DIRA has the potential to improve the accuracy of dose delivery in brachytherapy despite the fact that base material triplets only approximate the surrounding soft tissues.

  15. The Nonlocal Sparse Reconstruction Algorithm by Similarity Measurement with Shearlet Feature Vector

    Directory of Open Access Journals (Sweden)

    Wu Qidi

    2014-01-01

    Full Text Available Due to the limited accuracy of conventional image restoration methods, this paper presents a nonlocal sparse reconstruction algorithm with similarity measurement. To improve the quality of the restoration results, we propose two schemes, for dictionary learning and sparse coding, respectively. In the dictionary learning part, we measure the similarity between patches from the degraded image by constructing a Shearlet feature vector. Furthermore, we classify the patches into different classes according to this similarity and train a cluster dictionary for each class; by cascading these we obtain the universal dictionary. In the sparse coding part, we propose a novel objective function with a coding-residual term, which suppresses the residual between the estimated coding and the true sparse coding. Additionally, we derive a self-adaptive regularization parameter under the Bayesian framework, which further improves performance. The experimental results indicate that, by taking full advantage of the similar local geometric structure present in nonlocal patches and of the coding-residual suppression, the proposed method outperforms conventional methods both in visual perception and in PSNR.

  16. Regularized image reconstruction algorithms for dual-isotope myocardial perfusion SPECT (MPS) imaging using a cross-tracer prior.

    Science.gov (United States)

    He, Xin; Cheng, Lishui; Fessler, Jeffrey A; Frey, Eric C

    2011-06-01

    In simultaneous dual-isotope myocardial perfusion SPECT (MPS) imaging, data are simultaneously acquired to determine the distributions of two radioactive isotopes. The goal of this work was to develop penalized maximum likelihood (PML) algorithms for a novel cross-tracer prior that exploits the fact that the two images reconstructed from simultaneous dual-isotope MPS projection data are perfectly registered in space. We first formulated the simultaneous dual-isotope MPS reconstruction problem as a joint estimation problem. A cross-tracer prior that couples voxel values on both images was then proposed. We developed an iterative algorithm to reconstruct the MPS images that converges to the maximum a posteriori solution for this prior based on separable surrogate functions. To accelerate the convergence, we developed a fast algorithm for the cross-tracer prior based on the complete data OS-EM (COSEM) framework. The proposed algorithm was compared qualitatively and quantitatively to a single-tracer version of the prior that did not include the cross-tracer term. Quantitative evaluations included comparisons of mean and standard deviation images as well as assessment of image fidelity using the mean square error. We also evaluated the cross tracer prior using a three-class observer study with respect to the three-class MPS diagnostic task, i.e., classifying patients as having either no defect, reversible defect, or fixed defects. For this study, a comparison with conventional ordered subsets-expectation maximization (OS-EM) reconstruction with postfiltering was performed. The comparisons to the single-tracer prior demonstrated similar resolution for areas of the image with large intensity changes and reduced noise in uniform regions. The cross-tracer prior was also superior to the single-tracer version in terms of restoring image fidelity. Results of the three-class observer study showed that the proposed cross-tracer prior and the convergent algorithms improved the

  17. A new algorithm for $H\\rightarrow\\tau\\bar{\\tau}$ invariant mass reconstruction using Deep Neural Networks

    CERN Document Server

    Dietrich, Felix

    2017-01-01

    Reconstructing the invariant mass in a Higgs boson decay event containing tau leptons turns out to be a challenging endeavour. The aim of this summer student project is to implement a new algorithm for this task, using deep neural networks and machine learning. The results are compared to SVFit, an existing algorithm that uses dynamical likelihood techniques. A neural network is found that reaches the accuracy of SVFit at low masses and even surpasses it at higher masses, while at the same time providing results a thousand times faster.

  18. Validation of the Mean-Timer algorithm for DT Local Reconstruction and muon time measurement, using 2012 data.

    CERN Document Server

    CMS Collaboration

    2015-01-01

    The Mean-Timer algorithm is used for the local reconstruction within the CMS Drift Tubes (DT) for muons that appear to be out-of-time (OOT) or lack measured hits in one of the two space projections. Compared to the standard linear fit, this method improves the spatial resolution for OOT muons. It also allows a precise time measurement that can be used to tag OOT muons, in order either to reject them (e.g. as a result of OOT pile-up) or to select them for exotic physics analyses. The algorithm was initially developed and tuned on simulation. We present here the first performance results obtained on 2012 data.

  19. Hardware Implementation and Validation of 3D Underwater Shape Reconstruction Algorithm Using a Stereo-Catadioptric System

    Directory of Open Access Journals (Sweden)

    Rihab Hmida

    2016-08-01

    Full Text Available In this paper, we present a new stereo vision-based system and its efficient hardware implementation for real-time underwater environments exploration throughout 3D sparse reconstruction based on a number of feature points. The proposed underwater 3D shape reconstruction algorithm details are presented. The main concepts and advantages are discussed and comparison with existing systems is performed. In order to achieve real-time video constraints, a hardware implementation of the algorithm is performed using Xilinx System Generator. The pipelined stereo vision system has been implemented using Field Programmable Gate Arrays (FPGA technology. Both timing constraints and mathematical operations precision have been evaluated in order to validate the proposed hardware implementation of our system. Experimental results show that the proposed system presents high accuracy and execution time performances.

  20. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    Science.gov (United States)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/

  1. A new wavelet-based reconstruction algorithm for twin image removal in digital in-line holography

    Science.gov (United States)

    Hattay, Jamel; Belaid, Samir; Aguili, Taoufik; Lebrun, Denis

    2016-07-01

    Two original methods are proposed here for digital in-line hologram processing. First, we propose an entropy-based method to retrieve the focus plane, which is very useful for digital hologram reconstruction. Second, we introduce a new approach to remove the so-called twin images reconstructed from holograms. This is achieved using the Blind Source Separation (BSS) technique. The proposed method is made up of two steps: an Adaptive Quincunx Lifting Scheme (AQLS) and a statistical unmixing algorithm. The AQLS tool is based on the wavelet packet transform, and its role is to maximize the sparseness of the input holograms. The unmixing algorithm uses the Independent Component Analysis (ICA) tool. Experimental results confirm the ability of convolutive blind source separation to discard the unwanted twin image from in-line digital holograms.

  2. Influence of radiation dose and reconstruction algorithm in MDCT assessment of airway wall thickness: A phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Cardona, Daniel [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Nagle, Scott K. [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Department of Pediatrics, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Li, Ke; Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Robinson, Terry E. [Department of Pediatrics, Stanford School of Medicine, 770 Welch Road, Palo Alto, California 94304 (United States)

    2015-10-15

    Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis, particularly in younger patients, might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR in the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at 4 kV levels and 5 mAs levels. Both FBP and a commercial implementation of MBIR (Veo™, GE Healthcare) were used to reconstruct CT images of the airways. For each kV–mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, and the WT of each airway was measured and compared with the nominal value; the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all of the 20 kV–mAs combinations was quantified by the sum of squares (SSQs) of the difference between the measured and nominal WT values. Finally, the particular kV–mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level was chosen as the optimal acquisition and reconstruction settings. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose

  3. Effects of acquisition time and reconstruction algorithm on image quality, quantitative parameters, and clinical interpretation of myocardial perfusion imaging

    DEFF Research Database (Denmark)

    Enevoldsen, Lotte H; Menashi, Changez A K; Andersen, Ulrik B;

    2013-01-01

    BACKGROUND: Recently introduced iterative reconstruction algorithms with resolution recovery (RR) and noise-reduction technology seem promising for reducing scan time or radiation dose without loss of image quality. However, the relative effects of reduced acquisition time and reconstruction software have not previously been reported. The aim of the present study was to investigate the influence of reduced acquisition time and reconstruction software on quantitative and qualitative myocardial perfusion single photon emission computed tomography (SPECT) parameters using full time (FT) and half time (HT) protocols and Evolution for Cardiac Software. METHODS: We studied 45 consecutive, non-selected patients referred for a clinically indicated routine 2-day stress/rest (99m)Tc-Sestamibi myocardial perfusion SPECT. All patients underwent an FT and an HT scan. Both FT and HT scans were processed

  4. Reconstruction

    Directory of Open Access Journals (Sweden)

    Stefano Zurrida

    2011-01-01

    Full Text Available Breast cancer is the most common cancer in women. Primary treatment is surgery, with mastectomy as the main treatment for most of the twentieth century. However, over that time, the extent of the procedure varied, and less extensive mastectomies are employed today compared to those used in the past, as excessively mutilating procedures did not improve survival. Today, many women receive breast-conserving surgery, usually with radiotherapy to the residual breast, instead of mastectomy, as it has been shown to be as effective as mastectomy in early disease. The relatively new skin-sparing mastectomy, often with immediate breast reconstruction, improves aesthetic outcomes and is oncologically safe. Nipple-sparing mastectomy is newer and used increasingly, with better acceptance by patients, and again appears to be oncologically safe. Breast reconstruction is an important adjunct to mastectomy, as it has a positive psychological impact on the patient, contributing to improved quality of life.

  5. Cine cone beam CT reconstruction using low-rank matrix factorization: algorithm and a proof-of-princple study

    CERN Document Server

    Cai, Jian-Feng; Gao, Hao; Jiang, Steve B; Shen, Zuowei; Zhao, Hongkai

    2012-01-01

    Respiration-correlated CBCT, commonly called 4DCBCT, provides respiratory phase-resolved CBCT images. In many clinical applications, it is preferable to reconstruct a true 4DCBCT with the 4th dimension being time, i.e., each CBCT image is reconstructed based on the corresponding instantaneous projection. We propose in this work a novel algorithm for the reconstruction of this truly time-resolved CBCT, called cine-CBCT, by effectively utilizing the underlying temporal coherence, such as periodicity or repetition, in those cine-CBCT images. Assuming each column of the matrix U represents a CBCT image to be reconstructed and the total number of columns is the same as the number of projections, the central idea of our algorithm is that the rank of U is much smaller than the number of projections, so we can use a matrix factorization of the form U = LR for U. The number of columns of the matrix L constrains the rank of U and hence implicitly imposes a temporal cohere...
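
    The rank constraint behind U = LR can be illustrated, independently of the CBCT forward model, with a truncated SVD; this is only a sketch of the low-rank idea, not the reconstruction algorithm itself, and the variable names below are placeholders.

        import numpy as np

        def rank_r_factorization(U_full, r):
            """Best rank-r factorization of a matrix whose columns are image frames."""
            u, s, vt = np.linalg.svd(U_full, full_matrices=False)
            L = u[:, :r] * s[:r]       # (n_pixels, r)
            R = vt[:r, :]              # (r, n_frames)
            return L, R                # U_full is approximated by L @ R

        # In cine-CBCT the data are projections rather than U itself, so L and R would
        # instead be updated alternately to minimize || A(L @ R) - measured projections ||.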

  6. Algorithmic aspects for the reconstruction of spatio-spectral data cubes in the perspective of the SKA

    Science.gov (United States)

    Mary, D.; Ferrari, A.; Ferrari, C.; Deguignet, J.; Vannier, M.

    2016-12-01

    With millions of receivers leading to terabyte data cubes, the story of the giant SKA telescope is also that of collaborative efforts from radioastronomy, signal processing, optimization and computer science. Reconstructing SKA cubes poses two challenges. First, the majority of existing algorithms work in 2D and cannot be directly translated into 3D. Second, the reconstruction implies solving an inverse problem and it is not clear what ultimate limit we can expect on the error of this solution. This study addresses (of course partially) both challenges. We consider an extremely simple data acquisition model, and we focus on strategies making it possible to implement 3D reconstruction algorithms that use state-of-the-art image/spectral regularization. The proposed approach has two main features: (i) reduced memory storage with respect to a previous approach; (ii) efficient parallelization and distribution of the computational load over the spectral bands. This work will allow various 3D reconstruction approaches to be implemented and compared in a large-scale framework.

  7. AN INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMMING PROBLEM WITH BOX CONSTRAINTS%框式约束凸二次规划问题的内点算法

    Institute of Scientific and Technical Information of China (English)

    张艺

    2002-01-01

    In this paper, a primal-dual interior point algorithm for the convex quadratic programming problem with box constraints is presented. It can be started at any primal-dual interior feasible point. If the initial point is close to the central path, it becomes a central path-following algorithm and requires a total of O(√nL) iterations, where L is the input length.
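
    As a rough illustration of the interior-point idea (kept in the primal only, with a log barrier, so it is not the primal-dual path-following method analysed in the paper), the sketch below minimizes 0.5*x'Qx + c'x over a box l <= x <= u; Q, c, l and u are placeholders.

        import numpy as np

        def barrier_qp(Q, c, l, u, mu=1.0, shrink=0.2, tol=1e-8, newton_steps=30):
            """Minimize 0.5 x'Qx + c'x over the box l <= x <= u with a log barrier."""
            x = 0.5 * (l + u)                           # strictly interior starting point
            while mu > tol:
                for _ in range(newton_steps):
                    d1, d2 = x - l, u - x
                    grad = Q @ x + c - mu / d1 + mu / d2
                    hess = Q + np.diag(mu / d1**2 + mu / d2**2)
                    step = np.linalg.solve(hess, -grad)
                    # backtrack so the iterate stays strictly inside the box
                    t = 1.0
                    while np.any(x + t * step <= l) or np.any(x + t * step >= u):
                        t *= 0.5
                    x = x + t * step
                    if np.linalg.norm(t * step) < 1e-12:
                        break
                mu *= shrink                            # follow the central path inward
            return x

        # tiny usage example with placeholder data:
        # x_star = barrier_qp(np.eye(3), np.array([1.0, -2.0, 0.5]),
        #                     l=np.full(3, -1.0), u=np.full(3, 1.0))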

  8. A comparative study based on image quality and clinical task performance for CT reconstruction algorithms in radiotherapy.

    Science.gov (United States)

    Li, Hua; Dolly, Steven; Chen, Hsin-Chen; Anastasio, Mark A; Low, Daniel A; Li, Harold H; Michalski, Jeff M; Thorstad, Wade L; Gay, Hiram; Mutic, Sasa

    2016-07-01

    CT image reconstruction is typically evaluated based on the ability to reduce the radiation dose to as-low-as-reasonably-achievable (ALARA) while maintaining acceptable image quality. However, the determination of common image quality metrics, such as noise, contrast, and contrast-to-noise ratio, is often insufficient for describing clinical radiotherapy task performance. In this study we designed and implemented a new comparative analysis method associating image quality, radiation dose, and patient size with radiotherapy task performance, with the purpose of guiding the clinical radiotherapy usage of CT reconstruction algorithms. The iDose4 iterative reconstruction algorithm was selected as the target for comparison, wherein filtered back-projection (FBP) reconstruction was regarded as the baseline. Both phantom and patient images were analyzed. A layer-adjustable anthropomorphic pelvis phantom capable of mimicking 38-58 cm lateral diameter-sized patients was imaged and reconstructed by the FBP and iDose4 algorithms with varying noise-reduction levels, respectively. The resulting image sets were quantitatively assessed by two image quality indices, noise and contrast-to-noise ratio, and two clinical task-based indices, target CT Hounsfield number (for electron density determination) and structure contouring accuracy (for dose-volume calculations). Additionally, CT images of 34 patients reconstructed with iDose4 with six noise reduction levels were qualitatively evaluated by two radiation oncologists using a five-point scoring mechanism. For the phantom experiments, iDose4 achieved noise reduction of up to 66.1% and CNR improvement of up to 53.2% compared to FBP, without considering the changes of spatial resolution among images or the clinical acceptance of reconstructed images. Such improvements consistently appeared across different iDose4 noise reduction levels, exhibiting limited interlevel noise (<5 HU) and target CT number variations (<1 HU). The radiation

  10. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
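
    The shrinkage component mentioned above amounts to soft-thresholding of wavelet detail coefficients; below is a minimal sketch using PyWavelets with an ordinary (non-packet) decomposition, where the wavelet name, level and threshold are assumptions rather than the authors' settings.

        import pywt

        def wavelet_shrink(img, wavelet='db4', level=3, thresh=0.05):
            """Denoise an image by soft-thresholding its wavelet detail coefficients."""
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            shrunk = [coeffs[0]]                               # keep approximation untouched
            for detail in coeffs[1:]:
                shrunk.append(tuple(pywt.threshold(c, thresh, mode='soft') for c in detail))
            return pywt.waverec2(shrunk, wavelet)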

  11. The holographic reconstructing algorithm and its error analysis about phase-shifting phase measurement

    Institute of Scientific and Technical Information of China (English)

    LU Xiaoxu; ZHONG Liyun; ZHANG Yimo

    2007-01-01

    Phase-shifting measurement and its error estimation method were studied according to the holographic principle. A function for the synchronous superposition of the object complex amplitude reconstructed from N-step phase-shifting over one integral period (the N-step phase-shifting function for short) is proposed. In N-step phase-shifting measurement, the interferograms are viewed as a series of in-line holograms and the reference beam is an ideal parallel plane wave, so the N-step phase-shifting function can be obtained by multiplying each interferogram by the original reference wave. Under ideal conditions, the proposed method is a synchronous superposition algorithm in which the complex amplitude is separated, measured and superposed. When errors exist in the measurement, the result of the N-step phase-shifting function is the optimal expected value of the least-squares fitting method. In the above method, the N+1-step phase-shifting function can be obtained from the N-step phase-shifting function. It is shown that the N-step phase-shifting function can be separated into two parts: the ideal N-step phase-shifting function and its errors. The phase-shifting errors in N-step phase-shifting phase measurement can then be treated in the same way as the relative errors of amplitude and intensity, through the N+1-step phase-shifting function. The difficulties of error estimation in phase-shifting phase measurement are thereby reduced. Meanwhile, a maximum-error estimation method for phase-shifting phase measurement and its formula are proposed.
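
    For context, the standard least-squares estimate of the phase from N equally spaced phase steps can be written in a few lines; this textbook formula is given only to illustrate the kind of synchronous superposition discussed above, and the array layout is an assumption.

        import numpy as np

        def n_step_phase(frames):
            """Recover the wrapped phase from N interferograms I_n = a + b*cos(phi + 2*pi*n/N)."""
            frames = np.asarray(frames, dtype=float)            # shape (N, H, W)
            n = np.arange(frames.shape[0])
            delta = 2.0 * np.pi * n / frames.shape[0]
            s = np.tensordot(np.sin(delta), frames, axes=1)     # sum_n I_n * sin(delta_n)
            c = np.tensordot(np.cos(delta), frames, axes=1)     # sum_n I_n * cos(delta_n)
            return np.arctan2(-s, c)                            # wrapped phase estimate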

  12. Evaluating the sensitivity of the optimization of acquisition geometry to the choice of reconstruction algorithm in digital breast tomosynthesis through a simulation study

    Science.gov (United States)

    Zeng, Rongping; Park, Subok; Bakic, Predrag; Myers, Kyle J.

    2015-02-01

    Due to the limited number of views and limited angular span in digital breast tomosynthesis (DBT), the acquisition geometry design is an important factor that affects the image quality. Therefore, intensive studies have been conducted regarding the optimization of the acquisition geometry. However, different reconstruction algorithms were used in most of the reported studies. Because each type of reconstruction algorithm can provide images with its own image resolution, noise properties and artifact appearance, it is unclear whether the optimal geometries concluded for the DBT system in one study can be generalized to the DBT systems with a reconstruction algorithm different to the one applied in that study. Hence, we investigated the effect of the reconstruction algorithm on the optimization of acquisition geometry parameters through carefully designed simulation studies. Our results show that using various reconstruction algorithms, including the filtered back-projection, the simultaneous algebraic reconstruction technique, the maximum-likelihood method and the total-variation regularized least-square method, gave similar performance trends for the acquisition parameters for detecting lesions. The consistency of system ranking indicates that the choice of the reconstruction algorithm may not be critical for DBT system geometry optimization.

  13. Investigation of undersampling and reconstruction algorithm dependence on respiratory correlated 4D-MRI for online MR-guided radiation therapy

    Science.gov (United States)

    Mickevicius, Nikolai J.; Paulson, Eric S.

    2017-04-01

    The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MR-guided radiation therapy (MRgRT). A 3D stack-of-stars, self-navigated, 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high-performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D stack-of-stars sequence and parallelized reconstruction algorithms, using computing hardware more advanced than that typically found on product MRI scanners, can result in acquisition and reconstruction of high-quality respiratory correlated 4D-MRI images in less than five minutes.

  14. Implementation on GPU-based acceleration of the m-line reconstruction algorithm for circle-plus-line trajectory computed tomography

    Science.gov (United States)

    Li, Zengguang; Xi, Xiaoqi; Han, Yu; Yan, Bin; Li, Lei

    2016-10-01

    The circle-plus-line trajectory satisfies the exact reconstruction data sufficiency condition and can be applied in a C-arm X-ray computed tomography (CT) system to increase reconstructed image quality at a large cone angle. The m-line reconstruction algorithm is adopted for this trajectory. The selection of the direction of the m-lines is quite flexible, and the m-line algorithm needs less data for accurate reconstruction compared with FDK-type algorithms. However, the computational complexity of the algorithm is too large for efficient serial processing, and the reconstruction speed has become an important issue that limits its practical application. Therefore, acceleration of the algorithm is of great practical significance. Compared with other hardware accelerations, the graphics processing unit (GPU) has become the mainstream in CT image reconstruction, and GPU acceleration has achieved good results for FDK-type algorithms. However, the implementation of the m-line algorithm's acceleration for the circle-plus-line trajectory differs from that of the FDK algorithm, and the parallelism of the circle-plus-line algorithm needs to be analyzed to design an appropriate acceleration strategy. The implementation can be divided into the following steps: first, selecting m-lines to cover the entire object to be reconstructed; second, calculating the differentiated backprojection of the points on the m-lines; third, performing Hilbert filtering along the m-line direction; finally, reassembling the m-line reconstruction results in three dimensions to obtain the reconstruction in Cartesian coordinates. In this paper, we design reasonable GPU acceleration strategies for each step to improve the reconstruction speed as much as possible. The main contribution is to design an appropriate acceleration strategy for the circle-plus-line trajectory m-line reconstruction algorithm. A Shepp-Logan phantom is used to simulate the experiment on a single K20 GPU. The
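
    The per-m-line Hilbert filtering step is the part that maps most naturally onto GPU threads, since every m-line can be filtered independently. Outside the GPU context the operation is just a 1-D Hilbert transform applied along each line; a plain NumPy version of that single step (the sign convention and sampling are assumptions, and the differentiated-backprojection values here are random placeholders) is sketched below.

        import numpy as np

        def hilbert_filter_rows(dbp_lines):
            """Apply a 1-D Hilbert transform along the last axis of an array.

            dbp_lines : (n_lines, n_samples) differentiated-backprojection values,
                        one row per m-line. Each row is filtered independently,
                        which is exactly the per-line parallelism exploited on a GPU.
            """
            n = dbp_lines.shape[-1]
            freqs = np.fft.fftfreq(n)
            kernel = -1j * np.sign(freqs)          # frequency response of the Hilbert transform
            spectrum = np.fft.fft(dbp_lines, axis=-1)
            return np.real(np.fft.ifft(spectrum * kernel, axis=-1))

        # --- placeholder data standing in for DBP values on a bundle of m-lines ---
        rng = np.random.default_rng(1)
        dbp = rng.standard_normal((256, 512))
        filtered = hilbert_filter_rows(dbp)
        print(filtered.shape)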

  15. An enhanced reconstruction algorithm to extend CT scan field-of-view with z-axis consistency constraint.

    Science.gov (United States)

    Li, Baojun; Deng, Junjun; Lonn, Albert H; Hsieh, Jiang

    2012-10-01

    To further improve image quality in the extended scan field-of-view (SFOV) reconstruction, in particular to suppress the boundary artifacts. To combat projection truncation artifacts and to restore truncated objects outside the SFOV, an algorithm has previously been proposed based on fitting a partial water cylinder at the site of the truncation. Previous studies have shown this algorithm can simultaneously eliminate the truncation artifacts inside the SFOV and preserve the total amount of attenuation, owing to its emphasis on consistency conditions of the total attenuation in the parallel sampling geometry. Unfortunately, the water cylinder fitting parameters of this 2D algorithm are susceptible to noise fluctuations in the projection samples from image to image, causing anatomy boundary artifacts, especially during helical scans with higher pitch (≥1.0). To suppress the boundary artifacts and further improve the image quality, the authors propose to use a roughness penalty function, based on the Huber regularization function, to reinforce the z-dimensional boundary consistency. Extensive phantom and clinical tests have been conducted to test the accuracy and robustness of the enhanced algorithm. Significant reduction in the boundary artifacts is observed in both phantom and clinical cases with the enhanced algorithm. The proposed algorithm also reduces the percent difference error between the horizontal and vertical diameters to well below 1%. It is also noticeable that the algorithm has improved CT number uniformity outside the SFOV compared to the original algorithm. The proposed algorithm is capable of suppressing boundary artifacts and improving the CT number uniformity outside the SFOV.
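
    The z-consistency idea can be illustrated with the Huber function itself: neighbouring slices should have similar water-cylinder fitting parameters, and a Huber penalty on the slice-to-slice differences tolerates genuine anatomical change while damping noise-driven jumps. The following sketch (the quadratic data term, weights and per-slice values are assumptions for illustration, not the authors' formulation) re-estimates one outlier slice parameter under such a penalty.

        import numpy as np

        def huber(r, delta):
            """Huber function: quadratic near zero, linear for |r| > delta."""
            small = np.abs(r) <= delta
            return np.where(small, 0.5 * r ** 2, delta * (np.abs(r) - 0.5 * delta))

        def penalized_cost(params, raw_estimates, beta=5.0, delta=0.5):
            """Data term keeps each slice near its independently fitted value;
            the Huber term enforces consistency of neighbouring slices along z."""
            data_term = 0.5 * np.sum((params - raw_estimates) ** 2)
            z_diff = np.diff(params)
            return data_term + beta * np.sum(huber(z_diff, delta))

        # --- per-slice water-cylinder radius estimates with one noisy outlier (assumed) ---
        raw = np.array([10.1, 10.0, 10.2, 13.5, 10.1, 9.9, 10.0])
        grid = np.linspace(8.0, 14.0, 601)
        # coordinate-wise illustration: re-estimate the 4th slice with the penalty active
        costs = [penalized_cost(np.r_[raw[:3], g, raw[4:]], raw) for g in grid]
        print("re-estimated slice value:", grid[int(np.argmin(costs))])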

  16. Exact and efficient cone-beam reconstruction algorithm for a short-scan circle combined with various lines

    Science.gov (United States)

    Dennerlein, Frank; Katsevich, Alexander; Lauritsch, Guenter; Hornegger, Joachim

    2005-04-01

    X-ray 3D rotational angiography based on C-arm systems has become a versatile and established tomographic imaging modality for high-contrast objects in the interventional environment. Improvements in data acquisition, e.g. by use of flat panel detectors, will enable C-arm systems to resolve even low-contrast details. However, further progress will be limited by the incompleteness of data acquisition on the conventional short-scan circular source trajectories. Cone artifacts, which result from that incompleteness, significantly degrade image quality by severe smearing and shading. To assure data completeness, a combination of a partial circle with one or several line segments is investigated. A new and efficient reconstruction algorithm is deduced from a general inversion formula based on 3D Radon theory. The method is theoretically exact, possesses a shift-invariant filtered backprojection (FBP) structure, and solves the long object problem. The algorithm is flexible in dealing with various circle and line configurations. The reconstruction method requires nothing more than the theoretical minimum length of the scan trajectory, which consists of a conventional short-scan circle and a line segment approximately twice as long as the height of the region-of-interest. Geometrical deviations from the ideal source trajectory are considered in the implementation in order to handle data of real C-arm systems. Reconstruction results show excellent image quality free of cone artifacts. The proposed scan trajectory and reconstruction algorithm assure excellent image quality and allow low-contrast tomographic imaging with C-arm based cone-beam systems. The method can be implemented without any hardware modifications on systems commercially available today.

  17. Effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low dose CT colonography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Sun [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of); Kim, Se Hyung, E-mail: shkim7071@gmail.com [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of); Im, Jong Pil; Kim, Sang Gyun [Department of Internal Medicine, Seoul National University Hospital (Korea, Republic of); Shin, Cheong-il; Han, Joon Koo; Choi, Byung Ihn [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of)

    2015-04-15

    Highlights: •We assessed the effect of reconstruction algorithms on CAD in ultra-low dose CTC. •30 patients underwent ultra-low dose CTC using 120 and 100 kVp with 10 mAs. •CT was reconstructed with FBP, ASiR and Veo and then, we applied a CAD system. •Per-polyp sensitivity of CAD in ULD CT can be improved with the IR algorithms. •Despite of an increase in the number of FPs with IR, it was still acceptable. -- Abstract: Purpose: To assess the effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low-dose CT colonography (ULD CTC). Materials and methods: IRB approval and informed consents were obtained. Thirty prospectively enrolled patients underwent non-contrast CTC at 120 kVp/10 mAs in supine and 100 kVp/10 mAs in prone positions, followed by same-day colonoscopy. Images were reconstructed with filtered back projection (FBP), 80% adaptive statistical iterative reconstruction (ASIR80), and model-based iterative reconstruction (MBIR). A commercial CAD system was applied and per-polyp sensitivities and numbers of false-positives (FPs) were compared among algorithms. Results: Mean effective radiation dose of CTC was 1.02 mSv. Of 101 polyps detected and removed by colonoscopy, 61 polyps were detected on supine and on prone CTC datasets on consensus unblinded review, resulting in 122 visible polyps (32 polyps <6 mm, 52 6–9.9 mm, and 38 ≥ 10 mm). Per-polyp sensitivity of CAD for all polyps was highest with MBIR (56/122, 45.9%), followed by ASIR80 (54/122, 44.3%) and FBP (43/122, 35.2%), with significant differences between FBP and IR algorithms (P < 0.017). Per-polyp sensitivity for polyps ≥ 10 mm was also higher with MBIR (25/38, 65.8%) and ASIR80 (24/38, 63.2%) than with FBP (20/38, 58.8%), albeit without statistical significance (P > 0.017). Mean number of FPs was significantly different among algorithms (FBP, 1.4; ASIR, 2.1; MBIR, 2.4) (P = 0.011). Conclusion: Although the performance of stand-alone CAD

  18. Parallel Algorithm for Reconstruction of TAC Images; Algoritmo Paralelo de Reconstruccion de Imagenes TAC

    Energy Technology Data Exchange (ETDEWEB)

    Vidal Gimeno, V.

    2012-07-01

    Algebraic reconstruction methods are based on solving a system of linear equations. In a previous study, the PETSc library, a scientific computing tool, was used and shown to facilitate and enable the optimal use of a computer system in the image reconstruction process.

  19. Modified reconstruction algorithm based on space-time adaptive processing for multichannel synthetic aperture radar systems in azimuth

    Science.gov (United States)

    Guo, Xiaojiang; Gao, Yesheng; Wang, Kaizhi; Liu, Xingzhao

    2016-07-01

    A spectrum reconstruction algorithm based on space-time adaptive processing (STAP) can effectively suppress azimuth ambiguity for multichannel synthetic aperture radar (SAR) systems in azimuth. However, the traditional STAP-based reconstruction approach has to estimate the covariance matrix and calculate matrix inversion (MI) for each Doppler frequency bin, which will result in a very large computational load. In addition, the traditional STAP-based approach has to know the exact platform velocity, pulse repetition frequency, and array configuration. Errors involving these parameters will significantly degrade the performance of ambiguity suppression. A modified STAP-based approach to solve these problems is presented. The traditional array steering vectors and corresponding covariance matrices are Doppler-variant in the range-Doppler domain. After preprocessing by a proposed phase compensation method, they would be independent of Doppler bins. Therefore, the modified STAP-based approach needs to estimate the covariance matrix and calculate MI only once. The computation load could be greatly reduced. Moreover, by combining the reconstruction method and a proposed adaptive parameter estimation method, the modified method is able to successfully achieve multichannel SAR signal reconstruction and suppress azimuth ambiguity without knowing the above parameters. Theoretical analysis and experiments showed the simplicity and efficiency of the proposed methods.

  1. Interior Design.

    Science.gov (United States)

    Texas Tech Univ., Lubbock. Home Economics Curriculum Center.

    This document contains teacher's materials for an eight-unit secondary education vocational home economics course on interior design. The units cover period styles of interiors, furniture and accessories, surface treatments and lighting, appliances and equipment, design and space planning in home and business settings, occupant needs, acquisition…

  2. Application of phase space reconstruction and v-SVR algorithm in predicting displacement of underground engineering surrounding rock

    Institute of Scientific and Technical Information of China (English)

    SHI Chao; CHEN Yi-feng; YU Zhi-xiong; YANG Kun

    2006-01-01

    A new method for predicting the trend of displacement evolution of surrounding rock was presented in this paper. According to the nonlinear characteristics of the displacement time series of underground engineering surrounding rock, and based on phase space reconstruction theory and the powerful nonlinear mapping ability of support vector machines, the information offered by the time series data sets was fully exploited and the nonlinearity of the displacement evolution system of surrounding rock was well described. The example suggests that the methods based on phase space reconstruction and the modified v-SVR algorithm are very accurate, and the study can help to build a displacement forecasting system to analyze the stability of underground engineering surrounding rock.
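
    The two ingredients named above, phase-space reconstruction by time-delay embedding and v-support-vector regression, can be combined in a few lines. The sketch below (embedding dimension, delay, SVR hyper-parameters and the synthetic displacement series are all assumptions for illustration) embeds a scalar displacement series and fits scikit-learn's NuSVR to predict the next sample.

        import numpy as np
        from sklearn.svm import NuSVR

        def delay_embed(series, dim, tau):
            """Phase-space reconstruction: each row holds `dim` delayed copies of the
            series spaced `tau` samples apart; the target is the next observation."""
            n = len(series) - (dim - 1) * tau - 1
            X = np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])
            y = series[(dim - 1) * tau + 1:(dim - 1) * tau + 1 + n]
            return X, y

        # --- synthetic monotonic displacement record with noise (assumed data) ---
        t = np.arange(400)
        disp = 12.0 * (1.0 - np.exp(-t / 150.0)) + 0.05 * np.random.default_rng(2).standard_normal(t.size)

        X, y = delay_embed(disp, dim=4, tau=3)
        split = 300
        model = NuSVR(nu=0.5, C=10.0, kernel="rbf", gamma="scale").fit(X[:split], y[:split])
        pred = model.predict(X[split:])
        print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))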

  3. Summer Student Project Report. Parallelization of the path reconstruction algorithm for the inner detector of the ATLAS experiment.

    CERN Document Server

    Maldonado Puente, Bryan Patricio

    2014-01-01

    The inner detector of the ATLAS experiment has two types of silicon detectors used for tracking: the Pixel Detector and the SCT (semiconductor tracker). Once a proton-proton collision occurs, the resulting particles pass through these detectors and are recorded as hits on the detector surfaces. A medium- to high-energy particle passes through seven different surfaces of the two detectors, leaving seven hits, while lower-energy particles can leave many more hits as they circle through the detector. For a typical event under the expected operational conditions, there are on average 30 000 hits recorded by the sensors. Only high-energy particles are of interest for physics analysis and are taken into account for the path reconstruction; thus, a filtering process helps to discard the low-energy particles produced in the collision. The following report presents a solution for increasing the speed of the filtering process in the path reconstruction algorithm.

  4. Use of a ray-based reconstruction algorithm to accurately quantify preclinical microSPECT images

    OpenAIRE

    Bert Vandeghinste; Roel Van Holen; Christian Vanhove; Filip De Vos; Stefaan Vandenberghe; Steven Staelens

    2014-01-01

    This work aimed to measure the in vivo quantification errors obtained when ray-based iterative reconstruction is used in micro single-photon emission computed tomography (SPECT). This was investigated with an extensive phantom-based evaluation and two typical in vivo studies using 99mTc and 111In, measured on a commercially available cadmium zinc telluride (CZT)-based small-animal scanner. Iterative reconstruction was implemented on the GPU using ray tracing, including (1) scatter correcti...

  5. State exact reconstruction for switched linear systems via a super-twisting algorithm

    Science.gov (United States)

    Bejarano, Francisco J.; Fridman, Leonid

    2011-05-01

    This article discusses the problem of state reconstruction synthesis for switched linear systems. Based only on the continuous output information, an observer is proposed ensuring the reconstruction of the entire state (continuous and discrete) in finite time. For the observer design an exact sliding mode differentiator is used, which allows the finite time convergence of the observer trajectories to the actual trajectories. The design scheme includes both cases: zero control input and nonzero control input. Simulations illustrate the effectiveness of the proposed observer.
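
    The exact sliding-mode differentiator underlying such observers is the super-twisting (Levant-type) differentiator, which is simple enough to sketch directly. The version below is a generic first-order differentiator, not the paper's full switched-system observer; the gains, the Lipschitz bound L and the test signal are assumptions for illustration.

        import numpy as np

        def super_twisting_differentiator(f, dt, L=10.0):
            """First-order sliding-mode (super-twisting) differentiator.

            z0 tracks the sampled signal f, z1 converges in finite time to its
            derivative provided |f''(t)| <= L. Explicit-Euler discretization, step dt.
            """
            lam0, lam1 = 1.5 * np.sqrt(L), 1.1 * L     # commonly used gain choice
            z0, z1 = f[0], 0.0
            deriv = np.empty_like(f)
            for k, fk in enumerate(f):
                e = z0 - fk
                v0 = -lam0 * np.sqrt(abs(e)) * np.sign(e) + z1
                z0 += dt * v0
                z1 += dt * (-lam1 * np.sign(e))
                deriv[k] = z1
            return deriv

        # --- test signal (assumed): derivative of sin(2t) should approach 2*cos(2t) ---
        dt = 1e-3
        t = np.arange(0.0, 5.0, dt)
        signal = np.sin(2.0 * t)
        est = super_twisting_differentiator(signal, dt, L=10.0)
        print("max error after transient:", np.max(np.abs(est[2000:] - 2.0 * np.cos(2.0 * t[2000:]))))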

  6. Decision making in double-pedicled DIEP and SIEA abdominal free flap breast reconstructions: An algorithmic approach and comprehensive classification.

    Directory of Open Access Journals (Sweden)

    Charles M Malata

    2015-10-01

    Full Text Available Introduction: The deep inferior epigastric artery perforator (DIEP) free flap is the gold standard for autologous breast reconstruction. However, using a single vascular pedicle may not yield sufficient tissue in patients with midline scars or insufficient lower abdominal pannus. Double-pedicled free flaps overcome this problem using different vascular arrangements to harvest the entire lower abdominal flap. The literature is, however, sparse regarding technique selection. We therefore reviewed our experience in order to formulate an algorithm and comprehensive classification for this purpose. Methods: All patients undergoing unilateral double-pedicled abdominal perforator free flap breast reconstruction (AFFBR) by a single surgeon (CMM) over 40 months were reviewed from a prospectively collected database. Results: Of the 112 consecutive breast free flaps performed, 25 (22%) utilised two vascular pedicles. The mean patient age was 45 years (range=27-54). All flaps but one (which used the thoracodorsal system) were anastomosed to the internal mammary vessels using the rib-preservation technique. The surgical duration was 656 minutes (range=468-690 mins). The median flap weight was 618g (range=432-1275g) and the mastectomy weight was 445g (range=220-896g). All flaps were successful and only three patients requested minor liposuction to reduce and reshape their reconstructed breasts. Conclusion: Bipedicled free abdominal perforator flaps, employed in a fifth of all our AFFBRs, are a reliable and safe option for unilateral breast reconstruction. They, however, necessitate clear indications to justify the additional technical complexity and surgical duration. Our algorithm and comprehensive classification facilitate technique selection for the anastomotic permutations and successful execution of these operations.

  7. Study of a reconstruction algorithm for electrons in the ATLAS experiment in LHC; Etude d'un algorithme de reconstruction des electrons dans l'experience Atlas aupres du LHC

    Energy Technology Data Exchange (ETDEWEB)

    Kerschen, N

    2006-09-15

    The ATLAS experiment is a general purpose particle physics experiment mainly aimed at the discovery of the origin of mass through the search for the Higgs boson. In order to achieve this, the Large Hadron Collider at CERN will accelerate two proton beams and make them collide at the centre of the experiment. ATLAS will discover new particles through the measurement of their decay products. Electrons are such decay products: they produce an electromagnetic shower in the calorimeter by which they lose all their energy. The calorimeter is divided into cells and the deposited energy is reconstructed using an algorithm to assemble the cells into clusters. The purpose of this thesis is to study a new kind of algorithm adapting the cluster to the shower topology. In order to reconstruct the energy of the initially created electron, the cluster has to be calibrated by taking into account the energy lost in the dead material in front of the calorimeter. Therefore, a Monte-Carlo simulation of the ATLAS detector has been used to correct for effects of response modulation in position and in energy and to optimise the energy resolution as well as the linearity. An analysis of test beam data has been performed to study the behaviour of the algorithm in a more realistic environment. We show that the requirements of the experiment can be met for the linearity and resolution. The improvement of this new algorithm, compared to a fixed-size cluster, is the better recovery of Bremsstrahlung photons emitted by the electron in the material in front of the calorimeter. A Monte-Carlo analysis of the Higgs boson decay into four electrons confirms this result. (author)

  9. An algorithm for the reconstruction of high-energy neutrino-induced particle showers and its application to the ANTARES neutrino telescope

    NARCIS (Netherlands)

    Albert, A.; Andre, M.; Anghinolfi, M.; Anton, G.; Ardid, M.; Aubert, J.-J.; Avgitas, T.; Baret, B.; Barrios-Marti, J.; Basa, S.; Bertin, V.; Biagi, S.; Bormuth, R.; Bourret, S.; Bouwhuis, M.C.; Bruijn, R.; Brunner, J.; Busto, J.; Capone, A.; Caramete, L.; Carr, J.; Celli, S.; Chiarusi, T.; Circella, M.; Coelho, C.O.A.; Coleiro, A.; Coniglione, R.; Costantini, H.; Coyle, P.; Creusot, A.; Deschamps, A.; De Bonis, G.; Distefano, C.; Di Palma, I.; Domi, A.; Donzaud, C.; Dornic, D.; Drouhin, D.; Eberl, T.; El Bojaddaini, I.; Elsässer, D.; Enzenhöfer, A.; Felis, I.; Folger, F.; Fusco, L.A.; Galatà, S.; Gay, P.; Giordano, V.; Glotin, H.; Grégoire, T.; Gracia-Ruiz, R.; Graf, K.; Hallmann, S.; van Haren, H.; Heijboer, A.J.; Hello, Y.; Hernández-Rey, J.J.; Hößl, J.; Hofestädt, J.; Hugon, C.; Illuminati, G.; James, C.W.; de Jong, M.; Jongen, M.; Kadler, M.; Kalekin, O.; Katz, U.; Kießling, D.; Kouchner, A.; Kreter, M.; Kreykenbohm, I.; Kulikovskiy, V.; Lachaud, C.; Lahmann, R.; Lefevre, D.; Leonora, E.; Lotze, M.; Loucatos, S.; Marcelin, M.; Margiotta, A.; Marinelli, A.; Martinez-Mora, J.A.; Mele, R.; Melis, K.; Michael, T.; Migliozzi, P.; Moussa, A.; Nezri, E.; Organokov, M.; Pavalas, G.E.; Pellegrino, C.; Perrina, C.; Piattelli, P.; Popa, V.; Pradier, T.; Quinn, L.; Racca, C.; Riccobene, G.; Sánchez-Losa, A.; Saldaña, M.; Salvadori, I.; Samtleben, D.F.E.; Sanguineti, M.; Sapienza, P.; Schussler, F.; Sieger, C.; Spurio, M.; Stolarczyk, T.; Taiuti, M.; Tayalati, Y.; Trovato, A.; Turpin, D.; Tönnis, C.; Vallage, B.; Van Elewyck, V.; Versari, F.; Vivolo, D.; Vizzoca, A.; Wilms, J.; Zornoza, J.D.; Zúñiga, J.

    2017-01-01

    A novel algorithm to reconstruct neutrino-induced particle showers within the ANTARES neutrino telescope is presented. The method achieves a median angular resolution of 6∘ for shower energies below 100 TeV. Applying this algorithm to 6 years of data taken with the ANTARES detector, 8 events with

  10. Iterative image reconstruction algorithms in coronary CT angiography improve the detection of lipid-core plaque - a comparison with histology

    Energy Technology Data Exchange (ETDEWEB)

    Puchner, Stefan B. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Vienna (Austria); Ferencik, Maros [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Harvard Medical School, Division of Cardiology, Massachusetts General Hospital, Boston, MA (United States); Maurovich-Horvat, Pal [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Semmelweis University, MTA-SE Lenduelet Cardiovascular Imaging Research Group, Heart and Vascular Center, Budapest (Hungary); Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu [CV Path Institute Inc., Gaithersburg, MD (United States); Kauczor, Hans-Ulrich [University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany); Hoffmann, Udo [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Schlett, Christopher L. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany)

    2015-01-15

    To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was semiautomatically quantified in CCTA. LCP was defined by histology as fibroatheroma with a large lipid/necrotic core. Area under the curve (AUC) was derived from logistic regression analysis as a measure of diagnostic accuracy. Overall, 173 CCTA triplets (FBP/ASIR/MBIR) were co-registered with histology. LCP was present in 26 cross-sections. Average measured plaque area <60 HU was significantly larger in LCP compared to non-LCP cross-sections (mm²: 5.78 ± 2.29 vs. 3.39 ± 1.68 FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 MBIR; all p < 0.0001). AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)

  11. An Improved MC Three-Dimensional Reconstruction Algorithm

    Institute of Scientific and Technical Information of China (English)

    帅仁俊; 陈书晶

    2016-01-01

    The traditional Marching Cubes algorithm suffers from long computation times and low efficiency during three-dimensional reconstruction. This paper proposes a Marching Cubes algorithm based on the golden-section point, which uses the golden-section point of each edge in place of the intersection of the isosurface with the edge used in the standard Marching Cubes algorithm. This turns the linear-interpolation calculation of the edge intersection and its normal vector into basic arithmetic operations, and reduces the number of such calculations from four to one. Experiments show that the algorithm effectively reduces the computation time and improves the execution efficiency of the algorithm.
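
    The modification described above replaces the per-edge linear interpolation of the isosurface crossing with a fixed golden-section point of the edge, so the per-edge work reduces to a constant-coefficient combination of the two endpoints. A minimal illustration of the two rules (this is one reading of the abstract; the orientation convention is an assumption) follows.

        import numpy as np

        GOLDEN = (np.sqrt(5.0) - 1.0) / 2.0   # ~0.618

        def edge_vertex_linear(p0, p1, v0, v1, iso):
            """Standard Marching Cubes rule: interpolate the isosurface crossing."""
            t = (iso - v0) / (v1 - v0)
            return p0 + t * (p1 - p0)

        def edge_vertex_golden(p0, p1, v0, v1, iso):
            """Golden-section rule: place the vertex at the fixed golden-section point
            of the edge, measured from the endpoint lying below the isovalue."""
            if v0 <= iso:                      # p0 is the 'inside/below' endpoint
                return p0 + GOLDEN * (p1 - p0)
            return p1 + GOLDEN * (p0 - p1)

        p0, p1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
        v0, v1, iso = 0.2, 0.9, 0.5
        print("linear :", edge_vertex_linear(p0, p1, v0, v1, iso))
        print("golden :", edge_vertex_golden(p0, p1, v0, v1, iso))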

  12. MREIT experiments with 200μA injected currents: a feasibility study using two reconstruction algorithms, SMM and Harmonic BZ

    Science.gov (United States)

    Arpinar, V E; Hamamura, M J; Degirmenci, E; Muftuler, L T

    2012-01-01

    Magnetic resonance electrical impedance tomography (MREIT) is a technique that produces images of conductivity in tissues and phantoms. In this technique electrical currents are applied to an object, the resulting magnetic flux density is measured using magnetic resonance imaging (MRI), and the conductivity distribution is reconstructed from these MRI data. Currently the technique is used in research environments, primarily studying phantoms and animals. In order to translate MREIT to clinical applications, strict safety standards need to be established, especially for safe current limits. However, there are currently no standards for safe current limits specific to MREIT. Until such standards are established, human MREIT applications need to conform to existing electrical safety standards in medical instrumentation, such as IEC601. This protocol limits patient auxiliary currents to 100 μA for low frequencies. However, published MREIT studies have utilized currents 10 to 400 times larger than this limit, bringing into question whether the clinical applications of MREIT are attainable under current standards. In this study, we investigated the feasibility of MREIT to accurately reconstruct the relative conductivity of a simple agarose phantom using 200 μA total injected current, and we tested the performance of two MREIT reconstruction algorithms. The reconstruction algorithms used are the iterative sensitivity matrix method (SMM) by Ider and Birgul in 1998 with Tikhonov regularization, and the Harmonic BZ method proposed by Oh et al in 2003. The reconstruction techniques were tested at both 200 μA and 5 mA injected currents to investigate their noise sensitivity at low and high current conditions. It should be noted that 200 μA total injected current into a cylindrical phantom generates only 14.7 μA of current in the imaging slice. Similarly, 5 mA total injected current results in 367 μA in the imaging slice. Total acquisition times for the 200 μA and 5 mA experiments were about one
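
    The sensitivity matrix method referred to above is, at its core, a regularized linearized inversion: a Jacobian relating conductivity changes to changes in the measured Bz is inverted with Tikhonov regularization at each iteration. A generic one-step sketch of that update (the Jacobian, data and regularization weight are random placeholders, not MREIT-specific quantities) is:

        import numpy as np

        def tikhonov_update(jacobian, residual, lam):
            """One Gauss-Newton/Tikhonov step:  (J^T J + lam*I) d_sigma = J^T residual."""
            n = jacobian.shape[1]
            lhs = jacobian.T @ jacobian + lam * np.eye(n)
            return np.linalg.solve(lhs, jacobian.T @ residual)

        # --- placeholder problem: 200 Bz measurements, 50 conductivity unknowns ---
        rng = np.random.default_rng(3)
        J = rng.standard_normal((200, 50))
        true_delta = np.zeros(50)
        true_delta[10] = 1.0
        bz_residual = J @ true_delta + 0.05 * rng.standard_normal(200)   # measured - computed Bz

        delta_sigma = tikhonov_update(J, bz_residual, lam=1.0)
        print("largest recovered element:", int(np.argmax(np.abs(delta_sigma))))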

  13. Axial 3D region of interest reconstruction using weighted cone beam BPF/DBPF algorithm cascaded with adequately oriented orthogonal butterfly filtering

    Science.gov (United States)

    Tang, Shaojie; Tang, Xiangyang

    2016-03-01

    Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As the potential candidates with analytic form for the task, the back projection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via adoption of virtual PI-line segments. Unfortunately, however, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPP/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by post-BP Hilbert transform can be eliminated, at a possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPP/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in region of interest (ROI).

  14. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small-field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high-density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm(2)). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
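
    The iterative core of an MLEM reconstruction, stripped of the ray tracing and point-spread-function handling described above, is the familiar multiplicative update s <- s * A^T(m / (A s)) / A^T 1. A toy sketch with a random system matrix (the matrix, measurements and iteration count are placeholders, not the paper's geometry) is:

        import numpy as np

        def mlem(system_matrix, measurements, n_iter=200):
            """Maximum-likelihood expectation-maximization for s >= 0 with m ~ Poisson(A s)."""
            A = system_matrix
            sensitivity = A.sum(axis=0)                    # A^T * 1
            estimate = np.ones(A.shape[1])
            for _ in range(n_iter):
                forward = A @ estimate
                ratio = measurements / np.maximum(forward, 1e-12)
                estimate *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
            return estimate

        # --- placeholder problem: 120 detector bins, 40 source-plane pixels ---
        rng = np.random.default_rng(4)
        A = rng.uniform(0.0, 1.0, size=(120, 40))
        true_source = np.zeros(40)
        true_source[18:23] = [1.0, 2.0, 3.0, 2.0, 1.0]     # narrow 'focal spot'
        m = rng.poisson(A @ true_source * 50.0)

        recon = mlem(A, m / 50.0, n_iter=200)
        print("estimated peak position:", int(np.argmax(recon)))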

  15. A 15-year review of midface reconstruction after total and subtotal maxillectomy: part I. Algorithm and outcomes.

    Science.gov (United States)

    Cordeiro, Peter G; Chen, Constance M

    2012-01-01

    Reconstruction of complex midfacial defects is best approached with a clear algorithm. The goals of reconstruction are functional and aesthetic. Over a 15-year period (1992 to 2006), a single surgeon (P.G.C.) performed 100 flaps to reconstruct the following midfacial defects: type I, limited maxillectomy (n = 20); type IIA, subtotal maxillectomy with resection of less than 50 percent of the palate (n = 8); type IIB, subtotal maxillectomy with resection of greater than 50 percent of the palate (n = 8); type IIIA, total maxillectomy with preservation of the orbital contents (n = 22); type IIIB, total maxillectomy with orbital exenteration (n = 23); and type IV, orbitomaxillectomy (n = 19). Free flaps were used in 94 cases (94 percent), and pedicled flaps were used in six (6 percent). One hundred flaps were performed in 96 patients (69 males, 72 percent; 27 females, 28 percent); four patients underwent a second flap reconstruction due to recurrent disease (n = 4, 4 percent). Average patient age was 49.2 years (range, 13 to 81 years). Free-flap survival was 100 percent, with one partial flap loss (1 percent). Five patients suffered systemic complications (5.2 percent), and four died within 30 days of hospitalization (4.2 percent). Over 50 percent of patients returned to normal diet and speech. Almost 60 percent were judged to have an excellent aesthetic result. Free-tissue transfer offers the most effective and reliable form of reconstruction for complex maxillectomy defects. Rectus abdominis and radial forearm free flaps in combination with immediate bone grafting or as osteocutaneous flaps consistently provide the best functional and aesthetic results. Therapeutic, IV.

  16. Clinical Application of Iterative Reconstruction Algorithm in CT

    Institute of Scientific and Technical Information of China (English)

    陆秀良; 曾蒙苏

    2012-01-01

    This paper reviews the history and classification of the main commercial iterative reconstruction algorithms. As the reconstruction algorithm is one of the main factors affecting image quality, several widely used commercial algorithms are compared with respect to their theoretical basis and their diverse clinical applications, aiming to guide radiologists and technicians in clinical practice to improve image quality while keeping the radiation dose as low as possible.

  17. A nonlinear fuzzy assisted image reconstruction algorithm for electrical capacitance tomography.

    Science.gov (United States)

    Deabes, W A; Abdelrahman, M A

    2010-01-01

    A nonlinear method based on a Fuzzy Inference System (FIS) to improve the images obtained from Electrical Capacitance Tomography (ECT) is proposed. Estimation of the molten metal characteristic in the Lost Foam Casting (LFC) process is a novel application in the area of the tomography process. The convergence rate of iterative image reconstruction techniques is dependent on the accuracy of the first image. The possibility of the existence of metal in the first image is computed by the proposed fuzzy system. This first image is passed to an iterative image reconstruction technique to get more precise images and to speed up the convergence rate. The proposed technique is able to detect the position of the metal on the periphery of the imaging area by using just eight capacitive sensors. The final results demonstrate the advantage of using the FIS compared to the performance of the iterative back projection image reconstruction technique.

  18. A New Full-Newton Step Interior-point Algorithm for Convex Quadratic Semi-definite Programming

    Institute of Scientific and Technical Information of China (English)

    李鑫; 季萍; 张明望

    2015-01-01

    In this paper, we propose a new full-Newton step primal-dual interior-point algorithm for solving convex quadratic semi-definite programming. By establishing and using new technical results, we show that the iteration complexity of the algorithm is O(√n log(n/ε)), which matches the currently best known iteration complexity for small-update interior-point algorithms for convex quadratic semi-definite programming.

  19. Interior-point methods

    Science.gov (United States)

    Potra, Florian A.; Wright, Stephen J.

    2000-12-01

    The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semi-definite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semi-definite programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.

  20. Maximum-entropy expectation-maximization algorithm for image reconstruction and sensor field estimation.

    Science.gov (United States)

    Hong, Hunsop; Schonfeld, Dan

    2008-06-01

    In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm. We use the proposed algorithm for density estimation. The maximum-entropy constraint is imposed for smoothness of the estimated density function. The derivation of the MEEM algorithm requires determination of the covariance matrix in the framework of the maximum-entropy likelihood function, which is difficult to solve analytically. We, therefore, derive the MEEM algorithm by optimizing a lower-bound of the maximum-entropy likelihood function. We note that the classical expectation-maximization (EM) algorithm has been employed previously for 2-D density estimation. We propose to extend the use of the classical EM algorithm for image recovery from randomly sampled data and sensor field estimation from randomly scattered sensor networks. We further propose to use our approach in density estimation, image recovery and sensor field estimation. Computer simulation experiments are used to demonstrate the superior performance of the proposed MEEM algorithm in comparison to existing methods.
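
    The MEEM algorithm builds on classical EM for mixture-model density estimation, and the baseline EM loop that the maximum-entropy constraint modifies is easy to state. The sketch below is plain EM for a 1-D two-component Gaussian mixture, with no maximum-entropy term; the data and initialization are assumptions for illustration.

        import numpy as np

        def gaussian_pdf(x, mu, var):
            return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

        def em_gmm_1d(x, n_components=2, n_iter=100, seed=0):
            """Classical EM for a 1-D Gaussian mixture (the baseline that MEEM regularizes)."""
            rng = np.random.default_rng(seed)
            weights = np.full(n_components, 1.0 / n_components)
            means = rng.choice(x, n_components, replace=False)
            variances = np.full(n_components, np.var(x))
            for _ in range(n_iter):
                # E-step: responsibility of each component for each sample
                resp = weights * gaussian_pdf(x[:, None], means, variances)
                resp /= resp.sum(axis=1, keepdims=True)
                # M-step: re-estimate mixture parameters from the responsibilities
                nk = resp.sum(axis=0)
                weights = nk / x.size
                means = (resp * x[:, None]).sum(axis=0) / nk
                variances = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk
            return weights, means, variances

        # --- synthetic data from a two-component mixture (assumed) ---
        rng = np.random.default_rng(5)
        x = np.concatenate([rng.normal(-2.0, 0.5, 400), rng.normal(1.5, 1.0, 600)])
        print(em_gmm_1d(x))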

  1. Backbone building from quadrilaterals: a fast and accurate algorithm for protein backbone reconstruction from alpha carbon coordinates.

    Science.gov (United States)

    Gront, Dominik; Kmiecik, Sebastian; Kolinski, Andrzej

    2007-07-15

    In this contribution, we present an algorithm for protein backbone reconstruction that combines very high computational efficiency with high accuracy. Reconstruction of the main chain atomic coordinates from the alpha carbon trace is a common task in protein modeling, including de novo structure prediction, comparative modeling, and processing of experimental data. The method employed in this work follows the main idea of some earlier approaches to the problem. The details and careful design of the present approach are new and lead to an algorithm that outperforms all commonly used earlier applications. The BBQ (Backbone Building from Quadrilaterals) program has been extensively tested both on native structures and on near-native decoy models, and compared with the different available existing methods. The obtained results provide a comprehensive benchmark of existing tools and evaluate their applicability to large-scale modeling using a reduced representation of protein conformational space. The BBQ package is available for downloading from our website at http://biocomp.chem.uw.edu.pl/services/BBQ/. This webpage also provides a user manual that describes BBQ functions in detail.

  2. Computed Tomography Image Origin Identification based on Original Sensor Pattern Noise and 3D Image Reconstruction Algorithm Footprints.

    Science.gov (United States)

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2016-06-08

    In this paper, we focus on the "blind" identification of the Computed Tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-Scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT-Scanner based on an Original Sensor Pattern Noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its 3D image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train an SVM based classifier so as to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-Scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than Sensor Pattern Noise (SPN) based strategy proposed for general public camera devices.

  3. A platform for Image Reconstruction in X-ray Imaging: Medical Applications using CBCT and DTS algorithms

    Directory of Open Access Journals (Sweden)

    Zacharias Kamarianakis

    2014-07-01

    Full Text Available This paper presents the architecture of a software platform implemented in C++ for the purpose of testing and evaluating reconstruction algorithms in X-ray imaging. The fundamental elements of the platform are classes, tied together in a logical hierarchy. Real-world objects such as an X-ray source or a flat detector can be defined and implemented as instances of corresponding classes. Various operations (e.g. 3D transformations, loading, saving and filtering of images, creation of planar or curved objects of various dimensions) have been incorporated into the software tool as class methods as well. The user can easily set up any arrangement of the imaging chain objects in 3D space and experiment with many different trajectories and configurations. Selected 3D volume reconstructions using simulated data acquired in specific scanning trajectories are used as a demonstration of the tool. The platform is considered a basic tool for future investigations of new reconstruction methods in combination with various scanning configurations.

  4. Effect of anatomical variability, reconstruction algorithms and scattered photons on the SPM output of brain PET studies.

    Science.gov (United States)

    Aguiar, P; Pareto, D; Gispert, J D; Crespo, C; Falcón, C; Cot, A; Lomeña, F; Pavía, J; Ros, D

    2008-02-01

    Statistical parametric mapping (SPM) has become the standard technique to statistically evaluate differences between functional images. The aim of this paper was to assess the effect of the anatomical variability of the skull, the reconstruction algorithm and the scattering of photons in the brain on the output of an SPM analysis of brain PET studies. To this end, Monte Carlo simulation was used to generate suitable PET sinograms and bootstrap techniques were employed to increase the reliability of the conclusions. Activity distribution maps were obtained by segmenting thirty-nine T1-weighted magnetic resonance images. Foci were placed on the posterior cingulate cortex (PCC) and the superior temporal cortex (STC) and activation factors ranging between -25% and +25% were simulated. Preprocessing of the reconstructed images and statistical analysis were performed using SPM2. Our findings show that intersubject anatomical differences can cause the minimum sample size to increase by between 10 and 42% for the posterior cingulate cortex and between 40 and 80% for the superior temporal cortex. Ideal scatter correction (ISC) allowed us to diminish the sample size by up to 18%, and fully 3D reconstruction reduced the minimum sample size by between 8 and 33%. Detection sensitivity was higher for hypo-activation than for hyper-activation situations, and higher for the superior temporal cortex than for the posterior cingulate cortex.

  5. Research on object-plane constraints and hologram expansion in phase retrieval algorithms for continuous-wave terahertz inline digital holography reconstruction.

    Science.gov (United States)

    Hu, Jiaqi; Li, Qi; Cui, Shanshan

    2014-10-20

    In terahertz inline digital holography, zero-order diffraction light and conjugate images can cause the reconstructed image to be blurred. In this paper, three phase retrieval algorithms are applied to conduct reconstruction based on the same near-field diffraction propagation conditions and image-plane constraints. The impact of different object-plane constraints on CW terahertz inline digital holographic reconstruction is studied. The results show that it is not appropriate in the phase retrieval algorithm to impose a restriction on the phase when the object is not isolated in transmission-type CW terahertz inline digital holography. In addition, the effects of zero-padding expansion, boundary-replication expansion, and the apodization operation on the reconstructed images are studied. The results indicate that the conjugate image can be eliminated, and a better reconstructed image can be obtained, by adopting an appropriate phase retrieval algorithm after extending the normalized hologram to the minimum area that satisfies the applicability conditions of the angular spectrum reconstruction algorithm by means of boundary replication.
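
    The angular spectrum propagation that these phase retrieval iterations rely on is itself a compact FFT operation. A minimal sketch of the propagation step at a terahertz wavelength (the wavelength, pixel pitch and distance are assumptions for illustration, and evanescent components are simply suppressed) is:

        import numpy as np

        def angular_spectrum_propagate(field, wavelength, pitch, distance):
            """Propagate a complex field by `distance` using the angular spectrum method."""
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=pitch)
            fy = np.fft.fftfreq(ny, d=pitch)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
            kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))     # evanescent waves get zero phase
            transfer = np.exp(1j * kz * distance) * (arg > 0)    # and are suppressed in amplitude
            return np.fft.ifft2(np.fft.fft2(field) * transfer)

        # --- toy hologram geometry (assumed): ~2.52 THz line, 0.1 mm pixels, 30 mm distance ---
        wavelength, pitch, z = 118.8e-6, 100e-6, 30e-3
        aperture = np.zeros((256, 256), dtype=complex)
        aperture[96:160, 96:160] = 1.0                           # square object transmission
        at_detector = angular_spectrum_propagate(aperture, wavelength, pitch, z)
        back_at_object = angular_spectrum_propagate(at_detector, wavelength, pitch, -z)
        print("round-trip max amplitude error:", np.max(np.abs(np.abs(back_at_object) - np.abs(aperture))))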

  6. An ordered-subsets proximal preconditioned gradient algorithm for edge-preserving PET image reconstruction

    NARCIS (Netherlands)

    Mehranian, Abolfazl; Rahmim, Arman; Ay, Mohammad Reza; Kotasidis, Fotis; Zaidi, Habib

    Purpose: In iterative positron emission tomography (PET) image reconstruction, the statistical variability of the PET data precorrected for random coincidences or acquired in sufficiently high count rates can be properly approximated by a Gaussian distribution, which can lead to a penalized weighted

  7. Piecewise-Constant-Model-Based Interior Tomography Applied to Dentin Tubules

    Directory of Open Access Journals (Sweden)

    Peng He

    2013-01-01

    Full Text Available Dentin is a hierarchically structured biomineralized composite material, and dentin's tubules are difficult to study in situ. Nano-CT provides the requisite resolution, but the field of view typically contains only a few tubules. Using a plate-like specimen allows reconstruction of a volume containing specific tubules from a number of truncated projections typically collected over an angular range of about 140°, which is practically accessible. Classical computed tomography (CT) theory cannot exactly reconstruct an object from truncated projections alone, let alone from a limited angular range. Recently, interior tomography was developed to reconstruct a region-of-interest (ROI) from truncated data in a theoretically exact fashion via total variation (TV) minimization, under the condition that the ROI is piecewise constant. In this paper, we employ a TV-minimization interior tomography algorithm to reconstruct interior microstructures in dentin from truncated projections over a limited angular range. Compared to the filtered backprojection (FBP) reconstruction, our reconstruction method reduces noise and suppresses artifacts. Volume rendering confirms the merits of our method in terms of preserving the interior microstructure of the dentin specimen.

  8. A Practical Reconstruction Algorithm in 2D Industrial CT

    Institute of Scientific and Technical Information of China (English)

    李慧; 田捷; 张兆田

    2005-01-01

    Computed tomography plays an important role in industrial non-destructive testing, medical applications, astronomy and many other fields, making it possible to look inside the scanned object and to analyse its inner structures. Non-destructive testing software has been developed to efficiently detect inner flaws of space-industry components. As the core of our software, the reconstruction algorithms, including preprocessing of the raw data, a re-arrangement (rebinning) algorithm and filtered back-projection, are described in detail in this article. Experimental results with real raw data from CASC of China verified the reconstruction algorithms applied in our software. Furthermore, forward algorithms simulating the generation of fan-beam raw data are also presented in this article.
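
    Once fan-beam data have been rebinned to parallel geometry, the core of the reconstruction is ramp filtering of each projection followed by backprojection. A compact NumPy sketch of that parallel-beam FBP core (the phantom, sampling and scaling are assumptions for illustration, not the software described above) is:

        import numpy as np

        def fbp_parallel(sinogram, angles_deg, size):
            """Filtered back-projection for parallel-beam data.

            sinogram : (n_angles, n_detectors) projections, detector axis centred on the
                       rotation axis. Returns a (size, size) reconstruction.
            """
            n_det = sinogram.shape[1]
            # ramp filter applied per projection in the Fourier domain
            ramp = np.abs(np.fft.fftfreq(n_det))
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
            # backprojection
            coords = np.arange(size) - (size - 1) / 2.0
            X, Y = np.meshgrid(coords, coords)
            det_axis = np.arange(n_det) - (n_det - 1) / 2.0
            recon = np.zeros((size, size))
            for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
                t = X * np.cos(theta) + Y * np.sin(theta)   # detector coordinate of each pixel
                recon += np.interp(t, det_axis, proj, left=0.0, right=0.0)
            return recon * np.pi / len(angles_deg)

        # --- toy phantom and a crude pixel-driven forward projection (assumed geometry) ---
        size, n_angles = 64, 90
        phantom = np.zeros((size, size))
        phantom[20:44, 28:36] = 1.0
        angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)

        coords = np.arange(size) - (size - 1) / 2.0
        X, Y = np.meshgrid(coords, coords)
        sino = np.zeros((n_angles, size))
        for i, theta in enumerate(np.deg2rad(angles)):
            t = X * np.cos(theta) + Y * np.sin(theta)
            idx = np.clip(np.round(t - coords[0]).astype(int), 0, size - 1)
            np.add.at(sino[i], idx.ravel(), phantom.ravel())

        recon = fbp_parallel(sino, angles, size)
        print("correlation with phantom:", np.corrcoef(recon.ravel(), phantom.ravel())[0, 1])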

  9. A very fast iterative algorithm for TV-regularized image reconstruction with applications to low-dose and few-view CT

    CERN Document Server

    Kudo, Hiroyuki; Nemoto, Takuya; Takaki, Keita

    2016-01-01

    This paper concerns iterative reconstruction for low-dose and few-view CT by minimizing a data-fidelity term regularized with the Total Variation (TV) penalty. We propose a very fast iterative algorithm to solve this problem. The algorithm derivation is outlined as follows. First, the original minimization problem is reformulated into a saddle point (primal-dual) problem by using Lagrangian duality, to which we apply first-order primal-dual iterative methods. Second, we precondition the iteration formula using the ramp filter of the Filtered Backprojection (FBP) reconstruction algorithm in such a way that the problem solution is not altered. The resulting algorithm resembles the structure of the so-called iterative FBP algorithm, and it converges to the exact minimizer of the cost function very fast.
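
    The saddle-point reformulation mentioned above is the standard first-order primal-dual (Chambolle-Pock) setup. The sketch below applies that iteration to its simplest instance, TV-regularized denoising, i.e. with an identity projector and without the ramp-filter preconditioning; the step sizes, regularization weight and test image are assumptions for illustration.

        import numpy as np

        def grad(u):
            """Forward-difference gradient with Neumann boundary, shape (2, H, W)."""
            gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
            gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
            return np.stack([gx, gy])

        def div(p):
            """Negative adjoint of grad, so that <grad u, p> = -<u, div p>."""
            px, py = p
            dx = np.zeros_like(px); dx[0] = px[0]; dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
            dy = np.zeros_like(py); dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
            return dx + dy

        def tv_denoise_primal_dual(noisy, weight=0.15, n_iter=200):
            """Chambolle-Pock iteration for  min_u 0.5*||u - noisy||^2 + weight*TV(u)."""
            tau = sigma = 1.0 / np.sqrt(8.0)          # tau * sigma * ||grad||^2 <= 1
            u = noisy.copy(); u_bar = u.copy()
            p = np.zeros((2,) + noisy.shape)
            for _ in range(n_iter):
                # dual ascent, then pointwise projection onto the ball of radius `weight`
                p += sigma * grad(u_bar)
                norm = np.maximum(1.0, np.sqrt((p ** 2).sum(axis=0)) / weight)
                p /= norm
                # primal descent, then proximal map of the quadratic data term
                u_old = u
                u = (u + tau * div(p) + tau * noisy) / (1.0 + tau)
                u_bar = 2.0 * u - u_old
            return u

        rng = np.random.default_rng(6)
        clean = np.zeros((96, 96)); clean[24:72, 24:72] = 1.0
        noisy = clean + 0.2 * rng.standard_normal(clean.shape)
        denoised = tv_denoise_primal_dual(noisy)
        print("RMSE noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)))
        print("RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))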

  10. Possibilities and limitations of the ART-Sample algorithm for reconstruction of 3D temperature fields and the influence of opaque obstacles.

    Science.gov (United States)

    Li, Yuanyang; Herman, Cila

    2013-07-01

    The need for the measurement of complex, unsteady, three-dimensional (3D) temperature distributions arises in a variety of engineering applications, and tomographic techniques are applied to accomplish this goal. Holographic interferometry (HI), one of the optical methods used for visualizing temperature fields, combined with tomographic reconstruction techniques requires multi-directional interferometric data to recover the 3D information. However, the presence of opaque obstacles (such as solid objects in the flow field and heaters) in the measurement volume, prevents the probing light beams from traversing the entire measurement volume. As a consequence, information on the average value of the field variable will be lost in regions located in the shade of the obstacle. The capability of the ART-Sample tomographic reconstruction method to recover 3D temperature distributions both in unobstructed temperature fields and in the presence of opaque obstacles is discussed in this paper. A computer code for tomographic reconstruction of 3D temperature fields from 2D projections was developed. In the paper, the reconstruction accuracy is discussed quantitatively both without and with obstacles in the measurement volume for a set of phantom functions mimicking realistic temperature distributions. The reconstruction performance is optimized while minimizing the number of irradiation directions (experimental hardware requirements) and computational effort. For the smooth temperature field both with and without obstacles, the reconstructions produced by this algorithm are good, both visually and using quantitative criteria. The results suggest that the location and the size of the obstacle and the number of viewing directions will affect the reconstruction of the temperature field. When the best performance parameters of the ART-Sample algorithm identified in this paper are used to reconstruct the 3D temperature field, the 3D reconstructions with and without obstacle are

  11. The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution neuroreceptor PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Angelis, G I; Kotasidis, F A; Matthews, J C [Imaging, Proteomics and Genomics, MAHSC, University of Manchester, Wolfson Molecular Imaging Centre, Manchester (United Kingdom); Reader, A J [Montreal Neurological Institute, McGill University, Montreal (Canada); Lionheart, W R, E-mail: georgios.angelis@mmic.man.ac.uk [School of Mathematics, University of Manchester, Alan Turing Building, Manchester (United Kingdom)

    2011-07-07

    Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [11C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice
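
    For reference, the EM-type update that OSEM and the gradient-based methods above aim to accelerate can be written in a few lines. The sketch below shows a basic MLEM update and its ordered-subsets variant for a generic dense system matrix; it is a simplified illustration (no corrections for randoms, scatter or attenuation), and the matrix `A` and data `y` are placeholders.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Basic MLEM update: x <- x / (A^T 1) * A^T (y / (A x)).

    A : (n_bins, n_voxels) system matrix, y : measured counts per bin.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = y / np.maximum(proj, eps)       # guard against empty bins
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

def osem(A, y, n_subsets=4, n_iter=10, eps=1e-12):
    """Ordered-subsets variant: apply the MLEM update subset by subset."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            proj = As @ x
            sens_s = As.T @ np.ones(len(idx))
            x *= (As.T @ (ys / np.maximum(proj, eps))) / np.maximum(sens_s, eps)
    return x
```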

  12. Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector

    CERN Document Server

    Rescigno, R; Juliani, D; Spiriti, E; Baudot, J; Abou-Haidar, Z; Agodi, C; Alvarez, M A G; Aumann, T; Battistoni, G; Bocci, A; Böhlen, T T; Boudard, A; Brunetti, A; Carpinelli, M; Cirrone, G A P; Cortes-Giraldo, M A; Cuttone, G; De Napoli, M; Durante, M; Gallardo, M I; Golosio, B; Iarocci, E; Iazzi, F; Ickert, G; Introzzi, R; Krimmer, J; Kurz, N; Labalme, M; Leifels, Y; Le Fevre, A; Leray, S; Marchetto, F; Monaco, V; Morone, M C; Oliva, P; Paoloni, A; Patera, V; Piersanti, L; Pleskac, R; Quesada, J M; Randazzo, N; Romano, F; Rossi, D; Rousseau, M; Sacchi, R; Sala, P; Sarti, A; Scheidenberger, C; Schuy, C; Sciubba, A; Sfienti, C; Simon, H; Sipala, V; Tropea, S; Vanstalle, M; Younis, H

    2014-01-01

    Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumours and the surrounding healthy tissues. Nowadays, only a limited set of double differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on CMOS technology. The performance achieved using this device for hadrontherapy purposes is discussed. For each reconstruction step (clustering, tracking and vertexing), different...

  13. Multiple Sparse Measurement Gradient Reconstruction Algorithm for DOA Estimation in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Weijian Si

    2015-01-01

    Full Text Available A novel direction of arrival (DOA) estimation method in compressed sensing (CS) is proposed, in which the DOA estimation problem is cast as joint sparse reconstruction from multiple measurement vectors (MMV). The proposed method is derived by transforming quadratically constrained linear programming (QCLP) into unconstrained convex optimization, which overcomes the drawback that the l1-norm is nondifferentiable when sparse sources are reconstructed by minimizing the l1-norm. The convergence rate and estimation performance of the proposed method are significantly improved, since steepest descent steps and Barzilai-Borwein steps are used alternately as the search step in the unconstrained convex optimization. The proposed method obtains satisfactory performance especially in scenarios with low signal-to-noise ratio (SNR), a small number of snapshots, or coherent sources. Simulation results show the superior performance of the proposed method as compared with existing methods.
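
    The alternation of steepest-descent and Barzilai-Borwein step lengths can be shown on a simple unconstrained least-squares surrogate. The sketch below is a generic illustration of that step-size strategy, not the authors' MMV formulation; the quadratic objective and the alternation schedule are simplifying assumptions.

```python
import numpy as np

def alternating_sd_bb(A, b, n_iter=100):
    """Minimize 0.5*||Ax - b||^2 with gradient steps, alternating an exact
    steepest-descent (Cauchy) step length with a Barzilai-Borwein step length."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    x_prev, g_prev = x.copy(), g.copy()
    for k in range(n_iter):
        if k % 2 == 0:
            Ag = A @ g
            alpha = (g @ g) / max(Ag @ Ag, 1e-30)   # Cauchy step for the quadratic
        else:
            s, v = x - x_prev, g - g_prev
            alpha = (s @ s) / max(s @ v, 1e-30)     # BB1 step length
        x_prev, g_prev = x.copy(), g.copy()
        x = x - alpha * g
        g = A.T @ (A @ x - b)
    return x
```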

  14. A Paleogenomic Algorithm for Reconstruction of Ancient Operons from Complete Microbial Genome Sequences

    Institute of Scientific and Technical Information of China (English)

    WANG Yu-hong; LI Wei; FANG Xue-xun; John P. Rose; WANG Bi-Cheng; LIN Da-wei

    2004-01-01

    Operons, or co-transcribed and co-regulated contiguous sets of genes, in microbial genomes are poorly conserved across different genomes due to gene fusion, deletion, duplication and other genome shuffling processes. The currently available genomes are the results of numerous reshuffling and acceptance iterations. We hypothesized that in ancient times, when life was more primitive, functionally related genes existed in close proximity and operated together as an operon to simplify regulation. As more sophisticated regulation mechanisms became available during evolution, the genes forming an operon could be separated by the above-mentioned processes. If gene shuffling is a random event, neighbor gene pairs are more likely to be preserved than distant gene pairs. Thus, if enough gene pairs can be identified, the original operon could be reconstructed by assembling the pairs. Here we propose a novel paleogenomic method to reconstruct present-day neighbor gene pairs into "ancient" operons that possibly existed at some point during evolution.

  15. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Ming [School of Mathematics and System Science, Shandong University of Science and Technology, Qingdao, Shandong 265590, China and Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, Massachusetts 01854 (United States); Yu, Hengyong, E-mail: hengyong-yu@ieee.org [Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, Massachusetts 01854 (United States)

    2015-10-15

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  16. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET

    OpenAIRE

    Mikhaylova, E.; Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-01-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge. Therefore...

  17. Simultaneous reconstruction of temperature distribution and radiative properties in participating media using a hybrid LSQR PSO algorithm

    Institute of Scientific and Technical Information of China (English)

    牛春洋; 齐宏; 黄兴; 阮立明; 王伟; 谈和平

    2015-01-01

    A hybrid least-square QR decomposition (LSQR)-particle swarm optimization (LSQR-PSO) algorithm was developed to estimate the three-dimensional (3D) temperature distributions and absorption coefficients simultaneously. The outgoing radiative intensities at the boundary surface of the absorbing media were simulated by the line-of-sight (LOS) method, which served as the input for the inverse analysis. The retrieval results showed that the 3D temperature distributions of the participating media with known radiative properties could be retrieved accurately using the LSQR algorithm, even with noisy data. For the participating media with unknown radiative properties, the 3D temperature distributions and absorption coefficients could be retrieved accurately using the LSQR-PSO algorithm even with measurement errors. It was also found that the temperature field could be estimated more accurately than the absorption coefficients. In order to gain insight into the effects on the accuracy of temperature distribution reconstruction, the selection of the detection direction and the angle between two detection directions was also analyzed.
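
    The LSQR building block of the hybrid method is available off the shelf; the sketch below shows a damped LSQR solve of a generic linearized intensity-temperature model using SciPy. The sensitivity matrix `K`, the synthetic data and the damping value are placeholders, and the PSO stage for the unknown absorption coefficients is not reproduced.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Hypothetical linearized forward model: outgoing intensities = K @ temperature_field
rng = np.random.default_rng(0)
K = rng.random((200, 125))                    # placeholder sensitivity matrix (detectors x voxels)
t_true = rng.random(125)
intensity = K @ t_true + 0.01 * rng.standard_normal(200)   # noisy synthetic measurements

# Damped LSQR solve; `damp` adds Tikhonov-like regularization against measurement noise
t_est, istop, itn = lsqr(K, intensity, damp=0.05)[:3]
print(f"stopped with code {istop} after {itn} iterations, "
      f"relative error {np.linalg.norm(t_est - t_true) / np.linalg.norm(t_true):.3f}")
```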

  18. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  19. A linear-time algorithm for reconstructing zero-recombinant haplotype configuration on a pedigree

    Science.gov (United States)

    2012-01-01

    Background When studying genetic diseases in which genetic variations are passed on to offspring, the ability to distinguish between paternal and maternal alleles is essential. Determining haplotypes from genotype data is called haplotype inference. Most existing computational algorithms for haplotype inference have been designed to use genotype data collected from individuals in the form of a pedigree. A haplotype is regarded as a hereditary unit and therefore input pedigrees are preferred that are free of mutational events and have a minimum number of genetic recombinational events. These ideas motivated the zero-recombinant haplotype configuration (ZRHC) problem, which strictly follows the Mendelian law of inheritance, namely that one haplotype of each child is inherited from the father and the other haplotype is inherited from the mother, both without any mutation. So far no linear-time algorithm for ZRHC has been proposed for general pedigrees, even though the number of mating loops in a human pedigree is usually very small and can be regarded as constant. Results Given a pedigree with n individuals, m marker loci, and k mating loops, we proposed an algorithm that can provide a general solution to the zero-recombinant haplotype configuration problem in O(kmn + k²m) time. In addition, this algorithm can be modified to detect inconsistencies within the genotype data without loss of efficiency. The proposed algorithm was subjected to 12000 experiments to verify its performance using different (n, m) combinations. The value of k was uniformly distributed between zero and six throughout all experiments. The experimental results show strong linearity of execution time with respect to input size when both n and m are larger than 100. For those experiments where n or m are less than 100, the proposed algorithm runs very fast, in thousandths to hundredths of a second, on a personal desktop computer. Conclusions We have developed the first deterministic linear

  20. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    Energy Technology Data Exchange (ETDEWEB)

    Maier, Joscha, E-mail: joscha.maier@dkfz.de [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Sawall, Stefan; Kachelrieß, Marc [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany and Institute of Medical Physics, University of Erlangen–Nürnberg, 91052 Erlangen (Germany)

    2014-05-15

    Purpose: Phase-correlated micro-computed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF) shows poor performance in the case of low-dose scans, so more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  1. A Direct Numerical Reconstruction Algorithm for the 3D Calderón Problem

    DEFF Research Database (Denmark)

    Delbary, Fabrice; Hansen, Per Christian; Knudsen, Kim

    2011-01-01

    In three dimensions Calderón's problem was addressed and solved in theory in the 1980s in a series of papers, but only recently has the numerical implementation of the algorithm been initiated. The main ingredients in the solution of the problem are complex geometrical optics solutions to the conducti...

  2. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    Science.gov (United States)

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

    Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
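
    The overall structure, an SQP solver driven by an analytically computed (adjoint-style) gradient of a regularized data-misfit objective, can be sketched with SciPy's SLSQP implementation. The toy linear forward model `F`, the Tikhonov-style penalty standing in for GGMRF, and the bounds are assumptions for illustration, not the TD-RTE model of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy inverse problem: fit parameters mu of a linear "forward model" F @ mu to data d,
# with a quadratic smoothness penalty standing in for the GGMRF regularization term.
rng = np.random.default_rng(1)
F = rng.random((40, 10))
mu_true = np.linspace(0.5, 1.5, 10)
d = F @ mu_true
lam = 1e-3

def objective(mu):
    r = F @ mu - d
    return 0.5 * r @ r + 0.5 * lam * mu @ mu

def gradient(mu):
    # analytic gradient (the role an adjoint computation plays in the PDE setting)
    return F.T @ (F @ mu - d) + lam * mu

res = minimize(objective, x0=np.ones(10), jac=gradient, method="SLSQP",
               bounds=[(0.0, 5.0)] * 10)       # optical parameters kept nonnegative
print(res.x.round(3))
```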

  3. Interior Tomography - Depict with Direct Data

    OpenAIRE

    Wang, Ge

    2008-01-01

    While the conventional wisdom states that the interior problem - to reconstruct a region of interest (ROI) only from projection data through the ROI - does not have a unique solution, in June 2007 we published the first paper on interior tomography to solve the interior problem exactly and stably, aided by the prior knowledge on a subregion in the ROI. We underline that interior tomography is potentially a powerful, even indispensable tool to handle large objects, reduce radiation dose, suppr...

  4. Potentials and Limits of Super-Resolution Algorithms and Signal Reconstruction from Sparse Data

    CERN Document Server

    Shabat, Gil

    2012-01-01

    A common distortion in videos is image instability in the form of chaotic global and local displacements. These instabilities can be used to enhance image resolution by means of subpixel elastic registration. In this work, we investigate the ability of such methods to improve resolution by accumulating several frames. The second part of this work deals with the reconstruction of discrete signals from a subset of samples under different basis functions such as DFT, Haar, Walsh, Daubechies wavelets and CT (Radon) projections.

  5. Reconstruction of cylindrically layered media using an iterative algorithm with a stable solution

    Institute of Scientific and Technical Information of China (English)

    CHENG Ji-zhen; NIU Zuo-yuan; CHENG Chong-hu

    2007-01-01

    The reconstruction of cylindrically layered media is investigated in this article. The inverse problem is modeled using a source-type integral equation with a series of cylindrical waves as incident fields, and a conventional Born iterative procedure is modified for solving the integral equation. In the modified iterative procedure, the conventional single-point approximation for the calculation of the field inside the media is replaced by a multi-point approximation to improve the numerical stability of the solution. Numerical simulations for different permittivity distributions are demonstrated in terms of artificial scattering data with the procedure. The results show that the procedure enjoys both accuracy and stability in the numerical computation.

  6. Iterative algorithm for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution

    Science.gov (United States)

    Quan, Haiyang; Wu, Fan; Hou, Xi

    2015-10-01

    A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified successive over-relaxation (SOR) method is effective for solving for the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory without reducing accuracy. This has been confirmed by real experimental results.
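
    The underlying SOR update is simple to state; the sketch below shows a generic SOR solver for a linear system, with the relaxation factor `omega` playing the role of the acceleration parameter mentioned above. It is a plain textbook implementation and does not include the surface-deviation-specific system setup of the paper.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, n_iter=200, tol=1e-10):
    """Successive over-relaxation for A x = b (square A with nonzero diagonal).
    omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 can accelerate convergence."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol * (np.linalg.norm(x) + 1e-30):
            break
    return x
```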

  7. A Primal-Dual Infeasible Interior Point Algorithm for Convex Quadratic Programming Problems with Box Constraints

    Institute of Scientific and Technical Information of China (English)

    张明望; 黄崇超

    2001-01-01

    In this paper, we devise a primal-dual infeasible interior point algorithm for the convex quadratic programming problem with box constraints. Under the assumption that the initial point lies in a neighborhood of the central path, we prove that the algorithm is globally convergent.

  8. Effects of particle size, slice thickness, and reconstruction algorithm on coronary calcium quantitation using ultrafast computed tomography

    Science.gov (United States)

    Tang, Weiyi; Detrano, Robert; Kang, Xingping; Garner, D.; Nickerson, Sharon; Desimone, P.; Mahaisavariya, Paiboon; Brundage, B.

    1994-05-01

    The recent emphasis on early diagnosis of coronary artery disease has stimulated research for a reliable and non-invasive screening method. Radiographically detectable coronary calcium has been shown to predict both pathologic and angiographic findings. Ultrafast computed tomography (UFCT), in quantifying coronary calcium, may become an accurate non-invasive method to evaluate the severity of coronary disease. The currently applied index of UFCT coronary calcium amount is the coronary calcium score of Agatston et al. This score has not been thoroughly evaluated as to its accuracy and dependence on scanning parameters. A potential drawback of the score is its dependence on predetermined CT number thresholds. In this investigation we used a chest phantom to determine the effects of particle size, slice thickness, and reconstruction algorithm on the coronary calcium score, and on the calcium mass estimated with a new method which is not dependent on thresholds.
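
    The Agatston score referred to above follows a simple rule: within each slice, every calcified lesion above a 130 HU threshold contributes its area multiplied by a weight determined by its peak attenuation (1 for 130-199 HU, 2 for 200-299 HU, 3 for 300-399 HU, 4 for 400 HU and above), and the contributions are summed over slices. The sketch below is a simplified illustration of that rule; the pixel spacing, minimum lesion area and input arrays are assumptions, and vendor implementations differ in details.

```python
import numpy as np
from scipy import ndimage

def agatston_score(slices_hu, pixel_area_mm2, threshold_hu=130, min_area_mm2=1.0):
    """Simplified Agatston score: for each calcified lesion in each slice,
    lesion area (mm^2) times a weight based on the peak HU, summed over slices."""
    total = 0.0
    for img in slices_hu:
        mask = img >= threshold_hu
        labels, n_lesions = ndimage.label(mask)
        for lesion in range(1, n_lesions + 1):
            region = labels == lesion
            area = region.sum() * pixel_area_mm2
            if area < min_area_mm2:             # ignore tiny noise specks
                continue
            peak = img[region].max()
            weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
            total += area * weight
    return total
```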

  9. Holocene local forest history at two sites in Småland, southern Sweden - insights from quantitative reconstructions using the Landscape Reconstruction Algorithm

    Science.gov (United States)

    Cui, Qiaoyu; Gaillard, Marie-José; Lemdahl, Geoffrey; Olsson, Fredrik; Sugita, Shinya

    2010-05-01

    Quantitative reconstruction of past vegetation using fossil pollen was long very problematic. It is well known that pollen percentages and pollen accumulation rates do not represent vegetation abundance properly because pollen values are influenced by many factors of which inter-taxonomic differences in pollen productivity and vegetation structure are the most important ones. It is also recognized that pollen assemblages from large sites (lakes or bogs) record the characteristics of the regional vegetation, while pollen assemblages from small sites record local features. Based on the theoretical understanding of the factors and mechanisms that affect pollen representation of vegetation, Sugita (2007a and b) proposed the Landscape Reconstruction Algorithm (LRA) to estimate vegetation abundance in percentage cover for well defined spatial scales. The LRA includes two models, REVEALS and LOVE. REVEALS estimates regional vegetation abundance at a spatial scale of 100 km x 100 km. LOVE estimates local vegetation abundance at the spatial scale of the relevant source area of pollen (RSAP sensu Sugita 1993) of the pollen site. REVEALS estimates are needed to apply LOVE in order to calculate the RSAP and the vegetation cover within the RSAP. The two models were validated theoretically and empirically. Two small bogs in southern Sweden were studied for pollen, plant macrofossil, charcoal, and coleoptera in order to reconstruct the local Holocene forest and fire history (e.g. Greisman and Gaillard 2009; Olsson et al. 2009). We applied the LOVE model in order to 1) compare the LOVE estimates with pollen percentages for a better understanding of the local forest history; 2) obtain more precise information on the local vegetation to explain between-sites differences in fire history. We used pollen records from two large lakes in Småland to obtain REVEALS estimates for twelve continuous 500-yrs time windows. Following the strategy of the Swedish VR LANDCLIM project (see Gaillard

  10. 3D weighting in cone beam image reconstruction algorithms: ray-driven vs. pixel-driven.

    Science.gov (United States)

    Tang, Xiangyang; Nilsen, Roy A; Smolin, Alex; Lifland, Ilya; Samsonov, Dmitry; Taha, Basel

    2008-01-01

    A 3D weighting scheme has been proposed previously to reconstruct images from both helical and axial scans in state-of-the-art volumetric CT scanners for diagnostic imaging. Such 3D weighting can be implemented in either a ray-driven or a pixel-driven manner, depending on the available computation resources. An experimental study is conducted in this paper to evaluate the difference between the ray-driven and pixel-driven implementations of the 3D weighting from the perspective of image quality, while their computational complexity is analyzed theoretically. Computer-simulated data and several phantoms, such as the helical body phantom and a humanoid chest phantom, are employed in the experimental study, showing that both the ray-driven and pixel-driven 3D weighting provide superior image quality for diagnostic imaging in clinical applications. With the availability of image reconstruction engines of increasing computational power, it is believed that the pixel-driven 3D weighting will be predominantly employed in state-of-the-art volumetric CT scanners across clinical applications.

  11. A bootstrap algorithm for temporal signal reconstruction in the presence of noise from its fractional Fourier transformed intensity spectra

    Energy Technology Data Exchange (ETDEWEB)

    Tan, Cheng-Yang; /Fermilab

    2011-02-01

    A bootstrap algorithm for reconstructing the temporal signal from four of its fractional Fourier intensity spectra in the presence of noise is described. An optical arrangement is proposed which realises the bootstrap method for the measurement of ultrashort laser pulses. The measurement of short laser pulses of less than 1 ps is an ongoing challenge in optical physics. One reason is that no oscilloscope exists today which can directly measure the time structure of these pulses, and so it becomes necessary to invent other techniques which indirectly provide the necessary information for temporal pulse reconstruction. One method, called FROG (frequency resolved optical gating), has been in use since 1991 and is one of the popular methods for recovering these types of short pulses. The idea behind FROG is the use of multiple time-correlated pulse measurements in the frequency domain for the reconstruction. Multiple data sets are required because only intensity information is recorded and not phase, and thus by collecting multiple data sets there are enough redundant measurements to yield the original time structure, but not necessarily uniquely (or even up to an arbitrary constant phase offset). The objective of this paper is to describe another method which is simpler than FROG. Instead of collecting many auto-correlated data sets, only two spectral intensity measurements of the temporal signal are needed in the absence of noise. The first can be from the intensity components of its usual Fourier transform and the second from its FrFT (fractional Fourier transform). In the presence of noise, a minimum of four measurements is required with the same FrFT order but with two different apertures. Armed with these two or four measurements, a unique solution up to a constant phase offset can be constructed.

  12. Study of the radiation dose reduction capability of a CT reconstruction algorithm: LCD performance assessment using mathematical model observers

    Science.gov (United States)

    Fan, Jiahua; Tseng, Hsin-Wu; Kupinski, Matthew; Cao, Guangzhi; Sainath, Paavana; Hsieh, Jiang

    2013-03-01

    Radiation dose to the patient has become a major concern for computed tomography (CT) imaging in clinical practice. Various hardware and algorithm solutions have been designed to reduce dose. Among them, iterative reconstruction (IR) has been widely expected to be an effective dose reduction approach for CT. However, there is no clear understanding of the exact amount of dose saving an IR approach can offer for various clinical applications. It is recognized that quantitative image quality assessment should be task-based. This work applied mathematical model observers to study the detectability performance of CT scan data reconstructed using an advanced IR approach as well as the conventional filtered back-projection (FBP) approach. The purpose of this work is to establish a practical and robust approach for detectability-based image quality evaluation of CT IR and to assess the dose saving capability of the IR method under study. Low contrast (LC) objects embedded in head-size and body-size phantoms were imaged multiple times at different dose levels. Independent signal-present and signal-absent pairs were generated for model observer training and testing. Receiver operating characteristic (ROC) curves for the location-known-exactly task and localization ROC (LROC) curves for the location-unknown task, as well as their corresponding area under the curve (AUC) values, were calculated. Results showed that approximately a threefold dose reduction was achieved using the IR method under study.
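
    Once signal-present and signal-absent test statistics have been computed for a model observer, the ROC AUC can be estimated nonparametrically from their ranks. The sketch below uses the Mann-Whitney formulation of the AUC; the Gaussian scores in the example are placeholders, not data from the study.

```python
import numpy as np

def auc_from_scores(scores_present, scores_absent):
    """AUC of a two-class detection task from observer test statistics,
    via the Mann-Whitney formulation (probability that a signal-present score
    exceeds a signal-absent score, with ties counted as 1/2)."""
    sp = np.asarray(scores_present)[:, None]
    sa = np.asarray(scores_absent)[None, :]
    wins = (sp > sa).sum() + 0.5 * (sp == sa).sum()
    return wins / (sp.size * sa.size)

# Example: better-separated score distributions give a higher AUC
rng = np.random.default_rng(2)
print(auc_from_scores(rng.normal(1.0, 1.0, 100), rng.normal(0.0, 1.0, 100)))
```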

  13. The ZpiM algorithm: a method for interferometric image reconstruction in SAR/SAS.

    Science.gov (United States)

    Dias, José M B; Leitao, José M N

    2002-01-01

    This paper presents an effective algorithm for absolute phase (not simply modulo-2π) estimation from incomplete, noisy and modulo-2π observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging and diffraction tomography. The Bayesian viewpoint is adopted; the observation density is 2π-periodic and accounts for the interferometric pair decorrelation and system noise; the a priori probability of the absolute phase is modeled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) absolute phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques, and an iterated conditional modes (ICM) step (π-step). Accordingly, the algorithm is termed ZpiM, where the letter M stands for maximization. An important contribution of the paper is the simultaneous implementation of phase unwrapping (inference of the 2π-multiples) and smoothing (denoising of the observations). This improves considerably the accuracy of the absolute phase estimates compared to methods in which the data are low-pass filtered prior to unwrapping. A set of experimental results, comparing the proposed algorithm with alternative methods, illustrates the effectiveness of our approach.

  14. Using the Landscape Reconstruction Algorithm (LRA) to estimate Holocene regional and local vegetation composition in the Boreal Forests of Alaska

    Science.gov (United States)

    Hopla, Emma-Jayne; Edwards, Mary; Langdon, Pete

    2016-04-01

    Vegetation is already responding to increasing global temperatures, with shrubs expanding northwards in the Arctic in a process called "greening". Lakes are important features within these changing landscapes, and lake ecosystems are affected by the vegetation in their catchments. Dated sediment archives can reveal how lake ecosystems responded to past changes over timescales relevant to vegetation dynamics (decades to centuries). Holocene vegetation changes have been reconstructed for small lake catchments in Alaska to help understand the long-term interactions between vegetation and within-lake processes. A quantitative estimate of vegetation cover around these small lakes clarifies the catchment drivers of lake ecosystem processes. Pollen productivity is one of the major parameters used to make quantitative estimates of land cover from palaeodata. Based on extensive fieldwork, we obtained the first pollen productivity estimates (PPEs) for the main arboreal taxa in interior Alaska. We used the REVEALS model to estimate the regional vegetation abundance from existing pollen data from large lakes in the region, based on Alaskan and European PPEs. Quantitative estimates of vegetation cover differ from those based on pollen percentages alone. The LOVE model will then be applied to smaller lake basins that are the subject of detailed palaeolimnological investigations, in order to estimate the local vegetation composition at these sites.

  15. This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms - Theory and Practice

    CERN Document Server

    Harmany, Zachary T; Willett, Rebecca M

    2010-01-01

    The observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where (a) the number of unknowns may potentially be larger than the number of observations and (b) f* admits a sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objectiv...
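
    The central computational object in this family of methods is the penalized negative Poisson log-likelihood and its gradient. The sketch below shows a bare-bones proximal-gradient loop for an l1-penalized Poisson model with a nonnegativity constraint; it uses a fixed step size rather than the separable quadratic approximations and step-size rules of the actual SPIRAL algorithms, and the matrix `A`, data `y` and parameters are placeholders.

```python
import numpy as np

def poisson_sparse_recon(A, y, lam=0.1, step=1e-3, n_iter=500, eps=1e-10):
    """Proximal-gradient iterations for a penalized Poisson likelihood:
    minimize 1'(A f) - y' log(A f) + lam*||f||_1  subject to  f >= 0."""
    f = np.full(A.shape[1], y.mean() / max(A.sum(axis=0).mean(), eps))
    ones = np.ones(A.shape[0])
    for _ in range(n_iter):
        Af = np.maximum(A @ f, eps)
        grad = A.T @ (ones - y / Af)            # gradient of the negative log-likelihood
        # prox of lam*||f||_1 combined with the nonnegativity of Poisson intensities
        f = np.maximum(f - step * grad - step * lam, 0.0)
    return f
```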

  16. Use of a ray-based reconstruction algorithm to accurately quantify preclinical microSPECT images.

    Science.gov (United States)

    Vandeghinste, Bert; Van Holen, Roel; Vanhove, Christian; De Vos, Filip; Vandenberghe, Stefaan; Staelens, Steven

    2014-01-01

    This work aimed to measure the in vivo quantification errors obtained when ray-based iterative reconstruction is used in micro-single-photon emission computed tomography (SPECT). This was investigated with an extensive phantom-based evaluation and two typical in vivo studies using 99mTc and 111In, measured on a commercially available cadmium zinc telluride (CZT)-based small-animal scanner. Iterative reconstruction was implemented on the GPU using ray tracing, including (1) scatter correction, (2) computed tomography-based attenuation correction, (3) resolution recovery, and (4) edge-preserving smoothing. It was validated using a National Electrical Manufacturers Association (NEMA) phantom. The in vivo quantification error was determined for two radiotracers: [99mTc]DMSA in naive mice (n  =  10 kidneys) and [111In]octreotide in mice (n  =  6) inoculated with a xenograft neuroendocrine tumor (NCI-H727). The measured energy resolution is 5.3% for 140.51 keV (99mTc), 4.8% for 171.30 keV, and 3.3% for 245.39 keV (111In). For 99mTc, an uncorrected quantification error of 28 ± 3% is reduced to 8 ± 3%. For 111In, the error reduces from 26 ± 14% to 6 ± 22%. The in vivo error obtained with 99mTc-dimercaptosuccinic acid ([99mTc]DMSA) is reduced from 16.2 ± 2.8% to -0.3 ± 2.1% and from 16.7 ± 10.1% to 2.2 ± 10.6% with [111In]octreotide. Absolute quantitative in vivo SPECT is possible without explicit system matrix measurements. An absolute in vivo quantification error smaller than 5% was achieved and exemplified for both [99mTc]DMSA and [111In]octreotide.

  17. Use of a Ray-Based Reconstruction Algorithm to Accurately Quantify Preclinical MicroSPECT Images

    Directory of Open Access Journals (Sweden)

    Bert Vandeghinste

    2014-06-01

    Full Text Available This work aimed to measure the in vivo quantification errors obtained when ray-based iterative reconstruction is used in micro-single-photon emission computed tomography (SPECT). This was investigated with an extensive phantom-based evaluation and two typical in vivo studies using 99mTc and 111In, measured on a commercially available cadmium zinc telluride (CZT)-based small-animal scanner. Iterative reconstruction was implemented on the GPU using ray tracing, including (1) scatter correction, (2) computed tomography-based attenuation correction, (3) resolution recovery, and (4) edge-preserving smoothing. It was validated using a National Electrical Manufacturers Association (NEMA) phantom. The in vivo quantification error was determined for two radiotracers: [99mTc]DMSA in naive mice (n = 10 kidneys) and [111In]octreotide in mice (n = 6) inoculated with a xenograft neuroendocrine tumor (NCI-H727). The measured energy resolution is 5.3% for 140.51 keV (99mTc), 4.8% for 171.30 keV, and 3.3% for 245.39 keV (111In). For 99mTc, an uncorrected quantification error of 28 ± 3% is reduced to 8 ± 3%. For 111In, the error reduces from 26 ± 14% to 6 ± 22%. The in vivo error obtained with 99mTc-dimercaptosuccinic acid ([99mTc]DMSA) is reduced from 16.2 ± 2.8% to −0.3 ± 2.1% and from 16.7 ± 10.1% to 2.2 ± 10.6% with [111In]octreotide. Absolute quantitative in vivo SPECT is possible without explicit system matrix measurements. An absolute in vivo quantification error smaller than 5% was achieved and exemplified for both [99mTc]DMSA and [111In]octreotide.

  18. Compressed Sensing of Speech Signals Based on a Row Echelon Measurement Matrix and a Dual Affine Scaling Interior Point Reconstruction Method

    Institute of Scientific and Technical Information of China (English)

    叶蕾; 杨震; 王天荆; 孙林慧

    2012-01-01

    Based on the approximate sparsity of speech signals in the discrete cosine basis, and addressing the poor reconstruction quality caused by the weak ability of the combination of a random Gaussian measurement matrix and linear programming to locate zero (or near-zero) coefficients, this paper proposes a new row echelon measurement matrix together with a dual affine scaling interior point reconstruction algorithm for compressed sensing of speech signals, and analyzes the reconstruction performance of this algorithm theoretically. Simulation results of speech compressed sensing show that, in the discrete cosine basis and at a compression ratio (the ratio of the number of measurements to the number of original samples) of 1:4, the average reconstruction SNR obtained with the row echelon measurement matrix is 9.73 dB higher than with the random Gaussian measurement matrix, and the average MOS score is 1.22 points higher.

  19. A three-dimensional weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT under a circular source trajectory

    Science.gov (United States)

    Tang, Xiangyang; Hsieh, Jiang; Hagiwara, Akira; Nilsen, Roy A.; Thibault, Jean-Baptiste; Drapkin, Evgeny

    2005-08-01

    The original FDK algorithm proposed for cone beam (CB) image reconstruction under a circular source trajectory has been extensively employed in medical and industrial imaging applications. With increasing cone angle, CB artefacts in images reconstructed by the original FDK algorithm deteriorate, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few 'circular plus' trajectories have been proposed in the past to help the original FDK algorithm to reduce CB artefacts by meeting the DSC. However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as head imaging, breast imaging, cardiac, vascular and perfusion applications. In addition to looking into the DSC, another insight into the CB artefacts existing in the original FDK algorithm is the inconsistency between conjugate rays that are 180° apart in view angle (namely conjugate ray inconsistency). The conjugate ray inconsistency is pixel dependent, varying dramatically over pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artefacts that can be avoided if appropriate weighting strategies are exercised. Along with an experimental evaluation and verification, a three-dimensional (3D) weighted axial cone beam filtered backprojection (CB-FBP) algorithm is proposed in this paper for image reconstruction in volumetric CT under a circular source trajectory. Without extra trajectories supplemental to the circular trajectory, the proposed algorithm applies 3D weighting on projection data before 3D backprojection to reduce conjugate ray inconsistency by suppressing the contribution from one of the conjugate rays with a larger cone angle. Furthermore, the 3D weighting is dependent on the distance between the reconstruction plane and the central plane determined by the circular trajectory. The proposed 3D weighted axial CB-FBP algorithm

  20. A trust region-CG algorithm for the deblurring problem in atmospheric image reconstruction

    Institute of Scientific and Technical Information of China (English)

    WANG; Yanfei(王彦飞); YUAN; Yaxiang(袁亚湘); ZHANG; Hongchao(张洪超)

    2002-01-01

    In this paper we solve large-scale ill-posed problems, particularly the image restoration problem in atmospheric imaging sciences, by a trust region-CG algorithm. Image restoration involves the removal or minimization of degradation (blur, clutter, noise, etc.) in an image using a priori knowledge about the degradation phenomena. Our basic technique is the so-called trust region method, while the subproblem is solved by the truncated conjugate gradient method, which has been well developed for well-posed problems. The trust region method, due to its robustness in global convergence, seems to be a promising way to deal with ill-posed problems.

  1. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    Science.gov (United States)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.043 75 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose

  2. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    Science.gov (United States)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-07

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.043 75 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose levels
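
    The sinogram interpolation mentioned in this abstract can be illustrated with a simple linear interpolation along the view (angle) dimension, filling in the views not acquired in a sparse protocol. The sketch below is a generic illustration, not the authors' implementation; the angular grids and the random test sinogram are placeholders.

```python
import numpy as np

def interpolate_missing_views(sparse_sino, sparse_angles_deg, full_angles_deg):
    """Estimate a densely sampled sinogram from sparse-view data by linear
    interpolation along the view (angle) axis, detector channel by channel."""
    sparse_sino = np.asarray(sparse_sino)              # shape (n_sparse_views, n_det)
    full = np.empty((len(full_angles_deg), sparse_sino.shape[1]))
    for det in range(sparse_sino.shape[1]):
        full[:, det] = np.interp(full_angles_deg, sparse_angles_deg, sparse_sino[:, det])
    return full

# Example: 41 measured views interpolated up to a 984-view grid
sparse_angles = np.linspace(0, 360, 41, endpoint=False)
full_angles = np.linspace(0, 360, 984, endpoint=False)
dense_sino = interpolate_missing_views(np.random.rand(41, 128), sparse_angles, full_angles)
```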

  3. Algorithmic framework for X-ray nanocrystallographic reconstruction in the presence of the indexing ambiguity

    Science.gov (United States)

    Donatelli, Jeffrey J.; Sethian, James A.

    2014-01-01

    X-ray nanocrystallography allows the structure of a macromolecule to be determined from a large ensemble of nanocrystals. However, several parameters, including crystal sizes, orientations, and incident photon flux densities, are initially unknown and images are highly corrupted with noise. Autoindexing techniques, commonly used in conventional crystallography, can determine orientations using Bragg peak patterns, but only up to crystal lattice symmetry. This limitation results in an ambiguity in the orientations, known as the indexing ambiguity, when the diffraction pattern displays less symmetry than the lattice and leads to data that appear twinned if left unresolved. Furthermore, missing phase information must be recovered to determine the imaged object’s structure. We present an algorithmic framework to determine crystal size, incident photon flux density, and orientation in the presence of the indexing ambiguity. We show that phase information can be computed from nanocrystallographic diffraction using an iterative phasing algorithm, without extra experimental requirements, atomicity assumptions, or knowledge of similar structures required by current phasing methods. The feasibility of this approach is tested on simulated data with parameters and noise levels common in current experiments. PMID:24344317
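
    The iterative phasing referred to above alternates between enforcing the measured Fourier magnitudes and enforcing real-space constraints. The sketch below is a generic error-reduction loop of that kind, shown only to illustrate the idea; it is not the algorithmic framework of the paper, and the support mask, random starting phases and iteration count are assumptions.

```python
import numpy as np

def error_reduction_phasing(fourier_magnitude, support, n_iter=500, seed=0):
    """Generic error-reduction loop: enforce the measured Fourier magnitudes,
    then enforce a real-space support and nonnegativity constraint."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(fourier_magnitude.shape))
    g = np.fft.ifftn(fourier_magnitude * phase).real
    for _ in range(n_iter):
        G = np.fft.fftn(g)
        G = fourier_magnitude * np.exp(1j * np.angle(G))   # keep phases, replace magnitudes
        g = np.fft.ifftn(G).real
        g[~support] = 0.0                                   # support constraint
        g[g < 0] = 0.0                                      # densities are nonnegative
    return g
```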

  4. Simultaneous Reconstruction and Segmentation with Class-Specific Priors

    DEFF Research Database (Denmark)

    Romanov, Mikhail

    Studying the interior of objects using tomography often requires an image segmentation, such that different material properties can be quantified, for example volume or surface area. Segmentation is typically done as an image analysis step after the image has been reconstructed. This thesis investigates computing the reconstruction and segmentation simultaneously. The advantage of this is that, because the reconstruction and segmentation are computed jointly, reconstruction errors are not propagated to the segmentation step. Furthermore, the segmentation procedure can be used for regularizing the reconstruction process. The thesis provides models and algorithms for simultaneous reconstruction and segmentation, and their performance is empirically validated. Two methods of simultaneous reconstruction and segmentation are described in the thesis. Also, a method for parameter selection...

  5. Evaluation of the image quality in digital breast tomosynthesis (DBT) employed with a compressed-sensing (CS)-based reconstruction algorithm by using the mammographic accreditation phantom

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Cho, Heemoon; Je, Uikyu; Cho, Hyosung, E-mail: hscho1@yonsei.ac.kr; Park, Chulkyu; Lim, Hyunwoo; Kim, Kyuseok; Kim, Guna; Park, Soyoung; Woo, Taeho; Choi, Sungil

    2015-12-21

    In this work, we have developed a prototype digital breast tomosynthesis (DBT) system which mainly consists of an x-ray generator (28 kVp, 7 mAs), a CMOS-type flat-panel detector (70-μm pixel size, 230.5×339 mm² active area), and a rotational arm to move the x-ray generator in an arc. We employed a compressed-sensing (CS)-based reconstruction algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate DBT reconstruction. Here CS is a state-of-the-art mathematical theory for solving inverse problems, which exploits the sparsity of the image with substantially high accuracy. We evaluated the reconstruction quality in terms of the detectability, the contrast-to-noise ratio (CNR), and the slice sensitivity profile (SSP) by using the mammographic accreditation phantom (Model 015, CIRS Inc.) and compared it to the FBP-based quality. The CS-based algorithm yielded much better image quality, preserving superior image homogeneity, edge sharpening, and cross-plane resolution, compared to the FBP-based one. - Highlights: • A prototype digital breast tomosynthesis (DBT) system is developed. • A compressed-sensing (CS)-based reconstruction framework is employed. • We reconstructed high-quality DBT images by using the proposed reconstruction framework.

  6. Investigation of the quantitative accuracy of 3D iterative reconstruction algorithms in comparison to filtered back projection method: a phantom study

    Science.gov (United States)

    Abuhadi, Nouf; Bradley, David; Katarey, Dev; Podolyak, Zsolt; Sassi, Salem

    2014-03-01

    Introduction: Single-photon emission computed tomography (SPECT) is used to measure and quantify radiopharmaceutical distribution within the body. The accuracy of quantification depends on acquisition parameters and reconstruction algorithms. Until recently, most SPECT images were reconstructed using filtered back projection techniques with no attenuation or scatter corrections. The introduction of 3-D iterative reconstruction algorithms, with the availability of both computed tomography (CT)-based attenuation correction and scatter correction, may provide for more accurate measurement of radiotracer bio-distribution. The effect of attenuation and scatter corrections on the accuracy of SPECT measurements is well researched. It has been suggested that the combination of CT-based attenuation correction and scatter correction can allow for more accurate quantification of radiopharmaceutical distribution in SPECT studies (Bushberg et al., 2012). However, the effect of respiratory-induced cardiac motion on SPECT images acquired using higher-resolution algorithms such as 3-D iterative reconstruction with attenuation and scatter corrections has not been investigated. Aims: To investigate the quantitative accuracy of 3D iterative reconstruction algorithms in comparison to filtered back projection (FBP) methods implemented on cardiac SPECT/CT imaging with and without CT-based attenuation and scatter corrections; to investigate the effects of respiratory-induced cardiac motion on myocardial perfusion quantification; and to present a comparison of spatial resolution for FBP and ordered subset expectation maximization (OSEM) Flash 3D, with and without respiratory-induced motion, and with and without attenuation and scatter correction. Methods: This study was performed on a Siemens Symbia T16 SPECT/CT system using clinical acquisition protocols. Respiratory-induced cardiac motion was simulated by imaging a cardiac phantom insert whilst moving it using a respiratory motion motor

  7. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used for 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC values are combined with the decoded AC coefficients in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
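
    Step (1) of the pipeline above, a multi-level DWT followed by a DCT on the low-frequency band, can be sketched with standard libraries. The example below uses PyWavelets and SciPy with an arbitrary wavelet ('db2') and a crude coefficient-thresholding stand-in for the paper's Minimize-Matrix-Size and arithmetic-coding stages; the wavelet choice, test image and threshold are assumptions for illustration only.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

image = np.random.rand(256, 256).astype(np.float32)   # placeholder for a high-resolution image

# Two-level DWT; coeffs[0] is the low-frequency approximation (the "DC-Matrix"),
# the remaining tuples hold the high-frequency detail bands (the "AC" content).
coeffs = pywt.wavedec2(image, wavelet="db2", level=2)
dc_matrix = coeffs[0]

# DCT applied on top of the approximation band, as in the DC-/AC-Matrix decomposition.
dc_transformed = dctn(dc_matrix, norm="ortho")

# Crude "compression": keep only the largest-magnitude DCT coefficients.
keep = np.abs(dc_transformed) >= np.percentile(np.abs(dc_transformed), 90)
dc_back = idctn(dc_transformed * keep, norm="ortho")

# Reassemble the coefficient list and invert the DWT for an approximate reconstruction.
coeffs[0] = dc_back
recon = pywt.waverec2(coeffs, wavelet="db2")
print("RMSE:", np.sqrt(np.mean((recon[:256, :256] - image) ** 2)))
```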

  8. Virtual patient 3D dose reconstruction using in air EPID measurements and a back-projection algorithm for IMRT and VMAT treatments.

    Science.gov (United States)

    Olaciregui-Ruiz, Igor; Rozendaal, Roel; van Oers, René F M; Mijnheer, Ben; Mans, Anton

    2017-05-01

    At our institute, a transit back-projection algorithm is used clinically to reconstruct in vivo patient and in phantom 3D dose distributions using EPID measurements behind a patient or a polystyrene slab phantom, respectively. In this study, an extension to this algorithm is presented whereby in air EPID measurements are used in combination with CT data to reconstruct 'virtual' 3D dose distributions. By combining virtual and in vivo patient verification data for the same treatment, patient-related errors can be separated from machine, planning and model errors. The virtual back-projection algorithm is described and verified against the transit algorithm with measurements made behind a slab phantom, against dose measurements made with an ionization chamber and with the OCTAVIUS 4D system, as well as against TPS patient data. Virtual and in vivo patient dose verification results are also compared. Virtual dose reconstructions agree within 1% with ionization chamber measurements. The average γ-pass rate values (3% global dose/3mm) in the 3D dose comparison with the OCTAVIUS 4D system and the TPS patient data are 98.5±1.9%(1SD) and 97.1±2.9%(1SD), respectively. For virtual patient dose reconstructions, the differences with the TPS in median dose to the PTV remain within 4%. Virtual patient dose reconstruction makes pre-treatment verification based on deviations of DVH parameters feasible and eliminates the need for phantom positioning and re-planning. Virtual patient dose reconstructions have additional value in the inspection of in vivo deviations, particularly in situations where CBCT data is not available (or not conclusive). Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  9. Performance of the ATLAS Track Reconstruction Algorithms in Dense Environments in LHC run 2

    CERN Document Server

    Aaboud, Morad; ATLAS Collaboration; Abbott, Brad; Abdallah, Jalal; Abdinov, Ovsat; Abeloos, Baptiste; Abidi, Syed Haider; AbouZeid, Ossama; Abraham, Nicola; Abramowicz, Halina; Abreu, Henso; Abreu, Ricardo; Abulaiti, Yiming; Acharya, Bobby Samir; Adachi, Shunsuke; Adamczyk, Leszek; Adelman, Jahred; Adersberger, Michael; Adye, Tim; Affolder, Tony; Agatonovic-Jovin, Tatjana; Agheorghiesei, Catalin; Aguilar-Saavedra, Juan Antonio; Ahlen, Steven; Ahmadov, Faig; Aielli, Giulio; Akatsuka, Shunichi; Akerstedt, Henrik; Åkesson, Torsten Paul Ake; Akimov, Andrei; Alberghi, Gian Luigi; Albert, Justin; Albicocco, Pietro; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexopoulos, Theodoros; Alhroob, Muhammad; Ali, Babar; Aliev, Malik; Alimonti, Gianluca; Alison, John; Alkire, Steven Patrick; Allbrooke, Benedict; Allen, Benjamin William; Allport, Phillip; Aloisio, Alberto; Alonso, Alejandro; Alonso, Francisco; Alpigiani, Cristiano; Alshehri, Azzah Aziz; Alstaty, Mahmoud; Alvarez Gonzalez, Barbara; Άlvarez Piqueras, Damián; Alviggi, Mariagrazia; Amadio, Brian Thomas; Amaral Coutinho, Yara; Amelung, Christoph; Amidei, Dante; Amor Dos Santos, Susana Patricia; Amorim, Antonio; Amoroso, Simone; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, John Kenneth; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Angelidakis, Stylianos; Angelozzi, Ivan; Angerami, Aaron; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antel, Claire; Antonelli, Mario; Antonov, Alexey; Antrim, Daniel Joseph; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Arabidze, Giorgi; Arai, Yasuo; Araque, Juan Pedro; Araujo Ferraz, Victor; Arce, Ayana; Ardell, Rose Elisabeth; Arduh, Francisco Anuar; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Armitage, Lewis James; Arnaez, Olivier; Arnold, Hannah; Arratia, Miguel; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Artz, Sebastian; Asai, Shoji; Asbah, Nedaa; Ashkenazi, Adi; Asquith, Lily; Assamagan, Ketevi; Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Augsten, Kamil; Avolio, Giuseppe; Axen, Bradley; Ayoub, Mohamad Kassem; Azuelos, Georges; Baas, Alessandra; Baca, Matthew John; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Bagnaia, Paolo; Bahrasemani, Sina; Baines, John; Bajic, Milena; Baker, Oliver Keith; Baldin, Evgenii; Balek, Petr; Balli, Fabrice; Balunas, William Keaton; Banas, Elzbieta; Banerjee, Swagato; Bannoura, Arwa A E; Barak, Liron; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barisits, Martin-Stefan; Barklow, Timothy; Barlow, Nick; Barnes, Sarah Louise; Barnett, Bruce; Barnett, Michael; Barnovska-Blenessy, Zuzana; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barranco Navarro, Laura; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Basalaev, Artem; Bassalat, Ahmed; Bates, Richard; Batista, Santiago Juan; Batley, Richard; Battaglia, Marco; Bauce, Matteo; Bauer, Florian; Bawa, Harinder Singh; Beacham, James; Beattie, Michael David; Beau, Tristan; Beauchemin, Pierre-Hugues; Bechtle, Philip; Beck, Hans~Peter; Becker, Kathrin; Becker, Maurice; Beckingham, Matthew; Becot, Cyril; Beddall, Andrew; Beddall, Ayda; Bednyakov, Vadim; Bedognetti, Matteo; Bee, Christopher; Beermann, Thomas; Begalli, Marcia; Begel, Michael; Behr, Janna Katharina; Bell, Andrew Stuart; Bella, 
Gideon; Bellagamba, Lorenzo; Bellerive, Alain; Bellomo, Massimiliano; Belotskiy, Konstantin; Beltramello, Olga; Belyaev, Nikita; Benary, Odette; Benchekroun, Driss; Bender, Michael; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez, Jose; Benjamin, Douglas; Benoit, Mathieu; Bensinger, James; Bentvelsen, Stan; Beresford, Lydia; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Beringer, Jürg; Berlendis, Simon; Bernard, Nathan Rogers; Bernardi, Gregorio; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertoli, Gabriele; Bertolucci, Federico; Bertram, Iain Alexander; Bertsche, Carolyn; Bertsche, David; Besjes, Geert-Jan; Bessidskaia Bylund, Olga; Bessner, Martin Florian; Besson, Nathalie; Betancourt, Christopher; Bethani, Agni; Bethke, Siegfried; Bevan, Adrian John; Beyer, Julien-christopher; Bianchi, Riccardo-Maria; Biebel, Otmar; Biedermann, Dustin; Bielski, Rafal; Biesuz, Nicolo Vladi; Biglietti, Michela

    2017-01-01

    With the increase in energy of the Large Hadron Collider to a centre-of-mass energy of 13 TeV for Run 2, events with dense environments, such as the cores of high-energy jets, became a focus for new physics searches as well as measurements of the Standard Model. These environments are characterized by charged-particle separations of the order of the granularity of the tracking detector's sensors. Basic track quantities are compared between 3.2 fb$^{-1}$ of data collected by the ATLAS experiment and simulation of proton-proton collisions producing high-transverse-momentum jets at a centre-of-mass energy of 13 TeV. The impact of charged-particle separations and multiplicities on the track reconstruction performance is discussed. The efficiency in the cores of jets with transverse momenta between 200 GeV and 1600 GeV is quantified using a novel, data-driven method. The method uses the energy loss, dE/dx, to identify pixel clusters originating from two charged particles. Of the charged particles creating the...

  10. Algorithms for Reconstruction of Partially Known, Band Limited Fourier Transform Pairs from Noisy Data

    Science.gov (United States)

    1984-04-01

    ... where B(v) is a finite or infinite product of Blaschke factors (3.2), i.e. $B(v) = \prod_k B_k(v)$ with $B_k(v) = (v - v_k^*)/(v - v_k)$ (3.3). Furthermore... Given the sets $\{T_i\}$, $i = 1, \dots, M$, with associated projections $P_i$, find $G$ such that $G \in \bigcap_{i=1}^{M} T_i$. Gubin, Polyak and Raik [37] have... 36. J.R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt., 21, 2758-2769 (1982). 37. L. Gubin, B. Polyak, and E. Raik, "The method of...

  11. Analysis of Full Charge Reconstruction Algorithms for X-Ray Pixelated Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Baumbaugh, A.; /Fermilab; Carini, G.; /SLAC; Deptuch, G.; /Fermilab; Grybos, P.; /AGH-UST, Cracow; Hoff, J.; /Fermilab; Siddons, P., Maj.; /Brookhaven; Szczygiel, R.; /AGH-UST, Cracow; Trimpl, M.; Yarema, R.; /Fermilab

    2012-05-21

    The natural diffusive spread of charge carriers in the course of their drift towards the collecting electrodes of planar, segmented detectors results in a division of the original cloud of carriers between neighboring channels. This paper presents the analysis of algorithms, implementable with reasonable circuit resources, whose task is to prevent degradation of the detective quantum efficiency in highly granular, digital pixel detectors. The immediate motivation of the work is a photon science application requiring simultaneous timing spectroscopy and 2D position sensitivity. Leading-edge discrimination, provided it can be freed from the uncertainties associated with charge sharing, is used for timing the events. The analyzed solutions can naturally be extended to amplitude spectroscopy with pixel detectors.

  12. Analysis of full charge reconstruction algorithms for x-ray pixelated detectors

    Energy Technology Data Exchange (ETDEWEB)

    Baumbaugh, A.; /Fermilab; Carini, G.; /SLAC; Deptuch, G.; /Fermilab; Grybos, P.; /AGH-UST, Cracow; Hoff, J.; /Fermilab; Siddons, P., Maj.; /Brookhaven; Szczygiel, R.; /AGH-UST, Cracow; Trimpl, M.; Yarema, R.; /Fermilab

    2011-11-01

    The natural diffusive spread of charge carriers in the course of their drift towards the collecting electrodes of planar, segmented detectors results in a division of the original cloud of carriers between neighboring channels. This paper presents the analysis of algorithms, implementable with reasonable circuit resources, whose task is to prevent degradation of the detective quantum efficiency in highly granular, digital pixel detectors. The immediate motivation of the work is a photon science application requiring simultaneous timing spectroscopy and 2D position sensitivity. Leading-edge discrimination, provided it can be freed from the uncertainties associated with charge sharing, is used for timing the events. The analyzed solutions can naturally be extended to amplitude spectroscopy with pixel detectors.

  13. A class of polynomial primal-dual interior-point algorithms for semidefinite optimization

    Institute of Scientific and Technical Information of China (English)

    王国强; 白延琴

    2006-01-01

    In the present paper we present a class of polynomial primal-dual interior-point algorithms for semidefinite optimization based on a kernel function. This kernel function is not a so-called self-regular function, because its growth term increases only linearly. Some new analysis tools are developed which can be used for the complexity analysis of algorithms that follow a strategy analogous to [5] in designing the search directions for the Newton system. The complexity bounds obtained for the algorithms with large- and small-update methods are $O\!\left(q\, n^{(p+q)/(q(p+1))} \log \frac{n}{\varepsilon}\right)$ and $O\!\left(q^{2} \sqrt{n} \log \frac{n}{\varepsilon}\right)$, respectively.
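    The record does not reproduce the kernel function itself. Purely as an illustration of a kernel whose growth term is linear rather than self-regular (a representative single-parameter member of the kernel-function families studied in this literature, not the two-parameter function of the paper), one may take

        $\psi(t) = t - 1 + \dfrac{t^{1-q} - 1}{q - 1}, \qquad q > 1,$

    whose barrier behaviour as $t \to 0$ comes from the second term while the growth term $t - 1$ increases only linearly; the proximity measure driving the Newton steps is then $\Psi(v) = \sum_i \psi(v_i)$ evaluated on the scaled iterate $v$.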

  14. Atomic resolution tomography reconstruction of tilt series based on a GPU accelerated hybrid input-output algorithm using polar Fourier transform.

    Science.gov (United States)

    Lu, Xiangwen; Gao, Wenpei; Zuo, Jian-Min; Yuan, Jiabin

    2015-02-01

    Advances in diffraction and transmission electron microscopy (TEM) have greatly improved the prospect of three-dimensional (3D) structure reconstruction from two-dimensional (2D) images or diffraction patterns recorded in a tilt series at atomic resolution. Here, we report a new graphics processing unit (GPU) accelerated iterative transformation algorithm (ITA) based on the polar fast Fourier transform for reconstructing 3D structure from 2D diffraction patterns. The algorithm also applies to image tilt series by calculating diffraction patterns from the recorded images using the projection-slice theorem. A gold icosahedral nanoparticle of 309 atoms is used as the model to test the feasibility, performance and robustness of the developed algorithm using simulations. Atomic resolution in 3D is achieved for the 309-atom Au nanoparticle using 75 diffraction patterns covering 150° of rotation. The capability demonstrated here provides an opportunity to uncover the 3D structure of small objects of nanometre size by electron diffraction.
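    The projection-slice theorem invoked above can be checked numerically in a few lines; the snippet below (illustrative only, zero-tilt case) verifies that the 1D Fourier transform of a projection equals the corresponding central line of the object's 2D Fourier transform.

        import numpy as np

        obj = np.random.rand(128, 128)                 # stand-in for a projected potential

        projection = obj.sum(axis=0)                   # project along y (zero tilt)
        ft_projection = np.fft.fft(projection)

        ft_obj = np.fft.fft2(obj)
        central_slice = ft_obj[0, :]                   # the ky = 0 line of the 2D transform

        assert np.allclose(ft_projection, central_slice)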

  15. A new reconstruction algorithm of the cell production kinetics for conifer species

    Science.gov (United States)

    Popkova, Margarita; Shishov, Vladimir; Tychkov, Ivan

    2017-04-01

    Tree rings are important for reconstructing past environmental conditions. To describe and understand the development of tree-ring formation and to predict wood characteristics, process-based modeling of wood formation has great potential. Seasonal dynamics of tree growth can be explained by tree-ring growth, individual features of the tree and external climatic conditions. The main anatomical characteristics of tree-ring structure, e.g. the number of cells, the radial cell size and the cell-wall thickness, are closely related to the kinetic characteristics of seasonal tree-ring formation, especially the kinetics of cell production. Due to the specificity of these processes and the complexity of labor-intensive experimental methods, mathematical modeling can be considered one possible approach, which requires the development of adequate mathematical methods and corresponding software components. Most current process-based models simulate biomass production only, with no possibility of determining the processes of cell production by the cambium and the differentiation of cambial derivatives. A new block of the Vaganov-Shashkin model is proposed to estimate cell production in tree rings and transfer it to a time scale based on the simulated integral growth rates of the model. Here VS-modeling is an extremely important step, because the simulated daily tree-ring growth rate is the basis for evaluating intra-seasonal variation of cambial production. A comparative analysis of the growth rates against one of the main tree-ring anatomical characteristics of conifers, the radial cell size, was carried out to provide a new procedure for timing cambium cell production during the season. Based on previous research experience, in which seasonal tree-growth dynamics were analyzed by direct (cutting, etc.) and indirect methods, the newly proposed method is free from the complexity and limitations accompanying previous methods. The work was supported by the Russian Science Foundation (RSF

  16. Multi-detector row computed tomography of the heart: does a multi-segment reconstruction algorithm improve left ventricular volume measurements?

    Energy Technology Data Exchange (ETDEWEB)

    Juergens, Kai Uwe; Maintz, David; Heimes, Britta; Fallenberg, Eva Maria; Heindel, Walter; Fischbach, Roman [University of Muenster, Department of Clinical Radiology, Muenster (Germany); Grude, Matthias [University of Muenster, Department of Cardiology and Angiology, Muenster (Germany); Boese, Jan M. [Siemens Medical Solutions, Forchheim (Germany)

    2005-01-01

    A multi-segment cardiac image reconstruction algorithm in multi-detector row computed tomography (MDCT) was evaluated regarding temporal resolution and determination of left ventricular (LV) volumes and global LV function. MDCT and cine magnetic resonance (CMR) imaging were performed in 12 patients with known or suspected coronary artery disease. Patients gave informed written consent for the MDCT and the CMR exam. MDCT data were reconstructed using the standard adaptive cardiac volume (ACV) algorithm as well as a multi-segment algorithm utilizing data from three, five and seven rotations. LV end-diastolic (LV-EDV) and end-systolic volumes and ejection fraction (LV-EF) were determined from short-axis image reformations and compared to CMR data. Mean temporal resolution achieved was 192±24 ms using the ACV algorithm and improved significantly utilizing the three, five and seven data segments to 139±12, 113±13 and 96±11 ms (P<0.001 for each). Mean LV-EDV was without significant differences using the ACV algorithm, the multi-segment approach and CMR imaging. Despite improved temporal resolution with multi-segment image reconstruction, end-systolic volumes were less accurately measured (mean differences 3.9±11.8 ml to 8.1±13.9 ml), resulting in a consistent underestimation of LV-EF by 2.3-5.4% in comparison to CMR imaging (Bland-Altman analysis). Multi-segment image reconstruction improves temporal resolution compared to the standard ACV algorithm, but this does not result in a benefit for determination of LV volume and function. (orig.)

  17. The impact of CT radiation dose reduction and iterative reconstruction algorithms from four different vendors on coronary calcium scoring

    Energy Technology Data Exchange (ETDEWEB)

    Willemink, Martin J.; Takx, Richard A.P.; Jong, Pim A. de; Budde, Ricardo P.J.; Schilham, Arnold M.R.; Leiner, Tim [Utrecht University Medical Center, Department of Radiology, Utrecht (Netherlands); Bleys, Ronald L.A.W. [Utrecht University Medical Center, Department of Anatomy, Utrecht (Netherlands); Das, Marco; Wildberger, Joachim E. [Maastricht University Medical Center, Department of Radiology, Maastricht (Netherlands); Prokop, Mathias [Radboud University Nijmegen Medical Center, Department of Radiology, Nijmegen (Netherlands); Buls, Nico; Mey, Johan de [UZ Brussel, Department of Radiology, Brussels (Belgium)

    2014-09-15

    To analyse the effects of radiation dose reduction and iterative reconstruction (IR) algorithms on coronary calcium scoring (CCS). Fifteen ex vivo human hearts were examined in an anthropomorphic chest phantom using computed tomography (CT) systems from four vendors and examined at four dose levels using unenhanced prospectively ECG-triggered protocols. Tube voltage was 120 kV and tube current differed between protocols. CT data were reconstructed with filtered back projection (FBP) and reduced dose CT data with IR. CCS was quantified with Agatston scores, calcification mass and calcification volume. Differences were analysed with the Friedman test. Fourteen hearts showed coronary calcifications. Dose reduction with FBP did not significantly change Agatston scores, calcification volumes and calcification masses (P > 0.05). Maximum differences in Agatston scores were 76, 26, 51 and 161 units, in calcification volume 97, 27, 42 and 162 mm³, and in calcification mass 23, 23, 20 and 48 mg, respectively. IR resulted in a trend towards lower Agatston scores and calcification volumes with significant differences for one vendor (P < 0.05). Median relative differences between reference FBP and reduced dose IR for Agatston scores remained within 2.0-4.6 %, 1.0-5.3 %, 1.2-7.7 % and 2.6-4.5 %, for calcification volumes within 2.4-3.9 %, 1.0-5.6 %, 1.1-6.4 % and 3.7-4.7 %, for calcification masses within 1.9-4.1 %, 0.9-7.8 %, 2.9-4.7 % and 2.5-3.9 %, respectively. IR resulted in increased, decreased or similar calcification masses. CCS derived from standard FBP acquisitions was not affected by radiation dose reductions up to 80 %. IR resulted in a trend towards lower Agatston scores and calcification volumes. (orig.)
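    For orientation, the Agatston score reported above follows the standard published rule: calcified pixels are thresholded at 130 HU, each lesion's area is multiplied by a weight set by its peak attenuation, and slice scores are summed. The sketch below states that rule generically; the threshold, weights and 1 mm² minimum lesion area are the commonly used values, not vendor-specific settings.

        import numpy as np
        from scipy import ndimage

        def agatston_slice_score(hu_slice, pixel_area_mm2, min_area_mm2=1.0):
            """Agatston score of one axial slice (generic rule, illustrative only)."""
            labels, n_lesions = ndimage.label(hu_slice >= 130)   # calcification threshold
            score = 0.0
            for lesion in range(1, n_lesions + 1):
                lesion_mask = labels == lesion
                area = lesion_mask.sum() * pixel_area_mm2
                if area < min_area_mm2:                          # ignore noise-like specks
                    continue
                peak = hu_slice[lesion_mask].max()
                weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
                score += area * weight
            return score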

  18. Dendroclimatic transfer functions revisited: Little Ice Age and Medieval Warm Period summer temperatures reconstructed using artificial neural networks and linear algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Helama, S.; Holopainen, J.; Eronen, M. [Department of Geology, University of Helsinki, (Finland); Makarenko, N.G. [Russian Academy of Sciences, St. Petersburg (Russian Federation). Pulkovo Astronomical Observatory; Karimova, L.M.; Kruglun, O.A. [Institute of Mathematics, Almaty (Kazakhstan); Timonen, M. [Finnish Forest Research Institute, Rovaniemi Research Unit (Finland); Merilaeinen, J. [SAIMA Unit of the Savonlinna Department of Teacher Education, University of Joensuu (Finland)

    2009-07-01

    Tree-rings tell of past climates. To do so, tree-ring chronologies comprising numerous climate-sensitive living-tree and subfossil time-series need to be 'transferred' into palaeoclimate estimates using transfer functions. The purpose of this study is to compare different types of transfer functions, especially linear and nonlinear algorithms. Accordingly, multiple linear regression (MLR), linear scaling (LSC) and artificial neural networks (ANN, a nonlinear algorithm) were compared. Transfer functions were built using a regional tree-ring chronology and instrumental temperature observations from Lapland (northern Finland and Sweden). In addition, conventional MLR was compared with a hybrid model whereby climate was reconstructed separately for short- and long-period timescales prior to combining the bands of timescales into a single hybrid model. The fidelity of the different reconstructions was validated against instrumental climate data. The reconstructions by MLR and ANN showed reliable reconstruction capabilities over the instrumental period (AD 1802-1998). LSC failed to reach reasonable verification statistics and did not qualify as a reliable reconstruction; this was due mainly to exaggeration of the low-frequency climatic variance. Over this instrumental period, the reconstructed low-frequency amplitudes of climate variability were rather similar for MLR and ANN. Notably greater differences between the models were found over the actual reconstruction period (AD 802-1801). A marked temperature decline, as reconstructed by MLR, from the Medieval Warm Period (AD 931-1180) to the Little Ice Age (AD 1601-1850), was evident in all the models. This decline was approximately 0.5 °C as reconstructed by MLR. Different ANN-based palaeotemperatures showed a simultaneous cooling of 0.2 to 0.5 °C, depending on the algorithm. The hybrid MLR did not seem to provide further benefit over conventional MLR in our sample. The robustness of the conventional MLR over the calibration
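    The difference between the two linear approaches compared above can be made concrete with a few lines of code; the sketch uses synthetic data and illustrative variable names (not the Lapland chronology itself) to contrast linear scaling, which preserves the full variance of the target, with least-squares regression, which shrinks the reconstruction toward the calibration mean.

        import numpy as np

        rng = np.random.default_rng(0)
        temperature = rng.normal(11.0, 1.2, size=200)                # instrumental summer temperatures
        ring_width = 0.6 * temperature + rng.normal(0.0, 1.0, 200)   # climate-sensitive proxy

        # Linear scaling (LSC): match the mean and standard deviation of the target
        standardized = (ring_width - ring_width.mean()) / ring_width.std()
        lsc_reconstruction = standardized * temperature.std() + temperature.mean()

        # Ordinary least squares (single-predictor analogue of MLR)
        slope, intercept = np.polyfit(ring_width, temperature, deg=1)
        mlr_reconstruction = slope * ring_width + intercept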

  19. New Developments of Exact Cone-beam CT Reconstruction Algorithms

    Institute of Scientific and Technical Information of China (English)

    陈志强; 李亮; 康克军; 张丽

    2005-01-01

    The Eighth International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine was held in Salt Lake City, USA, in July 2005. It is one of the most prestigious meetings in the field of CT, PET and SPECT image reconstruction. This paper introduces several of the latest developments in PET, SPECT and CT imaging presented at this meeting. According to our interest, we focus on exact cone-beam CT reconstruction, including the minimum-data filtered-backprojection (MD-FBP) algorithm, the R-line algorithm and others. Finally, we discuss the respective advantages of these two exact cone-beam reconstruction algorithms and outline future research directions in CT image reconstruction.

  20. Application of the FDK algorithm for multi-slice tomographic image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Paulo Roberto, E-mail: pcosta@if.usp.b [Universidade de Sao Paulo (IFUSP), SP (Brazil). Inst. de Fisica. Dept. de Fisica Nuclear; Araujo, Ericky Caldas de Almeida [Fine Image Technology, Sao Paulo, SP (Brazil)

    2010-08-15

    This work consisted of the study and application of the FDK (Feldkamp-Davis-Kress) algorithm for tomographic image reconstruction using cone-beam geometry, resulting in the implementation of an adapted multi-slice computed tomography (MSCT) system. For the acquisition of the projections, a rotating platform coupled to a goniometer, an X-ray unit and a charge-coupled-device digital image detector were used. The FDK algorithm was implemented on a computer with a Pentium Xeon 3.0 processor, which was used for the reconstruction process. Initially, the original FDK algorithm was applied considering only ideal physical conditions in the measurement process. Then some corrections of artifacts related to the projection measurement process were incorporated. The implemented MSCT system was calibrated. A specially designed and manufactured object with a known linear attenuation coefficient distribution (μ(r)) was used for this purpose. Finally, the implemented MSCT system was used for multi-slice tomographic reconstruction of an inhomogeneous object whose distribution μ(r) was unknown. Some aspects of the reconstructed images were analyzed to assess the robustness and reproducibility of the system. During the system calibration, a linear relationship between CT number and the linear attenuation coefficients of materials was verified, which validates the application of the implemented multi-slice tomographic system for the characterization of the linear attenuation coefficients of several distinct objects. (author)
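    The linear relationship verified during calibration is what the conventional definition of the CT number implies; in the usual Hounsfield-style convention (assumed here, since the record does not state the scale),

        $\mathrm{CT} = 1000 \,\dfrac{\mu - \mu_{\text{water}}}{\mu_{\text{water}}}$

    so fitting measured CT numbers against the known $\mu(r)$ of the calibration object yields the slope and offset needed to express reconstructed voxels of an unknown object in attenuation units.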

  1. Nonlinear Primal-Dual Interior Point Algorithm for Discrete Reactive Power Optimization

    Institute of Scientific and Technical Information of China (English)

    程莹; 刘明波

    2001-01-01

    A new algorithm for the reactive power optimization problem involving both discrete and continuous variables is presented, which incorporates a penalty function into the nonlinear primal-dual interior point algorithm. Numerical results for several test systems, compared with those of the Tabu search method, show that the proposed method is effective and is superior to Tabu search in terms of computation speed, convergence and optimization accuracy. The effectiveness and practicality of the interior point algorithm in solving the nonlinear mixed-integer programming model of reactive power optimization are thus further improved.

  2. Interior-point Algorithm for Convex Quadratic Programming Based on a New Class of Kernel Functions

    Institute of Scientific and Technical Information of China (English)

    李鑫

    2016-01-01

    Based on a new class of kernel functions, a large-update primal-dual interior-point algorithm for convex quadratic programming is presented. By using new technical results and favorable properties of the kernel functions, the study proves that the iteration complexity of the algorithm is $O(\sqrt{n}\,\log n\,\log\frac{n}{\varepsilon})$, which matches the currently best known iteration bound for large-update primal-dual interior-point algorithms for convex quadratic programming.

  3. Development of Image Reconstruction Algorithms in Electrical Capacitance Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez Marron, J. L.; Alberdi Primicia, J.; Barcala Riveira, J. M.

    2007-12-28

    Electrical Capacitance Tomography (ECT) has not yet been developed far enough to be used at an industrial level. That is due, first, to difficulties in the measurement of very small capacitances (in the range of femtofarads) and, second, to the problem of on-line reconstruction of the images. The latter problem is also due to the small number of electrodes (a maximum of 16), which makes the usual reconstruction algorithms prone to many errors. This work describes a new, purely geometrical method that could be used for this purpose. (Author) 4 refs.

  4. Design of low complexity sharp MDFT filter banks with perfect reconstruction using hybrid harmony-gravitational search algorithm

    Directory of Open Access Journals (Sweden)

    V. Sakthivel

    2015-12-01

    The design of a low-complexity, sharp-transition-width Modified Discrete Fourier Transform (MDFT) filter bank with perfect reconstruction (PR) is proposed in this work. Current trends in technology require high data rates and speedy processing along with reduced power consumption, implementation complexity and chip area. Filters with sharp transition width are required for various applications in wireless communication. The frequency response masking (FRM) technique is used to reduce the implementation complexity of sharp MDFT filter banks with PR. To further reduce the implementation complexity, the continuous coefficients of the filters in the MDFT filter banks are represented in discrete space using canonic signed digit (CSD) form. The multipliers in the filters are replaced by shifters and adders. The number of non-zero bits is reduced in the conversion process to minimize the number of adders and shifters required for the filter implementation; hence the performance of the MDFT filter bank with PR may degrade. In this work, the performance of the MDFT filter banks with PR is improved using a hybrid harmony-gravitational search algorithm.
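    The CSD recoding mentioned above can be stated compactly; the sketch below shows one common non-adjacent-form conversion (an illustrative version for non-negative integer coefficients, not necessarily the exact coding used by the authors), in which every digit is in {-1, 0, +1} so that each coefficient multiplication collapses to a handful of shifts and adds.

        def to_csd(value, n_digits):
            """Recode a non-negative integer into canonic-signed-digit (CSD) form.

            Returns digits in {-1, 0, +1}, least-significant first, such that
            value == sum(d * 2**k for k, d in enumerate(digits)) and no two
            adjacent digits are nonzero, minimising the adders/shifters needed.
            """
            digits = []
            x = value
            for _ in range(n_digits):
                if x % 2 == 0:
                    d = 0
                else:
                    d = 1 if x % 4 == 1 else -1   # pick the digit that leaves x even
                digits.append(d)
                x = (x - d) // 2
            return digits

        print(to_csd(7, 5))   # [-1, 0, 0, 1, 0]: 7 = 8 - 1, only two nonzero digits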

  5. Development and Evaluation of Vectorised and Multi-Core Event Reconstruction Algorithms within the CMS Software Framework

    Science.gov (United States)

    Hauth, T.; Innocente, V.; Piparo, D.

    2012-12-01

    The processing of data acquired by the CMS detector at the LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest microprocessor architectures. Modern CPUs present several vector units, the capacity of which is growing steadily with the introduction of new processor generations. Moreover, an increasing number of cores per die is offered by the main vendors, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of such innovations, either by explicit statements in the program sources or by automatically adapting the generated machine instructions to the available hardware, without the need to modify the existing code base. Programming techniques to implement reconstruction algorithms and optimised data structures are presented that aim at scalable vectorization and parallelization of the calculations. One of their features is the usage of new language features of the C++11 standard. Portions of the CMSSW framework are illustrated which have been found to be especially profitable for the application of vectorization and multi-threading techniques. Specific utility components have been developed to help vectorization and parallelization. They can easily become part of a larger common library. To conclude, careful measurements are described, which show the execution speedups achieved via vectorised and multi-threaded code in the context of CMSSW.

  6. Optimization of the volume reconstruction for classical Tomo-PIV algorithms (MART, BIMART and SMART): synthetic and experimental studies

    Science.gov (United States)

    Thomas, L.; Tremblais, B.; David, L.

    2014-03-01

    Optimization of the multiplicative algebraic reconstruction technique (MART), simultaneous MART (SMART) and block-iterative MART (BIMART) reconstruction techniques was carried out on synthetic and experimental data. Different criteria were defined to improve the preprocessing of the initial images. Knowledge of how each reconstruction parameter influences the quality of the particle volume reconstruction and the computing time is key in Tomo-PIV. These criteria were applied to a real case, a jet in cross-flow, and were validated.
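    For reference, the multiplicative update that MART and its SMART/BIMART variants share takes the standard Tomo-PIV form (a generic statement; the relaxation and weighting choices are exactly what such optimization studies tune):

        $E_j^{(k+1)} = E_j^{(k)} \left( \dfrac{I_i}{\sum_{j'} w_{ij'} E_{j'}^{(k)}} \right)^{\mu\, w_{ij}}$

    where $E_j$ is the reconstructed voxel intensity, $I_i$ the recorded pixel intensity, $w_{ij}$ the voxel-to-pixel weighting coefficient and $\mu \le 1$ the relaxation parameter; SMART applies the correction from all pixels simultaneously, while BIMART processes them in blocks.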

  7. Primal-Dual Interior-Point Algorithms with Dynamic Step-Size Based on Kernel Functions for Linear Programming

    Institute of Scientific and Technical Information of China (English)

    钱忠根; 白延琴

    2005-01-01

    In this paper, primal-dual interior-point algorithms with dynamic step size are implemented for linear programming (LP) problems. The algorithms are based on a few kernel functions, including both self-regular functions and non-self-regular ones. The dynamic step size is compared with a fixed step size for the algorithms in the inner iteration of the Newton step. Numerical tests show that the algorithms with dynamic step size are more efficient than those with fixed step size.

  8. Evaluation of hybrid SART + OS + TV iterative reconstruction algorithm for optical-CT gel dosimeter imaging

    Science.gov (United States)

    Du, Yi; Wang, Xiangang; Xiang, Xincheng; Wei, Zhouping

    2016-12-01

    Optical computed tomography (optical-CT) is a high-resolution, fast, and easily accessible readout modality for gel dosimeters. This paper evaluates a hybrid iterative image reconstruction algorithm for optical-CT gel dosimeter imaging, namely the simultaneous algebraic reconstruction technique (SART) integrated with ordered-subsets (OS) iteration and total variation (TV) minimization regularization. The mathematical theory and implementation workflow of the algorithm are detailed. Experiments on two different optical-CT scanners were performed for cross-platform validation. For algorithm evaluation, the iterative convergence is first shown, and peak-to-noise-ratio (PNR) and contrast-to-noise ratio (CNR) results are given with the cone-beam filtered backprojection (FDK) algorithm and the FDK results followed by median filtering (mFDK) as references. The effect on spatial gradients and reconstruction artefacts is also investigated. The PNR curve illustrates that the result of SART + OS + TV finally converges to that of FDK but with less noise, which implies that the dose-OD calibration method for FDK is also applicable to the proposed algorithm. The CNR in selected regions of interest (ROIs) of the SART + OS + TV results is almost double that of FDK and 50% higher than that of mFDK. The artefacts in the SART + OS + TV results are still visible, but have been much suppressed, with little spatial gradient loss. Based on this assessment, we can conclude that the hybrid SART + OS + TV algorithm outperforms both FDK and mFDK in denoising, preserving spatial dose gradients and reducing artefacts, and that its effectiveness and efficiency are platform independent.
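    As a reminder of the two building blocks named in the acronym (written generically; the paper's exact weights, subset ordering and TV solver may differ), one OS-SART sweep over the projection subset $S_b$ updates

        $x_j^{(k+1)} = x_j^{(k)} + \lambda \,\dfrac{\sum_{i \in S_b} \dfrac{a_{ij}}{\sum_{j'} a_{ij'}}\left(p_i - \sum_{j'} a_{ij'} x_{j'}^{(k)}\right)}{\sum_{i \in S_b} a_{ij}}$

    and is followed by a few descent steps that reduce the total variation $\mathrm{TV}(x) = \sum_{m,n} \sqrt{(x_{m+1,n}-x_{m,n})^2 + (x_{m,n+1}-x_{m,n})^2}$ of the current estimate, which is what suppresses noise while preserving dose gradients.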

  9. Active Noise Control for Vehicle Interior Noise Using Variable Incremental Step LMS Algorithm

    Institute of Scientific and Technical Information of China (English)

    余荣平; 张心光; 王岩松; 郭辉

    2015-01-01

    By analyzing noise signal samples recorded inside a railway vehicle, the plain LMS algorithm and an LMS algorithm with a variable incremental step were applied, respectively, to update the weight vectors of an adaptive filter based on the steepest-descent algorithm. In the variable-step LMS algorithm, the step factor μ(n) is a sinusoidal function of the error signal e(n). The adaptive filter was used for active control of the interior noise of the vehicle. The results show that the proposed variable-step LMS algorithm overcomes the inherent drawback of the plain LMS algorithm, whose fixed step size cannot provide fast convergence and a small steady-state error at the same time, and achieves faster convergence and a smaller steady-state error simultaneously.
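    A minimal sketch of the adaptive filter described above is given below. The record does not state the exact sinusoidal mapping from error to step size, so mu = mu_max * |sin(alpha * e)| is an assumed, illustrative choice; the weight update itself is the standard LMS (steepest-descent) rule.

        import numpy as np

        def variable_step_lms(x, d, order=16, mu_max=0.05, alpha=2.0):
            """LMS filtering of input x against desired signal d with an error-dependent step."""
            w = np.zeros(order)                      # adaptive filter weights
            y = np.zeros(len(x))
            e = np.zeros(len(x))
            for n in range(order, len(x)):
                u = x[n - order:n][::-1]             # most recent input samples
                y[n] = w @ u
                e[n] = d[n] - y[n]                   # error drives both the update and the step size
                mu = mu_max * abs(np.sin(alpha * e[n]))   # assumed sinusoidal step-size law
                w += mu * e[n] * u                   # standard LMS weight update
            return y, e, w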

  10. Estimation of the parameter covariance matrix for a one-compartment cardiac perfusion model estimated from a dynamic sequence reconstructed using MAP iterative reconstruction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Gullberg, Grant T.; Huesman, Ronald H.; Reutter, Bryan W.; Qi, Jinyi; Ghosh Roy, Dilip N.

    2004-01-01

    In dynamic cardiac SPECT, estimates of kinetic parameters of a one-compartment perfusion model are usually obtained in a two-step process: 1) first a MAP iterative algorithm, which properly models the Poisson statistics and the physics of the data acquisition, reconstructs a sequence of dynamic reconstructions; 2) then kinetic parameters are estimated from time-activity curves generated from the dynamic reconstructions. This paper provides a method for calculating the covariance matrix of the kinetic parameters, which are determined using weighted least squares fitting that incorporates the estimated variance and covariance of the dynamic reconstructions. For each transaxial slice, sets of sequential tomographic projections are reconstructed into a sequence of transaxial reconstructions, using for each reconstruction in the time sequence an iterative MAP reconstruction to calculate the maximum a posteriori reconstructed estimate. Time-activity curves for a sum of activity in a blood region inside the left ventricle and a sum in a cardiac tissue region are generated. Also, curves for the variance of the two estimates of the sum and for the covariance between the two ROI estimates are generated as a function of time at convergence, using an expression obtained from the fixed-point solution of the statistical error of the reconstruction. A one-compartment model is fit to the tissue activity curves assuming a noisy blood input function to give weighted least squares estimates of the blood volume fraction and the wash-in and wash-out rate constants specifying the kinetics of 99mTc-teboroxime for the left ventricular myocardium. Numerical methods are used to calculate the second derivative of the chi-square criterion to obtain estimates of the covariance matrix for the weighted least squares parameter estimates. Even though the method requires one matrix inverse for each time interval of tomographic acquisition, efficient estimates of the tissue kinetic parameters in a dynamic cardiac SPECT study can be obtained with
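    The one-compartment perfusion model referred to above is conventionally written as follows (standard form with the usual symbols; the record itself does not restate it):

        $\dfrac{dC_t(t)}{dt} = k_{21}\, C_b(t) - k_{12}\, C_t(t), \qquad A(t) = f_v\, C_b(t) + (1 - f_v)\, C_t(t)$

    where $C_b$ is the blood input function, $C_t$ the tissue concentration, $k_{21}$ and $k_{12}$ the wash-in and wash-out rate constants, and $f_v$ the blood volume fraction; the weighted least-squares fit estimates $f_v$, $k_{21}$ and $k_{12}$ from the ROI time-activity curves, and the parameter covariance follows from the curvature (second derivative) of the chi-square criterion at the fitted minimum.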

  11. Computerized Ultrasound Risk Evaluation (CURE) System: Development of Combined Transmission and Reflection Ultrasound with New Reconstruction Algorithms for Breast Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Littrup, P J; Duric, N; Azevedo, S; Chambers, D; Candy, J V; Johnson, S; Auner, G; Rather, J; Holsapple, E T

    2001-09-07

    processing, but the operator-dependent nature of using a moveable transducer head remains a significant problem for thorough coverage of the entire breast. We have therefore undertaken the development of a whole-breast (i.e., including the axillary tail) system with improved resolution and tissue characterization abilities. The extensive ultrasound physics considerations, engineering, materials process development and subsequent algorithm reconstruction are beyond the scope of this initial paper. Details of these proprietary processes will be forthcoming as the intellectual property is fully secured. We focus here on the imaging outcomes as they apply to eventual expansion into clinical use.

  12. An algorithm for the reconstruction of high-energy neutrino-induced particle showers and its application to the ANTARES neutrino telescope.

    Science.gov (United States)

    Albert, A; André, M; Anghinolfi, M; Anton, G; Ardid, M; Aubert, J-J; Avgitas, T; Baret, B; Barrios-Martí, J; Basa, S; Bertin, V; Biagi, S; Bormuth, R; Bourret, S; Bouwhuis, M C; Bruijn, R; Brunner, J; Busto, J; Capone, A; Caramete, L; Carr, J; Celli, S; Chiarusi, T; Circella, M; Coelho, J A B; Coleiro, A; Coniglione, R; Costantini, H; Coyle, P; Creusot, A; Deschamps, A; De Bonis, G; Distefano, C; Di Palma, I; Domi, A; Donzaud, C; Dornic, D; Drouhin, D; Eberl, T; El Bojaddaini, I; Elsässer, D; Enzenhöfer, A; Felis, I; Folger, F; Fusco, L A; Galatà, S; Gay, P; Giordano, V; Glotin, H; Grégoire, T; Gracia Ruiz, R; Graf, K; Hallmann, S; van Haren, H; Heijboer, A J; Hello, Y; Hernández-Rey, J J; Hößl, J; Hofestädt, J; Hugon, C; Illuminati, G; James, C W; de Jong, M; Jongen, M; Kadler, M; Kalekin, O; Katz, U; Kießling, D; Kouchner, A; Kreter, M; Kreykenbohm, I; Kulikovskiy, V; Lachaud, C; Lahmann, R; Lefèvre, D; Leonora, E; Lotze, M; Loucatos, S; Marcelin, M; Margiotta, A; Marinelli, A; Martínez-Mora, J A; Mele, R; Melis, K; Michael, T; Migliozzi, P; Moussa, A; Nezri, E; Organokov, M; Păvălaş, G E; Pellegrino, C; Perrina, C; Piattelli, P; Popa, V; Pradier, T; Quinn, L; Racca, C; Riccobene, G; Sánchez-Losa, A; Saldaña, M; Salvadori, I; Samtleben, D F E; Sanguineti, M; Sapienza, P; Schüssler, F; Sieger, C; Spurio, M; Stolarczyk, Th; Taiuti, M; Tayalati, Y; Trovato, A; Turpin, D; Tönnis, C; Vallage, B; Van Elewyck, V; Versari, F; Vivolo, D; Vizzoca, A; Wilms, J; Zornoza, J D; Zúñiga, J

    2017-01-01

    A novel algorithm to reconstruct neutrino-induced particle showers within the ANTARES neutrino telescope is presented. The method achieves a median angular resolution of [Formula: see text] for shower energies below 100 TeV. Applying this algorithm to 6 years of data taken with the ANTARES detector, 8 events with reconstructed shower energies above 10 TeV are observed. This is consistent with the expectation of about 5 events from atmospheric backgrounds, but also compatible with diffuse astrophysical flux measurements by the IceCube collaboration, from which 2-4 additional events are expected. A [Formula: see text] C.L. upper limit on the diffuse astrophysical neutrino flux with a value per neutrino flavour of [Formula: see text] is set, applicable to the energy range from 23 TeV to 7.8 PeV, assuming an unbroken [Formula: see text] spectrum and neutrino flavour equipartition at Earth.

  13. Gray Weighted CT Reconstruction Algorithm Based on Variable Voltage

    Institute of Scientific and Technical Information of China (English)

    李权; 陈平; 潘晋孝

    2014-01-01

    In conventional CT reconstruction based on a fixed voltage, the projection data often appear overexposed or underexposed, and so the reconstruction results are poor. To solve this problem, variable-voltage CT reconstruction is proposed. The effective projection sequences of a structural component, matched to its effective thickness, are obtained through the variable voltages. The total variation is adjusted and minimized to optimize the reconstruction on the basis of the iterative image obtained with the ART algorithm. In the reconstruction process, following the gray-weighted algorithm, the reconstructed image at a low voltage is used as the initial value for the effective-projection reconstruction at the adjacent higher voltage, and so on up to the highest voltage; in this way the complete structural information is reconstructed. Experiments show that the proposed algorithm can completely reflect the information of a complicated structural component, and the pixel values are more stable.

  14. Estimation of non-solid lung nodule volume with low-dose CT protocols: effect of reconstruction algorithm and measurement method

    Science.gov (United States)

    Gavrielides, Marios A.; DeFilippo, Gino; Berman, Benjamin P.; Li, Qin; Petrick, Nicholas; Schultz, Kurt; Siegelman, Jenifer

    2017-03-01

    Computed tomography is primarily the modality of choice to assess stability of nonsolid pulmonary nodules (sometimes referred to as ground-glass opacity) for three or more years, with change in size being the primary factor to monitor. Since volume extracted from CT is being examined as a quantitative biomarker of lung nodule size, it is important to examine factors affecting the performance of volumetric CT for this task. More specifically, the effect of reconstruction algorithms and measurement method in the context of low-dose CT protocols has been an under-examined area of research. In this phantom study we assessed volumetric CT with two different measurement methods (model-based and segmentation-based) for nodules with radiodensities of both nonsolid (-800HU and -630HU) and solid (-10HU) nodules, sizes of 5mm and 10mm, and two different shapes (spherical and spiculated). Imaging protocols included CTDIvol typical of screening (1.7mGy) and sub-screening (0.6mGy) scans and different types of reconstruction algorithms across three scanners. Results showed that radio-density was the factor contributing most to overall error based on ANOVA. The choice of reconstruction algorithm or measurement method did not affect substantially the accuracy of measurements; however, measurement method affected repeatability with repeatability coefficients ranging from around 3-5% for the model-based estimator to around 20-30% across reconstruction algorithms for the segmentation-based method. The findings of the study can be valuable toward developing standardized protocols and performance claims for nonsolid nodules.

  15. A Human-vision-based Algorithm for Curve Reconstruction

    Institute of Scientific and Technical Information of China (English)

    蔡大伟; 刘勇; 邱芹军; 曹涛

    2014-01-01

    Curve and surface reconstruction is one of the most important problems in reverse engineering. In particular, following the progression from points to lines to surfaces in computer graphics, curve reconstruction is a very important step and lays the foundation for the subsequent surface reconstruction. This paper studies and implements a curve reconstruction algorithm that incorporates the proximity and good-continuation principles of human vision. Experimental results are presented to show the effectiveness of the algorithm.

  16. Artifact reduction of ultrasound Nakagami imaging by combining multifocus image reconstruction and the noise-assisted correlation algorithm.

    Science.gov (United States)

    Tsui, Po-Hsiang; Tsai, Yu-Wei

    2015-01-01

    Several studies have investigated Nakagami imaging to complement the B-scan in tissue characterization. The noise-induced artifact and the parameter ambiguity effect can affect the performance of Nakagami imaging in detecting variations in scatterer concentration. This study combined multifocus image reconstruction and the noise-assisted correlation algorithm (NCA) into the Nakagami imaging algorithm to suppress these artifacts. A single-element imaging system equipped with a 5 MHz transducer was used to perform brightness/depth (B/D) scanning of agar phantoms with scatterer concentrations ranging from 2 to 32 scatterers/mm³. Experiments were also carried out on a mass with some strong point reflectors in a breast phantom, using a commercial scanner with a 7.5 MHz linear array transducer operated in multifocus mode. The multifocus radiofrequency (RF) signals after the NCA process were used for Nakagami imaging. In the experiments on agar phantoms, increasing the scatterer concentration from 2 to 32 scatterers/mm³ led to backscattered statistics ranging from pre-Rayleigh to Rayleigh distributions, corresponding to an increase in the Nakagami parameter measured in the focal zone from 0.1 to 0.8. However, the artifacts in the far field caused the Nakagami parameters of the various scatterer concentrations to be close to 1 (Rayleigh distribution), making it difficult for Nakagami imaging to characterize scatterers. In the same scatterer concentration range, multifocus Nakagami imaging with the NCA simultaneously suppressed the two types of artifacts, making the Nakagami parameter increase from 0.1 to 0.8 in the focal zone and from 0.18 to 0.7 in the far field, respectively. In the breast phantom experiments, the backscattered statistics of the mass corresponded to a high degree of pre-Rayleigh distribution. The Nakagami parameter of the mass before and after artifact reduction was 0.7 and 0.37, respectively. The results demonstrated that the proposed method for
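    For context, the Nakagami parameter mapped in such images is the standard moment estimator applied to the backscattered envelope $R$ within a sliding window (the usual definition, not a detail specific to this paper):

        $m = \dfrac{\left(\mathrm{E}[R^2]\right)^2}{\mathrm{E}\!\left[\left(R^2 - \mathrm{E}[R^2]\right)^2\right]}, \qquad \Omega = \mathrm{E}[R^2]$

    so that $m < 1$ and $m \approx 1$ correspond to pre-Rayleigh and Rayleigh backscattered statistics, respectively, which is why artifact-induced bias toward $m \approx 1$ masks changes in scatterer concentration.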

  17. Gray weighted algorithm for variable voltage CT reconstruction

    Institute of Scientific and Technical Information of China (English)

    李权; 陈平; 潘晋孝

    2014-01-01

    In conventional computed tomography (CT) reconstruction based on a fixed voltage, the projection data often appear overexposed or underexposed; as a result, the reconstruction results are poor. To solve this problem, variable-voltage CT reconstruction has been proposed. The effective projection sequences of a structural component are obtained through the variable voltage. The total variation is adjusted and minimized to optimize the reconstruction results on the basis of the iterative image obtained with the algebraic reconstruction technique (ART). In the reconstruction process, following the gray-weighted algorithm, the reconstructed image at a low voltage is used as the initial value of the effective-projection reconstruction at the adjacent higher voltage, and so on up to the highest voltage, whereby the complete structural information is reconstructed. Simulation results show that the proposed algorithm can completely reflect the information of a complicated structural component, and the pixel values are more stable than those of the conventional method.

  18. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. Especially accurate quantification of pore-space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach, for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  19. Primal-dual predictor-corrector interior point algorithm for quadratic semidefinite programming

    Institute of Scientific and Technical Information of China (English)

    黄静静; 商朋见; 王爱文

    2011-01-01

    This paper extends the interior point algorithm for semidefinite programming (SDP) to quadratic semidefinite programming (QSDP) and discusses in particular the generation of the AHO search direction. First, the nonlinear equations for solving QSDP are derived using Wolfe's duality theorem. The AHO search direction is obtained by applying Newton's method to these equations, and the existence and uniqueness of this search direction are proved. Finally, the detailed steps of the predictor-corrector interior point algorithm for solving QSDP are given, and numerical experiments are carried out on interior point algorithms based on different search directions; the results show that the algorithm based on the NT direction is the most robust.
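    In standard form (a generic formulation, not copied from the paper), the QSDP treated by such algorithms is

        $\min_{X \succeq 0} \ \tfrac{1}{2}\langle X, \mathcal{Q}(X)\rangle + \langle C, X\rangle \quad \text{s.t.} \quad \langle A_i, X\rangle = b_i, \ i = 1, \dots, m,$

    where $\mathcal{Q}$ is a self-adjoint, positive semidefinite linear operator on symmetric matrices; predictor-corrector methods apply Newton's method to the perturbed KKT conditions and symmetrize the resulting direction, the AHO choice being one such symmetrization.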