WorldWideScience

Sample records for reconstruction algorithm based

  1. Photoacoustic image reconstruction based on Bayesian compressive sensing algorithm

    Institute of Scientific and Technical Information of China (English)

    Mingjian Sun; Naizhang Feng; Yi Shen; Jiangang Li; Liyong Ma; Zhenghua Wu

    2011-01-01

    The photoacoustic tomography (PAT) method, based on compressive sensing (CS) theory, requires that, for the CS reconstruction, the desired image have a sparse representation in a known transform domain. However, the sparsity of photoacoustic signals is destroyed because noise always exists. Therefore, the original sparse signal cannot be effectively recovered using a general reconstruction algorithm. In this study, Bayesian compressive sensing (BCS) is employed to obtain highly sparse representations of photoacoustic images based on a set of noisy CS measurements. Simulation results demonstrate that the BCS-reconstructed image achieves performance superior to that of other state-of-the-art CS reconstruction algorithms.
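
    As an illustration of the BCS idea in the record above, the following minimal Python sketch recovers a synthetic sparse signal from noisy random measurements using scikit-learn's ARD regression, a sparse Bayesian learning method closely related to BCS. The signal, measurement matrix and noise level are illustrative assumptions, not the authors' setup.

        import numpy as np
        from sklearn.linear_model import ARDRegression

        rng = np.random.default_rng(0)
        n, m, k = 256, 96, 8                             # signal length, measurements, nonzeros
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random CS measurement matrix
        y = Phi @ x + 0.01 * rng.standard_normal(m)      # noisy CS measurements

        # ARD places a sparsity-inducing hierarchical prior on each
        # coefficient, as Bayesian compressive sensing does.
        bcs = ARDRegression(fit_intercept=False)
        bcs.fit(Phi, y)
        print("relative error:", np.linalg.norm(bcs.coef_ - x) / np.linalg.norm(x))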

  2. A CUDA-based reverse gridding algorithm for MR reconstruction.

    Science.gov (United States)

    Yang, Jingzhu; Feng, Chaolu; Zhao, Dazhe

    2013-02-01

    MR raw data collected using non-Cartesian methods can be transformed onto Cartesian grids by the traditional gridding algorithm (GA) and reconstructed by Fourier transform. However, its runtime complexity is O(K×N²), where the resolution of the raw data is N×N and the size of the convolution window (CW) is K, and it involves a large number of matrix calculations, including modulus, addition, multiplication and convolution. Therefore, a Compute Unified Device Architecture (CUDA)-based algorithm is proposed to improve the reconstruction efficiency of PROPELLER (a widely recognized non-Cartesian sampling method). Experiments show a write-write conflict among multiple CUDA threads, which induces inconsistent results when multiple k-space data are synchronously convolved onto the same grid. To overcome this problem, a reverse gridding algorithm (RGA) was developed. Different from the traditional GA, which generates a grid window for each trajectory, the RGA calculates a trajectory window for each grid point; this is what "reverse" means. The contribution of each k-space point in the CW is then accumulated to that grid point. Although this algorithm can easily be extended to reconstruct other non-Cartesian sampled raw data, we implement it only for PROPELLER. Experiments illustrate that this CUDA-based RGA successfully solves the write-write conflict, and its reconstruction speed is 7.5 times higher than that of the traditional GA.
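
    The key idea above is to replace scatter-style gridding (each k-space sample writes into a grid window, risking write-write conflicts between threads) with a gather: each grid point reads from its own trajectory window, so every output cell is owned by one thread. A serial NumPy sketch of the gather formulation follows; the triangular kernel and window width are placeholders for whatever convolution kernel an actual implementation uses.

        import numpy as np

        def reverse_grid(coords, data, N, width=2.0):
            # Gather-style ("reverse") gridding: each Cartesian grid point
            # accumulates contributions from the k-space samples that fall
            # inside its own convolution window. Each output cell is written
            # by exactly one iteration (one thread on a GPU), so there is
            # no write-write conflict.
            grid = np.zeros((N, N), dtype=complex)
            for i in range(N):
                for j in range(N):
                    d = np.hypot(coords[:, 0] - i, coords[:, 1] - j)
                    w = np.maximum(0.0, 1.0 - d / width)   # stand-in triangular kernel
                    grid[i, j] = np.sum(w * data)
            return grid

        # Toy usage: 500 random trajectory points gridded onto a 32x32 grid.
        rng = np.random.default_rng(1)
        coords = rng.random((500, 2)) * 31
        samples = rng.standard_normal(500) + 1j * rng.standard_normal(500)
        print(reverse_grid(coords, samples, 32).shape)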

  3. Electromagnetic Model and Image Reconstruction Algorithms Based on EIT System

    Institute of Scientific and Technical Information of China (English)

    CAO Zhang; WANG Huaxiang

    2006-01-01

    An intuitive 2D model of a circular electrical impedance tomography (EIT) sensor with small-size electrodes is established based on the theory of analytic functions. The validity of the model is proved using the solution of the Laplace equation. Suggestions on electrode optimization and an explanation of the ill-conditioned property of the sensitivity matrix are provided based on the model, which takes electrode distance into account and can be generalized to sensors with any simply connected region through a conformal transformation. Image reconstruction algorithms based on the model are implemented to show its feasibility using experimental data collected from the EIT system developed at Tianjin University. In a simulation with a human chest-like configuration, electrical conductivity distributions are reconstructed using equi-potential backprojection (EBP) and Tikhonov regularization (TR) based on a conformal transformation of the model. The algorithms based on the model are suitable for online image reconstruction, and the reconstructed results are good in both size and position.

  4. Flame slice algebraic reconstruction technique reconstruction algorithm based on radial total variation

    Science.gov (United States)

    Zhang, Shufang; Wang, Fuyao; Zhang, Cong; Xie, Hui; Wan, Minggang

    2016-09-01

    The engine flame is an important representation of the combustion process in the cylinder, and three-dimensional (3-D) shape reconstruction of the flame can provide more information for quantitative analysis of the flame, contributing to further research on the mechanism of the combustion flame. One important step in 3-D shape reconstruction is reconstructing two-dimensional (2-D) projections of the flame, so the optimization of the flame 2-D slice reconstruction algorithm is studied in this paper. Exploiting the gradient sparsity in the total variation (TV) domain and the radial diffusion characteristics of the engine combustion flame, a flame 2-D slice algebraic reconstruction technique (ART) algorithm based on radial TV (ART-R-TV) is proposed. Numerical simulation results show that the proposed ART-R-TV algorithm reconstructs flame slice images more stably and with better robustness than two traditional ART algorithms, especially in limited-angle situations.
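
    For reference, the ART baseline that ART-R-TV builds on performs Kaczmarz-style row updates. A minimal sketch follows; the radial-TV regularization step, the paper's actual contribution, is not reproduced, and A, b and the relaxation factor are generic placeholders.

        import numpy as np

        def art(A, b, n_sweeps=20, relax=1.0):
            # Algebraic reconstruction technique: cycle through the projection
            # equations and project the estimate onto each hyperplane a_i.x = b_i.
            m, n = A.shape
            x = np.zeros(n)
            row_norms = np.einsum("ij,ij->i", A, A)
            for _ in range(n_sweeps):
                for i in range(m):
                    if row_norms[i] > 0:
                        x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x

    In ART-R-TV, a regularization step acting on the radial gradient of the slice would be interleaved between such sweeps.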

  5. Medical image reconstruction algorithm based on the geometric information between sensor detector and ROI

    Science.gov (United States)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk

    2016-05-01

    In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the two-step reconstruction algorithm previously proposed based on the geometric information of the ROI (region of interest), considering the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but the software implementation is also very easy, because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm with simulation results obtained using the MATLAB k-Wave toolbox.

  6. A novel reconstruction algorithm for bioluminescent tomography based on Bayesian compressive sensing

    Science.gov (United States)

    Wang, Yaqi; Feng, Jinchao; Jia, Kebin; Sun, Zhonghua; Wei, Huijun

    2016-03-01

    Bioluminescence tomography (BLT) is becoming a promising tool because it can resolve the biodistribution of bioluminescent reporters associated with cellular and subcellular function through several millimeters to centimeters of tissue in vivo. However, BLT reconstruction is an ill-posed problem. By incorporating sparse a priori information about the bioluminescent source, enhanced image quality is obtained with sparsity-based reconstruction algorithms; such algorithms therefore have great potential for BLT. Here, we propose a novel reconstruction method based on Bayesian compressive sensing and investigate its feasibility and effectiveness with a heterogeneous phantom. The results demonstrate the potential and merits of the proposed algorithm.

  7. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    Science.gov (United States)

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPUs). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelised using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause is explained.

  8. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Mingjian Sun

    2015-01-01

    Photoacoustic imaging is an innovative technique for imaging biomedical tissue. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of time reversal with nearest-neighbor, linear, or cubic convolution interpolation, providing higher imaging quality from significantly fewer measurement positions or scanning times.
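
    The SVM interpolation stage can be sketched with scikit-learn's SVR: pressures measured at a few sensor positions are fitted and then evaluated on a dense grid before being fed to time reversal. The positions, signal and hyperparameters below are illustrative; in the paper, PSO searches for good values of C and gamma.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(2)
        pos = np.linspace(0.0, 1.0, 12)[:, None]          # sparse sensor positions
        p = np.sin(2 * np.pi * pos).ravel() + 0.01 * rng.standard_normal(12)

        # Hyperparameters fixed by hand here; the paper tunes them with PSO.
        svr = SVR(kernel="rbf", C=10.0, gamma=8.0)
        svr.fit(pos, p)
        dense = np.linspace(0.0, 1.0, 200)[:, None]
        p_dense = svr.predict(dense)   # interpolated data fed to time reversal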

  9. Algorithm study of wavefront reconstruction based on the cyclic radial shear interferometer

    CERN Document Server

    Li Da Hai; Chen Huai Xin; Chen Zhen Pei; Chen Bo Fei; Jing Feng

    2002-01-01

    The authors present a new algorithm for wavefront reconstruction based on the cyclic radial shear interferometer. With this technique, the actual wavefront can be reconstructed directly and accurately from the phase-difference distribution, which is obtained from the radial shearing pattern by Fourier transform. It can help to accurately measure the distorted ICF wavefront in-process. An experiment is presented to test the algorithm.

  10. Single image super-resolution reconstruction method based on LC-KSVD algorithm

    Science.gov (United States)

    Zhang, Yaolan; Liu, Yijun

    2017-05-01

    A good dictionary has a direct impact on the result of super-resolution image reconstruction. To address the problem that dictionary learning with the K-SVD algorithm captures only representation ability and no class information, this paper proposes a single-image super-resolution algorithm based on LC-KSVD (Label Consistent K-SVD). The algorithm adds classifier parameter constraints to the dictionary learning process, giving the dictionary both good representation and discrimination ability. Experimental results show that the algorithm achieves good reconstruction results and good robustness.

  11. DELAUNAY-BASED SURFACE RECONSTRUCTION ALGORITHM IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Triangulation of scattered points is the first important step in reverse engineering. New concepts of the dynamic circle and the closed point are put forward based on current basic methods. These new concepts narrow the extent that the triangulation process must search and optimize the triangles as they are produced. Updating the searching edges dynamically controls the progress of the triangulation. Intersection judgment between a new triangle and previously produced triangles is changed into intersection judgment between the new triangle and the searching edges. Examples illustrate the superiority of this new algorithm.
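
    For comparison with the record above, a baseline Delaunay triangulation of scattered points is a single call in SciPy; the paper's dynamic-circle and closed-point optimizations are refinements on top of this basic operation and are not part of the library routine.

        import numpy as np
        from scipy.spatial import Delaunay

        pts = np.random.default_rng(3).random((100, 2))   # scattered 2D points
        tri = Delaunay(pts)                               # standard Delaunay triangulation
        print(tri.simplices.shape)                        # (n_triangles, 3) vertex indices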

  12. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Super-resolution (SR) reconstruction is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem. The proposed algorithm has been tested in several cases of motion and degradation, and compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.
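
    The TV regularizer at the core of the model above can be exercised in isolation with scikit-image, which solves the TV denoising subproblem by Chambolle's projection method rather than the paper's fixed-point Euler-Lagrange iteration; the test image and weight are illustrative.

        import numpy as np
        from skimage import data
        from skimage.restoration import denoise_tv_chambolle

        rng = np.random.default_rng(4)
        img = data.camera() / 255.0
        noisy = img + 0.1 * rng.standard_normal(img.shape)
        tv = denoise_tv_chambolle(noisy, weight=0.1)   # TV-regularized estimate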

  13. Noise Equivalent Counts Based Emission Image Reconstruction Algorithm of Tomographic Gamma Scanning

    CERN Document Server

    Wang, Ke; Feng, Wei; Han, Dong

    2014-01-01

    Tomographic Gamma Scanning (TGS) is a technique used to assay the nuclide distribution and radioactivity in nuclear waste drums. Both transmission and emission scans are performed in TGS, and the transmission image is used for attenuation correction in the emission reconstruction. The error of the transmission image, which is not considered by existing reconstruction algorithms, negatively affects the final results. An emission reconstruction method based on Noise Equivalent Counts (NEC) is presented: noise from the attenuation image is transferred to the projection data so that the NEC maximum-likelihood expectation-maximization algorithm can be applied. Experiments are performed to verify the effectiveness of the proposed method.

  14. A practical local tomography reconstruction algorithm based on known subregion

    CERN Document Server

    Paleo, Pierre; Mirone, Alessandro

    2016-01-01

    We propose a new method to reconstruct data acquired in a local tomography setup. This method uses an initial reconstruction and refines it by correcting the low-frequency artifacts known as the cupping effect. A basis of Gaussian functions is used to correct the initial reconstruction; the coefficients of this basis are iteratively optimized under the constraint of a known subregion. Using a coarse basis reduces the degrees of freedom of the problem while still correcting the cupping effect. Simulations show that the known-region constraint yields an unbiased reconstruction, in accordance with uniqueness theorems stated in local tomography.

  15. On a Gradient-Based Algorithm for Sparse Signal Reconstruction in the Signal/Measurements Domain

    Directory of Open Access Journals (Sweden)

    Ljubiša Stanković

    2016-01-01

    Sparse signals can be recovered from a reduced set of samples by using compressive sensing algorithms. In common compressive sensing methods the signal is recovered in the sparsity domain. A method for the reconstruction of sparse signals that reconstructs the missing/unavailable samples or measurements was recently proposed. This method can be used efficiently in signal processing applications where a complete set of signal samples exists. The missing samples are considered as the minimization variables, while the available samples are fixed. Reconstruction of the unavailable samples/measurements is performed using a gradient-based algorithm in the time domain, with an adaptive step. The performance of this algorithm with respect to the step size and convergence is analyzed, and a criterion for step-size adaptation, based on the gradient direction angles, is proposed in this paper. Illustrative examples and a statistical study are presented. The computational efficiency of this algorithm is compared with that of two other commonly used gradient algorithms that reconstruct the signal in the sparsity domain. Uniqueness of the recovered signal is checked using a recently introduced theorem. The application of the algorithm to the reconstruction of highly corrupted images is presented as well.
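
    A minimal sketch of the signal-domain gradient idea: the missing samples are treated as minimization variables and moved along a finite-difference estimate of the gradient of the DFT's l1-norm. The geometric step decay below is a crude stand-in for the paper's angle-based step adaptation.

        import numpy as np

        def reconstruct_missing(y, missing, n_iter=200):
            # y: signal with missing entries (values there are ignored);
            # missing: indices of unavailable samples.
            x = y.copy()
            x[missing] = 0.0
            delta = np.max(np.abs(x))
            for _ in range(n_iter):
                for idx in missing:
                    xp, xm = x.copy(), x.copy()
                    xp[idx] += delta
                    xm[idx] -= delta
                    # Finite-difference gradient of the DFT l1-norm (sparsity measure).
                    g = (np.sum(np.abs(np.fft.fft(xp)))
                         - np.sum(np.abs(np.fft.fft(xm)))) / (2 * delta)
                    x[idx] -= (delta / len(x)) * g
                delta *= 0.98    # crude decay; the paper adapts via gradient angles
            return x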

  16. A Compton scattering image reconstruction algorithm based on total variation minimization

    Institute of Scientific and Technical Information of China (English)

    Li Shou-Peng; Wang Lin-Yuan; Yan Bin; Li Lei; Liu Yong-Jun

    2012-01-01

    Compton scattering imaging is a novel radiation imaging method using scattered photons. Its main characteristic is that the detectors do not have to be on the opposite side of the source, avoiding the rotation process. The reconstruction problem of Compton scattering imaging is the inverse problem of solving for electron densities from nonlinear equations, which is ill-posed. This means the solution exhibits instability and sensitivity to noise or erroneous measurements. Using the theory of sparse image reconstruction, a reconstruction algorithm based on total variation minimization is proposed. The reconstruction problem is described as an optimization problem with a nonlinear data-consistency constraint. The simulated results show that the proposed algorithm can reduce reconstruction error and improve image quality, especially when there are not enough measurements.

  17. A New Track Reconstruction Algorithm suitable for Parallel Processing based on Hit Triplets and Broken Lines

    CERN Document Server

    Schöning, Andre

    2016-01-01

    Track reconstruction in high track multiplicity environments at current and future high-rate particle physics experiments is a big challenge and very time consuming. The search for track seeds and the fitting of track candidates are usually the most time consuming steps in track reconstruction. Here, a new and fast track reconstruction method based on hit triplets is proposed which exploits a three-dimensional fit model, including multiple scattering and hit uncertainties, from the very start, including the search for track seeds. The hit triplet based reconstruction method assumes a homogeneous magnetic field, which admits an analytical solution for the triplet fit. This method is highly parallelizable, needs fewer operations than other standard track reconstruction methods, and is therefore ideal for implementation on parallel computing architectures. The proposed track reconstruction algorithm has been studied in the context of the Mu3e experiment and a typical LHC experiment.
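
    The geometric core of a hit-triplet fit in a homogeneous field is the circle through three transverse-plane hits, whose radius is proportional to the transverse momentum. A sketch of that circumcircle computation follows; the paper's full fit also folds in multiple scattering and hit uncertainties, which are omitted here.

        import numpy as np

        def triplet_circle(p1, p2, p3):
            # Circumcircle of three transverse-plane hits: centre and radius.
            (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
            d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
            ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
                  + (cx**2 + cy**2) * (ay - by)) / d
            uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
                  + (cx**2 + cy**2) * (bx - ax)) / d
            return (ux, uy), np.hypot(ax - ux, ay - uy)

        centre, radius = triplet_circle((0.0, 0.0), (1.0, 1.0), (2.0, 0.0))
        print(centre, radius)    # (1.0, 0.0) 1.0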

  18. A New Track Reconstruction Algorithm suitable for Parallel Processing based on Hit Triplets and Broken Lines

    Science.gov (United States)

    Schöning, André

    2016-11-01

    Track reconstruction in high track multiplicity environments at current and future high-rate particle physics experiments is a big challenge and very time consuming. The search for track seeds and the fitting of track candidates are usually the most time consuming steps in track reconstruction. Here, a new and fast track reconstruction method based on hit triplets is proposed which exploits a three-dimensional fit model, including multiple scattering and hit uncertainties, from the very start, including the search for track seeds. The hit triplet based reconstruction method assumes a homogeneous magnetic field, which admits an analytical solution for the triplet fit. This method is highly parallelizable, needs fewer operations than other standard track reconstruction methods, and is therefore ideal for implementation on parallel computing architectures. The proposed track reconstruction algorithm has been studied in the context of the Mu3e experiment and a typical LHC experiment.

  19. IPED: Inheritance Path-based Pedigree Reconstruction Algorithm Using Genotype Data

    Science.gov (United States)

    Wang, Zhanyong; Han, Buhm; Parida, Laxmi; Eskin, Eleazar

    2013-01-01

    The problem of inference of family trees, or pedigree reconstruction, for a group of individuals is a fundamental problem in genetics. Various methods have been proposed to automate the process of pedigree reconstruction given the genotypes or haplotypes of a set of individuals. Current methods, unfortunately, are very time-consuming and inaccurate for complicated pedigrees, such as pedigrees with inbreeding. In this work, we propose an efficient algorithm that is able to reconstruct large pedigrees with reasonable accuracy. Our algorithm reconstructs the pedigrees generation by generation, backward in time from the extant generation. We predict the relationships between individuals in the same generation using an inheritance path-based approach implemented with an efficient dynamic programming algorithm. Experiments show that our algorithm runs in linear time with respect to the number of reconstructed generations, and therefore it can reconstruct pedigrees that have a large number of generations. Indeed, it is the first practical method for reconstruction of large pedigrees from genotype data. PMID:24093229

  20. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    Science.gov (United States)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the matrix-vector multiplication (MVM). In this paper we present and compare three novel iterative reconstruction methods. The first is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to tackle the problem of atmospheric reconstruction efficiently and accurately. The method is extremely fast, highly flexible and yields superior quality. The second is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography), and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the gradient-based method have been developed. We present a detailed comparison of our reconstructors, in terms of both quality and speed, in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting, on OCTOPUS, the ESO end-to-end simulation tool.

  1. A level set based algorithm to reconstruct the urinary bladder from multiple views.

    Science.gov (United States)

    Ma, Zhen; Jorge, Renato Natal; Mascarenhas, T; Tavares, João Manuel R S

    2013-12-01

    The urinary bladder can be visualized from different views by imaging facilities such as computerized tomography and magnetic resonance imaging. Multi-view imaging can present more details of this pelvic organ and contribute to a more reliable reconstruction. Based on the information from multi-view planes, a level set based algorithm is proposed to reconstruct the 3D shape of the bladder using the cross-sectional boundaries. The algorithm provides a flexible solution to handle the discrepancies from different view planes and can obtain an accurate bladder surface with more geometric details.

  2. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and phase to reconstruct the missing areas.
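
    The ER loop itself alternates between imposing the Fourier magnitude and re-imposing the known pixels. A bare-bones sketch, assuming the target magnitude has already been estimated (the brief's contribution, estimating that magnitude from similar known patches, happens before this loop):

        import numpy as np

        def error_reduction(mag, known_mask, patch, n_iter=100):
            # patch: initial patch with arbitrary values in the missing area;
            # known_mask: True where intensities are known; mag: target |FFT|.
            x = patch.copy()
            for _ in range(n_iter):
                X = np.fft.fft2(x)
                X = mag * np.exp(1j * np.angle(X))   # impose Fourier magnitude
                x = np.real(np.fft.ifft2(X))         # back to image domain
                x[known_mask] = patch[known_mask]    # impose known intensities
            return x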

  3. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward Steven

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPUs) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction. General Purpose GPU (GPGPU) computing is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize, since performance bottlenecks occur, such as memory latencies, that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU computing is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e., fast texture memory, pixel pipelines, hardware interpolators, and the varying memory hierarchy) that will allow for additional performance improvements.

  4. A new algorithm for EEG source reconstruction based on LORETA by contracting the source region

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A new method is presented for EEG source reconstruction based on multichannel surface EEG recordings. Starting from the low-resolution tomography obtained by the low-resolution electromagnetic tomography algorithm (LORETA), the method acquires a high-resolution source tomography by contracting the source region. In contrast to the focal underdetermined system solver (FOCUSS), this method can yield more accurate results under certain circumstances.

  5. A task-based comparison of two reconstruction algorithms for digital breast tomosynthesis

    Science.gov (United States)

    Mahadevan, Ravi; Ikejimba, Lynda C.; Lin, Yuan; Samei, Ehsan; Lo, Joseph Y.

    2014-03-01

    Digital breast tomosynthesis (DBT) generates 3-D reconstructions of the breast by taking X-ray projections at various angles around the breast. DBT improves cancer detection as it minimizes the tissue overlap present in traditional 2-D mammography. In this work, two methods of reconstruction, filtered backprojection (FBP) and Newton-Raphson iterative reconstruction, were used to create 3-D reconstructions from phantom images acquired on a breast tomosynthesis system. A task-based image analysis method was used to compare the performance of each reconstruction technique. The task simulated a 10 mm lesion within the breast containing iodine concentrations between 0.0 mg/ml and 8.6 mg/ml. The TTF was calculated using the reconstruction of an edge phantom, and the NPS was measured with a structured breast phantom (CIRS 020) over different exposure levels. The detectability index d' was calculated to assess the image quality of the reconstructed phantom images. Image quality was assessed for both conventional single-energy and dual-energy subtracted reconstructions, and the dose allocation between the high- and low-energy scans was also examined. Over the full range of dose allocations, the iterative reconstruction yielded a higher detectability index than FBP for single-energy reconstructions. For dual-energy subtraction, the detectability index was maximized when most of the dose was allocated to the high-energy image. With that dose allocation, the performance trend for the reconstruction algorithms reversed: FBP performed better than the corresponding iterative reconstruction. However, FBP performance varied erratically with changing dose allocation. Therefore, iterative reconstruction is preferred for both imaging modalities despite underperforming dual-energy FBP, as it provides stable results.
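
    One common way to turn a task function W, the TTF and the NPS into a detectability index is the non-prewhitening model observer; the record does not state which observer model the authors use, so the formula below is a generic illustration rather than their exact computation.

        import numpy as np

        def d_prime_npw(W, TTF, NPS, df):
            # Non-prewhitening observer on a discrete frequency grid:
            # d'^2 = [sum(W^2 TTF^2) df]^2 / [sum(W^2 TTF^2 NPS) df]
            s2 = (W * TTF) ** 2
            return np.sum(s2) * df / np.sqrt(np.sum(s2 * NPS) * df)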

  6. A Hierarchical NeuroBayes-based Algorithm for Full Reconstruction of B Mesons at B Factories

    CERN Document Server

    Feindt, Michael; Kreps, Michal; Kuhr, Thomas; Neubauer, Sebastian; Zander, Daniel; Zupanc, Anze

    2011-01-01

    We describe a new B-meson full reconstruction algorithm designed for the Belle experiment at the B-factory KEKB, an asymmetric e+e- collider. To maximize the number of reconstructed B decay channels, it utilizes a hierarchical reconstruction procedure and probabilistic calculus instead of classical selection cuts. The multivariate analysis package NeuroBayes was used extensively to hold the balance between highest possible efficiency, robustness and acceptable CPU time consumption. In total, 1042 exclusive decay channels were reconstructed, employing 71 neural networks altogether. Overall, we correctly reconstruct one B+/- or B0 candidate in 0.3% or 0.2% of the BBbar events, respectively. This is an improvement in efficiency by roughly a factor of 2, depending on the analysis considered, compared to the cut-based classical reconstruction algorithm used at Belle. The new framework also features the ability to choose the desired purity or efficiency of the fully reconstructed sample. If the same purity as for t...

  7. A Hierarchical Optimization Algorithm Based on GPU for Real-Time 3D Reconstruction

    Science.gov (United States)

    Lin, Jin-hua; Wang, Lu; Wang, Yan-jie

    2017-06-01

    In machine vision sensing systems, it is important to realize high-quality real-time 3D reconstruction of large-scale scenes. Recent online approaches perform well, but scaling up the reconstruction causes pose estimation drift, resulting in cumulative error that usually requires a large amount of off-line processing to correct completely, reducing reconstruction performance. In order to optimize the traditional volume fusion method and improve the old frame-to-frame pose estimation strategy, this paper presents a real-time CPU-to-GPU (Graphics Processing Unit) reconstruction system. Based on a robust camera pose estimation strategy, the algorithm fuses all the RGB-D input values into an effective hierarchical optimization framework and optimizes each frame according to the global camera pose, eliminating the serious dependence on tracking timeliness and continuously tracking globally optimized frames. The system estimates the global optimization of poses (bundling) in real time, supports robust tracking recovery (re-localization), and re-estimates large-scale 3D scenes to ensure global consistency. It uses a set of sparse corresponding features and geometric and ray matching functions in one parallel optimization system. The experimental results show that the average reconstruction time is 415 ms per frame, and the ICP pose is estimated 20 times in 100.0 ms. For large-scale 3D reconstruction scenes, the system performs well in online reconstruction while maintaining reconstruction accuracy.

  8. Beam hardening correction using iterative total variation (ITV)-based algorithm in CBCT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Sungchae; Huh, Young [Converged Medical Device Research Center, Advanced Medical Device Research Division, KERI, Gyeonggido 426-910 (Korea, Republic of)

    2015-07-01

    Recently, beam hardening reduction has become necessary to produce high-quality reconstructions in X-ray cone-beam computed tomography (CBCT) systems for medical applications. This paper introduces an iterative total variation (ITV) method for filtered backprojection reconstructions suffering from serious beam hardening problems. The Feldkamp-Davis-Kress (FDK) algorithm is the widely used reconstruction technique for CBCT systems; it is realized by weighting the projection data, filtering the projection images, and back-projecting the filtered projection data into the volume. However, the FDK algorithm suffers from beam hardening artifacts caused by the energy dependence of the X-ray attenuation coefficients. Recently, the total variation (TV) method for compressed sensing (CS) has been particularly useful in exploiting the prior knowledge of minimal variation in the X-ray attenuation characteristics across an object or the human body, but a practical implementation of this method still remains a challenge. The main problem is the iterative nature of solving the TV-based CS formulation, which generally requires multiple iterations of forward and backward projections of a large dataset in a clinically or industrially feasible time frame. In this paper, we propose applying the ITV method after FDK reconstruction to reduce the beam hardening artifacts, promoting the sparsity inherent in the X-ray attenuation characteristics.

  9. A hybrid ECT image reconstruction based on Tikhonov regularization theory and SIRT algorithm

    Science.gov (United States)

    Lei, Wang; Xiaotong, Du; Xiaoyin, Shao

    2007-07-01

    Electrical Capacitance Tomography (ECT) image reconstruction is a key problem that is not well solved due to the influence of the soft field in the ECT system. In this paper, a new hybrid ECT image reconstruction algorithm is proposed by combining Tikhonov regularization theory and the Simultaneous Iterative Reconstruction Technique (SIRT) algorithm. Tikhonov regularization theory is used to solve the ill-posed image reconstruction problem and obtain a stable initial reconstructed image in the region of the optimized solution set. SIRT is then used to improve the quality of the final reconstructed image. In order to satisfy the industrial requirement of real-time computation, the proposed algorithm is further modified to improve its speed. Test results show that the quality of the reconstructed image is better than that of the well-known Filtered Linear Back Projection (FLBP) algorithm, and the time consumption of the new algorithm is less than 0.1 second, which satisfies the online requirements.
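
    A compact sketch of the two-stage idea: a Tikhonov-regularized solve gives a stable initial image, and SIRT iterations then refine it. Here S, c, the regularization weight and the relaxation factor are generic placeholders for the ECT sensitivity matrix and normalized capacitance data, and the SIRT normalization assumes nonnegative sensitivities.

        import numpy as np

        def tikhonov_then_sirt(S, c, alpha=1e-3, n_iter=50, relax=0.5):
            m, n = S.shape
            # Stage 1: Tikhonov-regularized solution of the ill-posed problem.
            g = np.linalg.solve(S.T @ S + alpha * np.eye(n), S.T @ c)
            # Stage 2: SIRT refinement with row/column-sum normalization.
            row = np.maximum(S.sum(axis=1), 1e-12)
            col = np.maximum(S.sum(axis=0), 1e-12)
            for _ in range(n_iter):
                g += relax * (S.T @ ((c - S @ g) / row)) / col
                g = np.clip(g, 0.0, 1.0)   # normalized permittivity bounds
            return g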

  10. A hybrid ECT image reconstruction based on Tikhonov regularization theory and SIRT algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Wang Lei [School of Control Science and Engineering, Shandong University, 250061, Jinan (China); Du Xiaotong [School of Control Science and Engineering, Shandong University, 250061, Jinan (China); Shao Xiaoyin [Department of Manufacture Engineering and Engineering Management, City University of Hong Kong (China)

    2007-07-15

    Electrical Capacitance Tomography (ECT) image reconstruction is a key problem that is not well solved due to the influence of the soft field in the ECT system. In this paper, a new hybrid ECT image reconstruction algorithm is proposed by combining Tikhonov regularization theory and the Simultaneous Iterative Reconstruction Technique (SIRT) algorithm. Tikhonov regularization theory is used to solve the ill-posed image reconstruction problem and obtain a stable initial reconstructed image in the region of the optimized solution set. SIRT is then used to improve the quality of the final reconstructed image. In order to satisfy the industrial requirement of real-time computation, the proposed algorithm is further modified to improve its speed. Test results show that the quality of the reconstructed image is better than that of the well-known Filtered Linear Back Projection (FLBP) algorithm, and the time consumption of the new algorithm is less than 0.1 second, which satisfies the online requirements.

  11. BPF-based reconstruction algorithm for multiple rotation-translation scan mode

    Institute of Scientific and Technical Information of China (English)

    Ming Chen; Huitao Zhang; Peng Zhang

    2008-01-01

    In industrial CT, it is often required to inspect large objects using short line-detectors. To acquire complete CT data for the scanning slice of a large object using short line-detectors, several multi-scan modes have been developed. However, the existing methods for reconstructing an image from data scanned in multi-scan modes have to rebin the data into fan-beam or parallel-beam form via data interpolation. The rebinning process not only incurs a great computational cost but also degrades image resolution. In this paper, we propose a backprojection-filtration (BPF) based reconstruction algorithm for the rotation-translation (RT) multi-scan mode. An important feature of the proposed algorithm is that no data rebinning process is introduced. Simulation results have verified the validity of the proposed algorithm.

  12. Fast hybrid CPU- and GPU-based CT reconstruction algorithm using air skipping technique.

    Science.gov (United States)

    Lee, Byeonghun; Lee, Ho; Shin, Yeong Gil

    2010-01-01

    This paper presents a fast hybrid CPU- and GPU-based CT reconstruction algorithm that reduces the amount of back-projection operations using air skipping involving polygon clipping. The algorithm easily and rapidly selects air areas, which have significantly higher contrast in each projection image, by applying the K-means clustering method on the CPU, and then generates boundary tables for verifying the valid region using the segmented air areas. Based on these boundary tables for each projection image, a clipped polygon that indicates the active region for the back-projection operation performed on the GPU is determined on each volume slice. This polygon clipping process makes it possible to back-project a smaller number of voxels, which leads to a faster GPU-based reconstruction method. This approach has been applied to a clinical data set and Shepp-Logan phantom data sets having various ratios of air region for quantitative and qualitative comparison and analysis against conventional GPU-based reconstruction methods. The algorithm has been proved to halve computational time without losing any diagnostic information, compared to conventional GPU-based approaches.
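
    The air-selection step can be sketched with K-means on projection pixel values: two clusters, and the cluster with the lower mean intensity is taken as air and skipped during back-projection (the sign of the comparison may flip for other detector conventions). The synthetic intensities below are placeholders.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(5)
        proj = np.concatenate([rng.normal(0.05, 0.01, 7000),   # air pixels
                               rng.normal(0.60, 0.10, 3000)])  # object pixels
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(proj.reshape(-1, 1))
        air = km.labels_ == int(np.argmin(km.cluster_centers_))
        print("air fraction:", air.mean())   # pixels excluded from back-projection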

  13. Super-resolution reconstruction algorithm based on adaptive convolution kernel size selection

    Science.gov (United States)

    Gao, Hang; Chen, Qian; Sui, Xiubao; Zeng, Junjie; Zhao, Yao

    2016-09-01

    Restricted by detector technology and the optical diffraction limit, the spatial resolution of infrared imaging systems is difficult to improve significantly. Super-resolution (SR) reconstruction algorithms are an effective way to address this problem. Among them, SR algorithms based on multichannel blind deconvolution (MBD) estimate the convolution kernel only from the low-resolution observation images, using appropriate regularization constraints introduced by a priori assumptions, to realize high-resolution image restoration. Such algorithms have been shown to be effective when the channels are coprime. In this paper, we use significant edges to estimate the convolution kernel and introduce an adaptive mechanism for selecting the convolution kernel size, addressing the uncertainty of that size in MBD processing. To reduce the interference of noise, we amend the convolution kernel in an iterative process and finally restore a clear image. Experimental results show that the algorithm meets the convergence requirement of the convolution kernel estimation.

  14. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and consequently a large radiation dose, to produce high-quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas-Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessments and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known algorithms in the reconstruction process. An additional potential benefit of reducing the number of projections is a reduction of the time for motion artifacts to occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
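
    One of the two ingredients named above, the randomized Kaczmarz method, samples rows with probability proportional to their squared norm. A self-contained sketch follows; the Douglas-Rachford splitting and the total variation term are not reproduced.

        import numpy as np

        def randomized_kaczmarz(A, b, n_iter=5000, seed=0):
            rng = np.random.default_rng(seed)
            m, n = A.shape
            norms2 = np.einsum("ij,ij->i", A, A)
            prob = norms2 / norms2.sum()
            x = np.zeros(n)
            for _ in range(n_iter):
                i = rng.choice(m, p=prob)                 # row sampled ~ ||a_i||^2
                x += (b[i] - A[i] @ x) / norms2[i] * A[i]
            return x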

  15. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    Science.gov (United States)

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI to reduce scan time while preserving image quality. Variable splitting algorithms are the current state of the art for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of the associated convergence parameters is a commonly cited hindrance to their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction, since the associated Lipschitz constants are loose bounds on the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances, which are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
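
    The momentum-plus-adaptive-restart mechanism can be illustrated with a generic FISTA-style loop for l1-regularized least squares, using a gradient-based restart test; BARISTA's B1-based majorizing matrices are replaced here by a single Lipschitz constant, which is precisely the simplification the paper improves upon.

        import numpy as np

        def soft(x, tau):
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def fista_restart(A, b, lam, L, n_iter=300):
            # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with momentum + restart.
            x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
            for _ in range(n_iter):
                x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                if np.dot(z - x_new, x_new - x) > 0:   # adaptive restart test
                    t_new, z = 1.0, x_new.copy()
                else:
                    z = x_new + ((t - 1.0) / t_new) * (x_new - x)
                x, t = x_new, t_new
            return x

        # Toy usage on a random sparse problem.
        rng = np.random.default_rng(6)
        A = rng.standard_normal((80, 200)) / np.sqrt(80)
        x_true = np.zeros(200); x_true[rng.choice(200, 10, replace=False)] = 1.0
        b = A @ x_true
        L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
        x_hat = fista_restart(A, b, lam=0.01, L=L)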

  16. A perspective matrix-based seed reconstruction algorithm with applications to C-arm based intra-operative dosimetry

    Science.gov (United States)

    Narayanan, Sreeram; Cho, Paul S.

    2006-03-01

    Currently available seed reconstruction algorithms are based on the assumption that accurate information about the imaging geometry is known. This assumption is valid for isocentric x-ray units such as radiotherapy simulators. However, the large majority of clinics performing prostate brachytherapy today use C-arms, for which imaging parameters such as the source-to-axis distance, the image acquisition angles, and the central axis of the image are not accurately known. We propose a seed reconstruction algorithm that requires no such knowledge of the geometry. The new algorithm makes use of a perspective projection matrix, which can easily be derived from a set of known reference points. The perspective matrix describes the transformation of a point in 3D space to the imaging coordinate system. An accurate representation of the imaging geometry can be derived from the generalized projection matrix (GPM), which has eleven degrees of freedom. In this paper we show how the GPM can be derived given the theoretical minimum number of reference points, and we propose an algorithm to compute the line equation that defines the backprojection operation given the GPM. The approach can be extended to any ray-tracing based seed reconstruction algorithm. Reconstruction using the GPM does not require calibration of the C-arm, the images can be acquired at arbitrary angles, and the reconstruction is performed in near real time. Our simulations show that reconstruction using the GPM is robust and that its accuracy is independent of the source-to-detector distance and of the location of the reference points used to generate the GPM. Seed reconstruction from C-arm images acquired at unknown geometry provides a useful tool for intra-operative dosimetry in prostate brachytherapy.
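
    Estimating a projection matrix from reference points is the classic direct linear transform: each 3D-to-2D correspondence contributes two linear equations, and the 3x4 matrix (eleven degrees of freedom, defined up to scale) is the smallest-singular-vector solution of the stacked system. A sketch, assuming at least six reference points in general position; this is a generic DLT illustration, not necessarily the paper's exact derivation.

        import numpy as np

        def estimate_gpm(X, u):
            # X: (N, 3) reference points in 3D; u: (N, 2) measured pixels, N >= 6.
            rows = []
            for (x, y, z), (px, py) in zip(X, u):
                rows.append([x, y, z, 1, 0, 0, 0, 0, -px*x, -px*y, -px*z, -px])
                rows.append([0, 0, 0, 0, x, y, z, 1, -py*x, -py*y, -py*z, -py])
            _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
            return Vt[-1].reshape(3, 4)   # least-squares null vector, up to scale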

  17. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    Science.gov (United States)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce the spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution: an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r, E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was reconstructed very accurately as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of the iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  18. An image reconstruction algorithm of EIT based on pulmonary prior information

    Institute of Scientific and Technical Information of China (English)

    Huaxiang WANG; Li HU; Ling WANG; Lu LI

    2009-01-01

    Using a CT scan of pulmonary tissue and the structural properties of human lung tissue, a human pulmonary model is established in the software COMSOL. Combining this with the conductivity contributions of human tissues and organs, an image reconstruction method for electrical impedance tomography based on pulmonary prior information is proposed using the conjugate gradient method. Simulation results show that the uniformity index of the sensitivity distribution of the pulmonary model is 15.568, significantly reduced compared with 34.218 for the round field. The proposed algorithm improves the uniformity of the sensing field, the image resolution of the conductivity distribution of pulmonary tissue, and the quality of the reconstructed image.

  19. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization

    KAUST Repository

    Jin, Bangti

    2011-08-24

    This paper develops a novel sparse reconstruction algorithm for the electrical impedance tomography problem of determining a conductivity parameter from boundary measurements. The sparsity of the 'inhomogeneity' with respect to a certain basis is a priori assumed. The proposed approach is motivated by a Tikhonov functional incorporating a sparsity-promoting ℓ1-penalty term, and it allows us to obtain quantitative results when the assumption is valid. A novel iterative algorithm of soft shrinkage type is proposed. Numerical results for several two-dimensional problems with both single and multiple, convex and nonconvex inclusions are presented to illustrate the features of the proposed algorithm and are compared with a conventional approach based on smoothness regularization.

  20. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    Science.gov (United States)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the problem of nearest neighbor selection. Building on sparse-representation super-resolution image reconstruction, a super-resolution algorithm based on multi-class dictionaries is analyzed. This method avoids the redundancy of training a single overcomplete dictionary, makes the sub-dictionaries more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole reconstructed image. In addition, non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.

  1. Seamless texture mapping algorithm for image-based three-dimensional reconstruction

    Science.gov (United States)

    Liu, Jiapeng; Liu, Bin; Fang, Tao; Huo, Hong; Zhao, Yuming

    2016-09-01

    Texture information plays an important role in rendering true objects, especially with the wide application of image-based three-dimensional (3-D) reconstruction and 3-D laser scanning. This paper proposes a seamless texture mapping algorithm to achieve a high-quality visual effect for 3-D reconstruction. First, a series of image sets is produced by analyzing the visibility of triangular facets, and the image sets are clustered and segmented into a number of optimal reference texture patches. Second, the generated texture patches are sequenced to create a rough texture map, and a weighting process is then adopted to reduce the color discrepancies between adjacent patches. Finally, a multiresolution decomposition and fusion technique is used to generate the transition section and eliminate the boundary effect. Experiments show that the proposed algorithm is effective and practical for obtaining high-quality 3-D texture mapping for 3-D reconstruction. Compared with traditional methods, it maintains texture clarity while eliminating color seams; in addition, it also supports 3-D texture mapping for big-data applications.

  2. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    Science.gov (United States)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. The new algorithm uses analytic methods for the resolution of the forward problem and the assembly of the sensitivity matrix. We validate the new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and afterwards real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.

  3. Layout Optimization of Sensor-based Reconstruction of Explosion Overpressure Field Based on the Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Miaomiao Bai

    2014-11-01

    In underwater blasting experiments, the layout of the sensors has always been a major concern. From the perspective of reconstructing the explosion overpressure field, this paper presents four indicators which, combined with a genetic algorithm with global search, can produce an optimal sensor layout scheme and guide sensor layout in practical experiments. A multi-scale model of every subregion of the underwater blasting field was established for simulation experiments. Using Matlab, the variation of these four indicators with different sensor layouts, and the resulting reconstruction accuracy, are analyzed and discussed. Finally, the analysis and comparison of the simulation results show that the program can produce a better sensor layout: it requires fewer sensors while obtaining good results with high accuracy. In actual test explosions, this scheme can serve as a reference for laying out sensors.
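
    A toy version of the layout search described above: layouts are binary masks over candidate sensor positions, and a genetic algorithm with truncation selection, single-point crossover and bit-flip mutation maximizes a user-supplied fitness. The fitness combining the paper's four indicators is left as a hypothetical callback.

        import numpy as np

        def ga_layout(fitness, n_pos, pop=40, gens=100, p_mut=0.05, seed=0):
            # fitness: callable mapping a boolean layout mask to a score.
            rng = np.random.default_rng(seed)
            P = rng.random((pop, n_pos)) < 0.2              # initial random layouts
            for _ in range(gens):
                scores = np.array([fitness(ind) for ind in P])
                parents = P[np.argsort(scores)[-pop // 2:]]  # truncation selection
                kids = []
                while len(kids) < pop - len(parents):
                    a, b = parents[rng.integers(0, len(parents), 2)]
                    cut = int(rng.integers(1, n_pos))        # single-point crossover
                    child = np.concatenate([a[:cut], b[cut:]])
                    flip = rng.random(n_pos) < p_mut         # bit-flip mutation
                    child[flip] = ~child[flip]
                    kids.append(child)
                P = np.vstack([parents, kids])
            scores = np.array([fitness(ind) for ind in P])
            return P[np.argmax(scores)]

        # Hypothetical fitness: prefer layouts with about 8 active sensors.
        best = ga_layout(lambda mask: -abs(int(mask.sum()) - 8), n_pos=50)
        print("sensors used:", int(best.sum()))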

  4. A compressed sensing-based iterative algorithm for CT reconstruction and its possible application to phase contrast imaging

    Directory of Open Access Journals (Sweden)

    Li Xueli

    2011-08-01

    Full Text Available Abstract Background Computed Tomography (CT) is a technology that obtains the tomogram of the observed objects. In real-world applications, especially biomedical applications, lower radiation doses have been constantly pursued. To shorten scanning time and reduce radiation dose, one can decrease the X-ray exposure time at each projection view or decrease the number of projections. Until quite recently, the traditional filtered back projection (FBP) method has been commonly exploited in CT image reconstruction. Applying the FBP method requires a large amount of projection data. Especially when the exposure speed is limited by the mechanical characteristics of the imaging facilities, using the FBP method may prolong scanning time and accumulate a high dose of radiation, consequently damaging the biological specimens. Methods In this paper, we present a compressed sensing-based (CS-based) iterative algorithm for CT reconstruction. The algorithm minimizes the l1-norm of the sparse image as the constraint factor for the iteration procedure. With this method, we can reconstruct images from substantially reduced projection data and reduce the impact of artifacts introduced into the CT reconstructed image by insufficient projection information. Results To validate and evaluate the performance of this CS-based iterative algorithm, we carried out quantitative evaluation studies in imaging of both a software Shepp-Logan phantom and a real polystyrene sample. The former is completely absorption based and the latter is imaged in phase contrast. The results show that the CS-based iterative algorithm can yield images with quality comparable to that obtained with existing FBP and traditional algebraic reconstruction technique (ART) algorithms. Discussion Compared with the common reconstruction from 180 projection images, this algorithm completes CT reconstruction from only 60 projection images, cuts the scan time, and maintains acceptable quality of the reconstructed images.
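    As a concrete, hedged illustration of the l1-minimizing iteration the Methods paragraph describes, the sketch below runs plain ISTA on a random toy system matrix standing in for the CT projector; it is not the authors' code, and all sizes and parameters are illustrative.

```python
# Minimal ISTA sketch for l1-regularized reconstruction:
#   min_x 0.5*||A x - y||^2 + lam*||x||_1
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of grad
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 180))           # 60 "projections", 180-pixel image
x_true = np.zeros(180); x_true[rng.choice(180, 10, replace=False)] = 1.0
x_rec = ista(A, A @ x_true, lam=0.05)
```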

  5. Image reconstruction algorithms for electrical capacitance tomography based on ROF model using new numerical techniques

    Science.gov (United States)

    Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi

    2017-03-01

    Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object while tolerating noisy data, a Rudin–Osher–Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, simplified augmented Lagrangian (SAL) and accelerated alternating direction method of multipliers (AADMM), are introduced to address the above-mentioned problems in ECT. The effects of the parameters and the number of iterations for the different algorithms, as well as of the noise level in the capacitance data, are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, compared to the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise, and that the AADMM algorithm outperforms the others in identifying the object from its background.
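    For readers unfamiliar with the ROF model, the following sketch shows a generic ADMM iteration for a 1-D ROF problem. It illustrates the kind of variable splitting the SAL and AADMM methods build on, but it is not the paper's algorithm; all names and sizes are illustrative.

```python
# ADMM sketch for a 1-D ROF model, min_u 0.5*||u - f||^2 + mu*||D u||_1,
# with D a forward-difference operator. Splitting z = D u gives cheap steps.
import numpy as np

def admm_rof_1d(f, mu=1.0, rho=2.0, n_iter=100):
    n = f.size
    D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]    # forward differences
    z = np.zeros(n - 1); w = np.zeros(n - 1)
    M = np.eye(n) + rho * D.T @ D                       # u-step normal matrix
    for _ in range(n_iter):
        u = np.linalg.solve(M, f + rho * D.T @ (z - w)) # quadratic u-update
        v = D @ u + w
        z = np.sign(v) * np.maximum(np.abs(v) - mu / rho, 0)  # shrinkage
        w += D @ u - z                                  # dual ascent
    return u

f = np.r_[np.zeros(50), np.ones(50)] + \
    0.2 * np.random.default_rng(1).standard_normal(100)
u = admm_rof_1d(f, mu=1.0)   # recovers the step edge while smoothing noise
```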

  6. A Line-Based 3D Roof Model Reconstruction Algorithm: TIN-Merging and Reshaping (TMR)

    Science.gov (United States)

    Rau, J.-Y.

    2012-07-01

    Three-dimensional building models are one of the major components of a cyber-city and are vital for the realization of 3D GIS applications. In the last decade, airborne laser scanning (ALS) data has been widely used for 3D building model reconstruction and object extraction. Instead, based on 3D roof structural lines, this paper presents a novel algorithm for automatic roof model reconstruction. A line-based roof model reconstruction algorithm, called TIN-Merging and Reshaping (TMR), is proposed. The roof structural lines, such as edges, eaves and ridges, can be measured manually from aerial stereo-pairs, derived by feature line matching or inferred from ALS data. The originality of the TMR algorithm for 3D roof modelling is to perform geometric analysis and topology reconstruction among those unstructured lines and then reshape the roof-type using elevation information from the 3D structural lines. For topology reconstruction, a line-constrained Delaunay triangulation algorithm is adopted, where the input structural lines act as constraints and their vertices act as input points. Thus, the constructed TINs will not cross the structural lines. Later, at the Merging stage, each edge shared between two TINs is checked to see whether an original structural line exists there. If not, the two TINs are merged into a polygon. Iterative checking and merging of any two neighbouring TINs/polygons results in roof polygons on the horizontal plane. Finally, at the Reshaping stage, any two structural lines with fixed height are used to adjust a planar function for the whole roof polygon. In case ALS data exist, the Reshaping stage can be simplified by adjusting the point cloud within the roof polygon. The proposed scheme reduces the complexity of 3D roof modelling and makes the modelling process easier. Five test datasets provided by ISPRS WG III/4, located in downtown Toronto, Canada and Vaihingen, Germany, are used for the experiments. The test sites cover high-rise buildings and residential areas.
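    The Merging stage can be pictured as a union-find pass over triangles: merge neighbours whenever the shared edge is not an original structural line. The sketch below is a hypothetical illustration of that rule (data layout assumed, not taken from the paper).

```python
# Merge triangles from a constrained triangulation into roof polygons.
def merge_tins(triangles, structural_edges):
    """triangles: list of 3-tuples of vertex ids; structural_edges: set of
    frozenset({v1, v2}). Returns one label per triangle; equal labels form
    one roof polygon."""
    parent = list(range(len(triangles)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    edge_owner = {}
    for t, (a, b, c) in enumerate(triangles):
        for e in (frozenset((a, b)), frozenset((b, c)), frozenset((a, c))):
            if e in structural_edges:
                continue                    # constraint edge: never merge across
            if e in edge_owner:             # second triangle on this edge
                parent[find(t)] = find(edge_owner[e])
            else:
                edge_owner[e] = t
    return [find(t) for t in range(len(triangles))]

tris = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
labels = merge_tins(tris, structural_edges={frozenset((2, 3))})
# tris 0 and 1 merge (shared edge {1,2} is not structural); tri 2 stays apart
```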

  7. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V): A Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Science.gov (United States)

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    2017-08-12

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < …). ASIR-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series, ASIR-V 60%, with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < …). ASIR-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.

  8. Statistical and systematic uncertainties in pixel-based source reconstruction algorithms for gravitational lensing

    CERN Document Server

    Tagore, Amitpal

    2014-01-01

    Gravitational lens modeling of spatially resolved sources is a challenging inverse problem with many observational constraints and model parameters. We examine established pixel-based source reconstruction algorithms for de-lensing the source and constraining lens model parameters. Using test data for four canonical lens configurations, we explore statistical and systematic uncertainties associated with gridding, source regularisation, interpolation errors, noise, and telescope pointing. Specifically, we compare two gridding schemes in the source plane: a fully adaptive grid that follows the lens mapping but is irregular, and an adaptive Cartesian grid. We also consider regularisation schemes that minimise derivatives of the source (using two finite difference methods) and introduce a scheme that minimises deviations from an analytic source profile. Careful choice of gridding and regularisation can reduce "discreteness noise" in the $\\chi^2$ surface that is inherent in the pixel-based methodology. With a grid...

  9. Research on reconstruction algorithms for 2D temperature field based on TDLAS

    Science.gov (United States)

    Peng, Dong; Jin, Yi; Zhai, Chao

    2015-10-01

    Tunable Diode Laser Absorption Tomography (TDLAT), a promising technique which combines Tunable Diode Laser Absorption Spectroscopy (TDLAS) and computed tomography, has shown the advantage of high spatial resolution for temperature measurement. Given the large number of tomography algorithms, it is necessary to understand their features and find suitable ones for a specific experiment. This paper examines two different algorithms, the algebraic reconstruction technique (ART) and simulated annealing (SA), both implemented in Matlab. Reconstruction simulations of unimodal and bimodal temperature phantoms were performed under different conditions, and the results were analyzed. They show that for the unimodal temperature phantom both algorithms work well: the reconstruction quality is acceptable under suitable conditions, and the ART result is better. For the bimodal temperature phantom, however, the SA result is much better. More specifically, the reconstruction quality of ART is mainly affected by the ray coverage; the maximum deviation for the unimodal temperature phantom is 5.9%, while for the bimodal temperature field it is up to 25%. The reconstruction quality of SA is mainly affected by the number of transitions; the maximum deviation for the unimodal temperature phantom is 9.2% when 6 transitions are used, which is slightly worse than the ART result. However, the maximum deviation for the bimodal temperature phantom is much better than ART's, at about 5.2% when 6 transitions are used.
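    As a reference point for the ART results quoted above, here is a minimal Kaczmarz-style ART loop on a toy linear system; a generic matrix A replaces the TDLAT forward model, and the relaxation factor is illustrative.

```python
# Kaczmarz-style ART: sweep over rays, projecting x onto each ray equation.
import numpy as np

def art(A, b, n_sweeps=20, relax=0.5):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            denom = ai @ ai
            if denom > 0:
                # move x toward the hyperplane a_i . x = b_i
                x += relax * (b[i] - ai @ x) / denom * ai
    return x

rng = np.random.default_rng(0)
A = rng.random((120, 60))        # 120 "rays" through a 60-cell field
x_true = rng.random(60)
x_rec = art(A, A @ x_true)
```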

  10. Reconstruction of Gene Regulatory Networks Based on Two-Stage Bayesian Network Structure Learning Algorithm

    Institute of Scientific and Technical Information of China (English)

    Gui-xia Liu; Wei Feng; Han Wang; Lei Liu; Chun-guang Zhou

    2009-01-01

    In the post-genomic biology era, the reconstruction of gene regulatory networks from microarray gene expression data is very important for understanding the underlying biological system, and it has been a challenging task in bioinformatics. The Bayesian network model has been used in reconstructing gene regulatory networks for its advantages, but how to determine the network structure and parameters still needs to be explored. This paper proposes a two-stage structure learning algorithm which integrates an immune evolution algorithm to build a Bayesian network. The new algorithm is evaluated using both simulated and yeast cell cycle data. The experimental results indicate that the proposed algorithm can find many of the known real regulatory relationships from the literature and predict unknown ones with high validity and accuracy.

  11. Towards clinical application of a Laplace operator-based region of interest reconstruction algorithm in C-arm CT.

    Science.gov (United States)

    Xia, Yan; Hofmann, Hannes; Dennerlein, Frank; Mueller, Kerstin; Schwemmer, Chris; Bauer, Sebastian; Chintalapani, Gouthami; Chinnadurai, Ponraj; Hornegger, Joachim; Maier, Andreas

    2014-03-01

    It is known that a reduction of the field-of-view in 3-D X-ray imaging is proportional to a reduction in radiation dose. The resulting truncation, however, is incompatible with conventional reconstruction algorithms. Recently, a novel method for region of interest reconstruction that uses neither prior knowledge nor extrapolation has been published, named approximated truncation robust algorithm for computed tomography (ATRACT). It is based on a decomposition of the standard ramp filter into a 2-D Laplace filtering and a 2-D Radon-based residual filtering step. In this paper, we present two variants of the original ATRACT. One is based on expressing the residual filter as an efficient 2-D convolution with an analytically derived kernel. The second variant is to apply ATRACT in 1-D to further reduce computational complexity. The proposed algorithms were evaluated by using a reconstruction benchmark, as well as two clinical data sets. The results are encouraging since the proposed algorithms achieve a speed-up factor of up to 245 compared to the 2-D Radon-based ATRACT. Reconstructions of high accuracy are obtained; e.g., even real-data reconstructions in the presence of severe truncation achieve a relative root mean square error of as little as 0.92% with respect to nontruncated data.

  12. A Solution to Reconstruct Cross-Cut Shredded Text Documents Based on Character Recognition and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Hedong Xu

    2014-01-01

    Full Text Available The reconstruction of destroyed paper documents has attracted increasing interest in recent years. The topic is relevant to the fields of forensics, investigative sciences, and archeology. Previous research and analysis on the reconstruction of cross-cut shredded text documents (RCCSTD) are mainly based on likelihood and traditional heuristic algorithms. In this paper, a feature-matching algorithm based on character recognition, via establishing a database of the letters, is presented; it reconstructs the shredded document by row clustering, intrarow splicing, and interrow splicing. Row clustering is executed through a clustering algorithm according to the clustering vectors of the fragments. Intrarow splicing, regarded as a travelling salesman problem, is solved by an improved genetic algorithm. Finally, the document is reconstructed by interrow splicing according to the line spacing and the proximity of the fragments. Computational experiments suggest that the presented algorithm is of high precision and efficiency, and that it may be useful for different sizes of cross-cut shredded text documents.
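    To make the intrarow-splicing step concrete, the toy genetic algorithm below evolves a permutation of fragment ids under a generic edge-matching cost. Operators and constants are illustrative placeholders, not the paper's improved GA.

```python
# Toy GA for fragment ordering, treated as a travelling-salesman problem:
# individuals are permutations; fitness is the summed adjacent-edge cost.
import random

def ga_order(cost, pop_size=60, n_gen=300, p_mut=0.2):
    n = len(cost)
    def score(perm):   # lower is better: dissimilarity of adjacent fragments
        return sum(cost[perm[i]][perm[i + 1]] for i in range(n - 1))
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=score)
        keep = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(keep) + len(children) < pop_size:
            a, b = random.sample(keep, 2)
            i, j = sorted(random.sample(range(n), 2))
            chunk = a[i:j]                   # order crossover (OX)
            rest = [g for g in b if g not in chunk]
            child = rest[:i] + chunk + rest[i:]
            if random.random() < p_mut:      # swap mutation
                u, v = random.sample(range(n), 2)
                child[u], child[v] = child[v], child[u]
            children.append(child)
        pop = keep + children
    return min(pop, key=score)

rng = random.Random(0)
n = 12
cost = [[abs(i - j) + rng.random() for j in range(n)] for i in range(n)]
order = ga_order(cost)
```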

  13. Progress Implementing a Model-Based Iterative Reconstruction Algorithm for Ultrasound Imaging of Thick Concrete

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Johnson, Christi R [ORNL; Clayton, Dwight A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2017-01-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.

  14. Progress implementing a model-based iterative reconstruction algorithm for ultrasound imaging of thick concrete

    Science.gov (United States)

    Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2017-02-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
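    The MBIR recipe described above (a measurement-confidence-weighted data term plus a prior, minimized as a single cost) can be sketched generically as follows. Here A, W and the roughness prior are stand-ins for the paper's forward and prior models, which are not public; all sizes and constants are illustrative.

```python
# Schematic MBIR: cost(x) = 0.5*(y - A x)^T W (y - A x) + 0.5*lam*||G x||^2,
# minimized by plain gradient descent.
import numpy as np

def mbir(A, y, W, lam=0.1, step=1e-3, n_iter=500):
    n = A.shape[1]
    G = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]   # roughness penalty
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = -A.T @ (W @ (y - A @ x)) + lam * G.T @ (G @ x)
        x -= step * grad
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 40)); W = np.eye(80)      # toy forward model
x_true = np.sin(np.linspace(0, np.pi, 40))
x_rec = mbir(A, A @ x_true, W)
```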

  15. An image reconstruction algorithm for electrical capacitance tomography based on simulated annealing particle swarm optimization

    Directory of Open Access Journals (Sweden)

    P. Wang

    2015-04-01

    Full Text Available In this paper, we introduce a novel image reconstruction algorithm with Least Squares Support Vector Machines (LS-SVM) and Simulated Annealing Particle Swarm Optimization (APSO), named SAP. This algorithm introduces simulated annealing ideas into Particle Swarm Optimization (PSO): it adopts cooling process functions to replace the inertia weight function and constructs a time-variant inertia weight function featuring an annealing mechanism. Meanwhile, it employs the APSO procedure to search for the optimized solution of Electrical Capacitance Tomography (ECT) image reconstruction. In order to overcome the soft field characteristics of the ECT sensitivity field, image samples with typical flow patterns are chosen for training with LS-SVM. During the training procedure, the capacitance error caused by the soft field characteristics is predicted and then used to construct the fitness function of the particle swarm optimization. Experimental results demonstrate that the proposed SAP algorithm has a quick convergence rate. Moreover, SAP outperforms the classic Landweber algorithm and the Newton-Raphson algorithm in image reconstruction.
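    The core SAP idea, replacing PSO's fixed inertia weight with a cooling schedule, can be sketched as below. The sphere function stands in for the LS-SVM-derived ECT fitness, and every constant is an illustrative assumption.

```python
# Toy PSO with an annealing-style, time-decaying inertia weight.
import numpy as np

def sap_like_pso(fitness, dim=8, n_particles=30, n_iter=200, seed=5):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(fitness, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for t in range(n_iter):
        w = 0.9 * np.exp(-3.0 * t / n_iter)        # "cooling" inertia weight
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(fitness, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

best = sap_like_pso(lambda z: np.sum(z ** 2))      # converges toward zero
```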

  16. Prostate tissue decomposition via DECT using the model based iterative image reconstruction algorithm DIRA

    Science.gov (United States)

    Malusek, Alexandr; Magnusson, Maria; Sandborg, Michael; Westin, Robin; Alm Carlsson, Gudrun

    2014-03-01

    Better knowledge of the elemental composition of patient tissues may improve the accuracy of absorbed dose delivery in brachytherapy. Deficiencies of water-based protocols have been recognized, and work is ongoing to implement patient-specific radiation treatment protocols. A model based iterative image reconstruction algorithm, DIRA, has been developed by the authors to automatically decompose patient tissues into two or three base components via dual-energy computed tomography. Performance of an updated version of DIRA was evaluated for the determination of prostate calcification. A computer simulation using an anthropomorphic phantom showed that the mass fraction of calcium in the prostate tissue was determined with an accuracy better than 9%. The calculated mass fraction was little affected by the choice of the material triplet for the surrounding soft tissue. Relative differences between true and approximated values of the linear attenuation coefficient and mass energy absorption coefficient for the prostate tissue were less than 6% for photon energies from 1 keV to 2 MeV. The results indicate that DIRA has the potential to improve the accuracy of dose delivery in brachytherapy despite the fact that base material triplets only approximate surrounding soft tissues.

  17. A 3D reconstruction algorithm for magneto-acoustic tomography with magnetic induction based on ultrasound transducer characteristics

    Science.gov (United States)

    Ma, Ren; Zhou, Xiaoqing; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-12-01

    In this study we present a three-dimensional (3D) reconstruction algorithm for magneto-acoustic tomography with magnetic induction (MAT-MI) based on the characteristics of the ultrasound transducer. The algorithm is investigated to solve the blur problem of the MAT-MI acoustic source image, which is caused by the ultrasound transducer and the scanning geometry. First, we established a transducer model matrix using measured data from the real transducer. With reference to the S-L model used in computed tomography algorithms, a 3D phantom model of electrical conductivity is set up. Both sphere-scanning and cylinder-scanning geometries are adopted in the computer simulation. Then, using finite element analysis, the distributions of the eddy current and the acoustic source, as well as the acoustic pressure, are obtained with the transducer model matrix. Next, using singular value decomposition, the inverse transducer model matrix together with the reconstruction algorithm are worked out. The acoustic source and the conductivity images are reconstructed using the proposed algorithm. Comparisons between an ideal point transducer and the realistic transducer are made to evaluate the algorithms. Finally, an experiment is performed using a graphite phantom. We found that images of the acoustic source reconstructed using the proposed algorithm match the source better than those using the previous one; the correlation coefficient is 98.49% for the sphere-scanning geometry and 94.96% for the cylinder-scanning geometry. Comparison between the ideal point transducer and the realistic transducer shows correlation coefficients of 90.2% in the sphere-scanning geometry and 86.35% in the cylinder-scanning geometry. The reconstruction of the graphite phantom experiment also shows a higher resolution using the proposed algorithm. We conclude that the proposed reconstruction algorithm, which considers the characteristics of the transducer, can obviously improve the resolution of the reconstructed images.

  18. Use of a ray-based reconstruction algorithm to accurately quantify preclinical microSPECT images

    OpenAIRE

    Bert Vandeghinste; Roel Van Holen; Christian Vanhove; Filip De Vos; Stefaan Vandenberghe; Steven Staelens

    2014-01-01

    This work aimed to measure the in vivo quantification errors obtained when ray-based iterative reconstruction is used in micro-single-photon emission computed tomography (SPECT). This was investigated with an extensive phantom-based evaluation and two typical in vivo studies using 99mTc and 111In, measured on a commercially available cadmium zinc telluride (CZT)-based small-animal scanner. Iterative reconstruction was implemented on the GPU using ray tracing, including (1) scatter correcti...

  19. Nonlinear multifunctional sensor signal reconstruction based on least squares support vector machines and total least squares algorithm

    Institute of Scientific and Technical Information of China (English)

    Xin LIU; Guo WEI; Jin-wei SUN; Dan LIU

    2009-01-01

    Least squares support vector machines (LS-SVMs) are modified support vector machines (SVMs) that involve equality constraints and work with a least squares cost function, which simplifies the optimization procedure. In this paper, a novel training algorithm based on total least squares (TLS) for an LS-SVM is presented and applied to multifunctional sensor signal reconstruction. For three different nonlinearities of a multifunctional sensor model, the reconstruction accuracies of the input signals are 0.00136%, 0.03184% and 0.50480%, respectively. The experimental results demonstrate the higher reliability and accuracy of the proposed method for multifunctional sensor signal reconstruction compared with the original LS-SVM training algorithm, and verify the feasibility and stability of the proposed method.
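    Total least squares itself has a classic closed-form solution via the SVD, which is the ingredient added to LS-SVM training here. A minimal sketch on toy data (not the paper's sensor model) follows; unlike ordinary least squares, TLS allows errors in both the regressor matrix A and the observations b.

```python
# Classic SVD solution of total least squares for A x ≈ b.
import numpy as np

def tls(A, b):
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]              # right singular vector of smallest singular value
    return -v[:n] / v[n]    # TLS estimate of x

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 3)) + 0.01 * rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(tls(A, b))            # close to x_true despite noise in A and b
```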

  20. A reconstruction algorithm based on topological gradient for an inverse problem related to a semilinear elliptic boundary value problem

    Science.gov (United States)

    Beretta, Elena; Manzoni, Andrea; Ratti, Luca

    2017-03-01

    In this paper we develop a reconstruction algorithm for the solution of an inverse boundary value problem dealing with a semilinear elliptic partial differential equation of interest in cardiac electrophysiology. The goal is the detection of small inhomogeneities located inside a domain Ω, where the coefficients of the equation are altered, starting from observations of the solution of the equation on the boundary ∂Ω. Exploiting theoretical results recently achieved in [13], we implement a reconstruction procedure based on the computation of the topological gradient of a suitable cost functional. Numerical results obtained for several test cases finally assess the feasibility and the accuracy of the proposed technique.

  1. Electron bunch profile reconstruction based on phase-constrained iterative algorithm

    Science.gov (United States)

    Bakkali Taheri, F.; Konoplev, I. V.; Doucas, G.; Baddoo, P.; Bartolini, R.; Cowley, J.; Hooker, S. M.

    2016-03-01

    The phase retrieval problem occurs in a number of areas in physics and is the subject of continuing investigation. The one-dimensional case, e.g., the reconstruction of the temporal profile of a charged particle bunch, is particularly challenging and important for particle accelerators. Accurate knowledge of the longitudinal (time) profile of the bunch is important in the context of linear colliders, wakefield accelerators and for the next generation of light sources, including x-ray SASE FELs. Frequently applied methods, e.g., minimal phase retrieval or other iterative algorithms, are reliable if the Blaschke phase contribution is negligible. This, however, is neither known a priori nor can it be assumed to apply to an arbitrary bunch profile. We present a novel approach which gives reproducible, most-probable and stable reconstructions for bunch profiles (both artificial and experimental) that would otherwise remain unresolved by the existing techniques.
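    For context, the baseline family of methods the paper improves on looks like the error-reduction loop below: alternate between imposing the measured spectral magnitude and the time-domain constraints (real, nonnegative, supported). This is a generic sketch with toy data, not the authors' phase-constrained algorithm.

```python
# Generic error-reduction (Gerchberg-Saxton-type) loop for 1-D profile
# retrieval from a spectral magnitude.
import numpy as np

def retrieve_profile(mag, support, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, mag.size)
    for _ in range(n_iter):
        g = np.fft.ifft(mag * np.exp(1j * phase)).real   # impose |spectrum|
        g = np.clip(g, 0.0, None) * support              # real, >=0, supported
        phase = np.angle(np.fft.fft(g))                  # keep only the phase
    return g

n = 256
x = np.exp(-0.5 * ((np.arange(n) - 128) / 10.0) ** 2)    # toy bunch profile
support = (np.arange(n) > 64) & (np.arange(n) < 192)
rec = retrieve_profile(np.abs(np.fft.fft(x)), support)
```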

  2. A new wavelet-based reconstruction algorithm for twin image removal in digital in-line holography

    Science.gov (United States)

    Hattay, Jamel; Belaid, Samir; Aguili, Taoufik; Lebrun, Denis

    2016-07-01

    Two original methods are proposed here for digital in-line hologram processing. First, we propose an entropy-based method to retrieve the focus plane, which is very useful for digital hologram reconstruction. Second, we introduce a new approach to remove the so-called twin image that arises in hologram reconstruction. This is achieved by means of the Blind Source Separation (BSS) technique. The proposed method is made up of two steps: an Adaptive Quincunx Lifting Scheme (AQLS) and a statistical unmixing algorithm. The AQLS tool is based on the wavelet packet transform, whose role is to maximize the sparseness of the input holograms. The unmixing algorithm uses the Independent Component Analysis (ICA) tool. Experimental results confirm the ability of convolutive blind source separation to discard the unwanted twin image from in-line digital holograms.
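    The statistical unmixing step can be illustrated with scikit-learn's FastICA on toy one-dimensional mixtures; the wavelet sparsification (AQLS) stage is omitted, and the signals below are placeholders rather than holographic data.

```python
# ICA-based unmixing sketch: two observations act as mixtures of an "object"
# component and a "twin" component; FastICA separates them.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8 * np.pi, 1000)
s_obj = np.sin(t)                            # stand-in object signal
s_twin = np.sign(np.sin(0.625 * t))          # stand-in twin-image signal
S = np.c_[s_obj, s_twin]
M = np.array([[1.0, 0.6], [0.7, 1.0]])       # unknown mixing matrix
X = S @ M.T                                  # observed mixtures

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)             # columns ≈ unmixed components
```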

  3. Advanced reconstruction algorithms for electron tomography: From comparison to combination

    Energy Technology Data Exchange (ETDEWEB)

    Goris, B. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Roelandts, T. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Batenburg, K.J. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Heidari Mezerji, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Bals, S., E-mail: sara.bals@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)

    2013-04-15

    In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared, and their advantages and disadvantages are discussed. Furthermore, we describe how the result of a three dimensional (3D) reconstruction based on TVM can provide objective information that is needed as the input for a DART reconstruction. This approach results in a tomographic reconstruction of which the segmentation is carried out in an objective manner. Highlights: a comparative study between different reconstruction algorithms for tomography is performed; reconstruction algorithms that use prior knowledge about the specimen give superior results; one reconstruction algorithm can provide the prior knowledge for a second algorithm.

  4. A comparative study based on image quality and clinical task performance for CT reconstruction algorithms in radiotherapy.

    Science.gov (United States)

    Li, Hua; Dolly, Steven; Chen, Hsin-Chen; Anastasio, Mark A; Low, Daniel A; Li, Harold H; Michalski, Jeff M; Thorstad, Wade L; Gay, Hiram; Mutic, Sasa

    2016-07-01

    CT image reconstruction is typically evaluated based on the ability to reduce the radiation dose to as-low-as-reasonably-achievable (ALARA) while maintaining acceptable image quality. However, the determination of common image quality metrics, such as noise, contrast, and contrast-to-noise ratio, is often insufficient for describing clinical radiotherapy task performance. In this study we designed and implemented a new comparative analysis method associating image quality, radiation dose, and patient size with radiotherapy task performance, with the purpose of guiding the clinical radiotherapy usage of CT reconstruction algorithms. The iDose4 iterative reconstruction algorithm was selected as the target for comparison, wherein filtered back-projection (FBP) reconstruction was regarded as the baseline. Both phantom and patient images were analyzed. A layer-adjustable anthropomorphic pelvis phantom capable of mimicking 38-58 cm lateral diameter-sized patients was imaged and reconstructed by the FBP and iDose4 algorithms with varying noise-reduction-levels, respectively. The resulting image sets were quantitatively assessed by two image quality indices, noise and contrast-to-noise ratio, and two clinical task-based indices, target CT Hounsfield number (for electron density determination) and structure contouring accuracy (for dose-volume calculations). Additionally, CT images of 34 patients reconstructed with iDose4 with six noise reduction levels were qualitatively evaluated by two radiation oncologists using a five-point scoring mechanism. For the phantom experiments, iDose4 achieved noise reduction up to 66.1% and CNR improvement up to 53.2%, compared to FBP without considering the changes of spatial resolution among images and the clinical acceptance of reconstructed images. Such improvements consistently appeared across different iDose4 noise reduction levels, exhibiting limited interlevel noise (<5 HU) and target CT number variations (<1 HU). The radiation

  5. A comparative study based on image quality and clinical task performance for CT reconstruction algorithms in radiotherapy.

    Science.gov (United States)

    Li, Hua; Dolly, Steven; Chen, Hsin-Chen; Anastasio, Mark A; Low, Daniel A; Li, Harold H; Michalski, Jeff M; Thorstad, Wade L; Gay, Hiram; Mutic, Sasa

    2016-07-08

    CT image reconstruction is typically evaluated based on the ability to reduce the radiation dose to as-low-as-reasonably-achievable (ALARA) while maintaining acceptable image quality. However, the determination of common image quality metrics, such as noise, contrast, and contrast-to-noise ratio, is often insufficient for describing clinical radiotherapy task performance. In this study we designed and implemented a new comparative analysis method associating image quality, radiation dose, and patient size with radiotherapy task performance, with the purpose of guiding the clinical radiotherapy usage of CT reconstruction algorithms. The iDose4 iterative reconstruction algorithm was selected as the target for comparison, wherein filtered back-projection (FBP) reconstruction was regarded as the baseline. Both phantom and patient images were analyzed. A layer-adjustable anthropomorphic pelvis phantom capable of mimicking 38-58 cm lateral diameter-sized patients was imaged and reconstructed by the FBP and iDose4 algorithms with varying noise-reduction-levels, respectively. The resulting image sets were quantitatively assessed by two image quality indices, noise and contrast-to-noise ratio, and two clinical task-based indices, target CT Hounsfield number (for electron density determination) and structure contouring accuracy (for dose-volume calculations). Additionally, CT images of 34 patients reconstructed with iDose4 with six noise reduction levels were qualitatively evaluated by two radiation oncologists using a five-point scoring mechanism. For the phantom experiments, iDose4 achieved noise reduction up to 66.1% and CNR improvement up to 53.2%, compared to FBP without considering the changes of spatial resolution among images and the clinical acceptance of reconstructed images. Such improvements consistently appeared across different iDose4 noise reduction levels, exhibiting limited interlevel noise (< 5 HU) and target CT number variations (< 1 HU). The radiation

  6. Modified reconstruction algorithm based on space-time adaptive processing for multichannel synthetic aperture radar systems in azimuth

    Science.gov (United States)

    Guo, Xiaojiang; Gao, Yesheng; Wang, Kaizhi; Liu, Xingzhao

    2016-07-01

    A spectrum reconstruction algorithm based on space-time adaptive processing (STAP) can effectively suppress azimuth ambiguity for multichannel synthetic aperture radar (SAR) systems in azimuth. However, the traditional STAP-based reconstruction approach has to estimate the covariance matrix and calculate matrix inversion (MI) for each Doppler frequency bin, which will result in a very large computational load. In addition, the traditional STAP-based approach has to know the exact platform velocity, pulse repetition frequency, and array configuration. Errors involving these parameters will significantly degrade the performance of ambiguity suppression. A modified STAP-based approach to solve these problems is presented. The traditional array steering vectors and corresponding covariance matrices are Doppler-variant in the range-Doppler domain. After preprocessing by a proposed phase compensation method, they would be independent of Doppler bins. Therefore, the modified STAP-based approach needs to estimate the covariance matrix and calculate MI only once. The computation load could be greatly reduced. Moreover, by combining the reconstruction method and a proposed adaptive parameter estimation method, the modified method is able to successfully achieve multichannel SAR signal reconstruction and suppress azimuth ambiguity without knowing the above parameters. Theoretical analysis and experiments showed the simplicity and efficiency of the proposed methods.

  7. Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV

    Science.gov (United States)

    Fahringer, Timothy W.; Thurow, Brian S.

    2016-09-01

    A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.

  8. 3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm

    CERN Document Server

    Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia

    2015-01-01

    Positron emission tomographs (PET) do not measure an image directly. Instead, they measure at the boundary of the field-of-view (FOV) of the PET tomograph a sinogram that consists of measurements of the sums of all the counts along the lines connecting two detectors. As there is a multitude of detectors built into a typical PET tomograph structure, there are many possible detector pairs that pertain to the measurement. The problem is how to turn this measurement into an image (this is called imaging). Decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques. This stage was reached already twenty years ago (with the advent of new powerful computing processors). However, three dimensional (3D) imaging remains still a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
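    The record above breaks off, but the MLEM update named in the title is standard and compact enough to sketch; the dense toy matrix below stands in for the scanner's lines-of-response, and all sizes are illustrative.

```python
# Classic MLEM multiplicative update: x <- x * A^T(y / A x) / A^T 1.
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    x = np.ones(A.shape[1])                  # nonnegative start image
    sens = A.sum(axis=0)                     # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                         # forward-project current image
        x *= (A.T @ (y / np.maximum(proj, eps))) / np.maximum(sens, eps)
    return x

rng = np.random.default_rng(0)
A = rng.random((200, 64))                    # toy LOR-by-voxel system matrix
x_true = rng.random(64)
y = rng.poisson(A @ x_true * 50) / 50.0      # Poisson-noisy sinogram
x_rec = mlem(A, y)
```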

  9. A new algorithm for 3D reconstruction from support functions

    OpenAIRE

    2009-01-01

    We introduce a new algorithm for reconstructing an unknown shape from a finite number of noisy measurements of its support function. The algorithm, based on a least squares procedure, is very easy to program in standard software such as Matlab and allows, for the first time, good 3D reconstructions to be performed on an ordinary PC. Under mild conditions, theory guarantees that outputs of the algorithm will converge to the input shape as the number of measurements increases. Reconstructions ...

  10. Computed laminography and reconstruction algorithm

    Institute of Scientific and Technical Information of China (English)

    QUE Jie-Min; YU Zhong-Qiang; YAN Yong-Lian; CAO Da-Quan; ZHAO Wei; TANG Xiao; SUN Cui-Li; WANG Yan-Fang; WEI Cun-Feng; SHI Rong-Jian; WEI Long

    2012-01-01

    Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with variant weighting functions by computer simulation with a digital phantom. It proves that the ART algorithm is a good choice for the CL system.

  11. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.

  12. Computed Tomography Image Origin Identification based on Original Sensor Pattern Noise and 3D Image Reconstruction Algorithm Footprints.

    Science.gov (United States)

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2016-06-08

    In this paper, we focus on the "blind" identification of the Computed Tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image acquisition chain which can be used as a CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an Original Sensor Pattern Noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its 3D image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train an SVM-based classifier so as to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94%, and that it achieves better performance than the Sensor Pattern Noise (SPN) based strategy proposed for consumer camera devices.
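    The classification stage can be sketched with scikit-learn. The feature extraction (OSPN and reconstruction-algorithm footprints) is reduced to placeholder Gaussian features here, so this only illustrates the SVM plumbing, not the paper's detection rates.

```python
# SVM over per-image noise features, one class per scanner model.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_scanners, per_scanner, n_feat = 4, 40, 12
X = np.vstack([rng.normal(loc=k, scale=1.0, size=(per_scanner, n_feat))
               for k in range(n_scanners)])       # toy noise-feature vectors
y = np.repeat(np.arange(n_scanners), per_scanner)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
print(cross_val_score(clf, X, y, cv=5).mean())    # held-out identification rate
```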

  13. Use of a ray-based reconstruction algorithm to accurately quantify preclinical microSPECT images.

    Science.gov (United States)

    Vandeghinste, Bert; Van Holen, Roel; Vanhove, Christian; De Vos, Filip; Vandenberghe, Stefaan; Staelens, Steven

    2014-01-01

    This work aimed to measure the in vivo quantification errors obtained when ray-based iterative reconstruction is used in micro-single-photon emission computed tomography (SPECT). This was investigated with an extensive phantom-based evaluation and two typical in vivo studies using 99mTc and 111In, measured on a commercially available cadmium zinc telluride (CZT)-based small-animal scanner. Iterative reconstruction was implemented on the GPU using ray tracing, including (1) scatter correction, (2) computed tomography-based attenuation correction, (3) resolution recovery, and (4) edge-preserving smoothing. It was validated using a National Electrical Manufacturers Association (NEMA) phantom. The in vivo quantification error was determined for two radiotracers: [99mTc]DMSA in naive mice (n = 10 kidneys) and [111In]octreotide in mice (n = 6) inoculated with a xenograft neuroendocrine tumor (NCI-H727). The measured energy resolution is 5.3% for 140.51 keV (99mTc), 4.8% for 171.30 keV, and 3.3% for 245.39 keV (111In). For 99mTc, an uncorrected quantification error of 28 ± 3% is reduced to 8 ± 3%. For 111In, the error reduces from 26 ± 14% to 6 ± 22%. The in vivo error obtained with 99mTc-dimercaptosuccinic acid ([99mTc]DMSA) is reduced from 16.2 ± 2.8% to -0.3 ± 2.1% and from 16.7 ± 10.1% to 2.2 ± 10.6% with [111In]octreotide. Absolute quantitative in vivo SPECT is possible without explicit system matrix measurements. An absolute in vivo quantification error smaller than 5% was achieved and exemplified for both [99mTc]DMSA and [111In]octreotide.

  14. Use of a Ray-Based Reconstruction Algorithm to Accurately Quantify Preclinical MicroSPECT Images

    Directory of Open Access Journals (Sweden)

    Bert Vandeghinste

    2014-06-01

    Full Text Available This work aimed to measure the in vivo quantification errors obtained when ray-based iterative reconstruction is used in micro-single-photon emission computed tomography (SPECT). This was investigated with an extensive phantom-based evaluation and two typical in vivo studies using 99mTc and 111In, measured on a commercially available cadmium zinc telluride (CZT)-based small-animal scanner. Iterative reconstruction was implemented on the GPU using ray tracing, including (1) scatter correction, (2) computed tomography-based attenuation correction, (3) resolution recovery, and (4) edge-preserving smoothing. It was validated using a National Electrical Manufacturers Association (NEMA) phantom. The in vivo quantification error was determined for two radiotracers: [99mTc]DMSA in naive mice (n = 10 kidneys) and [111In]octreotide in mice (n = 6) inoculated with a xenograft neuroendocrine tumor (NCI-H727). The measured energy resolution is 5.3% for 140.51 keV (99mTc), 4.8% for 171.30 keV, and 3.3% for 245.39 keV (111In). For 99mTc, an uncorrected quantification error of 28 ± 3% is reduced to 8 ± 3%. For 111In, the error reduces from 26 ± 14% to 6 ± 22%. The in vivo error obtained with 99mTc-dimercaptosuccinic acid ([99mTc]DMSA) is reduced from 16.2 ± 2.8% to −0.3 ± 2.1% and from 16.7 ± 10.1% to 2.2 ± 10.6% with [111In]octreotide. Absolute quantitative in vivo SPECT is possible without explicit system matrix measurements. An absolute in vivo quantification error smaller than 5% was achieved and exemplified for both [99mTc]DMSA and [111In]octreotide.

  15. Spatially adaptive regularized iterative high-resolution image reconstruction algorithm

    Science.gov (United States)

    Lim, Won Bae; Park, Min K.; Kang, Moon Gi

    2000-12-01

    High resolution images are often required in applications such as remote sensing, frame freeze in video, and military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images are degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required; but such an array may be very costly or unavailable, so many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm with spatially adaptive regularization operators and parameters, and it solves spatially adaptive regularized constrained least squares minimization functionals. These regularization terms are shown to improve the reconstructed image quality by enforcing smoothness while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is the key to the success of high resolution image reconstruction; however, sub-pixel motion registration always carries some level of registration error, so a reconstruction algorithm that is robust against registration error is required. The registration algorithm uses a gradient-based sub-pixel motion estimator which provides shift information for each of the recorded frames. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information. The

  16. Improved wavefront reconstruction algorithm from slope measurements

    Science.gov (United States)

    Phuc, Phan Huy; Manh, Nguyen The; Rhee, Hyug-Gyo; Ghim, Young-Sik; Yang, Ho-Soon; Lee, Yun-Woo

    2017-03-01

    In this paper, we propose a wavefront reconstruction algorithm from slope measurements based on a zonal method. In this algorithm, the slope measurement sampling geometry used is the Southwell geometry, in which the phase values and the slope data are measured at the same nodes. The proposed algorithm estimates the phase value at a node point using the slope measurements of eight points around the node, as doing so is believed to result in better accuracy with regard to the wavefront. For optimization of the processing time, a successive over-relaxation method is applied to iteration loops. We use a trial-and-error method to determine the best relaxation factor for each type of wavefront in order to optimize the iteration time and, thus, the processing time of the algorithm. Specifically, for a circularly symmetric wavefront, the convergence rate of the algorithm can be improved by using the result of a Fourier Transform as an initial value for the iteration. Various simulations are presented to demonstrate the improvements realized when using the proposed algorithm. Several experimental measurements of deflectometry are also processed by using the proposed algorithm.
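    A stripped-down zonal SOR loop of the kind the paper builds on is sketched below, using the classic 4-neighbour Southwell update rather than the paper's 8-point estimator; grid spacing, relaxation factor and iteration count are illustrative assumptions.

```python
# Zonal SOR wavefront reconstruction from slopes on a Southwell grid: each
# node is relaxed toward the mean of its neighbours offset by slope-derived
# increments.
import numpy as np

def sor_reconstruct(sx, sy, h=1.0, omega=1.7, n_iter=500):
    n, m = sx.shape
    phi = np.zeros((n, m))
    for _ in range(n_iter):
        for i in range(n):
            for j in range(m):
                vals = []
                if j > 0:     vals.append(phi[i, j-1] + h*(sx[i, j-1]+sx[i, j])/2)
                if j < m - 1: vals.append(phi[i, j+1] - h*(sx[i, j+1]+sx[i, j])/2)
                if i > 0:     vals.append(phi[i-1, j] + h*(sy[i-1, j]+sy[i, j])/2)
                if i < n - 1: vals.append(phi[i+1, j] - h*(sy[i+1, j]+sy[i, j])/2)
                phi[i, j] += omega * (np.mean(vals) - phi[i, j])
        phi -= phi.mean()      # pin the free piston term
    return phi

yy, xx = np.mgrid[0:20, 0:20].astype(float)
sx = 0.1 * (xx - 10)           # slopes of a parabolic wavefront, d(phi)/dx
sy = np.zeros_like(sx)
phi = sor_reconstruct(sx, sy)
```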

  17. A FAST CONVERGING SPARSE RECONSTRUCTION ALGORITHM IN GHOST IMAGING

    Institute of Scientific and Technical Information of China (English)

    Li Enrong; Chen Mingliang; Gong Wenlin; Wang Hui; Han Shensheng

    2012-01-01

    A fast converging sparse reconstruction algorithm in ghost imaging is presented. It utilizes total variation regularization, and its formulation is based on the Karush-Kuhn-Tucker (KKT) theorem in the theory of convex optimization. Tests using experimental data show that, compared with the Gradient Projection for Sparse Reconstruction (GPSR) algorithm, the proposed algorithm yields better results with less computational work.

  18. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    Science.gov (United States)

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

    Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
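    For a feel of SQP-style optimization, SciPy's SLSQP solver can be run on a toy bound-constrained fit; the exponential forward model below merely stands in for the TD-RTE and is not the paper's code.

```python
# Toy SQP-style fit with SciPy's SLSQP: recover two positive parameters of an
# exponential decay from synthetic "measurements".
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 5.0, 60)
p_true = np.array([1.2, 0.8])
signal = p_true[0] * np.exp(-p_true[1] * t)

def objective(p):
    resid = p[0] * np.exp(-p[1] * t) - signal
    return 0.5 * resid @ resid

res = minimize(objective, x0=[0.5, 0.5], method="SLSQP",
               bounds=[(0.0, 10.0), (0.0, 10.0)])
print(res.x)   # should be close to [1.2, 0.8]
```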

  19. Phantom-based evaluations of two binning algorithms for four-dimensional CT reconstruction in lung cancer radiation therapy

    Institute of Scientific and Technical Information of China (English)

    Fuli Zhang; Huayong Jiang; Weidong Xu; Yadi Wang ; Qingzhi Liu; Na Lu; Diandian Chen; Bo Yao

    2014-01-01

    Objective: The purpose of this study was to evaluate the performance of the phase-binning and amplitude-binning algorithms for four-dimensional computed tomography (4DCT) reconstruction in lung cancer radiation therapy. Methods: Quasar phantom data were used for evaluation. A phantom of known geometry was mounted on a four-dimensional (4D) motion platform programmed with twelve respiratory waves (twelve lung patient trajectories) and scanned with a Philips Brilliance Big Bore 16-slice CT simulator. The 4DCT images were reconstructed using both phase- and amplitude-binning algorithms. Internal target volumes (ITVs) of the phase- and amplitude-binned image sets were compared by evaluating shape and volume distortions. Results: The phantom experiments illustrated that, as expected, maximum inhalation occurred at the 0% amplitude and maximum exhalation occurred at the 50% amplitude of the amplitude-binned 4DCT image sets. The amplitude-binning algorithm rendered smaller ITVs than the phase-binning algorithm. Conclusion: The amplitude-binning algorithm for 4DCT reconstruction may have a potential advantage in reducing the margin and protecting normal lung tissue from unnecessary irradiation.

  20. 3-Dimensional stereo implementation of photoacoustic imaging based on a new image reconstruction algorithm without using discrete Fourier transform

    Science.gov (United States)

    Ham, Woonchul; Song, Chulgyu

    2017-05-01

    In this paper, we propose a new three-dimensional stereo image reconstruction algorithm for a photoacoustic medical imaging system, and we introduce and discuss a new theoretical algorithm based on the physical concept of the Radon transform. The key concept of the proposed algorithm is to evaluate the possibility that an acoustic source exists within a search region, using the geometric distance between each sensor element of the acoustic detector and the corresponding search region, denoted by a grid. We derive the mathematical equation for the magnitude of this existence possibility, which can be used to implement the proposed algorithm, and we derive it for both the one-dimensional and the two-dimensional sensing array cases. Simulated k-Wave data are used to compare the image quality of the proposed algorithm with that of the conventional approach, in which the FFT must be used. The k-Wave Matlab simulation results demonstrate the effectiveness of the proposed reconstruction algorithm.

  1. Convergence of iterative image reconstruction algorithms for Digital Breast Tomosynthesis

    DEFF Research Database (Denmark)

    Sidky, Emil; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    Most iterative image reconstruction algorithms are based on some form of optimization, such as minimization of a data-fidelity term plus an image regularizing penalty term. While achieving the solution of these optimization problems may not directly be clinically relevant, accurate optimization solutions can aid in iterative image reconstruction algorithm design. This issue is particularly acute for iterative image reconstruction in Digital Breast Tomosynthesis (DBT), where the corresponding data model is particularly poorly conditioned. The impact of this poor conditioning is that iterative … (Math. Imag. Vol. 40, pgs 120-145) and apply it to iterative image reconstruction in DBT.

  2. A new algorithm for 3D reconstruction from support functions

    DEFF Research Database (Denmark)

    Gardner, Richard; Kiderlen, Markus

    2009-01-01

    We introduce a new algorithm for reconstructing an unknown shape from a finite number of noisy measurements of its support function. The algorithm, based on a least squares procedure, is very easy to program in standard software such as Matlab and allows, for the first time, good 3D reconstructions to be performed on an ordinary PC. Under mild conditions, theory guarantees that outputs of the algorithm will converge to the input shape as the number of measurements increases. Reconstructions may be obtained without any pre- or post-processing steps and with no restriction on the sets of measurement directions.

  3. A COMPARISON OF EXISTING ALGORITHMS FOR 3D TREE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    E. Bournez

    2017-02-01

    Full Text Available 3D models of tree geometry are important for numerous studies, such as urban planning or agricultural studies. In climatology, tree models can be necessary for simulating the cooling effect of trees by estimating their evapotranspiration. The literature shows that the more accurate the 3D structure of a tree is, the more accurate microclimate models are. This is the reason why, since 2013, we have been developing an algorithm for the reconstruction of trees from terrestrial laser scanner (TLS) data, which we call TreeArchitecture. Meanwhile, new promising algorithms dedicated to tree reconstruction have emerged in the literature. In this paper, we assess the capacity of our algorithm and of two others, PlantScan3D and SimpleTree, to reconstruct the 3D structure of trees. The aim of this reconstruction is to be able to characterize the geometric complexity of trees with different heights, sizes and shapes of branches. Based on a specific surveying workflow with a TLS, we acquired dense point clouds of six different urban trees with specific architectures, before reconstructing them with each algorithm. Finally, qualitative and quantitative assessments of the models are performed using reference tree reconstructions and field measurements. Based on this assessment, the advantages and the limits of every reconstruction algorithm are highlighted. Overall, very satisfying results can be reached for 3D reconstructions of tree topology as well as of tree volume.

  4. Evaluation of the image quality in digital breast tomosynthesis (DBT) employed with a compressed-sensing (CS)-based reconstruction algorithm by using the mammographic accreditation phantom

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Cho, Heemoon; Je, Uikyu; Cho, Hyosung, E-mail: hscho1@yonsei.ac.kr; Park, Chulkyu; Lim, Hyunwoo; Kim, Kyuseok; Kim, Guna; Park, Soyoung; Woo, Taeho; Choi, Sungil

    2015-12-21

    In this work, we have developed a prototype digital breast tomosynthesis (DBT) system which mainly consists of an x-ray generator (28 kVp, 7 mAs), a CMOS-type flat-panel detector (70-μm pixel size, 230.5×339 mm² active area), and a rotational arm to move the x-ray generator in an arc. We employed a compressed-sensing (CS)-based reconstruction algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate DBT reconstruction. Here, CS is a state-of-the-art mathematical theory for solving inverse problems, which exploits the sparsity of the image with substantially high accuracy. We evaluated the reconstruction quality in terms of detectability, the contrast-to-noise ratio (CNR), and the slice sensitivity profile (SSP) by using the mammographic accreditation phantom (Model 015, CIRS Inc.) and compared it to the FBP-based quality. The CS-based algorithm yielded much better image quality, preserving superior image homogeneity, edge sharpening, and cross-plane resolution, compared to the FBP-based one. - Highlights: • A prototype digital breast tomosynthesis (DBT) system is developed. • A compressed-sensing (CS) based reconstruction framework is employed. • We reconstructed high-quality DBT images by using the proposed reconstruction framework.

  5. Model Based Iterative Reconstruction for Bright Field Electron Tomography (Postprint)

    Science.gov (United States)

    2013-02-01

    ...the reconstruction when typical algorithms such as Filtered Back Projection (FBP) and the Simultaneous Iterative Reconstruction Technique (SIRT) are applied to the data. Model-based iterative reconstruction (MBIR) provides a powerful framework for tomographic...

  6. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    Raman Khurana

    2012-11-01

    CMS has developed sophisticated tau identification algorithms for hadronic tau decay modes. Production of tau leptons decaying to hadrons is studied at 7 TeV centre-of-mass energy with 2011 collision data collected by the CMS detector, and these data have been used to measure the performance of the tau identification algorithms in terms of identification efficiency and misidentification rates from electrons, muons and hadronic jets. These algorithms enable extended reach for searches for the MSSM Higgs and other exotic particles.

  7. Multi-scale UDCT dictionary learning based highly undersampled MR image reconstruction using patch-based constraint splitting augmented Lagrangian shrinkage algorithm

    Institute of Scientific and Technical Information of China (English)

    Min YUAN; Bing-xin YANG; Yi-de MA‡; Jiu-wen ZHANG; Fu-xiang LU; Tong-feng ZHANG

    2015-01-01

    Recently, dictionary learning (DL) based methods have been introduced to compressed sensing magnetic resonance imaging (CS-MRI), and they outperform pre-defined analytic sparse priors. However, a single-scale dictionary trained directly from image patches is incapable of representing image features from a multi-scale, multi-directional perspective, which influences reconstruction performance. In this paper, incorporating the superior multi-scale properties of the uniform discrete curvelet transform (UDCT) with the data-matching adaptability of trained dictionaries, we propose a flexible sparsity framework that allows sparser representation and captures prominent hierarchical essential features of magnetic resonance (MR) images. Multi-scale decomposition is implemented using the UDCT because of its low redundancy ratio, hierarchical data structure, and ease of implementation. Each sub-dictionary of the different sub-bands is trained independently to form the multi-scale dictionaries. Corresponding to this new sparsity model, we modify the constraint splitting augmented Lagrangian shrinkage algorithm (C-SALSA) into a patch-based C-SALSA (PB C-SALSA) to solve the constrained optimization problem of regularized image reconstruction. Experimental results demonstrate that the sub-dictionaries trained at different scales, enforcing sparsity at multiple scales, can be efficiently used for MRI reconstruction to obtain satisfactory results at further reduced undersampling rates. Multi-scale UDCT dictionaries potentially outperform both single-scale trained dictionaries and multi-scale analytic transforms. Our proposed sparsity model achieves sparser representation of the reconstructed data, which results in fast convergence of the reconstruction exploiting PB C-SALSA. Simulation results demonstrate that the proposed method outperforms conventional CS-MRI methods in maintaining intrinsic properties, eliminating aliasing, reducing unexpected artifacts, and removing...

  8. Wind reconstruction algorithm for Viking Lander 1

    Science.gov (United States)

    Kynkäänniemi, Tuomas; Kemppinen, Osku; Harri, Ari-Matti; Schmidt, Walter

    2017-06-01

    The wind measurement sensors of Viking Lander 1 (VL1) were only fully operational for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind measurement sensor failures. The algorithm for wind reconstruction enables the processing of wind data during the complete VL1 mission. The heater element of the quadrant sensor, which provided auxiliary measurement for wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Regardless of the failures, it was still possible to reconstruct the wind measurement data, because the failed components of the sensors did not prevent the determination of the wind direction and speed, as some of the components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating the operation of the algorithm. The algorithm enables the reconstruction of wind measurements for the complete VL1 mission. The amount of available sols is extended from 350 to 2245 sols.

  9. Sampling conditions for gradient-magnitude sparsity based image reconstruction algorithms

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    ...in the compressive sensing (CS) community for this type of sparsity. The preliminary finding here, based on simulations using images of realistic sparsity levels, is that the necessary sampling can go as low as N/4 views for an NxN pixel array. This work sets the stage for fixed-exposure studies where the number...

  10. Atomic resolution tomography reconstruction of tilt series based on a GPU accelerated hybrid input-output algorithm using polar Fourier transform.

    Science.gov (United States)

    Lu, Xiangwen; Gao, Wenpei; Zuo, Jian-Min; Yuan, Jiabin

    2015-02-01

    Advances in diffraction and transmission electron microscopy (TEM) have greatly improved the prospect of three-dimensional (3D) structure reconstruction from two-dimensional (2D) images or diffraction patterns recorded in a tilt series at atomic resolution. Here, we report a new graphics processing unit (GPU) accelerated iterative transformation algorithm (ITA) based on the polar fast Fourier transform for reconstructing 3D structure from 2D diffraction patterns. The algorithm also applies to image tilt series by calculating diffraction patterns from the recorded images using the projection-slice theorem. A gold icosahedral nanoparticle of 309 atoms is used as the model to test the feasibility, performance and robustness of the developed algorithm using simulations. Atomic resolution in 3D is achieved for the 309-atom Au nanoparticle using 75 diffraction patterns covering 150° of rotation. The capability demonstrated here provides an opportunity to uncover the 3D structure of small objects of nanometers in size by electron diffraction.

  11. A new iterative algorithm to reconstruct the refractive index.

    Science.gov (United States)

    Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y

    2007-06-21

    The latest developments in x-ray imaging are associated with techniques based on phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum-likelihood expectation-maximization technique is presented and discussed for reconstructing the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, a comparison between an analytical reconstruction algorithm and the iterative method is discussed, together with the convergence characteristics of the latter. Finally, we show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles about 300 μm in diameter.
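
    The expectation-maximization update underlying such algorithms is compact. The following dense-matrix Python sketch shows the classic MLEM iteration (A and y are a hypothetical system matrix and measurement vector, not the authors' analyzer-based setup):

    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        # Multiplicative EM update: x <- x * A^T(y / Ax) / A^T 1,
        # which keeps x nonnegative and increases the Poisson likelihood.
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])          # sensitivity image
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, eps)    # data / forward projection
            x *= (A.T @ ratio) / np.maximum(sens, eps)
        return x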

  12. Refractive index tomography based on optical coherence tomography and tomographic reconstruction algorithm

    Science.gov (United States)

    Kitazawa, Takahiro; Nomura, Takanori

    2017-09-01

    Refractive index (RI) tomography based on optical coherence tomography (OCT), rather than quantitative phase imaging (QPI), is proposed. In conventional RI tomography, the phase unwrapping process deteriorates measurement accuracy owing to unwrapping errors. To eliminate the unwrapping process, the introduction of OCT is proposed, because OCT directly provides optical thickness. The proposed method can therefore improve measurement accuracy through the removal of the phase unwrapping error. The feasibility of the method is confirmed by numerical simulations and optical experiments, whose results show that the proposed method can reduce measurement errors even when an object shows phase changes much larger than a wavelength.

  13. Three-dimensional fracture visualisation of multidetector CT of the skull base in trauma patients: comparison of three reconstruction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Ringl, Helmut; Schernthaner, Ruediger; Philipp, Marcel O.; Metz-Schimmerl, Sylvia; Czerny, Christian; Weber, Michael; Steiner-Ringl, Andrea; Peloschek, Philipp; Herold, Christian J.; Schima, Wolfgang [Medical University of Vienna, Department of Radiology, Vienna (Austria); Gaebler, Christian [Medical University of Vienna, Department of Trauma-Surgery, Vienna (Austria)

    2009-10-15

    The purpose of this study was to retrospectively assess the detection rate of skull-base fractures for three different three-dimensional (3D) reconstruction methods of cranial CT examinations in trauma patients. A total of 130 cranial CT examinations of patients with previous head trauma were subjected to 3D reconstruction of the skull base, using the solid (SVR) and transparent (TVR) volume-rendering techniques and maximum intensity projection (MIP). Three radiologists independently evaluated all reconstructions as well as standard high-resolution multiplanar reformations (HR-MPRs). Mean fracture detection rates for all readers reading rotating reconstructions were 39, 36, 61 and 64% for SVR, TVR, MIP and HR-MPR respectively. Although not significantly different from HR-MPR with respect to sensitivity (P = 0.9), MIP visualised 18% of fractures that were not reported in HR-MPR. Because of the relatively low detection rate using HR-MPRs alone, we recommend reading MIP reconstructions in addition to the obligatory HR-MPRs to improve fracture detection. (orig.)

  14. Algorithms for reconstruction of chromosomal structures.

    Science.gov (United States)

    Lyubetsky, Vassily; Gershgorin, Roman; Seliverstov, Alexander; Gorbunov, Konstantin

    2016-01-19

    One of the main aims of phylogenomics is the reconstruction of objects defined in the leaves along the whole phylogenetic tree to minimize a specified functional, which may also include generation of the phylogenetic tree itself. Such objects can include nucleotide and amino acid sequences, chromosomal structures, etc. The structures can have any set of linear and circular chromosomes, variable gene composition and any number of paralogs, as well as any weights of the individual evolutionary operations that transform one chromosome structure into another. Many heuristic algorithms have been proposed for this purpose, but to our knowledge only a few exact algorithms among them have low (linear, cubic or similar) polynomial computational complexity. The algorithms naturally start from the calculation of both the distance between two structures and the shortest sequence of operations transforming one structure into another; this calculation per se is an NP-hard problem. A general model of chromosomal structure rearrangements is considered. Exact algorithms with almost linear or cubic polynomial complexities have been developed to solve the problems for the case of any chromosomal structure, with certain limitations on operation weights. The computer programs were tested on biological data for the problem of mitochondrial or plastid chromosomal structure reconstruction; to our knowledge, no other computer programs are available for this model. The exactness of the proposed algorithms and their low polynomial complexities were proved. The reconstructed evolutionary trees of mitochondrial and plastid chromosomal structures, as well as the ancestral states of the structures, appear to be reasonable.

  15. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-matrix, i.e. the low- and high-frequency matrices, respectively; (2) apply a second-level DCT on the DC-matrix to generate two arrays, namely a nonzero array and a zero array; (3) apply the Minimize-Matrix-Size algorithm to the AC-matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes the probabilities of all compressed data by using a table of data, and then uses a binary search to find the decompressed data inside the table. Thereafter, all decoded DC values and decoded AC coefficients are combined in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
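
    Step (1) of the pipeline, a two-level DWT followed by a DCT on the coarse band, can be sketched with PyWavelets and SciPy. This is a simplified illustration of the transform stage only, without the Minimize-Matrix-Size step or arithmetic coding, and the wavelet choice is an assumption:

    import numpy as np
    import pywt
    from scipy.fft import dctn, idctn

    def forward_stage(image):
        # Two-level 2D DWT, then a DCT on the low-frequency (approximation)
        # band, yielding the DC-matrix analogue plus high-frequency details.
        coeffs = pywt.wavedec2(image, 'db2', level=2)
        dc_matrix = dctn(coeffs[0], norm='ortho')
        return dc_matrix, coeffs[1:]

    def inverse_stage(dc_matrix, details):
        # Invert both transforms; with no quantization this round-trips.
        approx = idctn(dc_matrix, norm='ortho')
        return pywt.waverec2([approx, *details], 'db2')

    img = np.random.rand(64, 64)
    dc, det = forward_stage(img)
    print(np.allclose(inverse_stage(dc, det)[:64, :64], img, atol=1e-8))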

  16. Three penalized EM-type algorithms for PET image reconstruction.

    Science.gov (United States)

    Teng, Yueyang; Zhang, Tie

    2012-06-01

    Based on Bayes' theorem, Green introduced the maximum a posteriori (MAP) algorithm to obtain smoothed reconstructions for positron emission tomography. This algorithm is flexible and convenient for most penalties, but its convergence is hard to guarantee. With a similar goal, Fessler penalized a weighted least squares (WLS) estimator with a quadratic penalty and solved it with the successive over-relaxation (SOR) algorithm; however, that algorithm is time-consuming and difficult to parallelize. Anderson proposed another WLS estimator for faster convergence, for which few regularization methods have been studied. For the three regularized estimators above, we develop three new expectation-maximization (EM) type algorithms to solve them. Unlike MAP and SOR, the proposed algorithms yield update rules by minimizing auxiliary functions constructed on the previous iterations, which ensures that the cost functions decrease monotonically. Experimental results demonstrate the robustness and effectiveness of the proposed algorithms.

  17. Gray Weighted CT Reconstruction Algorithm Based on Variable Voltage

    Institute of Scientific and Technical Information of China (English)

    李权; 陈平; 潘晋孝

    2014-01-01

    In conventional CT reconstruction based on a fixed voltage, the projection data often appear overexposed or underexposed, and the reconstruction results are consequently poor. To solve this problem, variable-voltage CT reconstruction is proposed. Effective projection sequences matched to the effective thickness of a structural component are obtained through variable voltages, and the reconstruction is optimized by adjusting and minimizing the total variation on the basis of the ART-iterated image. In the reconstruction process, following the gray-weighted algorithm, the reconstructed image at a low voltage is used as the initial value for the effective-projection reconstruction at the adjacent higher voltage, and so on up to the highest voltage, until the complete structural information is reconstructed. Experiments show that the proposed algorithm can completely reflect the information of a complicated structural component, and the pixel values are more stable.
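
    The alternation the abstract describes, ART sweeps followed by a total-variation reduction, can be illustrated for a 1-D image with a short Python sketch (a toy version under assumed step sizes, not the paper's implementation):

    import numpy as np

    def art_tv(A, y, n_outer=20, n_tv=10, lam=0.05, relax=0.5):
        x = np.zeros(A.shape[1])
        for _ in range(n_outer):
            for a_i, y_i in zip(A, y):        # ART: project onto each ray
                x += relax * (y_i - a_i @ x) / (a_i @ a_i + 1e-12) * a_i
            for _ in range(n_tv):             # reduce total variation
                g = np.sign(np.diff(x))       # sign of neighbour differences
                x[:-1] += lam * g             # pull neighbours together,
                x[1:] -= lam * g              # flattening spurious jumps
        return x

    A gray-weighted multi-voltage loop would then call art_tv once per voltage, passing the previous reconstruction in as the starting value instead of zeros.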

  18. Simulation of Computed Tomography Reconstruction Algorithm Based on Consecutive X-ray Spectrum

    Institute of Scientific and Technical Information of China (English)

    蔡彪; 潘晋孝; 陈平

    2011-01-01

    Traditional reconstruction algorithms assume that the X-ray source is monochromatic, while in an actual CT system the X-ray beam is polychromatic. When polychromatic projection data are used to reconstruct images directly, metal artifacts and beam-hardening artifacts appear in the reconstructed images, which reduces image quality and affects medical or industrial diagnosis. This paper considers the continuity of the X-ray spectrum and simulates a statistical reconstruction algorithm based on a consecutive X-ray spectrum. First, the consecutive spectrum is discretized into a set of monochromatic spectra. Second, according to the workpiece material information and the mass attenuation coefficients corresponding to the X-ray energies, a workpiece material model is formulated based on the consecutive spectrum. Finally, using a polyenergetic statistical iterative algorithm, the reconstruction is carried out from the polychromatic projection data. Simulation experiments show that the algorithm makes full use of the polyenergetic nature of the X-rays, reduces image artifacts to a certain extent, and improves CT image quality.

  19. Projection Fused Image Reconstruction Algorithm Study Based on Graded Variable Voltage

    Institute of Scientific and Technical Information of China (English)

    赵晋利; 胡红萍; 李权

    2015-01-01

    To solve the problem of incomplete projection data reconstruction under variable voltage, a linearly fused CT reconstruction algorithm based on variable voltage is proposed. The algorithm first extracts the effective information of the graded-voltage projection sequences according to the optimal gray band; the projection data at a low voltage are then linearly fused into the projection data at the adjacent higher voltage, and so on up to the highest voltage, yielding a single high-dynamic-range projection data set with complete information, which is reconstructed with the TV-ART algorithm. Experiments show that the linear fusion algorithm achieves complete reconstruction of the variable-voltage image information, and the pixel values are more stable.

  20. FAST ALGORITHM FOR NON-UNIFORMLY SAMPLED SIGNAL SPECTRUM RECONSTRUCTION

    Institute of Scientific and Technical Information of China (English)

    Zhu Zhenqian; Zhang Zhimin; Wang Yu

    2013-01-01

    In this paper, a fast algorithm to reconstruct the spectrum of non-uniformly sampled signals is proposed. Compared with the original algorithm, the fast algorithm has higher computational efficiency, especially when the sampling sequence is long. In particular, a transformation matrix is built, and the reconstructed spectrum is synthesized exactly from the spectra of the individual sampling channels. The fast algorithm solves the efficiency issues of the spectrum reconstruction algorithm, making its practical application in multi-channel Synthetic Aperture Radar (SAR) possible.

  1. Filtered gradient reconstruction algorithm for compressive spectral imaging

    Science.gov (United States)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure unusually follows a dense matrix distribution, such as the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. We propose a gradient-based CSI reconstruction algorithm that introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and yields improved image quality. Motivated by the structure of the CSI matrix Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filtered algorithm converges to better quality performance than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
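
    A minimal version of such a filtered gradient loop, in Python: the median filter below stands in for whichever filter the authors actually use, and the step size and threshold are assumptions:

    import numpy as np
    from scipy.ndimage import median_filter

    def filtered_gradient(Phi, y, shape, n_iter=100, tau=1e-3):
        # Gradient step on ||y - Phi x||^2, soft threshold for sparsity,
        # then a spatial filtering step on the current image estimate.
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # safe step size
        x = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            x -= step * (Phi.T @ (Phi @ x - y))
            x = np.sign(x) * np.maximum(np.abs(x) - step * tau, 0)
            x = median_filter(x.reshape(shape), size=3).ravel()
        return x.reshape(shape)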

  2. Image Reconstruction Using a Genetic Algorithm for Electrical Capacitance Tomography

    Institute of Scientific and Technical Information of China (English)

    MOU Changhua; PENG Lihui; YAO Danya; XIAO Deyun

    2005-01-01

    Electrical capacitance tomography (ECT) has been used for more than a decade for imaging dielectric processes. However, because of its ill-posedness and non-linearity, ECT image reconstruction has always been a challenge. A new genetic algorithm (GA) developed for ECT image reconstruction uses initial results from linear back-projection, which is widely used for ECT image reconstruction, to optimize the threshold and the maximum and minimum gray values for the image. This procedure avoids optimizing the gray values pixel by pixel and significantly reduces the dimension of the search space. Both simulations and static experimental results show that the method is efficient and capable of reconstructing high-quality images. Evaluation criteria show that the GA-based method has smaller image error and greater correlation coefficients. In addition, the GA-based method converges quickly with a small number of iterations.
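
    The reduced search space, just a threshold and two gray levels, makes the GA easy to sketch. The Python toy below is mutation-only for brevity, and the linear sensitivity model S and all parameters are assumptions, not the paper's forward solver:

    import numpy as np
    rng = np.random.default_rng(0)

    def ga_ect(lbp_image, S, c_meas, pop=40, gens=100, mut=0.05):
        # Chromosome p = (threshold, low gray, high gray); a candidate image
        # is the LBP image binarized at the threshold, scored against the
        # measured capacitances via the linear model c ~ S @ image.
        def render(p):
            return np.where(lbp_image.ravel() > p[0], p[2], p[1])
        def fitness(p):
            return -np.sum((c_meas - S @ render(p)) ** 2)
        P = rng.uniform(0, 1, size=(pop, 3))
        for _ in range(gens):
            scores = np.array([fitness(p) for p in P])
            parents = P[np.argsort(scores)[-pop // 2:]]   # keep fitter half
            kids = parents[rng.integers(0, len(parents), pop // 2)].copy()
            kids += mut * rng.normal(size=kids.shape)     # Gaussian mutation
            P = np.vstack([parents, kids])
        best = max(P, key=fitness)
        return render(best).reshape(lbp_image.shape)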

  3. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    Full Text Available The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to grow rapidly in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of this data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  4. Limited angle C-arm tomosynthesis reconstruction algorithms

    Science.gov (United States)

    Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying

    2015-03-01

    In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three-dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information about an object by reconstructing slices passing through it, based on a series of angular projection views with respect to the object. C-arm tomosynthesis provides two-dimensional (2D) X-ray projection images with rotation (±20° angular range) of both the X-ray source and the detector. Four representative reconstruction algorithms were investigated: point-by-point back projection (BP), filtered back projection (FBP), the simultaneous algebraic reconstruction technique (SART) and maximum-likelihood expectation maximization (MLEM). A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. For the reconstructed images, a 3D mesh plot and a 2D line profile of normalized pixel intensities on the in-focus reconstruction plane crossing the center of the object were studied for each reconstruction algorithm. The results demonstrated the capability of generating 3D information from limited-angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact and portable and can avoid moving patients, it has been investigated for different clinical applications ranging from tumor surgery to interventional radiology, and it is very important to evaluate it for valuable applications.
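
    Of the four, SART is the least obvious to write down. Here is a dense-matrix Python sketch of one common formulation (the relaxation factor is an assumption, and the sketch is not tied to the authors' C-arm geometry):

    import numpy as np

    def sart(A, y, n_iter=20, relax=0.25):
        # Simultaneous update: residuals are normalized by ray (row) sums,
        # backprojected, and normalized again by pixel (column) sums.
        row = np.maximum(A.sum(axis=1), 1e-12)
        col = np.maximum(A.sum(axis=0), 1e-12)
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x += relax * (A.T @ ((y - A @ x) / row)) / col
        return x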

  5. Application of particle filtering algorithm in image reconstruction of EMT

    Science.gov (United States)

    Wang, Jingwen; Wang, Xu

    2015-07-01

    To improve the image quality of electromagnetic tomography (EMT), a new EMT image reconstruction method based on a particle filtering algorithm is presented. First, the principle of EMT image reconstruction is analyzed; the search for the optimal reconstruction is then described as a system state estimation process, and the state-space model is established. Second, to obtain the minimum-variance estimate of the reconstruction, the optimal weights of random samples drawn from the state space are calculated from the measured information. Finally, simulation experiments with five different flow regimes are performed. The experimental results show that the average image error of the reconstructions obtained by the proposed method is 42.61%, and the average correlation coefficient with the original image is 0.8706, both much better than the corresponding indicators obtained with the LBP, Landweber and Kalman filter algorithms. This EMT image reconstruction method is thus efficient and accurate, and provides a new method and means for EMT research.
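
    The state-estimation view maps onto a standard sequential importance resampling loop. The generic Python sketch below illustrates the idea; the linear sensitivity model S, noise levels, and particle counts are assumptions, not the paper's setup:

    import numpy as np
    rng = np.random.default_rng(1)

    def pf_reconstruct(S, v_meas, n_pix, n_particles=500, n_steps=30, sigma=0.05):
        # Particles are candidate images; each is weighted by how well its
        # forward projection S @ x matches the measured voltages.
        X = rng.uniform(0, 1, size=(n_particles, n_pix))
        for _ in range(n_steps):
            X = np.clip(X + 0.02 * rng.normal(size=X.shape), 0, 1)  # proposal
            err = np.sum((v_meas - X @ S.T) ** 2, axis=1)
            w = np.exp(-(err - err.min()) / (2 * sigma ** 2))       # weights
            idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
            X = X[idx]                                              # resample
        return X.mean(axis=0)            # posterior-mean image estimate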

  6. Implementation on GPU-based acceleration of the m-line reconstruction algorithm for circle-plus-line trajectory computed tomography

    Science.gov (United States)

    Li, Zengguang; Xi, Xiaoqi; Han, Yu; Yan, Bin; Li, Lei

    2016-10-01

    The circle-plus-line trajectory satisfies the exact-reconstruction data sufficiency condition and can be applied in a C-arm X-ray Computed Tomography (CT) system to increase reconstruction image quality at a large cone angle. The m-line reconstruction algorithm is adopted for this trajectory; the choice of m-line direction is quite flexible, and the m-line algorithm needs less data for accurate reconstruction than FDK-type algorithms. However, the computational complexity of the algorithm is too large for efficient serial processing, and the reconstruction speed has become an important issue that limits its practical applications, so accelerating the algorithm is of great value. Among hardware accelerators, the graphics processing unit (GPU) has become the mainstream in CT image reconstruction, and GPU acceleration has achieved good results for FDK-type algorithms. However, accelerating the m-line algorithm for the circle-plus-line trajectory differs from accelerating the FDK algorithm, and the parallelism of the circle-plus-line algorithm must be analyzed to design an appropriate acceleration strategy. The implementation can be divided into the following steps: first, selecting m-lines that cover the entire object to be reconstructed; second, calculating the differentiated backprojection of the points on the m-lines; third, performing Hilbert filtering along the m-line direction; finally, resampling the m-line reconstruction results in three dimensions to obtain the reconstruction in Cartesian coordinates. In this paper, we design reasonable GPU acceleration strategies for each step to improve the reconstruction speed as much as possible. The main contribution is an appropriate acceleration strategy for the circle-plus-line-trajectory m-line reconstruction algorithm. The Shepp-Logan phantom is used to simulate the experiment on a single K20 GPU.

  7. Genetic algorithms for minimal source reconstructions

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, P.S.; Mosher, J.C.

    1993-12-01

    Under-determined linear inverse problems arise in applications in which signals must be estimated from insufficient data. In these problems the number of potentially active sources is greater than the number of observations. In many situations, it is desirable to find a minimal source solution. This can be accomplished by minimizing a cost function that accounts both for the compatibility of the solution with the observations and for its 'sparseness'. Minimizing functions of this form can be a difficult optimization problem. Genetic algorithms are a relatively new and robust approach to the solution of difficult optimization problems, providing a global framework that is not dependent on local continuity or on explicit starting values. In this paper, the authors describe the use of genetic algorithms to find minimal source solutions, using as an example a simulation inspired by the reconstruction of neural currents in the human brain from magnetoencephalographic (MEG) measurements.

  8. Benchmarking procedures for high-throughput context specific reconstruction algorithms

    Directory of Open Access Journals (Sweden)

    Maria ePires Pacheco

    2016-01-01

    Full Text Available Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context-specific reconstruction based on generic genome-scale models like ReconX (Duarte et al., 2007; Thiele et al., 2013) or HMR (Agren et al., 2013) has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms have been published in the last ten years, only a fraction of them are suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. This review describes and analyses common validation methods used for testing model-building algorithms. Two major methods can be distinguished: consistency testing and comparison-based testing. The former includes methods like cross-validation or testing with artificial networks. The latter covers methods comparing sets of functionalities, comparison with existing networks, or comparison with additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms.

  9. DSA cone beam reconstruction algorithm based on backprojection weight FDK

    Institute of Scientific and Technical Information of China (English)

    杨宏成; 高欣; 张涛

    2013-01-01

    To solve the problem of cone beam artifacts resulting from the large cone angle in cone beam digital subtraction angiography (DSA), a novel backprojection-weight reconstruction algorithm based on the FDK framework (BPW-FDK) is proposed. The cause of the cone beam artifacts far from the rotation trajectory is analyzed. To address the data deficiency in Radon space, a new distance-based backprojection weight function is designed and incorporated into the original FDK algorithm as a constraint condition for data compensation in the region far from the rotation trajectory, so as to enlarge the reconstruction region. Reconstruction experiments are conducted on simulated projections with and without noise, as well as on real projections from a self-developed DSA scanner. The results show that the proposed algorithm is clearly superior to the Parker-FDK algorithm in suppressing cone beam artifacts for large-cone-angle projections; compared with Parker-FDK, its normalized mean square distance and normalized mean absolute distance criteria are both decreased by 5%.

  10. Image Reconstruction Algorithm Based on Compressed Sensing

    Institute of Scientific and Technical Information of China (English)

    史久根; 吴文婷; 刘胜

    2014-01-01

    There are some problems with typical gradient projection algorithms in the application of compressed sensing (CS), such as the large amount of calculation, the low efficiency of the convergence process, and excessive dependence on the sparsity of the data matrix. To deal with these problems, an efficient recovery algorithm is proposed. The algorithm, based on CS, combines the quasi-Newton method with the gradient projection method, making full use of the estimating and correcting procedure and the global superlinear convergence of the quasi-Newton method. By correcting the objective function with the quasi-Newton method, a more accurate search direction and fewer iterations are obtained, yielding an efficiently convergent CS reconstruction algorithm. Experimental results show that the algorithm has good reconstruction and anti-noise performance. Compared with traditional gradient projection recovery methods, the proposed method lowers the error rate and produces a more stable, convergent reconstruction with fewer iterations.
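
    One generic way to combine a quasi-Newton method with sparse recovery is to run L-BFGS on the least-squares term plus a smoothed l1 penalty. The Python sketch below is that generic variant, a stand-in for the paper's algorithm, with the smoothing eps and weight tau as assumptions:

    import numpy as np
    from scipy.optimize import minimize

    def qn_cs_recover(Phi, y, tau=1e-3, eps=1e-8):
        # sqrt(x^2 + eps) is a differentiable surrogate for |x|, so the
        # quasi-Newton solver sees a smooth objective and exact gradient.
        def f_and_g(x):
            r = Phi @ x - y
            pen = np.sqrt(x ** 2 + eps)
            return 0.5 * r @ r + tau * pen.sum(), Phi.T @ r + tau * x / pen
        x0 = Phi.T @ y                      # cheap starting point
        return minimize(f_and_g, x0, jac=True, method='L-BFGS-B').x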

  11. Reconstruction Algorithms in Undersampled AFM Imaging

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Oxvig, Christian Schou; Pedersen, Patrick Steffen

    2016-01-01

    This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reason for using undersampling is that it reduces the path length and thereby the s...

  12. Microscopic Image 3D Reconstruction Based on Laplace Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡致杰; 何晓昀

    2015-01-01

    The observation and analysis of the three-dimensional surface of microscopic objects is increasingly important in fields such as industry, medicine, art, and computing. Aiming at fast, cheap, and non-contact 3D reconstruction of microscopic object surfaces, we propose an efficient algorithm based on an improved Laplace operator. The algorithm evaluates focus, obtains focus height, and reconstructs 3D shape by calculating the improved Laplace operator value of each pixel, extending the morphological observation and analysis of microscopic objects from 2D to 3D. A large number of experimental results and user-experience feedback demonstrate the effectiveness and practical value of the algorithm.

  13. Wavefront reconstruction algorithm based on Legendre polynomials for radial shearing interferometry over a square area and error analysis.

    Science.gov (United States)

    Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai

    2015-08-10

    Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstruct the distorted wavefront under test of a laser beam over a square area from the phase-difference data obtained by an RSI system. Simulation and experimental results verify the reliability of the proposed method. The formula for the error-propagation coefficients is deduced for the case where the phase-difference data of the overlapping area contain random noise. A matrix T is proposed that can be used to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing; the magnitude of this impact can be estimated by calculating the Frobenius norm of T. In addition, the relationships between shear ratio, sampling points, number of polynomial terms, and noise-propagation coefficients, and between shear ratio, sampling points, and the norm of the T matrix, are both analyzed. These results provide a theoretical reference for the optimized design of radial shearing interferometry systems.
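
    Fitting Legendre coefficients over a square by least squares is directly supported by NumPy. The small sketch below shows that step on synthetic data (not the RSI pipeline, and the degree choice is an assumption):

    import numpy as np
    from numpy.polynomial.legendre import legvander2d

    def fit_legendre(x, y, w, deg=(4, 4)):
        # Build the 2-D Legendre design matrix on [-1, 1]^2 and solve
        # the least-squares system for the coefficient grid.
        V = legvander2d(x, y, deg)
        c, *_ = np.linalg.lstsq(V, w, rcond=None)
        return c.reshape(deg[0] + 1, deg[1] + 1)

    rng = np.random.default_rng(2)
    x, y = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
    truth = 0.5 * (1.5 * x ** 2 - 0.5) + 0.2 * x * y   # 0.5*P2(x) + 0.2*P1(x)P1(y)
    c = fit_legendre(x, y, truth + 0.01 * rng.normal(size=500))
    print(np.round(c[2, 0], 3), np.round(c[1, 1], 3))  # ~0.5 and ~0.2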

  14. Dynamic Data Updating Algorithm for Image Superresolution Reconstruction

    Institute of Scientific and Technical Information of China (English)

    TAN Bing; XU Qing; ZHANG Yan; XING Shuai

    2006-01-01

    A dynamic data updating algorithm for image superresolution is proposed. On the basis of Delaunay triangulation and its local updating property, this algorithm can update the changed region directly when only a part of the source images has changed. Owing to its high efficiency and adaptability, this algorithm can serve as a fast algorithm for image superresolution reconstruction.

  15. Ptychographic reconstruction algorithm for frequency resolved optical gating: super-resolution and supreme robustness

    CERN Document Server

    Sidorenko, Pavel; Avnat, Zohar; Cohen, Oren

    2016-01-01

    Frequency-resolved optical gating (FROG) is probably the most popular technique for complete characterization of ultrashort laser pulses. In FROG, a reconstruction algorithm retrieves the pulse from a measured spectrogram, yet current FROG reconstruction algorithms require and exhibit several restricting features that weaken FROG performance. For example, the delay step must correspond to the spectral bandwidth measured with large enough SNR, a condition that limits the temporal resolution of the reconstructed pulse, obscures measurements of weak broadband pulses, and makes measurement of broadband mid-IR pulses hard and slow because the spectrograms become huge. We develop a new approach for FROG reconstruction, based on ptychography (a scanning coherent diffraction imaging technique), that removes many of the algorithmic restrictions. The ptychographic reconstruction algorithm is significantly faster and more robust to noise than current FROG algorithms, which are based on generalized projections (GP). We d...

  16. Comparison of image quality from filtered back projection, statistical iterative reconstruction, and model-based iterative reconstruction algorithms in abdominal computed tomography.

    Science.gov (United States)

    Kuo, Yu; Lin, Yi-Yang; Lee, Rheun-Chuan; Lin, Chung-Jung; Chiou, Yi-You; Guo, Wan-Yuo

    2016-08-01

    The purpose of this study was to compare the image-noise-reducing abilities of iterative model reconstruction (IMR) with those of traditional filtered back projection (FBP) and statistical iterative reconstruction (IR) in abdominal computed tomography (CT) images. This institutional review board-approved retrospective study enrolled 103 patients; informed consent was waived. The urinary bladder (n = 83) and renal cysts (n = 44) were used as targets for evaluating imaging quality. Raw data were retrospectively reconstructed using FBP, statistical IR, and IMR. Objective image noise and signal-to-noise ratio (SNR) were calculated and analyzed using one-way analysis of variance. Subjective image quality was evaluated and analyzed using the Wilcoxon signed-rank test with Bonferroni correction. Objective analysis revealed a reduction in image noise for statistical IR compared with that for FBP, with no significant differences in SNR. In the urinary bladder group, IMR achieved up to 53.7% noise reduction, demonstrating a superior performance to that of statistical IR. IMR also yielded a significantly superior SNR to that of statistical IR. Similar results were obtained in the cyst group. Subjective analysis revealed reduced image noise for IMR, without inferior margin delineation or diagnostic confidence. IMR reduced noise and increased SNR to greater degrees than did FBP and statistical IR. Applying the IMR technique to abdominal CT imaging has potential for reducing the radiation dose without sacrificing imaging quality.

  17. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome, and a maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) algorithm with a circular forward model (GR(C)); and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep-inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index; and (e) ventilation delay in mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The algorithms used for EIT image reconstruction therefore do not influence the selected indices derived from EIT image analysis, and indices validated for images from one reconstruction algorithm are also valid for the other reconstruction algorithms.

  18. Fusion of sparse reconstruction algorithms for multiple measurement vectors

    Indian Academy of Sciences (India)

    K G DEEPA; SOORAJ K AMBAT; K V S HARI

    2016-11-01

    We consider the recovery of sparse signals that share a common support from multiple measurement vectors. The performance of several algorithms developed for this task depends on parameters like the dimension of the sparse signal, the dimension of the measurement vector, the sparsity level, and the measurement noise. We propose a fusion framework in which several multiple-measurement-vector reconstruction algorithms participate, and the final signal estimate is obtained by combining the signal estimates of the participating algorithms. We present the conditions for achieving a better reconstruction performance than the participating algorithms. Numerical simulations demonstrate that the proposed fusion algorithm often performs better than the participating algorithms.

  19. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    Science.gov (United States)

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-05-01

    Iterative reconstruction algorithms are routinely used for clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models the widespread use of time-of-flight (TOF) scanners with less sensitivity to noise and data imperfections make analytic algorithms even more promising. In our previous work we have developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.

  1. Array antenna diagnostics with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Meincke, Peter; Pivnenko, Sergey;

    2012-01-01

    The 3D reconstruction algorithm is applied to a slotted waveguide array measured at the DTU-ESA Spherical Near-Field Antenna Test Facility. One slot of the array is covered by conductive tape and an error is present in the array excitation. Results show the accuracy obtainable by the 3D reconstruction algorithm. Considerations on the measurement sampling, the obtainable spatial resolution, and the possibility of taking full advantage of the reconstruction geometry are provided.

  2. A new iterative algorithm for reconstructing a signal from its dyadic wavelet transform modulus maxima

    Institute of Scientific and Technical Information of China (English)

    张茁生; 刘贵忠; 刘峰

    2003-01-01

    A new algorithm for reconstructing a signal from its wavelet transform modulus maxima is presented, based on an iterative method for solving monotone operator equations in Hilbert spaces. The algorithm's convergence is proved. Numerical simulations for different types of signals are given. The results indicate that, compared with Mallat's alternate projection method, the proposed algorithm is simpler, faster and more effective.

  3. A combined reconstruction algorithm for computerized ionospheric tomography

    Science.gov (United States)

    Wen, D. B.; Ou, J. K.; Yuan, Y. B.

    Ionospheric electron density profiles inverted by tomographic reconstruction of GPS-derived total electron content (TEC) measurements have the potential to become a tool to quantify ionospheric variability and investigate ionospheric dynamics. The problem of reconstructing ionospheric electron density from GPS receiver-to-satellite TEC measurements is formulated as an ill-posed discrete linear inverse problem. A combined reconstruction algorithm for computerized ionospheric tomography (CIT) is proposed in this paper. In this algorithm, Tikhonov regularization theory (TRT) is exploited to solve the ill-posed problem, and its estimate from GPS observation data is input as the initial guess of a simultaneous iterative reconstruction technique (SIRT). The combined algorithm offers a more reasonable way to choose the initial guess of SIRT, while the SIRT iterations improve the quality of the final reconstructed image. Numerical experiments on actual GPS observation data validate the reliability of the method: the reconstructed results show that the new algorithm works reasonably and effectively for CIT, and the overall reconstruction error is reduced significantly compared with SIRT only or TRT only.
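
    The TRT-then-SIRT combination reads naturally in code. The dense toy sketch below illustrates it in Python; the regularization weight and relaxation factor are assumptions, and a Cimmino-style SIRT sweep stands in for the paper's exact variant:

    import numpy as np

    def trt_then_sirt(A, y, alpha=1.0, n_iter=50, relax=0.5):
        # Tikhonov solve supplies the initial electron-density guess ...
        n = A.shape[1]
        x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
        # ... which simultaneous iterative sweeps then refine.
        row = np.maximum((A ** 2).sum(axis=1), 1e-12)
        for _ in range(n_iter):
            x += relax * A.T @ ((y - A @ x) / row) / A.shape[0]
        return x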

  4. Efficient algorithms for reconstructing gene content by co-evolution

    Directory of Open Access Journals (Sweden)

    Tuller Tamir

    2011-10-01

    Full Text Available Abstract Background In a previous study we demonstrated that co-evolutionary information can be utilized for improving the accuracy of ancestral gene content reconstruction. To this end, we defined a new computational problem, the Ancestral Co-Evolutionary (ACE) problem, and developed algorithms for solving it. Results In the current paper we generalize our previous study in various ways. First, we describe new efficient computational approaches for solving the ACE problem, based on reductions to classical methods such as linear programming relaxation, quadratic programming, and min-cut. Second, we report new computational hardness results related to the ACE problem, including practical cases where it can be solved in polynomial time. Third, we generalize the ACE problem and demonstrate how our approach can be used for inferring parts of the genomes of non-ancestral organisms. To this end, we describe a heuristic for finding the portion of the genome (the 'dominant set') that can be used to reconstruct the rest of the genome with the lowest error rate. This heuristic utilizes both evolutionary and co-evolutionary information. We implemented these algorithms on a large input of the ACE problem (95 unicellular organisms, 4,873 protein families, and 10,576 co-evolutionary relations), demonstrating that some of these algorithms can outperform the algorithm used in our previous study. In addition, we show that based on our approach a 'dominant set' can be used to reconstruct a major fraction of a genome (up to 79%) with a relatively low error rate (e.g. 0.11). We find that the 'dominant set' tends to include metabolic and regulatory genes with high evolutionary rates, low protein abundance, and few protein-protein interactions. Conclusions The ACE problem can be efficiently extended for inferring the genomes of organisms that exist today. In addition, it may be solved in polynomial time in many practical cases.

  5. Performance of the ATLAS primary vertex reconstruction algorithms

    CERN Document Server

    Zhang, Matt

    2017-01-01

    The reconstruction of primary vertices in the busy, high-pile-up environment of the LHC is a challenging task. The challenges and novel methods developed by the ATLAS experiment to reconstruct vertices in such environments will be presented. Advances in vertex seeding, including methods taken from medical imaging that allow for the reconstruction of very nearby vertices, will be highlighted. The performance of the current vertexing algorithms using early Run-2 data will be presented and compared to results from simulation.

  6. Electron tomography based on a total variation minimization reconstruction technique

    Energy Technology Data Exchange (ETDEWEB)

    Goris, B., E-mail: bart.goris@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van den Broek, W. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Heidari Mezerji, H.; Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)

    2012-02-15

    The 3D reconstruction of a tilt series for electron tomography is mostly carried out using the weighted backprojection (WBP) algorithm or using one of the iterative algorithms such as the simultaneous iterative reconstruction technique (SIRT). However, it is known that these reconstruction algorithms cannot compensate for the missing wedge. Here, we apply a new reconstruction algorithm for electron tomography, which is based on compressive sensing. This is a field in image processing specialized in finding a sparse solution or a solution with a sparse gradient to a set of ill-posed linear equations. Therefore, it can be applied to electron tomography where the reconstructed objects often have a sparse gradient at the nanoscale. Using a combination of different simulated and experimental datasets, it is shown that missing wedge artefacts are reduced in the final reconstruction. Moreover, it seems that the reconstructed datasets have a higher fidelity and are easier to segment in comparison to reconstructions obtained by more conventional iterative algorithms. Highlights: • A reconstruction algorithm for electron tomography is investigated based on total variation minimization. • Missing wedge artefacts are reduced by this algorithm. • The reconstruction is easier to segment. • More reliable quantitative information can be obtained.
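    The following is a minimal sketch of TV-regularized tomographic reconstruction in the spirit described above (an illustration under assumptions, not the paper's implementation): a SIRT-like data-consistency step alternates with gradient-descent steps on a smoothed total-variation term. A, b, and the image shape are assumed inputs.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    norm = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / norm, gy / norm
    # Negative divergence of the normalized gradient field.
    dx = px - np.roll(px, 1, axis=0); dx[0, :] = px[0, :]
    dy = py - np.roll(py, 1, axis=1); dy[:, 0] = py[:, 0]
    return -(dx + dy)

def tv_reconstruct(A, b, shape, n_outer=20, n_tv=5, step=0.1):
    """Alternate a SIRT-like data step with TV descent steps (ASD-POCS-style)."""
    x = np.zeros(shape)
    row = np.maximum(A.sum(axis=1), 1e-12)
    col = np.maximum(A.sum(axis=0), 1e-12)
    for _ in range(n_outer):
        x += ((A.T @ ((b - A @ x.ravel()) / row)) / col).reshape(shape)
        for _ in range(n_tv):
            x -= step * tv_gradient(x)
    return x
```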

  7. A Human-Vision-Based Algorithm for Curve Reconstruction

    Institute of Scientific and Technical Information of China (English)

    蔡大伟; 刘勇; 邱芹军; 曹涛

    2014-01-01

    Curve and surface reconstruction is an important problem in reverse engineering. Following the progression from points to curves to surfaces in computer graphics, curve reconstruction is a key step that lays the foundation for subsequent surface reconstruction. This paper studies and implements a curve reconstruction algorithm that incorporates the proximity and good-continuation principles of human vision into the reconstruction process. Experimental results demonstrate the effectiveness of the algorithm.

  8. Comparison study of typical algorithms for reconstructing time series from the recurrence plot of dynamical systems

    Institute of Scientific and Technical Information of China (English)

    Liu Jie; Shi Shu-Ting; Zhao Jun-Chan

    2013-01-01

    The three most widely used methods for reconstructing the underlying time series from the recurrence plots (RPs) of a dynamical system are compared with each other in this paper. We aim to reconstruct a toy series, a periodic series, a random series, and a chaotic series to compare the effectiveness of the most widely used typical methods in terms of signal correlation analysis. The application of the most effective algorithm to the typical chaotic Lorenz system verifies its correctness. It is verified that, based on the unthresholded RPs, one can reconstruct the original attractor by choosing different RP thresholds based on the Hirata algorithm. It is shown that, in real applications, it is possible to reconstruct the underlying dynamics by using quite little information from observations of real dynamical systems. Moreover, rules for choosing the threshold in the algorithm are also suggested.

  9. Research on Image Reconstruction Algorithms for Tuber Electrical Resistance Tomography System

    Directory of Open Access Journals (Sweden)

    Jiang Zili

    2016-01-01

    Full Text Available The application of electrical resistance tomography (ERT) technology has been expanded to the field of agriculture, and the concept of TERT (Tuber Electrical Resistance Tomography) is proposed. On the basis of research on the forward and inverse problems of the TERT system, a hybrid algorithm based on a genetic algorithm is proposed, which can be used in a TERT system to monitor the growth status of plant tubers. Image reconstruction for TERT differs from that of a conventional ERT system for two-phase-flow measurement: TERT imaging needs more precise measurement, whereas conventional ERT cares more about reconstruction speed. A variety of algorithms are analyzed and optimized to make them suitable for the TERT system, for example linear back projection, modified Newton-Raphson, and genetic algorithms. Experimental results showed that the novel hybrid algorithm is superior to the other algorithms and can effectively improve image reconstruction quality.

  10. Convergence of Algorithms for Reconstructing Convex Bodies and Directional Measures

    DEFF Research Database (Denmark)

    Gardner, Richard; Kiderlen, Markus; Milanfar, Peyman

    2006-01-01

    We investigate algorithms for reconstructing a convex body K in Rn from noisy measurements of its support function or its brightness function in k directions u1, . . . , uk. The key idea of these algorithms is to construct a convex polytope Pk whose support function (or brightness function) best ...

  11. Convergence rate calculation of simultaneous iterative reconstruction technique algorithm for diffuse optical tomography image reconstruction: A feasibility study

    Science.gov (United States)

    Chuang, Ching-Cheng; Tsai, Jui-che; Chen, Chung-Ming; Yu, Zong-Han; Sun, Chia-Wei

    2012-04-01

    Diffuse optical tomography (DOT) is an emerging technique for functional biological imaging, and its imaging quality depends on the reconstruction algorithm. The simultaneous iterative reconstruction technique (SIRT) has been widely used for DOT image reconstruction, but there is no criterion for truncating the iteration based on any kind of residual parameter; the number of iteration loops is usually decided by an experimental rule. This work presents a convergence-rate (CR) calculation that can help optimize SIRT. Four inhomogeneities with various shapes of absorption distributions are simulated as imaging targets, and the images are reconstructed and analyzed with SIRT. To balance time consumption against imaging accuracy in the reconstruction process, the number of iteration loops needs to be optimized with a criterion in the algorithm, namely that the root mean square error (RMSE) should be minimized within a limited number of iterations. For clinical applications of DOT, the RMSE cannot be obtained because the measured targets are unknown; therefore, the correlations between the RMSE and the CR of the SIRT algorithm are analyzed in this paper. The simulation results show that the CR reveals the related RMSE value of the reconstructed images, so the CR offers an optimized stopping criterion for the SIRT iteration in DOT imaging: a threshold value of CR (CRT) can provide an optimized number of iteration steps for image reconstruction. This paper shows the feasibility of the CR criterion for SIRT in simulation; its use in clinical DOT measurement requires further investigation.
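    The abstract does not spell out the CR formula, so the sketch below uses the relative change between successive iterates as one common convergence-rate proxy; the paper's exact CR definition and threshold may differ.

```python
import numpy as np

def sirt_with_stopping(A, b, cr_threshold=1e-3, max_iter=500):
    """SIRT with a convergence-rate stopping rule: stop when the relative
    change between successive iterates drops below a threshold."""
    row = np.maximum(A.sum(axis=1), 1e-12)
    col = np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        x_new = x + (A.T @ ((b - A @ x) / row)) / col
        cr = np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-12)
        x = x_new
        if cr < cr_threshold:
            break
    return x, k + 1   # reconstruction and number of iterations used
```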

  12. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low

  13. Testing the Landscape Reconstruction Algorithm for spatially explicit reconstruction of vegetation in northern Michigan and Wisconsin

    Science.gov (United States)

    Sugita, Shinya; Parshall, Tim; Calcote, Randy; Walker, Karen

    2010-09-01

    The Landscape Reconstruction Algorithm (LRA) overcomes some of the fundamental problems in pollen analysis for quantitative reconstruction of vegetation. LRA first uses the REVEALS model to estimate regional vegetation using pollen data from large sites and then the LOVE model to estimate vegetation composition within the relevant source area of pollen (RSAP) at small sites by subtracting the background pollen estimated from the regional vegetation composition. This study tests LRA using training data from forest hollows in northern Michigan (35 sites) and northwestern Wisconsin (43 sites). In northern Michigan, surface pollen from 152-ha and 332-ha lakes is used for REVEALS. Because of the lack of pollen data from large lakes in northwestern Wisconsin, we use pollen from 21 hollows randomly selected from the 43 sites for REVEALS. RSAP indirectly estimated by LRA is comparable to the expected value in each region. A regression analysis and permutation test validate that the LRA-based vegetation reconstruction is significantly more accurate than pollen percentages alone in both regions. Even though the site selection in northwestern Wisconsin is not ideal, the results are robust. The LRA is a significant step forward in quantitative reconstruction of vegetation.

  14. A study of image reconstruction algorithms for hybrid intensity interferometers

    Science.gov (United States)

    Crabtree, Peter N.; Murray-Krezan, Jeremy; Picard, Richard H.

    2011-09-01

    Phase retrieval is explored for image reconstruction using outputs from both a simulated intensity interferometer (II) and a hybrid system that combines the II outputs with partially resolved imagery from a traditional imaging telescope. Partially resolved imagery provides an additional constraint for the iterative phase retrieval process, as well as an improved starting point. The benefits of this additional a priori information are explored and include lower residual phase error for SNR values above 0.01, increased sensitivity, and improved image quality. Results are also presented for image reconstruction from II measurements alone, via current state-of-the-art phase retrieval techniques. These results are based on the standard hybrid input-output (HIO) algorithm, as well as a recent enhancement to HIO that optimizes step lengths in addition to step directions. The additional step length optimization yields a reduction in residual phase error, but only for SNR values greater than about 10. Image quality for all algorithms studied is quite good for SNR>=10, but it should be noted that the studied phase-recovery techniques yield useful information even for SNRs that are much lower.
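    For context, the standard hybrid input-output iteration looks like the following sketch (the step-length-optimized variant and the hybrid partially-resolved-image constraint mentioned above are not reproduced; the measured Fourier modulus array and boolean support mask are assumed inputs).

```python
import numpy as np

def hio(modulus, support, n_iter=500, beta=0.9, seed=0):
    """Standard hybrid input-output (HIO) phase retrieval: enforce the
    measured Fourier modulus, then relax pixels that violate the object
    constraints (outside the support, or negative)."""
    rng = np.random.default_rng(seed)
    g = rng.random(modulus.shape)
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # Keep the current phase, impose the measured modulus.
        g_prime = np.fft.ifft2(modulus * np.exp(1j * np.angle(G))).real
        violated = ~support | (g_prime < 0)
        g = np.where(violated, g - beta * g_prime, g_prime)
    return g
```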

  15. The SRT reconstruction algorithm for semiquantification in PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kastis, George A., E-mail: gkastis@academyofathens.gr [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Samartzis, Alexandros P. [Nuclear Medicine Department, Evangelismos General Hospital, Athens 10676 (Greece); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA, United Kingdom and Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece)

    2015-10-15

    Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of {sup 18}F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computer tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT

  16. A new jet reconstruction algorithm for lepton colliders

    CERN Document Server

    Boronat, Marça; Vos, Marcel

    2014-01-01

    We propose a new sequential jet reconstruction algorithm for future lepton colliders at the energy frontier. The Valencia algorithm combines the natural distance criterion for lepton colliders with the greater robustness against backgrounds of algorithms adapted to hadron colliders. Results from a detailed Monte Carlo simulation of $t\bar{t}$ and $ZZ$ production at future linear $e^+e^-$ colliders (ILC and CLIC), with a realistic level of background overlaid, show that it achieves better performance in the presence of background.
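    For orientation, a generic sequential-recombination skeleton of the kind such algorithms share is sketched below (a toy illustration, not the Valencia implementation). The pairwise and beam distance functions are supplied by the caller; the Valencia distances themselves (reportedly of the form d_ij = min(E_i^{2β}, E_j^{2β})(1 - cos θ_ij)/R², with a matching beam distance) are defined in the paper, and that form should be treated as an assumption here.

```python
import numpy as np

def sequential_cluster(particles, dist_ij, dist_beam):
    """Generic sequential recombination: repeatedly merge the closest pair,
    or promote a particle to a jet when its beam distance is smallest.
    particles: iterable of four-momenta (E, px, py, pz)."""
    jets, parts = [], [np.asarray(p, float) for p in particles]
    while parts:
        n = len(parts)
        ib = min(range(n), key=lambda i: dist_beam(parts[i]))
        best = min(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: dist_ij(parts[ij[0]], parts[ij[1]]),
                   default=None)
        if best is None or dist_beam(parts[ib]) < dist_ij(parts[best[0]], parts[best[1]]):
            jets.append(parts.pop(ib))          # beam distance wins: finalize a jet
        else:
            i, j = best
            merged = parts[i] + parts[j]        # E-scheme four-momentum recombination
            parts = [p for k, p in enumerate(parts) if k not in (i, j)] + [merged]
    return jets
```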

  17. A Fast Compressed Sensing Reconstruction Algorithm Based on Inner Product Optimization

    Institute of Scientific and Technical Information of China (English)

    刘勇; 魏东红; 毛京丽

    2013-01-01

    The existing reconstruction algorithms in compressed sensing (CS) theory commonly cost too much time. A novel reconstruction algorithm based on inner product optimization is proposed to reduce reconstruction time, and a stopping criterion for the iteration is derived theoretically. During reconstruction, the proposed algorithm computes the inner product between the measurement matrix and the residual only in the first iteration; in the remaining iterations, vector operations replace the matrix inner products. After the iterations stop, a single least-squares calculation reconstructs the signal. Experiments show that the proposed algorithm greatly reduces reconstruction time without degrading the quality of the reconstructed signal.
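    The abstract does not give the exact update rule, so the sketch below shows the general idea with a matching-pursuit-style loop: the correlation vector is updated from a precomputed Gram matrix (one vector operation per iteration), and a single least-squares solve runs at the end. Columns of Phi are assumed unit-norm.

```python
import numpy as np

def fast_mp(Phi, y, k):
    """Matching-pursuit-style reconstruction with cached correlations:
    Phi.T @ residual is computed once; later iterations update it with a
    single Gram column instead of a matrix product."""
    G = Phi.T @ Phi                    # Gram matrix, precomputed once
    corr = Phi.T @ y                   # the only matrix-residual inner product
    support = []
    for _ in range(k):
        idx = int(np.argmax(np.abs(corr)))
        support.append(idx)
        corr = corr - corr[idx] * G[:, idx]   # vector update of correlations
    sel = sorted(set(support))
    x = np.zeros(Phi.shape[1])
    x[sel] = np.linalg.lstsq(Phi[:, sel], y, rcond=None)[0]  # one LS at the end
    return x
```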

  18. Digital ridgelet reconstruction based on local dual frame

    Institute of Scientific and Technical Information of China (English)

    BAI Jian; FENG Xiangchu

    2005-01-01

    A global dual frame (GDF) representation for the digital ridgelet reconstruction algorithm is discussed, and a novel concept of a local dual frame (LDF) is presented. Based on the properties of the LDF, we propose a new digital ridgelet reconstruction algorithm. The method reduces the redundancy in digital ridgelet reconstruction while keeping its low computational cost. When applied to image compression and denoising, good results are obtained.

  19. High-Quality 3D Reconstruction of Spine CT Images Based on the MC Algorithm

    Institute of Scientific and Technical Information of China (English)

    许婉露; 李彬; 田联房

    2013-01-01

    Reconstructing a 3D model of the spine from its CT images to provide intuitive preoperative lesion information can effectively assist highly difficult spine-deformity corrective surgery. Since the traditional marching cubes (MC) algorithm suffers from rough reconstructed surfaces and topological ambiguity, and produces too many fragments when reconstructing the human spine, this paper proposes an improved MC algorithm based on edge-preserving local Gaussian filtering and 3D region growing. The algorithm uses edge-preserving filtering to eliminate noise and enhance edges, and local Gaussian filtering to smooth the regions to be reconstructed, changing the original cube types and reducing the number of ambiguous voxel pairs; this effectively resolves the problems of rough reconstructed surfaces and topological ambiguity. A dual-threshold segmentation algorithm based on 3D region growing is applied, which significantly reduces the number of reconstructed bone fragments. Experimental results demonstrate that the 3D spine model produced by this high-quality reconstruction algorithm can serve the purpose of medical 3D visualization.

  20. Algorithms and software for total variation image reconstruction via first-order methods

    DEFF Research Database (Denmark)

    Dahl, Joahim; Hansen, Per Christian; Jensen, Søren Holdt

    2010-01-01

    This paper describes new algorithms and related software for total variation (TV) image reconstruction, more specifically: denoising, inpainting, and deblurring. The algorithms are based on one of Nesterov's first-order methods, tailored to the image processing applications in such a way that...

  1. Advanced reconstruction algorithms for electron tomography: from comparison to combination.

    Science.gov (United States)

    Goris, B; Roelandts, T; Batenburg, K J; Heidari Mezerji, H; Bals, S

    2013-04-01

    In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared and the advantages and disadvantages are discussed. Furthermore, we describe how the result of a three dimensional (3D) reconstruction based on TVM can provide objective information that is needed as the input for a DART reconstruction. This approach results in a tomographic reconstruction of which the segmentation is carried out in an objective manner. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. New vertex reconstruction algorithms for CMS

    CERN Document Server

    Frühwirth, R; Prokofiev, Kirill; Speer, T.; Vanlaer, P.; Chabanat, E.; Estre, N.

    2003-01-01

    The reconstruction of interaction vertices can be decomposed into a pattern recognition problem ("vertex finding") and a statistical problem ("vertex fitting"). We briefly review classical methods. We introduce novel approaches and motivate them in the framework of high-luminosity experiments like those at the LHC. We then show comparisons with the classical methods in relevant physics channels.

  3. An Algorithm for Corneal Reconstruction Based on Precise Location of the Corneal Center

    Institute of Scientific and Technical Information of China (English)

    周洪亚; 沈建新; 高绍雷; 唐志豪

    2011-01-01

    The Placido disk is widely used in corneal topography. To solve the problem that the corneal center cannot be precisely located in Placido-based corneal topography systems, an algorithm for corneal reconstruction based on the Placido disk is introduced. The key of this method is calculating the radius of curvature at the corneal vertex using the data of the innermost ring of the image. Based on the image analysis results, the radius of curvature at the corneal vertex is calculated iteratively by connecting the vertex and the first reflected ring with a circular arc; the positions and refractive powers of all reflected points are then computed, and a pseudo-color map of the real human eye is created. The cornea was simulated with standard steel spheres, and the calculation errors of the results were all below 0.25 D, showing that the algorithm achieves relatively accurate powers with fair stability.

  4. Statistical reconstruction algorithms for continuous wave electron spin resonance imaging

    Science.gov (United States)

    Kissos, Imry; Levit, Michael; Feuer, Arie; Blank, Aharon

    2013-06-01

    Electron spin resonance imaging (ESRI) is an important branch of ESR that deals with heterogeneous samples ranging from semiconductor materials to small live animals and even humans. ESRI can produce either spatial images (providing information about the spatially dependent radical concentration) or spectral-spatial images, where an extra dimension is added to describe the absorption spectrum of the sample (which can also be spatially dependent). The mapping of oxygen in biological samples, often referred to as oximetry, is a prime example of an ESRI application. ESRI suffers frequently from a low signal-to-noise ratio (SNR), which results in long acquisition times and poor image quality. A broader use of ESRI is hampered by this slow acquisition, which can also be an obstacle for many biological applications where conditions may change relatively quickly over time. The objective of this work is to develop an image reconstruction scheme for continuous wave (CW) ESRI that would make it possible to reduce the data acquisition time without degrading the reconstruction quality. This is achieved by adapting the so-called "statistical reconstruction" method, recently developed for other medical imaging modalities, to the specific case of CW ESRI. Our new algorithm accounts for unique ESRI aspects such as field modulation, spectral-spatial imaging, and possible limitation on the gradient magnitude (the so-called "limited angle" problem). The reconstruction method shows improved SNR and contrast recovery vs. commonly used back-projection-based methods, for a variety of simulated synthetic samples as well as in actual CW ESRI experiments.

  5. A New Algorithm for Reconstructing Two-Dimensional Temperature Distribution by Ultrasonic Thermometry

    Directory of Open Access Journals (Sweden)

    Xuehua Shen

    2015-01-01

    Full Text Available Temperature, especially temperature distribution, is one of the most fundamental and vital parameters for the theoretical study and control of various industrial applications. In this paper, ultrasonic thermometry for reconstructing temperature distribution is investigated, exploiting the dependence of ultrasound velocity on temperature. In practical applications of this ultrasonic technique, a reconstruction algorithm based on the least squares method is commonly used. However, it has the limitation that the number of blocks into which the measured area is divided cannot exceed the number of effective travel paths, which prevents it from offering sufficient temperature information. To make up for this defect, an improved reconstruction algorithm based on the least squares method and multiquadric interpolation is presented. Its reconstruction performance is then validated in numerical studies using four temperature distribution models of different complexity and compared with that of the least-squares-only algorithm. The comparison and analysis indicate that the algorithm presented in this paper has better reconstruction performance: the reconstructed temperature distributions do not lose information near the edges of the area while keeping errors small, and its mean reconstruction time is short enough to meet real-time demands.
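    A compact sketch of the two-stage scheme (an assumed reading of the abstract, not the authors' code): least squares recovers block-average slowness from path-integrated travel times, and a multiquadric radial basis function interpolates the coarse block values onto finer points. L, t, centers, query, and the shape parameter c are assumed inputs; converting sound speed to temperature via the medium's v(T) relation (for air, roughly v ≈ 20·√T with T in kelvin) is left out.

```python
import numpy as np

def speed_field(L, t, centers, query, c=1.0):
    """Stage 1: least-squares per-block slowness from the path-length matrix
    L (paths x blocks) and travel times t. Stage 2: multiquadric RBF
    interpolation of the block values onto the query points."""
    slowness, *_ = np.linalg.lstsq(L, t, rcond=None)
    v = 1.0 / slowness                               # block-average sound speed
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    w = np.linalg.solve(np.sqrt(d**2 + c**2), v)     # RBF weights
    dq = np.linalg.norm(query[:, None] - centers[None, :], axis=-1)
    return np.sqrt(dq**2 + c**2) @ w                 # interpolated field
```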

  6. Image reconstruction algorithm based on NSSN for Electrical Capacitance Tomography

    Institute of Scientific and Technical Information of China (English)

    李岩; 冯莉; 朱艳丹; 张礼勇

    2011-01-01

    To improve the image reconstruction algorithm of a 12-electrode electrical capacitance tomography (ECT) system, this paper presents an experimental study of gas/solid two-phase flow in closed containers, focusing on the stability and speed of image reconstruction. To keep the solving process stable while maintaining good computational performance, an image reconstruction algorithm based on a neural network with a new type of support-set-like basis function (NSSN) is applied to ECT image reconstruction for the first time. To address the slow training of large-scale neural networks, an improvement based on dividing the network into sub-networks is proposed. The system uses the 12-electrode capacitance tomography hardware to acquire data from gas-solid flow in a closed pipe and reconstructs images with the improved neural network algorithm. The experimental results show that the improved method compensates for the slow computation of large-scale neural networks, simplifies the network structure, and reduces the number of neurons, providing a new approach for ECT image reconstruction.

  7. Tomographic reconstructions using map algorithms - application to the SPIDR mission

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh Roy, D.N.; Wilton, K.; Cook, T.A.; Chakrabarti, S.; Qi, J.; Gullberg, G.T.

    2004-01-21

    The spectral image of an astronomical scene is reconstructed from noisy tomographic projections using maximum a posteriori (MAP) and filtered backprojection (FBP) algorithms. Both a maximum entropy (ME) prior and a Gibbs prior are used in the MAP reconstructions. The scene, which is a uniform background with a localized emissive source superimposed on it, is reconstructed for a broad range of source counts. The algorithms are compared regarding their ability to detect the source in the background. Detectability is defined in terms of a contrast-to-noise ratio (CNR), which is a Monte Carlo ensemble average of spatially averaged CNRs for the individual reconstructions. Overall, MAP was found to yield improved CNR relative to FBP. Moreover, as a function of the total source counts, the CNR varies distinctly between source and background regions. This may be important in separating a weak source from the background.

  8. Impact of an advanced image-based monoenergetic reconstruction algorithm on coronary stent visualization using third generation dual-source dual-energy CT: a phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Mangold, Stefanie [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Cannao, Paola M. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Milan, Scuola di Specializzazione in Radiodiagnostica, Milan (Italy); Schoepf, U.J. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Medical University of South Carolina, Division of Cardiology, Department of Medicine, Charleston, SC (United States); Wichmann, Julian L. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Canstein, Christian [Siemens Medical Solutions, Malvern, PA (United States); Fuller, Stephen R.; Varga-Szemes, Akos [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Muscogiuri, Giuseppe; De Cecco, Carlo N. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome 'Sapienza', Department of Radiological Sciences, Oncology and Pathology, Rome (Italy); Nikolaou, Konstantin [Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany)

    2016-06-15

    To evaluate the impact of an advanced monoenergetic (ME) reconstruction algorithm on CT coronary stent imaging in a phantom model. Three stents with lumen diameters of 2.25, 3.0, and 3.5 mm were examined with a third-generation dual-source dual-energy CT (DECT). Tube potential was set at 90/Sn150 kV for DE and 70, 90, or 120 kV for single-energy (SE) acquisitions, and advanced modelled iterative reconstruction was used. Overall, 23 reconstructions were evaluated for each stent, including three SE acquisitions and ten advanced and ten standard ME images with virtual photon energies from 40 to 130 keV. In-stent luminal diameter was measured and compared to the nominal lumen diameter to determine stent lumen visibility. Contrast-to-noise ratio was calculated. Advanced ME reconstructions substantially increased lumen visibility in comparison to SE for stents ≤3 mm. The 130 keV images produced the best mean lumen visibility: 86 % for the 2.25 mm stent (82 % for standard ME and 64 % for SE) and 82 % for the 3.0 mm stent (77 % for standard ME and 69 % for SE). Mean DLPs for the SE 120 kV and DE acquisitions were 114.4 ± 9.8 and 58.9 ± 2.2 mGy·cm, respectively. DECT with advanced ME reconstructions improves the in-lumen visibility of small stents in comparison with standard ME and SE imaging. (orig.)

  9. Stator Current Harmonics Evaluation by Flexible Neural Network Method With Reconstruction Structure During Learning Step Based On CFE/SS Algorithm for ACEC Generator of Rey Power Plant

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Yousefi

    2010-07-01

    Full Text Available One method for on-line fault detection in synchronous generators is stator current harmonics analysis. In this paper, a flexible neural network whose structure is reconstructed during learning is used to evaluate the stator current harmonics under different loads. Generator modelling, the finite element method, and a state-space model provide the training set of the flexible neural network; many points from the generator capability curve are used to complete this set. The flexible neural network used in this paper is a perceptron network with a single hidden layer, flexible hidden-layer neurons, and back-propagation learning. The results show that the trained flexible neural network can identify the stator current harmonics for a desired load from the capability curve, with an error of less than 10% compared to the values obtained directly from the CFE/SS algorithm. The parameters of the modelled generator are 43950 kVA, 11 kV, 3000 rpm, 50 Hz, PF = 0.5.

  10. Frequency domain simultaneous algebraic reconstruction techniques: algorithm and convergence

    Science.gov (United States)

    Wang, Jiong; Zheng, Yibin

    2005-03-01

    We propose a simultaneous algebraic reconstruction technique (SART) in the frequency domain for linear imaging problems. This algorithm has the advantage of efficiently incorporating pixel correlations in an a priori image model. First, it is shown that the generalized SART algorithm converges to the weighted minimum norm solution of a weighted least-squares problem. Then an implementation in the frequency domain is described. The performance of the new algorithm is demonstrated with fan-beam computed tomography (CT) examples. Compared to the traditional SART and its major alternative, ART, the new algorithm offers superior image quality and potential application to other modalities.
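    For reference, the classical (simultaneous) SART update can be written as (notation assumed here; the paper's contribution is carrying this out in the frequency domain with an a priori pixel-correlation model):

$$x^{(k+1)} = x^{(k)} + \lambda\, V^{-1} A^{T} W \bigl( b - A x^{(k)} \bigr), \qquad V = \operatorname{diag}\Bigl(\sum_i a_{ij}\Bigr), \quad W = \operatorname{diag}\Bigl(1 \big/ \sum_j a_{ij}\Bigr),$$

    where $A$ is the system matrix, $b$ the measured data, and $\lambda$ a relaxation parameter.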

  11. Algorithm for Compressed Sensing Data Reconstruction Based on Quantum-Inspired Immune Clone Optimization

    Institute of Scientific and Technical Information of China (English)

    祁浩; 刘洲洲

    2014-01-01

    An algorithm for compressed sensing data reconstruction, called Q-CSDR and based on the quantum-inspired immune clone algorithm, is proposed in this paper. Q-CSDR first increases the probability of successful data reconstruction through an adaptive framing method, and then exploits the excellent optimization performance of the quantum-inspired immune clone algorithm to reconstruct the data accurately. The experimental results show that the algorithm automatically adjusts the compression ratio according to the sparsity of the original data, reconstructs quickly and with high accuracy, and adapts well to the reconstruction of highly sparse data. It has been deployed in the field security system of the Emperor Qinshihuang's Mausoleum Site Museum with good results.

  12. Development of a new prior-knowledge-based image reconstruction algorithm for cone-beam CT imaging in radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Vaegler, Sven

    2016-07-08

    The treatment of cancer in radiation therapy is today achievable with techniques that enable highly conformal dose distributions and steep dose gradients. To avoid mistreatment, these irradiation techniques have necessitated enhanced patient localization techniques. With an X-ray tube integrated into modern linear accelerators, kV projections can be acquired over a sufficiently large angular range and reconstructed into a volumetric image data set of the patient's current anatomy prior to irradiation. This so-called cone-beam CT (CBCT) allows precise verification of patient positioning as well as adaptive radiotherapy. The benefit of improved patient positioning from daily CBCT scans is offset by an increased and non-negligible radiation exposure of the patient. To decrease this exposure, substantial research effort is focused on various dose reduction strategies, most prominently decreasing the charge per projection, reducing the number of projections, and reducing the acquisition space. Unfortunately, with the widely used Feldkamp-Davis-Kress (FDK) image reconstruction algorithm, these acquisition schemes lead to images of degraded quality. More sophisticated image reconstruction techniques can accommodate these dose-reduction strategies without degrading the image quality. A frequently investigated method is image reconstruction by minimizing the total variation (TV), which is also known as compressed sensing (CS). A compressed-sensing-based reconstruction framework that includes prior images in the reconstruction algorithm is the Prior-Image-Constrained Compressed Sensing (PICCS) algorithm. The images reconstructed by PICCS outperform the results of the conventional FDK-based method when only a small number of projections are available. However, a drawback of PICCS is that major deviations between prior image data sets and
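    For orientation, the PICCS objective as commonly stated in the CT literature (the exact variant used in this thesis may differ):

$$x^{\star} = \arg\min_{x}\ \alpha\, \mathrm{TV}(x - x_{p}) + (1 - \alpha)\, \mathrm{TV}(x) \quad \text{subject to} \quad A x = b,$$

    where $x_{p}$ is the prior image, $A$ the projection operator, $b$ the measured projections, and $\alpha \in [0,1]$ weights fidelity to the prior.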

  13. Efficient iterative image reconstruction algorithm for dedicated breast CT

    Science.gov (United States)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contain high levels of noise. Iterative image reconstruction (IIR) algorithms may be well suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.

  14. Fast algorithms for nonconvex compression sensing: MRI reconstruction from very few data

    Energy Technology Data Exchange (ETDEWEB)

    Chartrand, Rick [Los Alamos National Laboratory

    2009-01-01

    Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.

  15. Reconstruction algorithm for DRR medical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Estrada Espinosa, J. C.

    2013-07-01

    The digitally reconstructed radiograph (DRR) method is based on two orthogonal images, in the dorsal and lateral decubitus positions of the simulation. DRR images are reconstructed with an algorithm that simulates a conventional X-ray acquisition in which the emitted beam is not divergent; in this case, the rays are considered parallel in the DRR image reconstruction. For this purpose, it is necessary to use the Hounsfield unit (HU) values of each voxel in all the axial slices that form the CT study, finally obtaining the reconstructed DRR image by performing a transformation from 3D to 2D. (Author)
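    A minimal sketch of the parallel-ray scheme described above (not the author's implementation; the attenuation coefficient of water and the voxel pitch are illustrative assumptions): convert HU to linear attenuation, sum voxels along the ray direction, and apply Beer-Lambert attenuation. The two orthogonal DRRs then correspond to summing along two different axes of the volume.

```python
import numpy as np

def drr_parallel(volume_hu, axis=0):
    """Idealized parallel-beam DRR from a CT volume in Hounsfield units."""
    mu_water = 0.02                                    # 1/mm, illustrative
    mu = np.maximum(mu_water * (1.0 + volume_hu / 1000.0), 0.0)
    voxel_mm = 1.0                                     # assumed voxel pitch
    return np.exp(-mu.sum(axis=axis) * voxel_mm)       # transmitted intensity
```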

  16. Verification-Based Interval-Passing Algorithm for Compressed Sensing

    OpenAIRE

    Wu, Xiaofu; Yang, Zhen

    2013-01-01

    We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals using parity check matrices of low-density parity-check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm alone. Simulation resul...

  17. Parallel OSEM Reconstruction Algorithm for Fully 3-D SPECT on a Beowulf Cluster.

    Science.gov (United States)

    Rong, Zhou; Tianyu, Ma; Yongjie, Jin

    2005-01-01

    In order to improve the computation speed of the ordered subset expectation maximization (OSEM) algorithm for fully 3-D single photon emission computed tomography (SPECT) reconstruction, an experimental Beowulf-type cluster was built and several parallel reconstruction schemes are described. We implemented a single-program-multiple-data (SPMD) parallel 3-D OSEM reconstruction algorithm based on the message passing interface (MPI) and tested it with combinations of different numbers of calculating processors and different voxel grid sizes in reconstruction (64×64×64 and 128×128×128). The performance of the parallelization was evaluated in terms of the speedup factor and parallel efficiency. This parallel implementation methodology is expected to help make fully 3-D OSEM algorithms more feasible in clinical SPECT studies.
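    For reference, a toy serial version of the OSEM update (the standard multiplicative EM step applied one subset of projection rows at a time; the subset interleaving below is an assumption, real systems usually group by projection angle):

```python
import numpy as np

def osem(A, y, n_subsets=8, n_iter=2, eps=1e-12):
    """Ordered-subset EM for emission tomography.
    A: system matrix (rays x voxels), y: measured counts."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            ratio = y[rows] / np.maximum(As @ x, eps)      # measured/estimated
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), eps)
    return x
```

    In the SPMD/MPI setting described above, each process would presumably hold a block of rays and combine its partial backprojections with an all-reduce; that detail is beyond this sketch.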

  18. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET

    Science.gov (United States)

    Mikhaylova, E.; Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge; optimization is therefore needed to find the best algorithm in order to fully exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D filtered backprojection (FBP), ordered subset expectation maximization (OSEM), list-mode OSEM (LM-OSEM), and the origin ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm, achieved by calculating image quality merit parameters such as the bias, the variance, and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. A region-of-interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on a GAMOS [3] simulation including the expected CdTe and electronic specifics.

  19. Discrete Spectrum Reconstruction Using Integral Approximation Algorithm.

    Science.gov (United States)

    Sizikov, Valery; Sidorov, Denis

    2017-07-01

    An inverse problem in spectroscopy is considered. The objective is to restore the discrete spectrum from observed spectrum data, taking into account the spectrometer's line spread function. The problem is reduced to solution of a system of linear-nonlinear equations (SLNE) with respect to intensities and frequencies of the discrete spectral lines. The SLNE is linear with respect to lines' intensities and nonlinear with respect to the lines' frequencies. The integral approximation algorithm is proposed for the solution of this SLNE. The algorithm combines solution of linear integral equations with solution of a system of linear algebraic equations and avoids nonlinear equations. Numerical examples of the application of the technique, both to synthetic and experimental spectra, demonstrate the efficacy of the proposed approach in enabling an effective enhancement of the spectrometer's resolution.
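    In the notation assumed here, the observed spectrum $u(x)$ is a superposition of $n$ discrete lines of intensities $z_k$ at frequencies $\nu_k$, blurred by the spectrometer's line spread function $g$:

$$u(x) = \sum_{k=1}^{n} z_k\, g(x - \nu_k),$$

    which is linear in the $z_k$ and nonlinear in the $\nu_k$: this is the system of linear-nonlinear equations (SLNE) the abstract refers to.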

  20. A fast and accurate algorithm for diploid individual haplotype reconstruction.

    Science.gov (United States)

    Wu, Jingli; Liang, Binbin

    2013-08-01

    Haplotypes can provide significant information in many research fields, including molecular biology and medical therapy. However, haplotyping is much more difficult than genotyping using only biological techniques. With the development of sequencing technologies, it has become possible to obtain haplotypes by combining sequence fragments. The haplotype reconstruction problem for a diploid individual has received considerable attention in recent years: it assembles the two haplotypes of a chromosome given the collection of fragments coming from the two haplotypes. Fragment errors significantly increase the difficulty of the problem, which has been shown to be NP-hard. In this paper, a fast and accurate algorithm, named FAHR, is proposed for haplotyping a single diploid individual. The FAHR algorithm reconstructs the SNP sites of a pair of haplotypes one after another: the SNP fragments that cover a given SNP site are partitioned into two groups according to their alleles at that site, and the SNP values of the pair of haplotypes are ascertained using the fragments in the group that contains more SNP fragments. Experimental comparisons were conducted among the FAHR, Fast Hare, and DGS algorithms using the haplotypes on chromosome 1 of 60 individuals in the CEPH samples, which were released by the International HapMap Project. Experimental results under different parameter settings indicate that the reconstruction rate of the FAHR algorithm is higher, and its running time shorter, than those of the Fast Hare and DGS algorithms. Moreover, the FAHR algorithm remains efficient even for the reconstruction of long haplotypes and is very practical for realistic applications.
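    One plausible reading of the per-site scheme described above, written out as a sketch (assumptions: a fragment's group is fixed at its first covered site; error handling and the published FAHR details are omitted):

```python
import numpy as np

def assemble_haplotypes(F):
    """Greedy site-by-site diploid haplotype assembly (illustrative only).
    F: fragment matrix with entries 0/1 for alleles and -1 where a fragment
    does not cover a SNP site."""
    m, n = F.shape
    h1 = np.zeros(n, dtype=int)
    group = np.full(m, -1)                 # fragment -> haplotype 0 or 1
    for j in range(n):
        cov = np.where(F[:, j] >= 0)[0]
        for i in cov:                      # first covered site fixes a group
            if group[i] == -1:
                group[i] = F[i, j]
        g0 = F[cov[group[cov] == 0], j]    # alleles from group-0 fragments
        g1 = F[cov[group[cov] == 1], j]    # alleles from group-1 fragments
        # The larger group votes for this site; haplotype 2 is the complement.
        if g0.size >= g1.size and g0.size > 0:
            h1[j] = int(g0.mean() >= 0.5)
        elif g1.size > 0:
            h1[j] = 1 - int(g1.mean() >= 0.5)
    return h1, 1 - h1
```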

  1. Evaluation of the Bresenham algorithm for image reconstruction with ultrasound computer tomography

    Science.gov (United States)

    Spieß, Norbert; Zapf, Michael; Ruiter, Nicole V.

    2011-03-01

    At the Karlsruhe Institute of Technology, a 3D ultrasound computer tomography (USCT) system is under development for early breast cancer detection. With 3.5 million acquired raw data values and up to one billion voxels per image, the reconstruction of breast volumes at the highest possible resolution may last for weeks. The currently applied backprojection algorithm, based on the synthetic aperture focusing technique (SAFT), offers only limited potential for further reduction of the reconstruction time. An alternative reconstruction method could use signal-detected data and rasterize the backprojected ellipsoids directly. A well-known rasterization algorithm is the Bresenham algorithm, originally designed to rasterize lines. In this work, an existing Bresenham concept for rasterizing circles is extended to comply with the requirements of image reconstruction in USCT: the circle rasterization is adapted to rasterize spheres and extended to floating-point parameterization. The evaluation of the algorithm showed that the quality of the rasterization is comparable to that of the original algorithm. The achieved performance of the circle and sphere rasterization was 12 MVoxel/s and 3.5 MVoxel/s, respectively. Taking the performance increase due to the reduced A-scan data into account, an acceleration by a factor of 28 compared with the currently applied algorithm could be reached. For future work, the presented rasterization algorithm offers additional potential for further speed-up.
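    For reference, the classical integer midpoint ("Bresenham-style") circle rasterization that the paper extends to spheres and floating-point parameterization; this is the textbook variant, not the authors' extension:

```python
def rasterize_circle(cx, cy, r):
    """Integer midpoint circle rasterization: returns the raster points of a
    circle of radius r centred at (cx, cy), using eight-way symmetry."""
    pts, x, y = [], 0, r
    d = 1 - r                              # midpoint decision variable
    while x <= y:
        for sx, sy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pts.append((cx + sx, cy + sy))
        if d < 0:
            d += 2 * x + 3                 # midpoint inside: keep y
        else:
            d += 2 * (x - y) + 5           # midpoint outside: step y inward
            y -= 1
        x += 1
    return pts
```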

  2. Comparison with reconstruction algorithms in magnetic induction tomography.

    Science.gov (United States)

    Han, Min; Cheng, Xiaolin; Xue, Yuyan

    2016-05-01

    Magnetic induction tomography (MIT) is an imaging technology that uses the principle of electromagnetic detection to measure the conductivity distribution. In this research, we aim to improve the quality of image reconstruction in MIT through analysis of both the forward problem and the image reconstruction step. For the forward problem, the variational finite element method is adopted: by subdividing the field and choosing appropriate interpolation functions, the solution of a nonlinear partial differential equation is transformed into a system of linear equations, so that the voltage data of the sensing coils can be calculated. For the image reconstruction, a modified iterative Newton-Raphson (NR) algorithm is presented in order to improve image quality. In the iterative NR, a weighting matrix and L1-norm regularization are introduced to overcome the drawbacks of large estimation errors and poor stability of the reconstructed image. On the other hand, within the incomplete-data framework of the expectation maximization (EM) algorithm, the image reconstruction can be converted to an EM problem through the likelihood function, improving the under-determined problem. In the EM approach, missing data are introduced, and the measurement data and the sensitivity matrix are compensated to overcome the drawback that the number of measured voltages is far smaller than the number of unknowns. In addition to the two aspects above, image segmentation is also used to make the lesion estimate more flexible and adaptive to the patients' real conditions, which provides a theoretical reference for developing the MIT technique toward clinical applications. The results show that solving the forward problem with the variational finite element method can provide the measurement voltage data for image reconstruction, and the improved iterative NR method and EM algorithm can enhance the image

  3. Optimization of digital breast tomosynthesis (DBT) acquisition parameters for human observers: effect of reconstruction algorithms

    Science.gov (United States)

    Zeng, Rongping; Badano, Aldo; Myers, Kyle J.

    2017-04-01

    We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART reconstructed images than in FBP reconstructed images. In addition, we applied CHOs with three commonly used channel models, Laguerre-Gauss (LG) channels, square (SQR) channels and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.

  4. Concluding Report: Quantitative Tomography Simulations and Reconstruction Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Aufderheide, M B; Martz, H E; Slone, D M; Jackson, J A; Schach von Wittenau, A E; Goodman, D M; Logan, C M; Hall, J M

    2002-02-01

    In this report we describe the original goals and final achievements of this Laboratory Directed Research and Development project. The Quantitative Tomography Simulations and Reconstruction Algorithms project (99-ERD-015) was funded as a multi-directorate, three-year effort to advance the state of the art in radiographic simulation and tomographic reconstruction by improving simulation and including this simulation in the tomographic reconstruction process. The goals were to improve the accuracy of radiographic simulation and to couple advanced radiographic simulation tools with a robust, many-variable optimization algorithm. In this project, we were able to demonstrate accuracy in X-ray simulation at the 2% level, which is an improvement of roughly a factor of 5 in accuracy, and we successfully coupled our simulation tools with the constrained conjugate gradient (CCG) optimization algorithm, allowing reconstructions that include spectral effects and blurring. Another result of the project was the assembly of a low-scatter X-ray imaging facility for use in nondestructive evaluation applications. We conclude with a discussion of future work.

  5. Performance-based assessment of reconstructed images

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, Kenneth [Los Alamos National Laboratory

    2009-01-01

    During the early 90s, I engaged in a productive and enjoyable collaboration with Robert Wagner and his colleague, Kyle Myers. We explored the ramifications of the principle that the quality of an image should be assessed on the basis of how well it facilitates the performance of appropriate visual tasks. We applied this principle to algorithms used to reconstruct scenes from incomplete and/or noisy projection data. For binary visual tasks, we used both the conventional disk detection and a new challenging task, inspired by the Rayleigh resolution criterion, of deciding whether an object was a blurred version of two dots or a bar. The results of human and machine observer tests were summarized with the detectability index based on the area under the ROC curve. We investigated a variety of reconstruction algorithms, including ART, with and without a nonnegativity constraint, and the MEMSYS3 algorithm. We concluded that the performance of the Rayleigh task was optimized when the strength of the prior was near MEMSYS's default 'classic' value for both human and machine observers. A notable result was that the most-often-used metric of rms error in the reconstruction was not necessarily indicative of the value of a reconstructed image for the purpose of performing visual tasks.

  6. Computationally efficient algorithm for multifocus image reconstruction

    Science.gov (United States)

    Eltoukhy, Helmy A.; Kavusi, Sam

    2003-05-01

    A method for synthesizing enhanced depth of field digital still camera pictures using multiple differently focused images is presented. This technique exploits only spatial image gradients in the initial decision process. The image gradient as a focus measure has been shown to be experimentally valid and theoretically sound under weak assumptions with respect to unimodality and monotonicity. Subsequent majority filtering corroborates decisions with those of neighboring pixels, while the use of soft decisions enables smooth transitions across region boundaries. Furthermore, these last two steps add algorithmic robustness for coping with both sensor noise and optics-related effects, such as misregistration or optical flow, and minor intensity fluctuations. The dependence of these optical effects on several optical parameters is analyzed, and potential remedies that can mitigate their impact, within the technique's limitations, are discussed. Several examples of image synthesis using the algorithm are presented. Finally, leveraging the increasing functionality and emerging processing capabilities of digital still cameras, the method is shown to entail modest hardware requirements and is implementable using a parallel or general purpose processor.
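    As a rough illustration of the decision process described above, here is a minimal sketch of gradient-based multifocus fusion for two pre-registered grayscale images; the window size, the use of a mean filter for both the focus measure and the majority vote, and the sigmoid soft decision are illustrative assumptions, not the paper's exact choices:

    import numpy as np
    from scipy import ndimage

    def fuse_multifocus(img_a, img_b, win=9, soft=10.0):
        """Fuse two registered grayscale images, keeping in-focus pixels."""
        def focus(img):
            # Local gradient energy as the focus measure.
            gy, gx = np.gradient(img.astype(float))
            return ndimage.uniform_filter(gx ** 2 + gy ** 2, size=win)
        fa, fb = focus(img_a), focus(img_b)
        # Hard per-pixel decision, then a majority (mean) filter over a window.
        decision = ndimage.uniform_filter((fa >= fb).astype(float), size=win)
        # Soft decision: a sigmoid around 0.5 blends pixels near region borders.
        w = 1.0 / (1.0 + np.exp(-soft * (decision - 0.5)))
        return w * img_a + (1.0 - w) * img_b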

  7. A novel dual-axis reconstruction algorithm for electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Tong, Jenna; Midgley, Paul [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge, CB2 3QZ (United Kingdom)

    2006-02-22

    A new algorithm for computing electron microscopy tomograms, combining iterative methods with dual-axis geometry, is presented. Initial modelling using test data shows several improvements over both the weighted back-projection (WBP) and the simultaneous iterative reconstruction technique (SIRT) methods, with increased stability and tomogram fidelity under high-noise conditions.

  8. Compressed Sensing, Pseudodictionary-Based, Superresolution Reconstruction

    Directory of Open Access Journals (Sweden)

    Chun-mei Li

    2016-01-01

    Full Text Available The spatial resolution of digital images is a critical factor affecting photogrammetric precision. Single-frame superresolution image reconstruction is a typical underdetermined inverse problem. To solve this type of problem, a compressive-sensing, pseudodictionary-based superresolution reconstruction method is proposed in this study. The proposed method achieves pseudodictionary learning with an available low-resolution image using the K-SVD algorithm, which exploits the sparse characteristics of the digital image. Then, the sparse representation coefficients of the low-resolution image are obtained by solving an l0-norm minimization problem, and the sparse coefficients and the high-resolution pseudodictionary are used to reconstruct image tiles with high resolution. Finally, single-frame-image superresolution reconstruction is achieved. The proposed method is applied to photogrammetric images, and the experimental results indicate that it effectively increases image resolution and information content, achieving superresolution reconstruction. The reconstructed results are better than those obtained from traditional interpolation methods in terms of both visual effect and quantitative indicators.
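    A minimal sketch of the patch-wise reconstruction step, assuming coupled low/high-resolution pseudodictionaries D_lr and D_hr (columns as atoms) have already been trained with K-SVD; scikit-learn's orthogonal matching pursuit stands in here for the paper's l0-norm solver:

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def reconstruct_patch(lr_patch, D_lr, D_hr, n_nonzero=5):
        """Sparse-code an LR patch over D_lr, rebuild the HR patch with D_hr."""
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False)
        omp.fit(D_lr, lr_patch.ravel())   # columns of D_lr are dictionary atoms
        alpha = omp.coef_                 # shared sparse representation
        return D_hr @ alpha               # high-resolution patch estimate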

  9. Triangulation algorithm based on point clouds reconstructed by SfM

    Institute of Scientific and Technical Information of China (English)

    陈庭旺; 王庆

    2011-01-01

    This paper proposes an improved region-growing triangulation algorithm for surface modeling from point clouds reconstructed by SfM. A k-nearest-neighbor influence region is defined to improve topological stability, candidate triangles are organized efficiently in a binary sort tree, and holes are detected with an undirected-loop search strategy, finally yielding a complete triangular mesh. Experimental results show that, compared with Poisson surface reconstruction, the algorithm significantly improves computational efficiency while achieving high reconstruction accuracy, which helps improve the performance of 3D surface reconstruction and model rendering.

  10. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    Science.gov (United States)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
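    The ordered-subsets idea itself is easy to state: split the projection rows into subsets and update the image once per subset, so a single pass through the data yields as many updates as there are subsets. A minimal, generic sketch of that idea (a SART-style update on a dense, nonnegative system matrix, not the authors' alternating-minimization implementation):

    import numpy as np

    def os_sart(A, b, n_subsets=8, n_epochs=10, relax=0.5):
        """Ordered-subsets SART-style reconstruction of b ~ A x (A nonnegative)."""
        m, n = A.shape
        x = np.zeros(n)
        subsets = np.array_split(np.random.permutation(m), n_subsets)
        for _ in range(n_epochs):
            for rows in subsets:          # one image update per subset
                As, bs = A[rows], b[rows]
                residual = bs - As @ x
                row_sum = np.maximum(As.sum(axis=1), 1e-12)
                col_sum = np.maximum(As.sum(axis=0), 1e-12)
                x += relax * (As.T @ (residual / row_sum)) / col_sum
        return x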

  11. Study of cluster reconstruction and track fitting algorithms for CGEM-IT at BESIII

    CERN Document Server

    Guo, Yue; Ju, Xu-Dong; Wu, Ling-Hui; Xiu, Qing-Lei; Wang, Hai-Xia; Dong, Ming-Yi; Hu, Jing-Ran; Li, Wei-Dong; Li, Wei-Guo; Liu, Huai-Min; Ou-Yang, Qun; Shen, Xiao-Yan; Yuan, Ye; Zhang, Yao

    2015-01-01

    Considering the aging effects of the existing Inner Drift Chamber (IDC) of BESIII, a GEM-based inner tracker is proposed to be designed and constructed as an upgrade candidate for the IDC. This paper introduces a full simulation package of the CGEM-IT with a simplified digitization model, and describes the development of the software for cluster reconstruction and a track fitting algorithm based on the Kalman filter method for the CGEM-IT. Preliminary results from the reconstruction algorithms are obtained using a Monte Carlo sample of single muon events in the CGEM-IT.

  12. On Accelerating Cone Beam CT Image Reconstruction Algorithm by CUDA-Based GPU

    Institute of Scientific and Technical Information of China (English)

    王丽芳

    2014-01-01

    Cone beam CT image reconstruction involves a huge data volume and high computational complexity, and the reconstruction time is too long to meet the needs of practical applications. In this paper we study a scheme for accelerating the cone beam CT image reconstruction algorithm on a CUDA-based GPU. It shortens the filtering and back-projection time through an effective parallelization strategy, and improves data access and storage efficiency using constant memory and texture memory. Experimental results show an 82-fold improvement in reconstruction speed while preserving reconstruction quality.

  13. Event Reconstruction Algorithms for the ATLAS Trigger

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca-Martin, T.; /CERN; Abolins, M.; /Michigan State U.; Adragna, P.; /Queen Mary, U. of London; Aleksandrov, E.; /Dubna, JINR; Aleksandrov, I.; /Dubna, JINR; Amorim, A.; /Lisbon, LIFEP; Anderson, K.; /Chicago U., EFI; Anduaga, X.; /La Plata U.; Aracena, I.; /SLAC; Asquith, L.; /University Coll. London; Avolio, G.; /CERN; Backlund, S.; /CERN; Badescu, E.; /Bucharest, IFIN-HH; Baines, J.; /Rutherford; Barria, P.; /Rome U. /INFN, Rome; Bartoldus, R.; /SLAC; Batreanu, S.; /Bucharest, IFIN-HH /CERN; Beck, H.P.; /Bern U.; Bee, C.; /Marseille, CPPM; Bell, P.; /Manchester U.; Bell, W.H.; /Glasgow U. /Pavia U. /INFN, Pavia /Regina U. /CERN /Annecy, LAPP /Paris, IN2P3 /Royal Holloway, U. of London /Napoli Seconda U. /INFN, Naples /Argonne /CERN /UC, Irvine /Barcelona, IFAE /Barcelona, Autonoma U. /CERN /Montreal U. /CERN /Glasgow U. /Michigan State U. /Bucharest, IFIN-HH /Napoli Seconda U. /INFN, Naples /New York U. /Barcelona, IFAE /Barcelona, Autonoma U. /Salento U. /INFN, Lecce /Pisa U. /INFN, Pisa /Bucharest, IFIN-HH /UC, Irvine /CERN /Glasgow U. /INFN, Genoa /Genoa U. /Lisbon, LIFEP /Napoli Seconda U. /INFN, Naples /UC, Irvine /Valencia U. /Rio de Janeiro Federal U. /University Coll. London /New York U.; /more authors..

    2011-11-09

    The ATLAS experiment under construction at CERN is due to begin operation at the end of 2007. The detector will record the results of proton-proton collisions at a center-of-mass energy of 14 TeV. The trigger is a three-tier system designed to identify, in real time, potentially interesting events that are then saved for detailed offline analysis. The trigger system will select approximately 200 Hz of potentially interesting events out of the 40 MHz bunch-crossing rate (with 10^9 interactions per second at the nominal luminosity). Algorithms used in the trigger system to identify different event features of interest will be described, as well as their expected performance in terms of selection efficiency, background rejection and computation time per event. The talk will concentrate on recent improvements and on performance studies, using a very detailed simulation of the ATLAS detector and electronics chain that emulates the raw data as it will appear at the input to the trigger system.

  14. Eigenvalue Decomposition-Based Modified Newton Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-jun Wang

    2013-01-01

    Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, named the eigenvalue decomposition-based modified Newton algorithm, is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven, and a qualitative conclusion on the convergence rate is presented. Finally, a numerical experiment compares the convergence domains of the modified algorithm and the classical algorithm.
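    The modification is straightforward to express in code; a minimal sketch, assuming a dense symmetric Hessian and using an explicit eigendecomposition (the small floor on the eigenvalues is an added numerical safeguard):

    import numpy as np

    def modified_newton_direction(hessian, gradient):
        """Newton direction with negative eigenvalues replaced by their absolutes."""
        eigvals, eigvecs = np.linalg.eigh(hessian)      # symmetric Hessian assumed
        eigvals = np.maximum(np.abs(eigvals), 1e-10)    # flip negatives, guard zeros
        h_mod = eigvecs @ np.diag(eigvals) @ eigvecs.T  # reconstructed Hessian
        return -np.linalg.solve(h_mod, gradient)        # always a descent direction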

  15. Reconstruction of strain distribution in fiber Bragg gratings with differential evolution algorithm

    Institute of Scientific and Technical Information of China (English)

    WEN Xiao-yan; YU Qoan

    2008-01-01

    The differential evolution algorithm is used to solve the inverse problem of strain distribution in fiber Bragg gratings (FBG). Linear and nonlinear strain profiles are reconstructed from the reflection spectra. An approximate solution can be obtained within only 50 rounds of evolution. Numerical examples show good agreement between target strain profiles and reconstructed ones. Online performance analysis demonstrates the efficiency and practicality of the differential evolution algorithm in solving the inverse problem of FBGs.
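    A minimal sketch of how such an inverse problem can be set up with an off-the-shelf differential evolution optimizer; the forward model simulate_reflection_spectrum (e.g., a transfer-matrix FBG simulator), the polynomial strain parameterization, and the search bounds are hypothetical stand-ins for the paper's setup:

    import numpy as np
    from scipy.optimize import differential_evolution

    def reconstruct_strain(measured_spectrum, simulate_reflection_spectrum,
                           n_coeffs=3, max_gen=50):
        """Fit strain-profile coefficients by matching the reflection spectrum."""
        def misfit(coeffs):
            return np.sum((simulate_reflection_spectrum(coeffs)
                           - measured_spectrum) ** 2)
        bounds = [(-1e-3, 1e-3)] * n_coeffs   # plausible strain-coefficient range
        result = differential_evolution(misfit, bounds, maxiter=max_gen)
        return result.x                        # reconstructed strain coefficients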

  16. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    Science.gov (United States)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also yielding better quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.

  17. Fast reconstruction algorithm from modules maxima of signal wavelet transform and its application in enhancement of medical images

    Science.gov (United States)

    Zhai, Guangtao; Sun, Fengrong; Song, Haohao; Zhang, Mingqiang; Liu, Li; Wang, Changyu

    2003-09-01

    The modulus maxima of a signal's wavelet transform on different levels contain important information about the signal, which can help to reconstruct the wavelet coefficients. A fast algorithm based on Hermite interpolation polynomials for reconstructing a signal from its wavelet transform maxima is proposed in this paper. An implementation of this algorithm in medical image enhancement is also discussed. Numerical experiments have shown that, compared with the alternating projection algorithm proposed by Mallat, this reconstruction algorithm is simpler and more efficient, while maintaining a high reconstruction signal-to-noise ratio. When applied to image contrast enhancement, the computing time of this algorithm is much less than that of Mallat's alternating projection, and the results are almost the same, so it is a practical fast reconstruction algorithm.

  18. Flow Based Algorithm

    Directory of Open Access Journals (Sweden)

    T. Karpagam

    2012-01-01

    Full Text Available Problem statement: Network topology design problems find application in several real-life scenarios. Approach: Most designs in the past optimize for a single criterion, such as shortest path, cost minimization, or maximum flow. Results: This study discussed solving a multi-objective network topology design problem for a realistic traffic model, specifically in pipeline transportation. The flow-based algorithm presented here aims to transport liquid goods at maximum capacity over the shortest distance, and was developed in the spirit of basic PERT and critical path methods. Conclusion/Recommendations: This flow-based algorithm helps to give an optimal result for transporting maximum capacity at minimum cost. It could be used in juice factories and the milk industry, and is a good alternative for the vehicle routing problem.

  19. Spectral Reconstruction Algorithm of Digital Camera Based on BP Neural Network and Principal Component Analysis

    Institute of Scientific and Technical Information of China (English)

    王勇; 陈梅

    2014-01-01

    Reconstructing the spectral reflectance of an object surface from the RGB signals of a digital camera is one of the important topics in spectral color management. A new algorithm based on a back-propagation (BP) neural network and principal component analysis (PCA) is proposed to reconstruct the surface spectral reflectance of color atlases. The optimal structure of the BP neural network and the optimal number of principal components are studied in spectral reflectance reconstruction experiments on three color atlases, and the accuracy of the algorithm is verified. The experimental results show that the new algorithm, combining an appropriate BP neural network with PCA, can accurately reconstruct the surface spectral reflectance of the same kind of color atlas.
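    A minimal sketch of the PCA-plus-network pipeline, assuming training pairs of camera RGB values and measured reflectance spectra are available; the component count and network size are illustrative, not the paper's settings:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    def train_spectral_model(rgb_train, spectra_train, n_components=6):
        """Learn a mapping from camera RGB to principal-component weights."""
        pca = PCA(n_components=n_components).fit(spectra_train)
        weights = pca.transform(spectra_train)   # spectra -> PC weights
        net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000)
        net.fit(rgb_train, weights)              # RGB -> PC weights
        return pca, net

    def reconstruct_reflectance(pca, net, rgb):
        """Predict PC weights from RGB, then expand to a full spectrum."""
        return pca.inverse_transform(net.predict(np.atleast_2d(rgb)))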

  20. Research on Compressed Sensing Theory-Based Image Reconstruction Algorithm

    Institute of Scientific and Technical Information of China (English)

    解成俊; 张铁山

    2012-01-01

    The theory of compressed sensing releases us from the restriction imposed by the Nyquist sampling frequency; this paper employs the theory to study and realize a new compressed sampling scheme for image data. In this scheme, the image is sparsified by combining the wavelet transform with thresholding; a random measurement matrix is generated from a standard pseudo-random, uniformly distributed matrix and a two-dimensional central Fourier transform, and is used for weighted sampling of the wavelet-transformed high-frequency sub-bands. A stagewise orthogonal matching pursuit algorithm is used to reconstruct the image from the sampled data. Simulation results show that the scheme reconstructs images well.
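    A minimal sketch of the sampling and recovery steps on a signal that is already sparse in some transform domain (the wavelet/thresholding stage is elided), with plain orthogonal matching pursuit standing in for the stagewise variant used in the paper; sizes and seed are illustrative:

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n, m, k = 256, 96, 8                     # signal length, measurements, sparsity
    x = np.zeros(n)                          # sparse transform-domain coefficients
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    y = phi @ x                                       # compressed measurements

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    omp.fit(phi, y)
    print(np.allclose(omp.coef_, x, atol=1e-6))      # recovery check (expect True)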

  1. ROI reconstruction for model-based iterative reconstruction (MBIR) via a coupled dictionary learning

    Science.gov (United States)

    Ye, Dong Hye; Srivastava, Somesh; Thibault, Jean-Baptiste; Sauer, Ken D.; Bouman, Charles A.

    2017-03-01

    Model based iterative reconstruction (MBIR) algorithms have shown significant improvement in CT image quality by increasing resolution as well as reducing noise and artifacts. In diagnostic protocols, radiologists often need a high-resolution reconstruction of a limited region of interest (ROI). This ROI reconstruction is complicated for MBIR, which must reconstruct an image over the full field of view (FOV) given full sinogram measurements. Multi-resolution approaches are widely used for ROI reconstruction in MBIR: the full-FOV image is reconstructed at low resolution, and the forward projection of the non-ROI region is subtracted from the original sinogram measurements for high-resolution ROI reconstruction. However, a low-resolution reconstruction of the full FOV can be susceptible to streaking and blurring artifacts, and these can propagate into the subsequent high-resolution ROI reconstruction. To tackle this challenge, we use a coupled dictionary representation model between low- and high-resolution training datasets for artifact removal and super resolution of the low-resolution full-FOV reconstruction. Experimental results on phantom data show that the full-FOV reconstruction restored via coupled dictionary learning significantly improves the image quality of the high-resolution ROI reconstruction for MBIR.

  2. Energy reconstruction and calibration algorithms for the ATLAS electromagnetic calorimeter

    CERN Document Server

    Delmastro, M

    2003-01-01

    The work of this thesis is devoted to the study, development and optimization of the algorithms of energy reconstruction and calibration for the electromagnetic calorimeter (EMC) of the ATLAS experiment, presently under installation and commissioning at the CERN Large Hadron Collider in Geneva (Switzerland). A deep study of the electrical characteristics of the detector and of the formation and propagation of its signals is conducted: an electrical model of the detector is developed and analyzed through simulations; a hardware model (mock-up) of a group of the EMC readout cells has been built, allowing direct collection and characterization of the signals emerging from the EMC cells. We analyze the existing multiple-sampled signal reconstruction strategy, showing the need for improvement in order to reach the advertised performance of the detector. The optimal filtering reconstruction technique is studied and implemented, taking into account the differences between the ionization and calibration waveforms as e...

  3. A reconstruction algorithm for compressive quantum tomography using various measurement sets.

    Science.gov (United States)

    Zheng, Kai; Li, Kezhi; Cong, Shuang

    2016-12-14

    It has been verified that compressed sensing (CS) offers a significant performance improvement for large quantum systems compared with conventional quantum tomography approaches, because it reduces the number of measurements from O(d^2) to O(rd log d), in particular for quantum states that are fairly pure. Yet few algorithms have been proposed for quantum state tomography using CS specifically, let alone basis analysis for various measurement sets in quantum CS. To fill this gap, an efficient and robust state reconstruction algorithm based on compressive sensing is developed in this paper. By leveraging a fixed-point equation approach to avoid the matrix inverse operation, we propose a fixed-point alternating direction method algorithm for compressive quantum state estimation that can handle both normal errors and large outliers in the optimization process. In addition, properties of five practical measurement bases (including the Pauli basis) are analyzed in terms of their coherences and reconstruction performances, which provides theoretical guidance for the selection of measurement settings in quantum state estimation. The numerical experiments show that the proposed algorithm requires much less computing time, achieves higher reconstruction accuracy and is more robust to outlier noise than many existing state reconstruction algorithms.

  4. A reconstruction algorithm for compressive quantum tomography using various measurement sets

    Science.gov (United States)

    Zheng, Kai; Li, Kezhi; Cong, Shuang

    2016-12-01

    It has been verified that compressed sensing (CS) offers a significant performance improvement for large quantum systems compared with conventional quantum tomography approaches, because it reduces the number of measurements from O(d^2) to O(rd log d), in particular for quantum states that are fairly pure. Yet few algorithms have been proposed for quantum state tomography using CS specifically, let alone basis analysis for various measurement sets in quantum CS. To fill this gap, an efficient and robust state reconstruction algorithm based on compressive sensing is developed in this paper. By leveraging a fixed-point equation approach to avoid the matrix inverse operation, we propose a fixed-point alternating direction method algorithm for compressive quantum state estimation that can handle both normal errors and large outliers in the optimization process. In addition, properties of five practical measurement bases (including the Pauli basis) are analyzed in terms of their coherences and reconstruction performances, which provides theoretical guidance for the selection of measurement settings in quantum state estimation. The numerical experiments show that the proposed algorithm requires much less computing time, achieves higher reconstruction accuracy and is more robust to outlier noise than many existing state reconstruction algorithms.

  5. Superiorization of incremental optimization algorithms for statistical tomographic image reconstruction

    Science.gov (United States)

    Helou, E. S.; Zibetti, M. V. W.; Miqueles, E. X.

    2017-04-01

    We propose the superiorization of incremental algorithms for tomographic image reconstruction. The resulting methods follow a better path on their way to finding the optimal solution of the maximum likelihood problem, in the sense that they are closer to the Pareto optimal curve than the non-superiorized techniques. A new scaled gradient iteration is proposed and three superiorization schemes are evaluated. Theoretical analysis of the methods as well as computational experiments with both synthetic and real data are provided.

  6. Computed Tomography Radiation Dose Reduction: Effect of Different Iterative Reconstruction Algorithms on Image Quality

    NARCIS (Netherlands)

    Willemink, M.J.; Takx, R.A.P.; Jong, P.A. de; Budde, R.P.; Bleys, R.L.; Das, M.; Wildberger, J.E.; Prokop, M.; Buls, N.; Mey, J. de; Leiner, T.; Schilham, A.M.

    2014-01-01

    We evaluated the effects of hybrid and model-based iterative reconstruction (IR) algorithms from different vendors at multiple radiation dose levels on the image quality of chest phantom scans. A chest phantom was scanned on state-of-the-art computed tomography scanners from 4 vendors at 4 dose levels.

  7. Combinatorial analysis and algorithms for quasispecies reconstruction using next-generation sequencing

    Directory of Open Access Journals (Sweden)

    Vincenti Donatella

    2011-01-01

    Full Text Available Abstract Background Next-generation sequencing (NGS) offers a unique opportunity for high-throughput genomics and has the potential to replace Sanger sequencing in many fields, including de-novo sequencing, re-sequencing, meta-genomics, and characterisation of infectious pathogens, such as viral quasispecies. Although methodologies and software for whole genome assembly and genome variation analysis have been developed and refined for NGS data, reconstructing a viral quasispecies from NGS data remains a challenge. This application would be useful for analysing intra-host evolutionary pathways in relation to immune responses and antiretroviral therapy exposures. Here we introduce a set of formulae for the combinatorial analysis of a quasispecies, given an NGS re-sequencing experiment, and an algorithm for quasispecies reconstruction. We require that sequenced fragments are aligned against a reference genome, and that the reference genome is partitioned into a set of sliding windows (amplicons). The reconstruction algorithm is based on combinations of multinomial distributions and is designed to minimise the reconstruction of false variants, called in-silico recombinants. Results The reconstruction algorithm was applied to error-free simulated data and reconstructed a high percentage of true variants, even at low genetic diversity, where the chance of obtaining in-silico recombinants is high. Results on empirical NGS data from patients infected with hepatitis B virus confirmed its ability to characterise different viral variants from distinct patients. Conclusions The combinatorial analysis provided a description of the difficulty of reconstructing a quasispecies, given a determined amplicon partition and a measure of population diversity. The reconstruction algorithm showed good performance on both simulated and real data, even in the presence of sequencing errors.

  8. AN IMPROVED SPARSITY ADAPTIVE MATCHING PURSUIT ALGORITHM FOR COMPRESSIVE SENSING BASED ON REGULARIZED BACKTRACKING

    Institute of Scientific and Technical Information of China (English)

    Zhao Ruizhen; Ren Xiaoxin; Han Xuelian; Hu Shaohai

    2012-01-01

    Sparsity Adaptive Matching Pursuit (SAMP) is a widely used reconstruction algorithm for compressive sensing when the sparsity is unknown. In order to match the sparsity more accurately, we present an improved SAMP algorithm based on Regularized Backtracking (SAMP-RB). By adding a regularized backtracking step to the SAMP algorithm at each iteration stage, the proposed algorithm can flexibly remove inappropriate atoms. The experimental results show that the SAMP-RB reconstruction algorithm greatly improves on SAMP in both reconstruction quality and computational time. It has better reconstruction efficiency than most of the available matching pursuit algorithms.

  9. Real Time Equilibrium Reconstruction Algorithm in EAST Tokamak

    Institute of Scientific and Technical Information of China (English)

    王华忠; 罗家融; 黄勤超

    2004-01-01

    The EAST (HT-7U) superconducting tokamak is a national project of China on fusion research, with a capability of long-pulse (~1000 s) operation. In order to realize long-duration steady-state operation of EAST, significant real-time control capability is required. It is crucial to obtain the current profile parameters and the plasma shape in real time through a flexible control system. As those discharge parameters cannot be directly measured, a current profile consistent with magnetohydrodynamic equilibrium must be evaluated from external magnetic measurements, based on a linearized iterative least-squares method, which can meet the requirements of the measurements. The algorithm, for which the EFIT (equilibrium fitting) code serves as a reference, is given in this paper; the computational effort is reduced by parametrizing the current profile linearly in terms of a number of physical parameters. To introduce this reconstruction algorithm clearly, the main hardware design is also described.

  10. SimpleSTORM: a fast, self-calibrating reconstruction algorithm for localization microscopy.

    Science.gov (United States)

    Köthe, Ullrich; Herrmannsdörfer, Frank; Kats, Ilia; Hamprecht, Fred A

    2014-06-01

    Although there are many reconstruction algorithms for localization microscopy, their use is hampered by the difficulty of adjusting a possibly large number of parameters correctly. We propose SimpleSTORM, an algorithm that determines appropriate parameter settings directly from the data in an initial self-calibration phase. The algorithm is based on a carefully designed yet simple model of the image acquisition process which allows us to standardize each image such that the background has zero mean and unit variance. This standardization makes it possible to detect spots by a true statistical test (instead of hand-tuned thresholds) and to de-noise the images with an efficient matched filter. By reducing the strength of the matched filter, SimpleSTORM also performs reasonably on data with high spot density, trading off localization accuracy for improved detection performance. Extensive validation experiments on the ISBI Localization Challenge Dataset, as well as real image reconstructions, demonstrate the good performance of our algorithm.
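    A minimal sketch of the standardize-then-test idea, assuming the local background mean and variance can be estimated with Gaussian filtering; the filter width and significance level are illustrative assumptions, not SimpleSTORM's calibrated values:

    import numpy as np
    from scipy import ndimage
    from scipy.stats import norm

    def detect_spots(frame, bg_sigma=8, alpha=1e-3):
        """Standardize the background to zero mean/unit variance, then z-test."""
        frame = frame.astype(float)
        bg_mean = ndimage.gaussian_filter(frame, bg_sigma)
        bg_var = ndimage.gaussian_filter((frame - bg_mean) ** 2, bg_sigma)
        z = (frame - bg_mean) / np.sqrt(np.maximum(bg_var, 1e-12))
        return z > norm.isf(alpha)   # statistical threshold, not hand-tuned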

  11. FastDIRC: a fast Monte Carlo and reconstruction algorithm for DIRC detectors

    CERN Document Server

    Hardin, John

    2016-01-01

    FastDIRC is a novel fast Monte Carlo and reconstruction algorithm for DIRC detectors. A DIRC employs rectangular fused-silica bars both as Cherenkov radiators and as light guides. Cherenkov-photon imaging and time-of-propagation information are utilized by a DIRC to identify charged particles. GEANT-based DIRC Monte Carlo simulations are extremely CPU intensive. The FastDIRC algorithm permits fully simulating a DIRC detector more than 10000 times faster than using GEANT. This facilitates designing a DIRC-reconstruction algorithm that improves the Cherenkov-angle resolution of a DIRC detector by about 30% compared to existing algorithms. FastDIRC also greatly reduces the time required to study competing DIRC-detector designs.

  12. Algorithm of reconstruction for ballistic missile trajectory based on double early warning satellites

    Institute of Scientific and Technical Information of China (English)

    王树文; 姜海林; 胡沛

    2015-01-01

    In view of the complexity of reconstruction algorithms for early warning satellites tracking ballistic missiles, and the difficulty of balancing high precision with real-time performance, we propose a reconstruction algorithm for the ballistic missile flight path based on observations from two geostationary earth orbit (GEO) early warning satellites, employing the epipolar and time constraints of stereo vision matching, and perform a simulation experiment. Simulation results illustrate that the reconstructed trajectory of the missile's powered phase is close to the real one; when the camera detection error is small, the distance error between reconstructed and real points is less than 2 km, which meets the demands of subsequent launch point estimation and impact point prediction for ballistic missiles.

  13. New Algorithm for 3D Facial Model Reconstruction and Its Application in Virtual Reality

    Institute of Scientific and Technical Information of China (English)

    Rong-Hua Liang; Zhi-Geng Pan; Chun Chen

    2004-01-01

    3D human face model reconstruction is essential to the generation of facial animations that are widely used in the field of virtual reality (VR). The main issues of image-based 3D facial model reconstruction using vision technologies are twofold: one is to select and match the corresponding features of the face from two images with minimal interaction, and the other is to generate a realistic-looking human face model. In this paper, a new algorithm for realistic-looking face reconstruction is presented based on stereo vision. Firstly, a pattern is printed and attached to a planar surface for camera calibration; corner generation and corner matching between two images are performed by integrating a modified pyramid Lucas-Kanade (PLK) algorithm with a local adjustment algorithm, and the 3D coordinates of the corners are then obtained by 3D reconstruction. An individual face model is generated by deformation of a general 3D model and interpolation of the features. Finally, a realistic-looking human face model is obtained after texture mapping and eye modeling. In addition, some application examples in the field of VR are given. Experimental results show that the proposed algorithm is robust and the 3D model is photo-realistic.
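    A minimal sketch of the corner detection and pyramid Lucas-Kanade matching stage using OpenCV (the paper's local adjustment step is omitted, and the image file names are hypothetical):

    import cv2

    img_a = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
    img_b = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

    corners = cv2.goodFeaturesToTrack(img_a, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7)
    matched, status, _ = cv2.calcOpticalFlowPyrLK(img_a, img_b, corners, None,
                                                  winSize=(21, 21), maxLevel=3)
    good_a = corners[status.ravel() == 1]    # corresponding point pairs, ready
    good_b = matched[status.ravel() == 1]    # for cv2.triangulatePoints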

  14. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark.

    Science.gov (United States)

    Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei

    2012-11-01

    Monte Carlo simulation of neutron coded imaging with an encoding aperture, for a Z-pinch with a large 5 mm radius field of view, has been investigated, and the coded image has been obtained. A method for reconstructing the source image based on genetic algorithms (GA) has been established. A "residual watermark," which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation because it amplifies statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and a way of identifying the equivalent radius of the aperture is provided. By using the equivalent radius, the reconstruction can be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained using the PSF of the actual aperture.

  15. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Tiankui; Hu Huasi; Jia Qinggang; Zhang Fengna; Liu Zhihua; Hu Guang; Guo Wei [School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an 710049 (China); Chen Da [School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an 710049 (China); College of Material Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Li Zhenghong [Institute of Nuclear Physics and Chemistry, CAEP, Mianyang, 621900 Sichuan (China); Wu Yuelei [School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an 710049 (China); Nuclear and Radiation Safety Centre, State Environmental Protection Administration (SEPA), Beijing 100082 (China)

    2012-11-15

    Monte Carlo simulation of neutron coded imaging with an encoding aperture, for a Z-pinch with a large 5 mm radius field of view, has been investigated, and the coded image has been obtained. A method for reconstructing the source image based on genetic algorithms (GA) has been established. A 'residual watermark,' which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation because it amplifies statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and a way of identifying the equivalent radius of the aperture is provided. By using the equivalent radius, the reconstruction can be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained using the PSF of the actual aperture.

  16. Validation of a method for in vivo 3D dose reconstruction for IMRT and VMAT treatments using on-treatment EPID images and a model-based forward-calculation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Van Uytven, Eric, E-mail: eric.vanuytven@cancercare.mb.ca; Van Beek, Timothy [Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); McCowan, Peter M. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2, Canada and Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Chytyk-Praznik, Krista [Medical Physics Department, Nova Scotia Cancer Centre, 5820 University Avenue, Halifax, Nova Scotia B3H 1V7 (Canada); Greer, Peter B. [School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, NSW 2308 (Australia); Department of Radiation Oncology, Calvary Mater Newcastle Hospital, Newcastle, NSW 2298 (Australia); McCurdy, Boyd M. C. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Department of Radiology, University of Manitoba, 820 Sherbrook Street, Winnipeg, Manitoba R3A 1R9 (Canada)

    2015-12-15

    Purpose: Radiation treatments are trending toward delivering higher doses per fraction under stereotactic radiosurgery and hypofractionated treatment regimens. There is a need for accurate 3D in vivo patient dose verification using electronic portal imaging device (EPID) measurements. This work presents a model-based technique to compute full three-dimensional patient dose reconstructed from on-treatment EPID portal images (i.e., transmission images). Methods: EPID dose is converted to incident fluence entering the patient using a series of steps which include converting measured EPID dose to fluence at the detector plane and then back-projecting the primary source component of the EPID fluence upstream of the patient. Incident fluence is then recombined with predicted extra-focal fluence and used to calculate 3D patient dose via a collapsed-cone convolution method. This method is implemented in an iterative manner, although in practice it provides accurate results in a single iteration. The robustness of the dose reconstruction technique is demonstrated with several simple slab phantom and nine anthropomorphic phantom cases. Prostate, head and neck, and lung treatments are all included as well as a range of delivery techniques including VMAT and dynamic intensity modulated radiation therapy (IMRT). Results: Results indicate that the patient dose reconstruction algorithm compares well with treatment planning system computed doses for controlled test situations. For simple phantom and square field tests, agreement was excellent with a 2%/2 mm 3D chi pass rate ≥98.9%. On anthropomorphic phantoms, the 2%/2 mm 3D chi pass rates ranged from 79.9% to 99.9% in the planning target volume (PTV) region and 96.5% to 100% in the low dose region (>20% of prescription, excluding PTV and skin build-up region). Conclusions: An algorithm to reconstruct delivered patient 3D doses from EPID exit dosimetry measurements was presented. The method was applied to phantom and patient

  17. Algorithm for three-dimensional reconstruction of magnetic resonance tomography and X-ray images based on the Fast Fourier Transform

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Josiane M.; Traina, Agma Juci M. [Sao Paulo Univ., Sao Carlos, SP (Brazil). Inst. de Ciencias Matematicas; Cruvinel, Paulo E. [EMBRAPA, Sao Carlos, SP (Brazil). CNPDIA

    1995-12-31

    This work presents an algorithm for three-dimensional digital image reconstruction. The algorithm is based on the combination of a Fast Fourier Transform method with a Hamming window and the use of a tri-linear interpolation function. It allows not only the generation of three-dimensional spatial spin distribution maps for magnetic resonance tomography data but also X- and γ-ray linear attenuation coefficient maps for CT scanners. Results demonstrate the usefulness of the algorithm for three-dimensional image reconstruction, performing two-dimensional reconstruction first and interpolating afterwards. The algorithm was developed in the C++ language, and two versions are available: one for the DOS environment and the other for the UNIX/Sun environment. (author) 10 refs., 5 figs.

  18. MLEM Low-Dose CT Reconstruction Algorithm Based on ROAD and Wavelet Shrinkage

    Institute of Scientific and Technical Information of China (English)

    董婵婵; 桂志国; 张权; 郝慧艳; 张芳; 刘祎; 孙未雅

    2016-01-01

    Concerning the quality degradation of low-dose CT (Computed Tomography) reconstruction images, we present an MLEM (Maximum Likelihood Expectation Maximization) low-dose CT reconstruction algorithm based on wavelet shrinkage and anisotropic diffusion driven by the rank-ordered absolute differences (ROAD). In each iteration, the algorithm first reconstructs the low-dose projection data with MLEM. Since anisotropic diffusion is sensitive to noise, the reconstructed image is wavelet-transformed; ROAD-based anisotropic diffusion is applied in the more stable low-frequency wavelet domain, while soft-threshold denoising is applied to the high-frequency wavelet coefficients. The denoised coefficients are then inverse wavelet transformed to obtain the denoised image, and finally a median filter is applied to eliminate impulse noise points. Experimental results show that, compared with several other common reconstruction algorithms, the proposed algorithm yields higher signal-to-noise ratio, lower normalized mean square error, and clearer images, i.e., it suppresses noise while better preserving image edges and details.

  19. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    Science.gov (United States)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; hide

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded-mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel-illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the light curves is not preserved.

  20. An adaptive reconstruction algorithm for spectral CT regularized by a reference image

    Science.gov (United States)

    Wang, Miaoshi; Zhang, Yanbo; Liu, Rui; Guo, Shuxu; Yu, Hengyong

    2016-12-01

    The photon counting detector based spectral CT system is attracting increasing attention in the CT field. However, spectral CT is still immature in terms of both hardware and software. To reconstruct high quality spectral images from low-dose projections, an adaptive image reconstruction algorithm is proposed that assumes a known reference image (RI). The idea is motivated by the fact that the reconstructed images from different spectral channels are highly correlated. If a high quality image of the same object is known, it can be used to improve the low-dose reconstruction of each individual channel. This is implemented by maximizing the patch-wise correlation between the object image and the RI. Extensive numerical simulations and a preclinical mouse study demonstrate the feasibility and merits of the proposed algorithm. It also performs well for truncated local projections, and the surrounding area of the region of interest (ROI) can be more accurately reconstructed. Furthermore, a method is introduced to adaptively choose the step length, making the algorithm more feasible and easier to apply.

  1. An architecture for the efficient implementation of compressive sampling reconstruction algorithms in reconfigurable hardware

    Science.gov (United States)

    Ortiz, Fernando E.; Kelmelis, Eric J.; Arce, Gonzalo R.

    2007-04-01

    According to the Shannon-Nyquist theory, the number of samples required to reconstruct a signal is proportional to its bandwidth. Recently, it has been shown that acceptable reconstructions are possible from a reduced number of random samples, a process known as compressive sampling. Taking advantage of this realization has a radical impact on power consumption and communication bandwidth, which are crucial in applications based on small/mobile/unattended platforms such as UAVs and distributed sensor networks. Although the benefits of these compression techniques are self-evident, the reconstruction process requires the solution of nonlinear signal processing algorithms, which limits applicability in portable and real-time systems. In particular, (1) the power consumption associated with the difficult computations offsets the power savings afforded by compressive sampling, and (2) limited computational power prevents these algorithms from keeping pace with the data-capturing sensors, resulting in undesirable data loss. FPGA-based computers offer low power consumption and high computational capacity, providing a solution to both problems simultaneously. In this paper, we present an architecture that implements the algorithms central to compressive sampling in an FPGA environment. We start by studying the computational profile of the convex optimization algorithms used in compressive sampling. Then we present the design of a pixel pipeline suitable for FPGA implementation, able to compute these algorithms.

  2. Electromagnetic tomography (EMT): image reconstruction based on the inverse problem

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Starting from Maxwell's equations for inhomogeneous media, nonlinear integral equations of the inverse problem of electromagnetic tomography (EMT) are derived, whose kernel is the dyadic Green's function for the EMT sensor with a homogeneous medium in the object space. Then, in view of the ill-posedness of the inverse problem, a Tikhonov-type regularization model is established based on a linearized approximation of the nonlinear inverse problem. Finally, an iterative image reconstruction algorithm based on the inverse problem, together with reconstructed images of some object flows for a simplified sensor, is given. Initial results show that the algorithm based on the inverse problem is superior to those based on linear back-projection in the quality of the reconstructed images.
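    For the linearized model, Tikhonov regularization reduces to a damped normal-equations solve; a minimal sketch, with a direct solve standing in for the iterative scheme described above (J is the linearized sensitivity/Jacobian matrix, s the measurements, and g the sought image perturbation; the regularization weight is illustrative):

    import numpy as np

    def tikhonov_reconstruct(J, s, lam=1e-3):
        """Solve (J^T J + lam I) g = J^T s for the image perturbation g."""
        n = J.shape[1]
        return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ s)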

  3. Tensor-based dictionary learning for dynamic tomographic reconstruction

    Science.gov (United States)

    Tan, Shengqi; Zhang, Yanbo; Wang, Ge; Mou, Xuanqin; Cao, Guohua; Wu, Zhifang; Yu, Hengyong

    2015-04-01

    In dynamic computed tomography (CT) reconstruction, the data acquisition speed limits the spatio-temporal resolution. Recently, compressed sensing theory has been instrumental in improving CT reconstruction from very few projection views. In this paper, we present an adaptive method to train a tensor-based spatio-temporal dictionary for sparse representation of an image sequence during the reconstruction process. The correlations among atoms and across phases are considered to capture the characteristics of an object. The reconstruction problem is solved by the alternating direction method of multipliers. To recover fine or sharp structures such as edges, the nonlocal total variation is incorporated into the algorithmic framework. Preclinical examples, including a sheep lung perfusion study and dynamic mouse cardiac imaging, demonstrate that the proposed approach outperforms vectorized dictionary-based CT reconstruction in the case of few-view reconstruction.

  4. Extension of the modal wave-front reconstruction algorithm to non-uniform illumination.

    Science.gov (United States)

    Ma, Xiaoyu; Mu, Jie; Rao, ChangHui; Yang, Jinsheng; Rao, XueJun; Tian, Yu

    2014-06-30

    Attempts are made to eliminate the effects of non-uniform illumination on the precision of wave-front measurement. To achieve this, the relationship between the wave-front slope at a single sub-aperture and the distributions of the phase and light intensity of the wave-front was first analyzed to obtain the relevant theoretical formulae. Then, based on the principle of modal wave-front reconstruction, the influence of the light intensity distribution on the wave-front slope was introduced into the calculation of the reconstruction matrix. Experiments were conducted to prove that the corrected modal wave-front reconstruction algorithm improved the accuracy of wave-front reconstruction. Moreover, the correction is conducive to high-precision wave-front measurement using a Hartmann wave-front sensor in the presence of non-uniform illumination.
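    A minimal sketch of modal least-squares reconstruction in which per-subaperture intensities weight the slope measurements; the modal influence matrix D (mapping modal coefficients to slopes) is assumed precomputed for the chosen mode set, and the interleaved slope ordering is an assumption of this sketch:

    import numpy as np

    def modal_reconstruct(slopes, D, intensity):
        """Weighted least-squares fit of modal coefficients to Hartmann slopes."""
        # Slopes assumed interleaved as [sx1, sy1, sx2, sy2, ...], so each
        # subaperture intensity weights its x and y slopes equally.
        w = np.repeat(intensity, 2)
        Dw = D * w[:, None]                               # W D
        return np.linalg.solve(D.T @ Dw, Dw.T @ slopes)   # (D^T W D) a = D^T W s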

  5. Shape reconstruction from apparent contours theory and algorithms

    CERN Document Server

    Bellettini, Giovanni; Paolini, Maurizio

    2015-01-01

    Motivated by a variational model concerning the depth of the objects in a picture and the problem of hidden and illusory contours, this book investigates one of the central problems of computer vision: the topological and algorithmic reconstruction of a smooth three dimensional scene starting from the visible part of an apparent contour. The authors focus their attention on the manipulation of apparent contours using a finite set of elementary moves, which correspond to diffeomorphic deformations of three dimensional scenes. A large part of the book is devoted to the algorithmic part, with implementations, experiments, and computed examples. The book is intended also as a user's guide to the software code appcontour, written for the manipulation of apparent contours and their invariants. This book is addressed to theoretical and applied scientists working in the field of mathematical models of image segmentation.

  6. Geometric Algorithms for Identifying and Reconstructing Galaxy Systems

    CERN Document Server

    Marinoni, C

    2010-01-01

    The theme of this book chapter is to discuss algorithms for identifying and reconstructing groups and clusters of galaxies out of the general galaxy distribution. I review the progress of detection techniques through time, from the very first visual-like algorithms to the most performant geometrical methods available today. This will allow readers to understand the development of the field as well as the various issues and pitfalls we are confronted with. This essay is drawn from a talk given by the author at the conference "The World a Jigsaw: Tessellations in the Sciences" held at the Lorentz Center in Leiden. It is intended for a broad audience of scientists (and so does not include full academic referencing), but it may be of interest to specialists.

  7. Cross-Correlation Delay Estimation Method Based on EMD Decomposition and Reconstruction Algorithm

    Institute of Scientific and Technical Information of China (English)

    路晓妹; 寇文珍; 段渭军

    2013-01-01

    Against the application background of anti-sniper acoustic detection and positioning systems, and to address the low accuracy and large relative errors of common correlation-based delay estimation, IMF reconstruction and denoising are applied to the estimation of the time delay of arrival (TDOA) in acoustic detection and positioning systems. For the selection of IMF components during reconstruction, a selection method based on the Pearson correlation coefficient is designed. By selecting IMF components for reconstruction and denoising and combining this with cross-correlation delay estimation, a cross-correlation delay estimation method based on EMD decomposition and reconstruction is formed. Simulation experiments on actually collected acoustic signals verify the validity of the method.
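    A minimal sketch of the pipeline, assuming the third-party PyEMD package (PyPI name EMD-signal) for the decomposition; the Pearson correlation threshold is illustrative:

    import numpy as np
    from PyEMD import EMD   # assumed third-party dependency (EMD-signal package)

    def denoise_emd(signal, corr_thresh=0.3):
        """Keep only IMFs that correlate with the raw signal, then recombine."""
        imfs = EMD().emd(signal)
        keep = [imf for imf in imfs
                if abs(np.corrcoef(imf, signal)[0, 1]) > corr_thresh]
        return np.sum(keep, axis=0) if keep else signal

    def estimate_delay(sig_a, sig_b, fs):
        """Cross-correlation peak of the denoised signals gives the delay."""
        a, b = denoise_emd(sig_a), denoise_emd(sig_b)
        xcorr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
        lag = np.argmax(xcorr) - (len(b) - 1)   # lag of sig_a relative to sig_b
        return lag / fs                          # delay in seconds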

  8. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    Science.gov (United States)

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms.

  9. An automated algorithm for the generation of dynamically reconstructed trajectories

    Science.gov (United States)

    Komalapriya, C.; Romano, M. C.; Thiel, M.; Marwan, N.; Kurths, J.; Kiss, I. Z.; Hudson, J. L.

    2010-03-01

    The lack of long enough data sets is a major problem in the study of many real world systems. As it has been recently shown [C. Komalapriya, M. Thiel, M. C. Romano, N. Marwan, U. Schwarz, and J. Kurths, Phys. Rev. E 78, 066217 (2008)], this problem can be overcome in the case of ergodic systems if an ensemble of short trajectories is available, from which dynamically reconstructed trajectories can be generated. However, this method has some disadvantages which hinder its applicability, such as the need for estimation of optimal parameters. Here, we propose a substantially improved algorithm that overcomes the problems encountered by the former one, allowing its automatic application. Furthermore, we show that the new algorithm not only reproduces the short term but also the long term dynamics of the system under study, in contrast to the former algorithm. To exemplify the potential of the new algorithm, we apply it to experimental data from electrochemical oscillators and also to analyze the well-known problem of transient chaotic trajectories.

  10. Three-dimensional CT image reconstruction based on accelerated splitting algorithm of ordered subsets

    Institute of Scientific and Technical Information of China (English)

    谌湘倩; 马绍惠; 须文波

    2016-01-01

    Since the computing time of statistical methods in CT (computed tomography) reconstruction is long, a three-dimensional CT image reconstruction method based on an accelerated splitting algorithm with ordered subsets (OS) is proposed. The method takes full advantage of the fast convergence of the augmented Lagrangian (AL) method for linearly constrained convex optimization under weak conditions. The weighted and regularized least-squares problems are solved with a linearized variant of the AL method. A separable quadratic surrogate function is used to replace the scaled quadratic AL penalty term, yielding a simple ordered-subsets accelerated splitting algorithm (OS-ASA) that avoids tedious parameter tuning and speeds up convergence. Experimental results show that the algorithm significantly accelerates the convergence of CT image reconstruction; when more subsets are used, OS artifacts in the reconstruction can be reduced.
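
    The paper's OS-ASA combines an augmented-Lagrangian splitting with ordered subsets; the ordered-subsets idea itself can be illustrated with a generic OS-SART-style update. This is a sketch under the assumption of a dense, nonnegative system matrix `A`, not the authors' full algorithm:

```python
import numpy as np

def os_sart(A, y, n_subsets=8, n_iters=10, lam=0.5):
    """Generic ordered-subsets SART-style iteration: cycle through row
    subsets, applying a normalized backprojection of the subset residual.
    Assumes a nonnegative system matrix A (m rays x n pixels)."""
    m, n = A.shape
    x = np.zeros(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for idx in subsets:          # one update per subset -> faster sweeps
            As = A[idx]
            row_sums = As.sum(axis=1)            # per-ray normalization
            col_sums = As.sum(axis=0)            # per-pixel normalization
            r = (y[idx] - As @ x) / np.maximum(row_sums, 1e-12)
            x += lam * (As.T @ r) / np.maximum(col_sums, 1e-12)
    return x
```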

  11. An efficient polyenergetic SART (pSART) reconstruction algorithm for quantitative myocardial CT perfusion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yuan, E-mail: yuan.lin@duke.edu; Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, 2424 Erwin Road, Suite 302, Durham, North Carolina 27705 (United States)

    2014-02-15

    Purpose: In quantitative myocardial CT perfusion imaging, beam hardening due to dense bone and highly concentrated iodinated contrast agent can result in visible artifacts and inaccurate CT numbers. In this paper, an efficient polyenergetic simultaneous algebraic reconstruction technique (pSART) is presented to eliminate the beam hardening artifacts and to improve quantitative CT imaging. Methods: Our algorithm makes three a priori assumptions: (1) the human body is composed of several base materials (e.g., fat, breast, soft tissue, bone, and iodine); (2) images can be coarsely segmented into two types of regions, i.e., nonbone regions and noniodine regions; and (3) each voxel can be decomposed into a mixture of the two most suitable base materials according to its attenuation value and its corresponding region type. Based on these assumptions, energy-independent accumulated effective lengths of all base materials can be computed quickly in the forward ray-tracing process and used repeatedly to obtain accurate polyenergetic projections, with which a SART-based equation can correctly update each voxel in the backprojection process to iteratively reconstruct artifact-free images. This approach effectively reduces the influence of polyenergetic x-ray sources and further enables monoenergetic images to be reconstructed at any arbitrarily preselected target energies. A series of simulation tests were performed on a size-variable cylindrical phantom and a realistic anthropomorphic thorax phantom. In addition, a phantom experiment was performed on a clinical CT scanner to further validate the proposed algorithm quantitatively. Results: The simulations with the cylindrical phantom and the anthropomorphic thorax phantom showed that the proposed algorithm completely eliminated beam hardening artifacts and enabled quantitative imaging across different materials, phantom sizes, and spectra, as the absolute relative errors were reduced

  12. Quasi-dense matching based on color Sift algorithm for 3D reconstruction

    Institute of Scientific and Technical Information of China (English)

    赵璐璐; 耿国华; 周明全; 王小凤

    2012-01-01

    Since the SIFT algorithm matches color images poorly under complex illumination, this paper proposes a quasi-dense stereo matching algorithm for uncalibrated images based on the Kubelka-Munk theory, which contributes to more accurate 3D reconstruction. First, the color invariants of each pixel in the color images are computed, color feature points are extracted, and a color SIFT feature descriptor is constructed for initial matching; the RANSAC robust algorithm then eliminates false matches to generate seed points. Next, based on the disparity constraint, an adaptive window method driven by the mean disparity gradient is proposed to adjust the search range. Finally, region growing is performed according to a best-first principle. Experiments show that the algorithm obtains satisfactory matching results and is an effective quasi-dense matching algorithm for 3D reconstruction.
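
    The seed-generation stage, plain grayscale SIFT plus RANSAC, can be sketched with OpenCV as below; the paper's Kubelka-Munk color invariants would replace the plain descriptors, and the file names are placeholders:

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder paths
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# initial matching with Lowe's ratio test
matcher = cv2.BFMatcher()
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
           if m.distance < 0.75 * n.distance]

# RANSAC on the epipolar geometry removes false matches; inliers become seeds
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
seeds = [(p1, p2) for p1, p2, ok in zip(pts1, pts2, mask.ravel()) if ok]
```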

  13. Optical tomography reconstruction algorithm with the finite element method: An optimal approach with regularization tools

    Energy Technology Data Exchange (ETDEWEB)

    Balima, O., E-mail: ofbalima@gmail.com [Département des Sciences Appliquées, Université du Québec à Chicoutimi, 555 bd de l’Université, Chicoutimi, QC, Canada G7H 2B1 (Canada); Favennec, Y. [LTN UMR CNRS 6607 – Polytech’ Nantes – La Chantrerie, Rue Christian Pauc, BP 50609 44 306 Nantes Cedex 3 (France); Rousse, D. [Chaire de recherche industrielle en technologies de l’énergie et en efficacité énergétique (t3e), École de technologie supérieure, 201 Boul. Mgr, Bourget Lévis, QC, Canada G6V 6Z3 (Canada)

    2013-10-15

    Highlights: • New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization. • Gradient filtering through an alternative inner product within the adjoint method. • An integral form of the cost function makes the reconstruction compatible with all finite element formulations, continuous and discontinuous. • A gradient-based algorithm with the adjoint method is used for the reconstruction. Abstract: Optical tomography is mathematically treated as a non-linear inverse problem in which the optical properties of the probed medium are recovered through minimization of the errors between the experimental measurements and their predictions with a numerical model at the locations of the detectors. Owing to the ill-posed behavior of the inverse problem, some regularization must be applied, and Tikhonov penalization is the most commonly used in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take into account the surfaces of the detectors and to make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Within a gradient-based algorithm in which the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed for preconditioning the reconstruction algorithm. Moreover, an appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with classical Tikhonov penalization. It is shown that both the re-parameterization and the use of the Sobolev cost-function gradient are efficient for solving such an ill-posed inverse problem.
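
    The "alternative inner product" amounts to replacing the raw L2 gradient g by a Sobolev-smoothed gradient g_s solving (I - alpha * Laplacian) g_s = g, which filters high-frequency noise out of the descent direction. A minimal periodic 1-D sketch via FFT, where alpha, the grid, and the toy gradient are illustrative assumptions:

```python
import numpy as np

def sobolev_gradient(g, alpha=1e-2, h=1.0):
    """Solve (I - alpha * d^2/dx^2) g_s = g on a periodic grid via FFT:
    in Fourier space the operator is diagonal with symbol 1 + alpha*k^2."""
    k = 2 * np.pi * np.fft.fftfreq(g.size, d=h)
    return np.real(np.fft.ifft(np.fft.fft(g) / (1.0 + alpha * k ** 2)))

# a noisy raw gradient gets smoothed before the descent step
g = np.sin(np.linspace(0, 2 * np.pi, 256)) + 0.5 * np.random.randn(256)
g_s = sobolev_gradient(g, alpha=0.05)
```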

  14. RECONSTRUCTION OF WELD POOL SURFACE BASED ON SHAPE FROM SHADING

    Institute of Scientific and Technical Information of China (English)

    DU Quanying; CHEN Shanben; LIN Tao

    2006-01-01

    A valid image-processing algorithm for weld pool surface reconstruction from an input image of the weld pool, based on shape from shading (SFS) in computer vision, is presented. The weld pool surface information is related to the backside weld width, which is crucial to the quality of the weld joint. The image of the weld pool is recorded with an optical sensing method. Firstly, the reflectance map model, which specifies the imaging process, is estimated. Then, the SFS-based weld pool surface reconstruction algorithm is implemented with an iteration scheme and accelerated by a hierarchical structure. The results indicate the accuracy and effectiveness of the approach.

  15. Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results

    Science.gov (United States)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle

    2013-01-01

    The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allow the aerodynamics to be decoupled from the assumed atmospheric properties, enabling enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.

  16. DEVELOPMENT OF ALGORITHMS OF NUMERICAL PROJECT OPTIMIZATION FOR THE CONSTRUCTION AND RECONSTRUCTION OF ENGINEERING STRUCTURES

    Directory of Open Access Journals (Sweden)

    MENEJLJUK О. І.

    2016-08-01

    Full Text Available Problem statement: The paper analyzes numerical optimization methods for projects of construction and reconstruction of engineering structures. Purpose: Possible ways of modeling organizational and technological solutions in construction are presented. Based on the analysis, the most effective optimization method, experimental-statistical modeling with the application of modern computer programs in the fields of project management and mathematical statistics, is selected. Conclusion: An algorithm for solving optimization problems by means of experimental-statistical modeling is developed.

  17. Prior Structural Information CT Reconstruction Algorithm Based on Variable Voltage

    Institute of Scientific and Technical Information of China (English)

    张雪英; 陈平; 潘晋孝

    2015-01-01

    To obtain higher-quality reconstructed CT images, a variable-voltage CT imaging method based on structural priors is established. For each pixel, the effective projection data that satisfy the optimal gray-level band under the different voltages are combined in an iterative reconstruction, yielding a reconstruction for the progressively varied voltage. By setting a threshold, the reconstruction from the low-voltage projection data is divided into an edge portion and a non-edge portion, and the edge structure is applied as a prior to the reconstruction from the high-voltage projection data in order to make up for the information missing in variable-voltage reconstruction. Simulation experiments show that this method can obtain the complete information of the workpiece, that the quality of the reconstructed image is higher, and that the pixel values are more stable.

  18. Quantitatively assessed CT imaging measures of pulmonary interstitial pneumonia: Effects of reconstruction algorithms on histogram parameters

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Hisanobu [Department of Radiology, Hyogo Kaibara Hospital, 5208-1 Kaibara, Kaibara-cho, Tanba 669-3395 (Japan)], E-mail: hisanobu19760104@yahoo.co.jp; Ohno, Yoshiharu [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: yosirad@kobe-u.ac.jp; Yamazaki, Youichi [Department of Medical Physics and Engineering, Faculty of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita 565-0871 (Japan)], E-mail: y.yamazk@sahs.med.osaka-u.ac.jp; Nogami, Munenobu [Division of PET, Institute of Biomedical Research and Innovation, 2-2 MInamimachi, Minatojima, Chu0-ku, Kobe 650-0047 (Japan)], E-mail: aznogami@fbri.org; Kusaka, Akiko [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: a.kusaka@hosp.kobe-u.ac.jp; Murase, Kenya [Department of Medical Physics and Engineering, Faculty of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita 565-0871 (Japan)], E-mail: murase@sahs.med.osaka-u.ac.jp; Sugimura, Kazuro [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: sugimura@med.kobe-u.ac.jp

    2010-04-15

    This study assessed the influence of the reconstruction algorithm on quantitative measures in interstitial pneumonia patients. A total of 25 collagen vascular disease patients (nine male and 16 female; mean age, 57.2 years; age range, 32-77 years) underwent thin-section MDCT examinations, and the MDCT data were reconstructed with three reconstruction algorithms (two high-frequency [A and B] and one standard [C]). In reconstruction algorithm B, the effect of low- and middle-frequency space was suppressed compared with reconstruction algorithm A. As quantitative CT parameters, kurtosis, skewness, and mean lung density (MLD) were acquired from a frequency histogram of the whole lung parenchyma for each reconstruction algorithm. To determine the differences in quantitative CT parameters caused by the reconstruction algorithms, these parameters were compared statistically. To determine the relationships with disease severity, these parameters were correlated with PFTs. In the results, all histogram parameter values differed significantly from each other (p < 0.0001), and those of reconstruction algorithm C were the highest. All MLDs had fair or moderate correlation with all parameters of PFT (-0.64 < r < -0.45, p < 0.05). Although kurtosis and skewness in high-frequency reconstruction algorithm A had significant correlations with all parameters of PFT (-0.61 < r < -0.45, p < 0.05), there were significant correlations only with diffusing capacity of carbon monoxide (DLco) and total lung capacity (TLC) in reconstruction algorithm C, and with forced expiratory volume in 1 s (FEV1), DLco, and TLC in reconstruction algorithm B. In conclusion, the reconstruction algorithm influences quantitative assessments on chest thin-section MDCT examinations in interstitial pneumonia patients.
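
    The three histogram parameters used in the study (kurtosis, skewness, mean lung density) are straightforward to compute from segmented lung voxels; a sketch with SciPy, where `lung_hu` is an assumed 1-D array of lung-parenchyma CT numbers in HU:

```python
import numpy as np
from scipy import stats

def histogram_parameters(lung_hu):
    """Kurtosis, skewness, and mean lung density (MLD) of a lung CT histogram."""
    return {
        "kurtosis": stats.kurtosis(lung_hu),  # excess (Fisher) kurtosis
        "skewness": stats.skew(lung_hu),
        "MLD": float(np.mean(lung_hu)),       # mean lung density in HU
    }

lung_hu = np.random.normal(-850, 50, 100_000)  # toy stand-in for lung voxels
print(histogram_parameters(lung_hu))
```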

  19. Rapidly 3D Texture Reconstruction Based on Oblique Photography

    Directory of Open Access Journals (Sweden)

    ZHANG Chunsen

    2015-07-01

    Full Text Available This paper proposes a fast city-texture reconstruction method based on oblique aerial images for reconstructing three-dimensional city models. Based on photogrammetry and computer vision theory, and using a digital surface model of the city buildings obtained in advance, the geometric projection between object and image space is computed through the collinearity equations to obtain the three-dimensional structure and texture information; an optimization algorithm then selects the optimal texture for each object surface, enabling automatic extraction of building facade textures and occlusion handling for densely built-up areas. Real-image texture reconstruction results show that the method offers a high degree of automation, vivid results, and low cost, providing an effective means for the rapid and widespread reconstruction of real textures for 3D city models.

  20. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    Science.gov (United States)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by preset light fields forming a determined discrete Fourier transform (DFT) measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of this DFT-based computational ghost imaging algorithm (FGI) is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix; the reconstruction process and reconstruction error are analyzed, and simulations verify the theoretical analysis. When the number of sampling measurements is close to the number of object pixels, the rank of the DFT matrix equals that of the random measurement matrix, the PSNRs of the FGI and PGI reconstructions are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI reconstructions. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, while the PSNRs of the PGI and CGI reconstructions decrease sharply. The reconstruction time of FGI is lower than that of the other algorithms and is not affected by the number of sampling measurements. FGI can effectively filter out random white noise through a low-pass filter and realize reconstruction denoising with a higher denoising capability than that of the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
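
    The core of the scheme described above, illuminating with deterministic cosine/sine patterns (the rows of a real DFT basis) and inverting with the pseudo-inverse, is easy to sketch for a small 1-D object. Sizes and noise level are illustrative assumptions, and the toy is a numerical stand-in rather than a physical light field:

```python
import numpy as np

n = 64                                    # object pixels
x = np.zeros(n); x[20:28] = 1.0           # toy 1-D object
j = np.arange(n)

# preset cosine/sine illumination patterns = rows of a real DFT basis (n x n)
cos_rows = [np.cos(2 * np.pi * k * j / n) for k in range(n // 2 + 1)]
sin_rows = [np.sin(2 * np.pi * k * j / n) for k in range(1, n // 2)]
A = np.array(cos_rows + sin_rows)         # full rank for even n

y = A @ x + 0.01 * np.random.randn(n)     # bucket-detector measurements
x_hat = np.linalg.pinv(A) @ y             # pseudo-inverse reconstruction
```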

  1. Markov chain Monte Carlo (MCMC) based image reconstruction algorithm for electrical capacitance tomography

    Institute of Scientific and Technical Information of China (English)

    叶佳敏; 彭黎辉

    2012-01-01

    An image reconstruction algorithm based on a statistical model for electrical capacitance tomography (ECT) is proposed. The prior probability of the permittivity distribution is given in the form of a Markov random field, and the likelihood function is obtained from the linear ECT model. Using MCMC sampling, with the transition kernel of the Markov chain obtained by the Metropolis-Hastings method, the posterior distribution of the permittivity is estimated; nested iteration is introduced to improve the computational efficiency. Simulation results show that the nested-iteration MCMC method can increase the computation speed significantly and provide reconstruction images of higher quality if a proper regularization parameter is used. The MCMC-based method offers a new way to solve the ECT image reconstruction problem.
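
    A minimal random-walk Metropolis-Hastings sampler of the kind described, with a Gaussian likelihood from a linear forward model plus a first-order smoothness (MRF-like) prior, can be sketched as follows. The forward matrix `S`, noise level, prior weight, and step size are illustrative assumptions, not the paper's ECT model:

```python
import numpy as np

def mh_sample(S, c, n_steps=20_000, sigma=0.01, beta=10.0, step=0.05):
    """Random-walk Metropolis-Hastings over a permittivity vector x.
    log-posterior = -||c - S x||^2/(2 sigma^2) - beta * sum (x_i - x_{i+1})^2."""
    n = S.shape[1]

    def logp(x):
        fit = -np.sum((c - S @ x) ** 2) / (2 * sigma ** 2)
        prior = -beta * np.sum(np.diff(x) ** 2)   # MRF-style smoothness prior
        return fit + prior

    x, lp = np.full(n, 0.5), None
    lp = logp(x)
    samples = []
    for _ in range(n_steps):
        prop = x + step * np.random.randn(n)      # symmetric proposal
        lp_prop = logp(prop)
        if np.log(np.random.rand()) < lp_prop - lp:   # MH acceptance test
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.mean(samples[n_steps // 2:], axis=0)    # posterior mean after burn-in

# toy demo with a random linear forward model
S = np.random.rand(30, 16)
x_true = np.linspace(0.2, 0.8, 16)
c = S @ x_true + 0.01 * np.random.randn(30)
x_est = mh_sample(S, c, n_steps=5_000)
```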

  2. HISTORY BASED PROBABILISTIC BACKOFF ALGORITHM

    Directory of Open Access Journals (Sweden)

    Narendran Rajagopalan

    2012-01-01

    Full Text Available The performance of a wireless LAN can be improved at each layer of the protocol stack with respect to energy efficiency. The media access control layer is responsible for key functions such as access control and flow control. During contention, a backoff algorithm is used to gain access to the medium with minimum probability of collision. After studying different variations of backoff algorithms that have been proposed, a new variant called the History Based Probabilistic Backoff algorithm is proposed. Mathematical analysis and simulation results using NS-2 show that the proposed History Based Probabilistic Backoff algorithm performs better than the Binary Exponential Backoff algorithm.

  3. Validation of an algorithm for planar surgical resection reconstruction

    Science.gov (United States)

    Milano, Federico E.; Ritacco, Lucas E.; Farfalli, Germán L.; Aponte-Tinao, Luis A.; González Bernaldo de Quirós, Fernán; Risk, Marcelo

    2012-02-01

    Surgical planning followed by computer-assisted intraoperative navigation in orthopaedic oncology for tumor resection has given acceptable results in the last few years. However, the accuracy of preoperative planning and navigation is not yet clear. The aim of this study is to validate a method capable of reconstructing the nearly planar surface generated by the cutting saw in the surgical specimen taken from the patient during the resection procedure. The method estimates an angular and offset deviation that serves as a clinically useful measure of resection accuracy. The validation process targets the degree to which the automatic estimation is true, taking as a validation criterion the accuracy of the estimation algorithm. For this purpose, a manually estimated gold standard (a bronze standard) data set was built by an expert surgeon. The results show that the manual and automatic methods consistently provide similar measures.
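
    The geometric core of such a method, fitting a plane to points sampled on the cut surface and comparing it with the planned cut, reduces to an SVD plane fit. A sketch, where the planned plane (normal and point) and the sampled point cloud are placeholder inputs:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through an (m, 3) point cloud:
    returns (centroid, unit normal = direction of least variance)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[-1]

def resection_deviation(pts, n_plan, p_plan):
    """Angular (deg) and offset deviation of the achieved cut plane
    relative to a planned plane through p_plan with unit normal n_plan."""
    c, n_fit = fit_plane(pts)
    ang = np.degrees(np.arccos(np.clip(abs(n_fit @ n_plan), 0.0, 1.0)))
    off = abs((c - p_plan) @ n_plan)   # distance along the planned normal
    return ang, off
```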

  4. Model-based tomographic reconstruction

    Science.gov (United States)

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  5. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT

    Energy Technology Data Exchange (ETDEWEB)

    Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Mascolo-Fortin, Julia, E-mail: julia.mascolo-fortin.1@ulaval.ca [Département de physique, de génie physique et d’optique, Université Laval, Québec, Québec G1V 0A6 (Canada); Goussard, Yves, E-mail: yves.goussard@polymtl.ca [Département de génie électrique/Institut de génie biomédical, École Polytechnique de Montréal, C.P. 6079, succ. Centre-ville, Montréal, Québec H3C 3A7 (Canada); Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca [Département de physique, de génie physique et d’optique and Centre de recherche sur le cancer, Université Laval, Québec, Québec G1V 0A6, Canada and Département de radio-oncologie and Centre de recherche du CHU de Québec, Québec, Québec G1R 2J6 (Canada)

    2015-11-15

    Purpose: The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. Methods: This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. Results: The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. Conclusions: The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can

  6. An edge-preserving algorithm of joint image restoration and volume reconstruction for rotation-scanning 4D echocardiographic images

    Institute of Scientific and Technical Information of China (English)

    GUO Qiang; YANG Xin

    2006-01-01

    A statistical algorithm for reconstruction from time-sequence echocardiographic images is proposed in this paper. The ability to jointly restore the images and reconstruct the 3D volume without blurring the boundary is the main innovation of this algorithm. First, a Bayesian model based on MAP-MRF is used to reconstruct the 3D volume and is extended to deal with images acquired by the rotation-scanning method. Then, the spatiotemporal nature of ultrasound images is taken into account in the parameters of the energy function, which makes the statistical model anisotropic. Hence this method can not only reconstruct 3D ultrasound images but also remove speckle noise anisotropically. Finally, we illustrate experiments of our method on synthetic and medical images and compare it with an isotropic reconstruction method.

  7. Some Nonlinear Reconstruction Algorithms for Electrical Impedance Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Berryman, J G

    2001-03-09

    An impedance camera [Henderson and Webster, 1978; Dines and Lytle, 1981]--or what is now more commonly called electrical impedance tomography--attempts to image the electrical impedance (or just the conductivity) distribution inside a body using electrical measurements on its boundary. The method has been used successfully in both biomedical [Brown, 1983; Barber and Brown, 1986; J. C. Newell, D. G. Gisser, and D. Isaacson, 1988; Webster, 1990] and geophysical applications [Wexler, Fry, and Neuman, 1985; Daily, Lin, and Buscheck, 1987], but the analysis of optimal reconstruction algorithms is still progressing [Murai and Kagawa, 1985; Wexler, Fry, and Neuman, 1985; Kohn and Vogelius, 1987; Yorkey and Webster, 1987; Yorkey, Webster, and Tompkins, 1987; Berryman and Kohn, 1990; Kohn and McKenney, 1990; Santosa and Vogelius, 1990; Yorkey, 1990]. The most common application is monitoring the influx or efflux of a highly conducting fluid (such as brine in a porous rock or blood in the human body) through the volume being imaged. For biomedical applications, this method does not have the resolution of radiological methods, but it is comparatively safe and inexpensive and therefore provides a valuable alternative when continuous monitoring of a patient or process is desired. The following discussion is intended first to summarize the physics of electrical impedance tomography, then to provide a few details of the data analysis and forward modeling requirements, and finally to outline some of the reconstruction algorithms that have proven most useful in practice. Pointers to the literature are provided throughout this brief narrative, and the reader is encouraged to explore the references for more complete discussions of the various issues raised here.

  8. Inference-Based Surface Reconstruction of Cluttered Environments

    KAUST Repository

    Biggers, K.

    2012-08-01

    We present an inference-based surface reconstruction algorithm that is capable of identifying objects of interest among a cluttered scene, and reconstructing solid model representations even in the presence of occluded surfaces. Our proposed approach incorporates a predictive modeling framework that uses a set of user-provided models for prior knowledge, and applies this knowledge to the iterative identification and construction process. Our approach uses a local to global construction process guided by rules for fitting high-quality surface patches obtained from these prior models. We demonstrate the application of this algorithm on several example data sets containing heavy clutter and occlusion. © 2012 IEEE.

  9. A parallel stereo reconstruction algorithm with applications in entomology (APSRA)

    Science.gov (United States)

    Bhasin, Rajesh; Jang, Won Jun; Hart, John C.

    2012-03-01

    We propose a fast parallel algorithm for the reconstruction of 3-dimensional point clouds of insects from binocular stereo image pairs using a hierarchical approach for disparity estimation. Entomologists study various features of insects to classify them, build their distribution maps, and discover genetic links between specimens, among various other essential tasks. This information is important to the pesticide and pharmaceutical industries, among others. When considering the large collections of insects entomologists analyze, it becomes difficult to physically handle the entire collection and share the data with researchers across the world. With the method presented in our work, entomologists can create an image database for their collections and use the 3D models for studying the shape and structure of the insects, making the collections easier to maintain and share. Initial feedback shows that the reconstructed 3D models preserve the shape and size of the specimens. We further optimize our results to incorporate multiview stereo, which produces a better overall structure of the insects. Our main contribution is applying stereoscopic vision techniques to entomology to solve the problems faced by entomologists.
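
    While the paper's hierarchical parallel method is its own contribution, the basic binocular disparity estimation it accelerates can be sketched with OpenCV's block matcher; the file names and parameters are placeholders, and the images are assumed rectified:

```python
import cv2

imgL = cv2.imread("insect_left.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
imgR = cv2.imread("insect_right.png", cv2.IMREAD_GRAYSCALE)

# block matching on a rectified pair; OpenCV returns fixed-point disparity (x16)
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(imgL, imgR).astype("float32") / 16.0

# depth follows as f * B / disparity for focal length f and baseline B,
# both taken from the stereo calibration of the rig
```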

  10. Applicability of a set of tomographic reconstruction algorithms for quantitative SPECT on irradiated nuclear fuel assemblies

    Science.gov (United States)

    Jacobsson Svärd, Staffan; Holcombe, Scott; Grape, Sophie

    2015-05-01

    A fuel assembly operated in a nuclear power plant typically contains 100-300 fuel rods, depending on fuel type, which become strongly radioactive during irradiation in the reactor core. For operational and security reasons, it is of interest to experimentally deduce rod-wise information from the fuel, preferably by means of non-destructive measurements. The tomographic SPECT technique offers such possibilities through its two-step application: (1) recording the gamma-ray flux distribution around the fuel assembly, and (2) reconstructing the assembly's internal source distribution based on the recorded radiation field. In this paper, algorithms for performing the latter step and extracting quantitative relative rod-by-rod data are accounted for. Compared with the application of SPECT in nuclear medicine, nuclear fuel assemblies present a much more heterogeneous distribution of internal attenuation to gamma radiation than the human body, typically with rods containing pellets of heavy uranium dioxide surrounded by cladding of a zirconium alloy placed in water or air. This inhomogeneity severely complicates the tomographic quantification of the rod-wise relative source content, and the deduction of conclusive data requires detailed modelling of the attenuation to be introduced in the reconstructions. However, as shown in this paper, simplified models may still produce valuable information about the fuel. Here, a set of reconstruction algorithms for SPECT on nuclear fuel assemblies is described and discussed in terms of quantitative performance for two applications: verification of fuel assemblies' completeness in nuclear safeguards, and rod-wise fuel characterization. It is argued that a request not to base the former assessment on any a priori information constrains which reconstruction methods may be used in that case, whereas the use of a priori information on geometry and material content enables highly accurate quantitative assessment, which

  11. First-order convex feasibility algorithms for iterative image reconstruction in limited angular-range X-ray CT

    CERN Document Server

    Sidky, Emil Y; Pan, Xiaochuan

    2012-01-01

    Iterative image reconstruction (IIR) algorithms in Computed Tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates the design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this article, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for efficient algorithms for their solution -- thereby facilitating the IIR algorithm design process. An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex fea...

  12. Image quality evaluation of iterative CT reconstruction algorithms: a perspective from spatial domain noise texture measures

    Science.gov (United States)

    Pachon, Jan H.; Yadava, Girijesh; Pal, Debashish; Hsieh, Jiang

    2012-03-01

    Non-linear iterative reconstruction (IR) algorithms have shown promising improvements in image quality at reduced dose levels. However, IR images may sometimes be perceived as having a different image noise texture than traditional filtered back projection (FBP) reconstructions. Standard linear-systems-based image quality metrics are limited in characterizing such textural differences and the non-linear image-quality vs. dose trade-off behavior, and hence limited in predicting the potential impact of such texture differences on diagnostic tasks. In an attempt to objectively characterize and measure dose-dependent image noise texture and the statistical properties of IR and FBP images, we investigated higher-order moments and Haralick's gray-level co-occurrence matrix (GLCM) texture features on phantom images reconstructed by an iterative and a traditional FBP method. In this study, the first four central moments, and multiple Haralick GLCM texture features in 4 directions, at 6 different ROI sizes and four dose levels, were computed. For resolution, noise, and texture trade-off analysis, the spatial-frequency-domain NPS and contrast-dependent MTF were also computed. Preliminary results of the study indicate that higher-order moments, along with spatial-domain measures of energy, contrast, correlation, homogeneity, and entropy, consistently capture the textural differences between FBP and IR as dose changes. These metrics may be useful in describing the perceptual differences in randomness, coarseness, contrast, and smoothness of images reconstructed by non-linear algorithms.
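
    The GLCM texture features named above can be computed with scikit-image. A sketch, where `roi` is an assumed 8-bit grayscale region of interest; the four directions and distance-1 offsets are illustrative choices consistent with the description:

```python
import numpy as np
from scipy import stats
from skimage.feature import graycomatrix, graycoprops

roi = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # toy stand-in ROI

# GLCM in 4 directions (0, 45, 90, 135 degrees) at pixel distance 1
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

features = {p: graycoprops(glcm, p).mean()     # averaged over directions
            for p in ("energy", "contrast", "correlation", "homogeneity")}
p = glcm + 1e-12                               # entropy per direction, then mean
features["entropy"] = float(-np.sum(p * np.log2(p), axis=(0, 1)).mean())

# first four central-order moments of the pixel histogram
moments = (roi.mean(), roi.var(),
           stats.skew(roi.ravel()), stats.kurtosis(roi.ravel()))
```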

  13. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Full Text Available Recent technological developments have resulted in surveillance video becoming a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; therefore, creating effective policies and applying useful methods to the retrieval of additional evidence are becoming increasingly important. However, surveillance video has its failings, namely footage captured at low resolution (LR) and with bad visual quality. In this paper, we discuss the characteristics of surveillance video and combine manual feature registration, maximum a posteriori estimation, and projection onto convex sets to develop a super-resolution reconstruction method that improves the quality of surveillance video. With this method, we can make optimal use of the information contained in the LR video images while also controlling the image edges clearly as well as the convergence of the algorithm. Finally, we suggest how to adjust the adaptability of the algorithm by analyzing prior information about the target image.

  14. Haplotyping a single triploid individual based on genetic algorithm.

    Science.gov (United States)

    Wu, Jingli; Chen, Xixi; Li, Xianchen

    2014-01-01

    The minimum error correction model is an important combinatorial model for haplotyping a single individual. In this article, the triploid individual haplotype reconstruction problem is studied using this model. A genetic algorithm based method, GTIHR, is presented for reconstructing the triploid individual haplotype. A novel coding method and an effective hill-climbing operator are introduced for the GTIHR algorithm. The relatively short chromosome code leads to a smaller solution space, which plays a positive role in speeding up the convergence process. The hill-climbing operator ensures that algorithm GTIHR converges to a good solution quickly while preventing premature convergence. The experimental results prove that algorithm GTIHR can be implemented efficiently and achieves a higher reconstruction rate than previous algorithms.

  15. Conductivity and current density image reconstruction using harmonic Bz algorithm in magnetic resonance electrical impedance tomography.

    Science.gov (United States)

    Oh, Suk Hoon; Lee, Byung Il; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyoung; Kwon, Ohin; Seo, Jin Keun

    2003-10-07

    Magnetic resonance electrical impedance tomography (MREIT) aims to provide cross-sectional images of the conductivity distribution σ of a subject. While injecting current into the subject, we measure one component Bz of the induced magnetic flux density B = (Bx, By, Bz) using an MRI scanner. Based on the relation between ∇²Bz and ∇σ, the harmonic Bz algorithm reconstructs an image of σ using the measured Bz data from multiple imaging slices. After obtaining σ, we can reconstruct images of current density distributions for any given current injection method. Following the description of the harmonic Bz algorithm, this paper presents reconstructed conductivity and current density images from computer simulations and phantom experiments using four recessed electrodes injecting six different currents of 26 mA. For the experimental results, we used a three-dimensional saline phantom with two polyacrylamide objects inside, and our 0.3 T (tesla) experimental MRI scanner to measure the induced Bz. Using the harmonic Bz algorithm, we could reconstruct conductivity and current density images with 82 x 82 pixels; the pixel size was 0.6 x 0.6 mm². The relative L2 errors of the reconstructed images were between 13.8 and 21.5% when the signal-to-noise ratio (SNR) of the corresponding MR magnitude images was about 30. The results suggest that in vitro and in vivo experimental studies with animal subjects are feasible. Further studies are needed to reduce the amount of injection current to less than 1 mA for human subjects.
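
    The relation underlying the harmonic Bz algorithm (rendered as "(inverted delta)2 Bz" in the scraped record above) follows from Ampère's law with J = -σ∇u. In the commonly cited form, stated here as background rather than a quotation from this paper, it reads:

```latex
\nabla^2 B_z \;=\; \mu_0 \left(\nabla\sigma \times \nabla u\right)_z
\;=\; \mu_0 \left(
  \frac{\partial\sigma}{\partial x}\frac{\partial u}{\partial y}
  - \frac{\partial\sigma}{\partial y}\frac{\partial u}{\partial x}
\right),
```

    where u is the electric potential induced by the injected current; the measured ∇²Bz thus carries direct information about ∇σ, which the algorithm inverts slice by slice.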

  16. An Algorithm for Automated Reconstruction of Particle Cascades in High Energy Physics Experiments

    CERN Document Server

    Actis, O; Henrichs, A; Hinzmann, A; Kirsch, M; Müller, G; Steggemann, J

    2008-01-01

    We present an algorithm for reconstructing particle cascades from event data of a high energy physics experiment. For a given physics process, the algorithm reconstructs all possible configurations of the cascade from the final state objects. We describe the procedure as well as examples of physics processes of different complexity studied at hadron-hadron colliders. We estimate the performance of the algorithm at 20 microseconds per reconstructed decay vertex and 0.6 kByte per reconstructed particle in the decay trees.

  17. Coral Reef environment reconstruction using small drones, new generation photogrammetry algorithms and satellite imagery

    Science.gov (United States)

    Elisa, Casella; Rovere, Alessio; Harris, Daniel; Parravicini, Valeriano

    2016-04-01

    Surveys based on Remotely Piloted Aircraft Systems (RPAS), together with new-generation Structure from Motion (SfM) and Multi-View Stereo (MVS) reconstruction algorithms, have been employed to reconstruct the shallow bathymetry of the inner lagoon of a coral reef in Moorea, French Polynesia. This technique has already been used with a high rate of success in coastal environments (e.g., sandy beaches and rocky shorelines), reaching accuracies of the final digital elevation model on the order of a few centimeters. The application of such techniques to reconstruct shallow underwater environments is, though, still little reported. We then used the bathymetric dataset obtained from the aerial pictures as ground truth for the relative bathymetry obtained from satellite imagery (WorldView-2) of a larger area within the same study site. The first results of our work suggest that RPAS coupled with SfM and MVS algorithms can be used to reconstruct shallow-water environments under favorable weather conditions, and can be employed to ground-truth satellite imagery.

  18. Source reconstruction technique for slot array antennas using the Gerchberg-Papoulis algorithm

    OpenAIRE

    Sano, Makoto; Sierra Castañer, Manuel; Salmerón Ruiz, Tamara; Hirokawa, Jiro; Ando, Makoto

    2014-01-01

    A source reconstruction technique for slot array antennas is presented. By exploiting the information about the positions and the polarizations of the slots in the Gerchberg-Papoulis iterative algorithm, the field on the slots is accurately reconstructed. The proposed technique is applied to the source reconstruction of a K-band radial line slot antenna (RLSA), and the simulated and measured results are presented.

  19. DART: a robust algorithm for fast reconstruction of three-dimensional grain maps

    DEFF Research Database (Denmark)

    Batenburg, K.J.; Sijbers, J.; Poulsen, Henning Friis;

    2010-01-01

    classical tomography. To test the properties of the algorithm, three-dimensional X-ray diffraction microscopy data are simulated and reconstructed with DART as well as by a conventional iterative technique, namely SIRT (simultaneous iterative reconstruction technique). For 100 × 100 pixel reconstructions...

  1. Cosmic Web Reconstruction through Density Ridges: Method and Algorithm

    CERN Document Server

    Chen, Yen-Chi; Freeman, Peter E; Genovese, Christopher R; Wasserman, Larry

    2015-01-01

    The detection and characterization of filamentary structures in the cosmic web allows cosmologists to constrain parameters that dictate the evolution of the Universe. While many filament estimators have been proposed, they generally lack estimates of uncertainty, reducing their inferential power. In this paper, we demonstrate how one may apply the Subspace Constrained Mean Shift (SCMS) algorithm (Ozertem and Erdogmus (2011); Genovese et al. (2012)) to uncover filamentary structure in galaxy data. The SCMS algorithm is a gradient ascent method that models filaments as density ridges, one-dimensional smooth curves that trace high-density regions within the point cloud. We also demonstrate how augmenting the SCMS algorithm with bootstrap-based methods of uncertainty estimation allows one to place uncertainty bands around putative filaments. We apply the SCMS method to datasets sampled from the P3M N-body simulation, with galaxy number densities consistent with SDSS and WFIRST-AFTA, and to LOWZ and CMASS data fro...

  2. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Justin, E-mail: justin.solomon@duke.edu [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 and Departments of Biomedical Engineering and Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were performed for each background type, and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms, with edge pixels showing higher noise magnitude compared to pixels in more homogeneous regions. For pixels in uniform regions, noise magnitude was

  3. Fast dictionary-based reconstruction for diffusion spectrum imaging.

    Science.gov (United States)

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar

    2013-11-01

    Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially prolonged imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA) and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using a pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of the dictionary-based CS algorithm.
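
    The second technique described, a pseudoinverse with Tikhonov regularization against a dictionary, has a closed form. A sketch, where the dictionary `D` (columns = training pdfs or atoms), the undersampling operator `S`, and the regularization weight are illustrative assumptions:

```python
import numpy as np

def tikhonov_dictionary_recon(D, S, y, lam=0.1):
    """Recover a full q-space pdf p = D a from undersampled samples y = S p.
    The coefficients a solve min ||y - S D a||^2 + lam ||a||^2, whose
    closed-form Tikhonov solution is computed below."""
    M = S @ D                                  # undersampled dictionary
    a = np.linalg.solve(M.T @ M + lam * np.eye(D.shape[1]), M.T @ y)
    return D @ a

# toy sizes: 512-sample pdfs, 100 dictionary atoms, 128 q-space measurements
D = np.random.randn(512, 100)
S = np.eye(512)[np.random.choice(512, 128, replace=False)]
p_true = D @ np.random.randn(100)
p_hat = tikhonov_dictionary_recon(D, S, S @ p_true, lam=0.05)
```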

  4. Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images.

    Science.gov (United States)

    Maiti, Abhik; Chakravarty, Debashish

    2016-01-01

    3D reconstruction of geo-objects from their digital images is a time-efficient and convenient way of studying the structural features of the object being modelled. This paper presents a 3D reconstruction methodology which can be used to generate photo-realistic, watertight 3D surfaces of irregularly shaped objects from digital image sequences of the objects. The 3D reconstruction approach described here is robust and simple and can be readily used to reconstruct a watertight 3D surface of any object from its digital image sequence. Digital images of different objects are used to build sparse, and then dense, 3D point clouds of the objects. These image-derived point clouds are then used to generate photo-realistic 3D surfaces using different surface reconstruction algorithms, such as Poisson reconstruction and the ball-pivoting algorithm. Control parameters of these algorithms that affect the quality and computation time of the reconstructed 3D surface are identified, and their effects on surface generation from point clouds of different density are studied. It is shown that the surface quality of Poisson reconstruction depends significantly on the samples-per-node (SN) parameter, with greater SN values resulting in better-quality surfaces. The quality of the 3D surface generated by the ball-pivoting algorithm is found to depend strongly on the clustering radius and angle threshold values. The results obtained from this study give readers valuable insight into the effects of the different control parameters on the reconstructed surface quality.
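
    Both surface reconstruction algorithms compared in the paper are available in Open3D; a sketch of driving them from a point cloud. Open3D exposes only a subset of the parameters studied (octree depth for Poisson, ball radii for ball pivoting; samples-per-node belongs to the original Poisson implementation), and the input file is a placeholder:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("object.ply")    # placeholder path
pcd.estimate_normals()                         # both methods need normals

# Poisson reconstruction (octree depth controls resolution)
mesh_p, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# ball pivoting with a few radii derived from the mean neighbor spacing
d = np.mean(pcd.compute_nearest_neighbor_distance())
radii = o3d.utility.DoubleVector([1.5 * d, 3.0 * d])
mesh_b = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
    pcd, radii)
```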

  5. Filtered gradient compressive sensing reconstruction algorithm for sparse and structured measurement matrices

    Science.gov (United States)

    Mejia, Yuri H.; Arguello, Henry

    2016-05-01

    The compressive sensing state of the art proposes random Gaussian and Bernoulli matrices as measurement matrices. Nevertheless, the design of the measurement matrix is often subject to physical constraints, and therefore it is frequently not possible for the matrix to follow a Gaussian or Bernoulli distribution. Examples of these limitations are the structured and sparse matrices of compressive X-ray and compressive spectral imaging systems. A standard algorithm for recovering sparse signals consists in minimizing an objective function that includes a quadratic error term combined with a sparsity-inducing regularization term. This problem can be solved using iterative algorithms for solving linear inverse problems. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity. However, current algorithms are slow to reach high-quality image reconstructions because they do not exploit the structured and sparsity characteristics of the compressive measurement matrices. This paper proposes a gradient-based algorithm for compressive sensing reconstruction that includes a filtering step, yielding improved quality in fewer iterations. The algorithm modifies the iterative solution such that it is forced to converge to a filtered version of A^T y, where y is the measurement vector and A is the compressive measurement matrix. We show that the algorithm including the filtering step converges faster than the unfiltered version. We design various filters that are motivated by the structure of A^T y. Extensive simulation results using various sparse and structured matrices highlight the relative performance gain over the existing iterative process.
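
    The general idea, a proximal-gradient (ISTA-style) iteration whose residual backprojection A^T(y - Ax) passes through a filter before the update, can be sketched as follows. The simple 3-tap low-pass kernel and step size are illustrative assumptions, not the authors' designed filters:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the L1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def filtered_ista(A, y, lam=0.05, n_iter=100, kernel=(0.25, 0.5, 0.25)):
    """ISTA with a smoothing filter applied to the gradient A^T(y - Ax)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (y - A @ x)                    # residual backprojection
        g = np.convolve(g, kernel, mode="same")  # the filtering step
        x = soft(x + step * g, step * lam)       # proximal (shrinkage) step
    return x
```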

  6. Evaluation of a new reconstruction algorithm for x-ray phase-contrast imaging

    Science.gov (United States)

    Seifert, Maria; Hauke, Christian; Horn, Florian; Lachner, Sebastian; Ludwig, Veronika; Pelzer, Georg; Rieger, Jens; Schuster, Max; Wandner, Johannes; Wolf, Andreas; Michel, Thilo; Anton, Gisela

    2016-04-01

    X-ray grating-based phase-contrast imaging might open up entirely new opportunities in medical imaging. However, when transferring the interferometer technique from laboratory setups to conventional imaging systems, the necessary rigidity of the system is difficult to achieve, and vibrations or distortions of the system lead to inaccuracies in the phase-stepping procedure. Given insufficient stability of the phase-step positions, artifacts have so far occurred in phase-contrast images, lowering the image quality. This is a problem with regard to the intended use of phase-contrast imaging in clinical routine, as, for example, tiny structures of the human anatomy cannot be observed. In this contribution we evaluate an algorithm proposed by Vargas et al. [1] and applied to X-ray imaging by Pelzer et al., which enables us to reconstruct a differential phase-contrast image without knowledge of the specific phase-step positions. This method was tested against the standard reconstruction by Fourier analysis. The quality of the phase-contrast images remains stable even if the phase-step positions are completely unknown and not uniformly distributed. To also obtain attenuation and dark-field images, the proposed algorithm has been combined with a further algorithm of Vargas et al. [3], with which the phase-step positions can be reconstructed. With the proper phase-step positions it is possible to obtain the phase, the amplitude, and the offset of the measured data. We evaluated this algorithm on measurements of thick objects with high absorption.
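
    The "standard reconstruction by Fourier analysis" used as the baseline extracts, for each detector pixel, the offset, amplitude, and phase of the sinusoidal phase-stepping curve. A sketch for equidistant phase steps; the reference/sample stacks below are synthetic stand-ins for measured data:

```python
import numpy as np

def stepping_curve_fft(intensities):
    """intensities: (n_steps, ny, nx) stack over one full stepping period.
    For I_k = a0 + a1*cos(2*pi*k/n + phi), the first FFT bin gives a1 and phi."""
    n = intensities.shape[0]
    F = np.fft.fft(intensities, axis=0)
    a0 = np.real(F[0]) / n
    a1 = 2.0 * np.abs(F[1]) / n
    phi = np.angle(F[1])
    return a0, a1, phi

# toy data: 8 phase steps over one period, 4x4 pixels
k = np.arange(8).reshape(8, 1, 1)
ref = 100 + 20 * np.cos(2 * np.pi * k / 8) + np.random.randn(8, 4, 4)
sam = 80 + 10 * np.cos(2 * np.pi * k / 8 + 0.3) + np.random.randn(8, 4, 4)

a0_r, a1_r, phi_r = stepping_curve_fft(ref)
a0_s, a1_s, phi_s = stepping_curve_fft(sam)
transmission = a0_s / a0_r
dpc = np.angle(np.exp(1j * (phi_s - phi_r)))   # differential phase, wrapped
darkfield = (a1_s / a0_s) / (a1_r / a0_r)      # visibility reduction
```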

  7. Geometric Feature Extraction and Model Reconstruction Based on Scattered Data

    Institute of Scientific and Technical Information of China (English)

    胡鑫; 习俊通; 金烨

    2004-01-01

    A method of 3D model reconstruction based on scattered point data in reverse engineering is presented here. First, the topological relationship of the scattered points was established; the data set was then triangulated to reconstruct the mesh surface model. The curvatures of the cloud data were calculated based on the mesh surface, and the point data were segmented by an edge-based method. Each patch of data was fitted by a quadric or freeform surface, with the type of quadric surface decided automatically from its parameters; finally, the whole CAD model was created. An example of a mouse model was employed to confirm the effectiveness of the algorithm.

  8. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    Science.gov (United States)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including goodness-of-fit methods such as least-squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB is selected by employing fractal dimension concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixons and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than the best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  9. Performance assessment of different pulse reconstruction algorithms for the ATHENA X-ray Integral Field Unit

    Science.gov (United States)

    Peille, Philippe; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; den Hartog, Roland; de Plaa, Jelle; Barret, Didier; den Herder, Jan-Willem; Piro, Luigi; Barcons, Xavier; Pointecouteau, Etienne

    2016-07-01

    The X-ray Integral Field Unit (X-IFU) microcalorimeter on board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on the on-board digital processing of current pulses induced by the heat deposited in the TES absorber, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performance, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might however not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high count rate capabilities.

  10. A fast marching method based back projection algorithm for photoacoustic tomography in heterogeneous media

    CERN Document Server

    Wang, Tianren

    2015-01-01

    This paper presents a numerical study of a fast marching method based back projection reconstruction algorithm for photoacoustic tomography in heterogeneous media. Transcranial imaging is used here as a case study. To correct for the phase aberration caused by the heterogeneity (i.e., the skull), the fast marching method is adopted to compute the phase delay based on the known speed-of-sound distribution, and this delay is taken into account by the back projection algorithm for more accurate reconstructions. It is shown that the proposed algorithm is more accurate than the conventional back projection algorithm, but slightly less accurate than the time reversal algorithm, particularly in the area close to the skull. However, the image reconstruction time for the proposed algorithm can be as little as 124 ms when implemented on a GPU (512 sensors, 21323 pixels reconstructed), which is two orders of magnitude faster than the time reversal reconstruction. The proposed algorithm, therefore, not only corrects for the p...
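
    A minimal sketch of the idea, assuming the scikit-fmm package (skfmm) for the eikonal travel-time computation; the sensor layout, sampling, and pixel geometry are simplified assumptions of this sketch.

        import numpy as np
        import skfmm  # scikit-fmm (an assumed dependency of this sketch)

        def fmm_back_projection(sensor_data, sensor_px, speed, dx, dt):
            # Delay-and-sum back projection in which the per-pixel delays
            # come from fast-marching solutions of the eikonal equation on
            # a known speed-of-sound map (e.g. one that includes the skull).
            image = np.zeros_like(speed)
            for data, (iy, ix) in zip(sensor_data, sensor_px):
                phi = np.ones_like(speed)
                phi[iy, ix] = -1.0                    # zero level set at sensor
                t = skfmm.travel_time(phi, speed, dx) # acoustic time of flight
                idx = np.clip(np.rint(t / dt).astype(int), 0, len(data) - 1)
                image += data[idx]                    # sum the delayed samples
            return image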

  11. Three-dimensional imaging reconstruction algorithm of gated-viewing laser imaging with compressive sensing.

    Science.gov (United States)

    Li, Li; Xiao, Wei; Jian, Weijian

    2014-11-20

    Three-dimensional (3D) laser imaging combining compressive sensing (CS) has the advantages of lower power consumption and fewer imaging sensors; however, it places a heavy computational burden on the subsequent processing devices. In this paper we propose a fast 3D imaging reconstruction algorithm to deal with time-slice images sampled by single-pixel detectors. The algorithm performs the 3D imaging reconstruction before CS recovery, thus saving much of the runtime of CS recovery. Several experiments were conducted to verify the performance of the algorithm. Simulation results demonstrate that the proposed algorithm is more efficient than an existing algorithm.

  12. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application...
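
    A generic sketch of the CP primal-dual iteration for min_x F(Kx) + G(x); the operator K, its adjoint Kt, and the two proximal maps are user-supplied callables, and the names below are assumptions of this sketch.

        import numpy as np

        def chambolle_pock(K, Kt, prox_fstar, prox_g, x0, tau, sigma, n_iter=200):
            # Generic CP iteration for min_x F(Kx) + G(x); prox_fstar and
            # prox_g are the proximal maps of the convex conjugate F* and
            # of G. Convergence requires tau * sigma * ||K||^2 <= 1.
            x, xbar = x0.copy(), x0.copy()
            y = np.zeros_like(K(x0))
            for _ in range(n_iter):
                y = prox_fstar(y + sigma * K(xbar), sigma)  # dual ascent
                x_new = prox_g(x - tau * Kt(y), tau)        # primal descent
                xbar = 2.0 * x_new - x                      # over-relaxation
                x = x_new
            return x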

  13. An ordered-subsets proximal preconditioned gradient algorithm for edge-preserving PET image reconstruction.

    Science.gov (United States)

    Mehranian, Abolfazl; Rahmim, Arman; Ay, Mohammad Reza; Kotasidis, Fotis; Zaidi, Habib

    2013-05-01

    In iterative positron emission tomography (PET) image reconstruction, the statistical variability of PET data precorrected for random coincidences, or acquired at sufficiently high count rates, can be properly approximated by a Gaussian distribution, which leads to a penalized weighted least-squares (PWLS) cost function. In this study, the authors propose a proximal preconditioned gradient algorithm accelerated with ordered subsets (PPG-OS) for the optimization of the PWLS cost function and develop a framework to incorporate boundary side information into edge-preserving total variation (TV) and Huber regularizations. The PPG-OS algorithm is proposed to address two issues encountered in the optimization of the PWLS function with edge-preserving regularizers. First, the second derivative of this function (the Hessian matrix) is shift-variant and ill-conditioned due to the weighting matrix (which includes emission data, attenuation, and normalization correction factors) and the regularizer. As a result, the paraboloidal surrogate functions (used in optimization transfer techniques) end up with high curvatures, and gradient-based algorithms take smaller step sizes toward the solution, leading to slow convergence. In addition, preconditioners used to improve the condition number of the problem, and thus to speed up convergence, act poorly on the resulting ill-conditioned Hessian matrix. Second, the PWLS function with a nondifferentiable penalty such as TV is not amenable to optimization using gradient-based algorithms. To deal with these issues, and also to enhance edge preservation of the TV and Huber regularizers by incorporating adaptively or anatomically derived boundary side information, the authors followed a proximal splitting method. Thereby, the optimization of the PWLS function is split into a gradient descent step (upgraded by preconditioning, step-size optimization, and ordered subsets) and a proximal mapping associated with boundary weighted TV

  14. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.

    2015-02-01

    Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.

  15. Structure-based bayesian sparse reconstruction

    KAUST Repository

    Quadeer, Ahmed Abdul

    2012-12-01

    Sparse signal reconstruction algorithms have attracted research attention due to their wide applications in various fields. In this paper, we present a simple Bayesian approach that utilizes the sparsity constraint and a priori statistical information (Gaussian or otherwise) to obtain near optimal estimates. In addition, we make use of the rich structure of the sensing matrix encountered in many signal processing applications to develop a fast sparse recovery algorithm. The computational complexity of the proposed algorithm is very low compared with the widely used convex relaxation methods as well as greedy matching pursuit techniques, especially at high sparsity.

  16. Implementation and research on the reconstruction algorithms of pressure fields based on PIV

    Institute of Scientific and Technical Information of China (English)

    刘顺; 徐惊雷; 俞凯凯

    2016-01-01

    The basic principles and corresponding algorithms of the finite volume method, the direct integral method, and the Poisson equation method, which are used to reconstruct pressure fields from PIV velocity fields, are introduced in detail. The instantaneous velocity fields of two incompressible flows, a pipe flow with a sudden expansion and the flow around a square, are selected to study the influence of image noise, velocity error, interpolation methods, and the type and precision of boundary conditions on the pressure fields reconstructed with the different algorithms. Finally, the transient pressure distributions of the pipe flow with a sudden expansion at 20 ms are obtained using the three algorithms as well as CFD. The results show that the finite volume method and the direct integral method are easily affected by noise, producing spurious shocks, but maintain high accuracy over a larger range of velocity-field error, and achieve higher precision of the reconstructed pressure fields with bilinear interpolation. The Poisson equation method is not easily affected by noise, so it produces few shocks, has great advantages when the PIV velocity fields are accurate, and achieves higher precision of the reconstructed pressure fields with bicubic interpolation. By measuring only a few pressure points on the boundaries, the mixed boundary condition yields accurate reconstructed pressure fields that are close to those of the Dirichlet boundary condition and far better than those of the Neumann boundary condition. Errors in the boundary conditions reduce the precision of the reconstructed pressure fields more severely than errors in the velocity fields.
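
    A minimal sketch of the Poisson-equation approach for a 2D incompressible velocity field; the steady source term, the Jacobi solver, and the zero-Dirichlet boundary treatment are simplifying assumptions of this sketch.

        import numpy as np

        def pressure_poisson(u, v, dx, rho=1000.0, n_iter=5000):
            # Solve lap(p) = -rho * (ux^2 + 2*uy*vx + vy^2), the 2D steady
            # incompressible source term, with Jacobi sweeps and p = 0 on
            # the boundary (a simplifying Dirichlet choice).
            ux = np.gradient(u, dx, axis=1); uy = np.gradient(u, dx, axis=0)
            vx = np.gradient(v, dx, axis=1); vy = np.gradient(v, dx, axis=0)
            rhs = -rho * (ux ** 2 + 2.0 * uy * vx + vy ** 2)
            p = np.zeros_like(u)
            for _ in range(n_iter):
                p[1:-1, 1:-1] = 0.25 * (p[1:-1, :-2] + p[1:-1, 2:] +
                                        p[:-2, 1:-1] + p[2:, 1:-1] -
                                        dx ** 2 * rhs[1:-1, 1:-1])
            return p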

  17. Evidence-Based ACL Reconstruction

    Directory of Open Access Journals (Sweden)

    E. Carlos RODRIGUEZ-MERCHAN

    2015-01-01

    There is controversy in the literature regarding a number of topics related to anterior cruciate ligament (ACL) reconstruction. The purpose of this article is to answer the following questions: (1) bone-patellar tendon-bone (BPTB) reconstruction or hamstring reconstruction (HR); (2) double bundle or single bundle; (3) allograft or autograft; (4) early or late reconstruction; (5) rate of return to sports after ACL reconstruction; (6) rate of osteoarthritis after ACL reconstruction. A Cochrane Library and PubMed (MEDLINE) search of systematic reviews and meta-analyses related to ACL reconstruction was performed. The key words were: ACL reconstruction, systematic reviews and meta-analysis. The main criterion for selection was that the articles were systematic reviews and meta-analyses focused on the aforementioned questions. Sixty-nine articles were found, but only 26 were selected and reviewed because they had a high grade (I-II) of evidence. BPTB reconstruction was associated with better postoperative knee stability but with a higher rate of morbidity. However, the results of both procedures in terms of long-term functional outcome were similar. The double-bundle ACL reconstruction technique showed better outcomes in rotational laxity, although functional recovery was similar between single-bundle and double-bundle reconstruction. Autograft yielded better results than allograft. There was no difference between early and delayed reconstruction. 82% of patients were able to return to some kind of sports participation. 28% of patients presented radiological signs of osteoarthritis with a minimum follow-up of 10 years.

  18. Efficient ω-k-algorithm for circular SAR and cylindrical reconstruction areas

    Directory of Open Access Journals (Sweden)

    A. Dallinger

    2006-01-01

    We present a novel reconstruction algorithm of the ω-k type suited to wideband circular synthetic aperture data taken in stripmap mode. The proposed algorithm allows an image to be reconstructed on a cylindrical surface. The range trajectory is approximated by a Taylor series expansion using only the quadratic terms, which limits the angular reconstruction range (cross-range). In our case this is not a restriction for the application. Wider areas in cross-range can be realized by joining several reconstructed images side by side into a wider image by means of digital spotlighting.

  19. Fast vision-based catheter 3D reconstruction

    Science.gov (United States)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. A sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute errors of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms.

  20. A modified OSEM algorithm for PET reconstruction using wavelet processing.

    Science.gov (United States)

    Lee, Nam-Yong; Choi, Yong

    2005-12-01

    The ordered subset expectation-maximization (OSEM) method in positron emission tomography (PET) has recently become very popular. It is an iterative algorithm and provides images with superior noise characteristics compared to conventional filtered backprojection (FBP) algorithms. Due to the lack of smoothness in images across OSEM iterations, however, some type of inter-iteration smoothing is required. For this purpose, smoothing based on convolution with a Gaussian kernel has been used in clinical PET practice. In this paper, we incorporated a robust wavelet de-noising method into the OSEM iterations as an inter-smoothing tool. The proposed wavelet method is based on a hybrid use of standard wavelet shrinkage and robust wavelet shrinkage to achieve edge-preserving and robust de-noising simultaneously. The performance of the proposed method was compared with that of smoothing based on convolution with a Gaussian kernel using software phantoms, physical phantoms, and human PET studies. The results demonstrate that the proposed wavelet method provides better spatial resolution characteristics than Gaussian smoothing, while having comparable performance in noise removal.
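
    As an illustration of wavelet shrinkage used as an inter-iteration smoothing step (not the authors' exact hybrid scheme), assuming the PyWavelets package; the wavelet, decomposition level, and threshold are hypothetical choices.

        import numpy as np
        import pywt  # PyWavelets (an assumed dependency of this sketch)

        def wavelet_intersmooth(img, wavelet='db4', level=2, t=0.05):
            # Soft-threshold the detail coefficients and reconstruct; this
            # would be applied to the image between OSEM (sub)iterations.
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            approx, details = coeffs[0], coeffs[1:]
            details = [tuple(pywt.threshold(d, t, mode='soft') for d in lvl)
                       for lvl in details]
            out = pywt.waverec2([approx] + details, wavelet)
            return out[:img.shape[0], :img.shape[1]]  # crop possible padding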

  1. Algorithmic Study of M-Estimators for Multi-Function Sensor Data Reconstruction

    Institute of Scientific and Technical Information of China (English)

    LIU Dan; SUN Jinwei; WEI Guo

    2007-01-01

    This paper describes a data reconstruction technique for a multi-function sensor based on the M-estimator, which uses least squares and weighted least squares methods. The algorithm has better robustness than conventional least squares, which can amplify the errors of inaccurate data. The M-estimator places particular emphasis on reducing the effects of large data errors, which are further overcome by an iterative regression process that gives small weights to large outlying data errors and large weights to small data errors. Simulation results on 81 groups of regression data, with an average accuracy of 3.5%, are consistent with the hypothesis and demonstrate that the M-estimator provides more accurate and reliable data reconstruction.
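
    A minimal sketch of the iterative reweighting idea using Huber-type weights; the specific weight function and scale estimate below are generic choices, not necessarily the paper's.

        import numpy as np

        def irls_m_estimator(X, y, delta=1.345, n_iter=20):
            # Iteratively reweighted least squares: residuals below
            # delta*scale keep weight 1, larger ("off-group") residuals
            # are down-weighted in the next regression pass.
            beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
            for _ in range(n_iter):
                r = y - X @ beta
                scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # MAD scale
                w = np.minimum(1.0, delta * scale / (np.abs(r) + 1e-12))
                sw = np.sqrt(w)
                beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
            return beta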

  2. Path-based Iterative Reconstruction (PBIR) for X-ray Computed Tomography

    CERN Document Server

    Wu, Meng; Yang, Qiao; Fahrig, Rebecca

    2015-01-01

    Model-based iterative reconstruction (MBIR) techniques have demonstrated many advantages in X-ray CT image reconstruction. The MBIR approach is often modeled as a convex optimization problem including a data fitting function and a penalty function. The tuning parameter value that regulates the strength of the penalty function is critical for achieving good reconstruction results but difficult to choose. In this work, we describe two path seeking algorithms that are capable of efficiently generating a series of MBIR images with different strengths of the penalty function. The root-mean-squared differences of the proposed path seeking algorithms are below 4 HU throughout the entire reconstruction path. With the efficient path seeking algorithm, we suggest a path-based iterative reconstruction (PBIR) to obtain complete information from the scanned data and reconstruction model.

  3. An Algorithmic Approach to Total Breast Reconstruction with Free Tissue Transfer

    Directory of Open Access Journals (Sweden)

    Seong Cheol Yu

    2013-05-01

    As microvascular techniques continue to improve, perforator flap free tissue transfer is now the gold standard for autologous breast reconstruction. Various options are available for breast reconstruction with autologous tissue. These include the free transverse rectus abdominis myocutaneous (TRAM) flap, deep inferior epigastric perforator flap, superficial inferior epigastric artery flap, superior gluteal artery perforator flap, and transverse/vertical upper gracilis flap. In addition, pedicled flaps can be very successful in the right hands and the right patient, such as the pedicled TRAM flap, latissimus dorsi flap, and thoracodorsal artery perforator flap. Each flap comes with its own advantages and disadvantages related to tissue properties and donor-site morbidity. Currently, the problem is how to determine the most appropriate flap for a particular patient among those potential candidates. Based on a thorough review of the literature and the accumulated experience of the author's institution, this article provides a logical approach to autologous breast reconstruction. The algorithms presented here can be helpful in customizing breast reconstruction to individual patient needs.

  4. Subjet multiplicity of gluon and quark jets reconstructed with the k⊥ algorithm in ppbar collisions

    Science.gov (United States)

    Abazov, V. M.; Abbott, B.; Abdesselam, A.; Abolins, M.; Abramov, V.; Acharya, B. S.; Adams, D. L.; Adams, M.; Ahmed, S. N.; Alexeev, G. D.; Alton, A.; Alves, G. A.; Amos, N.; Anderson, E. W.; Arnoud, Y.; Avila, C.; Baarmand, M. M.; Babintsev, V. V.; Babukhadia, L.; Bacon, T. C.; Baden, A.; Baldin, B.; Balm, P. W.; Banerjee, S.; Barberis, E.; Baringer, P.; Barreto, J.; Bartlett, J. F.; Bassler, U.; Bauer, D.; Bean, A.; Beaudette, F.; Begel, M.; Belyaev, A.; Beri, S. B.; Bernardi, G.; Bertram, I.; Besson, A.; Beuselinck, R.; Bezzubov, V. A.; Bhat, P. C.; Bhatnagar, V.; Bhattacharjee, M.; Blazey, G.; Blekman, F.; Blessing, S.; Boehnlein, A.; Bojko, N. I.; Borcherding, F.; Bos, K.; Bose, T.; Brandt, A.; Breedon, R.; Briskin, G.; Brock, R.; Brooijmans, G.; Bross, A.; Buchholz, D.; Buehler, M.; Buescher, V.; Burtovoi, V. S.; Butler, J. M.; Canelli, F.; Carvalho, W.; Casey, D.; Casilum, Z.; Castilla-Valdez, H.; Chakraborty, D.; Chan, K. M.; Chekulaev, S. V.; Cho, D. K.; Choi, S.; Chopra, S.; Christenson, J. H.; Chung, M.; Claes, D.; Clark, A. R.; Cochran, J.; Coney, L.; Connolly, B.; Cooper, W. E.; Coppage, D.; Crépé-Renaudin, S.; Cummings, M. A.; Cutts, D.; Davis, G. A.; Davis, K.; de, K.; de Jong, S. J.; del Signore, K.; Demarteau, M.; Demina, R.; Demine, P.; Denisov, D.; Denisov, S. P.; Desai, S.; Diehl, H. T.; Diesburg, M.; Doulas, S.; Ducros, Y.; Dudko, L. V.; Duensing, S.; Duflot, L.; Dugad, S. R.; Duperrin, A.; Dyshkant, A.; Edmunds, D.; Ellison, J.; Elvira, V. D.; Engelmann, R.; Eno, S.; Eppley, G.; Ermolov, P.; Eroshin, O. V.; Estrada, J.; Evans, H.; Evdokimov, V. N.; Fahland, T.; Feher, S.; Fein, D.; Ferbel, T.; Filthaut, F.; Fisk, H. E.; Fisyak, Y.; Flattum, E.; Fleuret, F.; Fortner, M.; Fox, H.; Frame, K. C.; Fu, S.; Fuess, S.; Gallas, E.; Galyaev, A. N.; Gao, M.; Gavrilov, V.; Genik, R. J.; Genser, K.; Gerber, C. E.; Gershtein, Y.; Gilmartin, R.; Ginther, G.; Gómez, B.; Gómez, G.; Goncharov, P. I.; González Solís, J. L.; Gordon, H.; Goss, L. T.; Gounder, K.; Goussiou, A.; Graf, N.; Graham, G.; Grannis, P. D.; Green, J. A.; Greenlee, H.; Greenwood, Z. D.; Grinstein, S.; Groer, L.; Grünendahl, S.; Gupta, A.; Gurzhiev, S. N.; Gutierrez, G.; Gutierrez, P.; Hadley, N. J.; Haggerty, H.; Hagopian, S.; Hagopian, V.; Hall, R. E.; Hanlet, P.; Hansen, S.; Hauptman, J. M.; Hays, C.; Hebert, C.; Hedin, D.; Heinmiller, J. M.; Heinson, A. P.; Heintz, U.; Heuring, T.; Hildreth, M. D.; Hirosky, R.; Hobbs, J. D.; Hoeneisen, B.; Huang, Y.; Illingworth, R.; Ito, A. S.; Jaffré, M.; Jain, S.; Jesik, R.; Johns, K.; Johnson, M.; Jonckheere, A.; Jöstlein, H.; Juste, A.; Kahl, W.; Kahn, S.; Kajfasz, E.; Kalinin, A. M.; Karmanov, D.; Karmgard, D.; Kehoe, R.; Khanov, A.; Kharchilava, A.; Kim, S. K.; Klima, B.; Knuteson, B.; Ko, W.; Kohli, J. M.; Kostritskiy, A. V.; Kotcher, J.; Kothari, B.; Kotwal, A. V.; Kozelov, A. V.; Kozlovsky, E. A.; Krane, J.; Krishnaswamy, M. R.; Krivkova, P.; Krzywdzinski, S.; Kubantsev, M.; Kuleshov, S.; Kulik, Y.; Kunori, S.; Kupco, A.; Kuznetsov, V. E.; Landsberg, G.; Lee, W. M.; Leflat, A.; Leggett, C.; Lehner, F.; Li, J.; Li, Q. Z.; Li, X.; Lima, J. G.; Lincoln, D.; Linn, S. L.; Linnemann, J.; Lipton, R.; Lucotte, A.; Lueking, L.; Lundstedt, C.; Luo, C.; Maciel, A. K.; Madaras, R. J.; Malyshev, V. L.; Manankov, V.; Mao, H. S.; Marshall, T.; Martin, M. I.; Mauritz, K. M.; May, B.; Mayorov, A. A.; McCarthy, R.; McMahon, T.; Melanson, H. L.; Merkin, M.; Merritt, K. W.; Miao, C.; Miettinen, H.; Mihalcea, D.; Mishra, C. S.; Mokhov, N.; Mondal, N. K.; Montgomery, H. E.; Moore, R. 
W.; Mostafa, M.; da Motta, H.; Nagy, E.; Nang, F.; Narain, M.; Narasimham, V. S.; Naumann, N. A.; Neal, H. A.; Negret, J. P.; Negroni, S.; Nunnemann, T.; O'Neil, D.; Oguri, V.; Olivier, B.; Oshima, N.; Padley, P.; Pan, L. J.; Papageorgiou, K.; Para, A.; Parashar, N.; Partridge, R.; Parua, N.; Paterno, M.; Patwa, A.; Pawlik, B.; Perkins, J.; Peters, O.; Pétroff, P.; Piegaia, R.; Pope, B. G.; Popkov, E.; Prosper, H. B.; Protopopescu, S.; Przybycien, M. B.; Qian, J.; Raja, R.; Rajagopalan, S.; Ramberg, E.; Rapidis, P. A.; Reay, N. W.; Reucroft, S.; Ridel, M.; Rijssenbeek, M.; Rizatdinova, F.; Rockwell, T.; Roco, M.; Royon, C.; Rubinov, P.; Ruchti, R.; Rutherfoord, J.; Sabirov, B. M.; Sajot, G.; Santoro, A.; Sawyer, L.; Schamberger, R. D.; Schellman, H.; Schwartzman, A.; Sen, N.; Shabalina, E.; Shivpuri, R. K.; Shpakov, D.; Shupe, M.; Sidwell, R. A.; Simak, V.; Singh, H.; Singh, J. B.; Sirotenko, V.; Slattery, P.; Smith, E.; Smith, R. P.; Snihur, R.; Snow, G. R.; Snow, J.; Snyder, S.; Solomon, J.; Song, Y.; Sorín, V.; Sosebee, M.; Sotnikova, N.; Soustruznik, K.; Souza, M.; Stanton, N. R.; Steinbrück, G.; Stephens, R. W.; Stichelbaut, F.; Stoker, D.; Stolin, V.; Stone, A.; Stoyanova, D. A.; Strang, M. A.; Strauss, M.; Strovink, M.; Stutte, L.; Sznajder, A.; Talby, M.; Taylor, W.; Tentindo-Repond, S.; Tripathi, S. M.; Trippe, T. G.; Turcot, A. S.; Tuts, P. M.; Vaniev, V.; van Kooten, R.; Varelas, N.; Vertogradov, L. S.; Villeneuve-Seguier, F.; Volkov, A. A.; Vorobiev, A. P.; Wahl, H. D.; Wang, H.; Wang, Z.-M.; Warchol, J.; Watts, G.; Wayne, M.; Weerts, H.; White, A.; White, J. T.; Whiteson, D.; Wightman, J. A.; Wijngaarden, D. A.; Willis, S.; Wimpenny, S. J.; Womersley, J.; Wood, D. R.; Xu, Q.; Yamada, R.; Yamin, P.; Yasuda, T.; Yatsunenko, Y.; Yip, K.; Youssef, S.; Yu, J.; Yu, Z.; Zanabria, M.; Zhang, X.; Zheng, H.; Zhou, B.; Zhou, Z.; Zielinski, M.; Zieminska, D.; Zieminski, A.; Zutshi, V.; Zverev, E. G.; Zylberstejn, A.

    2002-03-01

    The DØ Collaboration has studied for the first time the properties of hadron-collider jets reconstructed with a successive-combination algorithm based on relative transverse momenta (k⊥) of energy clusters. Using the standard value D=1.0 of the jet-separation parameter in the k⊥ algorithm, we find that the pT of such jets is higher than the ET of matched jets reconstructed with cones of radius R=0.7, by about 5 (8) GeV at pT ~ 90 (240) GeV. To examine internal jet structure, the k⊥ algorithm is applied within D=0.5 jets to resolve any subjets. The multiplicity of subjets in jet samples at √s = 1800 GeV and 630 GeV is extracted separately for gluons (Mg) and quarks (Mq), and the ratio of average subjet multiplicities in gluon and quark jets is measured as (⟨Mg⟩ − 1)/(⟨Mq⟩ − 1) = 1.84 ± 0.15 (stat) +0.22/−0.18 (syst). This ratio is in agreement with the expectations from the HERWIG Monte Carlo event generator and a resummation calculation, and with observations in e+e− annihilations, and is close to the naive prediction for the ratio of color charges of CA/CF = 9/4 = 2.25.

  5. Variables Bounding Based Retiming Algorithm

    Institute of Scientific and Technical Information of China (English)

    宫宗伟; 林争辉; 陈后鹏

    2002-01-01

    Retiming is a technique for optimizing sequential circuits. In this paper, we discuss this problem and propose an improved retiming algorithm based on variable bounding. Through the computation of lower and upper bounds on the variables, the algorithm can significantly reduce the number of constraints and speed up the execution of retiming. Furthermore, the elements of the matrices D and W are computed in a demand-driven way, which reduces the memory requirements. Experimental results on the ISCAS89 benchmarks show that our algorithm is very effective for large-scale sequential circuits.

  6. Recursive three-dimensional model reconstruction based on Kalman filtering.

    Science.gov (United States)

    Yu, Ying Kin; Wong, Kin Hong; Chang, Michael Ming Yuen

    2005-06-01

    A recursive two-step method to recover structure and motion from image sequences based on Kalman filtering is described in this paper. The algorithm consists of two major steps. The first step is an extended Kalman filter (EKF) for the estimation of the object's pose. The second step is a set of EKFs, one for each model point, for the refinement of the positions of the model features in the three-dimensional (3-D) space. These two steps alternate from frame to frame. The initial model converges to the final structure as the image sequence is scanned sequentially. The performance of the algorithm is demonstrated with both synthetic data and real-world objects. Analytical and empirical comparisons are made among our approach, the interleaved bundle adjustment method, and the Kalman filtering-based recursive algorithm by Azarbayejani and Pentland. Our approach outperformed the other two algorithms in terms of computation speed without loss in the quality of model reconstruction.

  7. An Approximate Cone Beam Reconstruction Algorithm for Gantry-Tilted CT Using Tangential Filtering

    Directory of Open Access Journals (Sweden)

    Ming Yan

    2006-01-01

    The FDK algorithm is a well-known 3D (three-dimensional) approximate algorithm for CT (computed tomography) image reconstruction that is also known to suffer from considerable artifacts when the scanning cone angle is large. Recently, it has been improved by performing the ramp filtering along the tangential direction of the X-ray source helix to deal with the large cone-angle problem. In this paper, we present an FDK-type approximate reconstruction algorithm for gantry-tilted CT imaging. The proposed method improves the image reconstruction by filtering the projection data along a proper direction which is determined by the CT parameters and the gantry tilt angle. As a result, the proposed algorithm for gantry-tilted CT reconstruction provides more scanning flexibility in clinical CT scanning and is computationally efficient. The performance of the proposed algorithm is evaluated with the Turbell clock phantom and a thorax phantom and compared with the FDK algorithm and a popular 2D (two-dimensional) approximate algorithm. The results show that the proposed algorithm achieves better image quality for gantry-tilted CT image reconstruction.

  8. Filtering of measurement noise with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Pivnenko, Sergey

    2014-01-01

    Two different antenna models are set up in GRASP and CHAMP, and noise is added to the radiated field. The noisy field is then given as input to the 3D reconstruction of DIATOOL and the SWE coefficients and the far-field radiated by the reconstructed currents are compared with the noise-free results...

  9. Adjoint-optimization algorithm for spatial reconstruction of a scalar source

    Science.gov (United States)

    Wang, Qi; Hasegawa, Yosuke; Meneveau, Charles; Zaki, Tamer

    2016-11-01

    Identifying the location of the source of a passive scalar transported in a turbulent environment based on remote measurements is an ill-posed problem. A conjugate-gradient algorithm is proposed that relies on eddy-resolving simulations of both the forward and adjoint scalar transport equations to reconstruct the spatial distribution of the source. The formulation can naturally accommodate measurements from multiple sensors. The algorithm is evaluated for scalar dispersion in turbulent channel flow (Reτ = 180). As the distance between the source and the sensor increases, the accuracy of the source recovery deteriorates due to diffusive effects. Improved performance is demonstrated for higher Prandtl numbers and with an increasing number of sensors. This study is supported by the National Science Foundation (Grant CNS 1461870).

  10. Reconstruction Algorithms for Positron Emission Tomography and Single Photon Emission Computed Tomography and their Numerical Implementation

    CERN Document Server

    Fokas, A S; Marinakis, V

    2004-01-01

    The modern imaging techniques of Positron Emission Tomography and of Single Photon Emission Computed Tomography are not only two of the most important tools for studying the functional characteristics of the brain, but they now also play a vital role in several areas of clinical medicine, including neurology, oncology and cardiology. The basic mathematical problems associated with these techniques are the construction of the inverse of the Radon transform and of the inverse of the so-called attenuated Radon transform, respectively. We first show that, by employing mathematical techniques developed in the theory of nonlinear integrable equations, it is possible to obtain analytic formulas for these two inverse transforms. We then present algorithms for the numerical implementation of these analytic formulas, based on approximating the given data in terms of cubic splines. Several numerical tests are presented which suggest that our algorithms are capable of producing accurate reconstruction for realistic phanto...

  11. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images

    OpenAIRE

    Chong Fan; Xushuai Chen; Lei Zhong; Min Zhou; Yun Shi; Yulin Duan

    2017-01-01

    A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly be mosaicked seamlessly because of the uneven distribution of brightness and contrast among the sub-blocks. An improved weighted Wallis dodging algorithm is proposed, exploiting the characteristic that SR-reconstructed images are gray images of the same size with overlapping regions. This ...

  12. Influence of reconstruction settings on the performance of adaptive thresholding algorithms for FDG-PET image segmentation in radiotherapy planning.

    Science.gov (United States)

    Matheoud, Roberta; Della Monica, Patrizia; Loi, Gianfranco; Vigna, Luca; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco

    2011-01-30

    The purpose of this study was to analyze the behavior of a contouring algorithm for PET images based on adaptive thresholding, depending on lesion size and target-to-background (TB) ratio, under different image reconstruction parameters. Based on this analysis, the image reconstruction scheme that maximizes the goodness of fit of the thresholding algorithm was selected. A phantom study employing spherical targets was designed to determine slice-specific threshold (TS) levels which produce accurate cross-sectional areas. A wide range of TB ratios was investigated. Multiple regression methods were used to fit the data and to construct algorithms depending both on target cross-sectional area and TB ratio, using various reconstruction schemes employing a wide range of iteration numbers and amounts of post-filtering Gaussian smoothing. Analysis of covariance was used to test the influence of iteration number and smoothing on threshold determination. The degree of convergence of the ordered-subset expectation maximization (OSEM) algorithms does not influence TS determination. Among these approaches, OSEM at two iterations and eight subsets with a 6-8 mm post-reconstruction three-dimensional Gaussian filter provided the best fit, with a coefficient of determination R² = 0.90 for cross-sectional areas ≤ 133 mm² and R² = 0.95 for cross-sectional areas > 133 mm². The amount of post-reconstruction smoothing has been directly incorporated into the adaptive thresholding algorithms. The feasibility of the method was tested in two patients with lymph node FDG accumulation and in five patients using the bladder to mimic an anatomical structure of large size and uniform uptake, with satisfactory results. Slice-specific adaptive thresholding algorithms look promising as a reproducible method for delineating PET target volumes with good accuracy.
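
    The iterative use of such a threshold function can be sketched as below; the functional form and the coefficients a and b are hypothetical stand-ins for the multiple-regression fits derived from the phantom study.

        import numpy as np

        def adaptive_threshold_mask(pet_slice, bg_mean, a=0.3, b=0.6, n_iter=10):
            # Iterate a slice-specific threshold TS that depends on the
            # current target-to-background (TB) ratio until the mask
            # stabilises. a and b are hypothetical coefficients; in the
            # study they come from regression fits per reconstruction scheme.
            mask = pet_slice > 0.5 * pet_slice.max()       # crude initial mask
            for _ in range(n_iter):
                tb = pet_slice[mask].mean() / bg_mean
                ts = (a + b / tb) * pet_slice[mask].mean() # adaptive threshold
                new_mask = pet_slice > ts
                if np.array_equal(new_mask, mask):
                    break
                mask = new_mask
            return mask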

  13. Application aspects of advanced antenna diagnostics with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Pivnenko, Sergey

    2015-01-01

    This paper focuses on two important applications of the 3D reconstruction algorithm of the commercial software DIATOOL for antenna diagnostics. The first one is the accurate and detailed identification of array malfunctioning, thanks to the available enhanced spatial resolution of the reconstructed...

  14. An FBP image reconstruction algorithm for x-ray differential phase contrast CT

    Science.gov (United States)

    Qi, Zhihua; Chen, Guang-Hong

    2008-03-01

    Most recently, a novel data acquisition method has been proposed and experimentally implemented for x-ray differential phase contrast computed tomography (DPC-CT), in which a conventional x-ray tube and a Talbot-Lau type interferometer are utilized in data acquisition. The divergent nature of the data acquisition system requires a divergent-beam image reconstruction algorithm for DPC-CT. This paper focuses on addressing this image reconstruction issue. We developed a filtered backprojection algorithm to directly reconstruct the DPC-CT images from the acquired projection data. The developed algorithm allows one to directly reconstruct the decrement of the real part of the refractive index from the measured data. In order to accurately reconstruct an image, the data need to be acquired over an angular range of at least 180° plus the fan angle. In contrast to parallel-beam data acquisition and reconstruction methods, a 180° rotation of the data acquisition system does not provide sufficient data for an accurate reconstruction of the entire field of view. Numerical simulations have been conducted to validate the image reconstruction algorithm.

  15. Array diagnostics, spatial resolution, and filtering of undesired radiation with the 3D reconstruction algorithm

    DEFF Research Database (Denmark)

    Cappellin, C.; Pivnenko, Sergey; Jørgensen, E.

    2013-01-01

    This paper focuses on three important features of the 3D reconstruction algorithm of DIATOOL: the identification of array elements improper functioning and failure, the obtainable spatial resolution of the reconstructed fields and currents, and the filtering of undesired radiation and scattering...

  16. Robust baseline-independent algorithms for segmentation and reconstruction of Arabic handwritten cursive script

    Science.gov (United States)

    Mostafa, Khaled; Darwish, Ahmed M.

    1999-01-01

    The problem of cursive script segmentation is an essential one for handwritten character recognition. This is especially true for Arabic text, where cursive is the only mode, even for typewritten fonts. In this paper, we present a generalized segmentation approach for handwritten Arabic cursive scripts. The proposed approach is based on the analysis of the upper and lower contours of the word. The algorithm searches for local minima points along the upper contour and local maxima points along the lower contour of the word. These points are then marked as potential letter boundaries (PLBs). A set of rules, based on the nature of Arabic cursive scripts, is then applied to both upper and lower PLB points to eliminate some of the improper ones. A matching process between upper and lower PLBs is then performed in order to obtain the minimum number of non-overlapping PLBs for each word. The output of the proposed segmentation algorithm is a set of labeled primitives that represent the Arabic word. In order to reconstruct the original word from its corresponding primitives and diacritics, a novel binding and dot-assignment algorithm is introduced. The algorithm achieved a correct segmentation rate of 97.7% when tested on samples of loosely constrained handwritten cursive script words consisting of 7922 characters written by 14 different writers.
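
    The PLB detection step can be sketched with a standard local-extrema search; scipy's argrelextrema is used here as an assumed helper, and the rule-based pruning and upper/lower matching steps are omitted.

        import numpy as np
        from scipy.signal import argrelextrema

        def potential_letter_boundaries(upper, lower, order=3):
            # Local minima of the upper contour and local maxima of the
            # lower contour are marked as potential letter boundaries
            # (PLBs); later rules prune improper ones and match the sets.
            upper_plb = argrelextrema(np.asarray(upper), np.less, order=order)[0]
            lower_plb = argrelextrema(np.asarray(lower), np.greater, order=order)[0]
            return upper_plb, lower_plb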

  17. DCT and DST Based Image Compression for 3D Reconstruction

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST) applied to each column of data, generating new sets of high-frequency components, followed by quantization of the higher frequencies. The output is then divided into two parts, where the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at up to 99% compression ratio. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective compression quality are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 concerning 3D reconstruction, and has perceptual quality equivalent to JPEG2000.
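
    A minimal sketch of the two-transform front end (row-wise DCT, column-wise DST, uniform quantization); the transform types, normalization, and quantization step are assumptions, and the entropy-coding stages are omitted.

        import numpy as np
        from scipy.fft import dct, dst, idct, idst

        def dct_dst_forward(img, q=16.0):
            # 1D DCT on each row, then 1D DST on each column, then
            # uniform quantization of the resulting coefficients.
            c = dct(img.astype(float), axis=1, norm='ortho')
            c = dst(c, axis=0, norm='ortho')
            return np.round(c / q)

        def dct_dst_inverse(coeffs, q=16.0):
            # Undo quantization, then invert the two transforms.
            c = idst(coeffs * q, axis=0, norm='ortho')
            return idct(c, axis=1, norm='ortho')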

  18. Validation of Ionosonde Electron Density Reconstruction Algorithms with IONOLAB-RAY in Central Europe

    Science.gov (United States)

    Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra

    2016-07-01

    Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosondes, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to the transmitted frequency. The critical frequencies of ionospheric layers and the virtual heights, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographic methods, electron density profiles can also be estimated from ionograms. Although structural pictures of the ionosphere in the vertical direction can be observed from ionosonde measurements, errors may arise due to inaccuracies in signal propagation, modeling, data processing, and tomographic reconstruction algorithms. Recently, the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of the electron density profile from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate the parameters of any analytical function that defines electron density with respect to height, using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true-height analysis. IONOLAB-RAY is a tool to investigate the propagation path and parameters of an HF wave in the ionosphere. The algorithm models the wave propagation using a ray representation under the geometrical optics approximation. In the algorithm, the structural ionospheric characteristics are represented as realistically as possible, including anisotropy, inhomogeneity, and time dependence, in a 3-D voxel structure. The algorithm is also used

  19. Reconstructing optical parameters from double-integrating-sphere measurements using a genetic algorithm

    Science.gov (United States)

    Böcklin, Christoph; Baumann, Dirk; Stuker, Florian; Klohs, Jan; Rudin, Markus; Fröhlich, Jürg

    2013-02-01

    For the reconstruction of physiological changes in specific tissue layers detected by optical techniques, exact knowledge of the optical parameters μa, μs and g of different tissue types is of paramount importance. One approach to accurately determine these parameters for biological tissue or phantom material is to use a double-integrating-sphere measurement system. It offers a flexible way to measure various kinds of tissues, liquids and artificial phantom materials. Accurate measurements can be achieved by technical adjustments and calibration of the spheres using commercially available reflection and transmission standards. The determination of the optical parameters of a material is based on two separate steps. Firstly, the reflectance ρs, the total transmittance TsT and the unscattered transmittance TsC of the sample s are measured with the double-integrating-sphere setup. Secondly, the optical parameters μa, μs and g are reconstructed with an inverse search algorithm combined with an appropriate solver for the forward problem (calculating ρs, TsT and TsC from μa, μs and g). In this study a Genetic Algorithm is applied as the search heuristic, since it offers the most flexible and general approach without requiring any foreknowledge of the fitness landscape. Given the challenging
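
    A minimal genetic-algorithm inverse search under these assumptions: forward() is a user-supplied solver of the forward problem, and the selection, crossover, and mutation operators below are generic choices rather than the paper's exact configuration.

        import numpy as np

        def ga_inverse(measured, forward, lo, hi, pop=60, gens=200, seed=0):
            # Recover parameters p = (mu_a, mu_s, g) whose forward-modelled
            # (rho_s, T_sT, T_sC) best match the measured values.
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            P = rng.uniform(lo, hi, size=(pop, lo.size))
            cost = lambda p: np.sum((np.asarray(forward(p)) - measured) ** 2)
            for _ in range(gens):
                c = np.array([cost(p) for p in P])
                elite = P[np.argsort(c)[:pop // 2]]            # selection
                pairs = rng.integers(0, len(elite), (pop - len(elite), 2))
                w = rng.uniform(0.0, 1.0, (len(pairs), 1))
                kids = w * elite[pairs[:, 0]] + (1 - w) * elite[pairs[:, 1]]
                kids += rng.normal(0.0, 0.02, kids.shape) * (hi - lo)
                P = np.vstack([elite, np.clip(kids, lo, hi)])
            return min(P, key=cost)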

  20. CS-based fast ultrasound imaging with improved FISTA algorithm

    Science.gov (United States)

    Lin, Jie; He, Yugao; Shi, Guangming; Han, Tingyu

    2015-08-01

    In an ultrasound imaging system, wave emission and data acquisition are time-consuming, which can be addressed by adopting a plane wave as the transmitted signal and by using compressed sensing (CS) theory for data acquisition and image reconstruction. To overcome the very high computational complexity caused by introducing CS into ultrasound imaging, in this paper we propose an improvement of the fast iterative shrinkage-thresholding algorithm (FISTA) to achieve fast reconstruction of the ultrasound image, in which the step-size parameter is modified at each iteration. Further, a GPU strategy is designed for the proposed algorithm to guarantee a real-time implementation of the imaging. Simulation results show that the GPU-based image reconstruction algorithm achieves fast ultrasound imaging without degrading image quality.
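
    For orientation, the textbook FISTA iteration for the l1-regularized least-squares problem is sketched below with a fixed step 1/L; the paper's per-iteration step-size modification is not reproduced here.

        import numpy as np

        def fista(A, y, lam, n_iter=100):
            # FISTA (Beck & Teboulle) for min 0.5*||Ax - y||^2 + lam*||x||_1.
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant
            x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
            for _ in range(n_iter):
                x_old = x
                g = z - A.T @ (A @ z - y) / L        # gradient step at z
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                z = x + ((t - 1.0) / t_new) * (x - x_old)  # momentum update
                t = t_new
            return x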

  1. Image Reconstruction Algorithm for Electrical Charge Tomography System

    Directory of Open Access Journals (Sweden)

    M. F. Rahmat

    2010-01-01

    Problem statement: Many problems in scientific computing can be formulated as inverse problems, and a vast majority of these are ill-posed. In Electrical Charge Tomography (EChT), the sensitivity matrix generated from forward modeling is normally very ill-conditioned. This condition poses difficulties for the inverse problem solution, especially for the accuracy and stability of the reconstructed image. The objective of this study is to reconstruct the cross-sectional image of the material in a gravity-dropped mode pipeline conveyor and to address the ill-conditioning of the sensitivity matrix. Approach: The Least Squares with Regularization (LSR) method was introduced to reconstruct the image, and electrodynamic sensors installed around the pipe were used to capture the data. Results: The images were validated using a digital imaging technique and the Singular Value Decomposition (SVD) method. The results showed that images reconstructed by this method show good promise in terms of accuracy and stability. Conclusion: The LSR method provides good and promising results in terms of the accuracy and stability of the reconstructed image; thus, an efficient method for electrical charge tomography image reconstruction has been introduced.
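
    In its simplest Tikhonov form, an LSR-type reconstruction solves a damped normal equation; the regularization weight lam below is a hypothetical value, and the exact regularizer of the paper may differ.

        import numpy as np

        def lsr_reconstruct(S, m, lam=1e-3):
            # Solve (S^T S + lam*I) g = S^T m for the image vector g, where
            # S is the ill-conditioned sensitivity matrix and m the sensor
            # measurements; lam damps the small singular values of S.
            n = S.shape[1]
            return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ m)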

  2. Evolutionary algorithm based index assignment algorithm for noisy channel

    Institute of Scientific and Technical Information of China (English)

    李天昊; 余松煜

    2004-01-01

    A globally optimal solution to vector quantization (VQ) index assignment on a noisy channel, the evolutionary algorithm based index assignment algorithm (EAIAA), is presented. The algorithm yields a significant reduction in the average distortion due to channel errors over conventional arbitrary index assignment, as confirmed by experimental results over the memoryless binary symmetric channel (BSC) for any bit error rate.

  3. Acceleration of the direct reconstruction of linear parametric images using nested algorithms.

    Science.gov (United States)

    Wang, Guobao; Qi, Jinyi

    2010-03-01

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  4. A Simple Algorithm for Immediate Postmastectomy Reconstruction of the Small Breast—A Single Surgeon's 10-Year Experience

    Science.gov (United States)

    Kitcat, Magelia; Molina, Alexandra; Meldon, Charlotte; Darhouse, Nagham; Clibbon, Jon; Malata, Charles M.

    2012-01-01

    Introduction: Immediate reconstruction of the small breast poses challenges, including limited potential donor-site tissue, a thinner skin envelope, and limited implant choice. Few patients are suitable for autologous reconstruction, while contralateral symmetrization surgery, which often offsets the problem of obvious asymmetry in thin and small-breasted patients, is often unavailable, too expensive, or declined by the patient. Methods: We reviewed 42 consecutive patients with mastectomy weights of 350 g or less (the lowest quartile of all reconstructions). Indications for the mastectomy, body mass index, bra cup size, comorbidity, reconstruction type, and complications were recorded. Results: A total of 59 immediate reconstructions, including 25 latissimus dorsi flaps, 23 implant-only reconstructions, 9 abdominal flaps, and 2 gluteal flaps, were performed in 42 patients. Of the 42 mastectomies, 4 were prophylactic. Forty-three percent of patients had immediate contralateral balancing surgery. The average mastectomy weight was 231 g (range, 74-350 g). Seven percent of implant-based reconstructions developed capsular contracture requiring further surgery. One free transverse rectus abdominis myocutaneous flap failed because of fulminant methicillin-resistant Staphylococcus aureus septicaemia. Discussion and Conclusion: Balancing contralateral surgery is key to achieving excellent symmetry in the reconstruction of small-breasted patients. However, many patients wish to avoid contralateral surgery, thus restricting the surgeon's reconstructive options. Autologous flaps have traditionally not been considered in thinner women because of inadequate donor-site tissue, but in fact, as with larger-breasted patients, they often produce superior cosmetic results. We propose a simple algorithm for the reconstruction of small-breasted women (without resorting to super-complex microsurgery), which is designed to tailor the choice of reconstructive technique to the requirements of the individual

  5. Practical algorithms for simulation and reconstruction of digital in-line holograms

    CERN Document Server

    Latychevskaia, Tatiana

    2014-01-01

    Here, we present practical methods for simulation and reconstruction of in-line digital holograms recorded with plane and spherical waves. The algorithms described here are applicable to holographic imaging of an object exhibiting absorption as well as phase shifting properties. Optimal parameters, related to distances, sampling rate, and other factors for successful simulation and reconstruction of holograms are evaluated and criteria for the achievable resolution are worked out. Moreover, we show that the numerical procedures for the reconstruction of holograms recorded with plane and spherical waves are identical under certain conditions. Experimental examples of holograms and their reconstructions are also discussed.
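
    A standard angular-spectrum back-propagation for a plane-wave in-line hologram can be sketched as follows; this is a common reconstruction scheme, and the sign conventions and evanescent-wave handling are assumptions of this sketch rather than the paper's exact procedure.

        import numpy as np

        def angular_spectrum(hologram, wavelength, dx, z):
            # Propagate the hologram field back by distance z using the
            # free-space transfer function; evanescent components are cut.
            ny, nx = hologram.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = np.maximum(1.0 - (wavelength * FX) ** 2
                                 - (wavelength * FY) ** 2, 0.0)
            H = np.exp(-2j * np.pi * z * np.sqrt(arg) / wavelength)
            return np.fft.ifft2(np.fft.fft2(hologram) * H)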

  6. Contrast improvement of continuous wave diffuse optical tomography reconstruction by hybrid approach using least square and genetic algorithm.

    Science.gov (United States)

    Patra, Rusha; Dutta, Pranab K

    2015-07-01

    Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, mean square error of 0.638×10⁻³, and object centroid error of (0.001 to 0.22) mm. Experimental validation of the proposed method has also been provided with tissue-like phantoms which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with continuous wave setup. A case study of finger joint imaging is illustrated as well to show the prospect of the proposed method in clinical diagnosis. The method can also be applied to the concentration measurement of a region of interest in a turbid medium.

  7. Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath

    2012-01-01

    This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. The multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in applications with multi-band frequency-sparse signals.... The performance of the existing compressed sensing reconstruction algorithms has not yet been investigated for discrete multi-coset sampling. We compare the following algorithms – orthogonal matching pursuit, multiple signal classification, subspace-augmented multiple signal classification, focal under...

  8. Technical Note: Proximal Ordered Subsets Algorithms for TV Constrained Optimization in CT Image Reconstruction

    CERN Document Server

    Rose, Sean; Sidky, Emil Y; Pan, Xiaochuan

    2016-01-01

    This article is intended to supplement our 2015 paper in Medical Physics titled "Noise properties of CT images reconstructed by use of constrained total-variation, data-discrepancy minimization", in which ordered subsets methods were employed to perform total-variation constrained data-discrepancy minimization for image reconstruction in X-ray computed tomography. Here we provide details regarding implementation of the ordered subsets algorithms and suggestions for selection of algorithm parameters. Detailed pseudo-code is included for every algorithm implemented in the original manuscript.
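
    A rough sketch of the ordered-subsets flavour of such algorithms (not the pseudo-code from the article itself): gradient steps over row subsets of the data-discrepancy term, alternated with a TV proximal step that here stands in for the exact TV-constraint projection.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Ordered-subsets gradient steps on ||Ax - b||^2, alternated with a TV
# proximal step approximating the TV-constraint enforcement.
def os_tv_reconstruct(A, b, shape, n_subsets=10, n_iter=20, step=1e-3, tv_weight=0.05):
    x = np.zeros(A.shape[1])
    subsets = [slice(s, None, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for sub in subsets:
            As, bs = A[sub], b[sub]
            x -= step * n_subsets * As.T @ (As @ x - bs)   # subset gradient step
        # TV proximal step (the paper instead projects onto a TV ball)
        x = denoise_tv_chambolle(x.reshape(shape), weight=tv_weight).ravel()
        x = x.clip(0)                                      # nonnegativity
    return x
```

    In practice `step` would be set from a Lipschitz bound or line search rather than fixed as here.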

  9. Theory and algorithms for image reconstruction on chords and within regions of interest.

    Science.gov (United States)

    Zou, Yu; Pan, Xiaochuan; Sidky, Emil Y

    2005-11-01

    We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments. In this situation, the proposed algorithms become the algorithms that we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, which include the study on image reconstruction on chords of two-circle trajectory, which is nonsmooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.

  10. Development of statistical reconstruction algorithms applied to X-ray computed tomography

    Science.gov (United States)

    Thibaudeau, Christian

    Computed tomography (CT) provides, in a non-invasive way, a three-dimensional image of a subject's internal anatomy. It constitutes the logical evolution of radiography and allows a volume to be observed in different planes (sagittal, coronal, axial, or any other plane). CT can advantageously complement positron emission tomography (PET), a tool of choice used in biomedical research and in cancer diagnosis. PET provides functional, physiological and metabolic information, allowing the localization and quantification of radiotracers inside the human body. The latter possesses unequalled sensitivity, but may nevertheless suffer from low spatial resolution and a lack of anatomical landmarks, depending on the radiotracer used. The combination, or fusion, of PET and CT images provides this anatomical localization of the radiotracer distribution. The CT image represents a map of the attenuation undergone by the X-rays during their passage through the tissues. It therefore also makes it possible to improve the quantification of the PET image by offering the possibility of correcting for attenuation. The CT image is obtained by transforming attenuation profiles into a Cartesian image that can be interpreted by a human. While the quality of this image is strongly influenced by the performance of the scanner, it also depends greatly on the ability of the reconstruction algorithm to obtain a faithful representation of the imaged medium. Standard reconstruction techniques, based on filtered back-projection (FBP), rely on a mathematically perfect model of the acquisition geometry. An alternative to this reference method is called statistical, or iterative, reconstruction. It achieves better results in the presence of noise or of a limited amount of information and can virtually adapt to any form

  11. Deconvolution based photoacoustic reconstruction for directional transducer with sparsity regularization

    Science.gov (United States)

    Moradi, Hamid; Tang, Shuo; Salcudean, Septimiu E.

    2016-03-01

    We define a deconvolution-based photoacoustic reconstruction with sparsity regularization (DPARS) algorithm for image restoration from projections. The proposed method is capable of visualizing tissue in the presence of constraints such as the specific directivity of sensors and limited-view photoacoustic tomography (PAT). The directivity effect means that our algorithm treats the optically generated ultrasonic waves according to the direction from which they arrive at the transducer. Most PA image reconstruction methods assume that sensors have an omni-directional response; however, in practice, sensors show higher sensitivity to ultrasonic waves coming from one specific direction. In DPARS, the sensitivity of the transducer to incoming waves from different directions is considered. Thus, the DPARS algorithm takes into account the relative location of the absorbers with respect to the transducers, and generates a linear system of equations to solve for the distribution of absorbers. The numerical conditioning and computing times are improved by the use of a sparse Discrete Cosine Transform (DCT) representation of the distribution of absorption coefficients. Our simulation results show that DPARS outperforms the conventional delay-and-sum (DAS) reconstruction method in terms of CNR and RMS errors. Experimental results confirm that DPARS provides images with higher resolution than DAS.
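
    The record does not give DPARS's exact formulation, but the general shape of such a method (a directivity-weighted linear forward model solved for sparse DCT coefficients) can be sketched with a plain ISTA loop; every matrix below is a random stand-in for the physical model in the paper.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n_pix, n_meas = 256, 128
P = rng.normal(size=(n_meas, n_pix)) / np.sqrt(n_pix)   # stand-in acoustic projection
W = np.diag(rng.uniform(0.2, 1.0, n_meas))              # directivity weighting per sensor
C = idct(np.eye(n_pix), norm="ortho", axis=0)           # DCT synthesis basis
A = W @ P @ C                                           # weighted model in coefficient space

x_true = np.zeros(n_pix); x_true[[3, 40, 90]] = [1.0, -0.7, 0.5]
y = A @ x_true + 0.01 * rng.normal(size=n_meas)

lam, L = 0.02, np.linalg.norm(A, 2) ** 2
c = np.zeros(n_pix)
for _ in range(300):                                    # ISTA iterations
    g = c - (A.T @ (A @ c - y)) / L
    c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0) # soft threshold
absorbers = C @ c                                       # back to the image domain
```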

  12. Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector

    Energy Technology Data Exchange (ETDEWEB)

    Rescigno, R., E-mail: regina.rescigno@iphc.cnrs.fr [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Finck, Ch.; Juliani, D. [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Spiriti, E. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali di Frascati (Italy); Istituto Nazionale di Fisica Nucleare - Sezione di Roma 3 (Italy); Baudot, J. [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Abou-Haidar, Z. [CNA, Sevilla (Spain); Agodi, C. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Alvarez, M.A.G. [CNA, Sevilla (Spain); Aumann, T. [GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany); Battistoni, G. [Istituto Nazionale di Fisica Nucleare - Sezione di Milano (Italy); Bocci, A. [CNA, Sevilla (Spain); Böhlen, T.T. [European Organization for Nuclear Research CERN, Geneva (Switzerland); Medical Radiation Physics, Karolinska Institutet and Stockholm University, Stockholm (Sweden); Boudard, A. [CEA-Saclay, IRFU/SPhN, Gif sur Yvette Cedex (France); Brunetti, A.; Carpinelli, M. [Istituto Nazionale di Fisica Nucleare - Sezione di Cagliari (Italy); Università di Sassari (Italy); Cirrone, G.A.P. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Cortes-Giraldo, M.A. [Departamento de Fisica Atomica, Molecular y Nuclear, University of Sevilla, 41080-Sevilla (Spain); Cuttone, G.; De Napoli, M. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Durante, M. [GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany); and others

    2014-12-11

    Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumours and the surrounding healthy tissues. Nowadays, only a limited set of double-differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on CMOS technology. The performance achieved using this device for hadrontherapy purposes is discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performance and the accuracy of the reconstructed observables are evaluated on the basis of simulated and experimental data.

  13. Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector

    Science.gov (United States)

    Rescigno, R.; Finck, Ch.; Juliani, D.; Spiriti, E.; Baudot, J.; Abou-Haidar, Z.; Agodi, C.; Alvarez, M. A. G.; Aumann, T.; Battistoni, G.; Bocci, A.; Böhlen, T. T.; Boudard, A.; Brunetti, A.; Carpinelli, M.; Cirrone, G. A. P.; Cortes-Giraldo, M. A.; Cuttone, G.; De Napoli, M.; Durante, M.; Gallardo, M. I.; Golosio, B.; Iarocci, E.; Iazzi, F.; Ickert, G.; Introzzi, R.; Krimmer, J.; Kurz, N.; Labalme, M.; Leifels, Y.; Le Fevre, A.; Leray, S.; Marchetto, F.; Monaco, V.; Morone, M. C.; Oliva, P.; Paoloni, A.; Patera, V.; Piersanti, L.; Pleskac, R.; Quesada, J. M.; Randazzo, N.; Romano, F.; Rossi, D.; Rousseau, M.; Sacchi, R.; Sala, P.; Sarti, A.; Scheidenberger, C.; Schuy, C.; Sciubba, A.; Sfienti, C.; Simon, H.; Sipala, V.; Tropea, S.; Vanstalle, M.; Younis, H.

    2014-12-01

    Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumours and the surrounding healthy tissues. Nowadays, only a limited set of double-differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on CMOS technology. The performance achieved using this device for hadrontherapy purposes is discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performance and the accuracy of the reconstructed observables are evaluated on the basis of simulated and experimental data.

  14. Application of detecting algorithm based on network

    Institute of Scientific and Technical Information of China (English)

    张凤斌; 杨永田; 江子扬; 孙冰心

    2004-01-01

    Because current intrusion detection systems cannot detect undefined intrusion behavior effectively, this paper exploits the robustness and adaptability of genetic algorithms and integrates them into an intrusion detection system, proposing a detection algorithm based on network traffic. The algorithm is a real-time, self-learning algorithm that can detect undefined intrusion behaviors effectively.

  15. Diversity-Based Boosting Algorithm

    Directory of Open Access Journals (Sweden)

    Jafar A. Alzubi

    2016-05-01

    Boosting is a well-known and efficient technique for constructing a classifier ensemble. An ensemble is built incrementally by altering the distribution of the training data set and forcing learners to focus on misclassification errors. In this paper, an improvement to the Boosting algorithm, called DivBoosting, is proposed and studied. Experiments on several data sets are conducted on both Boosting and DivBoosting. The experimental results show that DivBoosting is a promising method for ensemble pruning. We believe that it has many advantages over the traditional Boosting method because its mechanism is based not solely on selecting the most accurate base classifiers but also on selecting the most diverse set of classifiers.
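
    DivBoosting's exact selection rule is not spelled out in this record, so the sketch below only illustrates the general idea of pruning a boosted ensemble by a diversity criterion, here pairwise disagreement on a validation set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=0)
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)

boost = AdaBoostClassifier(n_estimators=40, random_state=0).fit(Xtr, ytr)
preds = np.array([est.predict(Xval) for est in boost.estimators_])

# Start from the most accurate member, then greedily add the member that
# disagrees most (on average) with those already selected.
acc = (preds == yval).mean(axis=1)
selected = [int(np.argmax(acc))]
for _ in range(9):
    rest = [i for i in range(len(preds)) if i not in selected]
    divers = [np.mean([(preds[i] != preds[j]).mean() for j in selected]) for i in rest]
    selected.append(rest[int(np.argmax(divers))])

# Majority vote of the pruned, diverse sub-ensemble
vote = (preds[selected].mean(axis=0) > 0.5).astype(int)
print("pruned-ensemble accuracy:", (vote == yval).mean())
```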

  16. Optimization-based reconstruction for reduction of CBCT artifact in IGRT

    Science.gov (United States)

    Xia, Dan; Zhang, Zheng; Paysan, Pascal; Seghers, Dieter; Brehm, Marcus; Munro, Peter; Sidky, Emil Y.; Pelizzari, Charles; Pan, Xiaochuan

    2016-04-01

    Kilo-voltage cone-beam computed tomography (CBCT) plays an important role in image-guided radiation therapy (IGRT) by providing 3D spatial information about the tumor that is potentially useful for optimizing treatment planning. In current IGRT CBCT systems, reconstructed images obtained with analytic algorithms, such as the FDK algorithm and its variants, may contain artifacts. In an attempt to compensate for the artifacts, we investigate optimization-based reconstruction algorithms such as the ASD-POCS algorithm for potentially reducing artifacts in IGRT CBCT images. In this study, using data acquired with a physical phantom and a patient subject, we demonstrate that ASD-POCS reconstruction can significantly reduce the artifacts observed in clinical reconstructions. Moreover, patient images reconstructed by use of the ASD-POCS algorithm show a soft-tissue contrast level improved over that of the clinical reconstruction. We have also performed reconstructions from sparse-view data, and observe that, for current clinical imaging conditions, ASD-POCS reconstructions from data collected at one half of the current clinical projection views appear to show image quality, in terms of spatial and soft-tissue-contrast resolution, higher than that of the corresponding clinical reconstructions.

  17. Algorithms For Phylogeny Reconstruction In a New Mathematical Model

    NARCIS (Netherlands)

    Lenzini, Gabriele; Marianelli, Silvia

    1997-01-01

    The evolutionary history of a set of species is represented by a tree called phylogenetic tree or phylogeny. Its structure depends on precise biological assumptions about the evolution of species. Problems related to phylogeny reconstruction (i.e., finding a tree representation of information regard

  18. An Algorithmic Approach for the Reconstruction of Nasal Skin Defects: Retrospective Analysis of 130 Cases

    Directory of Open Access Journals (Sweden)

    Berrak Akşam

    2016-06-01

    Objective: Most malignant cutaneous carcinomas occur in the nasal region. Reconstruction of nasal defects is challenging because of the unique anatomic properties and complex structure of this region. In this study, we present our algorithm for nasal skin defects that occurred after malignant skin tumor excisions. Material and Methods: Patients whose nasal skin was reconstructed after malignant skin tumor excision were included in the study. These patients were evaluated by age, gender, comorbidities, tumor location, tumor size, reconstruction type, histopathological diagnosis, and tumor recurrence. Results: A total of 130 patients (70 female, 60 male) were evaluated. The average age of the patients was 67.8 years. Tumors were located mostly at the dorsum, alar region, and tip of the nose. When reconstruction methods were evaluated, primary closure was preferred in 14.6% of patients, full-thickness skin grafts were used in 25.3% of patients, and reconstruction with flaps was the choice in 60% of patients. Different flaps were used according to the subunits; mostly dorsal nasal flaps, bilobed flaps, nasolabial flaps, and forehead flaps were used. Conclusion: The defect-only reconstruction principle was accepted in this study. Previously described subunits of the nose, such as the dorsum, tip, alar region, lateral wall, columella, and soft triangles, were further divided into subregions by their anatomical relations. An algorithm was planned with these subregions. In nasal skin reconstruction, this algorithm helps in selecting the method that gives the best results and minimizes complications.

  19. Phase-contrast CT: fundamental theorem and fast image reconstruction algorithms

    Science.gov (United States)

    Bronnikov, Andrei V.

    2006-08-01

    Phase-contrast x-ray computed tomography (CT) is an emerging imaging technique that can be implemented at third generation synchrotron radiation sources or by using a microfocus x-ray tube. Promising experimental results have recently been obtained in material science and biological applications. At the same time, the lack of a mathematical theory comparable to that of conventional absorption-based CT limits the progress in this field. We suggest such a theory and prove a fundamental theorem that plays the same role for phase-contrast CT as the Fourier slice theorem does for absorption-based CT. The fundamental theorem allows us to derive fast image reconstruction algorithms in the form of filtered backprojection (FBP).
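
    For readers unfamiliar with the absorption-based analogue, here is a minimal parallel-beam FBP in its classic form; the phase-contrast variant derived in the paper differs essentially in the filter that the fundamental theorem dictates.

```python
import numpy as np

# Minimal parallel-beam filtered backprojection (generic absorption CT).
def fbp(sinogram, angles_deg):
    n_ang, n_det = sinogram.shape
    # Ramp (Ram-Lak) filter applied in the detector frequency domain
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))
    # Backproject each filtered projection over the image grid
    grid = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(grid, grid)
    recon = np.zeros((n_det, n_det))
    for p, ang in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(ang) + Y * np.sin(ang) + n_det / 2
        recon += np.interp(t.ravel(), np.arange(n_det), p,
                           left=0.0, right=0.0).reshape(n_det, n_det)
    return recon * np.pi / (2 * n_ang)
```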

  20. Neural Network Based 3D Surface Reconstruction

    Directory of Open Access Journals (Sweden)

    Vincy Joseph

    2009-11-01

    This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the variant albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method to reconstruct 3-D objects. Facial images were used for training in the proposed approach.

  1. DD4hep Based Event Reconstruction

    CERN Document Server

    AUTHOR|(SzGeCERN)683529; Frank, Markus; Gaede, Frank-Dieter; Hynds, Daniel; Lu, Shaojun; Nikiforou, Nikiforos; Petric, Marko; Simoniello, Rosa; Voutsinas, Georgios Gerasimos

    The DD4HEP detector description toolkit offers a flexible and easy-to-use solution for the consistent and complete description of particle physics detectors in a single system. The sub-component DDREC provides a dedicated interface to the detector geometry as needed for event reconstruction. With DDREC there is no need to define an additional, separate reconstruction geometry as is often done in HEP, but one can transparently extend the existing detailed simulation model to be also used for the reconstruction. Based on the extension mechanism of DD4HEP, DDREC allows one to attach user defined data structures to detector elements at all levels of the geometry hierarchy. These data structures define a high level view onto the detectors describing their physical properties, such as measurement layers, point resolutions, and cell sizes. For the purpose of charged particle track reconstruction, dedicated surface objects can be attached to every volume in the detector geometry. These surfaces provide the measuremen...

  2. DD4hep Based Event Reconstruction

    CERN Document Server

    Sailer, Andre; Gaede, Frank-Dieter; Hynds, Daniel; Lu, Shaojun; Nikiforou, Nikiforos; Petric, Marko; Simoniello, Rosa; Voutsinas, Georgios Gerasimos

    The DD4HEP detector description toolkit offers a flexible and easy-to-use solution for the consistent and complete description of particle physics detectors in a single system. The sub-component DDREC provides a dedicated interface to the detector geometry as needed for event reconstruction. With DDREC there is no need to define an additional, separate reconstruction geometry as is often done in HEP, but one can transparently extend the existing detailed simulation model to be also used for the reconstruction. Based on the extension mechanism of DD4HEP, DDREC allows one to attach user defined data structures to detector elements at all levels of the geometry hierarchy. These data structures define a high level view onto the detectors describing their physical properties, such as measurement layers, point resolutions, and cell sizes. For the purpose of charged particle track reconstruction, dedicated surface objects can be attached to every volume in the detector geometry. These surfaces provide the measuremen...

  3. Wave Superposition Based Sound Field Reconstruction

    Institute of Scientific and Technical Information of China (English)

    LI Jia-qing; CHEN Jin; YANG Chao

    2008-01-01

    In order to overcome the obstacle of singular integrals in the boundary element method (BEM), we present an efficient sound field reconstruction technique based on the wave superposition method (WSM). Its principle includes three steps: first, the sound pressure field of an arbitrarily shaped radiator is measured with a microphone array; then, the exterior sound field of the radiator is computed backward and forward using the WSM; at last, the final results are visualized in terms of sound pressure contours or animations. With these visualized contours or animations, noise sources can be easily located and quantified, and the noise transmission path can be found. Numerical simulation and experimental results prove that the technique is suitable and accurate for sound field reconstruction. In addition, we present a sound field reconstruction system prototype on the basis of this technique. It lays a foundation for the application of wave superposition to sound field reconstruction in industrial situations.
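
    A toy numerical sketch of the wave superposition step (equivalent sources fitted to microphone-array pressures, then re-radiated to any field point); geometry, frequency, and the regularization weight are invented for illustration.

```python
import numpy as np

k = 2 * np.pi * 500 / 343.0                       # wavenumber at 500 Hz in air

def green(r_field, r_src):
    # Free-space Green's function between field points and source points
    d = np.linalg.norm(r_field[:, None, :] - r_src[None, :, :], axis=2)
    return np.exp(1j * k * d) / (4 * np.pi * d)

rng = np.random.default_rng(1)
src = rng.uniform(-0.05, 0.05, (20, 3))           # equivalent sources inside the body
mics = rng.uniform(-0.5, 0.5, (64, 3)) + [0, 0, 1.0]

G = green(mics, src)
q_true = rng.normal(size=20) + 0j
p_meas = G @ q_true                               # stand-in for measured pressures

# Regularized least-squares source strengths from the measured pressures
q = np.linalg.solve(G.conj().T @ G + 1e-6 * np.eye(20), G.conj().T @ p_meas)

# Forward/backward propagation: evaluate the field on a reconstruction plane
xy = np.linspace(-1, 1, 30)
X, Y = np.meshgrid(xy, xy)
pts = np.column_stack([X.ravel(), Y.ravel(), np.full(900, 0.5)])
p_recon = (green(pts, src) @ q).reshape(30, 30)
```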

  4. GPU-based online track reconstruction for PANDA and application to the analysis of D→Kππ

    Energy Technology Data Exchange (ETDEWEB)

    Herten, Andreas

    2015-07-02

    The PANDA experiment is a new hadron physics experiment which is being built for the FAIR facility in Darmstadt, Germany. PANDA will employ a novel scheme of data acquisition: the experiment will reconstruct the full stream of events in real time to make trigger decisions based on the event topology. An important part of this online event reconstruction is online track reconstruction. Online track reconstruction algorithms need to reconstruct particle trajectories in nearly real time. This work uses the high-throughput capabilities of Graphics Processing Units to benchmark different online track reconstruction algorithms. The reconstruction of D± → K∓π±π± is studied extensively and one online track reconstruction algorithm is applied.

  5. New reconstruction algorithm allows shortened acquisition time for myocardial perfusion SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Valenta, Ines; Treyer, Valerie; Husmann, Lars; Gaemperli, Oliver; Schindler, Michael J.; Herzog, Bernhard A.; Veit-Heibach, Patrick; Pazhenkottil, Aju P.; Kaufmann, Philipp A. [University Hospital Zurich, Cardiac Imaging, Zurich (Switzerland); University of Zurich, Zurich Center for Integrative Human Physiology, Zurich (Switzerland); Buechel, Ronny R.; Nkoulou, Rene [University Hospital Zurich, Cardiac Imaging, Zurich (Switzerland)

    2010-04-15

    Shortening scan time and/or reducing radiation dose while maintaining image quality are the main issues of current research in radionuclide myocardial perfusion imaging (MPI). We aimed to validate a new iterative reconstruction (IR) algorithm for SPECT MPI allowing shortened acquisition time (HALF time) while maintaining image quality versus standard full-time acquisition (FULL time). In this study, 50 patients, referred for evaluation of known or suspected coronary artery disease by SPECT MPI using 99mTc-tetrofosmin, underwent a 1-day adenosine stress 300 MBq/rest 900 MBq protocol with a standard acquisition (stress 15 min/rest 15 min, FULL time) immediately followed by a short emission scan (stress 9 min/rest 7 min, HALF time) on a Ventri SPECT camera (GE Healthcare). FULL time scans were processed with IR; short scans were additionally processed with a recently developed software algorithm for HALF time emission scans. All reconstructions were subsequently analyzed using commercially available software (QPS/QGS, Cedars-Sinai) with/without X-ray based attenuation correction (AC). Uptake values (percent of maximum) were compared by regression and Bland-Altman (BA) analysis in a 20-segment model. HALF scans yielded 96% readout and 100% clinical diagnosis concordance compared to FULL. Correlation for uptake in each segment (n = 1,000) was r = 0.87 at stress (p < 0.001) and r = 0.89 at rest (p < 0.001), with respective BA limits of agreement of -11% to 10% and -12% to 11%. After AC, similar correlations (r = 0.82, rest; r = 0.80, stress; both p < 0.001) and BA limits were found (-12% to 10%; -13% to 12%). With the new IR algorithm, SPECT MPI can be acquired in half of the scan time without compromising image quality, resulting in excellent agreement with FULL time scans with regard to uptake and clinical conclusion. (orig.)

  6. Bayesian Image Reconstruction Based on Voronoi Diagrams

    CERN Document Server

    Cabrera, G F; Hitschfeld, N

    2007-01-01

    We present a Bayesian Voronoi image reconstruction technique (VIR) for interferometric data. Bayesian analysis applied to the inverse problem allows us to derive the a-posteriori probability of a novel parameterization of interferometric images. We use a variable Voronoi diagram as our model in place of the usual fixed pixel grid. A quantization of the intensity field allows us to calculate the likelihood function and a-priori probabilities. The Voronoi image is optimized including the number of polygons as free parameters. We apply our algorithm to deconvolve simulated interferometric data. Residuals, restored images and chi^2 values are used to compare our reconstructions with fixed grid models. VIR has the advantage of modeling the image with few parameters, obtaining a better image from a Bayesian point of view.

  7. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Science.gov (United States)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. This paper focuses on the comparison of some stereo matching algorithms (local and global) that are very popular both in photogrammetry and computer vision. In particular, Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, is discussed. The results of tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, are presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
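
    As a pointer to one of the implementations compared here, OpenCV's semi-global matcher can be driven as below; the file names and parameter values are placeholders, not the settings used in the paper.

```python
import cv2
import numpy as np

# Rectified stereo pair (hypothetical file names)
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,            # must be divisible by 16
    blockSize=block,
    P1=8 * block * block,         # penalty for small disparity changes
    P2=32 * block * block,        # larger penalty for big jumps (smoothness)
    uniquenessRatio=10,
)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point output
```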

  8. Comparing five different iterative reconstruction algorithms for computed tomography in an ROC study.

    Science.gov (United States)

    Jensen, Kristin; Martinsen, Anne Catrine T; Tingberg, Anders; Aaløkken, Trond Mogens; Fosse, Erik

    2014-12-01

    The purpose of this study was to evaluate lesion conspicuity achieved with five different iterative reconstruction techniques from four CT vendors at three different dose levels. Comparisons were made of iterative algorithms and filtered back projection (FBP) among and within systems. An anthropomorphic liver phantom was examined with four CT systems, each from a different vendor. CTDIvol levels of 5 mGy, 10 mGy and 15 mGy were chosen. Images were reconstructed with FBP and the iterative algorithm on each system. Images were interpreted independently by four observers, and the areas under the ROC curve (AUCs) were calculated. Noise and contrast-to-noise ratios (CNR) were measured. One iterative algorithm increased AUC (0.79, 0.95, and 0.97) compared to FBP (0.70, 0.86, and 0.93) at all dose levels (p < 0.001 and p = 0.047). Another algorithm increased AUC from 0.78 with FBP to 0.84 (p = 0.007) at 5 mGy. Differences at 10 and 15 mGy were not significant (p-values: 0.084-0.883). Three algorithms showed no difference in AUC compared to FBP (p-values: 0.008-1.000). All of the algorithms decreased noise (10-71%) and improved CNR. Only two algorithms improved lesion detection, even though noise reduction was shown with all algorithms. Iterative reconstruction algorithms affected lesion detection differently at different dose levels. One iterative algorithm improved lesion detectability compared to filtered back projection. Three algorithms did not significantly improve lesion detectability. One algorithm improved lesion detectability at the lowest dose level.

  9. A stand-alone track reconstruction algorithm for the scintillating fibre tracker at the LHCb upgrade

    CERN Multimedia

    Quagliani, Renato

    2017-01-01

    The LHCb upgrade detector project foresees the presence of a scintillating fibre tracker (SciFi) to be used during LHC Run III, starting in 2020. The instantaneous luminosity will be increased up to $2\times10^{33}$, five times larger than in Run II, and a full software event reconstruction will be performed at the full bunch-crossing rate by the trigger. The new running conditions, and the tighter timing constraints in the software trigger, represent a big challenge for track reconstruction. This poster presents the design and performance of a novel algorithm that has been developed to reconstruct track segments using solely hits from the SciFi. This algorithm is crucial for the reconstruction of tracks originating from long-lived particles such as $K_{S}^{0}$ and $\Lambda$ and greatly enhances the physics potential and capabilities of the LHCb upgrade compared to its previous implementation.

  10. PROCEEDINGS ON SYNCHROTRON RADIATION: An ART iterative reconstruction algorithm for computed tomography of diffraction enhanced imaging

    Science.gov (United States)

    Wang, Zhen-Tian; Zhang, Li; Huang, Zhi-Feng; Kang, Ke-Jun; Chen, Zhi-Qiang; Fang, Qiao-Guang; Zhu, Pei-Ping

    2009-11-01

    X-ray diffraction enhanced imaging (DEI) has extremely high sensitivity for weakly absorbing low-Z samples in medical and biological fields. In this paper, we propose an Algebraic Reconstruction Technique (ART) iterative algorithm for computed tomography of diffraction enhanced imaging (DEI-CT). An Ordered Subsets (OS) technique is used to accelerate the ART reconstruction. Few-view reconstruction is also studied, and a partial differential equation (PDE) type filter, which has edge-preserving and denoising abilities, is used to improve the image quality and eliminate artifacts. The proposed algorithm is validated with both numerical simulations and an experiment at the Beijing Synchrotron Radiation Facility (BSRF).
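
    A generic ordered-subsets ART (Kaczmarz-type) loop for a linear model A x = b looks roughly as follows; the nonnegativity clip merely stands in for the PDE edge-preserving filter applied in the paper.

```python
import numpy as np

def os_art(A, b, n_subsets=8, n_iter=10, lam=0.5):
    """Ordered-subsets ART with relaxation factor lam for A @ x = b."""
    x = np.zeros(A.shape[1])
    row_norm = (A * A).sum(axis=1) + 1e-12
    subsets = [range(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for sub in subsets:              # one pass over each ordered subset
            for i in sub:                # Kaczmarz update for row i
                r = b[i] - A[i] @ x
                x += lam * r / row_norm[i] * A[i]
            x = x.clip(0)                # simple nonnegativity in place of the
                                         # paper's PDE edge-preserving filter
    return x
```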

  11. Regularized image reconstruction algorithms for dual-isotope myocardial perfusion SPECT (MPS) imaging using a cross-tracer prior.

    Science.gov (United States)

    He, Xin; Cheng, Lishui; Fessler, Jeffrey A; Frey, Eric C

    2011-06-01

    In simultaneous dual-isotope myocardial perfusion SPECT (MPS) imaging, data are simultaneously acquired to determine the distributions of two radioactive isotopes. The goal of this work was to develop penalized maximum likelihood (PML) algorithms for a novel cross-tracer prior that exploits the fact that the two images reconstructed from simultaneous dual-isotope MPS projection data are perfectly registered in space. We first formulated the simultaneous dual-isotope MPS reconstruction problem as a joint estimation problem. A cross-tracer prior that couples voxel values on both images was then proposed. We developed an iterative algorithm to reconstruct the MPS images that converges to the maximum a posteriori solution for this prior based on separable surrogate functions. To accelerate the convergence, we developed a fast algorithm for the cross-tracer prior based on the complete data OS-EM (COSEM) framework. The proposed algorithm was compared qualitatively and quantitatively to a single-tracer version of the prior that did not include the cross-tracer term. Quantitative evaluations included comparisons of mean and standard deviation images as well as assessment of image fidelity using the mean square error. We also evaluated the cross tracer prior using a three-class observer study with respect to the three-class MPS diagnostic task, i.e., classifying patients as having either no defect, reversible defect, or fixed defects. For this study, a comparison with conventional ordered subsets-expectation maximization (OS-EM) reconstruction with postfiltering was performed. The comparisons to the single-tracer prior demonstrated similar resolution for areas of the image with large intensity changes and reduced noise in uniform regions. The cross-tracer prior was also superior to the single-tracer version in terms of restoring image fidelity. Results of the three-class observer study showed that the proposed cross-tracer prior and the convergent algorithms improved the

  12. Hardware Implementation and Validation of 3D Underwater Shape Reconstruction Algorithm Using a Stereo-Catadioptric System

    Directory of Open Access Journals (Sweden)

    Rihab Hmida

    2016-08-01

    In this paper, we present a new stereo-vision-based system and its efficient hardware implementation for real-time underwater environment exploration through 3D sparse reconstruction based on a number of feature points. Details of the proposed underwater 3D shape reconstruction algorithm are presented. The main concepts and advantages are discussed and a comparison with existing systems is performed. In order to meet real-time video constraints, a hardware implementation of the algorithm is performed using Xilinx System Generator. The pipelined stereo vision system has been implemented using Field Programmable Gate Array (FPGA) technology. Both timing constraints and the precision of the mathematical operations have been evaluated in order to validate the proposed hardware implementation of our system. Experimental results show that the proposed system achieves high accuracy and execution-time performance.

  13. Jet Energy Scale and its Uncertainties using the Heavy Ion Jet Reconstruction Algorithm in pp Collisions

    CERN Document Server

    Puri, Akshat; The ATLAS collaboration

    2017-01-01

    ATLAS uses a jet reconstruction algorithm in heavy ion collisions that takes as input calorimeter towers of size $0.1 \times \pi/32$ in $\Delta\eta \times \Delta\phi$ and iteratively determines the underlying event background. This algorithm, which is different from the standard jet reconstruction used in ATLAS, is also used in the proton-proton collisions that serve as reference data for Pb+Pb and p+Pb. This poster provides details of the heavy ion jet reconstruction algorithm and its performance in pp collisions. The calibration procedure is described in detail and cross-checks using photon-jet balance are shown. The uncertainties on the jet energy scale and the jet energy resolution are described.

  14. A robust jet reconstruction algorithm for high-energy lepton colliders

    Directory of Open Access Journals (Sweden)

    M. Boronat

    2015-11-01

    We propose a new sequential jet reconstruction algorithm for future lepton colliders at the energy frontier. The Valencia algorithm combines the natural distance criterion for lepton colliders with the greater robustness against backgrounds of algorithms adapted to hadron colliders. Results from a detailed Monte Carlo simulation of tt¯ and ZZ production at future linear e+e− colliders (ILC and CLIC), with a realistic level of background overlaid, show that it achieves better performance in the presence of background than the classical algorithms used at previous e+e− colliders.

  15. Reconstruction of harmonic signals based on bispectrum

    Institute of Scientific and Technical Information of China (English)

    FAN Yangyu; SUN Jincai; LI Pingan; XU Jiadong; SHANG Jiuhao

    2000-01-01

    A method for accurate reconstruction of harmonic signals from the bispectrum is presented. Based on the analysis of the measured harmonic signal, a sinusoidal signal with zero phase, unit amplitude, and half the fundamental frequency is combined with the measured signal to form a combined signal, and bispectrum analysis is then carried out to reconstruct the phase and the amplitude of the measured signal accurately. The new method does not require the zero-phase assumption for the fundamental component and thus eliminates the phase shift between the calculated Fourier phase and the true Fourier phase that occurs in existing bispectrum-based signal retrieval methods. The simulation results show the effectiveness of the new method.

  16. Imaging metallic samples using electrical capacitance tomography: forward modelling and reconstruction algorithms

    Science.gov (United States)

    Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.

    2016-11-01

    Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within a sensing region. So far, ECT has primarily been used to image non-conductive media only, since if the conductivity of the imaged object is high, the capacitance measuring circuit is almost short-circuited by the conduction path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples with conventional ECT systems by investigating the two main aspects of image reconstruction, namely the forward problem and the inverse problem. For the forward problem, two different methods to model a region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high-contrast images are examined. The first two methods, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third method, the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially using a level set algorithm to find the boundary of the metal.
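
    The linear single-step Tikhonov method in time-difference mode reduces to one regularized solve; a minimal sketch with an invented sensitivity matrix:

```python
import numpy as np

def tikhonov_difference_image(J, dc, alpha=1e-3):
    """Solve min ||J g - dc||^2 + alpha ||g||^2 for the permittivity change g.

    J is the (precomputed) sensitivity/Jacobian matrix of the ECT sensor and
    dc the change in capacitance measurements between the two states.
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ dc)

# Hypothetical sizes: 66 independent electrode pairs, 32x32 pixel grid
rng = np.random.default_rng(2)
J = rng.normal(size=(66, 1024))
dc = J @ rng.normal(size=1024) * 0.01    # stand-in for measured capacitance changes
g = tikhonov_difference_image(J, dc).reshape(32, 32)
```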

  17. Incorporation of local dependent reliability information into the Prior Image Constrained Compressed Sensing (PICCS) reconstruction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Vaegler, Sven; Sauer, Otto [Wuerzburg Univ. (Germany). Dept. of Radiation Oncology; Stsepankou, Dzmitry; Hesser, Juergen [University Medical Center Mannheim, Mannheim (Germany). Dept. of Experimental Radiation Oncology

    2015-07-01

    The reduction of dose in cone beam computed tomography (CBCT) arises from the decrease of the tube current for each projection as well as from the reduction of the number of projections. In order to maintain good image quality, sophisticated image reconstruction techniques are required. Prior Image Constrained Compressed Sensing (PICCS) incorporates prior images into the reconstruction algorithm and outperforms the widely used Feldkamp-Davis-Kress (FDK) algorithm when the number of projections is reduced. However, prior images that contain major variations are so far not appropriately considered in PICCS. We therefore propose the partial-PICCS (pPICCS) algorithm. This framework is a problem-specific extension of PICCS which additionally enables the incorporation of the reliability of the prior images. We assumed that the prior images are composed of areas with large and small deviations. Accordingly, a weighting matrix accounts for the assigned areas in the objective function. We applied our algorithm to the problem of image reconstruction from few views in simulations with a computer phantom as well as on clinical CBCT projections from a head-and-neck case. All prior images contained large local variations. The reconstructed images were compared to the reconstruction results of the FDK algorithm, Compressed Sensing (CS) and PICCS. To show the gain in image quality we compared image details with the reference image and used quantitative metrics (root-mean-square error (RMSE), contrast-to-noise ratio (CNR)). The pPICCS reconstruction framework yields images with substantially improved quality even when the number of projections is very small. The images contain less streaking, blurring and fewer inaccurately reconstructed structures compared to the images reconstructed by FDK, CS and conventional PICCS. The increased image quality is also reflected in large RMSE differences. We proposed a modification of the original PICCS algorithm. The pPICCS algorithm

  18. 3 dimensional ionospheric electron density reconstruction based on GPS measurements

    Science.gov (United States)

    Stolle, C.; Schlüter, S.; Jacobi, C.; Jakowski, N.

    When radio waves sent by the navigation system GPS pass through the ionosphere, they are subject to delays in phase, travel time and polarisation, which is an effect of the free electrons. The measured integrated value of Total Electron Content can be utilised for three-dimensional reconstruction of electron density patterns in the ionosphere. Here a tomographic approach is presented. Since the distribution of data is very sparse and patchy, we decided on an algebraic iterative algorithm. The ground-based GPS data collected by IGS receivers can be combined with space-based GPS radio limb sounding, incoherent scatter radar and ionosonde data. Radio occultation data improve, besides the amount of available data, especially the vertical resolution of the electron density distribution. Ionosonde peak electron densities are taken as the stopping criterion for the iteration. Reconstructed ionospheric scenarios and validations of the system by independent measurements are presented.

  19. Medical Image Colour Transfer and 3D Reconstruction Based on an Improved Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    蒋先刚; 丘赟立; 熊娟

    2013-01-01

    In order to obtain colour simulated images of human organs and tissues that are valid and true to life, we realize colour transfer onto a grey MRI slice by searching for the optimal fitness of the slice against the brightness and textures of a true-colour image. In this paper we particularly study the combination of an improved genetic algorithm with the distribution of brightness and texture within pixel neighbourhoods and the local colour structure, apply it to the Welsh image colourisation algorithm for comparison, and explore adjustment techniques for population selection, crossover and mutation, coding mode, and adaptive gradual variation of code values in the genetic algorithm. Colour characteristics based on pixel-neighbourhood distribution and local colour distribution are analysed and compared for the optimal colour-point search using exhaustive, random, and genetic algorithms. The 3D model reconstructed from the colourised MRI slices can clearly reflect the distribution and construction of organs and tissues at multiple levels.

  20. Studies on filtered back-projection imaging reconstruction based on a modified wavelet threshold function

    Science.gov (United States)

    Wang, Zhengzi; Ren, Zhong; Liu, Guodong

    2016-10-01

    In this paper, the wavelet threshold denoising method is incorporated into the filtered back-projection algorithm for image reconstruction. To overcome the drawbacks of the traditional soft- and hard-threshold functions, a modified wavelet threshold function is proposed. The modified wavelet threshold function has two threshold values and two variants. To verify the feasibility of the modified wavelet threshold function, standard test experiments were performed on the MATLAB software platform. Experimental results show that the filtered back-projection reconstruction algorithm based on the modified wavelet threshold function achieves a better reconstruction effect because it is more flexible.
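
    The record does not reproduce the modified function itself, so the sketch below shows the traditional soft and hard thresholds together with one plausible two-threshold compromise of the kind described (hard above t2, linear shrinkage between t1 and t2, zero below t1).

```python
import numpy as np

def soft(w, t):
    # Classic soft threshold: shrink everything toward zero by t
    return np.sign(w) * np.maximum(np.abs(w) - t, 0)

def hard(w, t):
    # Classic hard threshold: keep coefficients above t unchanged
    return w * (np.abs(w) > t)

def modified(w, t1, t2):
    # Assumed two-threshold compromise (continuous at both t1 and t2):
    # zero below t1, linear ramp on [t1, t2], identity above t2.
    a = np.abs(w)
    shrink = np.sign(w) * t2 * (a - t1) / (t2 - t1 + 1e-12)
    return np.where(a >= t2, w, np.where(a >= t1, shrink, 0.0))
```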

  1. TV-constrained incremental algorithms for low-intensity CT image reconstruction

    DEFF Research Database (Denmark)

    Rose, Sean D.; Andersen, Martin S.; Sidky, Emil Y.

    2015-01-01

    constraint can be guided by an image reconstructed by filtered backprojection (FBP). We apply our algorithm to low-dose synchrotron X-ray CT data from the Advanced Photon Source (APS) at Argonne National Labs (ANL) to demonstrate its potential utility. We find that the algorithm provides a means of edge-preserving regularization with the potential to generate useful images at low iteration number in low-dose CT.

  2. Application of incremental algorithms to CT image reconstruction for sparse-view, noisy data

    OpenAIRE

    Rose, Sean; Andersen, Martin Skovgaard; Sidky, Emil Y.; Pan, Xiaochuan

    2014-01-01

    This conference contribution adapts an incremental framework for solving optimization problems of interest for sparse-view CT. From the incremental framework two algorithms are derived: one that combines a damped form of the algebraic reconstruction technique (ART) with a total-variation (TV) projection, and one that employs a modified damped ART, accounting for a weighted-quadratic data fidelity term, combined with TV projection. The algorithms are demonstrated on simulated, noisy, sparse-view...

  3. 2.5D dictionary learning based computed tomography reconstruction

    Science.gov (United States)

    Luo, Jiajia; Eri, Haneda; Can, Ali; Ramani, Sathish; Fu, Lin; De Man, Bruno

    2016-05-01

    A computationally efficient 2.5D dictionary learning (DL) algorithm is proposed and implemented in the model-based iterative reconstruction (MBIR) framework for low-dose CT reconstruction. MBIR is based on the minimization of a cost function containing data-fitting and regularization terms to control the trade-off between data fidelity and image noise. Due to the strong denoising performance of DL, it has previously been considered as a regularizer in MBIR, and both 2D and 3D DL implementations are possible. Compared to the 2D case, 3D DL keeps more spatial information and generates images with better quality, although it requires more computation. We propose a novel 2.5D DL scheme, which leverages the computational advantage of 2D DL while attempting to maintain reconstruction quality similar to 3D DL. We demonstrate the effectiveness of this new 2.5D DL scheme for MBIR in low-dose CT. By applying the 2D DL method in three different orthogonal planes and calculating the sparse coefficients accordingly, much of the 3D spatial information can be preserved without incurring the computational penalty of the 3D DL method. For performance evaluation, we use baggage phantoms with different numbers of projection views. In order to quantitatively compare the performance of different algorithms, we use PSNR, SSIM and region-based standard deviation to measure the noise level, and use the edge response to calculate the resolution. Experimental results with full-view datasets show that the different DL-based algorithms have similar performance and 2.5D DL has the best resolution. Results with sparse-view datasets show that 2.5D DL outperforms both 2D and 3D DL in terms of noise reduction. We also compare the computational costs, and 2.5D DL shows a strong advantage over 3D DL in both full-view and sparse-view cases.
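
    Structurally, the 2.5D scheme amounts to running a 2D dictionary-based denoiser over slices in the three orthogonal planes and fusing the results; a sketch with the 2D denoiser left abstract:

```python
import numpy as np

def denoise_25d(vol, denoise2d):
    """Apply a 2D denoiser slice-wise in the three orthogonal planes and average.

    `denoise2d` is a placeholder for any learned-dictionary 2D denoiser
    (the paper's sparse-coding step); `vol` is a 3D numpy array.
    """
    out = np.zeros_like(vol)
    for axis in range(3):                       # axial, coronal, sagittal stacks
        moved = np.moveaxis(vol, axis, 0)
        den = np.stack([denoise2d(sl) for sl in moved])
        out += np.moveaxis(den, 0, axis)
    return out / 3.0                            # fuse the three plane estimates
```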

  4. Fast direct fourier reconstruction of radial and PROPELLER MRI data using the chirp transform algorithm on graphics hardware.

    Science.gov (United States)

    Feng, Yanqiu; Song, Yanli; Wang, Cong; Xin, Xuegang; Feng, Qianjin; Chen, Wufan

    2013-10-01

    To develop and test a new algorithm for fast direct Fourier transform (DrFT) reconstruction of MR data on non-Cartesian trajectories composed of lines with equally spaced points. The DrFT, which is normally used as a reference in evaluating the accuracy of other reconstruction methods, can reconstruct images directly from non-Cartesian MR data without interpolation. However, DrFT reconstruction involves substantially intensive computation, which makes the DrFT impractical for routine clinical applications. In this article, the Chirp transform algorithm was introduced to accelerate the DrFT reconstruction of radial and Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) MRI data located on trajectories that are composed of lines with equally spaced points. The performance of the proposed Chirp transform algorithm-DrFT was evaluated using simulation and in vivo MRI data. After implementing the algorithm on a graphics processing unit, the proposed Chirp transform algorithm-DrFT achieved an acceleration of approximately one order of magnitude, and the speed-up factor was further increased to approximately three orders of magnitude compared with the traditional single-thread DrFT reconstruction. Implementation of the Chirp transform algorithm-DrFT on the graphics processing unit can efficiently calculate the DrFT reconstruction of radial and PROPELLER MRI data. Copyright © 2012 Wiley Periodicals, Inc.
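
    The core of the speed-up is Bluestein's chirp transform, which evaluates a DFT-type sum at equally spaced (possibly non-integer) frequencies with three FFTs; a self-contained sketch, with conventions assumed rather than taken from the paper:

```python
import numpy as np

def czt(x, M, w, a=1.0 + 0j):
    """Chirp transform: X_k = sum_n x_n a^{-n} w^{n k}, k = 0..M-1 (Bluestein)."""
    N = len(x)
    n, k = np.arange(N), np.arange(M)
    q = x * a ** (-n) * w ** (n ** 2 / 2.0)      # pre-chirped input
    m = np.arange(-(N - 1), M)
    h = w ** (-(m ** 2) / 2.0)                   # chirp filter, support m=-(N-1)..M-1
    L = 1 << int(np.ceil(np.log2(N + M - 1)))    # FFT length for linear convolution
    conv = np.fft.ifft(np.fft.fft(q, L) * np.fft.fft(h, L))
    return w ** (k ** 2 / 2.0) * conv[N - 1:N - 1 + M]

# Sanity check against a direct DFT-type sum on one k-space line
x = np.random.default_rng(3).normal(size=64) + 0j
w = np.exp(-2j * np.pi * 0.013)                  # arbitrary frequency spacing
ref = np.array([np.sum(x * w ** (np.arange(64) * kk)) for kk in range(32)])
assert np.allclose(czt(x, 32, w), ref)
```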

  5. Comparison of parametric FBP and OS-EM reconstruction algorithm images for PET dynamic study

    Energy Technology Data Exchange (ETDEWEB)

    Oda, Keiichi; Uemura, Koji; Kimura, Yuichi; Senda, Michio [Tokyo Metropolitan Inst. of Gerontology (Japan). Positron Medical Center; Toyama, Hinako; Ikoma, Yoko

    2001-10-01

    An ordered subsets expectation maximization (OS-EM) algorithm is used for image reconstruction to suppress image noise and to produce non-negative-valued images. We have applied OS-EM to a digital brain phantom and to human brain 18F-FDG PET kinetic studies to generate parametric images. A 45 min dynamic scan was performed starting at the injection of FDG with a 2D PET scanner. The images were reconstructed with OS-EM (6 iterations, 16 subsets) and with filtered backprojection (FBP), and K1, k2 and k3 images were created by the Marquardt non-linear least squares method based on the 3-parameter kinetic model. Although the OS-EM activity images correlated fairly well with those obtained by FBP, the pixel correlations were poor for the k2 and k3 parametric images; the plots were scattered along the line of identity, and the mean values for K1, k2 and k3 obtained by OS-EM were almost equal to those by FBP. The kinetic fitting error for OS-EM was no smaller than that for FBP. The results suggest that OS-EM is not necessarily superior to FBP for creating parametric images. (author)
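
    For context, the 3-parameter irreversible FDG model fitted here has a closed form in the plasma input Cp; a small sketch fitting it with SciPy's Levenberg-Marquardt optimizer (the input function and tissue curve below are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.25, 45, 90)                    # minutes
cp = (t / 2.0) * np.exp(-t / 4.0)                # made-up plasma input Cp(t)
dt = t[1] - t[0]

def tissue_tac(t, K1, k2, k3):
    # C_T = K1*k3/(k2+k3) * cumint(Cp) + K1*k2/(k2+k3) * (Cp conv exp(-(k2+k3)t))
    trapped = K1 * k3 / (k2 + k3) * np.cumsum(cp) * dt
    free = K1 * k2 / (k2 + k3) * np.convolve(cp, np.exp(-(k2 + k3) * t))[:len(t)] * dt
    return trapped + free

rng = np.random.default_rng(4)
y = tissue_tac(t, 0.10, 0.15, 0.05) + 0.002 * rng.normal(size=len(t))
(K1, k2, k3), _ = curve_fit(tissue_tac, t, y, p0=(0.05, 0.1, 0.02))  # LM by default
```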

  6. Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms

    Science.gov (United States)

    Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid

    2017-09-01

    The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). First, we discuss the combination of bundle adjustment and GAs that we have proposed in order to improve 3D reconstruction efficiency and success. We use GAs to improve the fitness quality of the initial values used in the optimization problem, which reliably increases the convergence rate. Extracted geometric constraints are first used to obtain an estimate of the focal length that helps us in the initialization step. Matching of homologous points together with the constraints is used to estimate the 3D model. Our new method offers several advantages: it reduces the number of parameters estimated in the optimization step, decreases the number of images used, saves time, and stabilizes the quality of the 3D results. In the end, without any prior information about our 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined beforehand, in an easy way. Various data and examples are used to highlight the efficiency and competitiveness of our approach.
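
    A toy version of the GA-seeding idea; the fitness function below is a stand-in with local minima, not a real reprojection error, and in the paper the winner would then initialize the bundle adjustment (e.g. via scipy.optimize.least_squares).

```python
import numpy as np

rng = np.random.default_rng(5)
f_true = 800.0

def reprojection_error(f):
    # Hypothetical error surface with local minima, minimized at f_true
    return (f - f_true) ** 2 / 1e4 + 5 * np.sin(f / 20.0) ** 2

pop = rng.uniform(300, 1500, size=40)            # initial population of focal lengths
for gen in range(30):
    fit = np.array([reprojection_error(f) for f in pop])
    parents = pop[np.argsort(fit)[:20]]          # selection: keep the best half
    kids = (parents + parents[rng.permutation(20)]) / 2.0   # crossover by averaging
    kids += rng.normal(0, 10 * 0.95 ** gen, 20)  # mutation, shrinking over time
    pop = np.concatenate([parents, kids])

f0 = pop[np.argmin([reprojection_error(f) for f in pop])]
# f0 then seeds the nonlinear refinement (bundle adjustment)
```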

  7. Region-of-interest reconstruction on medical C-arms with the ATRACT algorithm

    Science.gov (United States)

    Dennerlein, Frank; Maier, Andreas

    2012-03-01

    Between 2006 and 2008, the business volume of the top 20 orthopedic manufacturers increased by 30% to about 35 Billion. Similar growth rates could be observed in the market of neurological devices, which went up in 2009 by 10.9% to a volume of 2.2 Billion in the US and by 7.0% to 500 Million in Europe.* These remarkable increases are closely connected to the fact that nowadays many medical procedures, such as implantations in osteosynthesis or the placement of stents in neuroradiology, can be performed using minimally invasive approaches. Such approaches require elaborate intraoperative imaging technology. C-arm based tomographic X-ray region-of-interest (ROI) imaging can deliver suitable guidance in these circumstances: it can offer 3D information in the desired patient regions at reasonably low X-ray dose. Tomographic ROI reconstruction, however, is in general challenging since projection images might be severely truncated. Recently, a novel, truncation-robust algorithm (ATRACT) has been suggested for 3D C-arm ROI imaging. In this paper, we report for the first time on the performance of ATRACT for reconstruction from real angiographic C-arm data. Our results indicate that the resulting ROI image quality is suitable for intraoperative imaging. We observe only small differences from the images of a non-truncated acquisition, which would necessarily require significantly more X-ray dose.

  8. A Collaborative Neighbor Representation Based Face Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Zhengming Li

    2013-01-01

    Full Text Available We propose a new collaborative neighbor representation algorithm for face recognition based on a revised regularized reconstruction error (RRRE), called the two-phase collaborative neighbor representation algorithm (TCNR). Specifically, the RRRE divides the l2-norm of the reconstruction error of each class by a linear combination of the l2-norms of the reconstruction coefficients of each class, which increases the discriminative information available for classification. The algorithm proceeds as follows: in the first phase, the test sample is represented as a linear combination of all the training samples by incorporating neighbor information into the objective function. In the second phase, the k nearest classes are used to represent the test sample, and the collaborative neighbor representation coefficients are calculated. TCNR not only preserves the locality and similarity information of sparse coding but also eliminates the side effect on the classification decision of classes that are far from the test sample. Moreover, the rationale and an alternative scheme of TCNR are given. The experimental results show that the TCNR algorithm achieves better performance than seven previous algorithms.
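
    A stripped-down sketch of the two-phase scheme (ridge-regularized coding over all training samples, then re-coding over the k nearest classes and deciding by class-wise residual) is given below; the neighbor weighting and the RRRE scoring of the paper are omitted for brevity:

        import numpy as np

        def two_phase_classify(X, labels, y, k=5, lam=1e-2):
            """X: columns are training samples; labels: class ids; y: test sample."""
            # Phase 1: represent y over all training samples (ridge regression).
            a = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
            # Score classes by reconstruction error and keep the k nearest ones.
            classes = np.unique(labels)
            err = [np.linalg.norm(y - X[:, labels == c] @ a[labels == c]) for c in classes]
            near = classes[np.argsort(err)[:k]]
            keep = np.isin(labels, near)
            # Phase 2: re-code y using only the k nearest classes.
            Xk, lk = X[:, keep], labels[keep]
            b = np.linalg.solve(Xk.T @ Xk + lam * np.eye(Xk.shape[1]), Xk.T @ y)
            res = {c: np.linalg.norm(y - Xk[:, lk == c] @ b[lk == c]) for c in near}
            return min(res, key=res.get)    # class with the smallest residual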

  9. The anâtaxis phylogenetic reconstruction algorithm

    OpenAIRE

    Sonderegger, Bernhard Pascal

    2007-01-01

    Phylogenetics is a scientific discipline whose goal is to reconstruct the history of evolution from the characters of living organisms or from fossil data. Over the past few decades, the use of molecular characters in phylogenetics has become indispensable. Indeed, phylogenetics now plays an important role in the bioinformatic analysis of all kinds of molecular data. A large number of phylogenetic reconstruction techniques exist...

  10. The anâtaxis phylogenetic reconstruction algorithm

    OpenAIRE

    Sonderegger, Bernhard Pascal; Chopard, Bastien; Bittar, Gabriel

    2008-01-01

    Phylogenetics is a scientific discipline whose goal is to reconstruct the history of evolution from the characters of living organisms or from fossil data. Over the past few decades, the use of molecular characters in phylogenetics has become indispensable. Indeed, phylogenetics now plays an important role in the bioinformatic analysis of all kinds of molecular data. A large number of phylogenetic reconstruction techniques exist...

  11. Comparing five different iterative reconstruction algorithms for computed tomography in an ROC study

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Kristin; Martinsen, Anne Catrine T. [Rikshospitalet, The Intervention Centre, Postboks 4950, Oslo (Norway); University of Oslo, Institute of Physics, Oslo (Norway); Tingberg, Anders [Lund University, Skaane University Hospital, Department of Medical Radiation Physics, Malmoe (Sweden); Aaloekken, Trond Mogens [Rikshospitalet, Department of Radiology and Nuclear Medicine, Postboks 4950, Oslo (Norway); Fosse, Erik [Rikshospitalet, The Intervention Centre, Postboks 4950, Oslo (Norway); University of Oslo, Institute of Clinical Medicine, Oslo (Norway)

    2014-12-15

    The purpose of this study was to evaluate lesion conspicuity achieved with five different iterative reconstruction techniques from four CT vendors at three different dose levels. Comparisons were made between the iterative algorithms and filtered back projection (FBP), both among and within systems. An anthropomorphic liver phantom was examined with four CT systems, each from a different vendor. CTDI{sub vol} levels of 5 mGy, 10 mGy and 15 mGy were chosen. Images were reconstructed with FBP and the iterative algorithm available on each system. Images were interpreted independently by four observers, and the areas under the ROC curve (AUCs) were calculated. Noise and contrast-to-noise ratios (CNR) were measured. One iterative algorithm increased the AUC (0.79, 0.95, and 0.97) compared to FBP (0.70, 0.86, and 0.93) at all dose levels (p < 0.001 and p = 0.047). Another algorithm increased the AUC from 0.78 with FBP to 0.84 (p = 0.007) at 5 mGy. Differences at 10 and 15 mGy were not significant (p-values: 0.084-0.883). Three algorithms showed no difference in AUC compared to FBP (p-values: 0.008-1.000). All of the algorithms decreased noise (10-71 %) and improved the CNR. Only two algorithms improved lesion detection, even though noise reduction was shown with all algorithms. (orig.)

  12. Speech Enhancement based on Compressive Sensing Algorithm

    Science.gov (United States)

    Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel

    2013-12-01

    Various speech enhancement methods have been proposed over the years; accurate designs focus mainly on speech quality and intelligibility. We propose a novel speech enhancement method based on compressive sensing (CS), a new paradigm for acquiring signals that is fundamentally different from uniform-rate digitization followed by compression, often used for transmission or storage. CS reduces the number of degrees of freedom of a sparse/compressible signal by permitting only certain configurations of large and zero/small coefficients, and by structured sparsity models. CS therefore provides a way of reconstructing a compressed version of the speech in the original signal from only a small number of linear, non-adaptive measurements. The performance of the overall algorithm is evaluated on speech quality using informal listening tests and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well over a wide range of speech tests, giving good speech enhancement with better noise suppression than conventional approaches, without obvious degradation of speech quality.

  13. Lensless microscope based on iterative in-line holographic reconstruction

    Science.gov (United States)

    Wu, Jigang

    2014-11-01

    We propose a lensless microscopic imaging technique based on an iterative algorithm with a known constraint for image reconstruction in digital in-line holography. In our method, we introduce a constraint on the sample plane, used as the known part in the iterative algorithm, in order to eliminate the twin-image effect of holography and thus achieve better microscopic imaging performance. We evaluated our method by numerical simulation, built a prototype in-line holographic imaging system, and demonstrated its capability in preliminary experiments. In our proposed setup, a carefully designed photomask used to hold the sample is illuminated by a coherent light source. The in-line hologram is then recorded by a CMOS sensor. In the reconstruction, the known information of the illumination beam and the photomask is used as constraints in the iteration process. The improvement in image quality resulting from the suppression of twin images can be clearly seen by comparing the images obtained by direct holographic reconstruction and by our iterative method.
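
    A bare-bones version of such an iteration — angular-spectrum propagation alternated between a known constraint on the object plane and the measured amplitude at the sensor — might look as follows; the wavelength, pixel pitch, and mask here are placeholders:

        import numpy as np

        def angular_spectrum(u, z, wl=532e-9, dx=2e-6):
            """Propagate a complex field u by distance z (angular spectrum method)."""
            ny, nx = u.shape
            FX, FY = np.meshgrid(np.fft.fftfreq(nx, dx), np.fft.fftfreq(ny, dx))
            arg = np.maximum(1.0 / wl**2 - FX**2 - FY**2, 0.0)   # drop evanescent waves
            H = np.exp(2j * np.pi * z * np.sqrt(arg))
            return np.fft.ifft2(np.fft.fft2(u) * H)

        def reconstruct(hologram_amp, known_mask, z, n_iter=100):
            """Iteratively suppress the twin image using a known object-plane region."""
            field = hologram_amp.astype(complex)          # start from measured amplitude
            for _ in range(n_iter):
                obj = angular_spectrum(field, -z)          # back-propagate to object plane
                obj[known_mask] = 1.0                      # enforce known (clear) regions
                field = angular_spectrum(obj, z)           # forward to the sensor plane
                field = hologram_amp * np.exp(1j * np.angle(field))  # keep measured amplitude
            return angular_spectrum(field, -z)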

  14. A 3D Scene Graph Cut-Based Dense Reconstruction Algorithm

    Institute of Scientific and Technical Information of China (English)

    徐刚; 刘彬; 李海滨

    2011-01-01

    Stereo vision plays a very important role in the environment perception of autonomous exploration robots, through which a dense reconstruction of the 3D scene can be derived for navigation, localization and path planning. In this paper, a candidate matching algorithm combined with graph cut theory is proposed to solve the stereo correspondence problem. First, a mesh grid representing depth is constructed in the world coordinate system. Second, most of the candidates are pruned according to an area matching algorithm, and the remaining nodes are used to form a graph. Finally, global energy minimization is achieved by finding the minimum cut in the graph, and the 3D scene reconstruction is fulfilled. The experimental results show that a tradeoff between performance and efficiency is achieved when the correlation threshold γ is set to 0.6. The graph cut algorithm solves the matching ambiguity problem latent in candidate-point measurement, and better matching results are achieved in low-texture regions.

  15. AN SVAD ALGORITHM BASED ON FNNKD METHOD

    Institute of Scientific and Technical Information of China (English)

    Chen Dong; Zhang Yan; Kuang Jingming

    2002-01-01

    The capacity of mobile communication systems is improved by using Voice Activity Detection (VAD) technology. In this letter, a novel VAD algorithm, the SVAD algorithm based on the Fuzzy Neural Network Knowledge Discovery (FNNKD) method, is proposed. The performance of the SVAD algorithm is discussed and compared with the traditional algorithm recommended by ITU G.729B in different situations. The simulation results show that the SVAD algorithm performs better.

  16. Learning-Based Video Superresolution Reconstruction Using Spatiotemporal Nonlocal Similarity

    Directory of Open Access Journals (Sweden)

    Meiyu Liang

    2015-01-01

    Full Text Available Aiming to improve video visual resolution quality and detail clarity, a novel learning-based video superresolution reconstruction algorithm using spatiotemporal nonlocal similarity is proposed in this paper. Objective high-resolution (HR) estimations of low-resolution (LR) video frames can be obtained by learning the LR-HR correlation mapping and fusing spatiotemporal nonlocal similarities between video frames. With the objective of improving algorithm efficiency while guaranteeing superresolution quality, a novel visual saliency-based LR-HR correlation mapping strategy between LR and HR patches is proposed based on semicoupled dictionary learning. Moreover, aiming to improve the performance and efficiency of spatiotemporal similarity matching and fusion, an improved spatiotemporal nonlocal fuzzy registration scheme is established using a similarity weighting strategy based on pseudo-Zernike moment feature similarity and structural similarity, together with a self-adaptive regional correlation evaluation strategy. The proposed spatiotemporal fuzzy registration scheme does not rely on accurate estimation of subpixel motion; it can therefore adapt to complex motion patterns and is robust to noise and rotation. Experimental results demonstrate that the proposed algorithm achieves competitive superresolution quality compared to other state-of-the-art algorithms in terms of both subjective and objective evaluations.

  17. Reconstruction-plane-dependent weighted FDK algorithm for cone beam volumetric CT

    Science.gov (United States)

    Tang, Xiangyang; Hsieh, Jiang

    2005-04-01

    The original FDK algorithm has been extensively employed in medical and industrial imaging applications. With an increased cone angle, cone beam (CB) artifacts in images reconstructed by the original FDK algorithm deteriorate, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few "circular plus" trajectories have been proposed in the past to reduce CB artifacts by meeting the DSC. However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as cardiac, vascular and perfusion applications. In addition to looking into the DSC, another insight into the CB artifacts of the original FDK algorithm is the inconsistency between conjugate rays that are 180° apart in view angle. The inconsistency between conjugate rays is pixel dependent, i.e., it varies dramatically over pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artifacts that can be avoided if an appropriate view weighting strategy is exercised. In this paper, a modified FDK algorithm is proposed, along with an experimental evaluation and verification, in which the helical body phantom and a humanoid head phantom scanned by a volumetric CT (64 x 0.625 mm) are utilized. Without extra trajectories supplemental to the circular trajectory, the modified FDK algorithm applies reconstruction-plane-dependent view weighting on projection data before 3D backprojection, which reduces the inconsistency between conjugate rays by suppressing the contribution of the conjugate ray with the larger cone angle. Both computer-simulated and real phantom studies show that, up to a moderate cone angle, the CB artifacts can be substantially suppressed by the modified FDK algorithm, while advantages of the original FDK algorithm, such as the filtered backprojection algorithm structure, 1D ramp filtering, and data manipulation efficiency, can be preserved.

  18. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    Science.gov (United States)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve reconstruction precision and to better reproduce the surface colors of spectral images. A new spectral reflectance reconstruction algorithm based on iterative thresholding combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response values are obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space outperforms that based on the traditional principal component space. The color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is therefore less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human vision is achieved.
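
    The core linear-algebra step — recovering a reflectance spectrum from a handful of channel responses through a weighted principal-component basis — can be sketched as below. The weights, training data, and sensitivity matrix are all illustrative, and the paper's iterative-thresholding stage is not reproduced:

        import numpy as np

        rng = np.random.default_rng(1)
        R_train = rng.random((200, 31))        # training reflectances (31 spectral bands)
        S = rng.random((8, 31))                # 8-channel camera sensitivities (assumed known)

        # Build a weighted principal-component basis from the training set.
        w = np.linspace(1.0, 2.0, 31)          # hypothetical visual-feature weights
        mu = R_train.mean(axis=0)
        U, _, _ = np.linalg.svd(((R_train - mu) * w).T, full_matrices=False)
        B = (U[:, :6].T / w).T                 # first 6 basis vectors, weighting undone

        # Reconstruct: channel response c = S r, with the model r ≈ mu + B a.
        r_true = R_train[0]
        c = S @ r_true
        a, *_ = np.linalg.lstsq(S @ B, c - S @ mu, rcond=None)
        r_hat = mu + B @ a                     # reconstructed reflectance spectrum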

  19. Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS

    Science.gov (United States)

    Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.

    Extremely Large Telescopes are very challenging concerning their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N^2) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. Their performance, as well as their response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.

  20. Algorithmic approach to lower abdominal, perineal, and groin reconstruction using anterolateral thigh flaps.

    Science.gov (United States)

    Zelken, Jonathan A; AlDeek, Nidal F; Hsu, Chung-Chen; Chang, Nai-Jen; Lin, Chih-Hung; Lin, Cheng-Hung

    2016-02-01

    Lower abdominal, perineal, and groin (LAPG) reconstruction may be performed in a single stage. Anterolateral thigh (ALT) flaps are preferred here and taken as fasciocutaneous (ALT-FC), myocutaneous (ALT-MC), or vastus lateralis myocutaneous (VL-MC) flaps. We aim to present the results of reconstruction from a series of patients and to guide flap selection with an algorithmic approach to LAPG reconstruction that optimizes outcomes and minimizes morbidity. Lower abdomen, groin, perineum, vulva, vagina, scrotum, and bladder wounds reconstructed in 22 patients using ALT flaps between 2000 and 2013 were retrospectively studied. Five ALT-FC, eight ALT-MC, and nine VL-MC flaps were performed. All flaps survived. Venous congestion occurred in three VL-MC flaps from a mechanical cause. Wound infection occurred in six cases. Urinary leak occurred in three cases of bladder reconstruction. One patient died from congestive heart failure. The ALT flap is time-tested and dependably addresses most LAPG defects; flap variations are suited to niche defects. We propose a novel algorithm to guide reconstructive decision-making.

  1. Optical correlation algorithm for reconstructing phase skeleton of complex optical fields for solving the phase problem

    DEFF Research Database (Denmark)

    Angelsky, O. V.; Gorsky, M. P.; Hanson, Steen Grüner;

    2014-01-01

    We propose an optical correlation algorithm illustrating a new general method for reconstructing the phase skeleton of complex optical fields from the measured two-dimensional intensity distribution. The core of the algorithm consists in locating the saddle points of the intensity distribution and connecting such points into nets by the lines of intensity gradient that are closely associated with the equi-phase lines of the field. This algorithm provides a new partial solution to the inverse problem in optics commonly referred to as the phase problem.

  2. Cine cone beam CT reconstruction using low-rank matrix factorization: algorithm and a proof-of-principle study

    CERN Document Server

    Cai, Jian-Feng; Gao, Hao; Jiang, Steve B; Shen, Zuowei; Zhao, Hongkai

    2012-01-01

    Respiration-correlated CBCT, commonly called 4DCBCT, provides respiratory phase-resolved CBCT images. In many clinical applications, it is preferable to reconstruct true 4DCBCT with the 4th dimension being time, i.e., each CBCT image is reconstructed based on the corresponding instantaneous projection. In this work we propose a novel algorithm for the reconstruction of this truly time-resolved CBCT, called cine-CBCT, which effectively utilizes the underlying temporal coherence, such as periodicity or repetition, in the cine-CBCT images. Assuming each column of the matrix $\bm{U}$ represents a CBCT image to be reconstructed and the total number of columns is the same as the number of projections, the central idea of our algorithm is that the rank of $\bm{U}$ is much smaller than the number of projections, so we can use a matrix factorization form $\bm{U}=\bm{L}\bm{R}$ for $\bm{U}$. The number of columns of the matrix $\bm{L}$ constrains the rank of $\bm{U}$, hence implicitly imposing temporal coherence...
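
    In miniature, the factorization idea amounts to solving min over $\bm{L}$, $\bm{R}$ of the data mismatch of $\bm{L}\bm{R}$ with a small inner dimension. A toy alternating-gradient sketch, with a single random matrix standing in for the per-frame projection operators (in the real problem each frame has its own view angle):

        import numpy as np

        rng = np.random.default_rng(0)
        n_pix, n_frames, rank = 400, 60, 3
        A = rng.standard_normal((50, n_pix)) / 50          # toy projection operator
        U_true = rng.standard_normal((n_pix, rank)) @ rng.standard_normal((rank, n_frames))
        g = A @ U_true                                     # one projection per frame

        L = 0.1 * rng.standard_normal((n_pix, rank))
        R = 0.1 * rng.standard_normal((rank, n_frames))
        step = 0.02                                        # hand-tuned for this toy problem
        for _ in range(5000):
            resid = A @ (L @ R) - g                        # data mismatch, all frames at once
            L -= step * (A.T @ resid) @ R.T                # gradient step w.r.t. L
            R -= step * L.T @ (A.T @ resid)                # gradient step w.r.t. R
        U_rec = L @ R                                      # columns: time-resolved images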

  3. Particle Reconstruction at the LHC using Jet Substructure Algorithms

    CERN Document Server

    Rathjens, Denis

    2012-01-01

    The LHC-era with √s = 7 TeV allows for a new energy-regime to be accessed. Heavy mass-resonances up to 3.5 TeV/c² are in reach. If such heavy particles decay hadronically to much lighter Standard Model particles such as top, Z or W, the jets of the decay products have a sizeable probability to be merged into a single jet. The result is a boosted jet with substructure. This diploma thesis deals with the phenomena of boosted jets, algorithms to distinguish substructure in these jets from normal hadronization and methods to further improve searches with boosted jets. The impact of such methods is demonstrated in an example analysis of a Z′ → tt̄ scenario on 2 fb⁻¹ of data.

  4. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    Science.gov (United States)

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two

  5. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu [Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri 63130 (United States); Yang, Deshan [Department of Radiation Oncology, School of Medicine, Washington University in St. Louis, St. Louis, Missouri 63110 (United States); Tan, Jun [Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States)

    2016-04-15

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
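
    The FISTA core that both of these records build on fits in a few lines: a gradient step on the data term, a proximal step on the regularizer, and Nesterov-style momentum. A generic l1-regularized least-squares sketch (the OS-SART subproblem, weighted proximal steps, and GPU details of the paper are omitted):

        import numpy as np

        def fista(A, b, lam, n_iter=200):
            """Minimize 0.5 * ||Ax - b||^2 + lam * ||x||_1 with FISTA."""
            Lip = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            z, t = x.copy(), 1.0
            for _ in range(n_iter):
                grad = A.T @ (A @ z - b)
                x_new = z - grad / Lip               # gradient step on the data term
                x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - lam / Lip, 0.0)  # prox
                t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
                z = x_new + (t - 1) / t_new * (x_new - x)    # momentum extrapolation
                x, t = x_new, t_new
            return x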

  6. N3DFix: an Algorithm for Automatic Removal of Swelling Artifacts in Neuronal Reconstructions.

    Science.gov (United States)

    Conde-Sousa, Eduardo; Szücs, Peter; Peng, Hanchuan; Aguiar, Paulo

    2017-01-01

    It is well established that not only electrophysiology but also morphology plays an important role in shaping the functional properties of neurons. In order to properly quantify morphological features it is first necessary to translate observational histological data into 3-dimensional geometric reconstructions of the neuronal structures. This reconstruction process, whether manual or (semi-)automatic, requires several preparation steps (e.g. histological processing) before data acquisition using specialized software. Unfortunately these processing steps are likely to produce artifacts which are then carried into the reconstruction, such as tissue shrinkage and the formation of swellings. If not accounted for and corrected, these artifacts can significantly change the results of morphometric analyses and computer simulations. Here we present N3DFix, an open-source software package which uses a correction algorithm to automatically find and fix swelling artifacts in neuronal reconstructions. N3DFix works as a post-processing tool and can therefore be used in either manual or (semi-)automatic reconstructions. The algorithm's internal parameters have been defined using a "ground truth" dataset produced by a neuroanatomist, involving two complementary manual reconstruction procedures: in the first, the neuronal topology was faithfully reconstructed, including all swelling artifacts; in the second, a meticulous correction of the artifacts was performed manually during neuronal tracing. The internal parameters of N3DFix were set to minimize the differences between the manual amendments and the algorithm's corrections. It is shown that the performance of N3DFix is comparable to careful manual correction of the swelling artifacts. To promote easy access and wide adoption, N3DFix is available in NEURON, Vaa3D and Py3DN.

  7. A New Page Ranking Algorithm Based On WPRVOL Algorithm

    Directory of Open Access Journals (Sweden)

    Roja Javadian Kootenae

    2013-03-01

    Full Text Available The amount of information on the web is always growing, so powerful search tools are needed to search such a large collection. Search engines help users find their desired information among this massive volume of information more easily. What matters in a search engine, and what distinguishes one from another, is the page ranking algorithm it uses. In this paper a new page ranking algorithm based on the Weighted Page Ranking based on Visits of Links (WPRVOL) algorithm for search engines is proposed, called WPR'VOL for short. The proposed algorithm considers the number of visits of first- and second-level in-links. The original WPRVOL algorithm takes into account the number of visits of the first-level in-links of a page and distributes rank scores based on the popularity of the pages, whereas the proposed algorithm considers both the in-links of the page itself (first-level in-links) and the in-links of the pages that point to it (second-level in-links) when calculating the rank of the page, so that more relevant pages are displayed at the top of the search result list. In summary, the proposed algorithm assigns a higher rank to pages for which both the page itself and the pages that point to it are important.

  8. A tailored ML-EM algorithm for reconstruction of truncated projection data using few view angles

    Science.gov (United States)

    Mao, Yanfei; Zeng, Gengsheng L.

    2013-06-01

    Dedicated cardiac single photon emission computed tomography (SPECT) systems have the advantage of high speed and sensitivity at no loss, or even a gain, in resolution. The potential drawbacks of these dedicated systems are data truncation by the small field of view (FOV) and the lack of view angles. Serious artifacts, including streaks outside the FOV and distortion in the FOV, are introduced to the reconstruction when using the traditional emission data maximum-likelihood expectation-maximization (ML-EM) algorithm to reconstruct images from the truncated data with a small number of views. In this note, we propose a tailored ML-EM algorithm to suppress the artifacts caused by data truncation and insufficient angular sampling by reducing the image updating step sizes for the pixels outside the FOV. As a consequence, the convergence speed for the pixels outside the FOV is decelerated. We applied the proposed algorithm to truncated analytical data, Monte Carlo simulation data and real emission data with different numbers of views. The computer simulation results show that the tailored ML-EM algorithm outperforms the conventional ML-EM algorithm in terms of streak artifacts and distortion suppression for reconstruction from truncated projection data with a small number of views.
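
    One plausible reading of "reducing the image updating step sizes for the pixels outside the FOV" is to damp the multiplicative EM correction there, e.g. by raising it to a power smaller than one. The damping rule below is our illustration, not necessarily the authors' exact form:

        import numpy as np

        def tailored_mlem(A, y, fov_mask, n_iter=50, damp=0.2):
            """ML-EM with damped multiplicative updates for pixels outside the FOV."""
            x = np.ones(A.shape[1])
            sens = np.maximum(A.sum(axis=0), 1e-12)          # per-pixel sensitivity
            expo = np.where(fov_mask, 1.0, damp)             # smaller exponent -> smaller step
            for _ in range(n_iter):
                ratio = y / np.maximum(A @ x, 1e-12)
                x *= ((A.T @ ratio) / sens) ** expo          # full EM step inside the FOV only
            return x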

  9. Implementation and evaluation of two helical CT reconstruction algorithms in CIVA

    Science.gov (United States)

    Banjak, H.; Costin, M.; Vienne, C.; Kaftandjian, V.

    2016-02-01

    The large majority of industrial CT systems reconstruct the 3D volume by using an acquisition on a circular trajectory. However, when inspecting long objects which are highly anisotropic, this scanning geometry creates severe artifacts in the reconstruction. For this reason, the use of an advanced CT scanning method like helical data acquisition is an efficient way to address this aspect, known as the long-object problem. Recently, several analytically exact and quasi-exact inversion formulas for helical cone-beam reconstruction have been proposed. Among them, we identified two algorithms of interest for our case. These algorithms are exact and of filtered back-projection structure. In this work we implemented the filtered-backprojection (FBP) and backprojection-filtration (BPF) algorithms of Zou and Pan (2004). For performance evaluation, we present a numerical comparison of the two selected algorithms with the helical FDK algorithm using both complete (noiseless and noisy) and truncated data generated by CIVA (the simulation platform for non-destructive testing techniques developed at CEA).

  10. DNA Coding Based Knowledge Discovery Algorithm

    Institute of Scientific and Technical Information of China (English)

    LI Ji-yun; GENG Zhao-feng; SHAO Shi-huang

    2002-01-01

    A novel DNA coding based knowledge discovery algorithm was proposed, and an example verifying its validity was given. It is proved that this algorithm can efficiently discover new simplified rules from the original rule set.

  11. Comparative study of iterative reconstruction algorithms for missing cone problems in optical diffraction tomography.

    Science.gov (United States)

    Lim, JooWon; Lee, KyeoReh; Jin, Kyong Hwan; Shin, Seungwoo; Lee, SeoEun; Park, YongKeun; Ye, Jong Chul

    2015-06-29

    In optical tomography, there exist certain spatial frequency components that cannot be measured due to the limited projection angles imposed by the numerical aperture of objective lenses. This limitation, often called the missing cone problem, causes the underestimation of refractive index (RI) values in tomograms and results in severe elongations of RI distributions along the optical axis. To address this missing cone problem, several iterative reconstruction algorithms have been introduced that exploit prior knowledge such as positivity of RI differences or the edges of samples. In this paper, various existing iterative reconstruction algorithms are systematically compared for mitigating the missing cone problem in optical diffraction tomography. In particular, three representative regularization schemes, edge-preserving regularization, total variation regularization, and the Gerchberg-Papoulis algorithm, were numerically and experimentally evaluated using spherical beads as well as real biological samples: human red blood cells and hepatocyte cells. Our work will provide important guidelines for choosing the appropriate regularization in ODT.
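
    Of the three schemes, the Gerchberg-Papoulis iteration is the simplest to write down: alternate between enforcing an object-domain constraint (here non-negativity) and re-inserting the measured Fourier components inside the accessible region. A 2D toy sketch, with an arbitrary placeholder for the missing-cone geometry:

        import numpy as np

        def gerchberg_papoulis(F_meas, cone_mask, n_iter=200):
            """F_meas: measured Fourier data (valid where cone_mask is True)."""
            f = np.real(np.fft.ifft2(F_meas))
            for _ in range(n_iter):
                f = np.maximum(f, 0.0)                   # object domain: non-negativity
                F = np.fft.fft2(f)
                F[cone_mask] = F_meas[cone_mask]         # data consistency inside the cone
                f = np.real(np.fft.ifft2(F))
            return f

        # toy usage: a disc object with a wedge of frequencies missing
        n = 128
        yy, xx = np.mgrid[:n, :n] - n // 2
        obj = (xx**2 + yy**2 < 20**2).astype(float)
        kx = np.fft.fftfreq(n)[None, :]
        ky = np.fft.fftfreq(n)[:, None]
        cone_mask = np.abs(ky) <= 2.0 * np.abs(kx)       # placeholder cone geometry
        rec = gerchberg_papoulis(np.fft.fft2(obj) * cone_mask, cone_mask, 300)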

  12. Application of incremental algorithms to CT image reconstruction for sparse-view, noisy data

    DEFF Research Database (Denmark)

    Rose, Sean; Andersen, Martin Skovgaard; Sidky, Emil Y.;

    2014-01-01

    This conference contribution adapts an incremental framework for solving optimization problems of interest for sparse-view CT. From the incremental framework two algorithms are derived: one that combines a damped form of the algebraic reconstruction technique (ART) with a total-variation (TV) projection, and one that employs a modified damped ART, accounting for a weighted-quadratic data fidelity term, combined with TV projection. The algorithms are demonstrated on simulated, noisy, sparse-view CT data.
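
    A compact way to picture the first variant: one damped ART sweep through the data, followed by a TV-reducing step. In the sketch below, skimage's Chambolle TV denoiser stands in for the TV projection of the paper:

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        def art_tv(A, b, shape, n_iter=20, relax=0.2, tv_weight=0.05):
            """Damped ART sweeps alternated with a TV-reducing step."""
            x = np.zeros(A.shape[1])
            row_norms = np.maximum((A ** 2).sum(axis=1), 1e-12)
            for _ in range(n_iter):
                for i in range(A.shape[0]):              # one damped ART pass over the data
                    r = b[i] - A[i] @ x
                    x += relax * (r / row_norms[i]) * A[i]
                x = denoise_tv_chambolle(x.reshape(shape), weight=tv_weight).ravel()
            return x.reshape(shape)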

  13. Cone-beam local reconstruction based on a Radon inversion transformation

    Institute of Scientific and Technical Information of China (English)

    Wang Xian-Chao; Yan Bin; Li Lei; Hu Guo-En

    2012-01-01

    Local reconstruction from truncated projection data is one area of interest in image reconstruction for computed tomography (CT), as it creates the possibility for dose reduction. In this paper, a filtered-backprojection (FBP) algorithm based on the Radon inversion transform is presented to deal with three-dimensional (3D) local reconstruction in the circular geometry. The algorithm performs the data filtering in two steps. The first step is the differentiation of the projections, which acts locally on the data and can thus be carried out accurately even in the presence of data truncation. The second step is the nonlocal Hilbert filtering. Numerical simulations and real-data reconstructions have been conducted to validate the new reconstruction algorithm. Compared with the approximate truncation resistant algorithm for computed tomography (ATRACT), it not only has a comparable ability to suppress truncation artifacts but also improves reconstruction efficiency, running about twice as fast as ATRACT. Therefore, this work provides a simple and efficient approach for approximate reconstruction from truncated projections in circular cone-beam CT.

  14. Exact and efficient cone-beam reconstruction algorithm for a short-scan circle combined with various lines

    Science.gov (United States)

    Dennerlein, Frank; Katsevich, Alexander; Lauritsch, Guenter; Hornegger, Joachim

    2005-04-01

    X-ray 3D rotational angiography based on C-arm systems has become a versatile and established tomographic imaging modality for high contrast objects in interventional environment. Improvements in data acquisition, e.g. by use of flat panel detectors, will enable C-arm systems to resolve even low-contrast details. However, further progress will be limited by the incompleteness of data acquisition on the conventional short-scan circular source trajectories. Cone artifacts, which result from that incompleteness, significantly degrade image quality by severe smearing and shading. To assure data completeness a combination of a partial circle with one or several line segments is investigated. A new and efficient reconstruction algorithm is deduced from a general inversion formula based on 3D Radon theory. The method is theoretically exact, possesses shift-invariant filtered backprojection (FBP) structure, and solves the long object problem. The algorithm is flexible in dealing with various circle and line configurations. The reconstruction method requires nothing more than the theoretically minimum length of scan trajectory. It consists of a conventional short-scan circle and a line segment approximately twice as long as the height of the region-of-interest. Geometrical deviations from the ideal source trajectory are considered in the implementation in order to handle data of real C-arm systems. Reconstruction results show excellent image quality free of cone artifacts. The proposed scan trajectory and reconstruction algorithm assure excellent image quality and allow low-contrast tomographic imaging with C-arm based cone-beam systems. The method can be implemented without any hardware modifications on systems commercially available today.

  15. Influence of radiation dose and reconstruction algorithm in MDCT assessment of airway wall thickness: A phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Cardona, Daniel [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Nagle, Scott K. [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Department of Pediatrics, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Li, Ke; Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison School of Medicine and Public Health, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792 (United States); Robinson, Terry E. [Department of Pediatrics, Stanford School of Medicine, 770 Welch Road, Palo Alto, California 94304 (United States)

    2015-10-15

    Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis—particularly in younger patients—might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR in the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at 4 kV levels and 5 mAs levels. Both FBP and a commercial implementation of MBIR (Veo{sup TM}, GE Healthcare) were used to reconstruct CT images of the airways. For each kV–mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, and the WT of each airway was measured and compared with the nominal value; the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all of the 20 kV–mAs combinations was quantified by the sum of squares (SSQs) of the difference between the measured and nominal WT values. Finally, the particular kV–mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level was chosen as the optimal acquisition and reconstruction settings. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose
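
    The figures of merit used here are simple ROI statistics; for instance (the ROI selection and the nominal wall thickness being study-specific):

        import numpy as np

        def cnr(wall_roi, background_roi):
            """Contrast-to-noise ratio between airway-wall and background ROIs."""
            return abs(wall_roi.mean() - background_roi.mean()) / background_roi.std()

        def wt_stats(wt_measured, wt_nominal):
            """Relative bias and (angular) standard deviation of wall-thickness samples."""
            return (np.mean(wt_measured) - wt_nominal) / wt_nominal, np.std(wt_measured)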

  16. Distance Concentration-Based Artificial Immune Algorithm

    Institute of Scientific and Technical Information of China (English)

    LIU Tao; WANG Yao-cai; WANG Zhi-jie; MENG Jiang

    2005-01-01

    The diversity, adaptation and memory of the biological immune system attract much attention from researchers. Several optimization algorithms based on the immune system have been proposed to date. In this paper, the distance concentration-based artificial immune algorithm (DCAIA) is proposed to overcome defects of the classical artificial immune algorithm (CAIA). Compared with the genetic algorithm (GA) and CAIA, DCAIA alleviates premature convergence, maintains antibody diversity, and improves the convergence rate.

  17. Investigation of optimization-based reconstruction with an image-total-variation constraint in PET

    Science.gov (United States)

    Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan

    2016-08-01

    Interest remains in reconstruction-algorithm research and development for possible improvement of image quality in current PET imaging and for enabling innovative PET systems to enhance existing, and facilitate new, preclinical and clinical applications. Optimization-based image reconstruction has been demonstrated in recent years to be of potential utility for CT imaging applications. In this work, we investigate tailoring the optimization-based techniques to image reconstruction for PET systems with standard and non-standard scan configurations. Specifically, given an image-total-variation (TV) constraint, we investigated how the selection of different data divergences and associated parameters impacts the optimization-based reconstruction of PET images. The reconstruction robustness was also explored with respect to different data conditions and activity uptakes of practical relevance. A study was conducted particularly for image reconstruction from data collected by use of a PET configuration with sparsely populated detectors. Overall, the study demonstrates the robustness of the TV-constrained, optimization-based reconstruction for considerably different data conditions in PET imaging, as well as its potential to enable PET configurations with reduced numbers of detectors. Insights gained in the study may be exploited for developing algorithms for PET-image reconstruction and for enabling PET-configuration design of practical usefulness in preclinical and clinical applications.

  18. Undersampled Hyperspectral Image Reconstruction Based on Surfacelet Transform

    Directory of Open Access Journals (Sweden)

    Lei Liu

    2015-01-01

    Full Text Available Hyperspectral imaging is a crucial technique for military and environmental monitoring. However, limited hardware resources severely affect the transmission and storage of the huge amount of data in hyperspectral images. This limitation has the potential to be solved by compressive sensing (CS), which allows reconstructing images from undersampled measurements with low error. Sparsity and incoherence are two essential requirements for CS. In this paper, we introduce the surfacelet, a directional multiresolution transform for 3D data, to sparsify hyperspectral images. In addition, Gram-Schmidt orthogonalization is used in the CS random encoding matrix; two-dimensional and three-dimensional orthogonal CS random encoding matrices and a patch-based CS encoding scheme are designed. The proposed surfacelet-based hyperspectral image reconstruction problem is solved by a fast iterative shrinkage-thresholding algorithm. Experiments demonstrate that reconstruction of spectral lines and spatial images is significantly better using the proposed method than using conventional three-dimensional wavelets, and that increasing the randomness of the encoding matrix can further improve the quality of the hyperspectral data. The patch-based CS encoding strategy can be used to deal with large data volumes, because the data in different patches can be independently sampled.

  19. Color hologram reconstruction based on single DMD

    Science.gov (United States)

    Xing, Jiang; Zhou, Hao; Wu, Dan; Hou, Jun-jian; Gu, Ji-hua

    2016-09-01

    Because of the magnification chromatic aberration and the transverse chromatic aberration caused by the different wavelengths of the color lasers in color holographic optoelectronic reconstruction based on a DMD, the reconstructed holograms of the three color components cannot coincide. First, with the blue color component as the reference, the magnification chromatic aberration of the original image is eliminated. Second, based on an analysis of the incident angles of the three lasers, the transverse chromatic aberration is eliminated by adjusting the incident angles. Finally, the synthesized color hologram is obtained by means of experiments based on the DMD. The method proposed in this paper does not use any lens, so there is no axial chromatic aberration.

  20. Three-dimension reconstruction based on spatial light modulator

    Science.gov (United States)

    Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu

    2011-02-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacture, construction, aerospace and biology. Via such technology we can obtain a three-dimensional digital point cloud from two-dimensional images and then simulate the three-dimensional structure of the physical object for further study. At present, the acquisition of three-dimensional digital point cloud data is mainly based on adaptive optics systems with a Shack-Hartmann sensor and on phase-shifting digital holography. For surface fitting, many methods are available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems, we first calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, yielding the expected 3D point cloud. Second, after de-noising and repair, the feature points can be selected and fitted to obtain the fitting function of the surface topography by means of Zernike polynomials, so as to reconstruct the three-dimensional topography of the object under test. In this paper, a new kind of three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from its grayscale at different sample points. Moreover, simulations and experimental results prove that the new algorithm has a strong fitting capability, especially for large-scale objects.

  1. Three-dimension reconstruction based on spatial light modulator

    Energy Technology Data Exchange (ETDEWEB)

    Deng Xuejiao; Zhang Nanyang; Zeng Yanan; Yin Shiliang; Wang Weiyu, E-mail: daisydelring@yahoo.com.cn [Huazhong University of Science and Technology (China)

    2011-02-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacture, construction, aerospace and biology. Via such technology we can obtain a three-dimensional digital point cloud from two-dimensional images and then simulate the three-dimensional structure of the physical object for further study. At present, the acquisition of three-dimensional digital point cloud data is mainly based on adaptive optics systems with a Shack-Hartmann sensor and on phase-shifting digital holography. For surface fitting, many methods are available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems, we first calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, yielding the expected 3D point cloud. Second, after de-noising and repair, the feature points can be selected and fitted to obtain the fitting function of the surface topography by means of Zernike polynomials, so as to reconstruct the three-dimensional topography of the object under test. In this paper, a new kind of three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from its grayscale at different sample points. Moreover, simulations and experimental results prove that the new algorithm has a strong fitting capability, especially for large-scale objects.

  2. Research on object-plane constraints and hologram expansion in phase retrieval algorithms for continuous-wave terahertz inline digital holography reconstruction.

    Science.gov (United States)

    Hu, Jiaqi; Li, Qi; Cui, Shanshan

    2014-10-20

    In terahertz inline digital holography, zero-order diffraction light and the conjugate image can cause the reconstructed image to be blurred. In this paper, three phase retrieval algorithms are applied to conduct reconstruction under the same near-field diffraction propagation conditions and image-plane constraints. The impact of different object-plane constraints on CW terahertz inline digital holographic reconstruction is studied. The results show that it is not appropriate for the phase retrieval algorithm to impose restrictions on the phase when the object is not isolated, as in transmission-type CW terahertz inline digital holography. In addition, the effects of zero-padding expansion, boundary-replication expansion, and apodization on the reconstructed images are studied. The results indicate that the conjugate image can be eliminated, and a better reconstructed image obtained, by adopting an appropriate phase retrieval algorithm after extending the normalized hologram by boundary replication to the minimum area that satisfies the applicability conditions of the angular spectrum reconstruction algorithm.

  3. Sparse Signal Reconstruction Based on Multiparameter Approximation Function with Smoothed l0 Norm

    Directory of Open Access Journals (Sweden)

    Xiao-Feng Fang

    2014-01-01

    Full Text Available The smoothed l0 norm algorithm is a compressive sensing reconstruction algorithm based on an approximate smoothed l0 norm. It introduces a sequence of smoothed functions to approximate the l0 norm and approaches the solution through a specific iteration process with the steepest-descent method. In order to choose an appropriate sequence of smoothed functions and solve the optimization problem effectively, we employ an approximate hyperbolic tangent multiparameter function to approximate the steep behavior of the l0 norm. In addition, we propose an algorithm based on minimizing a reweighted approximate l0 norm in the null space of the measurement matrix. The unconstrained optimization involved is performed by a modified quasi-Newton algorithm. Numerical simulation results show that the proposed algorithms yield improved signal reconstruction quality and performance.
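
    For reference, the basic smoothed-l0 template that the paper modifies looks like the following, with the usual Gaussian surrogate; the hyperbolic-tangent multiparameter function of the paper would replace the gradient expression:

        import numpy as np

        def sl0(A, y, sigma_min=1e-3, sigma_decrease=0.7, mu=2.0, inner=3):
            """Smoothed-l0 recovery of sparse s from y = A s (A underdetermined)."""
            A_pinv = np.linalg.pinv(A)
            s = A_pinv @ y                         # minimum-l2 feasible starting point
            sigma = 2.0 * np.max(np.abs(s))
            while sigma > sigma_min:
                for _ in range(inner):
                    delta = s * np.exp(-s**2 / (2 * sigma**2))  # grad of Gaussian surrogate
                    s = s - mu * delta                           # steepest-descent step
                    s = s - A_pinv @ (A @ s - y)   # project back onto {s : A s = y}
                sigma *= sigma_decrease            # gradually sharpen the approximation
            return s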

  4. Optimization based automated curation of metabolic reconstructions

    Directory of Open Access Journals (Sweden)

    Maranas Costas D

    2007-06-01

    Full Text Available Abstract Background Currently, there exist tens of different microbial and eukaryotic metabolic reconstructions (e.g., Escherichia coli, Saccharomyces cerevisiae, Bacillus subtilis) with many more under development. All of these reconstructions are inherently incomplete, with some functionalities missing due to the lack of experimental and/or homology information. A key challenge in the automated generation of genome-scale reconstructions is the elucidation of these gaps and the subsequent generation of hypotheses to bridge them. Results In this work, an optimization-based procedure is proposed to identify and eliminate network gaps in these reconstructions. First we identify the metabolites in the metabolic network reconstruction that cannot be produced under any uptake conditions, and subsequently we identify the reactions from a customized multi-organism database that restore the connectivity of these metabolites to the parent network. This connectivity restoration is hypothesized to take place through four mechanisms: (a) reversing the directionality of one or more reactions in the existing model, (b) adding reactions from another organism to provide functionality absent in the existing model, (c) adding external transport mechanisms to allow for importation of metabolites in the existing model, and (d) restoring flow by adding intracellular transport reactions in multi-compartment models. We demonstrate this procedure for the genome-scale reconstruction of Escherichia coli and also Saccharomyces cerevisiae, wherein compartmentalization of intracellular reactions results in a more complex topology of the metabolic network. We determine that about 10% of metabolites in E. coli and 30% of metabolites in S. cerevisiae cannot carry any flux. Interestingly, the dominant flow-restoration mechanism is directionality reversal of existing reactions in the respective models. Conclusion We have proposed systematic methods to identify and

  5. An enhanced reconstruction algorithm to extend CT scan field-of-view with z-axis consistency constraint.

    Science.gov (United States)

    Li, Baojun; Deng, Junjun; Lonn, Albert H; Hsieh, Jiang

    2012-10-01

    To further improve the image quality, and in particular to suppress the boundary artifacts, in extended scan field-of-view (SFOV) reconstruction. To combat projection truncation artifacts and to restore truncated objects outside the SFOV, an algorithm has previously been proposed based on fitting a partial water cylinder at the site of the truncation. Previous studies have shown that this algorithm can simultaneously eliminate the truncation artifacts inside the SFOV and preserve the total amount of attenuation, owing to its emphasis on consistency conditions of the total attenuation in the parallel sampling geometry. Unfortunately, the water-cylinder fitting parameters of this 2D algorithm are sensitive to noise fluctuations in the projection samples from image to image, causing anatomy boundary artifacts, especially during helical scans with higher pitch (≥1.0). To suppress the boundary artifacts and further improve the image quality, the authors propose to use a roughness penalty function, based on the Huber regularization function, to reinforce boundary consistency along the z dimension. Extensive phantom and clinical tests have been conducted to test the accuracy and robustness of the enhanced algorithm. Significant reduction in the boundary artifacts is observed in both phantom and clinical cases with the enhanced algorithm. The proposed algorithm also reduces the percent difference error between the horizontal and vertical diameters to well below 1%. It is also noticeable that the algorithm improves CT number uniformity outside the SFOV compared to the original algorithm. The proposed algorithm is capable of suppressing boundary artifacts and improving the CT number uniformity outside the SFOV.
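
    The z-consistency idea can be pictured as smoothing the per-slice water-cylinder fitting parameters with a Huber roughness penalty, which tolerates genuine anatomical steps while suppressing noise-driven jumps. A hypothetical sketch, not the authors' exact formulation:

        import numpy as np

        def huber_grad(d, delta):
            """Derivative of the Huber function: quadratic near zero, linear beyond delta."""
            return np.where(np.abs(d) <= delta, d, delta * np.sign(d))

        def smooth_z(params, beta=1.0, delta=0.5, n_iter=200, step=0.1):
            """Huber-penalized smoothing of a per-slice fitting-parameter profile along z."""
            p = params.astype(float).copy()
            for _ in range(n_iter):
                diff = np.diff(p)                        # neighbor differences along z
                g = np.zeros_like(p)
                g[:-1] -= huber_grad(diff, delta)        # roughness-penalty gradient
                g[1:] += huber_grad(diff, delta)
                p -= step * ((p - params) + beta * g)    # data term keeps p near the raw fits
            return p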

  6. Optimization of CT image reconstruction algorithms for the lung tissue research consortium (LTRC)

    Science.gov (United States)

    McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian

    2006-03-01

    To create a repository of clinical data, CT images and tissue samples, and to more clearly understand the pathogenetic features of pulmonary fibrosis and emphysema, the National Heart, Lung, and Blood Institute (NHLBI) launched a cooperative effort known as the Lung Tissue Research Consortium (LTRC). The CT images for the LTRC effort must contain accurate CT numbers in order to characterize tissues, and must have high spatial resolution to show fine anatomic structures. This study was performed to optimize the CT image reconstruction algorithms to achieve these criteria. Quantitative analyses of phantom and clinical images were conducted. The ACR CT accreditation phantom, containing five regions of distinct CT attenuations (CT numbers of approximately -1000 HU, -80 HU, 0 HU, 130 HU and 900 HU) and a high-contrast spatial resolution test pattern, was scanned using CT systems from two manufacturers (General Electric (GE) Healthcare and Siemens Medical Solutions). Phantom images were reconstructed using all relevant reconstruction algorithms. Mean CT numbers and image noise (standard deviation) were measured and compared for the five materials. Clinical high-resolution chest CT images acquired on a GE CT system for a patient with diffuse lung disease were reconstructed using the BONE and STANDARD algorithms and evaluated by a thoracic radiologist in terms of image quality and disease extent. The clinical BONE images were processed with a 3 x 3 x 3 median filter to simulate a thicker slice reconstructed with smoother algorithms, which have traditionally been proven to provide an accurate estimation of emphysema extent in the lungs. Using a threshold technique, the volume of emphysema (defined as the percentage of lung voxels having a CT number lower than -950 HU) was computed for the STANDARD, BONE, and filtered BONE images. The CT numbers measured in the ACR CT Phantom images were accurate for all reconstruction kernels for both manufacturers. As expected, visual evaluation of the
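    The emphysema measure used above is straightforward to compute; the sketch below shows the -950 HU threshold index together with the 3 x 3 x 3 median filtering used to simulate a smoother reconstruction. Array names and the lung mask are assumed inputs.

```python
# Hedged sketch: percentage of lung voxels below -950 HU, plus the 3x3x3
# median filter described above. The lung mask is an assumed input.
import numpy as np
from scipy.ndimage import median_filter

def emphysema_index(ct_volume_hu, lung_mask, threshold=-950.0):
    """Percentage of lung voxels with CT number below the threshold."""
    lung_voxels = ct_volume_hu[lung_mask]
    return 100.0 * np.mean(lung_voxels < threshold)

def simulate_smooth_kernel(ct_volume_hu):
    """Approximate a smoother reconstruction with a 3x3x3 median filter."""
    return median_filter(ct_volume_hu, size=3)
```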

  7. Double-barrel vascularised fibula graft in mandibular reconstruction: a 10-year experience with an algorithm.

    Science.gov (United States)

    Shen, Yi; Guo, Xue-hua; Sun, Jian; Li, Jun; Shi, Jun; Huang, Wei; Ow, Andrew

    2013-03-01

    This retrospective study aims to report an algorithm to assist surgeons in selecting different modes of the double-barrel vascularised fibula graft for mandibular reconstruction. A total of 45 patients who underwent reconstruction of mandibular defects with different modes of the double-barrel vascularised fibula graft were reviewed. Our algorithm for deciding on any one of the different modes for different mandibular defects is influenced by factors including history of radiotherapy, the length of the mandibular body defect, and the need to preserve the inferior mandibular border. Post-operative functional outcomes included diet type and speech, and aesthetic results were assessed at 2 years post-operatively. Patients with implant-borne prosthetic teeth underwent assessment of their masticatory function. There were four modes of mandibular reconstruction according to our algorithm: double-barrel vascularised fibula graft (n=21), partial double-barrel fibula graft (n=11), condylar prosthesis in combination with partial/double-barrel fibula graft (n=11), and double-barrel fibula onlay graft (n=2). The flap survival rate was 97.78%. Good occlusion, bony unions and wound closures were observed in 44 patients. Eleven patients received dental implants in the transplanted fibula at 9-18 months post-operatively. One patient wore removable partial dentures. For the 11 patients with implant-borne prosthetic teeth, the average post-operative ipsilateral occlusal force was 41.5±17.7% of the contralateral force. Good functional and aesthetic results were achieved in 38 patients with more than 2 years of follow-up, including a regular diet, normal speech and excellent or good appearance, especially for patients with dental rehabilitation. Good aesthetic and functional results can be achieved after dental rehabilitation by following our algorithm when choosing among the different modes of double-barrel vascularised fibula graft for mandibular reconstruction. Copyright © 2012

  8. Parallel Algorithm for Reconstruction of TAC Images; Algoritmo Paralelo de Reconstruccion de Imagenes TAC

    Energy Technology Data Exchange (ETDEWEB)

    Vidal Gimeno, V.

    2012-07-01

    The algebraic reconstruction methods are based on solving a system of linear equations. In a previous study, the PETSc library, a scientific computing tool, was used and shown to facilitate and enable the optimal use of a computer system in the image reconstruction process.
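    As background, algebraic reconstruction reduces to iteratively solving A x = b; the Kaczmarz sweep below is a minimal serial sketch of this family of methods, not the parallel PETSc implementation the report describes.

```python
# Hedged sketch: the Kaczmarz method, a classic algebraic reconstruction
# technique for the linear system A x = b that such methods solve.
import numpy as np

def kaczmarz(A, b, iterations=50, relax=1.0):
    """Cyclically project the estimate onto each row's hyperplane."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = (A * A).sum(axis=1)
    for _ in range(iterations):
        for i in range(m):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * (residual / row_norms[i]) * A[i]
    return x
```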

  9. A new ionospheric tomographic algorithm – constrained multiplicative algebraic reconstruction technique (CMART)

    Indian Academy of Sciences (India)

    Wen Debao; Liu Sanzhi

    2010-08-01

    To overcome the limitations of the conventional multiplicative algebraic reconstruction technique (MART), a constrained MART (CMART) is proposed in this paper. In the new tomographic algorithm, a popular two-dimensional multi-point finite difference approximation of the second-order Laplacian operator is used to smooth the electron density field. The feasibility and superiority of the new method are demonstrated by a numerical simulation experiment. Finally, CMART is used to reconstruct the regional electron density field using actual GNSS data on geomagnetically quiet and disturbed days. Available ionosonde data from the Beijing station further validate the superiority of the new method.
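    For context, the sketch below shows the core multiplicative update that MART applies row by row; the abstract's constraint, a finite-difference Laplacian smoothing of the electron density field, is only indicated by a comment. The relaxation and normalization details are common textbook choices, not necessarily the paper's.

```python
# Hedged sketch: the multiplicative update of MART, the algorithm that CMART
# constrains. Normalizing the exponent by the row maximum is one common
# damping variant and is an assumption here.
import numpy as np

def mart(A, y, iterations=20, relax=0.2, x0=None):
    """Multiplicative ART for y = A x with nonnegative x (e.g., electron density)."""
    m, n = A.shape
    x = np.ones(n) if x0 is None else x0.astype(float)
    for _ in range(iterations):
        for i in range(m):
            proj = A[i] @ x
            if proj <= 0 or y[i] <= 0:
                continue
            # Multiplicative correction, damped by the relaxation parameter
            x *= (y[i] / proj) ** (relax * A[i] / max(A[i].max(), 1e-12))
        # CMART would additionally smooth x here with a finite-difference
        # Laplacian operator (not implemented in this sketch).
    return x
```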

  10. A new ionospheric tomographic algorithm — constrained multiplicative algebraic reconstruction technique (CMART)

    Science.gov (United States)

    Wen, Debao; Liu, Sanzhi

    2010-08-01

    To overcome the limitations of the conventional multiplicative algebraic reconstruction technique (MART), a constrained MART (CMART) is proposed in this paper. In the new tomographic algorithm, a popular two-dimensional multi-point finite difference approximation of the second-order Laplacian operator is used to smooth the electron density field. The feasibility and superiority of the new method are demonstrated by a numerical simulation experiment. Finally, CMART is used to reconstruct the regional electron density field using actual GNSS data on geomagnetically quiet and disturbed days. Available ionosonde data from the Beijing station further validate the superiority of the new method.

  11. The application of MUSIC algorithm in spectrum reconstruction and interferogram processing

    Science.gov (United States)

    Jian, Xiaohua; Zhang, Chunmin; Zhao, Baochang; Zhu, Baohui

    2008-05-01

    Three different methods of spectrum reproduction and interferogram processing are discussed and contrasted in this paper. In particular, the nonparametric model of the MUSIC (multiple signal classification) algorithm is introduced into practical spectrum reconstruction processing for the first time. The experimental results show that this method greatly improves the resolution of the reproduced spectrum and provides a better mathematical model for super-resolution in spectrum reconstruction. The usefulness and simplicity of the technique will carry interference imaging spectrometers into almost every field into which spectroscopy has ventured, and into some where it has not gone before.
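    A minimal sketch of the MUSIC pseudospectrum may help make the subspace idea concrete: the sample autocorrelation matrix is split into signal and noise eigenspaces, and spectral peaks appear where steering vectors are nearly orthogonal to the noise subspace. The snapshot length and model order here are illustrative assumptions.

```python
# Hedged sketch: a standard MUSIC pseudospectrum for estimating spectral line
# frequencies from sampled data; the paper's exact setup is not reproduced.
import numpy as np

def music_pseudospectrum(signal, p, n_freqs=512, m=None):
    """MUSIC: split the autocorrelation eigenspace into signal/noise parts."""
    m = m or 4 * p
    # Build the autocorrelation matrix from overlapping snapshots
    snaps = np.array([signal[i:i + m] for i in range(len(signal) - m)])
    R = snaps.conj().T @ snaps / len(snaps)
    eigvals, eigvecs = np.linalg.eigh(R)      # ascending eigenvalues
    En = eigvecs[:, : m - p]                  # noise subspace
    freqs = np.linspace(0.0, 0.5, n_freqs)
    spectrum = np.empty(n_freqs)
    for k, f in enumerate(freqs):
        a = np.exp(-2j * np.pi * f * np.arange(m))   # steering vector
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum[k] = 1.0 / max(denom, 1e-12)
    return freqs, spectrum
```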

  12. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    Energy Technology Data Exchange (ETDEWEB)

    Guo, J. [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China); Lehrstuhl fuer Radiochemie, Technische Universitaet Muenchen, Garching 80748 (Germany); Buecherl, T. [Lehrstuhl fuer Radiochemie, Technische Universitaet Muenchen, Garching 80748 (Germany); Zou, Y., E-mail: zouyubin@pku.edu.cn [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China); Guo, Z. [State Key Laboratory of Nuclear Physics and Technology and School of Physics, Peking University, 5 Yiheyuan Lu, Beijing 100871 (China)

    2011-09-21

    Investigations on the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane assuming the area source to be set up of large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, realized and verified by both simulated and measured projection data. The feasibility for improved reconstruction in fast neutron computerized tomography at the NECTAR facility is demonstrated.
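    The Boltzmann description of the beam profile amounts to fitting a sigmoid step to the measured edge; a generic curve-fitting sketch under assumed parameter names is shown below.

```python
# Hedged sketch: fitting a Boltzmann (sigmoid) function to a measured beam
# edge profile, as used above to describe the beam distribution in the
# detection plane. Parameter names are generic assumptions.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, a1, a2, x0, dx):
    """Boltzmann step: levels a1 -> a2 with center x0 and width dx."""
    return a2 + (a1 - a2) / (1.0 + np.exp((x - x0) / dx))

# Usage: given positions x and measured intensities y along the beam edge
# popt, _ = curve_fit(boltzmann, x, y, p0=[y.max(), y.min(), x.mean(), 1.0])
```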

  13. Adaptive Surface Reconstruction Based on Tensor Product Algebraic Splines

    Institute of Scientific and Technical Information of China (English)

    Xinghua Song; Falai Chen

    2009-01-01

    Surface reconstruction from unorganized data points is a challenging problem in Computer Aided Design and Geometric Modeling. In this paper, we extend the mathematical model proposed by Jüttler and Felis (Adv. Comput. Math., 17 (2002), pp. 135-152), based on tensor product algebraic spline surfaces, from fixed meshes to adaptive meshes. We start with a tensor product algebraic B-spline surface defined on an initial mesh to fit the given data based on an optimization approach. By measuring the fitting errors over each cell of the mesh, we recursively insert new knots in cells over which the errors are larger than some given threshold, and construct a new algebraic spline surface to better fit the given data locally. The algorithm terminates when the error over each cell is less than the threshold. We provide some examples to demonstrate our algorithm and compare it with Jüttler's method. Examples suggest that our method is effective and is able to produce reconstruction surfaces of high quality. AMS subject classifications: 65D17

  14. Integration of robust filters and phase unwrapping algorithms for image reconstruction of objects containing height discontinuities.

    Science.gov (United States)

    Weng, Jing-Feng; Lo, Yu-Lung

    2012-05-07

    For 3D objects with height discontinuities, the image reconstruction performance of interferometric systems is adversely affected by the presence of noise in the wrapped phase map. Various schemes have been proposed for detecting residual noise, speckle noise and noise at the lateral surfaces of the discontinuities. However, in most schemes, some noisy pixels are missed and noise detection errors occur. Accordingly, this paper proposes two robust filters (designated as Filters A and B, respectively) for improving the performance of the phase unwrapping process for objects with height discontinuities. Filter A comprises a noise and phase jump detection scheme and an adaptive median filter, while Filter B replaces the detected noise with the median phase value of an N × N mask centered on the noisy pixel. Filter A enables most of the noise and detection errors in the wrapped phase map to be removed. Filter B then detects and corrects any remaining noise or detection errors during the phase unwrapping process. Three reconstruction paths are proposed: Path I, Path II and Path III. Path I combines the path-dependent MACY algorithm with Filters A and B, while Paths II and III combine the path-independent cellular automata (CA) algorithm with Filters A and B. In Path II, the CA algorithm operates on the whole wrapped phase map, while in Path III, the CA algorithm operates on multiple sub-maps of the wrapped phase map. The simulation and experimental results confirm that the three reconstruction paths provide robust and precise reconstruction performance given appropriate values of the parameters used in the detection scheme and filters. However, the CA algorithm used in Paths II and III is relatively inefficient in identifying the most suitable unwrapping paths; thus, of the three paths, Path I yields the lowest runtime.
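    Filter B's replacement step is simple to sketch: each pixel flagged by the detection stage is replaced by the median phase of the N x N window around it. The noise mask is assumed to come from a separate detection step.

```python
# Hedged sketch: the neighborhood-median replacement that Filter B performs.
import numpy as np

def median_replace(wrapped_phase, noise_mask, n=5):
    """Replace flagged pixels with the median of their N x N neighborhood."""
    out = wrapped_phase.copy()
    half = n // 2
    rows, cols = np.nonzero(noise_mask)
    padded = np.pad(wrapped_phase, half, mode="reflect")
    for r, c in zip(rows, cols):
        window = padded[r:r + n, c:c + n]     # N x N block around (r, c)
        out[r, c] = np.median(window)
    return out
```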

  15. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    Full Text Available In this paper, a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence using an algorithm that minimizes the distance between descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
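    A minimal sketch of descriptor-distance matching between consecutive frames is shown below; the Lowe-style ratio test is a common safeguard added here as an assumption, not a detail stated in the abstract.

```python
# Hedged sketch: nearest-neighbor descriptor matching of the kind used above
# to track SIFT features between consecutive frames.
import numpy as np

def match_descriptors(desc_prev, desc_next, ratio=0.8):
    """Match each previous descriptor to its nearest neighbor in the next frame."""
    matches = []
    for i, d in enumerate(desc_prev):
        dists = np.linalg.norm(desc_next - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:   # accept unambiguous matches
            matches.append((i, best))
    return matches
```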

  16. Neural Network-Based Hyperspectral Algorithms

    Science.gov (United States)

    2016-06-07

    Neural Network-Based Hyperspectral Algorithms. Walter F. Smith, Jr. and Juanita Sandidge, Naval Research Laboratory, Code 7340, Bldg 1105, Stennis Space... our effort is development of robust numerical inversion algorithms, which will retrieve inherent optical properties of the water column as well as... validate the resulting inversion algorithms with in-situ data and provide estimates of the error bounds associated with the inversion algorithm. APPROACH

  17. Application of phase space reconstruction and v-SVR algorithm in predicting displacement of underground engineering surrounding rock

    Institute of Scientific and Technical Information of China (English)

    SHI Chao; CHEN Yi-feng; YU Zhi-xiong; YANG Kun

    2006-01-01

    A new method for predicting the trend of displacement evolution of surrounding rock is presented in this paper. According to the nonlinear characteristics of the displacement time series of underground engineering surrounding rock, and based on phase space reconstruction theory and the powerful nonlinear mapping ability of support vector machines, the information offered by the time series data sets is fully exploited and the nonlinearity of the displacement evolution system of surrounding rock is well described. The example suggests that the methods based on phase space reconstruction and the modified v-SVR algorithm are highly accurate, and the study can help to build a displacement forecast system for analyzing the stability of underground engineering surrounding rock.
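    Phase space reconstruction here refers to time-delay (Takens) embedding of the displacement series before SVR training; a minimal sketch follows, with the embedding dimension and delay assumed to be chosen elsewhere (e.g., by false nearest neighbors or mutual information).

```python
# Hedged sketch: time-delay (Takens) embedding, the standard phase space
# reconstruction step that precedes SVR training in approaches like this one.
import numpy as np

def delay_embed(series, dim, tau):
    """Map a scalar series into dim-dimensional delay vectors."""
    n = len(series) - (dim - 1) * tau
    return np.array([series[i:i + dim * tau:tau] for i in range(n)])

# Usage: X = delay_embed(displacement, dim=4, tau=2); targets are the next
# sample after each vector, and an SVR (e.g., sklearn.svm.NuSVR) fits X -> y.
```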

  18. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R; Kim, Jeehyun; Nelson, J Stuart [Beckman Laser Institute and Medical Clinic, University of California, Irvine, CA 92612 (United States)], E-mail: wverkruy@uci.edu

    2008-03-07

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain (PWS), a vascular skin lesion frequently studied with PPTR, as a strictly layered structure, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Reconstructions of similar or better accuracy can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  19. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    Science.gov (United States)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart

    2008-03-01

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain (PWS), a vascular skin lesion frequently studied with PPTR, as a strictly layered structure, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Reconstructions of similar or better accuracy can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  20. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.

  1. A New Page Ranking Algorithm Based On WPRVOL Algorithm

    OpenAIRE

    Roja Javadian Kootenae; Seyyed Mohsen Hashemi; mehdi afzali

    2013-01-01

    The amount of information on the web is always growing; thus, powerful search tools are needed to search such a large collection. Search engines help users find the information they want more easily among this massive volume of information. But what is important in search engines, and what distinguishes them from one another, is the page ranking algorithm they use. In this paper a new page ranking algorithm based on "Weighted Page Ranking based on Visits of ...

  2. Open-source algorithm for automatic choroid segmentation of OCT volume reconstructions

    Science.gov (United States)

    Mazzaferri, Javier; Beaton, Luke; Hounye, Gisèle; Sayah, Diane N.; Costantino, Santiago

    2017-02-01

    The use of optical coherence tomography (OCT) to study ocular diseases associated with choroidal physiology is sharply limited by the lack of available automated segmentation tools. Current research largely relies on hand-traced, single B-Scan segmentations because commercially available programs require high quality images, and the existing implementations are closed, scarce and not freely available. We developed and implemented a robust algorithm for segmenting and quantifying the choroidal layer from 3-dimensional OCT reconstructions. Here, we describe the algorithm, validate and benchmark the results, and provide an open-source implementation under the General Public License for any researcher to use (https://www.mathworks.com/matlabcentral/fileexchange/61275-choroidsegmentation).

  3. Ultrasonic temperature distribution reconstruction for circular area based on Markov radial basis approximation and singular value decomposition.

    Science.gov (United States)

    Shen, Xuehua; Xiong, Qingyu; Shi, Xin; Wang, Kai; Liang, Shan; Gao, Min

    2015-09-01

    Reconstruction of the temperature distribution over a circular area is of critical importance, and an ultrasonic technique is investigated in this paper to meet this demand. Considering the particularity of the circular area, an algorithm based on Markov radial basis approximation and singular value decomposition is proposed, and the ultrasonic transducer layout and the division of the measured area are properly designed. The reconstruction performance is validated via numerical experiments using different temperature distribution models and is compared with an algorithm based on the least squares method. To study noise immunity, various levels of noise are added to the theoretical time-of-flight values. Experimental results indicate that the proposed algorithm can reconstruct the temperature distribution with higher accuracy and stronger noise immunity, and avoids the problem of the least-squares-based algorithm, whose reconstructions lose much of the temperature information near the edge of the measured area. Copyright © 2015 Elsevier B.V. All rights reserved.
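    A loose sketch of the reconstruction idea: expand the temperature field in radial basis functions, relate path-integrated time-of-flight measurements to the coefficients, and solve the resulting linear system with an SVD pseudoinverse. The Gaussian basis and path discretization are generic assumptions, not the paper's Markov radial basis model.

```python
# Hedged sketch: RBF expansion of a 2D temperature field, coefficients solved
# via an SVD pseudoinverse (np.linalg.pinv uses the SVD internally).
import numpy as np

def rbf_reconstruct(path_matrix, tof, centers, grid, sigma=0.1):
    """path_matrix[i, j]: integral of basis j along ultrasonic path i."""
    # Solve path_matrix @ coeffs = tof in the least-squares sense via SVD
    coeffs = np.linalg.pinv(path_matrix) @ tof
    # Evaluate the fitted field on reconstruction grid points
    d2 = ((grid[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    basis = np.exp(-d2 / (2.0 * sigma**2))
    return basis @ coeffs
```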

  4. The regularized blind tip reconstruction algorithm as a scanning probe microscopy tip metrology method

    CERN Document Server

    Jozwiak, G; Masalska, A; Gotszalk, T; Ritz, I; Steigmann, H

    2011-01-01

    The problem of accurate tip radius and shape characterization is very important for the determination of surface mechanical and chemical properties on the basis of scanning probe microscopy measurements. We think that the most favorable methods for this purpose are blind tip reconstruction methods, since they do not need any calibrated characterizers and can be performed on an ordinary SPM setup. As in many other inverse problems, the stability of the solution of these methods in the presence of vibrational and electronic noise requires the application of so-called regularization techniques. In this paper, a novel regularization technique for the blind tip reconstruction algorithm (Regularized Blind Tip Reconstruction, RBTR) is presented. It improves the quality of the solution in the presence of both isotropic and anisotropic noise. The superiority of our approach is proved on the basis of computer simulations and analysis of images of the Budget Sensors TipCheck calibration standard. In case of characterization ...

  5. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    CERN Document Server

    Gagnon, Louis-Guillaume; The ATLAS collaboration

    2017-01-01

    ATLAS track reconstruction software is continuously evolving to match the demands of the increasing instantaneous luminosity of the LHC, as well as the increased center-of-mass energy. These conditions result in a higher abundance of events with dense track environments, such as the cores of jets or boosted tau leptons undergoing three-prong decays. These environments are characterised by charged particle separations on the order of the ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction software to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes lead to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented and physics performance studies are shown, including a measurement of the fraction of lost tracks in jets with high transverse momentum.

  6. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    CERN Document Server

    Gagnon, Louis-Guillaume; The ATLAS collaboration

    2016-01-01

    ATLAS track reconstruction code is continuously evolving to match the demands from the increasing instantaneous luminosity of LHC, as well as the increased centre-of-mass energy. With the increase in energy, events with dense environments, e.g. the cores of jets or boosted tau leptons, become much more abundant. These environments are characterised by charged particle separations on the order of ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction code to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes lead to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented. In addition, physics performance studies are shown, e.g. a measurement of the fraction of lost tracks in jets with high transverse momentum.

  7. Structured Light-Based 3D Reconstruction System for Plants.

    Science.gov (United States)

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  8. Structured Light-Based 3D Reconstruction System for Plants

    Directory of Open Access Journals (Sweden)

    Thuy Tuong Nguyen

    2015-07-01

    Full Text Available Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  9. Effects of photon noise on speckle image reconstruction with the Knox-Thompson algorithm. [in astronomy

    Science.gov (United States)

    Nisenson, P.; Papaliolios, C.

    1983-01-01

    An analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented. It is shown that the quantities resulting from the speckle average are biased, but that the biases are easily estimated and compensated. Calculations are also made of the convergence rate of the speckle average as a function of source brightness. An illustration of the effects of photon noise on the image recovery process is included.

  10. Kernel method-based fuzzy clustering algorithm

    Institute of Scientific and Technical Information of China (English)

    Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping

    2005-01-01

    The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on diversiform structures, such as non-hyperspherical data, data with noise, data with mixtures of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm united with the kernel method. The results of experiments with synthetic and real data show that the FKCM clustering algorithm is universal and can effectively perform unsupervised analysis of datasets with variform structures, in contrast to the FCM algorithm. It can be expected that kernel-based clustering algorithms are an important research direction in fuzzy clustering analysis.
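    One common way to kernelize FCM with a Gaussian kernel keeps the prototypes in input space, where the feature-space distance simplifies to 2(1 - K(x, v)); the sketch below follows that formulation, which may differ in detail from the paper's FKCM.

```python
# Hedged sketch: kernel fuzzy C-means with a Gaussian kernel, one common way
# to combine FCM with the kernel method as the abstract describes.
import numpy as np

def gaussian_kernel(x, v, sigma):
    return np.exp(-((x[:, None, :] - v[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))

def kernel_fcm(X, c, m=2.0, sigma=1.0, iters=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]      # initial prototypes
    for _ in range(iters):
        K = gaussian_kernel(X, V, sigma)             # n x c kernel values
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)      # feature-space distances
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)     # membership update
        W = (U ** m) * K
        V_new = (W.T @ X) / W.sum(axis=0)[:, None]   # prototype update
        if np.linalg.norm(V_new - V) < tol:
            V = V_new
            break
        V = V_new
    return U, V
```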

  11. Reconstruction of volumetric ultrasound panorama based on improved 3D SIFT.

    Science.gov (United States)

    Ni, Dong; Chui, Yim Pan; Qu, Yingge; Yang, Xuan; Qin, Jing; Wong, Tien-Tsin; Ho, Simon S H; Heng, Pheng Ann

    2009-10-01

    Registration of ultrasound volumes is a key issue in the reconstruction of volumetric ultrasound panoramas. In this paper, we propose an improved three-dimensional (3D) scale invariant feature transform (SIFT) algorithm to globally register ultrasound volumes acquired from a dedicated ultrasound probe, where local deformations are corrected by a block-based warping algorithm. The original SIFT algorithm is extended to 3D and improved by combining the SIFT detector with the Rohr3D detector to extract complementary features and by applying the diffusion distance algorithm for robust feature comparison. Extensive experiments have been performed on both phantom and clinical data sets to demonstrate the effectiveness and robustness of our approach.

  12. Non-Iterative Regularized reconstruction Algorithm for Non-CartesiAn MRI: NIRVANA.

    Science.gov (United States)

    Kashyap, Satyananda; Yang, Zhili; Jacob, Mathews

    2011-02-01

    We introduce a novel noniterative algorithm for the fast and accurate reconstruction of nonuniformly sampled MRI data. The proposed scheme derives the reconstructed image as the nonuniform inverse Fourier transform of a compensated dataset. We derive each sample in the compensated dataset as a weighted linear combination of a few measured k-space samples. The specific k-space samples and the weights involved in the linear combination are derived such that the reconstruction error is minimized. The computational complexity of the proposed scheme is comparable to that of gridding. At the same time, it provides significantly improved accuracy and is considerably more robust to noise and undersampling. The advantages of the proposed scheme make it ideally suited for the fast reconstruction of large multidimensional datasets, which routinely arise in applications such as f-MRI and MR spectroscopy. Comparisons with state-of-the-art algorithms on numerical phantoms and MRI data clearly demonstrate the performance improvement. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Dose reduction potential of iterative reconstruction algorithms in neck CTA-a simulation study.

    Science.gov (United States)

    Ellmann, Stephan; Kammerer, Ferdinand; Allmendinger, Thomas; Brand, Michael; Janka, Rolf; Hammon, Matthias; Lell, Michael M; Uder, Michael; Kramer, Manuel

    2016-10-01

    This study aimed to determine the degree of radiation dose reduction in neck CT angiography (CTA) achievable with sinogram-affirmed iterative reconstruction (SAFIRE) algorithms. 10 consecutive patients scheduled for neck CTA were included in this study. CTA images of the external carotid arteries were either reconstructed with filtered back projection (FBP) at the full radiation dose level or underwent simulated dose reduction by proprietary reconstruction software. The dose-reduced images were reconstructed using either SAFIRE 3 or SAFIRE 5 and compared with full-dose FBP images in terms of vessel definition. 5 observers performed a total of 3000 pairwise comparisons. SAFIRE allowed substantial radiation dose reductions in neck CTA while maintaining vessel definition. The possible levels of radiation dose reduction ranged from approximately 34% to approximately 90% and depended on the SAFIRE algorithm strength and the size of the vessel of interest. In general, larger vessels permitted higher degrees of radiation dose reduction, especially with higher SAFIRE strength levels. With small vessels, the superiority of SAFIRE 5 over SAFIRE 3 was lost. Neck CTA can be performed with substantially less radiation dose when SAFIRE is applied. The exact degree of radiation dose reduction should be adapted to the clinical question, in particular to the smallest vessel requiring excellent definition.

  14. IPED2X: a robust pedigree reconstruction algorithm for complicated pedigrees.

    Science.gov (United States)

    He, Dan; Eskin, Eleazar

    2014-12-01

    Reconstruction of family trees, or pedigree reconstruction, for a group of individuals is a fundamental problem in genetics. Some recent methods have been developed to reconstruct pedigrees using genotype data only. These methods are accurate and efficient for simple pedigrees which contain only siblings, where two individuals share the same pair of parents. The most recent method, IPED2, is able to handle complicated pedigrees with half-sibling relationships, where two individuals share only one parent. However, this method is shown to miss many true positive half-sibling relationships, as it removes all suspicious half-sibling relationships during the parent construction process. In this work, we propose a novel method, IPED2X, which deploys a more robust algorithm for parent construction in the pedigrees by considering more possible operations rather than simple deletion. We convert the parent construction problem into a graph labeling problem and propose a more effective labeling algorithm. Our experiments show that IPED2X is more powerful at capturing true half-sibling relationships, which further leads to better reconstruction accuracy.

  15. Effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low dose CT colonography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Sun [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of); Kim, Se Hyung, E-mail: shkim7071@gmail.com [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of); Im, Jong Pil; Kim, Sang Gyun [Department of Internal Medicine, Seoul National University Hospital (Korea, Republic of); Shin, Cheong-il; Han, Joon Koo; Choi, Byung Ihn [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of)

    2015-04-15

    Highlights: •We assessed the effect of reconstruction algorithms on CAD in ultra-low dose CTC. •30 patients underwent ultra-low dose CTC using 120 and 100 kVp with 10 mAs. •CT was reconstructed with FBP, ASiR and Veo and then, we applied a CAD system. •Per-polyp sensitivity of CAD in ULD CT can be improved with the IR algorithms. •Despite of an increase in the number of FPs with IR, it was still acceptable. -- Abstract: Purpose: To assess the effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low-dose CT colonography (ULD CTC). Materials and methods: IRB approval and informed consents were obtained. Thirty prospectively enrolled patients underwent non-contrast CTC at 120 kVp/10 mAs in supine and 100 kVp/10 mAs in prone positions, followed by same-day colonoscopy. Images were reconstructed with filtered back projection (FBP), 80% adaptive statistical iterative reconstruction (ASIR80), and model-based iterative reconstruction (MBIR). A commercial CAD system was applied and per-polyp sensitivities and numbers of false-positives (FPs) were compared among algorithms. Results: Mean effective radiation dose of CTC was 1.02 mSv. Of 101 polyps detected and removed by colonoscopy, 61 polyps were detected on supine and on prone CTC datasets on consensus unblinded review, resulting in 122 visible polyps (32 polyps <6 mm, 52 6–9.9 mm, and 38 ≥ 10 mm). Per-polyp sensitivity of CAD for all polyps was highest with MBIR (56/122, 45.9%), followed by ASIR80 (54/122, 44.3%) and FBP (43/122, 35.2%), with significant differences between FBP and IR algorithms (P < 0.017). Per-polyp sensitivity for polyps ≥ 10 mm was also higher with MBIR (25/38, 65.8%) and ASIR80 (24/38, 63.2%) than with FBP (20/38, 58.8%), albeit without statistical significance (P > 0.017). Mean number of FPs was significantly different among algorithms (FBP, 1.4; ASIR, 2.1; MBIR, 2.4) (P = 0.011). Conclusion: Although the performance of stand-alone CAD

  16. Relaxed Linearized Algorithms for Faster X-Ray CT Image Reconstruction.

    Science.gov (United States)

    Nien, Hung; Fessler, Jeffrey A

    2016-04-01

    Statistical image reconstruction (SIR) methods are studied extensively for X-ray computed tomography (CT) due to the potential of acquiring CT scans with reduced X-ray dose while maintaining image quality. However, the longer reconstruction time of SIR methods hinders their use in X-ray CT in practice. To accelerate statistical methods, many optimization techniques have been investigated. Over-relaxation is a common technique to speed up the convergence of iterative algorithms. For instance, using a relaxation parameter close to two in the alternating direction method of multipliers (ADMM) has been shown to speed up convergence significantly. This paper proposes a relaxed linearized augmented Lagrangian (AL) method that exhibits a theoretically faster convergence rate with over-relaxation, and applies the proposed method to X-ray CT image reconstruction problems. Experimental results with both simulated and real CT scan data show that the proposed relaxed algorithm (with ordered-subsets [OS] acceleration) is about twice as fast as existing unrelaxed fast algorithms, with negligible computation and memory overhead.
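    Over-relaxation in ADMM replaces the primal iterate with a weighted combination before the remaining updates; the toy lasso sketch below shows the relaxation parameter alpha near two, as a generic illustration rather than the paper's relaxed linearized AL method for CT.

```python
# Hedged sketch: over-relaxed ADMM on a small lasso problem, illustrating the
# relaxation parameter the abstract says speeds up convergence.
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, alpha=1.8, iters=200):
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    AtA_rho = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rho, Atb + rho * (z - u))
        x_hat = alpha * x + (1.0 - alpha) * z        # over-relaxation step
        v = x_hat + u
        # Soft-thresholding (proximal operator of the l1 term)
        z = np.maximum(0.0, v - lam / rho) - np.maximum(0.0, -v - lam / rho)
        u = u + x_hat - z                            # dual update
    return x
```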

  17. Optimization-based image reconstruction with artifact reduction in C-arm CBCT

    Science.gov (United States)

    Xia, Dan; Langan, David A.; Solomon, Stephen B.; Zhang, Zheng; Chen, Buxin; Lai, Hao; Sidky, Emil Y.; Pan, Xiaochuan

    2016-10-01

    We investigate an optimization-based reconstruction, with an emphasis on image-artifact reduction, from data collected in C-arm cone-beam computed tomography (CBCT) employed in image-guided interventional procedures. In the study, an image to be reconstructed is formulated as a solution to a convex optimization program in which a weighted data divergence is minimized subject to a constraint on the image total variation (TV); a data-derivative fidelity is introduced in the program specifically for effectively suppressing dominant, low-frequency data artifact caused by, e.g. data truncation; and the Chambolle-Pock (CP) algorithm is tailored to reconstruct an image through solving the program. Like any other reconstructions, the optimization-based reconstruction considered depends upon numerous parameters. We elucidate the parameters, illustrate their determination, and demonstrate their impact on the reconstruction. The optimization-based reconstruction, when applied to data collected from swine and patient subjects, yields images with visibly reduced artifacts in contrast to the reference reconstruction, and it also appears to exhibit a high degree of robustness against distinctively different anatomies of imaged subjects and scanning conditions of clinical significance. Knowledge and insights gained in the study may be exploited for aiding in the design of practical reconstructions of truly clinical-application utility.

  18. Model-based image reconstruction in X-ray computed tomography

    NARCIS (Netherlands)

    Zbijewski, Wojciech Bartosz

    2006-01-01

    The thesis investigates the applications of iterative, statistical reconstruction (SR) algorithms in X-ray Computed Tomography. Emphasis is put on various aspects of system modeling in statistical reconstruction, including fundamental issues such as the effects of object discretization and algorithm initialization.

  19. ILU preconditioning based on the FAPINV algorithm

    Directory of Open Access Journals (Sweden)

    Davod Khojasteh Salkuyeh

    2015-01-01

    Full Text Available A technique for computing an ILU preconditioner based on the factored approximate inverse (FAPINV) algorithm is presented. We show that this algorithm is well-defined for H-matrices. Moreover, when used in conjunction with Krylov-subspace-based iterative solvers such as the GMRES algorithm, it results in reliable solvers. Numerical experiments on some test matrices are given to show the efficiency of the new ILU preconditioner.
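    For readers who want to experiment, the sketch below wires a generic incomplete-LU preconditioner into GMRES with SciPy; it illustrates ILU preconditioning in general, not the FAPINV-based construction of the paper.

```python
# Hedged sketch: generic ILU preconditioning for GMRES via SciPy.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spilu, gmres, LinearOperator

def solve_with_ilu(A, b):
    ilu = spilu(csc_matrix(A))                 # incomplete LU factors of A
    M = LinearOperator(A.shape, ilu.solve)     # preconditioner action M^{-1} r
    x, info = gmres(A, b, M=M)                 # info == 0 signals convergence
    return x, info
```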

  20. Model-based Tomographic Reconstruction Literature Search

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H; Lehman, S K

    2005-11-30

    In the process of preparing a proposal for internal research funding, a literature search was conducted on the subject of model-based tomographic reconstruction (MBTR). The purpose of the search was to ensure that the proposed research would not replicate any previous work. We found that the overwhelming majority of work on MBTR which used parameterized models of the object was theoretical in nature. Only three researchers had applied the technique to actual data. In this note, we summarize the findings of the literature search.

  1. Multi-polarization reconstruction from compact polarimetry based on modified four-component scattering decomposition

    Institute of Scientific and Technical Information of China (English)

    Junjun Yin; Jian Yang

    2014-01-01

    An improved algorithm for multi-polarization reconstruction from compact polarimetry (CP) is proposed. According to two fundamental assumptions in compact polarimetric reconstruction, two improvements are proposed. Firstly, the four-component model-based decomposition algorithm is modified with a new volume scattering model. The decomposed helix scattering component is then used to deal with the non-reflection-symmetry condition in compact polarimetric measurements. Using the decomposed power and considering the scattering mechanism of each component, an average relationship between co-polarized and cross-polarized channels is developed over the original polarization state extrapolation model. E-SAR polarimetric data acquired over the Oberpfaffenhofen area and JPL/AIRSAR polarimetric data acquired over San Francisco are used for verification, and good reconstruction results are obtained, demonstrating the effectiveness of the proposed algorithm.

  2. Image Reconstruction of Two-Dimensional Highly Scattering Inhomogeneous Medium Using MAP-Based Estimation

    Directory of Open Access Journals (Sweden)

    Hong Qi

    2015-01-01

    Full Text Available A maximum a posteriori (MAP) estimation based on a Bayesian framework is applied to the image reconstruction of a two-dimensional highly scattering inhomogeneous medium. The finite difference method (FDM) and the conjugate gradient (CG) algorithm serve as the forward and inverse solving models, respectively. The generalized Gaussian Markov random field model (GGMRF) is treated as the regularization, and finally the influence of measurement errors and initial distributions is investigated. Through the test cases, the MAP estimation algorithm is demonstrated to greatly improve the reconstruction results of the optical coefficients.

  3. Influence of different path length computation models and iterative reconstruction algorithms on the quality of transmission reconstruction in Tomographic Gamma Scanning

    Science.gov (United States)

    Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua

    2017-07-01

    This paper studies the influence of different path length computation models and iterative reconstruction algorithms on the quality of transmission reconstruction in Tomographic Gamma Scanning (TGS). The research purpose is to quantify and localize heterogeneous matrices while investigating the recovery of linear attenuation coefficient (LAC) maps in 200-liter drums. Two different path length computation models, the so-called "point to point" (PP) model and "point to detector" (PD) model, are coupled with two different transmission reconstruction algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint, and Maximum Likelihood Expectation Maximization (MLEM). Thus four modes are formed: ART-PP, ART-PD, MLEM-PP, MLEM-PD. The transmission reconstruction qualities of these four modes are compared for heterogeneous matrices in radioactive waste drums. Results illustrate that the MLEM algorithm yields better transmission reconstruction quality than the ART algorithm, producing the most accurate LAC maps, in good agreement with reference data simulated by Monte Carlo. Moreover, the PD model can be used to assay higher-density waste drums and has a greater scope of application than the PP model in TGS.
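    For reference, the classic MLEM multiplicative update used as one of the two reconstruction algorithms above is sketched here; the system matrix A (path lengths) and measured counts y are generic assumptions.

```python
# Hedged sketch: the classic MLEM update x <- x / (A^T 1) * A^T (y / (A x)).
import numpy as np

def mlem(A, y, iterations=50):
    """Maximum Likelihood Expectation Maximization for y = A x, x >= 0."""
    n = A.shape[1]
    x = np.ones(n)
    sens = A.sum(axis=0)                       # sensitivity image A^T 1
    sens[sens == 0] = 1.0
    for _ in range(iterations):
        proj = A @ x
        proj[proj == 0] = 1e-12                # guard against division by zero
        x *= (A.T @ (y / proj)) / sens
    return x
```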

  4. Local fingerprint image reconstruction based on gabor filtering

    Science.gov (United States)

    Bakhtiari, Somayeh; Agaian, Sos S.; Jamshidi, Mo

    2012-06-01

    In this paper, we propose two solutions for fingerprint local image reconstruction based on Gabor filtering. Gabor filtering is a popular method for fingerprint image enhancement. However, the reliability of the information in the output image suffers when the input image has poor quality. This is the result of spurious estimates of frequency and orientation by classical approaches, particularly in scratch regions. In both techniques of this paper, the scratch marks are initially recognized using a reliability image, which is calculated from the gradient images. The first algorithm is based on an inpainting technique, and the second method employs two different kernels for the scratch and non-scratch parts of the image to calculate the gradient images. The simulation results show that both approaches allow the actual information of the image to be preserved while connecting discontinuities correctly by approximating the orientation matrix more faithfully.
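    An oriented Gabor kernel of the kind commonly used for fingerprint enhancement is easy to sketch; the parameter choices below are illustrative assumptions, not the paper's tuned values.

```python
# Hedged sketch: a real-valued Gabor filter tuned to a local ridge
# orientation theta and ridge frequency f (cycles per pixel).
import numpy as np

def gabor_kernel(theta, f, sigma=4.0, size=17):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to align with the local ridge orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * f * xr)
    return envelope * carrier

# Usage: convolve each image block with the kernel matching its estimated
# local orientation and frequency (e.g., via scipy.signal.convolve2d).
```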

  5. Contour-Based Surface Reconstruction using MPU Implicit Models.

    Science.gov (United States)

    Braude, Ilya; Marker, Jeffrey; Museth, Ken; Nissanov, Jonathan; Breen, David

    2007-03-01

    This paper presents a technique for creating a smooth, closed surface from a set of 2D contours, which have been extracted from a 3D scan. The technique interprets the pixels that make up the contours as points in ℝ³ and employs Multi-level Partition of Unity (MPU) implicit models to create a surface that approximately fits the 3D points. Since MPU implicit models additionally require surface normal information at each point, an algorithm that estimates normals from the contour data is also described. Contour data frequently contain noise from the scanning and delineation process. MPU implicit models provide a superior approach to the problem of contour-based surface reconstruction, especially in the presence of noise, because they are based on adaptive implicit functions that locally approximate the points within a controllable error bound. We demonstrate the effectiveness of our technique with a number of example datasets, providing images and error statistics generated from our results.

  6. Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions.

    Science.gov (United States)

    Grootjans, Willem; Meeuwis, Antoi P W; Slump, Cornelis H; de Geus-Oei, Lioe-Fee; Gotthardt, Martin; Visser, Eric P

    2016-12-01

    Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and the reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion was characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms in reconstructing clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered back projection (FBP), OSEM, and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7, respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4.2, respectively

  7. New Virtual Cutting Algorithms for 3D Surface Model Reconstructed from Medical Images

    Institute of Scientific and Technical Information of China (English)

    WANG Wei-hong; QIN Xu-Jia

    2006-01-01

    This paper proposes practical algorithms for plane cutting, stereo clipping and arbitrary cutting of 3D surface models reconstructed from medical images. In the plane cutting and stereo clipping algorithms, the 3D model is cut by a plane or polyhedron. Lists of edges and vertices in every cut plane are established. From these lists the boundary contours are created and their containment relationships are ascertained. The region enclosed by the contours is triangulated using the Delaunay triangulation algorithm. The arbitrary cutting operation creates the cutting curve interactively. The cut model still maintains its correct topological structure. With these operations, internal tissues can be observed easily, which can aid doctors in diagnosis. The methods can also be used in surgical planning for radiotherapy.

  8. Realization and Comparison of Several Regression Algorithms for Electron Energy Spectrum Reconstruction

    Institute of Scientific and Technical Information of China (English)

    LI Gui; LIN Hui; WU Ai-Dong; SONG Gang; WU Yi-Can

    2008-01-01

    To determine the electron energy spectra of medical accelerators effectively, we investigate a nonlinear programming model with several nonlinear regression algorithms, including the Levenberg-Marquardt, Quasi-Newton, Gradient, Conjugate Gradient, Newton, Principal-Axis and NMinimize algorithms. A local relaxation-bound method is also developed to increase the calculation accuracy. The testing results demonstrate that the above methods can reconstruct the electron energy spectra effectively. In particular, combined with the local relaxation-bound method, the Levenberg-Marquardt, Newton and NMinimize algorithms can precisely obtain both the electron energy spectra and the photon contamination. Further study shows that ignoring about 4% photon contamination would greatly increase the error and make the electron energy spectra 'drift' inaccurately toward low energies.
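    Spectrum unfolding of this kind can be posed as bound-constrained least squares on the spectral weights; the sketch below uses SciPy's least_squares with an assumed depth-dose kernel D, noting that the plain Levenberg-Marquardt option ('lm') does not support the nonnegativity bounds.

```python
# Hedged sketch: least-squares unfolding of an electron energy spectrum from
# measured depth-dose data. The kernel D (dose per unit fluence for each
# energy bin, sampled at each depth) is an assumed input.
import numpy as np
from scipy.optimize import least_squares

def unfold_spectrum(D, measured_dose, n_bins):
    """Fit nonnegative spectral weights w so that D @ w matches the data."""
    def residuals(w):
        return D @ w - measured_dose
    w0 = np.full(n_bins, 1.0 / n_bins)
    # 'trf' supports the nonnegativity bounds; plain LM ('lm') does not
    result = least_squares(residuals, w0, bounds=(0.0, np.inf), method="trf")
    return result.x
```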

  9. A Fast Algorithm for Muon Track Reconstruction and its Application to the ANTARES Neutrino Telescope

    CERN Document Server

    Aguilar, J A; Albert, A; Andre, M; Anghinolfi, M; Anton, G; Anvar, S; Ardid, M; Jesus, A C Assis; Astraatmadja, T; Aubert, J-J; Auer, R; Baret, B; Basa, S; Bazzotti, M; Bertin, V; Biagi, S; Bigongiari, C; Bogazzi, C; Bou-Cabo, M; Bouwhuis, M C; Brown, A M; Brunner, J; Busto, J; Camarena, F; Capone, A; Carloganu, C; Carminati, G; Carr, J; Cecchini, S; Charvis, Ph; Chiarusi, T; Circella, M; Coniglione, R; Costantini, H; Cottini, N; Coyle, P; Curtil, C; Decowski, M P; Dekeyser, I; Deschamps, A; Distefano, C; Donzaud, C; Dornic, D; Dorosti, Q; Drouhin, D; Eberl, T; Emanuele, U; Ernenwein, J-P; Escoffier, S; Fehr, F; Flaminio, V; Fritsch, U; Fuda, J-L; Galata, S; Gay, P; Giacomelli, G; Gomez-Gonzalez, J P; Graf, K; Guillard, G; Halladjian, G; Hallewell, G; van Haren, H; Heijboer, A J; Hello, Y; Hernandez-Rey, J J; Herold, B; Hößl, J; Hsu, C C; de Jong, M; Kadler, M; Kalantar-Nayestanaki, N; Kalekin, O; Kappes, A; Katz, U; Kooijman, P; Kopper, C; Kouchner, A; Kulikovskiy, V; Lahmann, R; Lamare, P; Larosa, G; Lefevre, D; Lim, G; Presti, D Lo; Loehner, H; Loucatos, S; Lucarelli, F; Mangano, S; Marcelin, M; Margiotta, A; Martinez-Mora, J A; Mazure, A; Meli, A; Montaruli, T; Morganti, M; Moscoso, L; Motz, H; Naumann, C; Neff, M; Palioselitis, D; Pavalas, G E; Payre, P; Petrovic, J; Picot-Clemente, N; Picq, C; Popa, V; Pradier, T; Presani, E; Racca, C; Reed, C; Riccobene, G; Richardt, C; Richter, R; Rostovtsev, A; Rujoiu, M; Russo, G V; Salesa, F; Sapienza, P; Schöck, F; Schuller, J-P; Shanidze, R; Simeone, F; Spiess, A; Spurio, M; Steijger, J J M; Stolarczyk, Th; Taiuti, M; Tamburini, C; Tasca, L; Toscano, S; Vallage, B; Van Elewyck, V; Vannoni, G; Vecchi, M; Vernin, P; Wijnker, G; de Wolf, E; Yepes, H; Zaborov, D; Zornoza, J D; Zuniga, J

    2011-01-01

    An algorithm is presented that provides a fast and robust reconstruction of neutrino-induced upward-going muons and a discrimination of these events from the downward-going atmospheric muon background in data collected by the ANTARES neutrino telescope. The algorithm consists of a hit merging and hit selection procedure followed by fitting steps for a track hypothesis and a point-like light source. It is particularly well-suited for real time applications such as online monitoring and fast triggering of optical follow-up observations for multi-messenger studies. The performance of the algorithm is evaluated with Monte Carlo simulations and various distributions are compared with those obtained from ANTARES data.

  10. A fast algorithm for muon track reconstruction and its application to the ANTARES neutrino telescope

    Science.gov (United States)

    Aguilar, J. A.; Al Samarai, I.; Albert, A.; André, M.; Anghinolfi, M.; Anton, G.; Anvar, S.; Ardid, M.; Assis Jesus, A. C.; Astraatmadja, T.; Aubert, J.-J.; Auer, R.; Baret, B.; Basa, S.; Bazzotti, M.; Bertin, V.; Biagi, S.; Bigongiari, C.; Bogazzi, C.; Bou-Cabo, M.; Bouwhuis, M. C.; Brown, A. M.; Brunner, J.; Busto, J.; Camarena, F.; Capone, A.; Cârloganu, C.; Carminati, G.; Carr, J.; Cecchini, S.; Charvis, Ph.; Chiarusi, T.; Circella, M.; Coniglione, R.; Costantini, H.; Cottini, N.; Coyle, P.; Curtil, C.; Decowski, M. P.; Dekeyser, I.; Deschamps, A.; Distefano, C.; Donzaud, C.; Dornic, D.; Dorosti, Q.; Drouhin, D.; Eberl, T.; Emanuele, U.; Ernenwein, J.-P.; Escoffier, S.; Fehr, F.; Flaminio, V.; Fritsch, U.; Fuda, J.-L.; Galatà, S.; Gay, P.; Giacomelli, G.; Gómez-González, J. P.; Graf, K.; Guillard, G.; Halladjian, G.; Hallewell, G.; van Haren, H.; Heijboer, A. J.; Hello, Y.; Hernández-Rey, J. J.; Herold, B.; Hößl, J.; Hsu, C. C.; de Jong, M.; Kadler, M.; Kalantar-Nayestanaki, N.; Kalekin, O.; Kappes, A.; Katz, U.; Kooijman, P.; Kopper, C.; Kouchner, A.; Kulikovskiy, V.; Lahmann, R.; Lamare, P.; Larosa, G.; Lefèvre, D.; Lim, G.; Lo Presti, D.; Loehner, H.; Loucatos, S.; Lucarelli, F.; Mangano, S.; Marcelin, M.; Margiotta, A.; Martinez-Mora, J. A.; Mazure, A.; Meli, A.; Montaruli, T.; Morganti, M.; Moscoso, L.; Motz, H.; Naumann, C.; Neff, M.; Palioselitis, D.; Păvălaş, G. E.; Payre, P.; Petrovic, J.; Picot-Clemente, N.; Picq, C.; Popa, V.; Pradier, T.; Presani, E.; Racca, C.; Reed, C.; Riccobene, G.; Richardt, C.; Richter, R.; Rostovtsev, A.; Rujoiu, M.; Russo, G. V.; Salesa, F.; Sapienza, P.; Schöck, F.; Schuller, J.-P.; Shanidze, R.; Simeone, F.; Spiess, A.; Spurio, M.; Steijger, J. J. M.; Stolarczyk, Th.; Taiuti, M.; Tamburini, C.; Tasca, L.; Toscano, S.; Vallage, B.; Van Elewyck, V.; Vannoni, G.; Vecchi, M.; Vernin, P.; Wijnker, G.; de Wolf, E.; Yepes, H.; Zaborov, D.; Zornoza, J. D.; Zúñiga, J.

    2011-04-01

    An algorithm is presented that provides a fast and robust reconstruction of neutrino-induced upward-going muons and a discrimination of these events from the downward-going atmospheric muon background in data collected by the ANTARES neutrino telescope. The algorithm consists of a hit merging and hit selection procedure followed by fitting steps for a track hypothesis and a point-like light source. It is particularly well suited for real-time applications such as online monitoring and fast triggering of optical follow-up observations for multi-messenger studies. The performance of the algorithm is evaluated with Monte Carlo simulations, and various distributions are compared with those obtained from ANTARES data.

  11. A Fast local Reconstruction algorithm by selective backprojection for Low-Dose in Dental Computed Tomography

    CERN Document Server

    Bin, Yan; Yu, Han; Feng, Zhang; Chao, Wang Xian; Lei, Li

    2013-01-01

    High radiation dose in computed tomography (CT) scans increases the lifetime risk of cancer, which has become a major clinical concern. The backprojection-filtration (BPF) algorithm can reduce radiation dose by reconstructing images from truncated data acquired in a short scan. In dental CT, it can reduce the dose to the teeth by using projections acquired in a short scan, and can avoid irradiating other regions by using truncated projections. However, the limits of integration for backprojection vary per PI-line, resulting in low computational efficiency and poor parallel performance. Recently, a tent BPF (T-BPF) was proposed to improve computational efficiency by rearranging the projections, but it includes a memory-consuming data rebinning process. Accordingly, the chose-BPF (C-BPF) algorithm is proposed in this paper. In this algorithm, the derivative of the projection is backprojected to the points whose x coordinate is less than that of the source focal spot to obtain the differentiated backprojection...

  12. Multicast Routing Based on Hybrid Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    CAO Yuan-da; CAI Gui

    2005-01-01

    A new multicast routing algorithm based on a hybrid genetic algorithm (HGA) is proposed. A coding pattern based on the number of routing paths is used, together with a fitness function that is cheap to compute and makes the algorithm converge quickly. A new approach for setting the HGA's parameters is provided. The simulation shows that the approach greatly increases the convergence ratio, and that the fitted parameter values of this algorithm differ from those of the original algorithms: the optimal mutation probability of the HGA is 0.50 in the experiment, whereas it is 0.07 for the SGA. The population size has a significant influence on the HGA's convergence ratio when its mutation probability is larger; an algorithm with a small population size has a high average convergence rate, while the population size has little influence on the HGA at lower mutation probabilities.

  13. A novel image reconstruction methodology based on inverse Monte Carlo analysis for positron emission tomography

    Science.gov (United States)

    Kudrolli, Haris A.

    2001-04-01

    A three dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron emitting radio-tracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in-vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to calculate. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The "Three Dimensional Stochastic Sampling" (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interaction within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms that had faster convergence rates

  14. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    Science.gov (United States)

    Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai

    2016-03-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The introduction of a selective matrix helps improve reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image space. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).
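
    The back-and-forth iteration between Fourier and image space can be sketched as a proximal-gradient loop, with a generic TV denoiser standing in for the paper's alternating-direction TV minimization; the forward/adjoint operators A and AH are assumed to be supplied by a NUFFT library, and the step sizes are illustrative.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        def fourier_iterative_recon(A, AH, y, shape, iters=50, step=1.0, tv_w=0.02):
            # A / AH: assumed callables for the forward and adjoint (NU)FFT;
            # y: measured Fourier samples on the sparse fan trajectory.
            x = np.zeros(shape)
            for _ in range(iters):
                x = x - step * np.real(AH(A(x) - y))      # Fourier-space consistency
                x = denoise_tv_chambolle(x, weight=tv_w)  # image-space TV step
            return x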

  15. State exact reconstruction for switched linear systems via a super-twisting algorithm

    Science.gov (United States)

    Bejarano, Francisco J.; Fridman, Leonid

    2011-05-01

    This article discusses the problem of state reconstruction synthesis for switched linear systems. Based only on the continuous output information, an observer is proposed ensuring the reconstruction of the entire state (continuous and discrete) in finite time. For the observer design an exact sliding mode differentiator is used, which allows the finite time convergence of the observer trajectories to the actual trajectories. The design scheme includes both cases: zero control input and nonzero control input. Simulations illustrate the effectiveness of the proposed observer.
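
    The core ingredient, a super-twisting (robust exact) differentiator, can be sketched as a simple Euler discretization; the gains lam1, lam2 and the Lipschitz bound L below are illustrative tuning values, not taken from the article.

        import numpy as np

        def super_twisting_diff(f, dt, lam1=1.5, lam2=1.1, L=1.0):
            # Levant's super-twisting differentiator: estimates df/dt in
            # finite time from the sampled signal f.
            z0, z1 = f[0], 0.0
            df = np.zeros_like(f, dtype=float)
            for k, fk in enumerate(f):
                e = z0 - fk
                z0 += dt * (-lam1 * np.sqrt(L * abs(e)) * np.sign(e) + z1)
                z1 += dt * (-lam2 * L * np.sign(e))
                df[k] = z1
            return df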

  16. L1/2 regularization based numerical method for effective reconstruction of bioluminescence tomography

    Science.gov (United States)

    Chen, Xueli; Yang, Defu; Zhang, Qitan; Liang, Jimin

    2014-05-01

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in related areas. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse problem cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was cast as an l1/2 regularization problem, and the weighted interior-point algorithm (WIPA) was applied to solve it by transforming it into the solution of a series of l1 regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method under different levels of Gaussian noise.
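
    A generic way to reduce an l1/2 regularizer to a series of weighted l1 problems is iterative reweighting; the sketch below uses plain ISTA for each subproblem, whereas the paper's WIPA uses a weighted interior-point solver, so this is only an illustration of the reduction.

        import numpy as np

        def l_half_reweighted(A, y, lam=0.01, outer=10, inner=100, eps=1e-6):
            # min ||Ax - y||^2 + lam * sum |x_i|^{1/2}, approximated by a
            # sequence of weighted l1 problems solved with ISTA.
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2        # ISTA step size
            for _ in range(outer):
                w = 0.5 / (np.sqrt(np.abs(x)) + eps)      # l1/2 -> weighted l1
                for _ in range(inner):
                    z = x - step * (A.T @ (A @ x - y))
                    x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
            return x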

  17. Approximate Sparsity and Nonlocal Total Variation Based Compressive MR Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Chengzhi Deng

    2014-01-01

    Full Text Available Recent developments in compressive sensing (CS) show that it is possible to accurately reconstruct the magnetic resonance (MR) image from undersampled k-space data by solving nonsmooth convex optimization problems, thereby significantly reducing the scanning time. In this paper, we propose a new MR image reconstruction method based on a compound regularization model associated with the nonlocal total variation (NLTV) and wavelet approximate sparsity. Nonlocal total variation can restore periodic textures and local geometric information better than total variation. The wavelet approximate sparsity achieves more accurate sparse reconstruction than fixed wavelet l0 and l1 norms. Furthermore, a variable splitting and augmented Lagrangian algorithm is presented to solve the proposed minimization problem. Experimental results on MR image reconstruction demonstrate that the proposed method outperforms many existing MR image reconstruction methods in both quantitative and visual quality assessment.

  18. Manifold-Based Reinforcement Learning via Locally Linear Reconstruction.

    Science.gov (United States)

    Xu, Xin; Huang, Zhenhua; Zuo, Lei; He, Haibo

    2017-04-01

    Feature representation is critical not only for pattern recognition tasks but also for reinforcement learning (RL) methods to solve learning control problems under uncertainties. In this paper, a manifold-based RL approach using the principle of locally linear reconstruction (LLR) is proposed for Markov decision processes with large or continuous state spaces. In the proposed approach, an LLR-based feature learning scheme is developed for value function approximation in RL, where a set of smooth feature vectors is generated by preserving the local approximation properties of neighboring points in the original state space. By using the proposed feature learning scheme, an LLR-based approximate policy iteration (API) algorithm is designed for learning control problems with large or continuous state spaces. The relationship between the value approximation error of a new data point and the estimated values of its nearest neighbors is analyzed. In order to compare different feature representation and learning approaches for RL, a comprehensive simulation and experimental study was conducted on three benchmark learning control problems. It is illustrated that under a wide range of parameter settings, the LLR-based API algorithm can obtain better learning control performance than the previous API methods with different feature representation schemes.
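
    The LLR weights themselves are the classical locally-linear-embedding weights: each query state is expressed as an affine combination of its nearest neighbors. A minimal sketch of this weight computation follows, not the paper's full approximate-policy-iteration pipeline.

        import numpy as np

        def llr_weights(x, neighbors, reg=1e-3):
            # Weights w minimizing ||x - sum_j w_j * neighbors[j]||^2
            # subject to sum(w) = 1.
            Z = neighbors - x                 # shift neighbors to the query point
            G = Z @ Z.T                       # k-by-k local Gram matrix
            G += reg * np.trace(G) * np.eye(len(G))   # regularize near-singular G
            w = np.linalg.solve(G, np.ones(len(G)))
            return w / w.sum()                # enforce the sum-to-one constraint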

  19. Ghost images and feasibility of reconstructions with the Richardson-Lucy algorithm

    Science.gov (United States)

    Llacer, Jorge; Nunez, Jorge

    1994-09-01

    This paper is the result of a question that was raised at the recent workshop on 'The Restoration of HST Images and Spectra II', which took place at the Space Telescope Science Institute in November 1993, and for which there was no forthcoming answer at that time. The question was: what is the null space (ghost images) of the Richardson-Lucy (RL) algorithm? Another question that came up, for which there is a straightforward answer, was: what does the MLE algorithm really do? In this paper we attempt to answer both questions. The paper begins with a brief description of the null space of an imaging system, with particular emphasis on the Hubble telescope. The imaging conditions under which there is a possibly damaging null space are described in terms of linear methods of reconstruction. For the uncorrected Hubble telescope, it is shown that for a PSF computed by TINYTIM on a 512 × 512 grid, there is no null space. We introduce the concept of a 'nearly null' space, with an unsharp distinction between the 'measurement' and 'null' components of an image, and generate a reduced-resolution Hubble point spread function (PSF) that has such a nearly null space. We then study the propagation characteristics of null images in the Maximum Likelihood Estimator (MLE), or Richardson-Lucy, algorithm and the nature of its possible effects, but we find in computer simulations that the algorithm is very robust to those effects: if they exist, the effects are local and tend to disappear with increasing iteration numbers. We then demonstrate how a PSF that has small components in the frequency domain results in noise magnification, just as one would expect in linear reconstruction. The answer to the second question is given in terms of the residuals of a reconstruction and the concept of feasibility.
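
    For reference, the Richardson-Lucy/MLE update discussed here is the multiplicative iteration x <- x * H^T(y / Hx), with H the PSF convolution; a minimal sketch using FFT-based convolution:

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, iters=50):
            # Classic Richardson-Lucy (Poisson MLE) deconvolution.
            x = np.full_like(image, image.mean(), dtype=float)
            psf_mirror = psf[::-1, ::-1]      # adjoint of the convolution
            for _ in range(iters):
                blur = fftconvolve(x, psf, mode="same")
                ratio = image / np.maximum(blur, 1e-12)   # guard divide-by-zero
                x *= fftconvolve(ratio, psf_mirror, mode="same")
            return x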

  20. Iterative image reconstruction algorithms in coronary CT angiography improve the detection of lipid-core plaque - a comparison with histology

    Energy Technology Data Exchange (ETDEWEB)

    Puchner, Stefan B. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Vienna (Austria); Ferencik, Maros [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Harvard Medical School, Division of Cardiology, Massachusetts General Hospital, Boston, MA (United States); Maurovich-Horvat, Pal [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Semmelweis University, MTA-SE Lenduelet Cardiovascular Imaging Research Group, Heart and Vascular Center, Budapest (Hungary); Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu [CV Path Institute Inc., Gaithersburg, MD (United States); Kauczor, Hans-Ulrich [University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany); Hoffmann, Udo [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Schlett, Christopher L. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany)

    2015-01-15

    To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was semiautomatically quantified in CCTA. LCP was defined by histology as fibroatheroma with a large lipid/necrotic core. Area under the curve (AUC) was derived from logistic regression analysis as a measure of diagnostic accuracy. Overall, 173 CCTA triplets (FBP/ASIR/MBIR) were co-registered with histology. LCP was present in 26 cross-sections. Average measured plaque area <60 HU was significantly larger in LCP compared to non-LCP cross-sections (mm²: 5.78 ± 2.29 vs. 3.39 ± 1.68 FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 MBIR; all p < 0.0001). AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)

  1. An Improved MC Three-Dimensional Reconstruction Algorithm

    Institute of Scientific and Technical Information of China (English)

    帅仁俊; 陈书晶

    2016-01-01

    The traditional Marching Cubes algorithm suffers from long computation times and low efficiency in 3D reconstruction. This paper proposes a Marching Cubes algorithm based on the golden-section point, which uses the golden-section point of each edge in place of the intersection of the isosurface with the edge. The linear interpolation used to compute the intersection point and normal vector on a shared edge is thereby reduced to basic arithmetic, and the number of such computations drops from four to one. Experiments show that the algorithm effectively reduces the running time and improves the execution efficiency of the algorithm.
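
    The replacement amounts to using a fixed split ratio on each edge instead of the per-edge linear interpolation t = (iso - v0) / (v1 - v0) of standard Marching Cubes; a minimal sketch, in which the exact ratio and which endpoint anchors it are assumptions:

        def golden_edge_point(p0, p1):
            # Fixed golden-section split of a cube edge (p0, p1), removing
            # the per-edge division of the interpolation-based variant.
            t = 0.6180339887498949            # golden ratio conjugate
            return tuple(a + t * (b - a) for a, b in zip(p0, p1))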

  2. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    Science.gov (United States)

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) while simultaneously reducing the total number of X-ray views (sparse view) is an effective means of achieving low-dose computed tomography (CT) scans. However, the image quality of conventional filtered back-projection (FBP) reconstructions usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under scanning protocols with reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we propose a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, each object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. An alternating optimization algorithm is then adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrate that the proposed TV_MP with appropriate control parameters ensures not only a higher signal-to-noise ratio (SNR) of the reconstructed image but also better resolution compared with the original TV method.
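
    The median prior action can be sketched as a single relaxation step that draws each voxel toward its local median; beta and the window size below are illustrative choices, not the paper's control parameters, and the full TV_MP iteration would alternate this with data-consistency and TV steps.

        import numpy as np
        from scipy.ndimage import median_filter

        def median_prior_step(x, beta=0.3, size=3):
            # Draw each voxel toward its local median, as the auxiliary
            # vector m does in the TV_MP iteration.
            return x + beta * (median_filter(x, size=size) - x)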

  3. Artifact reduction in short-scan CBCT by use of optimization-based reconstruction

    Science.gov (United States)

    Zhang, Zheng; Han, Xiao; Pearson, Erik; Pelizzari, Charles; Sidky, Emil Y.; Pan, Xiaochuan

    2016-05-01

    There is increasing interest in optimization-based reconstruction in research on, and applications of, cone-beam computed tomography (CBCT), because it has been shown to have the potential to reduce artifacts observed in reconstructions obtained with the Feldkamp-Davis-Kress (FDK) algorithm (or its variants), which is used extensively for image reconstruction in current CBCT applications. In this work, we carried out a study on optimization-based reconstruction for possible reduction of artifacts in FDK reconstruction, specifically from short-scan CBCT data. The investigation includes a set of optimization programs such as image-total-variation (TV)-constrained data-divergence minimization, data-weighting matrices such as the Parker weighting matrix, and objects of practical interest for demonstrating and assessing the degree of artifact reduction. The results reveal that appropriately designed optimization-based reconstruction, including image-TV-constrained reconstruction, can reduce significant artifacts observed in FDK reconstruction in CBCT with a short-scan configuration.

  4. Multidimensional dictionary learning algorithm for compressive sensing-based hyperspectral imaging

    Science.gov (United States)

    Zhao, Rongqiang; Wang, Qiang; Shen, Yi; Li, Jia

    2016-11-01

    The sparsifying representation plays a significant role in compressive sensing (CS)-based hyperspectral (HS) imaging. Training the dictionaries for each dimension from HS samples is very beneficial to accurate reconstruction. However, tensor dictionary learning algorithms are limited by a great amount of computation and by convergence difficulties. We propose a least squares (LS) type multidimensional dictionary learning algorithm for CS-based HS imaging. We develop a practical method for the dictionary updating stage, which avoids the use of the Kronecker product and thus has lower computational complexity. To guarantee convergence, we add a pruning stage to the algorithm to ensure the similarity and correlation among data in the spectral dimension. Our experimental results demonstrated that the dictionaries trained using the proposed algorithm performed better at CS-based HS image reconstruction than those trained with traditional LS-type dictionary learning algorithms and the commonly used analytical dictionaries.

  5. Investigation of the quantitative accuracy of 3D iterative reconstruction algorithms in comparison to filtered back projection method: a phantom study

    Science.gov (United States)

    Abuhadi, Nouf; Bradley, David; Katarey, Dev; Podolyak, Zsolt; Sassi, Salem

    2014-03-01

    Introduction: Single-Photon Emission Computed Tomography (SPECT) is used to measure and quantify radiopharmaceutical distribution within the body. The accuracy of quantification depends on acquisition parameters and reconstruction algorithms. Until recently, most SPECT images were constructed using filtered back projection techniques with no attenuation or scatter corrections. The introduction of 3-D iterative reconstruction algorithms, with the availability of both computed tomography (CT)-based attenuation correction and scatter correction, may provide more accurate measurement of radiotracer bio-distribution. The effect of attenuation and scatter corrections on the accuracy of SPECT measurements is well researched. It has been suggested that the combination of CT-based attenuation correction and scatter correction allows more accurate quantification of radiopharmaceutical distribution in SPECT studies (Bushberg et al., 2012). However, the effect of respiratory induced cardiac motion on SPECT images acquired using higher resolution algorithms, such as 3-D iterative reconstruction with attenuation and scatter corrections, has not been investigated. Aims: To investigate the quantitative accuracy of 3D iterative reconstruction algorithms in comparison to filtered back projection (FBP) methods implemented on cardiac SPECT/CT imaging with and without CT-based attenuation and scatter corrections; to investigate the effects of respiratory induced cardiac motion on myocardium perfusion quantification; and to present a comparison of spatial resolution for FBP and ordered subset expectation maximization (OSEM) Flash 3D, with and without respiratory induced motion, and with and without attenuation and scatter correction. Methods: This study was performed on a Siemens Symbia T16 SPECT/CT system using clinical acquisition protocols. Respiratory induced cardiac motion was simulated by imaging a cardiac phantom insert whilst moving it using a respiratory motion motor

  6. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.

  7. Chest wall infiltration by lung cancer: value of thin-sectional CT with different reconstruction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Uhrmeister, P.; Allmann, K.H.; Altehoefer, C.; Laubenberger, J.; Langer, M. [Department of Diagnostic Radiology, University Hospital Freiburg (Germany); Wertzel, H.; Hasse, J. [Department of Thoracic Surgery, University Hospital Freiburg (Germany)

    1999-09-01

    The aim of this investigation was to evaluate whether thin-sectional CT with different reconstruction algorithms can improve the diagnostic accuracy with regard to chest wall invasion in patients with peripheral bronchogenic carcinoma. Forty-one patients with intrapulmonary lesions and tumor contact to the thoracic wall as seen on CT staging underwent additional 1-mm CT slices with reconstruction in a high-resolution (HR) and an edge blurring, soft detail (SD) algorithm. Five criteria were applied and validated by histological findings. Using the criteria of the intact fat layer, HRCT had a sensitivity of 81 % and a specificity of 79 %, SD CT had a sensitivity of 96 % and a specificity of 78 %, and standard CT technique had a sensitivity of 50 % and a specificity of 71 %, respectively. Regarding changes of intercostal soft tissue, HRCT achieved a sensitivity of 71 % and a specificity of 96 %, SD CT had a sensitivity of 94 % and a specificity of 96 % (standard CT technique: sensitivity 50 % and specificity 96 %). For the other criteria, such as pleural contact area, angle, and osseous destruction, no significant differences were found. Diagnostic accuracy of chest wall infiltration can be improved by using thin sectional CT. Especially the application of an edge-blurring (SD) algorithm increases sensitivity and specificity without additional costs. (orig.) With 4 figs., 1 tab., 26 refs.

  8. Homotopy based Surface Reconstruction with Application to Acoustic Signals

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Anton, François

    2011-01-01

    This work introduces a new algorithm for surface reconstruction in R^3 from spatially arranged one-dimensional cross sections embedded in R^3. This is generally the case with acoustic signals that pierce an object non-destructively. Continuous deformations (homotopies) that smoothly... homotopies that can generate a C^2 surface. An algorithm to generate surfaces from acoustic sonar signals is presented with results. Reconstruction accuracies of the homotopies are compared by means of simulations performed on basic geometric primitives.

  9. NURBS-based geometric inverse reconstruction of free-form shapes

    Directory of Open Access Journals (Sweden)

    Deepika Saini

    2017-01-01

    Full Text Available In this study, a geometric inverse algorithm is proposed for reconstructing free-form shapes (curves and surfaces) in space from their arbitrary perspective images using a Non-Uniform Rational B-Spline (NURBS) model. In particular, the NURBS model is used to recover information about the required shape in space. An optimization problem is formulated to fit the NURBS to the digitized data in the images. Control points and weights are treated as the decision variables in the optimization process. The 3D shape reconstruction problem is reduced to a problem that comprises stereo reconstruction of the control points and computation of the corresponding weights. The correspondence between the control points in the two images is obtained using a third image. The performance of the proposed algorithm was validated on several examples based on synthetic and real images. Comparisons were made with a point-based method in terms of various types of errors.

  10. An explicit reconstruction algorithm for the transverse ray transform of a second rank tensor field from three axis data

    Science.gov (United States)

    Desai, Naeem M.; Lionheart, William R. B.

    2016-11-01

    We give an explicit plane-by-plane filtered back-projection reconstruction algorithm for the transverse ray transform of symmetric second rank tensor fields on Euclidean three-space, using data from rotation about three orthogonal axes. We show that in the general case two-axis data is insufficient, but we give an explicit reconstruction procedure for the potential case with two-axis data. We describe a numerical implementation of the three-axis algorithm and give reconstruction results for simulated data.

  11. A promising limited angular computed tomography reconstruction via segmentation based regional enhancement and total variation minimization

    Science.gov (United States)

    Zhang, Wenkun; Zhang, Hanming; Li, Lei; Wang, Linyuan; Cai, Ailong; Li, Zhongguo; Yan, Bin

    2016-08-01

    X-ray computed tomography (CT) is a powerful and common inspection technique used for industrial non-destructive testing. However, large-sized and heavily absorbing objects cause the formation of artifacts because of either the lack of specimen penetration in specific directions or the acquisition of data from only a limited angular range of views. Although sparse optimization-based methods, such as the total variation (TV) minimization method, can suppress artifacts to some extent, reconstructing images that converge to accurate values remains difficult because of the deficiency in continuous angular data and inconsistency in the projections. To address this problem, we use the idea of regional enhancement of the true values and suppression of illusory artifacts outside the region to develop an efficient iterative algorithm, based on the combination of regional enhancement of the true values and TV minimization for limited angular reconstruction. In this algorithm, a segmentation approach is introduced to distinguish regions of different image knowledge and generate the support mask of the image. A new regularization term, which contains the support knowledge to enhance the true values of the image, is incorporated into the objective function. The proposed optimization model is then solved efficiently by variable splitting and the alternating direction method. A compensation approach is also designed to extract useful information from the initial projections and thus reduce false segmentation results and correct the segmentation support and the segmented image. Results from both simulation studies and real CT data set reconstructions indicate that the proposed algorithm generates more accurate images than the other reconstruction methods. The experimental results show that this algorithm can produce high-quality reconstructed images for the limited angular reconstruction and suppress

  12. Homotopy Based Reconstruction from Acoustic Images

    DEFF Research Database (Denmark)

    Sharma, Ojaswa

    This thesis presents work in the direction of generating smooth surfaces from linear cross sections embedded in R2 and R3 using homotopy continuation. The methods developed in this research are generic and can be applied to higher dimensions as well. Two types of problems addressed in this research... of the inherent arrangement. The problem of reconstruction from arbitrary cross sections is a generic problem and is also shown to be solved here using the mathematical tool of continuous deformations. As part of a complete processing pipeline, segmentation using level set methods is explored for acoustic images, and fast... GPU (Graphics Processing Unit) based methods are suggested for streaming computation on large volumes of data. Validation of results for acoustic images is not straightforward due to the unavailability of ground truth. Accuracy figures for the suggested methods are provided using phantom objects...

  13. Simultaneous maximum-likelihood reconstruction for x-ray grating based phase-contrast tomography avoiding intermediate phase retrieval

    CERN Document Server

    Ritter, André; Durst, Jürgen; Gödel, Karl; Haas, Wilhelm; Michel, Thilo; Rieger, Jens; Weber, Thomas; Wucherer, Lukas; Anton, Gisela

    2013-01-01

    Phase-wrapping artifacts, statistical image noise and the need for a minimum amount of phase steps per projection limit the practicability of x-ray grating based phase-contrast tomography, when using filtered back projection reconstruction. For conventional x-ray computed tomography, the use of statistical iterative reconstruction algorithms has successfully reduced artifacts and statistical issues. In this work, an iterative reconstruction method for grating based phase-contrast tomography is presented. The method avoids the intermediate retrieval of absorption, differential phase and dark field projections. It directly reconstructs tomographic cross sections from phase stepping projections by the use of a forward projecting imaging model and an appropriate likelihood function. The likelihood function is then maximized with an iterative algorithm. The presented method is tested with tomographic data obtained through a wave field simulation of grating based phase-contrast tomography. The reconstruction result...

  14. Immune Based Intrusion Detector Generating Algorithm

    Institute of Scientific and Technical Information of China (English)

    DONG Xiao-mei; YU Ge; XIANG Guang

    2005-01-01

    Immune-based intrusion detection approaches are studied. The methods of constructing the self set and generating mature detectors are researched and improved. A binary-encoding-based self set construction method is applied. First, the traditional mature detector generating algorithm is improved to generate mature detectors and detect intrusions faster. Then, a novel mature detector generating algorithm is proposed based on the negative selection mechanism. With this algorithm, fewer mature detectors are needed to detect abnormal activities in the network; therefore, the speed of generating mature detectors and of intrusion detection is improved. Compared with systems based on existing algorithms, the intrusion detection system based on the proposed algorithm has higher speed and accuracy.
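
    The negative selection mechanism itself can be sketched as follows: random candidate detectors survive only if they match no self string. Exact matching is used here for simplicity; the r-contiguous-bits rule is the more common matching choice, and all names are hypothetical.

        import random

        def generate_detectors(self_set, n_detectors, length, max_tries=100000):
            # Keep random binary candidates that do not match the self set.
            detectors, tries = set(), 0
            while len(detectors) < n_detectors and tries < max_tries:
                cand = "".join(random.choice("01") for _ in range(length))
                tries += 1
                if cand not in self_set:       # censor candidates matching self
                    detectors.add(cand)
            return detectors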

  15. Kernel-Based Reconstruction of Graph Signals

    Science.gov (United States)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals imports benefits from statistical learning, offers fresh insights, and allows for estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
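
    As a minimal sketch of kernel-based reconstruction on a graph, the snippet below performs kernel ridge regression with a diffusion kernel built from the graph Laplacian; the kernel choice and parameters are illustrative stand-ins, not the paper's estimators.

        import numpy as np
        from scipy.linalg import expm

        def kernel_graph_reconstruct(L, sampled, y, sigma2=1.0, mu=0.1):
            # L: graph Laplacian; `sampled`: observed vertex indices;
            # y: noisy observed values. Diffusion kernel K = expm(-sigma2 * L).
            K = expm(-sigma2 * L)
            Kss = K[np.ix_(sampled, sampled)]
            alpha = np.linalg.solve(Kss + mu * np.eye(len(sampled)), y)
            return K[:, sampled] @ alpha      # estimate at every vertex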

  16. Structural algorithm to reservoir reconstruction using passive seismic data (synthetic example)

    Energy Technology Data Exchange (ETDEWEB)

    Smaglichenko, Tatyana A.; Volodin, Igor A.; Lukyanitsa, Andrei A.; Smaglichenko, Alexander V.; Sayankina, Maria K. [Oil and Gas Research Institute, Russian Academy of Science, Gubkina str.3, 119333, Moscow (Russian Federation); Faculty of Computational Mathematics and Cybernetics, M.V. Lomonosov Moscow State University, Leninskie gory, 1, str.52, Second Teaching Building, 119991 Moscow (Russian Federation); Shmidt's Institute of Physics of the Earth, Russian Academy of Science, Bolshaya Gruzinskaya str. 10, str.1, 123995 Moscow (Russian Federation); Oil and Gas Research Institute, Russian Academy of Science, Gubkina str.3, 119333, Moscow (Russian Federation)

    2012-09-26

    The use of passive seismic observations to detect a reservoir is a new direction in the prospecting and exploration of hydrocarbons. To identify a thin reservoir model, we applied a modification of the Gaussian elimination method under conditions of incomplete synthetic data. Because of the singularity of the matrix, the conventional method does not work; therefore, a structural algorithm has been developed by analyzing the given model as a complex model. Numerical results demonstrate its advantage over the usual solution approach. We conclude that the gas reservoir is reconstructed by retrieving the image of the encasing shale beneath it.

  17. Enhanced temporal resolution at cardiac CT with a novel CT image reconstruction algorithm: Initial patient experience

    Energy Technology Data Exchange (ETDEWEB)

    Apfaltrer, Paul, E-mail: paul.apfaltrer@medma.uni-heidelberg.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Institute of Clinical Radiology and Nuclear Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim (Germany); Schoendube, Harald, E-mail: harald.schoendube@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Schoepf, U. Joseph, E-mail: schoepf@musc.edu [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Allmendinger, Thomas, E-mail: thomas.allmendinger@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Tricarico, Francesco, E-mail: francescotricarico82@gmail.com [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Department of Bioimaging and Radiological Sciences, Catholic University of the Sacred Heart, “A. Gemelli” Hospital, Largo A. Gemelli 8, Rome (Italy); Schindler, Andreas, E-mail: andreas.schindler@campus.lmu.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Vogt, Sebastian, E-mail: sebastian.vogt@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Sunnegårdh, Johan, E-mail: johan.sunnegardh@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); and others

    2013-02-15

    Objective: To evaluate the effect of a temporal resolution improvement method (TRIM) for cardiac CT on diagnostic image quality for coronary artery assessment. Materials and methods: The TRIM-algorithm employs an iterative approach to reconstruct images from less than 180° of projections and uses a histogram constraint to prevent the occurrence of limited-angle artifacts. This algorithm was applied in 11 obese patients (7 men, 67.2 ± 9.8 years) who had undergone second generation dual-source cardiac CT with 120 kV, 175–426 mAs, and 500 ms gantry rotation. All data were reconstructed with a temporal resolution of 250 ms using traditional filtered-back projection (FBP) and of 200 ms using the TRIM-algorithm. Contrast attenuation and contrast-to-noise-ratio (CNR) were measured in the ascending aorta. The presence and severity of coronary motion artifacts was rated on a 4-point Likert scale. Results: All scans were considered of diagnostic quality. Mean BMI was 36 ± 3.6 kg/m². Average heart rate was 60 ± 9 bpm. Mean effective dose was 13.5 ± 4.6 mSv. When comparing FBP- and TRIM reconstructed series, the attenuation within the ascending aorta (392 ± 70.7 vs. 396.8 ± 70.1 HU, p > 0.05) and CNR (13.2 ± 3.2 vs. 11.7 ± 3.1, p > 0.05) were not significantly different. A total of 110 coronary segments were evaluated. All studies were deemed diagnostic; however, there was a significant (p < 0.05) difference in the severity score distribution of coronary motion artifacts between FBP (median = 2.5) and TRIM (median = 2.0) reconstructions. Conclusion: The algorithm evaluated here delivers diagnostic imaging quality of the coronary arteries despite 500 ms gantry rotation. Possible applications include improvement of cardiac imaging on slower gantry rotation systems or mitigation of the trade-off between temporal resolution and CNR in obese patients.

  18. Performance of the Mean-Timer algorithm for DT Local Reconstruction and muon time measurement.

    CERN Document Server

    CMS Collaboration

    2014-01-01

    The Mean-Timer algorithm has recently been implemented as the default for local reconstruction within the CMS Drift Tubes (DT) for muons that appear to be out-of-time (OOT) or that lack measured hits in one of the two space projections. Besides improving the spatial resolution for OOT muons, this method allows a precise time measurement that can be used to tag OOT muons, in order either to reject them (e.g. as a result of OOT pile-up) or to select them for exotic physics analyses.

  19. The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution neuroreceptor PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Angelis, G I; Kotasidis, F A; Matthews, J C [Imaging, Proteomics and Genomics, MAHSC, University of Manchester, Wolfson Molecular Imaging Centre, Manchester (United Kingdom); Reader, A J [Montreal Neurological Institute, McGill University, Montreal (Canada); Lionheart, W R, E-mail: georgios.angelis@mmic.man.ac.uk [School of Mathematics, University of Manchester, Alan Turing Building, Manchester (United Kingdom)

    2011-07-07

    Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [11C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice

  20. Fuzzy Controllers Based Multipath Routing Algorithm in MANET

    Science.gov (United States)

    Pi, Shangchao; Sun, Baolin

    Mobile ad hoc networks (MANETs) consist of a collection of wireless mobile nodes which dynamically exchange data among themselves without reliance on a fixed base station or a wired backbone network. Due to the limited transmission range of wireless network nodes, multiple hops are usually needed for a node to exchange information with any other node in the network. Multipath routing allows the establishment of multiple paths between a single source and a single destination node. Multipath routing in mobile ad hoc networks is difficult because the network topology may change constantly and the available alternative paths are inherently unreliable. This paper introduces a fuzzy-controller-based multipath routing algorithm for MANETs (FMRM). The key idea of the FMRM algorithm is to construct fuzzy controllers that help reduce route reconstructions in the ad hoc network. The simulation results show that the proposed approach is effective and efficient in applications to MANETs and is a viable approach to multipath routing decisions.

  1. Lane Detection Based on Machine Learning Algorithm

    National Research Council Canada - National Science Library

    Chao Fan; Jingbo Xu; Shuai Di

    2013-01-01

    In order to improve the accuracy and robustness of lane detection in complex conditions, such as shadows and changing illumination, a novel detection algorithm was proposed based on machine learning...

  2. Anterior and middle skull base reconstruction after tumor resection

    Institute of Scientific and Technical Information of China (English)

    WANG Bo; WU Sheng-tian; LI Zhi; LIU Pi-nan

    2010-01-01

    Background Surgical management of skull base tumors is still challenging today due to the sophisticated operative procedures involved. Surgeons who specialize in skull base surgery are endeavoring to improve the outcomes of patients with skull base tumors. A reliable skull base reconstruction after tumor resection is of paramount importance in avoiding life-threatening complications, such as cerebrospinal fluid leakage and intracranial infection. This study aimed at investigating the indications, operative approaches and operative techniques of anterior and middle skull base reconstruction. Methods A retrospective analysis was carried out on 44 patients who underwent anterior and middle skull base reconstruction in the Department of Neurosurgery at Beijing Tiantan Hospital between March 2005 and March 2008. Different surgical approaches were selected according to the different regions involved by the tumor. Microsurgery was carried out for tumor resection and combined endoscopic surgery was performed in some cases. According to the different locations and sizes of the various defects after tumor resection, an individualized skull base soft tissue reconstruction was carried out for each case with artificial materials, pedicled flaps, free autologous tissue, and free vascularized muscle flaps, separately. A skull base bone reconstruction was carried out in some cases simultaneously. Results Soft tissue reconstruction was performed in all 44 cases, with a fascia lata repair in 9 cases, a free vascularized muscle flap in 1 case, a pedicled muscle flap in 14 cases, and a pedicled periosteal flap in 20 cases. Skull base bone reconstruction was performed on 10 cases simultaneously. The materials for bone reconstruction included titanium mesh, free autogenous bone, and a Medpor implant. The result of skull base reconstruction was satisfactory in all patients. Postoperative early-stage complications occurred in 10 cases, with full recovery after conventional treatment. Conclusions The specific

  3. QPSO-based adaptive DNA computing algorithm.

    Science.gov (United States)

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with parameters adapted toward the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy according to the DNA computing algorithm.
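
    For reference, the standard QPSO position update draws each particle toward a local attractor between its personal best and the global best, with a jump scaled by the distance to the mean best position; a minimal sketch (beta, the contraction-expansion coefficient, is an illustrative value):

        import numpy as np

        def qpso_step(x, pbest, gbest, beta=0.75, rng=np.random):
            # x, pbest: (n_particles, dim); gbest: (dim,).
            n, d = x.shape
            mbest = pbest.mean(axis=0)                    # mean best position
            phi = rng.rand(n, d)
            p = phi * pbest + (1.0 - phi) * gbest         # local attractors
            u = rng.rand(n, d)
            sign = np.where(rng.rand(n, d) < 0.5, -1.0, 1.0)
            return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)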

  4. QPSO-Based Adaptive DNA Computing Algorithm

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Full Text Available DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with parameters adapted toward the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy according to the DNA computing algorithm.

  5. Tensor decomposition and nonlocal means based spectral CT reconstruction

    Science.gov (United States)

    Zhang, Yanbo; Yu, Hengyong

    2016-10-01

    As one of the state-of-the-art detectors, photon counting detector is used in spectral CT to classify the received photons into several energy channels and generate multichannel projection simultaneously. However, the projection always contains severe noise due to the low counts in each energy channel. How to reconstruct high-quality images from photon counting detector based spectral CT is a challenging problem. It is widely accepted that there exists self-similarity over the spatial domain in a CT image. Moreover, because a multichannel CT image is obtained from the same object at different energy, images among channels are highly correlated. Motivated by these two characteristics of the spectral CT, we employ tensor decomposition and nonlocal means methods for spectral CT iterative reconstruction. Our method includes three basic steps. First, each channel image is updated by using the OS-SART. Second, small 3D volumetric patches (tensor) are extracted from the multichannel image, and higher-order singular value decomposition (HOSVD) is performed on each tensor, which can help to enhance the spatial sparsity and spectral correlation. Third, in order to employ the self-similarity in CT images, similar patches are grouped to reduce noise using the nonlocal means method. These three steps are repeated alternatively till the stopping criteria are met. The effectiveness of the developed algorithm is validated on both numerically simulated and realistic preclinical datasets. Our results show that the proposed method achieves promising performance in terms of noise reduction and fine structures preservation.
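
    The HOSVD step on a patch tensor can be sketched as follows: unfold along each mode, keep the leading singular vectors, and reproject; the per-mode ranks are illustrative tuning values, and this is only the tensor-decomposition ingredient, not the full OS-SART/nonlocal-means loop.

        import numpy as np

        def unfold(T, mode):
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def fold(M, mode, shape):
            rest = [s for i, s in enumerate(shape) if i != mode]
            return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

        def hosvd_truncate(T, ranks):
            # Truncated HOSVD of a spatial-spectral patch tensor T.
            Us = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
                  for m, r in enumerate(ranks)]
            X = T
            for m, U in enumerate(Us):        # project onto truncated bases
                shape = X.shape[:m] + (U.shape[1],) + X.shape[m + 1:]
                X = fold(U.T @ unfold(X, m), m, shape)
            for m, U in enumerate(Us):        # map back to the original size
                shape = X.shape[:m] + (U.shape[0],) + X.shape[m + 1:]
                X = fold(U @ unfold(X, m), m, shape)
            return X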

  6. Precise two-dimensional D-bar reconstructions of human chest and phantom tank via sinc-convolution algorithm

    Directory of Open Access Journals (Sweden)

    Abbasi Mahdi

    2012-06-01

    Full Text Available Abstract Background Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of human organs such as the lungs, heart, brain and breast. Every practical EIT reconstruction algorithm should be sufficiently efficient in terms of convergence rate and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in the D-bar framework. Methods In the first step, synthetic and experimental data were used to compute an intermediate object named the scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via the sinc-convolution algorithm to find the square root of the conductivity for each pixel of the image. For comparison, the multigrid and NOSER algorithms were implemented under a similar setting. The quality of the reconstructions of synthetic models was tested against GREIT-approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Results Evaluation of the synthetic reconstructions shows that the quality of the sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the smallest relative errors and the most faithful reconstructions were obtained with sinc-convolution on experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by the sinc-convolution algorithm. Conclusions Parametric evaluation demonstrates the efficiency of sinc-convolution in reconstructing accurate conductivity images from experimental data. Excellent results in phantom and clinical

  7. Evolutionary algorithm based configuration interaction approach

    CERN Document Server

    Chakraborty, Rahul

    2016-01-01

    A stochastic configuration interaction method based on an evolutionary algorithm is designed as an affordable approximation to full configuration interaction (FCI). The algorithm comprises initiation, propagation and termination steps, where the propagation step is performed with cloning, mutation and cross-over, taking inspiration from the genetic algorithm. We have tested its accuracy on the 1D Hubbard problem and on a molecular system (symmetric bond breaking of the water molecule). We have tested two different fitness functions, based on the energy of the determinants and on the CI coefficients of the determinants. We find that the absolute value of the CI coefficients is a more suitable fitness function when combined with a fixed selection scheme.
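
    The record describes an initiation/propagation/termination loop with cloning, mutation and cross-over. The skeleton below, with bitstrings standing in for determinant selections and a toy fitness standing in for the energy or |CI coefficient| criteria, only illustrates that loop structure; it is not the authors' implementation.

```python
import random

def evolve(fitness, n_bits=12, pop_size=30, generations=100, p_mut=0.05):
    """Initiation -> propagation (cloning, cross-over, mutation) ->
    termination, in the spirit of the loop described above."""
    rng = random.Random(1)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]                     # initiation
    for _ in range(generations):                         # propagation
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)               # one-point cross-over
            child = [bit ^ (rng.random() < p_mut)        # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = elite + children                           # cloning of the elite
    return max(pop, key=fitness)                         # termination

best = evolve(lambda bits: sum(bits))                    # toy fitness
```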

  8. A New Base-6 FFT Algorithm

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A new FFT algorithm, called the base-6 FFT algorithm, has been deduced. The operation count for computing the DFT of a complex sequence of length N = 6^r with the base-6 FFT algorithm is Mr(N) = (14/3)N·log6(N) − 4N + 4 real multiplications and Ar(N) = (23/3)N·log6(N) − 2N + 2 real additions. For a real sequence, the operation counts are half of those for a complex sequence.
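
    For illustration, a plain recursive decimation-in-time split with radix 6 is sketched below; it computes the same DFT for N = 6^r, but as written it does not realize the real-operation savings derived in the paper.

```python
import cmath

def fft_radix6(x):
    """Recursive decimation-in-time FFT for sequences of length
    N = 6**r: split into 6 decimated subsequences, transform each,
    and recombine with twiddle factors exp(-2j*pi*r*k/N)."""
    N = len(x)
    if N == 1:
        return list(x)
    assert N % 6 == 0, "length must be a power of 6"
    subs = [fft_radix6(x[r::6]) for r in range(6)]   # 6 half-size DFTs
    M = N // 6
    return [sum(cmath.exp(-2j * cmath.pi * r * k / N) * subs[r][k % M]
                for r in range(6))
            for k in range(N)]

X = fft_radix6([complex(n) for n in range(36)])      # N = 6**2
```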

  9. GPU-based Iterative Cone Beam CT Reconstruction Using Tight Frame Regularization

    CERN Document Server

    Jia, Xun; Lou, Yifei; Jiang, Steve B

    2010-01-01

    X-ray imaging dose from serial cone-beam CT (CBCT) scans raises a clinical concern in most image-guided radiation therapy procedures. The goal of this paper is to develop a fast GPU-based algorithm to reconstruct high-quality CBCT images from undersampled and noisy projection data so as to lower the imaging dose. For this purpose, we have developed an iterative tight-frame (TF) based CBCT reconstruction algorithm. The condition that a real CBCT image has a sparse representation under a TF basis is imposed in the iteration process as regularization of the solution. To speed up the computation, a multi-grid method is employed. Our GPU implementation has achieved high computational efficiency, and a CBCT image of resolution 512x512x70 can be reconstructed in about 139 sec. We have tested our algorithm on a digital NCAT phantom and a physical Catphan phantom. We find that our TF-based algorithm yields much higher CBCT image quality than a conventional FDK algorithm in the context of undersamp...
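
    The record's tight-frame algorithm is GPU- and geometry-specific, but its core idea, alternating a data-fidelity gradient step with shrinkage of transform coefficients, can be sketched generically. In the sketch below an orthonormal DCT stands in for the tight frame and a random matrix for the cone-beam projector; both are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.fft import dct, idct

def ista_reconstruct(A, y, lam=0.05, iters=300):
    """Generic iterative shrinkage-thresholding (ISTA) for
    min_x 0.5*||Ax - y||^2 + lam*||Wx||_1, with W an orthonormal
    DCT standing in for a sparsifying tight frame."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L, L = ||A||^2
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)              # gradient step
        c = dct(x, norm='ortho')                      # analysis
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # shrink
        x = idct(c, norm='ortho')                     # synthesis
    return x

# Toy usage: recover a DCT-sparse signal from 40% random measurements.
rng = np.random.default_rng(0)
c_true = np.zeros(128)
c_true[[3, 17, 40]] = [1.0, -0.7, 0.5]
x_true = idct(c_true, norm='ortho')
A = rng.standard_normal((51, 128)) / np.sqrt(51)
x_hat = ista_reconstruct(A, A @ x_true)
```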

  10. 3D reconstruction of worn parts for flexible remanufacture based on robotic arc welding

    Institute of Scientific and Technical Information of China (English)

    Yin Ziqiang; Zhang Guangjun; Gao Hongming; Wu Lin

    2010-01-01

    3D reconstruction of worn parts is the foundation of a remanufacturing system based on robotic arc welding, because it provides 3D geometric information for robot task planning. In this investigation, a novel 3D reconstruction system based on linear structured-light vision sensing is developed. The system hardware consists of an MTC368-CB CCD camera, an MLH-645 laser projector and a DH-CG300 image-grabbing card. The system software is developed to control the image data capture. To reconstruct the 3D geometric information from the captured images, a two-step rapid calibration algorithm is proposed. The 3D reconstruction experiments show satisfactory results.
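
    A minimal sketch of the geometry behind linear structured-light sensing as described above: each detected stripe pixel back-projects to a camera ray, which is intersected with the calibrated laser plane. The intrinsics and plane parameters below are toy values; the paper's two-step calibration procedure would estimate them.

```python
import numpy as np

# Camera intrinsics and laser plane (n . X = d), toy values only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
n = np.array([0.0, -0.6, 0.8])
d = 0.5

def stripe_point_to_3d(u, v):
    """Triangulate a laser-stripe pixel (u, v): intersect its
    back-projected ray t * K^-1 [u, v, 1] with the laser plane."""
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))   # back-projected ray
    t = d / (n @ ray)                                 # ray-plane intersection
    return t * ray                                    # point in camera frame

X = stripe_point_to_3d(350.0, 260.0)
```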

  11. An Algorithm For Training Multilayer Perceptron MLP For Image Reconstruction Using Neural Network Without Overfitting.

    Directory of Open Access Journals (Sweden)

    Mohammad Mahmudul Alam Mia

    2015-02-01

    Full Text Available Abstract Recently, the back-propagation neural network (BPNN) has been applied successfully in many areas with excellent generalization results, for example rule extraction, classification and evaluation. In this paper, the Levenberg-Marquardt back-propagation algorithm is used to train the network and reconstruct the image. The Levenberg-Marquardt algorithm is found to be significantly more proficient. A practical problem with MLPs is selecting the correct complexity for the model, i.e., the right number of hidden units or the correct regularization parameters. In this paper, a study is made to determine the number of neurons in each hidden layer and the number of hidden layers needed to obtain high accuracy. We performed regression (R) analysis to measure the correlation between outputs and targets.
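
    A compact sketch of the Levenberg-Marquardt update used for training: the damped Gauss-Newton step (J^T J + mu*I)^-1 J^T r with an accept/reject rule on the damping mu. A finite-difference Jacobian and a one-hidden-unit toy network keep the example short; a real trainer would obtain J by back-propagation, and all sizes here are illustrative.

```python
import numpy as np

def lm_train(w0, residual, iters=50, mu=1e-2):
    """Levenberg-Marquardt: residual(w) returns per-sample errors;
    the Jacobian is approximated by finite differences for brevity."""
    w = w0.astype(float)
    for _ in range(iters):
        r = residual(w)
        J = np.empty((r.size, w.size))
        for j in range(w.size):                  # finite-difference Jacobian
            dw = np.zeros_like(w)
            dw[j] = 1e-6
            J[:, j] = (residual(w + dw) - r) / 1e-6
        step = np.linalg.solve(J.T @ J + mu * np.eye(w.size), J.T @ r)
        w_new = w - step                         # damped Gauss-Newton step
        if np.sum(residual(w_new) ** 2) < np.sum(r ** 2):
            w, mu = w_new, mu * 0.5              # accept: trust the model more
        else:
            mu *= 2.0                            # reject: increase damping
    return w

# Toy one-hidden-unit MLP fitted to y = tanh(2x - 1).
X = np.linspace(-1, 1, 40)
Y = np.tanh(2 * X - 1)
def residual(w):                                 # w = [w1, b1, w2, b2]
    return w[2] * np.tanh(w[0] * X + w[1]) + w[3] - Y
w = lm_train(np.array([0.5, 0.0, 0.5, 0.0]), residual)
```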

  12. Spectral Reconstruction Based on Svm for Cross Calibration

    Science.gov (United States)

    Gao, H.; Ma, Y.; Liu, W.; He, H.

    2017-05-01

    Chinese HY-1C/1D satellites will use a 5 nm/10 nm resolution visible-near-infrared (VNIR) hyperspectral sensor with a solar calibrator to cross-calibrate with other sensors. The hyperspectral radiance data are composed of the average radiance in the sensor's passbands and bear a spectral smoothing effect, so a transform from the hyperspectral radiance data to the 1-nm-resolution apparent spectral radiance needs to be implemented by spectral reconstruction. To avoid the noise accumulation and deterioration that iterative algorithms suffer after several iterations, a novel regression method based on SVM is proposed, which can closely approximate arbitrarily complex nonlinear relationships and provides better generalization capability through learning. From a systems point of view, the relationship between the apparent radiance and the equivalent radiance is a nonlinear mapping introduced by the spectral response function (SRF); the SVM transforms the low-dimensional nonlinear problem into a high-dimensional linear one through a kernel function, obtaining the globally optimal solution by virtue of its quadratic form. The experiment is performed using 6S-simulated spectra that account for the SRF and SNR of the hyperspectral sensor, measured reflectance spectra of water bodies and different atmospheric conditions. The comparative results show that, first, the proposed method reconstructs with higher accuracy, especially for high-frequency signal components; second, as the spectral resolution of the hyperspectral sensor decreases, the proposed method performs better than the iterative method; finally, the root mean square relative error (RMSRE), which evaluates the difference between the reconstructed spectrum and the real spectrum over the whole spectral range, is at least halved by the proposed method.
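
    A small sketch of the SVM-regression idea described above, assuming scikit-learn's SVR: one regressor per output wavelength maps band-averaged radiances to fine-resolution radiance. The boxcar band model stands in for the real spectral response functions, and all sizes and kernel settings are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_train, n_bands, n_fine = 200, 30, 300
fine = rng.random((n_train, n_fine)).cumsum(axis=1)   # smooth toy spectra
fine /= fine[:, -1:]                                  # normalize each spectrum
srf = np.zeros((n_bands, n_fine))                     # boxcar stand-in "SRFs"
for b in range(n_bands):
    srf[b, b * 10:(b + 1) * 10] = 0.1
coarse = fine @ srf.T                                 # band-averaged inputs

# One SVR per (sampled) output wavelength: coarse bands -> fine radiance.
models = [SVR(kernel='rbf', C=10.0, epsilon=1e-3).fit(coarse, fine[:, k])
          for k in range(0, n_fine, 50)]              # a few sample outputs
recon = np.stack([m.predict(coarse[:5]) for m in models], axis=1)
```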

  13. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    Science.gov (United States)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for understanding the underlying physical or chemical mechanisms of the dynamic behaviors of the measured objects. In real-world measurement environments, imaging objects are often in a dynamic process, and exploiting the spatial-temporal correlations of this dynamic nature contributes to improving the imaging quality. Unlike existing imaging methods commonly used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low-rank tensor and a sparse tensor, within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low-rank tensor models the similar spatial distribution information among frames, which changes slowly over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which change rapidly over time. With the assistance of Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, which takes into account the multi-frame measurement data, the dynamic evolution of the time-varying imaging object, and the characteristics of the low-rank tensor and the sparse tensor, is proposed to convert the imaging task of the ECT measurement into the reconstruction of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed in a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
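
    To make the low-rank-plus-sparse split concrete, here is a simplified matrix analogue (vectorized frames as columns) using alternating singular-value thresholding and entrywise soft thresholding. This is only a sketch of the modeling idea; the paper works with a third-order tensor and a Tikhonov-regularized cost, and the threshold lam is an illustrative choice.

```python
import numpy as np

def lowrank_plus_sparse(D, lam=0.1, iters=50):
    """Alternating decomposition D ~ L + S: L is the slowly changing
    common background (low rank), S the fast per-frame changes (sparse)."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        # Low-rank update: singular-value thresholding of D - S.
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
        # Sparse update: entrywise soft threshold of D - L.
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

frames = np.random.rand(64, 10)     # toy: 10 frames of a 64-pixel image
L, S = lowrank_plus_sparse(frames)
```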

  14. Improved signal-to-noise ratio in parallel coronary artery magnetic resonance angiography using graph cuts based Bayesian reconstruction.

    Science.gov (United States)

    Singh, Gurmeet; Nguyen, Thanh; Kressler, Bryan; Spincemaille, Pascal; Raj, Ashish; Zabih, Ramin; Wang, Yi

    2006-01-01

    High-resolution 3D coronary artery MR angiography is time-consuming and can benefit from the accelerated data acquisition provided by parallel imaging techniques without sacrificing spatial resolution. Currently popular maximum-likelihood-based parallel imaging reconstruction techniques such as the SENSE algorithm offer this advantage at the cost of a reduced signal-to-noise ratio (SNR). Maximum a posteriori (MAP) reconstruction techniques that incorporate globally smooth priors have been developed to recover this SNR loss, but they tend to blur sharp edges in the target image. The objective of this study is to demonstrate the feasibility of employing edge-preserving Markov random field priors in a MAP reconstruction framework, which can be solved efficiently using a graph-cuts-based optimization algorithm. A preliminary human study shows that our reconstruction provides significantly better SNR than the SENSE reconstruction performed by a commercially available scanner for navigator-gated steady-state free-precession 3D coronary magnetic resonance angiography images (n = 4).

  15. Seizure detection algorithms based on EMG signals

    DEFF Research Database (Denmark)

    Conradsen, Isa

    Background: The currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system based on electromyography (EMG) signals is a theoretical possibility. Objective: to show whether medical signal processing of EMG data is feasible for detection of epileptic seizures. Methods: EMG signals during generalised seizures were recorded from 3 patients (with 20 seizures in total). Two possible medical signal processing algorithms were tested. The first algorithm was based on the amplitude of the signal. The other algorithm was based on information about the signal in the frequency domain, and it focused on synchronisation of the electrical activity in a single muscle during the seizure. Results: The amplitude-based algorithm reliably detected seizures in 2 of the patients, while...
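
    A minimal sketch of an amplitude-based detector like the first algorithm described above: a moving RMS envelope is compared against a threshold learned from an assumed seizure-free baseline segment. The window length, threshold factor k and the simulated burst are illustrative choices, not the thesis's parameters.

```python
import numpy as np

def amplitude_alarm(emg, fs, win_s=0.5, k=4.0, baseline_s=10.0):
    """Moving-RMS amplitude detector: alarm where the envelope
    exceeds mean + k*std of a seizure-free baseline segment."""
    win = int(win_s * fs)
    power = np.convolve(emg ** 2, np.ones(win) / win, mode='same')
    rms = np.sqrt(power)                            # moving RMS envelope
    base = rms[: int(baseline_s * fs)]              # assumed seizure-free
    thresh = base.mean() + k * base.std()
    return rms > thresh                             # boolean alarm trace

fs = 1000.0
t = np.arange(0, 60, 1 / fs)
emg = 0.05 * np.random.randn(t.size)                # toy background EMG
emg[30000:35000] += 0.5 * np.random.randn(5000)     # simulated seizure burst
alarm = amplitude_alarm(emg, fs)
```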

  16. Virtual patient 3D dose reconstruction using in air EPID measurements and a back-projection algorithm for IMRT and VMAT treatments.

    Science.gov (United States)

    Olaciregui-Ruiz, Igor; Rozendaal, Roel; van Oers, René F M; Mijnheer, Ben; Mans, Anton

    2017-05-01

    At our institute, a transit back-projection algorithm is used clinically to reconstruct in vivo patient and in-phantom 3D dose distributions using EPID measurements behind a patient or a polystyrene slab phantom, respectively. In this study, an extension to this algorithm is presented whereby in-air EPID measurements are used in combination with CT data to reconstruct 'virtual' 3D dose distributions. By combining virtual and in vivo patient verification data for the same treatment, patient-related errors can be separated from machine, planning and model errors. The virtual back-projection algorithm is described and verified against the transit algorithm with measurements made behind a slab phantom, against dose measurements made with an ionization chamber and with the OCTAVIUS 4D system, as well as against TPS patient data. Virtual and in vivo patient dose verification results are also compared. Virtual dose reconstructions agree within 1% with ionization chamber measurements. The average γ-pass rate values (3% global dose/3 mm) in the 3D dose comparison with the OCTAVIUS 4D system and the TPS patient data are 98.5 ± 1.9% (1 SD) and 97.1 ± 2.9% (1 SD), respectively. For virtual patient dose reconstructions, the differences with the TPS in median dose to the PTV remain within 4%. Virtual patient dose reconstruction makes pre-treatment verification based on deviations of DVH parameters feasible and eliminates the need for phantom positioning and re-planning. Virtual patient dose reconstructions have additional value in the inspection of in vivo deviations, particularly in situations where CBCT data is not available (or not conclusive). Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  17. Performance of Hull-Detection Algorithms For Proton Computed Tomography Reconstruction

    CERN Document Server

    Schultze, Blake; Censor, Yair; Schulte, Reinhard; Schubert, Keith Evan

    2014-01-01

    Proton computed tomography (pCT) is a novel imaging modality developed for patients receiving proton radiation therapy. The purpose of this work was to investigate hull-detection algorithms used for preconditioning of the large and sparse linear system of equations that needs to be solved for pCT image reconstruction. The hull-detection algorithms investigated here included silhouette/space carving (SC), modified silhouette/space carving (MSC), and space modeling (SM). Each was compared to the cone-beam version of filtered backprojection (FBP) used for hull-detection. Data for testing these algorithms included simulated data sets of a digital head phantom and an experimental data set of a pediatric head phantom obtained with a pCT scanner prototype at Loma Linda University Medical Center. SC was the fastest algorithm, exceeding the speed of FBP by more than 100 times. FBP was most sensitive to the presence of noise. Ongoing work will focus on optimizing threshold parameters in order to define a fast and effic...

  18. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    Energy Technology Data Exchange (ETDEWEB)

    Brady, S. L.; Yee, B. S.; Kaufman, R. A. [Department of Radiological Sciences, St. Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States)

    2012-09-15

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using (1, 5, 10) yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more

  19. Accelerated reconstruction of electrical impedance tomography images via patch based sparse representation

    Science.gov (United States)

    Wang, Qi; Lian, Zhijie; Wang, Jianming; Chen, Qingliang; Sun, Yukuan; Li, Xiuyan; Duan, Xiaojie; Cui, Ziqiang; Wang, Huaxiang

    2016-11-01

    Electrical impedance tomography (EIT) reconstruction is a nonlinear and ill-posed problem. Exact reconstruction of an EIT image inverts a high-dimensional mathematical model to calculate the conductivity field, which is computationally demanding and reduces the achievable frame rate, otherwise a major advantage of EIT imaging. The single-step, state estimation, and projection methods have commonly been used to accelerate the reconstruction process; their basic principle is to reduce the computational complexity. However, maintaining high spatial resolution at modest computational cost is still challenging, especially for complex conductivity distributions. This study proposes an approach to accelerate EIT image reconstruction based on compressive sensing (CS) theory, namely the CSEIT method. The CSEIT method reduces the sampling rate by minimizing redundancy in the measurements, so that detailed reconstruction information is not lost. To obtain a sparse solution, which CS theory requires as a precondition for signal recovery, a novel image reconstruction algorithm based on patch-based sparse representation is proposed. By applying the new CSEIT framework, the data acquisition time, or the sampling rate, is reduced by more than a factor of two, while the accuracy of reconstruction is significantly improved.
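
    To illustrate the patch-based sparse recovery at the heart of the CSEIT idea, the sketch below recovers a patch that is sparse over a dictionary from a reduced set of random measurements, using orthogonal matching pursuit from scikit-learn. The random dictionary and all sizes are illustrative assumptions; the paper learns its representation from data.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_pix, n_atoms, n_meas, k = 64, 128, 24, 3      # 8x8 patch, 24 measurements
D = rng.standard_normal((n_pix, n_atoms))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
code = np.zeros(n_atoms)
code[rng.choice(n_atoms, k, replace=False)] = 1.0
patch = D @ code                                # synthetic k-sparse patch
Phi = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
y = Phi @ patch                                 # compressed measurements

# OMP finds the sparse code from the compressed dictionary Phi @ D.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ D, y)
patch_hat = D @ omp.coef_                       # reconstructed patch
```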

  20. Receiver operating characteristic (ROC) analysis of images reconstructed with iterative expectation maximization algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Yasuyuki; Murase, Kenya [Osaka Medical Coll., Takatsuki (Japan). Graduate School; Higashino, Hiroshi; Sogabe, Ichiro; Sakamoto, Kana

    2001-12-01

    The quality of images reconstructed by means of the maximum likelihood-expectation maximization (ML-EM) and ordered-subset EM (OS-EM) algorithms was examined with respect to parameters such as the number of iterations and subsets, and then compared with the quality of images reconstructed by the filtered back projection method. Phantoms showing signals inside signals, which mimicked single-photon emission computed tomography (SPECT) images of cerebral blood flow and myocardial perfusion, and phantoms showing signals around signals, as obtained in SPECT of bone and tumor, were used for the experiments. To determine the signals for recognition, SPECT images in which the signals could be appropriately recognized with combinations of fewer iterations and subsets of different sizes and densities were evaluated by receiver operating characteristic (ROC) analysis. The results of the ROC analysis were applied to myocardial phantom experiments and myocardial perfusion scintigraphy. Taking the image processing time into consideration, good SPECT images were obtained by OS-EM with 10 iterations and 5 subsets. This study should be helpful for selecting parameters such as the number of iterations and subsets when using the ML-EM or OS-EM algorithms. (author)
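
    For reference, the ML-EM multiplicative update and its ordered-subset variant discussed above can be sketched in a few lines; with n_subsets=1 this reduces to plain ML-EM. The random nonnegative system matrix is a toy stand-in for a SPECT projector.

```python
import numpy as np

def os_em(A, y, n_subsets=5, n_iters=10):
    """ML-EM / OS-EM: A is the system matrix (rows = detector bins),
    y the measured counts. Each subset applies the multiplicative
    update x *= A_s^T (y_s / A_s x) / A_s^T 1."""
    x = np.ones(A.shape[1])                     # flat initial image
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iters):
        for rows in subsets:                    # cycle through subsets
            As = A[rows]
            ratio = y[rows] / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

# Toy usage: 100 detector bins, 6x6 image, noiseless data.
rng = np.random.default_rng(0)
A = rng.random((100, 36))
x_true = rng.random(36)
x_hat = os_em(A, A @ x_true)
```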

  1. Relative Performance Evaluation of Single Chip CFA Color Reconstruction Algorithms Used in Embedded Vision Devices

    Directory of Open Access Journals (Sweden)

    B. Mahesh

    2013-02-01

    Full Text Available – Most digital cameras use a color filter array to capture the colors of the scene. Sub-sampled (down-sampled) versions of the red, green, and blue components are acquired by single-sensor embedded vision devices with the help of a color filter array (CFA) [1]. Hence, interpolation of the missing color samples is necessary to reconstruct a full-color image. This method of interpolation is called demosaicing (demosaicking). The least-squares luma-chroma demultiplexing algorithm for Bayer demosaicking [2] is among the most effective and efficient demosaicking techniques available in the literature. As almost all commercial camera makers use this cost-effective way of interpolating the missing colors and reconstructing the original image, the demosaicking arena has become a vital research domain for embedded color vision devices [3]. Hence, in this paper, the authors analyze, implement, and evaluate the relative performance of the best-known algorithms. Objective empirical results show that LSLCDA is superior in performance.
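
    As a baseline for the comparison above, bilinear demosaicing of an RGGB Bayer mosaic can be written as a normalized convolution; this is only a reference method, far simpler than the least-squares luma-chroma demultiplexing (LSLCDA) algorithm the record identifies as the strongest performer.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Baseline bilinear demosaicing of an RGGB Bayer mosaic:
    each channel is interpolated by averaging its known neighbors
    (normalized 3x3 convolution over the sample mask)."""
    h, w = raw.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = 1                           # RGGB sample masks
    b[1::2, 1::2] = 1
    g[0::2, 1::2] = 1
    g[1::2, 0::2] = 1
    k = np.ones((3, 3))
    planes = []
    for mask in (r, g, b):
        num = convolve2d(raw * mask, k, mode='same')
        den = convolve2d(mask, k, mode='same')  # count of known neighbors
        planes.append(num / den)
    return np.stack(planes, axis=-1)

rgb = demosaic_bilinear(np.random.rand(64, 64))
```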

  2. An iterative algorithm for soft tissue reconstruction from truncated flat panel projections

    Science.gov (United States)

    Langan, D.; Claus, B.; Edic, P.; Vaillant, R.; De Man, B.; Basu, S.; Iatrou, M.

    2006-03-01

    The capabilities of flat-panel interventional x-ray systems continue to expand, enabling a broader array of medical applications to be performed in a minimally invasive manner. Although CT provides pre-operative 3D information, there is a need for 3D imaging of low-contrast soft tissue during interventions in a number of areas, including neurology, cardiac electrophysiology, and oncology. Unlike CT systems, interventional angiographic x-ray systems provide real-time, large field-of-view 2D imaging, patient access, and flexible gantry positioning, enabling interventional procedures. However, relative to CT, these C-arm flat-panel systems face additional technical challenges in 3D soft-tissue imaging, including slower rotation speed, gantry vibration, reduced lateral patient field of view (FOV), and increased scatter. The reduced patient FOV often results in significant data truncation. Reconstruction of truncated (incomplete) data is known as an "interior problem", and it is mathematically impossible to obtain an exact reconstruction. Nevertheless, it is an important problem in 3D imaging on a C-arm: the goal is to generate a 3D reconstruction representative of the imaged object with minimal artifacts. In this work we investigate the application of an iterative maximum likelihood transmission (MLTR) algorithm to truncated data. We also consider truncated data with limited views for cardiac imaging, where the views are gated by the electrocardiogram (ECG) to combat motion artifacts.
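
    A toy sketch of a maximum-likelihood transmission (MLTR) style update, assuming Poisson counts modeled as b*exp(-A mu) and the standard separable-surrogate denominator. The system matrix and counts are synthetic, and the truncation and ECG-gating issues the record discusses are not modeled here.

```python
import numpy as np

def mltr(A, counts, blank, n_iters=20):
    """Transmission-ML update: mu is the attenuation image, `blank`
    the unattenuated counts b_i, predicted counts b_i*exp(-[A mu]_i).
    mu_j += sum_i a_ij (pred_i - y_i) / sum_i a_ij (sum_k a_ik) pred_i."""
    mu = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1)                      # sum_k a_ik per ray
    for _ in range(n_iters):
        pred = blank * np.exp(-(A @ mu))          # predicted counts
        num = A.T @ (pred - counts)               # log-likelihood gradient
        den = A.T @ (row_sums * pred)             # surrogate curvature
        mu = np.maximum(mu + num / np.maximum(den, 1e-12), 0.0)
    return mu

# Toy usage with noiseless synthetic measurements.
rng = np.random.default_rng(0)
A = rng.random((80, 25)) * 0.1                    # 80 rays, 5x5 image
mu_true = rng.random(25) * 0.5
y = 1e4 * np.exp(-(A @ mu_true))
mu_hat = mltr(A, y, blank=1e4 * np.ones(80))
```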

  3. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyungjin, E-mail: khj.snuh@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Park, Chang Min, E-mail: cmpark@radiol.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Song, Yong Sub, E-mail: terasong@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Lee, Sang Min, E-mail: sangmin.lee.md@gmail.com [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Goo, Jin Mo, E-mail: jmgoo@plaza.snu.ac.kr [Department of Radiology, Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of); Cancer Research Institute, Seoul National University, 101, Daehangno, Jongno-gu, Seoul 110-744 (Korea, Republic of)

    2014-05-15

    Purpose: To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. Materials and methods: CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, −630 and −800 HU) at 120 kVp with tube current–time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose⁴ and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for the nodules in each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of the CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Results: Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics were superior for IMR compared with FBP or iDose⁴ at all radiation dose settings (p < 0.05). Conclusion: Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with use of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility.

  4. Watermark Detection in Impulsive Noise Environment Based on the Compressive Sensing Reconstruction

    Directory of Open Access Journals (Sweden)

    B. Lutovac

    2017-04-01

    Full Text Available A watermark detection procedure for images corrupted by impulsive noise is proposed. The procedure is based on the compressive sensing (CS) method for the reconstruction of corrupted pixels. It is shown that the proposed procedure can extract the watermark at moderate impulsive noise levels. It is well known that most images are approximately sparse in the 2D DCT domain. Moreover, we can enforce sparsity in the watermarking procedure and obtain an almost strictly sparse image as a desirable input to the CS-based reconstruction algorithms. Compared to state-of-the-art methods for impulse noise removal, the proposed solution provides much better performance in watermark extraction.

  5. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality-based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector-valued images, and minimization results are given. In addition we consider different definitions of total variation regularization, and related formulations of the optical flow problem that may be used with a duality-based algorithm. We present a highly optimized algorithmic setup to estimate optical flows, and give five novel applications. The first application is registration of medical images, where X-ray images of different hands, taken using different imaging devices, are registered using a TV-L1 optical flow algorithm. We propose to regularize the input images, using sparsity-enhancing regularization of the image gradient to improve registration results. The second application is registration of 2D
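
    The duality-based machinery the record refers to can be illustrated on the simpler ROF denoising problem: a Chambolle-type projection iteration updates the dual field p, and the primal image is read off as u = f - lam*div p. TV-L1 optical flow couples such a step with a pointwise data-term update; the sketch below, with illustrative parameter choices, is only that core building block.

```python
import numpy as np

def divergence(p):
    """Discrete divergence: the negative adjoint of forward differences."""
    dx = np.diff(p[0], axis=0, prepend=0)
    dy = np.diff(p[1], axis=1, prepend=0)
    return dx + dy

def tv_denoise_dual(f, lam=0.1, tau=0.125, n_iters=200):
    """Dual projection iteration for ROF denoising: step the dual
    field p along grad(u), project onto |p| <= 1 pointwise, and
    recover the primal image as u = f - lam * div p."""
    p = np.zeros((2,) + f.shape)
    for _ in range(n_iters):
        u = f - lam * divergence(p)
        gx = np.diff(u, axis=0, append=u[-1:, :])   # forward differences,
        gy = np.diff(u, axis=1, append=u[:, -1:])   # Neumann boundary
        p[0] -= (tau / lam) * gx
        p[1] -= (tau / lam) * gy
        p /= np.maximum(1.0, np.sqrt(p[0] ** 2 + p[1] ** 2))
    return f - lam * divergence(p)

smooth = tv_denoise_dual(np.random.rand(32, 32))
```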

  6. Low contrast detectability and spatial resolution with model-based iterative reconstructions of MDCT images: a phantom and cadaveric study

    Energy Technology Data Exchange (ETDEWEB)

    Millon, Domitille; Coche, Emmanuel E. [Universite Catholique de Louvain, Department of Radiology and Medical Imaging, Cliniques Universitaires Saint Luc, Brussels (Belgium); Vlassenbroek, Alain [Philips Healthcare, Brussels (Belgium); Maanen, Aline G. van; Cambier, Samantha E. [Universite Catholique de Louvain, Statistics Unit, King Albert II Cancer Institute, Brussels (Belgium)

    2017-03-15

    To compare the image quality [low contrast (LC) detectability, noise, contrast-to-noise ratio (CNR) and spatial resolution (SR)] of MDCT images reconstructed with an iterative reconstruction (IR) algorithm and with a filtered back projection (FBP) algorithm. The experimental study was performed on a 256-slice MDCT. LC detectability, noise, CNR and SR were measured on a Catphan phantom scanned with decreasing doses (48.8 down to 0.7 mGy) and parameters typical of a chest CT examination. Images were reconstructed with FBP and a model-based IR algorithm. Additionally, human chest cadavers were scanned and reconstructed using the same technical parameters. Images were analyzed to illustrate the phantom results. LC detectability and noise were statistically significantly different between the techniques, favoring the model-based IR algorithm (p < 0.0001). At low doses, the noise in FBP images only enabled SR measurements of high-contrast objects. The superior CNR of the model-based IR algorithm enabled measurements at lower doses, which showed that SR was dose and contrast dependent. Cadaver images reconstructed with model-based IR illustrated that the visibility and delineation of anatomical structure edges could deteriorate at low doses. Model-based IR improved LC detectability and enabled dose reduction. At low dose, SR became dose and contrast dependent. (orig.)

  7. Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging

    Science.gov (United States)

    Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2016-02-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogeneous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogeneous thick objects. One example application area is the imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of the aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the ro